\section{Methods} \subsection{Time-resolved experiments} Bulk-like flakes of \ce{CrI3}, grown using chemical vapor transport, were deposited onto an Au-coated sample holder and placed in a variable-temperature liquid-helium flow optical cryostat. Ultrafast pulses were supplied by a regeneratively amplified Ti:Sapphire laser, which generated pulses with an 800 nm central wavelength, 85 fs duration (measured at the sample position), and 100 kHz repetition rate. The beam was split into pump and probe arms using a plate beam splitter. The pump beam was sent through a mechanical delay line, and both beams were passed through achromatic wave plates and subsequently focused onto optically flat regions of the sample at near-normal incidence using a 20X near-IR optimized apochromatic objective. The nominal $1/e^2$ focal spot diameter of the probe was approximately \SIrange{10}{15}{\micro\meter}, and that of the pump was approximately \SI{20}{\micro\meter}, yielding a pump fluence of $\sim$\SI{0.64}{\milli\joule\per\square\centi\meter}. We performed experiments in which the pump was horizontally polarized (i.e., parallel to the optical table surface), right circularly polarized, or left circularly polarized. The probe beam was linearly polarized, and upon reflection, the beam was decomposed into its two orthogonal linear components using a Wollaston prism. These components were then differentially detected using a balanced Si photodiode detector, and the output signal was fed into a lock-in amplifier to isolate the TRPR signal. Both the pump and probe beams were modulated using a 7/5 slotted optical chopper, allowing the lock-in amplifier to be referenced to the sum inter-modulation frequency (1.2 kHz) to eliminate pump scattering effects. \subsection{Density functional theory calculations} Our {\it ab initio} simulations were carried out within density functional theory. 
We used both the pseudopotential plane wave method implemented in the Vienna ab initio simulation package (VASP)~\cite{Kresse1993,Kresse1996} and in Quantum Espresso (QE), and the full-potential all-electron linearized augmented plane wave (FP-LAPW) method implemented in the Wien2k code~\cite{Blaha2020}. The local density approximation was used for the exchange-correlation functional throughout all our {\it ab initio} simulations. For the VASP calculations, we chose an energy cutoff of 500 eV for the plane-wave basis set and used a $10\times 10 \times 10$ ($10 \times 10 \times 3$) $\Gamma$-centered $k$-point mesh to sample the bulk (monolayer) Brillouin zone for the structure optimization, $\Gamma$-point phonon, Raman tensor (aided by the vasp\_raman Python script~\cite{Fonarim2013}), and electronic structure calculations. The QE calculations used a plane-wave basis set with a cutoff energy of 60 Ry. A $10\times 10 \times 10$ $k$-point mesh was used to calculate the electronic structure of bulk CrI$_{3}$ and to compute the phonon frequencies at the $\Gamma$ point. In the Wien2k simulations, we focused on calculating the electronic band structure of bulk CrI$_3$, using a $15 \times 15 \times 15$ $k$-point mesh with muffin-tin radii of $2.5\,a_0$ (Cr) and $2.35\,a_0$ (I), where $a_0$ is the Bohr radius. We also used VASP to perform total energy calculations for the FM and AFM states of a CrI$_3$ monolayer, to which a Heisenberg model with nearest-neighbor exchange interactions was fit to obtain the exchange constant. The spin-lattice coupling strength was then determined by distorting the equilibrium lattice according to the displacement field of a chosen vibrational mode. The electronic structure obtained in our simulations of both bulk and 2D CrI$_3$ is consistent with that reported in the literature~\cite{Webster2018,KumarGudelli2019,Lado2017}. We note that SOC reduces the band gap. 
However, since SOC does not qualitatively change the lattice dynamics \cite{Webster2018}, we did not include it in the calculation of the Raman tensor and the estimate of the spin-lattice coupling strength. \subsection{Monte Carlo simulations} We performed classical Monte Carlo simulations on the coupled spin-phonon system described by the Hamiltonian $H=H_{\rm ph}+H_{\rm sp}$ to generate a thermal ensemble of states. We studied lattices of $2 \times L\times L$ spins with periodic boundary conditions, for system sizes up to $L=48$. To improve the statistical convergence of the simulations, we employed a parallel tempering scheme~\cite{Hukushima1996} to simulate 144 logarithmically spaced temperature points between $T_\mathrm{min}\approx 7~\mathrm{K}$ and $T_\mathrm{max}\approx 90~\mathrm{K}$ in parallel. Each simulation was equilibrated for $10^6$~sweeps before measuring the specific heat and the equilibrium magnetization for an additional $10^7$~sweeps; a single sweep is defined as one attempted update per spin or phonon degree of freedom. As shown in the SI, the chosen coupling constants reasonably reproduce the experimental spin-wave dispersion~\cite{Chen2018}. In addition, the computed temperature-dependent specific heat and magnetization yield a $T_C$ in reasonable agreement with experiments. \subsection{Dynamical simulations} We modeled the impact of the pump laser as a helicity-dependent lattice distortion, given by $X_{\pm}(t=0)= X + \xi_1 \left( 1 + \sigma \xi_2 m \right)$ at time $t=0$. Here, $X$ is the equilibrium lattice displacement, $m$ is the equilibrium magnetization of the spin configuration, and $\sigma = \pm 1$ distinguishes $\sigma_+$ and $\sigma_-$ pump pulses. Such a distortion could arise from an ultrafast trigonal splitting of a photoexcited Jahn-Teller active Cr $e_g$ level, with the splitting being sensitive to the local Cr spin through Hund's coupling. 
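This initial condition is simple to prototype. The sketch below applies it to a random unit-spin snapshot standing in for a Monte Carlo configuration; the values of $\xi_1$ and $\xi_2$ and the snapshot itself are illustrative assumptions, not the fitted simulation parameters.

```python
import numpy as np

# Sketch of the helicity-dependent initial condition
#   X_pm(t=0) = X + xi1 * (1 + sigma * xi2 * m),  sigma = +1 or -1.
# xi1, xi2, and the random spin snapshot are illustrative stand-ins.
rng = np.random.default_rng(1)

def initial_distortion(X_eq, m, sigma, xi1=1e-2, xi2=0.18):
    """Distorted lattice coordinate after a sigma_+ / sigma_- pump pulse."""
    return X_eq + xi1 * (1.0 + sigma * xi2 * m)

# Random unit spins on a 2 x 32 x 32 lattice, standing in for a Monte Carlo
# configuration; m is its magnetization along the quantization axis.
spins = rng.normal(size=(2 * 32 * 32, 3))
spins /= np.linalg.norm(spins, axis=1, keepdims=True)
m = spins[:, 2].mean()

X_plus = initial_distortion(0.0, m, +1)
X_minus = initial_distortion(0.0, m, -1)
```

Note that the helicity contrast $X_+ - X_- = 2\xi_1\xi_2 m$ vanishes when $m=0$, i.e., above $T_C$, which is what makes the response helicity-dependent only in the ordered phase.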
We applied this ultrafast distortion to individual configurations selected from our Monte Carlo ensemble with $L=32$. The post-distortion dynamics were described using the Landau-Lifshitz equations for the spins, \begin{align} \label{eq:landaulifshitz} \hbar \frac{\mathrm{d}S_\mathbf{r}^x}{\mathrm{d}t} = &\sum\limits_{\gamma=x,y,z} \widetilde{J}_H \left( S_\mathbf{r}^z S_{\mathbf{r}_\gamma}^y - S_\mathbf{r}^y S_{\mathbf{r}_\gamma}^z \right) + \widetilde{J}_K \left( S_\mathbf{r}^z S_{\mathbf{r}_y}^y - S_\mathbf{r}^y S_{\mathbf{r}_z}^z \right) \nonumber\\ & + \widetilde{J}_\Gamma \left( S_\mathbf{r}^z S_{\mathbf{r}_x}^z + S_\mathbf{r}^z S_{\mathbf{r}_z}^x - S_\mathbf{r}^y S_{\mathbf{r}_x}^y - S_\mathbf{r}^y S_{\mathbf{r}_y}^x \right) \,, \end{align} with cyclic permutations of $(x,y,z)$ yielding $\mathrm{d}S_\mathbf{r}^{y}/\mathrm{d}t$ and $\mathrm{d}S_\mathbf{r}^{z}/\mathrm{d}t$, coupled with Newton's equations for the lattice \begin{equation} \frac{\mathrm{d}X}{\mathrm{d}t} = \frac{P}{M} \quad\textrm{and}\quad \frac{\mathrm{d}P}{\mathrm{d}t} = -M \Omega^2 X - \frac{\partial H_{\rm sp}}{\partial X} \,. \label{eq:Newton} \end{equation} In equation~\eqref{eq:landaulifshitz}, $\mathbf{r}_\gamma$ denotes the nearest-neighbor lattice site of $\mathbf{r}$ along bond direction $\gamma$. These equations were numerically integrated using a ninth-order Runge-Kutta algorithm~\cite{Rackauckas2017} with adaptive time steps and a local relative error tolerance of $\varepsilon_\mathrm{rel}=10^{-13}$. This yielded converged results up to timescales of approximately $25\,\mathrm{ps}$ in the magnetically ordered phase. We averaged the resulting dynamical observables over $2000$ Monte Carlo configurations with positive magnetization (when projected onto the twofold degenerate polarization axis). 
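The integration strategy can be illustrated on a toy problem. The sketch below applies the same approach, a high-order adaptive Runge-Kutta method with tight tolerances, to a single spin precessing in a fixed field rather than the full coupled spin-lattice system; conservation of the spin norm serves as an accuracy check.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy Landau-Lifshitz problem: one classical spin precessing in a fixed
# field h (hbar = 1), integrated with a high-order adaptive Runge-Kutta
# method at tight tolerances. The full simulation instead evolves the
# coupled spin-lattice system.
h = np.array([0.0, 0.0, 1.0])

def rhs(t, S):
    return np.cross(S, h)  # dS/dt = S x h

S0 = np.array([1.0, 0.0, 0.0])
sol = solve_ivp(rhs, (0.0, 10.0), S0, method="DOP853",
                rtol=1e-12, atol=1e-12)
# For this toy problem |S| stays at 1 and S^x(t) = cos(t).
```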
In particular, we extracted the time-dependent deviation of the magnetization, $\Delta m_{\pm}(t) = \langle m_{\pm}(t) \rangle - m$, where $m$ is the mean magnetization in thermal equilibrium and $\langle m_{\pm}(t) \rangle$ is the time-dependent magnetization after the ultrafast $\sigma_\pm$-helicity-dependent distortion was applied. We determined the ratio of Fourier components $m_{+}(\Omega)/m_{-}(\Omega)$ by calculating the Fourier transform $m_{\pm}(\Omega) = \mathrm{FT}(\Delta m_{\pm}(t))$ and fitting a Gaussian profile to the peak found at the $A_{1g}^{2}$ phonon frequency $\Omega$. The ratio $m_{+}(\Omega)/m_{-}(\Omega)$ is independent of the overall distortion $\xi_1$, while a helicity-dependent splitting $\xi_2 \approx 0.18\,\mu_B^{-1}$ was found to reproduce our experimental results below $T_C$. Additional results on the time-resolved magnetization and phonon displacement are shown in the SI. Finally, our results are qualitatively robust upon tuning the Hamiltonian parameters, as long as there are at least two exchange interactions with distinct spin-phonon couplings.
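The Fourier-amplitude extraction can be sketched as follows; the synthetic damped oscillation and the numerical value of $\Omega$ are assumptions standing in for the simulated $\Delta m_{\pm}(t)$.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the analysis: Fourier-transform Delta m(t) and fit a Gaussian
# to the peak at the phonon frequency Omega. The damped cosine below is a
# synthetic stand-in for the simulated magnetization deviation.
Omega = 2 * np.pi * 3.8           # rad/ps; illustrative phonon frequency
t = np.linspace(0.0, 25.0, 4096)  # ps, matching the converged time window
dm = 1e-3 * np.exp(-t / 2.0) * np.cos(Omega * t)

freqs = 2 * np.pi * np.fft.rfftfreq(t.size, d=t[1] - t[0])
spec = np.abs(np.fft.rfft(dm))

def gauss(w, a, w0, sig):
    return a * np.exp(-0.5 * ((w - w0) / sig) ** 2)

window = (freqs > 0.8 * Omega) & (freqs < 1.2 * Omega)
popt, _ = curve_fit(gauss, freqs[window], spec[window],
                    p0=[spec[window].max(), Omega, 1.0])
# popt[0] is the fitted peak amplitude and popt[1] the peak position; the
# m_+/m_- ratio in the text is the ratio of two such fitted amplitudes.
```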
\section{Introduction} Simulations of Wilson-type fermions at realistic quark masses require an improved action with good chiral properties and scaling behavior. A systematic improvement scheme that removes discretization errors order by order in the lattice spacing $a$ has been proposed by Symanzik~\cite{Symanzik:1983dc} and developed for on-shell quantities in~\cite{Luscher:1984xn,Sheikholeslami:1985ij}. $\mathcal{O}(a)$ improvement of the Wilson fermion action is achieved by complementing it with the so-called clover term~\cite{Sheikholeslami:1985ij}, provided the associated clover coefficient is tuned properly. Wilson-type fermions break all chiral symmetries. This introduces an additive negative mass renormalization term in the action, which gives rise to singularities in the quark propagator at small quark masses and makes the approach to the chiral regime difficult. A chiral improvement of the action is expected to reduce the additive mass renormalization and the spread of negative eigenvalues. Surprisingly, this is not accomplished by the clover action. While the magnitude of the additive mass term decreases with an increasing clover term, the problem of negative eigenvalues is more severe for the clover action than for the standard Wilson action. It is well known that via a combination of link fattening and tuning of the clover coefficient, it is possible to reduce both the negative mass term and the spread of negative eigenvalues~\cite{DeGrand:1998mn,Boinepalli:2004fz,Capitani:2006ni}. The focus of this investigation is to determine the clover coefficient and the additive mass renormalization for the plaquette and Symanzik improved gauge actions and stout link clover fermions in one-loop lattice perturbation theory. 
The Symanzik improved gauge action reads~\cite{Symanzik:1983dc} \begin{equation} S_G^{\rm Sym} = \frac{6}{g^2} \,\,\left\{c_0 \sum_{\rm Plaquette} \frac{1}{3}\, {\rm Re\, Tr\,}(1-U_{\rm Plaquette}) \, + c_1 \sum_{\rm Rectangle} \frac{1}{3}\,{\rm Re \, Tr\,}(1- U_{\rm Rectangle})\right\} \label{SG} \end{equation} with $c_0+8c_1=1$ and \begin{equation} c_0=\frac{5}{3}\,, \quad c_1=-\frac{1}{12}\,. \end{equation} This reduces to the standard plaquette action $S_G^{\rm Plaq}$ for $c_1=0$. Clover fermions have the action for each quark flavor~\cite{Sheikholeslami:1985ij} \begin{eqnarray} S_F &=& a^4\, \sum_x \Big\{ - \frac{1}{2a} \, \left[\bar{\psi}(x) \widetilde U_\mu(x)\,(1-\gamma_\mu)\, \psi(x+a\hat{\mu}) \right. \nonumber \\ && \hspace{8mm}\left. + \, \bar{\psi}(x) \widetilde U_\mu^\dagger(x-a\hat{\mu})\,(1+\gamma_\mu)\, \psi(x-a\hat{\mu})\right] \label{SF} \\ && \hspace{8mm} + \, \frac{1}{a}\, (4 + a m_0 +a m)\, \bar{\psi}(x)\psi(x) - c_{SW}\, g\, \frac{a}{4}\, \bar{\psi}(x)\, \sigma_{\mu\nu} F_{\mu\nu}(x)\, \psi(x) \Big\} \,, \nonumber \end{eqnarray} where \begin{equation} am_0=\frac{1}{2\kappa_c} - 4 \,, \label{kc} \end{equation} $\kappa_c$ being the critical hopping parameter, is the additive mass renormalization term, and $F_{\mu\nu}(x)$ is the field strength tensor in clover form with $\sigma_{\mu\nu}=(i/2)\,(\gamma_\mu\gamma_\nu-\gamma_\nu\gamma_\mu)$. We consider a version of clover fermions in which we do not smear links in the clover term, but the link variables $U_\mu$ in the next neighbor terms have been replaced by (uniterated) stout links~\cite{Morningstar:2003gk} \begin{equation} \widetilde{U}_\mu(x) = e^{i\, Q_\mu(x)} \, U_\mu(x) \label{Ustout} \end{equation} with \begin{equation} Q_\mu(x)=\frac{\omega}{2\,i} \left[V_\mu(x) U_\mu^\dagger(x) - U_\mu(x)V_\mu^\dagger(x) -\frac{1}{3} {\rm Tr} \,\left(V_\mu(x) U_\mu^\dagger(x) - U_\mu(x)V_\mu^\dagger(x)\right)\right] \, . 
\end{equation} $V_\mu(x)$ denotes the sum over all staples associated with the link and $\omega$ is a tunable weight factor. Stout smearing is preferred because (\ref{Ustout}) is expandable as a power series in $g^2$, so we can use perturbation theory. Many other forms of smearing do not have this nice property. Because both the unit matrix and the $\gamma_\mu$ terms are smeared, each link is still a projection operator in the Dirac spin index. The reason for not smearing the clover term is that we want to keep the physical extent of the fermion matrix in lattice units small, which is relevant for non-perturbative calculations. In that respect we refer to these fermions as SLiNC fermions, from the phrase {\bf S}tout {\bf Li}nk {\bf N}on-perturbative {\bf C}lover. The improvement coefficient $c_{SW}$ as well as the additive mass renormalization $am_0$ are associated with the chiral limit. We will therefore carry out the calculations for massless quarks, which simplifies things, though it means that we cannot present values for the mass dependent corrections. For complete $\mathcal{O}(a)$ improvement of the action there are five terms which would have to be added to the $\mathcal{O}(a)$ effective action; they are listed, for example, in \cite{Luscher:1996sc}. Fortunately, in the massless case only two remain, \begin{eqnarray} \mathcal{O}_1 &=& \bar{\psi} \sigma_{\mu\nu} F_{\mu\nu} \psi\,, \\ \mathcal{O}_2 &=& \bar{\psi} \stackrel{\leftrightarrow}{D} \stackrel{\leftrightarrow}{D} \psi \,. \end{eqnarray} The first is the clover term, the second is the Wilson mass term. Since both are already present in our action, no further terms need to be added. In perturbation theory \begin{equation} c_{SW}=1 + g^2 \, c_{SW}^{(1)} + {\mathcal{O}(g^4)}\,. \label{csw} \end{equation} The one-loop coefficient $c_{SW}^{(1)}$ has been computed for the plaquette action using twisted antiperiodic boundary conditions~\cite{Wohlert:1987rf} and Schr\"odinger functional methods~\cite{Luscher:1996vw}. 
Moreover, using conventional perturbation theory, Aoki and Kuramashi~\cite{Aoki:2003sj} have computed $c_{SW}^{(1)}$ for certain improved gauge actions. All calculations were performed for non-smeared links and limited to on-shell quantities. We extend previous calculations of $c_{SW}^{(1)}$ to include stout links. This is done by computing the one-loop correction to the off-shell quark-quark-gluon three-point function. The improvement of the action alone is not sufficient to remove discretization errors from Green functions. To achieve this, one must also improve the quark fields. The most general form consistent with BRST symmetry is~\cite{Martinelli:2001ak}\footnote{In~\cite{Martinelli:2001ak} the authors use $\ensuremath{\stackrel{\rightarrow}{\slashed{D}}}$ and $\ensuremath{\slashed{\partial}}$ instead of our $\ensuremath{\stackrel{\rightarrow}{\slashed{D}}}$ and $\ensuremath{\slashed{A}}$ -- both choices are equivalent. Our choice is motivated by the discussion of off-shell improvement in the next section.} \begin{equation} \psi_{\star}(x)=\left(1 + a \,c_D \ensuremath{\stackrel{\rightarrow}{\slashed{D}}} + a \,i\,g\,\,c_{NGI} \ensuremath{\slashed{A}}(x) \right) \,\psi(x)\,. \label{imppsi} \end{equation} {}From now on we denote improved quark fields and improved Green functions by an index~$\star$. These are made free of $\mathcal O (a)$ effects by fixing the relevant improvement coefficients. There is no {\it a priori} reason that the gauge variant contribution $c_{NGI} \ensuremath{\slashed{A}}(x)$ vanishes. The perturbative expansion of $c_{NGI}$ has to start with the one-loop contribution~\cite{Martinelli:2001ak}. As a byproduct of our calculation we determine the one-loop coefficient $c_{NGI}^{(1)}$, \begin{equation} c_{NGI}=g^2\,c_{NGI}^{(1)} + {\mathcal{O}(g^4)}\,, \label{cNGI} \end{equation} and find that it is indeed nonvanishing. 
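The stout-link construction (\ref{Ustout}) is straightforward to prototype numerically. The sketch below applies one smearing step to a single SU(3) link, with random unitary stand-ins for $U_\mu(x)$ and the staple sum $V_\mu(x)$; the property checked is that the smeared link remains in SU(3), since $Q_\mu(x)$ is Hermitian and traceless by construction.

```python
import numpy as np
from scipy.linalg import expm

# One stout-smearing step for a single SU(3) link:
#   Q = (omega / 2i) [ V U^+ - U V^+ - (1/3) Tr(V U^+ - U V^+) ],
#   U_tilde = exp(i Q) U.
# U and the staple sum V are random stand-ins for gauge-field data.
rng = np.random.default_rng(0)

def random_su3():
    """Random special unitary 3x3 matrix (illustrative, not Haar-exact)."""
    q, r = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
    q = q * (np.conj(r.diagonal()) / np.abs(r.diagonal()))
    return q / np.linalg.det(q) ** (1.0 / 3.0)

def stout_link(U, V, omega=0.1):
    M = V @ U.conj().T - U @ V.conj().T                     # anti-Hermitian
    Q = (omega / 2j) * (M - np.trace(M) / 3.0 * np.eye(3))  # Hermitian, traceless
    return expm(1j * Q) @ U                                 # still in SU(3)

U = random_su3()
V = sum(random_su3() for _ in range(6))  # 6 staples per link in 4D
U_tilde = stout_link(U, V)
```

Because $Q_\mu(x)$ is Hermitian and traceless, $e^{iQ_\mu(x)}$ is unitary with unit determinant, which is the property that makes (\ref{Ustout}) expandable in $g^2$ while staying in the gauge group.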
\section{Off-shell improvement} It is known~\cite{Aoki:2003sj} that the one-loop contribution to the Sheikholeslami-Wohlert coefficient in conventional perturbation theory can be determined using the quark-quark-gluon vertex $\Lambda_\mu(p_1,p_2,c_{SW})$ sandwiched between {\sl on-shell} quark states. $p_1$ ($p_2$) denotes the incoming (outgoing) quark momentum. In general that vertex is an {\sl amputated} three-point Green function. Let us look at the ${\mathcal{O}}(a)$ expansion of the tree-level vertex $\Lambda^{(0)}_\mu(p_1,p_2,c_{SW})$, which is derived from action (\ref{SF}), \begin{equation} \Lambda^{(0)}_\mu(p_1,p_2,c_{SW}) = -i\, g \,\gamma_\mu -g\, {\textstyle \frac{1}{2}} \, a\, {\bf 1} (p_1 + p_2)_\mu + c_{SW} \,i\, g\, {\textstyle \frac{1}{2}} \, a \,\sigma_{\mu \alpha} (p_1 -p_2)_\alpha +\mathcal{O}(a^2)\,. \label{treevertex} \end{equation} For simplicity we omit in all three-point Green functions the common overall color matrix $T^{a}$. That tree-level expression between on-shell quark states is free of order $\mathcal O (a)$ terms if the expansion of $c_{SW}$ starts with one, as indicated in (\ref{csw}), \begin{equation} \bar u(p_2) \, \Lambda^{(0)}_{\star\mu}(p_1,p_2) \, u(p_1) = \bar u (p_2) \, (-i\, g \,\gamma_\mu )\, u(p_1) \,. \label{treeverteximproved} \end{equation} Therefore, at least a one-loop calculation of $\Lambda_\mu(p_1,p_2,c_{SW}^{(1)})$ is needed as a necessary condition to determine $c_{SW}^{(1)}$. The {\sl off-shell} improvement condition states that the {\sl non-amputated} improved quark-quark-gluon Green function $G_{\star \mu}(p_1,p_2,q)$ has to be free of $\mathcal{O}(a)$ terms at one-loop accuracy. In position space that non-amputated improved quark-quark-gluon Green function is defined via expectation values of improved quark fields and gauge fields as \begin{equation} G_{\star\mu}(x,y,z)=\langle \psi_{\star}(x)\, \overline{\psi}_{\star}(y) \, A_\mu(z)\rangle \,. 
\end{equation} Since the gluon propagator is $\mathcal{O}(a)$-improved already, we do not need to improve the gauge fields. Using relation (\ref{imppsi}) we can express the function $G_{\star\mu}$ in terms of the unimproved quark fields $\psi$ \begin{eqnarray} G_{\star\mu}(x,y,z) &=& G_{\mu}(x,y,z)+ a\,c_{D}\,\left\langle \left(\ensuremath{\slashed{D}} \ensuremath{\slashed{D}}^{-1} +\ensuremath{\slashed{D}}^{-1}\ensuremath{\slashed{D}} \right)A_\mu \right\rangle \nonumber \\ & & \quad\quad +\, i \, a \, g \, c_{NGI}\,\left\langle\left(\ensuremath{\slashed{A}} \ensuremath{\slashed{D}}^{-1} +\ensuremath{\slashed{D}}^{-1}\ensuremath{\slashed{A}} \right)A_\mu \right\rangle \,, \end{eqnarray} where $G_{\mu}(x,y,z)$ is the unimproved Green function. Taking into account \begin{equation} a\,c_{D}\,\left\langle \left(\ensuremath{\slashed{D}} \ensuremath{\slashed{D}}^{-1}+\ensuremath{\slashed{D}}^{-1}\ensuremath{\slashed{D}} \right)A_\mu \right\rangle = 2\,a\,c_{D} \, \delta(x-y)\,\left\langle A_\mu(z) \right\rangle \end{equation} and setting $\langle A_\mu(z) \rangle=0$ (barring unexpected symmetry breaking), we obtain the following relation between the improved and unimproved Green functions \begin{equation} G_{\star\mu}(x,y,z)= G_{\mu}(x,y,z)+i\,a \, g \,c_{NGI}\,\left\langle\left(\ensuremath{\slashed{A}} \ensuremath{\slashed{D}}^{-1}+\ensuremath{\slashed{D}}^{-1}\ensuremath{\slashed{A}} \right) A_\mu \right\rangle\,. \label{qqgimp2} \end{equation} {}From (\ref{qqgimp2}) it is obvious that if only $c_{SW}$ were tuned to its optimal value in $G_{\mu}(x,y,z)$, an $\mathcal{O}(a)$ contribution would remain in the improved Green function. The requirement that $G_{\star\mu}(x,y,z)$ be free of ${\mathcal O}(a)$ terms leads to an additional condition which determines the constant $c_{NGI}$. It has not been calculated before. 
Taking into account the expansion (\ref{cNGI}) of $c_{NGI}$ we get in momentum space ($\mathcal{F}[\cdot]$ denotes the Fourier transform) \begin{equation} i\, a \, g\, c_{NGI}\,\mathcal{F} \Big[\left\langle\left(\ensuremath{\slashed{A}} \ensuremath{\slashed{D}}^{-1} + \ensuremath{\slashed{D}}^{-1}\ensuremath{\slashed{A}} \right)A_\mu \right\rangle^{\rm tree}\Big] = i\, a \, g^3\, c_{NGI}^{(1)} \left(\gamma_\nu\frac{1}{i\, \ensuremath{\slashed{p}}_1} +\frac{1}{i\, \ensuremath{\slashed{p}}_2}\gamma_\nu\right)\, K^{\rm tree}_{\nu\mu}(q)\,, \label{cNGI1} \end{equation} or its amputated version \begin{equation} i\, a \, g\, c_{NGI}\, \mathcal{F} \Big[\left\langle\left(\ensuremath{\slashed{A}} \ensuremath{\slashed{D}}^{-1} + \ensuremath{\slashed{D}}^{-1}\ensuremath{\slashed{A}} \right)A_\mu \right\rangle^{\rm tree}_{\rm amp}\Big] = -a \, g^3\, c_{NGI}^{(1)} \left(\ensuremath{\slashed{p}}_2\, \gamma_\mu+\gamma_\mu\, \ensuremath{\slashed{p}}_1\right)\,. \label{cNGI1amp} \end{equation} The relations between the non-amputated and amputated three-point Green functions, unimproved and improved, are given by \begin{eqnarray} G_\mu(p_1,p_2,q)&=& S(p_2)\, \Lambda_\nu(p_1,p_2,q,c_{SW}^{(1)})\, S(p_1)\, K_{\nu\mu}(q)\,, \label{nonamp} \\ G_{\star \mu}(p_1,p_2,q)&=& S_\star(p_2)\, \Lambda_{\star\nu}(p_1,p_2,q) \, S_\star(p_1) \, K_{\nu\mu}(q) \,, \label{nonampimp} \end{eqnarray} where $K_{\nu\mu}(q)$ denotes the full gluon propagator, which is $\mathcal{O}(a)$-improved already, and $S(p)$ and $S_\star(p)$ denote the corresponding unimproved and improved quark propagators. 
With the definition of the quark self energy \begin{equation} \Sigma(p)= \frac{1}{a} \Sigma_0 + i \, \ensuremath{\slashed{p}} \, \Sigma_1(p) + \frac{a \, p^2}{2} \Sigma_2(p) \end{equation} the unimproved and improved inverse quark propagators are given by \begin{eqnarray} S^{-1}(p)&=&i \, \ensuremath{\slashed{p}}\, \Sigma_1(p) +\frac{a \,p^2}{2}\Sigma_2(p)= i \, \ensuremath{\slashed{p}} \,\Sigma_1(p)\left(1-\frac{1}{2}a\, i \, \ensuremath{\slashed{p}}\, \frac{\Sigma_2(p)}{\Sigma_1(p)} \right)\,, \label{S} \\ S_\star^{-1}(p)&=&i \, \ensuremath{\slashed{p}}\, \Sigma_1(p)\,. \label{selfenergy} \end{eqnarray} Using the Fourier transform of (\ref{qqgimp2}) together with (\ref{cNGI1amp}), amputating the Green function~(\ref{nonamp}), and taking into account the inverse quark propagators (\ref{S}), we get the off-shell improvement condition in momentum space \begin{eqnarray} \Lambda_{\mu}(p_1,p_2,q,c_{SW}^{(1)})&=&\Lambda_{\star \mu}(p_1,p_2,q)+ a \, g^3 c_{NGI}^{(1)} (\ensuremath{\slashed{p}}_2 \, \gamma_\mu +\gamma_\mu\, \ensuremath{\slashed{p}}_1) \nonumber\\ & & \hspace{-0.7cm} -\, \frac{a}{2}\,i\, \ensuremath{\slashed{p}}_2 \, \frac{\Sigma_2(p_2)}{\Sigma_1(p_2)}\, \Lambda_{\star\mu}(p_1,p_2,q) -\frac{a}{2}\,\Lambda_{\star\mu}(p_1,p_2,q)\, i\, \ensuremath{\slashed{p}}_1 \, \frac{\Sigma_2(p_1)}{\Sigma_1(p_1)} \,. \label{impcond} \end{eqnarray} This expression has to hold to order $\mathcal{O}(g^3)$, which determines both $c_{NGI}^{(1)}$ and $c_{SW}^{(1)}$. It is clear from (\ref{impcond}) that the improvement term $\propto c_{NGI}^{(1)}$ does not contribute if both quarks are on-shell: for massless on-shell spinors $\ensuremath{\slashed{p}}_1\, u(p_1)=0$ and $\bar u(p_2)\, \ensuremath{\slashed{p}}_2=0$, so that $\bar u(p_2)\left(\ensuremath{\slashed{p}}_2\,\gamma_\mu+\gamma_\mu\,\ensuremath{\slashed{p}}_1\right)u(p_1)=0$. \section{The one-loop lattice quark-quark-gluon vertex} The diagrams contributing to the amputated one-loop three-point function are shown in Fig.~\ref{fig2}. 
\begin{figure}[!htb] \begin{center} \includegraphics[scale=0.3,width=0.8\textwidth]{feyn3.eps} \end{center} \caption{One-loop diagrams contributing to the amputated quark-quark-gluon vertex.} \label{fig2} \end{figure} The calculation is performed with a mixture of symbolic and numerical techniques. For the symbolic computation we use a {\it Mathematica} package that we developed for one-loop calculations in lattice perturbation theory (for a more detailed description see~\cite{Gockeler:1996hg}). It is based on an algorithm of Kawai et al.~\cite{Kawai:1980ja}. The symbolic treatment has several advantages: one can extract the infrared singularities exactly, and the results are given as functions of lattice integrals which can be determined with high precision. The disadvantage is that very large expressions arise, especially for the problem under consideration. In the symbolic method the divergences are isolated by differentiation with respect to external momenta. Looking at the general analytic form of the gluon propagator for improved gauge actions~\cite{Horsley:2004mx} one easily recognizes that a huge analytic expression would arise. As discussed in~\cite{Horsley:2004mx} we split the full gluon propagator $D_{\mu\nu}^{{\rm Sym}}(k,\xi)$ \begin{equation} D_{\mu\nu}^{{\rm Sym}}(k,\xi)=D_{\mu\nu}^{{\rm Plaq}}(k,\xi) + \Delta D_{\mu\nu}(k)\,, \label{Dprop} \end{equation} where $\xi$ is the covariant gauge parameter ($\xi=0$ corresponds to the Feynman gauge). The diagrams with $D_{\mu\nu}^{{\rm Plaq}}(k,\xi)$ only contain the logarithmic parts and are treated with our {\it Mathematica} package. The diagrams with at least one $\Delta D_{\mu\nu}(k)$ are infrared finite and can be determined safely with numerical methods. The decomposition (\ref{Dprop}) means that we always need to calculate the plaquette action result as part of the calculation for the improved gauge action. 
Therefore, we will give the results for both the plaquette gauge action and the Symanzik improved gauge action, using the corresponding gluon propagators $D_{\mu\nu}^{{\rm Plaq}}$ and $D_{\mu\nu}^{{\rm Sym}}$, respectively. Because the numerical part determines the accuracy of the total result, we discuss it in more detail. There are several possibilities to combine the various contributions of the one-loop diagrams given in Fig.~\ref{fig2}. In view of a later analysis we have decided to group all coefficients in front of the independent color factors $C_F$ and $N_c$ and the powers of the stout parameter $\omega$ \begin{eqnarray} \Lambda^{{\rm num.}}_\mu &=& C_F\,\left(C_{0}+C_{1}\,\omega+C_{2}\,\omega^2+C_{3}\,\omega^3\right)+ N_c\,\left(C_{4}+C_{5}\,\omega+C_{6}\,\omega^2+C_{7}\,\omega^3\right)\label{num2}\,, \end{eqnarray} where the $C_i$ have to be computed numerically. In order to obtain the $C_i$ we first add all contributions of the diagrams shown in Fig.~\ref{fig2} and integrate afterwards. We have used a Gauss-Legendre integration algorithm in four dimensions (for a description of the method see~\cite{Gockeler:1996hg}) and have chosen a sequence of small external momenta $(p_1,p_2)$ to perform an extrapolation to vanishing momenta. Let us illustrate this with an example: the calculation of the coefficient $C_4$. We know the general structure of the one-loop amputated three-point function to be (we set $a=1$) \begin{eqnarray} M_\mu(p_1,p_2) &=& \gamma_\mu\, A(p_1,p_2) + {\rm\bf 1}\, p_{1,\mu}\, B(p_1,p_2) + {\rm\bf 1}\, p_{2,\mu}\, C(p_1,p_2) \nonumber\\ & & + \,\sigma_{\mu\alpha}\,p_{1,\alpha} \,D(p_1,p_2) + \sigma_{\mu\alpha}\,p_{2,\alpha} \,E(p_1,p_2)\,. 
\end{eqnarray} {}From this we can extract the coefficients by the following projections \begin{eqnarray} {\rm Tr}\,\gamma_\mu M_\mu &=& 4\,A(p_1,p_2), \quad\quad \mu \quad {\rm fixed}\,, \nonumber\\ {\rm Tr}\,M_\mu &=& 4 \,p_{1,\mu}\, B(p_1,p_2) + 4 \,p_{2,\mu} \,C(p_1,p_2) \,, \label{proj1} \\ \sum_\mu\, {\rm Tr}\,\sigma_{\nu\mu}\,M_\mu &=& 12 \,p_{1,\nu} \,D(p_1,p_2) + 12 \,p_{2,\nu}\, E(p_1,p_2)\,. \nonumber \end{eqnarray} Relations (\ref{proj1}) show that one has to compute the three-point function for all four values of $\mu$. Further, they suggest choosing the external momenta orthogonal to each other: $p_1 \cdot p_2 = 0$. A simple choice is $p_{1,\mu}=(0,0,0,p_{1,4})$ and $p_{2,\mu}=(0,0,p_{2,3},0)$. We discuss the determination of $B(p_1,p_2)$ and $C(p_1,p_2)$ in more detail. For small momenta they can be described by the ansatz \begin{eqnarray} B(p_1,p_2)&=& B_0 + B_1\,p_1^2 + B_2 \,p_2^2\,, \nonumber\\ C(p_1,p_2)&=& C_0 + C_1\,p_1^2 + C_2 \,p_2^2\,. \label{BC} \end{eqnarray} The choice of the momenta is arbitrary except for two points. First, they should be sufficiently small in order to justify ansatz (\ref{BC}). Second, they should not be integer multiples of each other in order to avoid accidentally symmetric results. The symmetry of the problem demands the relation $B_0=C_0$, which must also emerge from the numerical integration. Performing the integration at fixed $p_1$ and $p_2$ we obtain complex $4\times 4$ matrices for $M_3(p_1,p_2)$ and $M_4(p_1,p_2)$, from which the quantities $B(p_1,p_2)$ and $C(p_1,p_2)$ are extracted via (\ref{proj1}). A nonlinear regression fit with ansatz (\ref{BC}) gives \begin{eqnarray} B_0&=&0.00553791 \quad {\rm with \,\, fit\,\, error}\,\, \delta B_0=7\times 10^{-8}\,, \nonumber\\ C_0&=&0.00553789 \quad {\rm with \,\, fit\,\, error}\,\, \delta C_0=6\times 10^{-8}\,. 
\label{fitBC} \end{eqnarray} This shows that the symmetry is fulfilled up to an error of $\mathcal{O}(10^{-7})$, which sets one scale for the overall error of our numerical calculations. In Fig.~\ref{fig3} we \begin{figure}[!htb] \begin{center} \includegraphics[scale=0.01,width=0.8\textwidth]{BCextrapol.eps} \end{center} \caption{$B(p_1,p_2)$ (circles) and $C(p_1,p_2)$ (squares) as function of $p_1^2$ together with their corresponding linear fits in $p_1^2$.} \label{fig3} \end{figure} show the almost linear dependence of $B(p_1,p_2)$ and $C(p_1,p_2)$ on $p_1^2$. (In the integration we have chosen $p_{1,\mu}=0.87\, p_{2,\mu}$ so that we can restrict the plot to one variable.) Another source of errors is the numerical Gauss-Legendre integration routine itself. We have chosen a sequence of $n^4=14^4$, $18^4$, $22^4$, $26^4$ and $30^4$ nodes in the four-dimensional hypercube and have performed an extrapolation to an infinite number of nodes with a $1/n^4$ fit ansatz. Both procedures, the Gauss-Legendre integration and the fit $p \rightarrow 0$, give a combined final error of $10^{-6}$. The third error source is the error on the lattice integrals in our {\it Mathematica} calculation for the terms containing the plaquette propagator $D_{\mu\nu}^{{\rm Plaq}}$ only. These integrals have been calculated to a precision of $\mathcal{O}(10^{-10})$, so their errors can be neglected in comparison with the others. Summarizing, we find that the error of our numerical procedure is of $\mathcal{O}(10^{-6})$. Additionally, we have checked our results with an independent code which computes the one-loop contributions for each diagram, including the infrared logarithms, entirely numerically. Both methods agree within errors. The Feynman rules for the non-smeared Symanzik gauge action have been summarized in~\cite{Aoki:2003sj}. For the stout smeared gauge links in the clover action, the rules restricted to equal initial and final quark momenta are given in~\cite{Capitani:2006ni}. 
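The two numerical ingredients described above, a four-dimensional Gauss-Legendre product rule and the extrapolation in the number of nodes, can be sketched as follows; the smooth, boson-propagator-like integrand is a stand-in for the actual vertex-function integrands.

```python
import numpy as np

# Sketch of the numerical procedure: a 4D Gauss-Legendre product rule over
# the Brillouin zone [-pi, pi]^4, evaluated for the node sequence
# n = 14, 18, 22, 26, 30 and extrapolated with a c0 + c1/n^4 ansatz.
# The integrand is a smooth stand-in, not a lattice vertex function.
def gauss_legendre_4d(f, n):
    x, w = np.polynomial.legendre.leggauss(n)  # nodes/weights on [-1, 1]
    x, w = np.pi * x, np.pi * w                # rescale to [-pi, pi]
    k = np.stack(np.meshgrid(x, x, x, x, indexing="ij"), axis=-1)
    W = (w[:, None, None, None] * w[None, :, None, None]
         * w[None, None, :, None] * w[None, None, None, :])
    return np.sum(W * f(k))

def f(k):  # boson-propagator-like stand-in integrand
    return 1.0 / (4.0 + np.sum(np.sin(k / 2.0) ** 2, axis=-1))

ns = np.array([14, 18, 22, 26, 30])
vals = np.array([gauss_legendre_4d(f, n) for n in ns])
slope, intercept = np.polyfit(1.0 / ns**4, vals, 1)
limit = intercept  # extrapolated value for an infinite number of nodes
```

For a smooth integrand the node sequence converges rapidly, so the $1/n^4$ extrapolation mainly serves as a consistency check on the residual quadrature error.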
As mentioned in the introduction, we perform a one-level smearing of the Wilson part in the clover action. The corresponding Feynman rules needed for the one-loop quark-quark-gluon vertex are much more complicated than those in~\cite{Capitani:2006ni}. The qqgg-vertex needed in diagrams (c) and (d) of Fig.~\ref{fig2} receives an additional antisymmetric piece. The qqggg-vertex in diagram (e) does not even exist in the forward case. The Feynman rules are given in Appendix A. The diagrams needed for the calculation of the quark propagator are shown in Fig.~\ref{fig1}. We have performed our calculation in general covariant gauge. \begin{figure}[!htb] \begin{center} \includegraphics[scale=0.01,width=0.8\textwidth]{selfenergy2.eps} \end{center} \caption{One-loop diagrams contributing to the quark self energy.} \label{fig1} \end{figure} \section{Results for the improvement coefficients and critical hopping parameter} The anticipated general structure for the amputated three-point function at one loop is \begin{eqnarray} \Lambda_\mu(p_1,p_2,q)&=& \Lambda^{{\overline{MS}}}_\mu(p_1,p_2,q) +A_{\rm lat}\,i\,\frac{g^3}{16\pi^2}\,\gamma_\mu \nonumber\\ & & + \, B_{\rm lat}\,\frac{a}{2}\,\frac{g^3}{16\pi^2}\,\left(\ensuremath{\slashed{p}}_2\,\gamma_\mu +\gamma_\mu\,\ensuremath{\slashed{p}}_1\right) + C_{\rm lat}\,i\,\frac{a}{2}\,\frac{g^3}{16\pi^2}\,\sigma_{\mu\alpha}\,q_\alpha \,. \label{Lam} \end{eqnarray} $\Lambda^{{\overline{MS}}}_\mu(p_1,p_2,q)$ is the universal part of the three-point function, independent of the chosen gauge action, computed in the $\overline{MS}$-scheme \begin{eqnarray} \Lambda^{{\overline{MS}}}_\mu(p_1,p_2,q)&=& -i\, g\, \gamma_\mu - g\, \frac{a}{2}\,{\bf 1}\left( p_{1,\mu}+p_{2,\mu}\right)- c_{SW}\,i\, g\,\frac{a}{2}\sigma_{\mu\alpha}\,q_\alpha \nonumber\\ & & + \,i\, \frac{1}{2}\,\frac{g^3}{16\pi^2}\,\Lambda^{{\overline{MS}}}_{1,\mu}(p_1,p_2,q) + \frac{a}{2}\frac{g^3}{16\pi^2} \,\Lambda^{{\overline{MS}}}_{2,\mu}(p_1,p_2,q)\,. 
\label{LamMS} \end{eqnarray} We have calculated the complete expressions for $\Lambda^{{\overline{MS}}}_{1,\mu}(p_1,p_2,q)$ and $\Lambda^{{\overline{MS}}}_{2,\mu}(p_1,p_2,q)$. The $\mathcal{O}(a)$ contribution, $\Lambda^{{\overline{MS}}}_{2, \mu}(p_1,p_2,q)$, simplifies if we set $c_{SW}=1+\mathcal{O}(g^2)$ as in (\ref{csw}). After some algebra we find \begin{eqnarray} \Lambda^{{\overline{MS}}}_{2,\mu}(p_1,p_2,q) &=& \frac{1}{2}\left(\ensuremath{\slashed{p}}_2\,\Lambda^{{\overline{MS}}}_{1,\mu}(p_1,p_2,q)+ \Lambda^{{\overline{MS}}}_{1,\mu}(p_1,p_2,q)\,\ensuremath{\slashed{p}}_1\right) \nonumber\\ \label{lambda2} & & -\, C_F\left(\ensuremath{\slashed{p}}_2\, \gamma_\mu \,(1-\xi)(1-\log(p_2^2/\mu^2))\right. \\ & & \quad \quad \, \left. +\gamma_\mu\,\ensuremath{\slashed{p}}_1 \,(1-\xi)(1-\log(p_1^2/\mu^2))\right)\,, \nonumber \end{eqnarray} where $\mu^2$ is the $\overline{MS}$ mass scale (not to be confused with the index $\mu$). Therefore, we only need $\Lambda^{{\overline{MS}}}_{1,\mu}(p_1,p_2,q)$ to present the one-loop result (\ref{LamMS}). $\Lambda^{{\overline{MS}}}_{1,\mu}(p_1,p_2,q)$ is given in Appendix B. 
If we insert (\ref{Lam}) and (\ref{LamMS}) with (\ref{lambda2}) into the off-shell improvement relation (\ref{impcond}) we obtain the following conditions, which ensure that all terms of order $\mathcal{O}(ag^3)$ vanish: \begin{eqnarray} \left(c_{SW}^{(1)} - \frac{C_{\rm lat}}{16\pi^2}\right)\,\sigma_{\mu\alpha}\,q_\alpha &=& 0\,, \label{cSWcond}\\ \left(c_{NGI}^{(1)} - \frac{1}{32\pi^2}\,\left(A_{\rm lat}-B_{\rm lat}-\Sigma_{21}\right)\right) \left(\ensuremath{\slashed{p}}_2 \, \gamma_\mu+ \gamma_\mu\, \ensuremath{\slashed{p}}_1 \right) &=&0\,, \label{cNGIcond} \end{eqnarray} with $\Sigma_{21}$ defined from (\ref{S}) as \begin{eqnarray} \frac{\Sigma_2(p)}{\Sigma_1(p)}&=&1+\frac{g^2\,C_F}{16\pi^2}\left((1-\xi)(1-\log(a^2p^2))+\Sigma_{21,0} \right) \nonumber\\ &\equiv&1+\frac{g^2\,C_F}{16\pi^2}\left((1-\xi)(1-\log(p^2/\mu^2))\right)+\frac{g^2}{16\pi^2}\Sigma_{21} \label{SigmaWP} \end{eqnarray} and \begin{equation} \Sigma_{21}=C_F\,\left( -(1-\xi)\log(a^2\mu^2)+\Sigma_{21,0} \right)\,. \label{SigmaWPr} \end{equation} The constant $\Sigma_{21,0}$ depends on the chosen lattice action. It should be noted that equations (\ref{cSWcond}) and (\ref{cNGIcond}) are obtained by using the general structure (\ref{lambda2}) only -- we do not need to insert the complete calculated result for $\Lambda^{{\overline{MS}}}_{1,\mu}(p_1,p_2,q)$. In order to obtain momentum independent and gauge invariant improvement coefficients we see from (\ref{cSWcond}) that $C_{\rm lat}$ itself has to be constant and gauge invariant. From (\ref{cNGIcond}) and (\ref{SigmaWPr}) we further conclude that the $\log(a^2\mu^2)$-terms from $A_{\rm lat}$ and $B_{\rm lat}$ have to cancel those from $\Sigma_{21}$. The same is true for the corresponding gauge terms. The terms $\propto(1-\xi)(1- \log(p_i^2/\mu^2))$ ($i=1,2$) coming from (\ref{SigmaWP}) are canceled by the corresponding terms in (\ref{lambda2}).
Therefore, the relation between $\Lambda^{{\overline{MS}}}_{1,\mu}(p_1,p_2,q)$ and $\Lambda^{{\overline{MS}}}_{2,\mu}(p_1,p_2,q)$ as given in (\ref{lambda2}) is a nontrivial result. Once more, it should be emphasized that this relation only holds if we use $c_{SW}=1$ at leading order in $g^2$. If (\ref{lambda2}) were not true, we would not be able to improve the Green functions by adding the simple $\mathcal{O}(a)$ terms we have considered. For completeness we also give the corresponding one-loop values for the quark field improvement coefficient $c_D$ as defined in (\ref{imppsi}). They can be derived from the $\mathcal{O}(a)$ improvement of the quark propagator. The one-loop improvement coefficient $c_D^{(1)}$ is related to the quark self energy by \begin{equation} c_D = -\frac{1}{4}\,\left(1+\frac{g^2\, C_F}{16 \pi^2} \, \left(2\,\Sigma_1-\Sigma_2\right)\right) +\mathcal{O}(g^4)\equiv -\frac{1}{4}\,\left(1+g^2\,c_D^{(1)}\right)+\mathcal{O}(g^4)\,. \label{cD1} \end{equation} $c_D^{(1)}$ has been calculated for ordinary clover fermions and plaquette gauge action in~\cite{Capitani:2000xi}. Now we present our numerical results in general covariant gauge with parameter $\xi$ and as a function of the stout parameter $\omega$.
For the plaquette action with stout smearing the quantities $A_{\rm lat}$, $B_{\rm lat}$ and $C_{\rm lat}$ are obtained as \begin{eqnarray} A_{\rm lat}^{\rm Plaq}&=&C_F\,\Big(9.206269 +3.792010\,\xi - 196.44601\,\omega + 739.683641\,\omega^2 \nonumber \\ && \quad \quad + (1-\xi)\log (a^2\mu^2) \Big) \nonumber\\ & &+ \, N_c\,\left(-4.301720 + 0.693147\,\xi + \,(1-\xi/4)\log (a^2\mu^2) \right)\,, \nonumber\\ B_{\rm lat}^{\rm Plaq}&=&C_F\,\Big(9.357942 + 5.727769\,\xi - 208.583208\,\omega + 711.565256\,\omega^2 \nonumber \\ && \quad \quad + 2\,(1-\xi)\log (a^2\mu^2) \Big) \\ & &+\, N_c\,\left(-4.752081 +0.693147\,\xi +3.683890\,\omega + (1-\xi/4)\log (a^2\mu^2) \right)\,, \nonumber\\ C_{\rm lat}^{\rm Plaq}&=&C_F\,\left(26.471857 + 170.412296\,\omega - 582.177099\,\omega^2\right) \nonumber\\ & &+\, N_c\,\left(2.372649 + 1.518742\,\omega -44.971612\,\omega^2\right)\,. \nonumber \label{ABCplaq} \end{eqnarray} For the stout smeared Symanzik action we get \begin{eqnarray} A_{\rm lat}^{\rm Sym}&=&C_F\,\Big(5.973656 +3.792010\,\xi - 147.890719\,\omega + 541.380348\,\omega^2 \nonumber \\ && + \, (1-\xi)\log (a^2\mu^2) \Big) \nonumber\\ & &+ \, N_c\,\left(-3.08478 + 0.693159\,\xi - 0.384236\,\omega + (1-\xi/4)\log (a^2\mu^2) \right)\,, \nonumber\\ B_{\rm lat}^{\rm Sym}&=&C_F\,\Big(6.007320 + 5.727769\,\xi - 163.833410\,\omega + 542.892478\,\omega^2 \nonumber \\ && + \, 2\,(1-\xi)\log (a^2\mu^2) \Big) \\ & &+\, N_c\,\left(-3.841082 +0.693179\,\xi +3.039641\,\omega + (1-\xi/4)\log (a^2\mu^2) \right)\,, \nonumber \\ C_{\rm lat}^{\rm Sym}&=&C_F\,\left(18.347163 + 130.772885\,\omega - 387.690744\,\omega^2\right) \nonumber\\ & &+\, N_c\,\left(2.175560 + 2.511657\,\omega -50.832203\,\omega^2\right)\,. \nonumber \label{ABC} \end{eqnarray} As shown in (\ref{impcond}) (or equivalently (\ref{cNGIcond})) we need the self energy parts $\Sigma_1(p)$ and $\Sigma_2(p)$ as defined in (\ref{S}) to solve the off-shell improvement condition.
They have the general form \begin{eqnarray} \Sigma_{1}(p)&=&1-\frac{g^2\, C_F}{16 \pi^2} \, \left[(1-\xi)\log (a^2p^2) +\Sigma_{1,0}\right]\,,\nonumber\\ \Sigma_{2}(p)&=&1-\frac{g^2\, C_F}{16 \pi^2} \, \left[2\,(1-\xi)\log (a^2p^2) +\Sigma_{2,0}\right]\,. \label{SigmapW} \end{eqnarray} For the plaquette and Symanzik actions we obtain \begin{eqnarray} \Sigma_{1,0}^{\rm Plaq} &=& 8.206268 - 196.446005\,\omega + 739.683641\,\omega^2+4.792010\,\xi\,,\nonumber\\ \Sigma_{2,0}^{\rm Plaq} &=& 7.357942 - 208.583208\,\omega + 711.565260\,\omega^2+7.727769\,\xi\,,\nonumber\\ \Sigma_{1,0}^{\rm Sym} &=& 4.973689 - 147.890720\,\omega + 541.380518\,\omega^2+4.792010\,\xi\,,\\ \Sigma_{2,0}^{\rm Sym} &=& 4.007613 - 163.833419\,\omega + 542.892535\,\omega^2+7.727769\,\xi\,.\nonumber \label{sigmas} \end{eqnarray} This results in the following expressions for $\Sigma_{21}$ as defined in (\ref{SigmaWPr}) \begin{eqnarray} \Sigma_{21}^{\rm Plaq} &=& C_F\, \Big(-0.151673 - 1.935759\,\xi + 12.137203\,\omega + 28.118384\,\omega^2 \nonumber\\ & & \quad\quad\quad -\,(1-\xi)\,\log(a^2\mu^2)\Big)\,, \nonumber\\ \Sigma_{21}^{\rm Sym} &=& C_F\, \Big(-0.033924 - 1.935759\,\xi + 15.942699\,\omega-1.512017\,\omega^2 \\ & & \quad\quad\quad -\, (1-\xi)\,\log(a^2\mu^2)\Big)\,. 
\nonumber \label{SigmaWPnum} \end{eqnarray} Inserting the corresponding numbers into (\ref{cSWcond}), (\ref{cNGIcond}) and (\ref{cD1}), we obtain the one-loop contributions of the clover improvement coefficient \begin{eqnarray} c_{SW}^{(1),{\rm Plaq}}&=&C_F\,\left(0.167635 + 1.079148\,\omega - 3.686679\,\omega^2\right) \nonumber\\ & &+\, N_c\,\left(0.015025 + 0.009617\,\omega - 0.284786\,\omega^2\right)\,, \label{cswplaq}\\ c_{SW}^{(1),{\rm Sym}}&=&C_F\,\left(0.116185 + 0.828129\,\omega - 2.455080\,\omega^2\right) \nonumber\\ & &+\, N_c\,\left(0.013777 + 0.015905\,\omega - 0.321899\,\omega^2\right)\,, \label{cswSym} \end{eqnarray} the off-shell quark field improvement coefficient \begin{eqnarray} c_{NGI}^{(1),{\rm Plaq}}&=& N_c\,\left(0.001426 - 0.011664 \,\omega \right)\,, \label{cNGIplaq}\\ c_{NGI}^{(1),{\rm Sym}}&=& N_c\,\left(0.002395 - 0.010841\,\omega \right)\,, \label{cNGISym} \end{eqnarray} and the on-shell quark field improvement coefficient \begin{eqnarray} c_D^{(1),{\rm Plaq}}&=& C_F\,\left( 0.057339 + 0.011755\,\xi - 1.167149\,\omega + 4.862163\,\omega^2\right)\,, \label{cD2}\\ c_D^{(1),{\rm Sym}} &=& C_F\,\left(0.037614 + 0.011755\,\xi - 0.835571\,\omega + 3.418757\,\omega^2 \right)\,, \label{cD3} \end{eqnarray} for the plaquette and Symanzik action, respectively. For $\omega=0$ both the plaquette result (\ref{cswplaq}) and the Symanzik result (\ref{cswSym}) agree, within the accuracy of our calculations, with the numbers quoted in~\cite{Wohlert:1987rf,Luscher:1996vw} and~\cite{Aoki:2003sj}. {}From Ward identity considerations it is known that the coefficient $c_{NGI}$ has to be proportional to $N_c$ only. Additionally, $c_{NGI}$ and $c_{SW}$ should be gauge invariant. Both conditions are fulfilled within the errors which have been discussed in the previous section. It should be noted that (\ref{cNGIplaq}) and (\ref{cNGISym}) are the first one-loop results for the quark field improvement coefficient $c_{NGI}$.
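The quoted coefficients can be cross-checked against (\ref{cSWcond}), (\ref{cNGIcond}) and (\ref{cD1}): $c_{SW}^{(1)}=C_{\rm lat}/(16\pi^2)$, the $N_c$ part of $c_{NGI}^{(1)}$ equals $(A_{\rm lat}-B_{\rm lat})/(32\pi^2)$ since $\Sigma_{21}$ is proportional to $C_F$ only, and the $c_D^{(1)}$ coefficients follow from $2\Sigma_{1,0}-\Sigma_{2,0}$ because the logarithms cancel. A small numerical sketch of these checks (coefficients transcribed from the equations above; $c_{SW}$ is checked here for the Symanzik action):

```python
import numpy as np

pi16 = 16 * np.pi**2
pi32 = 32 * np.pi**2

# c_SW^(1) = C_lat/(16 pi^2), Symanzik action; coefficient triples are
# ordered (const, omega, omega^2).
C_lat_Sym_CF = np.array([18.347163, 130.772885, -387.690744])
C_lat_Sym_Nc = np.array([2.175560, 2.511657, -50.832203])
csw_Sym_CF = C_lat_Sym_CF / pi16   # -> (0.116185, 0.828129, -2.455080)
csw_Sym_Nc = C_lat_Sym_Nc / pi16   # -> (0.013777, 0.015905, -0.321899)

# N_c part of c_NGI^(1) = (A_lat - B_lat)/(32 pi^2); plaquette action,
# coefficients ordered (const, omega).
A_Nc = np.array([-4.301720, 0.0])
B_Nc = np.array([-4.752081, 3.683890])
cNGI_Plaq = (A_Nc - B_Nc) / pi32   # -> (0.001426, -0.011664)

# c_D^(1) = C_F (2 Sigma_{1,0} - Sigma_{2,0})/(16 pi^2); plaquette action,
# coefficients ordered (const, xi, omega, omega^2).
S10 = np.array([8.206268, 4.792010, -196.446005, 739.683641])
S20 = np.array([7.357942, 7.727769, -208.583208, 711.565260])
cD_Plaq = (2 * S10 - S20) / pi16   # -> (0.057339, 0.011755, -1.167149, 4.862163)
print(csw_Sym_CF, cNGI_Plaq, cD_Plaq)
```

All derived coefficients agree with the quoted values to the rounding accuracy of the numbers in the text.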
The gauge dependent improvement coefficient $c_D$ depends only on the color factor $C_F$ because it is determined by $\mathcal{O}(a)$ improvement of the quark propagator. The additive mass renormalization is given by \begin{equation} am_0=\frac{g^2\, C_F}{16 \pi^2} \,\frac{\Sigma_0}{4} \,. \end{equation} This leads to the critical hopping parameter $\kappa_c$, at which chiral symmetry is approximately restored, \begin{equation} \kappa_c=\frac{1}{8}\left( 1- \frac{g^2\, C_F}{16 \pi^2} \,\frac{\Sigma_0}{4}\right)\,. \label{kappac} \end{equation} Using the plaquette or Symanzik gauge actions, we obtain \begin{eqnarray} \Sigma_0^{\rm Plaq} &=& -31.986442 + 566.581765\,\omega -2335.407087\,\omega^2 \,, \label{Sigma0plaq} \\ \Sigma_0^{\rm Sym} &=& -23.832351 + 418.212508\,\omega - 1685.597405\,\omega^2\,. \label{Sigma0sym} \end{eqnarray} This leads to the perturbative expression for $\kappa_c$ \begin{eqnarray} \kappa_c^{\rm Plaq} &=& \frac{1}{8} \left[ 1 + g^2 \, C_F \left(0.050639 - 0.896980 \,\omega + 3.697285 \,\omega^2 \right) \right] \,, \label{kappacplaq} \\ \kappa_c^{\rm Sym} & =& \frac{1}{8} \left[ 1 + g^2 \, C_F \left( 0.037730 - 0.662090\,\omega +2.668543\,\omega^2 \right) \right] \,. \label{kappacSym} \end{eqnarray} For both actions $am_0$ can be tuned to zero for admissible values of $\omega$. Using the smaller of the two possible values, we find $\omega=0.089396$ for the plaquette action and $\omega=0.088689$ for the Symanzik gauge action. \section{Mean field improvement} It is well known that one-loop perturbation theory in the bare coupling constant $g^2$ leads to a poor approximation. The coefficient of $g^2$ is large in most quantities, and the series converges poorly. One traditional way to reduce this problem is by mean field improvement, which consists of two ideas. The first is that we calculate each quantity in a simple mean field approximation, and then re-express the perturbative result as the mean field result multiplied by a perturbative correction factor.
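The quoted values of $\omega$ are the smaller roots of the quadratic brackets in (\ref{kappacplaq}) and (\ref{kappacSym}); a short numerical check:

```python
import numpy as np

def omega_critical(c0, c1, c2):
    """Smaller root of c2*w^2 + c1*w + c0 = 0, i.e. the smaller stout
    parameter omega at which the O(g^2) part of kappa_c vanishes."""
    disc = c1 * c1 - 4.0 * c2 * c0
    return (-c1 - np.sqrt(disc)) / (2.0 * c2)

# Coefficient triples (const, omega, omega^2) from kappa_c above.
w_plaq = omega_critical(0.050639, -0.896980, 3.697285)
w_sym = omega_critical(0.037730, -0.662090, 2.668543)
print(w_plaq, w_sym)  # ~0.089396 and ~0.088689, the values quoted above
```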
If the mean field approximation is good, the correction factor will be close to 1, and we have resolved the problem of the large one-loop coefficient. As a good internal test of this part, we can simply look to see how large the coefficient in this correction factor is (the ``tadpole improved coefficient''), compared with the initial unimproved coefficient. The second part of the mean field approximation is that we change our expansion parameter from the bare coupling $g^2$ to some ``boosted'' coupling constant, $g^2_{MF}$, which we hope represents physics at a more relevant scale, and leads to a more rapidly convergent series. A well-chosen boosted coupling would reduce the two-loop coefficient. Unfortunately we usually cannot test this part of the improvement procedure, because the two-loop coefficient is unknown. Fortunately, if the mean field approximation is good, the exact choice of boosted coupling constant will not be too crucial, because the lowest order improved coefficient will be a small number. \subsection{Mean field approximation for smeared fermions} In the mean field approximation we typically assume that the gauge fields on each link are independently fluctuating variables, and that we can simply represent the links by an average value $u_0$. Typical choices would be to take $u_0^4$ to be the average plaquette value, or $u_0$ to be the average link value in the Landau gauge. A natural question is how we should extend the mean field approximation if we employ smearing. One possibility is to express everything in terms of two quantities: $u_0$, a mean value for the unsmeared link, and $u_S$, a mean value for smeared links\footnote{PR would like to thank Colin Morningstar for conversations on this point.}. We will discuss the relation between these two quantities later; first we want to make a general point about mean field approximations and smearing.
The reason we smear our gauge links is to suppress very short range fluctuations in the gauge field, which is justified by the argument that these short range fluctuations are very lattice-dependent, rather than physical. However, put another way, suppressing short range fluctuations means that we are correlating nearby gauge links. So there is a certain tension between smearing and the mean field notion that each link is fluctuating independently. We will take the attitude that it does still make sense to use the mean field approximation if smearing is mild -- but we should treat the results with some degree of caution if extreme smearing is used. Applying this double-$u$ mean field approximation to the SLiNC fermion matrix we find the following results for the principal fermion quantities, \begin{eqnarray} && \Sigma_1(p) \approx u_S \,, \quad \Sigma_2(p) \approx u_S \,, \quad Z_\psi \approx u_S \,, \quad \kappa_c \approx \frac{1}{8 u_S}\,, \quad c_{SW} \approx \frac{u_S}{u_0^4} \end{eqnarray} (we define $Z_\psi$ by the relation $S^{\rm ren} = Z_\psi S^{\rm lat}$). For reasonable smearing we expect the smeared link $u_S$ to be closer to 1 than the bare link $u_0$, so most quantities will lie closer to their tree-level values with smearing. However, the clover coefficient $c_{SW}$ is an exception; it will be further from 1 with smearing than without, because we construct our clover term from unsmeared links. As a result, we obtain the mean field expressions for $\kappa_c$ and $c_{SW}$ by performing the following replacements \begin{equation} \kappa_c(g^2) \rightarrow \kappa_c^{MF}(g_{MF}^2,u_S)= \frac{1}{8}\,\frac{u_S^{\rm pert}(g_{MF}^2)}{u_S}\,\kappa_c(g_{MF}^2) \end{equation} and \begin{equation} c_{SW}(g^2) \rightarrow c_{SW}^{MF}(g_{MF}^2,u_S,u_0)= \frac{u_S}{u_0^4}\,\frac{u_0^{\rm pert}(g_{MF}^2)^{\,4}}{u_S^{\rm pert}(g_{MF}^2)}\,c_{SW}(g_{MF}^2)\,. 
\end{equation} Here $u_S$ and $u_0$ are the measured smeared and unsmeared links at the given coupling, and $u_S^{\rm pert}$ and $u_0^{\rm pert}$ denote the corresponding expressions in lattice perturbation theory. \subsection{The smeared plaquette in perturbation theory} We will use $u_S^{\rm pert}$ derived from the smeared perturbative plaquette $P_S$ \begin{equation} u_S^{\rm pert} \equiv P_S^{1/4}. \end{equation} To one-loop order we have \begin{equation} u_S^{\rm pert} = 1 - \frac{g^2\, C_F}{16 \pi^2} \, k_S\,, \end{equation} with\footnote{We have written this integral for the case of a plaquette in the 1-2 plane; any orientation gives the same result.} \begin{eqnarray} k_S = 8 \pi^2 a^4 \int \frac{d^4 k}{(2 \pi)^4}\hspace{-3mm} & D_{\alpha \beta}(k) & \hspace{-3mm} \Bigl[\, V_{\alpha 1}(k,\omega) V_{\beta 1}(k,\omega) s_2^2(k) +V_{\alpha 2}(k,\omega) V_{\beta 2}(k,\omega) s_1^2(k) \nonumber \\ & -& \hspace{-7mm} \left( V_{\alpha 1}(k,\omega) V_{\beta 2}(k,\omega) + V_{\beta 1}(k,\omega) V_{\alpha 2}(k,\omega) \right) s_1(k) s_2(k) \Bigr] \end{eqnarray} where $D_{\alpha \beta}(k)$ is the gluon propagator for the action in question. The smearing function $V_{\alpha \mu}(k,\omega)$ is defined in (\ref{Vdef}) in Appendix A; $s_\mu(k)$ and $s^2(k)$, used below, are given in~(\ref{eq:A2}). Using symmetry and the definition of $V$, the expression simplifies to \begin{equation} k_S = 16 \pi^2 a^4 \int \frac{d^4 k}{(2 \pi)^4} \left[ D_{1 1}(k) s_2(k) s_2(k) - D_{12}(k) s_1(k) s_2(k) \right] \left( 1 - 4 \, \omega \, s^2(k) \right)^2 \,. \label{SmearedPlaq} \end{equation} We can see from this form that mild smearing has the effect of suppressing the contribution from large $k$. Setting $\omega = 0$ in $k_S$, we recover the unsmeared link in perturbation theory \begin{equation} u_0^{\rm pert}= 1 - \frac{g^2\, C_F}{16 \pi^2} \, k_S(\omega=0)\,. \label{u0} \end{equation} For the plaquette action propagator we can calculate the integral exactly.
The result is \begin{equation} k_S^{\rm Plaq} = \pi^2 \left( 1 - 16 \,\omega + 72 \,\omega^2 \right) \,. \end{equation} Let us see how well this improves the expressions for $\kappa_c$ and $c_{SW}$. Using the result~(\ref{kappacplaq}) we find \begin{equation} \kappa_c^{{\rm Plaq},MF} = \frac{1}{8 u_S} \left[ 1 + g_{MF}^2\, C_F \, \left( -0.011861 + 0.103020 \,\omega -0.802715\,\omega^2 \right) \right] \end{equation} which successfully reduces the perturbative coefficients for every power of $\omega$. Trying the same thing with the clover coefficient (\ref{cswplaq}) gives \begin{eqnarray} c_{SW}^{{\rm Plaq},MF} = \frac{u_S}{u_0^4}\, \Bigl\{ \hspace{-3mm} &1& \hspace{-3mm} + \, g^2_{MF}\, \Bigl[ C_F\,\left(-0.019865 + 0.079148\,\omega + 0.813321\,\omega^2\right) \nonumber\\ & &+\, N_c\,\left(0.015025 + 0.009617\,\omega - 0.284786\,\omega^2\right)\,\Bigr] \Bigr\} \,. \end{eqnarray} Again, mean field improvement works well. For the Symanzik action we calculate the integral in (\ref{SmearedPlaq}) numerically, and get the result \begin{equation} k^{\rm Sym}_S = \pi^2 \left( 0.732525 -11.394696\,\omega + 50.245225\,\omega^2 \right)\,. \end{equation} The corresponding mean field improved expressions for $\kappa_c$ (\ref{kappacSym}) and $c_{SW}$ (\ref{cswSym}) are \begin{eqnarray} \kappa_c^{{\rm Sym}, MF} &=& \frac{1}{8 u_S} \left[1 + g_{MF}^2 \,C_F \, \left( -0.008053 + 0.0500781\,\omega -0.471784\,\omega^2 \right) \right] \,, \\ c_{SW}^{{\rm Sym},MF }& = & \frac{u_S}{u_0^4} \Big\{ 1 + g^2_{MF} \, \Big[ C_F\,\left(-0.0211635 + 0.115961\,\omega + 0.685247\,\omega^2 \right) \nonumber\\ & &+\, N_c\,\left(0.013777 + 0.015905\,\omega - 0.321899\,\omega^2\right)\,\Big] \Big\} \,. \end{eqnarray} \subsection{Choice of $g^2_{MF}$} In this section we discuss the boosted coupling for $SU(3)$; we set $N_c=3$ and $C_F=4/3$ throughout.
From higher order continuum calculations we know that $g^2_{{\overline{MS}}}(\mu)$ is a good expansion parameter if $\mu$ is close to the appropriate physical scale. On the other hand, series in the bare lattice coupling $g^2(a)$ usually converge poorly. To understand this difference let us compare the two couplings. To one-loop order we have \begin{equation} \frac{1}{g^2_{{\overline{MS}}}(\mu)} - \frac{1}{g^2(a)} = 2b_0 \left(\log\frac{\mu}{\Lambda_{{\overline{MS}}}} - \log\frac{1}{a\Lambda_{\rm lat}}\right) = 2b_0 \log(a\mu) + d_g + N_f\, d_f \, , \label{gg} \end{equation} where $b_0=(11-2N_f/3)/(4\pi)^2$, and $N_f$ is the number of flavors. The ratio of $\Lambda$ parameters is thus given by \begin{equation} \frac{\Lambda_{\rm lat}}{\Lambda_{{\overline{MS}}}} = \exp \left(\frac{d_g + N_f\, d_f}{2b_0}\right) \, . \end{equation} The coefficient $d_g$ is known for the plaquette and Symanzik gauge action~\cite{Hasenfratz}: \begin{equation} d_g^{\rm Plaq} = -0.4682\,, \quad d_g^{\rm Sym} = -0.2361 \,. \end{equation} In Appendix C we show that $d_f$ is independent of the stout smearing parameter $\omega$. Therefore, we can use the value for clover fermions computed in~\cite{Booth:2001qp} \begin{equation} d_f=0.0314917 \,. \label{df} \end{equation} For $N_f=3$ this leads to \begin{eqnarray} \frac{\Lambda_{\rm lat}}{\Lambda_{{\overline{MS}}}} &=& 0.038 \quad \mbox{Plaquette}\,,\\ \frac{\Lambda_{\rm lat}}{\Lambda_{{\overline{MS}}}} &=& 0.289 \quad \mbox{Symanzik}\,. \end{eqnarray} These ratios are far from 1, especially for the plaquette action, which explains the poor convergence of series in $g^2(a)$. Now let us see what happens to the Lambda ratio if we make the popular choice of boosted coupling \begin{equation} g_{MF}^2 = \frac{g^2}{u_0^4} \, .
\label{gmf} \end{equation} Upon inserting (\ref{u0}) and (\ref{gmf}) in (\ref{gg}), we obtain \begin{equation} \frac{1}{g^2_{{\overline{MS}}}(\mu)} - \frac{1}{g_{MF}^2(a)} = 2b_0 \left(\log\frac{\mu}{\Lambda_{{\overline{MS}}}} - \log\frac{1}{a\Lambda_{\rm lat}^{MF}}\right) = 2b_0 \log(a\mu) + d_g + N_f\, d_f +\frac{k_u}{3\pi^2} \, , \end{equation} which gives \begin{equation} \frac{\Lambda_{\rm lat}^{MF}}{\Lambda_{{\overline{MS}}}} = \exp \left(\frac{d_g + N_f\, d_f + k_u/3\pi^2}{2b_0}\right) \, . \label{ratio} \end{equation} For $N_f=3$ the numerical values of this ratio are \begin{eqnarray} \frac{\Lambda_{\rm lat}^{MF}}{\Lambda_{{\overline{MS}}}} &=& 0.702 \quad \mbox{Plaquette}\,,\\ \frac{\Lambda_{\rm lat}^{MF}}{\Lambda_{{\overline{MS}}}} &=& 2.459 \quad \mbox{Symanzik}\,. \end{eqnarray} We see that mean field improvement drives $\Lambda_{\rm lat}$ towards $\Lambda_{{\overline{MS}}}$ for both the plaquette and Symanzik gauge action, giving $g_{MF}^2 \approx g_{{\overline{MS}}}^2$, so that $g_{MF}^2$ appears to be a good expansion parameter in both cases. A perfect match is obtained for $\mu=1/(0.702\, a)$ ($\mu=1/(2.459\, a)$) for the plaquette (Symanzik) action. \section{Concluding remarks} In the present paper we have computed the improvement coefficient $c_{SW}$ and the additive mass renormalization/critical hopping parameter in one-loop perturbation theory for general stout parameter $\omega$, performing a single smearing. To separate the effect of improving the gauge action from the effect of tuning the fermion action, we have done the calculation for both the plaquette action and the tree-level Symanzik gauge action. In addition we also present the $\mathcal{O}(g^2)$ corrections to the coefficients $c_{NGI}$ and $c_D$ needed to $\mathcal{O}(a)$ improve the quark fields in the most general case. We give mean field (tadpole) improved results for $\kappa_c$ and $c_{SW}$.
For both the plaquette and the Symanzik action the boosted coupling $g_{MF}^2$ turns out to be close to $g_{{\overline{MS}}}^2$, which makes $g_{MF}^2$ a good expansion parameter. We thus may expect that the perturbative series converges rapidly. For $N_f=3$ flavors of dynamical quarks it turns out that the one-loop improved Symanzik gauge action~\cite{Luscher:1984xn} largely coincides with its tree-level counterpart, with coefficients $c_0 \approx 5/3$, $c_1 \approx -1/12$ and $c_2 \approx 0$~\cite{Hao:2007iz}. This makes the tree-level Symanzik action (\ref{SG}) stand out against other improved gauge actions, at least from the perturbative point of view. SLiNC fermions represent a family of ultralocal, ultraviolet filtered clover fermions. While they share all prominent features of clover fermions, among them $\mathcal{O}(a)$ improvement and flavor symmetry, they allow one to further optimize the chiral properties of the action by tuning the fattening of the links. In our forthcoming simulations with $N_f=2+1$ and $2+1+1$ flavors of dynamical quarks at realistic pion masses we shall employ this combination of gauge and fermion actions. Knowing the perturbative (asymptotic) value of $c_{SW}$, we can derive a closed expression for $c_{SW}$ that covers the whole range of $g^2$. We will do so in a subsequent paper employing the Schr\"odinger functional method. The one-loop coefficient $c_{SW}^{(1)}$ varies only slightly within the interval $0 \leq \omega \leq 0.2$ for both the plaquette and Symanzik action. For $\omega=0.1$, which is our preferred value, the tadpole improved one-loop coefficient becomes $c_{SW}^{(1)} \approx 0$, indicating that the mean field approximation works well. The final result is $c_{SW}^{MF} \approx u_S/u_0^4$ to a very good approximation for both gauge actions, where $u_S$ is the average smeared link, found by measuring the smeared plaquette, and $u_0$ the average unsmeared link, found by measuring the unsmeared plaquette.
This is to be compared with $c_{SW}^{MF} \approx 1/u_0^3$ for clover fermions without smearing. We therefore expect $c_{SW}$ to be a steeper function of $g^2$ in the case of SLiNC fermions than for clover fermions. Stout link fattening reduces the additive mass renormalization considerably, with and without tadpole improvement, as expected. In fact, the critical hopping parameter $\kappa_c$ can be tuned to its continuum value of $1/8$ for an appropriate choice of $\omega$. We also confirm by early simulations with this action~\cite{preparation} that the spread of the negative eigenvalues is reduced by a factor of $\approx 2$ for $\omega=0.1$ and non-perturbative $c_{SW}$, as compared to ordinary clover fermions. SLiNC fermions have many other appealing features as well. The renormalization factors of quark bilinear operators, for example, come out very close to unity, which hints at virtually continuum-like behavior. \section*{Acknowledgment} This investigation has been supported by DFG under contract FOR 465 (Forschergruppe Gitter-Hadronen-Ph\"anomenologie). We also acknowledge support by the EU Integrated Infrastructure Initiative Hadron Physics (I3HP) under contract number RII3-CT-2004-506078. \renewcommand{\theequation}{A.\arabic{equation}} \setcounter{equation}{0} \section*{Appendix A: Feynman rules} In this Appendix we give the Feynman rules for quark-gluon vertices derived from action (\ref{SF}) with single stout smeared gauge link variables in the Wilson part and general Wilson parameter $r$. The pieces in the vertices proportional to $c_{SW}$ are denoted by $\widetilde{V}$. They have been rederived in our notation and agree with the Feynman rules given in~\cite{Aoki:2003sj}. In the vertices we denote the incoming/outgoing quark momenta by $p_1/p_2$. The incoming gluons are described by momenta $k_i$, Lorentz indices $\alpha,\beta,\gamma$ and color indices $a,b,c=1,\dots,N_c^2-1$.
For the color matrices we have: \begin{eqnarray} &&T^a T^b = \frac{1}{2 N_c} \delta^{ab} I_{N_c} + \frac{1}{2}( d^{abc}+ i \,f^{abc}) T^c \nonumber \\ && C_F = \frac{N_c^2-1}{2 N_c} \,, \quad [T^a,T^b]=T^a T^b - T^b T^a\,, \quad \{T^a,T^b\}=T^a T^b + T^b T^a \\ && T_{ss}^{abc}=\{T^a,\{T^b,T^c\}\} \,, \quad T_{aa}^{abc}=[T^a,[T^b,T^c]] \,, \quad T_{sa}^{abc}=\{T^a,[T^b,T^c]\} \,. \nonumber \end{eqnarray} We use the abbreviations \begin{eqnarray} &&\ms{k}{\mu}=\sin\left(\frac{a}{2}k_\mu\right), \quad \mc{k}{\mu}=\cos\left(\frac{a}{2}k_\mu\right) \,, \quad s^2(k) = \sum_\mu \mss{k}{\mu} \,, \nonumber \\ && s^2(k_1,k_2)= \sum_\mu \ms{k_1+k_2}{\mu}\ms{k_1-k_2}{\mu} \equiv s^2(k_1)-s^2(k_2)\,. \label{eq:A2} \end{eqnarray} For later use we give the bare massless quark propagator \begin{equation} S(k) = \frac{a}{ i \sum_\mu \gamma_\mu \ms{2 k}{\mu} + r \sum_\mu \left( 1 - \mc{2k}{\mu} \right) }\,. \label{quarkprop} \end{equation} The structure of the Wilson quark-gluon vertices is \begin{eqnarray} W_{1\mu}(p_2,p_1) &=& {i} \, \mc{p_2+p_1}{\mu} \,\gamma_\mu + r\,\ms{p_2+p_1}{\mu} \nonumber \\ W_{2\mu}(p_2,p_1) &=& {i}\, \ms{p_2+p_1}{\mu}\,\gamma_\mu - r\,\mc{p_2+p_1}{\mu} \label{eq:A3} \,. 
\end{eqnarray} Let us introduce the following functions, which will be useful in the definitions of the improved vertices \begin{eqnarray} V_{\alpha\mu}(k,\omega)& =& \delta_{\alpha\mu} + 4\, \omega \, v_{\alpha\mu}(k) \label{Vdef}\\ v_{\alpha\mu}(k)&=&\ms{k}{\alpha}\ms{k}{\mu} -\delta_{\alpha\mu} \, s^2(k) \nonumber \\ g_{\alpha\beta\mu}(k_1,k_2)&=& \delta_{\alpha\beta} \mc{k_1+k_2}{\alpha} \ms{k_1-k_2}{\mu} \nonumber\\ &&-\, \delta_{\alpha\mu} \mc{k_2}{\alpha}\ms{2 k_1+k_2}{\beta}+ \delta_{\beta\mu} \mc{k_1}{\beta}\ms{2 k_2+k_1}{\alpha} \\ w_{\alpha\mu}(k_1,k_2)&=& \ms{k_1+k_2}{\alpha}\ms{k_1-k_2}{\mu}- \delta_{\alpha\mu} \, s^2(k_1,k_2)\,, \\ w_{\alpha\mu}(k,0)&=&v_{\alpha\mu}(k)\nonumber \end{eqnarray} \subsection*{The qqg-vertex: $V_\alpha^a(p_2,p_1,k_1; c_{SW},\omega)$} The qqg-vertex including stout smeared links and clover contribution is given by the expression ($p_1+k_1=p_2$) \begin{eqnarray} V_\alpha^a(p_2,p_1,k_1; c_{SW},\omega) &=& - g\, T^a\, \sum_\mu V_{\alpha\mu}(k_1,\omega)\, W_{1\mu}(p_2,p_1)+ c_{SW}\,\widetilde{V}_\alpha^a(k_1)\,. \end{eqnarray} The stout smeared part shows the separation property mentioned in~\cite{Capitani:2006ni}. The clover part is given by \begin{eqnarray} \widetilde{V}_\alpha^a(k_1)&=& -i\,g\, T^a\, \frac{r}{2}\,\sum_\mu \sigma_{\alpha\mu} \mc{k_1}{\alpha}\ms{2k_1}{\mu}\,. \end{eqnarray} \subsection*{The qqgg-vertex: $V_{\alpha\beta}^{ab}(p_2,p_1,k_1,k_2; c_{SW},\omega)$} We define the qqgg-vertex as follows ($p_1+k_1+k_2=p_2$): \begin{eqnarray} V_{\alpha\beta}^{ab}(p_2,p_1,k_1,k_2; c_{SW},\omega)=V_{\alpha\beta}^{\{a,b\}} + V_{\alpha\beta}^{[a,b]}+ c_{SW}\,\widetilde{V}_{\alpha\beta}^{ab}(k_1,k_2)\,. \end{eqnarray} The stout smeared part is separated into two parts proportional to $\{T^a,T^b\}$ and $[T^a,T^b]$.
The anticommutator part shows the factorization property mentioned for two and four quark operators \begin{eqnarray} V_{\alpha\beta}^{\{a,b\}} &=& \frac{1}{2} a\,g^2\,\{T^a,T^b\}\, \sum_\mu V_{\alpha\mu}(k_1,\omega)\, V_{\beta\mu}(k_2,\omega)\,W_{2\mu}(p_2,p_1) \,. \end{eqnarray} The commutator part is given by \begin{eqnarray} V_{\alpha\beta}^{[a,b]}&=& \frac{1}{2} a\,g^2\,[T^a,T^b]\, 4 \, \omega \sum_\mu g_{\alpha\beta\mu}(k_1,k_2)\, \,W_{1\mu}(p_2,p_1) \,. \label{eq:a12} \end{eqnarray} Note that this part is proportional to $\omega$. The part $\propto c_{SW}$ has been used in the form \begin{eqnarray} \label{eq:a13} \widetilde{V}_{\alpha\beta}^{ab}(k_1,k_2)&=& i\,\frac{r}{4} a\,g^2\,[T^a,T^b]\, \Big\{2\, \sigma_{\alpha\beta}\big[2 \mc{k_1}{\beta}\mc{k_2}{\alpha}\mc{k_1+k_2} {\alpha}\mc{k_1+k_2}{\beta} \\ && - \, \mc{k_1}{\alpha}\mc{k_2}{\beta}\big]+ \delta_{\alpha\beta}\,\sum_\mu\,\sigma_{\alpha\mu}\ms{k_1+k_2}{\alpha} \left[\ms{2k_2}{\mu}-\ms{2k_1}{\mu}\right]\Big\} \,. \nonumber \end{eqnarray} Both (\ref{eq:a12}) and (\ref{eq:a13}) vanish for tadpole diagrams along quark lines. \subsection*{The qqggg-vertex: $V_{\alpha\beta\gamma}^{abc}(p_2,p_1,k_1,k_2,k_3; c_{SW},\omega)$} We present that vertex contribution in the following form ($p_1+k_1+k_2+k_3=p_2$) \begin{eqnarray} &&\hspace{-20mm}V_{\alpha\beta\gamma}^{abc}(p_2,p_1,k_1,k_2,k_3; c_{SW},\omega)=\frac{1}{6} \, a^2 g^3 \times \nonumber\\ && \sum_\mu \bigg\{ W_{1\mu}(p_2,p_1)\,\Big[F^{abc}_{\alpha\beta\gamma\mu}(k_1,k_2,k_3) + {\rm cyclic \ perm.}\Big] \nonumber \\ && -\, 6 \,\omega \, W_{2\mu}(p_2,p_1) \, \Big[T_{sa}^{abc} \, V_{\alpha\mu}(k_1) \, g_{\beta\gamma\mu}(k_2,k_3) + {\rm cyclic \ perm.} \Big] \bigg\} \nonumber\\ && +\, c_{SW}\, \widetilde{V}_{\alpha\beta\gamma}^{abc}(k_1,k_2,k_3) \,. \label{eq:A11} \end{eqnarray} Cyclic permutations have to be performed in the gluon momenta as well as in the color and Lorentz indices of the three gluons. 
Note that the general stout smeared part is proportional both to $W_{1\mu}$ and $W_{2\mu}$. The coefficient $F^{abc}_{\alpha\beta\gamma\mu}(k_1,k_2,k_3)$ is decomposed into its different color structures: \begin{eqnarray} F^{abc}_{\alpha\beta\gamma\mu}(k_1,k_2,k_3)&=& T_{ss}^{abc} f^{(1)}_{\alpha\beta\gamma\mu}(k_1,k_2,k_3) + T_{aa}^{abc}\, \big( f^{(2)}_{\alpha\beta\gamma\mu}(k_1,k_2,k_3) - f^{(2)}_{\alpha\gamma\beta\mu}(k_1,k_3,k_2)\big) \nonumber \\ && +\, \left(T_{ss}^{abc}- \frac{1}{N_c} d^{abc}\right) f^{(3)}_{\alpha\beta\gamma\mu}(k_1,k_2,k_3) \,, \end{eqnarray} where the $f^{(i)}_{\alpha\beta\gamma\mu}$ are given as \begin{eqnarray} f^{(1)}_{\alpha\beta\gamma\mu}(k_1,k_2,k_3)&=&\frac{1}{2} \, V_{\alpha\mu}(k_1,\omega)\, V_{\beta\mu}(k_2,\omega)\, V_{\gamma\mu}(k_3,\omega) \,, \nonumber \\ f^{(2)}_{\alpha\beta\gamma\mu}(k_1,k_2,k_3)&=& \frac{1}{2} \, V_{\alpha\mu}(k_1,\omega)\, V_{\beta\mu}(k_2,\omega)\, \delta_{\gamma\mu} -\frac{1}{2} \,\delta_{\alpha\mu}\delta_{\beta\mu} \, V_{\gamma\mu}(k_3,\omega) \\ &+& \hspace{-2mm} 6 \, \omega \, \delta_{\alpha\beta} \Big[ \mc{k_1-k_2}{\mu} \mc{2 k_3+k_1+k_2}{\beta} \delta_{\gamma\mu} + \ms{k_3}{\mu} \ms{k_3 + 2 k_1}{\gamma} \, \delta_{\beta\mu} \Big] \,, \nonumber \\ f^{(3)}_{\alpha\beta\gamma\mu}(k_1,k_2,k_3)&=& 2 \, \omega \, \delta_{\beta\gamma} \Big[\big( 3\, w_{\alpha\mu}(k_1,k_2+k_3) + v_{\alpha\mu}(k_1+k_2+k_3)\big) \, \delta_{\alpha\beta} \nonumber \\ &+& \hspace{-2mm} 12 \ms{k_1}{\beta} \ms{k_2}{\alpha} \ms{k_3}{\alpha} \big( \ms{k_1+k_2+k_3}{\beta} \, \delta_{\alpha\mu}- \ms{k_1+k_2+k_3}{\alpha} \delta_{\beta\mu}\big) \Big] \,. 
\nonumber \end{eqnarray} The clover part of the qqggg-vertex is given by \begin{equation} \widetilde{V}_{\alpha\beta\gamma}^{abc}(k_1,k_2,k_3)=\frac{1}{6}\, \bigg\{\widetilde{\widetilde{V}}_{\alpha\beta\gamma}^{abc}(k_1,k_2,k_3) + {\rm total \ perm.}\bigg\} \label{Vtotclover} \end{equation} with \begin{eqnarray} {\widetilde{\widetilde{V}}}_{\alpha\beta\gamma}^{abc}(k_1,k_2,k_3)&=&-3\,i\,g^3\,a^2\,r\times\nonumber\\ && \hspace{-12mm}\bigg[T^aT^bT^c\delta_{\alpha\beta}\delta_{\alpha\gamma} \sum_\mu\,\sigma_{\alpha\mu}\bigg\{-\frac{1}{6}\mc{k_1+k_2+k_3}{\alpha}\ms{2(k_1+k_2+k_3)}{\mu} \nonumber\\ && \hspace{-10mm} + \, \mc{k_1+k_2+k_3}{\alpha}\mc{k_1+k_2+k_3}{\mu}\mc{k_3-k_1}{\mu}\ms{k_2}{\mu}\bigg\} \nonumber\\ && \hspace{-10mm} - \, \frac{1}{2}\bigg[T^aT^bT^c+T^cT^bT^a\bigg]\,\sigma_{\alpha\beta}\times \\ && \hspace{-10mm} \bigg\{2\, \delta_{\beta\gamma}\mc{k_1+k_2+k_3}{\alpha}\mc{k_1+k_2+k_3}{\beta} \mc{k_3+k_2}{\alpha}\ms{k_1}{\beta} \nonumber\\ && \hspace{-8mm} + \, \delta_{\beta\gamma}\ms{k_3+k_2}{\beta}\mc{k_1+2k_2}{\alpha} \nonumber\\ && \hspace{-8mm} + \, \delta_{\alpha\gamma}\ms{k_1+2k_2+k_3}{\alpha}\mc{k_1+k_2+k_3} {\beta}\mc{k_3-k_1}{\beta}\bigg\}\bigg] \nonumber\,. \end{eqnarray} In (\ref{Vtotclover}) the total permutation has to be performed in the gluon momenta, color and Lorentz indices. We only need this vertex for the gluon tadpole diagram of Fig.~\ref{fig2}, which simplifies the expressions. In the tadpole contribution to the vertex (\ref{eq:A11}) we denote the external gluon momentum by $q=p_2-p_1$, the color index of the gluon by $a$ and the internal momenta by $k$ and $-k$. The color indices ($b,c$) of the remaining gluons forming the tadpole are summed up using the color diagonality $\delta^{bc}$ of the gluon propagator, $k$ is the gluon momentum in the tadpole loop. 
So the stout smeared tadpole contribution is defined from the general qqggg-vertex (explicitly symmetrized in the three gluons) as \begin{eqnarray} V_{\alpha\beta\gamma}^a(p_2,p_1,k)&=& \sum_{b=1}^{N_c^2-1}\bigg\{ V_{\alpha\beta\gamma}^{a b b}(p_2,p_1,q,k, -k)+c_{SW}\,\widetilde{V}_{\alpha\beta\gamma}^{abb}(p_2,p_1,q,k,-k)\bigg\} \nonumber\\ &=&\frac{1}{6} a^2\,g^3 \, T^a\, \sum_\mu \, W_{1\mu}(p_2,p_1) V_{\alpha\beta\gamma\mu}(q,k) \\ && +\, c_{SW}\,\sum_{b=1}^{N_c^2-1}\,\widetilde{V}_{\alpha\beta\gamma}^{abb}(p_2,p_1,q,k,-k) \,.\nonumber \end{eqnarray} Using that definition we obtain for the stout smeared part \begin{eqnarray} V_{\alpha\beta\gamma\mu}(q,k)&=& \Bigg\{ \left(6 \, C_F-N_c\right) \, f^{(1)}_{\alpha\beta\gamma\mu}(q,k,-k) + \frac{N_c}{2} \Big[ f^{(2)}_{\beta\gamma\alpha\mu}(k,-k,q)-f^{(2)}_{\beta\alpha\gamma\mu}(k,q,-k) \nonumber\\ && - \, f^{(2)}_{\gamma\alpha\beta\mu}(-k,q,k)+f^{(2)}_{\gamma\beta\alpha\mu}(-k,k,q) \Big]+ 4 \, C_F \, f^{(3)}_{\alpha\beta\gamma\mu}(q,k,-k) \\ && + \, (4 \, C_F-N_c) \Big[ f^{(3)}_{\beta\gamma\alpha\mu}(k,-k,q) +f^{(3)}_{\gamma\alpha\beta\mu}(-k,q,k) \Big] \Bigg\} \,. \nonumber \end{eqnarray} {}From that expression a convenient representation is found in the form \begin{eqnarray} V_{\alpha\beta\gamma\mu}(q,k)& =& \Bigg\{ \left(6 \, C_F-N_c\right) \, V_{\alpha\mu}(q,\omega)\, V_{\beta\mu}(k,\omega)\, V_{\gamma\mu}(k,\omega) \nonumber\\ && \hspace{-3mm} +\, \frac{N_c}{2} \Big[ 2 \,\delta_{\alpha\mu} V_{\beta\mu}(k,\omega)\, V_{\gamma\mu}(k,\omega) - V_{\alpha\mu}(q,\omega)\, \big( \delta_{\beta\mu} \, V_{\gamma\mu}(k,\omega) + \delta_{\gamma\mu} \, V_{\beta\mu}(k,\omega) \, \big) \Big] \nonumber\\ && \hspace{-3mm}+\, 2 \, \omega \, \Big[3 \left( 4 \, C_F-N_c \right) \, C_{\alpha\beta\gamma\mu}(q,k) + N_c \, D_{\alpha\beta\gamma\mu}(q,k) \Big] \Bigg\} \,. 
\end{eqnarray} The structures $C_{\alpha\beta\gamma\mu}$ and $D_{\alpha\beta\gamma\mu}$, additionally contributing to $O(\omega)$, are \begin{eqnarray} C_{\alpha\beta\gamma\mu}(q,k)&=& - 4 \, \big[\delta_{\alpha\mu} \mss{p}{\gamma}-\delta_{\alpha\gamma}\ms{p}{\alpha}\ms{p}{\mu}\big]\, \big[\delta_{\beta \gamma}\mss{k}{\mu} - \delta_{\beta \mu} \ms{k}{\beta}\ms{k}{\gamma}\big] \nonumber \\ && - \, 4 \, \delta_{\gamma\mu} \ms{p}{\beta}\ms{k}{\alpha} \big[ \delta_{\alpha\beta} \ms{p}{\mu}\ms{k}{\mu } -\delta_{\alpha\mu} \ms{p}{\beta}\ms{k}{\beta} - \delta_{\beta\mu} \ms{p}{\alpha}\ms{k}{\alpha} \big] \nonumber \\ && - \, \delta_{\alpha\mu}\delta_{\beta\mu}\delta_{\gamma\mu} \, \big[ 2 s^2(p)+ 2 s^2(k) - s^2(p+k) - s^2(p-k) \big] \,, \nonumber \\ && \\ D_{\alpha\beta\gamma\mu}(q,k)&=& - 3\, \delta_{\alpha\gamma}\delta_{\beta\mu} \mc{p+k}{\beta}\mc{p+k}{\gamma} - 3\, \delta_{\alpha\beta}\delta_{\gamma\mu} \mc{p-k}{\beta}\mc{p-k}{\gamma} \nonumber \\ && + \, 4 \, \delta_{\beta\gamma}(\delta_{\alpha\beta}+\delta_{\beta\mu}) \ms{p}{\alpha}\ms{p}{\mu } + 4 \, \delta_{\alpha\mu} (\delta_{\beta\mu} +\delta_{\gamma\mu}) \ms{k}{\beta} \ms{k}{\gamma } \nonumber \\ && - \, 2\, \delta_{\alpha\mu} \delta_{\beta\mu} \delta_{\gamma\mu} \big[s^2(p)+s^2(k)\big] + 6 \, \delta_{\alpha\mu}\delta_{\beta\gamma} \, \big[2 \mcc{p}{\gamma}\mcc{k}{\alpha} -1 \big] \,. \nonumber \end{eqnarray} \renewcommand{\theequation}{B.\arabic{equation}} \setcounter{equation}{0} \section*{Appendix B: Three-point function - universal part} As discussed above, the universal part of the three-point function has the form (\ref{lambda2}) when $c_{SW}=1 + \mathcal{O}(g^2)$. Therefore, it is sufficient to give only the one-loop result for $\Lambda^{{\overline{MS}}}_{1,\mu}(p_1,p_2,q)$. 
It is cast into the following form ($q=p_2 - p_1$) \begin{eqnarray} \Lambda^{{\overline{MS}}}_{1,\mu}(p_1,p_2,q) &=&F_1(p_1,p_2)\,\gamma_\mu+ F_2(p_1,p_2)\,\ensuremath{\slashed{p}}_2\, \gamma_\mu\ensuremath{\slashed{p}}_1 \nonumber \\& & +\, [ F_3(p_1,p_2) \,p_{1,\mu}+F_4(p_1,p_2) \,p_{2,\mu}]\, \ensuremath{\slashed{p}}_1 \label{Blam1} \\& & +\, [ F_5(p_1,p_2)\,p_{2,\mu}+F_6(p_1,p_2)\,p_{1,\mu}]\, \ensuremath{\slashed{p}}_2 \,. \nonumber \end{eqnarray} Due to the symmetries $F_5(p_1,p_2)=F_3(p_2,p_1)$ and $F_6(p_1,p_2)=F_4(p_2,p_1)$ we have four independent functions $F_i(p_1,p_2)$ only. We represent them as follows: \begin{eqnarray} F_1(p_1,p_2)&=&4\,C_F\,\xi - \frac{N_c}{2} (12+2\xi-\xi^2) + 2\,\Theta \left(\,\mathcal{C}_{1}\,\,\mathcal{S}+ N_c\,p_1.p_2 + C_F \,q^2\right) \nonumber\\ & &+\left(C_F(1-\xi)+\frac{N_c}{4}(4-\xi)\right) \log\left( \frac{p_1^2p_2^2 }{\left(\mu^2\right)^2} \right) \\ & &+\, V_1(p_1,p_2) \log \left( \frac{p_1^2}{q^2}\right)+V_1(p_2,p_1) \log\left( \frac{p_2^2}{q^2}\right) \,, \nonumber \end{eqnarray} \begin{eqnarray} \hspace{-1mm}F_2(p_1,p_2)=\frac{\Theta}{8}\left(2N_c(6-\xi)+ \,\mathcal{C}_2\, \frac{p_1.p_2\, q^2}{\Delta} \right) +\frac{\,\mathcal{C}_2\,}{4 \Delta} \left[ p_1.q \log\left( \frac{p_1^2}{q^2}\right) - p_2.q \log\left( \frac{p_2^2}{q^2}\right) \right] \,, \end{eqnarray} \begin{eqnarray} F_3(p_1,p_2)&=&\,\mathcal{C}_3\,\frac{p_2^2}{2\,\Delta}+\frac{2\,N_c\,\xi}{q^2} +\frac{\Theta}{8\,\Delta} \Big[ 4 N_c\,\xi (p_1.p_2)^2 + \left(2 \,\mathcal{C}_3\, (6 \, \mathcal{S} + p_2^2)-\,\mathcal{C}_4\, p_1.q \right)\, p_2^2 \Big] \nonumber\\ & & +\, \frac{1}{q^2}\left[V_2(p_1,p_2) \log\left( \frac{p_1^2}{q^2}\right)+V_3(p_1,p_2) \log\left( \frac{p_2^2}{q^2}\right) \right] \,, \end{eqnarray} \begin{eqnarray} F_4(p_1,p_2)&=&-\,\mathcal{C}_3\, \frac{p_1.p_2}{2\,\Delta}-\frac{2\,N_c\,\xi}{q^2} + \frac{\Theta}{8\,\Delta} \Big[ 4 (8\,C_F- N_c (4-\xi))\, (p_1.p_2)^2 \nonumber\\ & & -\, (12 \,\mathcal{C}_3\, \mathcal{S} + 4 \,\mathcal{C}_6\, 
\, p_1^2 +(\,\mathcal{C}_5\,+ 8\, C_F(2+\xi) ) \, p_2^2) \, p_1.p_2 +\,\mathcal{C}_{7}\, \, p_1^2\, p_2^2 \Big] \\ & & +\, \frac{1}{q^2}\left[ V_4(p_1,p_2) \log\left( \frac{p_1^2}{q^2}\right)+V_5(p_1,p_2) \log\left( \frac{p_2^2}{q^2}\right) \right] \,. \nonumber \end{eqnarray} The functions $V_i$ multiplying the logarithms are found as follows: \begin{eqnarray} V_1(p_1,p_2) &=& C_F (3+\xi)-\frac{N_c}{4} (4-\xi) +\,\mathcal{C}_{1}\, \frac{ p_2.q \,p_1^2 }{\Delta} \,, \nonumber \\ V_2(p_1,p_2) &=& \frac{1}{4 \, \Delta} \Big[ (4 \,\mathcal{C}_3\,-\,\mathcal{C}_4\,- 4 N_c \, \xi)\, p_2^2 \, q^2 \nonumber \\ && + \, (12 \,\mathcal{C}_3\, \mathcal{S}+ 4 N_c \, \xi \, p_1.p_2 + (\,\mathcal{C}_5\, + 8 \,C_F)\, q^2) \,p_2.q \Big] \,, \nonumber\\ V_3(p_1,p_2) &=& \frac{1}{4\,\Delta\,p_1^2} \left[ -4 N_c\,\xi \, p_1.p_2 \, p_2.q \, p_1^2+\left( -12 \,\mathcal{C}_3\,\mathcal{S} \, p_1.q +\,\mathcal{C}_4\, p_1^2 \, q^2 \right) \,p_2^2 \right]\,, \\ V_4(p_1,p_2) &=& V_2(p_2,p_1) + \frac{1}{4\,\Delta} \left[ - 8 \, C_F (1+\xi)\, p_1.q + ( 4 \, C_F (1- 3 \xi) + N_c (5-\xi) \xi)\,p_1^2 \right] \,, \nonumber\\ V_5(p_1,p_2) &=& V_2(p_1,p_2) +\frac{1}{4\,\Delta}\left[ (8\,C_F +N_c (2-\xi)\xi)\, p_2.q + (1+\xi) (4 \, C_F + N_c \, \xi) p_2^2 \right] \,.\nonumber \end{eqnarray} We have introduced the kinematic functions \begin{eqnarray} \Delta &=& (p_1.p_2)^2 - p_1^2\,p_2^2\,, \quad \mathcal{S} = \frac{p_1^2\,p_2^2\,q^2}{4\,\Delta}\,, \nonumber \\ \Theta &=& \frac{4}{\pi^2\,\sqrt{\Delta}}\Bigg({\rm Sp}\left(\frac{p_2.q+\sqrt{\Delta}}{p_2^2} \right)- {\rm Sp}\left(\frac{p_2.q-\sqrt{\Delta}}{p_2^2} \right) \\ && + \, \frac{1}{2}\log\left( \frac{p_1.p_2-\sqrt{\Delta}}{p_1.p_2 +\sqrt{\Delta}} \right) \log\left( \frac{q^2}{p_2^2} \right)\Bigg)\,, \nonumber \end{eqnarray} with ${\rm Sp}(x)$ being the Spence function: $$ {\rm Sp}(x)=-\int_0^x\,dy \frac{\log (1-y)}{y}\,.
$$ The quantities $\mathcal{C}_i$ depend on the color factors and the gauge parameter and take the values \begin{eqnarray} \,\mathcal{C}_{1}\, &=& C_F\,(3+\xi)-\frac{1}{2} N_c\,(1-\xi)\,, \nonumber \\ \,\mathcal{C}_2\, &=& 8\,C_F + N_c\, (2+(3-\xi)\xi)\,, \nonumber\\ \,\mathcal{C}_3\, &=& 4\,C_F\,(1+\xi)-N_c\,(4+(1-\xi)\xi)\,, \nonumber\\ \,\mathcal{C}_4\, &=& 8\,C_F\,(2+\xi)-N_c\,(12+(4-3\xi)\xi)\,,\\ \,\mathcal{C}_5\, &=& -N_c\,(4-(2+\xi)\xi)\,, \nonumber\\ \,\mathcal{C}_6\, &=& 4\,C_F-N_c\,(1-\xi)\,, \nonumber\\ \,\mathcal{C}_{7}\,&=& 8 \, C_F - N_c\, (16 -\xi^2)\,. \nonumber \end{eqnarray} In order to express the one-loop result (\ref{Blam1}) in terms of Spence functions, logarithms and rational functions of external momenta we proceeded in two steps. First we expanded all tensor integrals over the internal momentum into scalar three-point integrals times tensor functions of the external momenta~\cite{Kizilersu:1995iz}. Then we used the recursion relations of Davydychev~\cite{Davydychev:1992xr} to reduce these scalar three-point integrals into scalar two-point integrals and $\Theta$. \renewcommand{\theequation}{C.\arabic{equation}} \setcounter{equation}{0} \section*{Appendix C: $\omega$-independence of $d_f$} We find $d_f$, the coefficient which determines the fermionic shift in $\Lambda_{\rm lat}$, by calculating the massless-quark contribution to the gluon vacuum polarization at $a^2 q^2 \ll 1$: \begin{eqnarray} \lefteqn{ \Pi_{\alpha \beta}^{a b}(q; c_{SW}, \omega) =} \nonumber \\ & -& N_f \int \frac{d^4 k}{(2 \pi)^4} {\rm Tr} \left[ V_\alpha^a(q+k,k,q; c_{SW}, \omega) S(k) V_\beta^b (k, q+k, -q; c_{SW}, \omega) S(k+q) \right] \nonumber \\ & -& N_f \int \frac{d^4 k}{(2 \pi)^4} {\rm Tr} \left[ V_{\alpha \beta}^{\{a,b\}}(k,k,q,-q; c_{SW}, \omega) S(k) \right] \,. \end{eqnarray} The quark propagator $S$ and the vertices $V$ are defined in Appendix A; the trace here is over both spin and color. The corresponding one-loop diagrams are shown in Fig. \ref{fig4}.
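As a numerical aside, the Spence function defined above is the dilogarithm ${\rm Li}_2(x)$, and can be cross-checked by direct quadrature against SciPy, whose convention is $\texttt{spence}(z)={\rm Li}_2(1-z)$, so that ${\rm Sp}(x)=\texttt{spence}(1-x)$. A minimal sketch (the function name \texttt{Sp} is ours):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import spence

def Sp(x):
    """Spence function Sp(x) = -int_0^x dy log(1-y)/y, i.e. the dilogarithm Li_2(x)."""
    # The integrand tends to 1 as y -> 0; guard that point explicitly.
    integrand = lambda y: 1.0 if y == 0.0 else -np.log(1.0 - y) / y
    val, _ = quad(integrand, 0.0, x)
    return val

# SciPy's spence(z) equals Li_2(1-z), hence Sp(x) = spence(1-x); Sp(1) = pi^2/6.
for x in (0.3, 0.5, 0.9):
    assert abs(Sp(x) - spence(1.0 - x)) < 1e-10
assert abs(spence(0.0) - np.pi ** 2 / 6) < 1e-12
print("Spence function checks passed")
```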
\begin{figure}[!htb] \begin{center} \includegraphics[scale=0.01,width=0.8\textwidth]{gluonself1.eps} \end{center} \caption{One-loop quark vacuum polarization diagrams.} \label{fig4} \end{figure} In the required limit of small $a^2 q^2$ we can expand in $q^2$ and drop any terms $\mathcal{O}(a^2q^4)$. We then get \begin{eqnarray} \Pi_{\alpha \beta}^{a b}(q; c_{SW}, \omega) &=& \Pi_{\alpha \beta}^{a b}(q; c_{SW}, 0) - 2\, \omega\, N_f\,\delta^{a b} g^2 a^2 \times \nonumber\\ & & \Bigg\{ \sum_\mu\,\left(q_\alpha q_\mu - q^2 \delta_{\alpha \mu}\right) \int \frac{d^4 k}{(2 \pi)^4} {\rm Tr} \left[ W_{1 \mu}(k,k) S(k) W_{1 \beta}(k,k) S(k) \right] \nonumber \\ & & +\, a \left(q_\alpha q_\beta - q^2 \delta_{\alpha \beta}\right) \int \frac{d^4 k}{(2 \pi)^4} {\rm Tr} \left[ W_{2 \beta}(k,k) S(k) \right] \label{Pilong} \\ & &+ \sum_\mu\,\left(q_\beta q_\mu - q^2 \delta_{\beta \mu}\right) \int \frac{d^4 k}{(2 \pi)^4} {\rm Tr} \left[ W_{1 \alpha}(k,k) S(k) W_{1 \mu }(k,k) S(k) \right] \nonumber \\ & & + \, a \left(q_\alpha q_\beta - q^2 \delta_{\alpha \beta}\right) \int \frac{d^4 k}{(2 \pi)^4} {\rm Tr} \left[ W_{2 \alpha}(k,k) S(k) \right]\Bigg\} + \mathcal{O}(a^2 q^4) \nonumber \end{eqnarray} where $\Pi_{\alpha \beta}^{a b}(q; c_{SW}, 0)$ is the vacuum polarization tensor with no smearing, $W_1$ and $W_2$ are the Wilson quark gluon vertices defined in (\ref{eq:A3}), and the trace is now only over the spin index. All $\omega^2$ terms have dropped out because they first appear at $\mathcal{O}(a^2q^4)$. Calculating $\Pi_{\alpha \beta}^{a b}(q; c_{SW}, 0)$ in one loop for $c_{SW}=1$ leads to the value of $d_f$ given in Eq.~(\ref{df}). From power counting we would at first expect the integrals $\propto \,\omega$ in (\ref{Pilong}) to have values proportional to $1/a^2$ or $1/a^3$, and to make a finite contribution to $d_f$. 
However we show now that there is a perfect cancellation between the continuum-like diagram Fig.~\ref{fig4}(a) (the integrals involving $W_1$) and the tadpole contribution Fig.~\ref{fig4}(b) (those with $W_2$). To do this we use the identities \begin{eqnarray} \frac{ \partial}{\partial k_\mu} S(k) &=& - \, S(k) W_{1 \mu}(k,k) S(k)\,, \\ \frac{ \partial}{\partial k_\mu} W_{1 \nu}(k,k) &=& - \, a \,\delta_{\mu \nu} W_{2 \mu}(k,k) \end{eqnarray} which follow immediately from the definitions. Eq.~(\ref{Pilong}) becomes \begin{eqnarray} \Pi_{\alpha \beta}^{a b}(q; c_{SW}, \omega) &=& \Pi_{\alpha \beta}^{a b}(q; c_{SW}, 0) + 2 \,\omega\, N_f\, \delta^{a b} g^2 a^2 \times\nonumber\\ & & \bigg\{ \sum_\mu\,\left(q_\alpha q_\mu - q^2 \delta_{\alpha \mu}\right) \int \frac{d^4 k}{(2 \pi)^4}\, \frac{\partial}{\partial k_\beta} {\rm Tr} \left[ W_{1 \mu}(k,k) S(k) \right] \label{Pishort} \\ &&+\, \sum_\mu\,\left(q_\beta q_\mu - q^2 \delta_{\beta \mu}\right) \int \frac{d^4 k}{(2 \pi)^4}\, \frac{\partial}{\partial k_\alpha} {\rm Tr} \left[ W_{1 \mu}(k,k) S(k) \right]\bigg\} + \mathcal{O}(a^2 q^4) \nonumber \,. \end{eqnarray} The integrals are now zero because $W_1$ and $S$ are periodic, \begin{equation} \int_{-\pi/a}^{\pi/a} d k_\alpha \frac{\partial}{\partial k_\alpha} {\rm Tr} \left[ W_{1 \mu}(k,k) S(k) \right] = {\rm Tr} \left[ W_{1 \mu}(k,k) S(k) \right] \Big|_{k_\alpha=-\pi/a}^{k_\alpha=\pi/a} = 0 \,. \end{equation} Thus we have proved that the vacuum polarization is independent of smearing the one-link part of the fermion action, \begin{equation} \Pi_{\alpha \beta}^{a b}(q; c_{SW}, \omega) = \Pi_{\alpha \beta}^{a b}(q; c_{SW}, 0) + \mathcal{O}(a^2 q^4) \end{equation} which implies that $d_f$ depends on $r$ and $c_{SW}$, but not on $\omega$.
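The mechanism behind this cancellation---the Brillouin-zone integral of a total derivative of a periodic function vanishes---is easy to illustrate numerically. The integrand below is a toy periodic stand-in for ${\rm Tr}\left[ W_{1\mu}(k,k) S(k)\right]$, chosen for illustration only:

```python
import numpy as np

# Toy smooth, 2*pi-periodic stand-in for Tr[W_1(k,k) S(k)] on the zone [-pi, pi).
n = 256
k = -np.pi + 2 * np.pi * np.arange(n) / n
f = np.cos(k) / (np.sin(k) ** 2 + 0.5)

# Central difference with periodic wrap-around; the Riemann sum of df/dk
# telescopes exactly to zero, mirroring the boundary-term argument above.
dk = 2 * np.pi / n
df = (np.roll(f, -1) - np.roll(f, 1)) / (2 * dk)
assert abs(np.sum(df) * dk) < 1e-12
print("integral of the total derivative over the periodic zone vanishes")
```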
\section{Introduction} In recent years there has been a surge of interest in applying tensor network methods to calculate the properties of lattice spin and gauge models \cite{PhysRevD.88.056005,Kadoh2019,PhysRevD.99.114507,PhysRevD.99.074502,PhysRevD.98.094511,qpotts,PhysRevD.100.054510,Wang_2014,PhysRevE.89.013308}. In low dimensions these formulations can avoid the usual sign problems associated with negative or complex probability weights that plague Monte Carlo approaches, and can yield very efficient computational algorithms \cite{PhysRevD.89.016008,PhysRevLett.115.180405,PhysRevLett.99.120601,PhysRevB.86.045139}. For compact fields the general strategy has been to employ character expansions for all Boltzmann factors occurring in the partition function and subsequently to integrate out the original fields, yielding an equivalent formulation in terms of integer---or half-integer---valued fields. Typically local tensors can be built from these discrete variables and the partition function recast as the full contraction of all tensor indices. However, writing local tensors for models with relativistic lattice fermions is more complicated~\cite{10.1093/ptep/ptx080,PhysRevD.90.014508,10.1093/ptep/ptv022,Kadoh2018}. One reason is tied to the Grassmann nature of the fermions, which can induce additional, non-local signs that may be hard to reproduce from local tensor contractions. Nevertheless, Gattringer et al.\ have shown in Ref.~\cite{GATTRINGER2015732} that a suitable dual formulation, free of these sign problems, can be derived for the massless Schwinger model. Using this dual representation they have formulated a general Monte Carlo algorithm that can be used to simulate the model even in the presence of non-zero chemical potential and topological terms \cite{GOSCHL201763}. Other approaches to the Schwinger model have appeared in recent years as well.
One approach has been the use of other numerical renormalization group methods like the density matrix renormalization group (DMRG) with matrix product states or matrix product operators (MPS or MPO). The massive Schwinger model with staggered fermions was investigated in Ref.~\cite{PhysRevD.66.013002} using the DMRG. In Ref.~\cite{Banuls2013} the mass spectrum of the Schwinger model was calculated at zero and finite mass, and in Ref.~\cite{PhysRevD.94.085018} the authors studied the Schwinger model at finite temperature using the DMRG with MPO. The effect of truncation on the number of representations retained in the electric field basis for the Schwinger model was investigated in Ref.~\cite{PhysRevD.95.094509}. In Ref.~\cite{PhysRevD.100.036009} the confinement properties of the Schwinger model in the presence of a topological term were studied, and in Ref.~\cite{lena_topo} the authors considered the effects of a topological term on the vacuum structure of the model, again using the DMRG. Similarly, Ref.~\cite{PhysRevD.98.074503} looked at a $\mathbb{Z}_{n}$ formulation of the Schwinger model using the DMRG, finding that at large $n$ one recovers results similar to those of the original continuous $U(1)$ theory. Out-of-equilibrium properties of that same model were studied in Ref.~\cite{giuseppe_znqed}. In addition, proposals for quantum simulation and computation of the Schwinger model were investigated in Refs.~\cite{PhysRevLett.112.201601,Muschik_2017,PhysRevA.98.032331}. In Ref.~\cite{PhysRevLett.112.201601} the lattice Schwinger model was considered for quantum simulation using cold atoms in an optical lattice. In Ref.~\cite{Muschik_2017} the authors considered general $U(1)$ lattice gauge theories and integrated out the gauge degrees of freedom, leaving a model containing only matter fields with non-local interactions, to be implemented using trapped ions.
In Ref.~\cite{PhysRevA.98.032331}, the authors considered the joint computation of the lattice Schwinger model using classical and quantum computers. In this paper we show that the dual world-line formulation from Ref.~\cite{GATTRINGER2015732} can be replicated by contraction of a suitable tensor network. It should be noted that a tensor formulation of the model allows for the definition of a transfer matrix, quantum Hamiltonian, and local Hilbert space. Rather than following a Monte Carlo strategy, we instead follow the philosophy of the tensor renormalization group and coarse grain this tensor network. From this we calculate the partition function and free energy. We show that the results agree well with both Ref.~\cite{GOSCHL201763} and conventional hybrid Monte Carlo simulations where the latter can be performed. We start by reviewing the construction of the dual representation, show how the resulting dimer/loop representation can be obtained by contracting a suitable tensor network, and derive the form of the fundamental tensor that is needed. We then describe the results of a coarsening of this tensor network using the HOTRG algorithm, calculate the free energy and its derivatives, and compare the results to Monte Carlo simulations. We then go on to add a topological term to the action with coupling $\theta$. We conclude with a summary of the advantages and disadvantages of the method in this context.
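Before turning to the Schwinger model, the basic statement that a partition function can be recast as a full tensor contraction is easily illustrated on a standard example. The sketch below builds the well-known bond tensor of the 2D Ising model on a $2\times 2$ periodic lattice (an illustrative stand-in, not the Schwinger tensors constructed below) and checks the contraction against the brute-force spin sum:

```python
import itertools
import numpy as np

beta = 0.7
# Bond Boltzmann matrix K[s,s'] = exp(beta*s*s'); both eigenvalues are positive
# for beta > 0, so K = W @ W.T with real W.
K = np.array([[np.exp(beta), np.exp(-beta)],
              [np.exp(-beta), np.exp(beta)]])
lam, P = np.linalg.eigh(K)
W = P @ np.diag(np.sqrt(lam))

# Rank-4 site tensor T[l,r,u,d] = sum_s W[s,l] W[s,r] W[s,u] W[s,d].
T = np.einsum('sl,sr,su,sd->lrud', W, W, W, W)

# Full contraction on a 2x2 torus: one index letter per bond.
Z_tensor = np.einsum('baeg,abfh,dcge,cdhf->', T, T, T, T)

# Brute-force spin sum; on the 2x2 torus each nearest-neighbour pair is
# joined by two bonds, hence the factors of 2 in the energy.
Z_spin = 0.0
for s00, s10, s01, s11 in itertools.product((-1, 1), repeat=4):
    E = 2 * (s00 * s10 + s01 * s11 + s00 * s01 + s10 * s11)
    Z_spin += np.exp(beta * E)

assert abs(Z_tensor - Z_spin) < 1e-8 * Z_spin
print("Z from tensor contraction:", Z_tensor, " Z from spin sum:", Z_spin)
```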
\section{Massless Schwinger Model and its dual representation} We begin with the one-flavor staggered action for the massless Schwinger model on an $N_{s} \times N_{\tau}$ lattice with action \begin{equation} S = S_{F} + S_{g} \end{equation} with \begin{align} \nonumber S_{F} = \frac{1}{2} \sum_{x, \mu} \eta_{\mu}(x) &[ \bar{\psi}(x) U_{\mu}(x) \psi(x+ \mu) \\ &- \bar{\psi}(x+\mu)U^{\dagger}_{\mu}(x) \psi(x) ] \end{align} and \begin{equation} S_{g} = - \beta \sum_{x} {\rm Re}\, [ U_P(x) ], \end{equation} where the Abelian gauge field $U_\mu(x)=e^{i A_\mu(x)}$ lives on the link between lattice sites $x$ and $x+\mu$ and the fermions $\psi(x)$ and $\bar{\psi}(x)$ live at the sites. $U_P$ is the usual Wilson plaquette operator $U_P(x) = \sum_{\mu < \nu }U_{\mu}(x) U_{\nu}(x+\mu) U^{\dagger}_{\mu} (x + \nu) U^{\dagger}_{\nu}(x) $. The partition function for this model is then given by \begin{align} \nonumber Z &= \int D[U] D[\bar{\psi}]D[\psi] \; e^{-S} \\ &=\int D[U] e^{\beta \sum_x {\rm Re} [U_P(x)]}Z_F(U) \end{align} with $ \int D[U] = \prod_{x} \int_{-\pi}^{\pi} dA_{\mu}(x) / 2\pi $, $ \int D[\bar{\psi}] D[\psi] = \prod_{x} \int d\bar{\psi}(x) d\psi(x) $, and where $Z_{F}$ represents the part of the partition function that depends on the fermion fields. Following Ref.~\cite{GATTRINGER2015732}, and using the same notation for clarity, we first integrate out the fermions and generate an effective action depending only on the gauge fields. As a first step we redefine the link variables such that the staggered fermion phases $ \eta_{\mu}(x) $ can be absorbed into modified link variables $U_\mu(x)\to \eta_\mu(x)U_\mu(x)$. Under this transformation the gauge action picks up an overall negative sign but the measure is invariant.
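The sign flip of the gauge action under this redefinition follows because the staggered phases multiply to $-1$ around every plaquette. A sketch using the standard two-dimensional convention $\eta_1(x)=1$, $\eta_2(x)=(-1)^{x_1}$ (the specific convention is an assumption of this illustration):

```python
# Standard 2D staggered phases (conventional choice): eta_1 = 1, eta_2(x) = (-1)^{x_1}.
def eta(mu, x1, x2):
    return 1 if mu == 1 else (-1) ** x1

L = 6  # even extent, so the phases are periodic

# Product of eta factors around the plaquette based at (x1, x2):
# eta_1(x) * eta_2(x + 1hat) * eta_1(x + 2hat) * eta_2(x) = -1 for every x.
for x1 in range(L):
    for x2 in range(L):
        p = (eta(1, x1, x2) * eta(2, (x1 + 1) % L, x2)
             * eta(1, x1, (x2 + 1) % L) * eta(2, x1, x2))
        assert p == -1
print("every plaquette phase product equals -1")
```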
The Boltzmann factor associated with each bilinear fermion term can be written as the product of forward and backward hopping terms, yielding a partition function \begin{align} \nonumber Z_F = &\int D[\bar{\psi}] D[\psi] \times \\ \nonumber &\prod_{x,\mu} \sum_{k=0}^{1} \left( \frac{1}{2}\bar{\psi}(x)U_\mu(x)\psi(x+\mu) \right)^{k} \times \\ &\sum_{\bar{k}=0}^{1} \left( \frac{1}{2}\bar{\psi}(x+\mu)U^\dagger_\mu(x)\psi(x) \right)^{\bar{k}}. \end{align} Notice that higher-order terms in the expansion of the Boltzmann factors vanish because of the Grassmann nature of the fermions. Any non-zero contribution to $Z_F$ must saturate the Grassmann integration at every site: exactly one forward and one backward hopping term must be associated with each site. This gives rise to a simple collection of possibilities. On the one hand, the forward and backward hop may lie along the same link; this saturates the integration and is referred to as a dimer. On the other hand, the forward and backward hops at a site may occupy two different links, indicating the passage of fermionic current through the site; this again saturates the integration measure there. Furthermore, because of gauge invariance, any non-dimer contribution to $Z_F$ must correspond to a closed loop. Fig.~\ref{fig:vertex_tensor} shows the allowed site contributions. A bold link indicates the presence of a $\frac{1}{2}U$ or a $-\frac{1}{2}U^\dagger$ factor along that link. The bold links are oriented: each carries an arrow, and the arrow direction is conserved through a site.
\begin{figure*} \begin{tikzpicture} \draw[step=1.0,black,ultra thick,dashed] (0.0,-1.0) -- (0.0,0.0) -- (1.0,0.0); \draw[->,black,ultra thick] (0.0,1.0) -- (0.0 ,0.0) -- (-1.0,0.0) ; \end{tikzpicture} \hfill \begin{tikzpicture} \draw[step=1.0,black,ultra thick,dashed] (-1.0,0.0) -- (0.0,0.0) -- (0.0,-1.0); \draw[->,black,ultra thick] (0.0,1.0) -- (0.0 ,0.0) -- (1.0,0.0) ; \end{tikzpicture} \hfill \begin{tikzpicture} \draw[step=1.0,black,ultra thick,dashed] (-1.0,0.0) -- (0.0,0.0) -- (1.0,0.0); \draw[->,black,ultra thick] (0.0,1.0) -- (0.0 ,0.0) -- (0.0,-1.0) ; \end{tikzpicture} \hfill \begin{tikzpicture} \draw[step=1.0,black,ultra thick,dashed] (-1.0,0.0) -- (0.0,0.0) -- (1.0,0.0) ; \draw[step=1.0,black,ultra thick,dashed] (0.0,-1.0) -- (0.0,0.0) ; \draw[->,black,ultra thick] (-0.1,1.0) -- (-0.1 ,0.0) ; \draw[->,black,ultra thick] (0.1,0.0) -- (0.1,1.0) ; \end{tikzpicture} \vspace{0.5cm} \begin{tikzpicture} \draw[step=1.0,black,ultra thick,dashed] (0.0,1.0) -- (0.0,0.0) -- (1.0,0.0); \draw[->,black,ultra thick] (0.0,-1.0) -- (0.0 ,0.0) -- (-1.0,0.0) ; \end{tikzpicture} \hfill \begin{tikzpicture} \draw[step=1.0,black,ultra thick,dashed] (-1.0,0.0) -- (0.0,0.0) -- (0.0,1.0); \draw[->,black,ultra thick] (0.0,-1.0) -- (0.0 ,0.0) -- (1.0,0.0) ; \end{tikzpicture} \hfill \begin{tikzpicture} \draw[step=1.0,black,ultra thick,dashed] (-1.0,0.0) -- (0.0,0.0) -- (1.0,0.0); \draw[->,black,ultra thick] (0.0,-1.0) -- (0.0 ,0.0) -- (0.0,1.0) ; \end{tikzpicture} \hfill \begin{tikzpicture} \draw[step=1.0,black,ultra thick,dashed] (-1.0,0.0) -- (0.0,0.0) -- (1.0,0.0) ; \draw[step=1.0,black,ultra thick,dashed] (0.0,0.0) -- (0.0,1.0) ; \draw[->,black,ultra thick] (-0.1,0.0) -- (-0.1 ,-1.0) ; \draw[->,black,ultra thick] (0.1,-1.0) -- (0.1,0.0) ; \end{tikzpicture} \vspace{0.5cm} \begin{tikzpicture} \draw[step=1.0,black,ultra thick,dashed] (0.0,-1.0) -- (0.0,0.0) -- (1.0,0.0); \draw[->,black,ultra thick] (-1.0,0.0) -- (0.0 ,0.0) -- (0.0,1.0) ; \end{tikzpicture} \hfill \begin{tikzpicture} 
\draw[step=1.0,black,ultra thick,dashed] (0.0,1.0) -- (0.0,0.0) -- (0.0,-1.0); \draw[->,black,ultra thick] (-1.0,0.0) -- (0.0 ,0.0) -- (1.0,0.0) ; \end{tikzpicture} \hfill \begin{tikzpicture} \draw[step=1.0,black,ultra thick,dashed] (0.0,1.0) -- (0.0,0.0) -- (1.0,0.0); \draw[->,black,ultra thick] (-1.0,0.0) -- (0.0 ,0.0) -- (0.0,-1.0) ; \end{tikzpicture} \hfill \begin{tikzpicture} \draw[step=1.0,black,ultra thick,dashed] (0.0,1.0) -- (0.0,0.0) -- (1.0,0.0) ; \draw[step=1.0,black,ultra thick,dashed] (0.0,-1.0) -- (0.0,0.0) ; \draw[->,black,ultra thick] (-1.0,0.1) -- (0.0,0.1) ; \draw[->,black,ultra thick] (0.0,-0.1) -- (-1.0,-0.1) ; \end{tikzpicture} \vspace{0.5cm} \begin{tikzpicture} \draw[step=1.0,black,ultra thick,dashed] (0.0,-1.0) -- (0.0,0.0) -- (-1.0,0.0); \draw[->,black,ultra thick] (1.0,0.0) -- (0.0 ,0.0) -- (0.0,1.0) ; \end{tikzpicture} \hfill \begin{tikzpicture} \draw[step=1.0,black,ultra thick,dashed] (0.0,1.0) -- (0.0,0.0) -- (0.0,-1.0); \draw[->,black,ultra thick] (1.0,0.0) -- (0.0 ,0.0) -- (-1.0,0.0) ; \end{tikzpicture} \hfill \begin{tikzpicture} \draw[step=1.0,black,ultra thick,dashed] (-1.0,0.0) -- (0.0,0.0) -- (0.0,1.0); \draw[->,black,ultra thick] (1.0,0.0) -- (0.0 ,0.0) -- (0.0,-1.0) ; \end{tikzpicture} \hfill \begin{tikzpicture} \draw[step=1.0,black,ultra thick,dashed] (0.0,1.0) -- (0.0,0.0) -- (0.0,-1.0) ; \draw[step=1.0,black,ultra thick,dashed] (-1.0,0.0) -- (0.0,0.0) ; \draw[->,black,ultra thick] (1.0,-0.1) -- (0.0,-0.1) ; \draw[->,black,ultra thick] (0.0,0.1) -- (1.0,0.1) ; \end{tikzpicture} \caption{Sixteen non-zero possibilities for $ \psi, \bar{\psi} $ integration at a site. 
These 16 possibilities end up being exactly the nonzero elements of the fermion tensor.} \label{fig:vertex_tensor} \end{figure*} For a loop $\ell$ with length $L(\ell)$ one finds a contribution with absolute value \begin{equation} \left(\frac{1}{2}\right)^{L(\ell)}\prod_{x,\mu\in \ell}\left(U_\mu \right)^{k_\mu(x)} \end{equation} where $ k_{\mu}(x) = \pm 1 $ distinguishes between $U_\mu(x)$ and $U^\dagger_\mu(x)$. In addition, each loop carries a $Z_2$ phase, depending on the length of the loop and its winding along the temporal direction, given by \begin{equation} - (-1)^{\frac{1}{2}L(\ell)}(-1)^{W(\ell)}. \end{equation} Here, the overall negative sign is the usual one for closed fermion loops, while the second factor keeps track of the number of forward hops, which for a closed loop is exactly half its total length. Finally, the factor $ (-1)^{W(\ell)}$ is determined by the number of windings of the loop along the temporal direction, assuming anti-periodic boundary conditions for the fermions. Using dimers and loops as basic constituents for non-zero contributions to the fermionic partition function, we can write \begin{align} \label{eq:z_f} \nonumber Z_F = \left( \frac{1}{2} \right)^{V} &\sum_{\ell,d} (-1) ^{N_L + \frac{1}{2} \sum_{\ell} L(\ell) + \sum_{\ell} W(\ell)} \times \\ &\prod_{\ell} \left[ \prod_{x,\mu \in \ell} U^{k_{\mu}(x)}_{\mu}(x) \right]. \end{align} To proceed further we will need to construct this loop representation from the contraction of more basic objects located at the sites, a task we take up in the next section. \subsection{Tensor Formulation of the Fermionic Partition Function} We need to construct a local tensor which under contraction along lattice links yields $Z_F$. Let us ignore the overall sign for now and just deal with the magnitude. We allow two types of indices per link to capture separately the incoming and outgoing fermion lines, making the fermion site tensor a rank-eight object.
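The $Z_2$ loop phase written above is simple enough to encode directly; for instance, the elementary plaquette loop ($L=4$, $W=0$) acquires the familiar $-1$ of a closed fermion loop (the function name below is ours):

```python
def loop_sign(length, winding):
    """Z2 phase -(-1)^{L/2} (-1)^{W} carried by a closed loop of length L
    with temporal winding number W, as given in the text."""
    assert length % 2 == 0, "closed loops on a bipartite lattice have even length"
    return -((-1) ** (length // 2)) * ((-1) ** winding)

# Elementary plaquette: length 4, no winding -> the usual fermion-loop minus sign.
assert loop_sign(4, 0) == -1
# A length-6 loop without winding comes with a plus sign instead.
assert loop_sign(6, 0) == 1
print("loop sign checks passed")
```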
Since each site is either the endpoint of a dimer or has fermionic current flowing in and out of it, the site is modeled by the tensor structure (we leave off the gauge-link factors for now) \begin{equation} T^{x}_{k_1\bar{k_1}k_2\bar{k_2}k_3\bar{k_3}k_4\bar{k_4}} = \left\{ \begin{array}{ll} 1 & \text{if exactly one $k_i$ and exactly one $\bar{k}_j$} \\ & \text{are one and all others are zero,} \\ 0 & \text{otherwise,} \end{array} \right. \end{equation} where each $k_i, \bar{k}_i \in \{0,1\}$. A graphical representation of this tensor is shown in Fig.~\ref{fig:three_tensors}~(a). \begin{figure*} \begin{center} \begin{tikzpicture}[scale=0.40] \draw node[scale=1.3] at (-5., 5.) {(a)}; \draw node[scale=1.3] at (5.8,0) { $ \}$}; \draw node[scale=1.3] at (7.9,0) { $ (k , \bar{k})$}; \draw node[scale=1.3] at (10.,0) { $ \{$}; \draw[red,ultra thick] (-5.,0.5) -- (5.0 ,0.5) ; \draw[red,ultra thick] (-5.0,-0.5) -- (5. ,-0.5) ; \draw[red,ultra thick] (-0.5, 5.) -- (-0.5 ,-5.0) ; \draw[red,ultra thick] (0.5,-5.0) -- (0.5 , 5.) ; \draw node[scale=1.2] at (11., 5.) {(b)}; \draw node[scale=1.3] at (15.0,6.0) { $m$}; \draw node[scale=1.3] at (15.0,-6.0) { $m^{\prime}$}; \draw[green,cap=round,line width = 0.8mm] (15.0,-5.0) -- (15.0,5.0); \draw[red,cap=round,line width = 0.8mm] (15.0,0.5) -- (19.0 ,0.5) ; \draw[red,cap=round,line width = 0.8mm] (19.0,-0.5) -- (15.0 ,-0.5) ; \draw[red,cap=round,line width = 0.8mm] (11,0.5) -- (15.0 ,0.5) ; \draw[red,cap=round,line width = 0.8mm] (15.0,-0.5) -- (11.0 ,-0.5) ; \draw node[scale=1.3] at (25., 5.) {(c)}; \draw node[scale=1.3] at (30.0,-6.0) { $m_{1}$}; \draw node[scale=1.3] at (30.0,6.0) { $m_{2}$}; \draw node[scale=1.3] at (24.0,.0) { $m_{3}$}; \draw node[scale=1.3] at (36.0,0.0) { $m_{4}$}; \draw[cap=round,line width = 0.8mm,brown] (30.0,-5.0) -- (30.0,5.0); \draw[cap=round,line width = 0.8mm,brown] (25.0,0.0) -- (35.0,0.0) ; \end{tikzpicture} \caption{(a) Fermion tensor associated with the sites of the lattice.
The two lines in each direction can each be unoccupied or occupied with a forward or backward current. Each pair can then be in four states: unoccupied, outgoing fermionic current, incoming fermionic current, or both outgoing and incoming current, \emph{i.e.}, a dimer. (b) The constraint tensor associated with the links. This tensor enforces that the difference between the $m$ electric field numbers appropriately matches, and compensates, the fermionic current across the link. (c) The gauge field tensor associated with each plaquette. This tensor has four indices, but the only non-vanishing elements are when all indices take the same value, \emph{i.e.}, it is diagonal in all four indices. Each nonzero element is associated with weight factors given by modified Bessel functions.} \label{fig:three_tensors} \end{center} \end{figure*} By repeatedly contracting this site tensor with copies of itself over the lattice, we generate the full set of closed loops and dimers for the model at zero gauge coupling, \emph{excluding} the overall factor of minus one for each closed fermion loop. The absolute value of the partition function at zero gauge coupling is then, \begin{equation} Z_F^{\beta=\infty}= \sum_{\{k,\bar{k}\}} \prod_{x} T^{x}_{k_1\bar{k_1}k_2\bar{k_2}k_3\bar{k_3}k_4\bar{k_4}}. \end{equation} Here, $ \{k,\bar{k}\} $ denotes the set of $ k,\bar{k} $ values for the entire lattice. Said another way, the 16 possible vertex configurations for fermion hopping in Fig.~\ref{fig:vertex_tensor} are captured as nonzero tensor elements in the $T$ tensor. \subsection{Integrating out the gauge fields} The fermion partition function in the previous section does not yet include any interaction with the gauge fields. To proceed further we will employ a character expansion of the Boltzmann factors associated with the gauge action, so that each plaquette in the lattice carries an integer variable.
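The character expansion employed here can be checked numerically by truncating the Bessel sum and using the parity identity $I_m(-\beta)=(-1)^m I_m(\beta)$; a minimal sketch:

```python
import numpy as np
from scipy.special import iv  # modified Bessel function I_m

beta, M = 1.5, 30
theta = np.linspace(-np.pi, np.pi, 7)

# e^{-beta cos(theta)} = sum_m I_m(-beta) e^{i m theta}, with I_m(-beta) = (-1)^m I_m(beta).
lhs = np.exp(-beta * np.cos(theta))
rhs = sum((-1) ** m * iv(m, beta) * np.exp(1j * m * theta)
          for m in range(-M, M + 1))
assert np.allclose(lhs, rhs.real, atol=1e-12)
assert np.allclose(rhs.imag, 0.0, atol=1e-12)
print("character expansion verified to truncation M =", M)
```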
Integration of the link gauge field in the background of a particular set of fermion loops restricts the plaquette variables to change by plus or minus one on crossing any fermion line. In this section, we will describe this in detail and, along with the tensor from the previous section, construct a tensor network that when fully contracted reproduces the full partition function for the massless Schwinger model. To integrate out the gauge links we start by performing a character expansion on the Boltzmann factor corresponding to the pure gauge plaquette action \begin{align} \nonumber &e^{-\beta \cos{\left[A_{\mu}(x) + A_{\nu}(x+\mu) - A_{\mu}(x+\nu) - A_{\nu}(x)\right]} } = \\ &\sum_{m=-\infty}^{m=\infty} I_{m}(-\beta) e^{i m \left[A_{\mu}(x) + A_{\nu}(x+\mu) - A_{\mu}(x+\nu) - A_{\nu}(x)\right] } . \end{align} Each plaquette $p$ is now labeled by an integer $m_p$. Note that $I_{m}(-\beta) = (-1)^{m} I_{m}(\beta)$. Furthermore, each link $\ell$ is shared by two plaquettes $p$ and $p^\prime$, which supply factors of $e^{i m_p A_\ell}$ and $e^{-i m_{p^\prime} A_\ell}$, respectively. In addition, the link carries a factor of $e^{ik_\ell A_\ell}$ or $e^{-i\bar{k}_\ell A_\ell}$ coming from $Z_F$. Thus, in total, links carry two $m$ indices inherited from their neighboring plaquettes together with a $k$ and a $\bar{k}$ index associated with the fermionic hopping terms. The integral over the link field then gives \begin{equation} \label{eq:cnst} \int_{-\pi}^{\pi} \frac{dA_\ell}{2\pi} e^{i (m_p - m_{p^\prime} + k_\ell - \bar{k_\ell} ) A_\ell } = \delta_{ m_p - m_{p^\prime} + k_\ell -\bar{k_\ell}, 0} .
\end{equation} This allows us to write the partition function as \begin{align} \nonumber Z &= \sum_{ \{m_p\} } \sum_{\{k_\ell,\bar{k_\ell}\}} \prod_\ell \delta_{ m_p - m_{p^\prime} + k_\ell -\bar{k_\ell},0} \prod_{p}I_{m_p}(\beta) \times \\ &\prod_{x} T^{x}_{k_1\bar{k_1}k_2\bar{k_2}k_3\bar{k_3}k_4\bar{k_4}} \times (-1)^{N_{L} + N_{P} + \frac{1}{2}\sum_{l} L(l)} \label{ZZ} \end{align} where $\{m_p\}$ denotes the set of plaquette integers over the entire lattice, $\{k_\ell,\bar{k}_\ell\}$ represents the $k$ indices over the links, and $N_{P} = \sum_{p} m_{p}$. At this point we have included all the minus signs for completeness. For periodic boundary conditions, the sum of winding numbers must always be zero, since one is restricted to the total charge-0 sector of the theory. Note that in this situation the overall $\pm 1$ factor is always positive~\cite{GATTRINGER2015732}. Now, associated with each link are $m$ fields and $k$ fields, and a constraint between them. Associated with each plaquette is a single $m$ field. This lets us define a link tensor and a plaquette tensor. Link tensors have fermionic indices connecting to the fermion tensors (the $T$ tensors) living on each site, and gauge-field indices connecting to the plaquette tensors (on each plaquette). We define this link tensor, $A$, as \begin{equation} A_{m_i m_j k_a \bar{k_a} k_b \bar{k_b}} \equiv \delta_{ m_i - m_j + k_a -\bar{k_a},0}\delta_{k_a,k_b} \delta_{\bar{k_a} ,\bar{k_b}}. \end{equation} Fermion-like indices on link tensors are purely diagonal, as seen from the $\delta$-function constraints in the definition. A diagram showing the relative position of the fermion and plaquette indices is shown in Fig.~\ref{fig:three_tensors}~(b). Since there is only a single $m$ associated with each plaquette, the plaquette tensor must depend only on that single $m$.
A plaquette tensor, $B$, can be defined as, \begin{equation} B_{m_1m_2m_3m_4} = \left\{ \begin{array}{ll} I_{m}(\beta) & \text{if } m_1=m_2=m_3=m_4 \\ & \quad = m \\ 0 & {\rm otherwise .} \end{array} \right. \end{equation} A graphical representation for the $ B $ tensor associated with plaquettes is shown in Fig.~\ref{fig:three_tensors}~(c). These definitions of the $ A $ and $ B $ tensors allow us to write the partition function as follows, \begin{align} \nonumber Z = \sum_{\{k,\bar{k}\}} \sum_{ \{m_p\} } &\left( \prod_{p} B_{m_im_jm_km_l} \right)\left( \prod_{l} A_{m_i m_j k_a \bar{k_a} k_b \bar{k_b}} \right) \times \\ &\left( \prod_{x} T_{k_a\bar{k_a}k_b\bar{k_b}k_c\bar{k_c}k_d\bar{k_d}} \right). \end{align} This contraction over three unique tensor types can be represented as the tensor network shown in Fig.~\ref{fig:tn}. Since the fermionic $k$ indices always come in $k$, $\bar{k}$ pairs, we can form a product state of those two indices to reduce the complexity of the notation, \begin{align} \nonumber T \rightarrow T' &= T_{(k_a \otimes \bar{k_a})(k_b \otimes \bar{k_b})(k_c \otimes \bar{k_c})(k_d \otimes \bar{k_d})} \\ &= T_{K_a K_b K_c K_d}. \end{align} \begin{align} A \rightarrow A' = A_{m_i m_j (k_a \otimes \bar{k_a})( k_b \otimes \bar{k_b})} = A_{m_i m_j K_a K_b} \end{align} The new enlarged $K$ indices take values from 0 to 3, enumerating the four possible states each link can have: unoccupied, incoming, outgoing, and dimer. The $A$ tensors are still diagonal in the new $K$ indices. 
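The structure of the $T$, $A$, and $B$ tensors above can be made concrete in a short NumPy sketch. This is illustrative only: the gauge index is truncated to $m \in \{-1,0,1\}$ and simple positive placeholder weights stand in for the Bessel factors $I_m(\beta)$ on $B$:

```python
import numpy as np

# Fermion site tensor T with indices (k1, kbar1, ..., k4, kbar4), each 0 or 1:
# nonzero exactly when one k_i = 1 and one kbar_j = 1 (a loop segment or a dimer)
T8 = np.zeros((2,) * 8)
for idx in np.ndindex(*T8.shape):
    if sum(idx[0::2]) == 1 and sum(idx[1::2]) == 1:
        T8[idx] = 1.0

# Fuse each (k, kbar) pair into one K index, K = 2*k + kbar in {0, 1, 2, 3}
T = T8.reshape(4, 4, 4, 4)

# Link tensor A with the gauge index truncated to m in {-1, 0, 1}
ms = [-1, 0, 1]
A = np.zeros((3, 3, 4, 4))
for i, mi in enumerate(ms):
    for j, mj in enumerate(ms):
        for K in range(4):
            if mi - mj + K // 2 - K % 2 == 0:   # delta constraint on the link
                A[i, j, K, K] = 1.0             # diagonal in the fused K index

# Plaquette tensor B: diagonal in all four indices (placeholder weights)
w = {-1: 0.3, 0: 1.0, 1: 0.3}
B = np.zeros((3, 3, 3, 3))
for i, m in enumerate(ms):
    B[i, i, i, i] = w[m]

assert np.count_nonzero(T) == 16   # the 16 hopping/dimer vertex configurations
```

The count of 16 nonzero elements of $T$ matches the 16 vertex configurations discussed above, and $A$ is diagonal in the fused $K$ indices by construction.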
\begin{figure} \centering \begin{tikzpicture}[scale=0.8] \draw[step=5cm,black,cap=round,line width=0.8mm,dashed] (0.0,0.0) grid (10.0,10.0); \draw[cap=round,line width = 0.8mm,brown] (1.0,2.5) -- (4.0,2.5); \draw[cap=round,line width = 0.8mm,brown] (2.5,1.0) -- (2.5,4.0) ; \draw node[scale=1.3] at (2.8,2.8) { $B$}; \draw[cap=round,line width = 0.8mm,brown] (6.0,2.5) -- (9.0,2.5); \draw[cap=round,line width = 0.8mm,brown] (7.5,1.0) -- (7.5,4.0) ; \draw node[scale=1.3] at (7.8,2.8) { $B$}; % \draw[cap=round,line width = 0.8mm,brown] (1.0,7.5) -- (4.0,7.5); \draw[cap=round,line width = 0.8mm,brown] (2.5,6.0) -- (2.5,9.0) ; \draw node[scale=1.3] at (7.8,7.8) { $B$}; \draw[cap=round,line width = 0.8mm,brown] (6.0,7.5) -- (9.0,7.5); \draw[cap=round,line width = 0.8mm,brown] (7.5,6.0) -- (7.5,9.0) ; \draw node[scale=1.3] at (2.8,7.8) { $B$}; \draw[green,cap=round,line width = 0.8mm] (2.5,4.2) -- (2.5,5.7); \draw[red,cap=round,line width = 0.5mm] (1.5,4.9) -- (3.5 ,4.9) ; \draw[red,cap=round,line width = 0.5mm] (1.5,5.1) -- (3.5 ,5.1) ; \draw node[scale=1.3] at (2.8,5.4) { $A$}; % \draw[green,cap=round,line width = 0.8mm] (7.5,4.2) -- (7.5,5.7); \draw[red,cap=round,line width = 0.5mm] (6.5,4.9) -- (8.5 ,4.9) ; \draw[red,cap=round,line width = 0.5mm] (6.5,5.1) -- (8.5 ,5.1) ; \draw node[scale=1.3] at (7.8,5.4) { $A$}; % \draw[green,cap=round,line width = 0.8mm] (4.2,2.5) -- (5.7,2.5); \draw[red,cap=round,line width = 0.5mm] (4.9,1.5) -- (4.9 ,3.5) ; \draw[red,cap=round,line width = 0.5mm] (5.1,1.5) -- (5.1 ,3.5) ; \draw node[scale=1.3] at (5.3,2.8) { $A$}; % \draw[green,cap=round,line width = 0.8mm] (4.2,7.5) -- (5.7,7.5); \draw[red,cap=round,line width = 0.5mm] (4.9,6.5) -- (4.9 ,8.5) ; \draw[red,cap=round,line width = 0.5mm] (5.1,6.5) -- (5.1 ,8.5) ; \draw node[scale=1.3] at (5.3,7.8) { $A$}; % \draw[red,cap=round,line width = 0.5mm] (5.1,5.1) -- (6.1 ,5.1) ; \draw[red,cap=round,line width = 0.5mm] (5.1,5.1) -- (5.1 ,6.1) ; \draw[red,cap=round,line width = 0.5mm] 
(3.9,5.1) -- (4.9 ,5.1) ; \draw[red,cap=round,line width = 0.5mm] (4.9,5.1) -- (4.9 ,6.1) ; \draw[red,cap=round,line width = 0.5mm] (5.1,3.9) -- (5.1 ,4.9) ; \draw[red,cap=round,line width = 0.5mm] (5.1,4.9) -- (6.1 ,4.9) ; \draw[red,cap=round,line width = 0.5mm] (3.9,4.9) -- (4.9 ,4.9) ; \draw[red,cap=round,line width = 0.5mm] (4.9,3.9) -- (4.9,4.9) ; \draw node[scale=1.3] at (5.4,5.4) { $T$}; % \end{tikzpicture} \caption{Elementary tensors $T$, $A$, and $B$. When these tensors are contracted in the pattern shown here the world-line representation of the partition function is generated exactly.} \label{fig:tn} \end{figure} \section{Transfer Matrix} Using the tensors defined in the previous sections, one can build a transfer matrix for this model. The transfer matrix can be defined as the product of two types of matrices. In this section, we first define and construct these two different matrices. Then, by combining these two matrices in the appropriate way we can define a transfer matrix. The partition function is the trace of the $ N_{\tau}^{\text{th}} $ power of this final matrix. The first type of matrix we define is the $ \mathcal{B} $ matrix. It is made by contracting alternating $B$ and $A$ tensors along a time-slice. \begin{align} \nonumber &\mathcal{B}_{(m_1\otimes \cdots m_N \otimes K_{1}\otimes \cdots K_{N} ) ( m^{'}_1 \otimes \cdots m^{'}_N \otimes K_{1}^{\prime} \otimes \cdots K_{N}^{\prime})} = \\ \nonumber &B_{m m^{'} m_1 m^{'}_1} A_{m^{'} m^{''} K_{1} K_{1}^{\prime}} B_{m^{''}m^{'''} m_2 m^{'}_2} \times \\ & A_{m^{'''}m^{''''} K_{2} K_{2}^{\prime}} \cdots B_{m^{(N-1)} m m_N m^{'}_N} \end{align} where a sum over repeated indices is implied. Diagrammatically $ \mathcal{B}$ is represented as Fig.~\ref{fig:B}. An important feature of this matrix is that it is diagonal, due to the diagonal nature of the $B$ tensors, and the $K$ indices in the $A$ tensors. This means incoming states through this matrix do not change into other states. 
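The diagonality of $\mathcal{B}$ can be checked in a minimal setting. The sketch below is an illustration under simplifying assumptions (a single spatial site with periodic wrapping, gauge index truncated to $m \in \{-1,0,1\}$, and placeholder positive plaquette weights in place of $I_m(\beta)$); it contracts one $B$ with one $A$ and verifies that the resulting matrix is diagonal:

```python
import numpy as np

g, ms = 3, [-1, 0, 1]                      # truncated gauge index
w = {m: 1.0 / (1 + abs(m)) for m in ms}    # placeholder positive plaquette weights

B = np.zeros((g,) * 4)
for i, m in enumerate(ms):
    B[i, i, i, i] = w[m]

A = np.zeros((g, g, 4, 4))
for i, mi in enumerate(ms):
    for j, mj in enumerate(ms):
        for K in range(4):                  # fused index K = 2*k + kbar
            if mi - mj + K // 2 - K % 2 == 0:
                A[i, j, K, K] = 1.0

# One spatial site with a periodic wrap: calB[(m1, K1), (m1', K1')]
calB = np.einsum('abij,bakl->ikjl', B, A).reshape(g * 4, g * 4)
assert np.allclose(calB, np.diag(np.diag(calB)))   # calB is diagonal
```

Since $B$ is fully diagonal and the $K$ indices of $A$ are diagonal, every nonzero element of $\mathcal{B}$ sits on the diagonal, as the assertion confirms.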
\begin{figure} \centering \begin{tikzpicture}[scale=0.86] \draw[black, cap=round,line width=0.8mm, dashed] (0,0) -- (5,0) -- (5,5) -- (10,5); \draw[black, cap=round,line width=0.8mm, dashed] (5,0) -- (10,0); \draw[black,cap=round,line width=0.8mm,dashed] (5,5) -- (0,5); \draw[cap=round,line width=0.8mm,brown] (1.0,2.5) -- (4.0,2.5); \draw[cap=round,line width = 0.8mm,brown] (2.5,1.0) -- (2.5,4.0) ; \draw node[scale=1.3] at (0.8,2.7) {$m_3$}; \draw node[scale=1.3] at (4.1,2.7) {$m_4$}; \draw node[scale=1.3] at (5.8,2.7) {$m_5$}; \draw node[scale=1.3] at (9.1,2.7) {$m_6$}; \draw node[scale=1.3] at (2.5,4.3) {$m^{'}_1$}; \draw node[scale=1.3] at (2.5,0.7) {$m_1$}; \draw node[scale=1.3] at (7.5,4.3) {$m^{'}_2$}; \draw node[scale=1.3] at (7.5,0.7) {$m_2$}; \draw[cap=round,line width = 0.8mm,brown] (6.0,2.5) -- (9.0,2.5); \draw[cap=round,line width = 0.8mm,brown] (7.5,1.0) -- (7.5,4.0) ; \draw[green,cap=round,line width = 0.8mm] (4.2,2.5) -- (5.7,2.5); \draw[red,cap=round,line width = 0.5mm] (4.9,1.9) -- (4.9 ,3.1) ; \draw[red,cap=round,line width = 0.5mm] (5.1,1.9) -- (5.1 ,3.1) ; \draw node[scale=1.3] at (5.4, 1.7) {$K_1$}; \draw node[scale=1.3] at (5.4, 3.5) {$K^{'}_1$}; \end{tikzpicture} \caption{Construction of part of the $\mathcal{B}$ matrix. 
In principle the construction continues to the left and right with $A$ tensors contracted with the $B$ tensors, and so on.} \label{fig:B} \end{figure} \vspace{0.5cm} In analogy with the construction of $ \mathcal{B} $ we define the $\mathcal{A}$ matrix as the alternating contraction of $T$ and $A$ tensors along a time-slice, \begin{align} \nonumber &\mathcal{A}_{(m_1 \otimes \cdots m_N \otimes K_{1} \otimes \cdots K_{N} ) ( m^{'}_1 \otimes \cdots m^{'}_N \otimes K_{1}^{\prime} \otimes \cdots K_{N}^{\prime})} = \\ & A_{m_1 m^{'}_1 \bar{K}_1 \bar{K}_2 } T_{\bar{K}_{2} \bar{K}_{3} K_{1} K_{1}^{\prime} } A_{m_2 m^{'}_2 \bar{K}_{3} \bar{K}_{4}} \cdots A_{m_N m^{'}_N \bar{K}_{N} \bar{K}_{1} } \end{align} with a diagrammatic representation given by Fig.~\ref{fig:amatrix}. This matrix has off-diagonal elements, and is responsible for the changing of states between time-slices. It moves fermionic current across space and through time, with the appropriate shift in the electric field to compensate.
\vspace{0.5cm} \begin{figure*} \begin{tikzpicture} \draw[black, cap=round,line width=0.8mm, dashed] (0,0) -- (5,0) -- (15,0); \draw[black, cap=round,line width=0.8mm, dashed] (5,-2) -- (5,2) ; \draw[black, cap=round,line width=0.8mm, dashed] (10,-2) -- (10,2); \draw[green,cap=round,line width = 0.8mm] (2.5,-1.1) -- (2.5,1.1); \draw[red,cap=round,line width = 0.5mm] (1.5,0.1) -- (3.5 ,0.1) ; \draw[red,cap=round,line width = 0.5mm] (1.5,-0.1) -- (3.5 ,-0.1) ; \draw[green,cap=round,line width = 0.8mm] (7.5,-1.1) -- (7.5,1.1); \draw[red,cap=round,line width = 0.5mm] (6.5,0.1) -- (8.5 ,0.1) ; \draw[red,cap=round,line width = 0.5mm] (6.5,-0.1) -- (8.5 ,-0.1) ; \draw[green,cap=round,line width = 0.8mm] (12.5,-1.1) -- (12.5,1.1); \draw[red,cap=round,line width = 0.5mm] (11.5,0.1) -- (13.5 ,0.1) ; \draw[red,cap=round,line width = 0.5mm] (11.5,-0.1) -- (13.5 ,-0.1) ; \draw node[scale=1.3] at (2.5, 1.3) {$m_1$}; \draw node[scale=1.3] at (2.5, -1.3) {$m^{'}_1$}; \draw node[scale=1.3] at (7.5, 1.3) {$m_2$}; \draw node[scale=1.3] at (7.5, -1.3) {$m^{'}_2$}; \draw node[scale=1.3] at (12.5, 1.3) {$m_3$}; \draw node[scale=1.3] at (12.5, -1.3) {$m^{'}_3$}; \draw node[scale=1.3] at (1.25, 0.4) {$\bar{K}_1$}; \draw node[scale=1.3] at (3.75, 0.4) {$\bar{K}_2$}; \draw node[scale=1.3] at (6.25, 0.4) {$\bar{K}_{3}$}; \draw node[scale=1.3] at (8.75, 0.4) {$\bar{K}_4$}; \draw node[scale=1.3] at (11.25, 0.4) {$\bar{K}_5$}; \draw node[scale=1.3] at (5.3,1.3) {$K_1$}; \draw node[scale=1.3] at (5.3,-1.3) {$K^{'}_1$}; \draw node[scale=1.3] at (10.3,1.3) {$K_2$}; \draw node[scale=1.3] at (10.3,-1.3) {$K^{'}_2$}; \draw[red,cap=round,line width = 0.5mm] (5.1,0.1) -- (6.1 ,0.1) ; \draw[red,cap=round,line width = 0.5mm] (5.1,0.1) -- (5.1 ,1.1) ; \draw[red,cap=round,line width = 0.5mm] (3.9,0.1) -- (4.9 ,0.1) ; \draw[red,cap=round,line width = 0.5mm] (4.9,0.1) -- (4.9 ,1.1) ; \draw[red,cap=round,line width = 0.5mm] (5.1,-1.1) -- (5.1 ,-0.1) ; \draw[red,cap=round,line width = 0.5mm] (5.1,-0.1) -- (6.1 
,-0.1) ; \draw[red,cap=round,line width = 0.5mm] (3.9,-0.1) -- (4.9 ,-0.1) ; \draw[red,cap=round,line width = 0.5mm] (4.9,-1.1) -- (4.9,-0.1) ; \draw[red,cap=round,line width = 0.5mm] (10.1,0.1) -- (11.1 ,0.1) ; \draw[red,cap=round,line width = 0.5mm] (10.1,0.1) -- (10.1 ,1.1) ; \draw[red,cap=round,line width = 0.5mm] (8.9,0.1) -- (9.9 ,0.1) ; \draw[red,cap=round,line width = 0.5mm] (9.9,0.1) -- (9.9 ,1.1) ; \draw[red,cap=round,line width = 0.5mm] (10.1,-1.1) -- (10.1 ,-0.1) ; \draw[red,cap=round,line width = 0.5mm] (10.1,-0.1) -- (11.1 ,-0.1) ; \draw[red,cap=round,line width = 0.5mm] (8.9,-0.1) -- (9.9 ,-0.1) ; \draw[red,cap=round,line width = 0.5mm] (9.9,-1.1) -- (9.9,-0.1) ; \end{tikzpicture} \caption{Construction of matrix $\mathcal{A}$. In principle the construction continues to the left and right, alternating contraction between $A$ and $T$ tensors. This matrix is responsible for moving fermionic current around in space and time, and adjusting the gradient of the electric field to compensate.} \label{fig:amatrix} \end{figure*} Using the definitions above we can recast the partition function into an alternating product of $ \mathcal{B} $ and $\mathcal{A}$ matrices. This alternating product can be broken up, and recast as the $N_{\tau}^{\text{th}}$ power of a single matrix, \begin{equation} \label{eq:tm} \mathcal{T}_{\alpha \beta} = \sqrt{\mathcal{B}}_{\alpha \delta} \mathcal{A}_{\delta \gamma} \sqrt{\mathcal{B}}_{\gamma \beta} \end{equation} where the square root is well-defined since $ \mathcal{B}$ is diagonal in all of its indices (and its matrix elements are non-negative). The indices in Eq.~\eqref{eq:tm} are collective indices as defined before in the definitions of the $\mathcal{B}$ and $\mathcal{A}$ tensors. Now we can write the partition function as follows, \begin{equation} Z = {\rm Tr\;} [ \mathcal{T}^{N_{\tau}} ].
\end{equation} \section{Fundamental Tensor for TRG} \subsection{Asymmetric tensor} In order to have efficient numerical calculations using the TRG, the tensor network structure should be translationally invariant. This means that whatever fundamental tensor one uses must contract naturally with itself. That is, the top indices of the fundamental tensor should be compatible for contraction with the bottom indices, and the indices on the left side of the tensor should be compatible for contraction with those on the right. To this end, we define a tensor, $ \mathcal{M} $, using a single elementary plaquette tensor (the $ B $), two link tensors (the $A$s), and a single fermion $ T $ tensor. This is shown diagrammatically in Fig.~\ref{fig:trg}. As can be seen from the figure, there are two different types of indices associated with each direction in the tensor. Each direction has one $m$ index and one $K$ index. However, repeated contraction of this tensor with itself in the appropriate pattern reproduces the partition function; it is the only fundamental tensor necessary to do so. The tensor is explicitly given as \begin{align} \label{eq:fund} \nonumber & \mathcal{M}_{m_1 m_2 m_3 m_4 K_{1} K_2 K_3 K_4 } = \sum_{m'_{1}, m'_{2}, \bar{K}_{1}, \bar{K}_{2}} B_{m_1 m^{'}_1 m_2 m^{'}_2} \times \\ & A_{m^{'}_2 m_3 K_{1} \bar{K}_1 } T_{\bar{K}_1 K_2 \bar{K}_2 K_3} A_{m_4 m^{'}_1 \bar{K}_{2} K_{4}}. \end{align} Here the $K$ indices always have dimension four, while the $m$ indices in principle run over all integers; in practice they are truncated, and they are further constrained by the $K$ indices. Looking at a single direction, the total size of the state space associated with two of the indices is $D_{\text{bond}} = N_{\text{gauge}} \times 4$, where $N_{\text{gauge}}$ is the number of states kept for the $B$ tensor index in practice.
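The fundamental tensor of Eq.~\eqref{eq:fund} can be assembled with a single einsum. The sketch below is illustrative only: the gauge index is truncated to $m \in \{-1,0,1\}$ ($N_{\text{gauge}} = 3$) and unit placeholder weights stand in for the Bessel factors on $B$:

```python
import numpy as np

g, ms = 3, [-1, 0, 1]          # truncated gauge index, N_gauge = 3

# Fused fermion tensor T' (K = 2*k + kbar): one unit of current in, one out
T = np.zeros((4,) * 4)
for idx in np.ndindex(*T.shape):
    if sum(K // 2 for K in idx) == 1 and sum(K % 2 for K in idx) == 1:
        T[idx] = 1.0

# Plaquette tensor B (unit placeholder weights stand in for I_m(beta))
B = np.zeros((g,) * 4)
for i in range(g):
    B[i, i, i, i] = 1.0

# Link tensor A: delta constraint between m's and fermion current, diagonal in K
A = np.zeros((g, g, 4, 4))
for i, mi in enumerate(ms):
    for j, mj in enumerate(ms):
        for K in range(4):
            if mi - mj + K // 2 - K % 2 == 0:
                A[i, j, K, K] = 1.0

# Eq. (fund): sum over m'_1 (e), m'_2 (f), Kbar_1 (q), Kbar_2 (s);
# a..d are m_1..m_4 and p, r, t, u are K_1..K_4
M = np.einsum('aebf,fcpq,qrst,desu->abcdprtu', B, A, T, A)
```

The resulting $\mathcal{M}$ carries one $m$ index and one $K$ index per direction, so each bond has dimension $N_{\text{gauge}} \times 4 = 12$ here.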
\begin{figure}[t] \centering \begin{tikzpicture}[scale=0.8] \draw[step=5cm,black,cap=round,line width=0.8mm,dashed] (0.0,0.0) grid (10.0,10.0); \draw[cap=round,line width = 0.8mm,brown] (1.0,2.5) -- (4.0,2.5); \draw[cap=round,line width = 0.8mm,brown] (2.5,1.0) -- (2.5,4.0) ; \draw node[scale=1.3] at (2.8,2.8) { $B$}; \draw[cap=round,line width = 0.8mm,brown] (6.0,2.5) -- (9.0,2.5); \draw[cap=round,line width = 0.8mm,brown] (7.5,1.0) -- (7.5,4.0) ; \draw node[scale=1.3] at (7.8,2.8) { $B$}; \draw[cap=round,line width = 0.8mm,brown] (1.0,7.5) -- (4.0,7.5); \draw[cap=round,line width = 0.8mm,brown] (2.5,6.0) -- (2.5,9.0) ; \draw node[scale=1.3] at (7.8,7.8) { $B$}; \draw[cap=round,line width = 0.8mm,brown] (6.0,7.5) -- (9.0,7.5); \draw[cap=round,line width = 0.8mm,brown] (7.5,6.0) -- (7.5,9.0) ; \draw node[scale=1.3] at (2.8,7.8) { $B$}; \draw[green,cap=round,line width = 0.8mm] (2.5,4.2) -- (2.5,5.7); \draw[red,cap=round,line width = 0.5mm] (1.5,4.9) -- (3.5 ,4.9) ; \draw[red,cap=round,line width = 0.5mm] (1.5,5.1) -- (3.5 ,5.1) ; \draw node[scale=1.3] at (2.8,5.4) { $A$}; \draw[green,cap=round,line width = 0.8mm] (7.5,4.2) -- (7.5,5.7); \draw[red,cap=round,line width = 0.5mm] (6.5,4.9) -- (8.5 ,4.9) ; \draw[red,cap=round,line width = 0.5mm] (6.5,5.1) -- (8.5 ,5.1) ; \draw node[scale=1.3] at (7.8,5.4) { $A$}; \draw[green,cap=round,line width = 0.8mm] (4.2,2.5) -- (5.7,2.5); \draw[red,cap=round,line width = 0.5mm] (4.9,1.5) -- (4.9 ,3.5) ; \draw[red,cap=round,line width = 0.5mm] (5.1,1.5) -- (5.1 ,3.5) ; \draw node[scale=1.3] at (5.3,2.8) { $A$}; \draw[green,cap=round,line width = 0.8mm] (4.2,7.5) -- (5.7,7.5); \draw[red,cap=round,line width = 0.5mm] (4.9,6.5) -- (4.9 ,8.5) ; \draw[red,cap=round,line width = 0.5mm] (5.1,6.5) -- (5.1 ,8.5) ; \draw node[scale=1.3] at (5.3,7.8) { $A$}; \draw[red,cap=round,line width = 0.5mm] (5.1,5.1) -- (6.1 ,5.1) ; \draw[red,cap=round,line width = 0.5mm] (5.1,5.1) -- (5.1 ,6.1) ; \draw[red,cap=round,line width = 0.5mm] (3.9,5.1) -- 
(4.9 ,5.1) ; \draw[red,cap=round,line width = 0.5mm] (4.9,5.1) -- (4.9 ,6.1) ; \draw[red,cap=round,line width = 0.5mm] (5.1,3.9) -- (5.1 ,4.9) ; \draw[red,cap=round,line width = 0.5mm] (5.1,4.9) -- (6.1 ,4.9) ; \draw[red,cap=round,line width = 0.5mm] (3.9,4.9) -- (4.9 ,4.9) ; \draw[red,cap=round,line width = 0.5mm] (4.9,3.9) -- (4.9,4.9) ; \draw node[scale=1.3] at (5.4,5.4) { $T$}; \draw[blue,cap=round,line width=0.5mm] (2.6,5.0) -- (5.0,5.0) -- (5.0,7.4) -- (2.6,7.4) -- (2.6,5.0); \end{tikzpicture} \caption{Construction of tensor $\mathcal{M}$ shown as the four tensors sharing the blue loop. This is a possible single tensor which can be contracted with itself recursively to generate the partition function.} \label{fig:trg} \end{figure} \subsection{Symmetric tensor} It is possible to form a tensor that is completely symmetric in both space and time, as opposed to the asymmetric tensor constructed above. This tensor formulation relies on ``dressing'' the link fermion tensors in their surrounding gauge field configurations. This is possible because the $B$ tensor is completely diagonal in its four indices. To construct the symmetric tensor, the first step is to separate the $B$ tensor into eight smaller pieces, four of which are associated with the adjacent link tensors, while the other four are associated with the four adjacent site tensors, \begin{align} \nonumber & B_{m_1 m_2 m_3 m_4} = \sum_{\alpha,\beta,\gamma,\sigma} b_{m_1 \sigma \alpha} b_{m_2 \alpha \beta} b_{m_3 \beta \gamma} b_{m_4 \gamma \sigma} \\ &= \sum_{\alpha,\beta,\gamma,\sigma,\rho,\lambda,\chi,\psi} b_{m_1 \psi \alpha} \delta_{\alpha \beta} b_{m_2 \beta \gamma} \delta_{\gamma \sigma} b_{m_3 \sigma \rho} \delta_{\rho \lambda} b_{m_4 \lambda \chi} \delta_{\chi \psi}. \end{align} The $b$ tensors are also diagonal, and the $\delta$ matrices are simply Kronecker deltas. This decomposition can be seen graphically in Fig.~\ref{fig:Bsplit}.
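One concrete choice for the corner tensors (an assumption for illustration, possible because $I_m(\beta) > 0$ for real $\beta > 0$) is to give each of the four $b$ factors a quarter power of the plaquette weight, $b_{m\sigma\alpha} = I_m(\beta)^{1/4}\,\delta_{m\sigma}\delta_{\sigma\alpha}$. The sketch below uses placeholder positive weights and checks that four such corners reassemble $B$:

```python
import numpy as np

g = 3
w = np.array([0.2, 1.0, 0.2])          # placeholder positive weights for I_m(beta)

B = np.zeros((g,) * 4)
for m in range(g):
    B[m, m, m, m] = w[m]

# Fully diagonal corner tensors: b[m, m, m] = w_m^(1/4)
b = np.zeros((g,) * 3)
for m in range(g):
    b[m, m, m] = w[m] ** 0.25

# Contract four corners around the plaquette and compare with B
B2 = np.einsum('isa,jab,kbg,lgs->ijkl', b, b, b, b)
assert np.allclose(B, B2)
```

Since each $b$ is nonzero only when all three of its indices agree, the chain of contractions forces all four $m$ indices to coincide, reproducing the fully diagonal $B$.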
In principle, each of the above sums runs over all the integers; however, in practice one is forced to restrict the sum. \begin{figure} \centering \begin{tikzpicture}[scale=0.5] \draw node[scale=1.3] at (-9.6, -3) { $ m_1$}; \draw node[scale=1.3] at (-9.6, 3) { $ m_2 $}; \draw node[scale=1.3] at (-12, 0.3) { $m_3$}; \draw node[scale=1.3] at (-6, 0.3) { $m_4$}; \draw node[scale=1.3] at (-0.6, -3) { $ m_1$}; \draw node[scale=1.3] at (-0.6, 3) { $ m_2 $}; \draw node[scale=1.3] at (-3, 0.3) { $m_3$}; \draw node[scale=1.3] at (3, 0.3) { $m_4$}; \draw[cap=round,line width = 0.8mm,brown] (-9.0,-3.0) -- (-9.0,3.0); \draw[cap=round,line width = 0.8mm,brown] (-12.0,0.0) -- (-6.0,0.0) ; \draw[cap=round,line width = 0.8mm,brown] (0.0,-3.0) -- (0.0, -2.0); \draw[cap=round,line width = 0.8mm,brown] (0.0, 3.0) -- (0.0, 2.0); \draw[cap=round,line width = 0.8mm,brown] (-3.0,0.0) -- (-2.0,0.0) ; \draw[cap=round,line width = 0.8mm,brown] (3.0,0.0) -- (2.0,0.0) ; \draw[cap=round,line width = 0.8mm,brown] (-2., -1.2) -- (-2., 1.2) ; \draw[cap=round,line width = 0.8mm,brown] (2., -1.2) -- (2., 1.2) ; \draw[cap=round,line width = 0.8mm,brown] (-1.2, -2.) -- (1.2, -2.) ; \draw[cap=round,line width = 0.8mm,brown] (-1.2, 2.) -- (1.2, 2.) ; \draw[cap=round,line width = 0.8mm,brown] (-1.4, 2.) -- (-2., 2.) -- (-2., 1.4) ; \draw[cap=round,line width = 0.8mm,brown] (1.4, 2.) -- (2., 2.) -- (2., 1.4) ; \draw[cap=round,line width = 0.8mm,brown] (-1.4, -2.) -- (-2., -2.) -- (-2., -1.4) ; \draw[cap=round,line width = 0.8mm,brown] (1.4, -2.) -- (2., -2.) -- (2., -1.4) ; \draw[->,black,ultra thick] (-5.25, 0.0) -- (-3.75, 0.0) ; \end{tikzpicture} \caption{A graphical representation of how the decomposition of the $B$ tensor takes place. Each smaller tensor is also diagonal so that all $m$ indices must take on the same values.} \label{fig:Bsplit} \end{figure} The $b$ tensors are contracted with adjacent $A$ tensors, and the Kronecker deltas are moved to the surrounding site tensors. 
The new $A$ tensors, $\tilde{A}$, are given by, \begin{equation} \tilde{A}_{(m_1 K m_2) ({m'}_1 K' {m'}_2)} = \sum_{\alpha, \beta} b_{\alpha m_1 {m'}_1} A_{\alpha \beta K K'} b_{\beta m_2 {m'}_2}. \end{equation} This $\tilde{A}$ matrix is diagonal in all three sets of indices (the $K$s and the $m$s), due to the aforementioned diagonal nature of the $B$ tensor and the already diagonal nature of the $K$ indices in the $A$ tensor. This tensor can be seen in Fig.~\ref{fig:Atilde}. \begin{figure} \centering \begin{tikzpicture}[scale=0.86] \draw[black, cap=round,line width=0.8mm, dashed] (0,0) -- (5,0) -- (5,5) -- (10,5); \draw[black, cap=round,line width=0.8mm, dashed] (5,0) -- (10,0); \draw[black,cap=round,line width=0.8mm,dashed] (5,5) -- (0,5); \draw[cap=round,line width=0.8mm,brown] (1.0, 2.5) -- (1.5, 2.5); \draw[cap=round,line width=0.8mm,brown] (4.0, 2.5) -- (3.5, 2.5); \draw[cap=round,line width = 0.8mm,brown] (2.5, 1.0) -- (2.5, 1.5) ; \draw[cap=round,line width = 0.8mm,brown] (2.5, 4.0) -- (2.5, 3.5) ; \draw[cap=round,line width=0.8mm,brown] (6.0, 2.5) -- (6.5, 2.5); \draw[cap=round,line width=0.8mm,brown] (9.0, 2.5) -- (8.5, 2.5); \draw[cap=round,line width = 0.8mm,brown] (7.5, 1.0) -- (7.5, 1.5) ; \draw[cap=round,line width = 0.8mm,brown] (7.5, 4.0) -- (7.5, 3.5) ; \draw[cap=round,line width = 0.8mm,brown] (1.5, 2) -- (1.5, 3.) ; \draw[cap=round,line width = 0.8mm,brown] (1.5, 3.2) -- (1.5, 3.5) -- (1.8, 3.5) ; \draw[cap=round,line width = 0.8mm,brown] (2, 3.5) -- (3, 3.5) ; \draw[cap=round,line width = 0.8mm,brown] (3.2, 3.5) -- (3.5, 3.5) -- (3.5, 3.2) ; \draw[cap=round,line width = 0.8mm,brown] (3.5, 3.) -- (3.5, 2.) ; \draw[cap=round,line width = 0.8mm,brown] (3.5, 1.8) -- (3.5, 1.5) -- (3.2, 1.5) ; \draw[cap=round,line width = 0.8mm,brown] (3., 1.5) -- (2, 1.5) ; \draw[cap=round,line width = 0.8mm,brown] (1.8, 1.5) -- (1.5, 1.5) -- (1.5, 1.8) ; \draw[cap=round,line width = 0.8mm,brown] (6.5, 2) -- (6.5, 3.)
; \draw[cap=round,line width = 0.8mm,brown] (6.5, 3.2) -- (6.5, 3.5) -- (6.8, 3.5) ; \draw[cap=round,line width = 0.8mm,brown] (7, 3.5) -- (8, 3.5) ; \draw[cap=round,line width = 0.8mm,brown] (8.2, 3.5) -- (8.5, 3.5) -- (8.5, 3.2) ; \draw[cap=round,line width = 0.8mm,brown] (8.5, 3.) -- (8.5, 2.) ; \draw[cap=round,line width = 0.8mm,brown] (8.5, 1.8) -- (8.5, 1.5) -- (8.2, 1.5) ; \draw[cap=round,line width = 0.8mm,brown] (8., 1.5) -- (7, 1.5) ; \draw[cap=round,line width = 0.8mm,brown] (6.8, 1.5) -- (6.5, 1.5) -- (6.5, 1.8) ; \draw[teal, cap=round,line width=0.5mm, dashed] (6.8, 3.1) -- (3.2, 3.1) -- (3.2, 1.9) -- (6.8, 1.9) -- (6.8, 3.1); \draw[green,cap=round,line width = 0.8mm] (4.25,2.5) -- (5.75,2.5); \draw[red,cap=round,line width = 0.5mm] (4.9,1.9) -- (4.9 ,3.1) ; \draw[red,cap=round,line width = 0.5mm] (5.1,1.9) -- (5.1 ,3.1) ; \end{tikzpicture} \caption{The modified $\tilde{A}$ tensor (boxed in teal), built from the original $A$ tensor, and the $b$ tensors from the decomposition of the two $B$ tensors on the adjacent plaquettes.} \label{fig:Atilde} \end{figure} For the site tensor ($T$ tensor), we now ``wrap'' it in Kronecker deltas which enforce that all four site tensors around a plaquette have the same $m$-plaquette number associated with that plaquette. The new $\tilde{T}$ tensor has the form, \begin{align} \nonumber & \tilde{T}_{(m_1 K_1 m_8)(m_4 K_2 m_5)(m_2 K_3 m_3)(m_6 K_4 m_7)} = \\ & T_{K_1 K_2 K_3 K_4} \delta_{m_1 m_2} \delta_{m_3 m_4} \delta_{m_5 m_6} \delta_{m_7 m_8}, \end{align} and can be seen in Fig.~\ref{fig:Ttilde}. 
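The wrapping step can be sketched numerically. This illustration assumes $g = 3$ gauge states, so each fused index $(m\,K\,m)$ of $\tilde{T}$ has dimension $g \cdot 4 \cdot g = 36$:

```python
import numpy as np

# Fused site tensor: exactly one k and one kbar occupied (K = 2*k + kbar)
T = np.zeros((4,) * 4)
for idx in np.ndindex(*T.shape):
    if sum(K // 2 for K in idx) == 1 and sum(K % 2 for K in idx) == 1:
        T[idx] = 1.0

g = 3
I3 = np.eye(g)                 # the Kronecker deltas moved onto the site

# Ttilde_{(m1 K1 m8)(m4 K2 m5)(m2 K3 m3)(m6 K4 m7)}:
# letters p..s are K1..K4 and a..h are m1..m8
Tt = np.einsum('pqrs,ab,cd,ef,gh->aphdqebrcfsg', T, I3, I3, I3, I3)
Tt = Tt.reshape(g * 4 * g, g * 4 * g, g * 4 * g, g * 4 * g)

# 16 fermionic vertices, times g free choices for each of the 4 deltas
assert np.count_nonzero(Tt) == 16 * g**4
```

Each Kronecker delta ties a pair of $m$ indices together, which is what enforces a single plaquette number shared by the four sites around each plaquette.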
\begin{figure*} \centering \begin{tikzpicture}[scale=0.8] \draw[red, line width=1mm] (-9, 0.25) -- (-3, 0.25); \draw[red, line width=1mm] (-9, -0.25) -- (-3, -0.25); \draw[red, line width=1mm] (-6.25, 3) -- (-6.25, -3); \draw[red, line width=1mm] (-5.75, 3) -- (-5.75, -3); \draw[brown, line width=1mm] (-9, -1.4) -- (-7.3, -1.4) -- (-7.3, -3) ; \draw[brown, line width=1mm] (-9, 1.4) -- (-7.3, 1.4) -- (-7.3, 3) ; \draw[brown, line width=1mm] (-3, 1.4) -- (-4.7, 1.4) -- (-4.7, 3) ; \draw[brown, line width=1mm] (-3, -1.4) -- (-4.7, -1.4) -- (-4.7, -3) ; \draw[->, line width=0.5mm] (-7.1, -1.2) -- (-6.4, -0.4); \draw[->, line width=0.5mm] (-7.1, 1.2) -- (-6.4, 0.4); \draw[->, line width=0.5mm] (-4.9, -1.2) -- (-5.6, -0.4); \draw[->, line width=0.5mm] (-4.9, 1.2) -- (-5.6, 0.4); \draw[->, line width=0.5mm] (-2.2, 0) -- (-0.8, 0); \draw[red, line width=1mm] (0, 0.1) -- (6, 0.1); \draw[red, line width=1mm] (0, -0.1) -- (6, -0.1); \draw[red, line width=1mm] (3.1, 3) -- (3.1, -3); \draw[red, line width=1mm] (2.9, 3) -- (2.9, -3); \draw[brown, line width=1mm] (0, -0.3) -- (2.7, -0.3) -- (2.7, -3) ; \draw[brown, line width=1mm] (0, 0.3) -- (2.7, 0.3) -- (2.7, 3) ; \draw[brown, line width=1mm] (6, 0.3) -- (3.3, 0.3) -- (3.3, 3) ; \draw[brown, line width=1mm] (6, -0.3) -- (3.3, -0.3) -- (3.3, -3) ; \end{tikzpicture} \caption{The modified fermion tensor. The corners of the decomposed $B$ tensor are moved to the $T$ tensor at each site. These corners are Kronecker deltas, and enforce that each site around a plaquette has the same plaquette quantum $m$ number.} \label{fig:Ttilde} \end{figure*} At this point, there are no $B$ tensors remaining. The partition function is simply a contraction of the $\tilde{A}$ and $\tilde{T}$ tensors. 
To construct a single, symmetric, translation invariant tensor, we split the diagonal $\tilde{A}$ into two halves using the singular value decomposition, \begin{align} \nonumber \tilde{A}_{I J} & = \sum_{\alpha, \beta} U_{I \alpha} \lambda_{\alpha \beta} U^{\dagger}_{\beta J} \\ \nonumber & = \sum_{\alpha, \beta, \gamma} (U_{I \alpha} \sqrt{\lambda_{\alpha \beta}})(\sqrt{\lambda_{\beta\gamma}} U^{\dagger}_{\gamma J}) \\ & = \sum_{\alpha} L_{I \alpha} {L^{\dagger}}_{\alpha J}. \end{align} Furthermore, singular values that are exactly zero can be removed to decrease the size of the state space. This is equivalent to taking the square root of the $\tilde{A}$ matrix and removing the zero columns (rows). With the $L$ matrices we can now form a symmetric tensor, by contracting four of these matrices with a $\tilde{T}$, \begin{equation} S_{ijkl}(\beta) = \sum_{\alpha,\beta,\gamma,\delta} \tilde{T}_{\alpha \beta \gamma \delta} L_{\alpha i} L_{\beta j} L_{\gamma k} L_{\delta l}. \end{equation} This tensor is symmetric in space and time, and since the $L$ matrices are diagonal, its nonzero tensor elements are constrained by the fermion tensor, $T$. This final $S$ tensor satisfies the same constraints as the original fermion $T$ tensor; however, its nonzero elements are no longer all equal to $1$, but are instead given by linear combinations of modified Bessel functions of the gauge coupling. \section{Numerical Simulation: HOTRG and HMC} We implemented the HOTRG algorithm to evaluate $\ln Z$ using the tensor defined in Eq.~\eqref{eq:fund} as a translation invariant tensor for coarse-graining. We measured the average plaquette, \begin{equation} \langle U_{p} \rangle = \frac{1}{N_{s}N_{\tau}} \frac{\partial \ln Z}{\partial \beta}, \end{equation} as a function of the gauge coupling and compared it to numerical data from Ref.~\cite{GOSCHL201763}. In this case our computation using HOTRG completely agrees with the worm-algorithm data.
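In practice, derivatives of $\ln Z$ such as $\langle U_p \rangle$ can be estimated by finite differences of the coarse-grained free energy. The sketch below illustrates the central-difference step only; the analytic `lnZ` is a hypothetical stand-in for an HOTRG output, chosen so the result can be checked exactly:

```python
import math

def avg_plaquette(lnZ, beta, ns, nt, h=1e-4):
    # <U_p> = (1/(Ns*Ntau)) d(lnZ)/d(beta), via a central finite difference
    return (lnZ(beta + h) - lnZ(beta - h)) / (2 * h * ns * nt)

# Toy free energy standing in for the HOTRG output (hypothetical):
# lnZ = Ns*Ntau * log(2 cosh beta), so the exact derivative is tanh(beta)
ns = nt = 4
lnZ = lambda beta: ns * nt * math.log(2 * math.cosh(beta))

up = avg_plaquette(lnZ, 1.0, ns, nt)
assert abs(up - math.tanh(1.0)) < 1e-6
```

The central difference has $O(h^2)$ error, which is far below the truncation error of the coarse-graining itself for any reasonable step size.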
Moreover, we can add a $ \theta $ term to the original action, which results in new couplings expressed as linear combinations of the gauge coupling and the theta parameter, $ \eta = \frac{\beta}{2} - \frac{\theta}{4 \pi} $ and $ \bar{\eta} = \frac{\beta}{2} + \frac{\theta}{4 \pi} $. For the tensor construction here we only need to redefine the plaquette tensor, $B$, with $ I_{m} (\beta) $ replaced by $ I_{m} (2 \sqrt{\eta\bar{\eta}}) \left({\eta / \bar{\eta}} \right)^{m/2} $. To validate the formulation, we measured two observables: the average plaquette $ \langle U_{p} \rangle $ and the topological charge $\langle Q \rangle$ as functions of the $\theta$ parameter. The topological charge is defined as \begin{equation} \langle Q \rangle = \frac{1}{N_{s}N_{\tau}} \frac{\partial \ln Z}{\partial \theta}. \end{equation} The results of the calculation of the average plaquette as a function of $\beta$ for different system sizes can be seen in Fig.~\ref{fig:avg_plaq}, \begin{figure}[tbp] \centering \includegraphics[width=8.6cm]{plaq.pdf} \caption{Average plaquette vs. $ \beta $ for lattice sizes with $N_{\tau} = N_{s} = 4$, $8$, and $16$, compared with data from Ref.~\cite{GOSCHL201763}. For these data $N_{\text{gauge}} = 3$ is sufficient to achieve accuracy similar to the MC data.} \label{fig:avg_plaq} \end{figure} and as a function of $\theta$ in Fig.~\ref{fig:plaqvstheta}. \begin{figure}[tbp] \centering \includegraphics[width=8.6cm]{plaqvstheta2.pdf} \caption{Average plaquette vs. $ \theta $ for a lattice with $N_{s} = N_{\tau} = 4$. Here $N_{\text{gauge}} = 5$ is necessary to achieve accuracy similar to the MC data.} \label{fig:plaqvstheta} \end{figure} We find good agreement and convergence across a wide range of $\beta$ values for a relatively small number of gauge states, $N_{\text{gauge}} = 3$ and 5.
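The replacement weights can be checked against the Bessel generating function $\sum_m I_m(x)\,u^m = e^{(x/2)(u+1/u)}$: with $u = \sqrt{\eta/\bar{\eta}}\,e^{i\phi_p}$ the modified weights resum to $\exp(\eta e^{i\phi_p} + \bar{\eta} e^{-i\phi_p}) = \exp(\beta\cos\phi_p - i\frac{\theta}{2\pi}\sin\phi_p)$, i.e. a plaquette Boltzmann factor with a $\theta$ term (up to the sign conventions used for the plaquette action). A self-contained numerical check, with the Bessel function evaluated from its integral representation:

```python
import cmath
import math

def bessel_I(m, x, n=4000):
    # Integral representation: I_m(x) = (1/pi) \int_0^pi e^{x cos t} cos(m t) dt
    h = math.pi / n
    s = sum(math.exp(x * math.cos(j * h)) * math.cos(m * j * h) for j in range(1, n))
    s += 0.5 * (math.exp(x) + math.exp(-x) * math.cos(m * math.pi))
    return h * s / math.pi

beta, theta, phi = 1.2, 0.8, 0.5
eta = beta / 2 - theta / (4 * math.pi)
etabar = beta / 2 + theta / (4 * math.pi)

t = cmath.exp(1j * phi)
lhs = cmath.exp(eta * t + etabar / t)   # = exp(beta*cos(phi) - 1j*(theta/(2*pi))*sin(phi))
rhs = sum(bessel_I(m, 2 * math.sqrt(eta * etabar)) * (eta / etabar) ** (m / 2) * t ** m
          for m in range(-15, 16))
assert abs(lhs - rhs) < 1e-8
```

For $\theta = 0$ one has $\eta = \bar{\eta} = \beta/2$, and the weights reduce to the original $I_m(\beta)$.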
The results for the topological charge can be seen in Fig.~\ref{fig:topo_charge}, \begin{figure} \centering \includegraphics[width=8.6cm]{avg_q_x4_t4_b1p6_D2.pdf} \caption{The topological charge as a function of $\theta$. Here we compare with Ref.~\cite{GOSCHL201763}. We find that a slightly larger range of plaquette quantum numbers is necessary---in contrast to the average plaquette---to achieve consistent results. In this case, the plaquette numbers had to be allowed to run from $m=-2$ to $2$.} \label{fig:topo_charge} \end{figure} and again we find good agreement across the range scanned; however, to obtain this result a larger $N_{\text{gauge}} = 5$ was necessary. We also noted that for large $\theta$ values, data on larger volumes were significantly noisier. We discuss possible explanations and solutions in the conclusions. \section{Conclusions} In this paper we have constructed a tensor network formulation of the massless lattice Schwinger model with staggered fermions. We have considered both the usual action and one to which a topological term is added. The addition of the latter term induces a sign problem and renders the model intractable for a conventional hybrid Monte Carlo simulation. Using the HOTRG algorithm we have computed the free energy and its derivatives and compared the results, where possible, with both hybrid Monte Carlo simulations and simulations based on a dual representation in terms of fermion loops. Where comparison is possible the agreement is good, with the tensor network calculation being computationally superior to Monte Carlo. That said, we have experienced difficulties measuring observables for large values of the topological coupling $\theta$. Typically the signal for an operator like the plaquette becomes very noisy after several iterations of the blocking scheme.
Additionally, arguments used for the positivity of terms in the sum of the partition function assume a complete lattice whose boundary conditions and size are already fixed~\cite{GATTRINGER2015732}. In contrast, the HOTRG does not know beforehand what the final size of the lattice will be, or what the boundary conditions will be at that size. This in turn gives the algorithm more freedom to choose which states are relevant during truncation, even though those very states may be projected out in the final step of blocking, rendering them useless. A tensor construction scheme which uses an environment tensor might achieve better results at larger volumes, since the forward-backward iteration from a complete lattice should retroactively adjust the intermediate states kept during truncation at smaller volumes. Of course, in the continuum limit the partition function should be independent of $\theta$, and the difficulties are likely related at least in part to this fact --- as the chiral symmetry of the lattice action is restored, the system will develop chiral zero modes which will suppress the contribution of any topological field configurations to the partition function. The $\theta$ dependence is restored in the presence of a fermion mass. However, in that case there are non-trivial $-1$ factors which appear in the dual representation of the partition function. Part of the phase depends on the number of closed fermion loops appearing in any particular dual configuration. It is extremely hard to see how this phase can be reconstructed from the contraction of local tensors, and we have not been able to generalize the tensor network described here to the case of non-zero masses. This should sound a cautionary note to the idea that tensor network formulations of lattice field theories are free of sign problems. In the case of fermion theories this may not be generically the case.
\begin{acknowledgments}The authors would like to thank the members of the QuLat collaboration for stimulating discussions. SC, YM, and JUY were supported by the U.S. Department of Energy (DOE) under Award Number DE-SC0019139. \end{acknowledgments}
\section{Introduction} A crucial method (including what is now known as the Dolgopyat inequality) to prove exponential decay of correlations for Anosov flows with $C^1$ stable and unstable foliations was developed by Dolgopyat \cite{D}. Liverani \cite{L} obtained exponential decay of correlations for Anosov flows with contact structure (and hence for the geodesic flow on compact negatively curved manifolds of any dimension). Baladi \& Vall\'ee \cite{BalVal} further refined the method of \cite{D} to prove exponential decay of correlations for suspension semiflows over one-dimensional piecewise $C^2$ expanding Markov maps with $C^1$ roof functions. This was extended to the multidimensional setting by Avila et al.\ \cite{AGY}, to prove exponential decay of correlations of Teichm\"uller flows. Ara\'ujo \& Melbourne \cite{AM} showed that the method can be adapted to suspension semiflows over $C^{1+\alpha}$ maps with $C^1$ roof functions, which enabled them to prove that the classical Lorenz attractor has exponential decay of correlations. In all of the above works, the results are applied to $C^\alpha$ observables for some $\alpha > 0$. In this paper, we consider a class of non-Markov maps (see Section~\ref{sec:setup}) and obtain a Dolgopyat inequality on the space of bounded variation (BV) observables (Theorem~\ref{th-main}). The Dolgopyat inequality obtained in this paper automatically allows us to obtain exponential decay of correlations for skew-products on ${\mathbb T}^2$ as considered by Butterley and Eslami \cite{BE,Eslami}, where the methods developed do not exploit the presence of a Markov structure. A Dolgopyat inequality is most probably not the easiest route to a proof of exponential decay for \text{BV}\ observables for the class of non-Markov maps considered here; one could, for instance, think of inducing to a Markov map for which exponential decay of correlations of $C^2$ observables is known and then use approximation arguments to pass to \text{BV}\ observables.
Instead, we believe that the benefit of the Dolgopyat inequality in this setting is that it can be used to study perturbations of the flow (such as inserting holes in the Poincar\'e map); it is not at all clear that this can be economically done via inducing. The main new ingredient of the proof is to locate and control the sizes of the jumps associated with \text{BV}\ functions (see Section~\ref{sec-newingr}). \subsection{Specific Examples}\label{sec:examples} Our results (i.e., the Dolgopyat type inequality given by Theorem~\ref{th-main}) apply to typical AFU maps presented in Section~\ref{sec:setup}. By typical we mean the whole class of AFU maps (studied by Zweim\"uller \cite{Zwei0,Zwei}) satisfying assumption \eqref{eq:k} below. This assumption is very mild, see Remark~\ref{rem:typical}. In particular, this class contains some standard families, such as the shifted $\beta$-transformations $F:[0,1] \to [0,1]$, $x \mapsto \beta x + \alpha \pmod 1$ for fixed $\alpha \in [0,1)$ and $\beta > 1$. Another important example is the first return map of a (non-Markov) Manneville-Pomeau map. That is, $$ F = f^\tau:[\frac12,1] \to [\frac12,1]\quad \text{ for }\quad \tau(x) = \min\{ n \geq 1 : f^n(x) \in [\frac12,1]\}, $$ where $$ f:[0,1] \to [0,1], \quad x \mapsto \begin{cases} x(1+2^\alpha x^\alpha) & x \in [0,\frac12); \\ \gamma(2x-1) & x \in [\frac12,1], \end{cases} $$ is a non-Markov Manneville-Pomeau map with fixed $\alpha > 0$ and $\gamma \in (\frac12,1]$. The assumptions below apply to these examples, although \eqref{eq:k} holds for all parameters outside a set of Hausdorff dimension $< 1$, see Remark~\ref{rem:typical}. The UNI condition \eqref{eq:UNI} is a generic condition on the roof function of the type previously considered in \cite{BalVal, AGY}. \section{Set-up, notation, assumptions and results.}\label{sec:setup} We start this section by discussing the class of AFU maps studied by Zweim\"uller \cite{Zwei0, Zwei}.
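Both example families are easy to experiment with numerically. The following sketch (with the parameters $\beta$, $\alpha$, $\gamma$ chosen arbitrarily for illustration) implements the shifted $\beta$-transformation and the first return map $F = f^\tau$ of the Manneville-Pomeau map:

```python
def beta_map(x, beta=2.5, alpha=0.3):
    # Shifted beta-transformation F(x) = beta*x + alpha (mod 1);
    # the parameter values are arbitrary illustrations.
    return (beta * x + alpha) % 1.0

def mp(x, alpha=0.5, gamma=0.8):
    # Non-Markov Manneville-Pomeau map f (alpha, gamma illustrative).
    if x < 0.5:
        return x * (1.0 + (2.0 * x) ** alpha)
    return gamma * (2.0 * x - 1.0)

def first_return(x, alpha=0.5, gamma=0.8, max_iter=10**5):
    # F = f^tau on Y = [1/2, 1]: iterate f until the orbit re-enters Y.
    assert 0.5 < x <= 1.0  # x = 1/2 maps onto the neutral fixed point 0
    y, tau = mp(x, alpha, gamma), 1
    while y < 0.5:
        assert tau < max_iter  # returns are slow near the neutral fixed point
        y, tau = mp(y, alpha, gamma), tau + 1
    return y, tau
```

Points entering a small neighbourhood of the neutral fixed point $0$ take many iterates to escape, which is exactly the intermittent behaviour that makes the first return map, rather than $f$ itself, uniformly expanding.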
We present their conditions in Subsections~\ref{sec:AFU}--\ref{subsec-standhyp}. \subsection{The AFU map $F$.}\label{sec:AFU} Let $Y$ be an interval and $F:Y \to Y$ a topologically mixing piecewise $C^2$ AFU map ({\em i.e.,\ } uniformly expanding with finite image partition and satisfying Adler's condition), preserving a probability measure $\mu$ which is absolutely continuous w.r.t.\ Lebesgue measure $\text{Leb}$. Let $\alpha$ be the partition of $Y$ into domains of the branches of $F$, and $\alpha^n = \bigvee_{i=0}^{n-1} F^{-i}\alpha$. Thus $F^n:a \to F^n(a)$ is a monotone diffeomorphism for each $a \in \alpha^n$. The collection of inverse branches of $F^n$ is denoted by $\mathcal{H}_n$, and each $h \in \mathcal{H}_n$ is associated to a unique $a \in \alpha^n$ such that $h:F^n(a) \to a$ is a contracting diffeomorphism. \subsection{Uniform expansion.} Let \begin{equation}\label{eq:rho} \rho_0 = \inf_{x \in Y} |F'(x)| \quad \text{ and } \quad \rho = \rho_0^{1/4}. \end{equation} Since $F$ is uniformly expanding, $\rho_0 > \rho > 1$; in fact, we will assume that $\rho_0 > 2^{4/3}$, which can be achieved by taking an iterate. \subsection{Adler's condition.} This condition states that $\sup_{a \in \alpha} \sup_{x \in a} \frac{|F''(x)|}{|F'(x)|^2} < \infty$. As $F$ is expanding, $\frac{|(F^n)''(x)|}{|(F^n)'(x)|^2}$ is bounded uniformly over the iterates $n \geq 1$, $a \in \alpha^n$ and $x \in a$ as well. Thus, there is $C_1 \geq 0$ such that \begin{equation}\label{eq:adler} \frac{|(F^n)''(h(x))|}{|(F^n)'(h(x))|^2} \leq C_1 \quad \text{ and } \quad \frac{h'(x)}{h'(x')} \leq e^{C_1 |x - x'|} \end{equation} for all $n \geq 1$, $h \in \mathcal{H}_n$ and $x, x' \in \text{dom}(h)$. The second inequality follows from the first by a standard computation. \subsection{Finite image partition.}\label{sec:finite_image} The map $F$ need not preserve a Markov partition, but has the finite image property. Therefore $K := \min\{ |F(a)| : a \in \alpha\}$ is positive.
We assume that $F$ is topologically mixing. This implies that there is $k_1 \in \mathbb{N}$ such that $F^{k_1}(J) = Y$ for all intervals $J$ of length $|J| \geq \delta_0 := \frac{K(\rho_0-2)}{5e^{C_1}\rho_0}$ (this choice of $\delta_0$ is used in Lemma~\ref{lem:eta1}). Let $X_1 = X'_1$ be the collection of boundary points of $F(a)$, $a \in \alpha$, where $\alpha$ is the partition of $Y$ into branches of $F$. Due to the finite image property, $X_1$ is a finite collection of points; we denote its cardinality by $N_1$. Inductively, let $X'_k = F(X'_{k-1})$, {\em i.e.,\ } the set of ``new'' boundary points of the $k$-th image partition, and $X_k = \cup_{j \leq k} X'_j$. Therefore $\# X'_k \leq k N_1$. Let $\{ \xi_i \}_{i=0}^M$ be a collection of points containing $X_k$, put in increasing order. Then $$ \mathcal{P}_k =\{ (\xi_{i-1}, \xi_i) : i = 1, \dots ,M \} $$ is a partition of $Y$, refining the {\em image partition of $F^k$}. In other words, the components of $Y \setminus \{ \xi_i \}_{i=0}^M$ are the atoms of $\mathcal{P}_k$. \subsection{Roof function.} Let $\varphi:Y \to \mathbb{R}^+$ be a piecewise $C^1$ function, such that $\varphi \geq 1$ and \begin{equation}\label{eq:C2} C_2 := \sup_{h \in \mathcal{H}} \sup_{x \in \text{dom}(h)} |(\varphi \circ h)'(x)| < \infty. \end{equation} Since a main application is the decay of correlations of the vertical suspension semiflow on $\{ (y,u) : y \in Y, 0 \leq u \leq \varphi(y)\}/(y, \varphi(y)) \sim (F(y),0)$, see Subsection~\ref{sec:semiflow}, we will call $\varphi$ the {\em roof function}. Also assume that there is $\varepsilon_0 > 0$ such that \begin{equation}\label{eq:sum} C_3 := \sup_{x \in Y} \sum_{h \in \mathcal{H}, x \in \text{dom}(h)} |h'(x)| e^{\varepsilon_0 \varphi \circ h(x)} < \infty. \end{equation} \subsection{Further assumption on $F$ (relevant for the non-Markov case)}\label{subsec-standhyp} We first discuss some known properties of the transfer operator and twisted transfer operator.
Let $\text{Leb}$ denote Lebesgue measure. Define the \text{BV}-norm $\| v \|_{\text{BV}}$ of $v:I \to \mathbb{C}$, for an interval $I \subset \mathbb{R}$, as the sum of its $L^1$-norm (w.r.t.\ $\text{Leb}$) $\| v \|_1$ and the total variation $\text{Var}_I v = \inf_{\tilde v = v\text{ a.e.}} \sup_{x_0 < \dots < x_N \in I} \sum_{i=1}^N |\tilde v(x_i)-\tilde v(x_{i-1})|$. Let $\mathcal{L}:L^1(Y,\text{Leb})\to L^1(Y,\text{Leb})$ be the transfer operator associated to $(Y, F)$, given by $\mathcal{L}^n v= \sum_{h\in\mathcal{H}_n} |h'| v\circ h$, $n\geq 1$. For $s = \sigma + ib \in \mathbb{C}$, let $\mathcal{L}_s$ be the twisted version of $\mathcal{L}$ defined via $\mathcal{L}_s v= \mathcal{L}(e^{s\varphi} v)$ with iterates \begin{equation*} \mathcal{L}_s^n v= \sum_{h\in\mathcal{H}_n} e^{s\varphi_n\circ h} |h'| v\circ h, \quad n\geq 1. \end{equation*} We first note the following for real $s = \sigma$. \begin{prop}\label{prop-bnorm} There exists $\varepsilon\in (0,1)$ such that for all $|\sigma|<\varepsilon$, $\|\mathcal{L}_{\sigma} \|_{\text{BV}}<\infty$. \end{prop} \begin{proof} By Remark~\ref{rem-LYtwist}, there exist $c_1, c_2>0$ and $\varepsilon\in (0,1)$ such that $\text{Var}_Y (\mathcal{L}_{\sigma} v) \leq c_1 \text{Var}_Y v +c_2 \|v\|_\infty$ for all $|\sigma|<\varepsilon$. Note that for any $v \in \text{BV}(Y)$, $\|v\|_\infty\leq \text{Var}_Y v+\|v\|_1$. Hence, $\text{Var}_Y (\mathcal{L}_{\sigma} v) \leq (c_1+c_2)\text{Var}_Y v +c_2\|v\|_1$. Also, $\int_Y |\mathcal{L}_{\sigma} v|\, d\text{Leb}\leq C_2\|v\|_\infty\leq C_2(\text{Var}_Y v+\|v\|_1)$ and the conclusion follows. \end{proof} It is known that $\mathcal{L}_0=\mathcal{L}$ has a simple eigenvalue $\lambda_0=1$ with eigenfunction $f_0 \in \text{BV}$, \cite[Lemma 4]{Zwei0} (see also \cite{Rychlik}), and $\frac{1}{C_4} \leq f_0(x) \leq C_4$ for all $x \in Y$, see \cite[Lemma 7]{Zwei}. Hence, $f_0$ is bounded away from zero and infinity.
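As a sanity check on the definition $\mathcal{L} v = \sum_{h} |h'|\, v\circ h$, the change-of-variables identity $\int_Y \mathcal{L} v\, d\text{Leb} = \int_Y v\, d\text{Leb}$ can be verified numerically. The sketch below does this for a shifted $\beta$-transformation; the parameters and the test observable are arbitrary choices for illustration:

```python
import math

BETA, ALPHA = 2.5, 0.3  # illustrative parameters for F(x) = BETA*x + ALPHA (mod 1)

def transfer(v, y):
    # (L v)(y) = sum over inverse branches h through y of |h'(y)| * v(h(y)).
    # The preimages of y are x = (y + m - ALPHA)/BETA with x in [0, 1),
    # and every branch has |h'| = 1/BETA.
    total = 0.0
    for m in range(math.ceil(ALPHA - y), math.ceil(BETA + ALPHA - y)):
        x = (y + m - ALPHA) / BETA
        if 0.0 <= x < 1.0:
            total += v(x) / BETA
    return total

def integral(g, n=100000):
    # Midpoint Riemann sum approximating the Lebesgue integral over [0, 1).
    return sum(g((i + 0.5) / n) for i in range(n)) / n
```

For $v(x) = x^2$ both $\int v$ and $\int \mathcal{L}v$ come out as $1/3$ up to quadrature error; note that $\mathcal{L}v$ has jumps at the finitely many boundary points of the image partition, the very points collected in $X'_1$ above.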
This together with Proposition~\ref{prop-bnorm} implies that there exists $\varepsilon\in (0,1)$ such that $\mathcal{L}_{\sigma}$ has a family of simple eigenvalues $\lambda_{\sigma}$ for $|\sigma|<\varepsilon$ with \text{BV}\ eigenfunctions $f_{\sigma}$. We assumed above that $F$ has the finite image property, but not that $F^n$ has the finite image property uniformly over $n \geq 1$. We put a condition on $F$ as follows: the lengths of the atoms $p \in \mathcal{P}_k$, with $k$ specified below, do not decrease faster than $\rho^{-k}$: \begin{equation}\label{eq:k} \min_{p \in \mathcal{P}_k} \text{Leb}(p) > \frac{16 C_8}{C_9}\frac{\sup f_{\sigma}}{\inf f_{\sigma}} \rho^{-k}, \end{equation} where $C_8 = 3C_7/\eta_0$ with $\eta_0:=(\sqrt 7-1)/2$ and $C_7 \geq 1$ is as in Lemma~\ref{lem:jump_fsigma}, and $C_9$ is as in Lemma~\ref{lem:G}. Note that $\frac{\sup f_{\sigma}}{\inf f_{\sigma}}<\infty$ for $|\sigma|$ small (see Remark~\ref{rem:lambda}). \begin{remark}\label{rem:typical} Assumption~\eqref{eq:k} is trivially satisfied if $F$ is Markov. For many one-parameter families of non-Markov AFU maps, one can show that \eqref{eq:k} only fails at a parameter set of Hausdorff dimension $< 1$. This follows from the shrinking targets results \cite[Theorem 1 and Corollary 1]{AP} and includes the family of shifted $\beta$-transformations $x \mapsto \beta x + \alpha \bmod 1$. \end{remark} Throughout we fix $k\geq 2k_1$ sufficiently large to satisfy: \begin{equation}\label{eq:kE} \rho^k (\rho-1) > 12 N_1 C_8, \end{equation} (Inequality \eqref{eq:kE} will be used in estimates in Section~\ref{sec-disc-jumps}.) Furthermore, we assume that \begin{equation}\label{eq:k3} \rho^{-2k}(\sup f_0 +\text{Var} f_0 ) \Big(\frac{1}{\inf f_0 }+\text{Var}\Big(\frac{1}{f_0 }\Big)\Big)<1, \end{equation} where $f_0$ is the positive eigenfunction of $\mathcal{L}_0$ associated to eigenvalue $\lambda_0 = 1$. 
\subsection{UNI condition restricted to atoms of the image partition $\mathcal{P}_k$} \label{subsec-uni} Fix $k$ as in Subsection~\ref{subsec-standhyp}. Let $C'_2 := \frac{C_2 \rho_0}{\rho_0-1}$ and $C_{10} := (C_1e^{C_1} + 2(1+\varepsilon_0) e^{\varepsilon_0 C'_2}C'_2 + 2C_6)/(2\eta_0-4\rho_0^{-k})$, where it follows from \eqref{eq:kE} that the denominator $2\eta_0 - 4\rho_0^{-k} > 0$. We assume that there exist $D>0$ and a multiple $n_0$ of $k$ such that both \begin{equation}\label{eq-n0-1} C_{10}\rho_0^{-n_0}\frac{4\pi}{D}\leq \frac{1}{4}(2-2\cos\frac{\pi}{12})^{1/2}, \end{equation} and the UNI (uniform non-integrability) condition hold: \begin{equation}\label{eq:UNI} \forall \text{ atom } p \in \mathcal{P}_k, \ \exists h_1, h_2 \in \mathcal{H}_{n_0}\ \text{ such that } \inf_{x \in p} |\psi'(x)| \geq D, \end{equation} for $\psi = \varphi_{n_0} \circ h_1 - \varphi_{n_0} \circ h_2: p \to \mathbb{R}$. \subsection{Main result} Let $b\in\mathbb{R}$. For the class of \text{BV}\ functions we define \begin{equation}\label{eq-bnorm} \|v\|_b= \frac{\text{Var}_Y v }{1+|b|} + \|v\|_1. \end{equation} With the above specified, we can state our main result, a Dolgopyat type inequality. \begin{thm}\label{th-main} Suppose that all the above assumptions~\eqref{eq:rho}--\eqref{eq:UNI} on the AFU map $F$, on $k$ and on the roof function $\varphi$ hold (in particular, we assume that UNI~\eqref{eq:UNI} holds for some $D>0$). Then there exist $A\geq n_0$ and $\varepsilon, \gamma \in (0,1)$ such that for all $|\sigma| < \varepsilon$ and $|b| > \max\{4\pi/D, 2\}$ and for all $n\geq A \log|b|$, $$ \| \mathcal{L}_s^n\|_{b}\leq \gamma^n. $$ \end{thm} An immediate consequence of the above result (see, for instance,~\cite{BalVal}) is \begin{cor} \label{cor-main} Suppose that all the above assumptions~\eqref{eq:rho}--\eqref{eq:UNI} on the AFU map $F$, on $k$ and on the roof function $\varphi$ hold.
For every $0<\alpha<1$ there exist $\varepsilon\in (0,1)$ and $b_0>0$ such that for all $|b|\geq b_0$ and for all $|\sigma|<\varepsilon$, $$ \| (I - \mathcal{L}_s)^{-1}\|_{b} \leq |b|^\alpha. $$ \end{cor} \begin{remark} A similar, but simplified, argument (obtained by taking $\sigma=0$ throughout the proof of Theorem~\ref{th-main} in this paper) shows that without assuming condition \eqref{eq:sum} (which guarantees an exponential tail for the roof function $\varphi$) and with no restriction on the class of \text{BV}\ functions, one obtains that for every $0<\alpha<1$, there exists $b_0>0$ such that for all $|b|\geq b_0$, $\| (I - \mathcal{L}_{ib})^{-1} \|_{b} \leq |b|^\alpha$. Of course, this type of inequality does not imply exponential decay of correlations for suspension semiflows, but we believe it to be useful when proving sharp mixing rates for $\text{BV}$ observables in the non-exponential situation via renewal type arguments (such as sharp bounds for polynomial decay of correlations). \end{remark} \subsection{Application to suspension semiflows}\label{sec:semiflow} Corollary~\ref{cor-main} can be used to obtain exponential decay of correlations in terms of \text{BV}\ functions for suspension semiflows over AFU maps with a $C^1$ roof function. Let $Y^\varphi:=\{(y,u)\in Y\times \mathbb{R}: 0\leq u\leq \varphi(y)\}/\!\!\!\sim$, where $(y,\varphi(y))\sim (F y,0)$, be the suspension over $Y$. The suspension semiflow $F_t:Y^\varphi\to Y^\varphi$ is defined by $F_t(y,u)=(y,u+t)$ computed modulo identifications. The probability measure $\mu^\varphi:=(\mu\times \text{Leb})/\bar\varphi$, where $\bar\varphi:=\int_Y\varphi\, d\mu$, is $F_t$-invariant. \\[3mm] \paragraph{Class of observables} Let $F_{\text{BV},m}(Y^\varphi)$ be the class of observables consisting of $v(y,u) : Y^\varphi\to\mathbb{C}$ such that $v$ is $\text{BV}(Y)$ in $y$ and $C^m$ in $u$, so $\|v\|_{\text{BV}, m}:=\sum_{j=0}^m \| \partial_u^j v \|_{\text{BV}}<\infty$.
For $v\in L^1(Y^\varphi)$ and $w\in L^\infty(Y^\varphi)$ define the correlation function \[ \rho_{t}(v,w):=\int_{Y^\varphi} v\, w\circ F_t\, d\mu^\varphi-\int_{Y^\varphi} v \,d\mu^\varphi \int_{Y^\varphi} w \,d\mu^\varphi. \] The result below gives exponential decay of correlations for $v\in F_{\text{BV},2}(Y^\varphi)$ and $w\in L^\infty(Y^\varphi)$. It is likely that this also follows by reinducing $F$ to a Gibbs-Markov AFU map, to which \cite{BalVal,AM} apply, together with an approximation argument of \text{BV}\ functions by $C^2$ functions. However, it is worthwhile to have the argument for the original map $F$, for instance in situations where reinducing is problematic, such as for families of open AFU maps with shrinking holes. \begin{thm}\label{thm-decay} Suppose that all the above assumptions~\eqref{eq:rho}--\eqref{eq:UNI} on the AFU map $F$ and the roof function $\varphi$ hold. Then there exist constants $a_0, a_1>0$ such that \[ |\rho_{t}(v,w)|\leq a_0 e^{-a_1 t} \|v\|_{\text{BV}, 2}\|w\|_{\infty}, \] for all $v\in F_{\text{BV},2}(Y^\varphi)$ and $w\in L^\infty(Y^\varphi)$. \end{thm} The proof of Theorem~\ref{thm-decay} is given in Appendix~\ref{sec:expo}. Corollary~\ref{cor-main} also implies exponential decay of correlations in terms of \text{BV}\ functions for skew products on $\mathbb{T}^2$ as considered in~\cite{BE, Eslami}. We note, however, that the strength of Corollary~\ref{cor-main} is not needed in the set-up of~\cite{BE, Eslami} as, in those works, the roof function is bounded and one can restrict the calculations to the imaginary axis. \section{Twisted and normalized twisted transfer operators} \label{sec-twisted-normalized} We start with the continuity of the operator $\mathcal{L}_s$ in $\text{BV}$. \begin{prop}\label{prop-continuity} Let $\varepsilon_0 > 0$ and $C_3<\infty$ be as in~\eqref{eq:sum}.
Then there exist $C>0$ and $\varepsilon \in (0, \varepsilon_0)$ such that for all $|\sigma_1|,|\sigma_2|<\varepsilon$ and for all $|b_1|, |b_2| \leq 1$, $\|\mathcal{L}_{\sigma_1+ib_1}-\mathcal{L}_{\sigma_2+ib_2}\|_{\text{BV}}\leq C\varepsilon_0^{-1}|\sigma_1-\sigma_2|$. \end{prop} The proof of Proposition~\ref{prop-continuity} is deferred to the end of Appendix~\ref{sec-LY}. \begin{remark}\label{rem:lambda} An immediate consequence of Proposition~\ref{prop-continuity} is that for any $\delta\in(0,1)$, there exists $\varepsilon\in (0,1)$ such that \[ \sup_{|\sigma|<\varepsilon}|\lambda_{\sigma}-1|<\delta,\quad \sup_{|\sigma|<\varepsilon} \|\frac{f_{\sigma}}{f_0}-1\|_{\text{BV}}<\delta, \quad \sup_{|\sigma|<\varepsilon}\|\frac{f_{\sigma}}{f_0}-1\|_{\infty}<\delta. \] Recall that $\frac{1}{C_4} \leq f_0(x) \leq C_4$ for all $x \in Y$. It follows that $\frac{f_{\sigma}(x)}{f_{\sigma}(y)} = \frac{f_{\sigma}(x)}{f_0(x)} \frac{f_0(x)}{f_0(y)} \frac{f_0(y)}{f_{\sigma}(y)} \leq (1+\delta) C_4^2 (1-\delta)^{-1} < \infty$ for all $x,y \in Y$. Hence, $\frac{\sup f_{\sigma}}{\inf f_{\sigma}} \leq C_{5}$ for $C_{5} := \frac{1+\delta}{1-\delta} C_4^2$ and $|\sigma| < \varepsilon$. \end{remark} Since $\lambda_0 = 1$ and $f_0$ is strictly positive, due to the continuity of $\lambda_{\sigma}$ and $f_{\sigma}$ in $\sigma$, we can ensure that for $\varepsilon>0$ sufficiently small \begin{equation}\label{eq-lamfsigma} \rho^{-1/4} <\lambda_{\sigma} \mbox{ and } f_{\sigma} \mbox{ is strictly positive for all } |\sigma|<\varepsilon. \end{equation} By assumption~\eqref{eq:k3} and Remark~\ref{rem:lambda}, we can choose $\varepsilon$ small enough such that for all $|\sigma| < \varepsilon$, \begin{equation}\label{eq:rho3} \rho^{-2k}(\sup f_{\sigma}+\text{Var} f_{\sigma}) \Big(\frac{1}{\inf f_{\sigma}}+\text{Var}\Big(\frac{1}{f_{\sigma}}\Big)\Big)<1. \end{equation} (The above formula will be used in the proof of Proposition~\ref{prop-LYineq}.)
\begin{lemma}\label{lem:eps} There exists $\varepsilon\in (0,1)$ so small that for all $|\sigma| < \varepsilon$ and for all $n\geq 1$, \begin{equation}\label{eq:rho2} \frac{1}{\lambda_{\sigma}^n} \sup_{h \in \mathcal{H}_n} \sup_{x \in \text{dom}(h)} |h'(x)| e^{\sigma \varphi_n \circ h(x)} \leq \rho^{-3n}. \end{equation} \end{lemma} \begin{remark} Without assumption \eqref{eq:sum} (i.e., without the exponential tail assumption), we still have $$ \sup_{h \in \mathcal{H}_n} \sup_{x \in \text{dom}(h)} |h'(x)| e^{\sigma \varphi_n \circ h(x)} \leq \rho^{-3n} $$ for $-\varepsilon < \sigma \leq 0$. \end{remark} \begin{proof} We start with $n = 1$. By continuity of $\lambda_{\sigma}$, we can take $\varepsilon$ so small that $\lambda_{\sigma}^{4u} \rho_0^{u-1} > C_3$ for $u = \lfloor \varepsilon_0/(4\varepsilon)\rfloor$, with $\varepsilon_0\in (0,1)$ and $C_3$ such that \eqref{eq:sum} holds. For $h \in \mathcal{H}_1$, assume for contradiction that $\lambda_{\sigma}^{-1} |h'(x)| e^{\sigma \varphi \circ h(x)} > \rho^{-3}$ for some $x \in \text{dom}(h)$. Since $|h'| \leq \rho_0^{-1} = \rho^{-4}$, we have $$ \lambda_{\sigma}^{-1} e^{\sigma \varphi \circ h(x)} \geq \lambda_{\sigma}^{-1} \rho^4 |h'| e^{\sigma \varphi \circ h(x)} > \rho = \rho_0^{1/4} \geq |h'|^{-1/4}. $$ Therefore, \begin{eqnarray*} |h'| e^{\varepsilon_0 \varphi \circ h} &>& |h'| e^{4u\varepsilon \varphi \circ h} \geq |h'| e^{4u\sigma \varphi \circ h} \geq |h'| (\lambda_{\sigma}^{-1} e^{\sigma \varphi \circ h} )^{4u} \lambda_{\sigma}^{4u}\\ &\geq& |h'|^{1-u} \lambda_{\sigma}^{4u} \geq \rho_0^{u-1} \lambda_{\sigma}^{4u} \geq C_3, \end{eqnarray*} contradicting \eqref{eq:sum}. The statement for $n \geq 1$ follows immediately.
\end{proof} Let $$ \tilde \mathcal{L}_s v = \frac{1}{\lambda_{\sigma} f_{\sigma}} \mathcal{L}_s( f_{\sigma} v) \quad \text{ and }\quad \tilde \mathcal{L}_{\sigma} v = \frac{1}{\lambda_{\sigma} f_{\sigma}} \mathcal{L}_{\sigma}( f_{\sigma} v) $$ be the \emph{normalized} versions of $\mathcal{L}_s$ and $\mathcal{L}_{\sigma}$. \begin{prop}[Lasota-Yorke type inequality] \label{prop-LYineq} Choose $k$ and $\varepsilon_1\in (0,1)$ such that \eqref{eq:rho3} and~\eqref{eq:rho2} hold. Define $\Lambda_{\sigma} = \lambda_{2\sigma}^{1/2} / \lambda_{\sigma}$. Then, there exist $\varepsilon \leq \varepsilon_1$, $\rho> 1$ and $c>0$ such that for all $s=\sigma+ib$ with $|\sigma|<\varepsilon$ and $b\in\mathbb{R}$, \[ \text{Var}_Y (\tilde \mathcal{L}_s^{nk} v) \leq\rho^{-nk} \text{Var}_Y v + c(1+ |b|)\Lambda_{\sigma}^{nk}(\|v\|_\infty\|v\|_1)^{1/2} \] for all $v\in \text{BV}(Y)$ and all $n\geq 1$. \end{prop} Proposition~\ref{prop-LYineq} would be of little use if $\Lambda_\sigma$ were much larger than $1$, but one can check that $1 \leq \Lambda_\sigma = 1 + O(\sigma^2)$. The proof of Proposition~\ref{prop-LYineq} is deferred to Appendix~\ref{sec-LY}. In what follows we focus on controlling the term containing $(\|v\|_\infty\|v\|_1)^{1/2}$ and proceed as in \cite{BalVal}: we estimate the $L^2$ norm of $\tilde \mathcal{L}_s^n$ for $n$ large enough. Once we obtain a good estimate for the $L^2$ norm, we combine it with the estimate in Proposition~\ref{prop-LYineq} (following the pattern in \cite{AM, AGY, BalVal}) to prove Theorem~\ref{th-main}. \section{New ingredients of the proof} \label{sec-newingr} The basic strategy of the proof using the cancellation lemma follows \cite{AM, AGY, BalVal}. For the non-Markov AFU maps, we use the space $\text{BV}$, and hence observables $u,v \in \text{BV}$ can have jumps. The task is to locate and control the sizes of these jumps.
Given a discontinuity point $x$ for a function $v$, we define the {\em size of the jump} at $x$ as \begin{equation}\label{eq:jumpsize} \text{Size } v(x) = \lim_{\delta \to 0} \sup_{\xi, \xi' \in (x-\delta, x+\delta) } |v(\xi)-v(\xi')|. \end{equation} Recall that the oscillation of a function $v:I \to \mathbb{C}$ on a subinterval $I \subset Y$ is defined as $$ \text{Osc}_I v = \sup_{\xi, \xi' \in I} |v(\xi) -v(\xi')|. $$ It follows that \begin{equation}\label{eq:oscsize} \text{Osc}_I v \leq \text{Osc}_{I^\circ} v + \text{Size } v(x) + \text{Size } v(y) \end{equation} for $I = [x,y]$ with interior $I^\circ$. For positive functions, \eqref{eq:jumpsize} reduces to \begin{equation}\label{eq:jumpsize2} \text{Size } u(x) = \limsup_{\xi \to x} u(\xi) - \liminf_{\xi \to x} u(\xi) = |\lim_{\xi \uparrow x} u(\xi) - \lim_{\xi \downarrow x} u(\xi)|. \end{equation} We adopt the convention $u(x) =\limsup_{\xi \to x} u(\xi)$ at discontinuity points, so we always have the trivial inequality $\text{Size } u(x) \leq u(x)$. \begin{defn}\label{def:expjumpsize} Let $k\geq 1$ such that~\eqref{eq:k} holds and take $C_7$ as in Lemma~\ref{lem:jump_fsigma}. We say that a pair of functions $u,v \in \text{BV}(Y)$ with $|v| \leq u$ and $u>0$ has {\em exponentially decreasing jump-sizes}, if the discontinuities of $u$ and $v$ belong to $X_\infty = \cup_{j\geq 1} X'_j$ and if $x \in X'_j$ for $j > k$ is such a discontinuity, then \begin{equation}\label{eq:EJ} \text{Size } v(x) , \text{Size } u(x) \leq C_7 \rho^{-j} u(x). \end{equation} \end{defn} \begin{example} For the reader's convenience, we provide a simple example of functions $(u,v)$ with exponentially decreasing jump-sizes. Assume that $Y = [p,q]$. Let $\{a_i\}_{i \geq 1}$ be a sequence in $\mathbb{C}$ such that $|a_i| \to 0$ exponentially fast, and $\{x_i\}_{i \geq 1} \subset [p, q]$. 
Then $$ v = \sum_{i \geq 1} a_i 1_{[x_i,q]} \qquad u = \sum_{i \geq 1} |a_i| 1_{[x_i,q]} $$ is a pair of functions having exponentially decreasing jump-sizes when $X'_j = \{ x_j\}$. Indeed, let $\delta' > 0$ be arbitrary and let $N \in \mathbb{N}$ be such that $\sum_{i > N} |a_i| \leq \delta'$. Assuming for simplicity that the $x_i$ are distinct, we have \begin{align*} \text{Size } v(x_j) & = \lim_{\delta \to 0} \sup_{\xi, \xi' \in (x_j-\delta, x_j+\delta)} \left| \sum_{i\geq 1} a_i \Big(1_{[x_i,q]}(\xi) - 1_{[x_i,q]}(\xi')\Big) \right| \\ & \leq \lim_{\delta \to 0} \sup_{\xi, \xi' \in (x_j-\delta, x_j+\delta)} \left| \sum_{i=1}^N a_i\Big( 1_{[x_i,q]}(\xi) - 1_{[x_i,q]}(\xi')\Big) \right| + \delta' = |a_j| + \delta'. \end{align*} Since $\delta'$ was arbitrary, $\text{Size } v(x_j) \leq |a_j|$. So, $\text{Size } v(x_j)$ is exponentially small in $j$. On the other hand, if $x \notin \{ x_i\}_{i \in \mathbb{N}}$, then $v$ is continuous at $x$, so $\text{Size } v(x) = 0$. A similar computation holds for $\text{Size } u(x_j)$. \end{example} Definition~\ref{def:expjumpsize} states that the discontinuities of $(u,v)$ can only appear in $X_\infty := \cup_{j \geq 1} X'_j$, and we will see in Proposition~\ref{prop:inductive_jumpsize} that this property is preserved under $(u,v) \mapsto (\tilde\mathcal{L}_{\sigma}^n u, \tilde\mathcal{L}_s^n v)$. For a given $n$, we will distinguish between two types of discontinuities of $\tilde \mathcal{L}_{\sigma}^n u$.\\ {\bf (i)} {\em Created} discontinuities. In this case $x \in \partial\text{dom}(h)$ for some $ h \in \mathcal{H}_n$ and $x \in X'_j$ for some $1 \leq j \leq n$. The discontinuity is created because the sum $\sum_{h \in \mathcal{H}, \xi \in \text{dom}(h)}$ involved in $\tilde \mathcal{L}_{\sigma}^n u$ runs over a different collection of inverse branches depending on whether $\xi$ is close to the left or close to the right of $x$: in only one of the cases $h$ is part of this collection. 
It is not important whether the function $u$ is continuous at $y = h(x)$.\\ {\bf (ii)} {\em Propagated} discontinuities. Here the function $u:Y\to\mathbb{R}_{+}$ itself is discontinuous at $y = h(x)$ for some $h \in \mathcal{H}_n$. In this case $y \in X'_j$ for some $j \geq 1$ and hence $x \in X'_{j+n}$. Consequently, we define a cone $\mathcal{C}_b$ of $\text{BV}$ functions with discontinuities of the type prescribed in Definition~\ref{def:expjumpsize}. In Appendix~\ref{sec:Hofbauer}, we prove that the eigenfunction $f_{\sigma}$ and its reciprocal $1/f_\sigma$ belong to $\mathcal{C}_b$. This argument is independent of Section~\ref{sec-invcone}, where the invariance of $\mathcal{C}_b$ under the transformation $(u,v) \mapsto (\tilde \mathcal{L}_{\sigma}^n(\chi u), \tilde \mathcal{L}_s^nv)$ is proved. This invariance depends crucially on Proposition~\ref{prop:inductive_jumpsize}, which, together with an inductive bound on $\frac{\sup u|_p}{\inf u|_p}$ for $p \in \mathcal{P}_k$ and assumption \eqref{eq:k}, implies that discontinuities indeed behave as outlined in this section. To deal with \text{BV}\ observables $v \notin \mathcal{C}_b$, we exploit the fact that the size of discontinuities at points $x \notin X_\infty$ decreases exponentially under iteration of $\tilde \mathcal{L}_s$. This means that $\tilde \mathcal{L}_s^nv$ converges exponentially fast to $\mathcal{C}_b$, and this suffices to prove the results for arbitrary \text{BV}\ observables. \section{Towards the cone condition: discontinuities and jump-sizes} \label{sec-disc-jumps} Recall the sets $X'_j$ from Section~\ref{sec:finite_image} and let $k$ satisfy the conditions in Subsection~\ref{subsec-standhyp}. To deal with the discontinuities of $(u,v)$, we introduce the ``extra term'' for intervals $I \subset Y$: \begin{equation}\label{eq:E} E_I(u) := \sum_{j > k} \rho^{-j} \sum_{x \in X'_j \cap I^\circ} \limsup_{\xi \to x} u(\xi), \end{equation} where we recall that $\# X'_j \leq N_1$ for all $j \geq 1$.
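The jump-size \eqref{eq:jumpsize} is straightforward to probe numerically for step functions of the kind used in the example of Section~\ref{sec-newingr}. In the sketch below the jump locations $x_i$ and weights $a_i$ are arbitrary illustrative choices:

```python
def make_v(pairs):
    # v = sum_i a_i * 1_{[x_i, 1]} on Y = [0, 1], as in the step-function
    # example; the jump locations x_i and weights a_i are arbitrary.
    return lambda t: sum(a for (x, a) in pairs if t >= x)

def jump_size(v, x, delta=1e-9, samples=400):
    # Oscillation of v over the two-sided neighbourhood (x - delta, x + delta):
    # a numerical stand-in for Size v(x) = lim_{delta -> 0} of this quantity.
    pts = [x - delta + 2.0 * delta * i / samples for i in range(samples + 1)]
    vals = [v(t) for t in pts]
    return max(vals) - min(vals)
```

For $v = \tfrac12 1_{[0.1,1]} + \tfrac14 1_{[0.2,1]} + \tfrac18 1_{[0.3,1]}$ this recovers $\text{Size } v(0.3) = \tfrac18$ and returns zero at continuity points, matching \eqref{eq:jumpsize2}.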
The choice of $k$ in \eqref{eq:kE} implies that $C_8 E_I(u) \leq \frac{1}{12} \sup_I u$ for every $I$ contained in a single atom of $\mathcal{P}_k$. Throughout this and the next section we set $n = 2k$. We start with two lemmas on the properties of the eigenfunction $f_{\sigma}$, which will be proved in Appendix~\ref{sec:Hofbauer}. We recall (see Remark 1.4) that $f_{\sigma}$ is the positive eigenfunction of $\mathcal{L}_{\sigma}$ with eigenvalue $\lambda_{\sigma}$. \begin{lemma}\label{lem:jump_fsigma} There are $C_6, C_7 \geq 1$ such that for all $\sigma$ with $|\sigma|<\varepsilon$ the following holds: \begin{enumerate} \item $f_{\sigma}$ has discontinuities only in $X_\infty$, and if $x_j \in X'_j$, then $\text{Size } f_{\sigma}(x_j) \leq C_7 \rho^{-3j} \sup f_{\sigma}$. \item For every interval $I \subset Y$ we have $$ \text{Osc}_{I^\circ}(f_{\sigma}) \leq C_6 \text{Leb}(I) \inf_I f_{\sigma} + C_7 E_I(f_{\sigma}) \ \text{ and } \ \text{Osc}_{I^\circ}\Big(\frac{1}{f_{\sigma}}\Big) \leq C_6 \text{Leb}(I)\inf_I \frac{1}{f_{\sigma}} + C_7 E_I\Big(\frac{1}{f_{\sigma}}\Big). $$ \end{enumerate} \end{lemma} \begin{lemma}\label{lem:G} Choose $k$ such that \eqref{eq:k} holds and set $n = 2k$. Then there exist $\varepsilon \in (0, 1)$ and $C_9 \in (0, 1)$ such that $$ \lambda_{\sigma}^{-n} \inf_{x \in Y} \sum_{\stackrel{h \in \mathcal{H}_n, x \in \text{dom}(h)}{\text{range}(h) \subset p}} |h'(x)| e^{\sigma \varphi_n \circ h(x)} \geq C_9 \text{Leb}(p) $$ for all $p \in \mathcal{P}_k$ and $|\sigma| < \varepsilon$. \end{lemma} The main result in this section is the following. \begin{prop}\label{prop:inductive_jumpsize} Choose $k$ such that \eqref{eq:k} holds and set $n = 2k$.
If the pair $(u,v)$ with $|v| \leq u$ has exponentially decreasing jump-sizes \eqref{eq:EJ}, then for each $x \in X'_j$ with $j > k$, we have $$ \text{Size } \tilde\mathcal{L}_{\sigma}^n u(x)\ ,\ \text{Size } \tilde\mathcal{L}_s^n v(x) \leq \frac14 \max_{p \in \mathcal{P}_k}\frac{\sup u|_p}{\inf u|_p}\ C_7 \rho^{-j} \tilde\mathcal{L}_{\sigma}^n u(x). $$ \end{prop} \begin{remark}\label{rem:multiple} It is possible that $x$ belongs to different $X'_j$'s at the same time. This means that the discontinuity at $x$ is propagated by different branches of $F$ (or $x \in X'_1 \cap X'_j$ for some $j \geq 2$, and the discontinuity at $x$ is generated in $\mathcal{P}_1$ as well as propagated from another discontinuity at some point in $X'_{j-1}$). In this case, we add the jump-sizes at $x$ but the proof remains the same, {\em i.e.,\ } writing $x = x_j = x_{j'}$ for $x_j \in X'_j$ and $x_{j'} \in X'_{j'}$, $\text{Size } v(x) = \text{Size } v(x_j) + \text{Size } v(x_{j'}) \leq C_7 (\rho^{-j} + \rho^{-j'}) \| u \|_\infty$. \end{remark} \begin{proof}[Proof of Proposition~\ref{prop:inductive_jumpsize}] By Lemma~\ref{lem:jump_fsigma}, we know that $f_{\sigma}$ and $1/f_{\sigma}$ have exponentially decreasing jump-sizes with parameters $C_7$ and $\rho^3$. Let $y = \tilde h(x)$ for some $\tilde h \in \mathcal{H}_r$ and $r > k$ to be determined below. Let $p \in \mathcal{P}_k$ such that $y \in \overline{p}$. 
Then \begin{eqnarray}\label{eq:Lnu} \tilde \mathcal{L}_{\sigma}^r u(x) &\geq & \frac{1}{\lambda_{\sigma}^r f_{\sigma}(x)} \sum_{\stackrel{h \in \mathcal{H}_r}{\text{range}(h) \subset p}} |h'| e^{\sigma \varphi_r \circ h(x)} (f_{\sigma} u)\circ h(x) \nonumber \\ &\geq & \frac{\inf f_{\sigma}}{f_{\sigma}(x)} \frac{\inf u|_p}{\sup u|_p} u(y) \lambda_{\sigma}^{-r} \sum_{\stackrel{h \in \mathcal{H}_r, x \in \text{dom}(h)}{\text{range}(h) \subset p}} |h'(x)| e^{\sigma \varphi_r \circ h(x)} \nonumber \\ &\geq & \frac{\inf f_{\sigma}}{f_{\sigma}(x)} \frac{\inf u|_p}{\sup u|_p} C_9 \text{Leb}(p) u(y) \end{eqnarray} by Lemma~\ref{lem:G}. First take $j > n$ and $x \in X'_j$, so $x$ is a discontinuity propagated from some $y \in X'_{j-n}$. Let $\tilde h \in \mathcal{H}_n$ such that $\tilde h(x) = y$ be the corresponding inverse branch. This is the only inverse branch that contributes to $\text{Size } \tilde \mathcal{L}_s^n v(x)$. We compute using \eqref{eq:rho2} and Lemma~\ref{lem:jump_fsigma}, \begin{eqnarray}\label{eq:jump1} &\text{Size }& \!\!\!\!\tilde \mathcal{L}_s^n v(x) = \text{Size } \Big( |\tilde h'| e^{s \varphi_n \circ \tilde h} \frac{(f_{\sigma} v) \circ \tilde h}{\lambda_{\sigma}^n f_{\sigma}}\Big) (x) \nonumber \\ &\leq& \frac{1}{\lambda_{\sigma}^n} |\tilde h'(x)| e^{\sigma \varphi_n \circ \tilde h(x)} \Big( \frac{|v(y)|}{f_{\sigma}(x)} \text{Size } f_{\sigma}(y) + f_{\sigma}(y) |v(y)| \text{Size } \frac{1}{f_{\sigma}}(x) + \frac{f_{\sigma}(y)}{f_{\sigma}(x)} \text{Size } v(y) \Big) \nonumber \\ &\leq& 4\rho^{-3n} \frac{\sup f_{\sigma}}{f_{\sigma}(x)} u(y) \times \begin{cases} C_7 \rho^{-(j-n)} & \text{ if } j-n > k,\\ 1 & \text{ if } j-n \leq k. \end{cases} \end{eqnarray} This distinction is because \eqref{eq:EJ} only holds for $j-n > k$; for $j-n \leq k$ we only have the trivial bound $\text{Size } v(y) \leq u(y)$. 
The factor $4$ accounts for the three terms in the penultimate line above: since $\text{Size } v(y) \leq 2 u(y)$, the last of the three terms may contribute twice, which is why the factor is $4$ rather than $3$. Since $n=2k$, we have $\rho^{-2n} \leq \rho^{-3k}$, and in the second case $\rho^{-3n} \leq C_7 \rho^{-j} \rho^{-3k}$ because $j \leq n+k = 3k$ and $C_7 \geq 1$; hence \begin{equation}\label{eq:sizen} \text{Size } \tilde \mathcal{L}_s^n v(x) \leq \frac{4\sup f_{\sigma}}{\rho^{3k} f_{\sigma}(x)} C_7 \rho^{-j} u(y) \end{equation} in either case. Combining \eqref{eq:sizen} and \eqref{eq:Lnu} for $y = \tilde h(x)$ and $r=n$, and using the bound on $\text{Leb}(p)$ from \eqref{eq:k}, we obtain $$ \text{Size }\tilde \mathcal{L}_s^n v(x) \leq \frac{ 4C_7 }{C_9 \rho^{3k} \text{Leb}(p)} \frac{\sup u|_p}{\inf u|_p} \frac{\sup f_{\sigma}}{\inf f_{\sigma}} \ \rho^{-j} \tilde \mathcal{L}_{\sigma}^n u(x) \leq \frac14 \frac{\sup u|_p}{\inf u|_p}\ C_7 \rho^{-j} \tilde \mathcal{L}_{\sigma}^n u(x). $$ Now take $k < j \leq n$, so the discontinuity at $x \in X'_j$ is created by non-onto branches of $F^n$, and there exist $y \in X'_1$ and an inverse branch $\tilde h \in \mathcal{H}_{j-1}$ such that $y = \tilde h(x)$. Then, analogous to \eqref{eq:jump1}, \begin{eqnarray*} \text{Size } \tilde \mathcal{L}_s^n v(x) &=& \text{Size } \Big( |\tilde h'| e^{s \varphi_{j-1} \circ \tilde h} \frac{(f_{\sigma} v) \circ \tilde h}{\lambda_{\sigma}^n f_{\sigma}}\Big) (x) \\ &\leq& \frac{1}{\lambda_{\sigma}^n} |\tilde h'(x)| e^{\sigma \varphi_{j-1} \circ \tilde h(x)} \frac{4\sup f_{\sigma}}{f_{\sigma}(x)} u(y) \\ &\leq& \frac{\rho^{-3(j-1)}}{\lambda_{\sigma}^{n-j+1} } \frac{4\sup f_{\sigma}}{f_{\sigma}(x)} u(y) \leq \frac{4C_7 \sup f_{\sigma}}{\rho^k f_{\sigma}(x)} \ \rho^{-j} u(y) \end{eqnarray*} because $C_7 \geq 1$, $k < j \leq n$ and $\lambda_{\sigma}^{-4} \leq \rho$ by \eqref{eq-lamfsigma}.
Combining this with \eqref{eq:Lnu} to bound $u(y)$ (but applied to $r=j$) and \eqref{eq:k} gives $$ \text{Size }\tilde \mathcal{L}_s^n v(x) \leq \frac{ 4C_7 }{C_9 \rho^k \text{Leb}(p)} \frac{\sup u|_p}{\inf u|_p} \frac{\sup f_{\sigma}}{\inf f_{\sigma}} \ \rho^{-j} \tilde \mathcal{L}_{\sigma}^n u(x) \leq \frac14 \frac{\sup u|_p}{\inf u|_p}\ C_7 \rho^{-j} \tilde \mathcal{L}_{\sigma}^n u(x), $$ as before. The computations for $\tilde\mathcal{L}_{\sigma}^n u$ are the same. \end{proof} \section{Cancellation lemma} \label{sec-cance} We define a cone of function pairs $(u,v)$: \begin{align} \label{eq:cone} \mathcal{C}_b = \Big\{ (u ,v) \ : & \ 0 < u \ , \ 0 \leq |v| \leq u\ , (u,v) \text{ has exponentially decreasing } \nonumber\\ & \text{ jump-sizes } \eqref{eq:EJ} \text{ and } \text{Osc}_I v \leq C_{10} |b| \text{Leb}(I)\sup u|_I + C_8 E_I(u), \\ & \text{ for all intervals $I$ contained in a single atom of } \mathcal{P}_k \Big\}. \nonumber \end{align} Recall that the choice of $k$ in \eqref{eq:kE} implies that $C_8 E_I(u) \leq \frac{1}{12} \sup_I u$ for every $I$ contained in a single atom of $\mathcal{P}_k$. In Section~\ref{sec-invcone} we show that $\mathcal{C}_b$ is `invariant' in the sense of~\cite{BalVal}: see Lemma~\ref{lemma-invcone}. In this section we provide a cancellation lemma for pairs of functions in $\mathcal{C}_b$, similar to the one in~\cite{BalVal}. The statement and proof of Lemma~\ref{lemma-cancellation} below follow closely the pattern of the statements and proofs of~\cite[Lemma 2.4]{BalVal} and~\cite[Lemma 2.9]{AM}. In this section, we abbreviate, for a function $w$, $$ A_{s,h,n}w = e^{s\varphi_n\circ h}\, |h'|\, w\circ h $$ for $h \in \mathcal{H}_n$ and $\varphi_n = \sum_{j=0}^{n-1} \varphi \circ F^j$. \begin{lemma}\label{lemma-cancellation} Fix $k$ such that~\eqref{eq:k} holds. Recall that $\eta_0=\frac{\sqrt 7-1}{2}\in (2/3,1)$. Assume that the UNI condition in Subsection~\ref{subsec-uni} holds (with constant $D>0$, $k$ fixed and $n_0\geq 1$).
Set $\Delta=\frac{2\pi}{D}$. There exists $\delta\in (0,\Delta)$ such that the following hold for all $|\sigma|<\varepsilon$, $|b|>2\Delta$ and for all $(u,v)\in\mathcal{C}_b$: Let $p\in\mathcal{P}_k$ and let $h_1, h_2 \in \mathcal{H}_{n_0}$ be the branches from UNI. For every $y_0\in p$ there exists $y_1\in B_{\Delta/|b|}(y_0)$ such that one of the following inequalities holds on $B_{\delta/|b|}(y_1)$: \begin{itemize} \item[Case $h_1$.] $|A_{s,h_1,n_0}(f_{\sigma} v)+A_{s,h_2,n_0}(f_{\sigma} v)|\leq \eta_0 A_{\sigma,h_1,n_0}(f_{\sigma} u)+ A_{\sigma,h_2,n_0}(f_{\sigma} u)$. \item[Case $h_2$.] $|A_{s,h_1,n_0}(f_{\sigma} v)+A_{s,h_2,n_0}(f_{\sigma} v)|\leq A_{\sigma,h_1,n_0}(f_{\sigma} u)+\eta_0 A_{\sigma,h_2,n_0}(f_{\sigma} u)$. \end{itemize} \end{lemma} \begin{proof} Choose $\delta \in (0, \Delta)$ sufficiently small such that \begin{equation}\label{eq-delta} \delta \frac{D}{16\pi}< \frac{1}{12}, \quad\quad C_0\delta<\frac{\pi}{6}. \end{equation} Let $y_0\in Y$. Note that for $m=1,2$, \begin{align*} \sup_{B_{\delta/|b|}(y_0)}|v\circ h_m|\leq \text{Osc}_{B_{\delta/|b|}(y_0)}(v\circ h_m)+ \inf_{B_{\delta/|b|}(y_0)}|v\circ h_m| +\text{Size } v(B_{\delta/|b|}(y_0)). \end{align*} Since $(u,v)\in\mathcal{C}_b$, \begin{align*} \sup_{B_{\delta/|b|}(y_0)}|v\circ h_m|\leq C_{10} \text{Leb}(h_m(B_{\delta/|b|}(y_0)))|b|\sup_{B_{\delta/|b|}(y_0)}(u\circ h_m)&+ \inf_{B_{\delta/|b|}(y_0)}|v\circ h_m|\\ &+ C_8 E_{B_{\delta/|b|}(y_0)}(u). \end{align*} But \[ C_{10}\text{Leb}(h_m(B_{\delta/|b|}(y_0)))\leq C_{10} \rho_0^{-n_0}\text{Leb}(B_{\delta/|b|}(y_0))=C_{10}\rho_0^{-n_0}\frac{\delta}{|b|}\leq \frac{D}{16\pi}\frac{\delta}{|b|}, \] where in the last inequality we have used~\eqref{eq-n0-1}. Putting the above together with the estimate on $E_I(u)$ below equation~\eqref{eq:E} and using the choice of $\delta$ and $k$, \begin{align}\label{eq-1} \sup_{B_{\delta/|b|}(y_0)}|v\circ h_m| \leq \frac{1}{6} \sup_{B_{\delta/|b|}(y_0)}(u\circ h_m)+ \inf_{B_{\delta/|b|}(y_0)}|v\circ h_m|. 
\end{align} {\bf Case 1.} Suppose that $\inf_{B_{\delta/|b|}(y_0)}|v\circ h_m|\leq \frac{1}{2} \sup_{B_{\delta/|b|}(y_0)}(u\circ h_m)$ for some $m\in\{1,2\}$. Then~\eqref{eq-1} implies that \[ \sup_{B_{\delta/|b|}(y_0)}|v\circ h_m|\leq (\frac{1}{2}+\frac{1}{6})\sup_{B_{\delta/|b|}(y_0)}(u\circ h_m)=\frac{2}{3}\sup_{B_{\delta/|b|}(y_0)}(u\circ h_m) <\eta_0\sup_{B_{\delta/|b|}(y_0)}(u\circ h_m). \] Thus $|A_{s,h_m,n_0}(f_{\sigma} v)(y)|\leq \eta_0 A_{\sigma,h_m,n_0}(f_{\sigma} u)(y)$ for all $y\in B_{\delta/|b|}(y_0)$, while trivially $|A_{s,h_{m'},n_0}(f_{\sigma} v)|\leq A_{\sigma,h_{m'},n_0}(f_{\sigma} u)$ for the other index $m'$. So, Case $h_m$ holds with $y_1=y_0$. \\[2mm] {\bf Case 2.} Suppose the reverse; that is, suppose that $\inf_{B_{\delta/|b|}(y_0)}|v\circ h_m|> \frac{1}{2} \sup_{B_{\delta/|b|}(y_0)}(u\circ h_m)$ for $m=1,2$. For $m=1,2$, write $A_{s,h_m,n_0}(f_{\sigma} v)(y)=r_m(y)e^{i\theta_m(y)}$. Let $\theta(y)=\theta_1(y)-\theta_2(y)$. Recall that $\delta$ satisfies~\eqref{eq-delta} and that $\Delta=\frac{2\pi}{D}$. A calculation~\cite[Lemma 2.3]{BalVal} shows that if $\cos\theta\leq 1/2$ then $|r_1e^{i\theta_1}+ r_2e^{i\theta_2}|\leq\max\{\eta_0 r_1+r_2, r_1+\eta_0 r_2\}$. Thus, the conclusion follows once we show that $\cos\theta(y)\leq 1/2$, or equivalently $|\theta(y)-\pi|\leq 2\pi/3$ (mod $2\pi$), for all $y\in B_{\delta/|b|}(y_1)$ for some $y_1\in B_{\Delta/|b|}(y_0)$. In what follows we show that $\sup_{B_{\delta/|b|}(y_1)}|\theta-\pi|\leq 2\pi/3$, for some $y_1\in B_{\Delta/|b|}(y_0)$. We start by restricting to $B_{\xi/|b|}(y_0)$, where $\xi=\delta+\Delta$. Note that $\theta=V-b\psi$, where $\psi=\psi_{h_1,h_2}$ is the quantity defined in UNI and $V=\arg(v\circ h_1)-\arg(v\circ h_2)$. We first estimate $\text{Osc}_{B_{\xi/|b|}(y_0)}V$. For this purpose, we recall a basic trigonometry result (also used in~\cite{BalVal} and~\cite{AM}): if $|z_1|, |z_2|\geq c$ and $|z_1-z_2|\leq c(2-2\cos\omega)^{1/2}$ for $c>0$ and $|\omega|<\pi$ then $|\arg(z_1)-\arg(z_2)|\leq \omega$.
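For the reader's convenience, here is one short justification of this trigonometric fact (a sketch; the cited references may argue differently):

```latex
% The nearest-point projection onto the closed disc $\{|z| \le c\}$ is
% $1$-Lipschitz and maps $z$ with $|z| \ge c$ to $c\,z/|z|$, so
\[
c\,\bigl| e^{i\arg(z_1)} - e^{i\arg(z_2)} \bigr|
  \;\le\; |z_1 - z_2| \;\le\; c\,(2 - 2\cos\omega)^{1/2}.
\]
% Since $|e^{i\alpha} - e^{i\beta}| = (2 - 2\cos(\alpha-\beta))^{1/2}$ and
% $t \mapsto (2 - 2\cos t)^{1/2}$ is increasing on $[0,\pi]$, this forces
% $|\arg(z_1) - \arg(z_2)| \le \omega$, with the difference of arguments
% taken in $[0,\pi]$.
```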
Since $(u,v)\in\mathcal{C}_b$ and $\xi<4\pi/D$, we have by~\eqref{eq-n0-1}, for $m=1,2$, \begin{align}\label{eq-sup1} \nonumber\text{Osc}_{B_{\xi/|b|}(y_0)}(v\circ h_m)&\leq C_{10} \rho_0^{-n_0}\frac{4\pi}{D}\sup_{B_{\xi/|b|}(y_0)}(u\circ h_m)\\ &\leq \frac{1}{4}(2-2\cos\frac{\pi}{12})^{1/2}\sup_{B_{\xi/|b|}(y_0)}(u\circ h_m). \end{align} Recalling the assumption of Case 2, \begin{align}\label{eq-sup2} \nonumber\inf_{B_{\xi/|b|}(y_0)}|v\circ h_m|&\geq \sup_{B_{\xi/|b|}(y_0)}|v\circ h_m|-\text{Osc}_{B_{\xi/|b|}(y_0)}(v\circ h_m)\\ &\geq \frac{1}{2} \sup_{B_{\delta/|b|}(y_0)}(u\circ h_m)-\frac{1}{4}\sup_{B_{\xi/|b|}(y_0)}(u\circ h_m)=\frac{1}{4}\sup_{B_{\xi/|b|}(y_0)}(u\circ h_m). \end{align} By equations~\eqref{eq-sup1} and~\eqref{eq-sup2}, \[ \sup_{z_1, z_2 \in B_{\xi/|b|}(y_0)} \Big|\arg(v\circ h_m(z_1))-\arg(v\circ h_m(z_2))\Big|\leq\frac{\pi}{12}, \] and thus \begin{align}\label{eq-oscV} \text{Osc}_{B_{\xi/|b|}(y_0)}V\leq \frac{\pi}{6}. \end{align} Next, recall the UNI assumption in Subsection~\ref{subsec-uni}. Note that for any $z\in B_{\Delta/|b|}(y_0)$, \[ |b(\psi(z)-\psi(y_0))|\geq |b||z-y_0|\inf|\psi'|\geq D|b||z-y_0|=\frac{2\pi}{\Delta}|b||z-y_0|. \] Since $|b|>2\Delta$, the set $B_{\Delta/|b|}(y_0)\cap Y$ contains an interval of length at least $\Delta/|b|$ containing $y_0$. Hence, as $z$ varies over $B_{\Delta/|b|}(y_0)$, the quantity $b(\psi(z)-\psi(y_0))$ fills out an interval containing $0$ of length at least $2\pi$. This means that we can choose $y_1\in B_{\Delta/|b|}(y_0)$ such that \[ b(\psi(y_1)-\psi(y_0))=\theta(y_0)-\pi\mod 2\pi. \] Note that $\theta(y_0)-V(y_0)+b\psi(y_0)=0$. Using the above displayed equation, \[ \theta(y_1)-\pi=V(y_1) -b\psi(y_1)-\pi+\theta(y_0)-V(y_0)+b\psi(y_0)=V(y_1)-V(y_0). \] Together with~\eqref{eq-oscV}, the above equation implies that $|\theta(y_1)-\pi|\leq \pi/6$.
Recalling $\sup_Y|\psi'|\leq C_0$ and our choice of $\delta$, \begin{align*} \sup_{B_{\delta/|b|}(y_1)}\big|\theta-\pi\big|&\leq \frac{\pi}{6}+\sup_{B_{\delta/|b|}(y_1)}\Big| \theta-\theta(y_1)\Big| \\ &\leq \frac{\pi}{6}+|b| \sup_{B_{\delta/|b|}(y_1)}\Big| \psi-\psi(y_1)\Big| +\text{Osc}_{B_{\delta/|b|}(y_1)} V+\text{Osc}_{B_{\Delta/|b|}(y_0)} V \\ & \leq \frac{\pi}{6}+C_0\delta+2\text{Osc}_{B_{\xi/|b|}(y_0)}V \leq\frac{4\pi}{6}=\frac{2\pi}{3}, \end{align*} which ends the proof. \end{proof} Let $I^p$ be a closed interval contained in an atom of $\mathcal{P}_k$ such that if Lemma~\ref{lemma-cancellation} holds on $B_{\delta/|b|}(y_1)$, we also have $B_{\delta/|b|}(y_1)\subset I^p$. Write $type(I^p)=h_m$ if we are in case $h_m$. Then we can find finitely many disjoint intervals $I_j^p=[a_j, b_{j+1}]$, $j=0,\dots, N-1$ (with $0=b_0 \leq a_0<b_1<a_1<\ldots<b_N\leq a_N=1$) of $type(I_j^p) \in \{h_1, h_2\}$ with $\text{diam}(I_j^p)\in [\delta/|b|, 2\delta/|b|]$ and gaps $J_j^p=[b_j,a_j]$, $j=0,\dots, N$ with $\text{diam}(J_j^p)\in (0, 2\Delta/|b|]$. Let $\chi:Y\to[\eta, 1]$, with $\eta\in [\eta_0,1)$, be a $C^1$ function as constructed below (as in~\cite{AM, BalVal}): \begin{itemize} \item Let $p\in\mathcal{P}_k$ and, for $h\in\mathcal{H}_{n_0}$, write $h|_p: p\to h(p)$. Set $\chi\equiv 1$ on $Y\setminus (h_1(p) \cup h_2(p))$. \item On $h_1(p)$ we require that $\chi(h_1(y))=\eta$ for all $y$ lying in the middle third of an interval of type $h_1$ and that $\chi(h_1(y))=1$ for all $y$ not lying in an interval of type $h_1$. \item On $h_2(p)$ we require that $\chi(h_2(y))=\eta$ for all $y$ lying in the middle third of an interval of type $h_2$ and that $\chi(h_2(y))=1$ for all $y$ not lying in an interval of type $h_2$. \end{itemize} Since $\text{diam} (I_j^p)\geq\delta/|b|$, we can choose $\chi$ to be $C^1$ with $|\chi'|\leq \frac{3(1-\eta)|b|}{\delta P}$ where $P=\min_{m=1,2}\{\inf |h'_m|\}$.
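The stated bound on $|\chi'|$ can be traced as follows (a sketch under the construction above):

```latex
% On an interval of type $h_m$, the profile $\chi \circ h_m$ interpolates
% between $1$ and $\eta$ over the outer thirds of $I_j^p$, each of length at
% least $\tfrac{1}{3}\,\mathrm{diam}(I_j^p) \ge \tfrac{\delta}{3|b|}$. Hence
% $\chi \circ h_m$ can be chosen $C^1$ with
\[
\bigl| (\chi \circ h_m)' \bigr| \;\le\; \frac{1-\eta}{\delta/(3|b|)}
  \;=\; \frac{3(1-\eta)|b|}{\delta},
\]
% and since $|(\chi \circ h_m)'| = |\chi' \circ h_m|\,|h_m'|
% \ge |\chi' \circ h_m|\,P$ with $P = \min_{m=1,2}\{\inf|h_m'|\}$,
% this yields $|\chi'| \le 3(1-\eta)|b|/(\delta P)$ on $h_m(p)$.
```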
From here on we choose $\eta\in [\eta_0,1)$ sufficiently close to $1$ so that $|\chi'|\leq |b|$. Since $p\in\mathcal{P}_k$ is arbitrary in the statement of Lemma~\ref{lemma-cancellation} and the construction of $\chi$ above, we obtain \begin{cor}\label{cor-canc} Let $\delta,\Delta$ be as in Lemma~\ref{lemma-cancellation}. Let $|b|\geq 4\pi/D$ and $(u,v)\in\mathcal{C}_b$. Let $\chi=\chi(b,u,v)$ be the $C^1$ function described above. Then $|\tilde \mathcal{L}_s^{n_0}v(y)|\leq \tilde \mathcal{L}_{\sigma}^{n_0}(\chi u)(y)$, for all $s=\sigma+ib$, $|\sigma|<\varepsilon$ and all $y\in Y$. \end{cor} The following intervals $\hat I^p$ and $\hat J^p$ are constructed as in~\cite{AM, BalVal}. Let $\hat I^p=\cup_{j=0}^{N-1}\hat I_j^p$, where $\hat I_j^p$ denotes the middle third of $I_j^p$. Let $\hat J_j^p$ be the interval consisting of $J_j^p$ together with the rightmost third of $I_{j-1}^p$ and the leftmost third of $I_j^p$. Define $\hat J_0^p$ and $\hat J_N^p$ with the obvious modifications. By construction, $\text{diam}(\hat I_j^p)\geq \frac{1}{3}\frac{\delta}{|b|}$ and $\text{diam}(\hat J_j^p)\leq \big(\frac{4\delta}{3}+2\Delta\big)\frac{1}{|b|}$. Hence, there is a constant $\delta'=\delta/(4\delta+6\Delta)>0$ (independent of $b$) such that $\text{diam}(\hat I_j^p)\geq \delta'\text{diam}(\hat J_j^p)$ for $j=0,\dots, N-1$. \begin{prop}\label{prop-intsupinfw} Suppose that $w$ is a positive function with $\frac{\sup_p w}{\inf_p w}\leq M$ for some $M>0$. Then $\int_{\hat I^p}w\, d\text{Leb}\geq \delta'' \int_{\hat J^p}w\, d\text{Leb}$, where $\delta''=(2 M)^{-1} \delta'$. \end{prop} \begin{proof}Compute that for each $j$, \begin{eqnarray*} \int_{\hat I_j^p}w\, d\text{Leb} &\geq& \text{Leb}(\hat I_j^p)\inf_p w\geq M^{-1} \delta'\text{Leb}(\hat J_j^p) \sup_p w \\ &=& 2\delta''\text{Leb}(\hat J_j^p) \sup_p w \geq 2\delta'' \int_{\hat J_j^p}w\, d\text{Leb}.
\end{eqnarray*} Here the factor $2$ takes care of the intervals $\hat J_0^p$ and $\hat J_N^p$.~\end{proof} \section{Invariance of the cone} \label{sec-invcone} Recall that the cone $\mathcal{C}_b$ was defined in \eqref{eq:cone}. The main result of this section is: \begin{lemma}\label{lemma-invcone} Assume $|b| \geq 2$. Then $\mathcal{C}_b$ is invariant under $(u,v) \mapsto (\tilde\mathcal{L}_{\sigma}^{n_0}(\chi u), \tilde\mathcal{L}_s^{n_0}v)$, where $\chi = \chi(b,u,v) \in C^1(Y)$ comes from Corollary~\ref{cor-canc}. \end{lemma} \begin{proof} Since $\chi u\geq \eta u>0$ and $\tilde \mathcal{L}_{\sigma}$ is a positive operator we have $\tilde\mathcal{L}_{\sigma}^{n_0}(\chi u)>0$. The condition $|\tilde\mathcal{L}_s^{n_0}v|\leq \tilde\mathcal{L}_{\sigma}^{n_0}(\chi u)$ follows from Corollary~\ref{cor-canc}. In what follows we check the other cone conditions for the pair $(\tilde\mathcal{L}_{\sigma}^{n_0}(\chi u), \tilde\mathcal{L}_s^{n_0}v)$. For simplicity of exposition, we assume that $n_0 = 2qk$ for some $q \geq 1$. We will start with invariance of the exponential jump-size and oscillation conditions under $(u,v) \mapsto (\tilde \mathcal{L}_{\sigma}^n u, \tilde \mathcal{L}_s^n v)$ for a smaller exponent $n = 2k$. Iterating this, we get to the required exponent $n_0$. Hence define \begin{eqnarray*} (u_1, v_1) &=& (\tilde \mathcal{L}_{\sigma}^{n} u, \tilde \mathcal{L}_s^{n}v) \\ (u_2, v_2) &=& (\tilde \mathcal{L}_{\sigma}^{n} u_1, \tilde \mathcal{L}_s^{n}v_1) \\ \vdots \ \quad & \vdots & \qquad \quad \vdots \\ (u_{q-1}, v_{q-1}) &=& (\tilde \mathcal{L}_{\sigma}^{n} u_{q-2}, \tilde \mathcal{L}_s^{n}v_{q-2}) \\ (u_q, v_q) &=& (\tilde \mathcal{L}_{\sigma}^{n} u_{q-1}, \tilde \mathcal{L}_s^{n} v_{q-1}) = (\tilde \mathcal{L}_{\sigma}^{n_0} u, \tilde \mathcal{L}_s^{n_0}v). \end{eqnarray*} Since $|v| \leq u$, this construction shows that $|v_i| \leq u_i$ for all $1 \leq i \leq q$.
We will now show by induction that $(u_i, v_i)$ satisfies \eqref{eq:EJ} and $\text{Osc}_I v_i \leq C_{10} |b| \text{Leb}(I) \sup_I u_i + C_8 E_I(u_i)$ for all $1 \leq i \leq q$. {\bf The {\lq}exponential decrease of jump-sizes{\rq} condition in $\mathcal{C}_b$.} Without loss of generality we can refine (if needed) the partition $\mathcal{P}_k$ such that \begin{equation}\label{eq:Pk} C_{10} |b| \text{Leb}([\xi_{i-1}, \xi_i]) \leq \mbox{\small $\frac23$}, \end{equation} for all $i$. Then the oscillation condition applied to $(u,v=u)$ combined with \eqref{eq:Pk} and the fact that $C_8 E_I(u) \leq \frac{1}{12} \sup_p u$ gives $\sup_p u - \inf_p u = \text{Osc}_p u \leq (\frac23 + \frac{1}{12}) \sup_p u$. Therefore $\frac{\sup u|_p}{\inf u|_p} \leq 4$ for each $p \in \mathcal{P}_k$. The invariance of the exponential jump-size condition follows by Proposition~\ref{prop:inductive_jumpsize}, that is: the pair $(\tilde \mathcal{L}_{\sigma}^n u, \tilde \mathcal{L}_s^n v)$ satisfies \eqref{eq:EJ} as well. {\bf The {\lq}oscillation{\rq} condition in $\mathcal{C}_b$.} For the invariance of the oscillation condition, we need to verify $$ \text{Osc}_I(\tilde\mathcal{L}_s^nv) \leq C_{10} |b| \text{Leb}(I) \sup_{x \in I}(\tilde\mathcal{L}_{\sigma}^n u)(x) + C_8 E_I (\tilde\mathcal{L}_{\sigma}^nu). $$ For this purpose, we split $\text{Osc}_I(\tilde\mathcal{L}_s^nv)$ into a sum of jump-sizes at non-onto branches ({\em i.e.,\ } $\partial \text{dom}(h) \cap I^\circ \neq \emptyset$, corresponding to the ``created'' discontinuities), and a sum of oscillations over the interiors of the domains (which includes the ``propagated'' discontinuities).
Because of \eqref{eq:oscsize}, this gives the following: \begin{eqnarray*} \text{Osc}_I(\tilde\mathcal{L}_s^nv) &\leq & \sum_{h \in \mathcal{H}_n, \partial \text{dom}(h) \cap I^\circ \neq \emptyset } \text{Size } \Big( |h'| e^{s \varphi_n \circ h(x)} \frac{(f_{\sigma} v) \circ h }{\lambda_{\sigma}^n f_{\sigma} } \Big)(\partial \text{dom}(h) \cap I^\circ) \\ && + \sum_{h \in \mathcal{H}_n, \text{dom}(h) \cap I^\circ \neq \emptyset} \text{Osc}_I\Big( |h'| e^{s \varphi_n \circ h} \frac{(f_{\sigma} v) \circ h }{\lambda_{\sigma}^n f_{\sigma} } \Big) \\ &=& O_1 + O_2. \end{eqnarray*} For the term $O_1$ we use Proposition~\ref{prop:inductive_jumpsize}, and recall that $I \subset p$, so each created discontinuity $x$ in this sum belongs to $X'_j$ for some $k < j \leq n$. We obtain \begin{equation}\label{eq:E1} O_1 \leq C_7 \sum_{j=k+1}^n \rho^{-j} \sum_{x \in X'_j \cap I^\circ} \tilde\mathcal{L}_{\sigma}^n u(x), \end{equation} which contributes to $E_I(\tilde\mathcal{L}_{\sigma}^n u)$. Now for the sum $O_2$ (concerning the interiors of $\text{dom}(h)$, $h \in \mathcal{H}_n$), we decompose the summands into five parts, according to the five factors $|h'|$, $e^{s \varphi_n \circ h}$, $f_{\sigma} \circ h$, $1/f_{\sigma}$ and $v \circ h$ whose oscillations have to be estimated. The estimates for these five parts are as follows.\\ {\bf The term with $|h'|$.} For each $h \in \mathcal{H}_n$ we have $1 = h' \circ F^n \cdot(F^n)'$ and $0 = h'' \circ F^n \cdot ((F^n)')^2 + h' \circ F^n \cdot (F^n)''$. Using Adler's condition \eqref{eq:adler} for the branches of $F^n$, \begin{equation}\label{eq:hpp} |h''(\xi)| = \frac{ |(F^n)'' \circ h(\xi)|}{|(F^n)' \circ h(\xi)|^2} \cdot |h'(\xi)| \leq C_1 |h'(\xi)| \end{equation} for each $n \geq 1$ and $\xi \in a \in \alpha^n$. Hence by the Mean Value Theorem, $$ \text{Osc}_{I^\circ}(|h'|) \leq \text{Leb}(I) |h''(\xi)| \leq C_1 \text{Leb}(I) |h'(\xi)| \leq C_1e^{C_1} \text{Leb}(I) \inf_{x \in \text{dom}(h) \cap I} |h'(x)|.
$$ Summing over all $h \in \mathcal{H}_n$ with $\text{dom}(h) \cap I^\circ \neq \emptyset$, we get \begin{equation}\label{eq:O1} \sum_{\stackrel{h \in \mathcal{H}_n}{\text{dom}(h) \cap I^\circ \neq \emptyset}} \text{Osc}_{I^\circ}(|h'|) \sup_{x \in \text{dom}(h) \cap I^\circ} e^{\sigma \varphi_n \circ h(x)} \frac{(f_{\sigma} |v|) \circ h(x)}{\lambda_{\sigma}^n f_{\sigma}(x)} \leq C_1 e^{C_1} \text{Leb}(I) \sup_{x \in I} (\tilde \mathcal{L}_{\sigma}^n u)(x). \end{equation} \noindent {\bf The term with $e^{s \varphi_n \circ h}$}. Write $\varphi_n(x) = \sum_{i=0}^{n-1} \varphi \circ F^i(x)$ and $h = h_n\circ h_{n-1} \circ \dots \circ h_1 \in \mathcal{H}_n$ where $h_j \in \mathcal{H}_1$ for $1 \leq j \leq n$. Then by \eqref{eq:C2} \begin{align}\label{eq:phin} |(\varphi_n \circ h)'| &\leq \sum_{j=0}^{n-1}| (\varphi \circ h_{n-j} \circ F^{j+1} \circ h)'| = \sum_{j=0}^{n-1} |(\varphi \circ h_{n-j})'| \cdot |(F^{j+1} \circ h)'| \nonumber \\ & \leq C_2 \sum_{j=0}^{n-1} \rho_0^{-(n-(j+1))} \leq \frac{C_2 \rho_0}{\rho_0-1} =: C'_2. \end{align} By the Mean Value Theorem $\frac{\sup_{x \in I} e^{\sigma \varphi_n \circ h(x)}} {\inf_{x \in I} e^{\sigma \varphi_n \circ h(x)}} \leq e^{|\sigma|\, |(\varphi_n \circ h)'(\xi)|\, \text{Leb}(I)} \leq e^{\varepsilon C'_2}$. Therefore \begin{eqnarray*} \text{Osc}_{I^\circ}(e^{s \varphi_n \circ h}) &\leq& |s| e^{\sigma \varphi_n \circ h(\xi)} |(\varphi_n \circ h)'(\xi)| \text{Leb}(I) \\ &\leq & (1+\varepsilon)|b|\, \text{Leb}(I)\, \frac{\sup_{x \in I} e^{\sigma \varphi_n \circ h(x)}} {\inf_{x \in I} e^{\sigma \varphi_n \circ h(x)}} \inf_{x \in I} e^{\sigma \varphi_n \circ h(x)} \sup_{x \in I} |(\varphi_n \circ h)'(x)|\\ &\leq & (1+\varepsilon) e^{\varepsilon C'_2} C'_2 |b| \text{Leb}(I) \inf_{x \in I} e^{\sigma \varphi_n \circ h(x)}.
\end{eqnarray*} Summing over all $h \in \mathcal{H}_n$ with $\text{dom}(h) \cap I^\circ \neq \emptyset$, this gives \begin{align}\label{eq:O2} \sum_{\stackrel{h \in \mathcal{H}_n}{\text{dom}(h) \cap I^\circ \neq \emptyset}} \text{Osc}_{I^\circ}(e^{s \varphi_n \circ h}) & \sup_{x \in \text{dom}(h) \cap I^\circ} |h'(x)| \frac{(f_{\sigma} |v|) \circ h(x)}{\lambda_{\sigma}^n f_{\sigma}(x)} \nonumber \\ & \leq (1+\varepsilon) e^{\varepsilon C'_2} C'_2 |b| \text{Leb}(I) \sup_{x \in I} (\tilde \mathcal{L}_{\sigma}^n u)(x). \end{align} \noindent {\bf The term with $f_{\sigma} \circ h$}. Applying Lemma~\ref{lem:jump_fsigma}, part 2 to $f_{\sigma} \circ h$ we find \begin{equation}\label{eq:oscfh} \text{Osc}_{I^\circ}(f_{\sigma} \circ h) \leq C_6 \text{Leb}(h(I)) \inf_{x\in h(I)} f_{\sigma}(x) + C_7 E_{h(I)}(f_{\sigma}). \end{equation} For an arbitrary $h \in \mathcal{H}_n$, the first term in \eqref{eq:oscfh}, multiplied by $\sup_{x \in \text{dom}(h) \cap I^\circ} |h'(x)|\ | e^{s \varphi_n \circ h(x)} | \ \frac{|v| \circ h(x)}{\lambda_{\sigma}^n f_{\sigma}(x)}$ is bounded by $$ C_6 \text{Leb}(h(I))\sup_{x \in \text{dom}(h) \cap I^\circ} |h'(x)| e^{\sigma \varphi_n \circ h(x)} \frac{ (f_{\sigma} u) \circ h(x)}{\lambda_{\sigma}^n f_{\sigma}(x)}. $$ Summing over all $h \in \mathcal{H}_n$ with $\text{dom}(h) \cap I^\circ \neq \emptyset$ gives \begin{equation}\label{eq:O3} \sum_{\stackrel{h \in \mathcal{H}_n}{\text{dom}(h) \cap I^\circ \neq \emptyset}} C_6 \text{Leb}(h(I)) \sup_{x \in \text{dom}(h) \cap I^\circ} |h'(x)| e^{\sigma \varphi_n \circ h(x)} \frac{ (f_{\sigma} u) \circ h(x)}{\lambda_{\sigma}^n f_{\sigma}(x)} \leq C_6 \rho_0^{-n} \text{Leb}(I) \sup_{x \in I} (\tilde \mathcal{L}_{\sigma}^n u)(x). 
\end{equation} The second term in \eqref{eq:oscfh} is a sum over propagated discontinuities $x \in I^\circ$, and for each $x$ we let $\tilde h \in \mathcal{H}_n$ be the inverse branch such that $f_{\sigma}$ has a discontinuity at $y = \tilde h(x)$, and $j>k$ is such that $x \in X'_j$. By Lemma~\ref{lem:jump_fsigma}, the term in $E_{h(I)}(f_{\sigma})$ related to $y$ is bounded by $C_7 \rho^{-3(j-n)} f_{\sigma}(y)$. Multiplied by $|\tilde h'(x)|\ | e^{s \varphi_n \circ\tilde h(x)} |\ \frac{|v| \circ\tilde h(x)}{\lambda_{\sigma}^n f_{\sigma}(x)}$, and using \eqref{eq:Lnu} to obtain an upper bound for $u\circ \tilde h(x) = u(y)$, this gives \begin{eqnarray*} \frac{C_7}{\rho^{3(j-n)}} f_{\sigma}(y) |\tilde h'(x)|\ e^{\sigma \varphi_n \circ\tilde h(x)} \frac{|v| \circ\tilde h(x)}{\lambda_{\sigma}^n f_{\sigma}(x)} &\leq & \frac{C_7}{\rho^{3(j-n)}} \rho^{-3n} \frac{(f_{\sigma} u) \circ \tilde h(x)}{\lambda_{\sigma}^n f_{\sigma}(x)} \\ &\leq & C_7 \rho^{-j} \frac{\sup f_{\sigma}}{\inf f_{\sigma}} \frac{\sup u|_p}{\inf u|_p} \frac{1}{\rho^k C_9 \text{Leb}(p)} \tilde \mathcal{L}_{\sigma}^n u(x). \end{eqnarray*} Since $\frac{\sup u|_p}{\inf u|_p} \leq 4$, the bound on $\text{Leb}(p)$ in \eqref{eq:k} gives $\frac{\sup f_{\sigma}}{\inf f_{\sigma}} \frac{\sup u|_p}{\inf u|_p} \frac{1}{\rho^k C_9 \text{Leb}(p)} \leq 1$. Hence, summing over all propagated discontinuities $x \in I^\circ$ and corresponding branches, we get \begin{equation}\label{eq:E2} C_7 \sum_{j > n} \sum_{x \in X'_j \cap I^\circ} \rho^{-3(j-n)} f_{\sigma}(y) |h'(x)|\ e^{\sigma \varphi_n \circ h(x)} \frac{|v| \circ h(x)}{\lambda_{\sigma}^n f_{\sigma}(x)} \le C_7 \sum_{j > n} \rho^{-j} \sum_{x \in X'_j \cap I^\circ} \tilde\mathcal{L}_{\sigma}^n u(x), \end{equation} which contributes to $E_I(\tilde\mathcal{L}_{\sigma}^n u)$. \noindent {\bf The term with $1/f_{\sigma}$}.
Applying Lemma~\ref{lem:jump_fsigma}, part 2, to $1/f_{\sigma}$ we find \begin{equation}\label{eq:osc1f} \text{Osc}_{I^\circ}(1/f_{\sigma}) \leq C_6 \text{Leb}(I) \inf_{x\in I} 1/f_{\sigma}(x) + C_7 E_I(1/f_{\sigma}). \end{equation} For $h \in \mathcal{H}_n$, the first term of \eqref{eq:osc1f}, multiplied by $\sup_{x \in \text{dom}(h) \cap I^\circ} |h'(x)|\ | e^{s \varphi_n \circ h(x)} | \ \frac{(f_{\sigma}|v|) \circ h(x)}{\lambda_{\sigma}^n}$ is bounded by $$ C_6 \text{Leb}(I) \sup_{x \in \text{dom}(h) \cap I^\circ} |h'(x)| e^{\sigma \varphi_n \circ h(x)} \frac{ (f_{\sigma} u) \circ h(x)}{\lambda_{\sigma}^nf_{\sigma}(x)}. $$ Summing over all $h \in \mathcal{H}_n$ with $\text{dom}(h) \cap I^\circ \neq \emptyset$ gives \begin{equation}\label{eq:O4} \sum_{\stackrel{h \in \mathcal{H}_n}{\text{dom}(h) \cap I^\circ \neq \emptyset}} C_6 \text{Leb}(I) \sup_{x \in \text{dom}(h) \cap I^\circ} |h'(x)| e^{\sigma \varphi_n \circ h(x)} \frac{ (f_{\sigma} u) \circ h(x)}{\lambda_{\sigma}^n f_{\sigma}(x)} \leq C_6 \text{Leb}(I) \sup_{x \in I} (\tilde \mathcal{L}_{\sigma}^n u)(x). \end{equation} The second term of \eqref{eq:osc1f} is a sum over propagated discontinuities $x \in I^\circ$. Take $j > k$ such that $x \in X'_j$. Lemma~\ref{lem:jump_fsigma} gives that the term in $E_I$ related to $x$ is bounded by $C_7 \rho^{-3j}/f_{\sigma}(x)$. Multiplying with $|h'(x)| \ |e^{\sigma \varphi_n \circ h(x)}|\ \frac{(f_{\sigma} u) \circ h(x)}{\lambda_{\sigma}^n}$ and then summing over all $x \in \cup_{j > k} X'_j \cap I^\circ$ and $h \in \mathcal{H}_n$ with $x \in \text{dom}(h)$ gives \begin{equation}\label{eq:E3} C_7 \sum_{j > k} \rho^{-3j} \sum_{x \in X'_j \cap I^\circ} |h'(x)| e^{\sigma \varphi_n \circ h(x)} \frac{(f_{\sigma} u) \circ h(x)}{\lambda_{\sigma}^n f_{\sigma}(x) } \leq C_7 \sum_{j > k} \rho^{-j}\sum_{x \in X'_j \cap I^\circ} (\tilde \mathcal{L}_{\sigma}^n u)(x), \end{equation} which contributes to $E_I(\tilde \mathcal{L}_{\sigma}^n u)$.
\noindent {\bf The term with $v$}. Using the cone condition for $v$, we obtain \begin{eqnarray}\label{eq:oscvh} \text{Osc}_{I^\circ}(v \circ h) &\leq & C_{10} \text{Leb}(h(I)) |b| \sup_{x \in h(I)} u(x) + C_8 E_{h(I)}(u) \nonumber \\ &\leq & \rho_0^{-n} \frac{\sup u|_{h(I)}}{\inf u|_{h(I)}} C_{10} \text{Leb}(I)\ |b|\ \inf_{x \in h(I)} u(x) + C_8 E_{h(I)}(u). \end{eqnarray} For $h \in \mathcal{H}_n$, the first term of \eqref{eq:oscvh}, multiplied by $\sup_{x \in \text{dom}(h) \cap I^\circ} |h'(x)|\ | e^{s \varphi_n \circ h(x)} | \ \frac{f_{\sigma} \circ h(x)}{\lambda_{\sigma}^n f_{\sigma}(x)}$, is bounded by $$ 4\rho_0^{-n} C_{10} |b| \text{Leb}(I) \sup_{x \in \text{dom}(h) \cap I^\circ} |h'(x)| e^{\sigma \varphi_n \circ h(x)} \frac{ (f_{\sigma} u) \circ h(x)}{\lambda_{\sigma}^nf_{\sigma}(x)}. $$ Summing over all $h \in \mathcal{H}_n$ with $\text{dom}(h) \cap I^\circ \neq \emptyset$ gives \begin{equation}\label{eq:O5} \sum_{\stackrel{h \in \mathcal{H}_n}{\text{dom}(h) \cap I^\circ \neq \emptyset}} \frac{4C_{10}}{\rho_0^n} |b| \text{Leb}(I) \sup_{x \in \text{dom}(h) \cap I^\circ} |h'(x)| e^{\sigma \varphi_n \circ h(x)} \frac{ (f_{\sigma} u) \circ h(x)}{\lambda_{\sigma}^nf_{\sigma}(x)} \leq \frac{4C_{10}}{\rho_0^n} |b| \text{Leb}(I) \sup_{x \in I} (\tilde \mathcal{L}_{\sigma}^n u)(x). \end{equation} The second term of \eqref{eq:oscvh} is a sum over propagated discontinuities $x \in I^\circ$. For each such $x$ we let $\tilde h \in \mathcal{H}_n$ be the inverse branch such that $v$ has a discontinuity at $y = \tilde h(x)$, and $j$ is such that $x \in X'_j$. \\ {\bf Case a:} Assume that $j-n > k$. Since $u$ has exponentially decreasing jump-sizes, we get that the term in $E_{h(I)}$ related to $y$ is bounded by $C_7 \rho^{-(j-n)} u(y)$. 
After multiplying by $|\tilde h'(x)|\ | e^{s \varphi_n \circ\tilde h(x)} |\ \frac{f_{\sigma} \circ\tilde h(x)}{\lambda_{\sigma}^n f_{\sigma}(x)}$, and using \eqref{eq:Lnu} to obtain an upper bound for $u \circ \tilde h(x) = u(y)$, we have \begin{eqnarray*} C_7 \rho^{-(j-n)} u(y) |h'(x)|\ e^{\sigma \varphi_n \circ h(x)} \frac{f_{\sigma} \circ h(x)}{\lambda_{\sigma}^n f_{\sigma}(x)} &\leq & C_7 \rho^{-(j-n)} \rho^{-3n} \frac{(f_{\sigma} u) \circ \tilde h(x)}{f_{\sigma}(x)} \\ &\leq & C_7 \rho^{-j} \frac{\sup f_{\sigma}}{\inf f_{\sigma}} \frac{\sup u|_p}{\inf u|_p} \frac{1}{\rho^k C_9 \text{Leb}(p)} \tilde \mathcal{L}_{\sigma}^n u(x) \\ &\leq & \frac{C_7}{C_8} \rho^{-j} \tilde \mathcal{L}_{\sigma}^n u(x), \end{eqnarray*} because $\frac{\sup u|_p}{\inf u|_p} \leq 4$, and using the bound on $\text{Leb}(p)$ from \eqref{eq:k}. \\ {\bf Case b:} Assume that $j-n \leq k$. Then \eqref{eq:jumpsize} does not apply to the term in $E_{h(I)}$ related to $y$, so it can only be bounded by $u(y)$. Multiplied by $|\tilde h'(x)|\ | e^{s \varphi_n \circ\tilde h(x)} |\ \frac{f_{\sigma} \circ\tilde h(x)}{\lambda_{\sigma}^n f_{\sigma}(x)}$, and using \eqref{eq:Lnu} to obtain an upper bound for $u \circ \tilde h(x) = u(y)$, we have \begin{eqnarray*} u(y) |h'(x)|\ e^{\sigma \varphi_n \circ h(x)} \frac{f_{\sigma} \circ h(x)}{\lambda_{\sigma}^n f_{\sigma}(x)} &\leq & \rho^{-3n} \frac{(f_{\sigma} u) \circ \tilde h(x)}{f_{\sigma}(x)} \\ &\leq & \rho^{-2(n-k)} \frac{\sup f_{\sigma}}{\inf f_{\sigma}} \frac{\sup u|_p}{\inf u|_p} \frac{1}{\rho^k C_9 \text{Leb}(p)} \rho^{-j} \tilde \mathcal{L}_{\sigma}^n u(x) \\ &\leq & \frac{1}{C_8} \rho^{-j} \tilde \mathcal{L}_{\sigma}^n u(x), \end{eqnarray*} because $\frac{\sup u|_p}{\inf u|_p} \leq 4$, and using the bound on $\text{Leb}(p)$ from \eqref{eq:k}.
Hence, summing over all propagated discontinuities $x \in I^\circ$ and corresponding branches, we get \begin{equation}\label{eq:E4} C_7 \sum_{j > n} \sum_{x \in X'_j \cap I^\circ} \rho^{-(j-n)} f_{\sigma}(y) |\tilde h'(x)|\ e^{\sigma \varphi_n \circ \tilde h(x)} \frac{|v| \circ \tilde h(x)}{\lambda_{\sigma}^n f_{\sigma}(x)} \le \frac{C_7}{C_8} \sum_{j > n} \rho^{-j} \sum_{x \in X'_j \cap I^\circ} \tilde\mathcal{L}_{\sigma}^n u(x), \end{equation} which contributes to $E_I(\tilde\mathcal{L}_{\sigma}^n u)$. This completes the treatment of the five terms. Combining terms \eqref{eq:O1}, \eqref{eq:O2}, \eqref{eq:O3}, \eqref{eq:O4} and \eqref{eq:O5}, the oscillation part is bounded by $$ \left(C_1 e^{C_1} + (1+\varepsilon)|b| e^{\varepsilon C'_2} C'_2 + (1+\rho_0^{-n})C_6 + 4C_{10}\rho_0^{-n} \right) \text{Leb}(I) \sup_I(\tilde \mathcal{L}_{\sigma}^n u) $$ and by the choice of $C_{10}$ in Subsection~\ref{subsec-uni}, this is less than $C_{10} |b| \text{Leb}(I) \eta_0 \sup_I(\tilde \mathcal{L}_{\sigma}^n u)$ whenever $|b| \geq 2$. Recall $C_8 = 3C_7/\eta_0$. Combining \eqref{eq:E1}, \eqref{eq:E2}, \eqref{eq:E3} and \eqref{eq:E4}, the jump part is bounded by $$ 3C_7 E_I(\tilde\mathcal{L}_{\sigma}^n u) \leq C_8 \eta_0 E_I(\tilde\mathcal{L}_{\sigma}^n u). $$ This concludes the induction step, proving that \begin{eqnarray*} \text{Osc}_{I^\circ}(\tilde\mathcal{L}_s^{n_0}v) &\leq & C_{10} \eta_0 |b| \text{Leb}(I) \sup_I (\tilde\mathcal{L}_{\sigma}^{n_0} u) + C_8 \eta_0 E_I (\tilde\mathcal{L}_{\sigma}^{n_0} u) \\ &\leq & C_{10} |b| \text{Leb}(I) \sup_I (\tilde\mathcal{L}_{\sigma}^{n_0} (\chi u)) + C_8 E_I (\tilde\mathcal{L}_{\sigma}^{n_0} (\chi u)) \end{eqnarray*} as required.~\end{proof} \section{Proof of Theorem~\ref{th-main}} \label{sec-proofmain} Given Lemma~\ref{lemma-cancellation} and Lemma~\ref{lemma-invcone}, the proof of the $L^2$ contraction for functions in $\mathcal{C}_b$ follows almost word for word the proof of~\cite[Theorem 2.16]{AM}, with some obvious modifications.
We sketch the argument in Subsection~\ref{sub-L2}. In Subsection~\ref{sub-L2BV} we deal with arbitrary \text{BV}\ observables satisfying a mild condition via the $\|\,\|_b$ norm. In Subsection~\ref{sub-Completing the argument}, we complete the argument required for the proof of Theorem~\ref{th-main}. \subsection{$L^2$ contraction for functions in $\mathcal{C}_b$} \label{sub-L2} \begin{lemma}\label{lemma-L2} There exist $\varepsilon\in (0,1)$ and $\beta\in (0,1)$ such that for all $m\geq 1$, $s=\sigma+ib$, $|\sigma|<\varepsilon$, $|b|\geq\max\{4\pi/D,2\}$, \[ \int |\tilde \mathcal{L}_s^{mn_0}v|^2\, d\text{Leb}\leq \beta^m \|v\|_\infty^2, \] for all $v\in \text{BV}$ such that $(u,v)$, with $u$ constant, satisfies condition~\eqref{eq:EJ} in Definition~\ref{def:expjumpsize}. \end{lemma} \begin{proof} Set $u_0\equiv \|v\|_\infty$, $v_0=v$ and for $m\geq 0$, define \[ u_{m+1}=\tilde \mathcal{L}_{\sigma}^{n_0}(\chi_m u_m),\quad v_{m+1}=\tilde \mathcal{L}_s^{n_0}(v_m), \] where $\chi_m$ is a function depending on $b,u_m, v_m$. Since by definition $(u_0,v_0)\in\mathcal{C}_b$, it follows from Lemma~\ref{lemma-invcone} that $(u_m,v_m)\in\mathcal{C}_b$, for all $m$. Thus, we can construct $\chi_m:=\chi(b,u_m, v_m)$ inductively as in Corollary~\ref{cor-canc}. As in~\cite{AM, BalVal}, it is enough to show that there exists $\beta\in (0,1)$ such that $\int u_{m+1}^2 \, d\text{Leb}\leq \beta\int u_{m}^2 \, d\text{Leb}$ for all $m \geq 0$. Then $|\tilde \mathcal{L}_s^{mn_0}v|= |\tilde \mathcal{L}_s^{mn_0}v_0|= |v_m|\leq u_m$ and thus, \[ \int |\tilde \mathcal{L}_s^{mn_0}v|^2\, d\text{Leb}\leq\int u_{m}^2\, d\text{Leb}\leq \beta^m \int u_{0}^2\, d\text{Leb}=\beta^m \|v\|_\infty^2, \] as required. Let $\hat I^p,\hat J^p$ be as constructed before the statement of Proposition~\ref{prop-intsupinfw} and note that $Y=(\cup_p\hat I^p)\cup (\cup_p\hat J^p)$.
Proceeding as in the proof of~\cite[Lemma 2.13]{AM} (which relies on the use of the Cauchy--Schwarz inequality), we obtain that there exists $\eta_1<1$ such that for any $p\in\mathcal{P}_k$, $$ u_{m+1}^2(y) \leq \begin{cases} \xi(\sigma)\eta_1 (\tilde \mathcal{L}_0^{n_0} u_{m}^2)(y)& \text{ if } y\in\hat I^p,\\ \xi(\sigma)(\tilde \mathcal{L}_0^{n_0} u_{m}^2)(y)& \text{ if } y\in\hat J^p, \end{cases} $$ where $\xi(\sigma)=\lambda_{\sigma}^{-2n_0}\sup_p(f_0/f_{\sigma})\sup_p(f_{2\sigma}/f_{\sigma})\sup_p(f_{\sigma}/f_0)\sup_p(f_{\sigma}/f_{2\sigma})$. Since $(u_m,v_m)\in\mathcal{C}_b$, we have, in particular, that for any $p\in\mathcal{P}_k$, $\sup_p u_m-\inf_p u_m \leq \text{Osc}_p u_m \leq (\frac{2}{3}+\frac{1}{12})\sup_p u_m$ and thus, $\frac{\sup_p u_m}{\inf_p u_m}\leq 4$. Similarly, $\frac{\sup_p u_m^2}{\inf_p u_m^2}\leq 16$. Hence, \begin{align*} \frac{\sup_p \tilde \mathcal{L}_0^{n_0}(u_m^2)}{\inf_p \tilde \mathcal{L}_0^{n_0}(u_m^2)} &= \frac{\sup_p \sum_{h\in\mathcal{H}_{n_0}}|h'| (f_0\circ h) (u_m^2\circ h)/f_0}{\inf_p \sum_{h\in\mathcal{H}_{n_0}}|h'| (f_0\circ h) (u_m^2\circ h)/f_0} \\ &\leq 16 \Big(\frac{\sup_p f_0}{\inf_p f_0}\Big)^2\ \frac{\sup_p \sum_{h\in\mathcal{H}_{n_0}}|h'|}{\inf_p \sum_{h\in\mathcal{H}_{n_0}}|h'|}<\infty. \end{align*} Let $w:= \tilde \mathcal{L}_0^{n_0}(u_m^2)$, set $M:=16 \Big(\frac{\sup_p f_0}{\inf_p f_0}\Big)^2\ \frac{\sup_p \sum_{h\in\mathcal{H}_{n_0}}|h'|}{\inf_p \sum_{h\in\mathcal{H}_{n_0}}|h'|}$ and note that $w$ satisfies the conditions of Proposition~\ref{prop-intsupinfw} for such $M$. For any $p\in\mathcal{P}_k$, it follows that $\int_{\hat I^p} w\, d\text{Leb}\geq \delta''\int_{\hat J^p} w\, d\text{Leb}$ and thus, \[ \int_{\cup_p \hat I^p} w\, d\text{Leb}\geq \delta''\int_{\cup_p\hat J^p} w\, d\text{Leb}. \] From here on the argument follows word for word the argument used at the end of the proof of~\cite[Theorem 2.16]{AM}. We provide it here for completeness. Let $\beta'=\frac{1+\eta_1\delta''}{1+\delta''}<1$.
Then $\delta''=\frac{1-\beta'}{\beta'-\eta_1}$ and thus, $(\beta'-\eta_1)\int_{\cup_p \hat I^p} w\, d\text{Leb}\geq (1-\beta')\int_{\cup_p\hat J^p} w\, d\text{Leb}$. Since also $Y=(\cup_p\hat I^p)\cup (\cup_p\hat J^p)$, we obtain $\eta_1\int_{\cup_p \hat I^p} w\, d\text{Leb} +\int_{\cup_p\hat J^p} w\, d\text{Leb}\leq \beta' \int_{Y} w\, d\text{Leb}$. Putting the above together, \begin{align*} \int_{Y} u_{m+1}^2\, d\text{Leb} &\leq \xi(\sigma)\Big(\eta_1\int_{\cup_p \hat I^p} w\, d\text{Leb} +\int_{\cup_p\hat J^p} w\, d\text{Leb}\Big) \\ &\leq \xi(\sigma)\beta'\int_Y \tilde \mathcal{L}_0^{n_0} (u_{m}^2)\, d\text{Leb}=\xi(\sigma)\beta'\int_Y u_{m}^2\, d\text{Leb}. \end{align*} To conclude, recall that by Remark~\ref{rem:lambda}, if necessary, we can shrink $\varepsilon$ such that $\beta:=\xi(\sigma)\beta'<1$ for all $|\sigma|<\varepsilon$.~\end{proof} \subsection{Dealing with arbitrary \text{BV}\ observables via the $\|\,\|_b$ norm} \label{sub-L2BV} The cone $\mathcal{C}_b$ represents only a specific class of \text{BV}\ observables, namely with discontinuities of prescribed size and location. It is, in fact, the smallest Banach space that is invariant under $(u,v) \mapsto (\tilde \mathcal{L}_{\sigma} u, \tilde \mathcal{L}_s v)$ and contains all continuous \text{BV}\ functions. In this section we are concerned with the behaviour of $\tilde \mathcal{L}_s^r$ acting on \text{BV}\ functions satisfying a certain mild condition (less restrictive than belonging to $\mathcal{C}_b$). To phrase such a condition we let $C_{11}$ be a positive constant such that \begin{equation} \label{eq-cst11} C_{11} = 64 (1+c)^2 \Big( \frac{\sup f_{\sigma}}{\inf f_{\sigma}} \Big)^2 \, \frac{\sup f_{2\sigma}}{\inf f_{2\sigma}} \left( \frac{\sup f_{\sigma}}{\inf f_{\sigma}}\frac{\sup f_0}{\inf f_0} \right)^2, \end{equation} where $c$ is the constant in the statement of Proposition~\ref{prop-LYineq}.
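The averaging step that closes the proof of Lemma~\ref{lemma-L2} above is elementary and can be sanity-checked numerically. In the sketch below the values of $\eta_1$ and $\delta''$ are arbitrary samples, not the constants of the paper:

```python
import random

def beta_prime(eta1, dpp):
    # beta' = (1 + eta1*dpp)/(1 + dpp): a convex combination of 1 and eta1,
    # with weight dpp/(1 + dpp) on eta1 (here dpp plays the role of delta'')
    return (1 + eta1 * dpp) / (1 + dpp)

random.seed(0)
for _ in range(1000):
    eta1 = random.uniform(0.0, 0.99)
    dpp = random.uniform(0.01, 100.0)
    bp = beta_prime(eta1, dpp)
    assert eta1 < bp < 1
    # the identity delta'' = (1 - beta')/(beta' - eta1)
    assert abs(dpp - (1 - bp) / (bp - eta1)) < 1e-6 * dpp
    # averaging: int_I w >= dpp * int_J w implies eta1*I + J <= beta'*(I + J)
    J = random.uniform(0.0, 10.0)
    I = dpp * J * random.uniform(1.0, 5.0)
    assert eta1 * I + J <= bp * (I + J) + 1e-6
```

The loop confirms that $\beta'$ is a strict convex combination of $\eta_1$ and $1$, and that the averaging inequality follows from $\int_{\cup_p\hat I^p} w\, d\text{Leb}\geq \delta''\int_{\cup_p\hat J^p} w\, d\text{Leb}$.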
We use the following hypothesis: \begin{equation}\label{eq:H} \begin{cases} \text{Var}_Y v\leq C_{11} |b|^2 \rho^{m n_0} \|v\|_1 & \text{ if } \sigma \geq 0,\\ \text{Var}_Y(e^{\sigma \varphi_{m n_0}} v) \leq C_{11} |b|^2 \rho^{m n_0} \|e^{\sigma \varphi_{m n_0}}v\|_1 & \text{ if } \sigma < 0. \end{cases} \tag{$H_{\sigma,m}$} \end{equation} The next result, Proposition~\ref{prop:to-cone}, says that for $v \in \text{BV}(Y)$ satisfying \eqref{eq:H}, the iterate $\tilde \mathcal{L}_s^rv$ is exponentially close to the cone $\mathcal{C}_b$ in $\| \ \|_\infty$, because jump-sizes of discontinuities of $v$ outside $X_\infty$ die out at an exponential rate and are not newly created by the dynamics of $F$. \begin{prop}\label{prop:to-cone} There exists $\varepsilon\in (0,1)$ such that for all $s=\sigma+ib$, $|\sigma|<\varepsilon$, $|b|\geq\max\{4\pi/D,2\}$, and all $v \in \text{BV}(Y)$ such that \eqref{eq:H} holds for some $m \geq 1$, there exists a pair $(u_{m n_0}, w_{m n_0}) \in \mathcal{C}_b$ such that \begin{equation*} \| \tilde \mathcal{L}_s^{m n_0}v - w_{m n_0} \|_\infty \leq 2C_{10} \ \rho^{-m n_0} |b|\| v \|_\infty \ \mbox{ and }\ \| w_{m n_0}\|_\infty \leq\| v \|_\infty. \end{equation*} \end{prop} The above result will allow us to prove \begin{lemma}\label{lemma-L2BV} There exist $\varepsilon\in (0,1)$ and $\beta\in (0,1)$ such that for all $s=\sigma+ib$, $|\sigma|<\varepsilon$, $|b|\geq\max\{4\pi/D,2\}$ and for all $m\geq 1$, \[ \|\tilde \mathcal{L}_s^{3mn_0}v\|_b\leq (1+|b|)^{-1} \text{Var}_Y(\tilde \mathcal{L}_s^{3mn_0}v)+(2C_{10}\rho^{-m n_0}|b|+\beta^{m})\|v\|_\infty \] for all $v\in \text{BV}(Y)$ satisfying \eqref{eq:H}. \end{lemma} \begin{proof}[Proof of Proposition~\ref{prop:to-cone}] Let $v \in \text{BV}(Y)$ be arbitrary and take $r = m n_0$ (this is a multiple of $k$ because $n_0$ is).
Write $g_r = \tilde \mathcal{L}_s^rv$ and $\bar g_r = \tilde\mathcal{L}_{\sigma}^r |v|$; for every fixed $b\in\mathbb{R}$, they belong to $\text{BV}(Y)$ as well by Proposition~\ref{prop-LYineq}. Therefore $g_r$ has at most countably many discontinuity points, which we denote by $\{ x_i \}_{i \in \mathbb{N}}$. Assume throughout this proof that $g_r$ is continuous from the right; this can be achieved by adjusting $g_r$ at $\{ x_i \}_{i \in \mathbb{N}}$, so it has no effect on the $L^p$-norm for any $p \in [1,\infty]$. To estimate the jump-size $|a_i|$ of $g_r$ at $x_i \in X'_j$ for some $j \leq r$, we note that this discontinuity is created by non-onto branches of $F^r$, and there exist $y_i \in X'_1$ and an inverse branch $\tilde h \in \mathcal{H}_{j-1}$ such that $y_i = \tilde h(x_i)$. The jump-size of $\tilde \mathcal{L}_s^rv$ at $x_i$ can be expressed as a sum over $h \in \mathcal{H}_{r-(j-1)}$, where in each summand $h$ is composed with $\tilde h$. Then \begin{align}\label{eq:sizeg} \text{Size }& \tilde \mathcal{L}_s^r v(x_i) \leq \sum_{h \in \mathcal{H}_{r-(j-1)}} \!\!\!\!\!\! |(h \circ \tilde h)'(x_i)|\ |e^{s \varphi_{r-(j-1)} \circ h \circ \tilde h(x_i) + s \varphi_{j-1} \circ \tilde h(x_i)}|\ \frac{(f_{\sigma} v) \circ h \circ \tilde h(x_i)}{\lambda_{\sigma}^r f_{\sigma}(x_i)} \nonumber \\ &= \sum_{h \in \mathcal{H}_{r-(j-1)}} \!\!\!\!\!\! |h'(y_i)|\ e^{\sigma \varphi_{r-(j-1)} \circ h(y_i)} \frac{(f_{\sigma} v) \circ h(y_i)}{\lambda_{\sigma}^{r-(j-1)} f_{\sigma}(y_i)} |\tilde h'(x_i)| \ e^{\sigma \varphi_{j-1} \circ \tilde h(x_i)} \frac{f_{\sigma}(y_i)}{\lambda_{\sigma}^{j-1}f_{\sigma}(x_i)} \nonumber \\ &\leq \Big( \sum_{h \in \mathcal{H}_{r-(j-1)}}\!\!\!\!\!\! |h'(y_i)|\ e^{\sigma \varphi_{r-(j-1)} \circ h(y_i)} \frac{f_{\sigma} \circ h(y_i)}{\lambda_{\sigma}^{r-(j-1)} f_{\sigma}(y_i)}\Big) \ \| v \|_\infty\ \rho^{-3(j-1)}\ \frac{\sup f_{\sigma}}{\inf f_{\sigma}} \nonumber \\ &\leq \| v \|_\infty\ \rho^3\ \frac{\sup f_{\sigma}}{\inf f_{\sigma}} \rho^{-3j}.
\end{align} where the sum in brackets in the penultimate line is $1$ because $f_{\sigma}$ is an eigenfunction of $\mathcal{L}_{\sigma}$. For $r>k$, let $Q_{r}$ be an interval partition of $Y$ refining $\mathcal{P}_r$ such that $\frac12 \rho^{-r} < \text{Leb}(I_{r}) < 2 \rho^{-r}$ for every $I_{r} \in Q_{r}$. In fact, by adjusting $Q_r$ by an arbitrary small amount if necessary, we can assume that $g_r$ and $\bar g_r$ are continuous at every point in $\partial I_r \setminus X_r$, $I_r \in Q_r$. Construct $w_r$ and $u_r$ to be affine on each $(p,q) = I_r \in Q_r$ such that $$ \lim_{x \downarrow p} w_r(x) = \lim_{x \downarrow p} g_r(x) \quad \text{ and } \quad \lim_{x \uparrow q} w_r(x) = \lim_{x \uparrow q} g_r(x) $$ and similarly $$ \lim_{x \downarrow p} u_r(x) = \lim_{x \downarrow p} \bar g_r(x) \quad \text{ and } \quad \lim_{x \uparrow q} u_r(x) = \lim_{x \uparrow q} \bar g_r(x). $$ Then $w_r$ and $u_r$ are continuous on $Y \setminus X_r$ and as $\bar g_r \geq |g_r|$, it is immediate that $u_r \geq |w_r|$ on $Y$. 
The main estimate now concerns the oscillation $$ \text{Osc}_{I_r} g_r = \text{Osc}_{I_r} \left( \sum_{h \in \mathcal{H}_r, I_r \subset \text{dom}(h)} \frac{e^{s\varphi_r \circ h} |h'|}{\lambda_{\sigma}^r f_{\sigma}} (f_{\sigma} v) \circ h \right) \quad \text{ for } I_r \in Q_r, $$ which we will split into five terms, similarly to the proof of the invariance of the cone.\\[2mm] {\bf The term with $|h'|$} is bounded above by $C_1 e^{C_1} \text{Leb}(I_r) \sup_{x \in I_r} \tilde \mathcal{L}_{\sigma}^r|v|$ as in \eqref{eq:O1}.\\[2mm] {\bf The term with $e^{s \varphi_r \circ h}$} is bounded above by $(1+|\sigma|) e^{\sigma C'_2} C'_2 |b| \text{Leb}(I_r) \sup_{x \in I_r} \tilde \mathcal{L}_{\sigma}^r|v|$ as in \eqref{eq:O2}.\\[2mm] {\bf The term with $1/f_{\sigma}$} is bounded above, by combining \eqref{eq:O4} and \eqref{eq:E3}, by $$ C_6 \text{Leb}(I_r) \sup_{x \in I_r} \tilde \mathcal{L}_{\sigma}^r |v| + C_7 \sum_{j > r} \rho^{-j} \sum_{x \in X'_j \cap I_r} \tilde \mathcal{L}_{\sigma}^r|v|(x). $$ Here the second term is bounded by $C_7 N_1 \frac{\rho^{-r}}{\rho-1} \sup_{x \in I_r} \tilde \mathcal{L}_{\sigma}^r|v| \leq 2C_7\frac{N_1}{\rho-1} \text{Leb}(I_r) \sup_{x \in I_r} \tilde \mathcal{L}_{\sigma}^r|v|$, where we recall that $\# X'_j \leq N_1$ for all $j \geq 1$. \\[2mm] {\bf The term with $f_{\sigma} \circ h$} is bounded above, by combining \eqref{eq:O3} and \eqref{eq:E2} and arguing as in the previous case, by $$ C_6 \rho_0^{-r} \text{Leb}(I_r) \sup_{x \in I_r} \tilde\mathcal{L}_{\sigma}^r|v| + C_7 \sum_{j > r} \rho^{-j}\!\!\! \sum_{x \in X'_j \cap I_r} \tilde \mathcal{L}_{\sigma}^r|v|(x) \leq (C_6 \rho_0^{-r} + 2C_7 \frac{N_1}{\rho-1}) \text{Leb}(I_r) \sup_{x \in I_r} \tilde \mathcal{L}_{\sigma}^r|v|. $$ {\bf The term with $v \circ h$}: First we treat the case $\sigma \geq 0$.
By Lemma~\ref{lem:Varvar} (which also gives a lower bound $r_0$ for $r$) \[ \|v\|_1 \leq \frac{K_1}{\text{Leb}(I_r)} \int_{F^{-r}(I_r)} |v|\, d\text{Leb} \quad \text{ for all }\, I_r \in Q_r, \] where $K_1 = 6e^{C_1}/\eta$. Recall that \eqref{eq:H} holds with $C_{11}>1$ as defined in~\eqref{eq-cst11}. Compute that \begin{align*} \sum_{\stackrel{h \in \mathcal{H}_r}{I_r \subset \text{dom}(h)}} & \left( \sup_{x \in I_r} \frac{|e^{s \varphi_r \circ h}| \, |h'|}{\lambda_{\sigma}^r f_{\sigma}} f_{\sigma} \circ h \right) \text{Osc}_{I_r}(v \circ h) \leq \rho^{-3r} \frac{\sup f_{\sigma}}{\inf f_{\sigma}} \sum_{\stackrel{h \in \mathcal{H}_r}{I_r \subset\text{dom}(h)}} \text{Osc}_{h(I_r)} v\\ \leq\ & \rho^{-3r} \frac{\sup f_{\sigma}}{\inf f_{\sigma}} \text{Var}_{F^{-r}(I_r)} v\leq 2\rho^{-2r} \text{Leb}(I_r) \frac{\sup f_{\sigma}}{\inf f_{\sigma}} \text{Var}_{Y} v\\ \leq\ & 2\rho^{-2r} \text{Leb}(I_r) \frac{\sup f_{\sigma}}{\inf f_{\sigma}}\, C_{11} |b|^2 \rho^r \int_Y |v|\, d\text{Leb} \\ \leq\ & 2 C_{11} |b|^2 K_1\rho^{-r} \frac{\sup f_{\sigma}}{\inf f_{\sigma}}\int_{F^{-r}(I_r)} |v| \, d\text{Leb}\\ \leq\ & 2C_{11}|b|^2 K_1\rho^{-r}\left(\frac{\sup f_{\sigma}}{\inf f_{\sigma}}\right)^2 \sum_{\stackrel{h \in \mathcal{H}_r}{I_r \subset\text{dom}(h)}} \int_{I_r} \frac{ |h'|}{f_{\sigma}} (f_{\sigma} |v|) \circ h \, d\text{Leb}. 
\end{align*} Because $\sigma \geq 0$, we can continue as \begin{align*} \sum_{\stackrel{h \in \mathcal{H}_r}{I_r \subset\text{dom}(h)}} & \Big( \sup_{x \in I_r} \frac{|e^{s \varphi_r \circ h}| \, |h'|}{\lambda_{\sigma}^r f_{\sigma}} f_{\sigma} \circ h \Big) \text{Osc}_{I_r}(v \circ h) \\ \leq\ & 2 C_{11} |b|^2 K_1\rho^{-r} \lambda_{\sigma}^r \left(\frac{\sup f_{\sigma}}{\inf f_{\sigma}}\right)^2 \sum_{\stackrel{h \in \mathcal{H}_r}{I_r \subset\text{dom}(h)}} \int_{I_r} \frac{e^{\sigma \varphi_r \circ h} |h'|}{\lambda_{\sigma}^r f_{\sigma}} (f_{\sigma} |v|) \circ h \, d\text{Leb}\\ \leq\ & 2 C_{11}|b|^2 K_1\rho^{-r} \lambda_{\sigma}^r \left(\frac{\sup f_{\sigma}}{\inf f_{\sigma}}\right)^2 \text{Leb}(I_r) \sup_{x \in I_r} \tilde \mathcal{L}_{\sigma}^r|v|. \end{align*} Since $\rho > \lambda_{\sigma}$, we obtain the upper bound $\text{Leb}(I_r) \sup_{x \in I_r} \tilde \mathcal{L}_{\sigma}^r|v|$ by taking $r$ sufficiently large. Now we treat the case $\sigma < 0$. By Lemma~\ref{lem:Varvar} applied to $e^{\sigma \varphi_r} v$ (and with the same lower bound $r_0$ for $r$ as before) \[ \| e^{\sigma \varphi_r} v\|_1 \leq \frac{K_1}{\text{Leb}(I_r)} \int_{F^{-r}(I_r)} |e^{\sigma \varphi_r} v|\, d\text{Leb} \quad \text{ for all }\, I_r \in Q_r. \] Note that \begin{align*} \sum_{\stackrel{h \in \mathcal{H}_r}{I_r \subset\text{dom}(h)}} & \left(\sup_{x \in I_r} \frac{|e^{s \varphi_r \circ h}| \, |h'|}{\lambda_{\sigma}^r f_{\sigma}} f_{\sigma} \circ h \right) \text{Osc}_{I_r}(v \circ h) \\ \leq \ & e^{\varepsilon C'_2} \sum_{\stackrel{h \in \mathcal{H}_r}{I_r \subset\text{dom}(h)}} \left( \sup_{x \in I_r} \frac{|h'|}{\lambda_{\sigma}^r f_{\sigma}} f_{\sigma} \circ h \right) \text{Osc}_{I_r}( (e^{\sigma \varphi_r} v) \circ h) \\ \leq \ & e^{\varepsilon C'_2} \lambda_{\sigma}^{-r} \frac{\sup f_{\sigma}}{\inf f_{\sigma}} \rho_0^{-r}\ \text{Osc}_{I_r}( (e^{\sigma \varphi_r} v) \circ h). 
\end{align*} Estimating the oscillation as in the case $\sigma \geq 0$, and using \eqref{eq:H}, we find the upper bound \begin{align*} \sum_{\stackrel{h \in \mathcal{H}_r}{I_r \subset\text{dom}(h)}} & \left(\sup_{x \in I_r} \frac{|e^{s \varphi_r \circ h}| \, |h'|}{\lambda_{\sigma}^r f_{\sigma}} f_{\sigma} \circ h \right) \text{Osc}_{I_r}(v \circ h) \\ \leq\ & 2e^{\varepsilon C'_2} C_{11}|b|^2 K_1\rho^{-3r} \left(\frac{\sup f_{\sigma}}{\inf f_{\sigma}}\right)^2 \sum_{\stackrel{h \in \mathcal{H}_r}{I_r \subset\text{dom}(h)}} \int_{I_r} \frac{ e^{\sigma \varphi_r \circ h} |h'|}{\lambda_{\sigma}^r f_{\sigma}} (f_{\sigma} |v|) \circ h \, d\text{Leb} \\ \leq\ & 2e^{\varepsilon C'_2} C_{11}|b|^2 K_1\rho^{-3r} \left(\frac{\sup f_{\sigma}}{\inf f_{\sigma}}\right)^2 \text{Leb}(I_r) \sup_{x \in I_r} \tilde \mathcal{L}_{\sigma}^r |v|. \end{align*} By taking $r$ sufficiently large, we obtain again the upper bound $\text{Leb}(I_r) \sup_{x \in I_r} \tilde \mathcal{L}_{\sigma}^r|v|$, and this finishes the case $\sigma < 0$. Putting all terms together, \begin{equation}\label{eq:oscg} \text{Osc}_{I_r} g_r \leq C_{10} |b| \ \text{Leb}(I_r) \sup_{I_r} \tilde \mathcal{L}_{\sigma}^r|v|, \end{equation} and since $w_r$ is an affine interpolation of $g_r$, with the same limit values at all points $x_i \in X_r$, \[ \| g_r - w_r\|_\infty \leq C_{10} |b| \ \text{Leb}(I_r) \sup_{I_r} \tilde \mathcal{L}_{\sigma}^r|v| \leq 2 C_{10} |b| \rho^{-r} \| v \|_\infty. \] Also, since $w_r$ is an affine interpolation of $g_r$, we have $\| w_r \|_\infty \leq \| g_r \|_\infty \leq \| v \|_\infty$. It remains to verify that $(u_r, w_r) \in \mathcal{C}_b$. By \eqref{eq:oscg}, the affine function $w_r|_{I_r}$ has slope at most $C_{10} |b| \sup_{I_r} \tilde\mathcal{L}_{\sigma}^r|v| = C_{10} |b| \sup_{I_r} |u_r|$. This means that for every subinterval $I \subset I_r$, we also have $$ \text{Osc}_I w_r \leq C_{10} |b| \text{Leb}(I) \sup_I u_r.
$$ If on the other hand, $I$ intersects several contiguous $I_r \in Q_r$ (but is contained in an atom of $\mathcal{P}_k$), then we have to include the jump-sizes of discontinuity points at $\partial I_r$ as well. But since $Q_r$ refines $\mathcal{P}_r$ and $g_r$ is continuous at all boundary points $q \in \partial I_r \setminus X_r$, and the jump-sizes of $g_r$ and $w_r$ coincide at every $x_i \in X'_j$ (and decrease exponentially in $j$ by \eqref{eq:sizeg}), we conclude that $$ \text{Osc}_I w_r \leq C_{10} |b| \text{Leb}(I) \sup_I u_r + C_8 E_I(u_r). $$ This shows that $(u_r, w_r) \in \mathcal{C}_b$, as required. \end{proof} \begin{proof}[Proof of Lemma~\ref{lemma-L2BV}] For $m \geq 1$ let $(u_{mn_0}, w_{mn_0})\in\mathcal{C}_b$ be as in the statement of Proposition~\ref{prop:to-cone}. Let $v\in \text{BV}(Y)$ satisfy \eqref{eq:H}. Using the definition of the $\|\,\|_b$ norm, \begin{align*} \|\tilde \mathcal{L}_s^{3mn_0} v\|_b&=(1+|b|)^{-1}\text{Var}_Y(\tilde \mathcal{L}_s^{3mn_0} v)+\|\tilde \mathcal{L}_s^{3mn_0}v\|_1\\ &\leq (1+|b|)^{-1}\text{Var}_Y(\tilde \mathcal{L}_s^{3mn_0} v)+\| \tilde \mathcal{L}_s^{2mn_0}(\tilde \mathcal{L}_s^{mn_0} v-w_{mn_0})\|_1+\|\tilde \mathcal{L}_s^{2mn_0}w_{mn_0}\|_1\\ &\leq (1+|b|)^{-1}\text{Var}_Y(\tilde \mathcal{L}_s^{3mn_0} v)+2C_{10}\rho^{-mn_0}|b|\|v\|_\infty+\beta^{m}\|w_{mn_0}\|_\infty, \end{align*} where in the last inequality we have used Proposition~\ref{prop:to-cone} and Lemma~\ref{lemma-L2}. The conclusion follows since $\|w_{mn_0}\|_\infty\leq \| v\|_\infty$ (as in the statement of Proposition~\ref{prop:to-cone}).~\end{proof} \subsection{Completing the argument} \label{sub-Completing the argument} In this section we complete the proof of Theorem~\ref{th-main} via a couple of lemmas.
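For orientation, recall that the norm appearing above is $\|v\|_b = (1+|b|)^{-1}\text{Var}_Y v + \|v\|_1$. A minimal sketch for step functions (sample data, with $Y=[0,1]$ for concreteness):

```python
def step_var_l1(values, breakpoints):
    """Total variation and L^1 norm of a right-continuous step function that
    equals values[k] on the atom (breakpoints[k], breakpoints[k+1])."""
    var = sum(abs(w - v) for v, w in zip(values, values[1:]))
    l1 = sum(abs(v) * (q - p) for v, p, q in zip(values, breakpoints, breakpoints[1:]))
    return var, l1

def b_norm(values, breakpoints, b):
    var, l1 = step_var_l1(values, breakpoints)
    return var / (1 + abs(b)) + l1

# sample step function: 1 on (0, 0.5), 3 on (0.5, 0.75), 0 on (0.75, 1)
print(b_norm([1.0, 3.0, 0.0], [0.0, 0.5, 0.75, 1.0], b=4))  # 5/5 + 1.25 = 2.25
```

The weight $(1+|b|)^{-1}$ discounts the variation term, which is what adapts the norm to large frequencies $b$.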
\begin{lemma}\label{lemma-complete-1} There exist $\varepsilon\in (0,1)$, $A>0$ and $\gamma_1\in (0,1)$ such that for all $s=\sigma+ib$, $|\sigma|<\varepsilon$, $|b|\geq\max\{4\pi/D,2\}$ and for all $m \geq A\log(1+|b|)$, \[ \|\tilde \mathcal{L}_s^{3mn_0} v\|_b\leq \gamma_1^{3m}\|v\|_b \] for all $v \in \text{BV}(Y)$ satisfying \eqref{eq:H}. \end{lemma} \begin{proof} First, we estimate $(1+|b|)^{-1}\text{Var}_Y(\tilde \mathcal{L}_s^{3mn_0} v)$. For $m \in \mathbb{N}$, recall from Proposition~\ref{prop:to-cone} and Lemma~\ref{lemma-L2} that \begin{eqnarray*} \| \tilde \mathcal{L}_s^{2mn_0} v \|_1 &\leq& \|\tilde \mathcal{L}_s^{mn_0} (\tilde \mathcal{L}_s^{mn_0} v - w_{mn_0}) \|_1 + \| \tilde \mathcal{L}_s^{mn_0} w_{mn_0} \|_1 \\ &\leq& \|\tilde \mathcal{L}_s^{mn_0} (\tilde \mathcal{L}_s^{mn_0} v - w_{mn_0}) \|_\infty + \beta^m \| w_{mn_0} \|_\infty \\ &\leq& 2C_{10} \rho^{-mn_0} \| v \|_\infty + \beta^m \| v \|_\infty \leq 4\beta^m \| v \|_\infty \end{eqnarray*} where we used $C_{10} \rho^{-mn_0} \leq 2\beta^m$. By Proposition~\ref{prop-LYineq} (which is allowed since $n_0$ is a multiple of $k$) and recalling that $\Lambda_{\sigma} := \lambda_{2\sigma}^{1/2}/\lambda_{\sigma} \geq 1$, we compute \begin{align}\label{eq-varb} \nonumber \text{Var}_Y(\tilde \mathcal{L}_s^{3mn_0} v) &\leq\rho^{-mn_0}\text{Var}_Y(\tilde \mathcal{L}_s^{2mn_0} v)+c(1+|b|) \Lambda_{\sigma}^{mn_0}(\|\tilde \mathcal{L}_s^{2mn_0} v\|_1\,\|\tilde \mathcal{L}_s^{2mn_0} v\|_\infty)^{1/2}\\ \nonumber &\leq \rho^{-mn_0}\text{Var}_Y(\tilde \mathcal{L}_s^{2mn_0} v)+2c(1+|b|) \Lambda_{\sigma}^{mn_0}\beta^{m/2}\| v\|_\infty\\ &\leq \rho^{-mn_0}\text{Var}_Y(\tilde \mathcal{L}_s^{2mn_0} v)+2c(1+|b|) \Lambda_{\sigma}^{mn_0}\beta^{m/2}(\text{Var}_Y v+\|v\|_{1}). \end{align} where in the last inequality we have used $\|v\|_\infty\leq \text{Var}_Y v+\|v\|_{1}$. 
Also by Proposition~\ref{prop-LYineq}, \begin{align*} \text{Var}_Y(\tilde \mathcal{L}_s^{2mn_0} v) &\leq \rho^{-2mn_0}\text{Var}_Yv+c(1+|b|) \Lambda_{\sigma}^{2mn_0} \|v\|_\infty\\ &\leq \rho^{-2mn_0}\text{Var}_Yv+c(1+|b|) \Lambda_{\sigma}^{2mn_0} (\text{Var}_Y v+\|v\|_{1}). \end{align*} Plugging the above inequality into~\eqref{eq-varb} we get \begin{align*} \text{Var}_Y(&\tilde \mathcal{L}_s^{3mn_0} v)\leq \rho^{-3mn_0} \text{Var}_Y v+c(1+|b|)(\rho^{-mn_0} \Lambda_{\sigma}^{2mn_0}+2\Lambda_{\sigma}^{mn_0}\beta^{m/2}) (\text{Var}_Y v+\|v\|_{1}). \end{align*} Multiplying this by $(1+|b|)^{-1}$ and inserting it in Lemma~\ref{lemma-L2BV} (which relies on the assumption \eqref{eq:H}) gives \begin{align*} \|\tilde \mathcal{L}_s^{3mn_0} v\|_b \leq\ & (1+|b|)^{-1}\rho^{-3mn_0}\text{Var}_Y v+c(\rho^{-mn_0} \Lambda_{\sigma}^{2mn_0} +2 \Lambda_{\sigma}^{mn_0}\beta^{m/2}) (\text{Var}_Y v+\|v\|_{1})\\ &+(2C_{10}\rho^{-mn_0}|b|+\beta^{m})(\text{Var}_Y v+\|v\|_{1}). \end{align*} Hence, \begin{align*} \|\tilde \mathcal{L}_s^{3mn_0} v\|_b \leq &\ (1+|b|)^{-1}\Big(\rho^{-3mn_0} + (1+|b|)(c \Lambda_{\sigma}^{2mn_0} \rho^{-mn_0} \\ & \qquad \qquad \quad + 2c \Lambda_{\sigma}^{mn_0}\beta^{m/2}+2C_{10}|b|\rho^{-mn_0} + \beta^m) \Big) \text{Var}_Y v\\ &+(c \Lambda_{\sigma}^{2mn_0} \rho^{-mn_0} + 2c \Lambda_{\sigma}^{mn_0}\beta^{m/2}+2C_{10}|b|\rho^{-mn_0} + \beta^m) \|v\|_1 \\ \leq &\ (1+|b|)^2 (2C_{10} + c) (\Lambda_{\sigma}^{2mn_0} \rho^{-m n_0} + \Lambda_\sigma^{m n_0} \beta^{m/2})\| v \|_b. \end{align*} Let $A>0$ be so large that $\gamma_1 := \max\{ \Lambda_{\sigma}^{2n_0}\rho^{-n_0}, \Lambda_{\sigma}^{n_0} \beta^{1/2} \} \exp( \frac{6\log(2C_{10}+c)}{A})<1$. Then $(1+|b|)^2 (2C_{10} + c) (\Lambda_{\sigma}^{2mn_0} \rho^{-m n_0} + \Lambda_\sigma^{m n_0} \beta^{m/2})<\gamma_1^m$ for all $m>A\log(1+|b|)$, and the conclusion follows. \end{proof} To complete the proof of Theorem~\ref{th-main} we still need to deal with \text{BV}\ functions violating \eqref{eq:H}.
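The existence of such an $A$ can be illustrated numerically: once $m > A\log(1+|b|)$, the polynomial prefactor $(1+|b|)^2(2C_{10}+c)$ is absorbed by the exponential contraction. All constants below are invented sample values, not those of the paper:

```python
import math

# illustrative constants only (not those of the paper):
a1, a2 = 0.9, 0.85    # stand for Lambda^{2n_0}*rho^{-n_0} and Lambda^{n_0}*sqrt(beta)
C = 10.0              # stands for 2*C_10 + c
A = 200.0             # chosen large enough that gamma_1 < 1
gamma1 = max(a1, a2) * math.exp(6.0 * math.log(C) / A)
assert gamma1 < 1

# the prefactor (1+|b|)^2 * C is absorbed once m exceeds A*log(1+|b|)
for b in [2, 5, 50, 1000]:
    m = math.ceil(A * math.log(1 + b))
    assert (1 + b) ** 2 * C * (a1 ** m + a2 ** m) < gamma1 ** m
```

The point of the factor $\exp(6\log(2C_{10}+c)/A)$ is that for $m > A\log(1+|b|)$ it dominates the prefactor while still leaving $\gamma_1 < 1$ when $A$ is large.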
\begin{lemma}\label{lemma-complete-2} There exist $\varepsilon\in (0,1)$ and $\gamma_2 \in (0,1)$ such that for all $s=\sigma+ib$, $|\sigma|<\varepsilon$, $|b|\geq\max\{4\pi/D,2\}$ and for all $m \geq 1$, \[ \|\tilde \mathcal{L}_s^{m n_0}v \|_b\leq \gamma_2^m \| v \|_b \] for all $v \in \text{BV}(Y)$ violating \eqref{eq:H}. \end{lemma} \begin{proof} By continuity in $\sigma$, $1\leq \Lambda_{\sigma} < \rho^{1/2}$ for all $|\sigma|$ sufficiently small. Then clearly also $\gamma_2 := \Lambda_{\sigma}^{n_0} \rho^{-n_0/2} < 1$. We first treat the case $\sigma \geq 0$, so by assumption, $\text{Var}_Y v > C_{11} |b|^2 \rho^{m n_0} \|v\|_1$. Using Proposition~\ref{prop-LYineq} (which is allowed since $n_0$ is a multiple of $k$), we compute that \begin{align*} \text{Var}_Y(\tilde \mathcal{L}_s^{m n_0} v)&\leq\rho^{-m n_0}\text{Var}_Y v+c(1+|b|) \Lambda_{\sigma}^{mn_0}(\|v\|_1\|v\|_\infty)^{1/2} \\ &\leq \rho^{-m n_0}\text{Var}_Y v+c(1+|b|) \Lambda_{\sigma}^{mn_0}\big(\|v\|_1(\text{Var}_Y v +\|v\|_1)\big)^{1/2}\\ &\leq \rho^{-m n_0}\text{Var}_Y v+c(1+|b|) \Lambda_{\sigma}^{mn_0} \Big(\frac{\rho^{-m n_0}}{C_{11}|b|^2}\text{Var}_Yv\Big(\text{Var}_Yv+\frac{\rho^{-m n_0}}{C_{11}|b|^2}\text{Var}_Y v\Big)\Big)^{1/2}\\ &\leq \rho^{-m n_0}\text{Var}_Y v+\frac{c}{C_{11}^{1/2}}\frac{\sqrt{65}}{8} \frac{1+|b|}{|b|} \Lambda_{\sigma}^{mn_0} \rho^{-m n_0/2}\text{Var}_Yv \\ &\leq (\rho^{-m n_0} + \frac{1}{8K_2} \frac{3\sqrt{65}}{16} \Lambda_{\sigma}^{mn_0} \rho^{-m n_0/2} ) \text{Var}_Y v, \end{align*} where we have used $C_{11}|b|^2 > 64$ and abbreviated $K_2 := \frac{\sup f_{\sigma}}{\inf f_{\sigma}}\frac{\sup f_0}{\inf f_0}$. Therefore \[ (1+|b|)^{-1}\text{Var}_Y(\tilde \mathcal{L}_s^{m n_0} v)\leq (1+|b|)^{-1} \frac1{4K_2}\gamma_2^m \text{Var}_Y v \] for $m$ sufficiently large.
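As a quick sanity check of the factor $\sqrt{65}/8$ used above: it comes from $(1+\rho^{-mn_0}/(C_{11}|b|^2))^{1/2}\leq (1+1/64)^{1/2}=\sqrt{65}/8$, valid since $C_{11}|b|^2 > 64$ and $\rho^{-mn_0}\leq 1$.

```python
import math

# with C11*|b|^2 > 64 and rho^{-m n0} <= 1, the inner term is at most 1 + 1/64,
# so the square root is at most sqrt(65/64) = sqrt(65)/8
assert math.isclose(math.sqrt(1 + 1 / 64), math.sqrt(65) / 8)

# monotonicity: any smaller perturbation gives a smaller factor
for t in [0.0, 1e-6, 1e-3, 1 / 64]:
    assert math.sqrt(1 + t) <= math.sqrt(65) / 8
```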
By \eqref{eq:LYineq1} at the end of the proof of Proposition~\ref{prop-LYineq}, $$ \|\tilde \mathcal{L}_{\sigma}^{m n_0} |v|\ \|_1\leq \Lambda_{\sigma}^{mn_0} \frac{\sup f_{\sigma}}{\inf f_{\sigma}} \Big( \frac{\sup f_{2\sigma}}{\inf f_{2\sigma}} \Big)^{1/2} \,(\|v\|_\infty\|v\|_1)^{1/2}. $$ Note that $\|\tilde \mathcal{L}_s^{m n_0} v\|_1\leq \|\tilde \mathcal{L}_{\sigma}^{m n_0} |v|\|_1$, so we have \begin{align*} \|\tilde \mathcal{L}_s^{m n_0} v\|_1 &\leq \Lambda_{\sigma}^{m n_0}\, \frac{\sup f_{\sigma}}{\inf f_{\sigma}} \Big( \frac{\sup f_{2\sigma}}{\inf f_{2\sigma}} \Big)^{1/2} \Big((\text{Var}_Y v +\|v\|_1)\|v\|_1\Big)^{1/2}\\ &\leq \Lambda_{\sigma}^{m n_0}\, \frac{\sup f_{\sigma}}{\inf f_{\sigma}} \Big( \frac{\sup f_{2\sigma}}{\inf f_{2\sigma}} \Big)^{1/2} \Big( (1+\frac{\rho^{-m n_0}}{C_{11} |b|^2}) \frac{\rho^{-m n_0}}{C_{11} |b|^2} \Big)^{1/2} \text{Var}_Y v \\ &\leq \frac{\sup f_{\sigma}}{\inf f_{\sigma}} \Big( \frac{\sup f_{2\sigma}}{\inf f_{2\sigma}} \Big)^{1/2} \frac{\sqrt{65}}{8}C_{11}^{-1/2} |b|^{-1} \Lambda_{\sigma}^{m n_0} \rho^{-m n_0/2} \ \text{Var}_Y v. \end{align*} The choice of $C_{11}$ gives that $\frac{\sup f_{\sigma}}{\inf f_{\sigma}} \Big( \frac{\sup f_{2\sigma}}{\inf f_{2\sigma}} \Big)^{1/2}<C_{11}^{1/2}/8K_2$. Hence, the choice of $\gamma_2$ gives $\|\tilde \mathcal{L}_s^{m n_0} v\|_1\leq \frac 1{4K_2} (1+|b|)^{-1} \gamma_2^m \text{Var}_Y v$. Together, $\| \tilde \mathcal{L}_s^{m n_0} v \|_b \leq \frac{1}{2K_2} (1+|b|)^{-1} \gamma_2^m \text{Var}_Y v$. Now if $\sigma < 0$, then the assumption is $\text{Var}_Y (e^{\sigma \varphi_{m n_0}} v) > C_{11} |b|^2 \rho^{m n_0} \| e^{\sigma \varphi_{m n_0}} v\|_1$.
The above computation gives $$ \| \tilde \mathcal{L}_s^{m n_0} v \|_b \leq \frac{\sup f_{\sigma}}{\inf f_{\sigma}}\frac{\sup f_0}{\inf f_0}\ \| \tilde \mathcal{L}_{ib}^{m n_0}( e^{\sigma \varphi_{m n_0}} v) \|_b \leq \frac12 (1+|b|)^{-1}\ \gamma_2^m(2\text{Var}_Y v + \| v \|_1), $$ where we have used (since $\sigma < 0$) that $\text{Var}_Y(e^{\sigma \varphi_{m n_0}} v) \leq \text{Var}_Y v + \| v \|_\infty \leq 2\text{Var}_Y v + \| v \|_1$. Therefore $\| \tilde \mathcal{L}_s^{m n_0} v \|_b \leq (1+|b|)^{-1} \gamma_2^m \| v \|_b$ and this proves the lemma. \end{proof} \begin{proof}[Proof of Theorem~\ref{th-main}] Let $\varepsilon \in (0,1)$ be such that the conclusions of Lemmas~\ref{lemma-complete-1}, \ref{lemma-complete-2} and Proposition~\ref{prop-LYineq} hold, and take $\gamma = \max\{ \gamma_1^{1/2}, \gamma_2^{1/2}\}$. Let $|\sigma| < \varepsilon$, $n \in \mathbb{N}$ and $v \in \text{BV}(Y)$ be arbitrary. Recall that $|b|\geq \max\{ 4\pi/D,2\}$. Let $A$ be the constant used in Lemma~\ref{lemma-complete-1}; without loss of generality, we can assume that $A \log(1+|b|) > 3n_0$. By the proof of Proposition~\ref{prop-LYineq} (see also Remark~\ref{rem-LYtwist}), there is $A'$ such that the operator norm satisfies \begin{equation}\label{eq:Aprime} \| \tilde \mathcal{L}_s^{n'} \|_b \leq A' (1+|b|) \quad \text{ for all } |\sigma| < \varepsilon, b \in \mathbb{R}, n' \in \mathbb{N}. \end{equation} Take \begin{equation}\label{eq:nminimal} n \geq 2 \max \left\{ \frac{A}{n_0} \log(1+ |b|) \ ,\ \log(\Lambda_{\sigma}^{-1} \frac{\sup f_{\sigma}}{\inf f_{\sigma}}A'(1+|b|))\right\}. \end{equation} Because the contractions in Lemmas~\ref{lemma-complete-1} and~\ref{lemma-complete-2} happen at different time steps, we carry out the following algorithm: \begin{enumerate} \item Let $m_0 \in \mathbb{N}$ be maximal such that $3m_0n_0 \leq n$. If $m_0 < A\log(1+|b|)$, then continue with Step 4, otherwise continue with Step 2.
\item If $v$ satisfies ($H_{\sigma, m_0}$), then $\| \tilde \mathcal{L}_s^{3m_0n_0} v \|_b \leq \gamma^{6m_0} \| v \|_b$ by Lemma~\ref{lemma-complete-1}, and we continue with Step 4.\\ If $v$ does not satisfy ($H_{\sigma, m_0}$), then $\| \tilde \mathcal{L}_s^{m_0n_0 } v \|_b \leq \gamma^{2m_0} \| v \|_b$ by Lemma~\ref{lemma-complete-2}. Let $v_1 = \tilde \mathcal{L}_s^{m_0n_0 } v$ and let $m_1 \in \mathbb{N}$ be maximal such that $3m_1n_0 \leq n-m_0n_0 $.\\ If $m_1 < A\log(1+|b|)$, then continue with Step 4, otherwise continue with Step 3. \item If $v_1$ satisfies ($H_{\sigma, m_1}$), then $\| \tilde \mathcal{L}_s^{3m_1n_0 } v_1 \|_b \leq \gamma^{6m_1} \| v_1 \|_b$ by Lemma~\ref{lemma-complete-1}. Therefore $$ \| \tilde \mathcal{L}_s^{(3m_1+m_0)n_0 } v \|_b = \| \tilde \mathcal{L}_s^{3m_1n_0 } v_1 \|_b \leq \gamma^{6m_1}\| v_1 \|_b = \gamma^{6m_1}\| \tilde \mathcal{L}_s^{m_0n_0} v \|_b \leq \gamma^{6m_1+2m_0} \| v \|_b, $$ and we continue with Step 4.\\ If $v_1$ does not satisfy ($H_{\sigma, m_1}$), then $\| \tilde \mathcal{L}_s^{m_1n_0 } v_1 \|_b \leq \gamma^{2m_1} \| v_1 \|_b$ by Lemma~\ref{lemma-complete-2}. Let $v_2 = \tilde \mathcal{L}_s^{m_1n_0 } v_1$ and let $m_2 \in \mathbb{N}$ be maximal such that $3m_2n_0 \leq n-(m_0+m_1)n_0$ and repeat Step 3. Each time we pass through Step 3, we introduce the next integer $m_i$ and $v_i = \tilde\mathcal{L}_s^{m_{i-1}n_0}v_{i-1}$. As soon as $m_i < A \log(1+|b|)$ we continue with Step 4. \item Let $p = p(v)$ be the number of times that this algorithm passes through Step 3. Note that $p < \infty$ because each time Step 3 is taken, $n-(m_0 + m_1 + \dots + m_i)n_0$ decreases by a factor $2/3$. Thus we find a sequence $(m_i)_{i=0}^p$ and we can define $$ M_p = M_p(v) = \begin{cases} m_0 + \dots + m_{p-1} + 3m_p, & \text{ or }\\ m_0 + \dots + m_{p-1} + m_p, \end{cases} $$ depending on whether $v_p = \tilde \mathcal{L}_s^{(m_0+ \dots + m_{p-1})n_0} v$ satisfies ($H_{\sigma,m_p}$) or not.
In either case we have $n-M_pn_0 < A \log(1+|b|)$ and $\| \tilde{\mathcal{L}}_s^{M_pn_0} v \|_b \leq \gamma^{2M_p} \| v \|_b$. \end{enumerate} By \eqref{eq:Aprime}, we have for all $v \in \text{BV}(Y)$ $$ \| \tilde{\mathcal{L}}_s^nv\|_b = \| \tilde{\mathcal{L}}_s^{n-M_pn_0} (\tilde{\mathcal{L}}_s^{M_pn_0} v) \|_b \leq \| \tilde{\mathcal{L}}_s^{n-M_pn_0} \|_b \ \| \tilde{\mathcal{L}}_s^{M_pn_0} v \|_b \leq A' (1+|b|) \gamma^{2M_p} \| v \|_b. $$ Also $\| \mathcal{L}_s^nv\|_b \leq \lambda_{\sigma}^{-1} \frac{\sup f_{\sigma}}{\inf f_{\sigma}}\| \tilde{\mathcal{L}}_s^n v\|_b$. Therefore, using $n-M_pn_0 < A\log(1+|b|)$, \begin{eqnarray*} \| \mathcal{L}_s^nv\|_b & \leq & \lambda_{\sigma}^{-1} \frac{\sup f_{\sigma}}{\inf f_{\sigma}} A' (1+|b|) \gamma^{2M_p} \| v \|_b \\ & \leq & \lambda_{\sigma}^{-1} \frac{\sup f_{\sigma}}{\inf f_{\sigma}} A' (1+|b|) \gamma^{-A \log (1+|b|)/n_0} \gamma^{2n} \| v \|_b \\ & \leq & \lambda_{\sigma}^{-1} \frac{\sup f_{\sigma}}{\inf f_{\sigma}} A' (1+|b|) \gamma^{n/2} \ \gamma^{-A \log (1+|b|)/n_0} \gamma^{n/2}\ \gamma^n \| v \|_b \leq \gamma^n \| v \|_b, \end{eqnarray*} since $n$ is chosen large enough as in \eqref{eq:nminimal}. This completes the proof. \end{proof}
\section{Introduction} Matrix functions are an evergreen topic in matrix algebra due to their wide use in applications \cite{gant59,lanc85,horn,high, gol12}. It is natural to ask how matrix structures interact with matrix functions: in many applications structured matrices arise and can be exploited to speed up algorithms, reduce storage costs, or make otherwise infeasible computations possible. The property we are interested in is {\em quasi-separability}. That is, we want to understand whether the submatrices of $f(A)$ contained in the strict upper triangular part or in the strict lower triangular part, called {\em off-diagonal submatrices}, have a ``small'' numerical rank. Studies concerning the numerical preservation of data-sparsity patterns were carried out recently \cite{benzi2013decay,benzi2014decay,benzi-simoncini,canuto}. Regarding the quasiseparable structure \cite{vanbarel:book1,vanbarel:book2,eide2005,eide2013}, Gavrilyuk, Hackbusch and Khoromskij \cite{gavrilyuk2002mathcal,gavrilyuk2005data,hackbusch2016hierarchical} addressed the issue of approximating some matrix functions using the hierarchical format \cite{borm}. In these works the authors prove that, given a low-rank quasiseparable matrix $A$ and a holomorphic function $f(z)$, computing $f(A)$ via a quadrature formula applied to the contour integral definition yields an approximation of the result with low quasiseparable rank. Representing $A$ as an $\mathcal H$-matrix and exploiting the structure in the arithmetic operations provides an algorithm with almost linear complexity. The feasibility of this approach is equivalent to the existence of a rational function $r(z) = \frac{p(z)}{q(z)}$ which well-approximates the holomorphic function $f(z)$ on the spectrum of the argument $A$.
More precisely, since the quasiseparable rank is invariant under inversion and sub-additive with respect to matrix addition and multiplication, if $r(z)$ is a good low-degree approximation of $f(z)$ then the matrix $r(A)$ is an accurate approximation of $f(A)$ with low quasiseparable rank. This argument explains the preservation of the quasiseparable structure, but a deeper analysis involving the specific properties of the function $f(z)$ is still needed in order to provide effective bounds on the quasiseparable rank of the matrix $f(A)$. In this article we deal with the analysis of the quasiseparable structure of matrix functions by studying the interplay between the off-diagonal singular values of the matrices $A$ and $B$ such that $B=f(A)$. Our intent is to understand which parameters of the model come into play in the numerical preservation of the structure and to extend the analysis to functions with singularities. In Section \ref{sec:off-diagonal} we see how the integral definition of a matrix function enables us to study the structure of the off-diagonal blocks in $f(A)$. In Section~\ref{sec:QR} we develop the analysis of the singular values of structured outer products and we derive bounds for the off-diagonal singular values of matrix functions. In Section~\ref{sec:poles} we adapt the approach to treat functions with singularities. The key role is played by an extension of the Dunford-Cauchy formula to the case in which some singularities lie inside the contour of integration. In Section~\ref{sec:computational} we comment on computational aspects and we perform some experiments for validating the theoretical results, while in Section~\ref{sec:concludingremarks} we give some concluding remarks. \subsection{Definitions of matrix function} In \cite{high} ---which we indicate as a reference for this topic--- the author focuses on three equivalent definitions of matrix function.
For our purposes we recall only two of them: one based on the Jordan canonical form of the argument and the other which is a generalization of the Cauchy integral formula. \begin{definition}\label{def:matrix-func1} Let $A\in\mathbb C^{m\times m}$ and $f(z)$ be a function holomorphic in a set containing the spectrum of $A$. Indicating with $J=\hbox{diag}(J_1,\ldots,J_p)=V^{-1}AV$ the Jordan canonical form of $A$, we define $f(A):=V\cdot f(J)\cdot V^{-1}=V\cdot \diag(f(J_k))\cdot V^{-1}$ where $J_k$ is an $m_k \times m_k$ Jordan block and \[ \quad J_k= \begin{bmatrix} \lambda_k&1\\ &\ddots&\ddots\\ &&\ddots&1\\ &&&\lambda_k \end{bmatrix},\quad f(J_k)=\begin{bmatrix} f(\lambda_k)&f'(\lambda_k)&\dots&\frac{f^{(m_k-1)}(\lambda_k)}{(m_k-1)!}\\ &\ddots&\ddots&\vdots\\ &&\ddots&f'(\lambda_k)\\ &&&f(\lambda_k) \end{bmatrix}. \] \end{definition} \begin{definition}[Dunford-Cauchy integral formula] Let $f(z)$ be a holomorphic function in $\mathcal D\subseteq\mathbb C$ and $A\in\mathbb C^{m\times m}$ be a matrix whose spectrum is contained in $\Omega\subset\mathcal D$. Then we define \begin{equation}\label{cauchyformula} f(A):= \frac{1}{2\pi i}\int_{\partial\Omega}(zI-A)^{-1}f(z)dz. \end{equation} The matrix-valued function $\mathfrak R(z):=(zI-A)^{-1}$ is called \emph{resolvent}. \end{definition} Suppose that the spectrum of $A$ is contained in a disc $\Omega=B(z_0,r):=\{|z-z_0|<r\}$ where the function is holomorphic. Then, it is possible to write $f(A)$ as an integral \eqref{cauchyformula} along $S^1:=\partial B(0,1)$ for a matrix with spectral radius less than $1$. In fact, \begin{align*} \frac{1}{2\pi i}\int_{\{|z-z_0|=r\}}(zI-A)^{-1}f(z)dz\quad=\quad \frac{1}{2\pi i}\int_{S^1}(wI-\tilde A)^{-1}f(rw+z_0)dw \end{align*} where $\tilde A=r^{-1}(A-z_0I)$ has the spectrum contained in $B(0,1)$. Given the above remark it is not restrictive to consider only the case of $A$ having spectral radius less than $1$. 
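To make the rescaling above concrete, the following sketch (in Python with NumPy/SciPy; the helper name \texttt{cauchy\_matrix\_function}, the node count and the test matrix are illustrative choices of ours, not taken from this paper) evaluates \eqref{cauchyformula} along $S^1$ with the trapezoidal rule and compares the result with a direct evaluation of $f(A)=e^A$ for a matrix with spectral radius less than $1$.

```python
import numpy as np
from scipy.linalg import expm

def cauchy_matrix_function(A, f, n_nodes=64):
    """Approximate f(A) via the trapezoidal rule applied to the
    Dunford-Cauchy integral over the unit circle S^1.
    Assumes the spectrum of A lies strictly inside the unit disc."""
    m = A.shape[0]
    F = np.zeros((m, m), dtype=complex)
    for k in range(n_nodes):
        z = np.exp(2j * np.pi * k / n_nodes)
        # (1/2πi) ∮ (zI - A)^{-1} f(z) dz with z = e^{iθ}, dz = iz dθ,
        # so the 1/(2πi) cancels and each node contributes f(z) z R(z).
        F += f(z) * z * np.linalg.solve(z * np.eye(m) - A, np.eye(m))
    return F / n_nodes

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
A *= 0.5 / np.max(np.abs(np.linalg.eigvals(A)))  # force spectral radius 1/2

E = cauchy_matrix_function(A, np.exp)
print(np.linalg.norm(E - expm(A)))  # small residual
```

Since the integrand is analytic in an annulus around $S^1$, the trapezoidal rule converges geometrically in the number of nodes, which is why a modest number of quadrature points already reproduces $e^A$ to machine precision here.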
\begin{remark} \label{rem:spectralradiusD} In the following we will often require, besides the non singularity of $(zI-A)$, also that $(zI-D)$ is invertible along the path of integration for any trailing diagonal block $D$. This is not restrictive since --- given a sufficiently large domain of analyticity for $f$ --- one can choose $r$ large enough to guarantee this property. As an example, any $r$ such that $r\geq\norm {A}$ is a valid choice for any induced norm. \end{remark} \section{Off-diagonal analysis of $f(A)$} \label{sec:off-diagonal} The decay of the off-diagonal singular values has been investigated in \cite{chandrasekaran2010numerical} in connection with block Gaussian elimination on certain classes of quasiseparable matrices; in \cite{netna,qcr-krylov} the authors have proved fast decay properties that have been used to show the numerical quasiseparable preservation in the cyclic reduction \cite{buzbee,hockney,bim:book,SMC,CR}. The aim of this section is to characterize the structure of the off-diagonal blocks by means of the integral definition of $f(A)$. \subsection{Structure of an off-diagonal block} Consider the Dunford-Cauchy integral formula \eqref{cauchyformula} in the case $\partial\Omega=S^1$ and $A$ with the spectrum strictly contained in the unit disc. In this case the spectral radius of $A$ is less than $1$ and we can expand the resolvent as $(zI-A)^{-1}=\sum_{n\geq 0}z^{-(n+1)}A^n$. Applying the residue theorem component-wise we find that the result of the integral in \eqref{cauchyformula} coincides with the coefficient of degree $-1$ in the Laurent expansion of $(zI-A)^{-1}f(z)$. Thus, examining the Laurent expansion of an off-diagonal block, we can derive a formula for the corresponding block in $f(A)$.
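Before carrying out this derivation, the phenomenon under study is easy to observe numerically. A small illustration (Python with NumPy/SciPy; the size and scaling of the example are arbitrary choices of ours): for a tridiagonal, hence $1$-quasiseparable, matrix $A$ with small spectral radius, the singular values of a maximal off-diagonal block of $e^A$ decay very quickly.

```python
import numpy as np
from scipy.linalg import expm, svdvals

# 1-quasiseparable example: a tridiagonal matrix whose spectral
# radius is well below 1.
n = 40
A = (np.diag(np.full(n - 1, 0.1), -1)
     + np.diag(np.full(n, 0.2))
     + np.diag(np.full(n - 1, 0.1), 1))

B = expm(A)
offdiag = B[n // 2:, : n // 2]  # a maximal lower off-diagonal block
s = svdvals(offdiag)
print(s[:6] / s[0])             # fast (roughly superexponential) decay
```

The block is full, yet only a handful of its singular values are above machine precision, which is exactly the numerical quasiseparability that the analysis below quantifies.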
Partitioning $A$ as follows \[ A= \begin{bmatrix} \bar A&\bar B\\ \bar C&\bar D \end{bmatrix}\quad\Rightarrow\quad \mathfrak R(z)= \begin{bmatrix} zI-\bar A&-\bar B\\ -\bar C&zI-\bar D \end{bmatrix}^{-1} \] and supposing that the spectral radius of $\bar D$ is less than $1$ (which is not restrictive thanks to Remark~\ref{rem:spectralradiusD}) we get \[ \mathfrak R(z) =\begin{bmatrix} S_{zI-\bar D}^{-1}&*\\ (zI-\bar D)^{-1}\bar CS_{zI-\bar D}^{-1}&* \end{bmatrix}, \] where $S_{zI-\bar D} =zI-\bar A-\bar B(zI-\bar D)^{-1}\bar C$ is the Schur complement of the bottom right block and $*$ denotes blocks which are not relevant for our analysis. We can write the Laurent expansion of the two inverse matrices: \[ (zI-\bar D)^{-1}=\sum_{j\geq 0}z^{-(j+1)}\bar D^j,\qquad S_{zI-\bar D}^{-1} = \begin{bmatrix} I & 0 \end{bmatrix} \cdot \left(\sum_{j\geq 0}z^{-(j+1)}A^j\right)\cdot \begin{bmatrix} I \\ 0 \end{bmatrix}, \] where for deriving the expansion of $S_{zI-\bar D}^{-1}$ we used that it corresponds to the upper left block in $\mathfrak R(z)$. Let $f(z)=\sum_{n\geq 0}a_nz^n$ be the Laurent expansion of $f$ in $S^1$ and let $\mathfrak R(z)\cdot f(z):=\begin{bmatrix} *&*\\ G(z)&* \end{bmatrix}$, then \begin{equation}\label{cbar} G(z)=\sum_{n\geq 0}a_n\sum_{j\geq 0}\bar D^j\bar C\cdot [I\quad 0]\cdot \sum_{s\geq 0}A^sz^{n-j-s-2}\cdot [I\quad 0]^t. \end{equation} Exploiting this relation we can prove the following. \begin{lemma}\label{lem:tech} Let $A= \begin{bmatrix} \bar A&\bar B\\ \bar C&\bar D \end{bmatrix}$ be a square matrix with square diagonal blocks, $\bar C=uv^t$ and suppose that the spectrum of $A$ and $\bar D$ is contained in $B(0, 1)$. Consider $f(z)=\sum\limits_{n\geq 0}a_nz^n$ for $|z|\leq 1$ and let $f(A)=\begin{bmatrix} *&*\\ \tilde C&* \end{bmatrix}$ be partitioned according to $A$. 
Then \begin{align*} \tilde C&=\sum_{n\geq 1}a_n \left[ u\ \vline\ \bar D\cdot u\ \vline\ \dots\ \vline\ \bar D^{n-1}\cdot u \right]\cdot \left[ (A^t)^{n-1}\tilde v\ \vline\ \dots\ \vline\ A^t\tilde v\ \vline\ \tilde v \right]^t[ I \ 0 ]^t \end{align*} with $\tilde v=[I\ 0]^tv$. \end{lemma} \begin{proof} By the Dunford-Cauchy formula, the subdiagonal block $\tilde C$ is equal to $\frac{1}{2\pi i}\int_{S^1}G(z)dz$. By means of the residue theorem we can write the latter as the coefficient of degree $-1$ in \eqref{cbar}, that is \[ \tilde C = \sum_{n\geq 1}a_n\sum_{j=0}^{n-1}\bar D^juv^t\cdot [I\ 0]A^{n-j-1}[I\ 0]^t=\sum_{n\geq 1}a_n\sum_{j=0}^{n-1}\bar D^ju\tilde v^tA^{n-j-1}[I\ 0]^t, \] which is in the sought form. \end{proof} \begin{remark}\label{rem:outer} The expression that we obtained for $\tilde C$ in the previous Lemma is a sum of outer products of vectors of the form $\bar D^ju$ with $(A^t)^{n-j-1}\tilde v$, where the spectral radii of $A$ and $\bar D$ are both less than $1$. This implies that the addends become negligible for a sufficiently large $n$. So, in order to derive bounds for the singular values, we will focus on the truncated sum \begin{equation} \label{kryl_negl} \sum_{n= 1}^sa_n \left[ u\ \vline\ \bar D\cdot u\ \vline\ \dots\ \vline\ \bar D^{n-1}\cdot u \right]\cdot \left[ (A^t)^{n-1}\tilde v\ \vline\ \dots\ \vline\ A^t\tilde v\ \vline\ \tilde v \right]^t[ I \ 0 ]^t \end{equation} which can be rewritten as: \begin{equation} \label{kyl/horn} \left[ u\ \vline\ \bar D\cdot u\ \vline\ \dots\ \vline\ \bar D^{s-1}\cdot u \right]\cdot \left[ \sum_{n=0}^{s-1}a_{n+1}(A^t)^n\tilde v\ \vline\ \dots\ \vline\ (a_s A^t+a_{s-1}I)\tilde v\ \vline\ a_s \tilde v \right]^t[ I \ 0 ]^t. \end{equation} The columns of the left factor span the Krylov subspace $\mathcal K_s(\bar D,u):=\operatorname{Span}\{u,\bar D u, \dots,\bar D^{s-1}u\}$. Let $p(z):=\sum_{n=0}^{s-1}a_{n+1} z^n$.
Looking closely at the columns of the right factor in \eqref{kyl/horn} we can see that they correspond to the so called Horner shifts (which are the intermediate results obtained while evaluating a polynomial using the Horner rule \cite{henrici}) of $p(A^t)\tilde v$. In the following we will refer to the patterns in the factors of \eqref{kyl/horn} as \emph{Krylov} and \emph{Horner} matrices, respectively. \end{remark} \section{Outer products, QR factorization and singular values}\label{sec:QR} The problem of estimating the numerical rank of an outer product is addressed for example in \cite{netna}, where the authors estimate the singular values of a matrix $X = \sum_{i = 1}^{n} u_i v_i^*$ ---where the superscript $*$ stands for the usual complex conjugate transposition--- exploiting the exponential decay in the norms of the rank $1$ addends. However, such an estimate is sharp only when the vectors $u_i$ and $v_i$ are orthogonal. In general, the singular values of $X$ decay quickly also when the vectors $u_i$ and/or $v_i$ tend to become parallel as $i$ increases. For this reason, in this work we rephrase the expression $X = \sum_{i = 1}^n u_i v_i^*$ as $X = \sum_{i = 1}^m \tilde u_i \tilde v_i^*$ where $\tilde u_i$ and $\tilde v_i$ are chosen as ``orthogonal as possible''. To this aim we study the QR decomposition of the matrices \[ U = \begin{bmatrix} \\ u_1 & u_2 & \cdots & u_n \\ \\ \end{bmatrix},\qquad V = \begin{bmatrix} \\ v_1 & v_2 & \cdots & v_n \\ \\ \end{bmatrix}. \] We indicate their QR decompositions as $U = Q_UR_U$ and $V = Q_VR_V$ where $Q_U,Q_V$ are $m \times m$ and $R_U,R_V$ have $m$ rows and $n$ columns. This section is divided into five parts. In the first we study the element-wise decay in the QR factorization of Krylov matrices. In the second we show how to handle the case in which the matrix $A$ is not diagonalizable. In the third we study the same properties for Horner matrices. 
In Section~\ref{sec:krylovhornersingdecay} we show that the singular values of a Krylov/Horner outer product inherit the decay. Finally, in Section~\ref{sec:singdecay} we derive bounds for the off-diagonal singular values of $f(A)$. \subsection{Decay in the entries of the $R$ factor for Krylov matrices} In this section we show how to exploit the relation between Krylov subspaces and polynomial approximation \cite{saad2003iterative}. More precisely, we relate the decay in the matrix $R$ to the convergence of a minimax polynomial approximation problem in a subset of the complex plane. The rate of convergence of the latter problem depends on the geometry of the spectrum of $A$. In particular, for every compact connected subset of $\mathbb C$ that contains the spectrum we obtain an exponent for the decay depending on its logarithmic capacity \cite{landkof,markushevich2005theory}. In order to simplify the exposition, in this section we will assume that the matrix $A$ is diagonalizable. However, this is not strictly required and in the next subsection we show how to relax this hypothesis. Our approach is inspired by the one of Benzi and Boito in \cite{benzi2013decay,benzi2014decay}, where the authors proved the numerical preservation of sparsity patterns in matrix functions. For a classical reference on the complex analysis behind the next definitions and theorems we refer to \cite{markushevich2005theory}. \begin{definition}[Logarithmic capacity] Let $F \subseteq \mathbb C$ be a nonempty, compact and connected set, and denote with $G_\infty$ the connected component of the complement containing the point at infinity. Since $G_\infty$ is simply connected, in view of the Riemann Mapping Theorem we know that there exists a conformal map $\Phi(z)$ which maps $G_\infty$ to the complement of a disc. If we impose the normalization conditions \[ \Phi(\infty) = \infty, \qquad \lim_{z \to \infty} \frac{\Phi(z)}{z} = 1 \] then this disc is uniquely determined.
We say that its radius $\rho$ is the \emph{logarithmic capacity} of $F$ and we write $\lc(F)=\rho$. Let $\Psi=\Phi^{-1}$; for every $R>\rho$ we indicate with $C_R$ the image under $\Psi$ of the circle $\{|z|=R\}$. \end{definition} The logarithmic capacity is strictly related to the following well-known result of polynomial approximation in the complex plane. \begin{lemma}[Corollary~2.2 in \cite{ellacott1983computation}]\label{lem:minimax-approx} Let $F$ be a Jordan region whose boundary is of finite total rotation $\mathcal V$ and of logarithmic capacity $\rho$. If $f(z)$ is an analytic function on $\mathbb{C}$ then $\forall r>\rho$ and any integer $i\geq 0$ there exists a polynomial $p_i(z)$ of degree at most $i$ such that \[ \lVert f(z)-p_i(z)\rVert_{\infty, F}\leq \frac{M(r)\mathcal V}{\pi(1-\frac{\rho}{r})}\left(\frac{\rho}{r}\right)^{i+1}, \] with $M(r):=\max_{C_r} |f(z)|$. \end{lemma} In order to exploit Lemma~\ref{lem:minimax-approx} in our framework we need to introduce some new constants related to the geometry of the set $F$. \begin{definition} Given $F \subseteq \mathbb C$ compact, connected with $\lc(F)=\rho\in(0,1)$ we indicate with $R_F$ the quantity \[ R_F:=\sup\{R>\rho:\ C_R \text{ is strictly contained in the unit disc}\}. \] \end{definition} \begin{definition} We say that $F\subset \mathbb C$ is {\em enclosed} by $(\rho,R_F,\mathcal V_F)$ if there exists a Jordan region $F'$ whose boundary has finite total rotation\footnote{% See \cite[Section 2, p. 577]{ellacott1983computation} for the definition of total rotation.} $\mathcal V_F$, with $\lc(F')=\rho$, $R_F=R_{F'}$ and $F\subseteq F'$. \end{definition} \begin{definition} We say that $A\in\mathbb C^{m\times m}$ is {\em enclosed} by $(\rho,R_A,\mathcal V_A)$ if the set of its eigenvalues is enclosed by $(\rho,R_A,\mathcal V_A)$. \end{definition} \begin{definition} Let $J$ be the Jordan canonical form of $A\in\mathbb C^{m\times m}$. Let $\mathbb V :=\{V\in \mathbb{C}^{m\times m}:\ V^{-1}AV=J\}$.
We define the quantity \[ \kappa_{s}(A):=\inf_{V\in\mathbb V}\parallel V\parallel_2 \parallel V^{-1}\parallel_2. \] \end{definition} We can now proceed to study the $R$ factor of a Krylov matrix. \begin{theorem} \label{thm:krylov-decay} Let $A\in\mathbb{C}^{m\times m}$ be a diagonalizable matrix enclosed by $(\rho,R_A,\mathcal V_A)$, $\rho\in(0,1)$ and $b\in\mathbb{C}^m$. Moreover, let $U$ be the matrix whose columns span the $n$-th Krylov subspace $\mathcal{K}_n(A,b)$: \[ U = \left[ \begin{array}{c|c|ccc} b & Ab & A^2 b & \ldots & A^{n-1} b \\ \end{array} \right]. \] Then $\forall r\in (\rho,R_A)$ the entries of the R factor in the QR decomposition of $U$ satisfy \[ |R_{ij}| \leq c(r)\cdot \kappa_{s}(A)\cdot \left(\frac{\rho}{r}\right)^{i} \delta^j \] where $\delta=\max_{z\in C_r}|z|$ and $c(r)=\frac{\mathcal V_A}{\delta\pi(1-\frac{\rho}{r})}\cdot \norm {b}_2$. \end{theorem} \begin{proof} Let $QR = U$ be the QR factorization of $U$ and $V^{-1}AV=D$ the spectral decomposition of $A$. Notice that the quantity $\norm{R_{i+1:j,j}}_2$ is equal to the norm of the projection of $u_j$ onto the orthogonal complement of the space spanned by the first $i$ columns of $U$, that is, onto $\mathcal K_{i}(A, b)^\perp$. It is well-known that the Krylov subspace $\mathcal{K}_i(A,b)$ contains all the vectors of the form $p(A) b$ where $p$ has degree at most $i-1$. In particular, we have: \begin{align*} |R_{i+1,j}| \leq \norm{R_{i+1:j,j}}_2 \leq \min_{deg(p)=i-1}\lVert p(A) b - u_{j} \rVert_2&\leq \min_{deg(p)=i-1}\lVert p(D) - D^{j-1} \rVert_2 \lVert V^{-1}\rVert_2 \lVert V\rVert_2\lVert b\rVert_2\\ &\leq \frac{M(r)\mathcal V_A}{\pi(1-\frac{\rho}{r})}\left(\frac{\rho}{r}\right)^i \kappa_{s}(A) \norm{b}_2, \end{align*} where $M(r)=\max_{C_r} |z|^{j-1}=\delta^{j-1}$. \end{proof} \subsection{Non diagonalizable case}\label{subsec:nondiag} The diagonalizability hypothesis can be relaxed using different strategies.
We first propose to rely on a well-known result by Crouzeix \cite{crouzeix2007numerical} based on the numerical range. Then, we discuss another approach consisting in estimating the minimax approximation error on the Jordan canonical form. \subsubsection{Numerical range} In the spirit of the results found in \cite{benzi2014decay}, we can give an alternative formulation that avoids the requirement of diagonalizability. The price to pay consists in having to estimate the minimax error bound on a set larger than the spectrum. To be precise, we need to consider the numerical range of the matrix $A$. \begin{definition} Let $A$ be a matrix in $\mathbb C^{m \times m}$. We define its numerical range $\mathcal W(A)$ as the set \[ \mathcal W(A) = \left\{ x^* A x \ | \ x \in \mathbb C^m, \ \ \lVert x \rVert_2 = 1 \right\} \subseteq \mathbb C. \] \end{definition} The numerical range is a compact convex subset of $\mathbb C$ which contains the eigenvalues of $A$. When $A$ is normal $\mathcal W(A)$ is exactly the convex hull of the eigenvalues of $A$. Moreover, it has a strict connection with the evaluation of matrix functions \cite{crouzeix2007numerical}, which is described by the following result. \begin{theorem}[Crouzeix]\label{thm:crouzeix} There is a universal constant $2\leq \mathcal C\leq 11.08$ such that, given $A\in\mathbb{C}^{m\times m}$, and a continuous function $g(z)$ on $\mathcal W(A)$, analytic in its interior, the following inequality holds: \[ \lVert g(A)\rVert_2\leq \mathcal C \cdot \lVert g(z)\rVert_{\infty, \mathcal W(A)}. \] \end{theorem} Whenever the numerical range $\mathcal W(A)$ has a logarithmic capacity smaller than $1$ it is possible to extend Theorem~\ref{thm:krylov-decay}. \begin{corollary} Let $A\in\mathbb{C}^{m\times m}$ be such that the field of values $\mathcal W(A)$ is enclosed by $(\rho,R_{\mathcal{W(A)}},\mathcal V_{\mathcal W(A)})$, $\rho\in(0,1)$ and $b\in\mathbb{C}^m$. 
Moreover, let $U$ be the matrix whose columns span the $n$-th Krylov subspace $\mathcal{K}_n(A,b)$: \[ U = \left[ \begin{array}{c|c|c|c|c} b & Ab & A^2 b & \ldots & A^{n-1} b \\ \end{array} \right]. \] Then $\forall r\in (\rho,R_{\mathcal W(A)})$ the entries of the R factor in the QR decomposition of $U$ satisfy \[ |R_{ij}| \leq c(r)\cdot \left(\frac{\rho}{r}\right)^{i} \delta^j \] where $\delta=\max_{z\in C_r}|z|$ and $c(r)=\frac{\mathcal C\cdot\mathcal V_{\mathcal W(A)}}{\delta\pi(1-\frac{\rho}{r})}\cdot \norm{b}_2$. \end{corollary} \begin{proof} Follow the same steps as in the proof of Theorem~\ref{thm:krylov-decay}, employing Theorem~\ref{thm:crouzeix} to bound $R_{ij}$. \end{proof} \subsubsection{Jordan canonical form} An alternative to the above approach is to rely on the Jordan canonical form in place of the eigendecomposition. More precisely, we can always write any matrix $A$ as $A = V J V^{-1}$ with $J$ being block diagonal with bidiagonal blocks (the so-called Jordan blocks). This implies that $f(J)$ is block diagonal with blocks $f(J_t)$, where $J_t$ and $f(J_t)$ have the following form: \[ \quad J_t= \begin{bmatrix} \lambda_t&1\\ &\ddots&\ddots\\ &&\ddots&1\\ &&&\lambda_t \end{bmatrix} \in\mathbb C^{m_t\times m_t},\quad f(J_t)=\begin{bmatrix} f(\lambda_t)&f'(\lambda_t)&\dots&\frac{f^{(m_t-1)}(\lambda_t)}{(m_t-1)!}\\ &\ddots&\ddots&\vdots\\ &&\ddots&f'(\lambda_t)\\ &&&f(\lambda_t) \end{bmatrix}. \] We can then evaluate the matrix function $f(A)$ as $f(A) = V f(J) V^{-1}$.
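As a quick sanity check of the formula for $f(J_t)$, one can build the upper triangular Toeplitz matrix above for $f=\exp$ (for which every derivative is again $\exp$) and compare it with a numerical evaluation. A sketch in Python with NumPy/SciPy; the eigenvalue and block size are illustrative.

```python
import numpy as np
from scipy.linalg import expm
from math import factorial

# A single 4x4 Jordan block J with eigenvalue lam.
lam, m = 0.3, 4
J = lam * np.eye(m) + np.diag(np.ones(m - 1), 1)

# f(J) is upper triangular Toeplitz with f^{(k)}(lam)/k! on the
# k-th superdiagonal; for f = exp every derivative equals exp(lam).
F = sum(np.exp(lam) / factorial(k) * np.diag(np.ones(m - k), k)
        for k in range(m))

print(np.linalg.norm(F - expm(J)))  # the two evaluations agree
```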
One can estimate the norm $\norm{R_{i+1:j,j}}_2$ as in the proof of Theorem~\ref{thm:krylov-decay}: \begin{equation}\label{Rjordan} |R_{i+1,j}| \leq \norm{R_{i+1:j,j}}_2 \leq \min_{deg(p)=i-1}\lVert p(A) b - u_{j} \rVert_2\leq \min_{deg(p)=i-1}\lVert p(J) - J^{j-1} \rVert_2 \cdot \kappa_s(A) \cdot \lVert b\rVert_2 \end{equation} where $p(J)=\diag(p(J_t))$, $J^j=\diag(J_t^j)$ and \begin{equation}\label{jordan} p(J_t)-J_t^j=\begin{bmatrix} p(\lambda_t)-\lambda_t^j&p'(\lambda_t)- j\lambda_t^{j-1}&\dots&\frac{p^{(m_t-1)}(\lambda_t)}{(m_t-1)!}-\frac{j!}{(j-m_t+1)!\,(m_t-1)!}\lambda_t^{j-m_t+1}\\ &\ddots&\ddots&\vdots\\ &&\ddots&p'(\lambda_t)- j\lambda_t^{j-1}\\ &&&p(\lambda_t)-\lambda_t^j \end{bmatrix}. \end{equation} We can rephrase \eqref{Rjordan} as a problem of simultaneous approximation of a function and its derivatives. \begin{lemma} Let $\mathcal S$ be a simply connected subset of the complex plane and suppose that $\exists z_0\in\mathcal S$ such that each element of $\mathcal S$ can be connected to $z_0$ with a path of length less than $1$. Let $p(z)$ be a degree $i$ polynomial approximating the holomorphic function $f'(z)$ in $\mathcal S$, such that $|f'(z) - p(z)| \leq \epsilon$ in $\mathcal S$. Then there exists a polynomial $q(z)$ of degree $i + 1$ with $q'(z) = p(z)$ such that \[ |q(z) - f(z)| \leq \epsilon \qquad z \in \mathcal S. \] \end{lemma} \begin{proof} Define $q(z)$ as follows: \[ q(z) = f(z_0) + \int_{\gamma} p(\zeta)\,d\zeta, \qquad \gamma \text{ any path connecting } z_0 \text{ and } z. \] The above definition uniquely determines $q(z)$, and we know that it is a polynomial of degree $i + 1$. Given $z \in \mathcal S$, choose a path $\gamma$ connecting $z_0$ to $z$ with length less than $1$; we have: \[ |f(z) - q(z)| = \Big|f(z_0) + \int_{\gamma} f'(\zeta)\,d\zeta - f(z_0) - \int_{\gamma} p(\zeta)\,d\zeta\Big| \leq \int_{\gamma} |f'(\zeta) - p(\zeta)|\,|d\zeta| \leq \epsilon.
\] \end{proof} If $m_{t'}$ is the maximum size among all the Jordan blocks, we can find a minimax approximating polynomial for the $(m_{t'}-1)$-th derivative of $z^j$. The above Lemma, applied repeatedly, then guarantees that the matrix \eqref{jordan} has the $(i,j)$-th entry bounded in modulus by $\frac{\epsilon}{(j-i)!}$ when $j\geq i$. An easy computation shows that both the $1$ and $\infty$ norms of \[ T=\epsilon\begin{bmatrix} 1&1&\frac{1}{2!}&\dots&\frac{1}{(m_{t'}-1)!}\\ &\ddots&\ddots&\ddots&\vdots\\ &&\ddots&\ddots&\frac{1}{2!}\\ &&&\ddots&1\\ &&&&1 \end{bmatrix} \] are bounded by $\epsilon e$, where $e$ is Napier's constant. We then have $\norm{p(J)-J^j}_2\leq\norm{T}_2\leq \sqrt{\norm{T}_1\norm{T}_{\infty}}\leq \epsilon e$. Using this relation one can prove the next result by following the same steps as in the proof of Theorem~\ref{thm:krylov-decay}. \begin{theorem} Let $A\in\mathbb{C}^{m\times m}$, $b\in\mathbb{C}^m$ and $F$ be the convex hull of the spectrum of $A$. Suppose that $F \subseteq B(0,1)$ is enclosed by $(\rho,R_F,\mathcal V_F)$, $\rho\in(0,1)$, and indicate with $m_{t'}$ the size of the largest Jordan block of $A$. Moreover, let $U$ be the matrix whose columns span the $n$-th Krylov subspace $\mathcal{K}_n(A,b)$: \[ U = \left[ \begin{array}{c|c|ccc} b & Ab & A^2 b & \ldots & A^{n-1} b \\ \end{array} \right]. \] Then $\forall r\in (\rho,R_F)$ the entries of the R factor in the QR decomposition of $U$ satisfy \[|R_{ij}| \leq c(r)\cdot \kappa_{s}(A)\cdot \left(\frac{\rho}{r}\right)^{i-(m_{t'}-1)} \delta^j, \] where $\delta=\max_{z\in C_r}|z|$ and $c(r)=\frac{e\cdot\mathcal V_{F}}{\delta\pi(1-\frac{\rho}{r})}\cdot \norm{b}_2$. \end{theorem} \subsection{Decay in the entries of the $R$ factor for Horner matrices} Here, we show that the two-way decay in the $R$ factor is shared by the right factor in \eqref{kyl/horn}, which we have identified as the Horner matrix.
\begin{theorem}\label{thm:horner-decay} Let $A\in\mathbb{C}^{m\times m}$ be a diagonalizable matrix enclosed by $(\rho,R_A,\mathcal V_A)$, $\rho\in(0,1)$ and $b\in\mathbb{C}^m$. Moreover, let $U$ be the matrix \[ U =\left[ a_s b\ \vline\ (a_s A+a_{s-1}I)b\ \vline\ \dots\ \vline \ \sum_{j=0}^{s-1}a_{j+1}A^j b \right] \] where the finite sequence $\{a_j\}_{j=1,\dots, s}$ verifies \[ |a_j|\leq \hat \gamma \cdot \hat \rho^{j},\quad \hat \gamma>0, \quad \hat \rho\in(0,1),\qquad j=1,\dots, s. \] Then the R factor in the QR decomposition of $U$ is entry-wise bounded by \[|R_{ij}| \leq c\cdot \kappa_{s}(A)\cdot \left(\frac{\rho}{R_A}\right)^{i} \hat \rho^{i+(s-j)} \] where $c=\frac{\hat \rho\hat{\gamma}\mathcal V_A}{\pi(1-\hat\rho)(1-\frac{\rho}{R_A})}\norm{b}_2$. \end{theorem} \begin{proof} Here we assume that $a_{s} \neq 0$. This is not restrictive: if $a_s = 0$, let $j < s$ be the largest index such that $a_{j'} = 0$ for all $j' > j$; then the first $s - j$ columns of $U$ are zero and can be ignored. Observe that the $j$-th column of $U$ is of the form $q(A) b$ where $q$ is the polynomial defined by the coefficients $a_j$ in reversed order, i.e., \[ q(x) := \sum\limits_{n=0}^{j-1}a_{s-j+1+n}x^n. \] The subspace spanned by the first $i$ columns of $U$ contains all the vectors of the form $p(A) b$ where $p$ is a polynomial of degree at most $i-1$. With the same argument used for proving Theorem~\ref{thm:krylov-decay} we can bound the entries of $R$ in this way \[ |R_{ij}|\leq \min_{deg(p)=i-1}\norm{p(D)-\sum_{n=0}^{j-1}a_{s-j+1+n}D^{n}}_2\cdot \kappa_s(A)\cdot \norm{b}_2.
\] Moreover \begin{align*} \min_{deg(p)=i-1}\norm{p(D)-\sum_{n=0}^{j-1}a_{s-j+1+n}D^n}_2&= \min_{deg(p)=i-1}\norm{p(D)-\sum_{n=i}^{j-1}a_{s-j+1+n}D^n}_2\\ &\leq \sum_{n=i}^{j-1}|a_{s-j+1+n}|\min_{deg(p)=i-1}\norm{p(D)-D^n}_2\\ &\leq \sum_{n=i}^{j-1}\hat \gamma\hat \rho^{s-j+1+n}\min_{deg(p)=i-1}\norm{p(D)-D^n}_2\\ &\underbrace{\leq}_{\text{Lemma~\ref{lem:minimax-approx}}}\sum_{n=i}^{j-1}\hat \gamma\hat \rho^{s-j+1+n} \frac{\mathcal V_A}{\pi(1-\frac{\rho}{R_A})}\left(\frac{\rho}{R_A}\right)^i\\ &\leq \frac{\hat \rho\hat{\gamma}\mathcal V_A}{\pi(1-\hat\rho)(1-\frac{\rho}{R_A})} \hat\rho^{s-j+i}\left(\frac{\rho}{R_A}\right)^i, \end{align*} where we used Lemma~\ref{lem:minimax-approx} with $r=R_A$. \end{proof} \begin{remark} In view of the above arguments we can rephrase Theorem~\ref{thm:krylov-decay} for non diagonalizable matrices. We obtain similar statements involving $\lc(\mathcal W(A))$ in place of $ \lc(A)$, or with a shift in the decay exponent. The same technique can be used to generalize the results of the next sections. The proofs and statements are analogous to the diagonalizable case. Therefore, we do not report them. \end{remark} \subsection{Decay in the singular values of Krylov/Horner outer products} \label{sec:krylovhornersingdecay} \subsubsection{Some preliminaries} In what follows, we indicate with $\Pi_m$ the counter identity of order $m$: \[ \Pi_m:=\begin{bmatrix} &&1\\ &\iddots\\ 1 \end{bmatrix}\in\mathbb R^{m\times m}, \] i.e., the matrix that, applied on the right, reverses the order of the columns. For technical reasons, we also need to introduce the following quantity.
\begin{definition} Given $A\in\mathbb C^{m\times m}$ enclosed by $(\rho,R_{A},\mathcal V_{A})$ and a parameter $R\in\mathbb R^+$ we define \[ \Lambda(\rho,R_{A},\mathcal V_{A},R):=\frac{\mathcal V_{A}^2}{\pi^2(R-1)(1-\frac{\rho}{R_{A}})\sqrt{1-(\frac{\rho}{R R_A})^2}}\cdot \min_{\rho< r<R_A}\frac{1}{\delta(r)(1-\delta(r)^2)(\frac{r}{\rho}-1)\sqrt{(1-\frac{\rho^2}{r^2})}}, \] where $\delta(r):=\max\{\frac{1}{R},\max_{C_r}|z|\}$. \end{definition} \subsubsection{The estimates} Now, we have all the ingredients for studying the singular values of Krylov/Horner outer products. For simplicity we state a result in the diagonalizable case, but we highlight that it is easy to recover analogous estimates for the general framework employing the techniques of Section~\ref{subsec:nondiag}. \begin{theorem}\label{thm:kryl-horn-decay} Let $b_1\in\mathbb C^{m}$, $b_2\in\mathbb C^{n}$ and $A_1\in\mathbb C^{m\times m},A_2\in\mathbb C^{n\times n}$ be two diagonalizable matrices enclosed by $(\rho,R_{A},\mathcal V_{A})$ with $\rho\in(0,1)$. Then for any finite sequence $\{a_j\}_{j=1,\dots, s}$ which verifies \[ |a_j|\leq \hat \gamma\cdot R^{-j}, \quad R>1,\quad j\in\{1,\dots, s\}, \] the singular values of \begin{equation}\label{horn-outer} X=\left[ \begin{array}{c|c|cc} b_1 & A_1b_1 & \ldots & A_1^{s-1} b_1 \\ \end{array} \right]\cdot \left[ \sum_{j=0}^{s-1}a_{j+1}A_2^j b_2 \ \vline\ \dots\ \vline\ (a_s A_2+a_{s-1}I)b_2\ \vline \ a_s b_2 \right]^t \end{equation} can be bounded by \begin{align*} \sigma_{l}(X) &\leq \gamma\cdot e^{-(\alpha+\alpha') (l+1)},\qquad \alpha=\log\left(\frac{R_A}{\rho}\right),\qquad \alpha'=\log\left(R\right), \end{align*} where $ \gamma :=\hat{\gamma}\cdot\kappa_s(A_1) \kappa_s(A_2)\norm {b_1}_2\norm{b_2}_2\cdot \Lambda(\rho,R_{A},\mathcal V_{A},R)$. 
\end{theorem} \begin{proof} Consider the matrices $U$ and $V$ defined as follows: \[ U = \left[ \begin{array}{c|c|cc} b_1 & A_1b_1 & \ldots & A_1^{s-1} b_1 \\ \end{array} \right], \quad V = \left[ a_s b_2 \ \vline\ (a_s A_2+a_{s-1}I)b_2\ \vline\ \dots\ \vline \ \sum_{j=0}^{s-1}a_{j+1}A_2^j b_2 \right], \] so that we have $X = U \Pi_s V^t$ as in Equation~\eqref{horn-outer}. Moreover, let $(Q_U, R_U)$ and $(Q_V, R_V)$ be the QR factorizations of $U$ and $V$, respectively. Applying Theorem~\ref{thm:krylov-decay} and Theorem~\ref{thm:horner-decay} we get that $\forall r\in(\rho,R_{A})$ \[ |R_{U,ij}| \leq c_1(r)\cdot e^{-\eta i - \beta j}\qquad \text{and}\qquad |R_{V,ij}| \leq c_2\cdot e^{-(\alpha+\alpha') i - \beta (s-j)}, \] with $\eta =\log\left(\frac{r}{\rho}\right)$, $\beta=|\log(\delta)|$, $c_1(r)=\frac{\mathcal V_{A_1}}{\delta\pi(1-\frac{\rho}{r})}\cdot \kappa_s(A_1)\cdot \norm {b_1}_2$ and $c_2=\frac{\hat \rho\hat{\gamma}\mathcal V_{A_2}}{\pi(1-\hat\rho)(1-\frac{\rho}{R_{A}})}\kappa_s(A_2)\norm{b_2}_2$. In order to bound the singular values of $X$ we look at those of $S = R_U \Pi_s R_V^*$. The entry $(i,j)$ of $S$ is obtained as the sum: \[ S_{ij} = \sum_{h = 1}^s R_{U,ih} \cdot R_{V,j(s-h)}, \qquad | R_{U,ih} \cdot R_{V,j(s-h)} | \leq c\cdot e^{-\eta i-(\alpha+\alpha')j - 2\beta h}, \] where $c=c_1(r)\cdot c_2$. Summing all the bounds on the addends we obtain \[ |S_{ij}| \leq \frac{c}{1-e^{-2\beta }} e^{-\eta i-(\alpha+\alpha') j}. \] We can estimate the $l$-th singular value by setting the first $l-1$ columns of $S$ to zero. Let $S_l$ be the matrix formed by the last $m - l+1$ columns of $S$. Since this matrix can be seen as the residual of a particular choice of a rank-$(l-1)$ approximation of $S$, we have $ \sigma_{l}(S) \leq \norm{S_l}_2 $.
The entries of $S_l$ satisfy the relation $|(S_l)_{ij}| \leq \tilde \gamma e^{-(\alpha+\alpha') l} e^{-\eta i-(\alpha+\alpha')(j-1)}$ where $\tilde \gamma=\frac{c}{1-e^{-2\beta}}$, so we obtain: \begin{align*} \norm*{\frac{e^{(\alpha+\alpha') l}}{\tilde \gamma} S_l}_F^2 &= \sum_{i = 1}^{m-l} \sum_{j = 1}^n |\frac{e^{(\alpha+\alpha') l}}{\tilde \gamma}(S_l)_{i,j}|^2 \leq \frac{e^{-2\eta}}{(1-e^{-2\eta})(1-e^{-2(\alpha+\alpha')})}. \end{align*} Since $\norm{S_l}_2 \leq \norm{S_l}_F$ we have $\sigma_{l}(S) \leq \frac{\tilde \gamma e^{-\eta}}{\sqrt{(1-e^{-2\eta})(1-e^{-2(\alpha+\alpha')})}} e^{-(\alpha+\alpha') l}=\gamma e^{-(\alpha+\alpha') l}$. \end{proof} Our final aim is to estimate the singular values of \eqref{kryl_negl} by estimating the singular values of one of its finite truncations \eqref{kyl/horn}. In order to justify this, we need to show that the addends in \eqref{kryl_negl} become negligible. Observe that the latter are outer products of two Krylov matrices in which the second factor appears in reverse order. This means that the row decay in its $R$ factor runs in the opposite direction. In the next result we see how this fact implies negligibility. \begin{theorem} \label{thm:sdecay2} Let $U = Q_U R_U$ and $V = Q_V R_V$ be QR factorizations of $U \in \mathbb{C}^{m \times n}$ and $V \in \mathbb{C}^{m \times n}$. Let $\alpha, \beta$ and $c$ be positive constants such that $|R_{U,ij}|,|R_{V,ij}| \leq c e^{-\alpha i - \beta j}$ for any $i,j$. Then the matrix $X = U\Pi_n V^*$ has singular values bounded by \[ \sigma_{l}(X) \leq \gamma e^{-\alpha (l+1)}, \qquad \gamma := \frac{c^2ne^{-(n+1)\beta}}{(1 - e^{-2\alpha})} . \] \end{theorem} \begin{proof} We can write $X = U\Pi_n V^* = Q_U R_U \Pi_n R_V^* Q_V^*$, so its singular values coincide with the ones of $S = R_U \Pi_n R_V^*$.
The element in position $(i,j)$ of $S$ is obtained as the sum \[ S_{ij} = \sum_{l = 1}^n R_{U,il} \cdot R_{V,j(n+1-l)}, \qquad | R_{U,il} \cdot R_{V,j(n+1-l)} | \leq c^2 e^{-\alpha(i+j) - \beta (n+1)} \] according to our hypotheses. Since the bound on the elements in the above summation is independent of the summation index, we can write $ |S_{ij}| \leq c^2 n e^{-\beta (n+1)} e^{-\alpha(i+j)} $. The claim then follows from the same procedure as in Theorem~\ref{thm:kryl-horn-decay}. \end{proof} \begin{remark}\label{negligible} Observe that the quantity $ne^{-\beta n}$ tends to $0$ as $n$ grows. Therefore, for sufficiently large $n$, the resulting matrix is negligible. \end{remark} \subsection{Decay in the off-diagonal singular values of $f(A)$} \label{sec:singdecay} We start with a few technical results that will make some proofs smoother. \begin{lemma}\label{dyads} Let $A^+=\sum_{j=0}^{+\infty} A_j$ with $A_j\in\mathbb{R}^{m\times n}$ matrices of rank $k$ and suppose that $\left\| A_j\right\|_2\leq \gamma e^{-\alpha |j|}$. Then \[ \sigma_l(A^+)\leq \frac{\gamma}{1-e^{-\alpha}}\cdot e^{-\alpha \frac{l-k}{k}}. \] \end{lemma} \begin{proof} Note that $\sum\limits_{j< \lceil \frac{l-k}{k}\rceil}A_j$ is at most a rank-$(l-1)$ approximation of $A^+$. This implies that \begin{align*} \sigma_l(A^+)\leq \left\| A^+-\sum_{j< \lceil \frac{l-k}{k}\rceil}A_j\right\|_2&=\left\| \sum_{j\geq \lceil \frac{l-k}{k}\rceil}A_j \right\|_2\leq \sum_{j\geq \lceil \frac{l-k}{k}\rceil}\gamma e^{-\alpha j}=\\ &=\gamma e^{-\alpha\lceil \frac{l-k}{k}\rceil}\sum_{j\geq 0}e^{-\alpha j}=\frac{\gamma}{1-e^{-\alpha}}\cdot e^{-\alpha \lceil \frac{l-k}{k}\rceil}. \end{align*} \end{proof} \begin{lemma} \label{lem:sumdecay} Let $A = \sum_{i = 1}^k A_i \in \mathbb{C}^{n \times n}$ where $ \sigma_j(A_i) \leq \gamma e^{- \alpha j}$, for $j = 1, \ldots, n$. Then $ \sigma_j(A) \leq \tilde \gamma e^{-\alpha \frac{j - k}{k} }, \quad \tilde \gamma = \frac{k\gamma}{1 - e^{-\alpha}} $.
\end{lemma} \begin{proof} Relying on the SVD, we write $A_i=\sum_{j=1}^{\infty}\sigma_j(A_i)u_{i,j}v_{i,j}^*$ where $u_{i,j}$ and $v_{i,j}$ are the singular vectors of $A_i$ and where, for convenience, we have expanded the sum to an infinite number of terms by setting $\sigma_j(A_i)=0$ for $j>n$. This allows us to write \[ A = \sum_{i = 1}^k A_i = \sum_{j = 1}^{\infty} \left( \sum_{i = 1}^k \sigma_j(A_i) u_{i,j} v_{i,j}^* \right) = \sum_{j = 1}^\infty \tilde A_j. \] Observe that each $\tilde A_j$ has rank at most $k$ and $\lVert \tilde A_j \rVert_2 \leq k\gamma e^{-\alpha j}$. Applying Lemma~\ref{dyads} completes the proof. \end{proof} \begin{lemma}\label{lem:rank-corr} Let $A,B\in\mathbb C^{m\times m}$ and suppose that $B$ has rank $k$. Then \[ \sigma_{j+k}(A+B)\leq \sigma_j(A). \] \end{lemma} \begin{proof} By the Eckart--Young--Mirsky theorem, for every $j=1,\dots,m$ there exists $\widetilde A$ of rank $j-1$ such that $\norm{A-\widetilde A}_2=\sigma_j(A)$. Therefore, since $\widetilde A+ B$ has rank at most $j+k-1$, we have \[ \sigma_{j+k}(A+B)\leq \norm{(A+B)-(\widetilde A+B)}_2=\sigma_j(A). \] \end{proof} We are now ready to study the singular values of the matrix resulting from applying a function to a matrix. We prefer to begin by stating a simpler result which holds for matrices with spectrum contained in $B(0,1)$ and a function holomorphic on a larger disk. In the following corollaries it is shown how to adapt this result to more general settings. \begin{theorem}\label{thm:matrix-func-decay} Let $A\in\mathbb C^{m\times m}$ be quasiseparable of rank $k$ and such that $A$ and all its trailing submatrices are enclosed in $(\rho,R_A,\mathcal V_A)$ and diagonalizable. Consider $f(z)$ holomorphic on $B(0,R)$ with $R>1$.
Then, we can bound the singular values of a generic off-diagonal block $\tilde C$ in $f(A)$ with \[ \sigma_l(\tilde C)\leq \gamma e^{-\frac{(\alpha+\alpha')l}{k}},\quad \alpha=\log\left(\frac{R_A}{\rho}\right),\quad \alpha'=\log( R), \] where $\gamma:= \max\limits_{|z|=R} |f(z)|\cdot \kappa_{max}^2\cdot \norm {A}_2\cdot \Lambda(\rho,R_A,\mathcal V_A,R)\cdot \frac{k\cdot \rho}{R R_A-\rho}$ and $\kappa_{max}$ is the maximum among the spectral condition numbers of the trailing submatrices of $A$. \end{theorem} \begin{proof} Consider the partitioning $A= \left[\begin{smallmatrix} \bar A&\bar B\\ \bar C&\bar D \end{smallmatrix}\right]$ and, for simplicity, the case $k=1$, $\bar C=uv^t$. The general case is obtained by linearity, summing $k$ objects of this kind coming from the SVD of $\bar C$ and applying Lemma~\ref{lem:sumdecay}. We recall the Dunford--Cauchy formula for $f(A)$ \[ f(A)=\frac{1}{2\pi i}\int_{S^1}(zI-A)^{-1}f(z)dz. \] Let $f(z)=\sum_{n\geq 0}a_nz^n$ be the Taylor expansion of $f(z)$ in $B(0,R)$. The corresponding off-diagonal block $\tilde C$ in $f(A)$ can be written as the outer product in Remark~\ref{rem:outer} \begin{equation}\label{representation} \left[ u\ \vline\ \bar D\cdot u\ \vline\ \dots\ \vline\ \bar D^{s-1}\cdot u \right]\cdot \left[ \sum_{n=0}^{s-1}a_{n+1}( A^t)^n\bar v\ \vline\ \dots\ \vline\ (a_s A^t+a_{s-1}I)\bar v\ \vline\ a_s \bar v \right]^t[ I \ 0 ]^t+g_s(A), \end{equation} where $\bar v=[I\ 0]^tv$ and $g_s(A)$ is the remainder of the truncated Taylor series at order $s$. Since $f(z)$ is holomorphic in $B(0,R)$, the coefficients of $f(z)$ satisfy \cite[Theorem 4.4c]{henrici} \[ |a_j|\leq \max_{|z|=R}|f(z)| \cdot R^{-j}. \] Applying Theorem~\ref{thm:kryl-horn-decay} we get that $\forall r\in(\rho,R_{A})$ \[ \sigma_l(\tilde C-g_s(A))\leq\gamma e^{-(\alpha+\alpha') l}, \] with $\alpha,\alpha',\delta,\kappa_{max}$ as in the statement and $\gamma= \max\limits_{|z|=R} |f(z)|\cdot \kappa_{max}^2\norm {A}_2\cdot \Lambda(\rho,R_A,\mathcal V_A,R)$.
Observing that this bound is independent of $s$ and that $\lim_{s\to\infty}g_s(A)=0$, we obtain the claim. \end{proof} \begin{corollary}\label{cor:decay-func} Let $A\in\mathbb C^{m\times m}$ be a $k$-quasiseparable matrix, $z_0\in\mathbb C$ and $R'\in\mathbb R^+$ such that $R'^{-1}(A-z_0I)$ is enclosed in $(\rho,R_A,\mathcal V_A)$. Then, for any holomorphic function $f(z)$ in $B(z_0,R)$ with $R>R'$, any off-diagonal block $\tilde C$ in $f(A)$ has singular values bounded by \[ \sigma_l(\tilde C)\leq \gamma e^{-\frac{(\alpha+\alpha')l}{k}},\quad \alpha=\log\left(\frac{R_A}{\rho}\right),\quad \alpha'=\log\left( \frac{R}{R'}\right), \] where $\gamma:= \max\limits_{|z-z_0|=R} |f(z)|\cdot \kappa_{max}^2\cdot\norm {A-z_0I}_2\cdot \Lambda(\rho,R_A,\mathcal V_A,R)\cdot \frac{k\cdot \rho}{R R_A-\rho R'}$ and $\kappa_{max}$ is the maximum among the spectral condition numbers of the trailing submatrices of $R'^{-1}(A-z_0I)$. \end{corollary} \begin{proof} Define $g(z)=f(R'z+z_0)$, which is holomorphic on $B(0,\frac{R}{R'})$. Observing that $f(A)=g(R'^{-1}(A-z_0I))$ we can conclude by applying Theorem~\ref{thm:matrix-func-decay}. \end{proof} \begin{remark} If we can find $z_0\in\mathbb C$ such that $\norm{A-z_0I}_2<R$ then it is always possible to find $(\rho,R_A,\mathcal V_A)$ with $\rho\in(0,1)$ which satisfies the hypothesis of the previous corollary. A worst-case estimate for $\frac{\rho}{R_A}$ is $\frac{\norm{A-z_0I}_2}{R}$, since this is the radius of a circle containing the spectrum of the rescaled matrix and --- given that the Riemann map for a ball centered at $0$ is the identity --- $R_A=1$. \end{remark} \begin{example}[Real spectrum] Here we estimate the quantity $\frac{R_A}{\rho}$ in the case of a real spectrum for the matrix $A$. Suppose that --- possibly after a scaling --- the latter is contained in the symmetric interval $[-a,a]$ with $a\in(0,1)$.
The logarithmic capacity of this set is $\frac{a}{2}$ and the inverse of the associated Riemann map is $\psi(z)=z+\frac{a^2}{4z}$. This follows by observing that the function $z+z^{-1}$ maps the circle of radius $1$ into $[-2,2]$; it is then sufficient to compose the latter with two homothetic transformations to get $\psi(z)$. Moreover, observe that --- given $r\geq\frac{a}{2}$ --- $\psi$ maps the circle of radius $r$ into an ellipse with foci $\pm a$. Therefore, in order to get $R_A$ it is sufficient to compute for which $r$ we have $\psi(r)=1$. This corresponds to finding the solution of $r+\frac{a^2}{4r}=1$ which is greater than $\frac{a}{2}$. This yields \[ R_A=\frac{1+\sqrt{1-a^2}}{2}\quad \Rightarrow\quad \frac{R_A}{\rho}=\frac{1+\sqrt{1-a^2}}{a}. \] \end{example} \section{Functions with singularities} \label{sec:poles} If some singularities of $f$ lie inside $B(z_0,R)$ then $f(A)\neq \frac{1}{2\pi i}\int_{\partial B(z_0,R)}f(z)(zI-A)^{-1}dz$. However, since the coefficients of the Laurent expansion of $f$ with negative degrees in \eqref{cbar} do not affect the result, the statement of Theorem~\ref{thm:matrix-func-decay} holds for the matrix $\frac{1}{2\pi i}\int_{\partial B(z_0,R)}f(z)(zI-A)^{-1}dz$. In this section we prove that --- under mild conditions --- the difference of the above two terms still has a quasiseparable structure. This numerically preserves the quasiseparability of $f(A)$. \subsection{An extension of the Dunford-Cauchy integral formula} The main tool used to overcome the difficulties arising in the presence of singularities will be the following result, which is an extension of the integral formula used in Definition~\ref{def:matrix-func1}. \begin{theorem}\label{thm:ext-cauchy} Let $f(z)$ be a meromorphic function with a discrete set of poles $\mathcal P$ and $A\in\mathbb C^{m\times m}$ with spectrum $\mathcal S$ such that $\mathcal S\cap\mathcal P=\emptyset$.
Moreover, consider a simple closed curve $\Gamma$ in the complex plane which encloses $\mathcal S$, and let $T:=\{z_1,\dots,z_t\}\subseteq \mathcal P$ be the subset of poles enclosed by $\Gamma$, with orders $d_{1},\dots,d_{t}$ respectively. Then \[ \frac{1}{2\pi i}\int_{\Gamma}(zI-A)^{-1}f(z)dz=f(A)+\sum_{j=1}^tR_j(z_jI-A), \] where $R_j$ is the rational function \[ R_j(z):=\sum_{l=1}^{d_j}(-1)^{l+1}\frac{f_j^{(d_j-l)}(z_j)}{(d_j-l)!}z^{-l} \] and $f_j(z)=(z-z_j)^{d_j}f(z)$, extended to the limit in $z_j$. In particular, if the poles in $T$ are simple then \[ \frac{1}{2\pi i}\int_{\Gamma}(zI-A)^{-1}f(z)dz=f(A)+\sum_{j=1}^tf_j(z_j)\cdot(z_jI-A)^{-1}=f(A)+\sum_{j=1}^tf_j(z_j)\mathfrak R(z_j). \] \end{theorem} \begin{proof} We first prove the statement for $A$ diagonalizable. Assume that $V^{-1}AV=\hbox{diag}(\lambda_1,\ldots,\lambda_m)$; then \begin{equation}\label{residue} \frac{1}{2\pi i}\int_{\Gamma}(zI-A)^{-1}f(z)dz=V\begin{bmatrix} \frac{1}{2\pi i}\int_{\Gamma}\frac{f(z)}{z-\lambda_1}dz\\ &\ddots\\ &&\frac{1}{2\pi i}\int_{\Gamma}\frac{f(z)}{z-\lambda_m}dz \end{bmatrix}V^{-1}. \end{equation} Applying the Residue theorem we arrive at \[ \frac{1}{2\pi i} \int_{\Gamma}\frac{f(z)}{z-\lambda_p}dz=\res\left(\frac{f}{z-\lambda_p},\lambda_p\right)+\sum_{j=1}^t\res\left(\frac{f}{z-\lambda_p},z_j\right),\qquad p=1,\dots,m. \] Since $\lambda_p$ is a simple pole of $\frac{f}{z-\lambda_p}$ the first summand is equal to $f(\lambda_p)$. On the other hand, $z_j$ is a pole of order $d_j$ of $\frac{f}{z-\lambda_p}$, therefore its residue is \[ \res\left(\frac{f}{z-\lambda_p},z_j\right)= \frac{1}{(d_j-1)!}\lim_{z\to z_j}\frac{\partial^{d_j-1}}{\partial z^{d_j-1}}\left((z-z_j)^{d_j}\frac{f}{z-\lambda_p}\right)=\frac{1}{(d_j-1)!}\frac{\partial^{d_j-1}}{\partial z^{d_j-1}}\left(\frac{f_j}{z-\lambda_p}\right)(z_j).
\] One can prove by induction (see Appendix) that, given a sufficiently differentiable $f_j(z)$, it holds \begin{equation}\label{ind1} \frac{\partial^{d-1}}{\partial z^{d-1}}\left(\frac{f_j(z)}{z-\lambda_p} \right)= \sum_{l=1}^{d}(-1)^{l+1}\frac{(d-1)!}{(d-l)!}f_j^{(d-l)}(z)(z-\lambda_p)^{-l},\quad d\in\mathbb N. \end{equation} Setting $d=d_j$ in \eqref{ind1} we derive \[ \res\left(\frac{f}{z-\lambda_p},z_j\right)=R_j(z_j-\lambda_p). \] To conclude it is sufficient to rewrite the diagonal matrix in \eqref{residue} as \[ \begin{bmatrix} f(\lambda_1)\\ &\ddots\\ &&f(\lambda_m) \end{bmatrix}+\sum\limits_{j=1}^t \begin{bmatrix} R_j(z_j-\lambda_1)\\ &\ddots\\ && R_j(z_j-\lambda_m) \end{bmatrix}. \] We now prove the statement for \[ A=\begin{bmatrix} \lambda&1\\ &\ddots&\ddots\\ &&\ddots&1\\ &&&\lambda \end{bmatrix}, \] since the general non-diagonalizable case can be decomposed into sub-problems of this kind. We have that \[ \frac{1}{2\pi i}\int_{\Gamma}(zI-A)^{-1}f(z)dz=\frac{1}{2\pi i}\begin{bmatrix} \int_{\Gamma}\frac{f(z)}{z-\lambda}dz&\int_{\Gamma}\frac{f(z)}{(z-\lambda)^2}dz&\dots&\int_{\Gamma}\frac{f(z)}{(z-\lambda)^m}dz\\ &\ddots&\ddots&\vdots\\ &&\ddots&\int_{\Gamma}\frac{f(z)}{(z-\lambda)^2}dz\\ &&&\int_{\Gamma}\frac{f(z)}{z-\lambda}dz \end{bmatrix}. \] In order to reapply the previous argument it is sufficient to prove that \begin{itemize} \item[(i)] $\res(\frac{f}{(z-\lambda)^{h+1}},\lambda)=\frac{f^{(h)}(\lambda)}{h!}$ $h=1,\dots,m-1$, \item[(ii)] $\res(\frac{f}{(z-\lambda)^{h+1}},z_j)=\frac{R_j^{(h)}(z_j-\lambda)}{h!}$ $h=1,\dots,m-1$. \end{itemize} Point $(i)$ is a direct consequence of the fact that $\lambda$ is a pole of order $h+1$ of the function $\frac{f(z)}{(z-\lambda)^{h+1}}$.
Concerning $(ii)$, observe that $z_j$ is again a pole of order $d_j$ for the function $\frac{f(z)}{(z-\lambda)^{h+1}}$, so \[ \res \left( \frac{f}{(z-\lambda)^{h+1}},z_j\right) = \frac{1}{(d_j-1)!} \frac{\partial^{d_j-1}}{\partial z^{d_j-1}}\left(\frac{f_j(z)}{(z-\lambda)^{h+1}}\right)(z_j).\] One can prove by induction (see Appendix) that, for each $d, h \in \mathbb N$: \begin{equation}\label{ind2} \frac{\partial^{d-1}}{\partial z^{d-1}}\left(\frac{f_j(z)}{(z-\lambda)^{h+1}} \right)=\frac{(d-1)!}{h!}\sum_{l=1}^{d}(-1)^{l+h+1}\frac{(l+h-1)!}{(d-l)!(l-1)!} f_j^{(d-l)}(z)(z-\lambda)^{-(h+l)}. \end{equation} Differentiating $R_j$ $h$ times yields \[ R_j^{(h)}(z)=\sum_{l=1}^{d_j}(-1)^{l+h+1}\frac{(l+h-1)!}{(d_j-l)!(l-1)!} f_j^{(d_j-l)}(z_j)z^{-(h+l)}, \] and by setting $d=d_j$ in \eqref{ind2} we finally get $(ii)$. \end{proof} \subsection{Functions with poles} As a direct application of Corollary~\ref{cor:decay-func} we can give a concise statement in the case of simple poles. \begin{corollary}\label{cor:poles} Let $A\in\mathbb C^{m\times m}$ be a quasiseparable matrix with rank $k$, $z_0\in\mathbb C$ and $R'\in\mathbb R^+$ such that $R'^{-1}(A-z_0I)$ is enclosed in $(\rho,R_A,\mathcal V_A)$. Consider $R>R'$ and a function $f(z)$ holomorphic on the annulus $\mathcal A:=\{R'<|z-z_0|<R\}$. If the disc $B(z_0,R')$ contains $t$ simple poles of $f$ then any off-diagonal block $\tilde C$ in $f(A)$ has singular values bounded by \[ \sigma_l(\tilde C)\leq \gamma e^{-\frac{(\alpha+\alpha')(l-tk)}{k}},\quad \alpha=\log\left(\frac{R_A}{\rho}\right),\quad \alpha'=\log\left( \frac{R}{R'}\right), \] where $\gamma:= \max\limits_{|z-z_0|=R} |f(z)|\cdot \kappa_{max}^2\cdot\norm {A-z_0I}_2\cdot \Lambda(\rho,R_A,\mathcal V_A,R)\cdot \frac{k\cdot \rho}{R R_A-\rho R'}$ and $\kappa_{max}$ is the maximum among the spectral condition numbers of the trailing submatrices of $R'^{-1}(A-z_0I)$.
\end{corollary} \begin{proof} Let $f(z)=\sum\limits_{n\in\mathbb Z}a_n(z-z_0)^n$ be the series expansion of $f$ in $\mathcal A$ and $z_{1},\dots,z_{t}$ be the simple poles of $f$ inside $B(z_0,R')$. Then \[ |a_j|\leq \norm{f(z)}_{\infty, \partial B(z_0,R)}\cdot \left(\frac{R'}{R}\right)^j,\qquad j\geq 0. \] According to what we observed at the beginning of Section~\ref{sec:poles} we can apply Corollary~\ref{cor:decay-func} to the off-diagonal singular values of $B:=\frac{1}{2\pi i}\int_{\partial B(z_0,R')}f(z)(zI-A)^{-1}dz$. Moreover, using Theorem~\ref{thm:ext-cauchy} we get \[ f(A)=B-\sum_{j=1}^tf_{j}(z_{j})\cdot(z_{j}I-A)^{-1}. \] Observing that the second summand has quasiseparable rank at most $tk$, we can conclude, using Lemma~\ref{lem:rank-corr}, that the bound on the singular values of $f(A)$ is the same as the one which holds for $B$, shifted by the quantity $t\cdot k$. \end{proof} \subsection{Functions with essential singularities} Consider the case of a function $f(z)$ holomorphic in $\mathbb C\setminus\{a\}$ with an essential singularity in $a$. Moreover, suppose that $a$ is not an eigenvalue of the argument $A\in\mathbb C^{m\times m}$. In a suitable punctured disk $B(a,R)\setminus\{a\}$ --- which contains the spectrum of $A$ --- we can expand $f$ as \[ f(z):=\sum_{n\in\mathbb Z}a_n(z-a)^n. \] In particular we can decompose $f$ as $f_1(z-a)+f_2((z-a)^{-1})$ with $f_i$ holomorphic on $B(0,R)$ for $i=1,2$. Therefore \[ f(A)=f_1(A-aI)+f_2((A-aI)^{-1}). \] Since $f_1$ and $f_2$ are both holomorphic and the operations of shift and inversion preserve the quasiseparable rank, we can apply Theorem~\ref{thm:matrix-func-decay} and Lemma~\ref{lem:sumdecay} in order to get estimates on the off-diagonal singular values of $f(A)$. One can use this approach in the case of finite order poles and find bounds equivalent to those of Corollary~\ref{cor:poles}, although in a less explicit form.
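As a hedged numerical illustration of this argument (our own toy setup, not one of the paper's experiments), one can check the predicted off-diagonal decay for $f(z)=e^{1/z}$, which has an essential singularity at $a=0$, applied to a Hermitian tridiagonal (hence $1$-quasiseparable) matrix whose spectrum stays away from $0$:

```python
import numpy as np

# Illustrative sketch: f(z) = exp(1/z) has an essential singularity at a = 0.
# For a Hermitian tridiagonal matrix A with spectrum in [2, 3], the theory
# above predicts rapid decay of the off-diagonal singular values of f(A).
rng = np.random.default_rng(0)
m = 200
T = np.diag(rng.standard_normal(m)) + np.diag(rng.standard_normal(m - 1), 1)
T = T + T.T                          # Hermitian tridiagonal
w, V = np.linalg.eigh(T)
# an affine rescaling keeps T tridiagonal and moves the spectrum into [2, 3]
alpha = 1.0 / (w.max() - w.min())
beta = 2.0 - alpha * w.min()
A = alpha * T + beta * np.eye(m)
wA = alpha * w + beta                # eigenvalues of A, in [2, 3]
fA = (V * np.exp(1.0 / wA)) @ V.T    # f(A) through the eigendecomposition
s = np.linalg.svd(fA[m // 2:, :m // 2], compute_uv=False)
assert s[25] / s[0] < 1e-10          # fast off-diagonal singular value decay
```

The matrix sizes, the interval $[2,3]$ and the decay threshold are arbitrary choices made for the sake of the illustration.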
\subsection{Functions with branches} We conclude this section by describing how to adapt the approach to the case of functions with multiple branches. The same trick can be used to deal with other scenarios, such as the presence of the singularities described previously. The main idea is that, in the integral definition of a matrix function, the path $\Gamma$ does not need to be a single Jordan curve, but can be defined as a union of a finite number of them. The only requirement is that the function is analytic in the Jordan regions, and that the spectrum is contained in their union. In our setting, it might happen that we cannot enclose the spectrum in a single ball without capturing also the branching point. However, it is always possible to cover it with the union of a finite number of such balls. In this context, assuming that the path $\Gamma$ splits into the boundaries of $t$ balls, denoted by $\Gamma_1, \ldots, \Gamma_t$, one has \[ f(A) = \sum_{i = 1}^t \frac{1}{2\pi i}\int_{\Gamma_i} f(z) \mathfrak R(z) dz. \] Assuming that the number $t$ is small enough, we can obtain the numerical quasiseparability of $f(A)$ from the quasiseparability of each of the addends, relying on Lemma~\ref{lem:sumdecay}.
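A minimal numerical sketch of this split-contour evaluation (an illustrative setup of our own: a diagonal matrix whose two spectral clusters cannot be enclosed in one ball avoiding the branch point of $\sqrt{z}$ at $0$):

```python
import numpy as np

eigs = np.array([1.9, 2.1, 1.5j, 2.5j])   # two clusters, branch point at 0
A = np.diag(eigs)
f = lambda z: z ** 0.5                    # principal square root

def ball_integral(f, A, center, radius, N=128):
    """Trapezoid rule for (1/(2*pi*i)) times the integral of
    f(z)(zI - A)^{-1} over the circle |z - center| = radius."""
    m = A.shape[0]
    F = np.zeros((m, m), dtype=complex)
    for t in 2 * np.pi * np.arange(N) / N:
        z = center + radius * np.exp(1j * t)
        F += radius * np.exp(1j * t) * f(z) * np.linalg.inv(z * np.eye(m) - A)
    return F / N

# The sum of the two ball integrals recovers f(A): each ball picks up the
# spectral projectors of the eigenvalues it encloses.
fA = ball_integral(f, A, 2.0, 0.5) + ball_integral(f, A, 2.0j, 1.0)
assert np.allclose(fA, np.diag(eigs ** 0.5))
```

The two circles are chosen so that each encloses one cluster while staying away from the branch cut; the number of nodes per circle is an arbitrary (generous) choice.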
Inside each $\Gamma_i = B(z_i, r_i)$ we can perform the change of variable $\tilde z := (z - z_i)/r_i$ and write the resolvent as (here the block $D$ differs, by scaling and translation, in every $\Gamma_i$): \[ \mathfrak R(\tilde z) = \begin{bmatrix} * & * \\ (\tilde zI - D)^{-1} C(\tilde z) S_{D}(\tilde z)^{-1} & * \\ \end{bmatrix}, \qquad \begin{cases} (\tilde zI - D)^{-1} = \sum_{j \in \mathbb Z} D_j \tilde z^j \\ S_D^{-1}(\tilde z) = \sum_{s \in \mathbb Z} H_s \tilde z^s \\ \end{cases} \] The construction of the coefficients $D_j$ can be done by writing $D$ in Jordan canonical form as \[ V^{-1} D V = \begin{bmatrix} J_{\text{in}} \\ & J_{\text{out}} \\ \end{bmatrix}, \qquad V = \begin{bmatrix} V_{1} & V_{2} \\ \end{bmatrix}, \quad V^{-1} = \begin{bmatrix} W_{1} \\ W_{2} \\ \end{bmatrix} \] where $J_{\text{in}}$ refers to the part of the spectrum inside $\Gamma_i$, and $J_{\text{out}}$ to the one outside. Thanks to the change of variable in the integral, this corresponds to asking that the spectrum of $J_{\text{in}}$ is inside the unit disc, and the one of $J_{\text{out}}$ outside. Then, one has the following definition for $D_j$: \[ D_j = \begin{cases} V_{1} J_{\text{in}}^{-j-1} W_{1} & j < 0 \\ -V_{2} J_{\text{out}}^{-j-1} W_{2} & j \geq 0 \\ \end{cases}, \] and an analogous formula holds for the coefficients $H_s$. This provides the Laurent expansion of the off-diagonal block in the integrand. An analysis similar to the one carried out in the previous sections can be used to retrieve the decay of the singular values of this block. \section{Computational aspects and validation of the bounds} \label{sec:computational} In the previous sections we have proved that the numerical quasiseparable structure is often present in $f(A)$. This property can be used to speed up the matrix arithmetic operations and thus to efficiently evaluate $f(A)$ by means of contour integration.
We briefly describe the strategy in the next subsections and we refer the reader to \cite{hackbusch2016hierarchical} for more details. In Section~\ref{sec:validation} we will compare our bounds with the actual decay in some concrete cases. \subsection{Representation and arithmetic operations} In order to take advantage of the quasiseparable structure we need a representation that enables us to store the matrix and perform the matrix operations cheaply. We rely on the framework of Hierarchical representations originally introduced by Hackbusch \cite{hackbusch1999sparse,hackbusch2016hierarchical} in the context of integral and partial differential equations. It consists of a class of recursive block representations with structured sub-matrices that allows the treatment of a number of data-sparse patterns. Here, we consider a particular member of this family --- sometimes called Hierarchical off-diagonal low-rank representation (HODLR) --- which has a simple formulation and an effective impact in handling quasiseparable matrices. Let $A\in\mathbb{C}^{m\times m}$ be a $k$-quasiseparable matrix and consider the partitioning \[A=\left[\begin{smallmatrix}A_{11}&A_{12}\\ A_{21}&A_{22}\end{smallmatrix}\right],\]where $A_{11}\in\mathbb C^{m_1\times m_1}$, $A_{22}\in\mathbb C^{m_2\times m_2}$, with $m_1:=\lfloor \frac{m}{2} \rfloor $ and $m_2:=\lceil \frac{m}{2} \rceil$. Observe that the off-diagonal blocks $A_{12}$ and $A_{21}$ do not involve any element of the main diagonal of $A$, hence we can represent them in a compressed form as outer products of rank $k$. Moreover, the diagonal blocks $A_{11}$ and $A_{22}$ are square matrices which are again $k$-quasiseparable. Therefore it is possible to re-apply this procedure recursively. We stop when the diagonal blocks reach a minimal dimension $m_{\text{min}}$, and we store them as full matrices. The process is described graphically in Figure~\ref{fig:Hmatrices}.
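The recursive partitioning just described can be sketched in a few lines. The following function (illustrative only; it records the numerical off-diagonal ranks instead of building the compressed representation) follows exactly the splitting rule above:

```python
import numpy as np

def offdiag_ranks(A, mmin=64, tol=1e-12, ranks=None):
    """Recursively split A as in the HODLR partitioning and record the
    numerical rank (at relative tolerance tol) of each off-diagonal block."""
    if ranks is None:
        ranks = []
    m = A.shape[0]
    if m <= mmin:                         # minimal diagonal block: kept dense
        return ranks
    m1 = m // 2                           # m1 = floor(m/2)
    for B in (A[:m1, m1:], A[m1:, :m1]):  # the two off-diagonal blocks
        s = np.linalg.svd(B, compute_uv=False)
        ranks.append(int(np.sum(s > tol * s[0])))
    offdiag_ranks(A[:m1, :m1], mmin, tol, ranks)   # recurse on diagonal blocks
    offdiag_ranks(A[m1:, m1:], mmin, tol, ranks)
    return ranks

# A tridiagonal matrix is 1-quasiseparable: every off-diagonal block
# encountered in the recursion has rank 1.
T = (np.diag(np.arange(1.0, 257.0))
     + np.diag(np.ones(255), 1) + np.diag(np.ones(255), -1))
assert max(offdiag_ranks(T)) == 1
```

The choices of $m_{\text{min}}=64$ and of the truncation tolerance mirror the experimental setup described below, but are otherwise arbitrary.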
\begin{figure}[!ht] \centering \includegraphics[width=0.2\textwidth]{figura1}\qquad \includegraphics[width=0.2\textwidth]{figura2} \qquad \includegraphics[width=0.2\textwidth]{figura3}\qquad \includegraphics[width=0.2\textwidth]{figura4} \caption{The behavior of the block partitioning in the HODLR-matrix representation. The blocks filled with grey are low-rank matrices represented in a compressed form, and the diagonal blocks in the last step are stored as dense matrices.}\label{fig:Hmatrices} \end{figure} If $m_{\text{min}}$ and $k$ are negligible with respect to $m$ then the storage cost of each sub-matrix is $O(m)$. Since there are $O(\log(m))$ levels of recursion, this yields a linear-polylogarithmic memory consumption with respect to the size of the matrix. The HODLR representation acts on a matrix by compressing many of its sub-blocks. Therefore, it is natural to perform the arithmetic operations in a block-recursive fashion. The basic steps of these procedures require arithmetic operations between low-rank matrices or $m_{\text{min}}\times m_{\text{min}}$ matrices. If the rank of the off-diagonal blocks is small compared to $m$, then the algorithms performing the arithmetic operations have linear-polylogarithmic complexities \cite[Chapter 6]{borm}. The latter are summarized in Table~\ref{tab:complexity}, where it is assumed that the constant $k$ bounds the quasiseparable rank of all the matrices involved. Moreover, the operations are performed adaptively with respect to the rank of the blocks. This means that the result of an arithmetic operation will be an HODLR matrix with the same partitioning, where each low-rank block is a truncated reduced SVD of the corresponding block of the exact result. This operation can be carried out with linear cost, assuming the quasiseparable rank stays negligible with respect to $m$. Hence the rank is not fixed a priori but depends on a threshold $\epsilon$ at which the truncation is done.
We refer to \cite{hackbusch2016hierarchical} for a complete description. In our experiments we set $\epsilon$ equal to the machine precision $2.22\cdot 10^{-16}$ and $m_{\text{min}}=64$. \begin{table} \begin{center} \resizebox{0.8\textwidth}{!} { \footnotesize \begin{tabular}{cc} \hline Operation & Computational complexity \\ \hline Matrix-vector multiplication & $O(k m\log(m))$\\ Matrix-matrix addition & $O(k^2 m\log(m))$\\ Matrix-matrix multiplication & $O(k^2 m\log(m)^2)$\\ Matrix-inversion & $O(k^2 m\log(m)^2)$\\ Solve linear system & $O(k^2 m\log(m)^2)$\\ \hline \end{tabular} } \end{center} \caption{Computational complexity of the HODLR-matrix arithmetic. The operation \emph{Solve linear system} consists of computing the LU factorization of the coefficient matrix and solving the two triangular linear systems.}\label{tab:complexity} \end{table} \subsection{Contour integration} The Cauchy integral formula \eqref{cauchyformula} can be used to approximate $f(A)$ by means of a numerical integration scheme. Recall that, given a complex-valued function $g(x)$ defined on an interval $[a,b]$, one can approximate its integral by \begin{equation} \label{quadrature} \int_{a}^b g(x) dx\ \approx \ \sum_{k = 1}^N w_k \cdot g(x_k) \end{equation} where $w_k$ are the \emph{weights} and $x_k$ are the \emph{nodes}. Since we are interested in integrating a function on $S^1$ we can write \[ \frac{1}{2\pi i}\int_{S^1} f(z) (zI - A)^{-1} dz = \frac{1}{2\pi}\int_{0}^{2\pi} f(e^{ix}) (I - e^{-ix} A)^{-1} dx, \] where we have parametrized $S^1$ by means of $e^{ix}$. The right-hand side can be approximated by means of \eqref{quadrature}, so we obtain: \begin{equation}\label{quadrature2} f(A) \approx \frac{1}{2\pi}\sum_{k = 1}^N w_k \cdot f(e^{ix_k}) (I - e^{-ix_k} A)^{-1} = \frac{1}{2\pi}\sum_{k = 1}^N w_k \cdot e^{ix_k} f(e^{ix_k})\mathfrak R(e^{ix_k}).
\end{equation} \begin{algorithm} \caption{Pseudocode for the evaluation of a contour integral on $S^1$}\label{alg:contour} \begin{algorithmic}[1] \Procedure{ContourIntegral}{$f,A$}\Comment{Evaluate $\frac{1}{2\pi i}\int_{S^1}f(z)(zI-A)^{-1}dz$} \State $N\gets1$ \State $M\gets f(1)\cdot(I-A)^{-1}$ \State $err\gets\infty$ \While{$err> \sqrt u$} \State $M_{\text{old}}\gets M$ \State $M\gets\frac 12 M$\Comment{The new weights are applied to the old evaluations} \State $N\gets 2N$ \For{$j=1,3,\dots,N-1$}\Comment{Sum the evaluations on the new nodes} \State $z \gets e^{\frac{2 \pi i j}{N}}$ \State $M\gets M + \frac{z f(z)}N \cdot (zI - A)^{-1}$ \EndFor \State $err \gets \lVert M - M_{\text{old}} \rVert_2$ \EndWhile \State \Return $M$ \EndProcedure \end{algorithmic} \end{algorithm} This approach has already been explored in \cite{gavrilyuk2002mathcal}, mainly for the computation of $f(A) b$ due to the otherwise high cost of the inversions in the general case. The pseudocode of the procedure is reported in Algorithm~\ref{alg:contour}. Algorithm~\ref{alg:contour} --- based on \eqref{quadrature2} --- can be carried out cheaply when $A$ is represented as an HODLR-matrix, since the inversion only requires $O(m \log^2(m))$ flops. Moreover, not only is the resolvent $\mathfrak R(e^{ix_k})$ representable as an HODLR-matrix, but the same holds for the final result $f(A)$ in view of Theorem~\ref{thm:matrix-func-decay}. This guarantees the applicability of the above strategy even when dealing with large dimensions. The results in Section~\ref{sec:poles} enable us to deal with functions having poles inside the domain of integration. The only additional step that is required is to compute the correction term described in Theorem~\ref{thm:ext-cauchy}. Notice that this step just requires additional evaluations of the resolvent and so does not change the asymptotic complexity of the whole procedure.
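A dense-arithmetic sketch of the procedure (hedged: plain numpy in place of HODLR arithmetic, and a fixed tolerance in place of $\sqrt u$; all names are our own) reads:

```python
import numpy as np

def contour_integral(f, A, tol=1e-8):
    """Approximate (1/(2*pi*i)) times the integral over S^1 of
    f(z)(zI - A)^{-1} dz by the trapezoid rule, doubling the nodes until two
    successive approximations are closer than tol (dense matrices only)."""
    m = A.shape[0]
    I = np.eye(m)
    N = 1
    M = f(1.0) * np.linalg.inv(I - A)
    err = np.inf
    while err > tol:
        M_old = M
        M = M / 2                         # reweight the old evaluations
        N *= 2
        for k in range(1, N, 2):          # add the new (odd-index) nodes
            z = np.exp(2j * np.pi * k / N)
            M = M + (z * f(z) / N) * np.linalg.inv(z * I - A)
        err = np.linalg.norm(M - M_old, 2)
    return M

# Check against the eigendecomposition for a small Hermitian matrix with
# spectral radius 1/2, so that the spectrum lies inside S^1.
rng = np.random.default_rng(1)
B = rng.standard_normal((6, 6))
B = (B + B.T) / 2
B *= 0.5 / np.abs(np.linalg.eigvalsh(B)).max()
w, V = np.linalg.eigh(B)
assert np.allclose(contour_integral(np.exp, B), (V * np.exp(w)) @ V.T, atol=1e-6)
```

As in the pseudocode, doubling the number of nodes reuses the previous function evaluations through the rescaling of $M$, so each refinement only adds the new nodes.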
We now show an example where Theorem~\ref{thm:ext-cauchy} can be used to derive an alternative algorithm for the evaluation of matrix functions with poles inside the domain. More precisely, we consider a matrix $A$ with spectrum contained in the unit disc, and the evaluation of the matrix function $f(A)$ with $f(z) = \frac{e^{z}}{\sin(z)}$. Applying Theorem~\ref{thm:ext-cauchy} yields \[ f(A) = \frac{1}{2\pi i}\int_{S^1} f(z) \mathfrak R(z) dz + A^{-1}. \] One can then choose to obtain $f(A)$ by computing $e^A \cdot ( \sin A )^{-1}$, which requires the evaluation of two integrals and one inverse, or using the above formula, which only requires one integral and one inverse. We used an adaptive doubling strategy for the number of nodes: starting with the $N$-th roots of unity for a small value of $N$, we apply the quadrature rule \eqref{quadrature2} and double $N$ until the quality of the approximation is satisfactory. In order to check this, we require that the norm of the difference between two consecutive approximations is smaller than a certain threshold. The $2$-norm of an HODLR-matrix can be estimated in linear time as shown in \cite{hackbusch2016hierarchical}. Since the quadrature rule is quadratically convergent \cite{trefethen2014exponentially} and the magnitude of the distance between the approximations at step $k$ and $k+1$ is a heuristic estimate for the error at step $k$, we choose $\sqrt u$ as threshold, where $u$ is the unit round-off. In this way we should get an error of the order of $u$. Table~\ref{tab:sin} --- where the approaches relying on Theorem~\ref{thm:ext-cauchy} and on computing the two functions separately are identified by the labels ``sum'' and ``inv'', respectively --- shows that the first choice is faster (due to the reduced number of inversions required) and has a similar accuracy.
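The correction formula for $f(z)=e^z/\sin(z)$ can be verified numerically on a small dense example (our own setup; the $N=256$ quadrature nodes are far more than needed here):

```python
import numpy as np

# f(z) = e^z / sin(z): the only pole inside S^1 is z = 0, simple, with
# f_1(0) = lim_{z->0} z e^z / sin(z) = 1, hence
# f(A) = (1/(2*pi*i)) * contour integral over S^1 + A^{-1}.
rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
w = np.linspace(0.2, 0.5, 6)              # eigenvalues in B(0, 1/2), away from 0
A = (Q * w) @ Q.T

N = 256
z = np.exp(2j * np.pi * np.arange(N) / N)
integral = sum(zk * np.exp(zk) / np.sin(zk) * np.linalg.inv(zk * np.eye(6) - A)
               for zk in z) / N           # trapezoid rule on S^1
fA = (Q * (np.exp(w) / np.sin(w))) @ Q.T  # f(A) via the eigendecomposition
assert np.allclose(fA, integral + np.linalg.inv(A), atol=1e-10)
```

Keeping the eigenvalues bounded away from $0$ ensures that $A^{-1}$ is well conditioned; the correction term itself does not require any quadrature.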
The matrices in this example have been chosen to be $1$-quasiseparable and Hermitian, and we have verified the accuracy of the results by means of a direct application of Definition~\ref{def:matrix-func1}. In particular, the timings confirm the almost linear complexity of the procedure. \begin{table} \centering \pgfplotstabletypeset[ columns={0, 4, 6, 7, 9}, columns/0/.style={ column name = Size }, columns/4/.style={ column name = $t_{\text{inv}}$, postproc cell content/.style={ /pgfplots/table/@cell content/.add={}{ s} } }, columns/6/.style={ column name = $\Res_{\text{inv}}$ }, columns/7/.style={ column name = $t_{\text{sum}}$, postproc cell content/.style={ /pgfplots/table/@cell content/.add={}{ s} } }, columns/9/.style={ column name = $\Res_{\text{sum}}$ } ]{expsin.txt} \caption{Timing and accuracy of the computation of the matrix function $f(z) = e^z \sin(z)^{-1}$ on a $1$-quasiseparable Hermitian matrix $A$ with spectrum contained in the unit disc. The residuals are measured relative to the norm of the computed matrix function $f(A)$.} \label{tab:sin} \end{table} \subsection{Validation of the bounds}\label{sec:validation} This section is devoted to checking the accuracy of the estimates for the singular values that we have proved in the paper. In order to do so, we compute some matrix functions of quasiseparable matrices and verify the singular value decay in one large off-diagonal block. In particular, for a matrix of order $m$ --- $m$ even --- we consider the off-diagonal block with row indices from $\frac m2+1$ to $m$ and column indices from $1$ to $\frac m2$. Then, we compare the obtained result with the theoretical bound coming from Theorem~\ref{thm:matrix-func-decay}. Notice that Theorem~\ref{thm:matrix-func-decay} provides a family of bounds depending on a parameter $R$, which can be chosen freely as long as $f(z)$ is holomorphic in $B(0,R)$.
So, in every experiment we estimated the $l$-th singular value by choosing the parameter $R$ which provides the tightest bound, among the admissible values for the function $f$ under consideration. We choose two particular classes of $1$-quasiseparable matrices for the tests, since we can easily determine the bounds on them: \begin{description} \item[Hermitian tridiagonal matrices] These matrices are generated with elements taken from a random Gaussian distribution $N(0,1)$, and are then scaled and shifted so that their spectrum is contained in a ball of center $0$ and radius $\frac 3 4$. These matrices are normal and the same holds for their submatrices, so we can avoid the computation of the constants $\kappa_s(\cdot)$, which are all equal to $1$. \item[Hessenberg (scaled) unitary matrices] We consider a random unitary matrix which is also upper Hessenberg, and so in particular is $1$-quasiseparable (since unitary matrices are rank symmetric: the rank of each lower off-diagonal block is equal to that of the corresponding block above). We then scale the matrices by multiplying by $\frac 3 4$, in order to keep the spectrum on the circle of radius $\frac 3 4$. We obtain these matrices in MATLAB by running the commands \verb/[A,~] = qr(hess(randn(N))); A = .75 * A;/ where $N$ is the chosen dimension. \end{description} As a first example we consider the matrix exponential $e^A$, which can be easily computed by means of {\tt expm}. We have computed it for many random tridiagonal matrices of size $1000 \times 1000$, and the measured and theoretical decays in the submatrix $e^A(501:1000,1:500)$ are reported in Figure~\ref{fig:exptridiag}.
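For readers without MATLAB, both test classes can be generated with NumPy; the rank check (confirming $1$-quasiseparability, and rank symmetry in the unitary case) and the singular value computation below are our own additions, at a smaller size than in the experiments:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 200

# Hermitian tridiagonal matrix with N(0,1) entries, scaled and shifted
# so that the spectrum lies in the ball of center 0 and radius 3/4
T = np.diag(rng.standard_normal(m))
off = rng.standard_normal(m - 1)
T += np.diag(off, 1) + np.diag(off, -1)
w = np.linalg.eigvalsh(T)
T = 0.75 * (T - (w.min() + w.max()) / 2 * np.eye(m)) / ((w.max() - w.min()) / 2)

# scaled unitary upper Hessenberg matrix: the Q factor of an upper
# Hessenberg matrix is itself upper Hessenberg
Q, _ = np.linalg.qr(np.triu(rng.standard_normal((m, m)), -1))
A = 0.75 * Q

# 1-quasiseparability: both off-diagonal blocks have rank at most 1
k = m // 2
for M in (T, A):
    assert np.linalg.matrix_rank(M[k:, :k], tol=1e-10) <= 1
    assert np.linalg.matrix_rank(M[:k, k:], tol=1e-10) <= 1

# the singular values of the off-diagonal block of e^T decay very fast
w, V = np.linalg.eigh(T)
s = np.linalg.svd(((V * np.exp(w)) @ V.T)[k:, :k], compute_uv=False)
print(s[:13])
```

The printed singular values drop superexponentially, in line with the behavior shown in the figures.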
\begin{figure} \centering \begin{tikzpicture} \begin{semilogyaxis}[ xlabel=$l$,ylabel=Singular values ($\sigma_l$), ymin = 1e-20, xmax = 20,ymax=1e8, width=.45\linewidth, height=.25\textheight, ytick={1e-20,1e-14,1e-8,1e-2,1e4}, ] \addplot table[x index=0,y index=1] {expbound.txt}; \foreach \j in {2, ..., 101} { \addplot[gray, no marks,thin,dashed] table[x index = 0, y index = \j] {expbound.txt}; } \legend{Theorem~\ref{thm:matrix-func-decay}, Singular values}; \end{semilogyaxis} \end{tikzpicture} \begin{tikzpicture} \begin{semilogyaxis}[ xlabel=$l$,ylabel=Singular values ($\sigma_l$), ymin = 1e-20, xmax = 20,ymax=1e8, width=.45\linewidth, height=.25\textheight, ytick={1e-20,1e-14,1e-8,1e-2,1e4}, ] \addplot table[x index=0,y index=1] {exp2bound.txt}; \addplot[gray, no marks,thin,dashed] table[x index = 0, y index = 2] {exp2bound.txt}; \legend{Theorem~\ref{thm:matrix-func-decay}, Singular values}; \end{semilogyaxis} \end{tikzpicture} \caption{On the left, the bounds on the singular values of the off-diagonal blocks of $e^A$ for $100$ random Hermitian tridiagonal matrices, scaled in order to have spectral radius $\frac 3 4$, are shown.
In the right picture the same experiment is reported for a scaled upper Hessenberg unitary matrix (a single matrix only).} \label{fig:exptridiag} \end{figure} \begin{figure} \centering \begin{tikzpicture} \begin{semilogyaxis}[ xlabel=$l$,ylabel=Singular values ($\sigma_l$), ymin = 1e-20, xmax = 20,ymax=1e8, width=.45\linewidth, height=.25\textheight, ytick={1e-20,1e-14,1e-8,1e-2,1e4}, ] \addplot table[x index=0,y index=1] {logbound.txt}; \foreach \j in {2, ..., 101} { \addplot[gray,no marks,very thin,dashed] table[x index = 0, y index = \j] {logbound.txt}; } \legend{Theorem~\ref{thm:matrix-func-decay}, Singular values}; \end{semilogyaxis} \end{tikzpicture}~~\begin{tikzpicture} \begin{semilogyaxis}[ xlabel=$l$,ylabel=Singular values ($\sigma_l$), ymin = 1e-20, xmax = 20,ymax=1e8, width=.45\linewidth, height=.25\textheight, ytick={1e-20,1e-14,1e-8,1e-2,1e4}, ] \addplot table[x index=0,y index=1] {log2bound.txt}; \addplot[gray, no marks,thin,dashed] table[x index = 0, y index = 2] {log2bound.txt}; \legend{Theorem~\ref{thm:matrix-func-decay}, Singular values}; \end{semilogyaxis} \end{tikzpicture} \caption{The picture reports the same experiment as Figure~\ref{fig:exptridiag}, with the logarithm in place of the exponential. The matrices have, however, been shifted by $4I$ in order to make the function well-defined.
Since this corresponds to evaluating the function $\log(z + 4)$ on the original matrix, one can also find a suitable ball centered in $0$ where the function is analytic.} \label{fig:logtridiag} \end{figure} \begin{figure} \centering \begin{tikzpicture} \begin{semilogyaxis}[ xlabel=$l$,ylabel=Singular values ($\sigma_l$), ymin = 1e-20, xmax = 20, ymax=1e8, width=.45\linewidth, height=.25\textheight, ytick={1e-20,1e-14,1e-8,1e-2,1e4}, ] \addplot table[x index=0,y index=1] {sqrtbound.txt}; \foreach \j in {2, ..., 101} { \addplot[gray,no marks,very thin,dashed] table[x index = 0, y index = \j] {sqrtbound.txt}; } \legend{Theorem~\ref{thm:matrix-func-decay}, Singular values}; \end{semilogyaxis} \end{tikzpicture}~~\begin{tikzpicture} \begin{semilogyaxis}[ xlabel=$l$,ylabel=Singular values ($\sigma_l$), ymin = 1e-20, xmax = 20,ymax=1e8, width=.45\linewidth, height=.25\textheight, ytick={1e-20,1e-14,1e-8,1e-2,1e4}, ] \addplot table[x index=0,y index=1] {sqrt2bound.txt}; \addplot[gray, no marks,thin,dashed] table[x index = 0, y index = 2] {sqrt2bound.txt}; \legend{Theorem~\ref{thm:matrix-func-decay}, Singular values}; \end{semilogyaxis} \end{tikzpicture} \caption{In the left picture the bounds on the singular values of the off-diagonal blocks of $\sqrt{4I + A}$ for $100$ random Hermitian tridiagonal matrices, scaled in order to have spectral radius $\frac 3 4$, are shown. In the right picture the same experiment is repeated for a scaled and shifted upper Hessenberg unitary matrix.} \label{fig:sqrttridiag} \end{figure} Similarly, in Figure~\ref{fig:logtridiag} we have reported the analogous experiment concerning the function $\log(4I + A)$. In fact, in order for the logarithm to be well defined, we need to make sure that the spectrum of the matrix inside the logarithm does not contain any negative values.
As a last example for the tridiagonal matrices we have considered the case of the function $\sqrt{4I+A}$, where the matrix has again been shifted in order to obtain a reasonable estimate by moving the spectrum away from the branch point. The result for this experiment is reported in Figure~\ref{fig:sqrttridiag}. In the same figures we have also reported the experiments in the case of the scaled unitary Hessenberg matrix. In this case the variance in the behavior of the singular values was very small in the experiments, and so we have only reported one example for each case. Notice that while in the symmetric (or Hermitian) case every trailing diagonal submatrix is guaranteed to be normal, this is not true anymore for the scaled unitary Hessenberg matrices. Nevertheless, one can verify in practice that these matrices are still not far from normality, and so the bounds that we obtain do not degrade much. \section{Concluding remarks} \label{sec:concludingremarks} The numerical preservation of the quasiseparable structure when computing a matrix function is an evident phenomenon. Theoretically, this can be explained by the existence of accurate rational approximants of the function over the spectrum of the argument. In this work we have taken a closer look at the off-diagonal structure of $f(A)$, providing concrete bounds for its off-diagonal singular values. The off-diagonal blocks have been described as a product of structured matrices with a strong connection to Krylov spaces. This --- combined with polynomial interpolation techniques --- is the key to proving the bounds. Moreover, we have developed new tools to deal with the difficulties arising in the treatment of singularities and branch points. In particular, the formula of Corollary~\ref{cor:poles} can be employed with the technology of hierarchical matrices for efficiently computing matrix functions with singularities.
An example of this strategy has been provided along with the numerical validation of the bounds.
\section{\large Introduction} F-theory \cite{F} was originally defined as a particular D-manifold vacuum of the type IIB string. Alternatively, it may be seen as a decompactification of the type IIA string, thus lifting the well known duality between the type IIA string and the heterotic string in 6 dimensions \cite{HT,Wstd} to a duality between the heterotic string on $T^2$ and F-theory on an elliptically fibered $K3$ surface. This has afforded new insights into the study of compactifications of the heterotic string \cite{MVI,MVII}. The mechanisms by which enhanced gauge groups arise are quite different on the two sides of the duality: On the F-theory side, as for the type IIA string, singularities of the manifold give rise to gauge groups; while on the heterotic side, singularities of the manifold give rise to gauge group enhancement only if small instantons of the gauge bundle lie precisely on these singularities \cite{Wsi,SW2}. In ref.~\cite{FMW} several methods for constructing the relevant bundles were given. These results were recently used \cite{AM} to derive precisely which gauge groups and how many tensor multiplets occur when $k$ instantons are placed on a singularity of type $G$, with $G$ any simply laced Lie group.\footnote{Similar results had been derived with different methods in \cite{Int,BI1,BI2}.} Practically all Calabi--Yau manifolds studied by physicists have an interpretation in terms of toric geometry, so it is natural to consider the manifestations of the above phenomena also in these terms. Whereas earlier studies in this context \cite{KM,KMP} focused on local properties, perhaps the most elegant application involves considering complete polyhedra: Take an elliptic Calabi-Yau threefold that has $K3$ fibers. 
In this case it turns out that the $K3$ polyhedron is contained, as a subpolyhedron, in the Calabi-Yau polyhedron, and that the Dynkin diagrams of the gauge groups that occur upon compactification of the type IIA string on the corresponding threefold \cite{CF} can be seen in this $K3$ polyhedron. This phenomenon, which can be explained in terms of the intersection patterns of the toric divisors \cite{egs}, can of course be lifted to F-theory and has been studied extensively in this context \cite{CPR3}. So far, however, these studies have focused on the F-theory or IIA duals of compactifications of the $E_8\times E_8$ string. The purpose of the present note is to show that toric methods can equally easily be used for the description of the duals of the $SO(32)$ string. In fact, we will find that the same three-dimensional polyhedron that leads to a $K3$ surface with two $E_8$ singularities may also (upon choosing a different elliptic fibration and blowing down a different set of toric divisors) lead to a $D_{16}$ singularity. This is what we will explain in section two. In section three we consider two examples of compactifications to lower dimensions, thereby reproducing the recently found \cite{AM} `record gauge group' in six dimensions and obtaining a new `record gauge group' in four dimensions. \section{\large Eight dimensional vacua} An eight dimensional F-theory vacuum is determined by an elliptically fibered $K3$ surface with a section, where all components of the exceptional fibers except for those intersecting the section are blown down \cite{MVII}. The gauge group is determined by the $ADE$ classification of quotient singularities of surfaces. The intersection matrix of the blown-down divisors is given by minus the Cartan matrix of the corresponding Lie algebra. Many $K3$ surfaces can be constructed as hypersurfaces in toric varieties, in the following way \cite{Bat}: Consider a pair of reflexive\footnote{ Our notation is standard. 
For a definition of reflexivity and an introduction to toric geometry written for physicists see, for example, \cite{CDK}.} three dimensional polyhedra $(\D,\D^*)$ (for example, $\D$ could be the Newton polyhedron of a weighted projective space), with vertices in a pair of dual lattices $(M,N)$. Then each point in $\D^*$, except for the origin $\ipo$, determines a ray $v_i$ in $N$. Each of these rays corresponds \cite{Cox} to a homogeneous coordinate $z_i$ of the ambient space $\relax{\rm I\kern-.18em P}_{\D^*}$ and, consequently, to a divisor $D_i=\{z_i=0\}$. The $K3$ surface is determined by a divisor in the class $[\sum_i D_i]$. A divisor in $\relax{\rm I\kern-.18em P}_{\D^*}$ that does not correspond to a point interior to a facet (a codimension one face) of $\D^*$ results upon intersection with the $K3$ surface in a divisor in the $K3$ surface. Therefore the intersection pattern of divisors in the $K3$ surface can be determined by calculating intersections of the type \begin{equation}} \def\eeq{\end{equation} D_i\cdot D_j\cdot\sum_kD_k \eeq in $\relax{\rm I\kern-.18em P}_{\D^*}$. This calculation has been considered in detail in \cite{egs} leading to the following simple rules: Mutual intersections of divisors in the $K3$ surface are nonzero if and only if the corresponding points are joined by an edge $\th^*$ of $\D^*$; in this case their intersection number is the length $l(\th)$ of the dual edge $\th$ of $\D$ ($\th$ has length $l$ if it has $l-1$ interior lattice points). Self-intersections of divisors interior to $\th^*$ are $-2l$; the corresponding divisors are sums of $l$ rational curves. The self-intersection of a divisor corresponding to a vertex depends on the geometry of $\D^*$ in a more sophisticated way. It is $0$, however, in the case of a regular fiber and $-2$ for any component of a degenerate fiber of an elliptic fibration. 
These results were used to explain the observation made in \cite{CF} that the extended Dynkin diagrams of enhanced gauge groups are visible in the toric polyhedron. As noted in \cite{CF} and explained in \cite{k3}, the fact that a $K3$ surface is an elliptic fibration manifests itself torically in the fact that $\D^*$ contains the polygon corresponding to the generic fiber as a reflexive subpolyhedron. In this section we will consider a particular family of $K3$ surfaces which, by the results of \cite{CF}, can give rise to the group $E_8\times E_8$; it is the mirror family to the one determined by hypersurfaces of degree 12 in $\relax{\rm I\kern-.18em P}_{1,1,4,6}$. The principal point that we make in this paper is that the same dual pair of polyhedra can also give rise to the gauge group $SO(32)$. In suitable coordinates, the dual polyhedron $\D^*$ is the convex hull of the points \begin{equation} (\-1,\-0,\-0),~~~(\-0,\-1,\-0),~~~(-2,-3,\-6),~~~(-2,-3,-6). \end{equation} Altogether $\D^*$ contains 39 lattice points. By the results of \cite{crp,wtc} there is no three dimensional reflexive polyhedron with a larger number of lattice points (there is precisely one other polyhedron with the same number of points, namely the Newton polyhedron of the degree 6 surface in $\relax{\rm I\kern-.18em P}_{1,1,1,3}$). Of these points 21 are `relevant' in the sense that they are not interior to facets. As there are three independent relations of linear equivalence among the divisors in $\relax{\rm I\kern-.18em P}_{\D^*}$ we see that the Picard lattice of a generic member of the corresponding family of $K3$ surfaces has rank 18. Figure 1 consists of a picture of this polyhedron, showing that it contains two different reflexive triangles.
We have omitted the `irrelevant' points (the points interior to facets) except those belonging to the reflexive triangles. \begin{figure}[htb] \epsfxsize=3.5in \hfil\epsfbox{both.eps}\hfil \caption{\it The reflexive polyhedron that contains both the Weierstrass and the new $SO(32)$ triangle.} \label{fig:poly} \end{figure} In the first case this is the well known `Weierstrass triangle', the convex hull of the points \begin{equation} (\-1,\-0,\-0),~~~(\-0,\-1,\-0),~~~\hbox{and}~~~(-2,-3,\-0), \end{equation} lying in the plane $x_3=0$. The generic fiber is determined by a Weierstrass equation in the coordinates corresponding to the vertices of this triangle, with coefficients that depend on the coordinates coming from the points outside the plane $x_3=0$. This triangle divides $\D^*$ into two halves, each of which is an `$E_8$ top' as introduced in \cite{CF}. The section of the fibration can be identified with the divisor corresponding to the point $(-2,-3,\-0)$. The `top' and the `bottom' correspond to extended Dynkin diagrams of $E_8$, the extension points being the points $(-2,-3,\-1)$ and $(-2,-3,-1)$ above and below the `section point', respectively. In the second case our triangle lies in the plane $x_1=0$ and is given as the convex hull of the points \begin{equation} (\-0,\-1,\-0),~~~(\-0,-1,-2),~~~\hbox{and}~~~(\-0,-1,\-2). \end{equation} This triangle is dual to the Newton polyhedron of $\relax{\rm I\kern-.18em P}_{1,1,2}$\cite{MVI}. Now the $K3$ polyhedron is split in an asymmetric way: On one side we have just a single point, the corresponding divisor being a smooth fiber, whereas on the other side we have 17 relevant points forming the extended Dynkin diagram of $SO(32)$. Here we have two different sections, determined by $(\-0,-1,-2)$ and $(\-0,-1,\-2)$, respectively. This is in agreement with the assertion in \cite{AG2} that a fibration giving an $SO(32)$ string should have two distinct sections.
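The lattice point counts quoted above (39 points in total, 21 of them relevant) are easy to verify by brute force; the facet inequalities used below are our own derivation from the four vertices of $\D^*$:

```python
import numpy as np
from itertools import product

# Facet inequalities <u, x> >= -1 for the tetrahedron with vertices
# (1,0,0), (0,1,0), (-2,-3,6), (-2,-3,-6); each facet lies at lattice
# distance 1 from the origin, as reflexivity requires
U = np.array([[-1, 1, 0], [2, -1, 0], [-1, -1, -1], [-1, -1, 1]])

pts = [p for p in product(range(-2, 2), range(-3, 2), range(-6, 7))
       if np.all(U @ np.array(p) >= -1)]
print(len(pts))       # 39 lattice points in total

# `relevant' points lie on a face of codimension >= 2 (vertices and
# edge points), i.e. at least two facet inequalities are tight
relevant = [p for p in pts if np.sum(U @ np.array(p) == -1) >= 2]
print(len(relevant))  # 21
```

The remaining 17 points are interior to facets, and the origin is the unique interior point, consistent with reflexivity.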
Again we take the extension point to be the point adjacent to one of the `section points'. Compactification of F-theory on the corresponding $K3$ surface with the fibers corresponding to the Dynkin diagram blown down should be dual to the $SO(32)$ heterotic string on a $T^2$ with no Wilson lines turned on. It is easily checked by considering the dual polyhedron $\D$ that the complex structure of this model depends on two free parameters. The two different fibration structures that can be read off from our polyhedron $\D^*$ are related to the fact \cite{AM} that the Picard lattice $\G_{1,17}$ of the $K3$ allows two different decompositions as $\G_{1,1}\oplus\G_{16}$. \section{\large Lower dimensional compactifications and large gauge groups} In this section we will give examples of compactifications of F-theory to six and four dimensions such that the polyhedron of the previous section encodes K3 fibers of the corresponding compactification manifolds. Let us start by considering a Calabi-Yau threefold in the family that is mirror dual to $\relax{\rm I\kern-.18em P}_{1,1,12,28,42}[84]$ (this means that $\D^*_{CY}$ will be the Newton polyhedron of $\relax{\rm I\kern-.18em P}_{1,1,12,28,42}[84]$). In suitable coordinates the vertices of $\D^*_{CY}$ are given by \begin{equation} (\-1,\-0,\-0,\-0),~~~(\-0,\-1,\-0,\-0),~~~(-2,-3,-42,-6),~~~ (-2,-3,\-42,-6),~~~(-2,-3,\-0,\-1). \end{equation} In these coordinates it is easy to see that there are two $K3$ polyhedra in $\D^*_{CY}$. One of them lies in the hyperplane $x_3=0$. It corresponds to a `trivial top' with an $E_8$ bottom and will not be considered further. The other one lies in the hyperplane $x_4=0$. Its vertices are \begin{equation} (\-1,\-0,\-0,\-0),~~~~ (\-0,\-1,\-0,\-0),~~~~(-2,-3,-6,\-0),~~~~ (-2,-3,\-6,\-0). \end{equation} This is just the polyhedron $\D^*_{K3}$ we discussed in the previous section.
The two different elliptic fibration structures in our polyhedron $\D^*_{K3}$ carry over to $\D^*_{CY}$. As explained in \cite{fft}, the fan for the toric variety describing the base of the fibration is determined by projecting the polyhedron along the fiber (more precisely: by considering equivalence classes of points differing only by vectors lying in the plane of the fiber). In the present case, this projection just amounts to throwing away the second and third coordinate of each point. Figure 2 shows the image of $\D^*_{CY}$ under this projection and the resulting fan. \begin{figure}[htb] \epsfxsize=6in \hfil\epsfbox{base.eps}\hfil \caption{\it The fan of the base of the $SO(32)$-fibration for the threefold. The rays give rise to the indicated groups.} \label{base} \end{figure} The elliptic fiber can degenerate along the curves in the base space determined by the toric divisors. The way it degenerates can be determined by considering the preimage of the projection. For the points $(x_1,x_4)=(1,\-0)$ and $(-2,\-1)$ respectively the preimage consists of only one point, so the corresponding fibers are just smooth elliptic curves. The other divisors in the base space lead to enhanced gauge groups as indicated. The preimages of the corresponding rays are `tops', where we define a `top' to be a three dimensional lattice polyhedron with one facet containing the origin and the other facets at integer distance one from the origin. This definition implies that the facet containing the origin is a reflexive polygon. The relevant points of the `top' without the facet containing the origin form the extended Dynkin diagram of the gauge group. Note that this generalises the concept of a `top' as a half of a reflexive polyhedron as formulated in \cite{CF}, since most of the tops we will encounter cannot be completed to a reflexive polyhedron. For the $SO(\cdot)$ groups the `tops' look similar to the right part of Figure 1, but with more points along the long line. 
As an example for an `$Sp(\cdot)$ top' Figure 3 shows the `top' for $Sp(24)$. \begin{figure}[htb] \epsfxsize=4in \hfil\epsfbox{sp24.eps}\hfil \caption{\it The `top' for the group $Sp(24)$. There are 25 points in a straight line which form the extended Dynkin diagram of the group.} \label{sp24} \end{figure} There are two types of subtleties that can arise from the fact that we interpret points as points of $\D^*_{CY}$ on the one hand and as points of a `top' on the other hand, both related to the fact that they need not correspond to the same number of divisors in each picture. The first of these subtleties occurs when a point that corresponds to precisely one divisor in the Calabi-Yau space corresponds to more than one divisor in the top. In this case several divisors that form part of an ADE pattern are identified globally; the corresponding monodromy leads to a non-simply laced gauge group. This is how we get the $Sp(\cdot)$ groups. The other special case is when points that are irrelevant in the context of the `top' turn out to be relevant for the Calabi-Yau space, as discussed in \cite{CPR3}. This happens for the point $(-2,-6)$ in the base. The preimage of the corresponding ray looks like the `top' for $SO(176)$, but after identifying the points that are relevant in the context of the `top' with the corresponding Dynkin diagram, we find that there are still points that are relevant in the context of the whole Calabi-Yau space. These points, by themselves, form an $Sp(40)$ `top'. Since in this case the extra points are relevant, the total contribution from the preimage of this ray is $SO(176)\times Sp(40)$. 
Proceeding similarly with what is possibly the `largest' fourfold polyhedron, the Newton polyhedron of $\relax{\rm I\kern-.18em P}_{1,1,84,516,1204,1806}[3612]$, we may choose coordinates such that the vertices of $\D^*_{\rm fourfold}$ are given by \begin{eqnarray} (\-1,\-0,\-0,\-0,\-0),~~~~ (\-0,\-1,\-0,\-0,\-0),~~~~(-2,-3,\-3606,-6,-6), ~~~~(-2,-3,-6,\-37,-6),\nn\\ \hfill (-2,-3,-6,-6,\-1),~~~~ (-2,-3,-6,-6,-6).\hspace{3.5truecm} \end{eqnarray} The $E_8\times E_8$ or $SO(32)$ polyhedron lies in the plane $x_4=x_5=0$ and the $SO(32)$ triangle lies in the $x_2x_3$-plane. The base of the fibration is therefore determined by projecting out $x_2$ and $x_3$. This results in a tetrahedron $T$ with vertices \begin{equation} (\-1,\-0,\-0),~~~~ (-2,-6,-6),~~~~ (-2,\-37,-6),~~~~(-2,-6,\-1). \end{equation} The preimages of its vertices are single points, the exception being the vertex $(-2,-6,-6)$ whose preimage consists of $3613$ lattice points in a row. Together with the preimage of $(-1,-3,-3)$ which lies on the same ray, this gives the `top' for the group $SO(7232)$. The points that are irrelevant in the context of this top are relevant in the context of the fourfold, so we get an additional group factor of $Sp(1804)$. The preimages of other points along the boundary of $T$ are always lines whose numbers of points can be determined by linearity; e.g. the preimage of \begin{equation} (-2,\-0,\-0)={6\0 7}(-2,-6,\-1)+{6\0 43}(-2,37,-6)+ {1\0 7\times 43}(-2,-6,-6) \end{equation} consists of \begin{equation} {6\0 7}\times 1+{6\0 43}\times 1+{1\0 7\times 43}\times 3613=13 \end{equation} points giving rise to the unbroken $SO(32)$. In general $k$ points in a row give rise to $SO(2k+6)$ or $Sp(k-1)$, depending on whether the point in the boundary of $T$ is divisible by two or not. If a point that leads to an $SO(\cdot)$ group lies on an edge of $T$, then there is an additional factor of $Sp({k-5\0 2})$.
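The point count by linearity is exact rational arithmetic and is easily verified (our own check of the displayed decomposition; combined with the rule just stated, $k=13$ indeed gives $SO(2\cdot 13+6)=SO(32)$):

```python
from fractions import Fraction as F

# barycentric coefficients of (-2, 0, 0) with respect to three vertices
c = [F(6, 7), F(6, 43), F(1, 7 * 43)]
verts = [(-2, -6, 1), (-2, 37, -6), (-2, -6, -6)]

assert sum(c) == 1                       # a genuine convex combination
point = tuple(sum(ci * v[i] for ci, v in zip(c, verts)) for i in range(3))
assert point == (-2, 0, 0)               # reproduces the projected point

# number of lattice points in the preimage, by linearity
k = c[0] * 1 + c[1] * 1 + c[2] * 3613
print(k)   # 13, giving the unbroken SO(2*13 + 6) = SO(32)
```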
Altogether we get a gauge group of rank 302,896, consisting of 251 non-trivial simple factors, the largest of them being $SO(7232)$ and $Sp(3528)$ and the smallest being $SO(32)$ and $Sp(24)$. The full list of factors is given in the appendix. It is interesting that the $SO(32)$ decomposition seems to produce groups of larger rank than the $E_8\times E_8$ decomposition. For the fourfold we have been considering, the $E_8\times E_8$ decomposition leads to the group \cite{CPR3} $E_8^{2561}\times F_4^{7576}\times G_2^{20168}\times SU(2)^{30200}$ of rank $121,328$, while for the $SO(32)$ decomposition the rank is larger. The reason for this is that the number of divisors in the base is smaller in the $SO(32)$ case, because here we project along the direction along which the extension of the polytope is largest. \ni {\bf \large Acknowledgements:} We would like to thank Eugene Perevalov and Govindan Rajesh for useful discussions. This work is supported by the Austrian `Fonds zur F\"orderung der wissenschaftlichen Forschung' (Schr\"odinger fellowship J012328-PHY), by NSF grant PHY-9511632 and the Robert A. Welch Foundation. \newpage
\section{Introduction} The recent two independent analyses~\cite{7keV1,7keV2} based on X-ray observation data show an emission line at $E \simeq 3.5$ keV in the spectra coming from various galaxy clusters and the Andromeda galaxy. The observation is statistically significant ($\sim 3\sigma-4\sigma$) and, more importantly, the two analyses are quite consistent in both the location of the line in the energy spectra and the signal flux. The observed flux and the best fit energy are \begin{eqnarray} {\Phi}^{\rm MOS}_\gamma &=& 4.0^{+0.8}_{-0.8} \times 10^{-6}\, {\rm photons ~cm^{-2} ~s^{-1}}\,,\\ E^{\rm MOS}_\gamma &=& 3.57 \pm 0.02\, {\rm keV}\,, \end{eqnarray} where we take the values from the XMM-Newton MOS spectra; the results from the PN observations are similar~\cite{7keV1} and consistent with the measured values in the other analysis~\cite{7keV2}. No source of X-ray lines, including atomic transitions in thermal plasma, is known at this energy, which indicates that the observed line may originate from a new source. It would be tantalizing if dark matter (DM) provided the source of the line signal. Indeed, a decaying DM of a mass $m_{\rm DM} \simeq 2 E_\gamma \simeq 7$ keV and a lifetime $\tau_{{\rm DM} \to \gamma X} \simeq 10^{28}\, {\rm s}$ is immediately suggested to explain the observed line~\cite{7keV1,7keV2}. An annihilating DM of a mass $m_{\rm DM} \simeq E_\gamma \simeq 3.5$ keV and an annihilation cross section $\langle \sigma v\rangle_{{\rm 2 DM} \to \gamma X} \sim 2 \Gamma_{\chi}/n_\chi \sim (10^{-31}-10^{-33})~{\rm cm^3 ~s^{-1}}$ can also account for the signal, where $n_\chi =\rho_\chi/m_\chi \sim (10^3-10^5)~{\rm cm^{-3}}$ is the DM number density of galaxy clusters. However, the realization of such an annihilating DM is very challenging, since the corresponding annihilation cross section is too small compared to a typical value for a thermal WIMP (weakly interacting massive particle) DM.
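The quoted annihilation cross section is simple arithmetic, $\langle\sigma v\rangle \sim 2\Gamma_\chi/n_\chi$ with $\Gamma_\chi = 1/\tau \simeq 10^{-28}\,{\rm s^{-1}}$ (a back-of-the-envelope check of our own):

```python
Gamma = 1.0 / 1e28          # decay rate in s^-1 for tau ~ 1e28 s
for n in (1e3, 1e5):        # cluster DM number density in cm^-3
    sigma_v = 2 * Gamma / n
    print(f"n = {n:.0e} cm^-3  ->  <sigma v> ~ {sigma_v:.0e} cm^3 s^-1")
```

This yields $2\times10^{-31}$ and $2\times10^{-33}\,{\rm cm^3\,s^{-1}}$, bracketing the quoted $(10^{-31}-10^{-33})$ range.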
Other annihilation channels are also limited due to the small DM mass. Hereafter we will focus on a decaying DM model. Possible DM candidates such as a sterile neutrino and a long-lived axion have been suggested as an explanation of this signal \cite{Ishida:2014dlp,Finkbeiner:2014sja,Higaki:2014zua,Jaeckel:2014qea, Lee:2014xua,Abazajian,Krall}.\footnote{For the cases of decaying sterile neutrino and gravitino warm dark matter, the authors of Ref.~\cite{Abazajian:2001vt} estimated expected X-ray fluxes from galaxy clusters and field galaxies.} To explain the 3.5 keV line with 7 keV axion DM \cite{Higaki:2014zua,Jaeckel:2014qea}, the required axion decay constant is $f_a \simeq 10^{14-15}$ GeV, which is much larger than the conventional values preferred by most axion models~\cite{Axion,AxionReview}. In this letter, as an alternative, we examine the axino ($\tilde{a}$) as a dark matter candidate and show how it can fit the observed data. With an axion~\cite{Axion,AxionReview} as a solution of the strong CP problem, a light axino with a mass $m_{\tilde{a}}\sim \frac{M_{SUSY}^2}{f_a} \sim 7~{\rm keV}$ is an excellent DM candidate in supersymmetric models \cite{Covi:1999ty, Covi:2001nw, Covi:2009pq}. Moreover, it has been shown that an axino in the preferred mass range can be warm dark matter (WDM) satisfying the relic density constraint \cite{Covi:2009pq} through thermal production via thermal scatterings and/or non-thermal production via out-of-equilibrium decays. WDM is known to provide a solution to the small-scale conflict between observations and N-body simulations with cold dark matter (CDM), where an overproduction of galactic substructures~\cite{Moore:1999nt}, local groups~\cite{Zavala:2009ms}, and local voids~\cite{Tikhonov:2008ss} compared to the observations has been found. A lower limit on the WDM mass is $m_{\rm WDM} > 3.3$ keV from the recent high red-shift Lyman-$\alpha$ forest data~\cite{Viel:2013fqw}.
The small-scale behavior of WDM with $m_{\tilde{a}} \gtrsim 4-5$ keV is not very different from that of CDM \cite{Maccio':2012uh,Schneider:2013wwa}. Consequently, the 7 keV axino can alleviate the small-scale problems of CDM only slightly. \section{Axino dark matter with R-parity violation} The axino can be a good DM candidate even in the presence of R-parity violation. The decay channel to a neutralino and a photon, $\tilde{a} \to \tilde{\chi}_0 \gamma$, is kinematically closed since the neutralino is heavier, $m_{\tilde{\chi}^0} > m_{\tilde{a}}$; thus the axino decays mainly to standard model particles through the R-parity violating interactions. The decay width, however, is strongly suppressed by the high Peccei-Quinn symmetry breaking scale, $f_a$, and also by the small R-parity violation, so the resultant lifetime can be long enough. As a simple model we consider a bilinear type of R-parity violation \cite{Hall:1983id},\footnote{A similar model has been used to explain the 130 GeV gamma-ray line signal from the galactic center region \cite{Endo:2013si}.} described by the following superpotential, \begin{eqnarray} W_{\not{R}} = \mu_i L_i H_u\,, \end{eqnarray} where $L_i$ and $H_u$ are the lepton doublet and the up-type Higgs superfields, respectively, and the index $i = \{1,2,3\}$ runs over generations.
With these R-parity violating terms, the axino decays into a photon and a neutrino; the decay rate is given by \cite{Covi:2009pq,Endo:2013si} \begin{eqnarray}\label{decay} \Gamma_{\widetilde{a} \to \gamma \nu_i} = \frac{m_{\widetilde{a}}^3}{128 \pi^3 f_a^2}\, \alpha_{em}^2 C_{a\gamma\gamma}^2 |U_{\nu_i\widetilde{\gamma}}|^2 \end{eqnarray} with the neutrino-photino mixing parameter $U_{\nu_i\widetilde{\gamma}}$, \begin{eqnarray} U_{\nu_i\widetilde{\gamma}} \simeq \xi_i \frac{\sqrt{2}s_W M_Z}{M_1}\,, \end{eqnarray} where $\xi_i = \langle \widetilde{\nu}_i \rangle / v$, with $\langle \widetilde{\nu}_i \rangle$ the vacuum expectation values (VEVs) of the sneutrinos and $v = 246$ GeV the Higgs VEV. $M_Z$ is the mass of the standard model $Z$ boson, $M_1$ is the U(1)$_Y$ gaugino mass, and $s_W=\sin\theta_W$ where $\theta_W$ is the Weinberg angle. $C_{a\gamma\gamma}$ is a model-dependent constant of order unity, normalized as \begin{eqnarray} \mathcal{L} = \frac{\alpha_{em}}{8\pi f_a}\, C_{a\gamma\gamma}\, a F_{\mu\nu} \widetilde{F}^{\mu\nu}\,, \end{eqnarray} where $\alpha_{em} = e^2/4\pi$, $a$ is the axion field, and $F_{\mu\nu}$ and $\widetilde{F}^{\mu\nu}$ are the electromagnetic field strength and its dual, respectively. The axino lifetime is conveniently expressed as \begin{eqnarray}\label{axinolifetime} \tau_{\widetilde{a} \to \gamma \nu} \simeq 10^{28} {\rm s}\, \left( \frac{C_{\rm eff}}{10^{-6}} \right)^{-2} \left( \frac{m_{\widetilde{a}}}{7 ~{\rm keV}} \right)^{-3} \left( \frac{f_a}{3\times 10^8 {\rm \, GeV}} \right)^{2}\,, \end{eqnarray} where $C_{\rm eff} \equiv C_{a\gamma\gamma} \sum_i |U_{\nu_i\widetilde{\gamma}}| \simeq C_{a\gamma\gamma}\, \xi\, \frac{\sqrt{2}s_W M_Z}{M_1}$ with $\xi \equiv \sqrt{\sum_i |\xi_i|^2}$. Thus, axino DM with a 7 keV mass can be a good source for the 3.5 keV X-ray line signal with a reasonable choice of parameters: $C_{\rm eff}\simeq 10^{-6}$ and $f_a\simeq 3\times 10^8$ GeV.
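The scaling relation (\ref{axinolifetime}) is convenient to evaluate numerically; a minimal sketch:

```python
def axino_lifetime_s(m_keV=7.0, C_eff=1e-6, f_a_GeV=3e8):
    """Scaling relation of the text: tau ~ 1e28 s at the reference
    point (C_eff, m, f_a) = (1e-6, 7 keV, 3e8 GeV)."""
    return (1e28 * (C_eff / 1e-6) ** -2
                 * (m_keV / 7.0) ** -3
                 * (f_a_GeV / 3e8) ** 2)

print(axino_lifetime_s())              # 1e+28 s at the reference point
print(axino_lifetime_s(C_eff=2e-6))    # four times shorter: 2.5e+27 s
```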
The sneutrino VEVs $\langle \widetilde{\nu}_i \rangle$ induce mixing between the leptons and the neutralinos, generating neutrino masses at tree level. From the upper bound on the neutrino mass $m_\nu \equiv \sum_i m_{\nu_i} < 0.23$ eV ($95\%$ C.L., \textit{Planck}+WP+highL+BAO) \cite{Planck}, we estimate the size of $\xi$ \cite{Chun:2004mu}: \begin{eqnarray}\label{NuMass} \xi \lesssim \frac{1.1}{\cos\beta} \times 10^{-6} \left( \frac{M_N}{M_Z} \right)^{1/2} \left( \frac{m_{\nu}}{0.23~{\rm eV}} \right)^{1/2}\,, \end{eqnarray} where $M_N = M_1 M_2 / (c_W^2 M_1 + s_W^2 M_2) + M_Z^2 s_{2\beta}/\mu$, with the gaugino masses $M_1,M_2$ and the $\mu$ parameter defined as in Ref.~\cite{Choi:1999tq}. With the choice of parameters $M_1=M_2= M_{1/2}\simeq (1-10)~{\rm TeV}$ and $\mu > M_Z$, we get $\xi \lesssim (0.5-12)\times 10^{-5}$ for $\tan\beta= (1-10)$. Thus, $C_{\rm eff}\simeq 10^{-6}$ is obtainable with $C_{a\gamma\gamma} \sim \mathcal{O}(1)$ and $M_1 \sim 1$ TeV, which is natural. In general, WDM is believed to comprise only a portion of the observed DM relic abundance, with CDM as the dominant component~\cite{Maccio':2012uh,Anderhalden:2012qt}. Taking this into account, we generalize our analysis by introducing the parameter \begin{eqnarray} r = \frac{\Omega_{\widetilde{a}}}{\Omega_{\rm DM}}\,, \end{eqnarray} which describes the WDM fraction of the total DM amount. For a general value $0 \leq r\leq 1$, the required lifetime for the observed flux scales linearly as \begin{eqnarray}\label{requiredlife} \tau_{{\rm DM} \to \gamma X} = \Gamma_{{\rm DM} \to \gamma X}^{-1} \simeq r \times 10^{28}\, {\rm s}\,, \end{eqnarray} because the expected X-ray flux is proportional to the WDM density. Comparing Eqs. (\ref{axinolifetime}) and (\ref{requiredlife}), one can easily find the needed values of the parameters, $C_{\rm eff}$ and $f_a$, for a given axino DM fraction $r$.
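The estimate (\ref{NuMass}) can be cross-checked numerically. The sketch below uses the simplification $M_N \simeq M_{1/2}$, which holds for $M_1 = M_2 = M_{1/2}$ and $\mu \gg M_Z$ (an assumption we make here for illustration), and reproduces the quoted range $\xi \lesssim (0.5-12)\times 10^{-5}$:

```python
import math

def xi_bound(M_half_GeV, tan_beta, m_nu_eV=0.23, M_Z_GeV=91.19):
    # Eq. (NuMass) with M_N ~ M_{1/2}, valid for M_1 = M_2 and
    # mu >> M_Z -- an assumption made for this illustration.
    cos_beta = 1.0 / math.sqrt(1.0 + tan_beta ** 2)
    return ((1.1 / cos_beta) * 1e-6
            * math.sqrt(M_half_GeV / M_Z_GeV)
            * math.sqrt(m_nu_eV / 0.23))

print(xi_bound(1e3, 1))     # ~ 0.5e-5  (M_1/2 = 1 TeV,  tan beta = 1)
print(xi_bound(1e4, 10))    # ~ 1.2e-4  (M_1/2 = 10 TeV, tan beta = 10)
```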
In Figure~\ref{Fig1}, we show the parameter space consistent with the $3.5$ keV line signal in the $C_{\rm eff}-f_a$ plane for the representative values $r=1, 0.1$, and $0.01$. The region above and to the left of the thick solid (black) line is excluded, since there the required axino relic density $\Omega_{\widetilde{a}}$ would exceed the observed DM density $\Omega_{\rm DM}$. \begin{figure}[t] \begin{center} \includegraphics[width=0.80\linewidth]{fig_fa_cons.pdf} \end{center} \vspace*{-0.7cm} \caption{Parameter space consistent with the 3.5 keV line signal in the $C_{\rm eff}-f_a$ plane. The three thick straight lines represent the values of $C_{\rm eff}$ and $f_a$ that fit the required lifetime for different values of $r$, $r=\frac{\Omega_{\widetilde{a}}}{\Omega_{\rm DM}} = 1, 0.1$, and $0.01$, respectively. The three thin curves respectively correspond to $r' = \frac{\Omega_{\widetilde{a}}+\Omega_a}{\Omega_{\rm DM}} = 1, 0.1$, and $0.01$ for $\alpha^{\rm dec} =186$. The upper (green), lower (yellow), and right (blue) shaded regions are excluded by the DM relic density, axion-like particle searches and astrophysical observations, and the neutrino mass limit, respectively. Conservative projected limits from NGAH are shown as the (light-yellow) shaded regions. A dotted horizontal line with arrows represents a model-dependent bound from SN1987A.} \label{Fig1} \end{figure} Axions can be copiously produced in the Sun through the $a \gamma \gamma$ interaction and have been searched for by axion telescopes such as CAST and Sumico. In addition, the $a \gamma \gamma$ interaction can induce exotic cooling mechanisms in the cores of stars and thus affect stellar evolution, which is constrained by observations of Horizontal Branch (HB) stars in globular clusters. The lower (yellow) shaded region is constrained by axion-like particle search experiments and astrophysical observations~\cite{Jaeckel:2010ni}.
A future solar axion telescope, NGAH~\cite{Irastorza:2011gs}, provides projected limits stronger than the current bounds from CAST, shown as (light-yellow) shaded regions in the figure. To be conservative, we use the limits for $m_a > \mathcal{O}(10^{-2})\, {\rm eV}$; they are $2-3$ times more stringent for lower masses, $m_a <\mathcal{O}(10^{-2})\, {\rm eV}$. With a representative parameter set of $\tan\beta = 10$, $M_{1/2} = 1$ TeV, and $C_{a\gamma\gamma} = 1$, we obtain the limit $C_{\rm eff} \lesssim 2.3 \times 10^{-6}$ for $m_{\nu} < 0.23$ eV, which appears as the right (blue) shaded region. Another potentially important bound arises from SN1987A. If the axion is hadronic, it can contribute to the energy emission from SN1987A, which provides a model-dependent bound, $f_a \gtrsim 3.7\sqrt{F} \times (T/30\,{\rm MeV})^2 \times 10^8 $ GeV \cite{Raffelt:2006cw}. With a temperature $T\simeq 30$ MeV and an axion absorption rate $\Gamma_a/T \in (1,10)$, we find $F\in (0.46,1.35)$, or $f_a \gtrsim (2.5-4.3)\times 10^8$ GeV, which still allows the preferred value $f_a =3\times 10^8$ GeV. However, these limits remain fairly rough estimates \cite{Raffelt:2006cw}, and ruling out this scenario completely would require more robust experimental bounds, such as those from a future solar axion telescope like NGAH, as shown in Figure~\ref{Fig1}. Finally, the phenomenology of axion dark matter depends on the value of the Hubble expansion rate $H_I$ at the end of inflation, which can be determined from the tensor-to-scalar ratio and other CMB data \cite{Visinelli:2014twa}. The recent observation of primordial B-mode polarization by the BICEP2 Collaboration \cite{BICEP2} favors an axion scenario with $f_a < \frac{H_I}{2\pi}$, which affects our discussion.
In such a scenario, the total axion energy density is given by \begin{equation} \Omega_a h^2 = 2.07 \times 10^{-14}\, ( \alpha^{\rm dec} + 1)\, f_a^{7/6} \,, \end{equation} where $\alpha^{\rm dec}$ is the fractional contribution to the axion density from decays of axionic topological defects (see Ref. \cite{Visinelli:2014twa} and the references therein). The axion abundance slightly modifies the allowed parameter space given by the three thick straight lines: it favors a lower $f_a$ for a higher $C_{\rm eff}$, while there is almost no effect for a lower $C_{\rm eff}$. The modified bounds are shown as thin curves for the representative values $r'= (\Omega_{\tilde a}+\Omega_a)/\Omega_{{\rm DM}}=1,0.1,$ and $0.01$ with $\alpha^{\rm dec}=186$ in Figure~\ref{Fig1}. The new bounds approach the straight lines for smaller values of $\alpha^{\rm dec}$. \section{Phenomenological implications} The axino dark matter scenario resembles gravitino dark matter models. Its collider signatures depend strongly on the identity of the next-to-lightest supersymmetric particle, allowing a variety of possibilities. Additional signatures may arise from the bilinear R-parity violation in this particular scenario. Unfortunately, however, it would be difficult to discriminate this model from other ``look-alike'' models. More interesting are the neutrinos from the axino decay. As a byproduct of the axino decay, neutrinos with $E_\nu \sim 3.5$ keV are expected with the same flux as the 3.5 keV X-rays, $\sim 4 \times 10^{-6}\, {\rm cm}^{-2}\, {\rm s}^{-1}$. Detecting these neutrinos would be challenging, if not impossible, because a large neutrino background is expected from various processes in the Sun and in the Earth: solar fusion, thermal processes in the solar core, and terrestrial neutrinos from the natural radioactivity of the Earth.
The expected neutrino background flux density at $E_\nu \sim 3.5$ keV is at the level of $10^8\, {\rm cm}^{-2}\, {\rm s}^{-1}\, {\rm MeV^{-1}}$ \cite{Haxton:2000xb}, which is much larger than that from the axino decay. Moreover, there is currently no easy way to detect $\mathcal{O}({\rm keV})$ neutrinos, since even the electron is too heavy to be scattered appreciably by keV neutrinos. In the future, bolometric detectors may be able to measure the temperature changes induced by such scatterings. Measuring the neutrino events at $E_\nu = E_\gamma \sim 3.5 ~{\rm keV}$ would be extremely important for confirming the model. We encourage experimentalists to overcome the background with new neutrino detectors, such as a directional detector, in the future. \section{Conclusion} The recent observation of an $E_\gamma \simeq 3.5$ keV X-ray line in galaxy clusters and the Andromeda galaxy opens a new way to look for a dark matter particle in a light mass domain: $m_{DM} \simeq 3.5$ keV for annihilating dark matter and $7$ keV for decaying dark matter. In general, a long-lived particle that produces a sufficient number of photons could be a good candidate for the source of the X-rays. In this letter, we studied the axino decay through the bilinear R-parity violating interaction. We found that the parameter space which fits the observed line is naturally compatible with most axion models as well as with the recent observation by the BICEP2 experiment. Observation of a neutrino line at the same energy as in the X-ray data, $E_\nu =E_\gamma=m_{\tilde{a}}/2$, would corroborate the axino DM scenario. \vspace{0.5 cm} \begin{acknowledgements} K.K. is supported by the U.S. DOE under Grant No. DE-FG02-12ER41809 and by the University of Kansas General Research Fund allocation 2301566. J.C.P. is supported by Basic Science Research Program through the National Research Foundation of Korea funded by the Ministry of Education (NRF-2013R1A1A2061561). S.C.P.
is supported by Basic Science Research Program through the National Research Foundation of Korea funded by the Ministry of Science, ICT $\&$ Future planning (2011-0010294) and the Ministry of Education (2011-0010294, 2011-0029758, and NRF-2013R1A1A2064120). \end{acknowledgements}
\section{Introduction} Hard-decision forward error correction (HD-FEC) can offer dramatically reduced complexity compared to soft-decision FEC, at the price of some performance loss. HD-FEC is used, for example, in regional/metro optical transport networks (OTNs)\cite{Justesen2010} and has also been considered for other cost-sensitive applications such as optical data center interconnects\cite{Yu2017}. Our focus is on staircase codes\cite{Smith2012a}, which provide excellent performance and have received considerable attention in the literature. Similar to classical product codes, staircase codes are built from short component codes and decoded by iteratively applying bounded-distance decoding (BDD) to the component codes. For the purpose of this paper, BDD of a $t$-error-correcting component code can be seen as a black box that operates as follows. Let $\vect{r} = \vect{c} + \vect{e}$, where $\vect{c}, \vect{e} \in \{0,1\}^n$ denote a component codeword and random error vector, respectively, and $n$ is the code length. BDD yields the correct codeword $\vect{c}$ if $d_\text{H}(\vect{r}, \vect{c}) = w_\text{H}(\vect{e}) \leq t$, where $d_\text{H}$ and $w_\text{H}$ denote Hamming distance and weight, respectively. On the other hand, if $w_\text{H}(\vect{e}) > t$, the decoding either fails or there exists another codeword $\vect{c}'$ such that $d_\text{H}(\vect{r},\vect{c}') \leq t$. In the latter case, BDD is technically successful but the decoded codeword $\vect{c}'$ is not the correct one. Such \emph{miscorrections} are highly undesirable because they introduce additional errors into the iterative decoding process and significantly degrade performance. In this paper, we propose a novel iterative HD decoding algorithm for staircase codes which can detect and avoid most miscorrections. The algorithm provides significant post-FEC bit error rate improvements, in particular when $t$ is small (which is typically the case in practice). 
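The black-box behaviour of BDD, including a miscorrection event, can be mimicked for a toy code (a sketch using an explicit codebook scan; practical BCH decoders are algebraic and never enumerate codewords):

```python
def bdd(r, codebook, t):
    """Return the codeword within Hamming distance t of r, or None
    (decoding failure).  Balls of radius t are disjoint, so the first
    match is unique; if w_H(e) > t this may return a codeword other
    than the transmitted one -- a miscorrection."""
    for c in codebook:
        if sum(a != b for a, b in zip(r, c)) <= t:
            return c
    return None

# Toy length-6 code with d_min = 3, hence t = 1:
codebook = [(0, 0, 0, 0, 0, 0), (1, 1, 1, 0, 0, 0),
            (0, 0, 0, 1, 1, 1), (1, 1, 1, 1, 1, 1)]
print(bdd((1, 0, 0, 0, 0, 0), codebook, 1))  # 1 error: corrected
print(bdd((1, 1, 0, 0, 0, 0), codebook, 1))  # 2 errors: miscorrected
print(bdd((1, 1, 0, 1, 1, 0), codebook, 1))  # 4 errors: failure (None)
```

With the all-zero codeword transmitted, the two-error pattern lands within distance 1 of a *wrong* codeword, exactly the miscorrection mechanism discussed above.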
As an example, for $t=2$, the algorithm can improve performance by roughly $0.4\,$dB and reduce the error floor by over an order of magnitude, up to the point where the iterative decoding process is virtually miscorrection-free. Error floor improvements are particularly important for applications with stringent reliability constraints such as OTNs. \vspace{-0.07cm} \section{Staircase codes and iterative decoding} Let $\mathcal{C}$ be a binary linear component code with length $n$ and dimension $k$. Assuming that $n$ is even, a staircase code with rate $R = 2 k/n - 1$ based on $\mathcal{C}$ is defined as the set of all matrix sequences $\mat{B}_k \in \{0,1\}^{a \times a}$, $k = 0,1,2,\dots$, such that the rows in $[\mat{B}^\intercal_{k-1}, \mat{B}_k]$ for all $k \geq 1$ form valid codewords of $\mathcal{C}$, where $a = n/2$ is the block size and $\mat{B}_0$ is the all-zero matrix. We use extended primitive Bose--Chaudhuri--Hocquenghem (BCH) codes as component codes, i.e., a BCH code with an additional parity bit formed by adding (modulo 2) all $2^{\nu}-1$ coded bits of the BCH code, where $\nu$ is the Galois field extension degree. The overall extended code then has length $n = 2^{\nu}$ and guaranteed dimension $k = 2^\nu-\nu t-1$. The extra parity bit increases the guaranteed minimum distance to $d_\text{min} = 2t+2$. The conventional decoding procedure for staircase codes uses a sliding window comprising $W$ received blocks $\mat{B}_k, \mat{B}_{k+1}, \dots, \mat{B}_{k+W-1}$. This is illustrated in Fig.~\ref{fig:staircase_array} for $W = 5$ and $a = 6$. It is convenient to identify each component code in the window by a tuple $(i,j)$, where $i \in \{1, 2, \dots, W-1\}$ indicates the position relative to the current decoding window and $j \in \{1, 2, \dots, a\}$ enumerates all codes at a particular position. As an example, the component codes $(1,3)$ and $(4,4)$ are highlighted in blue in Fig.~\ref{fig:staircase_array}. 
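The component-code and staircase-code parameters follow from $\nu$ and $t$ alone; a small helper (an illustrative sketch, using the guaranteed dimension) reproduces the numbers used throughout the paper:

```python
def staircase_params(nu, t):
    """Extended primitive BCH component code (guaranteed dimension)
    and the resulting staircase-code parameters."""
    n = 2 ** nu                  # component code length
    k = 2 ** nu - nu * t - 1     # guaranteed dimension
    return {"n": n, "k": k, "a": n // 2,
            "d_min": 2 * t + 2, "R": 2 * k / n - 1}

print(staircase_params(8, 2))
# {'n': 256, 'k': 239, 'a': 128, 'd_min': 6, 'R': 0.8671875}
```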
Pseudocode for the conventional decoding procedure is given in Algorithm 1 below. Essentially, all component codes are decoded $\ell$ times, after which the decoding window shifts to the next position. Note that after the window shifts, the same component code is identified by a different position index. \setlength{\textfloatsep}{10pt} \begin{algorithm}[b] \small \DontPrintSemicolon \SetKw{ShortFor}{for} \SetKw{Break}{break} \SetKw{MyWhile}{while} \SetKw{MyIf}{if} \SetKw{MySet}{set} \SetKw{MyElse}{else} \SetKw{MyCompute}{compute} \SetKw{KwEach}{each} \SetKw{KwAnd}{and} $k \leftarrow 0$\; \While{true}{ \For{$l = 1, 2, \dots, \ell$}{ \For{$i = W, W-1, \dots, 1$}{ \For{$j = 1, 2, \dots, a$}{ apply BDD to component code $(i,j)$ } } } output decision for $\mat{B}_k$ and shift window\; $k \leftarrow k + 1$\; } \caption{ {\small Window decoding of staircase codes} } \end{algorithm} \section{Performance analysis} Analyzing the post-FEC bit error rate of staircase codes under the conventional decoding procedure is challenging. A major simplification is obtained by assuming that no miscorrections occur in the BDD of the component codes. In this case, it is possible to rigorously characterize the asymptotic performance as $a \to \infty$ using a technique called density evolution\cite{Haeger2017tit}. Moreover, the error floor can be estimated by enumerating stopping sets, also known as stall patterns\cite{Smith2012a}. However, if miscorrections are taken into account, both the asymptotic and error floor predictions are nonrigorous and become inaccurate. \setlength{\textfloatsep}{20pt} \begin{figure}[t] \begin{center} \includegraphics[width=7.33cm]{staircase_array} \end{center} \caption{Staircase decoding window of size $W=5$} \label{fig:staircase_array} \end{figure} \emph{Example 1:} Let $\nu=8$ and $t=2$, which gives a staircase code with $a = 128$ and $R = 0.867$. 
For the window decoding parameters $W = 8$ and $\ell = 7$, the density evolution and error floor predictions are shown in Fig.~\ref{fig:scc_results} by the dashed lines. The analysis can be verified by performing idealized decoding, where miscorrections are prevented during BDD. The results are shown by the blue line (triangles) in Fig.~\ref{fig:scc_results} and accurately match the theoretical predictions. However, the actual performance with true BDD deviates from the idealized decoding, as shown by the red line (squares). \demo The performance degradation with respect to idealized decoding becomes less severe for larger values of $t$. Unfortunately, small values of $t$ are commonly used in practice because BDD can be implemented very efficiently in this case. We note at this point that several works attempt to quantify the performance loss due to miscorrections. In terms of the error floor, the work in \cite{Smith2012a} introduces a heuristic parameter whose value unfortunately has to be estimated from simulation data. In terms of asymptotic performance, the authors are aware of two works\cite{Jian2015,Truhachev2016}, neither of which applies directly to staircase codes, but rather to a related code ensemble. \begin{figure}[t] \begin{center} \includegraphics[width=7.4cm]{scc_results_nu8_t2_s0} \end{center} \caption{Results for component codes with $n=256$ and $t=2$} \label{fig:scc_results} \end{figure} \section{Proposed algorithm} The main idea for improving performance is to systematically exploit the fact that miscorrections lead to inconsistencies, in the sense that two component codes that protect the same bit may disagree on its value. In the following, we show how these inconsistencies can be used to (a) reliably prevent miscorrections and (b) identify miscorrected codewords in order to revert their decoding decisions. Our algorithm relies on so-called anchor codewords, which have presumably been decoded without miscorrections.
Roughly speaking, we want to make sure that bit flips do not lead to inconsistencies with anchor codewords. Consequently, decoding decisions from codewords that are in conflict with anchors are not applied. However, a small number of anchor codewords may be miscorrected and we allow for the decoding decisions of anchors to be reverted if too many other codewords are in conflict with a particular anchor. In order to make this more precise, we regard the BDD of a component code $(i,j)$ as a two-step process. In the first step, the decoding is performed and the outcome is either a set of error locations $\mathcal{E}_{i,j} \subset \{1, 2, \dots, n\}$, where $|\mathcal{E}_{i,j}| \leq t$, or a decoding failure. In the second step, error-correction is performed by flipping the bits corresponding to the error locations. Initially, we only perform the decoding step for all component codes, i.e., all component codes in the decoding window are decoded without applying any bit flips. We then iterate $\ell$ times over the component codes in the same fashion as in Algorithm 1, but replacing line 6 with the following four steps: \begin{enumerate} \item If no decoding failure occurred for the component codeword $(i,j)$, we proceed to step 2, otherwise, we skip to the next codeword. \item For each $e \in \mathcal{E}_{i,j}$, check if $e$ corresponds to an anchor codeword. If so, let $C$ be the number of other conflicts that this anchor is involved in. If $C < T$, where $T$ is a fixed threshold, the codeword $(i,j)$ is frozen and we skip to the next codeword. Frozen codewords are always skipped (in the loop of Algorithm 1) for the rest of the decoding unless any of their bits change. If $C \geq T$, the anchor is marked for backtracking. \item Error-correction for codeword $(i,j)$ is applied, i.e., the bits at all error locations in $\mathcal{E}_{i,j}$ are flipped. We also apply the decoding step again for codewords that had their syndrome changed due to a bit flip. 
Finally, the codeword $(i,j)$ becomes an anchor. \item Lastly, previously applied bit flips are reversed for all anchor codewords that were marked for backtracking during step 2. These codewords are no longer anchors and all frozen codewords that were in conflict with these codewords are unfrozen. \end{enumerate} Note that steps 3 and 4 are not reached for codeword $(i,j)$ if the corresponding bit flips of that codeword are inconsistent with any anchor for which $C < T$ holds. The following two examples illustrate the above steps for $t=2$ and $T=1$ with the help of Fig.~\ref{fig:staircase_array}. \emph{Example 2:} Assume that we are at $(i,j) = (3,4)$, corresponding to a component code with three attached errors shown by the black crosses. The codeword is miscorrected with $\mathcal{E}_{3,4} = \{10, 12\}$ shown by the red crosses. Assuming that the codeword $(4,4)$ is an anchor without any other conflicts, the codeword $(3,4)$ is frozen during step 2 and no bit flips are applied. \demo \emph{Example 3:} Let the codeword $(1,3)$ in Fig.~\ref{fig:staircase_array} be a miscorrected anchor without conflicts and error locations $\mathcal{E}_{1,3} = \{5, 7\}$. Assume that we are at $(i,j) = (2,1)$. The codeword $(2,1)$ has one attached error, thus $\mathcal{E}_{2,1} = \{3\}$. During step 2, the codeword $(2,1)$ is frozen and we skip to codeword $(2,2)$ with $\mathcal{E}_{2,2} = \{3, 10\}$. The bit flip at $e = 3$ is inconsistent with the anchor $(1,3)$, but, since this anchor is already in conflict with $(2,1)$ (and, hence, $C = T = 1$), the anchor is marked for backtracking. The bits according to $\mathcal{E}_{2,2}$ are then flipped in step 3 and the anchor $(1,3)$ is backtracked in step 4. \demo The previous example shows how a miscorrected anchor is backtracked. Since we do not know if an anchor is miscorrected or not, it is also possible that we mistakenly backtrack ``good'' anchors. 
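The bookkeeping in steps 1 and 2 can be sketched in a few lines (a much-simplified illustration; the names and data structures are ours, and the re-decoding, freezing/unfreezing, and syndrome updates of steps 3 and 4 are only indicated by comments):

```python
def try_decode(errors, anchors, conflicts, T):
    """Steps 1-2 for one component codeword.  `errors` is the BDD
    error-location set E_{i,j}, or None on decoding failure;
    `anchors` maps a bit position to the anchor that pinned it;
    `conflicts` counts the conflicts each anchor is involved in."""
    if errors is None:                 # step 1: decoding failure
        return "skip"
    marked = set()
    for e in errors:                   # step 2: consistency checks
        a = anchors.get(e)
        if a is None:
            continue
        if conflicts[a] < T:           # trusted anchor: freeze codeword
            conflicts[a] += 1          # the new conflict is recorded
            return "freeze"
        marked.add(a)                  # suspicious anchor
    # step 3 (elided): flip bits in `errors`, promote codeword to anchor
    # step 4 (elided): backtrack and demote every anchor in `marked`
    return ("apply", marked)
```

Replaying Example 3 with $T=1$: freezing codeword $(2,1)$ records one conflict for anchor $(1,3)$, so the subsequent call for $(2,2)$ returns that anchor for backtracking.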
Fortunately, this is unlikely to happen for long component codes because the additional errors due to miscorrections are approximately randomly distributed within the codeword. This implies that errors of two (or more) miscorrected codewords rarely overlap. For the algorithm to work well, a sufficiently large fraction of codewords at each position should be ``good'' anchors. However, when the decoding window shifts and a new block is added, no anchors exist at the last position $W-1$. We found that it is therefore beneficial to artificially restrict the error-correcting capability of these component codes in order to avoid anchoring too many miscorrected codewords. For example, for $t=2$, all component codes at position $W-1$ are treated as single-error-correcting. This restriction reduces the probability of miscorrecting a component code by roughly a factor of $n$, which is significant for long component codes\cite{Justesen2011}. Note that due to the window decoding, we are merely gradually increasing the error-correction capability: once the decoding window shifts, the component codes shift as well and they are then decoded with their full error-correcting capability. We remark that essentially the same algorithm can also be applied to product codes and other related code constructions, e.g., half-product or braided codes. \section{Decoding complexity} In terms of decoding complexity, one of the main advantages of iterative HD decoding of staircase codes compared to message-passing decoding of LDPC codes is the significantly reduced decoder data flow requirement\cite{Smith2012a}. While a thorough complexity analysis for the proposed algorithm is beyond the scope of this paper, we note that the algorithm can operate entirely in the syndrome domain, thereby leveraging the syndrome compression effect that is described in\cite{Smith2012a}. 
However, additional storage is needed compared to the conventional decoding to keep track of the error locations of anchor codewords (in case they are backtracked) and to store the conflicts between codewords. \section{Results and Discussion} We consider the same parameters as in Example 1, i.e., $\nu=8$, $t=2$, $W=8$, and $\ell= 7$. The conflict threshold is set to $T=1$ and we apply the error-correction capability restriction for component codes at position $W-1$ as described above. Simulation results for the proposed algorithm are shown in Fig.~\ref{fig:scc_results} by the green line (circles). It can be seen that the performance is significantly improved compared to the conventional decoding. In particular in terms of the error floor, the performance is virtually identical to the idealized decoding where miscorrections are prevented. Overall, the improvements translate into an additional coding gain of around $0.4\,$dB at a post-FEC bit error rate of $10^{-9}$ over the conventional decoding. Note that the staircase code parameters were chosen such that the error floor is high enough to be within the reach of software simulations. For OTNs, post-FEC bit error rates below $10^{-15}$ are typically required. In this case, other code parameters should be used or one may apply post-processing techniques to reduce the error floor below the application requirements\cite{Holzbaur2017}. \section{Conclusion} We have shown that the post-FEC performance of staircase codes can be significantly improved by adopting a modified iterative HD decoding algorithm that reduces the effect of miscorrections. For component codes with error-correcting capability $t = 2$, an additional coding gain of around $0.4\,$dB can be achieved. Moreover, the error floor can be reduced by over an order of magnitude, giving virtually miscorrection-free performance. 
\vspace{-0.1cm} \section{Acknowledgements} {\footnotesize This work is part of a project that has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sk\l{}odowska-Curie grant agreement No.~749798. The work was also supported in part by the National Science Foundation (NSF) under Grant No.~1609327. Any opinions, findings, recommendations, and conclusions expressed in this material are those of the authors and do not necessarily reflect the views of these sponsors. }
\section{Introduction} Scalar fields play a very important role both in particle physics, as the Higgs field of the Standard Model (SM), and in the standard cosmological model, as one of the elements of the inflation mechanism \cite{linde}. A scalar field (SF) can be introduced into General Relativity (GR) in two ways: without the $R/6$ term (minimal coupling) and with the $R/6$ term (conformal coupling) \cite{pct}. The difference between the two couplings becomes essential in cosmological applications and in the unification of the SM with GR. Therefore, studying the difference between these two couplings from the dynamical point of view is a topical problem \cite{PPG,PS}. The present paper is devoted to the investigation of the dynamical status of the different couplings of a scalar field by means of the Hamiltonian approach to GR \cite{dir,ADM,WDW,M,ps1}, generalized to finite space in \cite{242,242a}, and the Lichnerowicz transformation to a unit determinant of the spatial metric \cite{lich}. We study the self-consistency of the initial conditions with the equations of motion, the boundary conditions, and the variational problem. In Section 2, the status of a conformally coupled scalar field in GR is considered. In Section 3, the correspondence of the considered model with a relativistic brane is established. Sections 4 and 5 are devoted to the Hamiltonian approach to the considered model in finite space-time. In Section 6, cosmological consequences are studied.
\section{Scalar field: action, interval, and symmetries} The sum of the GR and SF actions is \begin{equation}\label{21-1} S_{GR+SF,\xi}=\int d^4x\sqrt{-g} \left[-\frac{\varphi_0^2}{6}R(g)+\xi\frac{\Phi^2}{6}R(g) +g^{\mu\nu}\partial_\mu\Phi\partial_\nu\Phi \right], \end{equation} where \begin{equation}\label{1-h2} \varphi_0=\sqrt{\frac{3}{8\pi}}\mathcal{M}_{Pl}\simeq 0.3 \cdot 10^{18} {\rm GeV} \end{equation} is the mass scale factor, $R(g)$ is the Ricci curvature scalar, and $g_{\mu\nu}$ is the metric tensor on the Riemann manifold with the {\it``geometric interval''} \begin{equation}\label{1-h7s} {ds^2} = g_{\mu\nu}dx^\mu dx^\nu, \end{equation} with $\xi=0$ for minimal coupling and $\xi=1$ for conformal coupling. The {\it``geometric interval''} can be written in terms of linear differential forms as components of an orthogonal Fock simplex of reference $\omega_{(\alpha)}$ \begin{equation} \label{ds} ds^2\equiv\omega_{(\alpha)}\omega_{(\alpha)}= \omega_{(0)}\omega_{(0)}- \omega_{(1)}\omega_{(1)}-\omega_{(2)}\omega_{(2)}-\omega_{(3)}\omega_{(3)}. \end{equation} In terms of the simplex, GR contains two relativity principles: the {\it``geometric''} one, in the form of general coordinate transformations \begin{eqnarray} \label{1zel} x^{\mu} &\to& \tilde x^{\mu}=\tilde x^{\mu}(x^0,x^{1},x^{2},x^{3})\\ \omega_{(\alpha)}(x^{\mu})&\to&\omega_{(\alpha)}(\tilde x^{\mu})= \omega_{(\alpha)}(x^{\mu}) \end{eqnarray} and the {\it``dynamic''} principle, formulated as Lorentz transformations of the orthogonal simplex of reference \begin{equation} \label{2zel} {\omega}_{(\alpha)}~\to ~ \overline{\omega}_{(\alpha)}=L_{(\alpha)(\beta)}{\omega}_{(\beta)}. \end{equation} The latter are considered as transformations of a frame of reference.
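The {\it``dynamic''} principle (\ref{2zel}) can be illustrated numerically: a Lorentz boost mixing $\omega_{(0)}$ and $\omega_{(1)}$ leaves the interval (\ref{ds}) unchanged. A minimal sketch:

```python
import math

def ds2(omega):
    # ds^2 = w0^2 - w1^2 - w2^2 - w3^2 for given simplex components
    w0, w1, w2, w3 = omega
    return w0 * w0 - w1 * w1 - w2 * w2 - w3 * w3

def boost(omega, chi):
    """Lorentz boost L_(alpha)(beta) of rapidity chi mixing
    omega_(0) and omega_(1)."""
    w0, w1, w2, w3 = omega
    return (math.cosh(chi) * w0 + math.sinh(chi) * w1,
            math.sinh(chi) * w0 + math.cosh(chi) * w1, w2, w3)

omega = (1.0, 0.2, 0.3, 0.4)
print(abs(ds2(boost(omega, 0.7)) - ds2(omega)) < 1e-12)  # True
```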
\section{Conformal coupling scalar field in GR as a relativistic brane} \indent The conformal coupling scalar field action \begin{equation}\label{2sc} S_{SF,\xi=1}=\int d^4x \left[\sqrt{-g}\frac{\Phi^2}{6}R(g) -\Phi \partial_\mu (\sqrt{-g} g^{\mu\nu}\partial_\nu\Phi) \right] \end{equation} is invariant with respect to the scale transformations of the metric components and the scalar field \begin{equation}\label{ct} g_{\mu\nu}^{\Omega}=\Omega^{2} g_{\mu\nu},~~\Phi^{\Omega}=\Omega^{-1}\Phi \end{equation} with the conformal weights $n=2,-1$ for the tensor and scalar fields, respectively. The conformal coupling of the scalar field with the weight $n=-1$ is required by the unification of GR and the SM. The latter is scale invariant except for the Higgs potential. However, the Hilbert -- Einstein action $S_{GR}$ in Eq. (\ref{21-1}) is not invariant. After the scale transformation the total action (\ref{21-1}) $S_{GR+SF,\xi=1}$ takes the form of the conformal relativistic brane \begin{eqnarray}\label{brane-m} &&S_{GR}[g^{\Omega}]+S_{SF,\xi=1}[g^{\Omega},\Phi^{\Omega}]= S_{\mathrm{brane}}^{(D=4/N=2)}[X_{(0)},X_{(1)}]\!=\nonumber\\&-&\!\int\!d^4x\!\Bigg[\sqrt{-g}\!\frac{X_{(0)}^2-X_{(1)}^2}{6}\,{}^{(4)}\!R(g)-X_{(0)}\partial_\mu(\sqrt{-g}g^{\mu\nu}\partial_\nu{X}_{(0)})+\nonumber\\&&X_{(1)}\partial_\mu(\sqrt{-g}g^{\mu\nu}\partial_\nu{X}_{(1)})\Bigg],\end{eqnarray} where the two external ``coordinates'' are defined as \begin{equation}\label{ct-1} X_{(0)}=\varphi_{0}\Omega, ~~~~~~~~~~X_{(1)}=\Phi \end{equation} in accord with the standard definition of the general action for a brane in $D/N$ dimensions, given in \cite{bn} by \begin{eqnarray} S^{(D/N)}_{\mathrm{brane}}&=&-\int\! d^Dx\!\sum_{A,B=1}^N\eta^{AB}\!\Bigg[\!\sqrt{-g}\frac{X_A X_B}{(D-2)(D-1)}{}^{(D)}\!R(g)\!-\nonumber\\&&\!X_A\partial_\mu(\sqrt{-g} g^{\mu\nu} \!\partial_\nu X_B)\Bigg]; \label{braneDN} \end{eqnarray} in the case $D=4$, $N=2$ one has $\eta^{AB}=\mathrm{diag}\{1,-1\}$.
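For the reader's convenience, we recall the standard step behind Eq. (\ref{brane-m}). In four dimensions the scale transformation (\ref{ct}) of the metric yields the identity
\begin{equation}
R(\Omega^2 g)=\Omega^{-2}\left[R(g)-\frac{6}{\Omega\sqrt{-g}}\,
\partial_\mu\left(\sqrt{-g}\,g^{\mu\nu}\partial_\nu\Omega\right)\right],
\end{equation}
so that the Hilbert -- Einstein part of (\ref{21-1}) becomes
\begin{equation}
-\sqrt{-g^{\Omega}}\,\frac{\varphi_0^2}{6}\,R(g^{\Omega})
=-\sqrt{-g}\,\frac{X_{(0)}^2}{6}\,R(g)
+X_{(0)}\,\partial_\mu\left(\sqrt{-g}\,g^{\mu\nu}\partial_\nu X_{(0)}\right)
\end{equation}
with $X_{(0)}=\varphi_0\Omega$, while the scale-invariant scalar action (\ref{2sc}) contributes the $X_{(1)}=\Phi$ terms of (\ref{brane-m}) unchanged.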
In this case, in order to keep the conformal invariance of the theory (\ref{brane-m}), the Einstein definition of a measurable interval (\ref{1-h7s}) in GR (\ref{21-1}) should be replaced by its conformally invariant version, a Weyl-type ratio \begin{equation} \label{1-10a} ds_{\rm (L)}^2=\frac{ds^2}{ds_{\rm units}^2}, \end{equation} where $ds_{\rm units}^2$ is an interval of the units, defined in the same way as the Einstein measurable interval (\ref{1-h7s}) in GR. From the relativistic brane viewpoint, the Einstein GR (\ref{21-1}) with the conformal coupling scalar field looks like a Ptolemaic absolutization of the present-day (PD) value of one of the ``coordinates'' in the field ``superspace of events'' $[X_{(0)}|X_{(1)}]$; in our case it is \begin{equation}\label{P-1} X_{(0)}\Big|_{\rm PD}=\varphi_0. \end{equation} This is equivalent to a fixation of the units of measurement \cite{039}. Another choice of the independent degrees of freedom in the brane theory (\ref{brane-m}) is the unit spatial metric determinant \begin{equation}\label{L-1} |g^{(3)}_{(\rm L)}|=1, \end{equation} known as the Lichnerowicz variables \cite{lich}. In the latter case, a relativistic system in each frame has its proper units, just as a particle in each frame in classical mechanics has its proper initial position and velocity (Galilei), and a relativistic particle in each frame in special relativity has its proper time (Einstein). The ``relative units'' (\ref{1-10a}) presuppose a new type of conformal cosmology (CC) \cite{039,bpzz,zakhy} with another definition of the ``measurable distance'' (\ref{1-10a}), instead of the standard cosmology (SC) with absolute units (\ref{1-h7s}) \cite{f22}.
The ``relative units'' (\ref{1-10a}) in CC exclude the expansion of the ``measurable'' volume of the Universe in the process of its cosmological evolution, as this volume does not depend on any scale factor, including the cosmological one, whereas all masses in CC, including the Planck one, are scaled by the cosmological scale factor. The relative ``measurable distance'' (\ref{1-10a}) in CC explains the SN data on the luminosity-distance -- redshift relation \cite{snov,SN,riess1} by the rigid state without a $\Lambda$-term \cite{039,zakhy}. Thus, a conformal-invariant relativistic brane (\ref{brane-m}) is a more general theory than the Einstein GR (\ref{21-1}); it reduces to GR for the absolute units (\ref{P-1}) or to the scalar version of the Weyl conformal theory in terms of the Lichnerowicz variables (\ref{L-1}). In the following, we call the theory (\ref{brane-m}) with the condition (\ref{L-1}) the Conformal Relativity (CR). The problem is to determine the measurable Planck mass and cosmological scale factor in both the GR (\ref{21-1}) and the CR (\ref{brane-m}). Measurable quantities are determined by a frame of reference attached to initial data in both the ``external superspace of events'' $[X_{(0)},X_{(1)}]$ and the ``internal'' Riemannian space-time $(x^0,x^k)$. \section{Reference frame in external ``superspace of events''} \subsection{Distortion of GR by the conformal coupling scalar field} One can see that the conformal coupling scalar field $\Phi$ distorts the Newton coupling constant in the Hilbert action (\ref{21-1}) distinguished by (\ref{P-1}): \begin{equation}\label{22-1} S_{GR+SF,\xi=1}=\int d^4x\sqrt{-g} \left[-\left(1\!-\!\frac{|\Phi|^2}{\varphi_0^2}\right)\frac{\varphi_0^2}{6}R(g) +g^{\mu\nu}\partial_\mu\Phi\partial_\nu\Phi \right]. \end{equation} This distortion changes the Einstein equations and their standard solutions, such as the Schwarzschild one \cite{B74,PPG,PS}, due to the coefficient $\left[1\!-\!{|\Phi|^2}/{\varphi_0^2}\right]$.
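The distortion in (\ref{22-1}) can be summarized as a field-dependent effective Newton ``constant'' (a mere rewriting of the curvature coefficient, not an additional assumption):
\begin{equation}
\frac{1}{G_{\rm eff}(\Phi)}=\frac{3}{8\pi}\left(\varphi_0^2-|\Phi|^2\right),
\end{equation}
which reproduces the Newton constant $G=(8\pi/3)\varphi_0^{-2}$ at $\Phi=0$ and diverges as $|\Phi|^2\to\varphi_0^2$.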
This coefficient restricts the region of scalar field motion by the condition $|\Phi|^2\leq {\varphi_0^2}$, because in the region $|\Phi|^2\geq {\varphi_0^2}$ the sign of the four-dimensional curvature term in the Hilbert action (\ref{21-1}) is reversed. A rough analogy of this restriction is the light cone in special relativity, which defines the physically admissible region of particle motion. \subsection{The Bekenstein transformation of the Higgs field} In order to keep the Einstein theory (\ref{21-1}), one needs to consider only field configurations such that $|\Phi|^2\leq {\varphi_0^2}$. In this case one can introduce the new variables \cite{B74} \begin{eqnarray}\label{9-h11} g_{\mu\nu}&=&g_{\mu\nu}^{\rm (B)}\cosh^2 Q, \\\label{9-h6} |\Phi|^2&=&\varphi_0^2\sinh^2 Q \end{eqnarray} considered in \cite{PPG,PS}. These variables restore the initial Einstein -- Hilbert action \begin{equation}\label{22-2} S_{GR+SF,\xi=1}=\int d^4x\sqrt{-g_{\rm (B)}}\, \varphi_0^2\left[-\frac{R(g_{\rm (B)})}{6}+g_{\rm (B)}^{\mu\nu} \partial_\mu Q \partial_\nu Q\right]. \end{equation} One can see that \emph{the Bekenstein transformation converts the ``conformal coupling'' scalar field with the weight $n=-1$ into the ``minimal coupling''} angle $Q$ of the scalar -- scale mixing, which looks like a {\it scalar graviton} with the conformal weight $n=0$.
\subsection{Choice of ``coordinates'' in brane ``superspace of events''} The analogy of GR (\ref{21-1}) with a relativistic brane (\ref{brane-m}) distinguished by the condition (\ref{L-1}) allows us to formulate the choice of the variables (\ref{9-h11}) and (\ref{9-h6}) as a choice of the ``frame'' in the brane ``superspace of events'' \begin{eqnarray}\label{br-1} \widetilde{X}_{(0)}&=&\sqrt{X^2_{(0)}-X^2_{(1)}}, \\ Q&=&{\rm arccoth}\, \frac{X_{(0)}}{X_{(1)}}. \end{eqnarray} As we have seen above, the argument in favor of this choice of variables is the definition of the measurable value of the Newton constant \begin{equation}\label{nc-1} G=\frac{8\pi}{3}\widetilde{X}_{(0)}^{-2}\Big|_{\rm present-day}= \frac{8\pi}{3}\varphi^{-2}_0\end{equation} as the present-day value of the ``coordinate'' $\widetilde{X}_{(0)}=\varphi_0$. In this case the action (\ref{brane-m}), (\ref{L-1}) takes the form \begin{eqnarray}\nonumber &&S_{GR}[g^{\Omega}]+S_{SF,\xi=1}[g^{\Omega},\Phi^{\Omega}]= S_{\mathrm{brane}}^{(D=4/N=2)}[X_{(0)},X_{(1)}]\!=\\\nonumber &&\!\int\! d^4x\!\Bigg[\sqrt{-g_{(L)}}\widetilde{X}_{(0)}^2\! \left(-\frac{{}^{(4)}\!R(g_{(L)})}{6}+g_{(L)}^{\mu\nu}\partial_\mu Q\partial_\nu Q\right)\,+\\&&\label{brane-m3} \widetilde{X}_{(0)}\partial_\mu\left(\sqrt{-g_{(L)}} g_{(L)}^{\mu\nu}\partial_\nu \widetilde{X}_{(0)}\right)\Bigg].\end{eqnarray} This form is the brane generalization of the relativistic conformal mechanics \begin{eqnarray}\label{confm-1} S_{\mathrm{particle}}^{(D=1/N=2)}[X_{0},Q_{0}]\!&=& \!\int\! ds\!\left[{X}_{0}^2\! \left(\frac{d Q_{0}}{ds}\right)^2-\left(\frac{d {X}_{0}}{ds}\right)^2 \right];~~~\\\label{confm-2} &&ds = dx^0e(x^0).\end{eqnarray} In the following, this conformal mechanics will be considered as a simple example.
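Inverting (\ref{br-1}) gives $X_{(0)}=\widetilde{X}_{(0)}\cosh Q$ and $X_{(1)}=\widetilde{X}_{(0)}\sinh Q$, and a direct substitution (a one-line check of the kinetic structure of (\ref{brane-m3})) shows that this hyperbolic rotation diagonalizes the brane kinetic terms:
\begin{equation}
\partial_\mu X_{(1)}\partial^\mu X_{(1)}-\partial_\mu X_{(0)}\partial^\mu X_{(0)}
=\widetilde{X}_{(0)}^2\,\partial_\mu Q\,\partial^\mu Q
-\partial_\mu\widetilde{X}_{(0)}\partial^\mu\widetilde{X}_{(0)},
\end{equation}
the cross terms $2\widetilde{X}_{(0)}\partial_\mu\widetilde{X}_{(0)}\partial^\mu Q\left(\sinh Q\cosh Q-\cosh Q\sinh Q\right)$ cancelling identically.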
\section{Reference frame in internal Riemannian space-time} \subsection{The Dirac --- ADM parametrization} Recall that the Hamiltonian approach to GR is formulated in a specific frame of reference in terms of the Dirac -- ADM parametrization of the metric \cite{dir,ADM}, defined as \begin{eqnarray} \label{1adm} ds^2&=&g^{\rm(B)}_{\mu\nu}dx^\mu dx^\nu~~~~~~~~~~~ ~\equiv\omega^2_{(0)}-\omega^2_{(b)}\\\label{2adm} \omega_{(0)}&=&\psi^6N_{\rm d}dx^0~~~~~~~~~~~~~~ ~\equiv \psi^2 ~\omega^{(L)}_{(0)}\\\label{3adm} \omega_{(b)}&=&\psi^2 {\bf e}_{(b)i} (dx^i+N^i dx^0)\equiv \psi^2 ~\omega^{(L)}_{(b)}. \end{eqnarray} Here the triads ${\bf e}_{(a)i}$ form the spatial metric with $\det |{\bf e}|=1$, $N_{\rm d}$ is the Dirac lapse function, $N^i$ is the shift vector, $\psi$ determines the determinant of the spatial metric, and the $\omega^{(L)}_{(\mu)}$ are the Lichnerowicz simplex components distinguished by the unit-determinant condition (\ref{L-1}). In terms of these metric components the GR action takes the form \begin{eqnarray} \label{6-1} S[\varphi_0|F,Q]= \!\int\! dx^0 \varphi_0^2 \!\int \!d^3x\Bigg[\!-\!\psi^{12}\! N_{\rm d}\frac{{}^{(4)}R(g_{})}{6}~&+& \frac{(\partial_0Q\!-\!N^k\partial_kQ)^2}{N_{\rm d}} \!-\nonumber\\&&\!N_{\rm d} \psi^8\partial_{(b)}\!Q\partial_{(b)}Q \!\Bigg], \end{eqnarray} where $\partial_{(b)}Q={\bf e}^k_{(b)}\partial_k Q$ and ${}^{(4)}R(g_{})$ is given in the Appendix (see Eq. (\ref{Asv11})). This action is invariant with respect to the transformations \cite{vlad} \begin{eqnarray} \label{zel} x^0 &\rightarrow& \tilde x^0=\tilde x^0(x^0),\\\label{zel2} x_{i} &\rightarrow& \tilde x_{i}=\tilde x_{i}(x^0,x_{1},x_{2},x_{3}),\\ \label{kine} \tilde N_d &=& N_d \frac{dx^0}{d\tilde x^0},\\\tilde N^k&=&N^i \frac{\partial \tilde x^k }{\partial x_i}\frac{dx^0}{d\tilde x^0} - \frac{\partial \tilde x^k }{\partial x_i} \frac{\partial x^i}{\partial \tilde x^0}~.
\end{eqnarray} This group of diffeomorphisms conserves a family of constant-time hypersurfaces and is commonly known as the {``kinemetric''} subgroup of the group of general coordinate transformations $x^{\mu} \rightarrow \tilde x^{\mu}=\tilde x^{\mu}(x^0,x^{1},x^{2},x^{3})$. The {``kinemetric''} subgroup contains reparametrizations of the coordinate evolution parameter $x^0$. This means that in finite space-time the coordinate evolution parameter $x^0$ is not a measurable quantity, just like the coordinate evolution parameter $x^0$ in the relativistic conformal mechanics (\ref{confm-1}), which is invariant with respect to the diffeomorphisms $x^0 \rightarrow \tilde x^0=\tilde x^0(x^0)$; neither parameter $x^0$ is diffeo-invariant. The relativistic mechanics (\ref{confm-1}) has two diffeo-invariant measurable times: the geometric interval (\ref{confm-2}) and the time-like variable $X_{0}$ in the external ``superspace of events''. The relation between these two ``times'', $X_{0}(s)$, is conventionally treated as a relativistic transformation. The main problem is to point out a similar pair of measurable time-like diffeo-invariant quantities in both GR (\ref{6-1}) and the brane (\ref{brane-m3}). \subsection{External diffeo-invariant evolution parameter as zero mode in finite volume} The brane/GR correspondence (\ref{brane-m}) and special relativity (\ref{confm-1}) allow us to treat an external time as the homogeneous component of the time-like external ``coordinate'' $\widetilde{X}_{(0)}(x^0,x^k)$, identifying this homogeneous component with the cosmological scale factor $a$ \begin{equation}\label{nc-2} \widetilde{X}_{(0)}(x^0,x^k)\to\varphi_0a(x^0)= \varphi(x^0), \end{equation} because this factor is introduced in the cosmological perturbation theory \cite{lif} by the scale transformation of the metric (\ref{ct-1}) too: \begin{equation}\label{ct-2} g_{\mu\nu}=a^2(x^0){\widetilde{g}}_{\mu\nu}.
\end{equation} Recall that, in this case, any field $F^{(n)}$ with the conformal weight $(n)$ takes the form \begin{equation}\label{F} F^{(n)}=a^n(x^0) {\widetilde{F}}^{(n)}. \end{equation} In particular, the curvature $\sqrt{-g}\,\,{}^{(4)}R(g)=a^2\sqrt{-{\widetilde{g}}}\,\,{}^{(4)}R({\widetilde{g}})-6a \partial_0\left[{\partial_0a}\sqrt{-{\widetilde{g}}}~ {\widetilde{g}}^{00}\right]$ can be expressed in terms of the new lapse function ${\widetilde{N}_d}$ and spatial determinant ${\widetilde{\psi}}$ in Eq. (\ref{1adm}): \begin{equation} \label{lfsd} {\widetilde{N}}_d=[\sqrt{-{\widetilde{g}}}~{\widetilde{g}}^{00}]^{-1}=a^{2}{N}_d,~~~~~~~~ {\widetilde{\psi}}=(\sqrt{a})^{-1}\psi. \end{equation} In order to keep the number of variables in GR, in contrast to \cite{lif}, we identify $\log \sqrt{a}$ with the spatial volume ``average'' of $\log{\psi}$, and $\log{\widetilde{\psi}}$ with the nonzero Fourier harmonics \cite{242,242a}: \begin{equation}\label{1non1} \log \sqrt{a}=\langle \log{\psi}\rangle\equiv\frac{1}{V_0}\int d^3x\log{\psi},~~~~~~ \langle\log{\widetilde{\psi}}\rangle \equiv 0; \end{equation} here the diffeo-invariant Lichnerowicz volume $V_0=\int d^3x$ is introduced. One should emphasize that modern cosmological models \cite{lif} are considered in finite space and ``internal finite time'' in a reference frame identified with the frame of the Cosmic Microwave Background radiation. A scalar field can also be presented as a sum of a zero Fourier harmonic and nonzero ones, like (\ref{1non1}): \begin{eqnarray}\label{z-s1} Q= \langle Q\rangle+\overline{Q}; ~~~~\langle\overline{Q}\rangle=0. \end{eqnarray} After the separation of all zero modes the action (\ref{6-1}) takes the form \begin{equation} \label{6-6} S[\varphi_0|F,Q]= S[\varphi|\widetilde{F},\overline{Q}]+ \underbrace{V_0\!\int\! dx^0 \!
\frac{1}{{N}_0}\left[ \varphi^2\left(\frac{d \langle Q\rangle}{dx^0}\right)^2-\left(\frac{d \varphi}{dx^0}\right)^2\right]}_{zero-mode~contribution}; \end{equation} here \begin{eqnarray} \label{6-4} S[\varphi|\widetilde{F},\overline{Q}]= \!\int\! dx^0 \varphi^2 \!\int \!d^3x\Bigg[\!-\!\widetilde{\psi}^{12}\! \widetilde{N}_{\rm d} \frac{{}^{(4)}R(\widetilde{g}_{})}{6}~&+& \frac{(\partial_0\overline{Q}\!-\!N^k\partial_k\overline{Q})^2}{\widetilde{N}_{\rm d}} \nonumber\\&-&\!\widetilde{N}_{\rm d} \widetilde{\psi}^8\partial_{(b)}\!\overline{Q}\partial_{(b)}\overline{Q} \!\Bigg] \end{eqnarray} repeats the action $S[\varphi_0|F,Q]$ (\ref{6-1}) with $[\varphi_0|F,Q]$ replaced by $[\varphi|\widetilde{F},\overline{Q}]$, and \begin{equation} \label{6-5} \frac{1}{N_{0}}=\frac{1}{V_{0}}\int\frac{{d^3x}}{\widetilde{N}_{d}}\equiv \left\langle \frac{1}{\widetilde{N}_{d}}\right\rangle \end{equation} defines the homogeneous component $N_0$ of the lapse function. The action (\ref{6-4}) for the local variables determines the corresponding local energy density \begin{equation} \label{6-9e}-\widetilde{T}_{\rm d}=\frac{\delta S[\varphi|{\widetilde{F}},{\overline{Q}}]} {\delta \widetilde{N}_{\rm d}}. \end{equation} \subsection{``Internal diffeo-invariant homogeneous time''} The homogeneous component (\ref{6-5}) of the lapse function, $N_0$, determines the diffeo-invariant local lapse function \begin{equation} \label{6-8} \mathcal{N}={\widetilde{N}_{\rm d}}{\langle{\widetilde{N}_{\rm d}^{-1}}\rangle},~~~~~ \langle\mathcal{N}^{-1}\rangle=1, \end{equation} and the ``internal diffeo-invariant homogeneous time'' with its derivative \begin{equation} \label{6-9} \int{dx^0}N_{0}=\zeta,~~~~f'=\frac{df}{d\zeta}.
\end{equation} \subsection{Resolution of the energy constraints} The variation of the action $S[\varphi_0|{F},{Q}]$ with respect to the lapse function $\widetilde{N}_{\rm d}$ gives the energy constraint equation \begin{equation}\label{6-7} \frac{1}{\mathcal{N}^2}\left(\varphi'^2-\varphi^2 {\langle Q\rangle'}^2\right)-\widetilde{T}_{\rm d}=0. \end{equation} This equation is algebraic with respect to the diffeo-invariant lapse function ${\cal N}$ and has the solution satisfying the constraint (\ref{6-8}) \begin{equation}\label{6-10} {\cal N}= \frac{\langle\widetilde{T}_{\rm d}^{1/2}\rangle}{\widetilde{T}_{\rm d}^{1/2}}. \end{equation} The substitution of this solution into the energy constraint (\ref{6-7}) leads to the cosmological type equation \begin{equation}\label{6-11} \varphi'^2=\varphi^2 {\langle Q\rangle'}^2+{\langle(\widetilde{T}_{\rm d})^{1/2} \rangle}^2\equiv \rho_{\rm tot}(\varphi)= \frac{P_{\langle Q\rangle}^2}{4V_0^2\varphi^2}+{\langle(\widetilde{T}_{\rm d})^{1/2} \rangle}^2; \end{equation} here the total energy density $\rho_{\rm tot}(\varphi)$ splits into the sum of the energy density of the local fields ${\langle(\widetilde{T}_{\rm d})^{1/2} \rangle}^2$ and the zero-mode one, where \begin{equation}\label{6-12} P_{\langle Q\rangle}=2V_0{\varphi^2\langle Q\rangle'}\equiv 2V_0p_0 \end{equation} is the zero-mode momentum of the scalar field; it is an integral of motion of the considered model because the action does not depend on $\langle Q\rangle$. On solutions of the equations of motion, the local energy density also depends only on $\varphi$, because the momentum of the external time $\varphi$ \begin{equation}\label{ecs} P_\varphi=2V_0\varphi'= \pm 2V_0\sqrt{\frac{p_0^2}{\varphi^2} + {\langle(\widetilde{T}_{\rm d})^{1/2} \rangle}^2}\equiv\mp E_\varphi\end{equation} can be considered as the Hamiltonian of evolution in the ``superspace of events''.
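The step from (\ref{6-7}) to (\ref{6-10}) and (\ref{6-11}) can be retraced in one line: resolving (\ref{6-7}) for ${\cal N}^{-1}$ and imposing the normalization $\langle{\cal N}^{-1}\rangle=1$ from (\ref{6-8}) give
\begin{equation}
\frac{1}{{\cal N}}=
\frac{\widetilde{T}_{\rm d}^{1/2}}{\left(\varphi'^2-\varphi^2{\langle Q\rangle'}^2\right)^{1/2}}
~~~\Rightarrow~~~
\left(\varphi'^2-\varphi^2{\langle Q\rangle'}^2\right)^{1/2}=\langle\widetilde{T}_{\rm d}^{1/2}\rangle,
\end{equation}
which reproduces Eq. (\ref{6-10}) after back-substitution and Eq. (\ref{6-11}) after squaring; here we use the fact that $\varphi$ and $\langle Q\rangle$ are spatially homogeneous, so that the bracket passes through the averaging $\langle\cdot\rangle$.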
The value of the momentum $P_\varphi=\pm E_\varphi$ on solutions of the equations of motion is defined as the energy of the universe, in accord with the second N\"other theorem removing momenta by the constraints that follow from the diffeomorphisms. We can see that the dimension of the group of diffeomorphisms (\ref{zel}) and (\ref{zel2}) coincides with the dimension of the first class constraint momenta, because the local part of the energy constraint (\ref{6-7}) determines the diffeo-invariant local lapse function ${\cal N}$ (\ref{6-10}). The solution (\ref{ecs}) gives the diffeo-invariant constraint-shell action (\ref{100}) obtained in the Appendix \begin{equation}\label{ecs2} S_{{\cal H}=0}\!= \int\limits_{\varphi_I}^{\varphi_0}d\widetilde{\varphi} \left\{\int\limits_{V_0}^{ } d^3x\sum\limits_{\widetilde{F} } P_{\widetilde{F}}\partial_\varphi \widetilde{F} \mp2E_\varphi\right\}. \end{equation} The GR version of the Friedmann equation (\ref{6-11}) leads to the diffeo-invariant Hubble law as the relation between the geometric time (\ref{6-9}) and the cosmological scale factor $\varphi=\varphi_0 a$ \cite{242,242a} $$\zeta_{(\pm)}=\int dx^0N_0=\pm\int^{\varphi_0}_{\varphi_I} {d\varphi}~\left[{p_0^2}/{\varphi^2} + {\langle(\widetilde{T}_{\rm d})^{1/2} \rangle}^2\right]^{-1/2}\geq 0,$$ where \begin{equation}\label{1-37} \widetilde{T}_{\rm d}= \frac{4\varphi^2}{3}{\widetilde{\psi}}^{7} \triangle {\widetilde{\psi}}+ \sum\limits_{I} \varphi^{I/2-2}{\widetilde{\psi}}^{I}\overline{{\cal T}}_I; \end{equation} here $\overline{\cal{T}}_I$ is the partial energy density labeled by the index $I$, which runs, in the general case, over the set of values $I=0$ (stiff), $4$ (radiation), $6$ (mass), $8$ (curvature)\footnote{The $\Lambda$-term corresponds to $I=12$.} in correspondence with the type of matter field contributions (see the Appendix, Eqs. (\ref{h32}) and (\ref{h35})).
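In the zero-mode dominated (stiff) limit, $\langle(\widetilde{T}_{\rm d})^{1/2}\rangle\to 0$, this Hubble-law integral can be evaluated explicitly (a consistency check anticipating Section 6):
\begin{equation}
\zeta=\int_{\varphi_I}^{\varphi_0}\frac{\varphi\, d\varphi}{|p_0|}
=\frac{\varphi_0^2-\varphi_I^2}{2|p_0|},
~~~~\mbox{i.e.}~~~~
\varphi^2(\zeta)=\varphi_I^2+2|p_0|\,\zeta,
\end{equation}
which is the square-root evolution law of the rigid state recovered below in Eq. (\ref{2.3-16}).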
The second class Dirac condition of the minimal 3-dimensional hyper-surface \cite{dir} \begin{equation}\label{11-42} p_{{\widetilde{\psi}}}=0 \to (\partial_\zeta-N_{(b)}\partial_{(b)})\log{ {\widetilde{\psi}}}=\frac16\partial_{(b)}N_{(b)}, \end{equation} is included in order to provide a positive value of the Hamiltonian density $\widetilde{T}_{\rm d}$ given by Eq. (\ref{1-37}) and Eq. (\ref{h32}) in the Appendix. Equation (\ref{6-10}) and the equation $\widetilde{T}_{\psi}-\langle\widetilde{T}_{\psi}\rangle=0$ (where $\widetilde{T}_{\psi}=T_{\psi}[\varphi|\widetilde{\psi}]$ and $T_{\psi}$ is given by Eq. (\ref{1-37ab})) determine the lapse function ${\cal N}$ and the scalar component $\widetilde{\psi}$. Thus, we obtain the diffeo-invariant formulation of GR with the conformal scalar field in the comoving CMB frame, compatible with the Einstein equations $T_{\rm d}=T_{\rm \psi}=0$ and their Schwarzschild-type solutions $\triangle \psi=0$, $\triangle (\psi^7 {\cal N})=0$ in the infinite volume limit \cite{242}, in contrast to all other approaches to a scalar field in GR \cite{linde,MFB}.
The special relativity identification of the brane external time-like ``coordinate'' with the diffeo-invariant evolution parameter (\ref{nc-2}) $$\widetilde{X}_{(0)}(x^0,x^k)= \sqrt{X^2_{(0)}-X^2_{(1)}}=\varphi(\zeta)\widetilde{\psi}^2,$$ arising after resolving the energy constraint with respect to its momentum $P_\varphi$, is in agreement with the Hamiltonian version \cite{242,242a,bpzz} of the cosmological perturbation theory \cite{lif}, which identifies this external time-like ``coordinate'' with the cosmological scale factor $a(\eta)=\varphi(\eta)/\varphi_0$, provided that \begin{enumerate} \item the cosmological scale factor is a zero mode of the scalar component of the metric (it is not an additional variable, as is supposed in the accepted cosmological perturbation theory \cite{lif}); \item the conformal time in the redshift -- luminosity-distance relation is a gauge-invariant measurable quantity, in accord with the Dirac definition of observables as diffeo-invariants (it is not a diffeo-variant quantity subject to Bardeen's gauge transformations, as in the accepted perturbation theory \cite{lif}); \item the initial datum $a(\eta=0)=a_I$ is independent of the current datum $a'(\eta_0)$ and of the fundamental parameter $M_{\rm Planck}$ of the equations of motion (the Planck epoch data $a(\eta=0)=a'(\eta_0)/M_{\rm Planck}$ violate the causality principle in the constraint action (\ref{ecs2}) and are the origin of numerous problems in the Inflationary Model \cite{linde}); \item there is a vacuum, as the state with minimal energy, obtained by explicitly resolving the energy constraint. The vacuum postulate is provided by the second class Dirac condition of the minimal 3-dimensional hyper-surface \cite{dir} (\ref{11-42}), which removes the kinetic perturbations that explain the power CMB spectrum in the accepted cosmological perturbation theory \cite{MFB}.
\end{enumerate} However, the accepted cosmological perturbation theory \cite{MFB} omits the potential perturbations coming from the scalar metric component $\widetilde{\psi}=1-\Psi/2$ in the partial energy density (\ref{1-37}), which lead to additional fluctuations of the CMB temperature \cite{242}. \section{Cosmology and the Cauchy problem of the zero mode dynamics} In the homogeneous approximation \begin{eqnarray}\label{2.3-1} \widetilde{X}_{(0)}(x^0,x^k)&=&\varphi(x^0)\equiv\varphi_0a(x^0),\\\label{2.3-2} Q(x^0,x^k)&=& \langle Q\rangle (x^0)\equiv\frac{1}{V_0}\int d^3x Q(x^0,x^k),\\\label{2.3-3} N_d(x^0,x^k) &=& N_0(x^0), \\\label{2.3-4} N_0(x^0)dx^0 &=& d\eta \end{eqnarray} the conformal-invariant unified theory leads to the cosmological model given by the action \begin{eqnarray}\label{2.3-6} S&=&V_0\int dx^0 \left[\frac{-(\partial_0\varphi)^2+\varphi^2(\partial_0\langle Q\rangle)^2}{N_0} \right]=\\ \label{2.3-7} &=&\int dx^0 \left\{P_Q\frac{d}{dx^0} \langle Q\rangle -P_\varphi\frac{d}{dx^0} \varphi +\frac{N_0}{4V_0} \left[P_\varphi^2 -\frac{P^2_Q}{\varphi^2}\right] \right\}, \end{eqnarray} where $V_0=\int d^3x$ is the finite coordinate volume, \begin{eqnarray}\label{2.3-9} P_\varphi&=&2V_0\varphi'\equiv2V_0\frac{d\varphi}{d\eta},\\\label{2.3-10} P_Q&=&2V_0\,\varphi^2\,{\langle Q\rangle}' \end{eqnarray} are the canonically conjugated momenta, and $\eta = \int dx^0 N_0(x^0)$ is the conformal time. The energy constraint of the model \begin{eqnarray}\label{2.3-11} P_\varphi^2-E_\varphi^2=0;~~~~~~~~~~~~E_\varphi=\frac{|P_Q|}{\varphi} \end{eqnarray} completely repeats the cosmological equations of GR in the case of the rigid equation of state $\Omega_{\rm rigid}=1$: \begin{eqnarray}\label{2.3-12} \varphi_0^2 a'^2=\frac{P_Q^2}{4V_0^2\varphi^2}\equiv \frac{\rho_0}{a^2} =H_0^2 \frac{\Omega_{\rm rigid}}{a^2}, \end{eqnarray} where $P_Q$ is a constant of the motion, because \begin{equation}\label{2.3-14} P_Q'=0.
\end{equation} The solution of these equations takes the form \begin{eqnarray}\label{2.3-16} \varphi(\eta)=\varphi_I\sqrt{1+2{\cal H}_I\eta}, ~~~~ Q(\eta)=Q_I+\log {\sqrt{1+2{\cal H}_I\eta}}, \end{eqnarray} where \begin{eqnarray}\label{2.3-17} \varphi_I&=&\varphi(\eta=0),\\\label{2.3-18} Q_I&=&Q(\eta=0),~~~~P_Q={\rm const} \end{eqnarray} are the ordinary initial data. These data do not depend on the current values of the variables, $\varphi_0=\varphi_0a(\eta=\eta_0)$, in contrast to the Planck epoch initial data, where the initial datum of the scale variable, $a_I=a(\eta=0)=a'(\eta_0)/M_{\rm Planck}$, is determined by its velocity at the present-day epoch. We have seen above that this determination violates the causality principle in the constraint-shell action (\ref{ecs2}). \section{Conclusion} We are convinced that conformal symmetry is the way to classify the scalar field dynamics in GR. The conformal transformation allows us to convert the conformal coupling scalar field into a conformal relativistic brane without any dimensional parameter. Spontaneous conformal symmetry breaking in this case can be provided by initial data. The consideration of diffeo-invariant initial data in a specific frame distinguishes our approach to the scalar field from other approaches to this problem. A definition of initial data as diffeo-invariant measurable quantities presupposes two distinguished reference frames: the observer rest frame and the observable comoving frame. In particular, the comoving frame of the Universe is identified with the CMB frame, which differs from the rest frame by the nonzero dipole component of the temperature fluctuations. {\it Differences} between these two frames lie at the heart of all principles of relativity, including Galilei's relativity as a {\it difference} of initial positions and velocities, Einstein's relativity as a {\it difference} of proper times, and Weyl's relativity as a {\it difference} of units.
A definition of a reference frame, in our paper, is based on the Fock simplex (in order to separate diffeomorphisms from frame transformations), the Dirac--ADM parametrization of the metric (in order to classify the metric components), and the Zel'manov ``kinemetric'' diffeomorphisms as parametrizations of the internal coordinates (in order to identify the diffeo-invariant evolution parameter with the cosmological scale factor as the zero mode of the metric determinant and to define the energy as the constraint-shell value of the scale momentum). Finally, the Hamiltonian action in GR coincides with the relativistic brane one, where the time-like external coordinate plays the role of the diffeo-invariant evolution parameter in the field ``superspace of events'', and its momentum plays the role of the energy, in accord with special relativity formulated in the Minkowskian space of events. Therefore, the generalization of the Dirac Hamiltonian approach to the conformal coupling scalar field gives us the possibility to restore the universal Hamiltonian description of relativistic brane-like systems with the action $S^{D/N}$ for any number of external and internal coordinates. Thus, we have shown that the Dirac Hamiltonian approach to the conformal coupling scalar field in GR coincides with the similar treatment of the conformal brane $S^{D=4/N=2}$. Both theories (the conformal coupling scalar field and the brane) lead to the rigid state, in agreement with the SN data on the luminosity-distance -- redshift relation \cite{snov,SN,riess1} in the framework of the conformal cosmology \cite{039,bpzz,zakhy}, where the Weyl relativity of units (\ref{1-10a}) is supposed. \section*{Acknowledgements} All authors are grateful to Dmitry Kazakov and Igor Tkachev for discussions of the statement of the problem of the rigid state in modern cosmology. The authors are grateful to B.M. Barbashov, K.A. Bronnikov, V.V. Kassandrov, E.A. Kuraev, D.G. Pavlov, Yu.P. Rybakov and A.F.
Zakharov for interesting and critical discussions. \L.A. Glinka is thankful to the Bogoliubov--Infeld program of grants for partial financial support. R.P. Kostecki is thankful to \L.A.G. and V.N.P. for hospitality. \section*{The Appendix. The Dirac--ADM approach to GR} The Hilbert action $S=S_{\rm GR}+S_Q$ in terms of the Dirac -- ADM variables (\ref{2adm}) and (\ref{3adm}) is as follows: \begin{eqnarray}\nonumber \label{Asv11} &&S_{\rm GR}= -\int d^4x\sqrt{-g}\frac{\varphi_0^2}{6}~{}^{(4)}R(g)=\int d^4x ({\mathcal{K}}[\varphi_0| {g}]-{\mathcal{P}}[\varphi_0|{g}]+{\mathcal{S}}[\varphi_0|{g}]),\nonumber\\ &&S_{\rm Q} =\varphi_0^2\int dx^0 d^3x \Bigg[\!\! \frac{(\partial_0Q\!-\!N^k\partial_kQ)^2}{N_{\it d}} -N_{\it d} \psi^8(\partial_{(b)}Q)^2 \Bigg], \end{eqnarray} where \begin{eqnarray} {\mathcal{K}}[\varphi_0|e]&=&{{N}_d}\varphi_0^2\left(-{\vphantom{\int}}4 { {v}}^2+\frac{v^2_{(ab)}}{6}\right), \label{k1}\\ {\mathcal{P}}[\varphi_0|e]&=&\frac{{N_d}\varphi_0^2{\psi}^{7}}{6}\left( {}^{(3)}R({\bf e}){\psi}+ {8}\triangle{\psi}\right), \label{p1}\\ {\cal S}[\varphi_0|e]&=&2\varphi_0^2\left[\partial_0{v_{\psi}}- \partial_l(N^l{v_{\psi}})\right]-\frac{\varphi^2_0}3 \partial_j[\psi^2\partial^j (\psi^6 N_d)]\label{0-s1} \end{eqnarray} are the kinetic term, the potential term, and the surface term, respectively, \begin{eqnarray}\label{proi1} {v}&=&\frac{1}{{N_d}}\left[ (\partial_0-N^l\partial_l)\log{ {\psi}}-\frac16\partial_lN^l\right],\\ v_{(ab)}&=&\frac{1}{2}\left({\bf e}_{(a)i}v^i_{(b)}+{\bf e}_{(b)i}v^i_{(a)}\right),\\\label{proizvod} v_{(a)i}&=& \frac{1}{{N_d}}\left[(\partial_0-N^l\partial_l){\bf e}_{(a)i} + \frac13 {\bf e}_{(a)i}\partial_lN^l-{\bf e}_{(a)l}\partial_iN^l\right],\\\label{proizvod-p} v_{Q}&=&\frac{\partial_0Q-N^k\partial_kQ}{N_{\it d}} \end{eqnarray} are the velocities of the metric components and of the scalar field, ${\triangle}\psi=\partial_i({\bf e}^i_{(a)}{\bf e}^j_{(a)}\partial_j\psi)$ is the covariant Beltrami--Laplace operator, and ${}^{(3)}R({\bf{e}})$ is the three-dimensional curvature expressed in terms
of triads ${\bf e}_{(a)i}$: \begin{equation} \label{1-17} {}^{(3)}R({\bf e})=-2\partial^{\phantom{f}}_{i} [{\bf e}_{(b)}^{i}\sigma_{{(c)|(b)(c)}}]- \sigma_{(c)|(b)(c)}\sigma_{(a)|(b)(a)}+ \sigma_{(c)|(d)(f)}^{\phantom{(f)}}\sigma^{\phantom{(f)}}_{(f)|(d)(c)}. \end{equation} Here \begin{equation}\label{1-18} \sigma_{(a)|(b)(c)}= {\bf e}_{(c)}^{j} \nabla_{i}{\bf e}_{(a) k}{\bf e}_{(b)}^{\phantom{r}k}= \frac{1}{2}{\bf e}_{(a)j}\left[\partial_{(b)}{\bf e}^j_{(c)} -\partial_{(c)}{\bf e}^j_{(b)}\right] \end{equation} are the coefficients of the spin-connection (see \cite{242a}), \begin{equation}\nabla_{i}{\bf e}_{(a) j}=\partial_{i}{\bf e}_{(a)j} -\Gamma^k_{ij}{\bf e}_{(a) k}\end{equation} are the covariant derivatives, and $\Gamma^k_{ij}=\frac{1}{2}{\bf e}^k_{(b)}(\partial_i{\bf e}_{(b)j} +\partial_j{\bf e}_{(b)i})$. The canonically conjugated momenta are \begin{eqnarray} \label{1-32}{p_{\psi}}&=&\frac{\partial {\cal K}[\varphi_0|e]}{\partial (\partial_0\ln{{{\psi}}})}~=-8\varphi_0^2{{v}}, \\\label{1-33} p^i_{(b)}&=&\frac{\partial {\cal K}[\varphi_0|e] }{\partial(\partial_0{\bf e}_{(a)i})} =\frac{\varphi_0^2}{3}{\bf e}^i_{(a)} v_{(a b)},\\\label{1-34} P_{Q}&=&2\varphi^2_0\frac{\partial_0Q-N^k\partial_kQ}{N_{\it d}}. \end{eqnarray} The Hamiltonian action takes the form \cite{242,242a} \begin{equation}\label{1-16} S=\int d^4x \left[\sum\limits_{{F=e,\log\psi,Q} } P_{F}\partial_0F -{\cal H}\right], \end{equation} where \begin{equation}\label{1-17c} {\cal H}={N_d} T_{\rm d}+N_{(b)} {T}^0_{(b)} +\lambda_0{p_\psi}+ \lambda_{(a)}\partial_k{\bf e}^k_{(a)} \end{equation} is the sum of constraints with the Lagrange multipliers ${N_d}$, $N_{(b)}={\bf e}_{k(b)}N^k$, $\lambda_0$, $\lambda_{(a)}$, where $T^0_{\rm (a)}= -\!\!{p_{\psi}}\partial_{(a)} {\psi}\!+\!\frac{1}{6}\partial_{(a)} ({p_{\psi}}{\psi})\! +\! 2p_{(b)(c)}\gamma_{(b)|(a)(c)}\!-\!\partial_{(b)}p_{(b)(a)}\!+\!
P_Q\partial_{(a)}Q $ are the components of the total energy-momentum tensor ${T}^0_{(a)}=-\frac{\delta S}{\delta N_{k}}{\bf e}_{k(a)}$, and \begin{eqnarray}\label{1-37a} T_{\rm d}[\varphi_0|\psi]&=&-\frac{\delta S}{\delta N_{\rm d}} = \frac{4\varphi_0^2}{3}{\psi}^{7} \triangle {\psi}+ \sum\limits_{I} {\psi}^I{\cal T}_I,\\\label{1-37ab} T_{\psi}[\varphi_0|\psi]&=&-\psi\frac{\delta S}{\delta \psi}\equiv \frac{4\varphi_0^2}{3}\left[7N_d{\psi}^{7}\! \triangle {\psi}+{\psi}\! \triangle \!\![N_d{\psi}^{7}]\!\right]\!+\! \! N_d\sum\limits_{I }I {\psi}^I{\cal T}_I=0;\end{eqnarray} here $\cal{T}_I$ is partial energy density marked by the index $I$ running, in general case, a set of values I=0 (stiff), 4 (radiation), 6 (mass), 8 (curvature) in correspondence with a type of matter field contributions \begin{eqnarray}\label{h31} {\psi}^{7} \triangle {\psi}&\equiv&{\psi}^{7} \partial_{(b)}\partial_{(b)}{\psi}\\\label{h32} {\cal T}_{I=0}&=&\frac{6{p}_{(ab)}{p}_{(ab)}}{\varphi_0^2} -\frac{16}{\varphi_0^2}{p_{\psi}}^2+\frac{P_{Q}^2}{4\varphi_0^2} \\\label{h35} {\cal T}_{I=8}&=&\varphi_0^2\left[\frac{1} {6}R^{(3)}({\bf e})+{\partial_{(b)}Q\partial_{(b)}Q}\right], \end{eqnarray} here ${p}_{(ab)}=\frac{1}{2}({\bf e}^i_{(a)}\widetilde{p}_{(b)i}+ {\bf e}^i_{(b)}{p}_{(a)i})$, we include the Dirac local condition of the minimal 3-dimensional hyper-surface [5] too \begin{equation}\label{1-42} p_{{\widetilde{\psi}}}=0 \to (\partial_0-N^l\partial_l)\log{ {\widetilde{\psi}}}=\frac16\partial_lN^l, \end{equation} in order to obtain a positive value of the Hamiltonian density (\ref{h32}) after the separation of the cosmological scale factor (\ref{1non1}). The constraint-shell action (\ref{1-16}) after the separation of the zero modes (\ref{1non1}) and (\ref{z-s1}) takes the form \begin{eqnarray}\nonumber S\big|_{{\cal H}=0}\!\!\!&=&\!\! \int\! 
dx^0\!\int d^3x \sum\limits_{{F}={\psi},e,\,{Q}}P_{{F}}\partial_0F|_{\varphi_0 a=\varphi}=\\\nonumber &=& \int dx^0 \left\{\int\limits_{V_0} d^3x\sum\limits_{\widetilde{F}=\widetilde{\psi},e,\, \overline{Q}} P_{\widetilde{F}}\partial_0 \widetilde{F} -P_\varphi\frac{d\varphi}{dx^0}+P_{\langle Q\rangle}\frac{d\langle Q\rangle}{dx^0}\right\} =\\\label{100} &=&\int\limits_{\varphi_I}^{\varphi_0}d{\varphi}\left\{\int\limits_{V_0} d^3x\sum\limits_{\widetilde{F}=\widetilde{\psi},e,\, \overline{Q} } P_{\widetilde{F}}\partial_\varphi \widetilde{F} +P_{\langle Q\rangle}\frac{d\langle Q\rangle}{d\varphi} -P_\varphi\right\}. \end{eqnarray} where $P_\varphi=\pm E_\varphi$ is the constraint-shell Hamiltonian in the ``superspace of events'' given by the resolving the energy constraint (\ref{ecs}), where $\widetilde{T}_{\rm d}=T_{\rm d}[\varphi|\widetilde{\psi}]$, and $T_{\rm d}[\varphi|\widetilde{\psi}]$ is given by Eqs. (\ref{1-37a}), (\ref{h32}) and (\ref{h35}) where $[\varphi_0|{\psi}]$ is replaced by $[\varphi|\widetilde{\psi}]$. \section*{References}
{ "attr-fineweb-edu": 1.242188, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUbsk4ukPiENFaWpIq
\section{Introduction} \label{intro} Charge transport in structures with fractional dimensionality has attracted a high degree of attention due to both its fundamental and applied interest \cite{mbb82,fp90}. The conductivity of a random fractal of resistors was studied in Ref. \cite{k77} as an example of critical phenomena. A large body of literature has been devoted to the study of the unusual dynamics of electrons in regular and random fractals \cite{bh91}. A discussion of these questions and appropriate references may be found in recent reviews (see, e.g., \cite{hb87,nyo94}). In the present paper we consider problems of percolation and hopping transport on {\em nearly one-dimensional} strongly anisotropic fractals \cite{spj97} with dimensionality $D=1+\epsilon $. These fractals are expected to exhibit unique properties because their dimensionality at $\epsilon \ll 1$ is close to the lower marginal dimensionality for the percolation transition. In contrast to isotropic fractals, the small parameter $\epsilon $ enables us to obtain an exact solution. Moreover, in the case of nearly one-dimensional fractals it becomes possible to establish not only average characteristics but also their entire distribution functions. Another motivation for a study of quasi-one-dimensional fractals is recent experimental data on conducting polymers such as doped polyacetylene, polypyrrole and polyaniline \cite{ts92,icsm96,wjrme90,joea94,ke97}. In general, this class of polymers has a great variety of transport properties. In the undoped state these polymers are semiconductors with an energy gap of Peierls-Mott origin \cite{Heeger88}. With doping the energy gap is suppressed quickly, and in the highly doped case there is a finite density of states at the Fermi level. The room temperature conductivity ($\sigma_{RT}$) of a heavily doped sample may attain metallic values, and the temperature and frequency dependencies of the conductivity may be close to metallic.
The nature of the metallic phase in these samples is presently a subject of intensive study. One point of view is that the metallic state depends upon strong interchain coupling \cite{wjrme90,joea94,ke97,nps89}. In doped polymers with moderate $\sigma_{RT}$ (of the order of several hundred $S/cm$) the conductivity, as a rule, decreases with decreasing temperature \cite{ts92,wjrme90,joea94,ke97}. Because this decay follows a power law over a large temperature interval, these materials are presumably near the metal-insulator transition, which occurs at a critical interchain coupling. Poorly conducting doped samples, with $\sigma_{RT}$ of the order of or less than 1 $S/cm$, show behavior that can be classified as ``dielectric'' \cite{wjrme90}: it is similar to that observed in amorphous semiconductors. For such materials, the DC conductivity is strongly dependent on temperature and generally follows $\sigma _{DC}\propto \exp -(T_0/T)^{1/2}$. It should be noted that care must be taken to examine a large temperature range when comparing the experimental conductivity with model dependencies \cite{ke97}. For a variable range hopping (VRH) mechanism of transport the temperature dependence of the conductivity was initially derived \cite{md79,se84} to be $\sigma _{DC}\propto \exp -(T_0/T)^{1/(d+1)}$, where $d$ is the system's dimensionality. For $d=1$ this formula reproduces the observed dependence $\sigma_{DC}\propto \exp -(T_0/T)^{1/2}$. However, this approach is not correct, since 1d VRH \cite{kk73} yields the Arrhenius law, $\sigma_{DC}\propto \exp -T_0/T$, with $T_0$ set by the highest barrier that occurs in the chain. The 1d VRH picture was modified for a quasi-1d system \cite{nps89,shante77} to include weak hops between nearest-neighbor chains, thereby avoiding the highest barriers. This approximation results in a quasi-1d VRH law, $\sigma _{DC}\propto \exp -(T_0/T)^{1/2}$.
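The dominance of the highest barrier in strictly 1d hopping can be illustrated with a minimal numerical sketch (ours, not taken from the cited works; units and the uniform barrier distribution are illustrative assumptions): for thermally activated resistors in series, $T\ln R$ tends to $E_{\max}$ as $T\to 0$, i.e. the Arrhenius law rather than a Mott-type stretched exponential.

```python
import numpy as np

rng = np.random.default_rng(1)

def chain_log_resistance(T, n_sites=2000):
    """ln R for a 1d chain of activated resistors R_i = exp(E_i/T) in series,
    with barriers E_i drawn uniformly from [0, 1] (illustrative units)."""
    E = rng.uniform(0.0, 1.0, n_sites)
    # log-sum-exp to avoid overflow at low T
    m = E.max() / T
    return m + np.log(np.exp(E / T - m).sum())

# T * ln R approaches E_max ~ 1 as T -> 0: an Arrhenius law set by the
# single highest barrier, not the 1/(d+1) Mott exponent.
for T in (0.05, 0.02, 0.005):
    print(T, T * chain_log_resistance(T))
```

The effective activation energy extracted this way saturates at the largest barrier in the chain, which is the statement used in the text above.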
Experimental measurements of the microwave conductivity and dielectric constant in poorly conducting doped samples \cite{wjrme90,joea94} revealed that both are strongly dependent upon temperature too, most probably according to the same quasi-1d Mott law, i.e. $\exp -(T_0/T)^{1/2}$. The usual theory of hopping transport predicts, however, only a very weak power-law temperature dependence for the frequency-dependent conductivity and the dielectric constant in two-- and three--dimensional systems~\cite{bb85}. In the present work we exploit the specific structure of the polymer network to understand these peculiar features of conducting polymers. In stretched polyacetylene this network is formed by coupled polymer chains oriented along some direction. Electron micrographs show that in these substances polymeric chains are organized into {\em fibrils} \cite{ts92}, which may be distinctly seen to be subdivided into smaller ones \cite{ar87}. In a non-fibrillar form of conducting polymers, like polyaniline, X-ray data reveal the existence of highly ordered ``crystalline regions'' with metallic properties \cite{wjrme90,joea94,ke97}. Therefore the whole network of stretched polyaniline may be thought of as constructed from long one--dimensional polymer chains randomly coupled by metallic islands of various sizes. The volume fraction of metallic islands can be small. We assume here that the polymer structure represents a {\em nearly one-dimensional fractal}. This means a specific kind of polymer chain organization, defined in the following way: Choose a three-dimensional cube with edge $L$. Chains that are coupled within this cube form a set of bundles disconnected from each other. If for large enough $L$ the cross--section of the maximum bundle is proportional to $L^\epsilon $, where $0\leq \epsilon \leq 2 $, then we shall call the system $d^{*}=1+\epsilon $--dimensional. Obviously $\epsilon =0$ for purely one--dimensional systems (sets of uncoupled chains).
Note that if one assumes the chains to be connected either by a low concentration of uncorrelated interchain links, or by weak links (their resistivities being high compared to intrachain ones in our example), then we are dealing with a {\em quasi--one--dimensional} system~\cite{nps89}, which is three--dimensional according to our definition. The problem of electron localization in similar fractals was studied in Refs. \cite{Shapiro82,Cohen88}. It was found that even in the presence of weak disorder all the electronic states remain localized as long as $\epsilon \le 1$. Therefore, the mechanism of charge transport in a fractal with $\epsilon \ll 1$ is supposed to be variable range hopping (VRH). This assumption is in agreement with experimental observations for poorly conducting highly doped polymers, for which there is a finite density of states at the Fermi level \cite{jpmme94}. The usual method to treat VRH models is the effective medium approximation \cite{bb85}, which gives wrong results in the nearly--1d case. For example, for the percolation model this method gives the threshold concentration of broken bonds $c_t\approx \epsilon $, while $c_t\approx \exp \left( -1/\epsilon \right) $, as we shall see later. The results for critical exponents are also wrong in this approximation. To treat VRH in a nearly one-dimensional fractal we choose the following approach. We first study the percolation problem in a nearly-1d fractal exactly. The VRH model is then reduced to the percolation problem by constructing the effective percolation lattice \cite{bb85,bby79,zv80}. In this way we have found that at low temperatures the VRH conductivity obeys a quasi-1d Mott law, $\sigma_{DC}\propto \exp -(T_1/T)^{1/2}$, but the characteristic temperature $T_1$ is greater than $T_0$ for a 1d chain by a factor of $1/\epsilon$. A similar temperature dependence is obtained for the AC conductivity.
These results can explain the observed temperature dependence of the conductivity and dielectric constant in poorly conducting polymers. Additionally, it was shown that there is a strong frequency dependence of the conductivity in the region of extremely low frequencies. These peculiarities reflect the fact that in a random fractal with dimensionality close to one the low frequency conductivity is entirely controlled by the weak charge transfer between clusters. Each cluster is very dense and remains well isolated. There exist several different problems related to percolation. First, one may be interested in the statistical properties of percolating media: the distribution of connected clusters, the probability of two or more points being connected, etc. As was shown by Fortuin and Kasteleyn \cite{fk72}, this problem can be reduced to the $q$-component Potts model in the limit $q\rightarrow 1$. Thus, the powerful set of field theory methods may be applied. This analogy, however, does not allow us to treat the conductivity of a percolating cluster. The evaluation of the conductivity exponent $\mu $, which describes the behavior of the DC conductivity $\sigma$ near the percolation threshold: \begin{equation} \sigma \propto \left( \frac{c_t-c}{c_t}\right) ^\mu \,, \label{condexp} \end{equation} is a much more complicated task than the ``field-theoretic'' ones, such as the exponents of the correlation length and of the infinite cluster capacity, etc. Thus our first aim is to study the critical behavior of the conductivity near the percolation threshold in a $d$--dimensional lattice, where $d$ is close to the lower critical dimensionality, i.e. $d=1+\epsilon $, $\epsilon \ll 1$. The real space renormalization group of Migdal and Kadanoff (RGMK) \cite{mg76,kdho75,kd76,tm96}, being exact at $d=1$, may be expected to be the appropriate tool as $d$ tends to unity. This method was applied to the percolation problem several years ago by Kirkpatrick \cite{k77}.
He found the critical exponents of the correlation length and conductivity by using the RG equations for the conductivity distribution truncated up to the first moment. Though he did not explicitly consider the case of a nearly-1d system, this method, if properly applied, gives the right dependence of the conductivity exponent on $\epsilon $ except for a pre-exponential factor. We extend the RGMK method to consider the distribution functions of conductivities and resistivities in random media. This enables us to derive the equation for the conductivity exponent $\mu$, which gives realistic values not only in the nearly-1d case. Moreover, explicit expressions for the distribution functions at the percolation threshold will be obtained. To the best of our knowledge, only numerical estimates of the moments of the random conductivity were available until this work (see, e.g. \cite{bhl95}). Thorough investigation of the percolation problem and its various modifications is of interest both for its conceptual significance and for its numerous applications \cite{nyo94,imb92,sa94}. Besides its application to random conducting media, on which we shall concentrate here, the percolation approach has been used, e.g., to treat the rigidity transition in random networks \cite{mdl96} and mechanical breakdown in solids \cite{zrsv96}. Another application is magnetic flux flow in type II superconductors \cite{wf96}. In the case of magnetic vortices pinned by disorder, their motion just above the depinning threshold occurs along a sparse (possibly fractal) network of persistent channels. The directed percolation model is now often applied to describe a wide class of phenomena, in particular, self-organized criticality \cite{gp95,mz96}. The paper is organized as follows: In Chapter \ref{dimen} the notion of fractional dimensionality is introduced for oriented chain arrays and illustrated by hierarchical structures.
In Chapter \ref{rgmk} the RGMK transformations for the conductivity of disordered media and for the connectivity of a percolation system are derived; the latter is studied in Chapter \ref{connect}. The RG equation for the distribution function of the conductivity at the percolation threshold is solved in Chapter \ref{conduct}, and the explicit form of the distribution function is found in Chapter \ref{DF}. Using scaling relations, results for the AC conductivity of the percolating lattice near threshold are obtained in Chapter \ref{scale}, and they are applied to describe the temperature and frequency dependence of the conductivity for variable range hopping transport in Chapter \ref{vrh}. Results are discussed in Chapter \ref{concl}. Three Appendices contain technical details. \section{Hierarchical coupling of oriented chains} \label{dimen} The $(m,n)$ hierarchical structure (HS) is constructed through the infinite repetition of two successive steps (see Fig. \ref{hs}): a) construction of $n$-chains, and b) construction of $m$-bundles. After every step the resulting construction may be treated as a new bond ($l$-th level bundles). We use the following definition of dimensionality for a chain fractal, considering an array of one-dimensional chains connected in some regular or random fashion by transverse bonds of various lengths. In an $L$-size cube the chains form a set of bundles connected inside this cube. Within each bundle in the cube the chains are interconnected. There are no connections between the bundles within the $L$-size cube. If the number of chains in the maximum sized bundle scales as $L^\epsilon $ for large enough $L$, where $0\leq \epsilon \leq 2$, then we have a $D=1+\epsilon $--dimensional network. Obviously $\epsilon =0$ for purely one--dimensional systems (sets of disconnected chains).
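For the regular $(m,n)$ hierarchical structure this definition can be checked directly: at the $l$-th level the structure has linear size $L=n^l$ while the maximal bundle contains $m^l$ chains, so the cross-section grows as $L^\epsilon$ with $\epsilon=\ln m/\ln n$. A short numerical sketch (ours, not from the paper):

```python
import numpy as np

def hs_dimension(m, n, levels=10):
    """Estimate D = 1 + eps for an (m,n) hierarchical structure from the
    scaling of the maximal-bundle cross-section with linear size L = n**l."""
    l = np.arange(1, levels + 1)
    log_L = l * np.log(n)          # linear size at level l
    log_width = l * np.log(m)      # chains in the maximal bundle
    eps = np.polyfit(log_L, log_width, 1)[0]   # slope of log-log plot
    return 1.0 + eps

print(hs_dimension(2, 8))  # -> 1 + ln 2 / ln 8 = 4/3
```

For $m=n$ the fit gives $\epsilon=1$, i.e. $D=2$, the upper end of the allowed range $0\leq\epsilon\leq 2$ corresponds to $m=n^2$.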
The characteristic feature of fractals constructed from oriented 1d chains is their self-similarity: the system at any scale appears to be subdivided into bundles, which in turn are subdivided into smaller ones, etc. In particular, the dimensionality $D=1+\ln m/\ln n$ may be ascribed to the $(m,n)$ hierarchical structure, if in spatial dimension $d\ge D$ we replace every site with $2m^l$ bonds attached to it ($l$ is the level of bundles attached at each side of this site) by $m^l$ sites connected by transverse bonds of infinite strength (see example in Fig. \ref{hsin2d}). Our hypothesis here is that oriented polymer network structures are of this type (with $D=1+\epsilon $ close to 1, $\epsilon \ll 1$), at least in some wide enough interval of length scales, e.g. from the scale of fibrils (hundreds of nm) down to molecular scales. Transmission electron micrographs of fibrillar polyacetylene (see e.g. \cite{ar87}) appear to support this hypothesis. Of course, the real structures are not regular ones, and the requirement of self-similarity here is to be treated in a statistical sense. Nevertheless, we shall use the RGMK scheme, based on regular fractals (HS), for their analysis. In the case of conducting polymers such as nonfibrillar doped polyaniline and doped polypyrrole, one may assume the polymer network's apparent fractality in some scale range to be caused by a dilute distribution of crystalline regions providing interchain links (fractality generated by randomness \cite{hba96}). Bearing in mind that the RGMK is exact in one dimension, one may hope to obtain meaningful results when the dimensionality is close to 1. \section{Migdal and Kadanoff equations} \label{rgmk} The renormalization group of Migdal and Kadanoff may be formulated in a quite simple phenomenological fashion. Suppose we have some random $D$-dimensional medium with fluctuating local conductivity/resistivity. Let us consider a $\lambda $-size cube within the medium.
Its conductance is $\sigma (\lambda )\lambda ^{D-2}$, where the random conductivity $\sigma \left( \lambda \right) $ is $\lambda $-dependent for strongly inhomogeneous systems. The distribution functions for the conductivity and for the resistivity $\rho(\lambda )=1/\sigma (\lambda )$ are defined in the Laplace representation as: \begin{equation} P(\eta ,\lambda )=\left\langle \exp \left( -\eta \sigma \left( \lambda \right) \right) \right\rangle ;\quad Q(s,\lambda )=\left\langle \exp \left( -s\rho\left( \lambda \right) \right) \right\rangle . \label{def1} \end{equation} \begin{figure}[tbp] \epsfbox{Fig1.eps} \vspace{0.5cm} \caption{Construction of hierarchical structure.} \label{hs} \end{figure} \begin{figure}[tbp] \epsfbox{Fig2.eps} \vspace{0.5cm} \caption{Hierarchical structure of Fig. \ref{hs} depicted for a two-dimensional case.} \label{hsin2d} \end{figure} If we change the size of the cube, $\lambda \rightarrow \lambda ^{\prime }=n\lambda $, we arrive at some new random variables $\sigma (\lambda ^{\prime })$, $\rho(\lambda ^{\prime })$ with distribution functions $P\left( \eta ,\lambda ^{\prime }\right) $, $Q\left( s,\lambda ^{\prime }\right)$, respectively. The cube's enhancement may be treated as an $n$-times expansion in one (``longitudinal'') spatial direction, and in the $D-1$ other (``transverse'') ones. If one intends to treat these transformations as infinitesimal ones afterwards, the order of operations is not important. The RGMK scheme is based upon two approximations: i) enhancing the size $n$ times in the longitudinal direction is treated as a connection of $n$ resistors in series, and ii) in a similar way, the transverse cube's enhancement is replaced by the parallel connection of $m=n^{D-1}$ elements. Thus we have: \begin{eqnarray} \widetilde{\rho}^{\left( n\right) }\left( \lambda \right) =\frac 1n \sum_1^n\rho_l\left( \lambda \right)\,,\,\,\,\,\sigma \left( n\lambda \right) =\frac 1m\sum_1^m\widetilde{\sigma }_l^{\left( n\right)} \left( \lambda \right)\,.
\label{adlaw} \end{eqnarray} Here the tilde values refer to the rectangular element with dimensions $n\lambda $ in the longitudinal direction and $\lambda $ in the other ones. In both steps the resistivities $\rho_l$ and conductivities $\widetilde{\sigma}_l^{\left(n\right)}=1/\widetilde{\rho}_l^{\left(n\right)}$ of the constituent components are supposed to be independent random variables, and, therefore, the distribution functions are transformed in the two steps simply as: \begin{equation} \tilde{Q}(s,\lambda )=\left\langle \exp \left( -s\tilde{\rho}^{\left( n\right) }\left( \lambda \right) \right) \right\rangle =Q^n\left( s/n,\lambda \right) \,,\;P\left( \eta ,n\lambda \right) =\tilde{P}^m\left( \eta /m,\lambda \right),~~~~m=n^{D-1}. \label{lgtr} \end{equation} This transformation is exact for the $(m,n)$ hierarchical structure. Equation (\ref{lgtr}) should be supplemented with the relation between the conductivity and resistivity distribution functions (DF) in the Laplace representation. It may be easily derived from the definitions (\ref{def1}), using the following integral identity: \[ e^{-x/\alpha }=1-\sqrt{x}\int_0^\infty \frac{dy}{\sqrt{y}}J_1\left( 2\sqrt{xy}\right) e^{-\alpha y}\,, \] where $J_1$ is the Bessel function. As a result, the relation between $Q(s,\lambda)$ and $P(\eta,\lambda)$ takes the form of a Hankel transformation: \begin{equation} Q(s,\lambda )=1-\sqrt{s}\int_0^\infty \frac{d\eta }{\sqrt{\eta }} J_1\left( 2\sqrt{s\eta }\right) P\left( \eta ,\lambda \right) \,, \label{hankel} \end{equation} the reverse relation is of the same form.
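The Hankel relation can be verified numerically in the simplest case of a non-random medium, $\rho=\sigma=1$, where $P(\eta)=e^{-\eta}$ and the relation must return $Q(s)=e^{-s}$. The sketch below is ours (illustrative, not from the paper); it substitutes $\eta=u^2$ to remove the integrable $1/\sqrt{\eta}$ singularity and uses the power series of $J_1$, which is adequate at the moderate arguments reached here:

```python
import numpy as np

def bessel_j1(x, terms=60):
    """J1(x) = sum_k (-1)^k (x/2)^(2k+1) / (k! (k+1)!), vectorized in x."""
    x = np.asarray(x, dtype=float)
    result = np.zeros_like(x)
    term = x / 2.0                       # k = 0 term
    for k in range(terms):
        result += term
        term *= -(x / 2.0) ** 2 / ((k + 1) * (k + 2))
    return result

def Q_from_P(s):
    """RHS of Q(s) = 1 - sqrt(s) * int deta/sqrt(eta) J1(2 sqrt(s eta)) P(eta)
    for P(eta) = exp(-eta); the substitution eta = u**2 gives deta/sqrt(eta) = 2 du."""
    u = np.linspace(0.0, 8.0, 20001)
    f = 2.0 * bessel_j1(2.0 * np.sqrt(s) * u) * np.exp(-u * u)
    integral = np.sum((f[1:] + f[:-1]) * 0.5 * (u[1] - u[0]))  # trapezoid rule
    return 1.0 - np.sqrt(s) * integral

for s in (0.5, 1.0, 2.0):
    print(s, Q_from_P(s), np.exp(-s))  # the two columns agree
```

The same quadrature applied to a genuinely random $\rho$ would reproduce the pair $(P,Q)$ used in the RG equations below.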
This transformation has the following properties, to be used later: \begin{equation} Q\left( 0,\lambda \right) =1-P\left( +\infty ,\lambda \right) \,,\quad Q\left( +\infty ,\lambda \right) =1-P\left( 0,\lambda \right) \,, \label{ha} \end{equation} \begin{equation} s\frac{\partial Q(s,\lambda )}{\partial s}=\sqrt{s}\int_0^\infty \frac{d\eta }{\sqrt{\eta }}J_1\left( 2\sqrt{s\eta }\right) \eta \frac{\partial P\left( \eta ,\lambda \right) }{\partial \eta }\,, \label{hd1} \end{equation} \begin{equation} s\frac{\partial ^2Q(s,\lambda )}{\partial s^2}=\sqrt{s}\int_0^\infty \frac{d\eta }{\sqrt{\eta }}J_1\left( 2\sqrt{s\eta }\right) \eta P\left( \eta ,\lambda \right) \,\,. \label{hd2} \end{equation} Thus we have a closed set of equations for arbitrary rescaling factors $n$ and $m$. It appears to be more convenient to deal with an infinitesimal transformation by setting $n=1+\delta \lambda /\lambda $, $m=1+\epsilon \delta \lambda /\lambda $. From Eq.(\ref{lgtr}) we have the variation of the distribution functions consisting of longitudinal and transverse parts: $\delta P=\delta _lP+\delta _tP$, $\delta Q=\delta _lQ+\delta _tQ$. \begin{eqnarray} \delta _lQ(s,\lambda ) &=&\left[ -s\frac{\partial Q(s,\lambda )}{\partial s} +Q(s,\lambda )\ln Q(s,\lambda )\right] \frac{\delta \lambda }\lambda \,, \nonumber \\ \delta _tP(\eta ,\lambda ) &=&\epsilon \left[ -\eta \frac{\partial P(\eta ,\lambda )}{\partial \eta }+P(\eta ,\lambda )\ln P(\eta ,\lambda )\right] \frac{\delta \lambda }\lambda . \label{variz} \end{eqnarray} Using the relations (\ref{hankel},\ref{hd1}), one may rewrite the first of these equations as: \begin{equation} \delta _lP(\eta ,\lambda )=\left[ \eta \frac{\partial P(\eta ,\lambda ) }{\partial \eta }-\int_0^\infty \frac{ds}{\sqrt{s}}J_1\left( 2\sqrt{s\eta }\right) Q(s,\lambda )\ln Q(s,\lambda )\right] \frac{\delta \lambda }\lambda \,.
\label{varlg} \end{equation} Then, from $\delta P=\frac{\partial P}{\partial \lambda }\delta \lambda =\delta _lP+\delta _tP$, we have the following equation: \begin{eqnarray} &&\lambda \frac{\partial P\left( \eta ,\lambda \right) }{\partial \lambda } =B\left( \left\{ P\right\} ,\eta \right) = \nonumber \\ &&\left( 1-\epsilon \right) \eta \frac{\partial P(\eta ,\lambda )}{\partial \eta }+\epsilon P(\eta ,\lambda )\ln P(\eta ,\lambda )-\int_0^\infty \frac{ds}{\sqrt{s}}J_1\left( 2\sqrt{s\eta }\right) Q(s,\lambda )\ln Q(s,\lambda )\,. \label{eveq} \end{eqnarray} This equation, combined with Eq. (\ref{hankel}), determines the evolution of the distribution function upon size scaling in a closed form. This scheme also allows us to treat the percolation system by introducing the probability $c(\lambda )$ that the $\lambda$-sized cube is disconnected (i.e., has zero conductivity or infinite resistivity). Taking into account the definitions of the distribution functions (\ref{def1}), $c(\lambda)$ may be written as: \begin{equation} c(\lambda )=P\left( +0,\lambda \right) =1-Q\left( +\infty ,\lambda \right) \,. \label{bb} \end{equation} Putting $\eta =+\infty $ in Eq. (\ref{eveq}) and using the formula (\ref{ha}), we have: \begin{equation} \lambda \frac{dc}{d\lambda }=\epsilon c\ln c-\left( 1-c\right) \ln \left( 1-c\right) \,. \label{bbev} \end{equation} The right hand side of this equation has three fixed points: two stable ones, $c=0$ and $c=1$, corresponding to connected and disconnected systems, respectively, and the unstable fixed point, $c=c_t$, $0<c_t<1$, \begin{equation} \epsilon c_t\ln c_t=\left( 1-c_t\right) \ln \left( 1-c_t\right) \,, \label{thrp} \end{equation} corresponding to the percolation threshold. Now let us consider the statistical properties of clusters for the percolation problem, i.e., the distribution of clusters over sizes and site numbers, the existence and capacity of the infinite cluster, etc.
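The fixed-point structure of the flow (\ref{bbev}) is easy to check numerically. In the sketch below (ours; a simple Euler scheme in $t=\ln\lambda$, with the leading-order threshold $c_t\approx e^{-1/\epsilon}$ taken from the result quoted in the Introduction), a concentration starting slightly below $c_t$ runs to the connected fixed point $c=0$, and one starting slightly above runs to the disconnected fixed point $c=1$:

```python
import math

def flow(c, eps):
    # right-hand side of  lambda dc/dlambda = eps*c*ln(c) - (1-c)*ln(1-c)
    return eps * c * math.log(c) - (1.0 - c) * math.log(1.0 - c)

def run_flow(c0, eps, t_max=100.0, dt=1e-3):
    # Euler integration in t = ln(lambda), with c kept inside (0, 1)
    c = c0
    for _ in range(int(t_max / dt)):
        c = min(max(c + dt * flow(c, eps), 1e-15), 1.0 - 1e-15)
    return c

eps = 0.2
ct = math.exp(-1.0 / eps)       # threshold, to leading order in eps
print(run_flow(0.9 * ct, eps))  # below threshold: flows to c = 0 (connected)
print(run_flow(1.1 * ct, eps))  # above threshold: flows to c = 1 (disconnected)
```

The slow escape from the neighborhood of $c_t$, at rate $\nu^{-1}$, is what the linearization in the next section quantifies.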
We suppose every bond of a HS to be either broken, with probability $c$, or not, with probability $1-c$. The statistics of the percolating network is closely related to the thermodynamics of the $q$-state Potts model \cite{fk72}. Namely, if we consider the partition function of the latter: \begin{equation} Z_q=\exp \left( -{\cal H}_q^{(0)}\right) , \label{pf} \end{equation} \begin{equation} {\cal H}_q^{(0)}=K\sum_{\left\langle ij\right\rangle }\left( 1-\delta _{\eta _i\eta _j}\right) , \label{hp0} \end{equation} where the variables $\eta _i=0,1,\ldots ,q-1$ and $\delta $ is the Kronecker $\delta $-symbol, $Z_q$ may be expressed in terms of the percolation model as: \begin{equation} Z_q=\left\langle q^\Gamma \right\rangle \,. \label{pfp} \end{equation} Here $\Gamma $ is the total number of connected clusters in the percolation model, and the average is over realizations with the broken-bond concentration $c=\exp (-K)$. To establish a further relationship, it is necessary to introduce external fields into the Hamiltonian of the Potts model: \begin{equation} {\cal H}_q^{(1)}={\cal H}_q^{(0)}+h_1\sum_i\left( 1-\delta _{\eta _i,0}\right) , \label{hp1} \end{equation} which may be thought of as additional bonds of strength $h_1$ between every site of a given lattice and some fictitious external one. The statistical properties of the percolation model may be obtained from its ``free energy'' \begin{equation} f\left( K,h_1\right) =-\frac 1N\left. \frac \partial {\partial q}\ln Z_q\left( K,h_1\right) \right| _{q=1}=-\frac 1N\left\langle \Gamma \right\rangle _{h_1}\,, \label{fe} \end{equation} where $N$ is the total number of sites. For example, the order parameter \begin{equation} P_1\equiv 1-\left. \frac{\partial f}{\partial h_1}\right| _{h_1=0}\,, \label{op1} \end{equation} characterizes the ``capacity'' of the infinite cluster, i.e., the probability that a given site belongs to the infinite cluster.
The second derivative is the ``susceptibility'': \begin{equation} \chi =-\left. \frac{\partial ^2f}{\partial h_1^2}\right| _{h_1=0}\,, \label{suc} \end{equation} which gives the average number of sites in finite clusters \cite{nyo94}. To obtain the RG equation for $f$ of a HS, one should sum over intermediate sites in bundles, thus performing a transition from the initial Hamiltonian, containing variables of all sites, to the one containing variables of sites of the next level (see Fig. \ref{hsin2d}). This transformation reproduces the structure of the initial Hamiltonian (\ref{hp1}) with an additional term, corresponding to an extra external field $h_2$: \begin{equation} {\cal H}_q={\cal H}_q^{(1)}-h_2\sum_{\left\langle ij\right\rangle }\left[ \delta _{\eta _i,0}\delta _{\eta _j,0}-\delta _{\eta _i\eta _j}- \frac 12\left( \delta _{\eta _i,0}+\delta _{\eta _j,0}\right) +1\right] \,. \label{hp} \end{equation} The last two terms in the sum are introduced for further convenience. For the $(n,m)$ HS, due to its self-similarity, after the summation over variables at intermediate sites we have the following equality: \begin{equation} Z_q(K,h,N)=\exp \left[ \frac N{mn}f_q^{(0)}(K,h)\right] Z_q\left( K^{\prime },h^{\prime },\frac N{mn}\right) \,, \label{pftr} \end{equation} where $h\equiv (h_1,h_2)$. Expressions for the parameters of the new Hamiltonian, $K^{\prime }$ and $h^{\prime }$, and the function $f_q^{(0)}$ may be found using the transfer matrix formalism.
Introducing the transfer matrix for a bond: \begin{eqnarray} T_{\eta _1\eta _2} &=&\exp \left[ -H(\eta _1,\eta _2)\right] \,, \nonumber \\ H\left( \eta _1,\eta _2\right) &=&K\left( 1-\delta _{\eta _1\eta _2}\right) +\frac{h_1}2\left( 2-\delta _{\eta _1,0}-\delta _{\eta _2,0}\right) - \nonumber \label{trm} \\ &&h_2\left[ \delta _{\eta _1,0}\delta _{\eta _2,0}-\delta _{\eta _1\eta _2}-\frac 12\left( \delta _{\eta _1,0}+\delta _{\eta _2,0}\right) +1\right] \,, \end{eqnarray} it is easy to calculate the transfer matrix for the $(n,m)$ bundle: \begin{equation} T_{\eta _1\eta _2}^{\prime }=\left[ \left( \hat T^n\right) _{\eta _1\eta _2}\right] ^m=\exp \left[ f_q^{(0)}(K,h)-H^{\prime }(\eta _1,\eta _2)\right] \,, \label{ntrm} \end{equation} with $H^{\prime }$ having the same structure as $H$, but with new parameters $K^{\prime }$ and $h^{\prime }$. Then we come to the following equation for the free energy of the Potts model: \[ f_q(K,h)\equiv -\frac 1N\ln Z_q=\frac 1{mn}f_q(K^{\prime },h^{\prime })- \frac 1{mn}f_q^{(0)}(K,h)\,, \] and for the percolation model: \begin{equation} f(c,h)=\frac 1{mn}f(c^{\prime },h^{\prime })-\frac 1{mn}u(c,h)\,, \label{fetr} \end{equation} where the variable $c=\exp (-K)$ is introduced instead of $K$, and $u(c,h)=\partial f_q^{(0)}/\partial q|_{q=1}$. Finally, after the transition to infinitesimal transformations: \begin{eqnarray} n &=&1+\frac{d\lambda }\lambda ,\quad m=1+\epsilon \frac{d\lambda }\lambda \,, \nonumber \\ c^{\prime } &=&c+v_c\frac{d\lambda }\lambda \,,\quad h_{1,2}^{\prime }=h_{1,2}+v_{1,2}\frac{d\lambda }\lambda \,,\quad u=w\frac{d\lambda }\lambda \,, \label{izdef} \end{eqnarray} we arrive at the following equation: \begin{equation} v_c\frac{\partial f}{\partial c}+v_1\frac{\partial f}{\partial h_1}+v_2\frac{\partial f}{\partial h_2}-\left( 1+\epsilon \right) f=w\,, \label{efe} \end{equation} where $v_c$, $v_{1,2}$ and $w$ are given in Appendix \ref{apa}, Eqs. (\ref{vc}-\ref{w}).
Setting $h_{1,2}=0$ in Eq.(\ref{efe}), and taking into account Eq.(\ref{w}), we have: \begin{equation} \lambda \frac{df}{d\lambda }-(1+\epsilon )f=w_0=c+(1-c)\ln (1-c)\,, \label{efe0} \end{equation} where the independent variable $\lambda $ is related to $c$ by Eq. (\ref{bbev}). Equation (\ref{efe}) should be supplemented by boundary conditions. Directly from the definition (\ref{fe}) we have: $f=-1$ at $c=0$ and $f=0$ at $c=1$. Both Equations (\ref{efe},\ref{efe0}) have essentially two different solutions, depending on whether $c>c_t$ or $c<c_t$, where $c_t$ is the threshold concentration of broken bonds determined from Eq. (\ref{thrp}). \section{Percolation exponents} \label{connect} In the present section the critical exponents of connectivity will be obtained. The critical exponent $\nu $ of the correlation length $\xi $: \begin{equation} \xi \propto |c-c_t|^{-\nu }\,, \label{defnu} \end{equation} is determined by linearization of Eq. (\ref{bbev}) near $c=c_t$: \begin{equation} \nu ^{-1}=\epsilon \left( 1+\ln c_t\right) +1+\ln (1-c_t). \label{cle} \end{equation} Eqs.(\ref{bbev},\ref{efe0}) can be solved analytically for $\epsilon \ll 1$. In this case we have from Eq.(\ref{thrp}): \begin{equation} c_t=e^{-1/\epsilon }\,. \label{thrn1d} \end{equation} After substituting Eq. (\ref{thrn1d}) into Eq. (\ref{cle}), the critical exponent $\nu $ reads: \begin{equation} \nu \approx \frac 1\epsilon. \label{clen1d} \end{equation} The procedure for the solution of Eqs. (\ref{bbev},\ref{efe0}) is described in Appendix \ref{apb}.
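A quick numerical check of Eqs. (\ref{thrn1d}) and (\ref{clen1d}) is possible without any small-$\epsilon$ expansion (our sketch): solving Eq. (\ref{thrp}) by bisection and inserting $c_t$ into Eq. (\ref{cle}) shows that $c_t/e^{-1/\epsilon}\to 1$ and $\epsilon\nu\to 1$ as $\epsilon\to 0$.

```python
import math

def threshold(eps):
    """Percolation threshold: root of eps*c*ln(c) = (1-c)*ln(1-c) in (0, 1)."""
    f = lambda c: eps * c * math.log(c) - (1.0 - c) * math.log(1.0 - c)
    lo, hi = 1e-15, 1.0 - 1e-15      # f < 0 below the root, f > 0 above it
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

for eps in (0.3, 0.2, 0.1):
    ct = threshold(eps)
    nu_inv = eps * (1.0 + math.log(ct)) + 1.0 + math.log(1.0 - ct)
    print(eps, ct / math.exp(-1.0 / eps), eps / nu_inv)
# both ratios approach 1 as eps decreases
```

Already at $\epsilon=0.1$ the leading-order formulas are accurate to a fraction of a percent.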
The ``partition function'' $f(c)$ is found to be:
\begin{equation}
f(c)=\left\{
\begin{array}{c}
-c,\quad c\gg c_t; \\
-\frac{c_t^2}{2\epsilon }\left( 2\ln \frac{c_t}c\right) ^{\frac 1\epsilon +1}\gamma \left( -\frac 1\epsilon -1,2\ln \frac{c_t}c\right) -\epsilon ^{\frac 1\epsilon +1}\left( \ln \frac{c_t}c\right) ^{\frac 1\epsilon +1}\,,\quad 1\gg c>c_t; \\
\frac{c_t^2}{2\epsilon }\left( 2\ln \frac{c_t}c\right) ^{\frac 1\epsilon +1}\Gamma \left( -\frac 1\epsilon -1,2\ln \frac{c_t}c\right) \,,\quad c<c_t;
\end{array}
\right.
\label{fe0}
\end{equation}
where $\Gamma (a,x)$ is the incomplete $\Gamma $-function \cite{be}, and $\gamma (a,x)=\Gamma (a)-\Gamma (a,x)$. It is curious that the ``singular'' part of the ``free energy'' to the left of the percolation threshold, $c<c_t$:
\begin{equation}
f_s^{-}(c)=-\frac{\sqrt{\pi }}2c_t\frac{\left( 2\epsilon \right) ^{\frac 1\epsilon +\frac 12}}{\sin \frac \pi \epsilon }\left( \ln \frac{c_t}c\right) ^{\frac 1\epsilon +1}
\label{fe0s}
\end{equation}
is a strongly oscillating function of $\epsilon $ at $\epsilon \ll 1$. The critical exponent $\alpha $ for the ``specific heat'', $f_s\propto |c-c_t|^{2-\alpha }$, appears to be large and negative (as usual in the percolation model, see, e.g., \cite{nyo94}). Differentiating Eq.(\ref{efe}) with respect to $h_{1,2}$ and setting $h_{1,2}=0$ (see also Eqs.(\ref{vc})--(\ref{w})) we get equations for the ``order parameters'' $P_1$ (Eq.(\ref{op1})) and $P_2\equiv -\left. \partial f/\partial h_2\right| _{h=0}$:
\begin{eqnarray}
\lambda \frac{dP_1}{d\lambda } &=&\frac 1c\left[ 2-c+\frac 2c(1-c)\ln (1-c)\right] cP_1-P_2\,, \nonumber \\
\lambda \frac{dP_2}{d\lambda } &=&\left[ c+\ln (1-c)\right] P_1+\left[ 1-c+\frac 1c(2-c)\ln (1-c)\right] P_2\,.
\label{ope}
\end{eqnarray}
The boundary conditions $\left. P_1\right| _{c=0}=1$, $\left. P_2\right| _{c=0}=0$, $\left. P_1\right| _{c=1}=\left. P_2\right| _{c=1}=0$ may be obtained directly from the definition of the order parameters.
At $c>c_t$ we have the trivial solution of Eqs. (\ref{ope}), $P_1=P_2=0$. The lowest eigenvalue of the matrix on the right-hand side of Eq.(\ref{ope}) at $c=c_t$, which is also $\beta /\nu $, may easily be found numerically for any dimensionality. The results are presented in Table \ref{exponents} together with those obtained by other methods. For the nearly-1d system one can find the explicit form of the order-parameter dependences $P_1\left( c\right) $ and $P_1\left( \lambda \right) $. Because the region of our interest is $c<c_t\ll 1$, Eqs. (\ref{ope}) can be rewritten for small $c$ as:
\begin{eqnarray}
\lambda \frac{dP_1}{d\lambda } &=&\frac{c^2}3P_1-\frac c3P_2\,,
\label{open1da} \\
\lambda \frac{dP_2}{d\lambda } &=&-\frac{c^2}2P_1+P_2\,.
\label{open1d}
\end{eqnarray}
One can see from Eq.(\ref{open1d}) that $P_2=O(c^2P_1)$, so one can neglect the second term in Eq. (\ref{open1da}). As a result, Eq. (\ref{open1da}) is solved directly by the substitution $c=c_t\exp (-\lambda ^\epsilon )$ (see Eq. (\ref{solc1})) to read:
\begin{equation}
P_1=\exp \left[ -\frac{c_t^2}{3\epsilon }\int\limits_{\lambda ^\epsilon }^\infty \frac{dx}xe^{-2x}\right] =\exp \left[ \frac{c_t^2}{3\epsilon }{\rm Ei}\left( -2\lambda ^\epsilon \right) \right] \,,
\label{op}
\end{equation}
where ${\rm Ei}$ is the exponential integral function. Taking into account the asymptotics ${\rm Ei}\left( -x\right) =C+\ln x+\dots $ at small $x$, where $C$ is Euler's constant, and the relation between $c$ and $\lambda $, near the percolation threshold we obtain:
\begin{equation}
P_1=\left( \frac{c_t-c}{c_t}\right) ^\beta \,,\quad \beta =\frac{c_t^2}{3\epsilon }=\frac 1{3\epsilon }\exp \left( -\frac 2\epsilon \right) \,.
\label{beta}
\end{equation}
Thus the critical exponent $\beta $ of the infinite-cluster capacity appears to be very small and a strongly nonanalytic function of $\epsilon $.
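Equation (\ref{beta}) makes the smallness and nonanalyticity of $\beta$ explicit; a two-line numerical illustration (the sample values of $\epsilon$ are arbitrary):

```python
import math

def beta(eps):
    # beta = c_t^2/(3*eps) with c_t = exp(-1/eps), Eq. (beta)
    return math.exp(-2.0 / eps) / (3.0 * eps)

for eps in (0.5, 0.25, 0.1):
    print(eps, beta(eps))  # beta collapses faster than any power of eps
```

Already at $\epsilon=0.1$ one finds $\beta\sim 10^{-8}$, i.e. the infinite-cluster capacity rises almost vertically at the threshold.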
Using the scaling relations \cite{nyo94}, the complete set of critical exponents for the connectivity problem may be expressed through the exponents $\nu $ and $\beta $ obtained above.
\section{Conductivity exponent}
\label{conduct}
Let us consider in more detail the properties of the evolution functional $B$ in Eq.(\ref{eveq}). If we assume, e.g., that the conductivity is $0$ with probability $c$, and equals some finite value (say, $1$) with probability $1-c$, then its distribution function reads: $c+\left( 1-c\right) e^{-\eta }$. Let us choose the distribution in Eq. (\ref{eveq}) close to this form:
\begin{equation}
P\left( \eta \right) =c+\left( 1-c\right) e^{-\eta }+\psi \left( \eta \right) \,,
\label{lin}
\end{equation}
where $\psi (\eta )$ is a small correction. Linearizing the evolution operator $B$ with respect to $\psi $, we have:
\begin{eqnarray}
&&B\left( \left\{ P\right\} ,\eta \right) =\epsilon \left( 1-c\right) \eta e^{-\eta }+\epsilon \left[ c+\left( 1-c\right) e^{-\eta }\right] \ln \left[ c+\left( 1-c\right) e^{-\eta }\right] + \nonumber \\
&&\left( 1-c\right) \ln \left( 1-c\right) \,\left( 1-e^{-\eta }\right) +\eta \frac{d^2\psi }{d\eta ^2}+\left( 1-\epsilon \right) \eta \frac{d\psi }{d\eta }+\left\{ 1+\epsilon +\epsilon \ln \left[ c+\left( 1-c\right) e^{-\eta }\right] \right\} \psi \,.
\label{linev}
\end{eqnarray}
Another approximation for $B$ is possible if we assume:
\begin{equation}
P\left( \eta \right) =c+\left( 1-c\right) \exp \left[ -\phi (\eta )\right] \,,
\label{defphi}
\end{equation}
with $\phi \left( 0\right) =0$ and $\phi \left( \eta \right) \rightarrow \pm \infty $ as $\eta \rightarrow \pm \infty $ rapidly enough (faster than $\pm \sqrt{\left| \eta \right| }$, as we shall see later). An important point is also to assume analyticity of $P\left( \eta \right) $ and of $Q(s)$ within some stripe along the real axis.
Using the relations:
\[
J_1(z)=\frac{H_1^{(1)}(z)+H_1^{(2)}(z)}2\,,\quad H_1^{(1)}\left( ze^{i\pi }\right) =-H_1^{(2)}(z)\,,\quad H_1^{(1)}(z)\simeq -\frac{2i}{\pi z}\ \text{ as }z\rightarrow 0\,,
\]
where $H_1^{(1,2)}$ are the Hankel functions of the first and second kind, respectively, we may replace the integrals containing the $J_1$-functions in Eqs. (\ref{hankel},\ref{eveq}) by ones containing $H_1^{(1)}$, taken along the contour $C$ shown in Fig. \ref{intc}. Thus we obtain:
\begin{eqnarray}
Q(s) &=&\left( 1-c\right) \left[ 1-\sqrt{s}\int_0^\infty \frac{d\eta }{\sqrt{\eta }}J_1\left( 2\sqrt{s\eta }\right) \exp \left( -\phi \left( \eta \right) \right) \right]
\label{hank1} \\
&=&-\left( 1-c\right) \frac{\sqrt{s}}2\int_C\frac{d\eta }{\sqrt{\eta }}H_1^{(1)}\left( 2\sqrt{s\eta }\right) \exp \left( -\phi \left( \eta \right) \right) ,
\label{hank1a}
\end{eqnarray}
\begin{figure}[tbp]
\epsfbox{Fig3.eps}
\vspace{0.5cm}
\caption{Integration contour in Eq. (\ref{hank1}).}
\label{intc}
\end{figure}
where the contribution of the pole of $H_1^{(1)}\left( 2\sqrt{s\eta }\right) /\sqrt{\eta }$ in Eq. (\ref{hank1a}) just reproduces the first term in the square brackets of Eq. (\ref{hank1}). Assuming $\left| s\right| $ to be large enough, one may replace $H_1^{(1)}$ by its asymptotic expression:
\[
H_1^{(1)}\left( 2\sqrt{s\eta }\right) \simeq \pi ^{-1/2}\left( s\eta \right) ^{-1/4}\exp \left( -\frac{3i\pi }4+2i\sqrt{s\eta }\right) \,,
\]
and treat the integral (\ref{hank1a}) by the saddle-point method.
As a result, we get:
\begin{equation}
Q\left( s\right) =\left( 1-c\right) \left[ \frac{\phi ^{\prime }(\eta _c)}{\phi ^{\prime }(\eta _c)+2\eta _c\phi ^{\prime \prime }(\eta _c)}\right] ^{1/2}\exp \left[ -\phi (\eta _c)+2\eta _c\phi ^{\prime }(\eta _c)\right] \,,
\label{sp}
\end{equation}
where the saddle point should be determined from the equation:
\begin{equation}
i\sqrt{\frac s{\eta _c}}-\phi ^{\prime }(\eta _c)=0\,,\quad \text{or}\quad \eta _c\left( \phi ^{\prime }(\eta _c)\right) ^2=-s\,.
\label{spe}
\end{equation}
If we define:
\begin{equation}
\chi \left( s\right) =\phi \left( \eta _c\right) -2i\sqrt{s\eta _c}=\phi \left( \eta _c\right) -2\eta _c\phi ^{\prime }(\eta _c)\,,
\label{spt}
\end{equation}
the following relations can easily be established:
\begin{eqnarray*}
&&\chi ^{\prime }(s)=\frac 1{\phi ^{\prime }(\eta _c)}\,,\quad s\left( \chi ^{\prime }(s)\right) ^2=-\eta _c\,,\quad \phi (\eta _c)=\chi (s)-2s\chi ^{\prime }(s)\,, \\
&&\chi ^{\prime }(s)+2s\chi ^{\prime \prime }(s)=\frac 1{\phi ^{\prime }(\eta _c)+2\eta _c\phi ^{\prime \prime }(\eta _c)}\,.
\end{eqnarray*}
Obviously, the transformation $\phi (\eta )\leftrightarrow \chi (s)$ is symmetric, i.e., its inverse has the same functional form. With $\chi (s)$ we may rewrite Eq.(\ref{sp}) as:
\begin{equation}
Q(s)=\left( 1-c\right) \left[ 1+2s\frac{\chi ^{\prime \prime }(s)}{\chi ^{\prime }(s)}\right] ^{1/2}\exp \left( -\chi (s)\right) .
\label{sp1}
\end{equation}
Proceeding along the same lines, one can also derive the following equality:
\begin{eqnarray}
&&\sqrt{\eta }\int_0^\infty \frac{ds}{\sqrt{s}}J_1\left( 2\sqrt{s\eta }\right) Q(s)\ln Q(s)=\left( 1-c\right) \ln \left( 1-c\right) + \nonumber \\
&&\left( 1-c\right) \left[ \chi \left( s_c\right) -\ln \left( 1-c\right) -\frac 12\ln \left( 1+2s_c\frac{\chi ^{\prime \prime }(s_c)}{\chi ^{\prime }(s_c)}\right) \right] \exp \left( -\chi (s_c)+2s_c\chi ^{\prime }(s_c)\right) =
\label{sp2} \\
&&\left( 1-c\right) \left[ \phi (\eta )-2\eta \phi ^{\prime }(\eta )+\frac 12\ln \left( 1+2\eta \frac{\phi ^{\prime \prime }(\eta )}{\phi ^{\prime }(\eta )}\right) \right] \exp \left( -\phi (\eta )\right) . \nonumber
\end{eqnarray}
Replacing $\phi (\eta )=-\ln \left[ P(\eta )-c\right] $ back in Eq. (\ref{sp2}), and substituting the result into Eq. (\ref{eveq}), we get the evolution equation (\ref{eveq}) in the differential form:
\begin{eqnarray}
\lambda \frac{\partial P}{\partial \lambda } &=&B_1\left( \left\{ P\right\} ,\eta \right) =-\left( 1+\epsilon \right) \eta P^{\prime }+\epsilon P\ln P-\left( 1-c\right) \ln \left( 1-c\right) +\left( P-c\right) \ln \left( P-c\right) \nonumber \\
&-&\frac 12\left( P-c\right) \ln \left[ 1-2\eta \frac{P^{\prime \prime }}{P^{\prime }}-2\eta \frac{P^{\prime }}{P-c}\right] \,.
\label{evsp}
\end{eqnarray}
The derivation of Eq. (\ref{evsp}) suggests replacing the evolution operator $B$ with its approximate form $B_1$ at least for large enough $\eta $. But if the function $P(\eta )$ is analytic, then, taking into account $P(0)=1$, it may be represented in the form (\ref{lin}) in some neighborhood of the point $\eta =0$. If we plug Eq. (\ref{lin}) into Eq. (\ref{evsp}) and linearize the resulting expression with respect to $\psi $, we arrive at {\em exactly} the same evolution operator as in Eq. (\ref{linev}), which was obtained by linearization of the exact evolution operator $B$.
This observation prompts us to extend the region of validity of the approximate evolution equation (\ref{evsp}) to the whole complex plane of $\eta $. At the percolation threshold, $c=c_t$, the solution of the RG evolution equation can be taken in the form:
\begin{eqnarray}
P(\eta ,\lambda )=\bar{P}(\eta \lambda ^{-a}).
\label{anz1}
\end{eqnarray}
Here the critical index $a$ is related to the critical exponents of the conductivity, $\mu $, and of the correlation length, $\nu $, by the relation
\begin{eqnarray}
a=\mu /\nu .
\label{def:a}
\end{eqnarray}
Indeed, according to Eqs. (\ref{def1},\ref{anz1}), the average conductivity of the $\lambda $-sized cube in the critical regime, $c=c_t$, is
\begin{eqnarray}
<\sigma (\lambda )>={\frac{dP(\eta ,\lambda )}{d\eta }}|_{\eta =0}=\sigma _0\left( \frac{\lambda _0}\lambda \right) ^a.
\label{17.28b}
\end{eqnarray}
The same conductivity (\ref{17.28b}) is realized in an infinite disordered fractal with the correlation length $\xi $ equal to $\lambda $. Near the percolation threshold the correlation length $\xi $ is given by Eq. (\ref{defnu}) and, therefore, according to Eq. (\ref{17.28b}), the fractal conductivity obeys the scaling law
\begin{eqnarray}
<\sigma >\sim (c_t-c)^\mu ,~~~~~~\mu =a\nu . \nonumber
\end{eqnarray}
With the scaling ansatz (\ref{anz1}), Eq. (\ref{evsp}) becomes an ordinary differential equation of second order. It appears more convenient to use the function $\phi (x)=-\ln \left[ \left( \bar{P}(x)-c_t\right) /(1-c_t)\right] $ instead of $\bar{P}(x)$. Denoting $\phi _0=-\ln \left[ c_t/(1-c_t)\right] $, we have:
\begin{equation}
\frac 12\ln \left[ 1+2x\frac{\phi ^{\prime \prime }}{\phi ^{\prime }}\right] =\left( 1+\epsilon -a\right) x\phi ^{\prime }-\phi +\epsilon \left[ g\left( \phi -\phi _0\right) -g\left( -\phi _0\right) \right] \,,
\label{steq}
\end{equation}
where we introduce:
\begin{equation}
g(\phi )\equiv \left( e^\phi +1\right) \ln \left( 1+e^\phi \right) \,.
\label{defg}
\end{equation}
An equation for $\phi _0$, which follows from Eq. (\ref{thrp}), was used to derive Eq. (\ref{steq}). The latter may be reduced to a first-order linear equation by introducing:
\begin{equation}
z\left( \phi \right) \equiv \exp \left[ -2\left( 1+\epsilon -a\right) x\phi ^{\prime }\right] \,,
\label{defz}
\end{equation}
and treating $\phi $ as the independent variable:
\begin{equation}
\frac{dz}{d\phi }+\left( 1+\epsilon -a\right) z=-\left( 1+\epsilon -a\right) \exp \left\{ -2\phi +2\epsilon \left[ g\left( \phi -\phi _0\right) -g\left( -\phi _0\right) \right] \right\} \,,
\label{steq1}
\end{equation}
where $a$ is related to the critical index of conductivity by Eq. (\ref{def:a}). One should require that the solution of Eq. (\ref{steq1}) satisfy $z(\phi )\rightarrow 0$ as $\phi \rightarrow +\infty $ faster than $\exp \left[ -\left( 1+\epsilon -a\right) \phi \right] $, to ensure the applicability of the saddle-point approximation. This selects a solution of the form
\begin{equation}
z\left( \phi \right) =\left( 1+\epsilon -a\right) e^{-\left( 1+\epsilon -a\right) \phi }\int_\phi ^\infty dy\exp \left\{ -\left( 1+\epsilon -a\right) y+2\epsilon \left[ g\left( y-\phi _0\right) -g\left( -\phi _0\right) \right] \right\} .
\label{soln1}
\end{equation}
The normalization condition $\phi (0)=0$ implies $z(0)=1$, from which the equation for $a$ follows:
\begin{equation}
\left( 1+\epsilon -a\right) \int_0^\infty dy\exp \left\{ -\left( 1+\epsilon -a\right) y+2\epsilon \left[ g\left( y-\phi _0\right) -g\left( -\phi _0\right) \right] \right\} =1\,.
\label{eqa}
\end{equation}
Comparing the values of $a$ obtained from Eq.(\ref{eqa}) and from the numerical investigation of the original evolution equation (\ref{eveq}), one can see that both methods give the same results at any dimensionality. This, together with the considerations presented above, prompts us to regard the saddle-point solution as an exact one. Of course, the RGMK method itself remains an approximate one.
Comparison of the numerical results for the critical exponent of conductivity $\mu $ is presented in Table \ref{exponents}. In three dimensions we have from Eq. (\ref{eqa}): $a\approx 1.891$. On the other hand, the best numerical results \cite{nh97} give $a=2.25\pm 0.04$. So, the RGMK method provides a reasonable solution even for 3d systems. For the case $\epsilon \ll 1$, from Eq. (\ref{eqa}) it follows that
\begin{equation}
a=\frac{1+\epsilon }\epsilon \exp \left( -\frac{1+\epsilon }\epsilon \right) \,,\;\;\;\mu =\frac a\epsilon .
\label{cexld}
\end{equation}
For $\epsilon \gg 1$, from Eq. (\ref{eqa}) one may obtain:
\begin{equation}
a=\epsilon -\frac \epsilon 4e^{-\epsilon }\,.
\label{cexhd}
\end{equation}
\section{Distribution function at the threshold}
\label{DF}
The function $\phi (x)$ may be determined by inverting the equation:
\begin{equation}
Cx=\phi \exp \left[ -\int_0^\phi d\zeta \left( \frac{1+\epsilon -a}{z(\zeta )}-\frac 1\zeta \right) \right] \,.
\label{sol2}
\end{equation}
The integration constant $C$ in Eq. (\ref{sol2}) corresponds to an arbitrary choice of the unit of conductivity, or, alternatively, of the length scale at the percolation threshold.
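The two limiting forms of the exponent $a$ are easy to tabulate; a minimal numerical sketch of Eqs. (\ref{cexld}) and (\ref{cexhd}) (the sample values of $\epsilon$ are arbitrary):

```python
import math

def a_small_eps(eps):
    # Eq. (cexld): a = ((1+eps)/eps) * exp(-(1+eps)/eps), valid for eps << 1
    return (1 + eps) / eps * math.exp(-(1 + eps) / eps)

def mu_small_eps(eps):
    # mu = a/eps, i.e. mu = a*nu with nu ~ 1/eps for eps << 1
    return a_small_eps(eps) / eps

def a_large_eps(eps):
    # Eq. (cexhd): a = eps - (eps/4)*exp(-eps), valid for eps >> 1
    return eps - 0.25 * eps * math.exp(-eps)

print(a_small_eps(0.1), mu_small_eps(0.1))  # both exponentially small
print(a_large_eps(10.0))                    # close to eps itself
```

In the nearly 1d limit both $a$ and $\mu$ are exponentially small, while for large transverse dimensionality $a$ approaches $\epsilon$.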
Thus the distribution function (DF) for conductivities, which in the initial representation is defined as $\Pi (\sigma ,\lambda )=\left\langle \delta \left( \sigma _\lambda -\sigma \right) \right\rangle $, takes at the percolation threshold the universal scaling form:
\begin{eqnarray}
\Pi (\sigma ,\lambda )=c_t+(1-c_t)\bar{\Pi}(y),
\label{17.45}
\end{eqnarray}
where $y$ is the conductivity in units of the average conductivity (\ref{17.28b}),
\begin{eqnarray}
y={\frac \sigma {<\sigma (\lambda )>}}={\frac \sigma {\sigma _0}}\left( {\frac \lambda {\lambda _0}}\right) ^a,
\label{17.37a}
\end{eqnarray}
and the scaling function $\bar{\Pi}(y)$ may be expressed as the integral:
\begin{equation}
\bar{\Pi}(y)=\int_{-i\infty }^{+i\infty }\frac{dx}{2\pi i}\exp \left[ xy-\phi (x)\right] =\frac 1y\int_{0-i\infty }^{0+i\infty }\frac{d\phi }{2\pi i}\exp \left[ -\phi +yx(\phi )\right] .
\label{scf}
\end{equation}
The last equality was obtained through integration by parts. It should be noted, however, that an additional unphysical contribution arises when evaluating the integral in Eq. (\ref{scf}). Namely, the function $x(\phi )$ is singular at $\phi =\tilde{\phi}_n=\phi _0+i\pi (2n+1)$, where $n$ is an integer. These singularities result from the procedure of analytic continuation within the RGMK approach. This can be illustrated as follows: let us assume the initial distribution of conductivities to be $P_0(\eta )=c+(1-c)e^{-\eta }$. After putting $m$ identically distributed conductors in parallel, the Laplace transform of the DF of their sum, $P_1(\eta )=\left[ c+(1-c)e^{-\eta }\right] ^m$, has $m$-th order zeroes at $\tilde{\eta}_n=\eta _0+i\pi (2n+1)$, where $\eta _0=\ln [(1-c)/c]$. These zeroes transform into singularities after analytic continuation to non-integer $m$. Thus the procedure of the transition from the integer rescaling transformation (which is exact for a hierarchical structure) to the infinitesimal one is the source of the above singularities in Eq.
(\ref{scf}). Since these singularities are artificial, they should simply be discarded in the integral (\ref{scf}). At large conductivities, $y\gg 1$, shifting the integration contour in Eq. (\ref{scf}) to the region $\Re \phi >\phi _0$, one has the following asymptotic expression for the DF:
\begin{equation}
\bar \Pi (y\gg 1)=\frac{D_2}{y_2}\left( \frac y{y_2}\right) ^{\frac{1+\epsilon }{2a}-1}\exp \left[ -\left( \frac y{y_2}\right) ^{\frac{1+\epsilon }a}\right] \,,
\label{leftscf}
\end{equation}
where:
\begin{eqnarray}
D_2 &=&\frac{[2\pi (1+\epsilon )(1+\epsilon -a)]^{1/2}}{(1-c_t)a}\left( \frac{1+\epsilon +a}{1+\epsilon -a}\right) ^{\frac 1{2(1+\epsilon )}}\,,\ \ y_2=\frac{1+\epsilon }{1+\epsilon -a}\left( \frac{1+\epsilon -a}a\right) ^{\frac a{1+\epsilon }}e^{A_2}\,, \nonumber \\
A_2 &=&2(1+\epsilon -a)\int_{-\infty }^0d\zeta \ln (-\zeta )\frac d{d\zeta }\frac \zeta {\ln z(\zeta )}.
\label{leftpar}
\end{eqnarray}
Shifting the integration contour in Eq.(\ref{scf}) to the region $\Re \phi <\phi _0$, we arrive at the following expression for $\bar{\Pi}(y)$ in the region of small $y$:
\begin{equation}
\bar{\Pi}(y\ll 1)=\frac{D_1}{y_1}\left( \frac y{y_1}\right) ^{\frac 1{2(\epsilon -a)}+1}\exp \left[ -\left( \frac{y_1}y\right) ^{\frac 1{\epsilon -a}}\right] \,,
\label{rightscf}
\end{equation}
with:
\begin{eqnarray}
D_1 &=&\frac{[2\pi (1+\epsilon -a)]^{1/2}}{\epsilon -a}e^{-\epsilon }c_t^{-\epsilon /(1-c_t)}\,,\ \ y_1=\frac{e^{-A_1}}{1+\epsilon -a}\left( \frac{\epsilon -a}{1+\epsilon -a}\right) ^{\epsilon -a}, \nonumber \\
A_1 &=&2(1+\epsilon -a)\int_0^\infty d\zeta \ln \zeta \frac d{d\zeta }\frac \zeta {\ln z(\zeta )}\,.
\label{rightpar}
\end{eqnarray}
More detailed results are available in the limit $\epsilon \ll 1$.
At $\Re \phi <\phi _0\simeq 1/\epsilon $, to first order in $c_t$ and in $a$ we get the following expression for $x(\phi )$:
\begin{equation}
\ln x(\phi )=\ln \phi +c_t\frac{e^\phi -1-\phi }\phi +\frac a{(1+\epsilon )^2}\int_0^\phi \frac{d\zeta }{\zeta ^2}\left[ e^{(1+\epsilon )\zeta }-1-(1+\epsilon )\zeta \right] \,.
\label{leftx1d}
\end{equation}
Evaluating the Taylor series of $\phi \left( x\right) $ at $x=0$, the central moments of the conductivity are found to be of the order of $a$:
\begin{equation}
\frac{\left\langle \left( \sigma -\left\langle \sigma \right\rangle \right) ^2\right\rangle }{\left\langle \sigma \right\rangle ^2}=a+c_t\,,\ \ \ \frac{\left\langle \left( \sigma -\left\langle \sigma \right\rangle \right) ^3\right\rangle }{\left\langle \sigma \right\rangle ^3}=-\frac 12(1+\epsilon )a+c_t\,,\ \ldots
\label{mom1d}
\end{equation}
On the other hand, using in Eq.(\ref{scf}) the asymptotics of $x(\phi )$ at $\Re \phi <\phi _0$ and $\left| \phi \right| \gg 1$, we have for large enough $y$ (see also Appendix \ref{apc}):
\begin{equation}
y\bar{\Pi}(y)=e^{\phi _2}\frac{1+\epsilon }aWS(W)\,,
\label{a1daslarge}
\end{equation}
where $\phi _2={\frac a{(1+\epsilon )^2}}-c_t$, and a new fluctuating variable was introduced:
\begin{equation}
W=e^{G_1}\frac a{1+\epsilon }y^{\frac{1+\epsilon }a}\,,\ \ \ G_1=1-\gamma -\ln (1+\epsilon )-\frac{(1+\epsilon )c_t}a\simeq 1-\gamma \,.
\label{Omegay}
\end{equation}
Here $\gamma $ is Euler's constant, and $S(W)$ is given by:
\begin{equation}
S(W)=\int_{-i\infty +\Delta }^{i\infty +\Delta }\frac{du}{2\pi i}u^{Wu}\,.
\label{sfunc}
\end{equation}
An asymptotic expression for $S(W)$ may easily be obtained by the saddle-point method (Appendix \ref{apc}):
\begin{equation}
S(W)\approx \left\{
\begin{array}{l}
\frac{\exp (-e^{-1}W)}{\sqrt{2\pi eW}},\text{~as }W\gg 1\,; \\
\frac{\sqrt{2\pi }}{eW}\left[ \frac{\ln \ln (e/W)}{\ln (e/W)}\right] ^2,\text{~as }W\ll 1.
\end{array}
\right.
\label{sfuncas}
\end{equation}
Fig.
\ref{univ} shows $WS\left( W\right) $ as a function of $\ln W$.
\begin{figure}[tbp]
\epsfbox{Fig4.eps}
\vspace{0.5cm}
\caption{Universal distribution of conductivity for the quasi-$1D$ fractal at the percolation threshold (Eq. (\ref{a1daslarge})). There is a sharp decay in the region of large conductivity and a long tail at large resistances (Eq. (\ref{sfuncas})).}
\label{univ}
\end{figure}
The asymptotics of $\bar{\Pi}(y)$ at small $y$ is given by Eqs. (\ref{rightscf},\ref{rightpar}). At $\epsilon \ll 1$ it may be reduced to:
\begin{equation}
y\bar{\Pi}(y)=\frac{y^{-\frac 1{2\epsilon }}}{\sqrt{2\pi }\epsilon }\exp \left( \frac 12-e^{-1}y^{-1/\epsilon }\right) \,.
\label{left1das}
\end{equation}
The two expressions (\ref{a1daslarge},\ref{left1das}) should be supplemented by one for the intermediate region, where the function $x(\phi )$ in the integral (\ref{scf}) can be expanded in $a$ and $c_t$. To first order in $a$ and $c_t$ we have:
\begin{eqnarray*}
x\left( \phi \right) &=&\phi +\eta \left( \phi \right) ,\;\eta \left( \phi \right) =c_t\left( e^\phi -1-\phi \right) +\frac{a\phi }{\left( 1+\epsilon \right) ^2}\int_0^\phi \frac{d\zeta }{\zeta ^2}\left[ e^{\left( 1+\epsilon \right) \zeta }-1-\left( 1+\epsilon \right) \zeta \right] \,, \\
y\bar{\Pi}\left( y\right) &=&\delta \left( y-1\right) +y\int_{-i\infty }^{+i\infty }\frac{d\phi }{2\pi i}e^{\left( y-1\right) \phi }\eta \left( \phi \right) \,.
\end{eqnarray*}
Apart from the $\delta $-function term, this yields in the region $0<y<1$:
\begin{equation}
\bar{\Pi}(y)=\frac a{1+\epsilon }\frac 1{\Delta ^2}\,,
\label{a1dinm}
\end{equation}
where $\Delta =1-y$. To establish the regions of validity of the different expressions for the DF, let us consider the region $\Delta \ll 1$.
Here Eq.(\ref{left1das}) turns into:
\begin{equation}
\bar{\Pi}\left( y\right) =\frac 1{\sqrt{2\pi }\epsilon }\exp \left( \frac 12+\frac \Delta {2\epsilon }-e^{\Delta /\epsilon -1}\right) \,,
\label{left1das1}
\end{equation}
and Eq.(\ref{a1daslarge}) may be written, taking into account Eq.(\ref{sfuncas}) at $W\ll 1$, as:
\begin{equation}
\bar{\Pi}\left( y\right) =\frac{\sqrt{2\pi }}e\frac a{1+\epsilon }\left[ \frac{\ln \left( \ln \frac{1+\epsilon }a+\frac{1+\epsilon }a\Delta \right) }{\Delta +\frac a{1+\epsilon }\ln \frac{1+\epsilon }a}\right] ^2.
\label{a1daslarge1}
\end{equation}
Comparing Eq.(\ref{a1dinm}) with Eqs.(\ref{a1daslarge1},\ref{left1das}), one can conclude that Eq.(\ref{a1daslarge}) is valid for $\Delta <\Delta _1\sim a/\epsilon \ln \left( 1/\epsilon \right) $, and that Eq.(\ref{left1das}) holds for $\Delta >\Delta _2\sim \epsilon \ln \left( e/\epsilon \right) $. Fluctuations of the conductivity thus appear to be distributed within a narrow region of relative width $\Delta _1$, which ensures that the central moments of the conductivity of not too high order are small (see Eq.(\ref{mom1d})). However, if expressed in terms of the universally fluctuating variable $W$, the distribution becomes smeared over a wide region with the lower cut-off $W_1\sim a^p$, $p\approx 1+1/\ln \left( 1/\epsilon \right) $. The distribution function $S\left( W\right) $ arises naturally in a 1d chain of random resistors if one requires a scaling form for the distribution function of a $\lambda $-length chain: $\Upsilon \left( \rho ,\lambda \right) =\bar{\Upsilon}(\rho \lambda ^{-a})$, or $Q\left( s,\lambda \right) =\bar{Q}\left( s\lambda ^a\right) $ in the Laplace representation. Then from $Q\left( s,n\lambda \right) =Q^n\left( s/n,\lambda \right) $ it immediately follows that $\bar{Q}\left( x\right) =\exp \left[ -Cx^{1/\left( 1+a\right) }\right] $.
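That $\bar{Q}(x)=\exp[-Cx^{1/(1+a)}]$ indeed satisfies the composition law $Q(s,n\lambda)=Q^n(s/n,\lambda)$ is easy to confirm numerically; a quick sketch (the parameter values are arbitrary):

```python
import math

A = 0.05  # exponent a; small, as appropriate for the nearly 1d case

def Qbar(x, C=1.0):
    # Scaling form Q(s, lam) = Qbar(s * lam^a) = exp(-C * x^{1/(1+a)})
    return math.exp(-C * x ** (1.0 / (1.0 + A)))

# Composition law for n chain segments: Q(s, n*lam) = Q(s/n, lam)^n
s, lam, n = 0.7, 1.3, 5
lhs = Qbar(s * (n * lam) ** A)
rhs = Qbar((s / n) * lam ** A) ** n
print(lhs, rhs)  # identical up to rounding
```

The constant $C$ only fixes the unit of resistance, so the check holds for any $C>0$.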
Evaluating its inverse Laplace transform $\Upsilon \left( r\right) $, and assuming $a\ll 1$, which is true in the 1d case, we have after a proper rescaling of the integration variable:
\begin{equation}
r\Upsilon \left( r\right) =\frac 1aWS\left( W\right) ,\;\;W=ar^{-1/a},
\label{scf1d}
\end{equation}
which is essentially the same formula as Eqs. (\ref{a1daslarge},\ref{Omegay}).
\section{Scaling and AC conductivity}
\label{scale}
Exact results for the AC conductivity of disordered systems are available for a very limited class of models, mostly 1d ones. A common method to study disordered hopping systems is the effective medium approximation (EMA), which gives qualitatively correct results for three-, two-, and even one-dimensional systems. However, it fails for a nearly 1d system. In the percolation model, for example, the EMA gives a threshold concentration $c_t\propto \epsilon $ and completely wrong values of the critical exponents. However, knowing the results for the DC conductivity and the topological properties of the percolation network, the qualitative behavior of the low-frequency conductivity may be restored within the scaling hypothesis \cite{nyo94,sw94}. Namely, it should be assumed that the only length scale near the threshold is the correlation length $\xi \propto |\tau |^{-\nu }$, $\tau =(c_t-c)/c_t$. The second assumption concerns the anomalous diffusion of a tracer placed onto the infinite cluster at the percolation threshold:
\begin{equation}
\left\langle r^2(t)\right\rangle _\infty \propto t^\zeta \,,\quad \zeta <1\,.
\label{adif}
\end{equation}
Here $r(t)$ is the distance from the tracer's position at $t=0$, and $\left\langle \dots \right\rangle _\infty $ denotes the average over initial positions within the infinite cluster only.
Above the threshold, when $c<c_t$, we have normal diffusion at sufficiently large times, when $\left\langle r^2(t)\right\rangle >\xi ^2$:
\begin{equation}
\left\langle r^2(t)\right\rangle =D_\infty t\,,
\label{ndif}
\end{equation}
with the diffusion constant $D_\infty $ related to the DC conductivity by:
\begin{equation}
\sigma _{DC}=\frac{e^2n_e}{kT}D_\infty \,.
\label{dc}
\end{equation}
Here $e$ and $n_e$ are the charge and concentration of electrons, respectively. Near the threshold we have $D_\infty \propto \tau ^\mu $, where $\mu $ is the critical exponent of the DC conductivity. Note, however, that in Eq. (\ref{ndif}) the average is over the whole network, including finite clusters. The relationship between the exponents $\mu $ and $\zeta $ may be established in the following way. At some large enough time $t$, let us consider a $\lambda $-sized box, $\lambda =\left\langle r^2(t)\right\rangle ^{1/2}\propto t^{\zeta /2}$, inside the system at the percolation threshold. Its conductivity is $\sigma (\lambda )\propto \lambda ^{-\mu /\nu }$. At the same time, it may be expressed as $\sigma (\lambda )\propto P_\infty (\lambda )\lambda ^2/t$, where $P_\infty (\lambda )\equiv P_1$ is the infinite-cluster capacity for a system in which the correlation length equals $\lambda $. Since $P_\infty (\lambda )\propto \lambda ^{-\beta /\nu }$, it follows that $\sigma (\lambda )\propto \lambda ^{2-\beta /\nu -2/\zeta }$. Thus we can conclude:
\begin{equation}
\zeta =\frac{2\nu }{2\nu +\mu -\beta }\,.
\label{dzmu}
\end{equation}
If we take into account the average over finite clusters, we obtain at the percolation threshold: $\left\langle r^2(t)\right\rangle \propto P_\infty (\lambda )\lambda ^2\propto \lambda ^{2-\beta /\nu }\propto t^{\bar{\zeta}}$, where $\bar{\zeta}$ is the anomalous diffusion exponent including the contribution of finite clusters:
\begin{eqnarray}
\bar{\zeta}=\left( 1-\frac \beta {2\nu }\right) \zeta =\frac{2\nu -\beta }{2\nu +\mu -\beta }\,.
\label{dzbar} \end{equation} Below the percolation threshold, at $c>c_t$, we have $\left\langle r^2(t)\right\rangle \sim \xi ^2\propto \tau ^{-2\nu }$ at $t\rightarrow \infty $. All the above may be summarized as: \begin{equation} \left\langle r^2(t)\right\rangle =t^{\bar \zeta }G(\tau t^{u/\mu })\,, \label{scal} \end{equation} where: \begin{equation} u=1-\bar \zeta =\frac \mu {2\nu +\mu -\beta }\,, \label{frexp} \end{equation} with the scaling function $G(0)$ being some constant, $G(x)\propto x^\mu $ as $x\rightarrow +\infty $, and $G(x)\propto |x|^{-s}$, \begin{equation} s=2\nu -\beta \,, \label{diexp} \end{equation} as $x\rightarrow -\infty $. The conductivity may be expressed through $\left\langle r^2(S)\right\rangle $% , the Laplace transform of $\left\langle r^2(t)\right\rangle $, as: \begin{equation} \sigma \left( \omega ,\tau \right) =\frac{e^2n_e}{kT}S^2\left\langle r^2(S)\right\rangle \,, \label{acc} \end{equation} where $S=-i\omega $. Using the Tauberian theorem for the Laplace transformation of power laws, from Eqs. (\ref{scal},\ref{acc}) one can obtain: \begin{equation} \sigma (\omega ,\tau )=\frac{e^2n_e}{kT}S^u\bar G\left( \tau S^{-u/\mu }\right) \,, \label{scal1} \end{equation} with scaling function $\bar G(x)$ having the same asymptotic properties as $% G(x)$. Thus we have at $c<c_t$ and $\omega \ll \tau ^{s+\mu }$: \[ \sigma \propto \tau ^\mu, \] at $c>c_t$ and $\omega \ll |\tau |^{s+\mu }$: \[ \sigma \propto -i\omega \tau ^{-s}, \] and at $\omega \gg |\tau |^{s+\mu }$ (in particular, at any $\omega $ if $% c=c_t$): \[ \sigma \propto (-i\omega )^u. \] The summary of the frequency dependence of AC conductivity is given by Fig. \ref{accond}. \begin{figure}[tbp] \epsfbox{Fig5.eps} \vspace{6.5cm} \caption{ Diagram of frequency-dependent conductivity in the quasi-$1D$ fractal near the percolation threshold, $|\tau|\ll 1$. 
Here $s=2\nu -\beta $ and $u=\mu /(2\nu +\mu -\beta )$, where $\nu $, $\beta $ and $\mu $ are the critical exponents of the correlation length, the infinite-cluster capacity, and the conductivity; $\omega _c\sim |\tau |^{s+\mu }$ is the boundary frequency separating the critical region from the conducting and dielectric phases. For the quasi-$1D$ fractal with transverse dimensionality $\epsilon \ll 1$: $u\ll 1$, $\mu \ll 1$ and $s\gg 1$.}
\label{accond}
\end{figure}
In the nearly 1d case the static conductivity exponent $\mu $ is given by Eq. (\ref{cexld}); the frequency-dependence exponent $u$ and the exponent $s$ of the dielectric-constant divergence may be written, with Eqs. (\ref{clen1d},\ref{beta}), as:
\begin{equation}
s\approx \frac 2\epsilon \,,
\label{sap}
\end{equation}
\begin{equation}
u\approx \frac 1{2\epsilon }\exp \left( -1-\frac 1\epsilon \right) \,.
\label{uexp}
\end{equation}
Thus the exponent $s$ is very large, while the exponent $u$ is very small. Therefore, in the dielectric phase the AC conductivity as a function of frequency demonstrates a step-like behavior. In the conducting state the frequency dependence of the conductivity remains very weak.
\section{Variable range hopping}
\label{vrh}
In a conductor with localized carriers the charge transport is provided by variable range hopping (VRH). The model may be formulated as follows.
The phonon-assisted hopping rate $w_{ij}$ from one localized state $j$ to another state $i$ per unit time, including the Fermi occupation probabilities $p_i$, is approximated by the formula \cite{kk73}:
\begin{eqnarray}
w_{ij}p_j\left( 1-p_i\right) &=&\omega _0\exp \left( -2f_{ij}\right) \,, \nonumber \\
f_{ij} &=&\frac{\left| \varepsilon _j-\varepsilon _i\right| +\left| \varepsilon _j-\varepsilon _F\right| +\left| \varepsilon _i-\varepsilon _F\right| }{4kT}+\frac{\left| {\bf r}_j-{\bf r}_i\right| }a\,,
\label{hr}
\end{eqnarray}
where $\varepsilon _i$ and ${\bf r}_i$ are the energies and position vectors of the localized states, respectively, and $a$ is their radius. We assume here that the hopping motion proceeds along the chains. Assuming that the localized states near the Fermi level are distributed uniformly in space and energy, the distribution of the random variables $f_{ij}$ is:
\begin{equation}
F(f)\equiv \text{Probability}(f_{ij}>f)=\exp \left[ -\left( \frac f{f_0}\right) ^2\right] \,,
\label{df}
\end{equation}
where:
\begin{equation}
f_0=\left( \frac{T_0}T\right) ^{1/2}\,,\quad kT_0={\frac 1{4N_Fa}}\,.
\label{f0}
\end{equation}
Here $N_F$ is the density of states at the Fermi level. Studying charge motion in a system with a continuous distribution of hopping rates is a much more complicated problem than that for a percolating system, where the hopping rates are either $0$ or some given finite value $w_0$. However, knowing the results for the conductivity of the percolating system, qualitative conclusions can be drawn for the system with continuously distributed hopping rates. Namely, let us introduce some probe hopping rate $w_c$, replacing all hopping rates $w_{ij}<w_c$ with $0$, and all $w_{ij}>w_c$ with $w_c$. Obviously the conductivity becomes lower than the initial one, but if $w_c$ is chosen so as to maximize the conductivity, one can hope to obtain a good estimate of the original conductivity.
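Combining the distribution (\ref{df}) with the nearly-1d threshold concentration (\ref{thrn1d}) fixes the threshold value of $f$ used below; a one-line check of the identity $f_t=f_0/\sqrt{\epsilon}$:

```python
import math

def f_threshold(eps, f0):
    """Solve c_t = exp[-(f_t/f0)^2] with c_t = exp(-1/eps), Eq. (thrn1d);
    the result is f_t = f0/sqrt(eps)."""
    c_t = math.exp(-1.0 / eps)
    return f0 * math.sqrt(-math.log(c_t))

print(f_threshold(0.25, 1.0))  # equals 1.0/sqrt(0.25) = 2.0
```
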
It is convenient to represent the probe value as $w_c=\exp (-2f_c)$. The corresponding broken-bond concentration $c$ is given by formula (\ref{df}), i.e., $c=\exp \left[ -\left( f_c/f_0\right) ^2\right] $. The value of $f_c$ at the threshold concentration is $f_t=f_0/\sqrt{\epsilon }$, or \begin{eqnarray} f_t={\frac{1}{2}}\left({\frac{T_1 }{T}}\right)^{1/2},~~~~~kT_1=\frac{4 kT_0}{\epsilon} =\frac 1{\epsilon N_Fa}. \label{ft} \end{eqnarray} Assuming the probe value $f_c=f_t+\delta $ to be close to the threshold one, $\tau =\left( c_t-c\right) /c_t\ll 1$, we have the relation \begin{equation} \delta =\frac 12\epsilon f_t\tau \,. \label{dltau} \end{equation} The scaling formula (\ref{scal1}) for the conductivity of the percolation system now reads \begin{equation} \sigma (\omega ,\tau )=\frac{e^2n_e}{kT}a_{\parallel }^2|\tau |^\mu w_cg\left[ \tau \left( \frac S{w_c}\right) ^{-u/\mu }\right] \,, \label{scaling} \end{equation} where $w_c=w_t\exp \left( -\epsilon f_t\tau \right) $, $w_t=\omega _0\exp \left( -2f_t\right) $, $a_{\parallel }=1/(N_FkT)$ is the hopping length, and the electron density is $n_e=N_FkT$. The scaling function $g(x)$ has the following properties: \[ g(x)\approx \left\{ \begin{array}{l} A\left| x\right| ^{-\mu }\;\text{as }\left| x\right| \ll 1\,, \\ D+B_{+}x^{-s-\mu }+\ldots \text{ as }x\gg 1\,, \\ B_{-}\left| x\right| ^{-s-\mu }+\ldots \text{ as }x<0,\;|x|\gg 1, \end{array} \right. \] with the coefficients $A$, $B_{\pm}$ and $D$ of the order of unity. First let us consider the DC conductivity. One should obviously choose $\tau >0$, and the expression to maximize as a function of $\tau $ is $\tau ^\mu \exp \left( -\epsilon f_t\tau \right) $. The optimal value of the probe parameter $\tau $ is very small: $\tau _{DC}=\mu /\epsilon f_t$. Taking into account Eq. (\ref{f0}), we have: \begin{eqnarray} \sigma_{DC}\sim \frac{e^2}{N_F\left( kT\right) ^2}w_t,~~~~w_t=\omega _0\exp \left[ -\left( \frac{T_1}T\right) ^{1/2}\right].
\label{dcc} \end{eqnarray} Thus the DC conductivity obeys a quasi-1d Mott's law, but the characteristic temperature $T_1$ given by Eq. (\ref{ft}) is much greater than the temperature $T_0$ for VRH in a strictly 1d chain from Eq. (\ref{f0}). The so-called ``hydrodynamic region'' of very low frequencies, where the frequency dependence of the conductivity is determined by the expansion: \begin{equation} \sigma =\sigma _{DC}\left( 1-\frac{i\omega }{\omega _h}+\dots \right) \,,\quad \omega <\omega _h\,, \label{hydro} \end{equation} appears to be very narrow for the nearly 1d system. Its width $\omega _h$ may be estimated from the condition that the argument of the scaling function $g$ in Eq. (\ref{scaling}) be of the order of unity at $\tau =\tau _{DC}$ and $\omega =\omega _h$: \begin{equation} \omega _h\sim \tau _{DC}^{2/\epsilon }w_t\sim \exp \left( -\frac 2{\epsilon ^2}\right) f_t^{-2/\epsilon }w_t\,. \label{oh} \end{equation} Within this region, $|\omega|< \omega_h$, the effective value of $\tau _c$ depends on frequency as $\tau _c=\tau _{DC}\left( 1-S/\omega _h+\dots \right) $. In the further derivations we take $S=-i\omega $ to be real and positive, having analytic continuation in mind afterwards. At $S\sim \omega _h$, $\tau _c$ changes its sign, and the conductivity is now determined by the charge motion inside finite-size clusters. At $|\tau _c|\ll 1$, the size of the effective clusters is large, i.e., the clusters contain many 1d chains. This frequency region is called the {\em multiple hopping} one. Note that, in contrast to two- and three-dimensional systems, for which the multiple hopping regime transforms at higher frequencies into the regime of pair hops, here the multiple hopping frequency region borders that of one-dimensional hopping. From the properties of the scaling function $g$ in Eq. (\ref{scaling}) one can conclude that the conductivity is maximal if one chooses the probe value $\tau _c$ such that the argument of $g$ is of the order of unity.
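The probe-parameter optimization used here, already applied in the DC case where the maximum of $\tau ^\mu \exp \left( -\epsilon f_t\tau \right)$ gave $\tau _{DC}=\mu /\epsilon f_t$, is elementary enough to check directly (a sketch with illustrative nearly-1d values; the constants are arbitrary):

```python
import math

def log_sigma(tau, mu, eps, ft):
    # log of tau^mu * exp(-eps * ft * tau), the quantity maximized over tau > 0
    return mu * math.log(tau) - eps * ft * tau

mu, eps, ft = 0.05, 0.1, 50.0          # illustrative values only
tau_dc = mu / (eps * ft)               # analytic optimum; indeed tau_DC << 1

# scan a logarithmic grid around tau_dc and confirm the maximum sits there
grid = [tau_dc * 10.0 ** (0.01 * k) for k in range(-200, 201)]
best = max(grid, key=lambda t: log_sigma(t, mu, eps, ft))
assert abs(best - tau_dc) / tau_dc < 0.05
```

The stationarity condition $\mu/\tau = \epsilon f_t$ is what the grid search recovers; the smallness of $\tau_{DC}$ is what justifies the expansion around the threshold.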
Taking into account the explicit expressions for the critical exponents in the nearly 1d case we have: \begin{equation} \tau ^{\prime }\exp \left( \frac 12\epsilon ^2f_t\tau ^{\prime }\right) \sim \left( \frac S{w_t}\right) ^{\epsilon /2}\,, \label{mhe} \end{equation} where $\tau ^{\prime }\equiv -\tau _c$, and the conductivity may be estimated as: \begin{equation} \sigma \sim \frac{e^2}{N_F(kT)^2}S\tau ^{\prime -2/\epsilon }\sim \frac{e^2}{ N_F(kT)^2}w_t\exp \left( \epsilon f_t\tau ^{\prime }\right) \,. \label{mhc} \end{equation} One can see from Eq. (\ref{mhe}) that the character of the frequency dependence is determined by the parameter $\epsilon ^2f_t\equiv 2(T_2/T)^{1/2}$, where: \begin{equation} T_2=\frac 14\epsilon ^4T_1\,. \label{t2} \end{equation} If the temperature is relatively high, $T\gg T_2$, from Eq. (\ref{mhe}) it follows that $\tau ^{\prime }\sim \left( S/w_t\right) ^{\epsilon /2}$, and the conductivity reads: \begin{equation} \sigma \sim \frac{e^2}{N_F(kT)^2}w_t\exp \left[ \epsilon f_t\left( \frac S{ w_t}\right) ^{\epsilon /2}\right] \,. \label{imf} \end{equation} Thus the temperature dependence of the conductivity at a given frequency is described by the quasi-1d Mott's law $\exp \left[ -\left( T_1/T\right) ^{1/2}\right] $. The frequency region of applicability of Eq. (\ref{imf}) is determined by the requirement $\tau ^{\prime }\ll 1$, which may be written as: \begin{equation} \frac \epsilon 2\ln \frac{w_t}S\gg 1\,. \label{ub1} \end{equation} At lower temperatures, $T\ll T_2$, Eq. (\ref{imf}) is valid as long as $(1/2)\epsilon ^2f_t\tau ^{\prime }\ll 1$, or: \begin{equation} \frac \epsilon 2\ln \frac{w_t}S\gg \ln \frac{\epsilon ^2f_t}2=\frac 12\ln \frac{T_2}T. \label{ub2} \end{equation} At higher frequencies the solution of Eq.
(\ref{mhe}) is given by the equation \[ \tau ^{\prime }\sim \frac 1{\epsilon f_t}\ln \frac S{\omega _1}\,, \] where $\omega _1=\left( 2/\epsilon ^2f_t\right) ^{2/\epsilon }w_t=\left( T/T_2\right) ^{2/\epsilon }w_t$, and the conductivity is now: \begin{equation} \sigma \sim \frac{e^2}{N_F(kT)^2}S\left( \frac{\epsilon f_t}{\ln \frac S{ \omega _1}}\right) ^{2/\epsilon }\,. \label{hf} \end{equation} This formula remains valid as long as $\tau ^{\prime }\ll 1$, i.e., if: \begin{equation} \frac \epsilon 2\ln \frac S{w_t}\ll \epsilon f_t\text{\thinspace .} \label{upb} \end{equation} If the frequency is higher than those determined by Eqs. (\ref{ub1}) or (\ref{upb}), the conductivity behavior becomes a 1d one (see, e.g., Ref. \cite{nps89}). \section{Conclusions} \label{concl} As one naturally expects, the results for the percolation problem in nearly one dimension approach the 1d ones as $\epsilon \rightarrow 0$. In particular, the threshold concentration $c_t$ (Eq. (\ref{thrn1d})) and the critical exponents $\beta $ and $\mu$ (Eqs. (\ref{beta},\ref{cexld})) tend to zero. As a result the capacity and conductivity have a jump-like behavior as functions of the concentration of broken bonds near the critical value. The reason is that in the one-dimensional limit the infinite cluster arises at $c=0$ and immediately occupies the whole system. The correlation length exponent $\nu \approx \epsilon ^{-1}$ is, however, large, contrary to the 1d case, where $\nu =1$; but this nearly 1d behavior of the correlation length can be observed only in a very narrow range of concentrations, $|c-c_t|\ll 1$. Outside this region, when $c_t\ll c\ll 1$, the correlation length scales in a 1d manner, $\xi =c^{-1}$. The other surprising feature is the strongly nonanalytic behavior of both the threshold concentration and the critical exponents, which suggests that a regular $\epsilon $-expansion near the lower critical dimensionality $d=1$ is impossible.
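This nonanalyticity is easy to quantify with the exponents $s$ and $u$ of Eqs. (\ref{sap}) and (\ref{uexp}) (a numerical illustration; the sample values of $\epsilon$ are arbitrary):

```python
import math

def s_exponent(eps):
    return 2.0 / eps                                 # Eq. (sap)

def u_exponent(eps):
    return math.exp(-1.0 - 1.0 / eps) / (2.0 * eps)  # Eq. (uexp)

for eps in (0.1, 0.05, 0.02):
    # s diverges while u vanishes faster than any fixed power of eps,
    # so every coefficient of a Taylor expansion of u in eps is zero
    assert s_exponent(eps) >= 20.0
    assert 0.0 < u_exponent(eps) < eps ** 4
```

The essential singularity $\exp(-1/\epsilon)$ in $u$ is exactly the kind of behavior a regular $\epsilon$-expansion about $d=1$ cannot capture.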
Although the RGMK method becomes exact only in the limit $\epsilon \ll 1$, comparison with numerical results (see Table 1) shows that the critical exponents obtained by this method prove realistic even for $\epsilon = 1,2$. All these features together teach us that the infinite cluster arises almost as a jump. The infinite cluster at the percolation threshold itself is a fractal with a number of dimensionalities \cite{nyo94}, all of them less than the dimensionality of the original lattice itself. For example, the fractal dimensionality: \begin{equation} D_f=D-\frac \beta \nu \approx 1+\epsilon -\frac 13\exp \left( -\frac 2\epsilon \right) \,, \label{fd} \end{equation} characterizing the mass distribution within the infinite cluster, is very close to the fractal dimension $D$ of the system itself, which means that the infinite cluster at the threshold is ``almost dense''. The fracton, or spectral, dimension $\tilde d$: \begin{equation} \tilde d=D_f\frac{2\nu }{2\nu +\mu -\beta }\approx 1+\epsilon -\frac 1{2\epsilon }\exp \left( -1-\frac 1\epsilon \right) \,, \label{fnd} \end{equation} was introduced to describe the behavior of a random walk on the infinite cluster (it may also be used to describe, e.g., the density of localized vibrational states, or fractons). Its closeness to $D=1+\epsilon $ means that the diffusion on the infinite cluster at the threshold is almost normal. Correspondingly, the conductivity frequency dependence exponent $u$, Eq. (\ref{uexp}), is small. On the other hand, the dielectric constant in the insulating phase, $\varepsilon ^{\prime }\propto |\tau |^{-s}$, diverges very strongly, nearly as $\xi ^2\propto |\tau |^{-2\nu }$, but, similarly to the correlation length $\xi $, this divergence takes place in a narrow interval of concentrations of the order of $c_t$. The RGMK enables us to study not only the average characteristics of the system but also their fluctuations.
These fluctuations become essential near the critical point, when the correlation length $\xi$ becomes larger than or comparable to the system size $\lambda$. In this case, because of the lack of self-averaging, a sample demonstrates individual characteristics corresponding to its specific disorder. In the present work we have found the distribution of possible conductivities of samples in the critical regime. The average conductivity $<\sigma(\lambda)>$ decays with the sample size $\lambda$ according to the scaling law (\ref{17.28b}). All the fluctuations are found to obey the same scaling law (\ref{mom1d}). Thus the distribution of the conductivity is a universal function, $\bar{\Pi}(y)$, in units of the average conductivity, $y=\sigma/<\sigma(\lambda)>$. In other words, the above fractal dimensionalities of the percolating cluster do not vary with the fractal size $\lambda$, as happens in multi-fractal systems \cite{Castellani86}. This robustness comes from the additive laws (\ref{adlaw}) for classical charge transport. The function $\bar{\Pi}(y)$ represents the distribution of possible experimental deviations from the scaling law (\ref{17.28b}). It is shown, Eq. (\ref{mom1d}), that the central body of the distribution $\bar{\Pi}(y)$ is concentrated in a narrow interval around the average value, i.e., $y=1$, but it does not take the Gaussian form. The distant tails of the distribution, in the region of large conductivity $y\gg 1$ and of large resistivity $1/y \gg 1$, decay like stretched exponentials, Eqs. (\ref{leftscf},\ref{rightscf}). In the limit $\epsilon \ll 1$ the shape of the distribution function (\ref{a1daslarge}) is consistent with the 1d scaling of the percolating cluster. Returning to variable range hopping in the chain fractal: as a consequence of (i) the 1d character of variable range hopping along the chains, and (ii) the finite threshold concentration $c_t$ of broken bonds, the DC conductivity obeys a quasi-1d Mott's law (\ref{dcc}).
But the characteristic temperature $T_1$ of this dependence is higher than the formal value of the characteristic temperature for a 1d chain (remembering that Mott's law is not valid for 1d systems), $T_0$, by a factor $1/\epsilon$. This increase can be understood through comparison with the quasi-1d model of weakly coupled metallic chains \cite{nps89}. The variable range hopping conductivity of this model obeys the same law with the characteristic temperature $T^*= {\frac{T_0}{2(d-1)}}$, where $2(d-1)$ is the number of neighboring chains and $d-1$ is the transverse dimensionality of the quasi-1d system ($d=3$). Taking $d-1=\epsilon$ we formally reproduce $T_1$ for the nearly-1d fractal. The experimental temperature dependence of the conductivity in poorly conducting polymers very often follows a quasi-1d Mott's law with a substantially increased characteristic temperature \cite{wjrme90,joea94,joo98}, as expected for $\epsilon \ll 1$. In 2d and 3d isotropic systems with the VRH mechanism of charge transport the temperature dependencies of the AC conductivity and of the dielectric constant are rather weak, but in nearly 1d systems there exists a region of frequencies and temperatures where these dependencies are nearly of the quasi-1d Mott's type (see Eq. (\ref{imf})), which continuously transforms into the 1d dependence within a wide enough transient region, Eq. (\ref{hf}). Such a strong temperature dependence of both the DC and AC conductivity, and also of the dielectric constant, is experimentally observed in conducting polymers with localized carriers \cite{wjrme90,joea94}. The physical picture behind this dependence is the following. The low-dimensional random system can be separated into weakly coupled clusters within which carriers are confined. With increasing temperature, the size of the clusters exponentially increases, as more space becomes accessible to the carriers due to thermal activation.
As a result, the dielectric constant and the conductivity exhibit strong temperature dependencies. In contrast to the low-dimensional case, the clusters in two- and three-dimensional systems prove to be more effectively coupled. Therefore the large polarization of clusters does not occur, because of transitions of carriers between clusters. Thus our results strongly support the idea that even poorly conducting polymers represent low dimensional systems. \acknowledgments The authors are grateful to S.N. Dorogovtsev, V.V. Bryksin, Yu.A. Firsov, W. Wonneberger and W. Schirmacher for useful discussions. This work was partially supported by Russian National Grants RFFI No. 96-02-16848 and No. 97-02-18283, and by the U.S. NSF Grant DMR-9508723.
\section{Introduction} In the quest to better understand non--perturbative phenomena in string theory, and to find clues as to the full nature of M--theory\cite{Hull:1994ys,Townsend:1995kk,Witten:1995ex}, rich solvable examples are of considerable value. The type~0 minimal string theories (formulated in refs.~\cite{Morris:1990bw,Dalley:1991qg,Dalley:1991vr,Johnson:1992pu,Dalley:1992br} and refs.~\cite{Crnkovic:1990ms,Hollowood:1991xq}, and recognized as type~0 strings in ref.~\cite{Klebanov:2003wg}) have highly tractable non--perturbative physics, and despite being exactly solvable contain a rich set of phenomena such as holography and open--closed dualities. Like the original minimal strings\cite{Gross:1989vs,Brezin:1990rb,Douglas:1989ve}, they have bosonic spacetime physics, although they have worldsheet fermions, and may be thought of as two--dimensional supergravity coupled to ${\hat c}<1$ superconformal minimal models, with a diagonal GSO projection. The physics of the minimal strings can be succinctly formulated in terms of associated non--linear ordinary differential equations, known as the string equations. They furnish asymptotic expansions for the free energy once the boundary conditions are fixed. These expansions take the form of a string world--sheet expansion. Typically (for the cases of the type~0A and~0B systems coupled to the $(2,4k)$ superconformal minimal models) there are two perturbative regimes in which such an expansion can be developed. One is interpreted as purely closed string backgrounds with integer, $N$, units of R--R flux, while the other has both open and closed string sectors with $N$ D--branes in the background\footnote{Interestingly, in the associated non--linear system --- the generalized KdV system for type~0A --- the $N$ D--branes or flux units correspond to special soliton solutions, as shown in refs.~\cite{Carlisle:2005mk, Carlisle:2005wa}.}. 
Remarkably, these models have a non--perturbative completion (first discovered in refs.~\cite{Morris:1990bw,Dalley:1991qg,Dalley:1991vr,Johnson:1992pu,Dalley:1992br} in the context of the type~0A systems) that connects these two asymptotic regimes, furnishing an example of a so--called ``geometric transition'' between open and closed string backgrounds. While in the case of type~0A, complete (numerical) solutions of the exact string equations for any $N$ can be found, the connectedness of the type~0B case has been argued for on the basis of a large $N$, 't Hooft--like, limit where the interpolating solution can be found using algebraic techniques\cite{Klebanov:2003wg}. This has more than just shades of the AdS/CFT correspondence\cite{Maldacena:1997re,Gubser:1998bc,Witten:1998qj}, where in the prototype, the open string physics of $N$ D3--branes can be rewritten in terms of that of closed strings in AdS$_5\times S^5$ with $N$ units of R--R flux. In the present context we have a precise analogue of this important example. There is of course a 't Hooft large $N$ limit there too, connecting supergravity to large $N$ Yang--Mills. That we have here not just the analogue of the large $N$ limit but also knowledge of how to go beyond, may ultimately prove instructive. In ref.~\cite{DWW}, we investigated a new system called the Dispersive Water Wave (DWW) hierarchy\cite{Kaup1,Kaup2,Broer,Kuper,Gordoa:2001}, which we argued should yield new string equations {\it via} a similarity reduction. These string equations turn out to be a Painlev\'{e}~IV hierarchy of equations. We found that both the type~0A and~0B string theories were found to be naturally embedded within this framework. In addition to this new non--perturbative connection, we found further that the two string theories are merely special points in a much larger tapestry of possibilities that also appear to be string theories. 
This is somewhat suggestive of an M--theory, now for minimal strings, which is exactly solvable. It clearly deserves further study. Among the other special points which suggested themselves (using perturbation theory) were some that we conjectured to be the type IIA and IIB string theories coupled to the $(4,4k-2)$ superconformal minimal models. Much of the analysis carried out to support this identification used perturbative techniques. The subject of the present paper is to continue the study of this rich system into the non--perturbative regime. We argue that the conjectured type~II theories have non--perturbative completions by showing that the corresponding perturbative expansions match onto each other smoothly. This is accomplished using a combination of analytical and numerical techniques similar to those used for the type~0 theories coupled to the $(2,4k)$ superconformal minimal models. The outline of the paper is as follows: In sections 2 and 3, we review essential aspects of the well--known type~0 theories and provide a summary of results about the DWW system that will be needed later. We reproduce the analytic ('t Hooft limit) technique used for the type 0B string theory, as in ref.~\cite{Klebanov:2003wg}, in section 4 to keep the presentation self--contained. We then reformulate it in a manner that allows us to extend the technique to our DWW system. In section 5 we apply this strategy to the first two even DWW flows, which contain the type~0 and conjectured type~II theories of interest, and find that the type~II theories possess smooth non--perturbative solutions. In addition, we find evidence for the existence of new non--perturbative solutions with novel asymptotics. In section 6, we consider the first (and simplest) flow of the DWW system. For this case, we are able to strengthen evidence for the existence of these new non--perturbative solutions with exact numerical results. We end with a brief discussion in section 7.
An appendix details the expansions arising from the DWW system for the first few flows along with some of their essential properties. \section{Type~0 strings: A brief review}\label{sec:type0review} We begin with a brief review of the type~0 string theories coupled to the $(2,4k)$ superconformal minimal models of type $(A,A)$. Those familiar with these models can proceed directly to the next section. \subsection{Type 0A}\label{sec:type0Areview} Type 0A string theory coupled to the $(2,4k)$ superconformal minimal models (first derived and studied in refs.\cite{Dalley:1991qg,Dalley:1991vr,Johnson:1992pu,Dalley:1992br} and identified as type 0A in ref.~\cite{Klebanov:2003wg}) is described by the string equation \begin{equation}\label{streqn0A} w\mathcal{R}^2 - \frac{1}{2}\mathcal{R}\mathcal{R}'' + \frac{1}{4} \mathcal{R}'^{2} = \nu^2 \Gamma^2 \quad , \end{equation} where, for a particular model, $w(z)$ is a real function of the real variable $z$, a prime denotes $\nu {\partial}/{\partial z}$, and $\Gamma$ and $\nu$ are real constants. The quantity $\mathcal{R}$ is a function of $w(z)$ and its $z$--derivatives. In general $w(z)$ additionally depends on couplings $t_k$ associated with flowing between the various models. Then we have \begin{equation}\label{GDpoly} \mathcal{R} = \sum_{k = 0}^{\infty} \Big ( k + \frac{1}{2}\Big ) t_k P_k \quad , \end{equation} where the $P_k[w]$ are polynomials in $w(z)$ and its $z$--derivatives, called the {Gel'fand -- Dikii} polynomials\cite{Gelfand:1975rn}. They are related by a recursion relation (defining a recursion operator $R_2$) \begin{equation}\label{KdVrec} P'_{k+1} = \frac{1}{4}P'''_{k} - wP'_{k} - \frac{1}{2}w'P_{k}\equiv R_2\circ P'_k\ , \end{equation} and fixed by the value of the constant $P_{0}$ and the requirement that the rest vanish for vanishing $w$. 
The first four are: \begin{eqnarray}\label{0APolys} &&P_{0} = \frac{1}{2}\ , \quad P_{1} = -\frac{1}{4} w\ , \quad P_{2} = \frac{1}{16}(3 w^2 - w'')\ , \nonumber\\ {\rm and}\quad &&P_{3} = -\frac{1}{64}(10w^3 - 10ww'' - 5(w')^2 + w'''')\ . \end{eqnarray} The $k$th model is chosen by setting all the $t_j$ to zero except $t_{0} \equiv z$ and \begin{equation}\label{t0A} t_{k}=\frac{(-4)^{k+1}(k!)^2}{(2k+1)!}\ . \end{equation} This number is chosen so that the coefficient of $w^k$ in $\mathcal{R}$ is set to $-1$.\footnote{This gives $w=z^{1/k} +\ldots$ as $z\rightarrow+\infty$. If we had instead chosen $t_0=-z$, we would have chosen the coefficient of $w^k$ to be unity.} The flows between various models are captured by the integrable KdV hierarchy\cite{Douglas, Banks}. The function $w(z)$ defines the partition function $Z = \exp (-F)$ of the string theory $via$ \begin{equation}\label{0AFreeEnergy} w(z) = 2 \nu^2 \frac{\partial^2 F}{\partial \mu^2}\Big{|}_{\mu = z} \quad , \end{equation} where $\mu$ is the coefficient of the lowest dimension operator in the world--sheet theory.
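The recursion (\ref{KdVrec}) can be checked symbolically against the listed polynomials (a sketch using {\tt sympy}, with $\nu=1$ so that a prime is simply $\partial_z$):

```python
import sympy as sp

z = sp.symbols('z')
w = sp.Function('w')(z)
d = lambda F, k=1: sp.diff(F, z, k)

# Gel'fand-Dikii recursion, Eq. (KdVrec), with nu = 1:
#   P'_{k+1} = (1/4) P'''_k - w P'_k - (1/2) w' P_k
step = lambda P: sp.Rational(1, 4)*d(P, 3) - w*d(P) - sp.Rational(1, 2)*d(w)*P

P0 = sp.Rational(1, 2)
P1 = -sp.Rational(1, 4)*w
P2 = sp.Rational(1, 16)*(3*w**2 - d(w, 2))
P3 = -sp.Rational(1, 64)*(10*w**3 - 10*w*d(w, 2) - 5*d(w)**2 + d(w, 4))

for Pk, Pnext in ((P0, P1), (P1, P2), (P2, P3)):
    assert sp.simplify(step(Pk) - d(Pnext)) == 0
```

Note that the recursion fixes the last term of $P_3$ to be the fourth derivative $w''''$, as the symbolic check confirms.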
The asymptotic expansions of the string equations for the first two cases are: \medskip \noindent {$k = 1$} \begin{eqnarray}\label{0Aexpnsk=1} w(z) &=& z + \frac{\nu \Gamma}{z^{1/2}} - \frac{\nu^2 \Gamma^2}{2 z^2} + \frac{5}{32} \frac{\nu^3}{z^{7/2}}\Gamma\left(4\Gamma^2 + 1\right) +\cdots \quad (z \rightarrow \infty) \\ w(z) &=& 0 + \frac{\nu^2 (4 \Gamma^2 - 1)}{4 z^2} + \frac{\nu^4}{8} \frac{(4\Gamma^2 - 1)(4 \Gamma^2 - 9)}{z^5}+ \cdots \quad (z \rightarrow -\infty) \nonumber \end{eqnarray} \noindent {$k = 2$} \begin{eqnarray}\label{0Aexpnsk=2} w(z) &=& z^{1/2} + \frac{\nu \Gamma}{2 z^{3/4}} - \frac{1}{24}\frac{\nu^2}{z^2}\left(6 \Gamma^2 + 1\right) + \cdots \quad (z \rightarrow \infty) \\ w(z) &=& (4 \Gamma^2 - 1)\left(\frac{\nu^2}{4 z^2} + \frac{1}{32}\frac{\nu^6}{z^7}(4 \Gamma^2 - 9)(4 \Gamma^2 - 25) + \cdots\right) \quad (z \rightarrow -\infty) \nonumber \end{eqnarray} The solutions for various $k$ for $z > 0$ can be numerically and analytically shown to match onto the solution for $z<0$, providing a unique\cite{Dalley:1991vr,Johnson:1992pu,Dalley:1992br} non--perturbative completion of the theory. (See figure~\reef{fig:plot} for an example of a solution found using numerical methods.) \begin{figure}[ht] \begin{center} \includegraphics[width=100mm]{type0Aplot}\\ \caption{\footnotesize{A plot of the $k=2$ type~0A solution showing how the perturbative regimes at large $|z|$ are smoothly connected. Section~\ref{sec:DWWreview} discusses a function $v(x)$ ($x\propto z$), which has a number of different classes of behaviour distinguished by choice of boundary condition. The type~0A theory has class $v_1(z)$ in the $+z$ perturbative regime and class $v_2(z)$ in the $-z$ perturbative regime. 
Here we have set $\nu=1$ and $\Gamma=0$.}}\label{fig:plot} \end{center} \end{figure} In the $\mu \rightarrow +\infty$ regime, $\Gamma$ represents\cite{Dalley:1992br,Klebanov:2003wg} the number of background ZZ D--branes~\cite{Zamolodchikov:2001ah} in the model, while in the $\mu \rightarrow -\infty$ regime, $\Gamma$ represents the number of units of R--R flux in the background\cite{Klebanov:2003wg}. \subsection{Type 0B strings}\label{sec:type0Breview} Type 0B string theory coupled to the $(2, 4k)$ superconformal minimal models \cite{Klebanov:2003wg} is described by the following {string} equations\cite{Crnkovic:1990ms,Hollowood:1991xq}: \begin{eqnarray}\label{streqn0B} \sum_{l = 0}^{\infty} t_{l}(l + 1)R_{l} = 0 \ ,\qquad \sum_{l = 0}^{\infty}t_{l}(l + 1)H_{l} + \nu q = 0 \ , \end{eqnarray} where the $R_l$ and $H_l$ are polynomials of functions $r(x)$ and $\omega (x)$ (and their derivatives), and~$\nu$ and $q$ are real constants. The function ${\widetilde w}(x) = {r^2}/{4}$ defines the partition function of the theory $via$ \begin{equation}\label{0BFreeEnergy} {\widetilde w}(x) = \frac{r^2}{4} = \nu^2 \frac{d^2F}{dx^2}\Big{|}_{\mu = x} \quad . \end{equation} where $\mu$ is the coefficient of the lowest dimension operator in the world--sheet theory. The $n$th model is chosen by setting all $t_l$ to zero except $t_0 \sim x$ and $t_n$. These models have an interpretation as type 0B strings coupled to the $(2,2n)$ superconformal minimal models only for even $n=2k$. The flows between various models are captured by the integrable Zakharov--Shabat (ZS) hierarchy\cite{Zakharov:1979zz}. 
The asymptotic expansions of the string equations \reef{streqn0B} for the first two even $n=2k$ are: \noindent $n = 2 \,\,\, (k=1)$ \begin{eqnarray}\label{0Bexpnsm=2} {\widetilde w}(x) &=& \frac{x}{4} + \left(q^2 - \frac{1}{4}\right)\left[\frac{\nu^2}{2x^2} + \left(q^2 - \frac{9}{4}\right)\left(\frac{-2\nu^4}{x^5} + \cdots\right)\right]\ , \quad (x \rightarrow \infty) \nonumber\\ {\widetilde w}(x) &=& \frac{\nu q\sqrt{2}}{4|x|^{1/2}} - \frac{\nu^2 q^2}{4 |x|^2} + \frac{\nu^3}{|x|^{7/2}}\frac{5\sqrt{2}}{64} q\left(1 + 4q^2\right)+\cdots \quad (x \rightarrow -\infty) \end{eqnarray} \noindent $n = 4 \,\,\, (k=2)$ \begin{eqnarray}\label{0Bexpnsm=4} {\widetilde w}(x) &=& \frac{\sqrt{x}}{4} + \frac{\nu^2}{144 x^2} \left(64q^2 - 15\right) + \cdots ; \quad (x \rightarrow \infty) \\ {\widetilde w}(x) &=& \frac{\sqrt{|x|}}{2\sqrt{14}} + \frac{\nu}{2 |x|^{3/4}}\frac{q}{\sqrt{3}\cdot7^{1/4}} + \cdots \quad (x \rightarrow -\infty) \nonumber \end{eqnarray} In the $\mu \rightarrow -\infty$ regime, $q$ represents the number of background ZZ D--branes in the model, while in the $\mu \rightarrow \infty$ regime it counts the number of units of R--R flux in the background\cite{Klebanov:2003wg}. The structure of solutions with increasing $n$ is particularly rich \cite{Klebanov:2003wg}; the $n=4$ expansion for $\mu \rightarrow -\infty$ shown above breaks a $\IZ_2$ symmetry due to the presence of R--R fields in the dual string theory. Unlike the 0A case, the solutions for $x>0$ have not been shown numerically to match onto those for $x < 0$ so far. The highly non-linear nature of the string equations makes it difficult to numerically obtain smooth solutions connecting the two regimes for the 0B case. Nevertheless, as argued in ref. \cite{Klebanov:2003wg} and reviewed in section~\ref{sec:type0expnmtch}, these theories can be argued to be non-perturbatively complete in a particular ('t Hooft) limit. 
For the $n=2$ ($k=1$) model\footnote{The central charge of the $\mathcal{N}=1$ $(p,q)$ super--conformal minimal models is given by $\hat{c} = 1 - \frac{2(p-q)^2}{pq}$. For $n=2$, the central charge of the $(2,4)$ superconformal minimal model is thus zero and we simply have the pure world--sheet supergravity sector.}, the full non--perturbative solution is known since it can be mapped directly to the solution known for the $k=1$ type~0A case $via$ the Morris map~\cite{Morris:1990bw,Morris:1992zr}. The string equation for the 0A theory becomes the string equation for the 0B theory once one identifies $\Gamma$ with $q$, but with the sign of $x$ reversed. Analogues of the Morris map for higher $n$ are not known. \section{The DWW system: A brief review}\label{sec:DWWreview} The DWW system introduced in ref.~\cite{DWW} leads to a Painlev\'e~IV hierarchy of string equations: \begin{eqnarray}\label{DWWstring1} -\frac{1}{2}\mathcal{L}_x+\frac{1}{2}u\mathcal{L}+\mathcal{K} &=& \nu c\quad \quad \quad \quad \quad \\ \label{DWWstring2} \left(-v+\frac{1}{4}u^2+\frac{1}{2}u_x\right)\mathcal{L}^2-\frac{1}{2}\mathcal{L}\mathcal{L}_{xx}+\frac{1}{4}\mathcal{L}_x^2&=&\nu^2 \Gamma^2\quad ,\quad \quad \quad \quad \quad \end{eqnarray} where $c$ and $\Gamma$ are real constants and $\nu$ plays the same role as for the type~0 theories (note that here and in the rest of the paper, for any function $G(x)$, $G_{x}$ will denote $\nu\, \partial G/\partial x$). The functions $u(x)$ and $v(x)$ are generalizations of the two functions $r(x)$ and $\omega(x)$ used to describe the 0B theory. 
The polynomials $\mathcal{L}[u,v]$ and $\mathcal{K}[u,v]$ are defined by \begin{equation}\label{LK} \left(\begin{array}{c} \mathcal{L}\\ \mathcal{K}\end{array}\right) = \sum_{n=0}^{\infty}\frac{1}{2}(n+1)t_n \left(\begin{array}{c} L_n[u,v]\\ K_n[u,v]\end{array}\right) \ \quad, \end{equation} where $L_n[u,v]$ and $K_n[u,v]$ are polynomials in $u(x)$, $v(x)$ and their derivatives, similar to the polynomials $R_n$ and $H_n$ for the 0B theory. They satisfy the recursion relation \begin{eqnarray*}\label{DWWrec} \left(\begin{array}{c} L_{n+1}[u,v]\\ K_{n+1}[u,v]\end{array}\right) \ = R \left(\begin{array}{c} L_n[u,v]\\ K_n[u,v]\end{array}\right) \ \quad . \end{eqnarray*} where $R$ is the recursion operator of the DWW hierarchy, given by \begin{equation}\label{DWWRecOp} R\equiv \frac{1}{2}\left(\begin{array}{cc} \partial_{x}u\partial_{x}^{-1}-\partial_{x}&2\\ 2v+v_{x}\partial_{x}^{-1}&u+\partial_{x}\end{array}\right) \quad . \end{equation} The first few $L_{n}$ and $K_{n}$ are as follows: \begin{eqnarray}\label{DWWPolys} L_0 &=& 2; \quad K_0 = 0; \nonumber\\ L_1 &=& u; \quad K_1 = v; \\ L_2 &=& \frac{1}{2}u^2 + v - \frac{1}{2}u_x; \quad K_2 = uv + \frac{1}{2}v_x;\nonumber \end{eqnarray} The $n$th model is chosen by setting all $t_i$ equal to zero except for $t_0=x$ and $t_n$ which is chosen to be a numerical factor to fix the normalization. The parameter $t_n$ can be replaced by \begin{equation}\label{tntogn} g_n \equiv \frac{1}{\frac{1}{2}(n+1)t_n} \quad , \end{equation} in order to make direct contact with recent literature which discusses this system in a much different (mathematical) context\cite{Gordoa:2001}. 
Analytic expansion solutions to the string equations~\reef{DWWstring1} and~\reef{DWWstring2} for all $n$ can be organized into five main classes on the basis of the $\nu^0$ (or $g_s^{-2}$) behavior of $u(x)$ and $v(x)$, as follows, \begin{equation}\label{classes} \begin{split} \textrm{Class 1:}\quad&u_1 \sim 0\ ,\quad\quad\; v_1\sim x^{2/n}\ ;\\ \textrm{Class 2:}\quad&u_2 \sim 0\ ,\quad\quad\; v_2\sim 0\ ;\\ \textrm{Class 3:}\quad&u_3 \sim x^{1/n}\ ,\quad v_3\sim 0\ ;\\ \textrm{Class 4:}\quad&u_4 \sim x^{1/n}\ ,\quad v_4\sim x^{2/n},\quad u_4^2/v_4 \sim 1/4\ ;\\ \textrm{Class 5:}\quad&u_5 \sim x^{1/n}\ ,\quad v_5\sim x^{2/n},\quad u_5^2/v_5 \sim a\neq 1/4\ .\\ \end{split} \end{equation} They follow interesting patterns\footnote{For example, the expansions in Class 1 appear only for even $n = 2k$, while the expansions in Class 5 are the analogues of the $\IZ_2$ symmetry-breaking solutions of the 0B theories.} with increasing $n$ and have been explored in detail in ref.~\cite{DWW}. Representative solutions for the first few $n$ have been provided in the Appendix for ease of reference. In addition to the type~0 theories reviewed earlier, these solutions encode new string theories, some of which were conjectured to be type~II string theories coupled to super--conformal minimal models as reviewed below. The type~0 theories reviewed earlier can be recovered completely from this system of equations by applying appropriate constraints~\cite{DWW}. Setting $u(x) = 0$ and $\mathcal{L} = 0$ in~\reef{DWWstring1} and~\reef{DWWstring2} results in the type~0A and type~0B theories respectively\footnote{One also needs to make the identification $t_{2n}^{DWW}= \frac{1}{4}t_n^{\mathrm{KdV}}=\frac{(-1)^{n+1}4^n(n!)^2}{(2n+1)!} \; \Rightarrow \;g_{2n} = 2\frac{(-1)^{n+1}(2n)!}{4^n(n!)^2}$.}. Consistency of the two equations then requires that $c = -1/2$ for type~0A and $\Gamma = 0$ for type~0B, while the free parameter counts the branes and fluxes in the respective theories. 
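The coupling identification quoted in the footnote can be checked arithmetically from the 0A normalization (\ref{t0A}) and the definition (\ref{tntogn}) (a small sketch):

```python
from math import factorial

def t_kdv(k):
    # Eq. (t0A): t_k = (-4)^(k+1) (k!)^2 / (2k+1)!
    return (-4) ** (k + 1) * factorial(k) ** 2 / factorial(2 * k + 1)

def g(n, t_n):
    # Eq. (tntogn): g_n = 1 / ((1/2)(n+1) t_n)
    return 1.0 / (0.5 * (n + 1) * t_n)

# footnote: t_{2k}^{DWW} = t_k^{KdV}/4  implies
#   g_{2k} = 2 (-1)^(k+1) (2k)! / (4^k (k!)^2)
for k in range(1, 8):
    lhs = g(2 * k, t_kdv(k) / 4)
    rhs = 2 * (-1) ** (k + 1) * factorial(2 * k) / (4 ** k * factorial(k) ** 2)
    assert abs(lhs - rhs) < 1e-9 * abs(rhs)
```

For instance, $k=1$ gives $g_2=1$ and $k=2$ gives $g_4=-3/4$, consistent with the closed form in the footnote.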
The function $v(x)$ (after appropriate redefinition) encodes the free energies of the respective theories, according to equations~\reef{0AFreeEnergy} and~\reef{0BFreeEnergy}. One can also recover solutions corresponding to the type~0 theories by setting $c=-1/2$ or $\Gamma = 0$ in appropriately chosen expansions from the five classes listed above. Expansions in Classes 1 and 2 (labeled $v_1(x)$ and $v_2(x)$) reduce to those of the type~0A theory (see~\reef{0Aexpnsk=1} and~\reef{0Aexpnsk=2}) once we set $c = -1/2$. This helps fix the directions of the various expansions by requiring that $v(x)$ be real, with the result that some expansions are real and others complex for a given $n$. Further details can be found in ref.~\cite{DWW}. A full solution for $v(x)$ is constructed by specifying its behavior in the two asymptotic regimes, as seen for the $k=2$ 0A theory in figure~\reef{fig:plot}. This 0A solution is obtained from the DWW expansions $v_1(x)$ and $v_2(x)$ by setting $c = -1/2$. Using this scheme, the above expansions can be organized in the form of a square, as shown in figure~\reef{square2}. The four corners of the square correspond to four different string theories. The top corners represent the known type 0 strings coupled to the $(2,4k)$ $(A,A)$ superminimal models. The bottom corners represent two new theories that were conjectured in ref.~\cite{DWW} to be type~IIA and IIB string theories coupled to the $(4,4k-2)$ $(A,D)$ minimal models (for even $k$ only). At each corner, one parameter from $(c, \Gamma)$ is frozen to the value indicated on the figure, while the other parameter counts the branes/fluxes in the appropriate regime. The horizontal sides of the square correspond to expansions in the $+x$ direction, while the vertical sides represent expansions in the $-x$ direction. For all $n$, there are two special points on the vertical sides, corresponding to $c =\pm\Gamma$, where the expansions $v_2$ and $v_3$ identically vanish.
\begin{figure}[!ht] \begin{center} \includegraphics[width=80mm]{Square2final}\\ \caption{\footnotesize{A family of string theories, forming a square. See text for details.}}\label{square2} \end{center} \end{figure} It must be noted that the square fully exists only for $n = 0 \bmod 4$. For $n = 2 \bmod 4$, the expansions labeled $v_4$ become complex as $+x$ expansions, while for odd $n$ the expansions labeled $v_1$ do not appear at all. The new theories were conjectured to be type~II strings coupled to super--minimal matter by matching the genus zero contributions from the asymptotic expansions (the terms appearing at order $g_s^0$ in the free energy) with the continuum calculation of their one--loop partition functions. In the absence of non--perturbative constraints (like $u = 0$ for the 0A theory) leading to the conjectured type~II theories, the corresponding values of $c = 0$ and $\Gamma^2 = 1/4$ in figure~\ref{square2} were obtained by systematic analysis of the properties of the genus zero contributions~\cite{DWW}. However, pairing perturbative expansions in this manner does not guarantee the existence of a full non--perturbative solution with the desired properties. A type of 't Hooft limit will allow us to argue for the existence of such non--perturbative solutions for the type~II theories. We demonstrate how this limit works for the two parameter DWW system in section~\reef{sec:DWWexpnmtch}. Before that, we review the 't Hooft limit for the known type~0 theories. \section{Expansion Matching for type 0 theories : The known} \label{sec:type0expnmtch} The theories encoded by the string equations described so far can be non--perturbatively completed by matching the perturbative solutions for $x \rightarrow \infty$ on to those for $x \rightarrow -\infty$. Numerical solutions of the 0A string equations have been found that smoothly interpolate between the two asymptotic regimes, as discussed earlier. See, e.g., figure~\reef{fig:plot}.
The string equations for the 0B models, however, are more difficult to analyze numerically, and attempts to find smooth numerical solutions similar to the 0A ones have been unsuccessful thus far\footnote{The only exception is the $n=2$ $(k=1)$ 0B theory, where the numerical $k=1$ type 0A solution can be converted into it {\it via} the Morris map, as mentioned in section~\ref{sec:type0Breview}.}. This difficulty is inherited by the DWW string equations presented in section~\ref{sec:DWWreview}. It was shown in ref.~\cite{Klebanov:2003wg} that the asymptotic expansions for the 0B theories (for example, those in equations~\reef{0Bexpnsm=2} and~\reef{0Bexpnsm=4}) match onto each other in a 't Hooft limit, with the brane/flux counting parameter (labeled $q$ for 0B and $\Gamma$ for 0A) taken to be large. This was then taken to suggest that the theories are non--perturbatively complete even when the parameter is finite. We review this limit in some detail, before analyzing our system in the same limit in section~\ref{sec:DWWexpnmtch}. \subsection{The 0B theory in a 't Hooft limit} \subsubsection{The $n=2$ theory} The string equation for the simplest 0B theory with $n=2$ is \begin{eqnarray}\label{streqn0Bm=2v2} \nu^{2} \frac{\partial^2 r}{\partial x^2} -\frac{1}{2} r^3 + \frac{1}{2} x r + \nu^{2} \frac{q^2}{r^3} = 0 \quad , \end{eqnarray} obtained by using the explicit forms of the polynomials $R_2$ and $H_2$ and eliminating $\omega(x)$ between the two equations~\reef{streqn0B}. Consider the limit $q \rightarrow \infty$, $x \rightarrow \pm \infty$ with $t = (\nu q)^{-2/3}x$ fixed. This is our 't Hooft limit, with $t^{-3/2}$ being the 't Hooft coupling\footnote{With the identification $g_s = \frac{\nu}{x^{3/2}}$ and $q$ counting the number of units of R--R flux, this is recognized as the usual 't Hooft coupling $\lambda \sim g_s N$ familiar from higher-dimensional string theories. Taking $x \rightarrow \pm \infty$ amounts to taking the limit $g_s \rightarrow 0$.}.
Defining $s = (\nu q)^{-1/3} r$, the string equation \reef{streqn0Bm=2v2} becomes \begin{equation}\label{0Bseqnm=2} \frac{2}{q^2}s^3 \partial_{t}^2 s - s^6 + ts^4 + 2 = 0 \quad . \end{equation} In the large $q$ limit, the first term containing the derivative can be neglected, resulting in an algebraic equation for $f(t) = s^2$, \begin{equation}\label{0Balgeqnm=2} f^3 - t f^2 = 2 \quad . \end{equation} For generic $t$ this equation has 3 solutions, only \emph{one} of which is real for all $t$ (an easy check is to consider $t\approx 0$). This solution reads \begin{equation}\label{vsolnm=2} f(t) = \frac{1}{3}\left[t + (t^3 + 27 - 3\sqrt{81 + 6 t^3})^{1/3} + (t^3 + 27 + 3\sqrt{81 + 6 t^3})^{1/3}\right] \quad. \end{equation} For $t > -\frac{3}{2^{1/3}}$, the arguments of the square roots are positive and $f(t)$ is real. Since there are opposite signs in front of the square roots in the above expression, the half-integer powers of $t$ cancel when one expands $f(t)$ for $t \rightarrow \infty$. For $t < -\frac{3}{2^{1/3}}$ the arguments of the square roots are negative and the second and third terms in~\reef{vsolnm=2} are individually complex, but $f(t)$ is real since all the imaginary contributions cancel out. The expansions for $f(t)$ for large negative and positive $t$ are \begin{eqnarray}\label{0Bvexpnm=2} f(t) &=& \frac{\sqrt{2}}{|t|^{1/2}} - \frac{1}{|t|^2} + \frac{5\sqrt{2}}{4|t|^{7/2}} -\frac{4}{|t|^5} + \cdots, \quad (t \rightarrow -\infty) \\ f(t) &=& t + \frac{2}{t^2} -\frac{8}{t^5} + \frac{56}{t^8} + \cdots \quad (t \rightarrow \infty) \nonumber \end{eqnarray} These expansions reproduce the coefficients of the highest powers of $q$ in the asymptotic expansions~\reef{0Bexpnsm=2} of the string equation, after remembering that $\widetilde{w} = f/4$.
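The properties claimed above are easy to verify numerically. The following Python sketch (our illustration, not part of the original analysis) checks that the closed--form root~\reef{vsolnm=2} solves the cubic~\reef{0Balgeqnm=2} on both sides of $t = -3/2^{1/3}$, and that it matches the large positive $t$ series in~\reef{0Bvexpnm=2}:

```python
# Illustrative numerical check of the n=2 0B 't Hooft-limit cubic f^3 - t f^2 = 2.
# The closed-form root f(t) is evaluated with principal complex branches;
# the imaginary parts cancel, leaving the unique real solution for all t.
import numpy as np

def f_closed(t):
    s = np.sqrt(complex(81 + 6 * t**3))      # complex sqrt handles t < -3/2^(1/3)
    a = (t**3 + 27 - 3 * s) ** (1 / 3)       # principal cube roots are
    b = (t**3 + 27 + 3 * s) ** (1 / 3)       # complex conjugates of each other
    return ((t + a + b) / 3).real

# f(t) solves the cubic on both sides of t = -3/2^(1/3) ~ -2.38
for t in (-10.0, -5.0, -1.0, 3.0, 10.0):
    f = f_closed(t)
    assert abs(f**3 - t * f**2 - 2) < 1e-9

# Large positive t: compare with the asymptotic series t + 2/t^2 - 8/t^5 + 56/t^8
t = 10.0
series = t + 2 / t**2 - 8 / t**5 + 56 / t**8
assert abs(f_closed(t) - series) < 1e-6

# Large negative t: leading behavior f ~ sqrt(2)/|t|^(1/2)
t = -1.0e4
assert abs(f_closed(t) - np.sqrt(2) / np.sqrt(-t)) < 1e-4
```

At $t=-1$ the root is exactly $f=1$ (since $1+1=2$), which provides a convenient spot check of the branch choice.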
One smooth function \reef{vsolnm=2} captures the limiting behavior of large $q$ and connects\footnote{One can also formulate an equivalent argument as in ref.~\cite{Klebanov:2003wg} by starting with the action $S \sim \int dx \left[\frac{1}{2}r'^{2} + \frac{1}{8}(r^2 - x)^2 + \frac{1}{2}\frac{q^2}{r^2}\right] = \int dx \left[\frac{1}{2}r'^{2} + V(r^2)\right]$ whose equation of motion is~\reef{streqn0Bm=2v2}. Since this action is bounded below, it is clear that a solution to this equation will exist. The original matrix model integral dual to the theory is well defined and convergent, so one expects a finite and real answer for the free energy $F$. It is then natural to expect that equation~\reef{streqn0Bm=2v2} will have a unique real and smooth solution.} the two asymptotic regions smoothly. This limit has been interpreted in~\cite{Klebanov:2003wg} as a 't Hooft limit in which only spherical topologies with boundaries (D--branes) or fluxes survive. For $q = 0$, the 0B theory exhibits the Gross--Witten phase transition at $x=0$ (namely $F'' = x/4 $ for $x > 0$ and $F'' = 0$ for $x < 0$). This transition can be smoothed out either by the genus expansion at $q=0$ or by the expansion in the 't Hooft parameter at large $q$. \subsubsection{The $n=4$ theory} A similar analysis shows that the asymptotic expansions for the $n=4$ theory \reef{0Bexpnsm=4} match smoothly onto each other. The analysis is somewhat more involved because $\omega(x)$ cannot be eliminated in favor of $r(x)$, unlike in the $n=2$ case, and one has to use the asymptotic expansions for both $r(x)$ and $\omega(x)$.
The expansions for $\omega(x)$ are~\cite{Klebanov:2003wg}: \begin{eqnarray}\label{0Bexpnsm=4omega} \omega(x) &=& -\frac{2}{3}\frac{q}{x} + \frac{2}{3}\frac{q}{x^{7/2}}\left(\frac{80}{27}q^2 -\frac{5}{4}\right) + \cdots \quad (x \rightarrow \infty) \\ \omega(x) &=& -\frac{\sqrt{3}}{2}\left(\frac{2}{7}|x|\right)^{1/4} + \frac{2}{3}\frac{q}{|x|} - \frac{5}{96\sqrt{3}}\left(\frac{7}{2}\right)^{1/4}\frac{3 + 32 q^2}{|x|^{9/4}} + \cdots \quad (x \rightarrow -\infty) \nonumber \end{eqnarray} In the $q \rightarrow \infty$ limit with $\omega = (\nu q)^{1/5} h$, $x = (\nu q)^{4/5} t$ and $r^2 = (\nu q)^{2/5} f$, the derivative terms in the string equation for this case are suppressed, resulting in \begin{eqnarray}\label{vheqnm=4} \frac{3}{8}f^2 - 3 f h^2 + h^4 -\frac{3}{8}t &=& 0 \quad, \nonumber\\ f h \left(-\frac{3}{2}f + 2 h^2\right) &=& 1 \quad. \end{eqnarray} The second equation, on solving for $f$, gives \begin{equation}\label{vpmeqnm=4} f_{\pm} = \frac{2}{3}h^2 \pm \sqrt{\left(\frac{2}{3}h^2\right)^2 - \frac{2}{3h}} \quad . \end{equation} From equation~\reef{0Bexpnsm=4omega}, it can be seen that $h$ is negative, so that only the solution $f_{+}$ can be chosen\footnote{Note the presence of more than one solution for $f$. Such multiple solutions will be prominent in our more general analysis in the next section.} for a positive $f$ (recall that $f \propto r^2$). Substituting $f_+$ into the first equation gives, after some rearrangement: \begin{equation}\label{heqnm=4} -12 - 864 h^5 + 448 h^{10} - 36 ht - 96 h^{6} t - 27 h^{2} t^{2} = 0 \quad . \end{equation} Defining $y = ht$ reduces the above equation to a quadratic in $h^5$, \begin{equation}\label{yeqnm=4} - (864 + 96y)h^5 + 448h^{10} -(36y + 27y^2 + 12) = 0 \quad . \end{equation} Solving for $h^5$ as a function of $y$, \begin{equation}\label{hasfnofy} h_{\pm}^5 = \frac{54 + 6y \pm 5\sqrt{3}\sqrt{28 + 3(y + 2)^2}}{56} \quad , \end{equation} will allow a solution for $t = y/h$. The solution $h_{+}^5$ is always positive and non--zero as a function of $y$.
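The quadratic formula~\reef{hasfnofy} and the branch structure just described can be verified numerically; the short Python sketch below (our illustration, not part of the original analysis) substitutes $h_{\pm}^5$ back into~\reef{yeqnm=4} and checks that $h_{-}^5$ vanishes only at $y_0 = -2/3$:

```python
# Illustrative check of the n=4 0B quadratic (in h^5):
#   448 h^10 - (864 + 96 y) h^5 - (12 + 36 y + 27 y^2) = 0 ,
# with roots h_pm^5 = (54 + 6y ± 5*sqrt(3)*sqrt(28 + 3(y+2)^2)) / 56.
import numpy as np

def h5_pm(y):
    d = 5 * np.sqrt(3.0) * np.sqrt(28 + 3 * (y + 2) ** 2)
    return ((54 + 6 * y + d) / 56, (54 + 6 * y - d) / 56)

def quadratic(y, h5):
    return 448 * h5**2 - (864 + 96 * y) * h5 - (12 + 36 * y + 27 * y**2)

# Both branches satisfy the quadratic for a range of y
for y in (-10.0, -2 / 3, 0.0, 5.0):
    for h5 in h5_pm(y):
        assert abs(quadratic(y, h5)) < 1e-8

# h_minus^5 vanishes at y0 = -2/3 and is negative elsewhere
assert abs(h5_pm(-2 / 3)[1]) < 1e-10
assert h5_pm(0.0)[1] < 0 and h5_pm(-10.0)[1] < 0
```

The same substitution also confirms algebraically that the discriminant $ (864+96y)^2 + 4\cdot448\,(12+36y+27y^2) = 19200\,(28 + 3(y+2)^2)$ is a perfect multiple of the square root appearing in~\reef{hasfnofy}.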
To get the asymptotics we desire, we focus on $h_{-}^5$, which is negative and vanishes only at $y_0 = -2/3$. Using $t = y/h_{-}$, it is easy to see that as $y \rightarrow y_0$, $t \rightarrow \infty$, and as $y \rightarrow \infty$, $t \rightarrow -\infty$. As $t$ changes continuously, we will always lie on the $h_{-}$ branch, since the two branches never cross. The expansion for $h_{-}$ as a function of $t$ can then be worked out \cite{Klebanov:2003wg} to be \begin{eqnarray}\label{0Bhexpnsm=4} h_{-} &=& -\frac{\sqrt{3}}{2}(2 |t|/7)^{1/4} + \frac{2}{3|t|} - \frac{5(7/2)^{1/4}}{3\sqrt{3}|t|^{9/4}} + \cdots \quad (t \rightarrow -\infty)\nonumber\\ h_{-} &=& -\frac{2}{3t} + \frac{160}{81 t^{7/2}} + \cdots \quad (t \rightarrow \infty) \end{eqnarray} The coefficients in these expansions agree with the leading powers for large $q$ in the expansions of $\omega(x)$ in equation~\reef{0Bexpnsm=4omega}, and this can again be interpreted as a limit in which only spherical topologies with boundaries or fluxes survive. This shows that an appropriately chosen solution of equation~\reef{vheqnm=4} interpolates between the two asymptotic regimes in the limit of large $q$. This 't Hooft limit removes the derivative terms from the string equations, resulting in algebraic equations that are simpler to analyze. It can then be sensibly thought of as an \emph{algebraic} limit. Including the derivatives gives terms \emph{sub--leading} in powers of $q$ and presumably does not introduce any singularities that would destroy the smooth interpolation. The sub--leading powers of $q$ likely smooth out any sharp transitions (as we saw for the Gross--Witten phase transition above) from large negative $x$ to large positive $x$. \subsection{The 0A theory in a 't Hooft limit} It is interesting to work out the 't Hooft limit of the type 0A theories (with string equations~\reef{streqn0A}).
Since, as already discussed, solutions of the full equations have been obtained numerically, it is instructive to compare the 't Hooft limit ({\it i.e.,} algebraic) results to the numerical results. (Note that while the last section's type~0B results were a review, the type~0A analysis is presented here for the first time.) \subsubsection{The $k=1$ theory} The string equation for this theory is \begin{eqnarray*}\label{streqn0Ak=1} w\left(w-z \right)^2 - \frac{1}{2} \nu^2 \frac{\partial^2 w}{\partial z^2} \left(w-z\right) + \frac{1}{4} \nu^2 \left(\frac{\partial w}{\partial z} - 1\right)^2 = \nu^2 \Gamma^2 \quad . \end{eqnarray*} Consider the limit $\Gamma \rightarrow \infty$, $z \rightarrow \pm \infty$ with $\rho = (\nu \Gamma)^{-2/3} z$ fixed. This is the 't Hooft limit, with $\rho^{-3/2}$ being the 't Hooft coupling. Defining $g = (\nu \Gamma)^{-2/3} w$, the above string equation becomes \begin{equation}\label{0Aeqnforgk=1} g (g - \rho)^2 - \frac{1}{2 \Gamma^2}(g - \rho)\partial_{\rho}^2 g + \frac{1}{4 \Gamma^2}(\partial_{\rho}g - 1)^2 = 1 \quad . \end{equation} In the large $\Gamma$ limit, the derivative terms can be neglected to give the simple algebraic equation \begin{equation}\label{0Aalgeqnforgk=1} g (g - \rho)^2 = 1\ . \end{equation} As for the type~0B case of the last section, the solutions to this cubic equation\footnote{One can also use the Morris map referred to at the end of section~\reef{sec:type0Breview} to obtain this equation directly from the corresponding 0B algebraic equation~\reef{0Balgeqnm=2}.} can be expanded for $\rho \rightarrow \pm \infty$ and seen to reproduce the coefficients of the highest powers of $\Gamma$ in the asymptotic expansions~\reef{0Aexpnsk=1}. One smooth function essentially connects the two asymptotic regions, as in the 0B theory.
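The asymptotics of the cubic~\reef{0Aalgeqnforgk=1} can be confirmed numerically; the following Python sketch (our illustration, not from the original text) expands $g(g-\rho)^2 = 1$ as the polynomial $g^3 - 2\rho g^2 + \rho^2 g - 1 = 0$ and checks the leading behavior on both sides:

```python
# Illustrative check of the k=1 0A algebraic equation g (g - rho)^2 = 1.
# For rho -> -infty there is a single real root behaving like 1/rho^2;
# for rho -> +infty there are three real roots, the largest behaving
# like rho + rho^(-1/2) at leading orders.
import numpy as np

def real_roots(rho):
    roots = np.roots([1.0, -2.0 * rho, rho**2, -1.0])
    return sorted(r.real for r in roots if abs(r.imag) < 1e-8)

# rho -> -infty: one real root approaching 1/rho^2
rho = -50.0
(g,) = real_roots(rho)
assert abs(g - 1.0 / rho**2) < 1e-5

# rho -> +infty: three real roots appear; the largest tracks rho + rho**-0.5
rho = 50.0
roots = real_roots(rho)
assert len(roots) == 3
assert abs(max(roots) - (rho + rho**-0.5)) < 1e-3
```

Which of the several real branches at large positive $\rho$ is physical is fixed by continuity with the full numerical solution, as discussed in the text.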
Figure~\reef{0Ak1g1} shows a comparison between the solution to equation~\reef{0Aalgeqnforgk=1} and the solution to the exact string equation~\reef{streqn0Ak=1} obtained by numerical methods, both with $\Gamma = 1$. The two solutions deviate slightly from each other in the interior, but match asymptotically. This can be seen in figure~\reef{0Ak1g1diff}, which plots the difference between the exact and algebraic solutions; the difference goes to zero for large $|x|$. Physics sub--leading in powers of $\Gamma$ contributes to the exact solution, while only the highest powers of $\Gamma$ contribute to the algebraic solution. The inclusion of the derivative terms does not introduce any singularities. \begin{figure}[!ht] \begin{center} \subfigure{ \includegraphics[width=80mm, height = 60mm]{fig3a1f}} \subfigure{\includegraphics[width=80mm, height = 60mm]{fig3b1f}} \caption{\footnotesize{Comparison of the solution (in the 't Hooft limit) to equation~\reef{0Aalgeqnforgk=1} and the solution to the full string equation~\reef{streqn0Ak=1} obtained numerically, with $\Gamma = 1$. The curve which is uppermost at {\it e.g.,} $x=-1.0$ represents the solution of the exact equation. The plot on the right shows the same curves within a smaller domain for better resolution.}}\label{0Ak1g1} \end{center} \end{figure} \begin{figure}[!ht] \begin{center} \includegraphics[width=80mm]{fig4f}\\ \caption{\footnotesize{The difference between the algebraic and full numerical solutions for the $k=1$ 0A theory. For large~$|x|$, the difference goes to zero. }}\label{0Ak1g1diff} \end{center} \end{figure} \subsubsection{The $k=2$ theory} The string equation for the $k=2$ 0A theory has $\mathcal{R} = w^2 - \frac{w^{''}}{3} - z$. In the limit $\Gamma \rightarrow \infty$ and $z \rightarrow \pm \infty$, with $\rho = (\nu \Gamma)^{-4/5} z$ held fixed, the derivative terms drop out.
Defining $g = (\nu \Gamma)^{-2/5} w$ results in the algebraic equation \begin{equation}\label{0Aalgeqnforgk=2} g (g^2 - \rho)^2 = 1\ . \end{equation} The solutions to this equation reproduce the coefficients of the highest powers of $\Gamma$ in the asymptotic expansions \reef{0Aexpnsk=2}. Figure~\reef{0Ak2g1} shows a comparison between the solution to equation~\reef{0Aalgeqnforgk=2} and the solution to the exact $k=2$ 0A string equation obtained by numerical methods. The difference between the two solutions is plotted in figure~\ref{0Ak2g1diff} and can be seen to approach zero for large~$|x|$. The deviation at finite $x$ and the asymptotic matching are again evident, and the former can be explained by the additional contributions coming from sub--leading powers of $\Gamma$ in the exact solution. \begin{figure}[!ht] \begin{center} \subfigure{ \includegraphics[width=80mm, height = 60mm]{fig5a1f}} \subfigure{\includegraphics[width=80mm, height = 60mm]{fig5bf}} \caption{\footnotesize{Comparison of the solution (in the 't Hooft limit) to equation~\reef{0Aalgeqnforgk=2} and the solution to the exact string equation obtained by numerical methods, with $\Gamma = 1$. The curve which is uppermost at {\it e.g.,} $x=-2$ represents the solution of the exact equation. The plot on the right shows the same curves within a smaller domain for better resolution.}}\label{0Ak2g1} \end{center} \end{figure} \begin{figure}[!ht] \begin{center} \includegraphics[width=80mm]{fig6f}\\ \caption{\footnotesize{The difference between the algebraic and full numerical solutions for the $k=2$ 0A theory. For large $|x|$, the difference goes to zero.}}\label{0Ak2g1diff} \end{center} \end{figure} \subsection{A modified 't Hooft limit}\label{sec:modlim} A modification of the above limit (one that does not affect the physics) will allow us to apply it to the Painlev\'e~IV hierarchy of string equations. The need for such a modification will be explained in the next section.
Instead of rescaling variables in the 0B (0A) theory to eliminate $q$ ($\Gamma$), one can remove the derivatives by first rescaling \begin{equation}\label{modlimit1} q \rightarrow \frac{q}{\nu} \quad, \end{equation} and then taking the limit \begin{equation}\label{modlimit2} \nu \rightarrow 0 \quad, \end{equation} in that order, without affecting the physics. Note that the rescaled $q$ is large, so this is still a large $q$ limit. The limit $\nu \rightarrow 0$ amounts to taking $g_s \rightarrow 0$, as is clear from the definition of~$g_s$ ($g_s = \frac{\nu}{\mu^{3/2}}$ for $n=2$ and $g_s = \frac{\nu}{\mu^{5/4}}$ for $n = 4$). This modified limit therefore extracts the same physics as the 't Hooft limit of the previous subsections. In this modified limit, the following equations are obtained for the type 0B theories for $n=2$, \begin{equation}\label{0Bralgeqnm=2} r^6 - x r^4 = 2 q^2 \quad, \end{equation} and for $n=4$, \begin{eqnarray}\label{0Balgeqnsm=4} \frac{3}{8}r^5 - 3 r^3 \omega^2 + r\omega^4 - \frac{3}{8} x r &=& 0 , \nonumber\\ \frac{3}{2}r^4\omega - 2 r^2 \omega^3 &=& - q \quad . \end{eqnarray} These are the same as equations~\reef{0Balgeqnm=2} and~\reef{vheqnm=4} with $r^2 \sim f$, $\omega \sim h$ and $x \sim t$, but with the rescaled parameter $q$ explicitly present. For the 0A theory, after rescaling $\Gamma \rightarrow \frac{\Gamma}{\nu}$, the following equations are obtained for $k=1$ and $k=2$, respectively, \begin{equation}\label{0Aalgeqnk=1} w(w - z)^2 = \Gamma^2 ,\qquad w(w^2 - z)^2 = \Gamma^2 \quad . \end{equation} In fact, for general $k$, the type~0A string equation \reef{streqn0A} effectively reduces to \begin{equation}\label{0Aalgeqn} w \widetilde {\cal{R}}^2 = \Gamma^2 \quad, \end{equation} where $\widetilde {\cal{R}}=w^k-z$ is obtained from $\cal{R}$ after dropping all the derivatives of $w(z)$.
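As a quick consistency check (our addition, using only the scalings already introduced, with $\nu$ now absorbed into $q$), equation~\reef{0Bralgeqnm=2} indeed reduces to the earlier 't Hooft--limit equation~\reef{0Balgeqnm=2}:

```latex
r^2 = q^{2/3} f \ , \qquad x = q^{2/3} t
\quad\Longrightarrow\quad
r^6 - x r^4 = q^2 f^3 - q^{2/3} t \, q^{4/3} f^2
            = q^2 \left( f^3 - t f^2 \right) = 2 q^2 \ ,
```

so that $f^3 - t f^2 = 2$, confirming that the two orders of limits capture the same algebraic content.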
\section{Expansion Matching for DWW: The unknown} \label{sec:DWWexpnmtch} \subsection{\bf{'t Hooft limit for DWW : Two parameters}} The string equations of the DWW hierarchy (reviewed in section~\ref{sec:DWWreview}) are quite general, encoding the string equations of type~0A and of type~0B as special cases. This generality, however, comes at a price: all but the simplest cases are too complicated to be solved using standard numerical methods. This complicates our task of performing a non--perturbative analysis of the string theories proposed in ref.~\cite{DWW}, which were based on strictly perturbative considerations. To facilitate this analysis, we examine the DWW string equations in a (modified) 't Hooft limit. The result of this study is considerable evidence that the conjectured type~II theories possess smooth solutions connecting the perturbative expansions. This analysis also uncovers new non-perturbative solutions, absent from the earlier perturbative analysis, which we suspect may encode new unidentified string theories. In section~\ref{sec:type0expnmtch} we described how to take the 't Hooft limit of the type~0A and type~0B theories. Implicit in these methods was the fact that the type~0 theories each depend on a single parameter, which counts branes or fluxes. The full DWW equations, in contrast, have two free parameters, so it is not immediately clear how to implement the 't Hooft limit in this more general case. The perturbative analysis of ref.~\cite{DWW} suggests that, in addition to the type~0 theories, the full DWW string equations also describe type~II theories whose branes and fluxes are counted by a single parameter. This motivates our assumption that \emph{new theories are described by a one--dimensional subspace of the parameters $c$ and $\Gamma$}. We further restrict our study to subspaces defined by \emph{linear} combinations of $c$ and $\Gamma$. 
This is the simplest possibility; although there could be interesting one--dimensional subspaces corresponding to non--trivial curves in the $(c,\Gamma)$ parameter space, we will not consider them here. Given this assumption and the scaling prescription described in section~\ref{sec:modlim}, we can utilize the 't Hooft limit to analyze the string equations~\reef{DWWstring1} and~\reef{DWWstring2}. For given $\eta$ and $\xi$, let $c + \eta \Gamma = \xi$ be the constraint defining the one--dimensional parameter subspace of a new theory under consideration. Since the constraint is (by assumption) satisfied by the solutions of interest, we can impose the constraint on the DWW equations themselves. We are left with a single free parameter, which we can rescale as in section~\ref{sec:modlim} to take the 't Hooft limit. In this way we obtain algebraic equations which are simple to analyze. Unfortunately, in the search for new theories, we do not \emph{a priori} know the values of~$\eta$ and $\xi$, so the above method is not directly applicable. Nevertheless, we can proceed by adopting a slightly different perspective. Note that the constraint $c + \eta \Gamma = \xi$ implies that the free parameter can be taken to be $c$ or $\Gamma$, or any linear combination of them (as long as it is independent of the constraint). Rescaling the free parameter by $1/\nu$ is therefore equivalent to rescaling $c\rightarrow c/\nu$ and $\Gamma \rightarrow \Gamma/\nu$. This procedure is independent of the details of the original constraint, so it can be implemented without actually knowing the values of $\eta$ and $\xi$. The result is a set of algebraic equations which depends on $c$ and $\Gamma$. Moreover, the solutions to these equations can be used to deduce information about the original constraint. To see this, note that since $\xi$ is a constant, after taking the limit the constraint takes the form $c + \eta \Gamma = 0$.
Therefore, the original constraint is satisfied by any solution of the algebraic equations which satisfies $c + \eta \Gamma = 0$. Turning this around, if a particular solution to the algebraic equations only exists when $c + \eta \Gamma = 0$, then this is a strong indication that it descends from the special solution to the full equations subject to $c + \eta \Gamma = \xi$. That is to say, it is an algebraic approximation to the full solutions of some new string theory. Note that $\xi$ is not determined from this approach, but $\eta$ is. \subsection{'t Hooft limit for DWW : Strategy} To obtain the algebraic equations, we begin with the DWW string equations~\reef{DWWstring1} and~\reef{DWWstring2}, \begin{eqnarray*} -\frac{1}{2}\mathcal{L}_x+\frac{1}{2}u\mathcal{L}+\mathcal{K} &=& \nu c \quad , \\ \left(-v+\frac{1}{4}u^2+\frac{1}{2}u_x\right)\mathcal{L}^2-\frac{1}{2}\mathcal{L}\mathcal{L}_{xx}+\frac{1}{4}\mathcal{L}_x^2&=&\nu^2 \Gamma^2 \quad . \end{eqnarray*} In the modified 't Hooft limit discussed above, \begin{eqnarray}\label{modlimDWW} \Gamma \rightarrow \frac{\Gamma}{\nu} \quad, \quad c \rightarrow \frac{c}{\nu} \quad,\quad \nu \rightarrow 0 \quad , \end{eqnarray} the string equations simplify to \begin{eqnarray}\label{DWWalgeqns} \frac{1}{2}u \tilde{\mathcal{L}}+\tilde{\mathcal{K}} = c \ , \qquad {\rm and}\qquad \left(-v+\frac{1}{4}u^2\right)\tilde{\mathcal{L}}^2 = \Gamma^2 \ , \end{eqnarray} where $\tilde{\cal{L}}$ and $\tilde{\cal{K}}$ are the polynomials in $u$ and $v$ obtained from $\cal{L}$ and $\cal{K}$ after dropping all the derivative terms. Before analyzing these equations in full generality, let us discuss a few special cases to illustrate our methods. \begin{enumerate} \item $u = 0$ (type~0A)\\ The restriction to type~0A requires $n$ to be even, in which case $\widetilde{\cal{K}} \propto u$. Thus, setting $u = 0$ in the first equation of~\reef{DWWalgeqns} implies $c = 0$.
This constraint can be written as $c +\eta \Gamma = 0 $ with $\eta = 0$. As described above, this constraint lifts to $c+\eta \Gamma = c = \xi$ at the level of the full equations. If we hadn't known the parameter constraint of the 0A theory ($c=-\frac{1}{2}$), the algebraic equations alone would indicate that $c=\xi$, a constant. Alternatively, one could start by looking for solutions to the equations~\reef{DWWalgeqns} with $c = 0$. A subset of smooth solutions, identified by matching their boundary behavior onto expansions in Classes 1 and 2 (the asymptotic expansions of the type~0A theory), would then correspond to 0A solutions. \item $\widetilde{\cal{L}} = 0$ (type~0B)\\ Setting $\widetilde{\cal{L}} = 0$ forces $\Gamma = 0$, and the first equation gives $\widetilde{\cal{K}} = c$. For even $n$, these reduce to the 0B algebraic equations after redefining the variables appropriately. The constraint $\Gamma = 0$ can be rewritten as $c +\eta \Gamma = 0 $ with $\eta=\infty$ and lifts to a constraint on the full equations which takes the form $\Gamma=\xi$, a constant. The true parameter constraint for the full 0B theory is $\Gamma=0$; the algebraic equations alone do not determine the precise value of $\xi$. As in the 0A case above, one could look for smooth solutions to the algebraic equations subject to $\Gamma = 0$. If they obey the correct boundary conditions (expansion Classes 1 and 3 (or 5)), one could identify these as 0B solutions. \end{enumerate} Our strategy for demonstrating the existence of smooth non--perturbative solutions to the type~0 and type~II theories and identifying potentially new theories can be summarized as follows. Take the limit~\reef{modlimDWW} of the full DWW equations to obtain a set of algebraic equations.
Search for solutions to these equations subject to one of three constraints\footnote{It turns out that more general linear combinations are relevant only for $n\geq 4$, and for simplicity we exclude such cases from the present analysis.}: \begin{equation} c=0\ , \quad \Gamma = 0\ , \quad{\rm or}\quad c \pm \Gamma = 0\ . \end{equation} By matching the asymptotic behavior of these solutions onto the various expansions, we identify the type~0 and type~II string theories. We also find new solutions with asymptotics different from those of the type~0 or type~II theories. We speculate that these solutions correspond to some new unknown string theories. As emphasized above, the precise parameter constraints of these new theories cannot be fully determined from the algebraic equations. \subsection{DWW $n = 2$ in a 't Hooft limit} The string equations for $n=2$ reduce in the algebraic limit~\reef{modlimDWW} to \begin{eqnarray}\label{n=2algeqn} u\left( v + \frac{1}{2} u^2 + x\right) + 2 u v &=& 2 c \quad ,\nonumber\\ \left(u v - c\right)^2 - v \left(v + \frac{1}{2} u^2 + x\right)^2 &=& \Gamma^2 \quad . \end{eqnarray} They produce a total of nine asymptotic expansions which fall into four classes (see Appendix~\ref{Appexpns}). As the expansions within each class are related by various $\mathbb{Z}_2$ symmetries~\cite{DWW}, we label each expansion only by a subscript specifying its class. The solutions corresponding to the three constraints in parameter space are described below. \begin{enumerate} \item $c = 0$\\ A plot of solutions to the algebraic equations~\reef{n=2algeqn} with $c = 0$ and $\Gamma=2$ is shown in figure~\reef{n=2c0}. The plot on the right shows two smooth solutions that connect regions\footnote{The algebraic solutions are represented by the solid black curves, while the asymptotic expansions are represented by the dashed curves. We adopt this convention in the rest of the paper.} of large $-x$ and $+x$.
In our conventions, these are $v_2 | v_{1}$ and $v_{3} | v_{4}$ solutions\footnote{We label a solution with $-x$ asymptotics $v_L$ and $+x$ asymptotics $v_R$ by $v_L | v_R$.}. The plot on the left shows the solutions which do not join negative asymptotics to positive asymptotics; since joining the two regimes is a feature we expect of any solution describing a new string theory, these types of solutions will not be our primary interest. \begin{figure}[ht] \begin{center} \subfigure{ \includegraphics[width=78mm, height = 60mm]{fig7a2f}} \subfigure{\includegraphics[width=80mm, height = 60mm]{fig7b22f}} \caption{\footnotesize{$n=2$ algebraic solutions with $c = 0$ and $\Gamma=2$. }}\label{n=2c0} \end{center} \end{figure} The $v_2|v_1$ solutions correspond to the 0A theory (with $k=1$) coupled to pure supergravity. It is unclear if the $v_3|v_4$ solutions correspond to a consistent new theory. Based on the square of theories in figure~\reef{square2}, it is tempting to conclude that this theory might be the type~IIA string theory coupled to the $(4,2)$ $(A,D)$ superconformal minimal model. However, as mentioned in the discussion below figure~\reef{square2}, the $v_4$ expansion is complex as a $+x$ expansion for $n = 2 \bmod 4$. For the particular case of $n=2$ and $c=0$, $v_4$ happens to be real, but this is an exception. We find it unlikely that the corresponding solution to the full equations exists and encodes a consistent theory, but we leave this question for future investigation. \item $\Gamma = 0$\\ Figure~\reef{n=2g0} shows the algebraic solutions for $\Gamma=0$ and $c=1.5$. The right figure shows two smooth solutions, $v_{3}|v_{1}$ and $v_{4}|v_2$, connecting the negative and positive regions. The $v_{3}|v_{1}$ solutions are algebraic approximations to the full solutions of the 0B theory (with $n=2$) coupled to pure supergravity. Analogous to the previous case with $c=0$, it is unclear if the $v_4|v_2$ solutions descend from a consistent new theory.
It is tempting to conclude that these are type~IIB string theories coupled to the $(4,2)$ $(A,D)$ superminimal model, but this is likely incorrect because $v_4$ is complex as a $+x$ expansion for $n=2\mod4$. \begin{figure}[ht] \begin{center} \subfigure{ \includegraphics[width=80mm, height = 59mm]{fig8af}} \subfigure{\includegraphics[width=80mm, height = 60mm]{fig8bf}} \caption{\footnotesize{$n=2$ algebraic solutions with $\Gamma = 0$ and $c=1.5$.} }\label{n=2g0} \end{center} \end{figure} \item $c = \pm \Gamma$\\ We consider $c=\Gamma$. The case of $c=-\Gamma$ is completely analogous. Figure~\reef{n=2ceqg} shows solutions for $c=2$ and $\Gamma=2$. The solutions in the right figure are $v_4 | v_1$ and $v_3|v_1$ and connect the negative region smoothly to the positive region. \begin{figure}[ht] \begin{center} \subfigure{ \includegraphics[width=80mm, height = 60mm]{fig9af}} \subfigure{\includegraphics[width=80mm, height = 60mm]{fig9bf}} \caption{\footnotesize{$n=2$ algebraic solutions with $c = 2$ and $\Gamma=2$.} }\label{n=2ceqg} \end{center} \end{figure} The fixed value of $c \mp \Gamma$ cannot be deduced from the algebraic solutions. \end{enumerate} \subsection{DWW $n = 4$ in a 't Hooft limit} The string equations in the 't Hooft limit, after setting $g_4 = -3/4$, are too long to be reproduced here. The polynomials $\cal{L}$ and $\cal{K}$ reduce to, \begin{eqnarray}\label{n=4algeqn} \mathcal{\widetilde{L}} &=& \frac{3}{4}v^2 + \frac{3}{2} v u^2 + \frac{1}{8}u^4 - \frac{3}{4}x \quad, \nonumber\\ \mathcal{\widetilde{K}} &=& \frac{3}{2} u v^2 + \frac{1}{2} u^3 v \quad, \end{eqnarray} from which the corresponding algebraic string equations can be easily obtained using~\reef{DWWalgeqns}. There are a total of twenty-five expansions~\cite{DWW}, falling into five classes. Solutions to the algebraic string equations with the three constraints are shown below. \begin{enumerate} \item $c = 0$\\ Figure~\reef{n=4c0} shows solutions with $c=0$ and $\Gamma=2$.
There are three solutions that interpolate between the negative region and the positive region, $v_2|v_{1}$, $v_{5}|v_{3}$ and $v_{5}|v_{4}$. \begin{figure}[ht] \begin{center} \subfigure{ \includegraphics[width=80mm, height = 60mm]{fig10af}} \subfigure{\includegraphics[width=80mm, height = 60mm]{fig10bf}} \caption{\footnotesize{$n=4$ algebraic solutions with $c = 0$ and $\Gamma = 2$.} }\label{n=4c0} \end{center} \end{figure} The $v_2|v_1$ solutions represent type~0A coupled to the $(2,8)$ $(A,A)$ superminimal model. The $v_{5}|v_{4}$ solutions fit with our conjectured type~IIA string theory coupled to the $(4,6)$ $(A,D)$ superminimal model and are analogues of the $\IZ_2$ symmetry--breaking solutions of the 0B theory. The $v_5 | v_3$ solutions are new. On setting $c=0$ in the \emph{full} expansions, the powers of $\Gamma$ in both correspond to a parameter counting branes. An underlying string theory with these solutions, if it exists, would have branes in both asymptotic regimes\footnote{It is possible that such brane--brane solutions could be summed up to form rational solutions. This would be analogous to the rational solutions of the type~0A string equations that were considered in a string theory context in ref.\cite{Johnson:2006ux}. The rational solutions have $v_2$ type expansions (for $c=-1/2$) in both asymptotic directions for $x$.}. Further work is needed to conclude if such an underlying theory exists. \item $\Gamma = 0$\\ Figure~\reef{n=4g0} shows solutions with $\Gamma=0$ and $c=1$. The solutions which interpolate between $-x$ and $+x$ are $v_{5}|v_{1}$, $v_{5}|v_{3}$ and $v_2 | v_{4}$. The $v_5|v_1$ solutions correspond to the $\IZ_2$ symmetry--breaking solutions of type~0B string theory coupled to the $(2,8)$ superminimal model, while the $v_2|v_4$ solutions fit with our conjectured type~IIB string theory coupled to the $(4,6)$ $(A,D)$ superminimal models. 
\begin{figure}[ht] \begin{center} \subfigure{ \includegraphics[width=80mm, height = 60mm]{fig11af}} \subfigure{\includegraphics[width=80mm, height = 60mm]{fig11bf}} \caption{\footnotesize{$n=4$ algebraic solutions with $\Gamma = 0$ and $c=1$.} }\label{n=4g0} \end{center} \end{figure} The $v_5|v_3$ solutions with $\Gamma = 0$ are new, similar to those above with $c=0$. Further work is required to understand the existence and nature of such theories. \item $c = \pm \Gamma$\\ Again we consider $c=\Gamma$ since the case of $c=-\Gamma$ is completely analogous. Figure~\reef{n=4ceqg} plots the solutions for $c=1$ and $\Gamma =1$. The four smooth solutions which interpolate between $-x$ and $+x$ are two $v_5|v_1$ solutions, one $v_5|v_4$ solution and one $v_5|v_3$ solution. (The two $v_5|v_1$ solutions can be obtained from one another by applying $\IZ_2$ symmetries on the signs of the parameters). All of these solutions are new, and it is possible that they correspond to well-defined underlying string theories, but we leave any definitive claims to future work. \begin{figure}[ht] \begin{center} \subfigure{ \includegraphics[width=80mm, height = 60mm]{fig12af}} \subfigure{\includegraphics[width=80mm, height = 60mm]{fig12bf}} \caption{\footnotesize{$n=4$ algebraic solutions with $c = 1$ and $\Gamma = 1$.} }\label{n=4ceqg} \end{center} \end{figure} \end{enumerate} \section{DWW $n=1$: 't Hooft limit and Numerical Results}\label{sec:n=1expnmatch} The string equations for $n=1$ are, \begin{eqnarray}\label{DWWn=1} 2v - u_x + u^2 + g_1 x u = 2 \nu g_1 (c + \frac{1}{2}) \quad , \hspace{30mm} \\ \left(-v + \frac{1}{4}u^2 + \frac{1}{2}u_x\right)(u + g_1 x)^2 -\frac{1}{2}u_{xx}(u + g_1 x) + \frac{1}{4}\left(u_x + \nu g_1\right)^2 = \nu^2 g_1^2 \Gamma^2 \quad, \nonumber \end{eqnarray} where we have used the relation $g_1 = \frac{1}{t_1}$. These equations are simple enough to allow for complete numerical solutions, which will help us classify new non-perturbatively complete solutions. 
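As an illustration of how such branches can be traced numerically (a minimal sketch of ours, not the procedure used for the figures), Newton's method can be applied pointwise in $x$ to the $n=1$ algebraic system~\reef{n=1algeqn} given below; the parameter values $c=0$, $\Gamma=2$ are chosen purely for illustration:

```python
import numpy as np

# n = 1 algebraic string equations in the 't Hooft limit (g_1 = -2):
#   F1 = u^2 + 2v - 2ux + 4c                               = 0
#   F2 = v^2 + 2v(u^2 - 4ux + 4x^2 - 4c) - 4(c^2 - G^2)    = 0
def F(u, v, x, c, G):
    return np.array([u**2 + 2*v - 2*u*x + 4*c,
                     v**2 + 2*v*(u**2 - 4*u*x + 4*x**2 - 4*c) - 4*(c**2 - G**2)])

def Jac(u, v, x, c, G):
    # Jacobian of (F1, F2) with respect to (u, v)
    return np.array([[2*u - 2*x,       2.0],
                     [2*v*(2*u - 4*x), 2*v + 2*(u**2 - 4*u*x + 4*x**2 - 4*c)]])

def solve_point(x, c, G, u0=0.0, v0=-0.1, tol=1e-12, maxit=100):
    """Newton iteration for one real (u, v) root at fixed x."""
    w = np.array([u0, v0])
    for _ in range(maxit):
        step = np.linalg.solve(Jac(w[0], w[1], x, c, G), F(w[0], w[1], x, c, G))
        w = w - step
        if np.max(np.abs(step)) < tol:
            break
    return w

if __name__ == "__main__":
    for x in (-5.0, -3.0, 3.0, 5.0):
        u, v = solve_point(x, c=0.0, G=2.0)
        print(f"x = {x:+.1f}:  u = {u:+.5f},  v = {v:+.5f}")
```

Seeding each grid point with the converged root from the previous one traces out smooth branches like the solid curves in the figures.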
We will also examine these equations in the limit~\reef{modlimDWW}, under which they reduce to \begin{eqnarray}\label{n=1algeqn} u^2 + 2v -2 u x &=& -4 c \quad, \nonumber\\ v^2 + 2 v \left(u^2 -4 u x + 4 x^2 -4 c\right) &=& 4 (c^2 - \Gamma^2) \quad, \end{eqnarray} where we have used $g_1 = -2$. These equations admit four asymptotic expansions, which we label as: \begin{eqnarray}\label{lon=1} v_2 &\sim& \frac{\nu^2}{x^2}\left(c^2 - \Gamma^2\right) \quad, \nonumber\\ v_{3a} &\sim& g_1 \nu \left(c + \Gamma\right) \quad, \nonumber\\ v_{3b} &\sim& g_1 \nu \left(c - \Gamma\right) \quad, \\ v_{4} &\sim& \frac{1}{9} g_1^2 x^2 \quad. \nonumber \end{eqnarray} The solutions with the three different parameter constraints are presented below. \begin{enumerate} \item $c = 0$ \\ A plot of solutions to the algebraic equations~\reef{n=1algeqn} with $c = 0$ is shown in figure~\reef{c0gm2}. There are two smooth solutions interpolating between $+x$ and $-x$, labeled $v_3|v_2$ in our convention. \begin{figure}[!h] \begin{center} \includegraphics[width=70mm]{fig13f}\\ \caption{\footnotesize{$n=1$ algebraic solution with $c=0$ and $\Gamma = -2$.} }\label{c0gm2} \end{center} \end{figure} As outlined in the Appendix \reef{BraneFlux}, $v_2$ with $c = 0$ can be interpreted as a flux expansion (for odd $n$) in $\Gamma$, while $v_3$ with $c = 0$ is a brane expansion in $\Gamma$. These interpretations are possible strictly for $c = 0$; a finite value of $c$ results in powers of $g_s$ that do not allow $v_2$ to be interpreted as a flux expansion. So a string theory corresponding to this solution (if it exists) should require $c=0$ in the exact string equations. \item $\Gamma = 0$ \\ The plots for this case are shown in figure~\reef{cm2g0}. There are three smooth solutions, two of which are $v_4 | v_2$ and $v_2 | v_4$ solutions. The third solution is parallel to the $x$--axis and exactly equals $v_{3a}$ and $v_{3b}$. 
It is a 0B solution, the `topological point' of the 0B theory~\cite{Klebanov:2003wg} with $v = \nu g_1 c$. The $v_4 | v_2$ and $v_2|v_4$ solutions are new. For any value of $\Gamma$, $v_2$ and $v_4$ have powers of $c$ and $g_s$ consistent with a parameter counting branes. This suggests that a string theory underlying such solutions (if it exists) should have branes in \emph{both} asymptotic regimes. \begin{figure}[!h] \begin{center} \includegraphics[width=70mm]{fig14f}\\ \caption{\footnotesize{$n=1$ algebraic solution with $c=-2$ and $\Gamma = 0$.}}\label{cm2g0} \end{center} \end{figure} Interestingly, algebraic $v_4 | v_2$ solutions do not exist for $c>0$. Nevertheless, we have demonstrated the existence of numerical solutions to the full equations in this and other cases. See figure~\reef{cwithg0full}. These solutions\footnote{Note the development of a bump in the interior of the solution as $c$ decreases. There are a number of qualitative features of this family of solutions that are akin to those seen in studies of the type~0A case in refs.~\cite{Carlisle:2005mk,Carlisle:2005wa}, when examining the case of $\Gamma\to -1$.} were obtained using the ${\tt bvp4c}$ algorithm in $\textrm{MATLAB}$. One lesson learned is that the failure to find algebraic solutions in a 't Hooft limit is not a guarantee that smooth solutions to the full equations do not exist. \begin{figure}[!h] \begin{center} \includegraphics[width=100mm]{fig15f1}\\ \caption{\footnotesize{$n=1$ solution to the full string equations~\reef{DWWn=1} with $\Gamma=0$.} }\label{cwithg0full} \end{center} \end{figure} \item $c = \pm \Gamma$\\ Figure~\reef{c2g2} shows algebraic solutions where $c=2$ and $\Gamma=2$. These are $v_4|v_{3}$ and $v_{3}|v_4$ solutions in our notation. The constraint $c \pm \Gamma = 0$ on the algebraic equations lifts to $c \pm \Gamma = \xi$ on the full solutions. The constant $\xi$ cannot be determined from algebraic solutions alone.
In this case, however, a thorough numerical analysis of the full equations is possible and it shows that $\xi = - 1/2$. Examples of these numerical solutions are displayed in figure~\reef{cplusg}. \begin{figure}[ht] \begin{center} \includegraphics[width=70mm]{fig16f} \caption{\footnotesize{$n=1$ algebraic solutions with $c = 2$ and $\Gamma= \pm2$.} }\label{c2g2} \end{center} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width=100mm]{fig17f1}\\ \caption{\footnotesize{$n=1$ solution to the full string equations~\reef{DWWn=1} with $c + \Gamma=-1/2$.} }\label{cplusg} \end{center} \end{figure} The numerical solution strongly suggests that such new solutions are not just an artifact of the 't Hooft limit and should be taken seriously as new examples of non--perturbatively complete solutions. Such solutions were not apparent in the perturbative analysis of ref.~\cite{DWW} and lead us to believe that the modified 't Hooft limit presented here is a good way of unearthing new non--perturbative solutions. \end{enumerate} \section{Discussion} We have presented a modification of the 't Hooft limit, first used by the authors of ref.~\cite{Klebanov:2003wg}, to argue for the existence of new non--perturbative solutions to string equations that are difficult to obtain numerically. We have analyzed the Painlev\'e~IV hierarchy of string equations introduced in ref.~\cite{DWW} in this limit, showing that examples of the conjectured type~II string theories coupled to the $(4,4k-2)$ superconformal minimal models have well--defined non--perturbative solutions. As in the case of type~0A, higher $k$ solutions are likely to exist as a consequence of these\cite{Johnson:1992pu}, since the underlying integrable flow structure should evolve lower $k$ solutions into higher $k$ ones. 
Although this limit results only in the highest powers of the brane/flux parameter surviving (so that we have only spherical topologies with boundaries or fluxes), it is likely that smooth solutions exist for the full string equations too. The higher genus terms in the free energy are obtained by including the derivative corrections to the string equations and from the lesson of type~0A (where we can compare to numerical solutions of the full equations --- see section~\ref{sec:modlim}) it seems that they do not introduce any singular behaviour. This can presumably be checked through further numerical work for the 0B and conjectured type~II theories. We have uncovered a number of clear examples of new non--perturbative solutions which also seem to be string--like. It would be interesting to determine if these indeed correspond to new string theories. This could presumably be checked using perturbative techniques of the sort that we used in ref.~\cite{DWW} to identify the type~II theories. We have demonstrated, in at least the $n=1$ case, that such new solutions are not simply an artifact of the 't Hooft limit, and that smooth solutions to the full equations can be obtained numerically obeying the same parameter constraints. It would be interesting to find new solutions of this type numerically for higher $n$. \section*{Acknowledgements} This work was supported by the US Department of Energy.
\section{Peter Hasenfratz' scientific contributions} It is difficult to summarize Peter's vast scientific contributions in just a short amount of time, and it is even more difficult to acknowledge them in terms of numbers. Peter has over 125 published articles, which by today's standards may not seem like a lot. I think this simply reflects the fact that his publications only followed very high quality standards which were difficult to satisfy, not least by himself. He would publish only when he considered the results worthwhile and significant. Many of the publications are in fact proceedings of plenary talks, review articles, and lecture notes. Not surprisingly, he was an excellent academic teacher, not only during his lectures at the university, but also at numerous international schools. As a student it was difficult not to get infected by his enthusiasm about quantum field theory (QFT) in particular and particle physics in general. Fittingly, he was also an advisor to a total of 14 Ph.D.~students. One particular scientific contribution is actually of a sociological nature. In 1982 Peter initiated the first lattice conference as a workshop at CERN. In those now hard-to-imagine times without the internet, he issued handwritten personal invitations to all participants. In this way, he planted the seed for a new scientific community, which over the years has grown into what we experience today at this conference. So far, there have been 33 conferences with several hundred participants each; the 34th symposium here in Southampton, for example, boasts more than 420 participants.
\begin{wrapfigure}[21]{r}{0.4\textwidth} {\centering \includegraphics[width=0.4\textwidth]{vol29-issue2-p021fig.png} } \vspace{-0.55cm} \caption { Peter Hasenfratz delivering a plenary talk on the mass limits of the Higgs particle at the lattice conference at Fermilab in 1988 \cite{Hasenfratz:1988ts} (picture from \cite{Hasenfratz:1988cc}).} \end{wrapfigure} Peter had a remarkable ability to analytically calculate seemingly incalculable things. Let me here just mention two specific examples which I will discuss in more detail later: the scale parameter of QCD on the lattice on the one hand, and the exact mass gaps in several two-dimensional asymptotically free quantum field theories on the other hand. He was a very creative and original thinker and provided numerous seminal contributions to lattice QFT, such as the concept of Fixed Point (FP) actions, the index theorem, the understanding of lattice chiral symmetry, and the proper lattice definition of the chemical potential, to mention here just the concepts which I will refer to later in my talk. Other major research topics to which Peter contributed include the quark bag model, topological excitations, spin models, the hopping expansion, Higgs physics (upper bound, top quark condensate, \ldots), finite size effects from Goldstone bosons, and finite temperature phase transitions in QCD. Higgs physics was a topic Peter was particularly fond of. The picture on the right shows him delivering a plenary talk on the mass limits of the Higgs particle at the lattice conference at Fermilab in 1988. Throughout his career he continued to be fascinated by the Higgs mechanism and the underlying field theoretic concepts, and he certainly enjoyed the discovery of the Higgs particle a few years ago.
It would be impossible to cover all of the topics mentioned above; instead I will just focus on three separate topics which I find particularly interesting, not least because they connect to issues which are going to be discussed at this conference. The first topic concerns the connection between the lattice and the continuum, the second concerns the calculation of mass gaps, and the third the renormalization group (RG) and fixed point (FP) actions. \section{The connection between the lattice and the continuum} Let us turn back the clock about 35 years into a time when lattice field theory, and in particular numerical calculations, started to come out of its infancy. Peter was very busy as a salesman in the high-energy particle physics community, giving many talks on the lattice approach to field theory in general, and QCD in particular. Despite being a salesman, he never stopped emphasizing the shortcomings of these early calculations, to many of which he himself contributed, and he remained very critical of them. Let me quote in the following a few statements which can be found in his review paper on lattice gauge theories from 1981 \cite{Hasenfratz:1981ua}: "{\it MC simulations did not help us to obtain a better physical understanding, a deeper insight into the theory.}" -- "{\it Is $g$ small enough?
[\ldots] there is reason to worry: the approach to asymptotic scaling might be very slow.}" -- "{\it In spite of the intense work, there is no real progress one can report on.}" -- "{\it The whole program is faced with unexpected and unpleasant difficulties at this moment.}" -- "{\it Clarification is needed.}" -- "{\it One should consider these numbers with some reservations.}" -- "{\it [\ldots] although it is not clear whether every part of the calculation is under control.}" -- Further into 1982 \cite{Hasenfratz:1982sa} he continued along the same line, always keeping expectations low and advocating a slow and careful approach, instead of a fast and attention-grabbing one: "{\it [\ldots] the reliability of this procedure is really questionable.}" -- "{\it I sense a big change concerning the expectations of the physics community. Actually I believe this change is too big.}" -- "{\it Please, have your own healthy doubts [\ldots]. Solving QCD is not \underline{so} easy.}" -- "{\it [\ldots] admit clearly the defects of our methods and make serious efforts to improve them. This path is less spectacular, but, perhaps, worth following.}" -- All these statements emphasize the carefulness and great diligence which Peter applied to the lattice approach, and I would like you to keep this in mind when I discuss some of the very latest results from lattice QCD at the end of this section. In the late seventies and early eighties it was not yet clear whether lattice QCD provides a sensible regularization of continuum QCD. Hence, in order for lattice QCD to be meaningful and useful on a practical level, it was obviously crucial to establish the connection between calculations on the lattice and in the continuum. Lattice QCD, either in the chiral or the quenched limit, contains only the dimensionless coupling $g$ and implicitly the dimensionful lattice spacing $a$ as parameters.
For a physical mass $m$, or equivalently a correlation length $\xi$, one therefore has \[ m = f(g) \cdot \frac{1}{a} \quad \text{or} \quad \xi = h(g)\cdot a \, . \] The continuum limit is reached when $1/m$ or $\xi \gg a$, i.e.~when the lattice system approaches a continuous phase transition. In asymptotically free theories, the limit is straightforward to reach, since the lattice spacing $a \rightarrow 0$ for $g \rightarrow 0$. By changing $a$ and $g$ appropriately towards the continuum limit, physical quantities should become independent of $a$, \[ \frac{d}{da} m = 0 \quad (a \rightarrow 0)\, , \] which is equivalent to saying that the theory is renormalizable. The renormalizability uniquely fixes the relation between $a$ and $g$ through the differential equation for $f(g)$, \begin{equation} -f(g) + f'(g) \left(a \frac{d}{da} g \right) = 0 \qquad \text{where} \qquad\beta(g) \equiv -a \frac{d}{da} g = -b_0 g^3 - b_1 g^5 - \ldots \, . \label{eq:beta-function} \end{equation} Hence, every physical quantity on the lattice can be expressed in terms of a {\it single}, RG-invariant mass parameter $\Lambda^\text{latt}$, e.g. $m = c_m \cdot \Lambda^\text{latt}$, and the dependence of the scale on the gauge coupling $g$ is determined by Eq.~(\ref{eq:beta-function}), yielding \[ \Lambda^\text{latt} = \frac{1}{a}\, e^{-1/2b_0 g^2} \left(b_0 g^2\right)^{-b_1/2b_0^2} \, \cdot [1 + {\cal O}(g^2)] \] to lowest order in perturbation theory. Analogously, in a continuum renormalization scheme one has \[ \Lambda = M \, e^{-1/2b_0 g(M)^2} \left(b_0 g(M)^2\right)^{-b_1/2b_0^2} \, \cdot [1 + {\cal O}(g(M)^2)] \] where the mass parameter $M$ corresponds to the renormalization scale introduced in the continuum renormalization scheme. So, in order to set the scale in the lattice theory, and to make sense of it, one must connect the two scales.
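As an aside (a check of ours, not part of the original discussion), the RG invariance of $\Lambda^\text{latt}$ can be verified numerically: integrating the two-loop relation between $a$ and $g$ and evaluating $\frac{1}{a}\, e^{-1/2b_0 g^2}(b_0 g^2)^{-b_1/2b_0^2}$ shows that it approaches a constant as $g \to 0$, with ${\cal O}(g^2)$ corrections. A minimal Python sketch, assuming the two-loop coefficients of pure SU(3) gauge theory and working with logarithms throughout to avoid overflow:

```python
import math
import numpy as np

# Two-loop beta-function coefficients for pure SU(N) gauge theory, N = 3
N = 3
b0 = 11.0 * N / (48.0 * math.pi**2)
b1 = 34.0 * N**2 / (3.0 * (16.0 * math.pi**2)**2)

def dlna_dg(g):
    # d(ln a)/dg = 1/(b0 g^3 + b1 g^5): the lattice spacing shrinks as g -> 0
    return 1.0 / (b0 * g**3 + b1 * g**5)

def ln_a(g, g_ref=0.5, n=4000):
    """ln a(g), normalized so that a(g_ref) = 1, via composite Simpson."""
    xs = np.linspace(g_ref, g, n + 1)
    ys = dlna_dg(xs)
    h = (g - g_ref) / n
    return h / 3.0 * (ys[0] + ys[-1] + 4.0 * ys[1:-1:2].sum() + 2.0 * ys[2:-1:2].sum())

def ln_Lambda_latt(g):
    # ln of (1/a) exp(-1/(2 b0 g^2)) (b0 g^2)^(-b1/(2 b0^2))
    return -ln_a(g) - 1.0 / (2.0 * b0 * g**2) \
           - (b1 / (2.0 * b0**2)) * math.log(b0 * g**2)

if __name__ == "__main__":
    # the g-dependence dies out as g decreases: Lambda_latt is RG invariant
    for g in (0.5, 0.3, 0.1):
        print(f"g = {g}:  ln Lambda_latt = {ln_Lambda_latt(g):.6f}")
```

The absolute normalization is fixed only by the reference point $a(g_{\rm ref})=1$; the physical statement is that the residual $g$-dependence drops out towards the continuum limit.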
In 1980 Anna and Peter Hasenfratz calculated this connection \cite{Hasenfratz:1980kn}: \begin{equation*} \Lambda^\text{MOM}_\text{\tiny Feynman gauge} = 83.5 \, \Lambda^\text{latt} \quad \text{for} \,\, SU(3), \quad \quad \Lambda^\text{MOM}_\text{\tiny Feynman gauge} = 57.5 \, \Lambda^\text{latt} \quad \text{for} \,\, SU(2) \, . \end{equation*} The computation involves a rather long and tedious 1-loop calculation of 2- and 3-point functions in lattice perturbation theory. In particular, the calculation provides an explicit demonstration that there are no unwanted divergences and that all noncovariant terms cancel. In that sense, the result truly constitutes a milestone in establishing lattice QCD as a viable and useful regularization of QCD. Anna and Peter were the first to get the relation correct, thereby settling a dispute which was ongoing at the time. It is hard to overestimate the challenge and the difficulty of this calculation, and in fact it took 15 more years until the corresponding 2-loop calculation was completed \cite{Luscher:1995np}. Of course, the $\Lambda$ parameter is nonperturbatively defined, \begin{equation*} \Lambda = M \, e^{-1/2b_0 g(M)^2} \left(b_0 g(M)^2\right)^{-b_1/2b_0^2} \times\exp\left[- \int_0^{g(M)} dx\left(\frac{1}{\beta(x)} + \frac{1}{b_0 x^3} - \frac{b_1}{b_0^2x}\right) \right] \, , \end{equation*} and lattice QCD is the ideal method to relate it {\it nonperturbatively} to the low-energy properties of \begin{wrapfigure}[16]{r}{0.4\textwidth} \centering \vspace{-0.3cm} \includegraphics[width=0.4\textwidth]{r0LamMSbar-crop} \caption{Lattice QCD results for the $\Lambda$ parameter in the $\overline{\text{MS}}$-scheme in units of $r_0$ from the FLAG report \cite{Aoki:2016frl}. \label{fig:Lambda-parameter}} \end{wrapfigure} QCD. In fact, the $\Lambda$ parameter is a quantity which nowadays is rather well determined from lattice QCD calculations. 
In Fig.~\ref{fig:Lambda-parameter} we show the latest collection of lattice QCD results for the $\Lambda$ parameter in the $\overline{\text{MS}}$-scheme in units of $r_0$ from the FLAG report \cite{Aoki:2016frl}. Obviously, while the results for $N_f=0$ and 2 flavour QCD are of no interest from a phenomenological point of view, they certainly are for the theoretical understanding of QCD in general. \begin{figure}[t] \centering \vspace{-0.5cm} \includegraphics[width=0.45\textwidth]{alphas_table_FLAG} \hfill \includegraphics[height=6.0cm]{alphasMSbarZ-crop} \caption{Collection of lattice QCD results for the $N_f=5$ strong coupling $\alpha_{\overline{\text{MS}}}^{(5)}(M_Z)$ in the $\overline{\text{MS}}$-scheme at the scale $M_Z$ from the FLAG report \cite{Aoki:2016frl}.\label{fig:alpha_s}} \vspace{-0.5cm} \end{figure} Closely related to the $\Lambda$ parameter is the running strong coupling $\alpha_s$ at the renormalization scale $M$, \[ \alpha_s(M) = \frac{g^2(M)}{4 \pi} \, . \] It can for example be determined by measuring a short distance quantity ${\cal O}$ at scale $M$ and matching it with the corresponding perturbative expansion in terms of the coupling in the $\overline{\text{MS}}$-scheme, \[ {\cal O}(M) = c_1 \alpha_{\overline{\text{MS}}}(M) + c_2 \alpha_{\overline{\text{MS}}}(M)^2 + \ldots \, . \] Also for this quantity, lattice QCD calculations are well advanced, as can be seen from the table and plot in Fig.~\ref{fig:alpha_s}. Many collaborations provide values for the $N_f=5$ strong coupling $\alpha_{\overline{\text{MS}}}^{(5)}(M_Z)$ in \begin{wrapfigure}[21]{r}{0.3\textwidth} \includegraphics[width=0.3\textwidth]{alphas_PDG} \caption{Summary of determinations of $\alpha_{\overline{\text{MS}}}(M_Z^2)$ from the 2016 edition of the PDG \cite{Olive:2016xmw}.\label{fig:alphas_PDG}} \end{wrapfigure} \noindent the $\overline{\text{MS}}$-scheme at the scale given by the $Z$-boson mass $M_Z$, employing a variety of different short distance quantities. 
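Schematically, each such determination inverts the truncated matching series for the measured quantity. A minimal sketch with invented coefficients $c_1$, $c_2$ (purely illustrative; not taken from any of the determinations above):

```python
def extract_alpha(O_meas, c1, c2, tol=1e-12, maxit=50):
    """Invert the truncated series O = c1*alpha + c2*alpha^2 for alpha
    by Newton iteration, starting from the leading-order estimate."""
    alpha = O_meas / c1
    for _ in range(maxit):
        f  = c1 * alpha + c2 * alpha**2 - O_meas
        fp = c1 + 2.0 * c2 * alpha
        step = f / fp
        alpha -= step
        if abs(step) < tol:
            break
    return alpha

if __name__ == "__main__":
    c1, c2 = 1.0, 1.5                  # invented coefficients, for illustration only
    O = c1 * 0.118 + c2 * 0.118**2     # pretend this was the "measured" quantity
    print(extract_alpha(O, c1, c2))    # recovers alpha = 0.118
```

Truncating the series at a finite order leaves a systematic error, one of the dominant uncertainties in such determinations.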
It is clear that a critical assessment of the situation is necessary in order to provide a consistent picture and a reliable estimate of the strong coupling useful for phenomenology. This is exactly what FLAG provides in its review \cite{Aoki:2016frl}. A careful evaluation indicates that for the strong coupling the dominant source of uncertainty comes from discretization errors and the truncation of continuum and lattice perturbation theory. It is interesting to compare the FLAG 16 lattice average $\alpha_{\overline{\text{MS}}}^{(5)}(M_Z) = 0.1182(12)$ with the values from the 2016 edition of the PDG \cite{Olive:2016xmw}. They quote $\alpha_{\overline{\text{MS}}}^{(5)}(M_Z) = 0.1174(16)$ for an average of all nonlattice results, and $\alpha_{\overline{\text{MS}}}^{(5)}(M_Z) = 0.1181(11)$ for the world average including the results from the lattice. It is clear from Fig.~\ref{fig:alphas_PDG} that the lattice determination by now provides the most precise value. It is gratifying to see that after more than three decades the common effort of the lattice community finally starts to pay off. In fact, as compared to its 2014 edition, the PDG is now following a more conservative approach for averaging the lattice results, very much in the spirit of FLAG 16, and even more so in the spirit of Peter. Indeed, it seems that Peter was right 35 years ago, when he advocated following a conservative approach, even though it is less spectacular. The conservative approach followed by FLAG requires a critical assessment of the available lattice results, as summarized e.g.~in Fig.~\ref{fig:alpha_s}. However, such tables and summaries are not a new invention. In fact, back in 1982 Peter had already provided such tables, cf.~Fig.~\ref{fig:old_tables}, summarizing the properties of various lattice calculations and the corresponding results available at that time \cite{Hasenfratz:1982sa}.
Despite being very critical, Peter was nevertheless always very optimistic, as the following quote by Peter in \cite{Hasenfratz:1981ua} nicely portrays: "{\it We are able to obtain non-perturbative numbers in a four-dimensional, relativistic, relevant theory. We are proud of it.}" \section{The mass gaps} One of the most difficult problems in a quantum field theory is the \begin{figure}[!t] \centering \includegraphics[width=0.49\textwidth]{simulation_table.png} \hfill \includegraphics[width=0.45\textwidth]{result_table.png} \caption{Early tables for critically assessing and summarizing lattice calculations compiled by Peter in 1982 \cite{Hasenfratz:1982sa}.\label{fig:old_tables}} \end{figure} determination of the relation between the renormalized couplings from the Lagrangian and the physical masses of the theory, such as for example the nucleon mass in the chiral limit of QCD in units of $\Lambda^{\overline{\text{MS}}}$, \[ m_N = c_{m_N} \cdot \Lambda^{\overline{\text{MS}}} \, . \] The difficulty lies in the fact that the Lagrangian is defined at short distances (UV-scale), while the masses are parameters at large distances (IR-scale). Surprisingly, there is a family of models where this relation can be found {\it exactly}, namely the O($N$) nonlinear sigma models in $d=2$ dimensions. For $N\geq 3$ these integrable models are asymptotically free and contain massive O($N$) isovector multiplets. In 1990 Peter, together with Michele Maggiore and Ferenc Niedermayer, calculated this relation {\it exactly} for $N=3$ and 4 \cite{Hasenfratz:1990zz}: \begin{align*} m &= \frac{8}{e} \cdot \Lambda^{\overline{\text{MS}}} \hspace{2.1cm} N=3 \, ,\\ m &= \sqrt{\frac{32}{\pi e}} \cdot\Lambda^{\overline{\text{MS}}} \hspace{1.5cm} N=4 \, . \end{align*} In the same year, Peter and Ferenc extended the calculation to arbitrary $N \ge 3$ \cite{Hasenfratz:1990ab}: \[ m = \left(\frac{8}{e}\right)^{1/(N-2)} \frac{1}{\Gamma(1+1/(N-2))} \cdot\Lambda^{\overline{\text{MS}}} \, . 
\] It is interesting to note that at the time, there were over 30 nonperturbative determinations which differed wildly from each other. While the calculation is rather involved, it is based on a beautiful idea. It starts from the introduction of a chemical potential $h$ coupled to a Noether charge and the observation that the {\it change} of the free energy is RG invariant, as is the chemical potential $h$ itself. Then, on the one hand, the free energy can be calculated in perturbation theory in the regime $h \gg m$ where the theory becomes asymptotically free, \begin{equation*} f(h) - f(0) = -(N-2)\frac{h^2}{4\pi} \left[\ln \frac{h}{e^{1/2} \Lambda_{\overline{\text{MS}}}} +\frac{1}{N-2}\ln \ln \frac{h}{\Lambda_{\overline{\text{MS}}}} + {\cal O}\left(\frac{\ln \ln(h/\Lambda_{\overline{\text{MS}}})}{\ln(h/\Lambda_{\overline{\text{MS}}})} \right)\right] \, . \end{equation*} On the other hand, since the model is integrable, the free energy can also be calculated by applying the Bethe ansatz, or directly from the $S$-matrix, \[ f(h) - f(0) = -\frac{m}{2\pi} \int \cosh\theta \, \varepsilon(\theta) d\theta \] with $\varepsilon(\theta)$ fulfilling a specific integral equation \cite{Hasenfratz:1990ab}. One can then use a generalized Wiener-Hopf technique to express the integral equation in terms of $\ln(h/m)$, again in the regime $h\gg m$, and read off the mass $m$ in terms of $\Lambda_{\overline{\text{MS}}}$ by comparing the expressions obtained from the two approaches. The same idea can be applied to other quantum field theories. For example, its application also yields the exact mass gap in the Gross-Neveu model \cite{Forgacs:1991rs} and in the $d=2+1$ dimensional antiferromagnetic Heisenberg model at low temperatures \cite{Hasenfratz:1990jw,Hasenfratz:2005fn}. The reason I dwell on this in some detail is that the idea to couple a chemical potential to a conserved charge and to calculate the corresponding change in the free energy is very general.
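As a quick numerical cross-check (ours, not from the original papers), the general-$N$ mass-gap formula quoted above indeed reduces to the $N=3$ and $N=4$ results:

```python
import math

def mass_gap_over_Lambda(N):
    """m / Lambda_MSbar for the O(N) sigma model, N >= 3:
    (8/e)^(1/(N-2)) / Gamma(1 + 1/(N-2))."""
    p = 1.0 / (N - 2)
    return (8.0 / math.e)**p / math.gamma(1.0 + p)

if __name__ == "__main__":
    print(mass_gap_over_Lambda(3), 8.0 / math.e)                          # N = 3: 8/e
    print(mass_gap_over_Lambda(4), math.sqrt(32.0 / (math.pi * math.e)))  # N = 4: sqrt(32/(pi e))
```

For $N=4$ one uses $\Gamma(3/2) = \sqrt{\pi}/2$, so that $(8/e)^{1/2}/\Gamma(3/2) = \sqrt{32/(\pi e)}$.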
In fact, the idea is relevant and in use even today. One interesting, very recent application is for matching chiral Lagrangians for QCD with different regularizations \cite{Niedermayer:2016yll,Niedermayer:2016ilf}. The calculation is related to the QCD rotator in the $\delta$-regime where $m_\pi L_s \ll 1$ and $F_\pi L_s \gg 1$~\cite{Leutwyler:1987ak} and provides a promising new way to determine the low-energy constants of QCD \cite{Hasenfratz:2009mp} by introducing an infrared cutoff through a finite spatial box size $L_s$ and then studying the finite size scaling of the spectrum in the chiral limit. More precisely, the chiral Lagrangian for massless 2-flavour QCD has a $\text{SU}(2) \times \text{SU}(2) \simeq \text{O}(4)$ symmetry and the general O($N$) spectrum is given by a quantum mechanical rotator \begin{equation*} E(l) = \frac{l(l+N-2)}{2\Theta} \, , \qquad l =0,1,2,\ldots \end{equation*} where $\Theta = F^2L_s^3$ is the moment of inertia of the rotator~\cite{Leutwyler:1987ak}. The next-to-leading (NLO) term of the expansion in $1/(F^2L_s^2)$ has been calculated by Peter and Ferenc some time ago \cite{Hasenfratz:1993vf}, while the NNLO terms were obtained more recently in the dimensional regularization (DR) scheme \cite{Hasenfratz:2009mp} and on the lattice \cite{Niedermayer:2010mx}. Finally, the very recent, heroic calculation by Niedermayer and Weisz \cite{Niedermayer:2016yll,Niedermayer:2016ilf} makes use of the same idea and thereby connects the couplings in the two regularizations of the effective theory, i.e.~it converts the expressions for physical quantities obtained in the lattice scheme to the corresponding ones in the DR scheme. In particular, it relates the finite-volume mass gap on the lattice to the DR scheme. \section{The renormalization group and FP actions} Peter had a very deep appreciation and understanding of the Wilson renormalization group (RG) description of quantum field theories.
In this context, he always emphasized the viewpoint that lattice gauge theory is just a statistical system, albeit a rather unusual one due to the local gauge invariance, and that the critical limit corresponds to the continuum limit of the corresponding quantum field theory, as illustrated in Fig.~\ref{fig:critical limit} from one of Peter's lecture notes \cite{Hasenfratz:1983vk}. \begin{figure}[htb] \centering \includegraphics[angle=-0.75,width=0.5\textwidth]{continuum_limit.png} \caption{The continuum limit as the critical limit of a statistical lattice system where the lattice spacing becomes small compared to the characteristic physical distances. Figure from \cite{Hasenfratz:1983vk}. \label{fig:critical limit}} \end{figure} Since the lattice provides a fully nonperturbative description of the phase transition, the continuum physics is recovered in the lattice system at long distances. Close to the phase transition one can integrate out the variables describing the short distance lattice physics and obtain an effective action for the relevant long distance variables in terms of the effective couplings $\left\{K_\alpha \right\}$, \[ \left\{K_\alpha^{(1)} \right\} \quad \stackrel{\text{RG}}{\longrightarrow} \quad \left\{K_\alpha^{(2)} \right\} \quad \stackrel{\text{RG}}{\longrightarrow} \quad \cdots \quad \stackrel{\text{RG}}{\longrightarrow} \quad \left\{K_\alpha^{(n)} \right\} \quad \stackrel{\text{RG}}{\longrightarrow} \quad \cdots \, . \] The sequence of RG transformations might have a fixed point (FP), \[ \left\{K_\alpha^{*} \right\} \quad \stackrel{\text{RG}}{\longrightarrow} \quad \left\{K_\alpha^{*} \right\} \, . \] In particular, one is interested in a FP where the correlation length of the system is $\xi=\infty$. For gauge theories, \begin{figure}[htb] \centering \includegraphics[width=0.4\textwidth]{RG_flow_AF.png} \caption{Renormalized trajectory (RT) in the parameter space of gauge couplings. 
It comes out of the critical hyperplane $g = 0$ and attracts all the flow lines starting in the neighbourhood of the fixed point. Figure from \cite{Hasenfratz:1983vk}. \label{fig:RG_flow_AF}} \end{figure} the RG transformations are complicated due to the requirement of gauge invariance, but once this is fulfilled, the RG transformations provide the basic starting point for expecting {\it renormalizability} and {\it universality} along the renormalized trajectory (RT) of the lattice system, cf.~Fig.~\ref{fig:RG_flow_AF} taken from \cite{Hasenfratz:1983vk}. The fact that the lattice provides a fully nonperturbative description of the RG flow of the couplings and the corresponding FP structure has again become important today for investigations of quantum field theories beyond the Standard Model (BSM), as also discussed extensively at this lattice conference. It should also serve as a warning for such investigations. Some of the BSM theories are expected to possess a conformal FP; however, an IR FP is in general not perturbative and perturbative intuition could therefore well be misleading. I think it is important to keep this in mind when interpreting some of the results from the numerical BSM calculations discussed at this conference, cf.~the plenary talk \cite{Pica:2017gcb}. Back in the early eighties, the application of RG ideas to lattice gauge theory inspired the investigation of perturbative and nonperturbative improvement of lattice actions by making use of approximations to the FP, as e.g.~illustrated in the left plot of Fig.~\ref{fig:RG_flow_FP} taken from Peter's lecture notes from 1983 \cite{Hasenfratz:1983vk}. While the concept of ``(quantum) perfect'' actions was already known then, it would take ten years to turn it into something more specific.
In 1993 Peter and Ferenc Niedermayer realized that for asymptotically free theories the path integral defining the FP for RG transformations reduces to {\it classical saddle point equations}, \[ S_G^\text{FP}[V] = \min_{\{U\}}\left[S_G^\text{FP}[U] + T_G[V,U]\right] \, \] where $S_G^\text{FP}$ is the FP gauge action, $T_G$ a blocking kernel defining an RG transformation, while $U, V$ are the gauge fields on the fine and coarse lattices, respectively, related by the RG transformation~\cite{Hasenfratz:1993sp}. The FP Dirac operator is similarly defined by \[ D^\text{FP}[V]^{-1} = R[V] + \omega[U]\cdot D^\text{FP}[U]^{-1} \cdot\omega[U] \] where $\omega$ defines a blocking kernel for the fermion fields. It can then be shown that the action $\beta S_G^\text{FP} + \overline{\psi} D^\text{FP} \psi$ is {\it classically perfect}, \begin{figure}[bth] \centering \begin{minipage}{0.4\textwidth} \includegraphics[width=\textwidth]{improved_action_RT.png} \end{minipage} \hfill \begin{minipage}{0.5\textwidth} \includegraphics[width=\textwidth]{RG_flow_FP.png} \end{minipage} \caption{Renormalized trajectory in the parameter space of (gauge) couplings and the improved actions based on an approximation of the FP (left plot) or the exact FP (right plot). The left plot is from 1983 \cite{Hasenfratz:1983vk}, the right plot from 1993 \cite{Hasenfratz:1993sp}. \label{fig:RG_flow_FP}} \end{figure} i.e.~it has no lattice artefacts on field configurations satisfying the equations of motion. For each gauge field configuration on the lattice, the minimizing gauge field $U(x,V)$ defines a FP field in the continuum which induces some remarkable properties on the lattice gauge fields.
Under certain conditions, all symmetries of the continuum are well defined on the lattice \cite{Hasenfratz:2006kv} and the FP field allows for a representation of the corresponding infinitesimal transformations on the lattice according to \[ V_n \quad \stackrel{\tiny\text{minimize}}{\longrightarrow} \quad U(x,V) \quad \stackrel{\tiny\text{transform}}{\longrightarrow} \quad U^\varepsilon(x^\varepsilon,V) \quad \stackrel{\tiny\text{block}}{\longrightarrow} \quad V^\varepsilon_n \, . \] Similar considerations can be made for the fermion fields \cite{Hasenfratz:2006kv}. In particular, the procedure in principle also provides a possibility to define supersymmetric algebras on the lattice. The fact that the FP equations define a scheme to match coarse and fine lattice configurations could also be useful in recent attempts to delay the onset of topological critical slowing down in today's lattice QCD simulations by constructing multi-scale Monte Carlo update algorithms. One effort in this direction \cite{Endres:2015yca,Detmold:2016rnh}, in which the FP action approach could be very useful, is in fact discussed in a plenary talk at this conference \cite{Endres:2016rzj}. Then, in 1997 Peter made a truly groundbreaking observation \cite{Hasenfratz:1997ft}. While travelling to a summer school Peter looked through a pile of old preprints which he picked up while tidying up \begin{wrapfigure}{r}{0.7\textwidth} \includegraphics[width=0.7\textwidth]{peter_GW.png} \end{wrapfigure} his office. A paper by Ginsparg and Wilson \cite{Ginsparg:1981bj} grabbed his attention and he realized that the FP Dirac operator $D^\text{FP}$ fulfills the now-famous Ginsparg-Wilson relation \[ D \gamma_5 + \gamma_5 D = D \gamma_5 D \, . \] The relation is derived from RG transformations applied to free fermions.
Any solution of the relation avoids the Nielsen-Ninomiya no-go theorem, implies the correct triangle anomaly and the validity of all the soft-pion theorems on the lattice \cite{Hasenfratz:1997ft}. Peter's crucial observation then was that the FP Dirac operator $D^\text{FP}$ constitutes a solution for the interacting theory. As a consequence of the Ginsparg-Wilson relation there is no tuning, no mixing and no current renormalization necessary for the FP Dirac operator on the lattice~\cite{Hasenfratz:1998jp}. The observation set off an avalanche of developments. This is best visualized by looking at the citation history of the Ginsparg-Wilson paper \cite{Ginsparg:1981bj}, cf.~Fig.~\ref{fig:citationHistory_GW}. It is fascinating to see how the interest in the paper exploded after Peter's rediscovery in 1997 -- with 985 citations it is by now the second most cited paper in the hep-lat archive \cite{INSPIRE:topcite2016}. It is impossible to describe in detail the revolution which followed, so let me just mention a few key developments on the theoretical side: the exact index theorem in QCD on the lattice \cite{Hasenfratz:1998ri}; the overlap operator as another solution to the Ginsparg-Wilson relation \cite{Neuberger:1998wv}; the exact chiral symmetry on the lattice \cite{Luscher:1998pqa}; Abelian chiral gauge theories on the lattice \cite{Luscher:1998du, Neuberger:1998xn}; the axial anomaly and topology \cite{Kikukawa:1998pd,Luscher:1998kn,Chiu:1998xf,Adams:1998eg}; the chiral Jacobian on the lattice \cite{Fujikawa:1998if,Suzuki:1998yz}; lattice supersymmetry \cite{So:1998ya}; and so on and on.
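The Ginsparg-Wilson relation and the overlap-type solutions mentioned above can be checked in a toy numerical setting. The sketch below is not the actual FP or overlap operator; a random hermitian matrix merely stands in for the hermitian kernel, and $\gamma_5$ is a simple diagonal involution. It verifies that any operator of the form $D = 1 + \gamma_5\,\mathrm{sign}(H)$ satisfies $D\gamma_5 + \gamma_5 D = D\gamma_5 D$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# gamma_5: any hermitian involution; a simple diagonal choice suffices here
g5 = np.diag([1.0, 1.0, -1.0, -1.0])
# random hermitian matrix H as a stand-in for the hermitian kernel
A = rng.normal(size=(n, n))
H = (A + A.T) / 2
# matrix sign function via the eigendecomposition of H
w, U = np.linalg.eigh(H)
eps = U @ np.diag(np.sign(w)) @ U.T   # eps is hermitian with eps^2 = 1
D = np.eye(n) + g5 @ eps              # overlap-type operator (lattice spacing a = 1)
# check the Ginsparg-Wilson relation: D g5 + g5 D = D g5 D
lhs = D @ g5 + g5 @ D
rhs = D @ g5 @ D
assert np.allclose(lhs, rhs)
```

The check works because $\gamma_5\,\epsilon\,\gamma_5\,\gamma_5\,\epsilon = \gamma_5\,\epsilon^2 = \gamma_5$, so the cross terms on both sides coincide; this is the algebraic structure behind the sign function mentioned later in connection with the Riccati-equation viewpoint.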
\begin{figure}[tbh] \centering \includegraphics[width=0.8\textwidth]{citationHistory_GW} \caption{Citation history of the Ginsparg-Wilson paper \cite{Ginsparg:1981bj}.\label{fig:citationHistory_GW}} \end{figure} Even today, the implications for phenomenological and theoretical applications can hardly be overestimated, and as evidence I just refer to the various parallel sessions such as ``Weak decays and matrix elements'', ``Chiral symmetry'', ``Theoretical developments'', with several talks related to or based on chiral symmetry on the lattice in some way or another. In order to close this section, let me make a couple of remarks which I find particularly intriguing in the context of chiral symmetry regularized on the lattice. Firstly, the possibility to realize exactly massless fermions on the lattice constitutes a true solution of a hierarchy problem, a fact which is often unappreciated outside the lattice community. Secondly, FP fermions on the one hand and overlap/domain wall (DW) fermions \cite{Kaplan:1992bt,Narayanan:1992wx,Shamir:1993zy,Neuberger:1997fp} on the other hand derive from two completely different approaches, yet they both fulfill the Ginsparg-Wilson relation. Is there a connection between the two formulations, and if yes, how are they related? While the FP operator is motivated from RG considerations and therefore has a very specific physical meaning, the overlap/DW operator has a less obvious physical interpretation. Rather, the overlap/DW operator can more easily be understood from an algebraic viewpoint. More precisely, the Ginsparg-Wilson relation is a specific form of an {\it algebraic Riccati equation}, the solution of which naturally involves the sign function as it also appears in the overlap/DW operator. Thirdly, the chiral transformation on the lattice makes use of the gauge field dependent chiral projectors $\hat P_\pm = \frac{1}{2}(1\pm\hat\gamma_5)$ with $\hat \gamma_5 = \gamma_5(1-D)$.
They are particularly important in the context of defining chiral gauge theories on the lattice, a problem which remains of high interest and is in fact also discussed in a plenary talk at this lattice conference, cf.~\cite{Grabowska:2016bis} and references therein to the corresponding lattice talks. On the one hand, the chiral projectors are responsible for the necessary asymmetry between fermions and anti-fermions, on the other hand they also break $CP$ symmetry \cite{Hasenfratz:2007dp}. The $CP$ breaking is an unwanted feature, the role of which is still not well understood. \section{Closing remarks} After my recollection of just three out of many of Peter's scientific contributions, it is important to realize that Peter's legacy is much more than his outstanding scientific achievements. It goes far beyond what he has taught us about quantum field theories on and off the lattice. As researchers we are all curious and strive for the unknown, but Peter taught us what it means to go one step further: do look closely and carefully, be {\it very} critical -- and do question even the presumably well-known and the commonly accepted. This will inevitably lead to surprising results, new insights and creative ideas. Of course this is all said easily and quickly -- it distinguishes Peter that he was indeed able to follow this path and to operate in such a perfect way. The careful and conservative path is usually not very spectacular, but it is certainly much more enduring and lasting, and, as Peter pointed out, worth following. It will sustain and support the lattice community much better in the long term by gaining and keeping a high regard and esteem from outside our community. One part of Peter's legacy obviously is the fact that we all meet once a year to discuss our progress, exchange the latest insights and results, and get excited about new developments. I am very happy to see that this legacy is carried on.
Looking into the audience I see many young faces among the very many participants, so I am very optimistic that this legacy will continue for a very long time -- I am certain that this would make Peter very happy. I would like to thank Anna Hasenfratz, Kieran Holland, Ferenc Niedermayer and Uwe-Jens Wiese for helpful discussions during the preparation of this talk. \bibliographystyle{JHEP}
\section{Introduction} An Abelian Chern-Simons term is induced in 2+1 dimensional parity violating electron systems, and it is the lowest dimensional gauge invariant object\cite{2+1C-S}. Its role has been studied in anyon superconductivity\cite{anyon} and in the quantum Hall effect\cite{QHE}. In the quantum Hall system an effect of the Chern-Simons term has been observed. One can expect that there may be other physical systems in which the Chern-Simons term plays some role. It is known that the Chern-Simons term has a non-trivial topological structure; in gauge invariant systems without spontaneous symmetry breaking, the coefficient of the induced term becomes a topological invariant and is not affected by higher order corrections. In fact, these assumptions are realized in the quantum Hall system, where the Hall conductance becomes a topological invariant and is quantized exactly\cite{QHE}\cite{QHE2}. In this paper, we consider a parity violating system in which gauge invariance is spontaneously broken. In such a system these assumptions are not realized, so it is interesting and important to find out the structure of the Chern-Simons term and its dynamical effects. In a Lorentz invariant system, the Chern-Simons term can only exist in 2+1 dimensions. In non-relativistic systems, however, the Chern-Simons term can also exist in 3+1 dimensions, in a form that is embedded in a 2+1 dimensional subspace of the 3+1 dimensional spacetime. ~\\ As a system in which these situations are simultaneously realized, we study rotating superfluid $^{3}$He-A \cite{and} \cite{He}. Although the $^{3}$He atom is neutral, we can introduce an external ``U(1) gauge field'' in order to describe rotation and other motion of the $^{3}$He-A system, and with this field we can also introduce gauge symmetry approximately. This symmetry is spontaneously broken, however, by Cooper pair condensation. These pairs have orbital angular momentum 1 along some direction in $^{3}$He-A, so parity is also violated spontaneously. Furthermore, this system is non-relativistic, of course.
It is the purpose of the present paper to study the Chern-Simons term in this system and its physical implications. Our paper is organized in the following manner. In section 2, we derive an action for rotating $^{3}$He-A. In section 3, we discuss the Chern-Simons term in 2+1 dimensional rotating $^{3}$He-A. Usually, a Chern-Simons term is induced in an effective action for an external gauge field from a Fermion 1-loop diagram. In the present case, however, the Chern-Simons term is induced after a lowest-order perturbative calculation involving the Fermion and the Goldstone mode, and its coefficient depends on an infrared cutoff of the Goldstone mode. In section 4, we consider the 3+1 dimensional case. Owing to the induced Chern-Simons term, there is a kind of ``Hall current'' in superfluid $^{3}$He-A. Furthermore, a problem concerning the orbital angular momentum of the $^{3}$He-A system, the so-called ``angular momentum paradox'', is resolved by the Chern-Simons term. In section 5, we show that the Hall current occurs at the edge of cylindrical superfluid $^{3}$He-A and that this current gives an angular momentum. A summary is given in section 6. \section{The action for superfluid $^{3}$He-A in a rotating system} We derive the effective action of 2+1 and 3+1 dimensional superfluid $^{3}$He-A in the rotating system in this section. Although the $^{3}$He atom is neutral, as we shall see below, we are able to introduce a ``U(1) gauge field $A_{\mu}$'' in the $^{3}$He action in the rotating system and to study the system's properties with it. Let $\psi_{\alpha}(x)$ stand for the $^{3}$He atom with spin $\alpha$. The original action for homogeneous $^{3}$He in the rest system is ($D$ is the spacetime dimension of the system)
\begin{eqnarray} S_{{\rm{org.}}}=&\int d^{D}x \psi_{\alpha}^{\dagger}(x) \{i \partial_{0}-(\frac{\vec{p}^{2}}{2m} - \epsilon_{{\rm F}} )\}\psi_{\alpha}(x) &\nonumber\\ &-\frac{1}{2}\int d^{D}x d^{D-1}x^{\prime} \psi_{\alpha}^{\dagger}(x)\psi_{\beta}^{\dagger}(x^{\prime}) V_{\alpha\beta;\gamma\delta}(\vec{x}-\vec{x^{\prime}}) \psi_{\gamma} (x^{\prime}) {\psi}_{\delta}(x), &\label{a1} \end{eqnarray} where $\epsilon_{{\rm F}}$ is the Fermi energy and $V$ is the instantaneous interaction between neutral $^{3}$He atoms. The transformation from the rest frame coordinate $\{\vec{x}_{{\rm s}}\}$ to the moving frame coordinate $\{\vec{x}(t)\}$, fixed to the bulk $^{3}$He which is rotating around the 3rd axis (z-axis) with angular velocity $\Omega$, is \begin{equation} \vec{x}_{{\rm s}} \rightarrow \vec{x}(t)=R_{D-1}(t) \vec{x}_{{\rm s}} ; R_{2}(t)=\left( \begin{array}{cc} {\rm cos} \Omega t & - {\rm sin} \Omega t \\ {\rm sin} \Omega t & {\rm cos} \Omega t \end{array} \right), R_{3}(t)=\left( \begin{array}{ccc} {\rm cos} \Omega t & - {\rm sin} \Omega t & 0 \\ {\rm sin} \Omega t & {\rm cos} \Omega t & 0 \\ 0 & 0 & 1 \end{array} \right), \end{equation} and the time derivative of $\psi$ is transformed as \begin{equation} i \partial_{0} \psi \rightarrow i \partial_{0} \psi + i \frac{d \vec{x}(t)}{d t} \cdot \vec{\nabla} \psi ; \frac{d \vec{x}(t)}{d t}=\vec{\Omega} \times \vec{x}.
\end{equation} We define the external ``vector potential'' as $\vec{A}=m \vec{\Omega} \times \vec{x}$, and the external ``scalar potential $A_{0}$'' as the difference of the chemical potential from its homogeneous value $\epsilon_{{\rm F}}$; then the rotating $^{3}$He action is written as \begin{eqnarray} S_{{\rm rot.}}[\psi , \psi^{\dagger} , A_{\mu}] =&\int d^{D}x \psi^{\dagger}(x)\{i \partial_{0}+A_{0}-(\frac{(\vec{p}+\vec{A})^{2}}{2m} - \epsilon_{{\rm F}} )+\frac{\vec{A}^{2}}{2m}\}\psi(x) &\nonumber\\ &-\frac{1}{2}\int d^{D}xd^{D-1}x^{\prime}{\psi}^{\dagger}(x){\psi}^{\dagger}(x^{\prime})V(\vec{x}-\vec{x^{\prime}}) {\psi} (x^{\prime}) {\psi}(x). &\label{a2} \end{eqnarray} Except for the $\psi^{\dagger} \frac{\vec{A}^{2}}{2m} \psi$ term, the action is invariant under a gauge transformation $A_{\mu} \rightarrow A_{\mu} + \partial_{\mu} \xi, \psi \rightarrow e^{i \xi} \psi$. The generating functional is defined by using the path integral formalism as \begin{equation} Z[A_{\mu}]= \int {\cal{D}}\psi^{\dagger} {\cal{D}}\psi e^{i S_{{\rm rot.}}[\psi ,\psi^{\dagger},A_{\mu}]}.\label{a3} \end{equation} We introduce a pair field $\Psi$ which specifies that $^{3}$He is in the superfluid state and rewrite Eq.(\ref{a3}). This can be done by introducing an auxiliary field and by transforming field variables in the same way as in the Stratonovich-Hubbard transformation. Let us define a path integral of Gaussian form, $N$, as \begin{equation} N=\int {\cal{D}}\Psi^{\dagger} {\cal{D}}\Psi e^{i \Delta S},\label{a4} \end{equation} \begin{eqnarray} &\Delta S =\frac{1}{2} \int d^{D}x d^{D-1}x^{\prime} (\Psi_{\beta\alpha}^{\dagger}(x^{\prime},x)-\psi_{\alpha}^{\dagger}(x)\psi_{\beta}^{\dagger}(x^{\prime}))V_{\alpha\beta;\gamma\delta}(\vec{x}-\vec{x^{\prime}})&\nonumber\\ &\times (\Psi_{\gamma\delta}(x^{\prime},x)-\psi_{\gamma}(x^{\prime})\psi_{\delta}(x)), &\nonumber \end{eqnarray} and insert Eq.(\ref{a4}) into Eq.(\ref{a3}).
$Z[A_{\mu}]$ can be written as \begin{equation} Z[A_{\mu}]=\frac{1}{N}\int {\cal{D}}\psi^{\dagger}{\cal{D}}\psi{\cal{D}}\Psi^{\dagger} {\cal{D}}\Psi e^{i S},\label{a5} \end{equation} \begin{eqnarray} S = S_{{\rm rot.}}+\Delta S = &\int d^{D}x \psi^{\dagger}(x) \{i \partial_{0}+A_{0}-(\frac{(\vec{p}+\vec{A})^{2}}{2m} - \epsilon_{{\rm F}} )\}\psi(x)&\nonumber\\ &-\frac{1}{2}\int d^{D}x d^{D-1}x^{\prime} \psi^{\dagger}(x)\psi^{\dagger}(x^{\prime}) V(\vec{x}-\vec{x^{\prime}}) \Psi(x^{\prime},x)&\nonumber\\ &-\frac{1}{2}\int d^{D}x d^{D-1}x^{\prime} \Psi^{\dagger}(x^{\prime},x) V(\vec{x}-\vec{x^{\prime}}) \psi(x^{\prime})\psi(x)&\nonumber\\ &+\frac{1}{2}\int d^{D}x d^{D-1}x^{\prime} \Psi^{\dagger}(x^{\prime},x) V(\vec{x}-\vec{x^{\prime}}) \Psi(x^{\prime},x) .&\nonumber \end{eqnarray} We regard $\Psi^{0}(\vec{x},\vec{x}^{\prime})$ as the stationary value of $\Psi(x,x^{\prime})$. This value is determined by minimizing an effective potential, which is derived by integrating out the Fermion field ($\psi,\psi^{\dagger}$) while $A_{\mu}$ and the dynamical fluctuation of $\Psi$ are kept constant. $\Psi^{0}$ satisfies \begin{equation} \Psi_{\alpha\beta}^{0}(\vec{x},\vec{x}^{\prime}) =<\psi_{\alpha}(\vec{x})\psi_{\beta}(\vec{x}^{\prime})>. \end{equation} We can easily see that the U(1) symmetry is spontaneously broken whenever $\Psi^{0}$ has a nonzero value. The mean field solutions have been given in \cite{and}. In integrating out $\Psi$ and $\Psi^{\dagger}$ in Eq.(\ref{a5}), we only consider the most important variable in the low energy region, i.e., the phase degrees of freedom around $\Psi^{0}$ (the Goldstone mode), and neglect fluctuations of the other degrees of freedom.
So, we write $\Psi$ as a product of a $\Psi^{0}$ part, an SU(2) part, and a U(1) part, \begin{equation} \Psi_{\alpha\beta}(x,x^{\prime}) =e^{-i \theta(x) - i \theta(x^{\prime})} U_{\alpha\alpha^{\prime}}(x)U_{\beta\beta^{\prime}}(x^{\prime}) \Psi_{\alpha^{\prime}\beta^{\prime}}^{0}(\vec{x},\vec{x}^{\prime}), \end{equation} $$ U(x)=e^{-i \phi^{a}(x)\cdot\sigma^{a}}, $$ where $\sigma^{a}$ ($a$=1,2,3) are the Pauli matrices. We transform the Fermion fields in the path integral (\ref{a5}) as \begin{equation} \psi \rightarrow e^{- i \theta} U \psi , \psi^{\dagger} \rightarrow \psi^{\dagger} U^{\dagger} e^{i \theta}. \end{equation} This transformation does not change the path integral measure (i.e.\ ${\cal{D}}\psi^{\prime\dagger} {\cal{D}}\psi^{\prime} ={\cal{D}}\psi^{\dagger} {\cal{D}}\psi$), so the action $S$ becomes \begin{eqnarray} S = &\int d^{D}x \psi^{\dagger}(x) \{ i \partial_{0} + A_{0}+\partial_{0}\theta+iU^{\dagger}\partial_{0}U -(\frac{(\vec{p} + \vec{A} + \vec{\partial}\theta + i U^{\dagger}\vec{\partial}U)^{2}}{2m} - \epsilon_{{\rm F}} ) +\frac{\vec{A}^2}{2m}\}\psi(x)&\nonumber\\ &-\frac{1}{2}\int d^{D}xd^{D-1}x^{\prime} ({\psi}^{\dagger}(x)\Delta (\vec{x},\vec{x^{\prime}}) {\psi}^{\dagger}(x^{\prime}) +{\psi}(x)\Delta ^{\dagger}(\vec{x},\vec{x^{\prime}}) {\psi} (x^{\prime}))&\label{a6}\\ &+\frac{1}{2}\int d^{D}x d^{D-1}x^{\prime} \Psi^{0 \dagger}(\vec{x}^{\prime},\vec{x}) U^{\dagger}(x^{\prime}) U^{\dagger}(x) V(\vec{x}-\vec{x^{\prime}})U(x^{\prime})U(x) \Psi^{0}(\vec{x}^{\prime},\vec{x}) ,&\nonumber \end{eqnarray} where $\Delta, \Delta^{\dagger}$ are the gap functions, and they are determined as \begin{equation} \Delta_{\alpha\beta}(\vec{x}-\vec{x}^{\prime}) =V_{\alpha\beta;\gamma\delta}(\vec{x}-\vec{x}^{\prime}) \Psi_{\gamma\delta}^{0}(\vec{x}^{\prime},\vec{x}) \end{equation} $$ \Delta_{\delta\gamma}^{\dagger}(\vec{x}^{\prime}-\vec{x}) =\Psi_{\beta\alpha}^{0\dagger}(\vec{x}^{\prime},\vec{x}) V_{\alpha\beta;\gamma\delta}(\vec{x}-\vec{x}^{\prime}), $$ and in the action
(\ref{a6}), the contraction of spin indices can be taken as $\psi_{\alpha}^{\dagger}\Delta_{\alpha\beta}\psi_{\beta}^{\dagger}, \psi_{\alpha}\Delta_{\alpha\beta}^{\dagger}\psi_{\beta}$, if the gap has a parity odd structure. In superfluid $^{3}$He-A, the gap has angular momentum 1 along some direction. Hence the gap is parity odd, and parity symmetry is spontaneously broken in this system. We choose this angular momentum direction along the 3rd axis (z-axis) throughout our paper. In the weak coupling approximation, the gap has the form in momentum space \cite{and} \begin{equation} \Delta_{\alpha\beta}(\vec{k})= \left\{ \begin{array}{l} i \Delta \frac{\sqrt{k_{1}^{2}+k_{2}^{2}}}{|\vec{k}|} e^{i \varphi} \delta_{\alpha\beta}; (- \omega_{{\rm D}} < (\frac{\vec{k}^{2}}{2m}-\epsilon_{{\rm F}}) < \omega_{{\rm D}})\\ 0;({\rm otherwise}), \end{array} \right. \label{gap}\end{equation} $$ {\rm tan}\varphi = k_{2}/k_{1}, $$ where $\omega_{{\rm D}}$ is the Debye frequency, which satisfies the relation $|\Delta|\ll\omega_{{\rm D}}\ll\mu$. Finally, the generating functional is written as \begin{equation} Z[A_{\mu}]=\frac{1}{N}\int {\cal{D}}\theta{\cal{D}}\phi {\cal{D}}\psi^{\dagger}{\cal{D}}\psi e^{i S}.\label{a10} \end{equation} \section{Chern-Simons term in 2+1 dimensional superfluid $^{3}$He-A} In this section, we calculate the right hand side of Eq.(\ref{a10}) in the 2+1 dimensional case, and show how the Chern-Simons term is induced. First, we integrate out the Fermion fields in Eq.(\ref{a10}) and obtain $Z^{({\rm f})}$; \begin{equation} Z^{({\rm f})}[A_{\mu},\partial_{\mu} \theta,U^{\dagger} \partial_{\mu} U] = \int {\cal{D}}\psi^{\dagger} {\cal{D}}\psi e^{i S} = e^{iS_{{\rm eff.}}^{({\rm f})} [A_{\mu},\partial_{\mu} \theta,U^{\dagger} \partial_{\mu} U]} .\label{a0}\end{equation} To carry this out, we divide the action $S$ into two parts, $S_{0}$ and $S_{{\rm int.}}$, as \begin{equation} S=S_{0} + S_{{\rm int.}}.
\end{equation} $S_{0}$ is written as \begin{equation} S_{0}=\int d^{3}x d^{3}y \frac{1}{2} (\psi^{\dagger}(x) \psi(x)) \left( \begin{array}{cc} iG(x-y) & F(x-y) \\ -F^{\dagger}(x-y) & -iG(y-x) \end{array} \right)^{-1} \left( \begin{array}{c} \psi(y) \\ \psi^{\dagger}(y) \end{array} \right),\label{s0} \end{equation} where $$ \left\{ \begin{array}{c} iG(x-y)=<T\psi(x)\psi^{\dagger}(y)> \\ F(x-y)=<T\psi(x)\psi(y)> \\ F^{\dagger}(x-y)=<T\psi^{\dagger}(y)\psi^{\dagger}(x)>, \end{array} \right. $$ are the Fermion propagators in the superfluid state, and their forms in momentum space, obtained by solving the Gor'kov equation, are \begin{equation} \left\{ \begin{array}{c} iG(k)=\frac{k_{0}+(\frac{\vec{k}^{2}}{2m}-\epsilon_{{\rm F}})} {k_{0}^{2}-E^{2}(\vec{k})+i \epsilon} \\ F(k)=\frac{i \Delta(\vec{k})}{k_{0}^{2}-E^{2}(\vec{k})+i \epsilon}\\ F^{\dagger}(k)=\frac{-i \Delta^{\dagger}(\vec{k})} {k_{0}^{2}-E^{2}(\vec{k})+i \epsilon}, \end{array} \right.\label{fpro} \end{equation} $E(\vec{k})$ is the quasiparticle energy and has the form $E(\vec{k})= \sqrt{(\frac{\vec{k}^{2}}{2m}-\epsilon_{{\rm F}})^{2}+|\Delta(\vec{k})|^{2}}$, $\Delta(\vec{k})$ has the form (\ref{gap}) in the superfluid A phase, and $|\Delta(\vec{k})|^{2}=|\Delta|^{2}={\rm const.}$ in the 2+1 dimensional case.
$S_{{\rm int.}}$ is written as \begin{eqnarray} &S_{{\rm int.}}=\int d^{3}x [j_{0}(A_{0}+\partial_{0}\theta) -\vec{j}\cdot(\vec{A}+\vec{\partial}\theta) +\psi^{\dagger}\frac{(\vec{A}+\vec{\partial}\theta)^{2}}{2m}\psi +\psi^{\dagger}\frac{\vec{A}^{2}}{2m}\psi &\nonumber\\ &+J_{0}^{a}(i U^{\dagger}\partial_{0}U)^{a} -\vec{J}^{a}\cdot(U^{\dagger}\vec{\partial}U)^{a} +\psi^{\dagger}\frac{(i U^{\dagger}\vec{\partial}U)^{2}}{2m}\psi],&\label{sint} \end{eqnarray} where $(j_{0},\vec{j})$ and $(J_{0}^{a},\vec{J}^{a})$ are the U(1) and SU(2) currents, which have the forms \begin{eqnarray} &j_{0}=\psi^{\dagger}\psi,~~ \vec{j}=-\frac{i}{2m}[(\vec{\partial}\psi^{\dagger})\psi -\psi^{\dagger}(\vec{\partial}\psi)] +\psi^{\dagger}\frac{(\vec{A}+\vec{\partial}\theta)}{m}\psi&\nonumber\\ &J_{0}^{a}=\psi^{\dagger} \sigma^{a} \psi,~~ \vec{J}^{a}=-\frac{i}{2m}[(\vec{\partial}\psi^{\dagger})\sigma^{a}\psi -\psi^{\dagger}\sigma^{a}(\vec{\partial}\psi)] +\psi^{\dagger}\frac{(i U^{\dagger}\vec{\partial}U)^{a}}{m}\psi&. \label{current} \end{eqnarray} So, $S_{{\rm eff.}}^{({\rm f})}$ should have the form \begin{eqnarray} &S_{{\rm eff.}}^{({\rm f})}= \int d^{3}x d^{3}y [\frac{1}{2}(A_{\mu}+\partial_{\mu}\theta)_{x}\pi^{\mu\nu}(x-y) (A_{\nu}+\partial_{\nu}\theta)_{y} - \frac{1}{2}\frac{\rho}{m}A_{i}(x)A^{i}(x)&\nonumber\\ &+\frac{1}{2}\partial_{\mu}\phi^{a}(x)\pi^{\mu\nu}_{ab}(x-y)\partial_{\nu}\phi^{b}(y)]+\cdot\cdot\cdot.&\label{a50} \end{eqnarray} We consider only the lower dimensional terms in $A_{\mu}$ and $\partial_{\mu} \theta$, which come from the current 2-point functions in the lowest order of the perturbative calculation. It is possible to see that the effects of spontaneous symmetry breaking are included in these terms. The scalar-gauge interaction through the current 2-point functions, $A_{\mu}\pi^{\mu\nu}\partial_{\nu}\theta$, does not appear if there exists U(1) gauge symmetry (because of the Ward identity $\partial_{\mu} \pi^{\mu\nu}=0$). But now the gauge symmetry is spontaneously broken, so this interaction can exist.
This interaction should affect the structure of the Chern-Simons term in the spontaneously broken case. As far as the current 2-point functions are concerned, the SU(2) fields $\phi^{a}$ do not couple to $A_{\mu}$ and $\partial_{\mu}\theta$ because of spin conservation. So, we neglect the field $\phi^{a}$ from now on. In this action, the $\frac{1}{2}\frac{\rho}{m}A_{i}(x)A^{i}(x)$ term explicitly violates the U(1) gauge invariance which remains after spontaneous symmetry breaking; $A_{\mu} \rightarrow A_{\mu} + \partial_{\mu}\xi $, $\theta \rightarrow \theta - \xi $. Clearly, this violation comes from the existence of the $\psi^{\dagger} \frac{\vec{A}^{2}}{2m} \psi$ term. Now, one can derive the current 2-point functions by a loop calculation of the Fermion field. They are given in momentum space as follows; \begin{equation} \left\{ \begin{array}{l} \pi_{00}(p)=v^{2}+{\cal{O}}(p^{2})\\ \pi_{0j}(p) =i\sigma_{xy}^{(0)} \varepsilon_{0ij} p_{i}+{\cal{O}}(p^{2}) \\ \pi_{ij}(p) =-v^{2}c_{{\rm g}}^{2}\delta_{ij}+{\cal{O}}(p^{2}) \end{array} \right. \label{b8} \end{equation} with \begin{equation} \left\{ \begin{array}{l} \sigma_{xy}^{(0)}=\frac{1}{4\pi} \\ v^{2}=N(0) \\ v^{2}c_{{\rm g}}^{2}=\frac{\rho}{m} ,\end{array} \right. \label{b26} \end{equation} where $\rho$ is the Fermion number density and $N(0)=\frac{m}{\pi}$ is the density of states at the Fermi surface including the spin degree of freedom in 2+1 dimensions. After scaling the $\theta$ field as $\theta \rightarrow \theta/v$, $S_{{\rm eff.}}^{({\rm f})}$ becomes \begin{eqnarray} &S_{{\rm eff.}}^{({\rm f})}=\int d^{3}x [\frac{v^{2}}{2}A_{0}^{2} + \frac{\sigma_{xy}^{(0)}}{2} \varepsilon_{0ij}(A_{0}\partial_{i}A_{j}+A_{i}\partial_{j}A_{0}) +\frac{1}{2}\{(\partial_{0}\theta)^{2}-c_{g}^{2}(\vec{\partial} \theta)^{2}\} &\nonumber\\ &+v A_{0}\partial_{0}\theta -v c_{g}^{2} \vec{A} \cdot \vec{\partial} \theta +\frac{\sigma_{xy}^{(0)}}{v} (\vec{\partial}\times\vec{A}) (\partial_{0} \theta)] +\cdot\cdot\cdot.
&\label{b5} \end{eqnarray} As we see in Eq.(\ref{a0}), $S_{\rm eff.}^{({\rm f})}$ is an effective action which is induced by the dynamical effects of the Fermion. We recognize $v$ and $c_{{\rm g}}$ as the decay constant and the sound velocity of the Goldstone mode. The second and third terms on the right hand side of Eq.(\ref{b5}) are parity violating and similar to the Chern-Simons term, but there is no $\varepsilon_{i0j}A_{i}\partial_{0}A_{j}$ term. This does not contradict gauge invariance. These terms come from the $(A_{0}+\partial_{0}\theta)\pi_{0i} (A_{i}+\partial_{i}\theta)$ term, which is manifestly gauge invariant due to the Goldstone mode. So, gauge invariance still remains without the $\varepsilon_{i0j}A_{i}\partial_{0}A_{j}$ term owing to the existence of the Goldstone mode. Because of the existence of this Chern-Simons-like parity violating term, there is a Hall current as a response to the gradient of the external scalar potential (chemical potential) $A_{0}$, which has the form \begin{equation} j^{i}=\sigma_{xy}^{(0)}\varepsilon^{0ij}\partial_{j}A_{0},\label{b15} \end{equation} and we can recognize $\sigma_{xy}^{(0)}$ as the Hall conductance. This result has been obtained by Volovik in a slightly different manner from our present calculation\cite{Vol.}, and ours agrees with his result. \\ Next, we integrate out the Goldstone mode and obtain the effective action of $A_{\mu}$. We see how the Chern-Simons-like parity violating term in Eq.(\ref{b5}) receives a correction. As we see in Eq.(\ref{b5}), $S_{{\rm eff.}}^{({\rm f})}$ is quadratic in the Goldstone mode $\theta$ as far as bi-linear forms are concerned.
Finally we have \begin{eqnarray} &e^{i S_{{\rm eff.}}[A_{\mu}]}= \int {\cal{D}}\theta e^{i S_{{\rm eff.}}^{({\rm f})}}&\nonumber\\ &=det\{(\partial_{0}^{2} - c_{{\rm g}}^{2} \vec{\partial}^{2})^{-1}\} e^{i \int d^{3}x d^{3}y {\cal{J}}(x) D(x-y) {\cal{J}}(y)} e^{i \int d^{3}x \frac{\sigma_{xy}^{(0)}}{2} \varepsilon_{0ij}(A_{0}\partial_{i}A_{j}+A_{i}\partial_{j}A_{0})+\cdot\cdot\cdot} &\label{b6}\\ \nonumber\\ & =e^{i \int d^{3}x d^{3}y \frac{1}{2} A_{\mu}(x)\Pi^{\mu\nu}(x-y)A_{\nu}(y) +\cdot\cdot\cdot}. &\label{b7} \end{eqnarray} Here $D(x-y)$ is the Goldstone mode propagator $$ D(x-y)=\int \frac{d^{3}p}{(2\pi)^{3}} \frac{e^{-ip\cdot(x-y)}}{p_{0}^2 - c_{{\rm g}}^{2} |\vec{p}|^{2}}, $$ \begin{center} \begin{minipage}[b]{5cm} \epsfxsize=5cm \epsfbox{paperlanl.fig1.eps} \end{minipage} ~\\ Fig.1 Diagrams for Eq.(\ref{a80}). The dashed line denotes the Goldstone mode propagator $D(p)$. The circles denote $\pi^{\mu\nu}(p)$. \end{center} and ${\cal{J}}(x)$ is $$ {\cal{J}}(x)=v \partial_{0} A_{0} - v c_{{\rm g}}^{2} \vec{\partial} \cdot \vec{A} + \frac{\sigma_{xy}^{(0)}}{v} (\vec{\partial}\times{\dot{\vec{A}}}). $$ From Eqs.(\ref{b6}) and (\ref{b7}), we can evaluate $\Pi^{\mu\nu}$ in momentum space as \begin{eqnarray} &\Pi_{0i}(p) =\pi_{0i}(p)+ \{ v^{2}c_{{\rm g}}^{2}p_{0}p_{i} +i\sigma_{xy}^{(0)}\varepsilon_{0ki}p_{0}^{2}p_{k} +\cdot\cdot\cdot \}D(p) &\nonumber\\ & \Pi_{ij}(p) =\pi_{ij}(p) -\{ v^{2}c_{{\rm g}}^{4}p_{i}p_{j} +ic_{{\rm g}}^{2}\sigma_{xy}^{(0)}p_{0}p_{k}(\varepsilon_{0kj}p_{i} -\varepsilon_{0ki}p_{j})&\nonumber\\ &~~~~~~~~~~+\frac{\sigma_{xy}^{(0)2}}{v^{2}} \varepsilon_{0ki}\varepsilon_{0lj}p_{k}p_{l}p_{0} +\cdot\cdot\cdot \} D(p).\label{a80} & \end{eqnarray} These terms are shown diagrammatically in Fig.1. The terms linear in momentum are important in the low-energy region, and they have an antisymmetric structure in the space-time indices of $\Pi^{\mu\nu}$.
Their coefficients can be written as \begin{eqnarray} & \frac{1}{2!} \varepsilon_{0ij}\frac{\partial}{\partial p_{i}}\Pi_{0j}(p)|_{p=0} =i\sigma_{xy}^{(0)} [1-\frac{p_{0}^{2}}{p_{0}^2 - c_{{\rm g}}^{2} |\vec{p}|^{2}}|_{p=0}] &\nonumber\\ & \frac{1}{2!} \varepsilon_{i0j}\frac{\partial}{\partial p_{0}} \Pi_{ij}(p)|_{p=0}=i\sigma_{xy}^{(0)} \frac{c_{{\rm g}}^{2}|\vec{p}|^{2}}{p_{0}^2 - c_{{\rm g}}^{2} |\vec{p}|^{2}}|_{p=0}. &\label{b11} \end{eqnarray} Here, a complication arises. The values $\frac{p_{0}^{2}}{p_{0}^2 - c_{{\rm g}}^{2} |\vec{p}|^{2}}|_{p=0},\frac{c_{{\rm g}}^{2}|\vec{p}|^{2}}{p_{0}^2 - c_{{\rm g}}^{2} |\vec{p}|^{2}}|_{p=0}$ on the right-hand side depend on how the limit is taken in the small-momentum region. To see this clearly, we define $\epsilon$ as the relative ratio between $p_{0}$ and $|\vec{p}|$ as follows: \begin{equation} \epsilon p_{0}=c_{{\rm g}}|\vec{p}|.\label{b12} \end{equation} Substituting Eq.(\ref{b12}) into Eq.(\ref{b11}), we obtain $\Pi^{\mu\nu}(p)$ \begin{equation} \Pi^{\mu\nu}(p)=\frac{\sigma_{xy}^{(0)}}{2} \frac{\epsilon^{2}}{\epsilon^{2}-1} \varepsilon^{\mu\rho\nu}ip_{\rho}+\cdot\cdot\cdot. \end{equation} Hence, $S_{{\rm eff.}}[A_{\mu}]$ in Eq.(\ref{b6}) contains a term written as \begin{equation} S_{{\rm eff.}}^{\rm (C-S)}[A_{\mu}]=\int d^{3}x \frac{\sigma_{xy}^{(0)}}{2} \frac{\epsilon^{2}}{\epsilon^{2}-1} \varepsilon^{\mu\nu\rho}A_{\mu} \partial_{\nu} A_{\rho}. \end{equation} Thus the ``Chern-Simons like'' term in $S_{{\rm eff.}}^{({\rm f})}$ of Eq.(\ref{b5}) acquires a correction from the Goldstone mode and becomes a {\it ``totally anti-symmetric Chern-Simons term''}. The Hall current derived from this term has the form \begin{equation} j^{i}=\sigma_{{\rm He}} \varepsilon^{i0j}F_{0j}~~;~~ \sigma_{{\rm He}}= \sigma_{xy}^{(0)}\frac{\epsilon^{2}}{\epsilon^{2}-1}.\label{b16} \end{equation} Concerning the behavior of the Hall conductance, we see a significant difference between the quantum Hall effect and our case.
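Before making that comparison, the limiting behavior of the correction factor $\frac{\epsilon^{2}}{\epsilon^{2}-1}$ multiplying $\sigma_{xy}^{(0)}$ in Eq.(\ref{b16}) can be illustrated numerically. The snippet below is only an illustrative check of the limits discussed below, with arbitrary numerical values of $\epsilon$; it is not part of the derivation.

```python
# Correction factor eps^2/(eps^2 - 1) multiplying sigma_xy^(0) in Eq. (b16).
def hall_factor(eps):
    return eps**2 / (eps**2 - 1.0)

# Large eps (finite spatial size, arbitrarily small p0): factor -> 1.
assert abs(hall_factor(1e6) - 1.0) < 1e-9
# Small eps (infinite system, finite p0): factor -> 0.
assert abs(hall_factor(1e-6)) < 1e-9
# The factor changes sign as eps crosses 1 (resonance with the Goldstone mode),
# so the Hall current reverses direction there.
assert hall_factor(1.1) > 0 and hall_factor(0.9) < 0
```

The three regimes probed here correspond to the finite-size ($\epsilon\rightarrow\infty$), infinite-system ($\epsilon\rightarrow 0$), and near-resonance ($\epsilon\approx 1$) boundary conditions discussed below.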
In the quantum Hall effect, gauge invariance is strictly preserved; hence the Hall conductance becomes a topological invariant and is quantized exactly. The value becomes an integer multiple of a fundamental constant proportional to the fine structure constant. In the present case, however, gauge invariance is spontaneously broken and the Hall conductance $\sigma_{{\rm He}}$ depends on an infrared cutoff. The low-energy fluctuations of the Goldstone mode give rise to this effect. Consequently, the value in Eq.(\ref{b16}) is not strictly constant and depends on the boundary conditions in the temporal and spatial directions. In systems of finite spatial size, the momentum cutoff $|\vec{p}|$ is inversely proportional to the spatial size, but the energy cutoff $p_{0}$ can be arbitrarily small. Hence, $\epsilon$ should be infinite in this case, and $\sigma_{\rm He}$ in Eq.(\ref{b16}) agrees with $\sigma_{xy}^{(0)}$. In infinite systems, on the other hand, the momentum cutoff $|\vec{p}|$ can be arbitrarily small. If the energy cutoff $p_{0}$ is small but finite, then $\epsilon$ vanishes, and $\sigma_{\rm He}$ in Eq.(\ref{b16}) vanishes as well. The parameter $\epsilon$ can take other finite values depending on the boundary conditions. In particular, if the small-momentum limit is taken while the velocity is fixed to that of the Goldstone mode, then $\epsilon=1$, and $\sigma_{{\rm He}}$ diverges. Under this special boundary condition, $p_{0}=c_{{\rm g}}|\vec{p}|$, the Goldstone mode causes a resonance effect, because this relation coincides with the Goldstone mode's dispersion relation, and $\sigma_{{\rm He}}$ is enhanced. As $\epsilon$ is varied around 1, the Hall current changes its direction. \section{Chern-Simons term in 3+1 dimensional Superfluid $^{3}$He-A} In this section, we study the 3+1 dimensional system and derive the Chern-Simons term in 3+1 dimensional rotating $^{3}$He-A. These systems are non-relativistic, so a Chern-Simons term embedded in 2+1 dimensions can exist.
We show that this term is indeed induced in these systems. The calculation is almost the same as in the 2+1 dimensional case. First, we integrate out the fermion field as \begin{equation} Z^{({\rm f})}[A_{\mu},\partial_{\mu} \theta,U^{\dagger} \partial_{\mu} U] = \int {\cal{D}}\psi^{\dagger} {\cal{D}}\psi e^{i S} = e^{iS_{{\rm eff.}}^{({\rm f})}[A_{\mu},\partial_{\mu} \theta,U^{\dagger} \partial_{\mu} U]} ,\end{equation} with the action of Eq.(\ref{a6}) in 3+1 dimensions, using a perturbative expansion. The fermion propagators have the same forms as in Eq.(\ref{fpro}), but now the absolute value of the gap has the following momentum dependence: \begin{equation} |\Delta(\vec{k})|^{2}=|\Delta|^{2}(1-\frac{k_{z}^{2}}{|\vec{k}|^{2}}). \end{equation} Clearly, this gap has two nodes in the z direction. The interactions are the same as in Eq.(\ref{sint}) with the same currents (\ref{current}). The only difference is in the volume element, $d^{3}x \rightarrow d^{4}x$. So, $S_{{\rm eff.}}^{({\rm f})}$ should be written as (cf. Eq.(\ref{a50})) \begin{eqnarray} &S_{{\rm eff.}}^{({\rm f})}= \int d^{4}x d^{4}y [\frac{1}{2}(A_{\mu}+\partial_{\mu}\theta)_{x}\pi^{\mu\nu}(x-y) (A_{\nu}+\partial_{\nu}\theta)_{y} - \frac{1}{2}\frac{\rho}{m}A_{i}(x)A^{i}(x)&\nonumber\\ &+\frac{1}{2}\partial_{\mu}\phi^{a}(x)\pi^{\mu\nu}_{ab}(x-y)\partial_{\nu}\phi^{b}(y)]+\cdot\cdot\cdot.&\label{c15} \end{eqnarray} We now neglect the SU(2) Goldstone mode $\phi$ for the same reason as in the 2+1 dimensional case: because of spin conservation, $\phi$ does not couple to $A_{\mu}$ and $\theta$ at the level of the current two-point functions.
We calculate the current two-point functions perturbatively and obtain the current correlation functions in momentum space as \begin{equation} \left\{ \begin{array}{l} \pi_{00}(p)=v^{2}+{\cal{O}}(p^{2})\\ \pi_{0j}(p) =i\sigma_{xy\hat{z}}^{(0)} \varepsilon_{0ij\hat{3}} p_{i} +{\cal{O}}(p^{2})~~~; \varepsilon_{\mu\nu\rho\hat{3}} =\varepsilon_{\mu\nu\rho\sigma}(\hat{z})^{\sigma}\\ \pi_{ij}(p) =-v^{2}c_{{\rm g}}^{2}\delta_{ij}+{\cal{O}}(p^{2}) \end{array} \right. \label{c10} \end{equation} with \begin{equation} \left\{ \begin{array}{l} \sigma_{xy\hat{z}}^{(0)}=\frac{N(0)}{4m} \\ v^{2}=N(0) \\ v^{2}c_{{\rm g}}^{2}=\frac{\rho}{m} ,\end{array} \right. \label{c17} \end{equation} where $N(0)=\frac{3\rho}{2\epsilon_{{\rm F}}}$ is the density of states at the Fermi surface, including the spin degree of freedom, in 3+1 dimensions. These results are ${\it{not}}$ the same as those of the 2+1 dimensional case (\ref{b26}). Substituting Eqs.(\ref{c10}) and (\ref{c17}) into Eq.(\ref{c15}) and rescaling $\theta$ as $\theta \rightarrow \theta/v$, $S_{{\rm eff.}}^{({\rm f})}$ is written as \begin{eqnarray} &S_{{\rm eff.}}^{({\rm f})}=\int d^{4}x [\frac{v^{2}}{2}A_{0}^{2}+\frac{\sigma_{xy\hat{z}}^{(0)}}{2} \varepsilon_{0ij\hat{3}}(A_{0}\partial_{i}A_{j}+A_{i}\partial_{j}A_{0}) +\frac{1}{2}\{(\partial_{0}\theta)^{2} -c_{{\rm g}}^{2}(\vec{\partial}\theta)^{2}\}& \nonumber\\ &+v A_{0}\partial_{0}\theta -v c_{{\rm g}}^{2} \vec{A} \cdot \vec{\partial} \theta +\frac{\sigma_{xy\hat{z}}^{(0)}}{v} (\vec{\partial}\times\vec{A})_{z} (\partial_{0} \theta)] +\cdot\cdot\cdot. &\label{c20} \end{eqnarray} The second and third terms on the right-hand side of Eq.(\ref{c20}) are parity-violating terms and resemble the Chern-Simons term.
From this we can derive the ``Hall current'' as follows: \begin{equation} \vec{j} =\sigma_{xy\hat{z}}^{(0)}(\vec{\partial}A_{0} \times \hat{\vec{z}}). \label{b89} \end{equation} In the $^{3}$He-A case, $A_{0}$ is the chemical potential, and its relation to the $^{3}$He atom number density $\rho$ is $\rho=N(0)A_{0}$. So, the current of Eq.(\ref{b89}) becomes \begin{equation} \vec{j} =\frac{1}{4m}(\vec{\partial}\rho \times \hat{\vec{z}}).\label{MaMu} \end{equation} This current agrees with that derived by Mermin and Muzikar\cite{Mer.Muz.}. ~\\ ~\\ Next, we integrate out the Goldstone mode and obtain an effective action, \begin{equation} e^{i S_{{\rm eff.}}[A_{\mu}]}=\int {\cal{D}} \theta e^{i S_{{\rm eff.}}^{({\rm f})}}, \end{equation} with the parameters given in Eq.(\ref{c20}). By integrating out the Goldstone mode, we obtain a term in $S_{\rm eff.}[A_{\mu}]$ written as \begin{equation} S_{{\rm eff.}}^{(\rm C-S)}[A_{\mu}]= \int d^{4}x \frac{\sigma_{xy\hat{z}}^{(0)}}{2} \frac{\epsilon^{2}}{\epsilon^{2}-1} \varepsilon^{\mu\nu\rho\sigma}A_{\mu} \partial_{\nu} A_{\rho}\hat{z}_{\sigma}. \end{equation} So, we see that in 3+1 dimensional rotating $^{3}$He-A, the Chern-Simons term embedded in the 2+1 dimensional subspace is induced by integrating out the fermion and the Goldstone mode. The Hall current derived from this action is \begin{equation} j^{i}=\sigma_{{\rm He}} \varepsilon^{i0j\hat{3}}F_{0j}~~;~~ \sigma_{{\rm He}}= \sigma_{xy\hat{z}}^{(0)}\frac{\epsilon^{2}}{\epsilon^{2}-1}.\label{c16} \end{equation} In the same manner as in the 2+1 dimensional system, this current carries a Hall conductance. Its magnitude, however, depends on an infrared cutoff of the Goldstone mode. \section{Orbital angular momentum of superfluid $^{3}$He-A in a cylinder} We now discuss the physical implications of the Chern-Simons term. It has long been known that there is a problem concerning the orbital angular momentum of the $^{3}$He-A system. Cooper pairs in $^{3}$He-A have orbital angular momentum $l_{z}=1$.
So, in a $^{3}$He-A system at $T=0$ in which the Cooper pairs condense homogeneously, it is expected that the {\it total} angular momentum of the system, $L_{z}$, coincides with the total number of pairs $\frac{N}{2}$ in natural units $\hbar=1$ ($N$: total number of $^{3}$He atoms). The angular momentum {\it density} ${\cal{L}}_{z}$ in an isotropic and homogeneous system has been calculated by field-theoretical methods, but the expected result $\frac{\rho}{2}$ could not be obtained. This problem is the so-called ``angular momentum paradox of $^{3}$He-A'' \cite{Kita1}. Recently, it was shown by Kita \cite{Kita2} that a current exists at the edge of $^{3}$He-A in a cylinder that has axial symmetry around the z-axis and homogeneously condensed Cooper pairs, and that this edge current contributes to $L_{z}$ with the expected value $\frac{N}{2}$. In this section, we show that one can easily obtain this edge current from our Hall current (\ref{c16}) by imposing a cylindrical boundary condition, and thus obtain $L_{z}=\frac{N}{2}$. The value may be enhanced if one imposes a special time-dependent boundary condition on the system. The contribution of the Hall current (\ref{c16}) to ${\cal{L}}_{z}$ is written as follows: \begin{equation} {\cal{L}}_{z}=(\vec{x} \times m \vec{j})_{z} \label{d1} \end{equation} A boundary condition for $^{3}$He-A in a cylinder with radius $a$ is \begin{equation} \rho(\vec{x})=\rho \theta(a-r)~,~\theta(r)~;{\rm the~step~function.} \end{equation} In our discussion, this condition is equivalent to setting $A_{0}$ and $\vec{A}$ as \begin{eqnarray} &A_{0}(x)=\frac{1}{N(0)}\rho(\vec{x})&\nonumber\\ &\vec{A}(x)=0.& \end{eqnarray} Because this boundary condition has no time dependence, the parameter $\epsilon$ should be infinite. So, the Hall current (\ref{c16}) reduces to the form (\ref{MaMu}), and Eq.(\ref{d1}) becomes \begin{equation} {\cal{L}}_{z}=\frac{-1}{4}\vec{x}\cdot\vec{\partial}\rho(\vec{x}) =\frac{\rho}{4}r \delta(a-r).
\end{equation} Hence, $L_{z}$ is \begin{eqnarray} &L_{z}= \int_{0}^{1}dz\int_{0}^{2\pi}d\theta\int_{0}^{a}dr r {\cal {L}}_{z}&\nonumber\\ &=\frac{\rho}{2}\pi a^{2}=\frac{N}{2}.& \end{eqnarray} If time-dependent boundary conditions could be prepared, $L_{z}$ would become \begin{equation} L_{z}=\frac{N}{2}\frac{\epsilon^{2}}{\epsilon^{2}-1}, \end{equation} i.e., just like the Hall conductance, ${\cal{L}}_{z}$ is affected by the spacetime dependence of the boundary conditions. It would be interesting if this modified value could be observed. \section{Summary} We have considered 2+1 and 3+1 dimensional rotating superfluid $^{3}$He-A. In rotating $^{3}$He-A, we could introduce an external U(1) gauge field into the system, and this approximate gauge invariance is spontaneously broken by Cooper-pair condensation. We have shown that an Abelian Chern-Simons term is induced in the effective action for the external gauge field by the combined effects of the fermion and the Goldstone mode. In the quantum Hall effect, gauge invariance is strictly preserved, and the Hall conductance, which is the coefficient of the Chern-Simons term, becomes a topological invariant and is quantized exactly. In the present case, however, we have shown that the Hall conductance depends on an infrared cutoff of the Goldstone mode. This infrared cutoff is related to the spacetime dependence of the boundary conditions, and the Hall conductance is enhanced when the Goldstone mode resonates. We have shown that the total orbital angular momentum $L_{z}$ of cylindrical superfluid $^{3}$He-A follows automatically from the Chern-Simons term and that $L_{z}$ is proportional to the Hall conductance. So, $L_{z}$ also depends on the infrared cutoff of the Goldstone mode, and its value varies with the spacetime dependence of the boundary conditions. If a system has no time-dependent boundary condition and homogeneous pair condensation, $L_{z}$ takes the value $\frac{N}{2}$, which is the total number of Cooper pairs.
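This value $L_{z}=\frac{N}{2}$ can also be checked numerically from the surface-delta form of ${\cal{L}}_{z}$ by smearing the delta function into a narrow Gaussian. The snippet below is only a sanity check of the integral, with arbitrary choices of $a$ and $\rho$; it is not part of the derivation.

```python
import math

def lz_total(a, rho, eps=1e-3, n=20001):
    # z-integral over the unit height gives 1; the theta-integral gives 2*pi.
    # Radial integral: int dr r * (rho/4) * r * delta(a - r),
    # with delta(a - r) approximated by a Gaussian of width eps centered at r = a.
    lo, hi = a - 10.0 * eps, a + 10.0 * eps
    h = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        r = lo + i * h
        delta = math.exp(-0.5 * ((r - a) / eps) ** 2) / (eps * math.sqrt(2.0 * math.pi))
        total += r * (rho / 4.0) * r * delta * h
    return 2.0 * math.pi * total

a, rho = 2.0, 3.0
expected = rho * math.pi * a**2 / 2.0   # = N/2 with N = rho * pi * a^2
assert abs(lz_total(a, rho) - expected) / expected < 1e-3
```

The integral reproduces $\frac{\rho}{2}\pi a^{2}=\frac{N}{2}$ up to the small smearing error of order $(\epsilon/a)^{2}$.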
This is the value expected in the problem known as ``the angular momentum paradox in $^{3}$He-A''. This value changes, however, if we impose time-dependent boundary conditions. These results are obtained by a lowest-order perturbative calculation, shown in Fig.1. In a system without spontaneous breaking of gauge symmetry, there is no correction to the Hall conductance from higher-order effects. In our case, gauge symmetry is spontaneously broken, and there could be higher-order corrections. It remains a future problem to study higher-order corrections to the Hall conductance from the diagrams of Fig.2. \begin{center} \begin{minipage}[b]{5cm} \epsfxsize=5cm \epsfbox{paperlanl.fig2.eps} \end{minipage} Fig.2 Higher-order correction terms. Circles denote three-point functions induced by the fermion loop calculation. \end{center} \begin{center} {\bf ACKNOWLEDGMENT} \end{center} The authors are grateful to Professor T. Kita and Dr. N. Maeda for useful discussions. This work was partially supported by the special Grant-in-Aid for Promotion of Education and Science in Hokkaido University provided by the Ministry of Education, Science, Sports, and Culture, the Grant-in-Aid for Scientific Research (07640522), the Grant-in-Aid for Scientific Research on Priority area (Physics of CP violation), and the Grant-in-Aid for International Science Research (Joint Research 07044048) from the Ministry of Education, Science, Sports and Culture, Japan.
\section{Introduction} \label{sec:intro} \input{sec1_intro} \section{Related Work} \label{sec:relatedwork} \input{sec2_relatedwork} \section{Proposed Approach} \label{sec:method} \input{sec3_proposed_approach} \section{Experimental Evaluation} \label{sec:experiments} \input{sec4_experiments} \section{Conclusion} \label{sec:conclusion} \input{sec5_conclusion} \section*{Acknowledgment} \input{sec7_acknowledgement} \input{sec8_references} \end{document} \subsection{Multi-Modal CNN based Approaches} Following their success in computer vision, CNN-based solutions have replaced conventional methods such as the works in \cite{DepthKernel2011}, \cite{HKD2011}, and \cite{HONV2012} in the field of RGB-D object recognition, as in many other areas. For instance, the authors of \citep{Wang_2015_ICCV, Wang_2015_IEEE_ToM} present CNN-based multi-modal learning systems motivated by the intuition that common patterns are shared between the RGB and depth modalities. They force their systems to correlate the features of the two modalities in a multi-modal fusion layer, using a pretrained model \citep{Wang_2015_ICCV} and their custom network \citep{Wang_2015_IEEE_ToM}, respectively. \cite{Li_AAAI_2018df} extend the idea of exploiting the intrinsic multi-modal relationship with intra-class and inter-class similarities for indoor scene classification by providing a two-stage training approach. In \cite{Rahman_ICME_2017}, a three-stream multi-modal CNN architecture has been proposed in which depth images are represented with two different encoding methods in two streams, while the remaining stream is used for RGB images. Despite the extra computational burden, this naturally increases accuracy on depth data in particular. A similar multi-representational approach has been proposed by \cite{Zia_ICCVW_2017}, where a hybrid 2D/3D CNN model initialized with pretrained 2D CNNs is employed together with 3D CNNs for depth images.
\cite{Cheng_3DV_2015} propose the convolutional Fisher kernel (CFK) method, which integrates a single CNN layer with Fisher kernel encoding and utilizes Gaussian mixture models for the feature distribution. The drawback of their approach is the very high dimensionality of the feature space. \subsection{Transfer Learning based Approaches} Deep learning algorithms require a significant amount of annotated training data, and obtaining such data can be difficult and expensive. Therefore, it is important to leverage transfer learning to build a high-performance learner for the target domain and the task at hand. In particular, applying a trained deep network and then fine-tuning its parameters can speed up the learning process or improve the classification performance \citep{Wang_TIP_2017}. Furthermore, many works show that a CNN pretrained on a large-scale dataset can generate good generic representations that can effectively be used for other visual recognition tasks as well \citep{Razavian_CVPRW_2014, Yosinski_NIPS_2014, Oquab_CVPR_2014, Azizpour_CVPRW_2015, Azizpour_TPAMI_2015}. This is particularly important for vision tasks on RGB-D datasets, which are hard to collect with labels and generally contain much less data than labeled RGB datasets. There are many successful approaches that use transfer learning in the field of RGB-D object recognition. \cite{Schwarz_ICRA_2015} use the activations of two fully connected layers, i.e. \textit{fc7} and \textit{fc8}, extracted from the pretrained AlexNet \citep{Krizhevsky_NIPS_2012} for RGB-D object recognition and pose estimation. \cite{Gupta_ECCV_2014} study the problem of object detection and segmentation on RGB-D data and present a depth encoding approach, referred to as HHA, to utilize a CNN model pretrained on RGB datasets.
In their works \cite{Asif_ICRA_2015} and \cite{Asif_ToR_2017}, Asif \textit{et al.} introduce a cascaded architecture of random forests that uses the \textit{fc7} features of the pretrained models of \citep{Chatfield_BMVC_2014} and \citep{Simonyan_ICLR_2015}, respectively, to encode the appearance and structural information of objects. \cite{Carlucci_RAS_2018} propose a colorization network architecture and use a pretrained model as a feature extractor after fine-tuning it. They also make use of the final fully-connected layer in their approach. Thus, the above-mentioned studies mainly focus on the outputs of the fully-connected layers. On the other hand, many studies \citep{Liu_CVPR_2015, Zaki_ICRA_2016, Zaki_RAS_2017, Song_IJCAI_2017, Caglayan_ECCVW_2018} have concluded that using fully connected layers from pretrained or fine-tuned networks might not be the optimum approach to capture discriminating properties in visual recognition tasks. Moreover, combining the activations obtained at different levels of the same modality enhances recognition performance further, especially for multi-modal representations, where earlier layers capture modality-specific patterns \citep{Yang_ICCV_2015, Song_IJCAI_2017, Caglayan_ECCVW_2018}. Hence, utilizing information at different levels in the works of \citep{Yang_ICCV_2015, Zaki_ICRA_2016, Zaki_RAS_2017, Song_IJCAI_2017, Caglayan_ECCVW_2018, Zaki_AuotRobots_2019} yields better performance. The more recent approach of \cite{Loghmani_RAL_2019} utilizes the pretrained model of residual networks \citep{He_CVPR_2016} to extract features from multiple layers and combines them through a recurrent neural network. Their experimental results also verify that multi-level feature fusion provides better performance than single-level features.
While their approach is based on a gated recurrent unit (GRU) \citep{Cho_EMNLP_2014} with a number of memory neurons, our approach employs multiple random neural networks with no need for training. A different related approach is proposed by \cite{Asif_TPAMI_2018}. They handle the classification task by dividing it into image-level and pixel-level branches and fusing them through a Fisher encoding branch. \cite{Eitel_IROS_2015} and \cite{Tang_TCDS_2019} employ two-stream CNNs, one for each of the RGB and depth modalities, and each stream uses the pretrained model of \citep{Krizhevsky_NIPS_2012} on the ImageNet. In both works \citep{Eitel_IROS_2015, Tang_TCDS_2019}, the two streams are finally connected by a fully-connected fusion layer and a canonical correlation analysis (CCA) module, respectively. While feature fusion approaches (e.g. concatenation) may provide good accuracy for visual recognition, fusion may not be the only solution for a multi-level decision process, since an enlarged feature space may hurt recognition when the amount of data is small. We experiment and show that voting on the SVM confidence scores of selected levels can also provide reliable and improved performance. Moreover, this also enables us to assign confidence-score-based importance to the RGB and depth domains in multi-modal fusion. \subsection{Random Recursive Neural Networks} Randomization in neural networks has been researched for a long time in various studies \citep{Schmidt_ICPR_1992, Pao_Computer_1992, Pao_Neurocomp_1994, Igelnik_Pao_TNN_1995, Huang_TNN_2006, Rahimi_NIPS_2008, Socher_NIPS_2012} due to its benefits, such as simplicity and computational cheapness compared to optimization \citep{Rahimi_NIPS_2009}. Since a complete overview of these variations is beyond the scope of this paper, we give an overview focused specifically on random recursive neural networks \citep{Socher_NIPS_2012}.
Recursive neural networks (RNNs) \citep{Pollack_AI_1990, Hinton_AI_1990, Socher_ICML_2011} are graphs that process a given input into recursive tree structures, enabling high-level reasoning in a part-whole hierarchy by repeating the same process over the trees. RNNs have been employed for various research purposes in computer vision, including image super-resolution \citep{Kim_CVPR_2016}, semantic segmentation \citep{Socher_ICML_2011, Sharma_NIPS_2014}, and RGB-D object recognition \citep{Socher_NIPS_2012, Bai_Neurocomp_2015, Cheng_CVIU_2015}. \cite{Socher_NIPS_2012} have introduced a two-stage RGB-D object recognition architecture where the first stage is a single CNN layer using a set of k-means centroids as the convolution filters and the second stage consists of multiple random recursive neural networks that process the outputs of the first stage. \cite{Bai_Neurocomp_2015} propose a subset-based variant of the pioneering work in \citep{Socher_NIPS_2012}, in which a sparse auto-encoder is used instead of k-means clustering for the convolution filters. \cite{Cheng_CVIU_2015} employ the same architecture as \cite{Socher_NIPS_2012} for a semi-supervised learning system, with a modification: they add spatial pyramid pooling to prevent potential performance degradation when resizing input images. \cite{Bui_Access_2016} have replaced the single CNN layer in \citep{Socher_NIPS_2012} with a pretrained CNN model for RGB object recognition and achieved impressive results. Following their success, in our preliminary work \citep{Caglayan_ECCVW_2018}, we propose an approach that aims to improve on this idea by gathering feature representations at different levels into a compact and representative feature vector for both RGB and depth data.
To this end, we reshape the CNN activations in each layer, which provides a generic structure for each layer by fixing the tree structure without hurting performance and allows us to improve recognition accuracy by combining feature vectors at different levels. In this work, we propose a pooling strategy to handle large-dimensional CNN activations by extending the idea of randomness in RNNs. This can be related to the stochastic pooling in \cite{Zeiler_ICLR_2013}, which picks the normalized activations of a region according to a multinomial distribution by computing the probabilities within the region. Instead of using probabilities, our pooling approach is a form of averaging based on uniformly distributed random weights. \subsection{CNN-Stage} The backbone of our approach is a pretrained CNN model. Since the available RGB-D datasets are much smaller than RGB ones, it is important to make use of efficient knowledge transfer from models pretrained on large RGB datasets. In addition, this saves time by eliminating the need for training from scratch. In the previous work \citep{Caglayan_ECCVW_2018}, the available pretrained model of \citep{Chatfield_BMVC_2014}, named VGG\_f, in the MatConvNet toolbox \citep{Vedaldi_Matconvnet_ICM_2015} has been used. In this work, we employ several available models pretrained on ImageNet, including AlexNet \citep{Krizhevsky_NIPS_2012}, VGGNet \citep{Simonyan_ICLR_2015} (specifically the VGGNet-16 model with batch normalization), ResNet \citep{He_CVPR_2016} (specifically the ResNet-50 and ResNet-101 models), and DenseNet \citep{Huang_CVPR_2017}. We extract features from seven different levels of the CNN models. For AlexNet, the outputs of the five successive convolutional layers and the following two fully-connected (FC) layers have been considered, while for VGGNet, the first two FC layers are taken into account together with the outputs of each convolution block, which includes several convolutions and a final max-pooling operation.
Unlike AlexNet and VGGNet, the ResNet and DenseNet models consist of blocks, such as residual, dense, or transition blocks, that contain multiple layers. While ResNet extends the sequential behavior of AlexNet and VGGNet with the introduction of skip-connections, DenseNet takes one step further by concatenating the incoming activations rather than summing them up. The ResNet models consist of five stages followed by an average pooling and an FC layer. Therefore, the outputs of the five successive stages and the output of the final average pool have been considered for six of the seven extraction points. As for the remaining extraction level for these models (ResNet-50 and ResNet-101), the middle point of the third block (which is the largest block) has been taken. Similarly, for the DenseNet model, the outputs of all four dense blocks (for the last dense block, the output of the normalization that follows it has been taken) and of the transition blocks between them have been considered as the extraction points. Since the common and straightforward AlexNet model has the minimum depth, with a stack of seven layers, the above-mentioned CNN extraction points for each model are selected to evaluate and compare level-wise model performances. In addition, these levels are also related to those of the CNN model in the previous work \citep{Caglayan_ECCVW_2018} that we improve on, considering the intrinsic reasoning behind the use of blocks and the approximate distances between levels. \subsection{RNN-Stage} \begin{figure}[!t] \centering \includegraphics[width=0.65\columnwidth, keepaspectratio]{rnn.pdf} \caption{Graphical representation of a single recursive neural network (RNN). The same random weights have been applied to compute each node and level.} \label{fig:RNN} \end{figure} Random recursive neural networks offer a feasible solution by randomly fixing the network connections, eliminating the need for selection in the parameter space.
Motivated by this, we employ multiple random RNNs whose inputs are the activation maps of a pretrained CNN model. RNNs map a given 3D matrix input into a vector of higher-level representations by applying the same operations recursively in a tree structure. In each layer, adjacent blocks are merged into a parent vector with tied weights, the objective being to map inputs $\mathit{C \in}$ $\mathbb{R}$$\mathit{^{K \times s \times s}}$ into a lower-dimensional space $\mathit{p \in}$ $\mathbb{R}$$\mathit{^{K}}$ through multiple levels. Then, the output of a parent vector is passed through a nonlinear function. A typical choice for this purpose is the $\mathit{tanh}$ function. In our previous work \citep{Caglayan_ECCVW_2018}, we give comparative results for different activation functions in terms of accuracy and show that hyperbolic functions work well. Therefore, in this work, we employ the $\mathit{tanh}$ activation function as in \citep{Socher_NIPS_2012, Caglayan_ECCVW_2018}. Fig. \ref{fig:RNN} shows a graphical representation of a pooled CNN output with the size $K\times8\times8$ and an RNN structure with 3 levels and blocks of $2\times2=4$ child nodes (note that this figure is inspired by the RNN graphical representation of \cite{Socher_NIPS_2012}). In our case, the inputs of the RNNs are activation maps obtained from different levels of the underlying CNN model. Let $\mathit{x}$ be an input image passed through a given CNN model, where $\mathit{l=1,..,7}$ indexes the extraction levels and $\mathit{f(x)_l = C_l}$; the output convolution maps are either a 3D matrix $\mathit{C_l \in \mathbb{R}^{K\times s\times s} }$ at the convolutional levels or a 1D vector $\mathit{C_l \in \mathbb{R}^{M} }$ at the FC/global average pooling levels. Since an RNN requires a 3D input $\mathit{C \in \mathbb{R}^{K \times s \times s}}$, we first process the convolution maps at each level to obtain the required form.
Moreover, by applying this step, we ensure that the RNNs are able to handle inputs quickly and effectively, by reducing the receptive field area and/or the number of activation maps at high-dimensional feature levels (e.g. the outputs of early levels of models such as VGGNet, ResNet, DenseNet, etc.). In addition, we apply preprocessing to obtain output structures similar to those of the previous work \cite{Caglayan_ECCVW_2018}. However, in the previous work it was enough to apply only reshaping, due to the lower dimensionality of the layers in the VGG\_f model. In this work, we introduce random weighted pooling, which copes with the high dimensionality of the layers in the underlying deeper models such as ResNet \citep{He_CVPR_2016} and DenseNet \citep{Huang_CVPR_2017}. Our pooling mechanism can downsample CNN activations in both the number and the spatial dimensions of the maps. After applying the preprocessing step to obtain suitable forms for the RNNs, we compute the parent vector as \begin{equation} \label{eq:rnnParent} \begin{aligned} p = g\left(WC_l\right)\\ \end{aligned} \end{equation} where $\mathit{C_l = \begin{bmatrix} c_1 \\ \vdots \\ c_{s^2} \end{bmatrix}}$ for each CNN extraction level $\mathit{l=1,...,7}$, $g$ is a nonlinearity function, which is $tanh$ in this study, and $s$ is the block size of an RNN. Instead of a multi-level structure, an RNN in this study has a single level with a single parent vector. In fact, our experiments have shown that the single-level structure provides results better than or comparable to the multi-level structure in terms of accuracy (see the \textit{supplementary material}). Moreover, the single-level structure is more efficient, with less computational burden. Thus, the block size $s$ is actually the receptive field size of an RNN. In Eq.
\ref{eq:rnnParent}, the parameter weight matrix $W \in \mathbb{R}^{K\times s^2K}$ is randomly generated from a predefined distribution that satisfies the following probability density function \begin{equation} \label{eq:weightPdf} \begin{aligned} W \sim h \Rightarrow \int_{a}^{b}h(w)dw = P(a \leq W \leq b)\\ \end{aligned} \end{equation} where $h$ is a predefined distribution and $a$ and $b$ are the boundaries of the distribution. In our case, the weights are uniform random values in $[-0.1, +0.1]$, following our previous work \cite{Caglayan_ECCVW_2018} and specifically chosen to prevent a possible explosion of tensor values due to our aggregating pooling strategy. On the other hand, \cite{Saxe_ICML_2011} find that the choice of random weight distribution, such as uniform, Laplacian, or Gaussian, does not affect classification performance as long as the distribution is zero-centered. We refer readers to \cite{Rahimi_NIPS_2008} and \cite{Rudi_NIPS_2017} for more insights and further details on the properties of random features. To obtain sufficient descriptive power from randomness, enough samples must be generated from this range. In \cite{Socher_NIPS_2012}, it was demonstrated experimentally that increasing the number of random RNNs improves performance, with the best result obtained with $128$ RNNs. In \cite{Caglayan_ECCVW_2018}, it was also verified that $128$ random RNNs suffice for high classification performance on both RGB and depth data. Therefore, as a standard setting in this work, we encode CNN features using $128$ random RNNs with $64$-dimensional outputs, leading to an $8192$-dimensional feature vector at each level of a model.
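The overall encoding can be sketched as follows. This is a minimal NumPy illustration of ours, not the released implementation; in practice the random weight matrices would be generated once and kept fixed across all images:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_with_random_rnns(C, n_rnn=128, out_dim=64, low=-0.1, high=0.1):
    """One-level random RNN encoding: each RNN maps the stacked child
    vectors of C (shape (K, s, s)) to a 64-dim parent p = tanh(W x),
    with W drawn uniformly from [-0.1, +0.1] (Eq. weightPdf)."""
    K, s, _ = C.shape
    x = C.reshape(K, s * s).T.reshape(-1)   # stacked children, length s^2 * K
    feats = []
    for _ in range(n_rnn):
        W = rng.uniform(low, high, size=(out_dim, x.size))
        feats.append(np.tanh(W @ x))        # parent vector, Eq. (rnnParent)
    return np.concatenate(feats)            # 128 * 64 = 8192-dim descriptor

f = encode_with_random_rnns(np.random.randn(64, 4, 4))
assert f.shape == (8192,)
```

Because $\tanh$ is bounded, every entry of the resulting descriptor lies in $[-1, 1]$, which keeps the concatenated feature vector well scaled for the SVM stage.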
The reason why random weights work well for object recognition tasks seems to lie in the fact that particular convolutional pooling architectures can naturally produce frequency-selective and translation-invariant features \citep{Saxe_ICML_2011}. As stated before, in analogy to the convolutional-pooling architecture of \cite{Jarrett_ICCV_2009}, our approach intuitively incorporates both selectivity, due to the CNN stage, and translational invariance, due to the RNN stage. Moreover, we point out that there is biological plausibility in the use of randomness as well. \cite{Rigotti_Frontiers_2010} have shown that random connections between inter-layer neurons are needed to implement the mixed selectivity required for optimal performance during complex cognitive tasks. Before concluding this section, we give the details of our random pooling approach, in which we extend the idea of the random RNN to a downsampling mechanism. \subsubsection{Random Weighted Pooling} \label{sec:randomPooling} In our previous work \citep{Caglayan_ECCVW_2018}, we fed CNN outputs to RNNs after a reshaping process. However, due to the high-dimensional outputs of the models used in this study, the CNN activations must be processed further. In this work, we propose a random pooling strategy to reduce the dimensionality in either the size of the activation maps (the $s$ block size, i.e., the receptive field area of an RNN) or the number of maps ($K$) at CNN levels where reshaping is insufficient. In our random weighted pooling approach, we aggregate the CNN activation maps by sampling from a uniform distribution as in Eq. \ref{eq:weightPdf} over each pooling area. More precisely, for extraction level $l$, the pooling reduces the activations $C_l$ by mapping them into the area $A_{l}^{'}$ as $P: C_l \mapsto A_{l}^{'}$, where $C_l \in \mathbb{R}^{K\times s \times s}$ and $A_{l}^{'} \in \mathbb{R}^{K^{'}\times s^{'} \times s^{'}}$, in Eq. \ref{eq:randomPool}.
\begin{equation} \label{eq:randomPool} \begin{aligned} A_l^{'} = \sum_{i \in A_l}W_{l}^{(i)}{C_{l}^{(i)}}\\ \end{aligned} \end{equation} where $A_l$ is the pooling area, $C_l$ the convolutional activations, $i$ the index of each element within the pooling area, and $W_l$ the random weights. $K^{'}< K$ and $s^{'} = s$ when pooling over the number of maps, whereas $K^{'} = K$ and $s^{'} < s$ when pooling over the size of the maps. Fig. \ref{fig:Pooling} illustrates the proposed random weighted pooling for downsampling in both the number and the size of the maps. In this work, by extending the randomness of RNNs along the pipeline with the proposed pooling strategy, we aim to show that randomness can work quite effectively. In fact, as the comparative results show (see Sec. \ref{sec.exp.ma.poolingPerformances}), the randomness in our approach generally works better than other common pooling methods such as max pooling and average pooling, especially at the semantic levels. \begin{figure}[!ht] \centering \includegraphics[width=\columnwidth, keepaspectratio]{pooling.pdf} \caption{Illustration of random weighted pooling over the number of maps (top) and the window size of maps (bottom).} \label{fig:Pooling} \end{figure} \subsection{Fusion and Classification} \label{sec:fusionClassification} After obtaining encoded features from the RNN-Stage, we investigate multi-level fusion to capture more distinctive information at different levels and further improve recognition performance. To minimize the cross-entropy error between output predictions and target values, we could feed multi-level outputs to fully connected layers and back-propagate through them. However, following the success of our previous study \citep{Caglayan_ECCVW_2018}, we perform classification with a linear SVM using the scikit-learn\footnote{\url{https://github.com/scikit-learn/scikit-learn}} \citep{Pedregosa_JMLR_2011_scikit} implementation.
To this end, in our previous work \citep{Caglayan_ECCVW_2018}, we performed straightforward feature concatenation on various combinations of the best mid-level representations. In this work, in addition to feature concatenation, we also apply soft voting by averaging SVM confidence scores over the best trio of levels. Finally, RGB and depth features are fused to evaluate the combined RGB-D accuracy. The motivation behind the need for complementary multi-modal fusion is twofold. First, shiny, transparent, or thin surfaces may corrupt depth information, since depth sensors do not properly handle reflections from such surfaces, which favors RGB in such cases. On the other hand, depth sensors work well within a certain range and are insensitive to changes in lighting conditions. Therefore, to take full advantage of both modalities in a complementary way, a compact multi-modal combination based on the success of each input type is important in devising the best-performing fusion. To this end, we present a decision mechanism using weighted soft voting based on the confidence scores obtained from the RGB and depth streams. Modality weighting in this way compensates for imbalance between the data modalities and lets their decisions complement each other. Once the modality-specific branches have run, we combine the predictions through the weighted SVM as follows. Let $S_{\scriptstyle{i}}$ represent the SVM confidence scores of each category class $n = 0,\dots,N-1$, where $N$ is the number of classes and $i \in \{rgb, depth\}$ indicates the RGB and depth modalities. Then, the weights $w_{\scriptstyle{i}}$ are computed as in Eq. \ref{eq:fusionWeights}.
\begin{equation} \label{eq:fusionWeights} \begin{aligned} w_{\scriptstyle{i}} = \sqrt{\dfrac{e^{m_{\scriptstyle{i}}}}{\sum\limits_{i} {e^{m_{\scriptstyle{i}}}}}} \end{aligned} \end{equation} where $m_{\scriptstyle{i}}$ is the normalized squared magnitude for each modality, defined as: \begin{equation} \label{eq:normScoreMagnitudes} \begin{aligned} m_{\scriptstyle{i}}=\dfrac{{\lVert S_{\scriptstyle{i}}\rVert}^2}{\max({\lVert S_{\scriptstyle{rgb}}\rVert}^2, {\lVert S_{\scriptstyle{depth}}\rVert}^2)} \end{aligned} \end{equation} Finally, multi-modal RGB-D predictions are estimated as in Eq. \ref{eq:weightedFusion}: \begin{equation} \label{eq:weightedFusion} \begin{aligned} \hat{y}_{\scriptscriptstyle{RGBD}} = {\underset{n}{\mathrm{arg\,max}}} \sum_i w_{\scriptstyle{i}}S_{\scriptstyle{i}} \end{aligned} \end{equation} where $n$ is a category class. Concretely, if the RGB and depth confidence scores are balanced, the final soft voting decision is based on an equal contribution from each stream, similar to averaging. \subsection{Dataset and Setup} \label{sec:exp.datasets} \subsubsection{Washington RGB-D Object Dataset} The Washington RGB-D object dataset includes a total of $41,877$ images per modality across $51$ object categories and $300$ category instances. The categories are common household objects such as cups, cameras, keyboards, vegetables, and fruits. Each instance of a category has images taken from $30^\circ$, $45^\circ$, and $60^\circ$ elevation angles. The dataset provides $10$ train/test splits; in each split, one instance of each category is used for testing and the remaining instances for training. Thus, for a single split run, a total of $51$ category instances (roughly $7,000$ images) are used for testing and the remaining $249$ instances (roughly $35,000$ images) for training.
We evaluate the proposed work on the provided cropped images with the same setup as \cite{Lai_ICRA_2011} over the 10 splits, and average accuracy results are reported for comparison with related works. \subsubsection{SUN RGB-D Scene Dataset} The SUN RGB-D scene dataset is the largest real-world RGB-D scene understanding benchmark to date and contains RGB-D images of indoor scenes. Following the publicly available configuration of the dataset, we choose $19$ scene categories with a total of $4,845$ images for training and $4,659$ images for testing. We use the same train/test split as \cite{Song_CVPR_2015} to evaluate the proposed work for scene recognition. \subsection{Object Recognition Performance} \label{sec:exp.objectRecognition} Table \ref{table:wrgbdResults} shows the average accuracy of our approach along with state-of-the-art methods for object recognition on the Washington RGB-D object benchmark. Our approach greatly improves on the previous state-of-the-art results for both the RGB and depth modalities, with margins of $2.4\%$ and $1.3\%$, respectively. As for the combined RGB-D results, our approach surpasses all other methods except that of \cite{Loghmani_RAL_2019}, which is slightly better than ours ($0.3\%$). As stated before (see Sec. \ref{sec:relatedwork}), their approach is based on a gated recurrent unit with a set of memory neurons and is powered by a multi-modal fusion learning scheme. In contrast, in this paper we focus on a simple yet effective multi-modal feature extraction framework with soft-voting SVM classification. These results emphasize the importance of deep features in a unified framework based on the incorporation of CNNs and random RNNs. What is interesting here is that even a simple model like AlexNet can yield quite successful results.
Concretely, our previous work \citep{Caglayan_ECCVW_2018} with the AlexNet architecture (called VGG\_f in the MatConvNet toolbox) gives results as impressive as the models used in this work. \input{sec4_table_wrgbd} We also present the average accuracy of individual object categories over the 10 evaluation splits of the Washington RGB-D Object dataset using the best-performing structure, ResNet101-RNN. As shown in Fig. \ref{fig:wrgbdIndividualResults}, our approach is highly accurate for most of the object categories. The categories with lower accuracy are \textit{mushroom}, \textit{peach}, and \textit{pitcher}. The common reason for the lower performance in these categories appears to be their smaller number of instances. In particular, these categories have only $3$ instances, the minimum for any category in the dataset. Considering that other categories have up to $14$ instances, this data imbalance may have biased the learning in favor of categories with more examples. Moreover, the accuracy of our RGB-D combination based on weighted modality confidences shows that fusing RGB and depth data in this way can provide strong discrimination capability for object categories. \subsection{Scene Recognition Performance} \label{sec:exp.sceneRecognition} To test the generalization ability of our approach, we also carry out a comparative analysis of our best-performing model, ResNet101-RNN, on the SUN RGB-D Scene dataset \citep{Song_CVPR_2015} for scene recognition, a more challenging scene understanding task. To this end, we first apply the ResNet101 pretrained model without finetuning, namely Fix ResNet101-RNN, for both the RGB and depth modalities. Then, we finetune the pretrained CNN model on the SUN RGB-D Scene dataset using the same hyper-parameters as in the object recognition task (see Sec. \ref{sec.exp.ma.finetuning}).
The results of these experiments, together with state-of-the-art results on this dataset, are reported in Table \ref{table:sunrgbdResults}. Our best system greatly improves on the state-of-the-art methods, not only for the final RGB-D result but also for the individual data modalities. It is worth mentioning that we use a CNN model pretrained on the object-centric ImageNet dataset \citep{Deng_Imagenet_CVPR_2009}, which is less commonly used for scene recognition than models pretrained on scene-centric datasets such as Places \citep{Zhou_NIPS_2014}. Nevertheless, our approach produces superior results compared to existing state-of-the-art methods for RGB-D scene recognition. Moreover, it is interesting that our system is already discriminative enough even with a fixed pretrained CNN model and achieves impressive accuracy. Contrary to our findings on the Washington RGB-D Object dataset, finetuning provides much better results not only for the depth domain but for the RGB domain as well. This is what we expect, as scene recognition is a cross-domain task for our approach, whose backbone is a CNN model pretrained on the object-centric ImageNet. Specifically, finetuning on depth data boosts accuracy greatly by providing both domain and modality adaptation. \input{sec4_table_sunrgbd} Fig. \ref{fig:sunrgbdConfusionMatrix} shows the confusion matrix of our finetuned approach over the $19$ categories of the SUN RGB-D Scene dataset for RGB-D. The matrix demonstrates the degree of confusion between pairs of scene categories and indicates the similarity between scenes in this dataset. The largest misclassification errors occur between extremely similar scene categories such as \textit{computer room}-\textit{office}, \textit{conference room}-\textit{classroom}, \textit{discussion area}-\textit{rest space}, \textit{lecture theatre}-\textit{classroom}, \textit{study space}-\textit{classroom}, and \textit{lab}-\textit{office}.
In addition to inter-class similarity, other reasons for poor performance might be intra-class variation of the scenes and insufficient knowledge transfer from the ImageNet models. \begin{figure}[!ht] \begin{center} \includegraphics[width=\columnwidth, keepaspectratio]{sunrgbd_confusion_matrix.pdf} \end{center} \caption{RGB-D confusion matrix of ResNet101-RNN on the SUN RGB-D Scene dataset (best viewed with magnification).} \label{fig:sunrgbdConfusionMatrix} \end{figure} To further analyze the performance of our system, we report top-3 and top-5 classification accuracy together with the top-1 results in Table \ref{table:top135sceneResults}. While the top-1 accuracy is the percentage of test images that exactly match the predicted classes, the top-3 and top-5 accuracies are the percentages of test images whose true class is among the top-ranked 3 and 5 predictions, respectively. The top-3 and top-5 results demonstrate the effectiveness of our system more closely by largely overcoming ambiguity among scene categories. Fig. \ref{fig:sunrgbdConfusedSamples} depicts test examples of scene categories frequently confused with each other on the SUN RGB-D Scene dataset. As shown in the figure, these scene categories have similar appearances that make them hard to distinguish even for a human expert without sufficient context knowledge. Nevertheless, our approach is able to identify the scene category labels among the top-3 and top-5 predictions with high accuracy.
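For clarity, the top-$k$ measure used in these evaluations can be computed as in the small sketch below (variable names are ours; the confidence matrix stands in for the SVM scores):

```python
import numpy as np

def topk_accuracy(scores, labels, k):
    """Fraction of samples whose true label is among the k
    highest-scoring classes. scores: (N, C) confidences, labels: (N,)."""
    topk = np.argsort(scores, axis=1)[:, -k:]       # indices of k largest per row
    hits = (topk == labels[:, None]).any(axis=1)
    return hits.mean()

# toy confidences for 3 samples over 3 classes
scores = np.array([[0.1, 0.7, 0.2],
                   [0.5, 0.3, 0.2],
                   [0.2, 0.3, 0.5]])
labels = np.array([1, 1, 2])
```

Here the second sample is missed at top-1 (class 0 scores highest) but recovered at top-2, mirroring how top-3/top-5 accuracy absorbs confusions between similar scene categories.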
\begin{table}[!h] \caption{Scene recognition accuracy of top-1, top-3, and top-5 on the SUN RGB-D Scene dataset (\%).} \begin{center} \setlength{\tabcolsep}{0.9em} \renewcommand{\arraystretch}{1.2} \begin{adjustbox}{width=0.65\columnwidth} \begin{tabular}{ lccc } \hline Accuracy & RGB & Depth & RGB-D \\ \hline \hline top-1 & 58.5 & 50.1 & 60.7 \\ top-3 & 81.0 & 71.5 & 83.5 \\ top-5 & 88.5 & 80.9 & 89.9 \\ \hline \end{tabular} \end{adjustbox} \label{table:top135sceneResults} \end{center} \end{table} \begin{figure}[!t] \begin{center} \includegraphics[width=0.95\columnwidth, keepaspectratio]{confusedScenes.pdf} \end{center} \caption{Top-5 RGB-D predictions of our system on sample test images of frequently confused scene categories from the SUN RGB-D Scene dataset.} \label{fig:sunrgbdConfusedSamples} \end{figure} \subsection{Model Ablation} \label{sec:exp.modelAblation} \begin{figure*}[!ht] \centering \subfloat{\includegraphics[width=\columnwidth, keepaspectratio]{randomnessRGB2.pdf}}% \subfloat{\includegraphics[width=\columnwidth, keepaspectratio]{randomnessDepth2.pdf}}% \caption{Effect of randomness on the accuracy results for each level (L1 to L7). Values indicate standard deviations.} \label{fig:randomness} \end{figure*} We have analyzed and validated the proposed framework with extensive experiments using a variety of architectural configurations on the popular Washington RGB-D benchmark, which is more than $4$ times larger than the SUN RGB-D scene dataset. In this section, we present the analysis and evaluation of the ablative investigations; further experiments and analysis are given in the \textit{supplementary material}. The developmental experiments are carried out on two splits of the Washington RGB-D Object dataset for both modalities in order to obtain more stable results, and the average results are analyzed. In some experiments, more runs have been carried out, as clearly stated in the related sections. In Sec.
\ref{sec:exp.objectRecognition} and Sec. \ref{sec:exp.sceneRecognition}, the best-performing models are compared with the state-of-the-art methods using the exact evaluation setups provided. We assess the proposed framework on a PC with an AMD Ryzen 9 3900X 12-core processor (3.8 GHz base), 128 GB DDR4 RAM at 2666 MHz, and an NVIDIA GeForce GTX 1080 Ti graphics card with 11 GB of memory. \begin{figure*}[!b] \centering \subfloat{\includegraphics[width=\columnwidth, keepaspectratio]{layerwisePerformancesRGB.pdf}}% \subfloat{\includegraphics[width=\columnwidth, keepaspectratio]{layerwisePerformancesDepth.pdf}}% \caption{Level-wise average accuracy performance of different baseline models on all the 10 splits of the Washington RGB-D dataset.} \label{fig:levelwisePerformances} \end{figure*} \subsubsection{Empirical Evaluation of the Effect of Randomness} \label{sec.exp.ma.randomness} The use of random weights in both the pooling and RNN structures raises the question of how stable the results are. Thus, we experimentally investigate whether there is a decisive difference between runs that generate and use new random weights. We run the pipeline with different random weights on two splits, 5 times each. Fig. \ref{fig:randomness} reports the average results with their standard deviations for each level. The figure clearly shows that randomness does not cause any instability in the model and produces similar results with very small deviations. \subsubsection{Level-wise Performance of Different Models} \label{sec.exp.ma.levelPerformances} Fig. \ref{fig:levelwisePerformances} shows the level-wise average accuracy of all the baseline models for both the RGB and depth modalities on all 10 evaluation splits. The graphs show a similar performance trend, rising at the beginning and falling at the end.
Although the level at which optimum performance is obtained varies by model, common to all models is that intermediate-level representations, rather than final-level representations, give the best results. These experiments also verify that while deep models transform attributes from general to specific through the network \citep{Razavian_CVPRW_2014, Zeiler_ECCV_2014}, intermediate layers provide the optimal representations. This makes sense because early layers respond to low-level raw features such as corners and edges, whereas late layers extract features more specific to the objects of the training datasets. This is clearer in the depth plot of Fig. \ref{fig:levelwisePerformances}, where the dataset difference is obvious due to the domain difference. We should state that RNN encoding of features extracted from FC layers with fewer than $8192$ dimensions might not be efficient, since such features are already compact enough. Therefore, encoding the outputs of these layers into a larger feature space through RNNs might lead to redundant representations. This might be another reason for the drop in accuracy at these layers (e.g., see L7 in Fig. \ref{fig:levelwisePerformances}). In addition, the depth plot contains more fluctuations and irregularities compared to the RGB plot, since the pretrained models of the RGB ImageNet are used as fixed extractors without finetuning. As for the comparison of baseline models, the ResNet-101 and DenseNet-121 models perform similarly in terms of accuracy and are better than the others.
\begin{figure*}[!hb] \centering \subfloat{\includegraphics[width=\columnwidth, keepaspectratio]{finetuningRGB.pdf}}% \subfloat{\includegraphics[width=\columnwidth, keepaspectratio]{finetuningDepth.pdf}}% \caption{Level-wise average accuracy performance of finetuned CNN models together with fixed models on all the 10 splits of the Washington RGB-D dataset.} \label{fig:finetuning} \end{figure*} \subsubsection{Comparative Results of Random Weighted Pooling} \label{sec.exp.ma.poolingPerformances} In our approach, we extend the idea of randomness into a pooling strategy to cope with the high dimensionality of CNN activations; it can be applied not only to the map/window size but also to reduce the number of maps. We employ random pooling in particular to confirm that randomness works well throughout the RNN-Stage, even in such a pooling strategy combined with random RNNs. To this end, we compare the accuracy of random pooling with average pooling and max pooling. We use the DenseNet-121 model, where pooling is used extensively at each level (except level 4), and we conduct the experiments using the same RNN weights for a fair comparison. Fig. \ref{fig:poolingComparison} shows the average accuracy over two splits for each pooling method on both RGB and depth data. As seen from the figure, random weighted pooling generally performs similarly to average pooling, while on average it outperforms max pooling. Moreover, random pooling achieves better results especially at the middle/late levels (L4-L7), which present more stable and semantically meaningful representations compared to the early levels. The results also show that the proposed random pooling and average pooling can be used interchangeably, as their performances are similar.
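The random weighted pooling of Eq. \ref{eq:randomPool} can be sketched for both downsampling modes as follows. This is an illustrative NumPy sketch of ours, not the released code; in the actual pipeline the random weights would be drawn once and reused across images:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_weighted_pool_spatial(C, win):
    """Downsample the spatial size of C (K, s, s): each win x win
    window is aggregated with random uniform weights (Eq. randomPool)."""
    K, s, _ = C.shape
    s2 = s // win
    W = rng.uniform(-0.1, 0.1, size=(K, s, s))
    # group into (K, s', win, s', win) blocks and sum each window
    return (W * C).reshape(K, s2, win, s2, win).sum(axis=(2, 4))

def random_weighted_pool_maps(C, k2):
    """Downsample the number of maps K -> K' by randomly weighting
    and summing groups of K // K' consecutive maps."""
    K, s, _ = C.shape
    W = rng.uniform(-0.1, 0.1, size=(K, 1, 1))
    return (W * C).reshape(k2, K // k2, s, s).sum(axis=1)

C = np.random.randn(8, 4, 4)
assert random_weighted_pool_spatial(C, 2).shape == (8, 2, 2)
assert random_weighted_pool_maps(C, 4).shape == (4, 4, 4)
```

Replacing the random weights with uniform constants recovers (scaled) average pooling, which is consistent with the similar accuracies of the two methods observed above.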
\begin{figure}[!ht] \centering \includegraphics[width=\columnwidth, keepaspectratio]{poolingComparison.pdf} \caption{Average accuracy performance of different pooling methods on RGB and depth data for the baseline DenseNet-121 model on two splits of the Washington RGB-D dataset.} \label{fig:poolingComparison} \end{figure} We further investigate the comparative accuracy of the proposed random pooling in our final ResNet-101 based pipeline. As shown in Table~\ref{table:poolingComparison}, when the proposed pipeline uses random weighted pooling, it produces accuracy better than or similar to the max pooling and average pooling based pipelines. This validates the power of randomness in a pooling strategy and the use of random pooling as an alternative means of downsampling. \begin{table}[!h] \caption{Average accuracy performance of different pooling methods in the best-performing ResNet101-RNN pipeline on the Washington RGB-D dataset (\%).} \begin{center} \setlength{\tabcolsep}{0.9em} \renewcommand{\arraystretch}{1.2} \begin{adjustbox}{width=0.75\columnwidth} \begin{tabular}{ lccc } \hline Accuracy & RGB & Depth & RGB-D \\ \hline \hline Max & 91.1 $\pm$ 1.4 & 87.1 $\pm$ 2.5 & 93.8 $\pm$ 0.9 \\ Average & 91.6 $\pm$ 1.6 & 87.2 $\pm$ 2.5 & 94.0 $\pm$ 1.0 \\ Random & 92.3 $\pm$ 1.0 & 87.2 $\pm$ 2.5 & 94.1 $\pm$ 1.0 \\ \hline \end{tabular} \end{adjustbox} \label{table:poolingComparison} \end{center} \end{table} \subsubsection{Contribution of Finetuning} \label{sec.exp.ma.finetuning} In the ablative experiments so far, we have not used any training or finetuning in our feature extraction approach (except in Table~\ref{table:poolingComparison}, where the depth-modality ResNet101 is finetuned). Although impressive results are obtained on RGB data, the same success is not achieved on depth data. The reason for this difference is that the baseline CNN models are pretrained on the RGB images of ImageNet.
Therefore, as a next step, we analyze the change in accuracy for the RGB and depth modalities when the baseline CNN models in our approach are finetuned. To this end, we first carry out a systematic search for optimal finetuning hyper-parameters over a predefined set of values, using one split of the Washington RGB-D dataset as a validation set for the AlexNet and DenseNet-121 models. Finetuning is then performed with stochastic gradient descent (SGD) with momentum. The hyper-parameters (momentum, learning rate, batch size, learning rate decay factor, decay step size, and number of epochs) are $(0.9, 0.001, 32, 0.01, 10, 40)$ and $(0.9, 0.0001, 8, 0.1, 10, 40)$ for AlexNet on RGB and depth data, respectively, and $(0.95, 0.0001, 16, 0.1, 10, 40)$ and $(0.95, 0.001, 8, 0.1, 10, 40)$ for DenseNet-121. Apart from these two models, we also finetune the ResNet-101 model, reusing the DenseNet-121 hyper-parameters since the two have similar architectural structures. Fig. \ref{fig:finetuning} shows the average accuracy of the finetuned CNN models together with the fixed models on all 10 evaluation splits of the Washington RGB-D object dataset. The plot shows a clear improvement on depth data, as expected. However, there is a general loss of accuracy when finetuning is performed on RGB data. The Washington RGB-D object dataset contains a subset of the categories in ImageNet; accordingly, models pretrained on ImageNet already match the distribution of the RGB data closely, and there is no need for finetuning on RGB data. In contrast, finetuning is required for depth data to ensure coherence and relevance, owing to the domain difference between these inputs and the pretrained models.
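As an illustration of the step-decay schedule implied by these hyper-parameters (a sketch of ours; in PyTorch the same behavior is obtained with `torch.optim.SGD` plus `torch.optim.lr_scheduler.StepLR`, where `gamma` is the decay factor and `step_size` the decay step):

```python
def lr_at_epoch(base_lr, decay_factor, step_size, epoch):
    """Learning rate under step decay: multiplied by decay_factor
    every step_size epochs, as used in our SGD finetuning."""
    return base_lr * decay_factor ** (epoch // step_size)

# DenseNet-121 on RGB: lr 1e-4, decay factor 0.1 every 10 epochs, 40 epochs
schedule = [lr_at_epoch(1e-4, 0.1, 10, e) for e in range(40)]
```

Over the 40-epoch run, the learning rate thus drops by a factor of ten at epochs 10, 20, and 30.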
\subsubsection{Weighted Voting based RGB-D Fusion Performance} \label{sec.exp.ma.weightedFusion} Finally, we provide combined RGB-D results for the AlexNet, DenseNet-121, and ResNet-101 models based on the SVM confidences, as shown in Table \ref{table:rgbdFusions}. The table reports average results for the fusion of the best levels of RGB and depth, and of the best trio of levels (see the \textit{supplementary material}). We evaluate two types of soft voting: our proposed weighted vote and the average vote. The proposed weighted vote increases accuracy compared to the average vote for all models, both on the multi-modal fusion of the best single level and of the best trio of levels of the RGB and depth streams. The results also confirm the strength of our multi-modal voting approach, which combines the RGB and depth modalities effectively. The reason why RGB-D fusion improves on the individual RGB and depth results is that these modalities support each other towards a more accurate representation by capturing different, complementary aspects of the data. RGB data are rich in texture and color information, while depth data carry additional geometric information that depicts object shapes. Moreover, depth sensors are largely insensitive to changes in lighting conditions. Therefore, multi-modal combination is useful not only for its integrative characteristic but also for its complementarity when one modality is lacking, such as RGB data in a dark environment or depth data on shiny surfaces. \input{sec4_table_rgbdFusion} \subsection{Discussion} \label{sec:exp.discussion} Our framework presents an effective and efficient solution for deep feature extraction by integrating a pretrained CNN model with random-weight RNNs. Randomization throughout our RNN-Stage raises the question of whether the results are stable enough. The carefully designed experiments in Sec.
\ref{sec.exp.ma.randomness} provide an empirical justification for the stability of the random weights. On the other hand, our multi-level analysis shows that the optimum single-level performance always comes from an intermediate level for all models, with or without finetuning, for both the RGB and depth modalities. The only exception is the finetuned DenseNet-121 model on depth data. This is an interesting finding, because one would expect better representation capabilities at the final layers, especially when finetuned models are used. Yet, as expected, performance generally increases from the first level to the last throughout the networks when the underlying CNN models are finetuned. Since the Washington RGB-D Object dataset \citep{Lai_ICRA_2011} includes a subset of the object categories in ImageNet \citep{Deng_Imagenet_CVPR_2009}, finetuning does not improve accuracy on RGB data. In contrast, the accuracy gain is significant on depth data due to the need for domain adaptation. This also shows that using an appropriate technique to handle depth data, as in our approach (see the \textit{supplementary material}), leads to impressive performance improvements through knowledge transfer between modalities. In this study, although we have explored different techniques for fusing representations from multiple levels to further increase classification success, a single optimum level may actually be sufficient for many tasks. In this way, especially for tasks where computation time is critical, results can be obtained much faster without sacrificing accuracy. Another point of interest is that the data imbalance in the Washington RGB-D Object dataset results in poor performance for the individual categories with fewer instances, and consequently leads to a drop in the overall success of the system. Hence, this imbalance might be mitigated by applying data augmentation to the categories with fewer instances.
The success of our approach for RGB-D scene recognition confirms the generalization ability of the proposed framework. Unlike object recognition, when the underlying CNN models are finetuned for the scene recognition task, success in both the RGB and depth modalities increases significantly. This is due to the need for cross-domain task adaptation of object-centric pretrained models. Therefore, findings similar to those in object recognition could be observed if scene-centric pretrained models were employed for scene recognition (e.g., Places \citep{Zhou_NIPS_2014}). Moreover, such pretrained models could further improve the results within our framework. Another potential improvement for scene recognition is embedding contextual knowledge by jointly employing an attention mechanism, such as that of \cite{Fukui_2019_CVPR}, in our structure. This work has been implemented as an extension of our previous work \citep{Caglayan_ECCVW_2018}; therefore, we have not explored further multi-modal architectures. However, instead of an SVM, combining level-wise outputs through a multilayer perceptron (MLP) might be more convenient for RGB-D multi-modal design. In particular, it would be interesting to use the soft voting approach proposed in this study with an MLP. In the future, we plan to investigate such an approach for RGB-D multi-modal tasks, focusing on the ultimate RGB-D fusion rather than the individual accuracies of the RGB and depth modalities. \section{Data Preparation} \label{sec:dataPreparation} Following common preprocessing practice, we resize RGB images to $256\times256$ using bilinear interpolation and apply center cropping to obtain $224\times224$ images. Then, we apply the commonly used z-score standardization to the input data, using the mean and standard deviation of ImageNet \citep{Deng_Imagenet_CVPR_2009}.
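The standardization step can be sketched as follows. The channel statistics shown are the values commonly used with ImageNet-pretrained models; the paper itself does not list them explicitly:

```python
import numpy as np

# ImageNet channel statistics commonly used with pretrained models
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406])
IMAGENET_STD = np.array([0.229, 0.224, 0.225])

def standardize(img):
    """z-score standardize an RGB image with values in [0, 1],
    shape (H, W, 3), channel-wise."""
    return (img - IMAGENET_MEAN) / IMAGENET_STD

# a toy mid-gray 224 x 224 image after resizing and center cropping
y = standardize(np.full((224, 224, 3), 0.5))
assert y.shape == (224, 224, 3)
```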
For the depth domain, we first need an appropriate RGB-like representation of the depth data in order to leverage the power of CNN models pretrained on the large-scale RGB dataset of ImageNet. There are several ways to represent depth data as RGB-like images, such as the HHA method of \cite{Gupta_ECCV_2014} (i.e. horizontal disparity, height above ground, and the angle of the local surface normal with the gravity direction), the ColorJet approach of \cite{Eitel_IROS_2015} (i.e. mapping depth values to different RGB color values), or the commonly used surface-normal-based colorization as in \citep{Bo_IROS_2011, Caglayan_ECCVW_2018}. In this work, we prefer the colorization technique based on surface normals, as it proved effective in our previous work \citep{Caglayan_ECCVW_2018}. However, unlike \citep{Caglayan_ECCVW_2018}, where surface normals were estimated from depth maps without camera parameters, we obtain more accurate normals by estimating them on 3D point clouds computed from the depth maps and the camera intrinsic parameters. To address the issue of missing depth values, we first apply a fast vectorized depth interpolation, using a median filter over a $5\times5$ neighborhood, to reconstruct missing values in noisy depth inputs. Then, the 3D point cloud is computed using the camera intrinsics, followed by surface normal calculation on the point cloud. After this, the common approach is to scale the surface normals to the $0 - 255$ range to fit RGB image processing. However, since such a mapping from floating point to integer values leads to a loss of information, we use the normal vectors as they are, without further quantization or scaling. Furthermore, unlike in RGB input processing, we resize these RGB-like depth data using nearest-neighbor interpolation rather than bilinear interpolation, because the latter may distort the geometric structure of a scene. 
Moreover, nearest-neighbor interpolation is better suited to the characteristics of depth data, as it preserves the separation between foreground and background in a scene. When applying z-score standardization to the depth domain, we use the ImageNet standard deviation as in the RGB domain, but a zero mean instead of the ImageNet mean, since the normal vectors already lie in the range $[-1, 1]$ and need no mean shifting. \begin{figure*}[!t] \centering \includegraphics[width=0.98\textwidth]{dataPreparation.eps} \caption{Illustration of the data preparation pipeline. From left to right, the images are: RGB, depth map (contrast enhanced for visualization), depth map after the interpolation process (contrast enhanced for visualization), 3D point cloud, and finally the colorized depth based on surface normals, which is given as input to the proposed model.} \label{fig:dataPreparation} \end{figure*} \section{Backbone Models} \label{sec:backboneModels} \begin{figure*}[!t] \centering \includegraphics[width=0.75\textwidth]{models.pdf} \caption{Schematic overview of the CNN models and their level-wise feature extraction points. Each level in the schematic shows the name of the level, the operations performed in it together with their repetition counts where applicable (for the ResNet \citep{He_CVPR_2016} and DenseNet \citep{Huang_CVPR_2017} models), and the dimensions of the activation output.} \label{fig:Models} \end{figure*} In this work, we employ several pretrained models available in PyTorch, including AlexNet \citep{Krizhevsky_NIPS_2012}, VGGNet \citep{Simonyan_ICLR_2015} (specifically the VGGNet-16 model with batch normalization), ResNet \citep{He_CVPR_2016} (specifically the ResNet-50 and ResNet-101 models), and DenseNet \citep{Huang_CVPR_2017}. We extract features from seven different levels of the CNN models. The models investigated in this study, together with the feature extraction levels, are shown in Fig. \ref{fig:Models}. 
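The core of the depth colorization pipeline — back-projecting the depth map with the camera intrinsics and computing surface normals on the resulting point cloud — can be sketched as follows. This is a minimal NumPy illustration under our own assumptions (a pinhole camera model with intrinsics `fx, fy, cx, cy`; the median-filter hole filling is omitted):

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Turn an H x W depth map into an H x W x 3 point cloud (pinhole model)."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w].astype(float)
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)

def surface_normals(points):
    """Unit normals from cross products of local point-cloud differences."""
    du = np.gradient(points, axis=1)   # change along image columns
    dv = np.gradient(points, axis=0)   # change along image rows
    n = np.cross(du, dv)
    norm = np.linalg.norm(n, axis=-1, keepdims=True)
    return n / np.maximum(norm, 1e-12)
```

The three normal components, lying in $[-1,1]$, are then fed to the network directly, without rescaling to $0$--$255$.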
\section{Computation Time and Memory Profiling on Different Models} \label{sec:profiling} \input{supp_table_profiling} We evaluate different baseline CNN models within our framework in terms of computational time and memory requirements. We evaluate the proposed framework in two parts: (\textit{i}) feature extraction, containing the CNN-RNN stages, and (\textit{ii}) classification, where a model based on the extracted features is learnt to distinguish the different classes. The batch size is set to 64 for all the models. Table \ref{table:cnnProfiling} reports computational times and memory workspaces for the whole data processing ($41,877$ images) on the Washington RGB-D dataset. The results are averaged over the two splits on RGB images. Depth data incurs an additional cost, as it must first be colorized. The results in this table cover the overall processing and classification of all $7$ level features. Moreover, it should be noted that the classification time covers both the training and testing processes, of which training carries the main computational burden. Therefore, the main cost in terms of processing time comes from training the SVM models, which run on the CPU, $7$ times. Processing only a single optimum level would reduce the computational time by a factor of roughly seven. Hence, using a single optimum level, or a fusion of selected levels, can be efficient in terms of time and memory requirements while still providing sufficient representations. 
\section{Effect of Multi-Level RNN Structure} \label{sec:multilevelRNNEffect} \begin{figure}[!t] \centering \includegraphics[width=0.6\columnwidth, keepaspectratio]{multilevelRNN.eps} \caption{Comparison of single-level and multi-level RNNs on two different CNN activations (L6 and L7) of AlexNet. The horizontal axis shows the average accuracy (\%) over two splits of the Washington RGB-D dataset.} \label{fig:multilevelRNN} \end{figure} An RNN in this study has a one-level structure with a single parent computation, which is computationally faster than multi-level RNN structures. Furthermore, it is easy to use, requiring no further processing to fix the required input forms. However, to verify the performance of single-level RNNs against multi-level RNNs, we compare the accuracy of 1-level RNNs with that of 3-level RNNs (see Fig. 2 in the main paper). To this end, we conduct experiments on the two CNN activation levels with the highest semantic information (L6 and L7) of the baseline AlexNet model. The average results over two splits for both RGB and depth data are shown in Fig. \ref{fig:multilevelRNN}. The results show that the 1-level RNN performs better than the 3-level RNN on RGB data, while the 3-level RNN is better on depth data. The better performance of the 3-level RNN on depth data might be due to the CNN model being pretrained on the RGB data of ImageNet; the additional processing might extract more representative information from depth data. Therefore, this difference might diminish, or turn in favor of 1-level RNNs, when finetuned CNNs are used for the depth modality as well. Overall, considering RGB and depth data together, 1-level RNNs are preferable in terms of accuracy as well. 
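A single-level random RNN of the kind discussed above can be sketched as follows. This is our own minimal interpretation, not the released code: each of $R$ untrained RNNs collapses a $K\times N\times N$ CNN activation to a $K$-dimensional vector with one fixed random parent weight matrix and a $\tanh$ nonlinearity:

```python
import numpy as np

def random_rnn_features(activation, num_rnns=8, seed=0):
    """activation: K x N x N CNN output. Returns a (num_rnns * K)-dim vector.

    One-level structure: all N*N spatial children are merged in a single
    parent computation p = tanh(W @ [x_1; ...; x_{N*N}]), with W random
    and untrained.
    """
    k, n, _ = activation.shape
    # Concatenate the N*N child vectors (each K-dimensional) into one vector.
    children = activation.reshape(k, n * n).T.reshape(-1)
    rng = np.random.default_rng(seed)
    feats = []
    for _ in range(num_rnns):
        w = rng.standard_normal((k, k * n * n)) / np.sqrt(k * n * n)
        feats.append(np.tanh(w @ children))
    return np.concatenate(feats)
```

A 3-level variant would instead merge $2\times2$ neighborhoods recursively, applying the same random weight at each level.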
\input{supp_table_fusion_levels} \section{Empirical Performance of Different Fusion Strategies} \label{sec:fusionStrategies} We have shown that a fixed pretrained CNN model together with random RNNs already achieves impressive results using a single level. Likewise, when such pretrained models are finetuned on depth data, the results are boosted greatly. The best single levels for RGB and depth data, respectively, are L4 and L5 for AlexNet; L5 and L6 for ResNet-101; and L6 and L7 for DenseNet-121. Next, to further improve accuracy, we carry out an empirical analysis of multi-level fusion using fixed pretrained CNN models on RGB data and finetuned CNN models on depth data. In this work, in addition to the feature concatenation of our previous work \citep{Caglayan_ECCVW_2018}, we also apply average voting based on SVM confidence scores over the best performing levels. Table \ref{table:levelFusions} reports the average accuracy on all 10 train/test splits of the Washington RGB-D dataset for AlexNet, DenseNet-121, and ResNet-101. The table shows the top four level results (best levels) for each modality and their fusion combinations. The best four levels (LB1, LB2, LB3, LB4) of AlexNet are (L4, L5, L6, L3) on RGB data and (L5, L6, L7, L4) on depth data, respectively; those of ResNet-101 are (L5, L6, L4, L7) on RGB data and (L6, L7, L5, L4) on depth data, respectively; and for DenseNet-121 these levels are (L6, L5, L7, L4) on RGB and (L7, L6, L5, L4) on depth data, respectively. As can be seen from the table, a single level already produces very good results. Since both the model structures and the data modality characteristics differ, the best results in each column generally vary depending on the data type and the model used. Nevertheless, in general, average voting over SVM confidence scores gives better results compared to feature concatenation. We can also see that fusing more levels does not necessarily increase accuracy. 
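The two fusion strategies compared in Table \ref{table:levelFusions} can be sketched as follows (an illustrative NumPy version; the variable layout is our own assumption: `scores[l]` holds the per-class SVM confidence scores of level $l$ for a batch of samples, and `feats[l]` the corresponding feature matrices):

```python
import numpy as np

def fuse_by_voting(scores):
    """Average voting over per-level SVM confidence scores.

    scores: list of (num_samples x num_classes) arrays, one per selected level.
    Returns the predicted class index for each sample.
    """
    avg = np.mean(np.stack(scores), axis=0)
    return np.argmax(avg, axis=1)

def fuse_by_concat(feats):
    """Feature concatenation; a single SVM is then trained on the result.

    feats: list of (num_samples x dim_l) arrays, one per selected level.
    """
    return np.concatenate(feats, axis=1)
```

Note that voting fuses after classification (one SVM per level), while concatenation fuses before it (one SVM on the joint feature).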
In general, the optimum results are achieved with SVM average voting over the best three levels for the ResNet-101 and DenseNet-121 models. \section{Performance of Finetuned CNN-only Semantic Features} \label{sec.cnnOnly} In our previous work \citep{Caglayan_ECCVW_2018}, we compared CNN-only features with combined CNN-RNN features at the best performing middle layers and showed the advantage of using CNN-RNN. On the other hand, as stated in the main paper, features obtained from the semantically rich final layers are often utilized in many approaches \citep{Razavian_CVPRW_2014, Schwarz_ICRA_2015, Girshick_CVPR_2014, Sermanet_ICLR_2014, Farabet_TPAMI_2013}. Therefore, here we also extract features from the final semantic layer, level 7, classify them with a linear SVM, and report the results in Table~\ref{table:SemanticCNNs}. We use the finetuned AlexNet, DenseNet-121, and ResNet-101 models. RGB-D results are computed by averaging the RGB and depth SVM confidence scores. The results confirm that the backbone models produce impressive performance when the CNN models are finetuned properly (see the finetuning parameter setup in the main paper for more details). In addition, when the CNN features are strengthened through the incorporation of RNNs (see the object recognition performance in the main paper), the results are better than those of the CNN-only semantic features. Moreover, this comes at no extra training cost, since the RNNs in our model do not require training. 
\begin{table}[!h] \caption{Average accuracy of semantic CNN features (L7 features) without RNNs, using different finetuned baseline models on the Washington RGB-D dataset (\%).} \begin{center} \setlength{\tabcolsep}{0.9em} \renewcommand{\arraystretch}{1.2} \begin{adjustbox}{width=0.4\columnwidth} \begin{tabular}{ lccc } \hline Accuracy & RGB & Depth & RGB-D \\ \hline \hline AlexNet & 74.3 $\pm$ 2.8 & 83.1 $\pm$ 2.2 & 89.0 $\pm$ 1.9 \\ DenseNet-121 & 88.8 $\pm$ 1.2 & 86.7 $\pm$ 2.1 & 93.2 $\pm$ 1.6 \\ ResNet-101 & 89.0 $\pm$ 0.8 & 86.7 $\pm$ 2.5 & 92.8 $\pm$ 1.4 \\ \hline \end{tabular} \end{adjustbox} \label{table:SemanticCNNs} \end{center} \end{table} \input{sec8_references} \end{document}
\section{Introduction} \noindent We study the initial-value problem for the nonlinear Schr\"odinger equation with Coulomb potential \begin{align} \label{equ1.1} \begin{cases} (i\partial_t-\mathcal{L}_K)u= \lambda f(|u|^2)u,\quad (t,x)\in\mathbb{R}\times\mathbb{R}^3, \\ u(0,x)=u_0(x)\in H^1(\mathbb{R}^3), \quad x\in\mathbb{R}^3, \end{cases} \end{align} where $u:\mathbb{R}_t\times\mathbb{R}_x^3\to \mathbb{C},\; \mathcal{L}_K=-\Delta-\frac{K}{|x|}$ with $K\in\mathbb{R}$, $f(|u|^2)=|u|^{p-1}$, and $\lambda\in\{\pm1\}$, with $\lambda=1$ known as the defocusing case and $\lambda=-1$ as the focusing case. The study of the operator $\mathcal{L}_K=-\Delta-K|x|^{-1}$ with the Coulomb potential originates from both physical and mathematical interests. In particular, when $K$ is positive, this operator provides a quantum mechanical description of the Coulomb force between two charged particles and corresponds to an external attractive long-range potential due to the presence of a positively charged atomic nucleus. We refer the reader to \cite{Mess, Ser} for more on these models of the hydrogen atom in quantum physics. The mathematical interest in these equations, however, comes from the theory of operators with a long-range decaying potential and the dispersive behavior of the solution. Since $|x|^{-1}\in L^2(\mathbb{R}^3)+L^\infty(\mathbb{R}^3),$ we know from \cite[Theorem X.15]{RS} that $\mathcal{L}_K$ is essentially self-adjoint on $C_0^\infty(\mathbb{R}^3)$ and self-adjoint on $D(-\Delta)$. We refer the reader to \cite{RS,Taylor} for more on the theory of this operator. The nonlinear equation \eqref{equ1.1} and many of its variants have been studied extensively in the literature. In particular, the existence of a unique strong global-in-time solution to \eqref{equ1.1} with Hartree nonlinearity $f(|u|^2)=|x|^{-1}\ast|u|^2$ goes back to \cite{CG}. 
When $K\leq 0$, the solution $u(t)$ to \eqref{equ1.1} with the Hartree nonlinearity was studied in \cite{DF, HO}, where global existence and a decay rate for the solution were proved; however, the initial data is required to lie in a weighted $L^2$ space. When $K>0$, Lenzmann and Lewin \cite{LeLe} proved that the following time-averaged estimates hold for every $R>0$: \begin{equation} \limsup_{T\to\infty}\frac1{T}\int_0^T\int_{|x|\leq R}|u(t,x)|^2 dx dt\leq 4K \end{equation} and \begin{equation} \limsup_{T\to\infty}\frac1{T}\int_0^T\int_{|x|\leq R}|\nabla u(t,x)|^2 dx dt\leq K^3, \end{equation} which is related to the RAGE theorem (see Reed-Simon \cite{RS}). In this paper, we study the Cauchy problem for the nonlinear Schr\"odinger equation \eqref{equ1.1} with initial data in the energy space $H^1(\mathbb{R}^3)$. The Cauchy problem for the nonlinear Schr\"odinger equation without potential, i.e. $K=0$, including the global existence and scattering theory, has been intensively studied in \cite{Cav,GV79}. Due to the perturbation by the long-range potential, many of the basic tools used to study the nonlinear Schr\"odinger equation are modified or even fail. When $K>0$, we only have a local-in-time Strichartz estimate, and the global-in-time Strichartz estimate fails. We therefore show that the solution of \eqref{equ1.1} exists globally but does not scatter. Fortunately, in the case $K<0$, Mizutani \cite{Miz} recently obtained the global-in-time Strichartz estimate by employing several techniques from scattering theory, such as the long-time parametrix construction of Isozaki-Kitada type \cite{IK}, propagation estimates and local decay estimates. In this repulsive case, we will establish an interaction Morawetz estimate for the defocusing problem, which provides a decay estimate for the solution $u$ of \eqref{equ1.1}. Combining this with the global-in-time Strichartz estimate of \cite{Miz}, we obtain the scattering theory in the repulsive, defocusing case. 
It is worth mentioning that in the proof of the scattering theory, we also need a chain rule, which is established by proving the equivalence of Sobolev norms via heat kernel estimates, as we did in \cite{KMVZZ-Sobolev,ZZ}. Even though we obtain several results for this Cauchy problem, the whole picture for the nonlinear Schr\"odinger equation with Coulomb potential is far from complete, for example, the scattering theory in the energy-critical case. Equation \eqref{equ1.1} admits a number of symmetries in $H^1(\mathbb{R}^3)$, explicitly: $\bullet$ {\bf Phase invariance:} if $u(t,x)$ solves \eqref{equ1.1}, then so does $e^{i\gamma}u(t,x),~\gamma\in\mathbb{R};$ $\bullet$ {\bf Time translation invariance:} if $u(t,x)$ solves \eqref{equ1.1}, then so does $u(t+t_0,x),~t_0\in\mathbb{R}$. From the Ehrenfest law or direct computation, these symmetries induce conserved quantities in the energy space, namely the mass \begin{equation}\label{equ:mass} M(u)=\int_{\mathbb{R}^3} |u(t,x)|^2\;dx=M(u_0) \end{equation} and the energy \begin{equation}\label{equ:energy} E(u)=\int_{\mathbb{R}^3}\Big(\frac12|\nabla u|^2-\frac{K}2\frac{|u|^2}{|x|}+\frac{\lambda}{p+1}|u|^{p+1}\Big)\;dx. \end{equation} In contrast to the classical Schr\"odinger equation (i.e. \eqref{equ1.1} with $K=0$), equation \eqref{equ1.1} is not {\bf space translation invariant}, and hence the momentum $$P(u):={\rm Im}\int_{\mathbb{R}^3}\bar{u}\nabla u\;dx$$ is not conserved. Removing the potential term $\frac{K}{|x|}u$, one recovers the classical nonlinear Schr\"odinger equation: \begin{align} \label{equ:nls} \begin{cases} (i\partial_t+\Delta)u= \lambda |u|^{p-1}u,\quad (t,x)\in\mathbb{R}\times\mathbb{R}^3, \\ u(0,x)=u_0(x)\in H^1(\mathbb{R}^3), \end{cases} \end{align} which is scaling invariant. That is, the class of solutions to \eqref{equ:nls} is left invariant by the scaling \begin{equation}\label{scale} u(t,x)\mapsto \mu^{\frac2{p-1}}u(\mu^2t, \mu x),\quad\mu>0. 
\end{equation} Moreover, one can check that the only homogeneous $L_x^2$-based Sobolev space left invariant under \eqref{scale} is $\dot{H}_x^{s_c}(\mathbb{R}^3)$ with $s_c:=\tfrac{3}2-\tfrac2{p-1}$. When $s_c<1$, the problem is called energy-subcritical; when $s_c=1$, it is known as energy-critical. There is a large body of work on these problems; we refer the reader to \cite{Bo99a,Cav,CKSTT07,GV85,RV,Visan2007} for the defocusing case in the energy-subcritical and energy-critical settings, and to \cite{Dodson4,DM1,DM2,DHR,HR,KM,KV20101} for the focusing case. It is known that the defocusing case differs from the focusing one due to the opposite signs of the kinetic and potential energies. In this paper, we mainly consider the influence of the long-range potential $K|x|^{-1}$ on the existence and scattering theory for the nonlinear Schr\"odinger equation. We will find that some results, e.g. global existence, are the same as for \eqref{equ:nls}; however, in particular when $K>0$, some results are quite different. For example, the solution exists globally regardless of the sign of $K$, but it scatters when $K<0$ and fails to scatter when $K>0$, even in the defocusing case. As mentioned above, the focusing case differs from the defocusing case. In the focusing case $(\lambda=-1)$, we will also use the energy without potential $$E_0(u):=\frac12\int_{\mathbb{R}^3}|\nabla u(t,x)|^2\;dx-\frac1{p+1}\int_{\mathbb{R}^3}|u(t,x)|^{p+1}\;dx$$ to give the threshold for the global existence/blow-up dichotomy. 
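The conservation of the mass \eqref{equ:mass} can also be observed numerically. The sketch below is an illustrative 1D toy, not the 3D problem: a split-step Fourier integrator for $i\partial_t u=-\partial_x^2u+Vu+\lambda|u|^{p-1}u$, where the softened attractive potential $V(x)=-K/\sqrt{1+x^2}$ is our own stand-in for the Coulomb term. Both substeps are unitary on $L^2$, so the discrete mass is conserved up to round-off:

```python
import numpy as np

def split_step_mass_drift(K=1.0, lam=1.0, p=3, L=40.0, n=512, dt=1e-3, steps=200):
    """Evolve a Gaussian initial datum and return the relative mass drift."""
    x = np.linspace(-L / 2, L / 2, n, endpoint=False)
    xi = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
    v = -K / np.sqrt(1.0 + x ** 2)                # softened Coulomb-like potential
    u = np.exp(-x ** 2).astype(complex)
    mass0 = np.sum(np.abs(u) ** 2) * (L / n)
    for _ in range(steps):
        # potential + nonlinear step: a phase factor, preserves |u| pointwise
        u *= np.exp(-1j * dt * (v + lam * np.abs(u) ** (p - 1)))
        # kinetic step: a Fourier multiplier of modulus one
        u = np.fft.ifft(np.exp(-1j * dt * xi ** 2) * np.fft.fft(u))
    mass = np.sum(np.abs(u) ** 2) * (L / n)
    return abs(mass - mass0) / mass0
```

The energy \eqref{equ:energy} is only approximately conserved by this splitting, but the mass drift stays at the level of floating-point round-off.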
Following the same argument as in \cite{KMVZ,LMM}, which consider NLS with an inverse-square potential, in the case $K<0$ we will consider initial data below the threshold of the ground state $Q$ of the classical elliptic equation \begin{equation}\label{equ:ground} -\Delta Q+Q=Q^p,\quad 1<p<5, \end{equation} due to the sharp constant in the Gagliardo-Nirenberg inequality \begin{equation}\label{equ:gni} \|f\|_{L^{p+1}}^{p+1}\leq C_K\|f\|_{L^2}^\frac{5-p}{2}\|\sqrt{\mathcal{L}_K}f\|_{L^2}^\frac{3(p-1)}{2}= C_K\|f\|_{L^2}^\frac{5-p}{2}\Big(\|f\|_{\dot{H}^1}^2-K\int\tfrac{|f|^2}{|x|}\;dx\Big)^\frac{3(p-1)}{4}. \end{equation} Let $C_0$ be the sharp constant in the classical Gagliardo-Nirenberg inequality \begin{equation}\label{equ:clagn} \|f\|_{L^{p+1}}^{p+1}\leq C_0\|f\|_{L^2}^\frac{5-p}{2}\|f\|_{\dot{H}^1}^\frac{3(p-1)}{2}. \end{equation} We claim that $C_K=C_0$. It is well known that equality in \eqref{equ:clagn} is attained by $Q$, but we will see that equality in \eqref{equ:gni} with $K<0$ is never attained. Indeed, testing \eqref{equ:gni} on translates of $Q$ and using the sharpness of \eqref{equ:clagn}, we find $$\lim_{n\to\infty}\frac{ \|Q\|_{L^{p+1}}^{p+1}}{\|Q\|_{L^2}^\frac{5-p}{2}\Big(\|Q\|_{\dot{H}^1}^2-K\int\tfrac{|Q(x)|^2}{|x-ne_1|}\;dx\Big)^\frac{3(p-1)}{4}}=\frac{ \|Q\|_{L^{p+1}}^{p+1}}{\|Q\|_{L^2}^\frac{5-p}{2}\|Q\|_{\dot{H}^1}^\frac{3(p-1)}{2}}=C_0,$$ where $e_1$ denotes a fixed unit vector. Thus, $C_0\leq C_K.$ On the other hand, for any $f\in H^1\setminus\{0\}$ and $K<0$, the standard Gagliardo-Nirenberg inequality implies $$\|f\|_{L^{p+1}}^{p+1}\leq C_0\|f\|_{L^2}^\frac{5-p}{2}\|f\|_{\dot{H}^1}^\frac{3(p-1)}{2}< C_0\|f\|_{L^2}^\frac{5-p}{2}\|\sqrt{\mathcal{L}_K}f\|_{L^2}^\frac{3(p-1)}{2}.$$ Thus $C_K = C_0$, and the last estimate also shows that equality in \eqref{equ:gni} is never attained. In the energy-critical case ($s_c=1$), we consider the ground state $W$ of the elliptic equation $$-\Delta W=W^5,$$ due to the sharp constant in the Sobolev embedding. 
We refer to \cite{BL,GNN,Kw} for the existence and uniqueness of the ground state.\vspace{0.2cm} Now, we state our main results. First, we consider the global well-posedness theory for problem \eqref{equ1.1} under some restrictions. In the energy-subcritical case (i.e. $p-1<4$), global well-posedness will follow from the local well-posedness theory and the uniform kinetic energy control \begin{equation}\label{equ:uniforkinetic1231} \sup_{t\in I}\|u(t)\|_{\dot{H}^1(\mathbb{R}^3)}\leq C(E(u_0),M(u_0)). \end{equation} The local well-posedness will be proved by the standard fixed point argument combined with the Strichartz estimate on Lorentz spaces. In the energy-critical case ($p-1=4$), we will show global well-posedness by controlling the global kinetic energy \eqref{equ:uniforkinetic1231} and proving ``good local well-posedness''. More precisely, using a perturbation argument as in Zhang \cite{Zhang} and the global well-posedness for equation \eqref{equ:nls} under some restrictions, we will show that there exists a small constant $T=T(\|u_{0}\|_{H^{1}_{x}})$ such that \eqref{equ1.1} is well-posed on $[0,T]$; this is the so-called ``good local well-posedness''. On the other hand, since the equation in \eqref{equ1.1} is time translation invariant, this ``good local well-posedness'' combined with the global kinetic energy control \eqref{equ:uniforkinetic1231} immediately gives global well-posedness. We remark that this argument also works in the energy-subcritical case. \begin{theorem}[Global well-posedness]\label{thm:global} Let $K\in\mathbb{R}$ and $u_0\in H^1(\mathbb{R}^3)$. Suppose that $0<p-1\leq4$ in the defocusing case $\lambda=1$. For the focusing case $\lambda=-1$, we assume that $0<p-1<\frac43$ $($mass-subcritical$)$ or \begin{itemize} \item If $p-1=\frac43$ $($mass-critical$)$, assume $M(u_0)<M(Q)$. 
\item If $\frac43<p-1<4$ and $K<0$, assume \footnote{For $K<0$ and $\lambda=-1$, we remark that under the assumption $M(u_0)^{1-s_c}E(u_0)^{s_c}<M(Q)^{1-s_c}E_0(Q)^{s_c}$, the condition $\|u_0\|_{L^2}^{1-s_c}\|u_0\|_{\dot{H}^1}^{s_c}<\|Q\|_{L^2}^{1-s_c}\|Q\|_{\dot{H}^1}^{s_c}$ is equivalent to $\|u_0\|_{L^2}^{1-s_c}\Big(\|u_0\|_{\dot{H}^1}^2-K\big\||x|^{-\frac12}u_0\big\|_{L^2}^2\Big)^{\frac{s_c}2}<\|Q\|_{L^2}^{1-s_c}\|Q\|_{\dot{H}^1}^{s_c}.$ See Remark \ref{rem:tde}.} \begin{equation}\label{equ:mecond} M(u_0)^{1-s_c}E(u_0)^{s_c}<M(Q)^{1-s_c}E_0(Q)^{s_c},\; \|u_0\|_{L^2}^{1-s_c}\|u_0\|_{\dot{H}^1}^{s_c}<\|Q\|_{L^2}^{1-s_c}\|Q\|_{\dot{H}^1}^{s_c}. \end{equation} \item If $p-1=4$ $($energy-critical$)$ and $K<0$, assume that $u_0$ is radial\footnote{Here the restriction $K<0$ leads us to utilize the result of Kenig-Merle \cite{KM}, in which one needs radial initial data.} and \begin{equation}\label{equ:energythre} E(u_0)<E_0(W),\; \|u_0\|_{\dot{H}^1}<\|W\|_{\dot{H}^1}. \end{equation} \end{itemize} Then, there exists a unique global solution $u(t,x)$ to \eqref{equ1.1} such that \begin{equation}\label{equ:ugolbaunif} \|u\|_{L_t^q(I,H^{1,r})}\leq C(\|u_0\|_{H^1},|I|) \end{equation} for any compact $I\subset\mathbb{R}$ and any admissible pair $(q, r)\in\Lambda_0$ defined below. \end{theorem} \begin{remark}The global existence theory is essentially complete in the defocusing case, regardless of whether the potential is repulsive or attractive. The focusing case is more complicated, and the blow-up result below complements this global existence result. \end{remark} Next, for the global solution $u$ to equation \eqref{equ1.1}, we study the long-time behavior of the solution, such as scattering. We say that a global solution $u$ to \eqref{equ1.1} \emph{scatters} if there exist $u_\pm\in H_x^1(\mathbb{R}^3)$ such that \[ \lim_{t\to\pm\infty} \| u(t) - e^{-it\mathcal{L}_K}u_{\pm} \|_{H_x^1(\mathbb{R}^3)} = 0. 
\] From the argument in the proof of the well-posedness theory, one might hope to regard the long-range potential term $\frac{K}{|x|}u$ as a nonlinear perturbation term (from scaling analysis, it behaves like the cubic nonlinearity $|u|^2u$). However, by Reed-Simon \cite{RS}, we know that the limits $$s-\lim_{t\to\pm\infty}e^{it\mathcal{L}_K}e^{it\Delta}\quad \text{in}\quad L^2(\mathbb{R}^3)$$ do not exist. Therefore, we cannot regard the potential term $\frac{K}{|x|}u$ as a nonlinear perturbation in the scattering theory. We refer the reader to several different constructions of wave operators in the long-range case, such as the momentum approach \cite{Hor}, the Isozaki-Kitada method \cite{IK} and the position approach \cite{DG,Ya}. On the other hand, standard arguments show that scattering is equivalent to global Strichartz-norm boundedness $(\|u\|_{L_t^q(\mathbb{R};L_x^r(\mathbb{R}^3))}<+\infty)$, provided that we have the global-in-time Strichartz estimate. However, in the attractive case, i.e. $K>0$, the global-in-time Strichartz estimate does not hold; see Subsection \ref{sub:stricharz} below. Thus, we do not know whether the solution $u$ to \eqref{equ1.1} with $K>0$ scatters or not, even for small initial data. For the repulsive case, i.e. $K<0$, the global-in-time Strichartz estimates were recently established by Mizutani \cite{Miz}. Then, combining them with the Sobolev norm equivalence \eqref{equ:sobequi123} below, one can easily obtain the scattering result for small initial data. For general initial data, we will obtain the scattering result in the defocusing energy-subcritical case ($\lambda=1,~p<5$) by establishing an interaction Morawetz estimate, which gives global Strichartz-norm boundedness. In the case $K>0$, we know from \cite[Lemma 6]{BJ} that there is a positive solution $f(x)\in H^2$ of the elliptic equation \begin{equation}\label{equ:by} -\Delta f-\frac{K}{|x|}f+f+f^p=0. 
\end{equation} This implies that the soliton $u(t,x):=e^{it}f(x)$ solves \eqref{equ1.1} with $\lambda=1$. We remark that such a soliton is global but does not scatter. Equation \eqref{equ:by} arises in the Thomas-Fermi-von Weizs\"acker (TFW) theory of atoms and molecules \cite{BBL,Lieb} without electronic repulsion. There, $K|x|^{-1}$ is the electric potential due to a fixed nucleus of atomic number $K$ located at the origin, $f(x)^2$ stands for the electronic density and $\int f(x)^2dx$ is the total number of electrons. For the case $K\leq0$, we will derive the quadratic Morawetz identity for \eqref{equ1.1} and then establish the following interaction Morawetz estimate for $\lambda=1$: \begin{equation}\label{equ:intmorest} \int_{\mathbb{R}}\int_{\mathbb{R}^3}|u(t,x)|^4\;dx\;dt\leq CM(u_0)\sup_{t\in\mathbb{R}}\|u(t,\cdot)\|_{\dot{H}^\frac12}^2, \end{equation} which provides decay of the solution $u$ to \eqref{equ1.1}. Combining this with the Strichartz estimate and the Leibniz rule obtained from the following Sobolev norm equivalence \begin{equation}\label{equ:sobequi123} \big\|\sqrt{1+\mathcal{L}_K}f\big\|_{L^p(\mathbb{R}^3)}\simeq \big\| \sqrt{1-\Delta}f\big\|_{L^p(\mathbb{R}^3)},\quad 1<p<3, \end{equation} we establish the following scattering theory. \begin{theorem}[Scattering theory]\label{thm:scattering} Let $K\leq0,\; \frac43<p-1<4,\;\lambda=1$ and $u_0\in H^1(\mathbb{R}^3)$. Then, there exists a global solution $u$ to \eqref{equ1.1}, and it scatters in the sense that there exist $u_{\pm}\in H^1(\mathbb{R}^3)$ such that \begin{equation}\label{equ:uscat} \lim_{t\to\pm\infty}\big\|u(t,\cdot)-e^{-it\mathcal{L}_K}u_{\pm}\big\|_{H^1(\mathbb{R}^3)}=0. \end{equation} \end{theorem} In the focusing case, i.e. $\lambda=-1$, the classical virial argument yields a blow-up result for negative energy. \begin{theorem}[Blow-up result]\label{thm:blowup} Let $K\in\mathbb{R},\; \frac43< p-1\leq4,\;\lambda=-1$. 
$(i)$ Let $u_0\in \Sigma:=\{u_0\in H^1,\; xu_0\in L^2\}$. Then, the solution $u$ to \eqref{equ1.1} blows up in both time directions in any of the following three cases: \begin{enumerate} \item $C(E(u_0),M(u_0))<0$; \item $C(E(u_0),M(u_0))=0,\; y'(0)<0$; \item $C(E(u_0),M(u_0))>0,\; y'(0)^2\geq 24(p-1) C(E(u_0),M(u_0))\big\||x|u_0\big\|_{L^2(\mathbb{R}^3)}^2$; \end{enumerate} where $$y'(0)=4{\rm Im}\int_{\mathbb{R}^3}x\cdot\nabla u_0\bar{u}_0\;dx$$ and \begin{equation}\label{equ:cem} C(E(u_0),M(u_0)):= \begin{cases} E(u_0) \quad\text{if}\quad K\leq0,\\ E(u_0)+\frac{3K^2}{2(3p-7)(p-1)}M(u_0)\quad\text{if}\quad K>0. \end{cases} \end{equation} $(ii)$ Let $u_0\in H^1(\mathbb{R}^3)$ be radial, and assume that $C(E(u_0),M(u_0))<0.$ Then, the solution $u$ to \eqref{equ1.1} blows up in both time directions. \end{theorem} The paper is organized as follows. In Section $2$, as preliminaries, we give some notation, recall the Strichartz estimate and prove the equivalence of Sobolev norms. Section $3$ is devoted to proving global well-posedness, i.e. Theorem \ref{thm:global}. We establish the interaction Morawetz-type estimates in Section $4$, and we utilize these Morawetz-type estimates and the equivalence of Sobolev norms to prove Theorem \ref{thm:scattering}. Finally, we use the virial argument to obtain the blow-up result (Theorem \ref{thm:blowup}) in Section 5. \subsection*{Acknowledgements} The authors were supported by NSFC Grants 11771041, 11831004. We are grateful to R. Killip, J. Murphy and M. Visan for useful discussions. \section{Preliminaries} In this section, we first introduce some notation and then recall the Strichartz estimates. We conclude the section by showing the equivalence of the Sobolev spaces defined via the operator $\mathcal{L}_K$ and via the Laplacian $-\Delta$. \subsection{Notations} First, we give some notation which will be used throughout this paper. To simplify the expression of our inequalities, we introduce the symbols $\lesssim, \thicksim, \ll$. 
If $X, Y$ are nonnegative quantities, we use $X\lesssim Y $ or $X=O(Y)$ to denote the estimate $X\leq CY$ for some constant $C$, and $X \thicksim Y$ to denote the estimate $X\lesssim Y\lesssim X$. We denote by $a_{\pm}$ any quantity of the form $a\pm\epsilon$ for any $\epsilon>0$. For a spacetime slab $I\times\mathbb{R}^3$, we write $L_t^q L_x^r(I\times\mathbb{R}^3)$ for the Banach space of functions $u:I\times\mathbb{R}^3\to\mathbb{C}$ equipped with the norm $$\|u\|_{L_t^q(I;L_x^r(\mathbb{R}^3))}:=\bigg(\int_I \|u(t,\cdot)\|_{L_x^r(\mathbb{R}^3)}^q\;dt\bigg)^{1/q},$$ with the usual adjustments when $q$ or $r$ is infinity. When $q=r$, we abbreviate $L_t^qL_x^q=L_{t,x}^q$. We will also often abbreviate $\|f\|_{L_x^r(\mathbb{R}^3)}$ to $\|f\|_{L_x^r}.$ For $1\leq r\leq\infty$, we use $r'$ to denote the dual exponent to $r$, i.e. the solution to $\tfrac{1}{r}+\tfrac{1}{r'}=1.$ The Fourier transform on $\mathbb{R}^3$ is defined by \begin{equation*} \aligned \widehat{f}(\xi):= \big( 2\pi \big)^{-\frac{3}{2}}\int_{\mathbb{R}^3}e^{- ix\cdot \xi}f(x)dx , \endaligned \end{equation*} giving rise to the fractional differentiation operators $|\nabla|^{s}$ and $\langle\nabla\rangle^s$, defined by \begin{equation*} \aligned \widehat{|\nabla|^sf}(\xi):=|\xi|^s\hat{f}(\xi),~~\widehat{\langle\nabla\rangle^sf}(\xi):=\langle\xi\rangle^s\hat{f}(\xi), \endaligned \end{equation*} where $\langle\xi\rangle:=1+|\xi|$. This allows us to define the homogeneous and inhomogeneous Sobolev norms $$\|u\|_{\dot{W}^{s,p}(\mathbb{R}^3)}=\big\||\nabla|^su\big\|_{L^p},\; \|u\|_{{W}^{s,p}(\mathbb{R}^3)}=\big\|\langle\nabla\rangle^su\big\|_{L^p}.$$ In particular, for $p=2$, we denote $\dot{W}^{s,2}(\mathbb{R}^3)=\dot{H}^s(\mathbb{R}^3)$ and ${W}^{s,2}(\mathbb{R}^3)=H^s(\mathbb{R}^3).$ Next, we recall the well-known Lorentz spaces and some of their properties for our purposes. 
Given a measurable function $f: \mathbb{R}^3\to \mathbb{C}$, define the distribution function of $f$ as $$f_\ast(t)=\mu(\{x\in \mathbb{R}^3: |f(x)|> t\}),\quad t>0$$ and its rearrangement function as $$f^*(s)=\inf\{t: f_\ast(t)\leq s\}.$$ For $1\leq p<\infty$ and $1\leq r\leq \infty$, define the Lorentz quasi-norm \begin{equation*} \|f\|_{L^{p,r}(\mathbb{R}^3)}=\begin{cases}\Big(\int_0^\infty(s^{\frac1p}f^*(s))^r\frac{ds}{s}\Big)^{1/r}, &\quad 1\leq r<\infty;\\ \sup\limits_{s>0} s^{\frac1p}f^*(s),&\qquad r=\infty. \end{cases} \end{equation*} The Lorentz space $L^{p,r}(\mathbb{R}^3)$ denotes the space of complex-valued measurable functions $f$ on $\mathbb{R}^3$ such that its quasi-norm $\|f\|_{L^{p,r}(\mathbb{R}^3)}$ is finite. From this characterization, $L^{p,\infty}(\mathbb{R}^3)$ is the usual weak $L^p$ space, $L^{p,p}(\mathbb{R}^3)=L^p(\mathbb{R}^3)$ and $L^{p,r}(\mathbb{R}^3)\subset L^{p,\tilde{r}}(\mathbb{R}^3)$ with $r<\tilde{r}$. We refer to O'Neil \cite{Neil} for the following H\"older inequality in Lorentz space. \begin{proposition}[H\"older's inequality in Lorentz space]\label{Lorentz} Let $1\leq p, p_0, p_1<\infty$ and $1\leq r, r_0, r_1\leq \infty$, then \begin{equation} \|fg\|_{L^{p,r}}\leq C \|f\|_{L^{p_0,r_0}} \|g\|_{L^{p_1,r_1}}, \quad \frac1p=\frac{1}{p_0}+\frac{1}{p_1}, ~~\frac1r=\frac{1}{r_0}+\frac{1}{r_1}. \end{equation} \end{proposition} \subsection{Strichartz estimate}\label{sub:stricharz} It is well known that the Strichartz estimate is very useful in the study of the nonlinear dispersive equations. To state the result, we define \begin{equation} \Lambda_0=\big\{(q,r):\;\tfrac2q=3\big(\tfrac12-\tfrac1r\big), q,r\geq2\big\}. \end{equation} \begin{theorem}[Local-in-time Strichartz estimate]\label{thm:locastri} Let $K\in\mathbb{R}$ and $\mathcal{L}_K$ be as above. For $(q,r)\in\Lambda_0$, there holds \begin{equation}\label{equ:locainstr} \|e^{it\mathcal{L}_K}f\|_{L_t^q(I,L_x^r)}\leq C(|I|)\|f\|_{L_x^2}. 
\end{equation} \end{theorem} \begin{proof} The proof is based on a perturbation argument. Let $u(t,x)=e^{it\mathcal{L}_K}f$; then $u$ satisfies $$i\partial_tu+\Delta u=-\frac{K}{|x|}u,\quad u(0,x)=f(x).$$ We regard the Coulomb potential as an inhomogeneous term, hence we have by Duhamel's formula $$e^{it\mathcal{L}_K}f=u(t)=e^{it\Delta}f+iK\int_0^te^{i(t-s)\Delta}\frac{u(s)}{|x|}\;ds.$$ For our purpose, we recall the inhomogeneous Strichartz estimate without potential on Lorentz spaces. \begin{lemma}[Strichartz estimate for $e^{it\Delta}$, \cite{KT,Plan}]\label{lem:locinstr} For $(q,r),(q_1,r_1)\in \Lambda_0$, we have \begin{equation}\label{equ:edstrlor} \begin{split} \|e^{it\Delta}f\|_{L_t^q(I,L_x^r)}&\leq C\|f\|_{L_x^2};\\ \Big\|\int_0^t e^{i(t-s)\Delta}F(s)\;ds\Big\|_{L_t^q(I,L_x^{r,2})}&\leq C\|F(t,x)\|_{L_t^{q_1'}(I,L_x^{r_1',2})}, \end{split} \end{equation} where $\tfrac{1}{q_1}+\tfrac{1}{q_1'}=1$. \end{lemma} Using the above lemma, we obtain \begin{align*} \|e^{it\mathcal{L}_K}f\|_{L_t^q(I,L_x^{r})}\leq C\|f\|_{L^2(\mathbb{R}^3)}+|K|\Big\|\int_0^te^{i(t-s)\Delta}\frac{u(s)}{|x|}\;ds\Big\|_{L_t^q(I,L_x^{r})}. \end{align*} We use the above inhomogeneous Strichartz estimate to obtain \begin{align*} &\Big\|\int_0^te^{i(t-s)\Delta}\frac{u(s)}{|x|}\;ds\Big\|_{L_t^q(I,L_x^{r})}\leq \Big\|\int_0^te^{i(t-s)\Delta}\frac{u(s)}{|x|}\;ds\Big\|_{L_t^q(I,L_x^{r,2})}\\\leq & C\Big\|\frac{u}{|x|}\Big\|_{L_t^2(I,L_x^{\frac{6}{5},2})} \leq C|I|^\frac12\||x|^{-1}\|_{L_x^{3,\infty}}\|u\|_{L_t^\infty L_x^2}\\ \leq&C(|I|)\|f\|_{L_x^2}, \end{align*} where we use the mass conservation in the last inequality. This proves \eqref{equ:locainstr}. \end{proof} It is natural to ask whether the global-in-time Strichartz estimate holds or not.
The answer is that the global-in-time Strichartz estimate does not hold in the attractive case $K>0$ but does hold in the repulsive case $K\leq 0$.\vspace{0.2cm} To see the attractive case, a simple computation shows $$\Delta(e^{-c|x|})=c^2e^{-c|x|}-\frac{2}{|x|}ce^{-c|x|}.$$ Setting $c_K=K/2$, this implies $$\mathcal{L}_K(e^{-c_K|x|})=\Big(-\Delta-\frac{K}{|x|}\Big)(e^{-c_K|x|})=-\Big(\frac{K}{2}\Big)^2e^{-c_K|x|}.$$ Then, the function $u(t,x)=e^{ic_K^2t}(e^{-c_K|x|})$ with $c_K=\frac{K}{2}$ solves the linear equation $i\partial_tu-\mathcal{L}_Ku=0$ with $u_0(x)=e^{-c_K|x|}\in L^2(\mathbb{R}^3)$ when $K>0$. However, since $\|u(t,\cdot)\|_{L_x^r(\mathbb{R}^3)}=\|e^{-c_K|x|}\|_{L_x^r(\mathbb{R}^3)}$ is a nonzero constant in $t$, \begin{equation} \|u(t,x)\|_{L_t^q(\mathbb{R},L_x^r(\mathbb{R}^3))}=+\infty\quad\text{for every}\quad q<\infty. \end{equation} In the repulsive Coulomb potential case, Mizutani \cite{Miz} recently proved the global-in-time Strichartz estimate; the proof employs several techniques from linear scattering theory, such as the long time parametrix construction of Isozaki-Kitada type \cite{IK}, propagation estimates and local decay estimates. \begin{theorem}[Global-in-time Strichartz estimate, \cite{Miz}]\label{thm:globalstri} For $(q,r),(q_1,r_1)\in\Lambda_0$ and $K<0$, there holds \begin{equation}\label{equ:globainstr} \|e^{it\mathcal{L}_K}f\|_{L_t^q(\mathbb{R},L_x^r)}\leq C\|f\|_{L_x^2}, \end{equation} and \begin{equation}\label{equ:globainstrinh} \Big\|\int_0^t e^{i(t-s)\mathcal{L}_K}F(s)\;ds\Big\|_{L_t^q(\mathbb{R},L_x^r)}\leq C\|F\|_{L_t^{q_1'}(\mathbb{R},L_x^{r_1'})}. \end{equation} \end{theorem} \subsection{Fractional product rule} As mentioned in the introduction, we need the following fractional product rule in the proof of the scattering theory when $K<0$.
The $L^p$-product rule for fractional derivatives in Euclidean spaces, \begin{align*} \| (-\Delta)^{\frac s2}(fg)\|_{L^p(\mathbb{R}^3)} \lesssim &\| (-\Delta)^{\frac s2} f\|_{L^{p_1}(\mathbb{R}^3)}\|g\|_{L^{p_2}(\mathbb{R}^3)}\\ &+\|f\|_{L^{q_1}(\mathbb{R}^3)}\| (-\Delta)^{\frac s2} g\|_{L^{q_2}(\mathbb{R}^3)}, \end{align*} was first proved by Christ and Weinstein \cite{ChW}. Here $1<p, p_1, p_2, q_1, q_2< \infty$, $s\geq0$ and $\frac1p=\frac1{p_1}+\frac1{p_2}=\frac1{q_1}+\frac1{q_2}$. Similarly, we have the following for the operator $\mathcal{L}_K$ with $K<0$. \begin{lemma}[Fractional product rule]\label{L:Leibnitz} Fix $K<0$ and let $\mathcal{L}_K$ be as above. Then for all $f, g\in C_c^{\infty}(\mathbb{R}^3\setminus\{0\})$ we have \begin{align*} \| \sqrt{1+\mathcal{L}_K}(fg)\|_{L^p(\mathbb{R}^3)} \lesssim \| \sqrt{1+\mathcal{L}_K} f\|_{L^{p_1}(\mathbb{R}^3)}\|g\|_{L^{p_2}(\mathbb{R}^3)}+\|f\|_{L^{q_1}(\mathbb{R}^3)}\| \sqrt{1+\mathcal{L}_K} g\|_{L^{q_2}(\mathbb{R}^3)}, \end{align*} for any exponents satisfying $1< p, p_1, q_2< 3$, $1<p_2, q_1 <\infty$ and $\frac1p=\frac1{p_1}+\frac1{p_2}=\frac1{q_1}+\frac1{q_2}$. \end{lemma} This is a consequence of the equivalence of Sobolev norms $$ \big\| f\big\|_{L^p(\mathbb{R}^3)}+\big\|\nabla f\big\|_{L^p(\mathbb{R}^3)} \sim \big\| \sqrt{1+\mathcal{L}_K} f\big\|_{L^p(\mathbb{R}^3)},\quad 1< p < 3, $$ which will be proved in the next subsection. \subsection{Sobolev space equivalence} In this subsection, we study the relationship between the Sobolev spaces adapted to the Laplacian perturbed by the Coulomb potential and those defined via the classical Laplacian; that is, we show that for suitable $s$ and $p$, \begin{equation}\label{equ:soalarge} \big\|\langle \mathcal{L}_K\rangle^\frac{s}{2} f\big\|_{L^p(\mathbb{R}^3)}\simeq \big\|\langle\nabla\rangle^\frac{s}{2} f\big\|_{L^p(\mathbb{R}^3)}, \end{equation} where $\langle a\rangle=(1+|a|^2)^{1/2}$.
To this end, we recall the heat kernel estimate. \begin{lemma}[Heat kernel]\label{lem:heat} Let $K<0$ and let $\mathcal{L}_K$ be as above. Then there exist constants $C,c>0$ such that \begin{equation}\label{equ:heatk} 0\leq e^{-t\mathcal{L}_K}(x,y)\leq Ct^{-3/2}e^{-\frac{|x-y|^2}{ct}}. \end{equation} \end{lemma} \begin{proof} Since $K<0$, we have $\mathcal{L}_K=-\Delta+V(x)$ with a positive potential $V=-K|x|^{-1}$. It is easy to verify that $V\in L^2_{\text{loc}}(\mathbb{R}^3)$. It is well known that \eqref{equ:heatk} holds in this setting; see, e.g., \cite{Kurata}. Indeed, one can use the estimate of the fundamental solution of the elliptic operator $\mathcal{L}_K+\lambda$ with non-negative parameter $\lambda$ in Shen \cite{Shen} to obtain the heat kernel estimate. \end{proof} \begin{lemma}[Sobolev norm equivalence]\label{lem:sobnorequ} Let $K<0$, $1<p<3$ and $0\leq s\leq2.$ There holds \begin{equation}\label{equ:sobequi} \big\|(1+\mathcal{L}_K)^\frac{s}2f\big\|_{L^p(\mathbb{R}^3)}\simeq \big\| (1-\Delta)^\frac{s}2f\big\|_{L^p(\mathbb{R}^3)}. \end{equation} \end{lemma} \begin{proof} The proof is classical and follows from the heat kernel estimate and Stein's complex interpolation. We refer to Y. Hong \cite{Hong} or the authors \cite{ZZ}, but we give a complete proof for the reader's convenience. First, we consider $s=2$. Using the Hardy inequality \cite[Lemma 2.6]{ZZ} with $p<3$, we obtain \begin{align*} \big\|(1+\mathcal{L}_K)f\big\|_{L^p} \leq& \big\| (1-\Delta)f\big\|_{L^p}+|K|\big\|\tfrac{f}{|x|}\big\|_{L^p} \\ \lesssim & \big\| (1-\Delta)f\big\|_{L^p}+\|\nabla f\|_{L^p}\\ \lesssim& \big\| (1-\Delta)f\big\|_{L^p}. \end{align*} By Lemma \ref{lem:heat}, we see that the heat kernel operator $e^{-t(1+\mathcal{L}_K)}$ obeys the Gaussian heat kernel estimate.
Hence we easily get Hardy's inequality for $p<3$: $$\big\|\tfrac{f}{|x|}\big\|_{L^p}\lesssim\big\|\sqrt{1+\mathcal{L}_K}f\big\|_{L^p}.$$ Hence, \begin{align*} \big\|(1-\Delta)f\big\|_{L^p} \leq& \big\| (1+\mathcal{L}_K)f\big\|_{L^p}+|K|\big\|\tfrac{f}{|x|}\big\|_{L^p}\\ \lesssim&\big\| (1+\mathcal{L}_K)f\big\|_{L^p}+\big\| \sqrt{1+\mathcal{L}_K}f\big\|_{L^p}\\ \lesssim&\big\| (1+\mathcal{L}_K)f\big\|_{L^p}. \end{align*} This implies \eqref{equ:sobequi} with $s=2$. Next, since the heat kernel operator $e^{-t(1+\mathcal{L}_K)}$ obeys the Gaussian heat kernel estimate, we have by Sikora-Wright \cite{SW} $$\big\|(1-\Delta)^{ib}f\big\|_{L^p}+\big\|(1+\mathcal{L}_K)^{ib}f\big\|_{L^p}\lesssim\langle b\rangle^\frac32\|f\|_{L^p},\quad \forall~b\in\mathbb{R},\;\forall~1<p<+\infty.$$ Let $z=a+ib$ and define $$T_z=(1+\mathcal{L}_K)^z(1-\Delta)^{-z},\quad G_z=(1-\Delta)^z(1+\mathcal{L}_K)^{-z}.$$ Then we have that for $1<p<3$ $$\|T_{1+ib}\|_{L^p\to L^p}\lesssim \langle b\rangle^3\|(1+\mathcal{L}_K)(1-\Delta)^{-1}\|_{L^p\to L^p}\leq C\langle b\rangle^3.$$ This shows that \begin{align*} \big\|(1-\Delta)^zf\big\|_{L^p}\lesssim &\langle {\rm Im}\, z\rangle^3\big\|(1+\mathcal{L}_K)^zf\big\|_{L^p}, \\ \big\|(1+\mathcal{L}_K)^zf\big\|_{L^p}\lesssim &\langle {\rm Im}\, z\rangle^3\big\|(1-\Delta)^zf\big\|_{L^p} \end{align*} hold for $1<p<+\infty$ when ${\rm Re}\,z=0$ and for $1<p<3$ when ${\rm Re}\,z=1$. Therefore, \eqref{equ:sobequi} follows by Stein's complex interpolation. \end{proof} \section{Global well-posedness} In this section, we prove the local and global well-posedness for equation \eqref{equ1.1}. In this part, we only use the classical Strichartz estimate for the Schr\"odinger equation without potential, $i\partial_tu+\Delta u=0$, on Lorentz spaces.
In the energy-subcritical case (i.e., $p-1<4$), the global well-posedness will follow from the local well-posedness theory and the uniform kinetic energy control \begin{equation}\label{equ:uniforkinetic123} \sup_{t\in I}\|u(t)\|_{\dot{H}^1(\mathbb{R}^3)}\leq C(E(u_0),M(u_0)). \end{equation} In the energy-critical case ($p-1=4$), we prove the global well-posedness by using a perturbation argument and the well-known scattering theory for the Schr\"odinger equation without potential in \cite{CKSTT07, KM}. \subsection{Local well-posedness for energy-subcritical: $s_c<1$} \begin{theorem}[Local well-posedness, energy-subcritical]\label{thm:local} Let $K\in\mathbb{R}$, $0<p-1<4$ and $u_0\in H^1(\mathbb{R}^3)$. Then there exists $T=T(\|u_0\|_{H^1})>0$ such that the equation \eqref{equ1.1} with initial data $u_0$ has a unique solution $u$ with \begin{equation}\label{small} u\in C(I; H^1(\mathbb{R}^3))\cap L_t^{q_0}(I,W^{1,r_0}(\mathbb{R}^3)),\quad I=[0,T], \end{equation} where $(q_0,r_0)=\big(\tfrac{4(p+1)}{3(p-1)},p+1\big)\in\Lambda_0.$ \end{theorem} \begin{proof} Define the map \begin{equation} \Phi(u(t)):=e^{it\Delta}u_0+i\int_0^t e^{i(t-s)\Delta}\Big(\frac{K}{|x|}u-\lambda|u|^{p-1}u\Big)(s)\;ds, \end{equation} and, with $I=[0,T]$, the set $$B(I)=\big\{u\in Y(I)=C(I,H^1(\mathbb{R}^3))\cap L_t^{q_0}(I,W^{1,r_0}),\; \|u\|_{Y(I)}\leq 2C\|u_0\|_{H^1}\big\},$$ equipped with the metric $d(u,v)=\|u-v\|_{L_t^{q_0}(I,L_x^{r_0})\cap L_t^\infty(I,L_x^2)}.$ For $u\in B(I)$, we have by the Strichartz estimate \eqref{equ:edstrlor} \begin{align*} \big\|\Phi(u)\big\|_{Y(I)}\leq& C\|u_0\|_{H^1}+C\Big\|\langle \nabla\rangle \Big(\tfrac{K}{|x|}u\Big)\Big\|_{L_t^2 L_x^{\frac65,2}}+C\big\|\langle \nabla\rangle (|u|^{p-1}u)\big\|_{L_t^{q_0'} L_x^{r_0'}(I\times\mathbb{R}^3)}\\ \leq&C\|u_0\|_{H^1}+C_1T^\frac12\|u\|_{L_t^\infty H^1}+C_1T^{1-\frac{2}{q_0}} \|u\|_{L_t^\infty(I, L_x^{r_0})}^{p-1}\|u\|_{L_t^{q_0}(I,W^{1,r_0})}\\ \leq&C\|u_0\|_{H^1}+C_1T^\frac12\|u\|_{L_t^\infty H^1}+C_1T^{1-\frac{2}{q_0}}\|u\|_{Y(I)}^p\\
\leq&C\|u_0\|_{H^1}+2CC_1T^\frac12\|u_0\|_{H^1}+2CC_1T^{\frac{5-p}{2(p+1)}}\|u_0\|_{H^1}(2C\|u_0\|_{H^1})^{p-1}\\ \leq&2C\|u_0\|_{H^1} \end{align*} by taking $T$ small such that $$2C_1T^\frac12+ 2C_1T^{\frac{5-p}{2(p+1)}}(2C\|u_0\|_{H^1})^{p-1}\leq1.$$ On the other hand, for $u,v\in B(I)$, we get by Strichartz estimate \begin{align*} d\big(\Phi(u),\Phi(v)\big)=&\Big\|\int_0^te^{i(t-s)\Delta}\big[\frac{K}{|x|}(u-v)-(|u|^{p-1}u-|v|^{p-1}v)\big](s)\;ds\Big\|_{L_t^{q_0}(I,L_x^{r_0})}\\ \leq&C\Big\|\frac{u-v}{|x|}\Big\|_{L_t^2 L_x^{\frac65,2}}+C\big\||u|^{p-1}u-|v|^{p-1}v\big\|_{L_t^{q_0'}(I,L_x^{r_0'})}\\ \leq& CT^\frac12\|u-v\|_{L_t^\infty L_x^2}+CT^{\frac{5-p}{2(p+1)}}\|u-v\|_{L_t^{q_0}(I,L_x^{r_0})}\big\|(u,v)\big\|_{L_t^\infty(I,H^1)}^{p-1}\\ \leq&\frac12d(u,v) \end{align*} by taking $T$ small such that $$CT^\frac12+4CT^{\frac{5-p}{2(p+1)}}(2C\|u_0\|_{H^1})^{p-1}\leq\frac12.$$ A standard fixed point argument gives a unique local solution $u:[0,T]\times\mathbb{R}^3\to\mathbb{C}$ to \eqref{equ1.1}. \end{proof} \subsection{Global well-posedness for energy-subcritical: $s_c<1$}\label{sub:global} By the local well-posedness theory and mass conservation, the global well-posedness will follow from the uniform kinetic energy control \begin{equation}\label{equ:uniforkinetic} \sup_{t\in I}\|u(t)\|_{\dot{H}^1(\mathbb{R}^3)}\leq C(E(u_0),M(u_0)). \end{equation} We argue the following several cases.\vspace{0.2cm} {\bf Case 1: the defocusing case, i.e. $\lambda=1$.} In the defocusing case, we have the uniform bound \begin{equation}\label{equ:unibode} \|u(t,\cdot)\|_{H^1_x(\mathbb{R}^3)}\leq C(M(u_0),E(u_0)). 
\end{equation} In fact, we have by Hardy's inequality and Young's inequality \begin{equation}\label{equ:hardyi} \int_{\mathbb{R}^3}\frac{|u|^2}{|x|}\;dx\leq C\|u\|_{\dot{H}^\frac12}^2\leq C\|u\|_{L^2_x}\|u\|_{\dot{H}^1}\leq \frac{1}{2|K|}\|u\|_{\dot{H}^1}^2+2C^2|K|\cdot\|u\|_{L^2_x}^2, \end{equation} which implies \begin{align*} E(u_0)=E(u)\geq&\frac14\int_{\mathbb{R}^3}|\nabla u(t)|^2\;dx-C^2|K| M(u_0) \end{align*} and hence $$\|u(t)\|_{H^1}^2\leq C_1M(u_0)+4E(u_0).$$ Therefore we can extend the local solution to a global one.\vspace{0.2cm} {\bf Case 2: $\lambda=-1, 0<p-1<\frac43$.} In this case, we have by the Gagliardo-Nirenberg inequality and Young's inequality $$\|u\|_{L_x^{p+1}}^{p+1}\leq C\|u\|_{L_x^2}^\frac{5-p}{2}\|u\|_{\dot{H}^1}^\frac{3(p-1)}{2}\leq C_1M(u_0)^\frac{5-p}{7-3p}+\frac{p+1}8 \|u\|_{\dot{H}^1}^2.$$ This together with \eqref{equ:hardyi} implies \begin{align*} E(u_0)=E(u)\geq&\frac18\int_{\mathbb{R}^3}|\nabla u(t)|^2\;dx-C^2|K| M(u_0)-\frac{C_1}{p+1}M(u_0)^\frac{5-p}{7-3p}, \end{align*} and so $$\|u(t)\|_{H^1}^2\leq C_1M(u_0)+8E(u_0).$$ Thus we can obtain the global existence by extending the local solution.\vspace{0.2cm} {\bf Case 3: $\lambda=-1,\;p=\frac73,\;M(u_0)<M(Q).$} Consider the mass-critical equation $$i\partial_tu+\Delta u+\frac{K}{|x|}u+|u|^\frac{4}{3}u=0.$$ From \eqref{equ:hardyi}, we obtain $$\Big|\frac{K}2\int_{\mathbb{R}^3}\frac{|u|^2}{|x|}\;dx\Big|\leq \frac{\varepsilon}{2}\|\nabla u\|_{L^2}^2+\frac{C^2|K|}{\varepsilon}M(u_0).$$ On the other hand, we have by the sharp Gagliardo-Nirenberg inequality $$\frac{3}{10}\int_{\mathbb{R}^3}|u|^\frac{10}{3}\;dx\leq\frac12\Big(\frac{\|u\|_{L^2}}{\|Q\|_{L^2}}\Big)^\frac{4}{3}\|\nabla u\|_{L^2}^2.$$ Hence, choosing $0<\varepsilon\leq\frac12\Big(1-\Big(\frac{\|u_0\|_{L^2}}{\|Q\|_{L^2}}\Big)^{\frac43}\Big)$, which is positive thanks to the assumption $M(u_0)<M(Q)$ and the conservation of mass, \begin{align*} E(u_0)\geq& \frac12\|\nabla u(t,\cdot)\|_{L^2}^2\Big(1-\varepsilon-\Big(\frac{\|u\|_{L^2}}{\|Q\|_{L^2}}\Big)^\frac{4}{3}\Big)-\frac{C^2|K|}{\varepsilon}M(u_0)\\ \geq&\frac14\|\nabla
u(t,\cdot)\|_{L^2}^2\Big(1-\Big(\frac{\|u\|_{L^2}}{\|Q\|_{L^2}}\Big)^\frac{4}{3}\Big)-\frac{C^2|K|}{\varepsilon}M(u_0). \end{align*} This shows $$\|u(t,\cdot)\|_{L_t^\infty H^1_x}\leq C(M(u_0),E(u_0)).$$\vspace{0.2cm} {\bf Case 4: $\lambda=-1,\; K<0,\; \frac43<p-1<4.$} In this case, we assume that $$ M(u_0)^{1-s_c}E(u_0)^{s_c}<M(Q)^{1-s_c}E_0(Q)^{s_c},\; \|u_0\|_{L^2}^{1-s_c}\|u_0\|_{\dot{H}^1}^{s_c}<\|Q\|_{L^2}^{1-s_c}\|Q\|_{\dot{H}^1}^{s_c}.$$ Then, there exists $\delta>0$ such that $$M(u_0)^{1-s_c}E(u_0)^{s_c}\leq(1-\delta)M(Q)^{1-s_c}E_0(Q)^{s_c}.$$ By the sharp Gagliardo-Nirenberg inequality, we have \begin{equation}\label{equ:gnineq} \|f\|_{L_x^{p+1}}^{p+1}\leq C_0\|f\|_{L_x^2}^{\frac{5-p}2} \|f\|_{\dot H^1}^{\frac{3(p-1)}2}, \end{equation} with the sharp constant \begin{equation}\label{equ:qah1} C_0\|Q\|_{L^2}^{(1-s_c)(p-1)}\|Q\|_{\dot{H}^1}^{s_c(p-1)}=\frac{2(p+1)}{3(p-1)}. \end{equation} This shows for $K<0$ \begin{align*} (1-\delta)M(Q)^{1-s_c}E_0(Q)^{s_c} \geq& M(u)^{1-s_c} E(u)^{s_c}\\ \geq& \|u(t)\|_{L_x^2}^{2(1-s_c)}\Big(\frac12\|u(t)\|_{\dot H^1}^2-\frac{C_0}{p+1}\|u(t)\|_{L_x^2}^{\frac{5-p}2} \|u(t)\|_{\dot H^1}^{\frac{3(p-1)}2}\Big)^{s_c} \end{align*} for any $t\in I$. This together with \begin{equation}\label{equ:idenenery} E_0(Q)=\frac{3p-7}{6(p-1)}\|Q\|_{\dot{H}^1}^2=\frac{3p-7}{4(p+1)}\|Q\|_{L_x^{p+1}}^{p+1}, \end{equation} implies that \[ (1-\delta)^\frac1{s_c}\geq \frac{3(p-1)}{3p-7}\biggl(\frac{\|u(t)\|_{L_x^2}^{1-s_c}\|u(t)\|_{\dot H^1}^{s_c}}{\|Q\|_{L_x^2}^{1-s_c} \|Q\|_{\dot H^1}^{s_c}}\biggr)^\frac2{s_c} - \frac2{3p-7}\biggl(\frac{\|u(t)\|_{L_x^2}^{1-s_c} \|u(t)\|_{\dot H^1}^{s_c}}{\|Q\|_{L_x^2}^{1-s_c} \|Q\|_{\dot H^1}^{s_c}}\biggr)^{\frac2{s_c}(p-1)}. 
\] Using a continuity argument, together with the observation that \[ (1-\delta)^\frac1{s_c} \geq \frac{3(p-1)}{3p-7}y^\frac2{s_c} - \frac2{3p-7}y^{\frac2{s_c}(p-1)} \Rightarrow |y-1|\geq \delta' \quad \text{for some}\quad \delta'=\delta'(\delta)>0, \] we obtain \begin{equation}\label{equ:k0negt} \|u(t)\|_{L^2}^{1-s_c}\|u(t)\|_{\dot{H}^1}^{s_c}<\|Q\|_{L^2}^{1-s_c}\|Q\|_{\dot{H}^1}^{s_c},\quad \forall~t\in I. \end{equation} In sum, we obtain the uniform kinetic energy control on the maximal lifespan. Therefore, we conclude the proof of Theorem \ref{thm:global}. \begin{remark}\label{rem:tde} $(i)$ For $K<0$ and $\lambda=-1$, we remark that under the assumption $M(u_0)^{1-s_c}E(u_0)^{s_c}\leq(1-\delta)M(Q)^{1-s_c}E_0(Q)^{s_c}$ for some $\delta>0$, the condition \begin{equation}\label{equ:weakcond} \|u_0\|_{L^2}^{1-s_c}\|u_0\|_{\dot{H}^1}^{s_c}<\|Q\|_{L^2}^{1-s_c}\|Q\|_{\dot{H}^1}^{s_c} \end{equation} is equivalent to \begin{equation}\label{equ:strocond} \|u_0\|_{L^2}^{1-s_c}\Big(\|u_0\|_{\dot{H}^1}^2-K\big\||x|^{-\frac12}u_0\big\|_{L^2}^2\Big)^{\frac{s_c}2}<\|Q\|_{L^2}^{1-s_c}\|Q\|_{\dot{H}^1}^{s_c}. \end{equation} We take $s_c=\tfrac12$ for example. In this case, we have $p=3,$ and the ground state $Q$ solves $$-\Delta Q+Q=Q^3.$$ A simple computation shows that \begin{equation}\label{equ:energ} E_0(Q)=\frac16\|Q\|_{\dot{H}^1}^2=\frac18\|Q\|_{L^4}^4=\frac12\|Q\|_{L^2}^2 \end{equation} and \begin{equation}\label{equ:c0} C_0:=\frac{\|Q\|_{L^4}^4}{\|Q\|_{L^2}\|Q\|_{\dot{H}^1}^3}=\frac43\frac{1}{\|Q\|_{L^2}\|Q\|_{\dot{H}^1}}. \end{equation} Since $K<0$, it is easy to get \eqref{equ:weakcond} from \eqref{equ:strocond}. Now, we assume \eqref{equ:weakcond}.
By the sharp Gagliardo-Nirenberg inequality $$\|u\|_{L^4}^4\leq C_0\|u\|_{L^2}\|u\|_{\dot{H}^1}^3$$ and using \eqref{equ:c0}, we obtain \begin{align*} M(u_0)E(u_0)=&\frac12\|u_0\|_{L^2}^2\Big( \|u_0\|_{\dot{H}^1}^2-K\big\||x|^{-\frac12}u_0\big\|_{L^2}^2\Big)-\frac14\|u_0\|_{L^2}^2\|u_0\|_{L^4}^4\\ \geq&\frac12\|u_0\|_{L^2}^2\Big( \|u_0\|_{\dot{H}^1}^2-K\big\||x|^{-\frac12}u_0\big\|_{L^2}^2\Big)-\frac{C_0}4\|u_0\|_{L^2}^3\|u_0\|_{\dot{H}^1}^3\\ \geq&\frac12\|u_0\|_{L^2}^2\Big( \|u_0\|_{\dot{H}^1}^2-K\big\||x|^{-\frac12}u_0\big\|_{L^2}^2\Big)-\frac{C_0}4\|Q\|_{L^2}^3\|Q\|_{\dot{H}^1}^3\\ =&\frac12\|u_0\|_{L^2}^2\Big( \|u_0\|_{\dot{H}^1}^2-K\big\||x|^{-\frac12}u_0\big\|_{L^2}^2\Big)-\frac13\|Q\|_{L^2}^2\|Q\|_{\dot{H}^1}^2. \end{align*} This together with the assumption $M(u_0)E(u_0)\leq(1-\delta)M(Q)E_0(Q)$ and \eqref{equ:energ} yields that \begin{align*} \frac12\|u_0\|_{L^2}^2\Big( \|u_0\|_{\dot{H}^1}^2-K\big\||x|^{-\frac12}u_0\big\|_{L^2}^2\Big)\leq & M(u_0)E(u_0)+\frac13\|Q\|_{L^2}^2\|Q\|_{\dot{H}^1}^2\\ \leq&(1-\delta)M(Q)E_0(Q)+\frac13\|Q\|_{L^2}^2\|Q\|_{\dot{H}^1}^2\\ =&\frac{3-\delta}{6}\|Q\|_{L^2}^2\|Q\|_{\dot{H}^1}^2. \end{align*} Hence $$\|u_0\|_{L^2}^2\Big( \|u_0\|_{\dot{H}^1}^2-K\big\||x|^{-\frac12}u_0\big\|_{L^2}^2\Big)<\|Q\|_{L^2}^2\|Q\|_{\dot{H}^1}^2.$$ $(ii)$ By the same argument as in $(i)$, for $K<0$, $\lambda=-1$ and $p=5$, under the assumption $E(u_0)<E_0(W),$ the condition \begin{equation}\label{equ:weakcondener} \|u_0\|_{\dot{H}^1}<\|W\|_{\dot{H}^1} \end{equation} is equivalent to \begin{equation}\label{equ:strocondener} \|u_0\|_{\dot{H}^1}^2-K\big\||x|^{-\frac12}u_0\big\|_{L^2}^2<\|W\|_{\dot{H}^1}^2. \end{equation} \end{remark} \subsection{Global well-posedness for energy-critical: $s_c=1$ and $K<0$} We will show the global well-posedness by controlling the global kinetic energy and proving ``good local well-posedness'' as in Zhang \cite{Zhang}.
More precisely, we will show that there exists a small constant $T=T(\|u_{0}\|_{H^{1}_{x}})$ such that \eqref{equ1.1} is well-posed on $[0,T]$, which is the so-called ``good local well-posedness''. On the other hand, since the equation in \eqref{equ1.1} is time translation invariant, this ``good local well-posedness'', combined with the global kinetic energy control, immediately gives the global well-posedness. {\bf Step 1: global kinetic energy control.} For the defocusing case ($\lambda=1$), it follows from Case 1 in Subsection \ref{sub:global} that $$\sup_{t\in I}\|u(t,\cdot)\|_{H^1}^2\leq C_1M(u_0)+4E(u_0).$$ While for the focusing case $(\lambda=-1)$ and $K<0$, under the restriction \begin{equation}\label{equ:energythre123} E(u_0)<E_0(W),\; \|u_0\|_{\dot{H}^1}<\|W\|_{\dot{H}^1}, \end{equation} we easily obtain (noting that for $K<0$ one has $E(u_0)=E_0(u_0)+\frac{|K|}{2}\int_{\mathbb{R}^3}\frac{|u_0|^2}{|x|}\;dx\geq E_0(u_0)$) \begin{equation}\label{equ:energythre321} E_0(u_0)<E_0(W),\; \|u_0\|_{\dot{H}^1}<\|W\|_{\dot{H}^1}. \end{equation} Hence, we have by coercivity as in \cite{KM} \begin{equation}\label{equ:uniformkneg} \sup_{t\in I}\|u(t)\|_{\dot{H}^1}^2\leq cE_0(u(t))\leq cE(u_0)<cE_0(W). \end{equation} This gives the global kinetic energy bound. {\bf Step 2: good local well-posedness.} To obtain it, we first introduce several spaces and give estimates of the nonlinearities in terms of these spaces. For a time slab $I\subset \mathbb{R}$, we define $$\dot{X}^0_I:=L_{t,x}^\frac{10}{3}\cap L_t^{10}L_x^\frac{30}{13}(I\times\mathbb{R}^3),\; \dot{X}^1_I:=\{f:\nabla f\in\dot{X}^0_I\},\; X_I^1=\dot{X}^0_I\cap\dot{X}^1_I.$$ Then, we have by H\"older's inequality and Sobolev embedding \begin{equation}\label{equ:nonlinest} \big\|\nabla^i\big(u^kv^{5-k}\big)\big\|_{L_{t,x}^{\frac{10}{7}}(I\times\mathbb{R}^3)}\lesssim \big(\|u\|_{\dot{X}^1_I}+\|v\|_{\dot{X}^1_I}\big)^{4}\big(\|u\|_{\dot{X}^i_I}+\|v\|_{\dot{X}^i_I}\big), \end{equation} for $i=0,1$ and $0\leq k\leq5$, and \begin{equation}\label{equ:harsob} \big\|\langle\nabla\rangle\big(\tfrac{u}{|x|}\big)\big\|_{L_t^2(I;L_x^{\frac{6}{5},2})}\leq C|I|^\frac12\|u\|_{L_t^\infty(I;H^1_x)}.
\end{equation} Now, it follows from \cite{CKSTT07} for the defocusing case ($\lambda=1$) and \cite{KM} for the focusing case $(\lambda=-1)$ under the assumption \eqref{equ:energythre321} and $u_0$ radial that the Cauchy problem \begin{align}\label{aequ2} \begin{cases} i\partial_tv+\Delta v=\lambda|v|^4v,\quad (t,x)\in \mathbb{R}\times\mathbb{R}^3,\\ v(0)=u_0, \end{cases} \end{align} is globally well-posed and the global solution $v$ satisfies the estimate \begin{align}\label{equ31} \|v\|_{L^{q}(\mathbb{R};\dot{W}^{1,r}_x)}\leqslant C(\|u_{0}\|_{\dot{H}^{1}}),\quad \|v\|_{L^{q}(\mathbb{R};L^r)}\leq C(\|u_{0}\|_{\dot H^{1}})\|u_0\|_{L^2} \end{align} for all $(q,r)\in\Lambda_0$. So to recover $u$ on the time interval $[0,T]$, where $T$ is a small constant to be specified later, it suffices to solve the difference equation for $\omega$ with zero initial data on the time interval $[0,T]$: \begin{align}\label{equ32} \begin{cases} i\omega_t+\Delta \omega=-\tfrac{K}{|x|}(v+\omega)-\lambda|v+\omega|^4(v+\omega)+\lambda|v|^4v,\\ \omega(0)=0. \end{cases} \end{align} In order to solve \eqref{equ32}, we subdivide $[0,T]$ into finitely many subintervals such that on each subinterval, the influence of $v$ on the problem \eqref{equ32} is very small. Let $\epsilon$ be a small constant to be chosen later. By \eqref{equ31}, we may divide $\mathbb{R}$ into subintervals $I_{0},\ldots, I_{J-1}$ such that on each $I_{j}$, \begin{align*} \|v\|_{X^1(I_{j})}\thicksim \epsilon,\quad 0\leq j\leq J-1\quad \text{with}~J\leq C(\|u_{0}\|_{H^1},\epsilon). \end{align*} So without loss of generality and renaming the intervals if necessary, we can write \begin{align*} [0,T]=\bigcup\limits_{j=0}^{J^\prime-1}I_{j},\quad I_{j}=[t_{j},t_{j+1}] \end{align*} with $J^{\prime}\leqslant J$ and on each $I_{j}$\begin{equation}\label{ad1}\|v\|_{X^1(I_j)}\lesssim\epsilon.\end{equation} Now we begin to solve the difference equation \eqref{equ32} on each $I_{j}$ by an inductive argument.
More precisely, we show that for each $0\leqslant j\leqslant J^\prime-1$, there exists a unique solution $\omega$ to \eqref{equ32} on $I_{j}$ such that \begin{align}\label{equ33} \|\omega\|_{X^1(I_j)}+\|\omega\|_{L^{\infty}(I_{j};H^{1})}\leq (2C)^{j}T^{\frac{1}{4}}. \end{align} We argue by induction. Assume that \eqref{equ32} has been solved up to $I_{j-1}$ and that the solution $\omega$ satisfies the bound \eqref{equ33} up to $j-1$; it then suffices to derive the bound for $\omega$ on $I_{j}$. Define the solution map \begin{align*} \Phi(\omega(t))=e^{i(t-t_{j})\Delta}\omega(t_j)+i\int_{t_j}^{t}e^{i(t-s)\Delta} \Big(\tfrac{K}{|x|}(v+\omega)+\lambda|v+\omega|^4(v+\omega)-\lambda|v|^4v\Big)(s)ds \end{align*} and the set \begin{align*} B=\{\omega:\|\omega\|_{L^\infty(I_{j};H^1)}+\|\omega\|_{X^1(I_j)}\leq (2C)^jT^{\frac{1}{4}}\}, \end{align*} where the norm $\|\cdot\|_B$ is the one appearing in the definition of $B$. Then it suffices to show that $\Phi$ maps $B$ into itself and is a contraction in the weaker norm $\dot{X}^0(I_j)\cap L_t^\infty(I_j,L_x^2)$. Actually, it follows from the Strichartz estimate on Lorentz spaces and \eqref{equ:nonlinest}, \eqref{equ:harsob} that \begin{align*} \|\Phi(\omega)\|_{B}\lesssim& \|\omega(t_j)\|_{H^1}+\big\||v+\omega|^4(v+\omega)-|v|^4v\big\|_{L^{\frac{10}{7}}(I_j;W^{1,\frac{10}{7}}_x)} +\Big\|\langle \nabla\rangle \Big(\tfrac{K}{|x|}(v+\omega)\Big)\Big\|_{L^2(I_j;L^{\frac{6}{5},2})}\\ \lesssim&\|\omega(t_j)\|_{H^1}+\sum\limits_{i=0}^{4}\|v\|_{X^1(I_j)}^i \|\omega\|_{X^1(I_j)}^{5-i}+T^{\frac{1}{2}}\|v+\omega\|_{L_t^\infty(I_j; H^1_x)}. \end{align*} Thus, \eqref{equ31} and \eqref{ad1} give \begin{align*} \|\Phi(\omega)\|_{B}&\leq C\Big( \|\omega(t_j)\|_{H^1}+\sum\limits_{i=0}^{4}\epsilon^i\|\omega\|_{X^1(I_j)}^{5-i}+CT^{\frac{1}{2}}+T^{\frac{1}{2}} \|\omega\|_{L_t^\infty (I_j;H^1_x)}\Big).
\end{align*} Plugging in the inductive assumption $\|\omega(t_j)\|_{H^1}\leq (2C)^{j-1}T^{\frac{1}{4}}$, we see that for $\omega\in B$, \begin{align}\label{equ36} \|\Phi(\omega)\|_{B} \leq& C\big[(2C)^{j-1} +\epsilon^4(2C)^j+CT^\frac14+(2C)^{j}T^{\frac{3}{4}}\big]T^{\frac{1}{4}}\\\label{equ37} &+C\sum\limits_{i=0}^3((2C)^jT^{\frac{1}{4}})^{5-i}\epsilon^i. \end{align} Thus we can choose $\epsilon$ and $T$ small depending only on the Strichartz constant such that \begin{align*} \eqref{equ36}\leq\frac{3}{4}(2C)^jT^{\frac{1}{4}}. \end{align*} Fixing this $\epsilon$, since \eqref{equ37} is of higher order in the quantity $T^{\frac{1}{4}}$, we have \begin{align*} \eqref{equ37}\leq\frac{1}{4}(2C)^jT^{\frac{1}{4}}, \end{align*} which holds by choosing $T$ small enough. Of course $T$ will depend on $j$; however, since $j\leqslant J^\prime-1\leq C(\|u_{0}\|_{H^1},\epsilon)$, we can choose $T$ to be a small constant depending only on $\|u_{0}\|_{H^1}$ and $\epsilon$, which is therefore uniform throughout the induction. Hence \begin{align*} \|\Phi(\omega)\|_{B}&\leq (2C)^jT^{\frac{1}{4}}.
\end{align*} On the other hand, by a similar argument as before, we have, for $\omega_1, \omega_2\in B$, \begin{align*} &\|\Phi(\omega_{1})-\Phi(\omega_2)\|_{\dot{X}^0{(I_j)}\cap L^\infty_t(I_j,L_x^2)} \\ \leq&C\big\|\tfrac{\omega_1-\omega_2}{|x|}\big\|_{L_t^2(I_j;L_x^{\frac{6}{5}})}+C\big\||v+\omega_1|^4(v+\omega_1)-|v+\omega_2|^4(v+\omega_2)\big\|_{ L_{t,x}^\frac{10}{7}(I_j\times\mathbb{R}^3)}\\ \leq&CT^\frac12\|\omega_1-\omega_2\|_{L^\infty_t(I_j,L_x^2)}+C\|\omega_1-\omega_2\|_{\dot{X}^0{(I_j)}}\big( \|v\|_{\dot{X}^1(I_j)}^4+\|\omega_1\|_{\dot{X}^1(I_j)}^4+\|\omega_2\|_{\dot{X}^1(I_j)}^4\big)\\ \leq&\|\omega_1-\omega_2\|_{\dot{X}^0{(I_j)}\cap L^\infty_t(I_j,L_x^2)}\big(CT^\frac12+\epsilon^4+2(2C)^jT^{\frac{1}{4}}\big), \end{align*} which allows us to derive \begin{equation*} \|\Phi(\omega_{1})-\Phi(\omega_2)\|_{\dot{X}^0{(I_j)}\cap L^\infty_t(I_j,L_x^2)}\leq\frac{1}{2}\|\omega_1-\omega_2\|_{\dot{X}^0{(I_j)}\cap L^\infty_t(I_j,L_x^2)}, \end{equation*} by taking $\epsilon,T$ small such that \begin{equation*} CT^\frac12+\epsilon^4+2(2C)^jT^{\frac{1}{4}}\leq\frac{1}{4}. \end{equation*} A standard fixed point argument gives a unique solution $\omega$ of \eqref{equ32} on $I_j$ which satisfies the bound \eqref{equ33}. Finally, we get a unique solution of \eqref{equ32} on $[0,T]$ such that \begin{align*} \|\omega\|_{X^1([0,T])}\leq\sum\limits_{j=0}^{J^\prime-1}\|\omega\|_{X^1(I_j)} \leq\sum\limits_{j=0}^{J^\prime-1}(2C)^jT^{\frac{1}{4}}\leq C(2C)^JT^{\frac{1}{4}}\leq C. \end{align*} Since on $[0,T]$, $u=v+\omega$, we obtain a unique solution to \eqref{equ1.1} on $[0,T]$ such that \begin{align*} \|u\|_{X^1([0,T])}\leq \|\omega\|_{X^1([0,T])}+\|v\|_{X^1([0,T])}\leq C(\|u_{0}\|_{H^1}). \end{align*} As we mentioned before, this ``good local well-posedness'', combined with the ``global kinetic energy control'' of Step 1, finally gives the global well-posedness. However, since the solution is constructed interval by interval, it does not come with a global space-time bound.
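We close this subsection by recording the elementary facts behind the spaces $\dot{X}^0_I$ and $\dot{X}^1_I$ used above; they are standard, and we state them only for the reader's convenience. The two exponent pairs entering $\dot{X}^0_I$ are Strichartz-admissible: for $(q,r)=\big(\tfrac{10}{3},\tfrac{10}{3}\big)$ and $(q,r)=\big(10,\tfrac{30}{13}\big)$ one checks directly that $\tfrac{2}{q}=3\big(\tfrac12-\tfrac1r\big)$. Moreover, by the Sobolev embedding $\dot{W}^{1,\frac{30}{13}}(\mathbb{R}^3)\hookrightarrow L^{10}(\mathbb{R}^3)$ (note $\tfrac{13}{30}-\tfrac13=\tfrac1{10}$), $$\|u\|_{L_{t,x}^{10}(I\times\mathbb{R}^3)}\lesssim \big\|\nabla u\big\|_{L_t^{10}(I;L_x^{\frac{30}{13}})}\leq \|u\|_{\dot{X}^1_I},$$ which is the estimate behind the H\"older step in \eqref{equ:nonlinest}: each quintic term is placed with four factors in $L_{t,x}^{10}$ and one (derivative) factor in $L_{t,x}^{\frac{10}{3}}$, since $\tfrac{7}{10}=\tfrac{4}{10}+\tfrac{3}{10}$.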
In the following, we will discuss the defocusing case, in which the global solution has enough decay to imply scattering. \section{Morawetz estimate and scattering theory} In this section, we establish an interaction Morawetz estimate and the scattering theory in Theorem \ref{thm:scattering}. Throughout this section, we are in the defocusing case with repulsive potential, that is, $K<0$ and $\lambda=1$. \subsection{Morawetz estimate} In this subsection, we establish the interaction Morawetz estimate for \eqref{equ1.1} with $K<0$ and $\lambda=1$. \begin{lemma}\label{lem:virial} Let $u:\mathbb{R}\times\mathbb{R}^3\to \mathbb{C}$ be a (sufficiently smooth and decaying) solution to $i\partial_tu+\Delta u+V(x) u=\mathcal{N}$ with $\mathcal{N}\bar{u}\in \mathbb{R}$. Given a smooth weight $w:\mathbb{R}^3\to\mathbb{R}$, we define $$I(t,w)=\int_{\mathbb{R}^3} w(x)|u(t,x)|^2\;dx.$$ Then, we have \begin{align}\label{equ:firstd} \partial_tI(t,w)=&2{\rm Im}\int_{\mathbb{R}^3}\bar{u}\nabla u\cdot \nabla w\;dx,\\ \label{equ:secdd} \partial_{tt}I(t,w)=&-\int_{\mathbb{R}^3}|u|^2\Delta^2w\;dx+4{\rm Re}\int_{\mathbb{R}^3} \partial_ju\partial_k\bar{u}\partial_j\partial_kw\;dx\\\nonumber &+\int_{\mathbb{R}^3}|u|^2\nabla V\cdot\nabla w\;dx +2{\rm Re}\int\big(\mathcal{N}\nabla\bar{u}-\bar{u}\nabla\mathcal{N}\big)\cdot\nabla w\;dx. \end{align} \end{lemma} \begin{proof} First, noting that \begin{equation}\label{equ:ut} \partial_tu=i\Delta u+iV(x)u-i\mathcal{N}, \end{equation} we get \begin{align*} \partial_tI(t,w)=&2{\rm Re}\int_{\mathbb{R}^3}w(x)\partial_tu\bar{u}\;dx\\ =&2{\rm Re}\int_{\mathbb{R}^3}w(x)\big(i\Delta u+iV(x)u-i\mathcal{N}\big)\bar{u}\;dx\\ =&-2{\rm Im}\int_{\mathbb{R}^3}w(x)\Delta u\bar{u}\;dx\\ =&2{\rm Im}\int_{\mathbb{R}^3}\bar{u}\nabla u\cdot \nabla w\;dx.
\end{align*} Furthermore, \begin{align*} \partial_{tt}I(t,w)=&2{\rm Im}\int_{\mathbb{R}^3}\bar{u}_t\nabla u\cdot \nabla w\;dx+2{\rm Im}\int_{\mathbb{R}^3}\bar{u}\nabla u_t\cdot \nabla w\;dx\\ =&2{\rm Im}\int_{\mathbb{R}^3}\big(-i\Delta \bar{u}-iV(x)\bar{u}+i\bar{\mathcal{N}}\big)\nabla u\cdot \nabla w\;dx\\ &+2{\rm Im}\int_{\mathbb{R}^3}\bar{u}\nabla \big(i\Delta u+iV(x)u-i\mathcal{N}\big)\cdot \nabla w\;dx\\ =&2{\rm Re}\int \big(-\Delta \bar{u}\nabla u+\bar{u}\nabla\Delta u\big)\cdot\nabla w\;dx\\ &+2{\rm Re}\int\big(\bar{u}\nabla(Vu)-V\bar{u}\nabla u\big)\cdot\nabla w\;dx\\ &+2{\rm Re}\int\big(\mathcal{N}\nabla\bar{u}-\bar{u}\nabla\mathcal{N}\big)\cdot\nabla w\;dx\\ =&-\int_{\mathbb{R}^3}|u|^2\Delta^2w\;dx+4{\rm Re}\int \partial_ju\partial_k\bar{u}\partial_j\partial_kw\\ &+\int_{\mathbb{R}^3}|u|^2\nabla V\cdot\nabla w\;dx +2{\rm Re}\int\big(\mathcal{N}\nabla\bar{u}-\bar{u}\nabla\mathcal{N}\big)\cdot\nabla w\;dx. \end{align*} \end{proof} \begin{remark}\label{rem:moride} $(i)$ For $\mathcal{N}=\lambda|u|^{p-1}u$, we have $\mathcal{N}\bar{u}=\lambda |u|^{p+1}\in\mathbb{R}$, and then $$2{\rm Re}\int_{\mathbb{R}^3}\big(\mathcal{N}\nabla\bar{u}-\bar{u}\nabla\mathcal{N}\big)\cdot\nabla w\;dx=\lambda\frac{2(p-1)}{p+1}\int_{\mathbb{R}^3}|u|^{p+1}\Delta w\;dx.$$ $(ii)$ For $\mathcal{N}=|u|^{p-1}u,\;V(x)=\frac{K}{|x|}$, and $w$ being radial, we have \begin{align*} \partial_{tt}I(t,w)=&-\int_{\mathbb{R}^3}|u|^2\Delta^2w\;dx+4{\rm Re}\int \partial_ju\partial_k\bar{u}\partial_j\partial_kw\\\nonumber &-K\int_{\mathbb{R}^3}\frac{|u|^2}{|x|^2}\partial_r w\;dx +\frac{2(p-1)}{p+1}\int_{\mathbb{R}^3}|u|^{p+1}\Delta w\;dx. \end{align*} \end{remark} As a consequence, we obtain the following classical Morawetz estimate by taking $w(x)=|x|$. \begin{lemma}[Classical Morawetz estimate]\label{lem:moraw} Let $u:\;I\times\mathbb{R}^3\to\mathbb{C}$ solve \eqref{equ1.1} with $\lambda=1$.
Then, \begin{align}\label{equ:morident} \frac{d}{dt}{\rm Im}\int_{\mathbb{R}^3}\bar{u}\frac{x}{|x|}\cdot\nabla u\;dx=&c|u(t,0)|^2+2\int_{\mathbb{R}^3}\frac{|\nabla_\theta u|^2}{|x|}\;dx\\\nonumber &-\frac{K}{2}\int_{\mathbb{R}^3}\frac{|u|^2}{|x|^2}\;dx+\int_{\mathbb{R}^3}\frac{|u|^4}{|x|}\;dx. \end{align} Moreover, for $K<0$, we have \begin{equation}\label{equ:Morawe} \int_I\int_{\mathbb{R}^3}\Big(\frac{|u|^2}{|x|^2}+\frac{|u|^4}{|x|}\Big)\;dx\;dt\leq C\sup_{t\in I}\|u(t,\cdot)\|_{\dot H^\frac12}^2. \end{equation} \end{lemma} Next, we establish the interaction Morawetz estimate for \eqref{equ1.1} with $K<0$ and $\lambda=1$, as in the case $K=0$ treated in \cite{CKSTT}. \begin{theorem}[Interaction Morawetz estimate]\label{thm:intmorawet} Let $u:\;I\times\mathbb{R}^3\to\mathbb{C}$ solve $i\partial_tu+\Delta u+\frac{K}{|x|}u=|u|^{p-1}u.$ Then, for $K<0$, we have \begin{equation}\label{interac2} \big\|u\big\|_{L_t^4(I;L_x^4(\mathbb{R}^3))}^2\leq C\|u(t_0)\|_{L^2}\sup_{t\in I}\|u(t)\|_{\dot H^{\frac12}}. \end{equation} \end{theorem} \begin{proof} We consider the NLS equation in the form \begin{equation}\label{NLS} i\partial_tu+\Delta u=gu \end{equation} where $g=g(\rho,|x|)$ is a real function of $\rho=|u|^2=2T_{00}$ and $|x|$. We first recall the conservation laws for the free Schr\"odinger equation from Tao \cite{Tao}: \begin{equation*} \begin{split} \partial_t T_{00}+\partial_j T_{0j}=0,\\ \partial_t T_{0j}+\partial_k T_{jk}=0, \end{split} \end{equation*} where the mass density quantity $T_{00}$ is defined by $T_{00}=\tfrac12|u|^2,$ the mass current and the momentum density quantity $T_{0j}=T_{j0}$ is given by $T_{0j}=T_{j0}=\mathrm{Im}(\bar u\partial_j u)$, and the quantity $T_{jk}$ is \begin{equation}\label{stress} T_{jk}=2\mathrm{Re}(\partial_j u \partial_k\bar u)-\tfrac12\delta_{jk}\Delta(|u|^2), \end{equation} for all $j,k=1,\ldots,n$, where $\delta_{jk}$ is the Kronecker delta.
Noting that the kinetic terms are unchanged, we see that for \eqref{NLS} \begin{equation}\label{Local Conservation} \begin{split} \partial_t T_{00}+\partial_j T_{0j}&=0,\\ \partial_t T_{0j}+\partial_k T_{jk}&=-\rho\partial_j g. \end{split} \end{equation} By a density argument, we may assume sufficient smoothness and decay at infinity of the solutions to justify the calculations, in particular the integrations by parts. Let $h$ be a sufficiently regular real even function defined in $\mathbb{R}^3$, e.g., $h=|x|$. The starting point is the auxiliary quantity \begin{equation*} J=\tfrac12\langle|u|^2, h\ast |u|^2\rangle=2\langle T_{00}, h\ast T_{00}\rangle. \end{equation*} Define the quadratic Morawetz quantity $M=\tfrac14\partial_t J$. Hence we can rewrite \begin{equation}\label{3.1} M=-\tfrac12\langle\partial_jT_{0j}, h\ast T_{00}\rangle-\tfrac12\langle T_{00}, h\ast \partial_jT_{0j} \rangle=-\langle T_{00}, \partial_j h\ast T_{0j} \rangle. \end{equation} By \eqref{Local Conservation} and integration by parts, we have \begin{equation*} \begin{split} \partial_tM&=\langle\partial_kT_{0k}, \partial_j h\ast T_{0j} \rangle-\langle T_{00}, \partial_j h\ast\partial_t T_{0j} \rangle\\&=-\sum_{j,k=1}^n\langle T_{0k}, \partial_{jk} h\ast T_{0j} \rangle+\langle T_{00}, \partial_{jk} h\ast T_{jk} \rangle+\langle \rho, \partial_j h\ast(\rho\partial_j g) \rangle. \end{split} \end{equation*} For our purpose, we note that \begin{equation} \begin{split} \sum_{j,k=1}^n\langle T_{0k}, \partial_{jk} h\ast T_{0j} \rangle&=\big\langle \mathrm{Im}(\bar u\nabla u), \nabla^2 h\ast \mathrm{Im}(\bar u\nabla u) \big\rangle\\&=\big\langle \bar u\nabla u, \nabla^2 h\ast \bar u\nabla u \rangle-\langle \mathrm{Re}(\bar u\nabla u), \nabla^2 h\ast \mathrm{Re}(\bar u\nabla u) \big\rangle.
\end{split} \end{equation} Therefore, it follows that \begin{equation*} \begin{split} \partial_tM=&\big\langle \mathrm{Re}(\bar u\nabla u), \nabla^2 h\ast \mathrm{Re}(\bar u\nabla u) \big\rangle-\big\langle \bar u\nabla u, \nabla^2 h\ast \bar u\nabla u \big\rangle\\&+\Big\langle \bar uu, \partial_{jk} h\ast \big(\mathrm{Re}(\partial_j u \partial_k\bar u)-\tfrac14\delta_{jk}\Delta(|u|^2)\big) \Big\rangle+\big\langle \rho, \partial_j h\ast(\rho\partial_j g) \big\rangle. \end{split} \end{equation*} From the observation \begin{equation*} \begin{split} -\big\langle \bar uu, \partial_{jk} h\ast\delta_{jk}\Delta(|u|^2) \big\rangle=\big\langle \nabla (|u|^2), \Delta h\ast \nabla(|u|^2) \big\rangle, \end{split} \end{equation*} we write \begin{equation}\label{Morawetz equality} \begin{split} \partial_tM=\tfrac12\langle \nabla \rho, \Delta h\ast\nabla\rho \rangle+R+\big\langle \rho, \partial_j h\ast(\rho\partial_j g) \big\rangle, \end{split} \end{equation} where $R$ is given by \begin{equation*}\label{3.4} \begin{split} R&=\big\langle \bar uu, \nabla^2 h\ast (\nabla\bar u \nabla u) \big\rangle-\big\langle \bar u\nabla u, \nabla^2 h\ast \bar u\nabla u \big\rangle\\&=\tfrac12\int \Big(\bar u(x)\nabla \bar u(y)-\bar u(y)\nabla\bar u(x)\Big)\nabla^2h(x-y)\Big(u(x)\nabla u(y)-u(y)\nabla u(x)\Big)\mathrm{d}x\mathrm{d}y. \end{split} \end{equation*} Since the Hessian of $h$ is positive semidefinite, we have $R\geq0$. Integrating in time over an interval $[t_1, t_2]\subset I$ yields \begin{equation*} \begin{split} \int_{t_1}^{t_2}\Big\{\frac12\langle \nabla \rho, \Delta h\ast\nabla\rho \rangle+\langle \rho, \partial_j h\ast(\rho\partial_j g) \rangle+R\Big\}\mathrm{d}t=-\langle T_{00}, \partial_j h\ast T_{0j} \rangle\big|_{t=t_1}^{t=t_2}. \end{split} \end{equation*} From now on, we choose $h(x)=|x|$.
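For this choice, the Hessian is $\nabla^2 h=(I-\hat x\hat x^{\mathsf T})/|x|$, which is positive semidefinite; this is exactly what makes $R\ge 0$. A quick numerical sanity check at a sample point (illustrative only, not part of the proof):

```python
import numpy as np

# Hessian of h(x) = |x| in R^3: (I - x x^T / |x|^2) / |x|.
x = np.array([1.0, -2.0, 0.5])
r = np.linalg.norm(x)
hess = (np.eye(3) - np.outer(x, x) / r**2) / r

eigvals = np.linalg.eigvalsh(hess)
# One zero eigenvalue (radial direction) and 1/|x| twice (angular directions).
assert eigvals.min() > -1e-12          # positive semidefinite
assert np.isclose(eigvals.max(), 1 / r)
```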
One can follow the arguments in \cite{CKSTT} to bound the right-hand side by the quantity \begin{equation*} \Big|\mathrm{Im}\iint_{\mathbb{R}^{3}\times\mathbb{R}^3}|u(x)|^2\frac{x-y}{|x-y|}\bar u(y)\nabla u(y)dxdy\Big|\leq C\sup_{t\in I}\|u(t)\|^2_{L^2}\|u(t)\|^2_{\dot H^{\frac12}}. \end{equation*} Therefore we conclude \begin{equation}\label{Morawetz inequality} \int_{t_1}^{t_2}\big\langle \rho, \partial_j h\ast(\rho\partial_j g) \big\rangle dt+\big\|u\big\|_{L^4(I;L^4(\mathbb{R}^3))}^2\leq C\sup_{t\in I}\|u(t)\|_{L^2}\|u(t)\|_{\dot H^{\frac12}}. \end{equation} Now we consider the term \begin{equation*} \begin{split} P&:=\big\langle \rho, \nabla h\ast (\rho\nabla g) \big\rangle. \end{split} \end{equation*} Taking $g(\rho,|x|)=\rho^{(p-1)/2}+V(x)$, we can write $P=P_1+P_2$, where \begin{equation}\label{3.5} \begin{split} P_1= \big\langle\rho, \nabla h\ast \big(\rho\nabla (\rho^{(p-1)/2})\big)\big\rangle=\frac{p-1}{p+1}\big\langle\rho, \Delta h\ast \rho^{(p+1)/2}\big\rangle\geq 0 \end{split} \end{equation} and \begin{equation}\label{P2} \begin{split} P_2=& \iint\rho(x)\nabla h(x-y)\rho(y)\nabla \big(V(y)\big)\mathrm{d}x\mathrm{d}y\\ =&-K\iint|u(x)|^2\frac{(x-y)\cdot y}{|x-y|\cdot|y|^3}|u(y)|^2\;dx\;dy. \end{split} \end{equation} By using the Morawetz estimate \eqref{equ:Morawe} $$\int_I\int_{\mathbb{R}^3}\frac{|u|^2}{|x|^2}\;dx\;dt\leq C\sup_{t\in I}\|u\|_{\dot{H}^\frac12}^2,$$ one has $$\int_I|P_2|\;dt\leq C\|u_0\|_{L^2}^2\sup_{t\in I}\|u\|_{\dot{H}^\frac12}^2.$$ This concludes the proof of Theorem \ref{thm:intmorawet}.
\end{proof} \begin{remark}\label{rem:intmor} By the same argument as above, one can extend the result from the Coulomb potential $V(x)=\frac{K}{|x|}$ to any potential $V(x)$ for which the following holds: first, by the Morawetz estimate, we have $$\int_I\int_{\mathbb{R}^3}|u|^2\frac{x}{|x|}\cdot\nabla V\;dx\;dt\leq C\sup_{t\in I}\|u\|_{\dot{H}^\frac12}^2.$$ As in \eqref{P2}, we are reduced to estimating the term $$\int_I\int_{\mathbb{R}^3}|u|^2|\nabla V|\;dx\;dt.$$ Therefore, the result extends to $V(x)$ satisfying $$\frac{x}{|x|}\cdot \nabla V\geq c|\nabla V|,$$ for some positive constant $c$. \end{remark} \subsection{Scattering theory} Now we use the global-in-time interaction Morawetz estimate \eqref{interac2} \begin{equation}\label{rMorawetz} \big\|u\big\|_{L_t^4(\mathbb{R};L_x^4(\mathbb{R}^3))}^2\leq C\|u_0\|_{L^2}\sup_{t\in \mathbb{R}}\|u(t)\|_{\dot H^{\frac12}}, \end{equation} to prove the scattering part of Theorem \ref{thm:scattering}. Since the construction of the wave operator is standard, we only show the asymptotic completeness.\vspace{0.2cm} Let $u$ be a global solution to \eqref{equ1.1}. Let $\eta>0$ be a small constant to be chosen later and split $\mathbb{R}$ into a finite number $L=L(\|u_0\|_{H^1})$ of subintervals $I_j=[t_j,t_{j+1}]$ such that \begin{equation}\label{equ4.16} \|u\|_{L_{t,x}^{4}(I_j\times\mathbb{R}^3)}\leq\eta. \end{equation} Define $$\big\|\langle\nabla\rangle u\big\|_{S^0(I)}:=\sup_{(q,r)\in\Lambda_0:r\in[2,3_-]}\big\|\langle\nabla\rangle u\big\|_{L_t^qL_x^r(I\times\mathbb{R}^3)}.$$ Using the Strichartz estimate and the Sobolev norm equivalence \eqref{equ:sobequi}, we obtain \begin{align}\label{star} \big\|\langle\nabla\rangle u\big\|_{S^0(I_j)}\lesssim&\|u(t_j)\|_{H^1}+\big\|\langle\nabla\rangle(|u|^{p-1}u) \big\|_{L_t^2L_x^\frac{6}{5}(I_j\times\mathbb{R}^3)}. \end{align} Let $\epsilon>0$ be a constant to be determined later, and set $r_\epsilon=\frac{6}{3-(4/(2+\epsilon))}$.
On the other hand, we use the Leibniz rule and H\"older's inequality to obtain \begin{align*} \big\|\langle\nabla\rangle(|u|^{p-1}u) \big\|_{L_t^2L_x^\frac65}\lesssim&\big\|\langle\nabla\rangle u\big\|_{L_t^{2+\epsilon}(I_j;L_x^{r_\epsilon})} \|u\|^{p-1}_{L_t^{\frac{2(p-1)(2+\epsilon)}{\epsilon}}L_x^\frac{3(p-1)(2+\epsilon)}{4+\epsilon}}.\end{align*} We take $\epsilon=2_+$, so that $r_\epsilon=3_-$. If $p\in(\frac73,4]$, then ${2(p-1)(2+\epsilon)}/{\epsilon}>4$ and $2\leq \frac{3(p-1)(2+\epsilon)}{4+\epsilon}\leq 6$. Therefore we use interpolation to obtain \begin{align*} \|u\|_{L_t^{\frac{2(p-1)(2+\epsilon)}{\epsilon}}L_x^\frac{3(p-1)(2+\epsilon)}{4+\epsilon}}\leq C\|u\|^\alpha_{L_{t,x}^{4}(I_j\times\mathbb{R}^3)}\|u\|^\beta_{L_t^{\infty}L_x^6(I_j\times\mathbb{R}^3)}\|u\|^\gamma_{L_t^{\infty}L_x^2(I_j\times\mathbb{R}^3)}, \end{align*} where $\alpha>0,\beta,\gamma\geq 0$ satisfy $\alpha+\beta+\gamma=1$ and \begin{align*} \begin{cases} \frac{\epsilon}{2(p-1)(2+\epsilon)}&=\frac{\alpha}{4}+\frac{\beta}{\infty}+\frac{\gamma}{\infty},\\ \frac{4+\epsilon}{3(p-1)(2+\epsilon)}&=\frac{\alpha}{4} +\frac{\beta}{6}+\frac{\gamma}2.
\end{cases} \end{align*} Hence \begin{align*} \big\|\langle\nabla\rangle(|u|^{p-1}u) \big\|_{L_t^2L_x^\frac{6}{5}}\lesssim&\big\|\langle\nabla\rangle u\big\|_{L_t^{2+\epsilon}(I_j;L_x^{r_\epsilon})}\|u\|^{\alpha(p-1)}_{L_{t,x}^{4}(\mathbb{R}\times\mathbb{R}^3)} \|u\|^{(\beta+\gamma)(p-1)}_{L_t^\infty H^1_x(I_j\times\mathbb{R}^3)}\\\leq& C\eta^{\alpha(p-1)}\big\|\langle\nabla\rangle u\big\|_{S^0(I_j)}.\end{align*} Plugging this into \eqref{star} and noting that $\alpha(p-1)>0$, we can choose $\eta$ small enough such that \begin{align*} \big\|\langle\nabla\rangle u\big\|_{S^0(I_j)}\leq C(E,M,\eta).\end{align*} Hence, by the finiteness of $L$, we have \begin{align}\label{boundnorm} \big\|\langle\nabla\rangle u\big\|_{S^0(\mathbb{R})}\leq C(E,M,\eta,L).\end{align} If $p\in (4,5)$, we use interpolation to show that \begin{align*} \|u\|_{L_t^{\frac{2(p-1)(2+\epsilon)}{\epsilon}}L_x^\frac{3(p-1)(2+\epsilon)}{4+\epsilon}}\leq C\|u\|^\alpha_{L_{t,x}^{4}(I_j\times\mathbb{R}^3)}\|u\|^\beta_{L_t^{\infty}L_x^6(I_j\times\mathbb{R}^3)}\|u\|^\gamma_{L_t^{6}L_x^{18}(I_j\times\mathbb{R}^3)},\end{align*} where $\alpha>0,\beta,\gamma\geq 0$ satisfy $\alpha+\beta+\gamma=1$ and \begin{align*} \begin{cases} \frac{\epsilon}{2(p-1)(2+\epsilon)}&=\frac{\alpha}{4}+\frac{\beta}{\infty}+\frac{\gamma}{6},\\ \frac{4+\epsilon}{3(p-1)(2+\epsilon)}&=\frac{\alpha}{4} +\frac{\beta}{6}+\frac{\gamma}{18}. \end{cases} \end{align*} It is easy to solve these equations for $p\in(4,5)$.
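Indeed, the last two scaling conditions together with $\alpha+\beta+\gamma=1$ form a $3\times 3$ linear system. As a hedged illustration (the values of $p$ and $\epsilon$ below are arbitrary sample choices), it can be solved numerically, confirming that an admissible triple with $\alpha>0$ and $\beta,\gamma\ge 0$ exists:

```python
import numpy as np

p, eps = 4.5, 2.01   # sample values: p in (4,5), eps = 2_+
lhs1 = eps / (2 * (p - 1) * (2 + eps))        # time-exponent equation
lhs2 = (4 + eps) / (3 * (p - 1) * (2 + eps))  # space-exponent equation

# alpha + beta + gamma = 1, plus the two scaling equations above.
A = np.array([[1.0, 1.0, 1.0],
              [1/4, 0.0, 1/6],
              [1/4, 1/6, 1/18]])
b = np.array([1.0, lhs1, lhs2])
alpha, beta, gamma = np.linalg.solve(A, b)

assert alpha > 0 and beta >= 0 and gamma >= 0
assert np.allclose(A @ np.array([alpha, beta, gamma]), b)
```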
Since $r_\epsilon\in[2,3_-]$ for $\epsilon=2_+$, we have \begin{align*} \big\|\langle\nabla\rangle(|u|^{p-1}u) \big\|_{L_t^2L_x^\frac{6}{5}}\lesssim&\big\|\langle\nabla\rangle u\big\|_{L_t^{2+\epsilon}(I_j;L_x^{r_\epsilon})}\|u\|^{\alpha(p-1)}_{L_{t,x}^4(\mathbb{R}\times\mathbb{R}^3)}\|u\|^{\beta(p-1)}_{L_t^\infty H^1_x(I_j\times\mathbb{R}^3)}\|\langle\nabla\rangle u\|^{\gamma(p-1)}_{L_t^{6}L_x^{\frac{18}{7}}(I_j\times\mathbb{R}^3)}\\&\leq C\eta^{\alpha(p-1)}\big\|\langle\nabla\rangle u\big\|^{1+\gamma(p-1)}_{S^0(I_j)}.\end{align*} Hence, arguing as above, we obtain \eqref{boundnorm}. Finally, we utilize \eqref{boundnorm} to show asymptotic completeness. We need to prove that there exist unique $u_\pm$ such that $$\lim_{t\to\pm\infty}\|u(t)-e^{it\mathcal{L}_K}u_\pm\|_{H^1_x}=0.$$ By time reversal symmetry, it suffices to prove this for positive times. For $t>0$, we will show that $v(t):=e^{-it\mathcal{L}_K}u(t)$ converges in $H^1_x$ as $t\to+\infty$, and denote the limit by $u_+$. In fact, by Duhamel's formula, we obtain \begin{equation}\label{equ4.21} v(t)=u_0-i\int_0^te^{-i\tau \mathcal{L}_K}(|u|^{p-1}u)(\tau)d\tau. \end{equation} Hence, for $0<t_1<t_2$, we have $$v(t_2)-v(t_1)=-i\int_{t_1}^{t_2}e^{-i\tau \mathcal{L}_K}(|u|^{p-1}u)(\tau)d\tau.$$ Arguing as before, we deduce that for some $\alpha>0,\beta\geq1$ \begin{align*} \|v(t_2)-v(t_1)\|_{H^1(\mathbb{R}^3)}=&\Big\|\int_{t_1}^{t_2}e^{-i\tau \mathcal{L}_K}(|u|^{p-1}u)(\tau)d\tau\Big\|_{H^1(\mathbb{R}^3)}\\ \lesssim&\big\|\langle\nabla\rangle(|u|^{p-1}u) \big\|_{L_t^2L_x^\frac65([t_1,t_2]\times\mathbb{R}^3)}\\ \lesssim&\|u\|_{L_{t,x}^{4}([t_1,t_2]\times\mathbb{R}^3)}^{\alpha(p-1)}\big\|\langle\nabla\rangle u\big\|^\beta_{S^0([t_1,t_2])} \\ \to&0\quad \text{as}\quad t_1,~t_2\to+\infty. \end{align*} Thus, as $t$ tends to $+\infty$, the limit in \eqref{equ4.21} is well defined.
In particular, we find the asymptotic state $$u_+=u_0-i\int_0^\infty e^{-i\tau \mathcal{L}_K}(|u|^{p-1}u)(\tau)d\tau.$$ Therefore, we conclude the proof of Theorem \ref{thm:scattering}. \section{Blow up} In this section, we study the blow-up behavior of the solution in the focusing case, i.e., $\lambda=-1$. In the case $K>0$, we will use the sharp Hardy inequality and Young's inequality to obtain \begin{align}\nonumber \int_{\mathbb{R}^3}\frac{|u|^2}{|x|}\;dx\leq &\big(\int|u|^2\;dx\big)^\frac12\Big(\int\frac{|u|^2}{|x|^2}\;dx\Big)^\frac12 \\\nonumber \leq & 2\|u\|_{L^2}\|u\|_{\dot{H}^1}\\\label{equ:kpos} \leq&\frac{1}{C_{p,K}}\|u\|_{L^2}^2+C_{p,K}\|u\|^2_{\dot{H}^1}, \end{align} for any $C_{p,K}>0$. From Remark \ref{rem:moride}, it follows that for a radial function $w$, we have \begin{align}\label{equ:virit} \partial_{tt}\int_{\mathbb{R}^3}w(x)|u|^2\;dx=&-\int_{\mathbb{R}^3}|u|^2\Delta^2w\;dx+4{\rm Re}\int_{\mathbb{R}^3} \partial_ju\partial_k\bar{u}\partial_j\partial_kw\\\nonumber &-K\int_{\mathbb{R}^3}\frac{|u|^2}{|x|^2}\partial_r w\;dx -\frac{2(p-1)}{p+1}\int_{\mathbb{R}^3}|u|^{p+1}\Delta w\;dx. \end{align} {\bf Case 1: $u_0\in\Sigma.$} By taking $w(x)=|x|^2$, we obtain \begin{corollary}\label{cor:virial} Let $u$ solve \eqref{equ1.1}. Then, we have \begin{align}\nonumber \frac{d^2}{dt^2}\int_{\mathbb{R}^3}|x|^2|u(t,x)|^2\;dx=&8\int_{\mathbb{R}^3}|\nabla u|^2\;dx-2K\int_{\mathbb{R}^3}\frac{|u|^2}{|x|}\;dx-\frac{12(p-1)}{p+1}\int_{\mathbb{R}^3}|u|^{p+1}\;dx\\\nonumber =&12(p-1)E(u)-2(3p-7)\int_{\mathbb{R}^3}|\nabla u|^2\;dx+6K\int_{\mathbb{R}^3}\frac{|u|^2}{|x|}\;dx. \end{align} \end{corollary} Let $I=[0,T]$ be the maximal interval of existence.
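The first two steps of \eqref{equ:kpos} are Cauchy--Schwarz followed by the sharp Hardy inequality $\int_{\mathbb{R}^3}\frac{|u|^2}{|x|^2}\,dx\le 4\|\nabla u\|_{L^2}^2$. A numerical sanity check with a Gaussian test function and a simple radial quadrature (illustrative only, not part of the argument):

```python
import numpy as np

# Radial test function u(r) = exp(-r^2) on R^3; integrals use the 4*pi*r^2
# volume element and a plain Riemann sum on a uniform grid.
r = np.linspace(1e-6, 12.0, 200000)
dr = r[1] - r[0]
u = np.exp(-r**2)
du = -2 * r * np.exp(-r**2)            # u'(r)
w = 4 * np.pi * r**2

integ = lambda f: float(np.sum(f * w) * dr)

lhs   = integ(u**2 / r)                # ∫ |u|^2/|x| dx
mass  = np.sqrt(integ(u**2))           # ||u||_{L^2}
hardy = np.sqrt(integ(u**2 / r**2))    # (∫ |u|^2/|x|^2 dx)^{1/2}
grad  = np.sqrt(integ(du**2))          # ||∇u||_{L^2}

assert lhs <= mass * hardy             # Cauchy-Schwarz step
assert hardy <= 2 * grad               # sharp Hardy inequality
```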
Let $$y(t):=\int|x|^2|u(t,x)|^2\;dx;$$ then for $t\in I$, $$y'(t)=4{\rm Im}\int_{\mathbb{R}^3}x\cdot\nabla u\bar{u}\;dx.$$ By Corollary \ref{cor:virial} and \eqref{equ:kpos} with $C_{p,K}=\frac{3p-7}{3K}$ when $K>0$, we get \begin{equation}\label{equ:y''t} y''(t)\leq 12(p-1)C(E,M):= \begin{cases} 12(p-1)E(u_0) \quad\text{if}\quad K\leq0\\ 12(p-1)E(u_0)+\frac{18K^2}{3p-7}M(u_0)\quad\text{if}\quad K>0. \end{cases} \end{equation} Hence $$y(t)\leq 6(p-1)C(E,M)t^2+y'(0)t+y(0),$$ which implies that $I$ is finite provided one of the following holds: $(i)$ $C(E,M)<0$; $(ii)$ $C(E,M)=0$ and $y'(0)<0$; $(iii)$ $C(E,M)>0$, $y'(0)<0$, and $y'(0)^2\geq 24(p-1)C(E,M)y(0)$. Indeed, under the above conditions, we have $T<+\infty$ and $$\lim_{t\to T}y(t)=0.$$ This, together with $$\|u_0\|_{L_x^2}^2=\|u(t)\|_{L^2}^2\leq\big\||x|u(t)\big\|_{L^2}\|u(t)\|_{\dot H^1},$$ implies \begin{equation}\label{equ:blup} \lim_{t\to T}\|u(t)\|_{\dot{H}^1}=+\infty. \end{equation} {\bf Case 2: $u_0\in H^1_{\rm rad}(\mathbb{R}^3)$.} Let $\phi$ be a smooth, radial function satisfying $|\partial^2_r\phi(r)|\leq 2$, $\phi(r)=r^2 $ for $r\leq 1$, and $\phi(r)=0$ for $r\geq3$.
For $R\geq1 $, we define $$\phi_R(x)=R^2\phi\big(\tfrac{|x|}{R}\big)\;\text{ and }\; V_R(t)=\int_{\mathbb{R}^3} \phi_R(x)|u(t,x)|^2dx.$$ Let $u(t,x)$ be a radial solution to \eqref{equ1.1}; then, by a direct computation using \eqref{equ:virit}, we have \begin{equation}\label{equ:vr} \partial_t V_R(t)=2 {\rm Im} \int_{\mathbb{R}^3}[\overline{u}\partial_j u](t,x)\partial_j\phi_R(x) dx, \end{equation} and \begin{align*} \partial^2_t V_R(t)=&4{\rm Re}\int_{\mathbb{R}^3} \partial_ju\partial_k\bar{u}\partial_j\partial_k\phi_R-\int_{\mathbb{R}^3}|u|^2\Delta^2\phi_R\;dx\\\nonumber &-K\int_{\mathbb{R}^3}\frac{|u|^2}{|x|^2} \phi_R'\;dx -\frac{2(p-1)}{p+1}\int_{\mathbb{R}^3}|u|^{p+1}\Delta \phi_R\;dx\\ = & 4\int_{\mathbb{R}^3} \phi_R''|\nabla u|^2dx-K\int_{\mathbb{R}^3}\frac{|u|^2}{|x|^2} \phi_R'\;dx-\int_{\mathbb{R}^3}\left[ \Delta^2\phi_R |u(t,x)|^2 +\frac{2(p-1)}{p+1}\Delta\phi_R(x)|u|^{p+1}(t,x)\right] dx\\ =&8\int_{\mathbb{R}^3}|\nabla u|^2\;dx-2K\int_{\mathbb{R}^3}\frac{|u|^2}{|x|}\;dx-\frac{12(p-1)}{p+1}\int_{\mathbb{R}^3}|u|^{p+1}\;dx-\int_{\mathbb{R}^3} \Delta^2\phi_R |u(t,x)|^2\;dx\\ &-4\int_{\mathbb{R}^3}|\nabla u|^2(2-\phi_R'')\;dx+K\int_{\mathbb{R}^3}\frac{|u|^2}{|x|}(2-\phi_R')\;dx+\frac{2(p-1)}{p+1}\int_{\mathbb{R}^3}|u|^{p+1}(6-\Delta\phi_R)\;dx\\ \leq&12(p-1)E(u)-2(3p-7)\|\nabla u\|_{L^2}^2+6K\int_{\mathbb{R}^3}\frac{|u|^2}{|x|}\;dx\\ &-4\int_{\mathbb{R}^3}|\nabla u|^2(2-\phi_R'')\;dx+C\int_{|x|\geq R}\big(\tfrac{|u|^2}{R}+|u|^{p+1}\big)\;dx. \end{align*} By the radial Sobolev inequality, we have \begin{align*} \|f\|_{L^\infty(|x|\geq R)}\leq&\frac{c}{R}\|f\|_{L^2_x(|x|\geq R)}^\frac12\|\nabla f\|_{L^2_x(|x|\geq R)}^\frac12.
\end{align*} Therefore, by mass conservation and Young's inequality, we know that for any $\epsilon>0$ there exists a sufficiently large $R$ such that, for $K\leq0$, \begin{align*} \partial_t^2V_R(t)\leq&12(p-1)E(u)-2(3p-7-\epsilon)\|u\|_{\dot{H}^1}^2 + \epsilon^2\\ \leq&12(p-1)E(u_0)+ \epsilon^2, \end{align*} and, for $K>0$, by using \eqref{equ:kpos} with $C_{p,K}=\frac{3p-7-\delta}{3K}$ and $0<\delta\ll1$, \begin{align*} \partial_t^2V_R(t)\leq&12(p-1)E(u)-(\delta-\epsilon)\|u\|_{\dot{H}^1}^2+\frac{18K^2}{3p-7-\delta}M(u) + \epsilon^2\\ \leq&12(p-1)E(u_0)+\frac{18K^2}{3p-7-\delta}M(u_0)+ \epsilon^2, \end{align*} for any $3p-7>\delta>\epsilon>0$. Finally, choosing $\epsilon$ sufficiently small, we obtain \begin{equation}\label{equ:vrt} \partial_t^2V_R(t)\leq \begin{cases} 6(p-1)E(u_0), & \mbox{if } K\leq0 \\ 6(p-1)E(u_0)+\frac{9K^2}{3p-7-\delta}M(u_0), & \mbox{if } K>0, \end{cases} \end{equation} which implies that $u$ blows up in finite time by the same argument as in Case 1, since for the case $K>0$, the assumption $$E(u_0)+\frac{3K^2}{2(3p-7)(p-1)}M(u_0)<0$$ shows that there exists $0<\delta\ll1$ such that $$6(p-1)E(u_0)+\frac{9K^2}{3p-7-\delta}M(u_0)<0.$$ \begin{center}
\section{introduction} Edge learning refers to the deployment of learning algorithms at the network edge so as to have rapid access to massive mobile data for training \emph{artificial intelligence} (AI) models~\cite{zhu2018towards,mao2017survey,wang2018edge}. A popular framework, called \emph{federated edge learning} (FEEL), distributes the task of model training over many edge devices \cite{mcmahan2017communication,konevcny2016federated}. Thereby, FEEL exploits distributed data without compromising their privacy, and furthermore leverages the computation resources of edge devices. Essentially, the framework is a distributed implementation of \emph{stochastic gradient descent} (SGD) in a wireless network. In FEEL, edge devices periodically upload locally trained models to an edge server, which are then aggregated and used to update a global model. When the edge devices participating in the learning process share the same wireless medium to convey local updates to the edge server, the limited radio resources can cause severe congestion over the air interface, resulting in a communication bottleneck for FEEL. One promising solution is \emph{over-the-air aggregation}, also called \emph{over-the-air computation} (AirComp), that exploits the waveform superposition property of the wireless medium to support simultaneous transmission by all the devices~\cite{zhu2018low,amiri2019machine}. Compared with conventional orthogonal access schemes, this can dramatically reduce the required resources, particularly when many devices participate in FEEL. However, the uncoded linear analog modulation scheme used for over-the-air aggregation in \cite{zhu2018low,amiri2019machine} may cause difficulty in its deployment in existing systems, which typically use digital modulation (e.g., 3GPP).
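The principle behind AirComp can be sketched in a few lines: with simultaneous transmission over an idealized noiseless MAC with unit channel gains, the receiver directly observes the sum of the transmitted signals, so one channel use replaces $K$ orthogonal ones (the variable names below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
K, d = 10, 8                              # devices, gradient dimension
local_grads = rng.normal(size=(K, d))

# Orthogonal access: K separate uplink transmissions, then averaging.
avg_orthogonal = local_grads.mean(axis=0)

# Over-the-air aggregation: simultaneous transmission; the channel adds
# the K waveforms, so a single channel use yields the sum directly.
superposed = local_grads.sum(axis=0)      # what the antenna receives
avg_aircomp = superposed / K

assert np.allclose(avg_aircomp, avg_orthogonal)
```

In a real channel the superposed signal also carries noise and fading, which is precisely what the convergence analysis in the sequel accounts for.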
In this work, we propose a new design, called \emph{one-bit broadband digital aggregation} (OBDA), that extends the conventional analog design by featuring digital modulation of gradients to facilitate efficient implementation of FEEL in a practical wireless system. We present a comprehensive convergence analysis of OBDA to quantify the effects of channel hostilities, including channel noise, fading and estimation errors, on its convergence rate. The results provide guidelines on coping with the channel hostilities in FEEL systems with OBDA. \subsection{FEEL over a Multiple Access Channel (MAC)} A typical FEEL algorithm alternates between two steps, as shown in Fig. \ref{Fig:1}. In the first step, the edge server broadcasts the current version of the global model to the participating edge devices. Each edge device employs SGD using only locally available data. In the next step, edge devices convey their local updates (gradient estimates or model updates) to the edge server. Each iteration of these two steps is called one \emph{communication round}. The iteration continues until the global model converges. The communication bottleneck has already been acknowledged as a major challenge in the federated learning literature, and several strategies have been proposed to reduce the communication overhead. We can identify three main approaches. The first is to discard the updates from slow-responding edge devices (stragglers) for fast update synchronization~\cite{chen2016revisiting,tandon2017gradient}. Another approach is to employ update significance rather than the computation speed to schedule devices~\cite{kamp2018efficient,chen2018lag}. Update significance is measured by either the model variance \cite{kamp2018efficient}, or the gradient divergence~\cite{chen2018lag}, corresponding to the model-averaging~\cite{mcmahan2017communication} and gradient-averaging~\cite{konevcny2016federated} implementation methods, respectively.
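A bare-bones simulation of this two-step iteration under gradient averaging and perfect links (the least-squares model, data sizes, and step size below are illustrative choices, not taken from the paper; a full local gradient stands in for the SGD step):

```python
import numpy as np

rng = np.random.default_rng(1)
K, n, d = 5, 40, 3                                  # devices, samples/device, dim
w_true = rng.normal(size=d)
X = [rng.normal(size=(n, d)) for _ in range(K)]     # local datasets
y = [Xk @ w_true for Xk in X]

def local_grad(w, Xk, yk):                          # least-squares gradient
    return Xk.T @ (Xk @ w - yk) / len(yk)

def global_loss(w):
    return sum(np.sum((Xk @ w - yk)**2) for Xk, yk in zip(X, y)) / (2 * K * n)

w = np.zeros(d)
loss0 = global_loss(w)
for _ in range(100):                                # communication rounds
    grads = [local_grad(w, Xk, yk) for Xk, yk in zip(X, y)]  # local step
    w -= 0.1 * np.mean(grads, axis=0)               # server-side aggregation
assert global_loss(w) < 1e-3 * loss0                # global model converges
```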
{ The last approach focuses on update compression by exploiting the sparsity of gradient updates~\cite{lin2017deep,sattler2019sparse} and low-resolution gradient/model-parameter quantization \cite{alistarh2017qsgd,2018signsgd,reisizadeh2019fedpaq}. However, all these approaches assume reliable links between the devices and the server and ignore the wireless nature of the communication medium. } The envisioned implementation of FEEL in practical wireless networks requires taking into account wireless channel hostilities and the scarcity of radio resources. The first works in the literature that studied FEEL taking into account the physical layer resource constraints focus on over-the-air aggregation \cite{zhu2018low,amiri2019machine,amiri2019federated,amiri2019collaborative,yang2018federated}. Specifically, a broadband over-the-air aggregation system based on analog modulation called \emph{broadband analog aggregation} (BAA) is designed in \cite{zhu2018low}, where the gradients/models transmitted by devices are averaged over frequency sub-channels. For such a system, several communication-learning tradeoffs are derived to guide the designs of broadband power control and device scheduling. Over-the-air aggregation is also designed for narrow-band channels with an additional feature of gradient dimension reduction exploiting gradient sparsity in \cite{amiri2019machine,amiri2019federated}. Subsequently, the \emph{channel-state information} (CSI) requirement for over-the-air aggregation is relaxed by exploiting multiple antennas at the edge server in \cite{amiri2019collaborative}. The joint design of device scheduling and beamforming for over-the-air aggregation is investigated in \cite{yang2018federated} to accelerate FEEL in a multi-antenna system. A few other recent works focus on radio resource management \cite{abad2019hierarchical,chen2019joint,yang2019scheduling}. In \cite{abad2019hierarchical}, a hierarchical FEEL framework is introduced in a cellular network.
In \cite{chen2019joint}, a novel bandwidth allocation strategy is proposed for minimizing the model training loss of FEEL via convergence analysis accounting for packet transmission errors. Different classic scheduling schemes, such as proportional fair scheduling, are applied in a FEEL system and their effects on the convergence rate are studied in \cite{yang2019scheduling}. The performance of FEEL is typically measured in terms of the convergence rate, which quantifies how fast the global model converges over communication rounds. The current work is the first to present a comprehensive framework for convergence analysis targeting FEEL with AirComp. \subsection{Over-the-Air Aggregation} With over-the-air aggregation, the edge server receives an approximate (noisy) version of the desired functional value, efficiently exploiting the available bandwidth by simultaneous transmission from all the devices, as opposed to orthogonalized massive access. The idea of over-the-air aggregation has previously been studied in the context of data aggregation in sensor networks, also known as AirComp. In \cite{nazer2007computation}, function computation over a MAC is studied from a fundamental information theoretic point of view, assuming \emph{independent and identically distributed} (i.i.d.) source sequences, and focusing on the asymptotic computation rate. This is extended in \cite{GoldenbaumTCOM2013} to wireless MACs and a more general class of non-linear functions. A practical implementation is presented in \cite{abari2016over}. From an implementation perspective, techniques for distributed power control and robust design against channel estimation errors are proposed in~\cite{xiao2008linear} and~\cite{goldenbaum2014channel}, respectively. More recently, AirComp techniques are designed for \emph{multiple-input-multiple-output} (MIMO) systems for enabling spatially multiplexed vector-valued function computation in \cite{zhu2018mimo,li2019wirelessly,wen2019reduced}.
To this end, receive beamforming and an enabling scheme for CSI feedback are designed in~\cite{zhu2018mimo}. The framework is extended to wirelessly-powered AirComp~\cite{li2019wirelessly} and massive MIMO AirComp systems \cite{wen2019reduced}. Existing works on over-the-air aggregation, as well as its application in the context of FEEL \cite{zhu2018low,amiri2019machine,amiri2019federated,amiri2019collaborative,yang2018federated} consider analog modulation assuming that the transmitter can modulate the carrier waveform as desired, freely choosing the I/Q coefficients as arbitrary real number. However, most existing wireless devices come with embedded digital modulation chips, and they may not be capable of employing an arbitrary modulation scheme. { In particular, modern cellular systems are based on \emph{orthogonal frequency division multiple access} (OFDMA) using \emph{quadrature amplitude modulation} (QAM). Therefore, the goal of the paper is to extend the over-the-air aggregation framework to FEEL considering transmitters that are limited to QAM.} \subsection{Contributions and Organization} In this paper, we consider the implementation of over-the-air aggregation for FEEL over a practical wireless system with digital modulation. Building on the signSGD proposed in \cite{2018signsgd} and \emph{orthogonal frequency division multiplexing} (OFDM) transceivers, { we design an elaborate FEEL scheme, called OBDA, which features one-bit gradient quantization and QAM modulation at devices, and over-the-air majority-vote based gradient-decoding at the edge server.} This novel design will allow implementing AirComp across devices that are endowed with digital modulation chips, without requiring significant changes in the hardware or the communication architecture. 
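A conceptual sketch of the OBDA pipeline over an idealized noiseless MAC (the 4-QAM mapping of sign pairs and all parameters below are illustrative; the complete design in the sequel additionally involves OFDM, power control, and fading):

```python
import numpy as np

rng = np.random.default_rng(2)
K, d = 5, 8                                   # odd K avoids majority-vote ties
grads = rng.normal(size=(K, d))

# Devices: one-bit quantization, then map sign pairs onto 4-QAM symbols.
signs = np.sign(grads)                        # entries in {-1, +1}
qam = signs[:, 0::2] + 1j * signs[:, 1::2]    # two sign bits -> one QAM symbol

# Channel: simultaneous transmission superposes the K QAM symbols.
received = qam.sum(axis=0)                    # noiseless over-the-air sum

# Server: majority vote = sign of the real/imaginary parts of the sum.
decoded = np.empty(d)
decoded[0::2] = np.sign(received.real)
decoded[1::2] = np.sign(received.imag)

majority = np.sign(signs.sum(axis=0))         # direct majority vote
assert np.array_equal(decoded, majority)
```

In the noiseless case the sign of the superposed sum coincides with the majority vote of the one-bit gradients, which is exactly the quantity the edge server needs for the model update.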
Moreover, while existing works on over-the-air analog aggregation mostly rely on numerical experiments for performance evaluation \cite{zhu2018low,amiri2019machine}, one of the main contributions of this work is an analytical study of the convergence rate of the OBDA scheme. The considered (model) convergence rate is defined as the rate at which { the expected value of the average gradient norm}, denoted by $\bar{G}$, diminishes as the number of rounds, denoted by $N$, and the number of devices, denoted by $K$, grow. {\color{black} For ideal FEEL with single-bit gradient quantization transmitted over noiseless channels, the convergence rate is shown to be \cite{2018signsgd} \begin{align}\label{eq:noiseless} \bar{G}\leq \frac{1}{\sqrt{N}}\left(c + \frac{c'}{\sqrt{K}}\right), \end{align} where $c$ and $c'$ denote some constants related to the landscape of the training loss function. The convergence rates of OBDA in the presence of channel hostilities are derived for three scenarios: 1) Gaussian MAC, 2) fading MAC with perfect (transmit) CSI, and 3) fading MAC with imperfect (transmit) CSI. For all three scenarios, the derived convergence rates share the following form: \begin{align}\label{eq:channel_hostilities} \bar{G} \leq \frac{a}{\sqrt{N}}\left(c + \frac{c'}{\sqrt{K}} + b\right), \end{align} where the channel hostilities are translated into a scaling factor $a\in (1, \infty)$ and a bias term $b\in (0, \infty)$. It is clear that the wireless channel imperfections slow down the model convergence with respect to the noiseless counterpart in \eqref{eq:noiseless}. The two terms $a$ and $b$ are independent of $N$, but are functions of $K$ and the received \emph{signal-to-noise ratio} (SNR) as shown explicitly in the sequel. Particularly, as $K$ or the received SNR grows, $a$ and $b$ are shown to converge to their noiseless limits of 1 and 0, respectively. The convergence is shown to follow different scaling laws described as follows.
\begin{itemize} \item \textbf{Gaussian MAC}: We consider a MAC with unit channel gain and additive complex Gaussian noise. For this case, the scaling factor $a$ and the bias term $b$ in \eqref{eq:channel_hostilities} scale as $1+ O\left(\frac{1}{K\sqrt{\text{SNR}}}\right)$ and $O\left(\frac{1}{K\sqrt{\text{SNR}}}\right)$, respectively. \item \textbf{MAC with fading and perfect CSI}: {We consider a fading MAC and OBDA with perfect CSI, which is needed for transmit power control. Let $\alpha$ denote the expected fraction of gradient coefficients that are \emph{not} truncated under the adopted truncated channel inversion power control with a power constraint. Then, the terms $a$ and $b$ in \eqref{eq:channel_hostilities} scale as $1 + O\left(\frac{1}{\alpha K\sqrt{\text{SNR}}}\right)$ and $O\left(\frac{1}{\alpha K\sqrt{\text{SNR}}}\right)$, respectively. In other words, the variation in channel quality due to fading is translated into an effective reduction of the number of devices by a factor of $\alpha$ in each round. This leads to slower convergence compared to the case of Gaussian channels.} \item \textbf{MAC with fading and imperfect CSI}: In practice, CSI errors may exist due to inaccurate channel estimation. Using imperfect CSI for power control results in additional perturbation to the received gradients. As a result, the terms $a$ and $b$ in \eqref{eq:channel_hostilities} scale as $1 + O\left(\frac{1}{\alpha\sqrt{K}}\right)$ and $O\left(\frac{1}{\alpha\sqrt{K}}\right)$, respectively, and are asymptotically independent of the receive SNR. Compared with the preceding two cases, the much slower rates and their insensitivity to increasing SNR show that CSI errors can incur severe degradation of the OBDA performance.
\end{itemize} } { Finally, it is worth emphasizing that the current work differs from the closely related works \cite{2018signsgd,reisizadeh2019fedpaq} in that our focus is on designing customized wireless communication techniques for implementing FEEL over a wireless network rather than developing new federated learning algorithms. To this end, we particularly take into account the characteristics of wireless channels in the design of communication techniques for FEEL, and characterize their effects on the resultant convergence rate. The key findings, summarized above, are originally presented in this work and represent its novelty \emph{with respect to} (w.r.t.) \cite{2018signsgd,reisizadeh2019fedpaq}.} \emph{Organization}: The remainder of the paper is organized as follows. Section II introduces the learning and communication models. Section III presents the proposed OBDA scheme. The convergence analysis for the AWGN channel case is presented in Section IV, and further extended to the fading channel case in Section V, accounting for both the perfect CSI and imperfect CSI scenarios. Section VI presents the experimental results using real datasets, followed by concluding remarks in Section VII. \begin{figure*}[tt] \centering \includegraphics[width=14cm]{Figures/One_bit_federated_learning_diagram.eps} \caption{FEEL via OBDA from distributed data.} \vspace{-3mm} \label{Fig:1} \end{figure*} \vspace{-1mm} \section{Learning and Communication Models} \vspace{-1mm} We consider a FEEL system comprising a single edge server coordinating the learning process across $K$ edge devices as shown in Fig.~\ref{Fig:1}. Device $k$, $k = 1,\ldots, K$, has its own local dataset, denoted by ${\cal D}_k$, consisting of labeled data samples $({\mathbf{x}}_j,y_j) \in {\cal D}_k$, where ${\mathbf{x}}_j \in \mathbb{R}^d$ denotes the data features and $y_j \in \mathbb{R}$ the associated label.
A common model (e.g., a classifier), represented by the parameter vector ${\mathbf{w}} \in \mathbb{R}^q$, is trained collaboratively across the edge devices, orchestrated by the edge server. Here $q$ denotes the model size. \subsection{Learning Model} The loss function measuring the model error is defined as follows. The \emph{local loss function} of the model ${\mathbf{w}}$ on ${\cal D}_k$ is \begin{align}\label{eq:local_loss} \qquad F_k({\mathbf{w}}) = \frac{1}{|{\cal D}_k|} \sum_{({\mathbf{x}}_j, y_j) \in {\cal D}_k} f({\mathbf{w}}, {\mathbf{x}}_j, y_j), \end{align} where $f({\mathbf{w}}, {\mathbf{x}}_j, y_j)$ denotes the sample loss quantifying the prediction error of the model ${\mathbf{w}}$ on the training sample ${\mathbf{x}}_j$ w.r.t. its true label $y_j$. For convenience, we rewrite $f({\mathbf{w}}, {\mathbf{x}}_j, y_j)$ as $f_j({\mathbf{w}})$, and assume uniform sizes for local datasets; that is, $|{\cal D}_k| = D$, $\forall k$. Then, the \emph{global loss function} on all the distributed datasets can be written as { \begin{align}\label{eq:global_loss} \;\; F({\mathbf{w}}) \triangleq \frac{\sum_{k=1}^K \sum_{j \in {\cal D}_k }f_j({\mathbf{w}})}{\sum_{k=1}^K |{\cal D}_k| } = \frac{1}{K} \sum_{k = 1}^K F_k({\mathbf{w}}). \end{align} } { The goal of the learning process is thus to minimize the global loss function $F({\mathbf{w}})$.\footnote{\color{black} The problem formulation follows the standard in the existing federated learning literature (see e.g., [3]-[22]). Note that the formulated problem for model training is a stochastic problem as we construct the loss function using a randomly sampled subset of data, and the minimization of the loss function is solved by \emph{stochastic gradient descent} (SGD). The statistical distribution of the stochastic gradient estimate is assumed to satisfy Assumption 3, which is a widely-adopted assumption for tractable convergence analysis for non-convex loss functions.
Note that the SGD approach can also handle online training scenarios where the dataset is collected sequentially in real-time.} One way to do this is to upload all the local datasets to the edge server, and solve the problem in a centralized manner. However, this is typically undesirable due to either privacy concerns or the sheer size of the datasets. Alternatively, the FEEL framework can be employed to minimize $F({\mathbf{w}})$ in a distributed manner. We focus on the gradient-averaging implementation of FEEL in the current work, as illustrated in Fig. \ref{Fig:1}, with the detailed procedure elaborated in the sequel. } In each communication round of FEEL, say the $n$-th round, device $k$ computes a local estimate of the gradient of the loss function in \eqref{eq:global_loss} using its local dataset ${\cal D}_k$ and the current parameter-vector ${\mathbf{w}}^{(n)}$. Let ${\mathbf{g}}_k^{(n)} \in \mathbb{R}^q$ denote the local gradient estimate at device $k$ in the $n$-th round, where we remove the dependence on the parameter vector ${\mathbf{w}}^{(n)}$ for simplicity. We then have: \begin{align}\label{eq:local_update} \qquad {\mathbf{g}}_k^{(n)} = \frac{1}{n_b} \sum_{j \in \tilde{\cal D}_k} \nabla f_j({\mathbf{w}}^{(n)}), \end{align} where $\nabla$ represents the gradient operator, $\tilde{\cal D}_k \subset {\cal D}_k$ is the data batch selected from the local dataset for computing the local gradient estimate, and $n_b$ is the batch size. Accordingly, $n_b = |{\cal D}_k|$ means that the entire local dataset is used for gradient estimation.
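To make the local update concrete, the mini-batch gradient estimate in \eqref{eq:local_update} can be sketched in a few lines of NumPy. The quadratic sample loss and all function names below (e.g., \texttt{local\_gradient}, \texttt{sq\_grad}) are our own illustrative choices, not part of the proposed scheme:

```python
import numpy as np

def local_gradient(w, X, y, sample_grad, n_b, rng):
    # Local estimate at device k: draw a batch of size n_b uniformly
    # (without replacement) from the local dataset and average the
    # per-sample gradients, as in the local-update equation.
    idx = rng.choice(len(X), size=n_b, replace=False)
    return np.mean([sample_grad(w, X[i], y[i]) for i in idx], axis=0)

# Illustrative sample loss f_j(w) = 0.5 * (w @ x_j - y_j)^2,
# whose gradient is (w @ x_j - y_j) * x_j.
def sq_grad(w, x, y):
    return (w @ x - y) * x

rng = np.random.default_rng(0)
X = rng.standard_normal((32, 4))
y = rng.standard_normal(32)
g_k = local_gradient(np.zeros(4), X, y, sq_grad, n_b=8, rng=rng)
```

Setting $n_b = |{\cal D}_k|$ (here, \texttt{n\_b=len(X)}) recovers the full-batch local gradient.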
{ If the local gradient estimates can be reliably conveyed to the edge server, the global estimate of the gradient of the loss function in \eqref{eq:global_loss} would be computed as follows:\footnote{ For the case with heterogeneous dataset sizes at different devices, the global gradient estimate is a weighted average of the local ones, i.e., ${\mathbf{g}}^{(n)} = \frac{\sum_{k=1}^K |{\cal D}_k| {\mathbf{g}}_k^{(n)}}{\sum_{k=1}^K |{\cal D}_k| }$. The desired weighted aggregation of the local gradient estimates can also be attained by the proposed over-the-air aggregation with an additional pre-processing $\varphi_k(\cdot)$ on each of the transmitted signals, $x_k$, via $\varphi_k(x_k) = \frac{|{\cal D}_k|}{\sum_{k=1}^K |{\cal D}_k| } x_k$, similarly to \cite{yang2018federated}.} \begin{align}\label{eq:gradient_averaging} \qquad \hat {\mathbf{g}}^{(n)} = \frac{1}{K} \sum_{k=1}^K {\mathbf{g}}_k^{(n)}. \end{align} } Then, the global gradient estimate is broadcast back to each device, which uses it to update the current model via gradient descent based on the following equation: \begin{align}\label{eq:model_update} \qquad {\mathbf{w}}^{(n+1)} = {\mathbf{w}}^{(n)} - \eta \cdot \hat {\mathbf{g}}^{(n)}, \end{align} where $\eta$ denotes the learning rate. The steps in \eqref{eq:local_update}, \eqref{eq:gradient_averaging}, and \eqref{eq:model_update} are iterated until a convergence condition is met. { As observed from \eqref{eq:gradient_averaging}, only the aggregated gradient, i.e., $\sum_{k=1}^K {\mathbf{g}}_k$, rather than the individual gradient estimates $\{{\mathbf{g}}_k\}$, is needed at the edge server for computing the global gradient estimate.} This motivates the communication-efficient aggregation scheme presented in Section \ref{sec:OBDA Design}. \vspace{-1mm} \subsection{Communication Model} \vspace{-1mm} Local gradient estimates of edge devices are transmitted to the edge server over a broadband MAC.
To cope with the frequency-selective fading and inter-symbol interference, OFDM modulation is adopted to divide the available bandwidth $B$ into $M$ orthogonal sub-channels. We assume that a fixed digital constellation is employed by all the devices to transmit over each sub-channel. Thus, each device needs to transmit its local gradient estimate using a finite number of digital symbols. This requires quantization of the local gradient estimates, and mapping each quantized gradient element to one digital symbol to facilitate the proposed OBDA. Let $\tilde {\mathbf{g}}_k = [\tilde g_{k,1}, \ldots, \tilde g_{k,q}]^T$ denote the channel input vector of the $k$-th device, where $q$ is the size of the gradient vector, and $\tilde g_{k,j} \in {\cal Q}$ for some finite digital input constellation ${\cal Q}$. During the gradient-uploading phase, all the devices transmit simultaneously over the whole available bandwidth. In each communication round, the gradient-uploading phase consists of $N_s = \frac{q}{M}$ OFDM symbols for the complete uploading of all gradient parameters. We assume symbol-level synchronization among the transmitting devices through a synchronization channel (e.g., ``timing advance'' in LTE systems \cite{TALTE}).\footnote{The accuracy of synchronization is proportional to the bandwidth dedicated to the synchronization channel. Particularly, the current state-of-the-art phase-locked loop can achieve a synchronization offset of $0.1B_s^{-1}$, where $B_s$ is the amount of bandwidth used for synchronization. In existing LTE systems, the typical value of $B_s$ is 1 MHz. Thus, a sufficiently small synchronization offset of $0.1~\mu$sec can be achieved. Note that in a broadband OFDM system, as long as the synchronization offset is smaller than the length of the cyclic prefix (the typical value is $5~\mu$sec in LTE systems), the offset simply introduces a phase shift to the received symbol.
The phase shift can be easily compensated by channel equalization, incurring no performance loss \cite{arunabha2010fundamentals}.} Accordingly, the $i$-th aggregated gradient parameter, denoted by $\tilde g_i$, with $i = (t-1)M + m$, received at the $m$-th sub-carrier and the $t$-th OFDM symbol is given by \begin{align}\label{channel_model} \qquad \vspace{-1mm} \tilde g_i = \sum_{k=1}^K h_k[t,m] p_k[t,m] \tilde g_{k,i} + z[t,m], \qquad \forall i, \vspace{-1mm} \end{align} {\color{black} where $\{h_k[t,m]\}$ are identically distributed Rayleigh fading channel coefficients, i.e., $h_k[t, m] \sim {\cal CN}(0,1)$;}\footnote{\color{black} It should be emphasized that the current analysis does not require the assumption of independent channel realizations over different time slots. In particular, the same analytical results hold as long as the channel coefficients at different time slots follow an identical Rayleigh (complex Gaussian) distribution, regardless of whether there is correlation over time or not.} \footnote{ We also assume identical path-losses for different devices to simplify the exposition. Note that the difference in path-losses between devices can be equalized by the channel inversion power control applied in the design as specified in the sequel. If there exists a bottleneck device with severe path-loss that does not allow channel inversion within the given power budget, it can be excluded in the scheduling phase.} and $p_k[t,m]$ is the associated power control policy to be specified in the sequel. Finally, $z[t,m]$ models the zero-mean i.i.d. \emph{additive white Gaussian noise} (AWGN) with variance $\sigma_z^2$. For ease of notation, we omit the OFDM symbol index $t$ in the subsequent exposition whenever no confusion is incurred. The power allocation over sub-channels, $\{p_k[m]\}$, will be adapted to the corresponding channel coefficients, $\{h_k[m]\}$, for implementing gradient aggregation via AirComp as presented in the sequel.
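As a minimal numerical sketch of the superposition in \eqref{channel_model}, the received symbol on a single sub-carrier can be simulated as follows; the function name \texttt{mac\_output}, the plain channel-inversion policy, and all parameter values are illustrative assumptions of ours, not the power control specified in the sequel:

```python
import numpy as np

def mac_output(g_tx, h, p, sigma_z, rng):
    # Received symbol on one sub-carrier: sum_k h_k * p_k * g_k + z,
    # where z ~ CN(0, sigma_z^2) is the complex AWGN term.
    z = sigma_z / np.sqrt(2) * (rng.standard_normal() + 1j * rng.standard_normal())
    return np.sum(h * p * g_tx) + z

rng = np.random.default_rng(0)
K = 10
g_tx = np.where(rng.standard_normal(K) >= 0, 1.0, -1.0)        # BPSK symbols
h = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
p = np.conj(h) / np.abs(h) ** 2   # plain channel inversion (unit receive power)
r = mac_output(g_tx, h, p, sigma_z=0.1, rng=rng)
# With channel inversion, r is close to the noiseless sum of the K symbols.
```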
The transmission of each device is subject to a long-term transmission power constraint: \begin{align}\label{power_constraint1} {\mathbb E} \left[ \sum_{m=1}^M |p_k[m]|^2 \right] \leq P_0, \qquad \forall k, \end{align} where the expectation is taken over the distribution of random channel coefficients, and we assume ${\mathbb E} \left[ \tilde g_{k,i} \tilde g_{k,i}^* \right] = 1$ without loss of generality. Since channel coefficients are identically distributed over different sub-channels, the above power constraint reduces to \begin{align}\label{power_constraint2} \qquad {\mathbb E} \left[ | p_k[m]|^2 \right] &\leq \frac{P_0}{M}, \qquad \forall k. \end{align} \vspace{-3mm} \section{One-Bit Broadband Digital Aggregation (OBDA): System Design}\label{sec:OBDA Design} \vspace{-1mm} As discussed, the essential idea of OBDA is to integrate signSGD and AirComp so as to support communication-efficient FEEL using digital modulation. The implementation of the idea requires an elaborate system design, which is explained in detail in this section. \subsection{Transmitter Design} The transmitter design for edge devices is shown in Fig. \ref{subfig:tx}. The design builds on a conventional OFDM transmitter with \emph{truncated channel-inversion power control}. However, unlike in conventional communication systems, where coded data bits are passed to the OFDM encoder, here we feed raw quantized bits without any coding. Inspired by signSGD \cite{2018signsgd}, we apply \emph{one-bit quantization} of local gradient estimates, which simply corresponds to taking the signs of the local gradient parameters element-wise: \begin{align} (\text{One-bit quantization}) \qquad \tilde g_{k,i} = {\sf sign}(g_{k,i}), \quad \forall k,i. \end{align} Each of the binary gradient parameters is modulated into one \emph{binary phase shift keying} (BPSK) symbol.
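The one-bit quantization and block partitioning described above can be sketched as follows (a minimal illustration; the convention of mapping a zero-valued gradient entry to $+1$ is our own choice):

```python
import numpy as np

def one_bit_quantize(g):
    # Element-wise sign; zeros are mapped to +1 so that every entry is a
    # valid BPSK symbol in {-1, +1}.
    return np.where(g >= 0, 1.0, -1.0)

def to_ofdm_blocks(symbols, M):
    # Split the length-q symbol stream into N_s = q / M OFDM symbols,
    # one symbol (gradient parameter) per frequency sub-channel.
    assert len(symbols) % M == 0
    return symbols.reshape(-1, M)

g_k = np.array([0.3, -1.2, 0.0, 2.5, -0.1, 0.7])
blocks = to_ofdm_blocks(one_bit_quantize(g_k), M=3)   # shape (2, 3)
```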
We emphasize that, even though we use BPSK modulation in our presentation and the convergence analysis for simplicity, the extension of OBDA to a 4-QAM configuration is straightforward by simply viewing each 4-QAM symbol as two orthogonal BPSK symbols. Indeed, we employ 4-QAM modulation for the numerical experiments in Section \ref{simulation}. The long symbol sequence is then divided into blocks, and each block of $M$ symbols is transmitted as a single OFDM symbol with one symbol/parameter over each frequency sub-channel. Assuming perfect CSI at the transmitter, sub-channels are inverted by power control so that gradient parameters transmitted by different devices are received with identical amplitudes, achieving the amplitude alignment at the receiver required for OBDA. However, brute-force channel inversion is inefficient, if not infeasible, under a power constraint, since some sub-channels are likely to encounter deep fades. To overcome this issue, we adopt the more practical \emph{truncated-channel-inversion} scheme \cite{zhu2018low}. To be specific, a sub-channel is inverted only if its gain exceeds a so-called \emph{power-cutoff threshold}, denoted by $g_{\sf th}$, and is otherwise allocated zero power. Then the transmission power of the $k$-th device on the $m$-th sub-channel, $p_k[m]$, is \begin{align}\label{truncated_channel_inv} \qquad p_k[m] = \left\{\begin{aligned} & \frac{\sqrt{\rho_0}}{ |h_k[m]|} \frac{h_k[m]^\dagger}{|h_k[m]|}, && |h_k[m]|^2 \geq g_{\sf th} \\ &0, && |h_k[m]|^2 < g_{\sf th}, \end{aligned} \right. \end{align} where $\rho_0$ is a scaling factor set to satisfy the average-transmit-power constraint in \eqref{power_constraint2}, and determines the receive power of the gradient update from each device as observed from \eqref{channel_model}.
The exact value of $\rho_0$ can be computed via \begin{align} \rho_0 = \frac{P_0}{M{\sf Ei}(g_{\sf th})}, \end{align} where ${\sf Ei}(x)$ is the exponential integral function defined as ${\sf Ei}(x) = \int_x^\infty \frac{1}{t} \exp(-t) dt$. The result follows from the fact that the channel coefficient is Rayleigh distributed, i.e., $h_k[m] \sim {\cal CN}(0,1)$, and thus the channel gain $g_k = |h_k[m]|^2$ follows an exponential distribution with unit mean. Thus the power constraint in \eqref{power_constraint2} is explicitly given by $\rho_0 \int_{g_{\sf th}}^\infty \frac{1}{g}\exp(-g) dg = \frac{P_0}{M}$. The desired result follows by solving the integral. We remark that the policy can cause the loss of those gradient parameters that are mapped to the truncated sub-channels. To measure the loss, we define the \emph{non-truncation probability} of a parameter, denoted by $\alpha$, as the probability that the associated channel gain is above the power-cutoff threshold: \begin{align}\label{eq:truncation_ratio} \alpha = {\sf Pr}(|h_k|^2 \geq g_{\sf th}) = \exp(-g_{\sf th}). \end{align} The result immediately follows from the exponential distribution of the channel gain. The value of $\alpha$ affects the learning convergence rate as shown in the sequel. \begin{figure*}[tt] \centering \subfigure[Transmitter design for edge devices]{\label{subfig:tx}\includegraphics[width=0.8\textwidth]{Figures/Transmitter_design.eps}} \hspace{0.25in} \subfigure[Receiver design for edge server]{\label{subfig:rx}\includegraphics[width=0.8\textwidth]{Figures/Receiver_design.eps}} \vspace{3mm} \caption{Transceiver design for OBDA.} \label{Fig:Fig:system_diagram} \vspace{-3mm} \end{figure*} \vspace{-4mm} \subsection{Receiver Design} \vspace{-1mm} Fig. \ref{subfig:rx} shows the receiver design for the edge server.
It has the same architecture as a conventional OFDM receiver except that the digital detector is replaced with a \emph{majority-vote based decoder} for estimating the global gradient-update from the received signal, as elaborated in the following. Consider an arbitrary communication round. Given the simultaneous transmission of all participating devices, the server receives superimposed waveforms. By substituting the truncated-channel-inversion policy in \eqref{truncated_channel_inv} into \eqref{channel_model}, the server obtains the aggregated local-gradient block, denoted by an $M\times 1$ vector $\tilde {\mathbf{g}}[t]$, at the parallel-to-serial converter output [see Fig. \ref{subfig:rx}] as: \begin{align}\label{channel_model2} \!\! (\text{Over-the-air aggregation}) \; \tilde {\mathbf{g}}[t] = \sum_{k =1}^K \sqrt{\rho_0} \tilde {\mathbf{g}}^{(\sf Tr)}_{k}[t] + {\mathbf{z}}[t], \!\! \end{align} where $t$ is the index of the local-gradient block (OFDM symbol) as defined in \eqref{channel_model}, and $\tilde {\mathbf{g}}^{(\sf Tr)}_{k}[t]$ is the truncated version of $\tilde {\mathbf{g}}_{k}[t] = [\tilde g_{k, (t-1)M + 1}, \ldots, \tilde g_{k,tM}]^T$, with the truncated elements determined by the channel realizations according to \eqref{truncated_channel_inv} and set to zero. Next, cascading all the $N_s$ blocks recovers the full-dimension aggregated one-bit quantized local gradient estimates: \begin{align}\label{eq:aggregated_gradient} \qquad \tilde {\mathbf{g}} = \left[\tilde {\mathbf{g}}[1]^T, \tilde {\mathbf{g}}[2]^T, \ldots, \tilde {\mathbf{g}}[N_s]^T\right]^T. \end{align} Finally, to attain a global gradient estimate from $\tilde {\mathbf{g}}$ for model updating, a majority-vote based decoder is adopted, implemented by simply taking the element-wise sign of $\tilde {\mathbf{g}}$: \begin{align} (\text{Majority-vote based decoder}) \qquad {\mathbf{v}} = {\sf sign} (\tilde {\mathbf{g}}).
\end{align} The operation essentially estimates the global gradient-update by an over-the-air element-wise majority vote based on the one-bit quantized local gradient estimates attained at different devices. { \begin{remark}[Why majority vote?] \emph{ Compared with simply averaging the one-bit local gradient estimates, the majority vote, which involves an additional sign-taking operation, brings two benefits: 1) it is more hardware-friendly and robust (against noise) to detect the sign of the aggregated gradient estimate (majority vote) than to recover its actual value in full precision at the edge server; 2) it allows communication-efficient broadcasting of the aggregated gradient estimates back to the edge devices thanks to the one-bit quantized elements after the majority vote. } \end{remark} } Then, the server initiates the next communication round by broadcasting the global gradient estimate to all the devices for model updating via \vspace{-3mm} \begin{align}\label{eq:model_update_onebit} \qquad \vspace{-2mm} {\mathbf{w}}^{(n+1)} = {\mathbf{w}}^{(n)} - \eta {\mathbf{v}}^{(n)}, \end{align} or completes the learning process if the convergence criterion (e.g., target number of communication rounds) is met. {We assume that the global gradient parameters can be sent to the devices perfectly, due to the high transmit power available at the edge server and the use of the whole downlink bandwidth for broadcasting.} \section{Convergence Analysis for OBDA over AWGN Channels}\label{sec: AWGN_analysis} In this section, we formally characterize the learning performance of a FEEL system deploying the proposed OBDA scheme of Section \ref{sec:OBDA Design} over static AWGN channels; that is, we assume $h_k[t,m] = 1$, $\forall k,t,m$, in this section. Particularly, we focus on understanding how the channel noise affects the convergence rate of the proposed scheme.
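A complete OBDA round over a static AWGN channel (unit channel gains) can be simulated in a few lines; the $20\%$ per-coordinate sign-flip probability below is an illustrative stand-in for data-stochastic gradient noise, not a quantity used in the analysis:

```python
import numpy as np

rng = np.random.default_rng(1)
K, q, rho0, sigma_z = 100, 1000, 1.0, 1.0

# Ground-truth gradient sign and noisy one-bit local estimates: each device
# independently flips each coordinate's sign with probability 0.2.
true_sign = np.where(rng.standard_normal(q) >= 0, 1.0, -1.0)
flips = rng.random((K, q)) < 0.2
local = np.where(flips, -true_sign, true_sign)

# Over-the-air aggregation (unit gains, no truncation) followed by the
# majority-vote decoder: an element-wise sign at the edge server.
aggregated = np.sqrt(rho0) * local.sum(axis=0) + sigma_z * rng.standard_normal(q)
v = np.where(aggregated >= 0, 1.0, -1.0)

agreement = np.mean(v == true_sign)   # close to 1 for large K
```

Even with noisy one-bit local estimates and channel noise, the majority vote recovers the true gradient sign on almost every coordinate, consistent with the vanishing noise effect shown in the analysis below.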
\vspace{-3mm} \subsection{Basic Assumptions}\label{sec:assumptions} \vspace{-1mm} To facilitate the convergence analysis, several standard assumptions are made on the loss function and the computed gradient estimates. To allow the developed theory to be applicable to the popular \emph{deep neural networks} (DNNs), we do not assume a convex loss function, but require a lower-bounded one as formally stated below, which is the minimal assumption needed for ensuring convergence to a stationary point \cite{AllenZhu2017natasha2}. \begin{assumption}[Bounded Loss Function] \emph{For any parameter vector ${\mathbf{w}}$, the associated loss function is lower bounded by some value $F^*$}, i.e., $F({\mathbf{w}}) \geq F^*$, $\forall {\mathbf{w}}$. \end{assumption} \noindent Assumptions 2 and 3 below, on the Lipschitz smoothness and bounded variance, respectively, are standard in the stochastic optimization literature \cite{AllenZhu2017natasha2}. \begin{assumption}[Smoothness] \emph{Let ${\mathbf{g}}$ denote the gradient of the loss function $F({\mathbf{w}})$ in \eqref{eq:global_loss} evaluated at point ${\mathbf{w}} = [w_1,\cdots, w_q]$ with $q$ denoting the number of model parameters. We assume that there exists a vector of non-negative constants ${\bf L} = [L_1, \cdots, L_q]$ such that, for any ${\mathbf{w}}, {\mathbf{w}}'$,} \begin{align}\label{eq:smoothness} \!\!\! |F({\mathbf{w}}') - [F({\mathbf{w}}) + {\mathbf{g}}^T({\mathbf{w}}' - {\mathbf{w}})]| \leq \frac{1}{2} \sum_{i=1}^q L_i(w'_i - w_i)^2. \!\!\!
\end{align} \end{assumption} \begin{assumption}[Variance Bound] \emph{It is assumed that the stochastic gradient estimates $\{{\mathbf{g}}_j\}$ defined in \eqref{eq:local_update} are independent and unbiased estimates of the batch gradient ${\mathbf{g}} = \nabla F({\mathbf{w}})$ with coordinate bounded variance, i.e.,} \begin{align} &\mathbb{E}[{\mathbf{g}}_j] = {\mathbf{g}}, \qquad \forall j, \\ &\mathbb{E}[(g_{j,i} - g_i)^2] \leq \sigma_i^2 \qquad \forall j, i, \end{align} \emph{where $g_{j,i}$ and $g_i$ denote the $i$-th element of ${\mathbf{g}}_j({\mathbf{w}})$ and ${\mathbf{g}}({\mathbf{w}})$, respectively, and $\boldsymbol \sigma = [\sigma_1, \ldots, \sigma_q]$ is a vector of non-negative constants.} \end{assumption} We further assume that the data-stochasticity induced gradient noise, which causes the discrepancy between ${\mathbf{g}}_j$ and ${\mathbf{g}}$, is unimodal and symmetric, as verified by experiments in \cite{2018signsgd} and formally stated below. \begin{assumption}[Unimodal, Symmetric Gradient Noise]\emph{For any given parameter vector ${\mathbf{w}}$, each element of the stochastic gradient vector ${\mathbf{g}}_j({\mathbf{w}})$, $\forall j$, has a unimodal distribution that is also symmetric around its mean (the ground-truth full-batch gradient elements).} \end{assumption} \noindent Clearly, Gaussian noise is a special case. Note that even for a moderate mini-batch size, we expect the central limit theorem to take effect and render typical gradient noise distributions close to Gaussian. \vspace{-1mm} \subsection{Convergence Analysis} \vspace{-1mm} The above assumptions allow tractable convergence analysis as follows. Given AWGN channels, the gradient aggregation is a direct consequence of the MAC output and the power control in \eqref{truncated_channel_inv} is not needed in this case. 
Specifically, without truncation due to fading, the full-dimension aggregated local gradient defined in \eqref{eq:aggregated_gradient} is given by \begin{align}\label{eq:aggregated_gradient_AWGN} \tilde {\mathbf{g}} = \sum_{k =1}^K \sqrt{\rho_0} \tilde {\mathbf{g}}_{k} + {\mathbf{z}}, \end{align} where we have $\rho_0 = \frac{P_0}{M}$ according to the power constraint in \eqref{power_constraint2}. { The resulting convergence rate of the proposed OBDA scheme is derived as follows. Throughout the paper, we set the learning rate to $\eta = \frac{1}{\sqrt{\|{\bf L}\|_1 n_b}}$ and the mini-batch size to $n_b = \frac{1}{\gamma}N$, with an arbitrary $\gamma > 0$ and $N$ denoting the number of communication rounds. } { \begin{theorem}\label{theo:AWGN} \emph{For a FEEL system deploying OBDA over AWGN channels, the convergence rate is given by} \begin{multline} \!\! {\mathbb E} \left[\frac{1}{N} \sum_{n=0}^{N-1} \|{\bf g}^{(n)}\|_1\right] \leq \frac{a_{\sf AWGN}}{\sqrt{N}}\left(\sqrt{\|{\bf L}\|_1} (F^{(0)} - F^* + \frac{\gamma}{2}) + \right. \\ \left. \frac{2\gamma}{\sqrt{K}}\|\boldsymbol{\sigma}\|_1 + {b_{\sf AWGN} } \right), \end{multline} \emph{where the scaling factor $a_{\sf AWGN}$ and the bias term $b_{\sf AWGN}$ are given by} \begin{align} a_{\sf AWGN} = \frac{1}{1- \frac{1}{K \sqrt{\rho}}}, \qquad {b_{\sf AWGN} = \frac{2 \gamma}{K\sqrt{\rho} } \|\boldsymbol{\sigma}\|_1 }, \end{align} \emph{and $\rho \triangleq \frac{\rho_0}{\sigma_z^2} = \frac{P_0}{M \sigma_z^2}$ denotes the \emph{receive SNR}. } \end{theorem} } \noindent{\emph{Proof:} } See Appendix \ref{app:theo:AWGN}. \hspace*{\fill}~\qed\par\endtrivlist\vskip3pt For comparison, we reproduce below the convergence rate over noiseless channels, derived as Theorem 2 in \cite{2018signsgd}.
{ \begin{lemma}\label{lemma:error_free} \emph{The convergence rate with OBDA over error-free channels is given by} \begin{multline} {\mathbb E} \left[\frac{1}{N} \sum_{n=0}^{N-1} \|{\bf g}^{(n)}\|_1\right] \leq \frac{1}{\sqrt{N}}\left(\sqrt{\|{\bf L}\|_1} (F^{(0)} - F^* + \frac{\gamma}{2}) + \right. \\ \left. \frac{2\gamma}{\sqrt{K}}\|\boldsymbol{\sigma}\|_1 \right). \end{multline} \end{lemma} } \vspace{-3mm} \begin{remark}[Effect of Channel Noise] \emph{ A comparison between the results in Theorem \ref{theo:AWGN} and Lemma \ref{lemma:error_free} reveals that the existence of channel noise slows down the convergence by adding a scaling factor and a positive bias term, i.e., $a_{\sf AWGN}$ and $b_{\sf AWGN}$, respectively, to the upper bound on the time-averaged gradient norm. Due to the increased bound, more communication rounds will be needed for convergence. Nevertheless, the negative effect of channel noise vanishes at a scaling rate of $\frac{1}{K}$ as the number of participating devices grows. We can also see that we recover the convergence rate in Lemma \ref{lemma:error_free} when $\rho \to \infty$. } \end{remark} \section{Convergence Analysis for OBDA over Fading Channels}\label{sec: fading_analysis_perfect_CSI} In this section, we extend the convergence result for AWGN channels to the more general case of fading channels. For this case, transmit CSI is needed for power control. We consider both the cases of perfect CSI and imperfect CSI in the analysis. The same assumptions as in Section \ref{sec:assumptions} are made here. \vspace{-3mm} \subsection{Convergence Rate with Perfect CSI} With perfect CSI at each device, the truncated channel inversion power control can be accurately performed. The resultant convergence rate of the OBDA scheme is derived as follows.
{ \begin{theorem}\label{theo:fading} \emph{For a FEEL system deploying OBDA over fading channels with truncated channel inversion power control using perfect CSI, the convergence rate is given by} \begin{multline} {\mathbb E} \left[\frac{1}{N} \sum_{n=0}^{N-1} \|{\bf g}^{(n)}\|_1\right] \leq \frac{a_{\sf FAD}}{\sqrt{N}}\left(\sqrt{\|{\bf L}\|_1} (F^{(0)} - F^* + \frac{\gamma}{2}) + \right. \\ \left. \frac{2\gamma}{\sqrt{K}}\|\boldsymbol{\sigma}\|_1 + {b_{\sf FAD} } \right), \end{multline} \emph{where the scaling factor $a_{\sf FAD}$ and the bias term $b_{\sf FAD}$ are} \begin{align} a_{\sf FAD} = \frac{1}{1- {(1-\alpha)^K} - \frac{2}{{\alpha K} \sqrt{\rho}}}, \; b_{\sf FAD} = \frac{4\gamma}{\alpha K \sqrt{\rho}} \|\boldsymbol{\sigma}\|_1, \end{align} \emph{and $\rho \triangleq \frac{\rho_0}{\sigma_z^2} = \frac{P_0}{M {\sf Ei}(g_{\sf th}) \sigma_z^2}$ denotes the average \emph{receive SNR}. } \end{theorem} } \noindent{\emph{Proof:} } See Appendix \ref{app:theo:fading}. \hspace*{\fill}~\qed\par\endtrivlist\vskip3pt \begin{remark}[Effect of Channel Fading] \emph{ A comparison between Theorems \ref{theo:AWGN} and \ref{theo:fading} reveals that the existence of channel fading further slows down the convergence of OBDA by introducing a larger scaling factor and a larger bias term: $a_{\sf FAD} > a_{\sf AWGN}$ and $b_{\sf FAD} > b_{\sf AWGN}$. This negative effect of channel fading vanishes at a scaling rate of $\frac{1}{\alpha K}$ as the number of participating devices grows. Compared with the AWGN counterpart, the rate is slowed down by a factor of $\alpha$. The degradation is due to the gradient truncation induced by the truncated channel inversion power control used to cope with fading. } \end{remark} \vspace{-4mm} \subsection{Convergence Rate with Imperfect CSI} \label{sec: fading_analysis_imperfect_CSI} \vspace{-1mm} In practice, there may exist channel estimation errors that lead to imperfect channel inversion and, as a result, reduce the convergence rate.
To facilitate the analysis, we adopt the bounded channel estimation error model (see e.g., \cite{hong2014signal}), where the estimated CSI is a perturbed version of the ground-truth one and the additive perturbation, denoted as $\Delta$, is assumed to be bounded: \begin{align}\label{eq:channel_estimation_error} \hat h_k[m] = h_k[m] + \Delta, \qquad \forall k, m, \end{align} where we assume that the absolute value of the perturbation is bounded by $|\Delta| \leq \Delta_{\max}\ll \sqrt{g_{\sf th}}$\footnote{When there exist channel estimation errors, it is desirable to set a relatively high channel cutoff threshold $g_{\sf th}$ to ensure that $g_{\sf th} \gg \Delta_{\max}^2$, so as to avoid the channel truncation decision being misled by the estimation perturbation $\Delta$.} with a zero mean ${\mathbb E}(\Delta) = 0$, and a variance of ${\sf Var}(\Delta) = \sigma_\Delta^2$. Based on the above CSI model, the model convergence rate can be derived as follows. { \begin{theorem}\label{theo:imperfect_CSI} \emph{For a FEEL system deploying OBDA over fading channels with truncated channel inversion power control using imperfect CSI, the convergence rate is given by } \begin{multline} {\mathbb E} \left[\frac{1}{N} \sum_{n=0}^{N-1} \|{\bf g}^{(n)}\|_1\right] \leq \frac{a_{\sf CERR}}{\sqrt{N}}\left(\sqrt{\|{\bf L}\|_1} (F^{(0)} - F^* + \frac{\gamma}{2}) + \right. \\ \left.
\frac{2\gamma}{\sqrt{K}}\|\boldsymbol{\sigma}\|_1 + {b_{\sf CERR} } \right), \end{multline} \emph{where the scaling factor $a_{\sf CERR}$ and the bias term $b_{\sf CERR}$ are given by} \begin{align}\label{eq:scaling_bias_imperfect_CSI} a_{\sf CERR} & = \frac{1}{1- {(1-\alpha)^K} - \frac{2}{{\alpha K \sqrt{\rho}}}- \frac{2\sqrt{6} \sigma_\Delta}{\sqrt{\alpha K} \sqrt{\sqrt{g_{\sf th}} - \Delta_{\max}}}}, \notag\\ b_{\sf CERR} & = \left(\frac{4}{\alpha K \sqrt{\rho}}+{ \frac{4\sqrt{6} \sigma_\Delta}{\sqrt{\alpha K} \sqrt{\sqrt{g_{\sf th}}- \Delta_{\max}}}}\right) \gamma \|\boldsymbol{\sigma}\|_1, \end{align} \emph{and $\rho \triangleq \frac{\rho_0}{\sigma_z^2} = \frac{P_0}{M {\sf Ei}(g_{\sf th}) \sigma_z^2}$ denotes the average \emph{receive SNR}. } \end{theorem} } \noindent{\emph{Proof:} } See Appendix \ref{app:theo:imperfect_CSI}. \hspace*{\fill}~\qed\par\endtrivlist\vskip3pt \begin{remark}[Effect of Imperfect CSI] \emph{ Comparing Theorem \ref{theo:imperfect_CSI} and Theorem \ref{theo:fading}, one can observe that imperfect CSI reduces the convergence rate of OBDA even further, as reflected by a larger scaling factor and bias term: $a_{\sf CERR} > a_{\sf FAD}$ and $b_{\sf CERR} > b_{\sf FAD}$. In particular, with imperfect CSI, the negative effect of channel fading vanishes at a slower rate of $\frac{1}{\sqrt{\alpha K}}$ as the number of participating devices increases; in contrast, it vanishes at the much faster rate of $\frac{1}{\alpha K}$ in the perfect CSI case. The results in \eqref{eq:scaling_bias_imperfect_CSI} also quantify the effect of the level of CSI accuracy (represented by $\Delta_{\max}$ and $\sigma_\Delta$) on the convergence rate of the proposed scheme.
} \end{remark} \begin{figure*}[tt] \centering \includegraphics[width=14cm]{Figures/convolutional_neural_network.eps} \caption{Architecture of the CNN used in our experiments.} \label{Fig:convolutional_neural_network} \vspace{-4mm} \end{figure*} \section{Simulation results}\label{simulation} For the numerical experiments we consider a FEEL system with one edge server and $K = 100$ edge devices. The simulation settings are as follows unless specified otherwise. The number of sub-channels is $M=1000$, and the average receive SNR, defined as $\rho = \frac{P_0}{M\sigma_z^2}$, is set to 10 dB. { We consider the learning task of image classification using the well-known MNIST and CIFAR10 datasets, where the former consists of $10$ classes of black-and-white digits ranging from ``$0$" to ``$9$" and the latter comprises $10$ categories of colorful objects such as airplanes, cars, etc. In particular, for the MNIST dataset, as illustrated in Fig. \ref{Fig:convolutional_neural_network}, the classifier model is implemented using a $6$-layer \emph{convolutional neural network} (CNN) that consists of two $5\times5$ convolution layers with ReLU activation (the first with $32$ channels, the second with $64$), each followed by $2\times2$ max pooling; a fully connected layer with $512$ units and ReLU activation; and a final softmax output layer (i.e., $q=582,026$). For the CIFAR10 dataset, the well-known ResNet18 classifier with batch normalization proposed in \cite{he2016deep} is applied.} We adopt 4-QAM for modulating the quantized gradient elements, where the odd and even gradient coefficients are mapped to the real and imaginary parts of the 4-QAM symbols, respectively. The open-source code is available at https://github.com/BornHater/One-bit-over-the-air-computation.
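The quoted model size can be cross-checked by tallying the parameters of the network described above. The short sketch below assumes unpadded (``valid'') $5\times5$ convolutions on $28\times28$ single-channel MNIST inputs; the intermediate feature-map sizes are inferred from that assumption.

```python
def conv_params(k, c_in, c_out):
    """Weights plus biases of a k x k convolution layer."""
    return k * k * c_in * c_out + c_out

def dense_params(n_in, n_out):
    """Weights plus biases of a fully connected layer."""
    return n_in * n_out + n_out

conv1 = conv_params(5, 1, 32)    # 28x28 -> 24x24, then 2x2 pool -> 12x12
conv2 = conv_params(5, 32, 64)   # 12x12 ->  8x8, then 2x2 pool ->  4x4
fc = dense_params(4 * 4 * 64, 512)
out = dense_params(512, 10)

q = conv1 + conv2 + fc + out
print(q)  # 582026, matching the parameter count quoted in the text
```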
\subsection{Performance Evaluation of OBDA} For both the MNIST and CIFAR10 datasets, the effectiveness of OBDA is evaluated in the three considered scenarios, namely over an AWGN MAC, and over fading MACs with and without perfect CSI, which represent three levels of wireless hostility. Test accuracy is plotted as a function of the number of communication rounds in Fig.~\ref{Fig:performacne_comparison}. The proposed OBDA scheme converges in all three scenarios, but at different rates depending on the level of wireless hostility the scheme suffers from. In the presence of channel fading the convergence is slower compared with its counterpart over an AWGN channel. This is because part of the gradient signs corresponding to subchannels experiencing a deep fade are truncated, leaving a smaller number of effective participating devices for each gradient entry. Imperfect CSI further slows down the convergence of the proposed OBDA. This is due to inaccurate aggregation, which results in a deviated gradient for model updating. These observations are aligned with our analysis presented in Theorems \ref{theo:AWGN}-\ref{theo:imperfect_CSI}. \subsection{Effect of Device Population} The effect of the device population on the convergence behaviour is illustrated in Fig.~\ref{Fig:effect_user_number}, where we set the number of communication rounds to $150$ for the MNIST dataset and $200$ for the CIFAR10 dataset, and plot the test accuracy w.r.t. the total number of edge devices $K$ for the three considered scenarios. It is observed that, in all scenarios, the test accuracy grows as $K$ increases. This is because a larger $K$ suppresses the noise variance inherent in stochastic gradients as well as the negative effects due to wireless hostilities. This phenomenon is termed the \emph{majority-vote gain}.
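The majority-vote gain can be illustrated in isolation from the channel. In the sketch below each device reports only the sign of a noisy local gradient and the server takes the sign of the vote sum; the Gaussian noise model and all constants are illustrative assumptions, not the over-the-air system model.

```python
import numpy as np

def majority_vote(true_grad, K, noise_std, rng):
    """K devices report signs of noisy local gradients; the server
    aggregates by taking the sign of the vote sum (majority vote)."""
    local = true_grad[None, :] + noise_std * rng.standard_normal((K, true_grad.size))
    return np.sign(np.sign(local).sum(axis=0))

rng = np.random.default_rng(0)
g = rng.standard_normal(1000)  # a stand-in "true" gradient
for K in (1, 9, 99):           # odd K avoids tied votes
    agree = np.mean(majority_vote(g, K, 2.0, rng) == np.sign(g))
    print(f"K={K:3d}  sign agreement: {agree:.2f}")
```

Agreement with the true gradient sign improves as $K$ grows, mirroring how a larger device population suppresses the per-device gradient noise.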
In particular, the majority-vote gain is observed to be most prominent in the gentle wireless condition (i.e., AWGN), and weakened in the hostile one (i.e., fading with imperfect CSI). This behaviour is aligned with the analytical results in Theorems \ref{theo:AWGN}-\ref{theo:imperfect_CSI}, which showed that the negative effects introduced by different wireless hostilities vanish, at different rates, with the growth of the device population. \setlength{\textfloatsep}{4pt} \begin{figure}[tt] \centering \subfigure[Dataset: MNIST; Classifier: CNN ]{\includegraphics[width=0.42\textwidth]{Figures/Comparison_3_schemes_mnist.eps}} \subfigure[Dataset: CIFAR10; Classifier: ResNet18 ]{\includegraphics[width=0.42\textwidth]{Figures/Comparison_3_schemes_cifar10.eps}} \vspace{3mm} \caption{Convergence performance of FEEL using OBDA. } \label{Fig:performacne_comparison} \end{figure} \setlength{\textfloatsep}{4pt} \begin{figure}[tt] \centering \subfigure[Dataset: MNIST; Classifier: CNN ]{\includegraphics[width=0.42\textwidth]{Figures/Effect_user_mnist.eps}} \subfigure[Dataset: CIFAR10; Classifier: ResNet18]{\includegraphics[width=0.42\textwidth]{Figures/Effect_user_cifar10.eps}} \vspace{3mm} \caption{Effect of device population on the convergence. } \label{Fig:effect_user_number} \end{figure} \vspace{-1mm} \subsection{Performance Comparison: OFDMA, BAA and OBDA} \vspace{-1mm} A performance comparison between digital transmission using OFDMA, the BAA scheme developed in \cite{zhu2018low}, and the proposed OBDA is presented in Fig. \ref{Fig:comparison}. In this figure, the test accuracy and communication latency are plotted for the three schemes over a fading MAC with perfect CSI. For the digital OFDMA, sub-channels are evenly allocated to the edge devices; gradient-update parameters are quantized into bit sequences with 16 bits per coefficient; and adaptive MQAM modulation is used to maximize the spectrum efficiency under a target bit error rate of $10^{-3}$. It can be observed from Fig.
\ref{Fig:test_acc} that the convergence speeds of digital OFDMA, BAA and OBDA are in descending order, while all three schemes achieve comparable accuracies after a sufficient number of communication rounds. The reason behind the faster convergence of digital OFDMA w.r.t. BAA is that the direct exposure of the analog modulated signals in BAA to the channel hostilities results in a boosted expected gradient norm, as mentioned in Remarks 1-3. The performance gap between BAA and OBDA in terms of convergence speed arises from the quantization loss introduced by the latter. However, we observe that the gap between the two is small, which shows that over-the-air gradient aggregation can be employed with devices using simple 4-QAM modulation without significant performance loss w.r.t. analog aggregation. Moreover, Fig. \ref{Fig:latency} shows that, without compromising the learning accuracies, the \emph{per-round communication latencies} of both OBDA and BAA are independent of the number of devices, while that of digital OFDMA scales up as the device population grows. \setlength{\textfloatsep}{4pt} \begin{figure}[tt] \centering \subfigure[Test accuracy ]{\label{Fig:test_acc}\includegraphics[width=0.42\textwidth]{Figures/Tx_comparison_mnist.eps}} \subfigure[Communication latency]{\label{Fig:latency}\includegraphics[width=0.42\textwidth]{Figures/Latency_comparison.eps}} \vspace{3mm} \caption{Performance comparison among digital OFDMA, BAA and OBDA. } \label{Fig:comparison} \end{figure} \section{Concluding Remarks}\label{Conclusion} In the context of FEEL, we have proposed a novel digital over-the-air gradient aggregation scheme, called OBDA, for communication-efficient distributed learning across wireless devices. To understand its performance, a comprehensive convergence analysis framework for OBDA subject to wireless channel hostilities has been developed.
This work represents the first attempt to develop digital AirComp, which is more practical than its analog counterpart in terms of compatibility with the modern digital communication infrastructure. For future work, we will consider the generalization of the current work to multi-cell FEEL, where the effect of inter-cell interference should also be taken into account. As another interesting direction, the proposed design, which assumes a single learning task, can be extended to the multi-task learning scenario, where an additional task scheduler at the server needs to be designed to reduce the frequency of the gradient updates, further accelerating the learning process.
\section{INTRODUCTION} The contextual bandit problem is a variant of the extensively studied multi-armed bandit problem \cite{auer2002finite}. Both contextual and non-contextual bandits involve making a sequence of decisions on which action to take from an action space $A$. After an action is taken, a stochastic reward $r$ is revealed for the chosen action only. The goal is to maximize these rewards. Maximization of reward is often recast as the minimization of regret \cite{bubeck2012regret}. As the reward is only revealed for the chosen action, bandit problems involve trading off exploration to try potentially new and better actions and exploitation of known good actions. In a contextual bandit, before making each decision the bandit is shown some context $\mathbf{x} \in X$. Asymptotically optimal approaches to trading off exploration and exploitation for the non-contextual multi-armed bandit can be derived \cite{auer2002nonstochastic}. But deriving such solutions in the contextual case has resulted in simplifying assumptions such as assuming a linear relationship between the context and the reward \cite{li2010contextual}. These simplifying assumptions are often unsatisfactory. For example, at HubSpot we wish to optimize the time of day to send an email in order to maximize the probability of an email converting (opened, clicked on or replied to). We know that the context of the particular email being sent, such as the timezone of the email sender, has an effect on whether the email converts for a given send time. Additionally, we only observe whether the email converts for the time we actually \textit{choose} to send the email. Thus, we model this as a contextual bandit problem. The action space $A$ is the possible times to send an email, the context $X$ are the features describing the email being sent and the reward $r$ is whether the email converts. 
We know from other non-bandit (classification) tasks involving the same email data, such as predicting the bounce rate of an email, that deep neural networks substantially outperform linear models at modeling our email data. Many other marketing problems can also be cast as contextual bandit problems \cite{lu2010contextual} and we wish to model the context of these problems using deep neural networks, while still managing the exploration vs.\ exploitation trade-off. We propose doing so by applying recent advances in Bayesian neural networks, whereby it is possible to obtain samples from an approximate posterior over the weights in a neural network. Given these samples it is trivial to use Thompson sampling \cite{thompson1933likelihood} to manage the exploration vs.\ exploitation trade-off. \section{BACKGROUND} \subsection{CONTEXTUAL MULTI-ARMED BANDITS} We provide a more formal definition of the contextual bandit problem. Suppose we have an agent acting in an environment. At each timestep the agent is presented with some context $\mathbf{x} \in X$ from the environment. The agent must choose to take some action $\mathbf{a} \in A$ from a set of possible actions \{$\mathbf{a_1}$, $\mathbf{a_2}$, ..., $\mathbf{a_m}$\}. When the agent takes an action it receives a real-valued reward $r$ for the action taken, however it does not observe a reward for untaken actions. For simplicity we restrict $r \in \{0,1\}$ although the treatment is trivially extended to any real valued reward. The agent wishes to maximize the cumulative reward over time, and in our formulation the agent is assumed to interact with the environment over an infinite time horizon. The agent must build a model for $P(r | \mathbf{a}, \mathbf{x})$ from observed ($\mathbf{x}$, $\mathbf{a}$, r) triplets. But the agent only observes triplets involving actions $\mathbf{a}$ it has taken, so the agent drives its own data generation process. 
Thus, it is necessary for the agent to trade off exploration and exploitation. If the agent learns early on to associate high rewards with a particular action, it may not take actions which have higher expected reward in new, unseen contexts. Equally, the agent cannot always take random actions, as it will forgo higher expected reward for known good actions in particular contexts. There are many approaches to trading off exploration and exploitation \cite{auer2002using,auer2002finite,silver2016mastering}. Two popular and simple approaches are epsilon-greedy exploration and Thompson sampling. With an epsilon-greedy policy, a random action is taken an $\epsilon$ fraction of the time and the best predicted action the remaining $1 - \epsilon$ fraction of the time. Thompson sampling \cite{thompson1933likelihood} is a heuristic for trading off exploration and exploitation when using a parametric likelihood function $P(r | \mathbf{a}, \mathbf{x}; \mathbf{w})$ and a posterior on the parameters $P(\mathbf{w} | \{(\mathbf{x}, \mathbf{a}, r)\})$. At each timestep we sample $\mathbf{\tilde{w}} \sim P(\mathbf{w} | \{(\mathbf{x}, \mathbf{a}, r)\})$ and then choose the action with highest expected reward $P(r | \mathbf{a}, \mathbf{x}; \mathbf{\tilde{w}})$. Thompson sampling has been shown to be a very effective heuristic for managing this trade-off \cite{agrawal2013thompson,chapelle2011empirical}. \subsection{BAYESIAN NEURAL NETWORKS} Neural networks are typically trained with maximum likelihood estimation (MLE) or maximum a posteriori (MAP) estimation of the network weights \cite{blundell2015weight}. This leads to a point estimate of the weights, and thus a Bayesian approach to ascertaining uncertainty estimates from the network is not possible. In contrast, in a Bayesian neural network a prior is placed on the network weights and the posterior distribution is learned over these weights.
In general due to the intractability of full Bayesian inference one conducts approximate Bayesian inference to learn the posterior. Approximate inference in Bayesian neural networks has a long history \cite{barber1998ensemble,buntine1991bayesian,hinton1993keeping,mackay1992practical,mackay1995probable,neal1995bayesian}. Only recently have approaches to approximate inference been proposed which scale well to larger networks and datasets \cite{blundell2015weight,gal2016dropout,graves2011practical}. Of particular importance to our application is a recent result \cite{gal2016dropout} where dropout training \cite{srivastava2014dropout} in an arbitrary depth feed-forward neural network with arbitrary non-linearities is shown to be equivalent to an approximation of the probabilistic deep Gaussian process \cite{damianou2013deep}. In practice this Bayesian interpretation of dropout means that if we turn dropout on at inference time, each stochastic forward pass through the network corresponds to a sample from an approximate posterior over the network weights. This property of dropout at inference time has been used to undertake Thompson sampling to drive exploration in reinforcement learning problems \cite{gal2016dropout}. Under the Bayesian interpretation of dropout, a continuous relaxation of Bernoulli dropout using the Concrete distribution \cite{maddison2016concrete} can be realized \cite{gal2017concrete}. This continuous relaxation of dropout, named Concrete Dropout, treats the dropout parameter $p$ (the probability of turning off a node in Bernoulli dropout) as a parameter of the network that can be trained using standard gradient descent methods. Appropriate regularization parameters are then added to the objective function, which trades off between fitting the data and maximizing the entropy of the dropout parameter. 
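Concretely, leaving dropout on at inference time and running repeated stochastic forward passes yields samples from the approximate posterior predictive distribution. A minimal numpy sketch with a fixed two-layer network (the weights, sizes, and dropout rate are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = np.abs(rng.standard_normal((8, 4)))  # fixed "trained" weights, illustration only
W2 = rng.standard_normal((4, 1))

def stochastic_forward(x, p=0.5):
    """One dropout-on forward pass, i.e. one approximate posterior sample."""
    h = np.maximum(0.0, x @ W1)                   # ReLU hidden layer
    mask = (rng.random(h.shape) > p) / (1.0 - p)  # inverted dropout mask
    return (h * mask) @ W2

x = np.ones((1, 8))
samples = np.array([stochastic_forward(x)[0, 0] for _ in range(200)])
print(f"predictive mean {samples.mean():.3f}, spread {samples.std():.3f}")
```

The spread across passes serves as the uncertainty estimate; Thompson sampling simply acts greedily on a single such pass.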
As the amount of data increases, the data likelihood term overwhelms the dropout regularization term, which leads to well-calibrated uncertainty estimates, and thus appropriate levels of exploration. \section{DEEP CONTEXTUAL MULTI-ARMED BANDITS} In our model, we use a neural network to model $P(r | \mathbf{a}, \mathbf{x}; \mathbf{w})$, where $\mathbf{w}$ are the network weights. We train the network using Concrete Dropout, and do not disable dropout at inference time. At each timestep we are presented with a context $\mathbf{x}$ and a set of possible actions \{$\mathbf{a_1}$, $\mathbf{a_2}$, ..., $\mathbf{a_m}$\}. We sample dropout noise $\mathbf{d}$ from a uniform distribution, $U(0, 1)$, which is used, in effect, to sample from the posterior over the weights $\mathbf{w}$. We unroll the $m$ actions into ($\mathbf{x}$, $\mathbf{a_i}$) pairs which are passed to the network for inference. We choose the action $\mathbf{a}$ whose ($\mathbf{x}$, $\mathbf{a_i}$) pair has the highest expected reward. Theoretically, for Thompson sampling, we should update the model after each observation. In practice, real-time online learning is unsuitable for industrial settings, so we retrain the model on an exponential scale in the number of data points. See \autoref{algo:dcmab}.
\IncMargin{1em} \begin{algorithm} \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} \Input{$N$ the number of initial steps to take actions randomly before training, $K$ coefficient for retraining on an exponential scale and a neural network architecture that defines a parametric function $f(\mathbf{x}, \mathbf{a}; \mathbf{w})$.} $nextRetrain\leftarrow N$\; \While{True}{ \For{$i\leftarrow 1$ \KwTo $nextRetrain$}{ receive context $\mathbf{x}$ and possible actions \{$\mathbf{a_1}$, $\mathbf{a_2}$, ..., $\mathbf{a_m}$\}\; sample dropout noise $\mathbf{d}$\; compute corresponding weights $\mathbf{w}_d$\; \For{$\mathbf{a} \in \{\mathbf{a_1}, \mathbf{a_2}, ..., \mathbf{a_m}\}$}{ compute predicted reward $r = f(\mathbf{x}, \mathbf{a}; \mathbf{w}_d)$\; } choose $\mathbf{a}$ with the highest predicted reward\; } retrain on all ($\mathbf{x}$, $\mathbf{a}$, r) triplets\; $nextRetrain\leftarrow K*nextRetrain$\; } \caption{Deep Contextual Multi-Armed Bandits\label{algo:dcmab}} \end{algorithm} \DecMargin{1em} Initially, when the amount of data is small, the uncertainty on the network weights is high. Therefore, samples from the posterior have high variance, which leads to a high level of exploration, as desired. As the model observes more ($\mathbf{x}$, $\mathbf{a}$, r) triplets, the uncertainty on the weights decreases, and the network explores less and less, converging to an almost purely exploitative strategy. Additionally, by using Concrete Dropout, the level of exploration is automatically tuned according to the size of the network, the amount of data observed, and the complexity of the context. Thus, we do not have to create hand-crafted dropout rate or epsilon annealing schedules for each problem. This is particularly important in real world settings, where it may not be possible to run a large number of simulations to determine an appropriate problem-specific annealing schedule. 
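Algorithm \ref{algo:dcmab} can be sketched as a short Python skeleton. The environment and reward model below are toy stand-ins (any dropout-trained network whose stochastic forward pass plays the role of \texttt{predict} would do); the schedule uses the values $N = 128$ and $K = 2$ adopted later in the experiments.

```python
import random

class ToyEnv:
    """Toy environment: reward 1 iff the chosen action matches the context bit."""
    def observe(self):
        self.x = random.randint(0, 1)
        return self.x, [0, 1]
    def step(self, a):
        return 1 if a == self.x else 0

class ToyModel:
    """Stand-in for the dropout network: predict() plays the role of one
    stochastic (dropout-on) forward pass; retrain() refits on all data."""
    def predict(self, x, a, noise):
        return random.random()
    def retrain(self, data):
        pass

def run_bandit(env, model, N=128, K=2, rounds=3):
    data, next_retrain = [], N
    for _ in range(rounds):
        for _ in range(next_retrain):
            x, actions = env.observe()
            noise = random.random()  # one shared dropout draw per decision
            a = max(actions, key=lambda act: model.predict(x, act, noise))
            data.append((x, a, env.step(a)))
        model.retrain(data)          # retrain on all triplets collected so far
        next_retrain *= K            # exponential retraining schedule
    return data

history = run_bandit(ToyEnv(), ToyModel())
print(len(history))  # 128 + 256 + 512 = 896 decisions over three cycles
```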
In summary, our approach satisfies three key requirements: \begin{enumerate} \setlength\itemsep{0em} \item It models the context $\mathbf{x}$ using a deep non-linear neural network. \item It explores the $X \times A$ space in a principled and efficient way that requires little manual tuning. \item Its time complexity is consistent with real-world, latency-constrained tasks. \end{enumerate} \section{EXPERIMENTS} We test our approach on two tasks with synthetic data, which allows us to run many simulations. We compare against three baseline approaches: 1) a non-contextual bandit which uses Thompson sampling under a Beta/Binomial conjugate Bayesian analysis, 2) an epsilon-greedy bandit which chooses a random action 5\% of the time and the best predicted action the remainder of the time, and 3) a bandit with a fixed dropout rate. In the experiments below, the contextual models (including the epsilon-greedy and fixed-dropout-rate bandits) use a neural network with two hidden layers of 256 units each and ReLU activations. All networks are trained using the Adam optimizer \cite{kingma2014adam} with initial learning rate = 0.001. We retrain the contextual model on an exponential scale after observing $2^7$, $2^8$, $2^9$, ... examples, i.e., $K = 2$ and $N = 128$. \subsection{MUSHROOM TASK} We make a slight modification to a previously used contextual bandit problem \cite{blundell2015weight,guez2015sample}. The UCI Mushroom dataset \cite{UCI2013} contains 8,124 mushrooms classified as poisonous or edible, with up to 22 features for each mushroom. At each timestep, our agent is presented with the features of a mushroom and must decide whether to eat it or not. The agent gets a reward of 1 for eating an edible mushroom, a reward of 0 for eating a poisonous mushroom, and, with probability $p$, a reward of 1 for not eating a mushroom. In the experiments below we set $p = 0.5$.
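The reward rule of the task is easy to encode directly; a sketch (the boolean edibility flag stands in for the UCI label, and $p = 0.5$ as in the text):

```python
import random

def mushroom_reward(edible, eat, p=0.5, rng=None):
    """Reward 1 for eating an edible mushroom, 0 for eating a poisonous
    one, and 1 with probability p for abstaining."""
    rng = rng or random.Random(0)
    if eat:
        return 1 if edible else 0
    return 1 if rng.random() < p else 0

print(mushroom_reward(edible=True, eat=True),   # eating an edible mushroom -> 1
      mushroom_reward(edible=False, eat=True))  # eating a poisonous one    -> 0
```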
Regret is measured against an oracle which can see whether the mushroom is edible or not, always eats edible mushrooms, and does not eat poisonous mushrooms. \begin{figure}[h] \setlength{\belowcaptionskip}{-10pt} \centering \includegraphics[width=0.4\textwidth]{mushroom} \caption{Mushroom Task - comparison between epsilon-greedy, Thompson sampling with Bernoulli dropout, and Thompson sampling with Concrete Dropout. Concrete Dropout with final cumulative regret (FCR) of 252 performs significantly better than Bernoulli dropout, which has FCR of 658, while the FCR for the epsilon-greedy agent is 1,718.} \label{fig:mushroom} \end{figure} \autoref{fig:mushroom} demonstrates that, as expected, the epsilon-greedy agent accumulates regret linearly, due to taking random actions 5\% of the time. Both the fixed-rate dropout and Concrete Dropout agents rapidly learn to ``solve'' the task, but Concrete Dropout not only has a much lower final cumulative regret than the fixed-rate dropout agent (less than half), it also learns to stop exploring earlier. Not shown is a non-contextual bandit whose final regret is 24,048, which demonstrates the value of modeling the context for this task. The discontinuities in the graph are due to the fact that we retrain on an exponential scale. \subsection{CASINO PARITY TASK} We propose a new contextual bandit task which is a twist on the traditional non-contextual multi-armed bandit problem. Suppose there are $L$ bandits in a casino, each of type A or B. Bandits of type A pay out with probability $p_A$ and bandits of type B pay out with probability $p_B$. Assume for simplicity that the payouts are always one dollar. Every bandit in the casino has an ID encoded as a binary string. Unknown to the gambler, bandits of type A have an even parity ID and bandits of type B an odd parity ID. At each timestep the gambler is presented with the ID of a bandit of either type A or B, and the gambler must choose to play or not.
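The ID-parity rule is straightforward to write down; a sketch with the five-bit IDs implied by $L = 32$ and the payout probabilities $p_A = 0.7$, $p_B = 0.3$ used in the experiments:

```python
def payout_prob(bandit_id, p_a=0.7, p_b=0.3):
    """Even-parity IDs are type A (pay with prob p_a); odd parity are type B."""
    parity = bin(bandit_id).count("1") % 2
    return p_a if parity == 0 else p_b

L = 32
type_a = [i for i in range(L) if payout_prob(i) == 0.7]
print(len(type_a))  # exactly half of the 32 IDs have even parity
```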
If the gambler plays, they receive a reward according to the bandit type's payout probability; otherwise they are randomly assigned one of the $L$ bandits, which they must play. Without loss of generality, if $p_A > p_B$, the optimal strategy is clearly to always play the even parity bandits and to never play the odd parity bandits. Rewards are stochastic, and the context requires a non-linear model \cite{minsky1969perceptrons}. When $p_A \neq p_B$ the context can be leveraged to minimize regret. In the experiments below we set $p_A = 0.7$, $p_B = 0.3$ and $L = 32$. Regret is measured relative to an oracle which can see the bandit type A or B and, when $p_A > p_B$, always plays type A bandits and always refuses to play type B bandits, and vice versa. \begin{figure}[h] \setlength{\belowcaptionskip}{-10pt} \centering \includegraphics[width=0.4\textwidth]{casino_parity_32} \caption{Casino Parity Task - comparison between epsilon-greedy, Thompson sampling with Bernoulli dropout, and Thompson sampling with Concrete Dropout. Concrete Dropout with final cumulative regret (FCR) of 1,258 performs better than Bernoulli dropout, which has FCR of 1,536, and the epsilon-greedy agent, which has FCR of 2,718.} \label{fig:casino} \end{figure} In \autoref{fig:casino} we see that Concrete Dropout with Thompson sampling, which is our proposed deep contextual multi-armed bandit, again outperforms fixed-rate Bernoulli dropout and the epsilon-greedy agent. We note that for a non-contextual bandit the final cumulative regret was 25,102. \section{CONCLUSION} We have presented an approach to the contextual multi-armed bandit problem which enables modeling the context ($\mathbf{x}$) using a deep non-linear neural network while still enabling principled and efficient exploration of the joint context-action space $X \times A$. We proposed deep contextual multi-armed bandits, which apply recent work based on a Bayesian interpretation of dropout \cite{gal2016dropout,gal2017concrete}.
By combining standard dropout training with inference-time dropout, we are able to sample from an approximate posterior over the weights of a Bayesian neural network. This enables Thompson sampling, a heuristic which has been shown to be effective in addressing the exploration vs.\ exploitation trade-off. Concrete Dropout, a continuous relaxation of Bernoulli dropout, has been shown to provide well-calibrated uncertainty estimates. In practical applications it is not possible to run many simulations to determine a good dropout annealing schedule. By applying Concrete Dropout in deep contextual multi-armed bandits we avoid having to define such a schedule for each task. Through standard gradient descent training we learn dropout rates which appropriately trade off exploration and exploitation. As we wish to deploy deep contextual multi-armed bandits to many different tasks which may require vastly different levels of exploration, adaptively learning the appropriate dropout rate is a key advantage of our approach over using standard Bernoulli dropout. Deep contextual multi-armed bandits empirically outperform non-contextual bandits, bandits with epsilon-greedy exploration, and fixed-dropout-rate bandits on the two contextual bandit tasks presented in this paper. Additionally, we note that we are applying our approach to a number of contextual bandit problems in the marketing domain at HubSpot. \subsubsection*{Acknowledgements} The authors would like to thank Marco Lagi, Adam Starikiewicz, Vedant Misra and George Banis for their helpful comments on drafts of this paper. \newpage \bibliographystyle{splncs03}
\section{Introduction} The ensemble Kalman filter \cite{evensen1994sequential,burgers1998analysis,evensen2009data}, one of the most widely applied data assimilation algorithms \cite{asch2016data,law2015data,reich2015probabilistic}, uses a Monte Carlo approach to provide a non-linear approximation to the Kalman filter~\cite{kalman1960new}. In the typical case of an undersampled ensemble, the algorithm requires correction procedures such as inflation~\cite{anderson2001ensemble}, localization~\cite{hunt2007efficient, petrie2008localization, anderson2012localization,Sandu_2015_SCALA,Sandu_2017_Covariance-Cholesky,zhang2010ensemble}, and ensemble subspace enrichment~\cite{Sandu_2015_covarianceShrinkage, Sandu_2019_Covariance-parallel,Sandu_2014_EnKF_SMF}. Hybrid data assimilation \cite{hamill2000hybrid} is typically an umbrella term for assimilation techniques that combine offline-estimated climatological covariances with their online-estimated statistical counterparts. These methods are often thought of as heuristic corrections, but in fact stem from statistically rigorous covariance shrinkage techniques. This work is based on enriching the ensemble subspace through the use of climatological covariances. Previous work~\cite{Sandu_2015_covarianceShrinkage, Sandu_2019_Covariance-parallel} proposed augmenting the covariance estimates derived from the ensemble by a full-rank shrinkage covariance matrix approximation. In this work we consider augmenting the physical ensemble with synthetic members drawn from a normal distribution with a possibly low-rank covariance matrix derived from \textit{a priori} information such as climatological information or the method of snapshots.
We show that this is equivalent to a stochastic implementation of the shrinkage covariance matrix estimate proposed in~\cite{Sandu_2015_covarianceShrinkage, Sandu_2019_Covariance-parallel}, and therefore augmenting the physical ensemble with synthetic members enriches the rank of the covariance matrix and nudges the resulting covariance estimate toward the true covariance. \section{Background} Our aim is to understand the behavior of an evolving natural phenomenon. The evolution of the natural phenomenon is approximated by an imperfect dynamical model \begin{equation} X_i = \!M_{(i-1)\to i}(X_{i-1}) + \Xi_i, \end{equation} where $X_{i-1}$ is a random variable (RV) whose distribution represents our uncertainty in the state of the system at time $i-1$, $\!M_{(i-1)\to i}$ is the (imperfect) dynamical model, $\Xi_i$ is a RV whose distribution represents our uncertainty in the additive modeling error, and $X_i$ is the RV whose distribution represents our uncertainty in the (forecasted) state at time $i$. One collects noisy observations of the truth: \begin{equation} \*y^\|o_i = \!H_i(\*x_i^\|t) + \*\eta_i, \end{equation} where $\*x^\|t$ represents the true state of nature represented in model space, $\!H_i$ is the (potentially non-linear) observation operator, $\*\eta_i$ is a RV whose distribution represents our uncertainty in the observations, and $\*y^\|o_i$ are the observation values, assumed to be realizations of an observation RV $Y_i$. Take $n$ to be the dimension of the state space, and $m$ to be the dimension of the observation space. The goal of data assimilation is to find the \textit{a posteriori} estimate of the state given the observations, which is typically achieved through Bayes' theorem. At time $i$ we have: \begin{equation} \pi(X_i|Y_i) \propto \pi(Y_i|X_i)\,\pi(X_i).
\end{equation} In typical Kalman filtering the assumption of Gaussianity is made, whereby the states at all times, as well as the additive model and observation errors, are assumed to be Gaussian and independently distributed. Specifically one assumes $\Xi_i \sim \!N(\*0, \*Q_i)$ and $\*\eta_i \sim \!N(\*0,\mathbf{R}_i)$. In what follows we use the following notation. The \textit{a priori} estimates at all times are represented with the superscript $\square^\|f$, for forecast (as from the imperfect model), and the \textit{a posteriori} estimates are represented with the superscript $\square^\|a$, for analysis (through a DA algorithm). \subsection{Ensemble Transform Kalman Filter} Forecasting with an ensemble of coarse models has proven to be a more robust methodology than forecasting with a single fine model~\cite{kalnay2003atmospheric}. Ensemble Kalman filtering aims to utilize the ensemble of forecasted states to construct empirical moments and use them to implement the Kalman filter formula. The Ensemble Transform Kalman Filter (ETKF) computes an optimal transform of the prior ensemble member states to the posterior member states; for Gaussian distributions the optimal transform is described by a symmetric transform matrix. We now describe the standard ETKF. Let $\*X^\|a_{i-1} = [\*x^{(1),\|a}_{i-1},\dots \*x^{(N),\|a}_{i-1}]$ represent the $N$-member analysis ensemble at time $i-1$. The forecast step is: \begin{equation} \*x_i^{(k),\|f} = \!M_{(i-1)\to i}(\*x_{i-1}^{(k),\|a}) + \*\xi_i^{(k)},\quad k=1,\dots,N, \end{equation} where $\*\xi_i^{(k)}$ is a random draw from $ \!N(\*0,\*Q_i)$.
The ETKF analysis step reads: \begin{subequations} \begin{eqnarray} \label{eq:ETKF-analysis-anomalies} \*A^\|a_i &=& \*A^\|f_i\, \*T_i, \\ \label{eq:ETKF-analysis-mean} \bar{\*x}^\|a_i &=& \bar{\*x}^\|f_i + \*A^\|a_i\, \*Z^{\|a,\mathsf{T}}_i\,\mathbf{R}^{-1}_i\,\*d_i, \end{eqnarray} \end{subequations} where \begin{subequations} \begin{align} \label{eq:transform-matrix} \*T_i &= {\left(\mathbf{I} - \*Z_i^{\|f,\mathsf{T}}\,\S_i^{-1}\,\*Z_i^\|f\right)}^{\frac{1}{2}}, \\ \label{eq:S-matrix} \S_i &= \*Z_i^\|f\,\*Z_i^{\|f,\mathsf{T}} + \mathbf{R}_i, \\ \*A^\|f_i &= \frac{1}{\sqrt{N-1}}\left(\*X^\|f_i - \overline{\*x}^\|f_i\,\*1^\mathsf{T}\right), \\ \*Z^\|f_i &= \frac{1}{\sqrt{N-1}}\left(\!H(\*X^\|f_i) - \overline{\!H(\*X^\|f_i)}\, \*1^\mathsf{T}\right), \\ \*d_i &= \*y^\|o_i - \overline{\!H(\*X^\|f_i)},\\ \overline{\*x}^\|f_i &= \frac{1}{N}\sum_{k=1}^N \*X^{\|f,(k)}_i,\\ \overline{\!H(\*X^\|f_i)} &= \frac{1}{N}\sum_{k=1}^N \!H(\*X^{\|f,(k)}_i). \end{align} \end{subequations} Here the unique symmetric square root of the matrix is used, as there is evidence of that option being the most numerically stable~\cite{sakov2008implications}. Also, it is common practice to approximate $\*Z^\|a_i$ by \begin{equation} \label{eq:ZaAssum} \*Z^\|a_i \approx \*Z^\|f_i\,\*T_i, \end{equation} although in reality $\*Z^\|a_i$ is implicitly defined from the analysis ensemble $\*X^\|a_i$ as follows: \begin{equation} \label{eq:Za} \begin{split} \*X^\|a_i &= \bar{\*x}^\|a_i\,\*1^\mathsf{T} + \sqrt{N-1}\*A^\|a_i, \\ \*Z^\|a_i &= \frac{1}{\sqrt{N-1}}\left(\!H(\*X^\|a_i) - \overline{\!H(\*X^\|a_i)}\,\*1^\mathsf{T}\right). \end{split} \end{equation} Note that equation~\eqref{eq:Za} is automatically satisfied by~\eqref{eq:ZaAssum} when $\!H$ is a linear operator. Additionally, \eqref{eq:Za} implicitly defines a fixed-point iteration, which should converge when the (possibly nonlinear) observation operator is sufficiently smooth. In this paper we make the assumption that equation~\eqref{eq:ZaAssum} is exact.
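For a linear observation operator, the analysis equations above fit in a few lines of NumPy. This is an illustrative sketch (not the authors' implementation); the symmetric square root is computed through an eigendecomposition:

```python
import numpy as np

def etkf_analysis(Xf, y, H, R):
    """One ETKF analysis step with a linear observation operator.
    Xf: (n, N) forecast ensemble; y: (m,) observation;
    H: (m, n) observation operator; R: (m, m) observation-error covariance."""
    n, N = Xf.shape
    xf = Xf.mean(axis=1, keepdims=True)
    A = (Xf - xf) / np.sqrt(N - 1)                  # forecast anomalies A^f
    Z = H @ A                                       # observation anomalies Z^f
    d = y - (H @ xf).ravel()                        # innovation d
    S = Z @ Z.T + R                                 # eq. (S-matrix)
    # symmetric square root of I - Z^T S^{-1} Z  (eq. transform-matrix)
    M = np.eye(N) - Z.T @ np.linalg.solve(S, Z)
    w, U = np.linalg.eigh(M)
    T = U @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ U.T
    Aa = A @ T                                      # analysis anomalies
    Za = Z @ T                                      # Z^a under eq. (ZaAssum)
    xa = xf.ravel() + Aa @ (Za.T @ np.linalg.solve(R, d))
    return xa[:, None] + np.sqrt(N - 1) * Aa        # analysis ensemble
```

For linear $\!H$, the analysis anomalies produced by this transform reproduce the Kalman analysis covariance $\*B - \*B\*H^\mathsf{T}\S^{-1}\*H\*B$ exactly.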
The empirical forecast covariance estimate \begin{equation} \label{eq:empirical-forecast-covariance} \Ct{X^\|f_i}{X^\|f_i} = \*A^\|f_i\,\*A^{\|f,\mathsf{T}}_i \end{equation} is not perfect. In order to improve the performance of the EnKF, inflation is applied to the ensemble anomalies, \begin{equation} \*A^\|f_i \leftarrow \alpha \, \*A^\|f_i, \end{equation} before any other computation is performed (meaning that it is also applied to the observation anomalies $\*Z^\|f_i$). The inflation parameter $\alpha>1$ is typically chosen in a heuristic manner. \subsection{Covariance localization} Traditional state-space localization of the empirical covariance \eqref{eq:empirical-forecast-covariance} is done by tapering, i.e., by using a Schur product \begin{equation} \*B^\|f_i = \*\rho_i \circ \Ct{X^\|f_i}{X^\|f_i}, \end{equation} where the localization matrix $\*\rho_i$ contains entries that are progressively smaller as the distance between the corresponding variables increases. Tapering methods for localization can also be thought of as a type of shrinkage \cite{chen2012shrinkage}. The localized ETKF (LETKF)~\cite{hunt2007efficient} is a localization approach to the ETKF. The LETKF and its variants are considered to be among the state-of-the-art EnKF methods. In the state-space approach to the LETKF, each state-space variable $[u]_j$ is assimilated independently of all others, with the observation-space error covariance inverse replaced by \begin{equation}\label{eq:rloc} \*R_i^{-1} \xleftarrow{} \*\rho_{i,j} \circ \*R_i^{-1}, \end{equation} where $\*\rho_{i,j}$ is a diagonal matrix representing the decorrelation factor between all observation-space variables and the $j$th state-space variable. Each diagonal element represents the tapering factor, and is often chosen to be some function of the distance from the state-space variable being assimilated to the corresponding observation-space variable.
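As an illustration of tapering, the sketch below (our own, using the standard Gaspari-Cohn fifth-order taper) localizes an empirical covariance on a periodic one-dimensional grid via the Schur product:

```python
import numpy as np

def gaspari_cohn(z):
    """Gaspari-Cohn fifth-order compactly supported taper; z = distance / c,
    with the taper vanishing for z >= 2."""
    z = np.atleast_1d(np.abs(np.asarray(z, dtype=float)))
    out = np.zeros_like(z)
    m1 = z <= 1.0
    m2 = (z > 1.0) & (z < 2.0)
    a = z[m1]
    out[m1] = -0.25*a**5 + 0.5*a**4 + 0.625*a**3 - (5.0/3.0)*a**2 + 1.0
    b = z[m2]
    out[m2] = (1.0/12.0)*b**5 - 0.5*b**4 + 0.625*b**3 + (5.0/3.0)*b**2 \
        - 5.0*b + 4.0 - (2.0/3.0)/b
    return out

# Schur-product localization of an empirical covariance on a periodic 1-D grid
n, c = 10, 3.0
idx = np.arange(n)
d = np.abs(idx[:, None] - idx[None, :])
dist = np.minimum(d, n - d)            # periodic distance between grid points
rho = gaspari_cohn(dist / c)           # localization matrix rho
rng = np.random.default_rng(0)
A = rng.standard_normal((n, 5))        # toy ensemble anomalies
B_loc = rho * (A @ A.T)                # tapered covariance estimate
```

The same taper values can be placed on the diagonal of $\*\rho_{i,j}$ to scale $\*R_i^{-1}$ in the LETKF fashion of \eqref{eq:rloc}.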
The implicit assumption is that all observations are independent of each other, in both the observation error ($\*R_i$ is diagonal) and the forecast error ($\*Z^\|f_i\,\*Z^{\|f,\mathsf{T}}_i$ is assumed to be diagonal). \subsection{Covariance shrinkage} In the statistical literature~\cite{chen2009shrinkage,chen2010shrinkage,chen2011robust,ledoit2004well}, covariance shrinkage refers to the methodology by which a statistical covariance derived from a set of samples is made to approach the ``true'' covariance from which the set of samples is derived. For the vast majority of statistical applications, there is no additional \textit{a priori} knowledge about the distribution of the samples, thus assumptions such as Gaussianity and sphericity are made. In data assimilation applications, however, climatological estimates of covariance are extremely commonplace. Assume that there exists a target covariance matrix $\!P$ that represents the \textit{a priori} knowledge about the error spatial correlations. In this paper we focus on an additive shrinkage covariance structure which is a linear combination of the empirical covariance \eqref{eq:empirical-forecast-covariance} and the target covariance, \begin{equation} \label{eq:shrinkage-covariance} \*B^\|f_i = \gamma_i\,\mu_i\,\!P + (1-\gamma_i)\,\Ct{X^\|f_i}{X^\|f_i}, \end{equation} with $\gamma_i$ representing the shrinkage factor and $\mu_i$ representing a scaling factor. For a general target matrix $\!P$, a closed-form expression to compute the weight value $\gamma_i$ is proposed in \cite{Stoica2008,Zhu2011}.
In this derivation, weights are computed as follows: \begin{equation} \gamma_i=\min\left(\frac{\frac{1}{N^2} \cdot\sum_{k=1}^N\norm{\*x_i^{(k),\|f}-\overline{\*x^\|f_i}}^4 -\frac{1}{N}\cdot \norm{ \Ct{X^\|f_i}{X^\|f_i}}^2}{\norm{\Ct{X^\|f_i}{X^\|f_i}-\!P}^2},1\right), \label{eq:KA} \end{equation} but as such an estimate is expensive to compute in an operational setting, we will settle for a more computationally inexpensive method. No assumptions about the structure of $\!P$ are made to compute $\gamma_i$. The general form \eqref{eq:shrinkage-covariance} can be reduced to a standard form where the target matrix is the (scaled) identity by defining the statistical covariance $\!C_i = \!P^{-\frac{1}{2}}\Ct{X^\|f_i}{X^\|f_i}\!P^{-\frac{1}{2}}$. The corresponding scaling factor $\mu_i$ is then the average variance of $\!C_i$, \begin{equation} \label{eq:mu} \mu_i = \frac{\tr(\!C_i)}{n}, \end{equation} with the new target $\mu_i\*I$ representing a spherical climatological assumption on $\!C_i$. The choice of $\gamma_i$ is extremely important. In this paper we focus on the Rao-Blackwellized Ledoit-Wolf (RBLW) estimator~\cite{chen2009shrinkage} (equation~(9) in~\cite{Sandu_2017_Covariance-Cholesky}), \begin{equation} \label{eq:RBLW} \gamma_{i,\text{RBLW}} = \min \left[\vcenter{\hbox{$\displaystyle\frac{N - 2}{N(N+2)} + \frac{(n + 1)N - 2}{\hat{U}_iN(N+2)(n-1)}$}},\,\,\, 1\right], \end{equation} where the sphericity factor \begin{equation} \label{eqn:sphericity} \hat{U}_i = \frac{1}{n - 1}\left(\frac{n \tr(\!C_i^2)}{\tr^2(\!C_i)} - 1\right) \end{equation} measures how similar the correlation structures of the sample covariance and target covariance are. Note that if our samples are also used to calculate the sample mean, the effective sample size of the sample covariance is smaller by one; therefore, for most practical applications one replaces $N$ by $N-1$ in \eqref{eq:RBLW}.
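The RBLW weight \eqref{eq:RBLW} and scaling \eqref{eq:mu} are inexpensive to compute once the whitened sample covariance is available. A sketch (function names are ours):

```python
import numpy as np

def rblw_shrinkage(C, N):
    """RBLW shrinkage weight (eq. RBLW) and scaling mu (eq. mu) for a
    whitened n x n sample covariance C built from N samples."""
    n = C.shape[0]
    trC = np.trace(C)
    trC2 = np.trace(C @ C)
    mu = trC / n
    U_hat = (n * trC2 / trC**2 - 1.0) / (n - 1)     # sphericity (eqn. sphericity)
    gamma = (N - 2) / (N * (N + 2)) \
        + ((n + 1) * N - 2) / (U_hat * N * (N + 2) * (n - 1))
    return min(gamma, 1.0), mu

# shrink a rank-deficient sample covariance toward the scaled identity
rng = np.random.default_rng(2)
n, N = 8, 5
A = rng.standard_normal((n, N)) / np.sqrt(N - 1)    # toy whitened anomalies
C = A @ A.T                                         # rank <= N < n
gamma, mu = rblw_shrinkage(C, N)
B = gamma * mu * np.eye(n) + (1.0 - gamma) * C      # eq. (shrinkage-covariance)
```

Note that the shrunk estimate $B$ is full rank even though the sample covariance $C$ is not.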
The RBLW estimator is valid under Gaussian assumptions about the samples from which the sample covariance matrix is derived; however, it is only considered to be accurate for an oversampled sample covariance matrix. There are two major issues with the application of the shrinkage estimator in the EnKF, both related to its reliance on the sphericity of $\!C_i$. First, when operating in the undersampled regime $N \ll n$, the estimate $\!C_i$ is also undersampled, and the problem of ``spurious correlations'' will affect the measure of sphericity \eqref{eqn:sphericity}. The second, related issue regards the climatological estimate $\!P$. Unless the climatological estimate accurately measures the correlation structure of the sample covariance, the shrinkage estimate could potentially be very poor. The long-term accuracy of the climatological estimate of the covariance is thus of great importance. Note that $\!P$ is not required to be invertible. Commonly, a reduced spectral version of $\!P$ is known, $\!P = \!U\,\!L\,\!U^*$, with $\!L$ being a diagonal matrix of $r\ll n$ spectral coefficients, and $\!U$ being an $n\times r$ matrix of orthonormal coefficients. The canonical symmetric pseudo-inverse square-root of $\!P$ would therefore be $\!P^{-\frac{1}{2}} = \!U\,\!L^{-\frac{1}{2}}\,\!U^*$. If $\sigma_k$ is the $k$-th singular value of $\!P^{-1/2}\*A^\|f$, then the traces appearing in \eqref{eqn:sphericity} can be computed as: \begin{gather*} \tr\left(\!C\right) = \sum_{k=1}^{N-1}\sigma_k^2,\qquad \tr\left(\!C^2\right) = \sum_{k=1}^{N-1}\sigma_k^4. \end{gather*} \section{ETKF implementation with stochastic shrinkage covariance estimates} Ensemble methods propagate our uncertainty about the dynamics of the system. Application of Bayes' rule requires that all available information is used~\cite{jaynes2003probability}. Therefore, we attempt to use the locally known climatological information in conjunction with our ensemble information.
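The trace identities above can be verified numerically. The following sketch builds a hypothetical reduced-spectral target $\!P$ (all names and dimensions are ours, for illustration only) and checks that the traces of $\!C$ follow from the singular values of $\!P^{-1/2}\*A^\|f$:

```python
import numpy as np

rng = np.random.default_rng(3)
n, N, r = 20, 6, 8

# hypothetical reduced spectral climatology P = U L U^T with r << n modes
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
L = rng.uniform(0.5, 2.0, r)
P_inv_half = U @ np.diag(L ** -0.5) @ U.T      # pseudo-inverse square root of P

A = rng.standard_normal((n, N - 1))            # stand-in forecast anomalies A^f
W = P_inv_half @ A
sigma = np.linalg.svd(W, compute_uv=False)     # singular values of P^{-1/2} A^f

C = W @ W.T
tr_C = np.sum(sigma**2)                        # tr(C)   = sum_k sigma_k^2
tr_C2 = np.sum(sigma**4)                       # tr(C^2) = sum_k sigma_k^4
```

Computing the traces from the singular values of the thin $n\times(N-1)$ matrix avoids ever forming the $n\times n$ matrix $\!C$.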
If the dynamical system is locally (in time) stationary, climatologies about the local time roughly describe a measure of averaged-in-space uncertainty. Nino-Ruiz and Sandu~\cite{Sandu_2015_covarianceShrinkage, Sandu_2019_Covariance-parallel} proposed to replace the empirical covariance in the EnKF with a shrinkage covariance estimator~\eqref{eq:shrinkage-covariance}. They showed that this considerably improves the analysis at a modest additional computational cost. Additional synthetic ensemble members, drawn from a normal distribution with covariance $\*B^\|f$, are used to decrease the sampling errors. In this work we develop an implementation of the ETKF with a stochastic shrinkage covariance estimator~\eqref{eq:shrinkage-covariance}. Rather than computing the covariance estimate~\eqref{eq:shrinkage-covariance} explicitly, we build a synthetic ensemble by sampling directly from a distribution with covariance $\mu_i \!P$. The anomalies of this synthetic ensemble are independent of the anomalies of the forecast EnKF ensemble. To be specific, let $\!X^\|f \in \Re^{n \times M}$ be a synthetic ensemble of $M$ members (as opposed to the dynamic ensemble $\*X^\|f_i$ with $N$ members) drawn from a climatological probability density. We denote the variables related to the synthetic ensemble by calligraphic letters. A major concern is the choice of distribution. As sampling from the dynamical manifold is impractical, heuristic assumptions are made about the distributions involved. A useful known heuristic is the principle of maximum entropy (PME)~\cite{jaynes2003probability}. Given the PME, the assumption that we know both the mean and covariance of our assumed distribution, and the assumption that the dynamics are supported over all of $\Re^n$, there is one standard interpretation of the information contained in the synthetic ensemble $\!X^\|f_i$.
Assuming that the mean and covariance are known information (through sampling), a Gaussian assumption on the synthetic ensemble is warranted, \begin{equation} \!X^\|f_i \sim \mathcal{N}(\bar{\*x}^\|f_i,\mu_i\,\!P). \end{equation} Alternatively, though not explored in this paper, a Laplace assumption can be made on the synthetic ensemble. The Laplace assumption has obvious parallels to robust statistics methods in data assimilation~\cite{rao2017robust}. By sampling from a Laplace distribution, outlier behaviour can more readily be captured with fewer samples. However, the concentration of samples around the mean is also increased, thereby requiring more samples to better represent the covariance. The Gaussian assumption appears more natural in Kalman filter-based methods, and thus the rest of the paper will make this assumption. The synthetic ensemble anomalies in the state and observation spaces are: \begin{equation} \label{eq:synthetic-anomalies} \begin{split} \!A^\|f_i &= \frac{1}{\sqrt{ M-1}}\left(\!X^\|f_i - \overline{\!X}^\|f_i\,\*1^\mathsf{T}\right) \in \Re^{n\times M},\\ \!Z^\|f_i &= \frac{1}{\sqrt{ M-1}}\left(\!H(\!X^\|f_i) - \overline{\!H(\!X^\|f_i)}\,\*1^\mathsf{T}\right) \in \Re^{m\times M}. \end{split} \end{equation} The shrinkage estimator \eqref{eq:shrinkage-covariance} of the forecast error covariance $\*B^\|f_i$ is represented in terms of synthetic and forecast anomalies as: \begin{equation} \label{eq:Bf-shrinkage-stochastic} \widetilde{\*B}^\|f_i = \gamma_i\,\!A^\|f_i\,\!A^{\|f,\mathsf{T}}_i + (1-\gamma_i)\,\*A^\|f_i\,\*A^{\|f,\mathsf{T}}_i. \end{equation} The Kalman filter formulation~\cite{kalman1960new} yields the following analysis covariance matrix: \begin{equation} \label{eq:Ba-shrinkage-stochastic} \widetilde{\*B}^\|a_i = \widetilde{\*B}^\|f_i - \widetilde{\*B}^\|f_i\,\*H_i^\mathsf{T}\,\S_i^{-1}\,\*H_i \,\widetilde{\*B}^\|f_i, \end{equation} where $\S_i$ will be discussed later.
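The estimates \eqref{eq:Bf-shrinkage-stochastic} and \eqref{eq:Ba-shrinkage-stochastic} can be sketched as follows for a linear observation operator (a toy illustration under the Gaussian sampling assumption; names are ours):

```python
import numpy as np

def stochastic_shrinkage(Af, xbar, P, mu, gamma, M, H, R, rng):
    """Stochastic shrinkage forecast covariance (eq. Bf-shrinkage-stochastic)
    and the corresponding Kalman analysis covariance
    (eq. Ba-shrinkage-stochastic) for a linear observation operator H."""
    Xs = rng.multivariate_normal(xbar, mu * P, size=M).T          # synthetic members
    As = (Xs - Xs.mean(axis=1, keepdims=True)) / np.sqrt(M - 1)   # synthetic anomalies
    Bf = gamma * (As @ As.T) + (1.0 - gamma) * (Af @ Af.T)
    S = H @ Bf @ H.T + R
    Ba = Bf - Bf @ H.T @ np.linalg.solve(S, H @ Bf)
    return Bf, Ba
```

The assimilation contracts the covariance: the analysis estimate is symmetric positive semi-definite with a trace no larger than that of the forecast estimate.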
Using the forecast error covariance estimate \eqref{eq:Bf-shrinkage-stochastic} in \eqref{eq:Ba-shrinkage-stochastic} leads to the following analysis covariance: \begin{equation} \begin{split} \widetilde{\*B}^\|a_i &= \gamma_i\,\!A^\|f_i\,\!A^{\|f,\mathsf{T}}_i + (1-\gamma_i)\,\*A^\|f_i\,\*A^{\|f,\mathsf{T}}_i \\ &\quad - \left(\gamma_i\,\!A^\|f_i\,\!Z^{\|f,\mathsf{T}}_i + (1-\gamma_i)\,\*A^\|f_i\,\*Z^{\|f,\mathsf{T}}_i\right)\,\S^{-1}_i\,\left(\gamma_i\,\!Z^\|f_i\,\!A^{\|f,\mathsf{T}}_i + (1-\gamma_i)\,\*Z^\|f_i\,\*A^{\|f,\mathsf{T}}_i\right), \end{split} \label{eq:goal-cov}\tag{goal-cov} \end{equation} which we refer to as the ``target'' analysis covariance formula. The goal of our modified ensemble Kalman filter is to construct an $N$-member analysis ensemble such that the anomalies $\*A^\|a_i$ \eqref{eq:ETKF-analysis-anomalies} represent the target analysis covariance~\eqref{eq:goal-cov} as well as possible: \begin{equation} \label{eq:goal-an}\tag{goal-an} \textnormal{Find}~~\*A^\|a_i \in \Re^{n \times N}~~\textnormal{such that}:\quad \*A^\|a_i \,\*A^{\|a,\mathsf{T}}_i \approx \widetilde{\*B}^\|a_i. \end{equation} In the proposed method, we enrich our forecast ensemble in a way that closely approximates the shrinkage covariance~\eqref{eq:shrinkage-covariance}. An alternative approach, where we transform the physical ensemble and the surrogate ensemble as separate entities that are only related by the common information in their transformations, is discussed in Appendix~\ref{sec:appendix}. \subsection{The method} We enrich the ensembles of forecast anomalies with synthetic anomalies \eqref{eq:synthetic-anomalies}: \begin{equation} \label{eq:type1-enriched-ensembles} \begin{split} \#A^\|f_i &= \begin{bmatrix}\sqrt{1-\gamma_i}\,\*A^\|f_i & \sqrt{\gamma_i}\,\!A^\|f_i\end{bmatrix} \in \Re^{n\times (N + M)},\\ \#Z^\|f_i &= \begin{bmatrix}\sqrt{1-\gamma_i}\,\*Z^\|f_i & \sqrt{\gamma_i}\,\!Z^\|f_i\end{bmatrix} \in \Re^{m\times (N + M)}.
\end{split} \end{equation} Next, we define a transform matrix \eqref{eq:ETKF-analysis-anomalies} that is applied to the enriched ensemble \eqref{eq:type1-enriched-ensembles}, and leads to an analysis ensemble that represents the target analysis covariance \eqref{eq:goal-an}. Specifically, we search for a transform matrix $\#T_i$ such that: \begin{equation} \label{eq:T1eq} \widetilde{\*B}^\|a_i = \#A^\|f_i\,\#T_i\,\#T_i^\mathsf{T}\,\#A^{\|f,\mathsf{T}}_i. \end{equation} Using the extended ensembles \eqref{eq:type1-enriched-ensembles}, the target analysis covariance~\eqref{eq:goal-cov} becomes \begin{equation} \label{eq:t1ba} \widetilde{\*B}^\|a_i = \#A^\|f_i\,\left(\mathbf{I}_{(N+M) \times (N+M)} - \#Z^{\|f,\mathsf{T}}_i\,\S^{-1}_i\,\#Z^\|f_i\right)\,\#A^{\|f,\mathsf{T}}_i, \end{equation} where, from \eqref{eq:S-matrix}, \begin{equation} \S_i = \#Z^\|f_i\,\#Z^{\|f,\mathsf{T}}_i + \mathbf{R}_i. \end{equation} The transform matrix, by analogy with \eqref{eq:transform-matrix}, is taken to be the symmetric square root satisfying~\eqref{eq:T1eq}: \begin{equation} \label{eq:T-symmetric-sqrt} \#T_i = {\left(\mathbf{I}_{(N+M) \times (N+M)} - \#Z^{\|f,\mathsf{T}}_i\,\S^{-1}_i\,\#Z^\|f_i\right)}^{\frac{1}{2}}. \end{equation} We compute the analysis mean using the shrinkage covariance estimate. From \eqref{eq:ETKF-analysis-mean}: \begin{align} \label{eq:type1-mean} \bar{\*x}^\|a_i &= \bar{\*x}^\|f_i + \#A^\|f_i\,\#T_i\,\#T_i^\mathsf{T}\,\#Z^{\|f,\mathsf{T}}_i\,\mathbf{R}^{-1}_i\,\*d_i, \end{align} where the full analysis covariance estimate~\eqref{eq:t1ba} is used. In addition, we achieve the goal~\eqref{eq:goal-an} by keeping the first $N$ members of the transformed extended ensemble, or equivalently, the first $N$ columns of the symmetric square root \eqref{eq:T-symmetric-sqrt}. From \eqref{eq:ETKF-analysis-anomalies} we have: \begin{align} \label{eq:type1-anomalies} \*A^\|a_i &= \frac{1}{\sqrt{1-\gamma_i}}\,\bigl[\#A^\|f_i\,\#T_i\bigr]_{:,1:N} = \frac{1}{\sqrt{1-\gamma_i}} \,\#A^\|f_i\,\breve{\#T}_i, \quad \breve{\#T}_i = \left[\#T_i\right]_{:,1:N}.
\end{align} An alternative approach to achieve the \eqref{eq:goal-an} is to look for a low-rank, approximate square root instead of the symmetric square root \eqref{eq:T-symmetric-sqrt}. Specifically, we seek a transformation matrix $\widehat{\#T}_i$ such that: \begin{equation} \label{eq:T-lowrank-sqrt} \widehat{\#T}_i \in \Re^{(N+M) \times N}, \qquad \widehat{\#T}_i\,\widehat{\#T}_i^T \approx \mathbf{I}_{(N+M) \times (N+M)} - \#Z^{\|f,\mathsf{T}}_i\S^{-1}_i\#Z^\|f_i. \end{equation} The calculation of the symmetric square root \eqref{eq:T-symmetric-sqrt} requires an SVD of the right hand side matrix. With the same computational effort one can compute the low rank transformation: \begin{equation} \label{eq:T-symmetric-sqrt2} \begin{split} \mathbf{U}\, \boldsymbol{\Sigma} \, \mathbf{U}^T &= {\left(\mathbf{I} - \#Z^{\|f,\mathsf{T}}_i\S^{-1}_i\#Z^\|f_i\right)}, \qquad \mathbf{U}, \boldsymbol{\Sigma} \in \Re^{(N+M) \times (N+M)};\\ \breve{\#T}_i &= \mathbf{U}\, \boldsymbol{\Sigma}^{1/2} \, [\mathbf{U}_{1:N,:}]^T \in \Re^{(N+M) \times N} \qquad\textnormal{(symmetric square root \eqref{eq:T-symmetric-sqrt})}; \\ \widehat{\#T}_i &= \mathbf{U}_{:,1:N}\, \boldsymbol{\Sigma}^{1/2}_{1:N,1:N} \in \Re^{(N+M) \times N} \qquad \textnormal{(low rank square root \eqref{eq:T-lowrank-sqrt})}. \end{split} \end{equation} The mean calculation \eqref{eq:type1-mean} is the same. The ensemble transform produces $N$ transformed ensemble members that contain ``mixed'' information from both the physical and the synthetic ensembles: \begin{align*} \label{eq:type2-anomalies} \*A^\|a_i &= \frac{1}{\sqrt{1-\gamma_i}} \,\#A^\|f_i\,\widehat{\#T}_i. \end{align*} \begin{remark}[Classical localization] It is possible to combine the proposed stochastic shrinkage approach with traditional localization. The LETKF implementation~\cite{hunt2007efficient} computes transform matrices for subsets of variables, corresponding to localized spatial domains. 
In a similar vein one can combine our shrinkage algorithm with classical localization, as follows. Subsets of variables of the enriched ensembles \eqref{eq:type1-enriched-ensembles} are used to compute local transform matrices \eqref{eq:T-symmetric-sqrt} or \eqref{eq:T-lowrank-sqrt}, which are then applied to transform the corresponding local subsets, i.e., to compute the corresponding rows in equation \eqref{eq:type1-anomalies} or in its low-rank analogue, respectively. Thus far, the authors have not observed this to have any measurable effect on the error. \end{remark} \section{Numerical experiments} All test problem implementations are available in the `ODE Test Problems' suite \cite{otp, otpsoft}. \subsection{The Lorenz'96 model (L96)} We first consider the 40-variable Lorenz '96 problem~\cite{lorenz1996predictability}, \begin{equation} \label{eq:Lorenz} \left[y\right]'_i = - \left[y\right]_{i-1}\left(\left[y\right]_{i-2} - \left[y\right]_{i+1}\right) - \left[y\right]_i + F, \quad i=1,\dots,40, \quad F=8. \end{equation} Assuming \eqref{eq:Lorenz} is ergodic (thus having a constant spatio-temporal measure of uncertainty on the manifold of the attractor), we compute the target covariance matrix $\!P$ as the empirical covariance from $10,000$ independent ensemble members run over $225$ days in the system (where $0.05$ time units corresponds to 6 hours), with an interval of 6 hours between snapshots. This system is known to have 13 positive Lyapunov exponents, with a Kaplan-Yorke dimension of about 27.1 \cite{popov2019bayesian}. The time between consecutive assimilation steps is $\Delta t = 0.05$ units, corresponding to six hours in the system. All variables are observed directly with an observation error covariance matrix of $\mathbf{R}_i = \mathbf{I}_{40}$. The time integration of the model is performed with the fourth-order Runge-Kutta scheme (RK4)~\cite{hairer1991solving} with a step size $h = \Delta t$. The problem is run over 2200 assimilation steps.
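A minimal implementation of the experimental model \eqref{eq:Lorenz} and its RK4 integrator might look as follows (a sketch of ours, not the ODE Test Problems code):

```python
import numpy as np

def l96_rhs(y, F=8.0):
    """Right-hand side of the 40-variable Lorenz '96 system (eq. Lorenz),
    with periodic indexing via np.roll."""
    return -np.roll(y, 1) * (np.roll(y, 2) - np.roll(y, -1)) - y + F

def rk4_step(f, y, h):
    """One step of the classical fourth-order Runge-Kutta (RK4) scheme."""
    k1 = f(y)
    k2 = f(y + 0.5 * h * k1)
    k3 = f(y + 0.5 * h * k2)
    k4 = f(y + h * k3)
    return y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# six model hours per assimilation step: h = dt = 0.05
y = 8.0 + 0.01 * np.random.default_rng(5).standard_normal(40)
for _ in range(200):
    y = rk4_step(l96_rhs, y, 0.05)
```

The state $y = F$ is an equilibrium of \eqref{eq:Lorenz}; a small perturbation of it, as above, relaxes onto the chaotic attractor.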
The first 200 are discarded to account for model spinup. Twenty independent model realizations are performed in order to gather statistical information. \subsection{L96 assimilation results} \begin{figure*}[t] \centering \includegraphics[width=\linewidth]{figure1.pdf} \caption{Results for the L96 problem with dynamic ensemble sizes of $N=5$ and $N=14$, inflation factor $\alpha=1.1$, and different synthetic ensemble sizes $M$. We compute the KL divergence of the rank histogram~\eqref{eq:rank-histogram} and the RMSE~\eqref{eq:rmse} for the methods. Error bars show two standard deviations.}\label{fig:KLRMSEM} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=0.66\linewidth]{figure2.pdf} \caption{The left panel presents the RMSE of the L96 model for various values of the dynamic and synthetic ensemble sizes. The right panel presents the shrinkage factor $\gamma$ for a synthetic ensemble size of $M=100$, with error bars showing two standard deviations.} \label{fig:l96-M-versus-N-versus-G} \end{figure*} We assess the quality of the analysis ensembles using a rank histogram~\cite{hamill2001interpretation}, cumulative over 20 independent runs. For a quantitative metric we compute the KL divergence from $Q$ to $P$, \begin{equation}\label{eq:rank-histogram} D_{KL}\left(P\middle|\middle|Q\right) = \sum_k P_k\log\left(\frac{P_k}{Q_k}\right), \end{equation} where $P$ is the uniform distribution, $Q$ is our ensemble rank histogram, and $P_k$ \& $Q_k$ are the discrete probabilities associated with each bin. Additionally, for testing the accuracy of all our methods we compute the spatio-temporal RMSE, \begin{equation}\label{eq:rmse} \sqrt{\frac{1}{Kn}\sum_{i=1}^K \sum_{j=1}^n \left(\left[\*x^\|a_i\right]_j - \left[\*x^\|t_i\right]_j\right)^2}, \end{equation} with $K$ representing the number of snapshots at which the analysis is computed, and $\left[\*x^\|a_i\right]_j$ and $\left[\*x^\|t_i\right]_j$ the $j$th components of the analysis mean and of the true state at time $i$.
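The two metrics can be sketched in NumPy as follows (helper names are ours; the empty-bin convention in the KL computation is a practical choice, not from the paper):

```python
import numpy as np

def rank_histogram_kl(ranks, N):
    """KL divergence (eq. rank-histogram) from the uniform distribution P to
    the empirical rank histogram Q of an N-member ensemble (N+1 bins)."""
    Q = np.bincount(np.asarray(ranks), minlength=N + 1).astype(float)
    Q /= Q.sum()
    P = np.full(N + 1, 1.0 / (N + 1))
    mask = Q > 0                       # practical convention: skip empty bins
    return float(np.sum(P[mask] * np.log(P[mask] / Q[mask])))

def spatiotemporal_rmse(Xa, Xt):
    """Spatio-temporal RMSE (eq. rmse); Xa, Xt are (K, n) arrays of the
    analysis means and the true states."""
    return float(np.sqrt(np.mean((np.asarray(Xa) - np.asarray(Xt)) ** 2)))
```

A perfectly calibrated ensemble produces a flat rank histogram, for which the KL divergence vanishes.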
For the given settings of a severely undersampled ensemble ($N=5$) and mild inflation ($\alpha = 1.1$), we compare the Gaussian sampling methodology coupled with the RBLW formulation for the shrinkage factor $\gamma$~\eqref{eq:RBLW}, with the optimal static $\gamma=0.85$ shrinkage factor. For a dynamic ensemble that captures the positive error growth modes ($N=14$) we will compare the RBLW estimator with the optimal static $\gamma=0.1$. We will compare the mean and variance of the KL divergence of the rank histogram of the variable $[y]_{17}$ from the uniform, and the statistics of the spatio-temporal RMSE. The results are reported in Figure~\ref{fig:KLRMSEM}. For both an undersampled and a sufficient ensemble, the optimal shrinkage factor has a smaller mean error, and a smaller KL divergence with less variance (top left, top middle, bottom left, and bottom middle panels). In the undersampled case, the RBLW estimator induces more variance into the RMSE (top middle panel). For the sufficiently sampled ensemble, however, the optimal static shrinkage value induced significantly more variance into the error, with the RBLW estimator reducing the error significantly (bottom middle panel). It is possible that a better estimator than RBLW may get the `best of both worlds' and induce low error with low variance, though this is as of yet out of reach. This is to be expected, as the RBLW estimate is only accurate in the limit of ensemble size, and there is no theory about its accuracy in the undersampled case. In the authors' experience other estimators such as OAS, while having the theoretically desired properties, perform empirically worse in conjunction with ensemble methods. Currently, for a modest reduction in accuracy, one of the hyperparameters can be \emph{estimated online} by the methodology.
For the second round of experiments with Lorenz '96, reported in Figure~\ref{fig:l96-M-versus-N-versus-G}, we compare analysis errors as the synthetic and dynamic ensemble sizes are varied (left panel). It is evident that increases in both dynamic and synthetic ensemble size lead to lower error. In the right panel we also compare, across dynamic ensemble sizes, the values of $\gamma$ that are produced. It is clear that an increase in dynamic ensemble size decreases the need for shrinkage. \subsection{The Quasi-Geostrophic model (QG)} \begin{figure*}[t] \centering \includegraphics[width=\linewidth]{figure3.pdf} \caption{Analysis RMSE results with the QG equations, for $M=100$ and Gaussian and Laplace samples, as compared to the LETKF with both the Gaspari-Cohn decorrelation function (GC) and our operational approximation (OP)~\eqref{eq:operational-decorrelation}.} \label{fig:qgT1T2} \end{figure*} We follow the QG formulations given in~\cite{san2015stabilized,mou2019data}. We discretize the equation \begin{equation} \begin{split} \label{eq:QG} \omega_t + J(\psi,\omega) - {Ro}^{-1}\, \psi_x &= {Re}^{-1}\, \Delta\omega + {Ro}^{-1}\,F, \\ J(\psi,\omega)&\equiv \psi_y \omega_x - \psi_x \omega_y, \end{split} \end{equation} where $\omega$ stands for the vorticity, $\psi$ stands for the stream function, $Re$ is the Reynolds number, $Ro$ is the Rossby number, $J$ is the Jacobian term, and $F$ is a constant (in time) forcing term. The relationship between the stream function and vorticity, $\omega = -\Delta\psi$, is explicitly enforced in the evaluation of the ODE. The forcing term is a symmetric double gyre, \begin{equation} F = \sin\left(\pi(y-1)\right). \end{equation} Homogeneous Dirichlet boundary conditions are enforced on the spatial domain $[0,1]\times[0,2]$.
The spatial discretization is a second-order central finite difference for the first derivatives and the Laplacian, with the Arakawa approximation \cite{arakawa1966computational} (a pseudo finite element scheme~\cite{jespersen1974arakawa}) used for computing the Jacobian term. All spatial discretizations exclude the trivial boundary points from explicit computation. The matrix $\!P$ is approximated from 700 snapshots of the solution, each about 283 hours apart, with Gaspari-Cohn localization applied so as to keep the matrix sparse. The truth used in the assimilation experiments is taken outside the time span of these snapshots, so as not to pollute the results. Nature utilizes a $255\times511$ spatial discretization, and the model a $63\times127$ spatial discretization. Observations are first relaxed into the model space via multigridding~\cite{zubair2009efficient}; then 150 distinct spatial points (using an observation operator similar to \cite{sakov2008deterministic}) from the non-linear observation operator, \begin{equation} \!H(\psi) = \sqrt{\psi_x^2 + \psi_y^2}, \end{equation} representing zonal wind magnitude, are taken. The observation error is unbiased, with covariance $\*R = 4\mathbf{I}_{150}$. The number of synthetic ensemble members is fixed at a constant $M = 100$, so as to be more than the number of full model run ensemble members, but significantly less than the rank of the covariance. Observations are taken $\Delta t=0.010886$ time units (representing one day in model space) apart. We run a total of 350 assimilation steps, taking the first 50 as spinup. Results are averaged over 5 model runs (with the same nature run, but different initializations of the dynamic ensemble), with diverging runs treated as de facto infinite error. Operationally, the LETKF assumes that observations are sufficiently far apart that almost no two points are influenced by the same observation, and the algorithm runs on `patches' defined by the observations.
In such an operational framework, equation~\eqref{eq:rloc} cannot be applied. Thus, using a smooth decorrelation function such as Gaspari-Cohn~\cite{gaspari1999construction} (GC) is not operationally feasible. Defining sharp boundaries for the patches would be equally unfair. As such, in addition to looking at the GC decorrelation function, we construct a decorrelation function that has both a built-in cut-off radius and a smoother coupling between neighboring patches: \begin{equation} \ell(k) = \begin{cases}1 & k \leq 1,\\ {(5 - 4k)}^2{(8k - 7)} & 1 < k \leq \frac{5}{4},\\ 0 & \text{otherwise},\end{cases}\label{eq:operational-decorrelation} \end{equation} where $k_{ij} =d_{ij}/r$ is the ratio between the distance of two given points and the decorrelation radius (which we set to 15 grid points). The nonlinear term is an approximation to a sigmoid function. \subsection{QG assimilation results} Figure~\ref{fig:qgT1T2} reports the results with the QG model. The shrinkage approach handily beats the operational approximation decorrelation function~\eqref{eq:operational-decorrelation}, both in terms of stability and in terms of RMSE, for the vast majority of chosen dynamic ensemble sizes $N$ and inflation factors $\alpha$. Comparing our methodology to the LETKF with Gaspari-Cohn (GC) localization, we see that GC significantly decreases the error for larger values of $N$ and $\alpha$, but is not stable for more operational under-sampled dynamic ensemble sizes and low inflation factors, as opposed to our shrinkage method. Possible sources of error are both the nonlinear observations and the coarse approximation to the covariance estimate. These results lend additional support to the argument that a better (perhaps more heuristic) approximation to the shrinkage factor $\gamma$ is needed in order to use our methodology coupled with localization to stabilize operational implementations of the LETKF.
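The decorrelation function \eqref{eq:operational-decorrelation} is cheap to evaluate pointwise; a direct transcription (function and variable names are ours):

```python
import numpy as np

def operational_taper(k):
    """Decorrelation function ell(k) of eq. (operational-decorrelation);
    k = distance / decorrelation radius."""
    if k <= 1.0:
        return 1.0
    if k <= 1.25:
        return (5.0 - 4.0 * k) ** 2 * (8.0 * k - 7.0)
    return 0.0

# example per-observation scalings for a decorrelation radius of 15 grid points
dists = np.array([0.0, 5.0, 12.0, 18.0, 25.0])
r = 15.0
rho_j = np.array([operational_taper(d / r) for d in dists])
```

The polynomial branch matches the constant branches in both value and first derivative at $k=1$ and $k=5/4$, giving the smooth cut-off described above.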
The quasi-geostrophic results indicate that our methodology holds promise to be of use for practical data assimilation systems, especially those for which the observations are non-linear transformations of the state representation. However, the methodology needs to be refined with more optimal shrinkage factors for operational undersampled empirical covariances. \begin{remark} An operational implementation of the LETKF requires $m\times N$ linear solves and $m$ matrix square roots, while our stochastic shrinkage algorithm requires $N + M$ linear solves and one matrix square root. Thus as the number of observations grows, the stochastic shrinkage methodology becomes a lot more compelling. \end{remark} \section{Discussion} In~\cite{Sandu_2015_covarianceShrinkage} it was shown that shrinkage covariance matrix estimators can greatly improve the performance of the EnKF. This work develops ensemble transform filters that utilize a stochastic implementation of shrinkage covariance matrix estimates. We compare our filter to the current state-of-the-art LETKF algorithm. Lorenz '96 results indicate that the new filter performs worse in the under-sampled regime than the best `static' shrinkage method, and performs better (in terms of less variance in the error) than an optimal dynamic shrinkage method for the sufficiently sampled case. Additional results with QG indicate that our method could potentially be used to augment operational LETKF implementations, but that more work is needed to devise better heuristic estimates of the shrinkage factor $\gamma$ such that the two approaches could potentially be coupled. \begin{acknowledgments} The first two authors would like to thank Traian Iliescu and Changhong Mou for their in-depth help with understanding of the Quasi-Geostrophic model, Steven Roberts, the primary maintainer of the ODE Test Problems package, and the rest of the members of the Computational Science Laboratory at Virginia Tech for their continuous support. 
The authors would like to thank Dr. Pavel Sakov and a second, anonymous referee for their insightful feedback that led to an improved paper. The first two authors were supported, in part, by the National Science Foundation through awards NSF ACI--1709727 and NSF CCF--1613905, AFOSR through the award AFOSR DDDAS 15RT1037, and by DOE ASCR through the award DE--SC0021313. The last author was supported by the Research Council of Norway and the companies AkerBP, Wintershall--DEA, V{\aa}r Energy, Petrobras, Equinor, Lundin and Neptune Energy, through the Petromaks--2 project (280473) DIGIRES (\url{http://digires.no}). \end{acknowledgments} \section*{References}
\subsubsection*{\textbf{A. Main Theorems}} We recall the main object of this paper: the scaled Boltzmann equation \eqref{Boltzmann} under the scaling \eqref{scale} \begin{equation} \label{eqtn_F} \e \partial_t F^\e + {v}\cdot \nabla_x F^\e = \frac{1}{\kappa \e } Q(F^\e ,F^\e ) \ \ \text{in} \ [0,T] \times \mathbb{T}^2 \times \mathbb{R}^3. \end{equation} Throughout this paper, the spatial and velocity variables belong to the 2D periodic domain and the 3D whole space, respectively: \begin{align} x &= (x_1,x_2) \in \mathbb{T}^2 := \left[- \frac{1}{2},\frac{1}{2}\right] \times\left[- \frac{1}{2},\frac{1}{2}\right] \ \text{with the periodic boundary}, \label{T2} \\ v &= (\munderbar{v}, v_3) :=(v_1,v_2,v_3) \in \mathbb{R}^3 . \label{bar v} \end{align} The existence and uniqueness of solutions to the Boltzmann equation at a fixed scaling have been extensively studied in \cite{GrSt, Guo2002, Guo2003}; for the initial-boundary value problem, see \cite{Guo2010, KL2018_1, KL2018_2}; for singularity formation, \cite{K2011}; for boundary regularity estimates, \cite{GKTT2017, CKL2019}; for non-equilibrium steady states, \cite{EGKM}. For weak solutions, we refer to \cite{DiL_B, GS2004} and the references therein. As the main quantities in the hydrodynamic limit, we are interested in the following observables and their convergence toward their counterparts in the fluid: \begin{definition}[Boltzmann's macroscopic velocity and vorticity] \begin{equation} \begin{split}\label{vorticity_B} u^\e_B(t,x) &= \frac{1}{\e} \int_{\mathbb{R}^3} (F^\e(t,x,v) - M_{1,0,1} (v)) \munderbar{v} \mathrm{d} v,\\ \o^\e_B(t,x) &:= \nabla^\perp \cdot u^\e_B(t,x) = \Big(-\frac{\partial}{\partial x_2}, \frac{\partial}{\partial x_1}\Big) \cdot u^\e_B(t,x) .
\end{split} \end{equation} \end{definition} In 2D, the incompressible Euler equations \eqref{Euler_eqtn} have the vorticity formulation: \begin{align} \partial_t \o + u \cdot \nabla \o =0 \ \ &\text{in} \ [0,T] \times \mathbb{T}^2, \label{vorticity}\\ u =- \nabla^\perp ( -\Delta)^{-1} \o \ \ &\text{in} \ [0,T] \times \mathbb{T}^2 ,\label{BS}\\ \o|_{t=0} = \o_0 \ \ &\text{in} \ \mathbb{T}^2 .\label{vorticity_initial} \end{align} We will present the Biot-Savart formula of \eqref{BS} in the periodic box $\mathbb{T}^2$ at \eqref{BS_periodic}. When a velocity field is Lipschitz continuous, there exists a Lagrangian flow $X (s;t,x)$ solving \begin{equation} \label{dX} \frac{d}{ds} X(s;t,x) = u(s, X(s;t,x)),\ \ X(s;t,x)|_{s=t}=x. \end{equation} Then a smooth solution of the vorticity equation \eqref{vorticity}-\eqref{vorticity_initial} is given by \begin{equation} \begin{split}\label{Lag_sol} \o(t,x) = \o_0 (X(0;t,x)), \ \ u(t,x) = -\nabla^\perp (-\Delta)^{-1} \o (t,x). \end{split} \end{equation} Outside the smooth context, a general notion of Lagrangian flow has been introduced: \begin{definition}[\cite{DiL_ODE,CDe}]\label{rLf} Let $u \in L^1([0,T] \times \mathbb{T}^2; \mathbb{R}^2)$. A map $X: [0,T] \times \mathbb{T}^2 \rightarrow \mathbb{T}^2$ is a regular Lagrangian flow of \eqref{dX} if and only if for almost every $x \in \mathbb{T}^2$ and for any $t \in [0,T]$, the map $s \in [0,t] \mapsto X(s;t,x) \in \mathbb{T}^2$ is an absolutely continuous integral solution of \eqref{dX}; and there exists a constant $\mathfrak{C}>0$ such that for all $(s, t) \in [0,t] \times [0,T]$ there holds \begin{equation} \label{compression} \int_{\mathbb{T}^2} \phi (X(s;t,x)) \mathrm{d} x \leq \mathfrak{C} \int_{\mathbb{T}^2} \phi (x) \mathrm{d} x,\end{equation} for every measurable function $\phi: \mathbb{T}^2 \rightarrow [0,\infty]$.
\end{definition} For a given regular Lagrangian flow to \eqref{dX}, we can define the \textit{Lagrangian solution} $(u, \o)$ along the regular Lagrangian flow as in \eqref{Lag_sol}. In fact, the existence and uniqueness (for a given $u$) of the regular Lagrangian flow is proved in \cite{DiL_ODE,CDe,BC2013} as long as \eqref{BS} holds while $\o \in L^\mathfrak{p}$ for $\mathfrak{p} \geq 1$. \textit{Our first theorem is about the convergence of $\o_B^\e$ to the Lagrangian solution $\o$, when the vorticities merely belong to $L^\mathfrak{p} (\mathbb{T}^2)$ for $\mathfrak{p} <\infty$.} \begin{theorem}[Informal statement of Theorem \ref{main_theorem1}: Strong Convergence]\label{main_theorem1_summary} Let $T>0$ be arbitrary and let $(u_0, \o_0) \in L^2 (\mathbb{T}^2) \times L^\mathfrak{p} (\mathbb{T}^2)$ for $\mathfrak{p} \geq 1$. Let $(u, \o) \in L^\infty ((0, T); L^2 (\mathbb{T}^2) \times L^\mathfrak{p} (\mathbb{T}^2))$ be a Lagrangian solution of the 2D incompressible Euler equations \eqref{vorticity}-\eqref{vorticity_initial} with initial data $(u_0, \o_0)$. Then we construct a family of solutions to the Boltzmann equation \eqref{eqtn_F} whose macroscopic velocity and vorticity $(u_B^\e, \o_B^\e)$ of \eqref{vorticity_B} converge to the Lagrangian solution. Moreover, we have \begin{equation} \notag \o_B^\e \rightarrow \o \ \ \text{strongly in } [0,T] \times \mathbb{T}^2. \end{equation} \end{theorem} \begin{remark}Uniqueness of the incompressible Euler equations in 2D is only known for vorticities with moderate growth of the $L^\mathfrak{p}$ norm as $\mathfrak{p} \rightarrow \infty$ by Yudovich \cite{Y1963,Y1995}. In some sense, we can view the theorem as a ``selection principle'' of a Lagrangian solution of the incompressible Euler equations from the Boltzmann equation. \end{remark} \begin{remark}Our proof relies neither on an inviscid limit result for the nonlinear Navier-Stokes equations (cf. \cite{JK2020}) nor on the higher order Hilbert expansion (cf.
the results by Guo \cite{Guo2006} and de Masi-Esposito-Lebowitz \cite{DEL}). The direct approach we develop in this paper is based on a stability analysis for both the Lagrangian solutions of the inviscid fluid and the Boltzmann solutions with a new corrector. \end{remark} \textit{Our second theorem is about the quantitative rate of convergence/stability of $\o_B^\e$ to $\o$, when the uniqueness of the fluid solution is guaranteed.} In \cite{Y1995}, Yudovich extended his uniqueness result for bounded vorticities \cite{Y1963} to the so-called localized Yudovich class, namely $ \o_0 \in Y_{\mathrm{ul}}^\Theta (\Omega)$ with a certain modulus of continuity for its velocity $u$. Here, \begin{equation} \notag \| \o \|_{Y_{\mathrm{ul} }^\Theta (\T^2) } := \sup_{1 \le \mathfrak{p} < \infty} \frac{ \| \o \|_{L^\mathfrak{p} (\T^2 ) } }{\Theta (\mathfrak{p}) } \ \ \text{for some } \Theta(\mathfrak{p}) \rightarrow \infty \ \text{as} \ \mathfrak{p} \rightarrow \infty. \end{equation} Here, we specify $\Theta:\mathbb{R}_{+} \rightarrow \mathbb{R}_{+}$: there exists $m \in \mathbb{Z}_{+}$ such that $ \Theta (\mathfrak{p}) = \prod_{k=1} ^m \log_k \mathfrak{p},$ for large $\mathfrak{p} >1$, where $\log_k \mathfrak{p}$ is defined inductively by $\log_0 \mathfrak{p} = 1$, $\log_1 \mathfrak{p} = \log \mathfrak{p}$, and $ \log_{k+1} \mathfrak{p} = \log \log_{k} \mathfrak{p}$. Also, we denote the inverse function of $\log_m (\mathfrak{p}) $ (defined for large $\mathfrak{p}$) by $e_m$. Finally, we note that $ \int_{e_m(1) } ^\infty \frac{\mathrm{d} \mathfrak{p}}{\mathfrak{p} \Theta (\mathfrak{p} ) } = \infty$, which turns out to be important for the uniqueness of the solution.
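As a purely illustrative aside (not part of the analysis; the helper names \texttt{log\_k} and \texttt{Theta} are ours), the iterated logarithm and the growth function $\Theta$ can be computed directly; for $m=1$ the integral above has the closed form $\log\log\mathfrak{p}$, which diverges, consistent with the displayed condition:

```python
import math

def log_k(k, p):
    """Iterated logarithm: log_0 p = 1, log_1 p = log p, log_{k+1} p = log(log_k p).

    Only meaningful for p large enough that every iterate stays positive."""
    if k == 0:
        return 1.0
    val = p
    for _ in range(k):
        val = math.log(val)
    return val

def Theta(p, m):
    """Theta(p) = prod_{k=1}^m log_k p, defined for large p."""
    return math.prod(log_k(k, p) for k in range(1, m + 1))
```

For instance, with $m=2$ one has $\Theta(e^e)= \log e^e \cdot \log\log e^e = e \cdot 1 = e$.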
\begin{theorem}[Informal statement of Theorem \ref{main_theorem2}: Rate of Convergence] \label{main_theorem2_summary} If we further assume $\o_0 \in Y_{\mathrm{ul} }^\Theta (\T^2)$ in addition to Theorem \ref{main_theorem1_summary}, then \begin{equation} \notag \o_B^\e \rightarrow \o \ \ \text{strongly in } [0,T] \times \mathbb{T}^2 \ \text{with an explicit rate}. \end{equation} \hide \eqref{vorticity_B} converges to the vorticity of the Euler equations \eqref{Lag_sol} $ \o_0 $ be a function with moderate growth of $L^p$ norm: $\| \o_0 \|_{L^\mathfrak{p}} \lesssim \prod_{k=1} ^m \log_k \mathfrak{p}$ (\eqref{Thetap}) for $\mathfrak{p} \in [1, \infty)$. Let $\o$ be the unique weak solution of 2D incompressible Euler equation with initial data $(u_0, \o_0)$ which propagates up to $T$, and $u$ is given by Biot-Savart law \eqref{BS}.Then there exists a family of solutions to Boltzmann equations \eqref{eqtn_F} whose bulk density, velocity, and temperature converges to $(1, u, 1)$ in the sense that $\frac{1}{\e} \left ( F^\e (t) - M_{1, \e u, 1} \right ) \rightarrow 0$ in a suitable space (see Theorem \ref{main_theorem2}). Furthermore, bulk velocity of $F^\e$ converges in $L^\infty (0, T; W^{1, \mathfrak{p}}(\mathbb{T}^2 ) )$ with an explicit rate (\eqref{conv:vorticity_B_rate}) depending on $\o_0$. Moreover, if $\o_0$ has additional regularity, the rate is uniform in $\o_0$ (\eqref{conv:vorticity_B_uniform}). \unhide \end{theorem} \hide Also, convolution on $\mathbb{T}^2$ is denoted by $*$, and it means $f*g (x) = \int_{\mathbb{T}^2} f(x-y) g(y) \mathrm{d} y$ as in \eqref{convolution}. Finally, we sometimes denote our domain $\mathbb{T}^2$ by $\mathbb{T}^2$. 
Consider the scaled Boltzmann equation \eqref{Boltzmann} with our scaling \eqref{scale}: The goal of this paper is to construct a family of solutions to \eqref{eqtn_F} such that In \cite{JK2020}, Jang-Kim develops a new perturbation analysis around the local Maxwellian at this scaling, and identify a growth integral factor (replacing $t/ \sqrt{\kappa}$ factor in the perturbation around global Maxwellian) which resembles the famous Beale-Kato-Majda result \cite{BKM1984}. On the other hand, the Euler scaling \eqref{scale} suffers a loss of scale from the corresponding macro-micro balance . Compared to the diffusive scaling, the hydrodynamic limit of the Euler scaling \eqref{scale} has two major difficulties. First, as the nonlinear perturbation is more singular Second, the incompressible Euler equations \cite{SR} However, when $u$ is not Lipschitz continuous \unhide \subsubsection*{\textbf{B. Novelties, difficulties and idea}} The major novelty of this paper is to establish the incompressible Euler limit \textit{at the level of vorticity} \textit{without using the inviscid limit} of the Navier-Stokes equations, in the vicinity of a \textit{macroscopic singularity} ($\o \notin L^\infty (\mathbb{T}^2)$). We study the convergence of Boltzmann's macroscopic vorticity toward the Euler vorticity, as interesting singular behavior, e.g. interfaces in vortex patches, can be observed only in a stronger topology of velocities. We believe this new approach will shed light on the validity of the Euler equations in a more direct fashion. A possible application would be a direct validity proof of Euler solutions from kinetic theory without relying on inviscid limit results. In addition, we are able to allow quite far-from-equilibrium initial data (see \eqref{Hilbertexpansion1}). There are two major difficulties in the proof.
First, \textit{the macroscopic solutions are singular} and their singularity appears as growth at the microscopic level (\cite{JK2020}): \begin{equation} \label{Lipschitz} \exp \Big(\int^t_0 \| \nabla_x u (s) \|_{L^\infty_x} \mathrm{d} s\Big). \end{equation} This factor becomes significantly difficult to control when we study the Boltzmann solutions close to the solution of the Euler equations, instead of the Navier-Stokes equations. The diffusion in the bulk velocity has a considerable magnitude, and causes a singular term due to the growth of \eqref{Lipschitz}. Second, \textit{the macro-micro scale balance is singular} in the Euler scaling. As the transport effect is weaker, this results in the lack of a scale factor of the hydrodynamic bound in the dissipation. In fact, neither the integrability gain in $L^p$ ($\hookleftarrow H^1_x$ in 2D) of \cite{EGKM2} nor the velocity averaging lemma \cite{glassey} is useful to control the singular nonlinearity. In addition, the perturbation equations suffer a loss of scale due to the commutator of spatial derivatives and the linearized operator around a local Maxwellian associated with macroscopic solutions. \hide \item However, if we introduce less expansion, we may not able to cancel out enough singular terms, which puts a serious obstacle in closing the estimate. Furthermore, we want to construct a sequence of Boltzmann solutions which are close to the solution of Euler equation, instead of Navier-Stokes equation: this is in particular desirable since we want to observe singular behavior of bulk velocity, without blurring from viscosity effect. However, the term which causes diffusion in the bulk velocity has a considerable magnitude, although not singular. Thus, we need an idea to overcome this effect. \unhide To overcome the difficulties, we devise a novel \textit{viscosity-canceling correction} in an asymptotic expansion of the scaled Boltzmann equations.
To handle the low regularity of fluid velocity fields, we regularize the initial data with scale $\beta$ and expand the Boltzmann equations around the local Maxwellian $M_{1, \e u^\beta, 1}$ associated with the Euler solution $u^\beta$ starting from $u^\beta_0$. As a first attempt, one may try a form of the standard Hilbert expansion: \begin{equation} \label{Hilbertexpansion1} \begin{split} M_{1, \e u^\beta, 1} + \e^2 p^\beta M_{1, \e u^\beta, 1} - \e^2 \kappa (\nabla_x u^\beta): \mathfrak{A} \sqrt{M_{1, \e u^\beta, 1}} + \e f_R \sqrt{M_{1, \e u^\beta, 1}}, \end{split} \end{equation} by matching to cancel the most singular terms. The Euler equation is in the hierarchy of $O(\e^2)$: it comes from $\e \partial_t M_{1, \e u^\beta, 1}$ and correctors. However, the third term of order $\e^2 \kappa$ introduces the viscosity contribution $-\e^2 \kappa \eta_0 \Delta_x u^\beta \cdot (v - \e u^\beta) M_{1, \e u^\beta, 1}$: comparing to $\e f_R \sqrt{M_{1, \e u^\beta, 1}}$, we see that if this term is not canceled, then it will drive the remainder to order $O(\e \kappa)$, which is dangerous. Note that this term is hydrodynamic, so we cannot rely on the coercivity provided by $L$: it provides additional $\e \sqrt{\kappa}$ smallness only for non-hydrodynamic terms. A simple but useful observation is that this term is still in a lower hierarchy than that of the Euler equation. Thus, if we introduce an additional corrector at the $\e \kappa$ level, we may cancel out the viscosity contribution. Of course one needs to be careful as we introduce an $\e \kappa$-size term to cancel out an $\e^2 \kappa$-size term! However, by carefully choosing the form of the $\e \kappa$-size corrector: \begin{equation} \label{F_e_inftro} F^\e = \eqref{Hilbertexpansion1} + \e \kappa \tilde{u}^\beta \cdot (v-\e u^\beta) M_{1, \e u^\beta, 1} + \e^2 \kappa \tilde{p}^\beta M_{1, \e u^\beta, 1}, \end{equation} we can actually fulfill our goal.
\begin{enumerate} \item $\e \kappa \tilde{u}\cdot (v-\e u^\beta) M_{1, \e u^\beta, 1 }$ is fully hydrodynamic, and therefore the most singular term coming from collision with the local Maxwellian vanishes. Then the largest term coming from collision is the collision of this corrector with itself, which is of size $\e \kappa$ but non-hydrodynamic. Thus it is in fact small (due to the $\e \sqrt{\kappa}$ gain for non-hydrodynamic terms: non-hydrodynamic source terms of size $\e \sqrt{\kappa}$ drive the remainder to order $O(\e \kappa)$). \item By imposing $\nabla_x \cdot \tilde{u} = 0$, we can cancel out the hydrodynamic part of $v \cdot \nabla_x (\tilde{u}\cdot (v-\e u^\beta) M_{1, \e u^\beta, 1 } )$, which is of order $\e \kappa$. Also, by introducing an additional corrector at the $\e^2 \kappa$ level, one can cancel out all hydrodynamic terms of the $\e^2 \kappa$ level by the evolution equation for $\tilde{u}$, including $\Delta_x u$. Therefore, the remaining hydrodynamic terms are of order $o(\e^2 \kappa)$ and the non-hydrodynamic terms are of order $O(\e \kappa)$, and both are small. \item The interaction of this corrector with the remainder turns out to be innocuous as well. \end{enumerate} It is worth remarking that in this corrector-based Hilbert expansion we do not need to set $\e = \kappa$ as in the usual Hilbert expansion \cite{DEL}: we only need $\e/\kappa^2 \rightarrow 0$. This is satisfactory, in the sense that a regime close to the Navier-Stokes regime (whose $\kappa$ vanishes slowly) should, philosophically, be more tractable; and indeed for such a regime we can allow a larger deviation from equilibrium. In addition, we note that this expansion in fact allows even more general data than \eqref{Hilbertexpansion1}: we have additional freedom in choosing $\tilde{u}_0$, so in principle a remainder with a certain part of size $\e \kappa$ is in fact admissible, while in \eqref{Hilbertexpansion1} all parts of the remainder should be of size $o(\e \kappa)$.
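To summarize the bookkeeping (this display is only a schematic restatement of \eqref{F_e_inftro} and of the orders discussed above): \begin{equation} \notag F^\e = \underbrace{M_{1, \e u^\beta, 1}}_{O(1)} + \underbrace{\e^2 p^\beta M_{1, \e u^\beta, 1}}_{O(\e^2)} - \underbrace{\e^2 \kappa (\nabla_x u^\beta): \mathfrak{A} \sqrt{M_{1, \e u^\beta, 1}}}_{O(\e^2 \kappa)} + \underbrace{\e \kappa \tilde{u}^\beta \cdot (v-\e u^\beta) M_{1, \e u^\beta, 1} + \e^2 \kappa \tilde{p}^\beta M_{1, \e u^\beta, 1}}_{\text{viscosity-canceling correctors}} + \underbrace{\e f_R \sqrt{M_{1, \e u^\beta, 1}}}_{\text{remainder of size } o(\e\kappa)}, \end{equation} so that the Euler dynamics sits at $O(\e^2)$, the viscosity contribution $-\e^2 \kappa \eta_0 \Delta_x u^\beta \cdot (v - \e u^\beta) M_{1, \e u^\beta, 1}$ is absorbed by the correctors, and only hydrodynamic terms of size $o(\e^2 \kappa)$ and non-hydrodynamic terms of size $O(\e \kappa)$ survive as sources for $f_R$.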
We believe that this new idea of correction would have many applications. \subsubsection*{Notations}For the sake of readers' convenience, we list notations used often in this paper. \begin{align} \partial \ : & \ \partial f = \ \partial_{x_1} f \text{ or } \partial_{x_2}f \label{pderivative} \\ \partial^s \ : & \ \partial^s f = \sum_{\alpha_1 + \alpha_2 \leq s} \partial_{x_1}^{\alpha_1} \partial_{x_2}^{\alpha_2}f \label{psderivative} \\ f*g \ : & \ f*g (x) := \int_{\mathbb{T}^2} f(x-y) g(y) \mathrm{d} y\label{convolution} \\ f*_{\mathbb{R}^2 }g \ : & \ f*_{\mathbb{R}^2 }g (x) = \int_{\mathbb{R}^2} f(x-y) g(y) \mathrm{d} y \label{convolution_R2} \\ ( \ \cdot \ ) _+ \ :& \ ( a )_+= \max \{a, 0\} \label{()+} \\ \log_+ \ :& \ \log _+ \hspace{-2pt}a = \max\{ \log a, 0\} \label{log+} \\ \lesssim \ : & \ a \lesssim b \ \text{means that there exists } C>0 \text{ such that } a \leq C b \label{lesssim}\\ a \simeq b \ : & \ \text{$a$ consists of an appropriate linear combination of the terms in $b$} \label{simeq}\\ [\![ \cdot , \cdot]\!] \ : & \ [\![ A, B]\!] g := A (Bg) -B (Ag) \ \ \text{(commutator)} \label{commutator} \\ \| \cdot \|_{L^p_\Box} \ : & \ \| f \|_{L^p_t} = \| f \|_{L^p (0,T)}, \ \| f \|_{L^p_x} = \| f \|_{L^p (\mathbb{T}^2)}, \ \| f \|_{L^p_v} = \| f \|_{L^p (\mathbb{R}^3)}\label{Lp} \\ \| \cdot \|_{L^p_x L^2_v} \ : & \ \|f \|_{L^p_x L^2_v} := \| f \|_{L^p(\mathbb{T}^2 ; L^2 (\mathbb{R}^3))} = \big \| \| f(x,v ) \|_{L^2(\mathbb{R}^3_v)} \big \|_{L^p(\mathbb{T}^2_x) } \label{mixedLp}\\ d_{\mathbb{T}^2} (x,y) \ : & \ \text{geodesic distance between $x$ and $y$ in $\mathbb{T}^2$, often abused as $|x-y|$} \end{align} \subsubsection*{Acknowledgements}The authors thank Juhi Jang and Theo Drivas for their interest in this project. This project is supported in part by the National Science Foundation under Grant No.
1900923 and 2047681 (NSF-CAREER), and Brain Pool program funded by the Ministry of Science and ICT through the National Research Foundation of Korea (2021H1D3A2A01039047). CK thanks Professor Seung Yeal Ha for the kind hospitality during his stay at the Seoul National University. The authors thank In-jee Jeong for his kind hospitality and support. \hide : therefore, instead of choosing initial data as a perturbation around the local Maxwellian $M_{1, \e u_0, 1}$, we choose initial data as a perturbation around the local Maxwellian $M_{1, \e u^\beta_0, 1}$, where $u^\beta_0$ is the initial data regularization of $u_0$ with scale $\beta$. In the hydrodynamic limit proof, we need a stability of the Euler solution under the perturbation of initial data, as well as control of remaining small terms, we can construct a sequence of Boltzmann solutions whose bulk velocity converges to Euler solution. It turns out that this simple idea works well: in the class of solution of Euler equation we consider, we have a certain stability, so we can prove that solution $u^\beta$ starting from $u^\beta_0$ converges to the solution $u$ from $u_0$. Also, for the estimate of remainders, introduction of regularization scale $\beta$ gives an additional freedom in our analysis: by sacrificing the speed of regularization convergence, we can control the size of higher derivatives appearing in the remainder equation. In addition, many weak solutions of fluid equations are interpreted as a limit of smooth solutions. In that regard, this initial data regularization approach is quite natural. \paragraph{\textbf{3. Viscosity-canceling correction}} The final piece of analysis is a Hilbert expansion particularly designed to keep source terms in remainder equation small.
At first place, one may try a Hilbert expansion of the form \begin{equation} \label{Hilbertexpansion1} \begin{split} F^\e &= M_{1, \e u^\beta, 1} + \e^2 f_2 \sqrt{M_{1, \e u^\beta, 1}} + \e f_R \sqrt{M_{1, \e u^\beta, 1}} \\ &= M_{1, \e u^\beta, 1} + \e^2 p^\beta M_{1, \e u^\beta, 1} - \e^2 \kappa (\nabla_x u^\beta): \mathfrak{A} \sqrt{M_{1, \e u^\beta, 1}} + \e f_R \sqrt{M_{1, \e u^\beta, 1}}, \end{split} \end{equation} by matching to cancel most singular terms. The Euler equation is in the hierarchy of $O(\e^2)$: it comes from $\e \partial_t M_{1, \e u^\beta, 1}$ and correctors. However, the third term of order $\e^2 \kappa$ introduces the viscosity contribution $-\e^2 \kappa \eta_0 \Delta_x u^\beta \cdot (v - \e u^\beta) M_{1, \e u^\beta, 1}$: and comparing fo $\e f_R \sqrt{M_{1, \e u^\beta, 1}}$, we see that if this term is not cancelled, then it will drive the remainder to order $O(\e \kappa)$ - which is dangerous. Note that this term is hydrodynamic, so we cannot rely on coercivity provided by $L$: it provides additional $\e \sqrt{\kappa}$ smallness only for non-hydrodynamic, i.e. perpendicular to hydrodynamic, terms. \paragraph{\textbf{2. Control by Energy}} To admit far-from-equilibrium initial data, we need to keep the characteristic size of remainder as large as possible. A heuristic calculation suggests that the size $o(\e \kappa)$ for the remainder is the threshold: if the remainder becomes of the size $O(\e \kappa)$, we lose the control for nonlinearity of the remainder equation. Thus, we aim to keep our characteristic size of remainder to be slightly smaller than $\e \kappa$. As the macro-micro scale balance is singular, we naturally pursue a control of the nonlinearity by the energy in $H^2_xL^2_v$ and raw dissipation. 
It turns out that this idea gives a sharp scalin : the commutator $[\![ \partial^s, L ]\!]$ between spatial derivatives and $L$ forces us to lose $\sqrt{\kappa}$ scale for each derivative, but we do not lose scale in nonlinearity for 2-dimensional domain. Thus, by setting initial data decaying to $0$ in arbitrary slow rate as $\e \rightarrow 0$, we can keep $L^2_x L^2_v$ norms of remainder and its derivatives small, provided that the source terms are also small, which is the main point of the next idea. Furthermore, we note that $H^2_x L^2_v$ suits very well with our goal to see convergence in a stronger topology: as we can control up to second derivatives of remainder small, we can keep our Boltzmann solution close to the local Maxwellian $M_{1, \e u^\beta, 1}$. Its zeroth and first derivatives may converge - they correspond to the velocity and vorticity. Its second derivatives may blow in general, which represents the formation of singular object, e.g. interfaces. \unhide \hide \begin{theorem} We construct a family of solutions $F^\e$ to (\ref{Boltzmann}) and (\ref{diffuse_BC}) with (\ref{scale}) such that for some $T\geq 0$ \begin{equation} \frac{F^\e(t,x,v)- M_{1, 0, 1} (v)}{\e} \rightarrow u_E(t,x) \ \ \text{strongly in } L^\infty_t (0,T)L^2_{x,v}. \end{equation} \end{theorem} In this note it is important to consider the incompressible Navier-Stokes equation with $\kappa$-viscosity \begin{align} \partial_t u+ u \cdot \nabla_x u - \kappa \eta_0 \Delta u + \nabla_x p &=0 \ \ \text{in} \ \T^2, \label{NS_k}\\ \nabla_x \cdot u &=0 \ \ \text{in} \ \T^2, \label{incomp} \end{align} where a physical constant $\eta_0$ will be fixed later. We introduce a vorticity \begin{equation} \label{vorticity} \omega = \text{curl } u= \nabla^\perp \cdot u := \partial_1 u_2 - \partial_2 u_1 . \end{equation} \unhide \hide Next we consider a key change of variable formula: [[INTRO]] \begin{lemma} Fix $N>0$. 
For any $s\geq s^\prime \geq 0$ and $(y,\underline u) \in \T^2 \times \{\underline u \in \mathbb{R}^2: |\underline u|<N\}$, the map \begin{equation} s^\prime \mapsto Y(s^\prime;s,y,\underline u) \in\T^2 \end{equation} is $m$-to-one, where $m \leq \max\big\{ \big(2N\frac{s-s^\prime}{\e } \big)^2, 1\big\}$ at most. There is a change of variables formula: for a non-negative function $A: z \in \T^2 \mapsto A(z) \geq 0$, \begin{equation} \label{COV} \int_{\{\underline u \in \mathbb{R}^2: |\underline u|< N\}} A( Y(s^\prime;s,y,\underline u)) \mathrm{d} \underline u \leq \max\Big\{ N^2, \frac{\e^2}{|s-s^\prime|^2} \Big\} \int_{\T^2} A( z ) \mathrm{d} z. \end{equation} \end{lemma} \begin{proof}It suffices to show \eqref{COV}, while others are obvious. From $\det \big(\frac{\partial Y (s^\prime; s, y, \underline u)}{\partial \underline u}\big)= \frac{|s-s^\prime|^2}{\e^2}$, \begin{equation} \begin{split}\notag & \int_{\{\underline u \in \mathbb{R}^2: |\underline u|< N\}} A( Y(s^\prime;s,y,\underline u)) \mathrm{d} \underline u\\ &\leq \max\Big\{ \big(2N\frac{s-s^\prime}{\e } \Big)^2 , 1 \big\}\int_{\T^2} A(z) \frac{\e^2}{|s-s^\prime|^2} \mathrm{d} z\\ &= \max\Big\{ N^2, \frac{\e^2}{|s-s^\prime|^2} \Big\} \int_{\T^2} A( z ) \mathrm{d} z. \end{split} \end{equation} \end{proof} \unhide \tableofcontents \section{Approximation of the Lagrangian solutions of the Euler equations} As discussed in the introduction, we would like to obtain a limit to weak solutions, which in general do not have enough regularity for the framework of the standard Hilbert expansion. Moreover, we want a convergent sequence in a stronger topology than $L^\mathfrak{p}$ for the velocity, as interesting singular behavior can be observed only in a stronger topology. However, control in a stronger topology requires more regularity for the velocity field as well.
A straightforward remedy for the low regularity of the fluid velocity field is to regularize the initial data: instead of choosing initial data as a perturbation around the local Maxwellian $M_{1, \e u_0, 1}$, we choose them as a perturbation around the local Maxwellian $M_{1, \e u^\beta_0, 1}$, where $u^\beta_0$ is the regularization of the initial data $u_0$ at scale $\beta$. Then, if one can prove stability of the Euler solution under perturbation of the initial data, as well as control of the remaining small terms, one can construct a sequence of Boltzmann solutions whose bulk velocity converges to the Euler solution. It turns out that this simple idea works well: in the class of solutions of the Euler equations we consider, we have a certain stability, so we can prove that the solution $u^\beta$ starting from $u^\beta_0$ converges to the solution $u$ from $u_0$. Also, for the estimate of the remainders, the introduction of the regularization scale $\beta$ gives additional freedom in our analysis: by sacrificing the speed of convergence of the regularization, we can control the size of the higher derivatives appearing in the remainder equation. In addition, many weak solutions of fluid equations are interpreted as limits of smooth solutions. In that regard, this initial data regularization approach is quite natural. \hide Here, there exists a smooth periodic function $R$ such that \begin{equation} \label{BS_periodic} u(x)= \mathbf{b} * w = \Big( \frac{1}{2\pi} \frac{ x ^\perp}{|x |^2} + \nabla^\perp _xR \Big) * \o . \end{equation} We record an explicit formula of $R$ in \eqref{reg_G}. {\color{red}[[CK: Add the form of $R$ in the appendix]]} \unhide \hide Here, with $\mathbf{b}(x ) = \frac{1}{2\pi} \frac{ x ^\perp}{|x |^2}$ for $x \in \mathbb{R}^2$, \begin{equation} \label{b_star} \mathbf{b} \star \o (x) := \int_{\mathbb{R}^2} \mathbf{b} (x-y) \o (y) \mathrm{d} y, \end{equation} where $\o (y)$ should be understood as a periodic function in $\{y \in \mathbb{R}^2\}$.
\unhide \subsection{Regularization} In our proof of the hydrodynamic limit from the Boltzmann equations, it is important to regularize the Lagrangian solutions of the Euler equation \eqref{vorticity}. We achieve this by regularizing the initial data using the standard mollifier. \hide Define a convolution on $\mathbb{T}^2$: \begin{equation} f*g (x) := \int_{\mathbb{T}^2} f(x-y) g(y) \mathrm{d} y. \label{convolution} \end{equation} \unhide \hide Finally, we sometimes denote our domain $\mathbb{T}^2$ by $\mathbb{T}^2$. \begin{align} \partial_t \o + u \cdot \nabla \o =0 \ \ &\text{in} \ [0,T] \times \mathbb{T}^2, \label{vorticity}\\ u =- \nabla^\perp ( -\Delta)^{-1} \o \ \ &\text{in} \ [0,T] \times \mathbb{T}^2 ,\label{BS}\\ \o|_{t=0} = \o_0 \ \ &\text{in} \ \mathbb{T}^2 .\label{vorticity_initial} \end{align} \unhide Let $\varphi \in C_c ^\infty (\mathbb{R}^2)$ be a smooth non-negative function with $\int_{\mathbb{R}^2} \varphi(x) dx = 1$ and $\varphi(x) = 0 $ for $|x- (0,0)|\geq \frac{1}{4}$. For $\beta \in (0,1)$, we define \begin{equation}\label{mollifier} \varphi^\beta(x) := \frac{1}{\beta^2} \varphi \big( \frac{x}{\beta}\big) \ \text{ for } \ x \in \left [ -\frac{1}{2}, \frac{1}{2} \right ]^2. \end{equation} Note that $\varphi^\beta$ can be extended periodically so that $\varphi^\beta \in C^\infty (\mathbb{T}^2)$ and $\int_{\mathbb{T}^2} \varphi^\beta (x) dx = 1$ as well. Also, $\varphi^\beta$ is supported on $B_{\frac{\beta}{4} } (0)$. Note that $\{\varphi^\beta\}_{\beta}$ are approximate identities: thus, for $1 \le p < \infty$ and $\psi \in L^p (\mathbb{T}^2)$, we have \begin{equation} \label{mol->1} \lim_{\beta \rightarrow 0} \| \varphi^\beta * \psi - \psi \|_{L^p (\mathbb{T}^2) } = 0. \end{equation} Note that we cannot expect a universal rate of convergence, independent of $\psi$, if $\psi$ is merely in $L^p(\mathbb{T}^2)$ or if $p=\infty$.
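To illustrate \eqref{mol->1} numerically (a sketch only: we work on a discrete grid, replace the $C^\infty$ bump by a $C^0$ quadratic bump for simplicity, and all names are ours), one can mollify a smooth $\psi$ and watch the error shrink as $\beta$ decreases:

```python
import numpy as np

def mollify(psi, beta):
    """Periodic mollification varphi^beta * psi on the unit torus [0,1)^2,
    with a C^0 quadratic bump supported on |x| <= beta/4 standing in for the
    C^infty mollifier (illustrative sketch only)."""
    n = psi.shape[0]
    x = np.arange(n) / n
    y = np.where(x < 0.5, x, x - 1.0)          # represent the torus as [-1/2, 1/2)
    Y1, Y2 = np.meshgrid(y, y, indexing="ij")
    r2 = (Y1**2 + Y2**2) / beta**2
    bump = np.maximum(1.0 / 16.0 - r2, 0.0)    # supported on |x| <= beta/4
    bump *= n**2 / bump.sum()                  # discrete normalization: total mass 1
    # periodic convolution via FFT; the 1/n^2 factor is the quadrature weight
    return np.real(np.fft.ifft2(np.fft.fft2(bump) * np.fft.fft2(psi))) / n**2
```

For a smooth $\psi$ the discrete error decreases as $\beta$ shrinks, whereas no such $\psi$-independent rate can hold for general $L^p$ data.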
However, if we have a certain regularity for $\psi$, we have the rate of convergence: for example, if $\psi \in W^{1,2} (\T^2)$, we have \begin{equation} \begin{split}\label{mol->1_rate} &\|\varphi^\beta * \psi - \psi \|_{L^2 (\mathbb{T}^2)} = \left ( \int_{\mathbb{T}^2} \left | \int_{\mathbb{T}^2} \varphi^\beta(y) ( \psi(x-y) - \psi(x) ) \mathrm{d} y \right |^2 \mathrm{d} x \right )^\frac{1}{2} \\ &\le \int_{\mathbb{T}^2} |\varphi^\beta (y) | \left ( \int_{\mathbb{T}^2} | \psi(x-y) - \psi(x) |^2 dx \right )^{\frac{1}{2} } dy \\ & \le C \int_{\mathbb{T}^2} |y| |\varphi^\beta (y) | dy \| \psi \|_{W^{1,2} (\mathbb{T}^2 ) } \le C \beta \| \psi \|_{W^{1,2} (\mathbb{T}^2 ) }. \end{split} \end{equation} We consider approximation solutions $(u^\beta, \o^\beta)$ for the mollified initial data: \begin{align} \partial_t \o^\beta + u^\beta \cdot \nabla \o^\beta =0 \ \ &\text{in} \ [0,T] \times \mathbb{T}^2, \label{vorticity_eqtn}\\ u^\beta = - \nabla^\perp (-\Delta)^{-1}\o^\beta \ \ &\text{in} \ [0,T] \times \mathbb{T}^2, \label{BS_beta}\\ \o^\beta|_{t=0}= \o^\beta_0 := \varphi^\beta * \o_0 \ \ &\text{in} \ \mathbb{T}^2 .\label{vorticity_beta} \end{align} Note that, for each $\beta\in (0,1)$ this problem \eqref{vorticity_eqtn}-\eqref{vorticity_beta} has a smooth (therefore unique) solution, which is the Lagrangian solution: \begin{align} &\o^\beta (t,x) = \o_0^\beta (X^\beta (0;t,x)),\label{Lag_beta}\\ & \frac{d}{ds} X^\beta (s;t,x) = u^\beta(s, X^\beta (s;t,x) ), \ \ X^\beta (s;t,x) |_{s=t} = x. \label{ODE:X_beta} \end{align} \begin{remark} If $u^\beta$ is obtained from \eqref{BS} with $\omega^\beta \in C^\infty (\mathbb{T})$, $u^\beta$ is incompressible and thus associated flow $X^\beta$ by \eqref{dX} satisfies \eqref{compression} with an equality and $\mathfrak{C} = 1$ (measure-preserving). \end{remark} We define a pressure as a unique solution of $- \Delta p^\beta = \text{div}(\text{div}(u^\beta \otimes u^\beta))$ with $\fint_{\T^2} p^\beta=0$. 
With this pressure, we have \begin{equation} \label{up} \begin{split} (\partial_t + u^\beta \cdot \nabla_x) u^\beta + \nabla_x p^\beta = 0 \ \ &\text{in} \ [0,T] \times \mathbb{T}^2, \\ \nabla_x \cdot u^\beta = 0 \ \ &\text{in} \ [0,T] \times \mathbb{T}^2, \\ u^\beta (0,x) = u_0^\beta (x) \ \ &\text{in} \ \mathbb{T}^2. \end{split} \end{equation} Also, we will consider the following auxiliary linear equation. \begin{equation} \label{uptilde} \begin{split} (\partial_t + u^\beta \cdot \nabla_x) \tilde{u}^\beta + \tilde{u}^\beta \cdot \nabla_x u^\beta + \nabla_x \tilde{p}^\beta - \eta_0 \Delta_x u^\beta =0 \ \ &\text{in} \ [0,T] \times \mathbb{T}^2,\\ \nabla_x \cdot \tilde{u}^\beta = 0 \ \ &\text{in} \ [0,T] \times \mathbb{T}^2, \\ \tilde{u}^\beta(0, x) = \tilde{u}_0 (x) \ \ &\text{in} \ \mathbb{T}^2. \end{split} \end{equation} Here $\eta_0$ is given by Lemma \ref{Ainn}. \subsection{Biot-Savart law in a periodic domain} In this part, we discuss the asymptotic form of the kernel of the Biot-Savart law, which gives $u$ from $\o$, and of the singular integral which gives $\nabla_x u$ from $\o$, in our setting of the periodic domain $\mathbb{T}^2 = \left [-\frac{1}{2}, \frac{1}{2} \right ]^2$. This is important since the compactness results we use, in particular \cite{BC2013}, are stated in the $\mathbb{R}^N$ setting: in particular, the key weak-$L^1$ estimate for $\nabla_x u$ relies on the form of the Calderon-Zygmund kernel of the Riesz transform. Therefore, we need asymptotic forms of the Biot-Savart kernel and of the Riesz transforms.
We start from \cite{CO2007}: \begin{proposition}[\cite{CO2007}, Lemma 1] The function $G$, defined on $\mathbb{R}^2 \simeq \mathbb{C}$ by \begin{equation} \label{Greentorus} \begin{split} G(z) &:= \mathrm{Im} \left ( \frac{|z|^2 - z^2}{-4 i } - \frac{z}{2} + \frac{i}{12} \right ) \\ & - \frac{1}{2\pi} \log \left | \left (1 - e (z) \right )\times \prod_{n=1} ^\infty \left (1 - e(n i+z ) \right )\left ( 1 - e(ni - z) \right ) \right |, \end{split} \end{equation} where $e(z) = e^{2 \pi i z}$, is $\mathbb{Z}^2$-periodic and is the Green's function with mass on $\mathbb{Z}^2$, that is, \begin{equation} \begin{split} - \Delta_x G(x) = \sum_{\zeta \in \mathbb{Z}^2} \delta( x - \zeta )- 1 \ \text{ for } x \in \mathbb{R}^2, \ \ \int_{\mathbb{T}^2} G(x) \mathrm{d} x = 0. \end{split} \end{equation} \end{proposition} In particular, the infinite product inside converges absolutely and $G$ is of the form \begin{equation} \label{Greenform} G(z) = \frac{|z|^2}{4} - \frac{1}{2\pi}\log |\mathfrak{h}(z) |, \end{equation} where $\mathfrak{h}$ is a holomorphic function with simple zeros exactly on $\mathbb{Z}^2$. For the sake of completeness, we briefly justify \eqref{Greenform}. We recall the following result from complex analysis: \begin{proposition}[Theorem 15.5 of \cite{RCA}] Suppose that $\{ g_n \}$ is a sequence of non-zero holomorphic functions on $\mathbb{C}$ such that \begin{equation} \sum_{n=1} ^\infty |1 - g_n (z) | \end{equation} converges uniformly on compact subsets of $\mathbb{C}$. Then the product \begin{equation} g(z) = \prod_{n=1} ^\infty g_n (z) \end{equation} converges uniformly on compact subsets of $\mathbb{C}$, and thus $g$ is holomorphic on $\mathbb{C}$. Furthermore, the multiplicity of $g$ at $z_0$ (i.e. the smallest nonnegative integer $k$ such that $\lim_{z \rightarrow z_0} \frac{g(z)}{(z-z_0)^k} \ne 0$) is the sum of multiplicities of $g_n$ at $z_0$.
\end{proposition} Now we see that $\mathfrak{h}(z)$ is the product of $1 - e(z) = 1 - e^{2 \pi i z}$, $1 - e(ni + z) = 1 - e^{-2 \pi n + 2 \pi i z}$, and $1 - e(ni - z) = 1 - e^{-2 \pi n - 2 \pi i z} $. Note that $| 1 - (1 - e (ni+z) ) |= e^{- 2 \pi n} |e(z)|$ and $| 1 - (1 - e(ni - z) ) | = e^{- 2 \pi n} |e(-z)|$, which are summable in $n$ uniformly on compact subsets, so the premise of the proposition is satisfied. Thus $\mathfrak{h} (z)$ is holomorphic. Furthermore, the zeros of $\mathfrak{h}$ are exactly the union of the zeros of $1 - e(z)$, which are $\{ m \, | \, m \in \mathbb{Z} \}$, the zeros of $1 - e (ni + z) $, which are $ \{ m - ni \, | \, m \in \mathbb{Z} \}$, and the zeros of $1 - e(ni - z) $, which are $\{ m + ni \, | \, m \in \mathbb{Z} \}$, for each integer $n \ge 1$. The union is exactly $\mathbb{Z}^2$. Moreover, the multiplicity of each point in $\mathbb{Z}^2$ is 1; in other words, all zeros are simple. Thus, on $\mathbb{R}^2 \setminus \mathbb{Z}^2$, $G$ is infinitely differentiable. Furthermore, let $\zeta \in \mathbb{Z}^2$. Then there exists a $\mathfrak{r}_\zeta > 0$ such that \begin{equation} \mathfrak{h} (z) = (z - \zeta) \mathfrak{H} (z), \end{equation} where $\mathfrak{H} (z) = \frac{\mathfrak{h} (z) }{z-\zeta}$ is a holomorphic function on $B_{\mathfrak{r}_\zeta} (\zeta)$ and $\inf_{z \in B_{\mathfrak{r}_\zeta} (\zeta)} |\mathfrak{H} (z) | \ge c_\zeta > 0$.
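As a further sanity check (not part of the argument), the defining formula \eqref{Greentorus} can be evaluated directly: the factors approach $1$ at the rate $e^{-2\pi n}$, so truncating the product at a modest level already gives machine precision. The sketch below, with an arbitrary truncation level and sample point, verifies numerically that $G$ is $\mathbb{Z}^2$-periodic and that $-\Delta G = -1$ away from the lattice.

```python
# Illustrative numerical check of the periodic Green's function (Greentorus).
# The truncation level nmax and the sample point z0 are arbitrary choices.
import cmath
import math

def e(z):
    return cmath.exp(2j * cmath.pi * z)

def G(z, nmax=30):
    poly = ((abs(z)**2 - z*z) / (-4j) - z/2 + 1j/12).imag
    prod = 1 - e(z)
    for n in range(1, nmax + 1):
        # factors are 1 + O(e^{-2 pi n}): the product converges very fast
        prod *= (1 - e(n*1j + z)) * (1 - e(n*1j - z))
    return poly - math.log(abs(prod)) / (2 * math.pi)

z0 = 0.31 + 0.22j                     # a point away from the lattice Z^2
per1 = abs(G(z0 + 1) - G(z0))         # periodicity in the real direction
per2 = abs(G(z0 + 1j) - G(z0))        # periodicity in the imaginary direction

h = 1e-3                              # five-point stencil for the Laplacian:
lap = (G(z0+h) + G(z0-h) + G(z0+1j*h) + G(z0-1j*h) - 4*G(z0)) / h**2
# away from Z^2, -Delta G = -1, i.e. the stencil should give approximately +1
```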
Therefore, we can rewrite \eqref{Greenform} in the following form and differentiate: for $z \in B_{\mathfrak{r}_\zeta} (\zeta)$, \begin{equation} \begin{split} G(z) &= -\frac{1}{2\pi} \log | z - \zeta| + \mathfrak{B}_\zeta (z), \\ \nabla G(z) &= -\frac{1}{2\pi} \frac{z - \zeta} {|z- \zeta |^2} + \nabla \mathfrak{B}_\zeta (z), \\ \nabla^2 G(z) &= \frac{1}{\pi} \frac{ (z-\zeta) \otimes (z-\zeta) - \frac{1}{2} |z-\zeta|^2 \mathbb{I}_2 }{|z-\zeta|^4} + \nabla^2 \mathfrak{B}_\zeta (z), \end{split} \end{equation} where $z = x+i y$ is identified with $(x,y)$, $\nabla = (\partial_x, \partial_y )$, and \begin{equation} \mathfrak{B}_\zeta (z) = \frac{|z|^2}{4} - \frac{1}{2\pi} \log | \mathfrak{H} (z) | \end{equation} is a smooth function (in $x, y$) all of whose derivatives are bounded. In particular, taking $\zeta = 0 = (0,0)$ and taking $\mathfrak{r} = \mathfrak{r}_0$, we have the following: \begin{proposition} \label{PeriodicKernel} Let $G$ be defined by \eqref{Greentorus}, so that the solution to the Poisson equation $- \Delta_x q = h - \int_{\mathbb{T}^2} h$ is given by $q = G * h$, and the Biot-Savart law by \begin{equation} \label{BS_periodic} u = \mathbf{b} * \o = \Big( \frac{1}{2\pi} \frac{ x ^\perp}{|x |^2} + \nabla^\perp _x \mathfrak{B} \Big) * \o .
\end{equation} Then there exists a $\mathfrak{r}>0$ such that $G, \nabla_x G, \nabla_x^2 G$ are smooth and bounded in $\mathbb{T}^2 \setminus B_{\mathfrak{r} } (0) = \left [ -\frac{1}{2}, \frac{1}{2} \right ]^2 \setminus B_{\mathfrak{r} } (0)$, and in $B_{\mathfrak{r} } (0)$ we have \begin{equation} \begin{split} G(x) &= -\frac{1}{2\pi} \log |x| + \mathfrak{B}(x), \quad x \in B_{\mathfrak{r}} (0) , \\ \nabla_x G(x) &= -\frac{1}{2\pi} \frac{x}{|x|^2} + \nabla_x \mathfrak{B}(x), \quad x \in B_{\mathfrak{r}} (0), \\ \nabla_x^2 G(x) &= \frac{1}{\pi} \frac{x\otimes x - \frac{1}{2} |x|^2 \mathbb{I}_2 }{|x|^4} + \nabla_x^2 \mathfrak{B}(x), \quad x \in B_{\mathfrak{r}} (0), \end{split} \end{equation} where $\nabla_x ^k \mathfrak{B}$ are bounded in $B_{\mathfrak{r}} (0) $ for all $k\ge 0$. \end{proposition} \subsection{Higher Regularity of the Approximations $(u^\beta,\o^\beta)$} In this section we establish regularity estimates for $(u^\beta, \o^\beta)$ solving \eqref{up} and \eqref{vorticity_eqtn}-\eqref{vorticity_beta} and for $(\tilde{u}^\beta, \tilde{p}^\beta)$ solving \eqref{uptilde}.
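All of the $\beta$-growth in the estimates below stems from the scaling of the mollifier norms, $\| \nabla^k \varphi^\beta \|_{L^q} \sim \beta^{-k - \frac{2(q-1)}{q}}$. As an illustrative numerical sanity check (the bump profile and the resolution are arbitrary choices, and this is not part of the proof), one can verify the case $k=1$, $q=2$, where the predicted scaling is $\beta^{-2}$:

```python
# Sanity check of the mollifier scaling ||grad phi^beta||_{L^2} ~ beta^{-2}
# (illustrative sketch; the bump profile and grid size are arbitrary).
import numpy as np

N = 1024
x = np.arange(N) / N - 0.5
X1, X2 = np.meshgrid(x, x, indexing="ij")

def grad_norm(beta, q=2):
    # phi^beta built from the standard bump exp(-1/(1-|x|^2)),
    # rescaled to have support radius beta/4 and integral 1
    r2 = (X1**2 + X2**2) / (beta / 4)**2
    phi = np.zeros_like(r2)
    inside = r2 < 1
    phi[inside] = np.exp(-1.0 / (1.0 - r2[inside]))
    phi /= phi.sum() / N**2                       # normalize: integral = 1
    gx, gy = np.gradient(phi, 1.0 / N)            # finite-difference gradient
    return ((gx**2 + gy**2)**(q / 2)).sum()**(1 / q) / N**(2 / q)

# measured scaling exponent: should be close to 1 + 2(q-1)/q = 2 for q = 2
rate = np.log2(grad_norm(0.15) / grad_norm(0.3))
```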
First we prove that, for $1 \leq r, p \leq \infty$, \begin{align} \| \o_0^\beta \|_{L^r (\mathbb{T}^2)} & \lesssim \beta^{-2 \big( \frac{1}{p} - \frac{1}{r}\big)_{\hspace{-2pt}+}} \| \o_0 \|_{L^p}, \label{growth:o_ell}\\ \| \nabla^k\o _0^\beta \|_{L^r (\mathbb{T}^2)} & \lesssim \beta^{- k -2 \big( \frac{1}{p} - \frac{1}{r}\big)_{\hspace{-2pt}+}} \| \o_0 \|_{L^p} .\label{growth:Do_ell} \end{align} By Young's inequality, for $1+1/r = 1/p+ 1/q$ and $r, p, q \in [1, \infty]$, \begin{equation} \notag \| \o_0^\beta \|_{L^r (\mathbb{T}^2)} \leq \| \varphi^\beta \|_{L^q (\mathbb{T}^2)} \| \o_0 \|_{L^p (\mathbb{T}^2)} \lesssim \beta^{-2 \big( \frac{1}{p} - \frac{1}{r}\big)} \| \o_0 \|_{L^p} \ \ \text{for } r \geq p, \end{equation} \begin{equation} \notag \| \nabla^k\o _0^\beta \|_{L^r (\mathbb{T}^2)} \leq \| \nabla^k \varphi^\beta \|_{L^q (\mathbb{T}^2)} \| \o_0 \|_{L^p (\mathbb{T}^2)} \lesssim \beta^{- k -2 \big( \frac{1}{p} - \frac{1}{r}\big)} \| \o_0 \|_{L^p} \ \ \text{for } r \geq p. \end{equation} For both, we have used \begin{equation} \notag \left(\int_{\mathbb{T}^2} | \nabla^k_x\varphi^\beta|^q \mathrm{d} x\right)^{1/q}= \left( \frac{\beta^2}{\beta^{q(2+k)}}\int_{\mathbb{T}^2} |\nabla^k \varphi (\frac{x}{\beta})|^q \mathrm{d} \frac{x_1}{\beta} \mathrm{d} \frac{x_2}{\beta} \right)^{1/q} = \beta^{ -k- \frac{ 2(q-1) }{q}} \| \nabla^k \varphi \|_{L^q (\mathbb{T}^2)}.
\end{equation} Using $|\mathbb{T}^2| =1$, we have \begin{align*} \| \o_0^\beta \|_{L^r (\mathbb{T}^2)} &\leq \| \o_0^\beta \|_{L^p (\mathbb{T}^2)} \lesssim \| \o_0 \|_{L^p (\mathbb{T}^2)} \ \ \text{for } p \geq r, \\ \| \nabla^k \o_0 ^\beta \|_{L^r (\mathbb{T}^2)} &\leq \| \nabla^k \o_0 ^\beta \|_{L^p (\mathbb{T}^2)} \lesssim \beta^{- k} \| \o _0 \|_{L^p (\mathbb{T}^2)} \ \ \text{for } p \geq r. \end{align*} Collecting the bounds, we conclude \eqref{growth:o_ell} and \eqref{growth:Do_ell}. \subsubsection{Bounds for $\| \nabla_x u^\beta (t) \|_{L^\infty(\mathbb{T}^2 ) }$} \begin{theorem}\label{thm:Dptu} Let $(u^\beta,\o^\beta)$ be the Lagrangian solution of \eqref{Lag_beta} supplemented with \eqref{ODE:X_beta} and \eqref{BS_beta}. For $\mathfrak{p} \in [1, \infty]$ and $\beta \ll \| \o_0 \|_{L^\mathfrak{p}}$, we have the following estimate for all $t\geq0$: \begin{align} \| \nabla u^\beta (t, \cdot )\|_{L^\infty} & \lesssim \mathfrak{Lip}(\beta, \mathfrak{p}) := \Big( \beta^{- \frac{2}{\mathfrak{p}}} \log_+ \hspace{-2pt} \frac{1}{\beta} \Big) \| \o_0 \|_{L^\mathfrak{p}} e^{ t C \beta^{- \frac{2}{\mathfrak{p}}}\| \o_0 \|_{L^\mathfrak{p}} } \ \text{ for some } C>1.\label{est:Du} \end{align} \end{theorem} We will estimate $\nabla_x X^\beta$ by applying Gronwall's inequality to the differentiated equation \eqref{ODE:X_beta}: \begin{equation} \label{dspX} \frac{d}{ds} \nabla_x X^\beta (s; t, x)= \nabla_x X^\beta(s; t, x) \cdot (\nabla_x u^\beta) (s, X^\beta(s; t, x) ). \end{equation} The initial condition for each purely spatial derivative can be derived from \eqref{dX}: \begin{equation} \label{initial:DX} \nabla_x X^\beta(s;t,x)|_{s=t} = id. \end{equation} We use the following version of Gronwall's inequality. \begin{lemma}[\cite{BCD2011}, Lemma 3.3]\label{lem:gronwall} Let $q$ and $z$ be two $C^0$ (resp. $C^1$) nonnegative functions on $[t_0, T]$. Let $\mathcal{G}$ be a continuous function on $[t_0, T]$.
Suppose that, for $t \in[t_0, T]$, \begin{equation} \frac{d}{dt} z(t) \le \mathcal{G}(t) z(t) + q(t). \end{equation} Then, for any time $t \in [t_0, T]$, we have \begin{equation} z(t) \le z(t_0) \exp \left (\int_{t_0} ^t \mathcal{G} (\tau) d\tau \right ) + \int_{t_0} ^t q(\tau) \exp \left ( \int_\tau ^t \mathcal{G} (\tau') d\tau ' \right ) d \tau. \end{equation} \end{lemma} \begin{lemma}\label{lem:pX} For any $r \in [1,\infty]$ and $0 \leq s \leq t$, \begin{align}\label{est:DX} \| \nabla_x X^\beta (s;t,\cdot ) \|_{L^r (\mathbb{T}^2)} \leq e^{\int_s ^t \|\nabla_x u^\beta (t') \|_{L^\infty_x} \mathrm{d} t' }. \end{align} \end{lemma} \begin{proof}The proof is immediate from Gronwall's inequality applied to \eqref{dspX}, together with the initial condition $\|\nabla X^\beta (t;t,x)\|_{L^r (\mathbb{T}^2)}= \|\nabla x\|_{L^r (\mathbb{T}^2)}= \|id\|_{L^r (\mathbb{T}^2)}=1$ from \eqref{initial:DX}. \end{proof} Next, using Morrey's inequality \begin{equation} \label{Sob_emb} W^{1, r} (\mathbb{T}^2) \subset C^{0,1- \frac{2}{r}} (\mathbb{T}^2) \ \ \text{ for } \ r>2, \end{equation} we estimate the H\"older seminorm of $\o^\beta$. \begin{lemma}\label{lem:estH:o} For $r\in (2,\infty) $, \begin{equation} \begin{split}\label{estH:o} [\o^\beta (t, \cdot)]_{C^{0,1- \frac{2}{r}} (\mathbb{T}^2)} \lesssim \beta^{-1-2 \big( \frac{1}{\mathfrak{p}} - \frac{1}{r} \big)_{\hspace{-2pt} +}} \| \o_0 \|_{L^\mathfrak{p} (\mathbb{T}^2)} e^{\left (1- \frac{2}{r} \right )\int_0 ^t \| \nabla_x u^\beta (t') \|_{L^\infty_x} \mathrm{d} t'}.
\end{split} \end{equation} \end{lemma} \begin{proof} We note that \begin{equation} \begin{split} [ \o^\beta (t, \cdot ) ]_{C^{0, 1 - \frac{2}{r} } (\mathbb{T}^2)} &= \sup_{x\ne y \in \mathbb{T}^2} \frac{|\o_0 ^\beta (X^\beta(0; t, x) ) - \o_0 ^\beta (X^\beta (0; t, y) ) | }{|x-y|^{\left (1 - \frac{2}{r} \right ) } } \\ &\le [\o_0 ^\beta ]_{C^{0, 1 - \frac{2}{r} } (\mathbb{T}^2)} \| \nabla_x X^\beta (0; t, \cdot ) \|_{L^\infty_x} ^{\left (1 - \frac{2}{r} \right )}, \end{split} \end{equation} where we slightly abused the notation by \begin{equation} \label{geodistance1} |x-y| = \mathrm{dist}_{\mathbb{T}^2} (x,y). \end{equation} Applying Morrey's inequality \eqref{Sob_emb} to $[\o_0 ^\beta]_{C^{0, 1 - \frac{2}{r} } (\mathbb{T}^2)}$ and applying \eqref{est:DX} gives the result. \end{proof} The following standard estimate is important in the proof: \begin{lemma} Let $u$ be obtained from $\o$ via the Biot-Savart law \eqref{BS_periodic}. Then, for any $\gamma>0$, \begin{equation} \label{potential1_k} \| \nabla_x u \|_{L^\infty (\T^2)} \lesssim 1+ \| \o \|_{L^1(\T^2)} + \| \o \|_{L^\infty(\T^2) } \log_+ ( [ \o]_{C^{0,\gamma} (\T^2)} ) . \end{equation}\hide\begin{equation} \label{potential_k} \| \nabla \partial^\alpha u \|_{L^\infty (\T^2)} \lesssim \| \partial^\alpha \o \|_{L^1(\T^2)} + \| \partial^\alpha \o \|_{L^\infty(\T^2) } \log \left(e+ \frac{[\partial^\alpha \o]_{C^{0,\gamma} (\T^2)}}{\| \partial ^\alpha \o \|_{L^\infty(\T^2)}} \right). \end{equation} \begin{equation} B\log \left( e+ \frac{A}{B} \right \lesssim B\log ( eA+ B) -B \log B \lesssim B \max ( \log_+ A, \log_+ B) \end{equation}\unhide\end{lemma} \begin{proof} The result is well known from potential theory (e.g. \cite{Rein}), so we just briefly sketch the proof. Assume that $\o \in L^1(\T^2) \cap C^{0,\gamma} (\T^2)$.
From \eqref{BS} and \eqref{BS_periodic}, for $R\geq d>0$, there exists $C_2>0$, depending only on the spatial dimension (two in our case), such that \begin{equation} \begin{split} \partial_{x_j} u_i (x) =& \ \int_{|x-y| \geq R} \partial_{j} \mathbf{b}_i (x-y) \o (y) \mathrm{d} y + \int_{d \leq |x-y| \leq R} \partial_{j} \mathbf{b}_i (x-y) \o (y) \mathrm{d} y \\ & + \int_{|x-y| \leq d}\partial_j \mathbf{b}_i (x-y) [ \o (y) - \o(x)] \mathrm{d} y +C_2 \delta_{i+1, j} \o(x) , \label{exp:Du} \end{split} \end{equation} where \begin{equation} \label{b_j} \partial_j \mathbf{b}_i (x-y) := \frac{1}{2\pi }\bigg( \frac{2 (x_{i+1} - y_{i+1}) (x_j - y_j)}{|x-y|^4} - \frac{\delta_{i+1, j}}{|x-y|^2}\bigg) + \partial_j \mathfrak{B}(x-y) . \end{equation} Here, the index $i+1$ should be understood modulo 2; $\delta_{i+1,j}=1$ if $i+1=j$ mod 2 and $\delta_{i+1,j}=0$ if $i+1\neq j$ mod 2. We bound $\eqref{exp:Du}$ as \begin{equation} \begin{split}\label{exp:Du_decomp} | \eqref{exp:Du}| & \leq \int_{|x-y|\geq R} \frac{4}{|x-y|^2} |\o (y)| \mathrm{d} y + \int_{d \leq |x-y|\leq R} \frac{4}{|x-y|^2} |\o (y)| \mathrm{d} y\\ &+ [\o]_{C^{0, \gamma} (\T^2)} \int_{|x-y| \leq d } \frac{4}{|x-y|^{2-\gamma}} \mathrm{d} y + C_2 |\o (x) | \\ & \lesssim R^{-2} \| \o \|_{L^1(\T^2)} + \ln \left(\frac{R}{d} \right) \| \o \|_{L^\infty(\T^2)} + d^\gamma [\o]_{C^{0, \gamma} (\T^2)} + \| \o \|_{L^\infty(\T^2)}. \end{split}\end{equation} We finalize the proof by choosing $R=1$ and $d=\min\big(1, [\o]_{C^{0,\gamma} (\T^2)}^{-1/\gamma}\big)$.
\end{proof} \begin{proof}[\textbf{Proof of Theorem \ref{thm:Dptu}}] To prove \eqref{est:Du}, we apply $\eqref{growth:o_ell}|_{r=1, \infty}$ and $\eqref{estH:o}|_{r>2}$ to \eqref{potential1_k} to conclude that \begin{equation} \begin{split}\label{est1:Du} &\| \nabla u^\beta (t, \cdot )\|_{L^\infty}/ \| \o_0 \|_{L^\mathfrak{p}}\\ &\lesssim 1 + \beta^{- \frac{2}{\mathfrak{p}}} \log_+ ( \beta^{-1 -2 (\frac{1}{\mathfrak{p}} - \frac{1}{r})_+} \| \o_0 \|_{L^\mathfrak{p}} e^{\int_0 ^t \| \nabla_x u^\beta (s) \|_{L^\infty_x} \mathrm{d} s } )\\ &\lesssim 1+ \beta^{- \frac{2}{\mathfrak{p}}} \Big\{ \log_+ \hspace{-2pt} \frac{1}{\beta} + \log_+ \hspace{-2pt} \| \o_0 \|_{L^\mathfrak{p}} + \int^t_0\| \nabla u^\beta(s, \cdot ) \|_{L^\infty} \mathrm{d} s\Big\}. \end{split}\end{equation} Applying Gronwall's inequality gives the result. \end{proof} \subsubsection{Bounds for $V(\beta)$} We introduce the following function of $\beta$, which controls the growth of the estimates for $(u^\beta, p^\beta, \tilde{u}^\beta, \tilde{p}^\beta )$: \begin{equation} \label{Vbeta} \begin{split} V(\beta) := \sum_{s_1 + s_2 \le 2, D \in \{\partial_t, \partial\} }& \| \partial^{s_1} D (u^\beta, \partial u^\beta, p^\beta, \tilde{u}^\beta, \tilde{p}^\beta) \|_{L^\infty_{t,x} } \\ & \times \left ( 1 + \| \partial^{s_2} (\tilde{u}^\beta, u^\beta ) \|_{L^\infty_{t,x} } \right ) (1 + \sum_{j\le 2} \| \partial^j u^\beta \|_{L^\infty_{t,x}} )^2. \end{split} \end{equation} This is a pointwise bound for all derivatives of $(u^\beta, p^\beta, \tilde{u}^\beta, \tilde{p}^\beta)$ appearing in the remainder estimates of Section \ref{sec:re}. We have the following explicit bound for $V(\beta)$. \begin{theorem} \label{Vbound} Suppose that $\o_0 \in L^\mathfrak{p} (\mathbb{T}^2)$. Then $$V(\beta) \lesssim \left (\| \tilde{u}_0 \|_{H^6(\mathbb{T}^2 ) } + TU(\beta, \mathfrak{p}) e^{T U(\beta, \mathfrak{p} ) } + U(\beta , \mathfrak{p}) \right )^6,$$ where $U(\beta, \mathfrak{p})$ is defined in \eqref{uutildeb}.
\end{theorem} \begin{proof} By Sobolev embedding, and the formulae for $p^\beta$, $\tilde{p}^\beta$, $\partial_t {u}^\beta$, and $\partial_t \tilde{u}^\beta$, we have the bound \begin{equation} \notag V(\beta) \lesssim \left (\| u^\beta \|_{L^\infty ((0, T); H^8(\mathbb{T}^2 ) )} +\| \tilde{u}^\beta \|_{L^\infty ((0, T); H^6(\mathbb{T}^2 ) )} \right ) ^6. \end{equation} We invoke the standard energy estimate, commutator estimate, and algebra property of $H^s(\mathbb{T}^2)$, $s > 1$: \begin{equation} \begin{split}\notag &\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d} t} \| \partial^8 u^\beta (t) \|_{L^2 (\mathbb{T}^2 ) }^2 \\ &\le \| \partial^8 u^\beta (t) \|_{L^2 (\mathbb{T}^2 ) } \| [\![ \partial^8, u^\beta \cdot \nabla_x ]\!] u^\beta \|_{L^2 (\mathbb{T}^2)} \lesssim \| \nabla_x u^\beta \|_{L^\infty (\mathbb{T}^2 ) }\| \partial^8 u^\beta (t) \|_{L^2 (\mathbb{T}^2 ) } ^2 , \\ &\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d} t} \| \partial^6 \tilde{u}^\beta (t) \|_{L^2 (\mathbb{T}^2 ) }^2\\ & \lesssim \| \partial^6 \tilde{u}^\beta (t) \|_{L^2 (\mathbb{T}^2) } \\ & \ \ {\tiny \times \bigg( \| [\![ \partial^6, u^\beta \cdot \nabla_x ]\!] \tilde{u}^\beta (t) \|_{L^2 (\mathbb{T}^2 ) } + \| \partial^6 \tilde{u}^\beta (t) \|_{L^2 (\mathbb{T}^2 ) } \| \partial^7 u^\beta (t) \|_{L^2 (\mathbb{T}^2 ) } + \| \partial^8 u^\beta (t) \|_{L^2 (\mathbb{T}^2 ) } \bigg)} \\ & \lesssim \| \partial ^8 u^\beta (t) \|_{L^2 (\mathbb{T}^2 )} \| \partial^6 \tilde{u}^\beta (t ) \|_{L^2 (\mathbb{T}^ 2 ) }^2 + \| \partial^8 u^\beta (t) \|_{L^2 (\mathbb{T} ^2 ) }^2.
\end{split} \end{equation} Therefore, we have \begin{equation} \label{uutildeb} \begin{split} \| u^\beta \|_{L^\infty ((0, T); H^8 (\mathbb{T}^2 ) )} &\lesssim e^{\| \nabla_x u^\beta \|_{L^\infty ((0, T) \times \mathbb{T}^2 )}} \| u^\beta (0) \|_{H^8 (\mathbb{T}^2 )} \\ &\lesssim e^{\mathfrak{Lip} (\beta, \mathfrak{p})} \beta^{-8 - 2 \left ( \frac{1}{\mathfrak{p}} - \frac{1}{2} \right )_+} \| \o_0 \|_{L^\mathfrak{p}} =: U(\beta, \mathfrak{p}) , \\ \| \tilde{u}^\beta \|_{L^\infty ((0, T); H^6 (\mathbb{T}^2 ) )} &\lesssim e^{\|u^\beta \|_{L^\infty ((0, T); H^8 (\mathbb{T}^2 ) ) } T } \left ( \| \tilde{u}_0 \|_{H^6(\mathbb{T}^2 ) } + T \|u^\beta\|_{L^\infty ((0, T); H^8 (\mathbb{T}^2 ) ) } \right ) \\ & \lesssim (\| \tilde{u}_0 \|_{H^6(\mathbb{T}^2 ) } + TU(\beta, \mathfrak{p}) ) e^{T U(\beta, \mathfrak{p} ) }. \end{split} \end{equation}\end{proof} \section{Hilbert-type Expansion with Viscosity-canceling corrector} \subsection{Formulation around a local Maxwellian} We denote the local Maxwellian corresponding to $(1 , \e u^\beta ,1 )$ by \begin{equation} \label{mu_e} \mu : = M_{1 , \e u^\beta , 1 }. \end{equation} We try to construct a family of solutions $F^\e$ in the form \begin{equation} \label{F_e} F^\e = \mu+ \e^2 p^\beta \mu - \e^2 \kappa (\nabla_x u^\beta) : \mathfrak{A} \sqrt{ \mu} + \{\e \kappa \tilde{u}^\beta \cdot (v-\e u^\beta) + \e^2 \kappa \tilde{p}^\beta\} \mu + \e f_R \sqrt{ \mu}, \end{equation} where $p^\beta$, $\tilde{u}^\beta$, and $\tilde{p}^\beta$ satisfy \eqref{up} and \eqref{uptilde}, and $\mathfrak{A}$ will be defined in \eqref{AB}.
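As a quick sanity check of the normalization in \eqref{mu_e}: with the standard normalization $M_{1, \e u^\beta, 1}(v) = (2\pi)^{-3/2} e^{-|v - \e u^\beta|^2/2}$, the local Maxwellian indeed carries density $1$, bulk velocity $\e u^\beta$, and temperature $1$, i.e. $\int \mu \, \mathrm{d} v = 1$, $\int v \mu \, \mathrm{d} v = \e u^\beta$, and $\int |v - \e u^\beta|^2 \mu \, \mathrm{d} v = 3$. The quadrature sketch below (with illustrative values of $\e$ and $u^\beta$; not part of the argument) confirms these moments.

```python
# Quadrature check of the moments of the local Maxwellian M_{1, eps*u, 1}.
# eps and u are illustrative placeholders for epsilon and u^beta(t, x);
# the velocity cutoff and grid size are arbitrary choices.
import numpy as np

eps = 0.05
u = np.array([0.3, -0.1, 0.2])
v = np.linspace(-8.0, 8.0, 101)
dv = v[1] - v[0]
V = np.stack(np.meshgrid(v, v, v, indexing="ij"))    # shape (3, n, n, n)
C = V - eps * u[:, None, None, None]                 # v - eps*u
mu = (2 * np.pi)**-1.5 * np.exp(-(C**2).sum(axis=0) / 2)

mass = mu.sum() * dv**3                              # expect 1
momentum = (V * mu).sum(axis=(1, 2, 3)) * dv**3      # expect eps*u
energy = ((C**2).sum(axis=0) * mu).sum() * dv**3     # expect 3
```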
Also, we impose the following assumptions on the relative magnitudes of $\e$, $\kappa = \kappa(\e)$, and $\beta = \beta(\e) $: \begin{equation} \label{ekappabeta} \begin{split} \lim_{\e \rightarrow 0} \frac{\e}{\kappa^2} = 0, \\ \lim_{\e \rightarrow 0} \kappa^{\frac{1}{4}} V(\beta ) = 0, \\ \lim_{\e \rightarrow 0} \kappa^{\frac{1}{2} } e^{2 \mathbf{C}_0 T \| \nabla_x u^\beta \|_{L^\infty ((0, T) \times \mathbb{T}^2 ) } ^2 } = 0, \end{split} \end{equation} where $\mathbf{C}_0$ is specified in Section \ref{sec:re}. We define \begin{equation} \label{L_Gamma} L f = \frac{-2}{\sqrt{\mu }} Q(\mu , \sqrt{\mu }f) ,\ \ \Gamma (f,g) = \frac{1}{ \sqrt{\mu}} Q(\sqrt{\mu}f, \sqrt{\mu}g) . \end{equation} By the collision invariance, the null space of $L$, denoted by $\mathcal{N}$, is spanned by the five orthonormal basis elements $\{\varphi_i \sqrt{\mu }\}_{i=0}^4$ with \hide \begin{equation} \label{basis} \varphi_0 := \frac{1}{\sqrt{1+ \delta \sigma}} , \ \ \ \varphi_i: = \frac{1}{\sqrt{1+ \delta \sigma}} \frac{v_i -\delta u_i }{\sqrt{1+ \delta \theta}} \ \ \text{for} \ i=1,2,3 , \ \ \ \varphi_4: = \frac{1}{ \sqrt{1+ \delta \sigma}} \frac{ \big|\frac{v-\delta u}{\sqrt{1+ \delta \theta}}\big|^2-3}{\sqrt{6}}. \end{equation}\unhide \begin{equation} \label{basis} \begin{split} &\varphi = (\varphi_0, \varphi_1, \varphi_2, \varphi_3, \varphi_4), \\ &\varphi_0 := 1 , \ \ \ \varphi_i: = {v_i -\e u_i^\beta } \ \ \text{for} \ i=1,2,3 , \ \ \ \varphi_4: = \frac{ | {v-\e u^\beta } |^2-3}{\sqrt{6}}. \end{split} \end{equation} We define $\mathbf{P}$, the $L^2_v$-projection onto $\mathcal{N}$, as \begin{equation} \label{P} \begin{split} P g&:= (P_0 g, P_1 g, P_2 g, P_3 g, P_4 g ) , \ \ {P}_j g:= \int_{\mathbb{R}^3} g \varphi_j \sqrt{\mu } \mathrm{d} v \ \ \text{for } j=0,1, \cdots,4,\\ \mathbf{P} g&:= \sum_{j=0}^4 ( {P}_j g) \varphi_j \sqrt{\mu } = Pg \cdot \varphi \sqrt{\mu}.
\end{split} \end{equation} We record the exact forms of $L$ and $\Gamma$ for later use. The calculation is due to Grad \cite{G1963}; one can also consult \cite{glassey} for details of the derivation. The exact formulae below are excerpted from \cite{JK2020}: for certain positive constants $c_1, c_2, c_3$, \begin{equation} \label{Lgammaform} \begin{split} Lf(v) &= \nu f (v) - Kf (v) = \nu(v) f(v) - \int_{\mathbb{R}^3}\mathbf{k}(v, v_*) f(v_*) \mathrm{d} v_*, \\ \nu(v) &= c_1 \left ( (2 |v-\e u^\beta | + \frac{1}{|v-\e u^\beta |} ) \int_0 ^{|v-\e u^\beta | } e^{-\frac{z^2}{2}} \mathrm{d} z + e^{-\frac{|v-\e u^\beta | ^2 }{2 }} \right ), \\ \mathbf{k} (v, v_*) &= c_2 |v-v_* | e^{-\frac{|v-\e u^\beta |^2 + |v_* - \e u^\beta |^2 }{4}} - \frac{c_3}{|v-v_*| }e^{-\frac{1}{8} |v-v_*|^2 - \frac{1}{8} \frac{(|v-\e u^\beta |^2 - |v_* - \e u^\beta | ^2 )^2}{|v-v_* |^2 } }, \\ \Gamma (f, g) ( v) &= \int_{\mathbb{R}^3} \int_{\mathbb{S}^2} |(v- v_*) \cdot \o | \sqrt{\mu (v_*) } (f(v')g(v_*') + g(v')f(v_*') ) \mathrm{d} \o \mathrm{d} v_* \\ &- \int_{\mathbb{R}^3} \int_{\mathbb{S}^2} |(v- v_*) \cdot \o| \sqrt{\mu (v_* ) } ( f(v) g(v_*) + g(v) f(v_*) ) \mathrm{d} \o \mathrm{d} v_*, \end{split} \end{equation} where $v^\prime = v- ((v-v_*) \cdot \o) \o$ and $v_*^\prime = v_*+ ((v-v_*) \cdot \o) \o$. Here $\nu$, $\mathbf{k}$, and $\Gamma$ also depend on $x$ and $t$ in a straightforward manner, that is, $Lf(x, t, v)$ and $\Gamma (f, g)(x, t, v)$ depend on $f(x, t, \cdot)$, $g(x, t, \cdot)$, and $u^\beta(x, t)$; we omit this dependence for the sake of simplicity.
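Up to the constant $c_1$, and setting $\e u^\beta = 0$ for simplicity, the collision frequency $\nu$ in \eqref{Lgammaform} is the hard-sphere frequency $\int_{\mathbb{R}^3} \int_{\mathbb{S}^2} |(v - v_*) \cdot \o| \mu(v_*) \, \mathrm{d} \o \, \mathrm{d} v_* = 2\pi \int_{\mathbb{R}^3} |v - v_*| \mu(v_*) \, \mathrm{d} v_*$; in particular $\nu(v) \asymp 1 + |v|$, the equivalence underlying the $\nu^{1/2}$-weighted norms below. A quadrature sketch of this linear growth (the grid resolution and velocity cutoff are arbitrary choices; not part of the argument):

```python
# Quadrature sketch of the hard-sphere collision frequency with eps*u^beta = 0:
# nu(v) = 2*pi * int |v - v_*| mu(v_*) dv_*, using int_{S^2} |w.omega| domega
# = 2*pi*|w|. Checks the linear growth nu(v) ~ 1 + |v|.
import numpy as np

z = np.linspace(-8.0, 8.0, 81)
dz = z[1] - z[0]
Z1, Z2, Z3 = np.meshgrid(z, z, z, indexing="ij")
maxw = (2 * np.pi)**-1.5 * np.exp(-(Z1**2 + Z2**2 + Z3**2) / 2)

def nu(a):
    # evaluate at v = (a, 0, 0) (rotational symmetry: nu depends only on |v|)
    dist = np.sqrt((a - Z1)**2 + Z2**2 + Z3**2)
    return 2 * np.pi * (dist * maxw).sum() * dz**3

nu0, nu3, nu6 = nu(0.0), nu(3.0), nu(6.0)
# doubling |v| from 3 to 6 should roughly double nu, consistent with nu ~ 1+|v|
```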
Also, we define ${}_{\partial^s} L$ and ${}_{\partial^s} \Gamma$ for $s \ge 1$: \begin{equation} \label{psLGamma} \begin{split} {}_{\partial^s} L f (v) &= \partial^s (\nu ) (v) f(v) - \int_{\mathbb{R}^3} \partial^s (\mathbf{k})(v, v_*) f(v_*) \mathrm{d} v_*, \\ {}_{\partial^s} \Gamma (f, g) ( v) &= \int_{\mathbb{R}^3} \int_{\mathbb{S}^2} |(v- v_*) \cdot \o | \partial^s(\sqrt{\mu (v_*) }) (f(v')g(v_*') + g(v')f(v_*') ) \mathrm{d} \o \mathrm{d} v_* \\ &- \int_{\mathbb{R}^3} \int_{\mathbb{S}^2} |(v- v_*) \cdot \o| \partial^s (\sqrt{\mu (v_* ) } ) ( f(v) g(v_*) + g(v) f(v_*) )\mathrm{d} \o \mathrm{d} v_*. \end{split} \end{equation} We list standard results which will be used later in this section for the sake of readers' convenience. First we note that \begin{equation} \label{Lnonhydro} Q(\mu, \mu)=0=\mathbf{P}L=L \mathbf{P}=\mathbf{P} \Gamma, \end{equation} from the collision invariance. \begin{lemma}[\cite{EGKM2,Guo2006,Guo2002}] Suppose that \eqref{ekappabeta} holds. Then \label{lemma_L} \begin{equation} \label{est:L} \begin{split} &\| \nu^{-1/2}L f \|_{L^2(\T^2 \times \mathbb{R}^3)} \lesssim \| \sqrt{\nu} (\mathbf{I} - \mathbf{P}) f \|_{L^2(\T^2 \times \mathbb{R}^3)}, \\ &\| \nu^{\frac{1}{2}} (\mathbf{I} - \mathbf{P} ) f \|_{L^2_v }^2 \lesssim \left | \int Lf(v) f(v) \mathrm{d} v \right | , \\ &\left | \int {}_{\partial^s} L f(v) g(v) \mathrm{d} v \right | \\ &\lesssim \e \| \partial^s u^\beta \|_{L^\infty_{t,x} } \left (\| \mathbf{P} f \|_{L^2_v} + \| \nu^{\frac{1}{2}} (\mathbf{I} - \mathbf{P} ) f \|_{L^2_v} \right ) \left (\| \mathbf{P} g \|_{L^2_v} + \| \nu^{\frac{1}{2}} (\mathbf{I} - \mathbf{P} ) g \|_{L^2_v} \right ), \end{split} \end{equation} \begin{equation} \label{est:Gamma} \begin{split} & \left | \int \Gamma ( f, g) h \mathrm{d} v \mathrm{d} x \mathrm{d} t \right |\\ & \lesssim \int \left [ \left ( \| \mathbf{P} f \|_{L^2_v} + \|\nu^{\frac{1}{2} } (\mathbf{I} - \mathbf{P} ) f\|_{L^2_v} \right ) \| g \|_{L^2_v} \right. \\ & \ \ \ \ \ \ \ \ \ + \left. 
\left ( \| \mathbf{P} g \|_{L^2_v} + \| \nu^{\frac{1}{2} } (\mathbf{I} - \mathbf{P} ) g \|_{L^2_v} \right ) \| f \|_{L^2_v} \right ] \| \nu^{\frac{1}{2} } (\mathbf{I} - \mathbf{P} ) h \|_{L^2_v} \mathrm{d} x \mathrm{d} t, \\ & \left | \int {}_{\partial^s} \Gamma ( f, g) h \mathrm{d} v \mathrm{d} x \mathrm{d} t \right | \\ & \lesssim \e \| \partial ^s u \|_{L^\infty_{t,x} } \int \left [ \left ( \| \mathbf{P} f \|_{L^2_v} + \|\nu^{\frac{1}{2} } (\mathbf{I} - \mathbf{P} ) f \|_{L^2_v} \right ) \| g \|_{L^2_v} \right. \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + \left. \left ( \| \mathbf{P} g \|_{L^2_v} + \| \nu^{\frac{1}{2} } (\mathbf{I} - \mathbf{P} ) g \|_{L^2_v} \right ) \| f \|_{L^2_v} \right ] \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \times \left ( \| \mathbf{P} h \|_{L^2_v} + \| \nu^{\frac{1}{2} } (\mathbf{I} - \mathbf{P} ) h \|_{L^2_v} \right ) \mathrm{d} x \mathrm{d} t. \end{split} \end{equation} \hide \begin{equation} \label{est:Gamma} \begin{split} \| \nu^{-1/2} \Gamma (f,g) \|_{L^2((0,t) \times \T^2 \times \mathbb{R}^3)} \lesssim & \ \| wg \|_{L^\infty([0,t] \times \T^2 \times \mathbb{R}^3)}\| \sqrt{\nu} (\mathbf{I} - \P) f \|_{L^2 ((0,t) \times \T^2 \times \mathbb{R}^3)} % \\ & + \| w g \|_{L^2(0,t;L^\infty(\T^2 ))} \| f \|_{L^\infty(0,t;L^2(\T^2 \times \mathbb{R}^3 ))} . \end{split} \end{equation} \unhide \end{lemma} Next, we introduce a lemma illustrating the structure of higher derivatives of $Lf$. Recall the notation $[\![ \cdot , \cdot]\!]$ for the commutator \eqref{commutator}. \begin{lemma} \label{Lpcomm} For $s \ge 1$, $[\![ \partial^s, L ]\!] f$ is a linear combination, whose coefficient depends only on $s$, of the terms having one of the following forms: \begin{enumerate} \item ${}_{\partial^j} L (\mathbf{I} - \mathbf{P} ) \partial^{s-j} f$, where $1 \le j \le s$, \item $L \partial \cdots [\![\mathbf{P}, \partial ]\!] \cdots \partial f$, where $\partial \cdots [\![\mathbf{P}, \partial ]\!] 
\cdots \partial f$ is an application of $s-1$ $\partial$ and one $ [\![\mathbf{P}, \partial ]\!]$ at $j$-th order to $f$ ($0 \le j \le s$), or \item ${}_{\partial^j} L \partial \cdots [\![ \mathbf{P}, \partial ]\!] \cdots \partial f$, where $1 \le j \le s-1$, and $\partial \cdots [\![ \mathbf{P}, \partial ]\!] \cdots \partial f$ is an application of $s-j-1$ $\partial$ and one $ [\![\mathbf{P}, \partial ]\!]$ at $i$-th order to $f$ ($0 \le i \le s-j$). \end{enumerate} \end{lemma} \begin{proof} We proceed by induction on $s$. First we note that \begin{equation} \notag \begin{split} \partial (Lf) &= \partial L (\mathbf{I} - \mathbf{P}) f = {}_{\partial} L (\mathbf{I} - \mathbf{P} ) f + L \partial (\mathbf{I} - \mathbf{P} ) f \\ &= {}_{\partial} L (\mathbf{I} - \mathbf{P} ) f + L [\![ \mathbf{P}, \partial ]\!] f + L (\mathbf{I} - \mathbf{P} ) \partial f, \\ [\![ \partial, L ]\!] f &= {}_{\partial} L (\mathbf{I} - \mathbf{P} ) f + L [\![ \mathbf{P}, \partial ]\!] f, \end{split} \end{equation} which proves the claim for $s=1$. Next, for $s\ge 1$, we have \begin{equation} \notag [\![ \partial^{s+1}, L ]\!] f = \partial^{s+1} L f - L \partial^{s+1} f = \partial [\![\partial^s, L]\!] f + [\![ \partial, L ]\!] \partial^s f , \end{equation} and by the first step $[\![ \partial, L ]\!] \partial^s f $ consists of terms of the forms listed in the lemma. Also, application of $\partial$ to the terms of the second and third form of the lemma produces terms of the second and third form again, while application of $\partial$ to the first form produces \begin{equation} \notag \begin{split} \partial {}_{\partial^j} L (\mathbf{I} - \mathbf{P} ) \partial^{s-j} f &= {}_{\partial^{j+1}} L (\mathbf{I} - \mathbf{P} ) \partial^{s-j} f + {}_{\partial^j} L \partial (\mathbf{I} - \mathbf{P} ) \partial^{s-j} f \\ &= {}_{\partial^{j+1}} L (\mathbf{I} - \mathbf{P} ) \partial^{s-j} f + {}_{\partial^j} L [\![ \mathbf{P}, \partial ]\!]
\partial^{s-j} f + \cdots + {}_{\partial^j} L \partial^{s-j} [\![ \mathbf{P}, \partial ]\!] f \\ & + {}_{\partial^j} L (\mathbf{I} - \mathbf{P} ) \partial^{s-j+1} f, \end{split} \end{equation} which proves the claim. \end{proof} Also, we have the following straightforward estimate for $[\![\mathbf{P}, \partial ]\!] f$: \begin{lemma} \label{pPcomm} Suppose that \eqref{ekappabeta} holds. For $ s_1 + s_2 \le 1$, the following holds: \begin{equation} \begin{split}\notag [\![\mathbf{P}, \partial ]\!] f &= - \sum_{i=0} ^4 \langle f, \varphi_i \sqrt{\mu} \rangle_{L^2_v} \partial (\varphi_i \sqrt{\mu} ),\\ \| [\![\mathbf{P}, \partial ]\!] f \|_{L^2_v} &\lesssim \e \| \nabla_x u^\beta \|_{L^\infty_{t,x} } \| f \|_{L^2_v }, \\ \| \partial^{s_1} [\![\mathbf{P}, \partial ]\!] \partial^{s_2} f \|_{L^2_v} &\lesssim \e V(\beta) \| \partial^{s_1 + s_2} f \|_{L^2_v }. \end{split} \end{equation} \end{lemma} Next, we introduce anisotropic spaces: these will be key to our analysis. For $p \in [1, \infty]$, we recall the space $L^p(\mathbb{T}^2 ; L^2 (\mathbb{R}^3))$, defined by the norm $\| f \|_{L^p(\mathbb{T}^2 ; L^2 (\mathbb{R}^3))} $ in \eqref{mixedLp}. For $p, q \in [1, \infty]$, $L^q([0, T]; L^p(\mathbb{T}^2; L^2 (\mathbb{R}^3 ) ))$ is defined similarly. We have the following anisotropic interpolations: \begin{lemma}\label{anint} We have the following: \begin{enumerate} \item (Anisotropic Ladyzhenskaya) $\| f \|_{L^4_x L^2_v} \lesssim \| f \|_{L^2_x L^2_v} ^{\frac{1}{2} } \| \partial f \|_{L^2_x L^2_v} ^{\frac{1}{2} }$, and \item (Anisotropic Agmon) $\| f \|_{L^\infty_x L^2_v} \lesssim \| f \|_{L^2_x L^2_v} ^{\frac{1}{2} } \| \partial^2 f \|_{L^2_x L^2_v} ^{\frac{1}{2} }$. \end{enumerate} \end{lemma} \begin{proof} We only prove the former: the latter is derived in a similar manner.
\begin{equation} \begin{split}\notag \| f \|_{L^4_x L^2_v} &= \left ( \int_{\mathbb{T}^2} \left ( \int_{\mathbb{R}^3} |f(x,v)|^2 \mathrm{d} v \right )^{\frac{4}{2} } \mathrm{d} x \right )^{\frac{1}{2\cdot 2} } \le \left (\int_{\mathbb{R}^3} \left (\int_{\mathbb{T}^2} |f(x,v) |^4 \mathrm{d} x \right )^{\frac{1}{2}} \mathrm{d} v \right )^{\frac{1}{2}} \\ & = \left ( \int_{\mathbb{R}^3} \| f(\cdot ,v )\|_{L^4_x} ^2 \mathrm{d} v \right )^{\frac{1}{2}} \lesssim \left ( \int_{\mathbb{R}^3} \| f (\cdot ,v) \|_{L^2_x} \| \partial f (\cdot , v ) \|_{L^2 _x} \mathrm{d} v \right )^{\frac{1}{2}} \\ & \le \left ( \int_{\mathbb{R}^3} \int_{\mathbb{T}^2} |f(x,v)|^2 \mathrm{d} x \mathrm{d} v \right )^{\frac{1}{2\cdot 2}} \left ( \int_{\mathbb{R}^3} \int_{\mathbb{T}^2} |\partial f(x,v)|^2 \mathrm{d} x \mathrm{d} v \right )^{\frac{1}{2\cdot 2}} \\ &= \| f \|_{L^2_x L^2_v} ^{\frac{1}{2} } \| \partial f \|_{L^2_x L^2_v} ^{\frac{1}{2} }, \end{split} \end{equation} where we applied the Minkowski integral inequality in the first step, the usual Ladyzhenskaya inequality in the second, and H\"older's inequality in the last. \end{proof} From Lemma \ref{anint}, we have the following.
\begin{lemma} \label{nonhydroLp} \begin{equation} \notag \begin{split} &\| \nu^{\frac{1}{2} } (\mathbf{I} - \mathbf{P} ) f \|_{L^4_x L^2_v}\\ \lesssim& \ \e^{\frac{1}{2}} \| \nu^{\frac{1}{2} } (\mathbf{I} - \mathbf{P} ) f \|_{L^2_x L^2_v} ^{\frac{1}{2} } \\ & \ \ \times \left (\|\partial u^\beta \|_{L^\infty_{x} } \| f \|_{L^2_x L^2_v} + \| \e^{-1} \nu^{\frac{1}{2} } (\mathbf{I} - \mathbf{P}) \partial f \|_{L^2_x L^2_v} + V(\beta) \| \e^{-1} \nu^{\frac{1}{2} } (\mathbf{I} - \mathbf{P} ) f \|_{L^2_x L^2_v } \right )^{\frac{1}{2}}, \\ &\| \nu^{\frac{1}{2} } (\mathbf{I} - \mathbf{P} ) f \|_{L^\infty_x L^2_v}\\ \lesssim & \ \e^{\frac{1}{2}} \| \nu^{\frac{1}{2} } (\mathbf{I} - \mathbf{P} ) f \|_{L^2_x L^2_v} ^{\frac{1}{2} } \\ & \ \ \times \left [\| \partial u^\beta \|_{L^\infty_x} \| \partial f \|_{L^2_x L^2_v }+ \|\e^{-1} \nu^{\frac{1}{2} } (\mathbf{I} - \mathbf{P} ) \partial^2 f \|_{L^2_x L^2_v} \right. \\ & \ \ \ \ \ \ \ \left. + V(\beta) \left (\| \e^{-1} \nu^{\frac{1}{2}} (\mathbf{I} - \mathbf{P} ) f \|_{L^2_x L^2_v } + \|\e^{-1} \nu^{\frac{1}{2} } (\mathbf{I} - \mathbf{P}) \partial f \|_{L^2_x L^2_v } + \| f \|_{L^2_x L^2_v} \right ) \right ]^{\frac{1}{2} }. \end{split} \end{equation} \end{lemma} \begin{proof} We only give proof for the first inequality: the second inequality can be proved by a similar argument. By Lemma \ref{anint}, it suffices to control $\partial (\nu^{\frac{1}{2} } (\mathbf{I} -\mathbf{P} ) f )$: we have \begin{equation} \notag \partial (\nu^{\frac{1}{2} } (\mathbf{I} -\mathbf{P} ) f ) = \frac{1}{2} \nu^{-1} \partial (\nu) \nu^{\frac{1}{2}} (\mathbf{I} - \mathbf{P} ) f + \nu^{\frac{1}{2}}[\![\mathbf{P},\partial]\!] f + \nu^{\frac{1}{2}}(\mathbf{I} - \mathbf{P} ) \partial f. \end{equation} One can easily check that $ \sup_{x,v} | \nu^{-1} \partial (\nu) | \lesssim \e \| \partial u^\beta \|_{L^\infty_x}$, and thus the inequality follows. 
\end{proof} \begin{lemma}[\cite{CIP,Guo2006}] \label{Linverse} $L|_{\mathcal{N}^\perp} : \mathcal{N}^\perp \rightarrow \mathcal{N}^\perp$ is a bijection, and thus $L^{-1} : \mathcal{N}^\perp \rightarrow \mathcal{N}^\perp$ is well-defined. Also, $L^{-1}$ commutes with any orthonormal transformation. In particular, if $f \in \mathcal{N}^\perp $ is an even (resp. odd) function, then so is $L^{-1} f$. \end{lemma} \begin{proof} The proof follows from the Fredholm alternative and the rotational invariance of $Q$. We refer to \cite{CIP,Guo2006} for the proof. \end{proof} The term $(v-\e u^\beta) \otimes (v -\e u^\beta) \sqrt{\mu}$ and its image under $L^{-1}$ turn out to play an important role in the Hilbert expansion. Note that \begin{align}\label{vvproj} &(\mathbf{I} - \mathbf{P} ) \left ( (v-\e u^\beta) \otimes (v -\e u^\beta) \sqrt{\mu}\right ) = \left ((v-\e u^\beta) \otimes (v -\e u^\beta) - \frac{1}{3} |v-\e u^\beta|^2 \mathbf{I}_3 \right ) \sqrt{\mu}. \end{align} Thus, we define $\mathfrak{A}:= \mathfrak{A}(t,x) \in \mathbb{M}_{3 \times 3} (\mathbb{R})$ by (see \cite{BGL1}) \begin{equation} \label{AB} \mathfrak{A}_{ij} = L^{-1} \left ( \left ((v-\e u^\beta)_i (v-\e u^\beta)_j - \frac{|v- \e u^\beta |^2}{3} \delta_{ij} \right ) \sqrt{\mu} \right ). \end{equation} Regarding $\mathfrak{A}$, we have the following useful lemma. \begin{lemma}[\cite{BGL1, BGL2}] \label{Ainn} $\langle L \mathfrak{A}_{\ell k}, \mathfrak{A}_{ij} \rangle = \eta_0 (\delta_{ik} \delta_{j\ell} + \delta_{i\ell} \delta_{jk} ) - \frac{2}{3} \eta_{0} \delta_{ij} \delta_{k\ell}.$ \end{lemma} \begin{proof} We refer to \cite{BGL1, BGL2} for the proof.
\end{proof} From an explicit calculation, we can also establish the following result: \begin{lemma} \label{hydroph3} For $i,j,k \in \{ 1,2,3 \}$, \begin{equation} \notag \mathbf{P} (\varphi_i \varphi_j \varphi_k \sqrt{\mu} ) = \sum_{\ell=1} ^3 (\delta_{ij}\delta_{k\ell} + \delta_{ik}\delta_{j\ell} + \delta_{jk}\delta_{i\ell} ) \varphi_\ell \sqrt{\mu}. \end{equation} \end{lemma} We also have the following useful pointwise estimates. First, we have the following pointwise estimates on $\partial^s \left ( f \frac{(\partial_t + \frac{v}{\e} \cdot \nabla_x) \sqrt{\mu} }{\sqrt{\mu} } \right )$: \begin{lemma} \label{Momentstreambound} Suppose that \eqref{ekappabeta} holds. Then for $s \le 2$, we have \begin{equation} \label{mombound} \begin{split} \partial^s \left ( f \frac{(\partial_t + \frac{v}{\e} \cdot \nabla_x) \sqrt{\mu} }{\sqrt{\mu} } \right ) = & \ \partial^s f\left ( \frac{(\partial_t + \frac{v}{\e} \cdot \nabla_x) \sqrt{\mu} }{\sqrt{\mu} } \right ) \\& + \sum_{s' < s} (\partial^{s'} f) \frac{1}{2} \sum_{i,j} (\partial^{s-s'} \partial_{x_i} u^\beta_j) \varphi_i \varphi_j + R, \end{split} \end{equation} where $ |R| \lesssim \e V(\beta) \nu(v) \sum_{s' < s}| \partial^{s'} f |.$ \end{lemma} \begin{proof} It suffices to notice that \begin{equation} \notag \frac{(\partial_t + \frac{v}{\e} \cdot \nabla_x) \sqrt{\mu} }{\sqrt{\mu} } = \frac{1}{2} \sum_{i,j} \partial_{x_i} u^{\beta}_j \varphi_i \varphi_j + \frac{1}{2} \e \sum_{i} (\partial_t u^\beta + u^\beta \cdot \nabla_x u^\beta)_i \varphi_i , \end{equation} and that the first two terms of the right-hand side of \eqref{mombound} correspond to the terms where all $\partial$ are applied to either $f$ or $\partial_{x_i} u^\beta_j$, while $R$ collects all remaining terms. \end{proof} Next, we present pointwise estimates on $\mathfrak{A}$ and its derivatives (\cite{JK2020}): \begin{lemma}[Lemma 3 of \cite{JK2020}]\label{Abound} Suppose that \eqref{ekappabeta} holds.
For $\varrho \in (0, 1/4)$, \begin{equation}\notag \begin{split} |\mathfrak{A}_{ij} (v) | &\lesssim e^{-\varrho |v-\e u^\beta |^2 }, \\ \sum_{s \le 2, D \in \{ \partial_t, \partial\} }\left | \partial^s \left ( (1+(u^\beta, \tilde{u}^\beta) )D \mathfrak{A}_{ij} (v) \right ) \right | & \lesssim \e V(\beta) e^{-\varrho |v-\e u^\beta |^2 }. \end{split} \end{equation} \end{lemma} Next, we have the following pointwise estimates on $\Gamma$ and $L$: \begin{lemma}[Lemma 4 of \cite{JK2020}] \label{Gammabound} Suppose that $\e |u^\beta (x,t) | \lesssim 1$. For $0< \varrho < 1/4$, $C \in \mathbb{R}^3$ and $s \le 2$, we have \begin{equation}\notag \begin{split} |\Gamma(f,g)(v) | &\lesssim \| e^{\varrho|v|^2 + C \cdot v }f(v) \|_{L^\infty_v} \| e^{\varrho|v|^2 + C \cdot v }g(v) \|_{L^\infty_v} \frac{\nu(v)}{e^{\varrho|v|^2+C\cdot v}}, \\ |{}_{\partial^s} \Gamma (f,g)(v) | &\lesssim \e V(\beta) \| e^{\varrho|v|^2 + C \cdot v }f(v) \|_{L^\infty_v} \| e^{\varrho|v|^2 + C \cdot v }g(v) \|_{L^\infty_v} \frac{\nu(v)}{e^{\varrho|v|^2+C\cdot v}}, \\ | {}_{\partial^s} L f (v) | & \lesssim \e V(\beta) \| e^{\varrho|v|^2 + C \cdot v }f(v) \|_{L^\infty_v} \frac{\nu(v)^2 }{e^{\varrho|v|^2+C\cdot v}}. \end{split} \end{equation} Here we can choose the constant for the bound uniformly for $\{ |C | \le 1 \}$.
\end{lemma} Finally, we present pointwise estimates regarding the projections $\mathbf{P}$ and $\mathbf{I} - \mathbf{P}$. \begin{lemma} \label{projbound} Suppose that $f(t,x,v) \in L^2_v$ satisfies $|f(t,x,v) | \le C(t,x ) \exp \left ( - \varrho |v- \e u^\beta(t,x) |^2 \right )$ for some constant $C(t,x)$ independent of $v$ and $\varrho \in (0, 1/4)$. Then \begin{equation} \begin{split}\label{hydrop} | \mathbf{P} f (t,x,v) | &\lesssim C(t,x ) \exp \left ( - \varrho |v- \e u^\beta(t,x) |^2 \right ), \\ |(\mathbf{I} - \mathbf{P} ) f(t,x,v) | &\lesssim C(t,x ) \exp \left ( - \varrho |v- \e u^\beta(t,x) |^2 \right ), \end{split}\end{equation} where the constants in the inequalities are independent of $t,x,v$ but depend on $\varrho$. \end{lemma} \begin{proof} It suffices to show the first inequality of \eqref{hydrop}: the second follows from $|(\mathbf{I} - \mathbf{P} ) f(t,x,v) | \le | \mathbf{P} f (t,x,v) | + |f(t,x,v) |$. Note that, from \eqref{P}, \begin{align*} &| \mathbf{P} f(t,x,v)| \\ &\le \sum_{\ell=1} ^5 C(t,x) \int \langle v - \e u^\beta \rangle ^2 \exp \left ( - \left (\varrho + \frac{1}{4} \right ) |v- \e u^\beta(t,x) |^2 \right ) dv \langle v - \e u^\beta \rangle ^2 \sqrt{\mu} \\ & \le C(t,x) C_{\varrho} \exp \left ( - \varrho |v- \e u^\beta(t,x) |^2 \right ). \end{align*}\end{proof} \subsection{New Hilbert-type Expansion} We recall an explicit form of derivatives of ${\mu}^k$: \begin{equation} \begin{split}\label{vDmu} & \left [ \partial_t + u^\beta \cdot \nabla_x \right ] {\mu}^k = \e k (\partial_t u^\beta + u^\beta \cdot \nabla_x u^\beta ) \cdot (v - \e u^\beta ) \mu^k, \\ &(v - \e u^\beta) \cdot \nabla_x \mu^k = \e k ( \nabla_x u^\beta ) : ( (v-\e u^\beta ) \otimes (v - \e u^\beta)) \mu^k, \end{split}\end{equation} where $k > 0$ and $A:B = \mathrm{tr} (AB) = \sum_{i,j=1} ^3 A_{ij} B_{ji}$ for arbitrary rank 2 tensors $A$, $B$. Now we derive an equation for $f_R$.
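The identities \eqref{vDmu} follow from direct differentiation. As a minimal check, assuming the local Maxwellian takes the standard form $\mu(t,x,v) = (2\pi)^{-3/2} \exp \left ( - |v - \e u^\beta (t,x)|^2 / 2 \right )$, we have \begin{equation} \notag \left [ \partial_t + u^\beta \cdot \nabla_x \right ] \mu^k = - \frac{k}{2} \mu^k \left [ \partial_t + u^\beta \cdot \nabla_x \right ] |v - \e u^\beta |^2 = \e k (\partial_t u^\beta + u^\beta \cdot \nabla_x u^\beta ) \cdot (v - \e u^\beta ) \mu^k, \end{equation} and the second identity follows in the same way from $(v - \e u^\beta) \cdot \nabla_x |v - \e u^\beta |^2 = - 2 \e (v - \e u^\beta)_i (v - \e u^\beta)_j \partial_{x_i} u^\beta_j$.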
First, we plug (\ref{F_e}) into (\ref{eqtn_F}) to obtain \begin{align} &(\underline{v}-\e u^\beta) \cdot \nabla_x \left ({\mu} {+ \e^2 p^\beta \mu} {- \e^2 \kappa (\nabla_x u^\beta): \mathfrak{A}\sqrt{\mu} } {+ \e \kappa \tilde{u}^\beta \cdot (v-\e u^\beta) \mu} {+ \e^2 \kappa \tilde{p}^\beta \mu} \right ) \label{HE1} \\ &+ \e (\partial_t + u^\beta \cdot \nabla_x ) \left ({\mu} + \e^2 p^\beta \mu - \e^2 \kappa (\nabla_x u^\beta): \mathfrak{A}\sqrt{\mu} {+ \e \kappa \tilde{u}^\beta \cdot (v-\e u^\beta) \mu }+ \e^2 \kappa \tilde{p}^\beta \mu \right ) \label{HE2}\\ & - \frac{1}{\kappa \e} Q (\mu + \e^2 p^\beta \mu - \e^2 \kappa (\nabla_x u^\beta): \mathfrak{A}\sqrt{\mu} + \e \kappa \tilde{u}^\beta \cdot (v-\e u^\beta) \mu + \e^2 \kappa \tilde{p}^\beta \mu ) \label{HE3}\\ &+ \e^2 \Big\{ \partial_t (f_R\sqrt{\mu} ) + \frac{\underline{v}}{\e} \cdot \nabla_x (f_R\sqrt{\mu} ) - \frac{1}{\e \kappa} Q(f_R \sqrt{\mu}, f_R \sqrt{\mu} )\Big\} \label{HE5}\\ & - \frac{2}{ \kappa} Q(\mu + \e^2 p^\beta \mu - \e^2 \kappa (\nabla_x u^\beta): \mathfrak{A}\sqrt{\mu} + \e \kappa \tilde{u}^\beta \cdot (v-\e u^\beta) \mu + \e^2 \kappa \tilde{p}^\beta \mu, f_R\sqrt{\mu} ) = 0 \label{HE6}, \end{align} where we have used the abbreviation $Q(g)=Q(g,g)$ in \eqref{HE3}. We group the source terms \eqref{HE1}, \eqref{HE2}, \eqref{HE3} according to their orders of magnitude: it is good to keep in mind that in our method, all hydrodynamic terms of order of magnitude less than $\e^2 \kappa$ are considered small, and all non-hydrodynamic terms of order of magnitude less than $\e \sqrt{\kappa}$ are considered small. In the end, we will group all small terms together. \paragraph{0. Terms which are greater than $\e$:} Among the terms which are independent of $f_R$, there is no term whose magnitude is greater than $\e$. For the terms in \eqref{HE1} and \eqref{HE2} this is obvious: the largest term comes from $(\underline{v} - \e u^\beta) \cdot \nabla_x \mu$, which is of order $\e$.
For terms in \eqref{HE3}, we note that since $(v-\e u^\beta ) \sqrt{\mu}, \sqrt{\mu} \in \mathcal{N}$, in fact \eqref{HE3} can be rewritten as \begin{equation} \label{HEQ} \begin{split} &{ 2\e Q(\mu (1 {\color{black}+ \e^2 p^\beta + \e^2 \kappa \tilde{p}^\beta }) , (\nabla_x u^\beta): \mathfrak{A} \sqrt{\mu} )} {- \kappa \e Q(\tilde{u}^\beta \cdot (v -\e u^\beta) \mu,\tilde{u}^\beta \cdot (v -\e u^\beta) \mu)} \\ & {+ 2 \e^2 \kappa Q(\tilde{u}^\beta \cdot (v-\e u^\beta) \mu, (\nabla_x u^\beta) : \mathfrak{A} \sqrt{\mu} )} - \e^3 \kappa Q((\nabla_x u^\beta): \mathfrak{A} \sqrt{\mu}, (\nabla_x u^\beta): \mathfrak{A} \sqrt{\mu}), \end{split} \end{equation} whose leading order is $\e$. \subsubsection{Order $\e$:} Among terms which are independent of $f_R$, there are two terms of order $\e$: \begin{equation} \notag \begin{split} &(\underline{v} - \e u^\beta) \cdot \nabla_x \mu + \frac{2}{\kappa \e} Q(\mu, \e^2 \kappa (\nabla_x u^\beta):\mathfrak{A} \sqrt{\mu} ) \\ &=\e \nabla_{x} u^\beta : (v-\e u^\beta) \otimes (v-\e u^\beta) \mu - \e (\nabla_x u^\beta): L\mathfrak{A} \sqrt{\mu} = 0, \end{split} \end{equation} as $\nabla_x \cdot u^\beta = 0.$ \subsubsection{Order $\e \kappa$:} Among terms which are independent of $f_R$, there are two terms of order $\e \kappa$. 
\begin{align} &\e \kappa (\underline{v} - \e u^\beta) \cdot \nabla_x ( \tilde{u}^\beta \cdot (v- \e u^\beta) \mu ) - \e \kappa Q(\tilde{u}^\beta \cdot (v -\e u^\beta) \mu,\tilde{u}^\beta \cdot (v -\e u^\beta) \mu) \notag\\ &= \e \kappa \left ( (\nabla_x \tilde{u}^\beta): L \mathfrak{A} - \Gamma \left ( \tilde{u}^\beta \cdot (\underline{v} - \e u^\beta) \sqrt{\mu},\tilde{u}^\beta \cdot (\underline{v} - \e u^\beta) \sqrt{\mu} \right) \right ) \sqrt{\mu} \label{ekappa} \\ & + \e^2 \kappa ( \sum_{i, j } (\underline{v} - \e u^\beta)_i \tilde{u}^\beta_j \left (- \partial_{x_i} u^\beta_j + (\underline{v} - \e u^\beta)_j (\underline{v} - \e u^\beta)_k \partial_{x_i} u^\beta_k \right )) \mu ,\label{e2kappa1} \end{align} as $\nabla_x \cdot \tilde{u}^\beta = 0.$ Note that terms of order $\e \kappa$ are non-hydrodynamic: $\frac{1}{\sqrt{\mu} } \eqref{ekappa} \in \mathcal{N}^\perp$. \subsubsection{Order $\e^2$:} The following terms are of order $\e^2$: \begin{align} & \e (\partial_t + u^\beta \cdot \nabla_x) \mu + \e^2 (\underline{v} - \e u^\beta) \cdot \nabla_x (p^\beta \mu)\notag \\ &= \e^2 \left ((\partial_t + u^\beta \cdot \nabla_x )u^\beta + \nabla_x p^\beta \right ) \cdot (\underline{v} - \e u^\beta) \mu \notag \\ &+ \e^3 p^\beta \partial_{x_i} u^\beta_j (\underline{v} - \e u^\beta)_i (\underline{v} - \e u^\beta)_j \mu = \e^3 p^\beta \partial_{x_i} u^\beta_j (\underline{v} - \e u^\beta)_i (\underline{v} - \e u^\beta)_j \mu, \label{e3} \end{align} since $(\partial_t + u^\beta \cdot \nabla_x )u^\beta + \nabla_x p^\beta= 0$. \subsubsection{Order $\e^2 \kappa$:} The key reason to introduce the correctors $\e \kappa \tilde{u}^\beta \cdot (v - \e u^\beta) \mu$ and $\e^2 \kappa \tilde{p}^\beta \mu$ is to remove the hydrodynamic terms of order $\e^2 \kappa$: in exchange, we obtain terms of order $\e \kappa$, which are larger, but all of them are non-hydrodynamic, and thus small on our scale.
The following is the collection of all terms of order $\e^2 \kappa$: \begin{align} &-\e^2 \kappa (\underline{v} - \e u^\beta) \cdot \nabla_x ( (\nabla_x u^\beta) : \mathfrak{A} \sqrt{\mu} ) + \e^2 \kappa (\underline{v} - \e u^\beta) \cdot \nabla_x (\tilde{p}^\beta \mu ) + \eqref{e2kappa1} \notag \\ & + \e^2 \kappa (\partial_t + u^\beta \cdot \nabla_x ) ( \tilde{u}^\beta \cdot (\underline{v} - \e u^\beta) \mu ) + 2 \e^2 \kappa \Gamma ( \tilde{u}^\beta \cdot (\underline{v} - \e u^\beta) \sqrt{\mu}, (\nabla_x u^\beta): \mathfrak{A}) \sqrt{\mu} \notag \\ & = \e^2 \kappa \{-\eta_0 \Delta_x u^\beta + \nabla_x \tilde{p}^\beta + (\partial_t + u^\beta \cdot \nabla_x) \tilde{u}^\beta \} \cdot (\underline{v} - \e u^\beta) \mu \label{e2kappahydro1} \\ & {\tiny+ \e^2 \kappa \bigg (- \sum_{i,j} \tilde{u}^\beta_j \partial_{x_i} u^\beta_j (\underline{v} - \e u^\beta)_i + \sum_{i,j,k, \ell} \tilde{u}^\beta_j \partial_{x_i} u^\beta_k (\delta_{ij} \delta_{k\ell} + \delta_{ik} \delta_{j\ell} + \delta_{jk} \delta_{i\ell} ) (\underline{v} - \e u^\beta)_\ell \bigg ) \mu} \label{e2kappahydro2} \\ & + \e^2 \kappa \left (2 \Gamma ( \tilde{u}^\beta \cdot (\underline{v} - \e u^\beta) \sqrt{\mu}, (\nabla_x u^\beta): \mathfrak{A} ) - (\nabla_x ^2 u^\beta):(\mathbf{I} - \mathbf{P}) (\underline{v} - \e u^\beta) \mathfrak{A} \right ) \sqrt{\mu} \label{e2kappanonhydro1}\\ & + \e^2 \kappa \sum_{i,j, k} \tilde{u}^\beta_j \partial_{x_i} u^\beta_k (\mathbf{I} - \mathbf{P})\left ( (\underline{v} - \e u^\beta)_i (\underline{v} - \e u^\beta)_j (\underline{v} - \e u^\beta)_k \sqrt{\mu} \right ) \sqrt{\mu} \label{e2kappanonhydro2} \\ & {\tiny+ \e^2 \kappa \left ( - (\nabla_x u^\beta): (\underline{v} - \e u^\beta) \cdot \nabla_x (\mathfrak{A} \sqrt{\mu} ) + \tilde{p}^\beta (\underline{v} - \e u^\beta) \cdot \nabla_x \mu + \tilde{u}^\beta \cdot (\partial_t + u^\beta \cdot \nabla_x) ((\underline v - \e u^\beta )\mu ) \right ) } \label{e3kappa1} \\ &= \eqref{e2kappanonhydro1}+ \eqref{e2kappanonhydro2} +
\eqref{e3kappa1}\notag. \end{align} Here, we have used Lemma \ref{hydroph3}, and that \eqref{e2kappahydro1} and \eqref{e2kappahydro2} can be gathered to form \begin{equation} \notag \eqref{e2kappahydro1} + \eqref{e2kappahydro2} = \e^2 \kappa ( (\partial_t + u^\beta \cdot \nabla_x) \tilde{u}^\beta + \tilde{u}^\beta \cdot \nabla_x u^\beta - \eta_0 \Delta_x u^\beta + \nabla_x \tilde{p}^\beta ) \cdot (\underline{v} - \e u^\beta) \mu = 0. \end{equation} Note that $\frac{1}{\sqrt{\mu} }\left (\eqref{e2kappanonhydro1} + \eqref{e2kappanonhydro2}\right ) \in \mathcal{N}^\perp$, that is, it is non-hydrodynamic and hence small on our scale, while \eqref{e3kappa1} is small as well: in fact, it is of order $\e^3 \kappa$. \subsubsection{Small, not necessarily non-hydrodynamic remainders} The remaining terms are small on our scale: the following collects all of them. \begin{equation} \begin{split} \label{R1} \e^3 \sqrt{\mu} \mathfrak{R}_1 = & \ \eqref{e3} + \eqref{e3kappa1} + \e^3 (\partial_t + u^\beta \cdot \nabla_x) (p^\beta \mu) \\ & + \e^3 \kappa (\partial_t + u^\beta \cdot \nabla_x) ( - (\nabla_x u^\beta):\mathfrak{A} \sqrt{\mu} + \tilde{p}^\beta \mu ) \\ &- \e^3 p^\beta (L (\nabla_x u^\beta): \mathfrak{A} )\sqrt{\mu} - \e^3 \kappa \tilde{p}^\beta (L(\nabla_x u^\beta): \mathfrak{A} )\sqrt{\mu} \\ &- \e^3 \kappa \Gamma ((\nabla_x u^\beta): \mathfrak{A},(\nabla_x u^\beta): \mathfrak{A}) \sqrt{\mu}. \end{split} \end{equation} One can easily observe the following: \begin{proposition} Suppose that \eqref{ekappabeta} holds.
$\mathfrak{R}_1$ consists of a linear combination of the terms in the following tensor product: \begin{equation} \left ( \begin{matrix} 1 \\ \kappa \\ \e \\ \e \kappa \end{matrix} \right ) \otimes \left ( \begin{matrix} 1 \\ p^\beta \\ \nabla_x u^\beta \\ \tilde{p}^\beta \\ \tilde{u}^\beta \\ \tilde{u}^\beta \otimes u^\beta \\ u^\beta \end{matrix} \right ) \otimes D \left ( \begin{matrix} p^\beta \\ u^\beta \\ \nabla_x u^\beta \\ \tilde{p}^\beta \end{matrix} \right ) \otimes \mathfrak{P}^{\le 2} ( (\underline v - \e u^\beta ) ) \left ( \begin{matrix} \sqrt{\mu} \\ \frac{1}{\e} \partial_t \mathfrak{A} \\\frac{1}{\e} \partial \mathfrak{A} \\ L \mathfrak{A} \\ \Gamma(\mathfrak{A}, \mathfrak{A} ) \end{matrix} \right ), \notag \end{equation} where $D$ is either $\partial_t$ or $\partial$, which is applied to $p^\beta, u^\beta, \nabla_x u^\beta, \tilde{p}^\beta$, and $\mathfrak{P}^{\le 2}$ is a polynomial of degree $\le 2$ in its arguments. In particular, for $\varrho \in (0,\frac{1}{4})$ and $s \le 2$, we have the following pointwise estimate: \begin{equation} \label{R1point} | \partial^s \mathfrak{R}_1 | \lesssim V(\beta) e^{-\varrho |v- \e u^\beta|^2 }. \end{equation} \end{proposition} \subsubsection{Small non-hydrodynamic remainders} The terms \eqref{ekappa}, \eqref{e2kappanonhydro1}, and \eqref{e2kappanonhydro2} are non-hydrodynamic remainders. We group them to obtain the following proposition: \begin{proposition} Suppose that \eqref{ekappabeta} holds. Let $ \mathfrak{R}_2$ be defined by \begin{equation} \label{R2} \e \kappa \sqrt{\mu} \mathfrak{R}_2 = \eqref{ekappa}+ \eqref{e2kappanonhydro1}+ \eqref{e2kappanonhydro2}.
\end{equation} Then $\mathfrak{R}_2$ consists of a linear combination of the terms in the following tensor product: \begin{equation} \left ( \begin{matrix} 1 \\ \e \end{matrix} \right ) \otimes \left ( \begin{matrix} \nabla_x \tilde{u}^\beta \\ \tilde{u}^\beta \otimes \tilde{u}^\beta \\ \tilde{u}^\beta \otimes \nabla_x u^\beta \\ \nabla_x^2 u^\beta \end{matrix} \right ) \otimes \left ( \begin{matrix} L\mathfrak{A} \\ \Gamma((\underline v - \e u^\beta) \sqrt{\mu}, (\underline v - \e u^\beta) \sqrt{\mu} ) \\ \Gamma((\underline v - \e u^\beta) \sqrt{\mu}, \mathfrak{A} ) \\ (\mathbf{I} - \mathbf{P} ) (\underline v - \e u^\beta) \mathfrak{A} \\ (\mathbf{I} - \mathbf{P} ) (\underline v - \e u^\beta)^{\otimes^3} \sqrt{\mu}. \end{matrix} \right ). \notag \end{equation} In particular, $\mathfrak{R}_2 \in \mathcal{N}^\perp$, and for $\varrho \in (0,\frac{1}{4})$ and $s \le 2$, we have the following pointwise estimate: \begin{equation} \label{R2point} \begin{split} | (\mathbf{I} - \mathbf{P}) \partial^s \mathfrak{R}_2 | \lesssim V(\beta) e^{-\varrho |v- \e u^\beta|^2 }, \\ | \mathbf{P} \partial^s \mathfrak{R}_2 | \lesssim \e V(\beta) e^{-\varrho |v- \e u^\beta|^2 }. \end{split} \end{equation} \end{proposition} \begin{proof} It suffices to show \eqref{R2point}: we see that if all $\partial^s$ are applied to macroscopic quantities $\nabla_x \tilde{u}^\beta, \cdots, \nabla_x ^2 u^\beta$, then the resulting term is still non-hydrodynamic. In that case, the first inequality of \eqref{R2point} applies. On the other hand, if some of $\partial$ are applied to microscopic quantities $g = L\mathfrak{A}, \cdots, (\mathbf{I} - \mathbf{P}) (\underline{v} - \e u^\beta)^{\otimes^3} \sqrt{\mu}$, we note that \begin{equation} \notag \partial^{s'} g = \partial^{s'} (\mathbf{I} - \mathbf{P} ) g = (\mathbf{I} - \mathbf{P} ) \partial^{s'} g + [\![ \mathbf{P}, \partial^{s'} ]\!] g. 
\end{equation} The first term belongs to $\mathcal{N}^\perp$, and the second term belongs to $\mathcal{N}$ and is bounded by $\e (1+ \sum_{s''\le s'} \|\partial^{s''} u\|_{L^\infty} ) \| \partial^{s'-1} g \|_{L^\infty_x L^2_v } e^{-\varrho |v- \e u^\beta|^2 }$. In both cases, \eqref{R2point} is valid. \end{proof} Also, we collect the terms in \eqref{HE6} other than $\mu$ and $f_R$ into $\mathfrak{R}_3$: \begin{proposition} Suppose that \eqref{ekappabeta} holds. Let $\mathfrak{R}_3$ be defined by \begin{equation} \label{R3} \e \kappa \sqrt{\mu} \mathfrak{R}_3 = 2 \e \kappa \tilde{u}^\beta \cdot (\underline{v} - \e u^\beta) \mu + \e^2 p^\beta \mu - \e^2 \kappa (\nabla_x u^\beta): \mathfrak{A} \sqrt{\mu} + \e^2 \kappa \tilde{p}^\beta \mu. \end{equation} Then for $\varrho \in (0,\frac{1}{4})$ and $s \le 2$, we have the following pointwise estimate: \begin{equation} \label{R3point} | \partial^s \mathfrak{R}_3 | \lesssim V(\beta) e^{-\varrho |v- \e u^\beta|^2 }. \end{equation} \end{proposition} \subsection{Remainder equation and its derivatives} We have simplified \eqref{HE1}-\eqref{HE6} so far. Finally, dividing \eqref{HE1}-\eqref{HE6} by $\e^2 \sqrt{\mu}$, we obtain \begin{equation} \label{Remainder} \begin{split} &\partial_t f_R + \frac{\underline v}{\e} \cdot \nabla_x f_R + f_R \left ( \frac{(\partial_t + \frac{\underline v}{\e} \cdot \nabla_x ) \sqrt{\mu} } {\sqrt{\mu} } \right ) + \frac{1}{\e^2 \kappa} L f_R\\ & = \frac{1}{\e\kappa} \Gamma (f_R, f_R) + \frac{1}{\e} \Gamma ( \mathfrak{R}_3, f_R) - \e \mathfrak{R}_1 - \frac{\kappa}{\e} \mathfrak{R}_2, \end{split} \end{equation} where $\mathfrak{R}_1$, $\mathfrak{R}_2$, and $\mathfrak{R}_3$ are defined by \eqref{R1}, \eqref{R2}, and \eqref{R3}, respectively.
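As a quick consistency check, the streaming part of \eqref{Remainder} arises from \eqref{HE5}: dividing by $\e^2 \sqrt{\mu}$ and using the product rule, \begin{equation} \notag \frac{1}{\sqrt{\mu}} \left ( \partial_t + \frac{\underline v}{\e} \cdot \nabla_x \right ) (f_R \sqrt{\mu} ) = \partial_t f_R + \frac{\underline v}{\e} \cdot \nabla_x f_R + f_R \left ( \frac{(\partial_t + \frac{\underline v}{\e} \cdot \nabla_x ) \sqrt{\mu} }{\sqrt{\mu} } \right ), \end{equation} which gives exactly the first three terms on the left-hand side of \eqref{Remainder}.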
Also, we have the equation for $\partial^s f_R$, for $s \le 2$: by Lemma \ref{Momentstreambound}, \begin{equation} \label{pRemainder} \begin{split} & \partial_t \partial^s f_R + \frac{\underline{v}}{\e} \cdot \nabla_x \partial^s f_R + \partial^s f_R \left ( \frac{(\partial_t + \frac{\underline v}{\e} \cdot \nabla_x ) \sqrt{\mu} } {\sqrt{\mu} } \right ) + \frac{1}{\e^2\kappa} L \partial^s f_R \\ &= - \sum_{s' < s} \partial^{s'} f_R \frac{1}{2} \sum_{i,j} (\partial^{s-s'} \partial_{x_i} u^\beta_j ) \varphi_i \varphi_j + R_s \\ & + \frac{1}{\e^2 \kappa} [\![\partial^s, L]\!] f_R+ \frac{1}{\e \kappa} \partial^s \Gamma (f_R, f_R) + \frac{1}{\e} \partial^s \Gamma(\mathfrak{R}_3, f_R) \\ & - \e \partial^s \mathfrak{R}_1 - \frac{\kappa}{\e} (\mathbf{I} - \mathbf{P} ) \partial^s \mathfrak{R}_2 - \frac{\kappa}{\e} \mathbf{P} \partial^s \mathfrak{R}_2, \end{split} \end{equation} where $ |R_s | \lesssim \e V(\beta) \nu(v) \sum_{s'< s} |\partial^{s'} f_R |.$ \subsection{Scaled $L^\infty$-estimate} In this section, we prove a pointwise estimate (with the weight \eqref{weight}) for an $L^p$ solution of the linear Boltzmann equation with a force term. The remainder equation \eqref{Remainder} contains the momentum stream term \begin{equation} \label{stream} \frac{\left ( \partial_t + \frac{1}{\e} \underline v \cdot \nabla_x \right ) \sqrt{\mu} }{\sqrt{\mu} } f_R \end{equation} which cannot be controlled by $f_R$ for large $v$: this term comes precisely from the fact that we expand around the local Maxwellian rather than the global one. Accordingly, we consider the following transport equation with the stream term \eqref{stream}: \begin{equation} \label{transport_f_stream} [\partial_t + \e^{-1} \underline v \cdot \nabla_x ] f + \frac{1}{\e^2 \kappa} L f - \frac{\left ( \partial_t + \frac{1}{\e} \underline v \cdot \nabla_x \right ) \sqrt{\mu} }{\sqrt{\mu} } f = \tilde H \text { in } [0, T] \times \T^2 \times \mathbb{R}^3. \end{equation}
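To see the difficulty concretely, recall from the proof of Lemma \ref{Momentstreambound} that \begin{equation} \notag \frac{\left ( \partial_t + \frac{1}{\e} \underline v \cdot \nabla_x \right ) \sqrt{\mu} }{\sqrt{\mu} } = \frac{1}{2} \sum_{i,j} \partial_{x_i} u^{\beta}_j \varphi_i \varphi_j + \frac{1}{2} \e \sum_{i} (\partial_t u^\beta + u^\beta \cdot \nabla_x u^\beta)_i \varphi_i , \end{equation} so the coefficient of $f_R$ in \eqref{stream} grows quadratically in $v$, whereas the collision frequency $\nu(v)$ grows only linearly in $|v|$; hence \eqref{stream} cannot be absorbed without a weight.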
In \cite{JK2020}, a weight function of the form \begin{equation} w(x, v) := \exp ( \vartheta |v|^2 - Z(x) \cdot v), \end{equation} where $Z(x)$ is a suitable vector field, was introduced to bound the term \eqref{stream} in the expansion around the local Maxwellian: \begin{equation} w \left ( \partial_t + \frac{1}{\e} \underline v \cdot \nabla_x \right ) f_R =\left ( \partial_t + \frac{1}{\e} \underline v \cdot \nabla_x \right ) (w f_R) + \frac{1}{\e} \left ( \underline v \cdot \nabla_x Z(x) \cdot \underline v \right ) w f_R, \end{equation} and if $Z(x)$ is chosen so that $\underline v\cdot \nabla_x Z(x)\cdot \underline v > 0$ for any $v$ ($Z(x) = z(x) x$ for a suitably chosen function $z(x)$ works), one may control the most problematic term in \eqref{stream}: $(\nabla_x u^\beta : \underline v \otimes \underline v) w f_R.$ Inspired by this, we introduce a suitable weight function appropriate for the periodic domain. Unlike in the whole Euclidean space, the existence of such a $Z(x)$ on $\mathbb{T}^2$ is less obvious: in fact, if $Z = (Z_1, Z_2)$ is smooth, then since $\int_{\mathbb{T}^1} \partial_1 Z_1 (x_1, x_2) dx_1 = 0$, $\partial_1 Z_1 $ will have a mixed sign along the circle $\mathbb{T}^1 \times \{x_2 \}$ for each $x_2 \in \mathbb{T}^1$, unless it vanishes on the whole circle. Thus, $\nabla_x Z + (\nabla_x Z)^T$ is neither positive definite nor negative definite over the whole domain $\mathbb{T}^2$. To overcome this difficulty, we introduce a weight function which cancels the most problematic term of \eqref{stream} instead of controlling it: \begin{equation} \label{weight} w(t,x,v) := \exp \left (\vartheta |v|^2 - \frac{1}{2} \e u^\beta (t,x) \cdot \underline v \right ), \end{equation} where $\vartheta \in (0, \frac{1}{4} )$, under the assumption \begin{equation} \label{smallnessubeta} \e | u^\beta (t,x) | = o(1). \end{equation} In our scaling regime \eqref{ekappabeta}, \eqref{smallnessubeta} holds.
\begin{proposition}\label{prop_infty} For an arbitrary $T>0$, suppose $f(t,x,v)$ is a distribution solution to \eqref{transport_f_stream}. Also, suppose that \eqref{ekappabeta} holds. Then, for $w= e^{\vartheta |v|^2 - \frac{1}{2} \e u^\beta (t,x) \cdot \underline v }$ with $\vartheta \in (0, \frac{1}{4})$ in \eqref{weight}, \begin{equation} \label{Ltinfty_2D} \begin{split} & \e \kappa \sup_{t \in [0,T]} \| wf (t) \|_{L^\infty (\mathbb{T}^2 \times \mathbb{R}^3)}\\ \lesssim & \ \e \kappa \| w f_0 \|_{L^\infty (\mathbb{T}^2 \times \mathbb{R}^3)}+ \sup_{t \in [0,T]} \| f(t) \|_{L^2 (\mathbb{T}^2 \times \mathbb{R}^3)} + \e^3 \kappa ^2 \sup_{t \in [0,T]} \| \nu^{-1}w\tilde H \|_{L^\infty (\mathbb{T}^2 \times \mathbb{R}^3)}. \end{split} \end{equation} \end{proposition} The proof is based on the Duhamel formula \eqref{form:h} along the trajectory with scaled variables, and the $L^p$-$L^\infty$ interpolation argument based on a change of variables. With $w$ as in \eqref{weight}, let \begin{equation} \label{def:h} h= w f.
\end{equation} From \eqref{transport_f_stream}, we can write the evolution equation of $h$: \begin{equation} \begin{split}\label{eqtn_h} &[ \partial_t + \frac{1}{\e} \underline v \cdot \nabla_x ] h = w [ \partial_t + \frac{1}{\e} \underline v \cdot \nabla_x ] f + f[ \partial_t + \frac{1}{\e} \underline v \cdot \nabla_x ] w \\ &= -\frac{1}{\e^2 \kappa} w L f + \frac{[ \partial_t + \frac{1}{\e} \underline v \cdot \nabla_x ] \sqrt{\mu} }{\sqrt{\mu}} h + w\tilde H - h [ \partial_t + \frac{1}{\e} \underline v \cdot \nabla_x ] \left ( \frac{1}{2} \e u^\beta \cdot \underline v \right ) \\ &= - \frac{1}{\e^2 \kappa } w L f + w\tilde H \\ & \ \ + h \left ( -\frac{1}{2} (\underline v-\e u^\beta ) \cdot [ \partial_t + \frac{1}{\e} \underline v \cdot \nabla_x ] (-\e u^\beta ) - \frac{1}{2} [ \partial_t + \frac{1}{\e} \underline v \cdot \nabla_x ]\e u^\beta \cdot \underline v \right ) \\ &= - \frac{1}{\e^2 \kappa } w L \Big(\frac{h}{w}\Big) - h \left ( \frac{\e^2}{2} u^\beta \cdot \partial_t u^\beta + \frac{\e}{2} \underline v \cdot (\nabla_x u^\beta ) \cdot u^\beta \right ) + w\tilde H . \end{split} \end{equation} Next, we recall that $Lf = \nu f - Kf$ from \eqref{Lgammaform}. From the explicit form of $\nu$ in \eqref{Lgammaform}, we have a positive constant $\nu_0 >0$ such that \begin{equation} \label{nu0} \nu_0 (|v-\e u^\beta| +1 ) \le \nu(v) \le 2 \nu_0 (|v-\e u^\beta | + 1 ). \end{equation} In particular, \eqref{nu0} and \eqref{ekappabeta} imply that \begin{equation} \label{nutilde} \tilde{\nu} (t,x, v) := \nu(t,x,v) + \frac{\e^4 \kappa }{2} u^\beta \cdot \partial_t u^\beta + \frac{\e^3 \kappa }{2} \underline v \cdot (\nabla_x u^\beta ) \cdot u^\beta \end{equation} satisfies \begin{equation} \label{nutilde0} \frac{1}{2} \nu_0 (|v|+1) \le \tilde{\nu} (t,x,v) \le \frac{5}{2} \nu_0 (|v| + 1).
\end{equation} With $\tilde{\nu}$, we can write the evolution equation for $h$: \begin{equation} \label{hevolution} \left ( \partial_t + \frac{1}{\e} \underline{v} \cdot \nabla_x \right ) h + \frac{1}{\e^2 \kappa} \tilde{\nu} h = \frac{1}{\e^2 \kappa} w K \frac{h}{w} + w\tilde H. \end{equation} Let $K_w h (v)= \int_{\mathbb{R}^3} \mathbf{k}_w (v,v_*) h(v_*) \mathrm{d} v_*$ with $\mathbf{k}_w (v,v_*):= \mathbf{k}(v,v_*) \frac{w (v)}{w(v_*)}$. Then \begin{equation} w(v) K \frac{h}{w} (v) = \int_{\mathbb{R}^3} \mathbf{k} (v, v_*) \frac{w(v) }{w(v_*) } h(v_*) \mathrm{d} v_* = K_w h (v). \end{equation} We will need the following estimate for $\mathbf{k}_w$: \begin{lemma}[Lemma 2 of \cite{JK2020}; also \cite{EGKM2}] Suppose that \eqref{smallnessubeta} holds. For $w= e^{\vartheta |v|^2 - \frac{1}{2} \e u^\beta \cdot v}$ with $\vartheta \in (0, \frac{1}{4})$, there exists $C_{\vartheta }>0$ such that \begin{equation} \label{k_w} \mathbf{k}_w(v,v_*) \lesssim \frac{1}{|v-v_*| } e^{-C_{\vartheta } \frac{|v- v_*|^2}{2} } =: \mathbf{k}^\vartheta (v-v_*), \end{equation} \begin{equation} \label{int:k_w} \int_{\mathbb{R}^3} (1+|v-v_*|) \mathbf{k}_w(v,v_*) \mathrm{d} v_* \lesssim \frac{1}{\nu(v)} \lesssim \frac{1}{1+|v|}, \end{equation} \begin{equation} \label{int:k_w2} \int_{\mathbb{R}^3} \frac{1}{|v-v_*| } \mathbf{k}_w(v,v_*) \mathrm{d} v_* \lesssim \frac{1}{\nu(v)} \lesssim 1. \end{equation} Note that $\mathbf{k}^\vartheta \in L^1 (\mathbb{R}^3)$.
\end{lemma} We solve \eqref{hevolution} along the characteristics: \begin{equation} \Bs\label{form:h} &h(t,x,v) = h_0 (Y(0;t,x, \underline v), v) \exp \left ({- \int_0 ^t \frac{\tilde{\nu}(\tau, Y(\tau; t, x, \underline v ), v)}{\e^2 \kappa} \mathrm{d} \tau } \right ) \\ &+ \int^t_0 \frac{ e^{ - \int_s^t \frac{\tilde{\nu}(\tau, Y(\tau; t, x, \underline v), v) \mathrm{d} \tau }{\e^2 \kappa} }}{\e^2 \kappa } \int_{\mathbb{R}^3} \mathbf{k}_w (s, Y(s; t, x, \underline v), v, v_* ) h (s , Y(s;t,x,\underline v), v_*) \mathrm{d} v_* \mathrm{d} s\\ & + \int^t_0 e^{ - \int_s ^t \frac{\tilde{\nu}(\tau, Y(\tau; t, x, \underline v ), v) \mathrm{d} \tau }{\e^2 \kappa} } (w \tilde H) (s , Y(s;t,x,\underline v), v) \mathrm{d} s. \end{split}\end{equation} \hide Next we consider the change of variable formula: \begin{lemma} Fix $N>0$.
For any $s\geq s^\prime \geq 0$ and $(y,\underline u) \in \T^2 \times \{\underline u \in \mathbb{R}^2: |\underline u|<N\}$, the map \begin{equation} s^\prime \mapsto Y(s^\prime;s,y,\underline u) \in\T^2 \end{equation} is $m$-to-one, where $m \leq \max\big\{ \big(2N\frac{s-s^\prime}{\e } \big)^2, 1\big\}$. There is a change of variables formula: for a non-negative function $A: z \in \T^2 \mapsto A(z) \geq 0$, \begin{equation} \label{COV} \int_{\{\underline u \in \mathbb{R}^2: |\underline u|< N\}} A( Y(s^\prime;s,y,\underline u)) \mathrm{d} \underline u \leq \max\Big\{ N^2, \frac{\e^2}{|s-s^\prime|^2} \Big\} \int_{\T^2} A( z ) \mathrm{d} z. \end{equation} \end{lemma} \begin{proof}It suffices to show \eqref{COV}, while others are obvious. From $\det \big(\frac{\partial Y (s^\prime; s, y, \underline u)}{\partial \underline u}\big)= \frac{|s-s^\prime|^2}{\e^2}$, \begin{equation} \begin{split}\notag & \int_{\{\underline u \in \mathbb{R}^2: |\underline u|< N\}} A( Y(s^\prime;s,y,\underline u)) \mathrm{d} \underline u\\ &\leq \max\Big\{ \big(2N\frac{s-s^\prime}{\e } \Big)^2 , 1 \big\}\int_{\T^2} A(z) \frac{\e^2}{|s-s^\prime|^2} \mathrm{d} z\\ &= \max\Big\{ N^2, \frac{\e^2}{|s-s^\prime|^2} \Big\} \int_{\T^2} A( z ) \mathrm{d} z. 
\end{split} \end{equation} \end{proof} \unhide \begin{proof}[\textbf{Proof of Proposition \ref{prop_infty}}] We again apply \eqref{form:h} to the second term in the right-hand side of \eqref{form:h}: \begin{equation} \Bs\notag &h(t,x,v) = \ h_0 (Y(0;t,x, \underline v), v) \exp \left ({- \int_0 ^t \frac{\tilde{\nu}(\tau, Y(\tau; t, x, \underline v ), v)}{\e^2 \kappa} \mathrm{d} \tau } \right ) \\ & + \int^t_0 e^{ - \int_s ^t \frac{\tilde{\nu}(\tau, Y(\tau; t, x, \underline v ), v) \mathrm{d} \tau }{\e^2 \kappa} } (w H) (s , Y(s;t,x,\underline v), v) \mathrm{d} s \\ & + \int_0 ^t \frac{ e^{ - \int_s^t \frac{\tilde{\nu}(\tau, Y(\tau; t, x, \underline v), v) \mathrm{d} \tau }{\e^2 \kappa} }}{\e^2 \kappa } \int_{\mathbb{R}^3} \mathbf{k}_w (s, Y(s; t, x, \underline v), v, v_* ) \\ & \ \ \times h_0 (Y(0; s, Y(s; t, x, \underline v), \underline v_* ), v_* ) e^{-\int_0 ^s \frac{\tilde{\nu} (\tau', Y(\tau'; s, Y(s; t, x, \underline v), \underline v_* ), v_* ) }{\e^2 \kappa } \mathrm{d} \tau' } \mathrm{d} v_* \mathrm{d} s \\ &+ \int_0 ^t \frac{ e^{ - \int_s^t \frac{\tilde{\nu}(\tau, Y(\tau; t, x, \underline v), v) \mathrm{d} \tau }{\e^2 \kappa} }}{\e^2 \kappa } \int_{\mathbb{R}^3} \mathbf{k}_w (s, Y(s; t, x, \underline v), v, v_* ) \\ & \ \ \times \int_0 ^s e^{-\int_{\tau} ^s \frac{ \tilde{\nu} (\tau', Y(\tau'; s, Y(s; t, x, \underline v), \underline v_*), v_* ) \mathrm{d} \tau' }{\e^2 \kappa} } (wH) (\tau, Y(\tau; s, Y(s; t, x, \underline v), \underline v_*), v_* ) \mathrm{d} \tau \mathrm{d} v_* \mathrm{d} s \\ & + \int_0 ^t \frac{ e^{ - \int_s^t \frac{\tilde{\nu}(\tau, Y(\tau; t, x, \underline v), v) \mathrm{d} \tau }{\e^2 \kappa} }}{\e^2 \kappa } \int_{\mathbb{R}^3} \mathbf{k}_w (s, Y(s; t, x, \underline v), v, v_* ) \\ & \ \ \times \int_0 ^s \frac{e^{-\int_\tau ^s \frac{\tilde{\nu} (\tau', Y(\tau'; s, Y(s; t, x, \underline v), \underline v_* ), v_* ) \mathrm{d} \tau' }{\e^2 \kappa } } }{\e^2 \kappa } \int_{\mathbb{R}^3} \mathbf{k}_w (\tau, Y(\tau; s, Y(s; t, x, \underline 
v ), \underline v_* ), v_*, v_{**} ) \\ & \ \ \times h (\tau, Y(\tau; s, Y(s; t, x, \underline v ), \underline v_* ), v_{**} ) \mathrm{d} v_{**} \mathrm{d} \tau \mathrm{d} v_* \mathrm{d} s \\ & =: I_1 + I_2 + I_3 + I_4 + I_K. \end{split}\end{equation} First, we control $I_0 := I_1 + I_3$, the contribution from the initial data. It follows readily from \eqref{nutilde0} and \eqref{int:k_w} that \begin{equation} \notag \begin{split} & |I_1| \le \| h_0 \|_{L^\infty(\T^2 \times \mathbb{R}^3)} e^{- \frac{\nu_0 (|v|+1) t }{2 \e^2 \kappa } }\le \| h_0 \|_{L^\infty(\T^2 \times \mathbb{R}^3)}, \\ & |I_3 | \le \int_0 ^t \frac{e^{-\frac{\nu_0 (|v|+1) (t-s) }{2 \e^2 \kappa } }}{\e^2 \kappa } \int_{\mathbb{R}^3} \mathbf{k}_w ( v, v_* ) e^{- \frac{\nu_0 (|v_*| + 1)s }{2 \e^2 \kappa } } \| h_0 \|_{L^\infty (\T^2 \times \mathbb{R}^3 ) } \mathrm{d} v_* \mathrm{d} s \lesssim \| h_0 \|_{L^\infty (\T^2 \times \mathbb{R}^3 ) } . \end{split} \end{equation} In the estimate of $I_3$, the dependence of $\mathbf{k}_w$ on the $t,x$ variables is omitted, as the bound is uniform in them. Next, we control $I_H := I_2 + I_4$, the contribution from the source $H$. Again from \eqref{nutilde0} and \eqref{int:k_w}, we have \begin{equation} \notag \begin{split} &|I_2 | \le \int_0 ^t e^{- \frac{\nu_0 (|v|+1) (t-s) }{2 \e^2 \kappa } } | w H (v) | \mathrm{d} s \lesssim \e^2 \kappa \| \nu^{-1} w H \|_{L^\infty ([0, T] \times \T^2 \times \mathbb{R}^3 ) } , \\ &|I_4| \le \int_0 ^t \frac{e^{-\frac{\nu_0 (|v|+1) (t-s) }{2 \e^2 \kappa } }}{\e^2 \kappa } \int_{\mathbb{R}^3} \mathbf{k}_w ( v, v_* ) \int_0 ^s e^{- \frac{\nu_0 (|v_*| + 1)(s-\tau) }{2 \e^2 \kappa } } | wH (v_*) |\mathrm{d} \tau \mathrm{d} v_* \mathrm{d} s \\ & \lesssim \e^2 \kappa \| \nu^{-1} wH \|_{L^\infty ([0, T] \times \T^2 \times \mathbb{R}^3 ) }. \end{split} \end{equation} Finally, we control $I_K$.
The idea is as follows. We decompose the time interval $[0, s]$ into $[0, s-\e^2 \kappa \eta]$ and $[s - \e^2 \kappa \eta , s]$ for a small positive number $\eta>0$, to be determined. The integral over the first interval is controlled using the change of variables $\underline v_* \rightarrow Y(\tau; s, Y(s; t, x, \underline v ), \underline v_*)$, which lets us rewrite the integral of $h$ in the $v_*, v_{**}$ variables as a space-time integral of $f$; this is the reason we plugged \eqref{form:h} into itself. Moreover, keeping $\tau$ away from $s$ controls the Jacobian factor arising from this change of variables. The integral over the second interval, on the other hand, runs over a short time span; this yields smallness, and we can bound its contribution by $o(1) \|h\|_{L^\infty ([0, T] \times \T^2 \times \mathbb{R}^3 )}$. Using \eqref{nutilde0} and \eqref{k_w}, we have the following: \begin{equation} \begin{split}\notag & |I_K| \le \int_0 ^t \frac{e^{-\frac{\nu_0 (|v|+1) (t-s) }{2\e^2 \kappa} } }{\e^2 \kappa} \int_0 ^s \frac{e^{-\frac{\nu_0 (|v_*|+1) (s-\tau ) }{2\e^2\kappa} } }{\e^2 \kappa } \\ & \ \times \int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \mathbf{k}^\vartheta (v- v_*) \mathbf{k}^\vartheta (v_* - v_{**} ) | h (\tau, Y(\tau; s, Y(s; t, x, \underline v), \underline v_*), v_{**} ) | \mathrm{d} v_{**} \mathrm{d} v_* \mathrm{d} \tau \mathrm{d} s \\ & = \int_0 ^t \frac{e^{-\frac{\nu_0 (|v|+1) (t-s) }{2\e^2 \kappa} } }{\e^2 \kappa} \int_0 ^{s- \e^2 \kappa \eta} \frac{e^{-\frac{\nu_0 (|v_*|+1) (s-\tau ) }{2\e^2\kappa} } }{\e^2 \kappa } \\ & \ \times \int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \mathbf{k}^\vartheta (v- v_*) \mathbf{k}^\vartheta (v_* - v_{**} ) | h (\tau, Y(\tau; s, Y(s; t, x, \underline v), \underline v_*), v_{**} ) | \mathrm{d} v_{**} \mathrm{d} v_* \mathrm{d} \tau \mathrm{d} s \\ &+ \int_0 ^t \frac{e^{-\frac{\nu_0 (|v|+1) (t-s) }{2\e^2 \kappa} } }{\e^2 \kappa} \int_{s - \e^2 \kappa \eta} ^s \frac{e^{-\frac{\nu_0
(|v_*|+1) (s-\tau ) }{2\e^2\kappa} } }{\e^2 \kappa } \\ & \ \times \int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \mathbf{k}^\vartheta (v- v_*) \mathbf{k}^\vartheta (v_* - v_{**} ) | h (\tau, Y(\tau; s, Y(s; t, x, \underline v), \underline v_*), v_{**} ) | \mathrm{d} v_{**} \mathrm{d} v_* \mathrm{d} \tau \mathrm{d} s \\ &=: I_{5,1} + I_{5,2}. \end{split} \end{equation} We first bound $I_{5,2}$. From the integrability of $\mathbf{k}^\vartheta$ we have \begin{equation} \notag I_{5,2} \le \int_0 ^t \frac{e^{-\frac{\nu_0 (|v|+1) (t-s) }{2\e^2 \kappa} } }{\e^2 \kappa} \mathrm{d} s\frac{\e^2 \kappa \eta}{\e^2 \kappa} \| \mathbf{k}^\vartheta \|_{L^1 (\mathbb{R}^3) }^2 \| h \|_{L^\infty ([0, T] \times \T^2 \times \mathbb{R}^3 ) } \lesssim \eta \| h \|_{L^\infty ([0, T] \times \T^2 \times \mathbb{R}^3 ) } . \end{equation} Next, to treat $I_{5,1}$, we introduce the following decomposition of $\mathbf{k}^\vartheta (v - v_*)$: for a given $N>0$, \begin{equation} \notag \begin{split} &\mathbf{k}^\vartheta (v- v_*) = \mathbf{k}_N ^\vartheta (v , v_*) + \mathbf{k}_R ^\vartheta (v, v_*), \text{ where } \\ &\mathbf{k}_N ^\vartheta (v , v_*) = \mathbf{k}^\vartheta (v- v_*) \mathbf{1}_{B_N (0) \setminus B_{\frac{1}{N} } (0) } (v-v_*) \mathbf{1}_{B_N (0) } (v_*), \text{ and } \\ & \mathbf{k}_R ^\vartheta (v, v_*) = \mathbf{k}^\vartheta (v- v_*) - \mathbf{k}_N ^\vartheta (v , v_*). 
\end{split} \end{equation} With this decomposition, we can split $I_{5,1}$ by \begin{equation} \begin{split}\notag & I_{5,1} = \int_0 ^t \frac{e^{-\frac{\nu_0 (|v|+1) (t-s) }{2\e^2 \kappa} } }{\e^2 \kappa} \int_0 ^{s- \e^2 \kappa \eta} \frac{e^{-\frac{\nu_0 (|v_*|+1) (s-\tau ) }{2\e^2\kappa} } }{\e^2 \kappa } \\ & \ \times \int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \mathbf{k}_N ^\vartheta (v, v_*) \mathbf{k}_N ^\vartheta (v_*, v_{**} ) | h (\tau, Y(\tau; s, Y(s; t, x, \underline v), \underline v_*), v_{**} ) | \mathrm{d} v_{**} \mathrm{d} v_* \mathrm{d} \tau \mathrm{d} s \\ & + \int_0 ^t \frac{e^{-\frac{\nu_0 (|v|+1) (t-s) }{2\e^2 \kappa} } }{\e^2 \kappa} \int_0 ^{s- \e^2 \kappa \eta} \frac{e^{-\frac{\nu_0 (|v_*|+1) (s-\tau ) }{2\e^2\kappa} } }{\e^2 \kappa } \\ & \ \times \int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \mathbf{k}_N ^\vartheta (v, v_*) \mathbf{k}_R ^\vartheta (v_* , v_{**} ) | h (\tau, Y(\tau; s, Y(s; t, x, \underline v), \underline v_*), v_{**} ) | \mathrm{d} v_{**} \mathrm{d} v_* \mathrm{d} \tau \mathrm{d} s \\ & + \int_0 ^t \frac{e^{-\frac{\nu_0 (|v|+1) (t-s) }{2\e^2 \kappa} } }{\e^2 \kappa} \int_0 ^{s- \e^2 \kappa \eta} \frac{e^{-\frac{\nu_0 (|v_*|+1) (s-\tau ) }{2\e^2\kappa} } }{\e^2 \kappa } \\ & \ \times \int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \mathbf{k}_R ^\vartheta (v,v_*) \mathbf{k}_N ^\vartheta (v_* , v_{**} ) | h (\tau, Y(\tau; s, Y(s; t, x, \underline v), \underline v_*), v_{**} ) | \mathrm{d} v_{**} \mathrm{d} v_* \mathrm{d} \tau \mathrm{d} s \\ & + \int_0 ^t \frac{e^{-\frac{\nu_0 (|v|+1) (t-s) }{2\e^2 \kappa} } }{\e^2 \kappa} \int_0 ^{s- \e^2 \kappa \eta} \frac{e^{-\frac{\nu_0 (|v_*|+1) (s-\tau ) }{2\e^2\kappa} } }{\e^2 \kappa } \\ & \ \times \int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \mathbf{k}_R ^\vartheta (v, v_*) \mathbf{k}_R ^\vartheta (v_*, v_{**} ) | h (\tau, Y(\tau; s, Y(s; t, x, \underline v), \underline v_*), v_{**} ) | \mathrm{d} v_{**} \mathrm{d} v_* \mathrm{d} \tau \mathrm{d} s \\ & =: I_{5,1} ^{NN} + I_{5,1} ^{NR} + I_{5,1} ^{RN} + 
I_{5,1} ^{RR}. \end{split} \end{equation} By the monotone convergence theorem, $\int_{\mathbb{R}^3} \mathbf{k}_N ^\vartheta (v, v_*) \mathrm{d} v_* \uparrow \| \mathbf{k}^\vartheta \|_{L^1 (\mathbb{R}^3)}$ as $N\rightarrow \infty$, and thus $A_N := \int_{\mathbb{R}^3} \mathbf{k}^\vartheta _R (v, v_{*} ) \mathrm{d} v_* \rightarrow 0$ as $N\rightarrow \infty$; hence \begin{equation} \begin{split}\notag &I_{5,1} ^{NR} \lesssim A_N \| \mathbf{k}^\vartheta \|_{L^1 (\mathbb{R}^3)} \| h \|_{L^\infty ([0, T] \times \T^2 \times \mathbb{R}^3 ) }, \\ &I_{5,1} ^{RN}\lesssim A_N \| \mathbf{k}^\vartheta \|_{L^1 (\mathbb{R}^3)} \| h \|_{L^\infty ([0, T] \times \T^2 \times \mathbb{R}^3 ) }, \\ &I_{5,1}^{RR} \lesssim A_N^2 \| h \|_{L^\infty ([0, T] \times \T^2 \times \mathbb{R}^3 ) }. \end{split} \end{equation} Finally, we estimate $I_{5,1} ^{NN}$. First, we recall that $\mathbf{k}^\vartheta _N (v, v_*)$ is supported on $\{\frac{1}{N} < |v-v_*| < N \}$ and is therefore bounded by some constant $C_N$. Thus, we have \begin{equation} \begin{split}\notag &\mathbf{k}^\vartheta _N (v, v_*) \le C_N \mathbf{1}_{B_N (0) } (v_*), \\ &\mathbf{k}^\vartheta _N (v_*, v_{**}) \le C_N \mathbf{1}_{B_N (0) } (v_{**}). \end{split} \end{equation} Next, we estimate $|h(\tau, Y(\tau; s, Y(s; t, x, \underline v), \underline v_*), v_{**} )|$: in the support of $\mathbf{k}^\vartheta _N (v, v_*) \mathbf{k}^\vartheta _N (v_*, v_{**})$, we have $|v_*|, |v_{**} | < N$.
Note that this implies $|\underline v_*| < N$ and $|(v_*)_3| < N$, where $v_* = (\underline v_*, (v_*)_3).$ Together with \eqref{smallnessubeta}, we have \begin{equation} \begin{split}\notag & |h(\tau, Y(\tau; s, Y(s; t, x, \underline v), \underline v_*), v_{**} )| = |w f (\tau, Y(\tau; s, Y(s; t, x, \underline v), \underline v_*), v_{**} )| \\ & \le e^{\vartheta |v_{**}|^2 + \frac{1}{2} \e \| u^\beta\|_{L^\infty ([0, T] \times \T^2 ) }| \underline{v_{**}} |} |f (\tau, Y(\tau; s, Y(s; t, x, \underline v), \underline v_*), v_{**} )| \\ & \le C_N |f (\tau, Y(\tau; s, Y(s; t, x, \underline v), \underline v_*), v_{**} )|. \end{split} \end{equation} Also, we rewrite $Y(\tau; s, Y(s; t, x, \underline v), \underline v_*)$: we have \begin{equation} \notag Y(\tau; s, Y(s; t, x, \underline v), \underline v_*) = x - \frac{t-s}{\e} \underline v - \frac{s-\tau} {\e} \underline v_* / \mathbb{Z}^2 \in \T^2. \end{equation} Finally, we remark that since $\tau \in [0, s - \e^2 \kappa \eta]$, we have $\frac{s-\tau}{\e} > \e \kappa \eta$. 
Combining these estimates, for $\tau \in [0, s - \e^2 \kappa \eta]$, we have \begin{equation} \begin{split}\notag &\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \mathbf{k}_N ^\vartheta (v, v_*) \mathbf{k}_N ^\vartheta (v_*, v_{**} ) | h (\tau, Y(\tau; s, Y(s; t, x, \underline v), \underline v_*), v_{**} ) | \mathrm{d} v_{**} \mathrm{d} v_* \\ & \lesssim_{N} \int_{|(v_*)_3 | < N} \int_{|\underline v_*| < N } \int_{|v_{**}|< N } | f_R (\tau, x - \frac{t-s}{\e} \underline v - \frac{s-\tau} {\e} \underline v_* / \mathbb{Z}^2, v_{**} ) | \mathrm{d} v_{**} \mathrm{d} v_* \\ & \lesssim_{N} \left ( \int_{|\underline v_*| < N } \int_{\mathbb{R}^3} \left | f_R (\tau, x - \frac{t-s}{\e} \underline v - \frac{s-\tau} {\e} \underline v_* / \mathbb{Z}^2, v_{**} ) \right |^2 \mathrm{d} v_{**} \mathrm{d} v_* \right )^{\frac{1}{2} }, \end{split} \end{equation} where we have used that the integrand is independent of $(v_*)_3$ and that $\| \mathbf{1}_{\{|\underline v_*| < N\} \times \{|v_{**}| < N \} } \|_{L^2 (\mathbb{R}^2 \times \mathbb{R}^3 ) } \lesssim_N 1$. Next, we apply the change of variables $\underline v_* \rightarrow y = x - \frac{t-s}{\e} \underline v - \frac{s-\tau} {\e} \underline v_* \in \mathbb{R}^2$. This map is one-to-one and maps $\underline v_* \in B_N (0)$ onto $y \in B_{\frac{s-\tau}{\e} N } (x - \frac{t-s}{\e} \underline v)$ with $\mathrm{d} y = \left (\frac{s-\tau}{\e} \right )^2 \mathrm{d} \underline v_*$.
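The Jacobian factor here is explicit: the map is affine in $\underline v_*$, so
\begin{equation} \notag \left | \det \frac{\partial y}{\partial \underline v_*} \right | = \left | \det \left ( - \frac{s-\tau}{\e} \, \mathrm{Id}_{2 \times 2} \right ) \right | = \left ( \frac{s-\tau}{\e} \right )^2, \end{equation}
and, since $s - \tau \ge \e^2 \kappa \eta$ on the first time interval, the inverse factor satisfies $\left ( \frac{\e}{s-\tau} \right )^2 \le (\e \kappa \eta)^{-2}$.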
Therefore, we have \begin{equation} \label{chvarinteg} \begin{split} & \left ( \int_{|\underline v_*| < N } \int_{\mathbb{R}^3} \left | f_R (\tau, x - \frac{t-s}{\e} \underline v - \frac{s-\tau} {\e} \underline v_* / \mathbb{Z}^2, v_{**} ) \right |^2 \mathrm{d} v_{**} \mathrm{d} v_* \right )^{\frac{1}{2} } \\ &= \left ( \int_{y \in B_{\frac{s-\tau}{\e} N } (x - \frac{t-s}{\e} \underline v) } \int_{\mathbb{R}^3} \left | f_R (\tau, y / \mathbb{Z}^2 , v_{** } ) \right |^2 \left ( \frac{\e }{s-\tau } \right )^2 \mathrm{d} v_{**} \mathrm{d} y \right )^{\frac{1}{2} } \\ & = \left ( \sum_{k \in \mathbb{Z}^2} \int_{y \in ( \left [-\frac{1}{2},\frac{1}{2} \right ]^2 + k) \cap B_{\frac{s-\tau}{\e} N } (x - \frac{t-s}{\e} \underline v) } \int_{\mathbb{R}^3} \left | f_R (\tau, y-k , v_{** } ) \right |^2 \left ( \frac{\e }{s-\tau } \right )^2 \mathrm{d} v_{**} \mathrm{d} y \right )^{\frac{1}{2} } \\ & = \left ( \sum_{k \in \mathbb{Z}^2} \int_{z \in \left [-\frac{1}{2},\frac{1}{2} \right ]^2 \cap B_{\frac{s-\tau}{\e} N } (x - \frac{t-s}{\e} \underline v - k ) } \int_{\mathbb{R}^3} \left | f_R (\tau, z , v_{** } ) \right |^2 \left ( \frac{\e }{s-\tau } \right )^2 \mathrm{d} v_{**} \mathrm{d} z \right )^{\frac{1}{2} }, \end{split} \end{equation} where $z = y-k$ in each integral. Next, we count the number of $k \in \mathbb{Z}^2$ such that $\left [-\frac{1}{2},\frac{1}{2} \right ]^2 \cap B_{\frac{s-\tau}{\e} N } (x - \frac{t-s}{\e} \underline v - k ) \ne \emptyset$. There are two cases: if $N \frac{s-\tau}{\e} \le 1$, there are $O(1)$ such $k \in \mathbb{Z}^2$. If $N \frac{s-\tau}{\e} > 1$, there are $O \left (\left (N \frac{s-\tau}{\e} \right )^2 \right ) $ such $k \in \mathbb{Z}^2$. 
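Indeed, if $\left [-\frac{1}{2},\frac{1}{2} \right ]^2 \cap B_{\frac{s-\tau}{\e} N } (x - \frac{t-s}{\e} \underline v - k ) \ne \emptyset$, then $k$ lies within distance $\frac{s-\tau}{\e} N + \sqrt{2}$ of $x - \frac{t-s}{\e} \underline v$, so the number of such $k$ is bounded by
\begin{equation} \notag \# \Big\{ k \in \mathbb{Z}^2 : \big|k - \big(x - \tfrac{t-s}{\e} \underline v \big) \big| \le \tfrac{s-\tau}{\e} N + \sqrt{2} \Big\} \lesssim \Big ( \tfrac{s-\tau}{\e} N + 1 \Big )^2 \lesssim \max \Big\{ 1, \Big ( \tfrac{s-\tau}{\e} N \Big )^2 \Big\}. \end{equation}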
Therefore, we have \begin{equation} \begin{split}\notag &\eqref{chvarinteg} \lesssim \left ( \max \left ( \left (\frac{\e}{s-\tau} \right )^2, N^2 \right ) \int_{\T^2} \int_{\mathbb{R}^3} |f_R(\tau, z, v_{**} ) |^2 \mathrm{d} v_{**} \mathrm{d} z \right )^{\frac{1}{2}} \\ & \lesssim_{N, \eta} \frac{1}{\e \kappa} \sup_{\tau \in [0, t]} \| f_R (\tau ) \|_{L^2 (\T^2 \times \mathbb{R}^3 ) }. \end{split} \end{equation} Choosing $N$ large enough and $\eta$ small enough that the contributions of $I_{5,2}$, $I_{5,1}^{NR}$, $I_{5,1}^{RN}$, and $I_{5,1}^{RR}$ can be absorbed into the left-hand side, we obtain \begin{equation} \notag \begin{split} \| h \|_{L^\infty ([0, T] \times \T^2 \times \mathbb{R}^3 )} \lesssim&\ \| h_0 \|_{L^\infty (\T^2 \times \mathbb{R}^3)} + \e^2 \kappa \| \nu^{-1} w H \|_{L^\infty ([0, T] \times \T^2 \times \mathbb{R}^3) }\\ & + \frac{1}{\e \kappa} \| f_R \|_{L^\infty ([0, T] ; L^2 (\T^2 \times \mathbb{R}^3 ) ) }, \end{split} \end{equation} which is the desired conclusion. \end{proof} \subsection{Remainder Estimate}\label{sec:re} To admit far-from-equilibrium initial data, we need to keep the characteristic size of the remainder as large as possible. A heuristic calculation suggests that the size $o(\e \kappa)$ is the threshold for the remainder: if the remainder reaches size $O(\e \kappa)$, we lose control of the nonlinearity in the remainder equation. Thus, we aim to keep the characteristic size of the remainder slightly smaller than $\e \kappa$. There is very little room for this: the only possible gain comes from the coercivity of the linearized Boltzmann operator $L$. However, many conventional techniques (the averaging lemma, $L^\infty$-estimates) do not rely on it; to the authors' knowledge, the coercivity of $L$ is exploited only in $L^2_v$ estimates. If we resort to other techniques at too early a stage, we lose far too much in the scaling and fail to achieve the goal. As a consequence, we need to push the $L^2_v$ estimates as far as possible.
The important observation made in \cite{Guo2002} is that even the nonlinear term is controlled by an $L^2$-in-$v$ integral of the remainders, since the nonlinear term is also expressed as an integral against a nicely decaying kernel; what is lacking is $L^2$ integrability in $x$. This observation naturally leads us to pursue $L^2_v$ estimates for derivatives of the remainder and then rely on interpolation: an $H^2_x$, but only $L^2_v$, estimate. It turns out that this idea gives a sharper scaling than many conventional techniques: the commutator $[\![ \partial^s, L ]\!]$ between spatial derivatives and $L$ forces us to lose a factor of $\sqrt{\kappa}$ for each derivative, but we lose no scale in the nonlinearity on a 2-dimensional domain. Thus, by letting the initial data decay to 0 at an arbitrarily slow rate as $\e \rightarrow 0$, we can keep the $L^2_x L^2_v$ norms of the remainder and its derivatives small, provided that the source terms are also small, which is the main point of the next idea. Furthermore, the $H^2_x L^2_v$ framework fits very well with our goal of obtaining convergence in a stronger topology: as we can keep up to two derivatives of the remainder small, we can keep our Boltzmann solution close to the local Maxwellian $M_{1, \e u^\beta, 1}$. Its zeroth and first derivatives may converge; they correspond to the velocity and the vorticity. Its second derivatives may blow up in general, which represents the formation of singular objects, e.g., interfaces. Now we are ready to prove compactness of $f_R$ in a suitable topology, thereby proving convergence. For a fixed $T>0$ and $t \in (0, T)$, we use the following scaled energy and its dissipation: \begin{equation} \label{ED} \begin{split} \mathcal{E}(t) &:= \sum_{s \le 2} \sup_{t' \in (0, t) }\| \kappa^{-1 + \frac{s}{2}} \partial^s f_R (t') \|_{ L^2_x L^2 _v } ^2, \\ \mathcal{D} (t) &:= \sum_{s \le 2} \| \e^{-1} \kappa^{-\frac{3}{2} + \frac{s}{2} } \nu^{\frac{1}{2} } (\mathbf{I} - \mathbf{P} ) \partial^s f_R \|_{L^2 ((0, t); L^2 _x L^2_v ) } ^2.
\end{split} \end{equation} We also need the following auxiliary norm: \begin{equation} \label{F} \mathcal{F} (t) := \e \sup_{t' \in (0, t) } \| f_R (t') \|_{L^\infty (\mathbb{T}^2 \times \mathbb{R}^3 ) }. \end{equation} Also, we will frequently use the following basic inequality: \begin{equation} \notag \sum_{s \le 2} \| \kappa^{-1 + \frac{s}{2}} \partial^s f_R \|_{L^2 ( (0, t); L^2_x L^2 _v ) } ^2 \le \int_0 ^t \mathcal{E} \lesssim_{T} \mathcal{E}(t). \end{equation} The main theorem of this section is the following. \begin{theorem} \label{theo:remainder} Let $T>0$. Suppose that $\delta_s = \delta_s (\e)$, $s = 0, 1, 2$, satisfy the following: \begin{align} &\lim_{\e \rightarrow 0} \delta_0 (\e) ^2 \left (\|\nabla_x u^\beta\|_{L^\infty((0, T) \times \mathbb{T}^2 ) } ^2 + 2 \right ) \exp \left ( 2 \mathbf{C}_0 \left (\|\nabla_x u^\beta\|_{L^\infty((0, T) \times \mathbb{T}^2 ) } ^2 + 2 \right ) T \right ) = 0, \label{delta0} \\ & \delta_s (\e) < (\e^{-1} \kappa^{-1/2} )^s, \quad s = 1, 2. \label{deltas} \end{align} Suppose that $f_R (0)$ satisfies \begin{equation} \notag \sqrt{\mathcal{E}(0) }, \ \mathcal{F}(0) < \delta_0 (\e), \ \| w_s \partial^s f_{R0} \|_{L^\infty (\mathbb{T}^2 \times \mathbb{R}^3 ) } < \delta_s (\e), \quad s = 1, 2. \end{equation} Then \eqref{Remainder} with initial data $f_R(0)$ and $\tilde{u}^\beta (0) \equiv 0$ has a solution $f_R(t)$, $t \in (0, T)$, such that \begin{equation} \begin{split}\notag & \mathcal{E}(t) + \mathcal{D}(t)\\ \le & \ (\delta_0^2 + \kappa) (1+T) C(\mathbf{C}_0)\\ & \times \left ( 2 \mathbf{C}_0 \left (\|\nabla_x u^\beta\|_{L^\infty((0, T) \times \mathbb{T}^2 ) } ^2 + 2 \right ) \exp \left ( 2 \mathbf{C}_0 \left (\|\nabla_x u^\beta\|_{L^\infty((0, T) \times \mathbb{T}^2 ) } ^2 + 2 \right ) T \right ) \right ) , \end{split} \end{equation} and $ \lim_{\e \rightarrow 0} \sup_{t \in (0, T) } (\mathcal{E}(t) + \mathcal{F}(t) ) = 0.
$ \end{theorem} \subsubsection{Energy estimate} By taking $L^2$ norm for \eqref{Remainder}, \eqref{pRemainder} for $s \le 2$ and integrating over time, we have \begin{align} \mathcal{E}(t) &+ \mathcal{D}(t) \lesssim \mathcal{E}(0)\notag \\ &+ \| \nabla_x u^\beta\|_{L^\infty_{t,x} } \sum_{s \le 2} \int_{(0, t) \times \mathbb{T}^2 \times \mathbb{R}^3} \left |\frac{\partial^s f_R}{\kappa^{1- \frac{s}{2} } } \right |^2 \nu^2 \mathrm{d} v \mathrm{d} x \mathrm{d} t' \label{momstr1}\\ & + \sum_{s' < s} \kappa^{\frac{s-s'}{2} } V(\beta) \int_{(0, t) \times \mathbb{T}^2 \times \mathbb{R}^3} \left |\frac{\partial^{s'} f_R}{\kappa^{1- \frac{s'}{2} } } \right | \left |\frac{\partial^s f_R}{\kappa^{1- \frac{s}{2} } } \right | \nu^2 \mathrm{d} v \mathrm{d} x \mathrm{d} t' + \e V(\beta) (\mathcal{E}(t) + \mathcal{D}(t) ) \label{momstr2} \\ &+ \sum_{s \le 2} \int_{(0, t) \times \mathbb{T}^2 \times \mathbb{R}^3} \e^{-2} \kappa^{-2 + \frac{s}{2}} [\![ \partial^s , L ]\!] f_R \frac{\partial^s f_R}{\kappa^{1 - \frac{s}{2} } } \mathrm{d} v \mathrm{d} x \mathrm{d} t' \label{Lcommutator} \\ &+ \sum_{s \le 2}\int_{(0, t) \times \mathbb{T}^2 \times \mathbb{R}^3} \e^{-1} \kappa^{-2 + \frac{s}{2} } \partial^s \Gamma ( f_R, f_R ) \frac{\partial^s f_R}{\kappa^{1 - \frac{s}{2} } } \mathrm{d} v \mathrm{d} x \mathrm{d} t' \label{nonlinearity} \\ & + \sum_{s \le 2} \int_{(0, t) \times \mathbb{T}^2 \times \mathbb{R}^3} \e^{-1} \kappa^{-1 + \frac{s}{2}} \partial^s \Gamma (\mathfrak{R}_3, f_R)\frac{\partial^s f_R}{\kappa^{1 - \frac{s}{2} } } \mathrm{d} v \mathrm{d} x \mathrm{d} t' \label{linear} \\ &- \sum_{s \le 2}\int_{(0, t) \times \mathbb{T}^2 \times \mathbb{R}^3} \kappa^{-1 + \frac{s}{2} } \left (\e \partial^s \mathfrak{R}_1 + \frac{\kappa}{\e} (\mathbf{I} - \mathbf{P} ) \partial^s \mathfrak{R}_2 + \frac{\kappa}{\e} \partial^s \mathfrak{R}_2 \right ) \frac{\partial^s f_R}{\kappa^{1 - \frac{s}{2} } } \mathrm{d} v \mathrm{d} x \mathrm{d} t' . \label{source} \end{align} \textit{Step 1. 
Control of \eqref{source}.} From \eqref{R1point} and \eqref{R2point}, we have \begin{equation} \label{sourceestimate} \eqref{source} \lesssim \sum_{s \le 2} \left ( \e \kappa^{-1 + \frac{s}{2} } V(\beta) \sqrt{\mathcal{E}(t) } + \kappa^{\frac{3}{2} } V(\beta) \sqrt{\mathcal{D}(t) } + \kappa V(\beta) \sqrt{\mathcal{E}(t)} \right ) \lesssim \kappa^{\frac{1}{2}} \left (\sqrt{\mathcal{E}(t) } + \sqrt{\mathcal{D}(t) } \right ), \end{equation} by \eqref{ekappabeta}. \textit{Step 2. Control of \eqref{linear}.} We note that \begin{equation} \notag \partial^s \Gamma(\mathfrak{R}_3, f_R ) = \sum_{s_1 + s_2 + s_3 = s} {}_{\partial^{s_1}} \Gamma (\partial^{s_2} \mathfrak{R}_3 , \partial^{s_3} f_R ). \end{equation} There are two cases. First, if $s_1 = 0$, then \begin{equation} \begin{split}\notag \sum_{s_2 + s_3 = s} & \int_{(0, t) \times \mathbb{T}^2 \times \mathbb{R}^3} \e^{-1} \kappa^{-1 + \frac{s}{2} } \Gamma( \partial^{s_2} \mathfrak{R}_3, \partial^{s_3} f_R) \frac{\partial^s f_R}{\kappa^{1 - \frac{s}{2} } } \mathrm{d} v \mathrm{d} x \mathrm{d} t' \\ & \lesssim \sum_{ s_3 \le s} \kappa^{-\frac{1}{2} + \frac{s}{2} } V(\beta) \| \partial^{s_3} f_R \|_{L^2 ((0, t); L^2_x L^2_v ) } \sqrt{\mathcal{D}(t) } \lesssim \kappa^{\frac{1}{2} } V(\beta) \sqrt{\mathcal{E}(t) }\sqrt{\mathcal{D}(t) }. 
\end{split} \end{equation} If $s_1 \ge 1$, then by Lemma \ref{lemma_L} we have \begin{equation} \notag \begin{split} \sum_{s_1 +s_2 + s_3 = s} & \int_{(0, t) \times \mathbb{T}^2 \times \mathbb{R}^3} \e^{-1} \kappa^{-1 + \frac{s}{2} } {}_{\partial^{s_1} }\Gamma( \partial^{s_2} \mathfrak{R}_3, \partial^{s_3} f_R) \frac{\partial^s f_R}{\kappa^{1 - \frac{s}{2} } } \mathrm{d} v \mathrm{d} x \mathrm{d} t' \\ & \lesssim \sum_{s_3 < s} V(\beta) \kappa^{-1 + \frac{s}{2}} \left ( \| \partial^{s_3} f_R \|_{L^2((0, t); L^2_x L^2_v)} + \| \nu^{\frac{1}{2} } (\mathbf{I} - \mathbf{P}) \partial^{s_3} f_R \|_{L^2((0, t); L^2_x L^2_v ) } \right ) \\ & \times \left ( \sqrt{\mathcal{E}(t) } + \e \kappa^{\frac{1}{2} } \sqrt{\mathcal{D}(t) } \right ) \lesssim \kappa^{\frac{1}{2} } V(\beta) (\mathcal{E}(t) + \e^2 \kappa \mathcal{D}(t)), \end{split} \end{equation} since $s_3 < s$. In conclusion, we have \begin{equation} \label{linearestimate} \eqref{linear} \lesssim \kappa^{\frac{1}{2} }V(\beta) (\mathcal{E}(t) + \mathcal{D}(t) ). \end{equation} \textit{Step 3. Control of \eqref{Lcommutator}.} For $s=0$, $[\![\partial^s, L ]\!]=0$. When $s=1$, $[\![\partial^s, L ]\!] f_R$ consists of terms of types 1 and 2 in Lemma \ref{Lpcomm}. When $s=2$, there is exactly one term in $[\![\partial^s, L ]\!] f_R$ which is of type 3 in Lemma \ref{Lpcomm}: ${}_{\partial} L [\![ \mathbf{P}, \partial ]\!] f_R$.
For a given $s \le 2$ and type 1 term in Lemma \ref{Lpcomm}, we have an upper bound \begin{equation} \label{Lpcomuppb} \left ( \|\nabla_x u^\beta\|_{L^\infty_{t,x} } + \kappa^{\frac{1}{2} } V(\beta) \right ) \sqrt{\mathcal{D}(t) } \left ( \sqrt{\int_0 ^t \mathcal{E} } + \e \kappa^{\frac{1}{2} } \sqrt{\mathcal{D} (t) } \right ) \lesssim ( \|\nabla_x u^\beta\|_{L^\infty_{t,x} }^2 + 1) \int_0 ^t \mathcal{E} + o(1) \mathcal{D}(t), \end{equation} where the first $\|\nabla_x u^\beta\|_{L^\infty_{t,x}}$ term corresponds to ${}_{\partial^{s_1} } L (\mathbf{I} - \mathbf{P}) \partial^{s_2} f_R$ and the second $\kappa^{\frac{1}{2} } V(\beta) $ term corresponds to ${}_{\partial^2} L (\mathbf{I} - \mathbf{P} ) f_R$. For example, for $s=2$ with ${}_{\partial} L (\mathbf{I} - \mathbf{P}) \partial f_R$ term, we have \begin{equation} \begin{split}\notag &\int_{(0, t) \times \mathbb{T}^2 \times \mathbb{R}^3 } \e^{-2} \kappa^{-1} {}_{\partial} L (\mathbf{I} - \mathbf{P} ) \partial f_R \partial^2 f_R \mathrm{d} v \mathrm{d} x \mathrm{d} t' \lesssim \| \nabla_x u^\beta \|_{L^\infty_{t,x} } \| \e^{-1} \kappa^{-1} \nu^{\frac{1}{2} } (\mathbf{I} - \mathbf{P} ) \partial f_R \|_{L^2((0, t); L^2_x L^2 _v ) } \\ & \times \left ( \|\partial^2 f_R \|_{L^2 ((0, t); L^2_x L^2_v ) } + \e \kappa^{\frac{1}{2} } \|\e^{-1} \kappa^{-\frac{1}{2} } \nu^{\frac{1}{2} } (\mathbf{I} - \mathbf{P} ) \partial^2 f_R \|_{L^2 ((0, t); L^2_x L^2_v ) } \right ) \end{split} \end{equation} which is bounded by the right-hand side of \eqref{Lpcomuppb}. For a given $s \le 2$ and type 2 term in Lemma \ref{Lpcomm}, we have a similar upper bound \begin{equation} \notag \sum \e^{-1} \kappa^{-\frac{3}{2} + \frac{s}{2}} \| \partial \cdots [\![\mathbf{P}, \partial ]\!] \cdots \partial f_R \|_{L^2((0, t); L^2_x L^2_v)} \sqrt{\mathcal{D}(t) }, \end{equation} where summation is over possible combinations of $\partial \cdots [\![\mathbf{P}, \partial ]\!] 
\cdots \partial$, consisting of $s-1$ copies of $\partial$ and one $[\![\mathbf{P}, \partial ]\!]$. We note that \begin{equation} \notag \begin{split} &\| \partial \cdots [\![\mathbf{P}, \partial ]\!] \cdots \partial f_R \|_{L^2((0, t); L^2_x L^2_v)} \\ & \lesssim \e \left ( \| \nabla_x u^\beta \|_{L^\infty_{t,x} } \| \partial^{s-1} f_R \|_{L^2((0, t); L^2_x L^2_v)} + V(\beta) \sum_{s' < s-1} \| \partial^{s'} f_R \|_{L^2((0, t); L^2_x L^2_v)}\right ), \end{split} \end{equation} where the former term corresponds to the case in which all $s-1$ derivatives fall on $f_R$, and the latter to the remaining cases. Thus, again, we have a bound \begin{equation} \notag \left ( \|\nabla_x u^\beta\|_{L^\infty_{t,x} } + \kappa^{\frac{1}{2} } V(\beta) \right ) \sqrt{\int_0 ^t \mathcal{E} } \sqrt{\mathcal{D} (t) } \lesssim ( \|\nabla_x u^\beta\|_{L^\infty_{t,x} }^2 + 1) \int_0 ^t \mathcal{E} + o(1) \mathcal{D}(t). \end{equation} Finally, for a type 3 term in Lemma \ref{Lpcomm} (which immediately implies $s=2$), we have \begin{equation} \notag \| \nabla_x u^\beta \|_{L^\infty_{t,x} }^2 \left \| \frac{f_R}{\kappa} \right \|_{L^2((0, t); L^2_x L^2_v ) } \sqrt{\int_0 ^t \mathcal{E} } \lesssim \| \nabla_x u^\beta \|_{L^\infty_{t,x} }^2 \int_0 ^t \mathcal{E}. \end{equation} To summarize, we have \begin{equation} \label{Lcommutatorestimate} \eqref{Lcommutator} \lesssim ( \|\nabla_x u^\beta\|_{L^\infty_{t,x} }^2 + 1) \int_0 ^t \mathcal{E} + o(1) \mathcal{D}(t). \end{equation} \textit{Step 4. Control of \eqref{momstr1}, \eqref{momstr2}.} We use the following standard estimate: let $0<\vartheta_2<\vartheta_1<\vartheta_0<\frac{1}{4}$, and let \begin{equation} \label{weightpfr} w_j = e^{\vartheta_j|v|^2 - \frac{1}{2} \e u^\beta \cdot \underline v }, \quad j=0, 1, 2.
\end{equation} For $s \le 2$, we have \begin{equation} \notag \begin{split} &\int_{(0, t) \times \mathbb{T}^2 \times \mathbb{R}^3} \left | \frac{\partial^{s} f_R}{\kappa^{1-\frac{s}{2} } } \right |^2 \nu^2 \mathrm{d} v \mathrm{d} x \mathrm{d} t' \lesssim \left \| \frac{ \mathbf{P} \partial^s f_R }{\kappa^{1- \frac{s}{2} } } \right \|_{L^2 ((0, t); L^2_x L^2_v ) }^2 + \left \| \nu^1 \frac{ (\mathbf{I} -\mathbf{P}) \partial^s f_R }{\kappa^{1- \frac{s}{2} } } \right \|_{L^2 ((0, t); L^2_x L^2_v ) }^2 \\ & \lesssim \int_0 ^t \mathcal{E} + \left \|\mathbf{1}_{ \{ |v - \e u^\beta|>(\e \sqrt{\kappa} )^{-o(1) } \} }\nu^1 \frac{ (\mathbf{I} -\mathbf{P}) \partial^s f_R }{\kappa^{1- \frac{s}{2} } }\right \|_{L^2 ((0, t); L^2_x L^2_v ) }^2 \\ &+ \left \|\mathbf{1}_{ \{ |v - \e u^\beta|\le (\e \sqrt{\kappa} )^{-o(1) } \} }\nu^1 \frac{ (\mathbf{I} -\mathbf{P}) \partial^s f_R }{\kappa^{1- \frac{s}{2} } }\right \|_{L^2 ((0, t); L^2_x L^2_v ) }^2 \\ & \lesssim \int_0 ^t \mathcal{E} + \|\mathbf{1}_{ \{ |v - \e u^\beta|>(\e \sqrt{\kappa} )^{-o(1) } \} } \nu^1 w_s^{-1} \|_{L^2 ((0, t); L^2_x L^2_v ) }^2 \| w_s \partial^s f \|_{L^\infty((0, t); L^\infty(\mathbb{T}^2 \times \mathbb{R}^3 ) )} ^2 \\ & \ \ + \left (\e \sqrt{\kappa} \right )^{1 - o(1) } \mathcal{D}(t) \\ & \lesssim \int_0 ^t \mathcal{E} +\left (\e \sqrt{\kappa} \right )^{1 - o(1) } \mathcal{D}(t) + e^{-\frac{1}{(\e \sqrt{\kappa})^{o(1)}} } \| w_s \partial^s f \|_{L^\infty((0, t); L^\infty(\mathbb{T}^2 \times \mathbb{R}^3 ) )} ^2. \end{split} \end{equation} Similar calculation for \eqref{momstr2} gives the following bound: \begin{equation} \label{momestimate} \eqref{momstr1} + \eqref{momstr2} \lesssim (1 + \| \nabla_x u^\beta \|_{L^\infty_{t,x} } ) \int_0 ^t \mathcal{E} + o(1) \mathcal{D}(t) + e^{-\frac{1}{(\e \sqrt{\kappa})^{o(1)}} } \sum_{s \le 2} \| w_s \partial^s f \|_{L^\infty((0, t); L^\infty(\mathbb{T}^2 \times \mathbb{R}^3 ) )} ^2. \end{equation} \textit{Step 5. 
Control of \eqref{nonlinearity}.} Finally, we control the nonlinear contribution \eqref{nonlinearity}: here we use the anisotropic interpolation result (Lemma \ref{anint}) and Lemma \ref{lemma_L}. First, from Lemma \ref{nonhydroLp} and \eqref{ekappabeta}, we remark that \begin{equation} \notag \begin{split} \left \| \nu^{\frac{1}{2} } (\mathbf{I} - \mathbf{P} ) \frac{f}{\kappa} \right \|_{L^2 ((0, t); L^4_x L^2_v ) } &+ \left \| \nu^{\frac{1}{2} } (\mathbf{I} - \mathbf{P} ) \frac{\partial f}{\sqrt{\kappa} } \right \|_{L^2 ((0, t); L^4_x L^2_v ) } + \left \| \nu^{\frac{1}{2} } (\mathbf{I} - \mathbf{P} ) \frac{f}{\kappa} \right \|_{L^2 ((0, t); L^\infty_x L^2_v ) }\\ & \lesssim \e^{\frac{1}{2} } \left ( \sqrt{\mathcal{D}(t) } + \sqrt{\int_0 ^t \mathcal{E} } \right ). \end{split} \end{equation} Next, we estimate the following integrals: first, we estimate \begin{equation} \begin{split}\notag &\int_{(0, t) \times \mathbb{T}^2 \times \mathbb{R}^3} \frac{1}{\e \kappa^2} \Gamma(f_R, f_R) \frac{f_R}{\kappa} \mathrm{d} v \mathrm{d} x \mathrm{d} t' \\ & \lesssim \left ( \left \| \frac{f_R}{\kappa} \right \|_{L^2_{tvx}} + \left \| \nu^{\frac{1}{2} } (\mathbf{I} - \mathbf{P} ) \frac{f_R}{\kappa} \right \|_{L^2_{tvx} } \right ) \sqrt{\kappa}^{-1} \| f_R \|_{L^\infty_{tx} L^2_v} \sqrt{\mathcal{D} (t) } \\ & \lesssim \left ( \sqrt{\int_0 ^t \mathcal{E} } + \e \sqrt{\kappa} \sqrt{\mathcal{D}(t) } \right ) \sqrt{\mathcal{D}(t) } \left \| \frac{f_R}{\kappa} \right \|_{L^\infty_t L^2_{xv} } ^{\frac{1}{2} } \| \partial^2 f_R \|_{L^\infty _t L^2_{xv} } ^{\frac{1}{2} } \lesssim \left ( \int_0 ^t \mathcal{E} + \mathcal{D}(t) \right ) \sqrt{\mathcal{E}(t) }.
\end{split} \end{equation} In a similar fashion, we see \begin{align}\notag \begin{split} &\int_{(0, t) \times \mathbb{T}^2 \times \mathbb{R}^3} \frac{1}{\e \kappa^{2 - \frac{s}{2} }} \Gamma(\partial^s f_R, f_R) \frac{\partial^s f_R}{\kappa^{1 - \frac{s}{2}}} \mathrm{d} v \mathrm{d} x \mathrm{d} t' \\ &\lesssim \sqrt{\mathcal{D}(t) } \frac{1}{\kappa^{\frac{3-s}{2} }} \left [ \left ( \| \partial^s f_R \|_{L^2_{txv}} + \| \nu^{\frac{1}{2} } (\mathbf{I} - \mathbf{P} ) \partial^s f_R \|_{L^2_{txv}} \right ) \| f_R \|_{L^\infty_{tx} L^2_v } \right. \\ &\left. + \left ( \| f_R \|_{L^2_t L^\infty_x L^2_v} + \| \nu^{\frac{1}{2} } (\mathbf{I} - \mathbf{P} ) f_R \|_{L^2_t L^\infty_x L^2_v} \right ) \| \partial^s f_R \|_{L^\infty_{t} L^2_{xv} } \right ] \\ & \lesssim \sqrt{\mathcal{D}(t) } \left [ \left (\sqrt{\int_0^t \mathcal{E} } + \e \sqrt{\kappa} \sqrt{\mathcal{D}(t) } \right ) \frac{1}{\sqrt{\kappa}} \| f_R \|_{L^\infty_{tx} L^2_v } \right. \\ &\left.+ \sqrt{\mathcal{E}(t) } \left ( \frac{1}{\sqrt{\kappa} } \| f_R \|_{L^2_t L^\infty_x L^2_v} + \e^{\frac{1}{2} } \left ( \sqrt{\mathcal{D}(t) } + \sqrt{\int_0 ^t \mathcal{E} } \right ) \right ) \right ] \lesssim \left ( \int_0 ^t \mathcal{E} + \mathcal{D}(t) \right ) \sqrt{\mathcal{E}(t) }, \quad s \le 2, \end{split} \\ \begin{split}\notag &\int_{(0, t) \times \mathbb{T}^2 \times \mathbb{R}^3} \frac{1}{\e \kappa} \Gamma(\partial f_R, \partial f_R) \partial^2 f_R \mathrm{d} v \mathrm{d} x \mathrm{d} t' \\ &\lesssim \sqrt{\mathcal{D}(t) } \frac{1}{\sqrt{\kappa}}\left ( \| \partial f_R \|_{L^2_t L^4_x L^2_v} + \| \nu^{\frac{1}{2} } (\mathbf{I} - \mathbf{P} ) \partial f_R \|_{L^2_t L^4_x L^2_v } \right ) \| \partial f_R \|_{L^\infty_t L^4_x L^2_v} \lesssim \left ( \int_0 ^t \mathcal{E} + \mathcal{D}(t) \right ) \sqrt{\mathcal{E}(t) }. \end{split} \end{align} Here we have used Lemma \ref{lemma_L} to first bound the terms involving $L^2_v$ norms in mixed $L^p_x$ norms, and then Lemma \ref{anint} to return to the $L^2_x$ norm.
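We recall, as background, the classical two-dimensional Ladyzhenskaya inequality, which is the model estimate behind interpolation results of the type of Lemma \ref{anint} (this is a reminder of the standard inequality, not the precise statement of the lemma): for $g \in H^1(\mathbb{T}^2)$,
\begin{equation} \notag
\| g \|_{L^4(\mathbb{T}^2)}^2 \lesssim \| g \|_{L^2(\mathbb{T}^2)} \left ( \| g \|_{L^2(\mathbb{T}^2)} + \| \nabla_x g \|_{L^2(\mathbb{T}^2)} \right ),
\end{equation}
applied in $x$ for each fixed $v$ and combined with Minkowski's inequality in the mixed-norm setting.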
In a similar manner, we have, for $s \le 2$, $s_1 + s_2 = s$, and $s_1 \ge 1$, \begin{equation} \begin{split} &\int_{(0, t) \times \mathbb{T}^2 \times \mathbb{R}^3} \frac{1}{\e \kappa^{2 - \frac{s}{2} }} {}_{\partial^{s_1}}\Gamma(\partial^{s_2} f_R, f_R) \frac{\partial^s f_R}{\kappa^{1 - \frac{s}{2}}} \mathrm{d} v \mathrm{d} x \mathrm{d} t' \\ &\lesssim \| \partial^{s_1} u^\beta \|_{L^\infty_{t,x} } \sqrt{\int_0 ^t \mathcal{E} } \frac{1}{\kappa^{2-\frac{s}{2} } } \left [ \left ( \| \partial^{s_2} f_R \|_{L^2_{txv} } + \| \nu^{\frac{1}{2} } (\mathbf{I} - \mathbf{P} ) \partial^{s_2} f_R \|_{L^2_{txv} } \right ) \| f_R \|_{L^\infty_{tx} L^2_v} \right. \\ &\left. + \left ( \| f_R \|_{L^2_t L^\infty_x L^2_v } + \| \nu^{\frac{1}{2} } (\mathbf{I} - \mathbf{P} ) f_R \|_{L^2_t L^\infty_x L^2_v } \right ) \| \partial^{s_2} f_R \|_{L^\infty_t L^2_{xv}} \right ] \\ & \lesssim \| \partial^{s_1} u^\beta \|_{L^\infty_{t,x} } \sqrt{\int_0^t \mathcal{E} } \frac{1}{\kappa^{\frac{1}{2} - \frac{s-s_2}{2}} } \left [ \left ( \sqrt{\int_0 ^t \mathcal{E} } + \e \sqrt{\kappa} \sqrt{\mathcal{D}(t) } \right ) \sqrt{\mathcal{E}(t) } \right. \\ &\left. + \left (\sqrt{\int_0 ^t \mathcal{E} } +\e^{\frac{1}{2}}\left (\sqrt{\mathcal{D}(t)} + \sqrt{\int_0 ^t \mathcal{E} } \right ) \right ) \sqrt{\mathcal{E}(t) } \right ] \\ & \lesssim (\| \nabla_x u^\beta \|_{L^\infty_{tx} } + V(\beta) \sqrt{\kappa} ) \sqrt{\mathcal{E}(t) } \left ( \int_0 ^ t \mathcal{E} + \e \mathcal{D}(t) \right ) \lesssim \sqrt{\mathcal{E} (t) } \left ( \| \nabla_x u^\beta \|_{L^\infty_{tx} } \int_0 ^t \mathcal{E} + \mathcal{D}(t) \right ), \end{split}\notag \end{equation} where the first factor $\| \nabla_x u^\beta \|_{L^\infty_{tx} }$ comes from the case $s_1 = 1$ and the second factor $V(\beta) \sqrt{\kappa} $ comes from the case $s_1 = 2$, $s_2 = 0$. We have also used \eqref{ekappabeta} to absorb the contribution of $\|\nabla_x u^\beta \|_{L^\infty_{t,x} }$ into $\mathcal{D}(t) $.
Therefore, we have \begin{equation} \label{nonlinearestimate} \eqref{nonlinearity} \lesssim \left ( (\| \nabla_x u^\beta \|_{L^\infty_{tx} }+1) \int_0 ^t \mathcal{E} + \mathcal{D}(t) \right ) \sqrt{\mathcal{E} (t) }. \end{equation} Summing up \eqref{sourceestimate}, \eqref{linearestimate}, \eqref{Lcommutatorestimate}, \eqref{momestimate}, \eqref{nonlinearestimate}, we have \begin{equation} \label{Energy} \begin{split} \mathcal{E}(t) + \mathcal{D}(t) &\lesssim \mathcal{E}(0)+ (\|\nabla_x u^\beta \|_{L^\infty_{t,x} }^2 + 1 + \sqrt{\mathcal{E}(t) } ) \int_0 ^t \mathcal{E} + \kappa + \sqrt{\mathcal{E}(t) } \mathcal{D}(t) \\ &+ e^{-\frac{1}{(\e \sqrt{\kappa})^{o(1) } } } \sum_{s \le 2} \| w_s \partial^s f \|_{L^\infty ((0, t); L^\infty(\mathbb{T}^2 \times \mathbb{R}^3 ) ) } ^2. \end{split} \end{equation} \subsubsection{$L^\infty$ control} From Proposition \ref{prop_infty} and \eqref{Remainder} we obtain the following: \begin{equation} \label{LinfinityRem} \begin{split} &\| w_0 f_R \|_{L^\infty ((0, t) ; L^\infty(\mathbb{T}^2 \times \mathbb{R}^3 ) )} \\ \lesssim& \ \| w_0 f_{R0} \|_{L^\infty(\mathbb{T}^2 \times \mathbb{R}^3 )} + \frac{1}{\e} \sqrt{\mathcal{E}(t)} + \e \| w_0 f_R \|_{L^\infty ((0, t) ; L^\infty(\mathbb{T}^2 \times \mathbb{R}^3 ) )} ^2 \\ &+ \e \kappa V(\beta) \| w_0 f_R \|_{L^\infty ((0, t) ; L^\infty(\mathbb{T}^2 \times \mathbb{R}^3 ) )} + \e^3 \kappa V(\beta) + \e \kappa^2 V(\beta). \end{split} \end{equation} Here we have used Lemma \ref{Gammabound} to bound the right-hand side of \eqref{Remainder}.
Proceeding by an argument similar to \eqref{pRemainder}, for $1 \le s \le 2$, we obtain \begin{equation} \notag \begin{split} \| w_s \partial^s f_R \|&_{L^\infty ((0, t) ; L^\infty(\mathbb{T}^2 \times \mathbb{R}^3 ) )} \lesssim \| w_s \partial^s f_{R0} \|_{L^\infty(\mathbb{T}^2 \times \mathbb{R}^3 )} + \frac{1}{\e\kappa^{\frac{s}{2}}} \sqrt{\mathcal{E}(t)}\\ & + \e \| w_s \partial^s f_R \|_{L^\infty ((0, t) ; L^\infty(\mathbb{T}^2 \times \mathbb{R}^3 ) )} \| w_0 f_R \| _{L^\infty ((0, t) ; L^\infty(\mathbb{T}^2 \times \mathbb{R}^3 ) )} \\ & + \e \kappa V(\beta) \| w_s \partial^s f_R \|_{L^\infty ((0, t) ; L^\infty(\mathbb{T}^2 \times \mathbb{R}^3 ) )} \\ & + \e V(\beta) \sum_{s' < s} \| w_{s'} \partial^{s'} f_R \|_{L^\infty ((0, t) ; L^\infty(\mathbb{T}^2 \times \mathbb{R}^3 ) )} \\ & + \e V(\beta) \sum_{s_1+s_2 \le s, s_1, s_2 < s} \| w_{s_1} \partial^{s_1} f_R \|_{L^\infty ((0, t) ; L^\infty(\mathbb{T}^2 \times \mathbb{R}^3 ) )}\| w_{s_2} \partial^{s_2} f_R \|_{L^\infty ((0, t) ; L^\infty(\mathbb{T}^2 \times \mathbb{R}^3 ) )} \\ &+ \e^3 \kappa V(\beta) + \e \kappa^2 V(\beta). \end{split} \end{equation} Here we have used the pointwise bound $w_0 > \nu^2 w_1 > \nu^4 w_2$ for the third line.
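The weight comparison just used follows from the strict ordering $\vartheta_2 < \vartheta_1 < \vartheta_0$ in \eqref{weightpfr}: the $u^\beta$-dependent factors cancel in the ratios, and (assuming, as is standard for hard potentials, that $\nu(v) \sim 1 + |v|$) an exponential in $|v|^2$ dominates any fixed power of $\nu$, so that, up to constants depending only on the $\vartheta_j$,
\begin{equation} \notag
\frac{w_0}{w_1} = e^{(\vartheta_0 - \vartheta_1)|v|^2} \gtrsim \nu^2, \qquad \frac{w_1}{w_2} = e^{(\vartheta_1 - \vartheta_2)|v|^2} \gtrsim \nu^2.
\end{equation}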
Therefore, we have \begin{equation} \label{Linfestimate} \begin{split} \mathcal{F}(t) &\lesssim \mathcal{F}(0) + \e^2 + \sqrt{\mathcal{E}(t)} + \mathcal{F}(t)^2, \\ \| w_1 \partial^1 f_R \|&_{L^\infty ((0, t) ; L^\infty(\mathbb{T}^2 \times \mathbb{R}^3 ) )} \lesssim \| w_1 \partial^1 f_{R0} \|_{L^\infty(\mathbb{T}^2 \times \mathbb{R}^3 )} + \mathcal{F}(t) \| w_1 \partial^1 f_R \|_{L^\infty ((0, t) ; L^\infty(\mathbb{T}^2 \times \mathbb{R}^3 ) )} \\ &+ \frac{1}{\e \sqrt{\kappa} } \sqrt{\mathcal{E}(t)} + \e V(\beta) \left ( 1 + \frac{\mathcal{F}(t)}{\e} \right )^2, \\ \| w_2 \partial^2 f_R \|&_{L^\infty ((0, t) ; L^\infty(\mathbb{T}^2 \times \mathbb{R}^3 ) )} \lesssim \| w_2 \partial^2 f_{R0} \|_{L^\infty(\mathbb{T}^2 \times \mathbb{R}^3 )} + \mathcal{F}(t) \| w_2 \partial^2 f_R \|_{L^\infty ((0, t) ; L^\infty(\mathbb{T}^2 \times \mathbb{R}^3 ) )} \\ &+ \frac{1}{\e \kappa} \sqrt{\mathcal{E}(t)} + \e V(\beta) \left ( 1 + \frac{\mathcal{F}(t)}{\e} +\| w_1 \partial^1 f_R \|_{L^\infty ((0, t) ; L^\infty(\mathbb{T}^2 \times \mathbb{R}^3 ) )} \right )^2. \end{split} \end{equation} In particular, making the constants in \eqref{Energy} and \eqref{Linfestimate} explicit, we obtain \begin{equation} \notag \begin{split} \mathcal{E}(t) + \mathcal{D}(t) &\le \mathbf{C}_0 \left (\mathcal{E}(0) + \left (\|\nabla_x u^\beta\|_{L^\infty((0, T) \times \mathbb{T}^2 ) } ^2 + 1 + \sqrt{\mathcal{E}(t) } \right ) \int_0 ^t \mathcal{E} + \kappa + \sqrt{\mathcal{E}(t)} \mathcal{D}(t) \right. \\ &\left. + e^{-\frac{1}{(\e \sqrt{\kappa})^{o(1) } } } \sum_{s \le 2} \| w_s \partial^s f \|_{L^\infty ((0, t); L^\infty(\mathbb{T}^2 \times \mathbb{R}^3 ) ) } ^2.
\right ), \\ \mathcal{F}(t) &\le \mathbf{C}_0 \left ( \mathcal{F}(0) + \e^2 + \sqrt{\mathcal{E}(t)} + \mathcal{F}(t)^2 \right ), \\ \| w_s \partial^s f_R \|&_{L^\infty ((0, t) ; L^\infty(\mathbb{T}^2 \times \mathbb{R}^3 ) )} \le \mathbf{C}_0 \big (\| w_s \partial^s f_{R0} \|_{L^\infty(\mathbb{T}^2 \times \mathbb{R}^3 )} + \mathcal{F}(t) \| w_s \partial^s f_R \|_{L^\infty ((0, t) ; L^\infty(\mathbb{T}^2 \times \mathbb{R}^3 ) )} \\ & + \e^{-1} \kappa^{-\frac{s}{2} } \sqrt{\mathcal{E}(t)} + \e V(\beta) \big ( 1 + \e^{-1} \mathcal{F}(t) + \sum_{1 \le s' < s } \| w_{s'} \partial^{s'} f_R \|_{L^\infty ((0, t) ; L^\infty(\mathbb{T}^2 \times \mathbb{R}^3 ) )} \big)^2 \big ) \end{split} \end{equation} for some constant $\mathbf{C}_0 > 1$. \subsection{Proof of Theorem \ref{theo:remainder}} For any given positive time $T>0$, choose $T_* \in [0,T]$ such that \begin{equation} \begin{split}\label{assumption<T} T_* = \sup \Big\{t>0: \sqrt{\mathcal{E}(t) } < \frac{1}{10 \mathbf{C}_0}, \mathcal{F}(t) < \frac{1}{10 \mathbf{C}_0} \Big\}.
\end{split} \end{equation} Then for $t \in [0, T_*]$, \begin{equation} \notag \begin{split} \mathcal{E}(t) + \mathcal{D}(t) &\le 2 \mathbf{C}_0 \mathcal{E}(0) + 2 \mathbf{C}_0 \left (\|\nabla_x u^\beta\|_{L^\infty((0, T) \times \mathbb{T}^2 ) } ^2 + 2 \right ) \int_0 ^ t \mathcal{E} + 2 \mathbf{C}_0 \kappa \\ & + 2 \mathbf{C}_0 e^{-\frac{1}{(\e \sqrt{\kappa})^{o(1) } } } \sum_{s \le 2} \| w_s \partial^s f \|_{L^\infty ((0, t); L^\infty(\mathbb{T}^2 \times \mathbb{R}^3 ) ) } ^2, \\ \mathcal{F}(t) &\le 2 \mathbf{C}_0 \mathcal{F}(0) + 2 \mathbf{C}_0 \e^2 + 2 \mathbf{C}_0 \sqrt{\mathcal{E}(t)}, \end{split} \end{equation} and for $1 \le s \le 2$, \begin{equation} \begin{split}\notag \| w_s \partial^s f_R \|&_{L^\infty ((0, t) ; L^\infty(\mathbb{T}^2 \times \mathbb{R}^3 ) )} \le 2 \mathbf{C}_0 \big (\| w_s \partial^s f_{R0} \|_{L^\infty(\mathbb{T}^2 \times \mathbb{R}^3 )} + \e^{-1} \kappa^{-\frac{s}{2} } \\ & + \e V(\beta) \big ( 1 + \e^{-1}/2 + \sum_{1 \le s' < s } \| w_{s'} \partial^{s'} f_R \|_{L^\infty ((0, t) ; L^\infty(\mathbb{T}^2 \times \mathbb{R}^3 ) )} \big)^2 \big ), \\ \| w_1 \partial^1 f_R \|&_{L^\infty ((0, t) ; L^\infty(\mathbb{T}^2 \times \mathbb{R}^3 ) )} \le 2 \mathbf{C}_0 \| w_1 \partial^1 f_{R0} \|_{L^\infty(\mathbb{T}^2 \times \mathbb{R}^3 )} + 4 \mathbf{C}_0 \e^{-1} \kappa^{-\frac{1}{2} }, \\ \| w_2 \partial^2 f_R \|&_{L^\infty ((0, t) ; L^\infty(\mathbb{T}^2 \times \mathbb{R}^3 ) )} \le 2\mathbf{C}_0 \| w_2 \partial^2 f_{R0} \|_{L^\infty(\mathbb{T}^2 \times \mathbb{R}^3 )} \\ &+ C(\mathbf{C}_0) \left ( \| w_1 \partial^1 f_{R0} \|_{L^\infty(\mathbb{T}^2 \times \mathbb{R}^3 )} ^2 + \e^{-1} \kappa^{-1} V(\beta) \right ).
\end{split} \end{equation} Since $\sqrt{\mathcal{E}(0)}, \mathcal{F}(0) < \delta_0 = \delta_0 (\e)$, and $\| w_s \partial^s f_{R0} \|_{L^\infty(\mathbb{T}^2 \times \mathbb{R}^3 )} < \delta_s = \delta_s (\e)$ with $\delta_s$ satisfying \eqref{deltas} for $s = 1,2$, we have \begin{equation} \begin{split}\notag \| w_s \partial^s f_R \|&_{L^\infty ((0, t) ; L^\infty(\mathbb{T}^2 \times \mathbb{R}^3 ) )} \le C(\mathbf{C}_0) (\e^{-1} \kappa^{-1/2} )^s, \\ &\mathcal{E}(t) + \mathcal{D}(t) \le C( \mathbf{C}_0) (\delta_0^2+ \kappa) + 2 \mathbf{C}_0 \left (\|\nabla_x u^\beta\|_{L^\infty((0, T) \times \mathbb{T}^2 ) } ^2 + 2 \right ) \int_0 ^ t \mathcal{E} \end{split} \end{equation} since the $e^{-\frac{1}{(\e \sqrt{\kappa})^{o(1)}} }$ factor decays faster than any algebraic blowup. By Gronwall's lemma, we have \begin{equation} \begin{split}\notag &\mathcal{E}(t) + \mathcal{D}(t) \le C(\mathbf{C}_0)(\delta_0^2 + \kappa) (1+T) \\ & \times \left ( 2 \mathbf{C}_0 \left (\|\nabla_x u^\beta\|_{L^\infty((0, T) \times \mathbb{T}^2 ) } ^2 + 2 \right ) \exp \left ( 2 \mathbf{C}_0 \left (\|\nabla_x u^\beta\|_{L^\infty((0, T) \times \mathbb{T}^2 ) } ^2 + 2 \right ) T \right ) \right ) . \end{split} \end{equation} Since $\delta_0$ satisfies \eqref{delta0}, we see that for sufficiently small $\e$, $\sqrt{\mathcal{E}(T_*)}$ and $\mathcal{F}(T_*)$ satisfy the strict inequalities defining \eqref{assumption<T}. Therefore $T_* = T$, which proves the claim.
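We close this subsection by recording, for completeness, the elementary form of Gronwall's lemma invoked above: if a nonnegative function $\phi$ satisfies $\phi(t) \le A + B \int_0^t \phi(t') \, \mathrm{d} t'$ on $[0,T]$ with constants $A, B \ge 0$, then
\begin{equation} \notag
\phi(t) \le A e^{Bt} \le A e^{BT}, \qquad t \in [0,T].
\end{equation}
This is applied with $\phi = \mathcal{E}$; the bound on $\mathcal{D}(t)$ then follows by substituting the resulting estimate for $\int_0^t \mathcal{E}$ back into the energy inequality.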
\section{Vorticity Convergence of the approximate solutions of Euler} \subsection{Stability of regular Lagrangian flow when the vorticity is unbounded} To study the stability of the regular Lagrangian flow when the vorticities do not belong to $L^\infty$, we adopt the functional used in \cite{ALM,CDe,BC2013}: for $(u^{\beta_i},X^{\beta_i})$ solving \eqref{ODE:X_beta}, \begin{equation} \label{Lambda} \Lambda (s;t) =\Lambda^{\beta_1, \beta_2} (s;t): = \int_{\mathbb{T}^2} \log \Big(1 + \frac{|X^{\beta_1}(s;t,x) - X^{\beta_2} (s;t,x)|}{\lambda} \Big) \mathrm{d} x, \end{equation} where we again abuse the notation \begin{equation} \label{geodistance} |X^{\beta_1}(s;t,x) - X^{\beta_2} (s;t,x)| = \mathrm{dist}_{\mathbb{T}^2} (X^{\beta_1}(s;t,x),X^{\beta_2} (s;t,x)), \end{equation} that is, the geodesic distance between $X^{\beta_1}(s;t,x)$ and $X^{\beta_2} (s;t,x)$. We note that \begin{equation} \label{Lambda|t} \Lambda (t;t)=0 \end{equation} due to the last condition in both \eqref{dX} and \eqref{ODE:X_beta}.
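Since $r \mapsto \log(1 + r/\lambda)$ is nonnegative and nondecreasing, the functional \eqref{Lambda} controls the measure of the set on which the two flows separate; this Chebyshev-type consequence is how $\Lambda$ will be used below: for any $\gamma > 0$,
\begin{equation} \notag
\mathscr{L}^2 \big ( \{ x \in \mathbb{T}^2 : |X^{\beta_1}(s;t,x) - X^{\beta_2}(s;t,x)| \ge \gamma \} \big ) \le \frac{\Lambda(s;t)}{\log \big (1 + \frac{\gamma}{\lambda} \big )}.
\end{equation}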
From \eqref{dX} and \eqref{ODE:X_beta}, a direct computation (using $\frac{\mathrm{d}}{\mathrm{d}s} \log (1 + \frac{r}{\lambda}) = \frac{\dot{r}}{\lambda + r}$ and $|\frac{\mathrm{d}}{\mathrm{d}s} |X^{\beta_1} - X^{\beta_2}| | \le |\dot{X}^{\beta_1} - \dot{X}^{\beta_2}|$) yields that \begin{align} | \dot{\Lambda} (s;t) | &\leq \int_{\mathbb{T}^2} \frac{|\dot{X}^{\beta_1} (s)- \dot{X}^{\beta_2}(s)| }{ \lambda + |X^{\beta_1}(s) - X^{\beta_2}(s)|} \mathrm{d} x \notag\\ & \leq \int_{\mathbb{T}^2} \frac{| u^{\beta_1} (s, X^{\beta_1}(s))- u^{\beta_2}(s, X^{\beta_2} (s))| }{ \lambda + |X^{\beta_1}(s) - X^{\beta_2}(s)|} \mathrm{d} x\notag\\ & \leq \int_{\mathbb{T}^2} \frac{| u^{\beta_1} (s, X^{\beta_1}(s))- u^{\beta_1} (s, X^{\beta_2} (s))| }{ \lambda + |X^{\beta_1}(s) - X^{\beta_2}(s)|} \mathrm{d} x \label{dL_1} \\ &+ \int_{\mathbb{T}^2} \frac{| u^{\beta_1} (s, X^{\beta_2}(s))- u^{\beta_2}(s, X^{\beta_2} (s))| }{ \lambda + |X^{\beta_1}(s) - X^{\beta_2}(s)|} \mathrm{d} x .\label{dL_2} \end{align} \begin{proposition}[\cite{CDe,BC2013}]\label{prop_stab} Let $(u^{\beta_i}, \o^{\beta_i})$ satisfy \eqref{vorticity_beta}, \eqref{BS_beta}, \eqref{Lag_beta}, and let $X^{\beta_i}$ be the regular Lagrangian flow of \eqref{ODE:X_beta} for $i=1,2$. Suppose $ \| u^{\beta_1} - u^{\beta_2} \|_{L^1((0,T) ; L^1(\mathbb{T}^2))} \ll 1$. Then \begin{equation} \label{stab_rLf} \begin{split} & \| X^{\beta_1}(s;t,\cdot) - X^{\beta_2} (s;t,\cdot) \|_{L^1(\mathbb{T}^2)} \\ &\lesssim \frac{1+ \| \nabla u^{\beta_1}\|_{L^1((0,T) ; L^p(\mathbb{T}^2))} }{|\log \| u ^{\beta_1}- u^{\beta_2} \|_{L^1((0,T) ; L^1(\mathbb{T}^2))} |} \ \ \ \text{for} \ \ p >1. \end{split} \end{equation} For $p=1$, for every $\delta>0$ there exists $C_\delta>0$ such that for every $\gamma>0$ \begin{equation} \begin{split}\label{stab_1} & \mathscr{L}^2 (\{x \in \mathbb{T}^2: |X^{\beta_1}(s;t,x)- X^{\beta_2}(s;t,x)|>\gamma\}) \\ &\leq \frac{e^{\frac{4C_\delta}{\delta}}}{\frac{4C_\delta}{\delta}} \frac{ \| u^{\beta_1} - u^{\beta_2}\|_{L^1 ((0,T); L^1(\mathbb{T}^2))}}{\gamma} + \e \end{split} \end{equation} holds. \end{proposition} For the convenience of the reader we provide a sketch of the argument.
The argument follows the line of \cite{CDe} for $p>1$, and that of \cite{BC2013} for $p=1$. \begin{proof} For \eqref{dL_2}, using \eqref{compression}, we have \begin{equation} \label{est:dL_2} \begin{split} \eqref{dL_2} &\leq \frac{1}{\lambda} \int_{\mathbb{T}^2} | u^{\beta_1} (s, X^{\beta_2}(s;t,x))- u^{\beta_2}(s, X^{\beta_2} (s;t,x))| \mathrm{d} x \\ & \leq \frac{\mathfrak{C}}{\lambda} \| u^{\beta_1}(s, \cdot) - u^{\beta_2}(s, \cdot) \|_{L^1(\mathbb{T}^2)} \end{split} \end{equation} with the common compressibility bound $\mathfrak{C} = 1$. In the rest of the proof, we estimate \eqref{dL_1}. \textit{Step 1. The case of $p>1$. } Recall that the maximal function of $u$ is given by \begin{equation} \label{Max_f} M u(x) = \sup_{\e>0} \fint_{B_\e (x)} |u(y) | \mathrm{d} y= \sup_{\e>0} \frac{1}{ \mathscr{L}^2 (B_\e (x)) } \int_{B_\e (x)} |u(y) | \mathrm{d} y. \end{equation} We have the following (e.g. \cite{H1996}, Section 2): \begin{align} |u(x) - u(y)| &\lesssim |x-y| \{(M\nabla u) (x) + (M \nabla u) (y)\} \ \ \ \text{a.e. } x,y \in \mathbb{T}^2,\label{DQ}\\ \| Mw \|_{L^p(\mathbb{T}^2)} &\lesssim \| w\|_{L^p (\mathbb{T}^2)} \ \ \ \text{for} \ \ p \in (1,\infty]. \label{Max_ineq} \end{align} Now we bound \eqref{dL_1} for $p>1$, using \eqref{DQ} and \eqref{Max_ineq}, as \begin{equation} \label{est:dL_1_p} \begin{split} \eqref{dL_1} &\leq \int_{\mathbb{T}^2 }\{ {M} \nabla u^{\beta_1} (s, X^{\beta_1} (s;t,x)) + {M} \nabla u^{\beta_1} (s, X^{\beta_2} (s;t,x))\} \mathrm{d} x \\ & \lesssim \| \nabla u^{\beta_1} \|_{L^p(\mathbb{T}^2)} \ \ \text{for} \ \ p \in (1, \infty]. \end{split}\end{equation} Using the above \eqref{est:dL_1_p}, \eqref{est:dL_2}, together with \eqref{Lambda|t}, we derive that \begin{equation} \begin{split} \label{est:Lambda} \Lambda(s;t ) & \lesssim \| \nabla u^{\beta_1} \|_{L^1 ((0,T); L^p(\mathbb{T}^2))}\\ & + \frac{1}{\lambda} \| u^{\beta_1} - u^{\beta_2} \|_{L^1((0,T) ; L^1(\mathbb{T}^2))} \ \ \text{for all } \ 0 \le s \le t \le T.
\end{split} \end{equation} On the other hand, for any $0 \le s \le t \le T$, \begin{equation} \label{lower_Lambda} \begin{split} \mathbf{1}_{|X^{\beta_1}(s;t,x) - X^{\beta_2} (s;t,x)| \geq \gamma} \log \Big(1 + \frac{|X^{\beta_1}(s;t,x) - X^{\beta_2} (s;t,x)|}{\lambda} \Big) \geq \mathbf{1}_{|X^{\beta_1}(s;t,x) - X^{\beta_2} (s;t,x)| \geq \gamma} \log \Big(1+\frac{ \gamma}{ {\lambda}}\Big ) . \end{split} \end{equation} Then \eqref{lower_Lambda} with $\gamma = \sqrt{\lambda}$, together with the definition \eqref{Lambda}, implies that \begin{equation} \begin{split}\label{L2_control} \mathscr{L}^2 (\{ x \in \mathbb{T}^2: |X^{\beta_1}(s;t,x) - X^{\beta_2} (s;t,x)| \geq \sqrt{\lambda} \}) \leq \frac{1}{|\log \sqrt{\lambda}|} \Lambda (s;t). \end{split} \end{equation} Therefore, by applying \eqref{est:Lambda} to \eqref{L2_control}, together with $\mathscr{L}^2(\mathbb{T}^2) = 1$ and $|x-y| \leq \sqrt{2}$ for $x,y \in \mathbb{T}^2$, we establish the stability: \begin{equation} \notag \begin{split} & \| X^{\beta_1}(s;t,\cdot) - X^{\beta_2}(s;t,\cdot) \|_{L^1(\mathbb{T}^2)} = \int_{\mathbb{T}^2} |X^{\beta_1}(s;t,x) - X^{\beta_2} (s;t,x) |\mathrm{d} x\\ & = \int_{|X^{\beta_1}(s;t,\cdot) - X^{\beta_2}(s;t,\cdot)| \leq \sqrt{\lambda} } + \int_{|X^{\beta_1}(s;t,\cdot) - X^{\beta_2} (s;t,\cdot)| \geq \sqrt{\lambda} } \\ &\leq \sqrt{\lambda} + \frac{\sqrt{2}}{|\log \sqrt{\lambda}|} \Lambda(s;t) \\ & \lesssim \sqrt{\lambda} + \frac{1}{|\log \sqrt{\lambda}|} \Big\{ \| \nabla u^{\beta_1} \|_{L^1((0,T) ; L^p(\mathbb{T}^2))} + \frac{1}{\lambda} \| u ^{\beta_1}- u^{\beta_2} \|_{L^1((0,T) ; L^1(\mathbb{T}^2))} \Big\}.
\end{split} \end{equation} Choosing \begin{equation} \label{choice_lambda} \lambda = \| u^{\beta_1} - u^{\beta_2} \|_{L^1((0,T) ; L^1(\mathbb{T}^2))}, \end{equation} we have that \begin{equation} \begin{split}\label{stability_p} &\| X^{\beta_1}(s;t,\cdot) - X^{\beta_2}(s;t,\cdot) \|_{L^1(\mathbb{T}^2)} \\ &\lesssim \| u^{\beta_1} - u^{\beta_2} \|_{L^1((0,T) ; L^1(\mathbb{T}^2))}^{1/2}+ \frac{ \| \nabla u^{\beta_1} \|_{L^1((0,T) ; L^p(\mathbb{T}^2))}}{|\log \| u^{\beta_1} - u^{\beta_2} \|_{L^1((0,T) ; L^1(\mathbb{T}^2))} |}. \end{split} \end{equation} For $\| u^{\beta_1} - u^{\beta_2} \|_{L^1((0,T) ; L^1(\mathbb{T}^2))} \ll 1$, we prove \eqref{stab_rLf}. \medskip \textit{Step 2. The case of $p=1$. } Note that \eqref{Max_ineq} fails for $p=1$; instead, only the weak-type bound $\| M u \|_{L^{1,\infty} (\mathbb{T}^2)} \lesssim \| u \|_{L^1 (\mathbb{T}^2)}$ holds. Here, we recall the quasi-norm of the Lorentz space $L^{p, q}$ \begin{equation} \label{Lorentz} \begin{split} \| u \|_{L^{p,q}(\mathbb{T}^2, \mathscr{L}^2)} &: = p^{1/q} \| \lambda \mathscr{L}^2(\{x \in \mathbb{T}^2: |u(x)|>\lambda\})^{1/p} \|_{L^q (\mathbb{R}_+, \frac{\mathrm{d} \lambda}{\lambda})},\\ \| u \|_{L^{p, \infty}(\mathbb{T}^2)}^p&= \| u \|_{L^{p, \infty}(\mathbb{T}^2, \mathscr{L}^2)}^p = \sup_{\lambda>0} \{\lambda^p \mathscr{L}^2 (\{ x \in \mathbb{T}^2: |u(x)|> \lambda\})\} . \end{split}\end{equation} For $p=1$, there exists a map $\tilde{M}$, defined as in Definition 3.1 of \cite{BC2013} with the choice of functions in Proposition 4.2 of \cite{BC2013}, such that (Theorem 3.3 of \cite{BC2013}) \begin{equation} \label{o_to_M} \tilde{M}: \o \mapsto \tilde M \nabla (\nabla^\perp (\Delta)^{-1} \o) \geq 0 \ \ \text{is bounded in $L^2 (\mathbb{T}^2)\rightarrow L^2(\mathbb{T}^2)$ and $L^1(\mathbb{T}^2)\rightarrow L^{1, \infty}(\mathbb{T}^2)$}. \end{equation} Note that if $(u, \o)$ satisfies \eqref{BS} in the sense of distributions, then $\tilde M \nabla (B*\o)= \tilde M \nabla u$.
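By the definition of the quasi-norm \eqref{Lorentz}, the weak-type bound in \eqref{o_to_M} enters through the following Chebyshev-type inequality: for every $\lambda > 0$,
\begin{equation} \notag
\mathscr{L}^2 \big ( \{ x \in \mathbb{T}^2 : \tilde{M} \nabla u (x) > \lambda \} \big ) \le \frac{ \| \tilde{M} \nabla u \|_{L^{1, \infty}(\mathbb{T}^2)} }{\lambda} \lesssim \frac{ \| \o \|_{L^1(\mathbb{T}^2)} }{\lambda}.
\end{equation}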
The argument follows the line of \cite{BC2013}, with translation to the periodic domain by Proposition \ref{PeriodicKernel}. \begin{proposition} There exists an operator $ \o \mapsto U (\o)$, which will be denoted by $\tilde{M} \nabla u$, defined either on $L^1 (\mathbb{T}^2)$ or $L^2 (\mathbb{T}^2 )$, satisfying \begin{equation} \notag \begin{split} & U(\o) (x) \ge 0, \\ &\| U(\o) \|_{L^{1, \infty} (\mathbb{T}^2 )} \lesssim \| \o \|_{L^1 (\mathbb{T}^2 ) }, \\ & \| U(\o )\|_{L^2 (\mathbb{T}^2 ) } \lesssim \| \o \|_{L^2 (\mathbb{T}^2 ) }. \end{split} \end{equation} Also, if $\o \in L^1 (\mathbb{T}^2)$ and $u = B*\o$, then, writing $U = U(\o)$, there is a Lebesgue measure zero set $\mathcal{N}$ such that \begin{equation} \notag |u(x) - u(y) | \le |x-y| (U(x) + U(y) ), \quad x, y \in \mathbb{T}^2 \setminus \mathcal{N}. \end{equation} \end{proposition} \begin{proof} We first identify $x \in \mathbb{T}^2$ with $x \in \left [ -\frac{1}{2}, \frac{1}{2} \right ]^2 \subset \mathbb{R}^2$, denote $K(y) := \nabla_y ^2 G(y) \chi_{\left [ -\frac{1}{2}, \frac{1}{2} \right ]^2} (y), y \in \mathbb{R}^2$, and define \begin{equation} \notag K_0 (y) = \frac{1}{4\pi} \frac{ y \otimes y - \frac{1}{2} |y|^2 \mathbb{I}_2 }{|y|^4}, y \in \mathbb{R}^2. \end{equation} Also, we regard $\o$ and $u$ as $\mathbb{Z}^2$-periodic functions on $\mathbb{R}^2$: $\o(x+m) = \o(x)$, $u(x+m) = u(x)$ for $m \in \mathbb{Z}^2$. Now, for $x \in \left [ -\frac{5}{2}, \frac{5}{2} \right ]^2\subset \mathbb{R}^2$, $\int_{\mathbb{R}^2} K(y) \o (x-y) dy$ is well-defined, as it is exactly $(\nabla^2 G *_{\mathbb{T}^2} \o) (x - m)$ for the $m\in \mathbb{Z}^2$ such that $x -m \in \left [ -\frac{1}{2}, \frac{1}{2} \right ]^2$.
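The identification used here is elementary: since $\o$ is $\mathbb{Z}^2$-periodic, for $x \in \left [ -\frac{5}{2}, \frac{5}{2} \right ]^2$ and the $m \in \mathbb{Z}^2$ with $x - m \in \left [ -\frac{1}{2}, \frac{1}{2} \right ]^2$,
\begin{equation} \notag
\int_{\mathbb{R}^2} K(y) \o(x - y) \, \mathrm{d} y = \int_{\left [ -\frac{1}{2}, \frac{1}{2} \right ]^2} \nabla^2_y G(y) \, \o \big ( (x - m) - y \big ) \, \mathrm{d} y = (\nabla^2 G *_{\mathbb{T}^2} \o)(x - m).
\end{equation}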
Then we see that $D(x)$ defined by \begin{equation} \begin{split}\notag D(x) &:= \int_{\mathbb{R}^2} K(y) \o (x-y) - K_0 (y) \o(x-y) \chi_{B_{100} (0) } (x-y) dy \\ &= \int_{\mathbb{R}^2} (K(y) \chi_{\left [ -\frac{1}{2}, \frac{1}{2} \right ]^2} (y) - K_0 (y) \chi_{B_{100} (x) } (y) ) \o (x-y) dy \end{split} \end{equation} for $x \in \left [ -\frac{5}{2}, \frac{5}{2} \right ]^2$, and $D(x) := 0$ for $x \notin \left [ -\frac{5}{2}, \frac{5}{2} \right ]^2$, is bounded. First, since $ B_{\mathfrak{r} } (0) \subset B_{100} (x) \cap \left [ -\frac{1}{2}, \frac{1}{2} \right ]^2$, the difference $(K(y) \chi_{\left [ -\frac{1}{2}, \frac{1}{2} \right ]^2} (y) - K_0 (y) \chi_{B_{100} (x) } (y) )$ is bounded for $y \in B_{\mathfrak{r} } (0)$. For $y \notin B_{\mathfrak{r} }(0)$, $(K(y) \chi_{\left [ -\frac{1}{2}, \frac{1}{2} \right ]^2} (y) - K_0 (y) \chi_{B_{100} (x) } (y) )$ is bounded as well. Finally, $(K(y) \chi_{\left [ -\frac{1}{2}, \frac{1}{2} \right ]^2} (y) - K_0 (y) \chi_{B_{100} (x) } (y) )$ is supported on $B_{100} (x ) $, thus we have \begin{equation} \notag |D(x) | \lesssim \int_{\mathbb{R}^2} \chi_{B_{100} (x) } (y) |\o (x-y) | dy \lesssim \| \o \|_{L^1 (\mathbb{T}^2)}. \end{equation} Furthermore, this implies $|D | \le C\| \o \|_{L^1 (\mathbb{T}^2)} \chi_{\left [ -\frac{5}{2}, \frac{5}{2} \right ]^2}$, so in fact $D \in L^1(\mathbb{R}^2)$ as well. Therefore, we have \begin{equation} \label{nabxbound} \nabla_x u (x) = D(x) + K_0 \star_{\mathbb{R}^2} (\o \chi_{B_{100} (0) } ) (x), \quad x \in \left [ -\frac{5}{2}, \frac{5}{2} \right ]^2. \end{equation} Next, we closely follow the argument of Proposition 4.2 of \cite{BC2013}. Let $\bar{h}$ be a smooth, nonnegative function, supported on $B_{\frac{1}{100} } (0)$ with $\int_{\mathbb{R}^2} \bar{h}(y) dy = 1$. Also, we denote $\bar{h}_r (x) = \frac{1}{r^2} \bar{h} \left (\frac{x}{r} \right )$ for $x \in \mathbb{R}^2$ and $r > 0$.
Finally, for $\xi \in \mathbb{S}^1$ and $j=1,2$ we define \begin{equation} \notag \mathfrak{T}^{\xi, j} (w) := \bar{h} \left (\frac{\xi}{2} - w \right ) w_j, \end{equation} and $\mathfrak{T}^{\xi, j}_r$ is defined from $\mathfrak{T}^{\xi, j}$ for $r>0$ in the same way as $\bar{h}_r$ from $\bar{h}$. Now let $x, y \in \mathbb{T}^2 = \left [ -\frac{1}{2}, \frac{1}{2} \right ]^2$. Then there exists $\tilde{y} \in \left [ -\frac{3}{2}, \frac{3}{2} \right ]^2$, $\tilde{y} - y \in \mathbb{Z}^2$, such that the projection of the line segment between $x$ and $\tilde{y}$ in $\mathbb{R}^2$ is the geodesic connecting $x$ and $y$ in $\mathbb{T}^2$. Then we have \begin{equation} \notag \begin{split} &u(x) - u(y) = u(x) - u (\tilde{y} ) \\ &= \int_{\mathbb{R}^2} \bar{h}_{|x-\tilde{y}|} \left ( z - \frac{x+\tilde{y} }{2} \right ) (u(x) - u(z) ) dz + \int_{\mathbb{R}^2} \bar{h}_{|x-\tilde{y} |} \left ( z - \frac{x+\tilde{y}}{2} \right ) (u(z) - u(y) )dz. \end{split} \end{equation} We focus on the first term: the other gives a similar contribution. Following the argument of Proposition 4.2 of \cite{BC2013}, we have \begin{equation} \notag \int_{\mathbb{R}^2} \bar{h}_{|x-\tilde{y}|} \left ( z - \frac{x+\tilde{y} }{2} \right ) (u(x) - u(z) ) dz = |x-\tilde{y}| \sum_{j=1} ^2 \int_0 ^1 \int_{\mathbb{R}^2} \mathfrak{T}_{s|x-\tilde{y}|} ^{\frac{x-\tilde{y}}{|x-\tilde{y}|}, j } (w) (\partial_j u) (x-w) dw ds. \end{equation} Note that $\mathfrak{T}_{s|x-\tilde{y}|} ^{\frac{x-\tilde{y}}{|x-\tilde{y}|}, j }$ is supported on $B_{\frac{1}{100} s|x-\tilde{y} | } \left (\frac{s(x-\tilde{y})}{2} \right ),$ and $|x - \tilde{y} | \le \frac{\sqrt{2}}{2}$, so if $w \in B_{\frac{1}{100} s|x-\tilde{y} | } \left (\frac{s(x-\tilde{y})}{2} \right )$, then $|w| \le \frac{2}{3}$ and thus $x - w \in \left [ -\frac{5}{2}, \frac{5}{2} \right ]^2$, which implies that \eqref{nabxbound} is satisfied at $x-w$. (A similar consideration shows that \eqref{nabxbound} is satisfied at $\tilde{y} - w$.)
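We also note that the averaging step above uses only that $\bar{h}_r$ remains a probability density under scaling:
\begin{equation} \notag
\int_{\mathbb{R}^2} \bar{h}_r (z) \, \mathrm{d} z = \int_{\mathbb{R}^2} \frac{1}{r^2} \bar{h} \Big ( \frac{z}{r} \Big ) \mathrm{d} z = \int_{\mathbb{R}^2} \bar{h}(w) \, \mathrm{d} w = 1, \qquad r > 0.
\end{equation}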
Therefore, \begin{equation} \notag \begin{split} & \left | \int_{\mathbb{R}^2} \bar{h}_{|x-\tilde{y}|} \left ( z - \frac{x+\tilde{y} }{2} \right ) (u(x) - u(z) ) dz \right | \le |x - \tilde{y} |\sum_{j=1} ^2 \int_0 ^1 \left | \left ( \mathfrak{T}_{s|x-\tilde{y} | }^{\frac{x-\tilde{y}}{|x-\tilde{y}|}, j } \star_{\mathbb{R}^2} D \right ) (x) \right | ds \\ & + |x - \tilde{y} | \sum_{j=1} ^2 \int_0 ^1 \left | \left ( \mathfrak{T}_{s|x-\tilde{y} | }^{\frac{x-\tilde{y}}{|x-\tilde{y}|}, j } \star_{\mathbb{R}^2} \left ( K_0 \star_{\mathbb{R}^2} \o \chi_{B_{100} } \right ) \right ) (x) \right | ds \\ & \le |x - \tilde{y} | \sum_{j=1} ^2 \left ( M_{\left \{ \mathfrak{T}^{\xi, j} | \xi \in \mathbb{S}^1 \right \} } (D) (x) + M_{\left \{ \mathfrak{T}^{\xi, j} | \xi \in \mathbb{S}^1 \right \} } ( K_0 \star_{\mathbb{R}^2} \o \chi_{B_{100} } ) (x) \right ), \end{split} \end{equation} where \begin{equation} \notag M_{\left \{ \mathfrak{T}^{\xi, j} | \xi \in \mathbb{S}^1 \right \} } ( g ) (x) = \sup_{\xi \in \mathbb{S}^1 } \sup_{r>0} \left | \left ( \mathfrak{T}^{\xi, j} _r \star_{\mathbb{R}^2} g \right ) (x) \right |, x \in \mathbb{R}^2. \end{equation} By Theorem 3.3 of \cite{BC2013}, we have \begin{equation} \notag \| M_{\left \{ \mathfrak{T}^{\xi, j} | \xi \in \mathbb{S}^1 \right \} } ( K_0 \star_{\mathbb{R}^2} \o \chi_{B_{100} } ) \|_{L^{1, \infty } (\mathbb{R}^2 ) } \le C \| \o \chi_{B_{100} } \|_{L^1 (\mathbb{R}^2 ) } \le C \| \o \|_{L^1 (\mathbb{T}^2 ) }. \end{equation} Also, by Young's inequality, we have \begin{equation} \notag \| M_{\left \{ \mathfrak{T}^{\xi, j} | \xi \in \mathbb{S}^1 \right \} } (D) \|_{L^{1, \infty} (\mathbb{R}^2 ) } \le\| M_{\left \{ \mathfrak{T}^{\xi, j} | \xi \in \mathbb{S}^1 \right \} } (D) \|_{L^{\infty} (\mathbb{R}^2 ) } \le C \| D \|_{L^\infty (\mathbb{R}^2 ) } \le C \| \o \|_{L^1 (\mathbb{T}^2 )}.
\end{equation} Finally, for $x \in \mathbb{T}^2$ identified with $\left [ \frac{-1}{2}, \frac{1}{2} \right ]^2$, we define \begin{equation} \notag U(x) := \sum_{\tilde{x} \in \left [ -\frac{3}{2}, \frac{3}{2} \right ]^2, x - \tilde{x} \in \mathbb{Z}^2 } \sum_{j=1} ^2 \left ( M_{\left \{ \mathfrak{T}^{\xi, j} | \xi \in \mathbb{S}^1 \right \} } (D) (\tilde{x}) + M_{\left \{ \mathfrak{T}^{\xi, j} | \xi \in \mathbb{S}^1 \right \} } ( K_0 \star_{\mathbb{R}^2} \o \chi_{B_{100} } ) (\tilde{x}) \right ). \end{equation} Then obviously for $x, y \in \mathbb{T}^2$ \begin{equation} \notag |u(x) - u(y) | \le d_{\mathbb{T}^2} (x,y) (U(x) + U(y) ), \end{equation} and if $U(x) > \lambda$, then, among the nine points $\tilde{x}_1, \cdots, \tilde{x}_9 \in \left [ -\frac{3}{2}, \frac{3}{2} \right ]^2$ with $\tilde{x}_i - x \in \mathbb{Z}^2$, at least one $\tilde{x}_i$ satisfies $$\sum_{j=1} ^2 \left ( M_{\left \{ \mathfrak{T}^{\xi, j} | \xi \in \mathbb{S}^1 \right \} } (D) (\tilde{x}_i) + M_{\left \{ \mathfrak{T}^{\xi, j} | \xi \in \mathbb{S}^1 \right \} } ( K_0 \star_{\mathbb{R}^2} \o \chi_{B_{100} } ) (\tilde{x}_i) \right ) > \frac{\lambda}{9},$$ and therefore \begin{equation} \notag \begin{split} &\left \{ x \in \left [ -\frac{1}{2}, \frac{1}{2} \right ]^2 | U(x) > \lambda \right \} \\ & \subset \bigcup_{m = (a,b), a,b \in \{ -1, 0, 1 \} } \left \{ y \in \left [ -\frac{1}{2}, \frac{1}{2} \right ]^2 + m \big | M_{\left \{ \mathfrak{T}^{\xi, j} | \xi \in \mathbb{S}^1 \right \} } (D) (y) + M_{\left \{ \mathfrak{T}^{\xi, j} | \xi \in \mathbb{S}^1 \right \} } ( K_0 \star_{\mathbb{R}^2} \o \chi_{B_{100} } ) (y) > \frac{\lambda}{9} \right \} \\ & \subset \left \{ y \in \mathbb{R}^2 | M_{\left \{ \mathfrak{T}^{\xi, j} | \xi \in \mathbb{S}^1 \right \} } (D) (y) + M_{\left \{ \mathfrak{T}^{\xi, j} | \xi \in \mathbb{S}^1 \right \} } ( K_0 \star_{\mathbb{R}^2} \o \chi_{B_{100} } ) (y) > \frac{\lambda}{9} \right \} .
\end{split} \end{equation} Therefore, we see that \begin{equation} \notag \| U \|_{L^{1, \infty} (\mathbb{T}^2 ) } \le C \| \o \|_{L^1 (\mathbb{T}^2 ) }. \end{equation} Also, if $\o \in L^2 (\mathbb{T}^2)$, we see that \begin{equation} \notag \begin{split} \| U \|_{L^2 (\mathbb{T}^2 ) } & \le C \left ( \| M_{\left \{ \mathfrak{T}^{\xi, j} | \xi \in \mathbb{S}^1 \right \} } ( K_0 \star_{\mathbb{R}^2} \o \chi_{B_{100} } ) \|_{L^{2 } (\mathbb{R}^2 ) } + \| M_{\left \{ \mathfrak{T}^{\xi, j} | \xi \in \mathbb{S}^1 \right \} } (D) \|_{L^{2} (\mathbb{R}^2 ) } \right ) \\ &\le C \left (\| \o \chi_{B_{100} } \|_{L^2 (\mathbb{R}^2 ) } + \|D \|_{L^2 (\mathbb{R}^2 ) } \right ) \le C \| \o \|_{L^2 (\mathbb{T}^2 ) }, \end{split} \end{equation} again by Theorem 3.3 of \cite{BC2013} and Young's inequality. \end{proof} We return to the proof of \eqref{stab_1}. We have (Proposition 4.2 in \cite{BC2013}) \begin{equation} \label{DQ1} |u(x) - u(y)| \leq |x-y| \{\tilde{M} \nabla u(x) + \tilde{M} \nabla u(y) \} \ \ \text{a.e.} \ x,y \in \mathbb{T}^2. \end{equation} Now we check that $\{\o^\beta\}$ of \eqref{Lag_beta} with \eqref{vorticity_beta} is equi-integrable (in the sense of \eqref{o_beta:EI}). Fix any $\e>0$. We choose $\delta>0$ such that \begin{equation} \label{EI_o} \text{ if $\mathscr{L}^2 (E^\prime)< \delta$ then $ \int_{E^\prime} |\o_0(x)| \mathrm{d} x< \frac{\e}{2 \mathfrak{C}}$.} \end{equation} From \eqref{vorticity_beta} and \eqref{compression}, for any Borel set $E \subset \mathbb{T}^2$ with $\mathscr{L}^2 (E) < \delta/ \mathfrak{C}$, \begin{equation} \begin{split}\label{est:o1_1} &\| \o^\beta (t,\cdot) \|_{L^1 (E)} = \| \o_0^\beta (X^\beta (0;t,x)) \|_{L^1 (\{x \in E\})} \\ &\leq \mathfrak{C} \int_{X^\beta (t;0,x) \in E} |\o_0^\beta (x) | \mathrm{d} x\\ & \leq \mathfrak{C} \int_{ \mathbb{R}^2} \Big(\int_{ \mathbb{T}^2}\mathbf{1}_{X^\beta (t;0,x) \in E} |\o_0(x-y)| \mathrm{d} x\Big) \varphi^\beta(y) \mathrm{d} y , \end{split} \end{equation} where $\o_0$ is regarded as a $\mathbb{Z}^2$-periodic function.
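To spell out the change of variables behind the second line of \eqref{est:o1_1} (a brief expansion: the flows $X^\beta(0;t,\cdot)$ and $X^\beta(t;0,\cdot)$ are inverse to each other, and \eqref{compression} bounds their compressibility by $\mathfrak{C}$), we have, for any Borel set $E \subset \mathbb{T}^2$, \begin{equation} \notag \int_{E} |\o_0^\beta (X^\beta (0;t,x)) | \mathrm{d} x \le \mathfrak{C} \int_{\mathbb{T}^2} \mathbf{1}_{X^\beta (t;0,y) \in E} |\o_0^\beta (y) | \mathrm{d} y, \end{equation} which is the asserted bound after renaming the integration variable.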
For $y \in \mathbb{R}^2$, we define \begin{equation} \notag \tilde{E}_y:= \{\tilde{x} \in \mathbb{R}^2: X^\beta (t;0,\tilde{x}+y) \in E + \mathbb{Z}^2\} / \mathbb{Z}^2 \subset \mathbb{T}^2. \end{equation} From \eqref{compression} and the fact that $x \mapsto x-y$ is measure-preserving for fixed $y$, we have \begin{equation} \label{est:E} \mathscr{L}^2 (\{\tilde{x} \in \tilde E_y\})= \mathscr{L}^2 (\{x \in \mathbb{T}^2: X^\beta (t;0,x) \in E\}) \leq \mathfrak{C} \mathscr{L}^2 (E)< \delta. \end{equation} Therefore, applying \eqref{est:E} to \eqref{EI_o}, we obtain from \eqref{est:o1_1} that \begin{equation} \label{o_beta:EI} \text{if } \ \mathscr{L}^2 (E )< \delta/\mathfrak{C} \ \text{ then } \ \| \o^\beta (t,\cdot) \|_{L^1(E)} \leq \| \varphi^\beta \|_{L^1 (\mathbb{R}^2)} \sup_{y \in \mathbb{R}^2}\mathfrak{C} \int_{\tilde{x} \in \tilde{E}_y} |\o_0 (\tilde{x})| \mathrm{d} \tilde{x} < \e. \end{equation} Since $\o^\beta$ is equi-integrable, for every $\delta>0$ there exist $C_\delta>0$ and a Borel set $A_\delta \subset \mathbb{T}^2$ such that $\o^\beta$ decomposes as $\o^\beta = \o^\beta_1 + \o^\beta_2$ with $\| \o_1^\beta \|_{L^1} \leq \delta$, $\text{supp}(\o_2^\beta) \subset A_\delta$, and $\| \o_2^\beta \|_{L^2} \leq C_\delta$ (Lemma 5.8 of \cite{BC2013}; its proof follows by noting that equi-integrability together with $\sup_\beta \| \o^\beta\|_{L^1} < \infty$ is equivalent to $\lim_{K\rightarrow \infty} \sup_\beta \int_{\{ |\o^\beta| > K \} \cap \mathbb{T}^2}|\o^\beta| \mathrm{d} x = 0$).
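For concreteness, here is a minimal sketch of the truncation behind this decomposition (the splitting in \cite{BC2013} may be normalized differently): given $\delta > 0$, the displayed characterization of equi-integrability provides $K = K(\delta)$ with $\sup_\beta \int_{\{ |\o^\beta| > K \} \cap \mathbb{T}^2} |\o^\beta| \mathrm{d} x \le \delta$, and one may take \begin{equation} \notag \o_1^\beta := \o^\beta \mathbf{1}_{\{ |\o^\beta| > K \}}, \qquad \o_2^\beta := \o^\beta \mathbf{1}_{\{ |\o^\beta| \le K \}}, \end{equation} so that $\| \o_1^\beta \|_{L^1} \le \delta$ and $\| \o_2^\beta \|_{L^2}^2 \le K \| \o^\beta \|_{L^1} \le K \sup_\beta \| \o^\beta \|_{L^1} =: C_\delta^2$, with $A_\delta = \mathbb{T}^2$ admissible.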
Now apply \eqref{DQ1} to \eqref{dL_1} and use the decomposition $u^\beta = u^\beta_1 + u^\beta_2$ with $u^\beta_i = \nabla^\perp (-\Delta)^{-1} \o^\beta_i$ to derive that \begin{equation} \begin{split}\label{def:U_l} \eqref{dL_1} \leq & \int_{\mathbb{T}^2}U^\lambda_1(s;t,x) \mathrm{d} x + \int_{\mathbb{T}^2}U^\lambda_2(s;t,x) \mathrm{d} x ,\\ U^\lambda_i(s;t,x) :=& \min \Big\{ \frac{|u_i^{\beta_1}(s, X^{\beta_1} (s;t,x))|}{\lambda} + \frac{|u_i^{\beta_1}(s, X^{\beta_2}(s;t,x))|}{\lambda} ,\\ & \ \ \ \ \ \ \ \ \tilde{M} \nabla u_i^{\beta_1} (s, X^{\beta_1}(s;t,x)) + \tilde{M} \nabla u_i^{\beta_1} (s, X^{\beta_2}(s;t,x)) \Big\}\geq 0. \end{split} \end{equation} For $U^\lambda_2$, we use \eqref{compression} and \eqref{o_to_M} and simply derive that \begin{equation} \label{estp:U} \| U^\lambda_2 (s;t, \cdot) \|_{L^2(\mathbb{T}^2)}\leq \mathfrak{C} \min\Big\{ \frac{ 2\| u^{\beta_1}_2 (s) \|_{L^2(\mathbb{T}^2)} }{\lambda} , \| \o^{\beta_1}_2 \|_{L^2} \Big\} \leq \mathfrak{C} C_\delta. \end{equation} For $U^\lambda_1$, using \eqref{o_to_M}, we have \begin{equation} \notag \| U^\lambda_1 (s; t, \cdot) \|_{L^{1, \infty}} \lesssim \min \Big\{ \frac{ \| u_1 ^{\beta_1} (s) \|_{L^{1, \infty}}}{\lambda }, \| \o_1^{\beta_1} \|_{L^1 (\mathbb{T}^2)} \Big\} \leq \| \o_1^{\beta_1} \|_{L^1 (\mathbb{T}^2)} \leq \delta \end{equation} and, for some $p \in (1,2)$, using fractional integration, \begin{equation} \notag \| U^\lambda_1 (s; t, \cdot) \|_{L^{p, \infty}} \lesssim \| U^\lambda_1 (s; t, \cdot) \|_{L^{p }} \lesssim \min \Big\{ \frac{ \| u_1 ^{\beta_1} (s) \|_{L^{p}}}{\lambda }, \| \o_1^{\beta_1} \|_{L^p (\mathbb{T}^2)} \Big\} \lesssim \frac{ \| u_1 ^{\beta_1}(s) \|_{L^{p}}}{\lambda } \lesssim \frac{\delta}{\lambda}. \end{equation} Using the interpolation $ \| g\|_{L^1(\mathbb{T}^2)} \lesssim \| g \|_{L^{1, \infty}} \Big\{1+ \log \Big( \frac{ \| g \|_{L^{p,\infty}}}{\|g \|_{L^{1, \infty}}} \Big) \Big\}$ (Lemma 2.2 of \cite{BC2013}), we end up with \begin{equation} \label{est1:U} \| U^\lambda_1 (s;t, \cdot) \|_{L^1}\lesssim \| U^\lambda_1 (s, \cdot) \|_{L^{1, \infty} } \Big\{1+ \log_+ \Big( \frac{ \|U^\lambda_1 (s,\cdot ) \|_{L^{p,\infty}}}{\|U^\lambda_1 (s,\cdot ) \|_{L^{1, \infty}}} \Big) \Big\} \lesssim \delta + \delta |\log \lambda|, \end{equation} where we have used that the map $z \mapsto z(1+\log_+(K/z))$ is nondecreasing for $z \in [0, \infty)$. Combining \eqref{est:dL_2}, \eqref{estp:U}, and \eqref{est1:U}, we conclude that \begin{equation} \notag \begin{split}\notag &\Lambda(s; t) \le \int_s ^t |\dot{\Lambda} (\tau;t) | \mathrm{d} \tau \le \int_0^t\{\eqref{dL_1} + \eqref{dL_2}\} \mathrm{d} s \\ &\leq \int_0^t\big\{ \| U^\lambda_1 (s;t,\cdot) \|_{L^1(\mathbb{T}^2) } + \| U^\lambda_2 (s;t,\cdot) \|_{L^2(\mathbb{T}^2)} + \frac{\mathfrak{C}}{\lambda} \| u^{\beta_1} (s, \cdot )- u^{\beta_2} (s, \cdot )\|_{L^1(\mathbb{T}^2)}\big\} \mathrm{d} s \\ & \leq \mathfrak{C} C_\delta T + \delta \{1+ |\log \lambda|\} T+ \frac{\mathfrak{C}}{\lambda}\| u^{\beta_1} - u^{\beta_2} \|_{L^1((0,T);L^1(\mathbb{T}^2))}. \end{split} \end{equation} From this inequality, \eqref{lower_Lambda}, and \eqref{Lambda|t}, we derive that \begin{equation} \begin{split}\label{X-X_1} & \mathscr{L}^2 (\{x \in \mathbb{T}^2: |X^{\beta_1}(s;t,x)- X^{\beta_2}(s;t,x)|>\gamma\}) \\ &\le \frac{\Lambda(s; t)}{\log \left (1 + \frac{\gamma}{\lambda} \right )}\lesssim \frac{\| u^{\beta_1} - u^{\beta_2}\|_{L^1L^1}}{ \lambda \left | \log (1+ \frac{\gamma}{\lambda}) \right |} + \frac{C_\delta }{|\log (1+ \frac{\gamma}{\lambda})|}+ \delta \end{split} \end{equation} for $\lambda, \gamma \in (0, 1/e)$.
Here, for the last term, we have used that, for $0<\lambda< 1/e$ and $0<\gamma < 1/e$, \begin{equation} \notag \frac{\delta |\log \lambda|}{|\log (1+ \frac{\gamma}{\lambda})|} = \delta\frac{|\log \lambda|}{ - \log \lambda + \log (\lambda + \gamma) } =\delta \frac{|\log \lambda|}{|\log \lambda| - |\log(\lambda + \gamma)| } \leq \delta \frac{|\log \lambda|}{|\log \lambda| } \leq \delta . \end{equation} Choose \begin{equation} \label{choice_lambda1} \lambda= \lambda_{\delta, \gamma} =(e^{\frac{ 4C_\delta}{\delta}}-1)^{-1} \gamma . \end{equation} Note that $\log (1+ \frac{\gamma}{\lambda_{\delta, \gamma}}) = \log (e^{\frac{ 4C_\delta}{\delta}})= \frac{4C_\delta}{\delta}$. Then \eqref{X-X_1} yields \eqref{stab_1}. \end{proof} \subsection{Convergence of the Velocity Field $u^\beta$} \begin{lemma}\label{lemma_AL} Let $T>0$. Assume \eqref{BS_beta} holds and \begin{equation} \begin{split}\notag \sup_{\beta} \| \o^\beta \|_{L^\infty ((0,T); L^1 (\mathbb{T}^2))}<\infty,\ \ \sup_{\beta} \| u^\beta \|_{L^\infty ((0,T); L^2 (\mathbb{T}^2))}<\infty. \end{split} \end{equation} Then there exists a subsequence $\{\beta^\prime\} \subset \{\beta\}$ such that $u^{\beta^\prime}$ is Cauchy in $L^1((0,T); L^1(\mathbb{T}^2))$. \end{lemma} \begin{proof} The proof relies on elliptic regularity, the Fr\'{e}chet--Kolmogorov theorem, which yields the compact embedding $W^{s,p} (\mathbb{T}^2) \xhookrightarrow{} \xhookrightarrow{} L^q(\mathbb{T}^2)$ for $s>0$ and $1 \leq q \leq p < \infty$, and the Aubin--Lions lemma, which states that for reflexive Banach spaces $X, Y, Z$ such that $Y \xhookrightarrow{} \xhookrightarrow{} X\xhookrightarrow{} Z,$ \begin{equation} \begin{split}\label{Aubin-Lions} { W^{1,r} ((0,T); Z) \cap L^1((0,T); Y) \xhookrightarrow{} \xhookrightarrow{} L^1((0,T); X), \ \text{for} \ r>1.
} \end{split} \end{equation} Note that, from $L^1(\T^2) \xhookrightarrow{} H^{s}(\T^2)$ for any $s<-1$, \begin{equation} \notag \o^\beta \in C^0([0,T]; H^s(\T^2)) \ \text{ uniformly-in-$\beta$ for any $s<-1$.} \end{equation} On the other hand, we have $- \Delta p^\beta = \text{div}(\text{div}(u^\beta \otimes u^\beta))$ with $\fint_{\T^2} p^\beta=0$. Since $u^\beta \in L^\infty ( (0,T); L^2)$ uniformly-in-$\beta$, $u^\beta \otimes u^\beta \in L^\infty((0,T);L^1(\T^2))$ uniformly-in-$\beta$. Using $L^1 (\T^2) \xhookrightarrow{} H^{s}(\T^2)$ for $s<-1$, elliptic regularity gives that the map $L^\infty( (0,T); H^{s-1}(\T^2))\ni \text{div}( u^\beta \otimes u^\beta ) \mapsto \nabla p^\beta \in L^\infty( (0,T);H^{s-1}(\T^2))$ is bounded uniformly-in-$\beta$. Therefore from $\partial_t u^\beta = - \text{div}(u^\beta \otimes u^\beta)- \nabla p^\beta$ we derive that $\partial_t u^\beta \in L^\infty ((0,T);H^{s-1})$ uniformly-in-$\beta$ for any $s<-1$. Therefore we conclude that \begin{equation} \label{AL1} u^\beta \in W^{1, \infty} ((0,T); H^{-5/2} (\T^2)) \ \text{uniformly-in-}\beta. \end{equation} Next, we note that $L^1(\mathbb{T}^2) \xhookrightarrow{} W^{- \frac{3}{4}, \frac{3}{2}} (\mathbb{T}^2)$. This is a consequence of the embedding $W^{ \frac{3}{4}, 3}(\mathbb{T}^2)\xhookrightarrow{} L^\infty(\mathbb{T}^2) $ (note $\frac{3}{4}\cdot 3 > 2$) and the duality argument $L^1(\mathbb{T}^2) \xhookrightarrow{} (L^\infty (\mathbb{T}^2))^* \xhookrightarrow{} (W^{ \frac{3}{4}, 3}(\mathbb{T}^2))^* = W^{- \frac{3}{4},\frac{3}{2}} (\mathbb{T}^2)$. Therefore we derive that $\o^\beta \in L^\infty ((0,T); W^{- \frac{3}{4},\frac{3}{2}} (\mathbb{T}^2) )$ uniformly-in-$\beta$. Now applying the elliptic regularity theory to \eqref{BS_beta}, we derive that \begin{equation} \label{AL2} u^\beta \in L^\infty ((0,T); W^{\frac{1}{4}, \frac{3}{2}} (\mathbb{T}^2) ) \ \text{ uniformly-in-$\beta$.} \end{equation} Now we set $Y= W^{\frac{1}{4}, \frac{3}{2}} (\mathbb{T}^2), X=L^1(\mathbb{T}^2) ,Z= H^{-\frac{5}{2}} (\mathbb{T}^2)$.
Using the Fr\'{e}chet--Kolmogorov theorem, we have $Y= W^{\frac{1}{4}, \frac{3}{2}} (\mathbb{T}^2) \xhookrightarrow{} \xhookrightarrow{} X=L^1(\mathbb{T}^2) \xhookrightarrow{}Z= H^{-\frac{5}{2}} (\mathbb{T}^2)$. Lemma \ref{lemma_AL} then follows from the Aubin--Lions lemma \eqref{Aubin-Lions}. \end{proof} \subsection{Rate of Convergence of $u^\beta$: Localized Yudovich solutions} We use the following version of the theorem, presented in \cite{CS2021}. The theorem in \cite{CS2021} provides the modulus of continuity for $u$ which we will use, and explicitly states that the unique solution is regular Lagrangian. We begin by introducing the localized Yudovich class of vorticity. Intuitively, the localized Yudovich class consists of vorticities whose $L^\mathfrak{p}$ norms grow moderately as $\mathfrak{p} \rightarrow \infty$. The existence and uniqueness results for the Yudovich class extend to the localized Yudovich class; we refer to \cite{CS2021} and the references therein for further details. For a growth function $\Theta$, the corresponding norm is \begin{equation} \notag \| \o \|_{Y_{\mathrm{ul} }^\Theta (\T^2) } := \sup_{1 \le \mathfrak{p} < \infty} \frac{ \| \o \|_{L^\mathfrak{p} (\T^2 ) } }{\Theta (\mathfrak{p}) }.
\end{equation} In this paper, we focus on growth functions satisfying the following condition, which gives quantitative bounds on the behavior of the velocity field $u$; it would be interesting to see whether one can generalize the presented results to arbitrary admissible growth functions. We assume that $\Theta: \mathbb{R}_{\ge 0} \rightarrow \mathbb{R}_{\ge 0}$ satisfies the following: there exists $m \in \mathbb{Z}_{\ge 0}$ such that \begin{equation} \label{Thetap} \Theta (\mathfrak{p}) = \prod_{k=1} ^m \log_k \mathfrak{p} , \end{equation} for large $\mathfrak{p} >1$, where $\log_k \mathfrak{p}$ is defined inductively by $\log_1 \mathfrak{p} = \log \mathfrak{p}$ and \begin{equation} \notag \log_{k+1} \mathfrak{p} = \log \log_{k} \mathfrak{p} . \end{equation} Also, we adopt the convention that $\log_0 \mathfrak{p} = 1.$ We remark that we are only interested in the behavior of $\Theta$ for large $\mathfrak{p}$. Also, we denote the inverse function of $\log_m (\mathfrak{p}) $ (defined for large $\mathfrak{p}$) by $e_m$. Finally, we note that \begin{equation} \notag \int_{e_m(1) } ^\infty \frac{\mathrm{d} \mathfrak{p}}{\mathfrak{p} \Theta (\mathfrak{p} ) } = \infty, \end{equation} which turns out to be essential for the uniqueness of the solution. \begin{theorem}[\cite{CS2021}] \label{thm:localYudovichwellposedness} If $\o_0 \in Y_{\mathrm{ul}}^\Theta (\T^2)$, then for every $T>0$ there exists a unique weak solution $\o \in L^\infty ([0, T]; Y_{\mathrm{ul}}^\Theta (\T^2))$ with $u \in L^\infty ([0, T]; C_b^{0, \varphi_{\Theta} }(\T^2, \mathbb{R}^2 ) )$, which is regular Lagrangian.
Here, the function space $C_b^{0, \varphi_{\Theta} }(\T^2, \mathbb{R}^2 )$ is defined by \begin{equation} \notag C_b ^{0, \varphi_{\Theta} } (\T^2, \mathbb{R}^2 ) = \left \{ v \in L^\infty (\T^2, \mathbb{R}^2 ) \, \Big| \, \sup_{x\ne y}\frac{|v(x) - v(y) | }{\varphi_\Theta (d(x,y))} < \infty \right \}, \end{equation} where $d(x,y)$ is the geodesic distance on the torus $\T^2$, and $\varphi_\Theta$ is defined by \begin{equation} \notag \varphi_{\Theta} (r) = \begin{cases} 0, & r=0, \\ r (1- \log r) \Theta( 1 - \log r), & r \in (0, e^{-2} ), \\ 3 e^{-2} \Theta(3), & r \ge e^{-2}. \end{cases} \end{equation} Also, $\| \o\|_{L^\infty ([0, T]; Y_{\mathrm{ul}}^\Theta (\T^2))}$ and $\| u \|_{C_b ^{0, \varphi_\Theta} (\T^2, \mathbb{R}^2 ) }$ depend only on $\|\o_0 \|_{Y_{\mathrm{ul}}^\Theta (\T^2)}$ and $T$, and the dependence is non-decreasing in both $\|\o_0 \|_{Y_{\mathrm{ul}}^\Theta (\T^2)}$ and $T$. \end{theorem} In this subsection, we prove the following proposition: \begin{proposition} Let $\omega_0 \in Y_{\mathrm{ul}}^{\Theta} (\mathbb{T}^2)$. There exist a constant $M$, depending only on $m$, $\sup_{t \in [0, T]} \|u(t) \|_{L^\infty}$ (and hence on $\|\o_0\|_{L^3}$), and the dimension $d=2$, and a universal constant $C$ (one may take $C=2e$), such that \begin{equation} \label{Rateftn} \sup_{0 \le t \le T} \| u^\beta (t) - u (t) \|_{L^2 (\T^2) } ^2 \le \frac{M}{e_m \left ( \left ( \log_m \left (\frac{M}{\beta^2 \| \o_0 \|_{L^2 (\T^2 ) }^2 } \right ) \right )^{e^{-C \| \o_0\|_{Y_{\mathrm{ul}}^\Theta} T } } \right ) } =: \mathrm{Rate}(\o_0 ; \beta). \end{equation} Note that $\lim_{\beta\rightarrow 0^+} \mathrm{Rate}(\o_0 ; \beta) = 0$. \end{proposition} In particular, the case $m=0$ corresponds to the Yudovich class, with $\mathrm{Rate}(\o_0;\beta) = \beta^{2e^{-C \| \o_0\|_{Y_{\mathrm{ul}}^\Theta} T }} $. \begin{proof} We follow the proof of \cite{Y1995}.
By letting $v = u^\beta - u$, we have \begin{equation} \notag \partial_t v + u^\beta \cdot \nabla_x v - v \cdot \nabla_x u + \nabla_x (p^\beta - p) = 0. \end{equation} Since $u^\beta$ is divergence-free, taking the $L^2 (\T^2)$ inner product with $v$ gives \begin{equation} \notag \frac{1}{2}\frac{\mathrm{d}}{\mathrm{d} t} \| v \|_{L^2 (\T^2) } ^2 \le \int_{\T^2} v \cdot \nabla_x u \cdot v \, \mathrm{d} x, \end{equation} or \begin{equation} \notag \| v(t) \|_{L^2 (\T^2)}^2 \le \| v(0) \|_{L^2 (\T^2) }^2 + 2 \int_0 ^t \int_{\T^2} |\nabla_x u| |v|^2 \, \mathrm{d} x \, \mathrm{d} s. \end{equation} Next, we note that by the Sobolev embedding \begin{equation} \notag \| v \|_{L^\infty (\T^2)}^2 \le 2(\| u \|_{L^\infty (\T^2)}^2 + \| u^\beta \|_{L^\infty (\T^2) }^2 ) \le 2 C \| \o_0 \|_{L^3 (\T^2)} ^2, \end{equation} while energy conservation gives \begin{equation} \notag \| v (t) \|_{L^2 (\T^2)}^2 \le 2 (\| u^\beta (t) \|_{L^2 (\T^2)}^2 + \| u(t) \|_{L^2 (\T^2 )} ^2 ) \le 4 \| u_0 \|_{L^2 (\T^2) }^2. \end{equation} Therefore, the constant $M$, explicitly given by \begin{equation} \notag M:= 1+ 4 \| u_0 \|_{L^2 (\T^2)}^2 e_{m} (1) + 2C \| \o_0 \|_{L^3} ^2, \end{equation} satisfies \begin{equation} \notag \frac{M}{\| v(t) \|_{L^2 (\T^2)}^2} > e_{m} (1) \quad \text{and} \quad \|v (t) \|_{L^\infty (\T^2)}^2 \le M. \end{equation} Then, by the definition of $Y_{\mathrm{ul}}^\Theta$ and the Calder\'{o}n--Zygmund inequality \begin{equation} \notag \| \nabla_x u \|_{L^\mathfrak{p} (\T^2)} \le C \mathfrak{p} \| \o \|_{L^\mathfrak{p} (\T^2)} \end{equation} for $\mathfrak{p} \in (1, \infty)$, we have (suppressing the harmless constant $C$) \begin{equation} \notag \| \nabla_x u \|_{L^\mathfrak{p} (\T^2) } \le \|\o_0 \|_{Y_{\mathrm{ul}}^\Theta } \mathfrak{p} \Theta(\mathfrak{p}) =: \|\o_0 \|_{Y_{\mathrm{ul}}^\Theta }\phi(\mathfrak{p}), \end{equation} where we have used the conservation of $\|\o\|_{L^\mathfrak{p} (\T^2) }$ for every $1 \le \mathfrak{p} < \infty$. We first treat the case $m \ge 1$.
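The choice of $\epsilon$ below exploits an elementary identity, recorded here for the reader's convenience: for $z > 1$, \begin{equation} \notag z^{\frac{1}{\log z}} = e^{\frac{\log z}{\log z}} = e, \end{equation} so taking $\epsilon$ to be the reciprocal of $\log ( M / \| v \|_{L^2 (\T^2)}^2 )$ turns the factor $( M / \| v \|_{L^2 (\T^2)}^2 )^\epsilon$ into the absolute constant $e$.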
By H\"{o}lder's inequality, for each $\epsilon \in (0, \frac{1}{e_{m-1} (1)})$ ($\frac{1}{e_{m-1} (1) } \le 1$) we have \begin{align} &\int_{\mathbb{T}^2} |\nabla_x u | |v|^2 dx \le \| v \|_{L^\infty (\T^2 ) } ^{2\epsilon } \int |v|^{2 (1-\epsilon ) } |\nabla_x u | \mathrm{d} x \notag \\ &\le M^{\epsilon} \left (\int_{\T^2} |v|^2 \mathrm{d} x \right )^{1-\epsilon}\left ( \int_{\T^2} |\nabla_x u|^{\frac{1}{\epsilon } } \mathrm{d} x \right )^\epsilon \notag \\ &\le M^\epsilon \left (\| v \|_{L^2 (\T^2 ) }^2 \right )^{1-\epsilon} \| \o_0 \|_{Y_{\mathrm{ul} }^\Theta } \phi \left (\frac{1}{\epsilon}\right ) = \| \o_0 \|_{Y_{\mathrm{ul} }^\Theta } \| v \|_{L^2 (\T^2 ) }^2 \left (\frac{M}{\|v \|_{L^2 (\T^2 ) }^2 } \right )^\epsilon \phi \left ( \frac{1}{\epsilon } \right ).\notag \end{align} Now choose \begin{equation} \notag \epsilon^* = \frac{1}{\log \frac{M}{\|v(t) \|_{L^2 (\T^2 ) }^2 }}. \end{equation} Then since $\frac{M}{\|v(t) \|_{L^2 (\T^2 ) } ^2 } > e_{m} (1)$, $\log \left (\frac{M}{\|v(t) \|_{L^2 (\T^2 ) } ^2 } \right ) > \log (e_{m} (1) ) = e_{m-1} (1)$ so $\epsilon^* \in (0, \frac{1}{e_{m-1} (1) } )$. There, we have \begin{align} &\left (\frac{M}{\|v \|_{L^2 (\T^2 ) }^2 } \right )^{\epsilon^*} \phi \left ( \frac{1}{{\epsilon^*} } \right )\notag \\ &= e \log \left (\frac{M}{\|v(t) \|_{L^2 (\T^2 ) } ^2 } \right ) \log \left ( \log \left (\frac{M}{\|v(t) \|_{L^2 (\T^2 ) } ^2 } \right ) \right ) \cdots \log_m \left ( \log \left (\frac{M}{\|v(t) \|_{L^2 (\T^2 ) } ^2 } \right ) \right )\notag \\ &= e \Theta \left (\frac{M}{\|v(t) \|_{L^2 (\T^2 ) } ^2 } \right ).\notag \end{align} For $m=0$ (Yudovich case), $\epsilon \rightarrow \left (\frac{M}{\|v \|_{L^2 (\T^2 ) }^2 } \right )^\epsilon \phi \left ( \frac{1}{\epsilon } \right ) = \left (\frac{M}{\|v \|_{L^2 (\T^2 ) }^2 } \right )^\epsilon \frac{1}{\epsilon}$ attains its minimum at $\epsilon ^* = \frac{1}{\log \left (\frac{M}{\|v \|_{L^2 (\T^2 ) }^2 } \right )}$, so we choose $M$ such that $\epsilon^* < 1$. 
Therefore, we have \begin{equation} \notag \int_\T^2 |\nabla_x u| |v|^2 \mathrm{d} x \le e \| \o_0 \|_{Y_{\mathrm{ul} }^\Theta } \| v \|_{L^2 (\T^2 ) }^2 \Theta \left (\frac{M}{\|v \|_{L^2 (\T^2 ) } ^2 } \right ). \end{equation} To sum up, we have \begin{equation} \notag \| v(t) \|_{L^2 (\T^2 ) }^2 \le \| v_0 \|_{L^2 (\T^2 ) }^2 + \int_0 ^t 2 e \| \o_0 \|_{Y_{\mathrm{ul } }^\Theta } \Psi ( \| v(s) \|_{L^2 (\T^2 ) } ^2 ) \mathrm{d} s, \end{equation} where \begin{equation} \notag \Psi (r) = r \Theta \left (\frac{M}{r} \right ). \end{equation} Then by Osgood's lemma, we have \begin{equation} \notag -\mathcal{M} (\| v(t) \|_{L^2 (\T^2 ) }^2 ) +\mathcal{M} (\| v_0 \|_{L^2 (\T^2 ) }^2 ) \le 2e\| \o_0 \|_{Y_{\mathrm{ul} }^\Theta} t, \end{equation} where \begin{align} &\mathcal{M} (x) = \int_x ^a \frac{\mathrm{d} r}{\Psi (r) } = \int_x ^a \frac{\mathrm{d} r}{r \prod_{k=1} ^m \log_k \left (\frac{M}{r} \right )} \notag\\ & = \int_{\frac{M}{a}} ^{\frac{M}{x} } \frac{\mathrm{d} z}{z \prod_{k=1} ^m \log_k \left (z \right )} = \int_{\log_m (\frac{M}{a} ) } ^{\log_m (\frac{M}{x} ) } \frac{\mathrm{d} y}{y} = \log_{m+1} \left (\frac{M}{x} \right ) - \log_{m+1} \left (\frac{M}{a} \right ).\notag \end{align} where $a = 2 \| u_0 \|_{L^2(\T^2)}^2$ and we have used the substitution $z = \frac{M}{r}$ for the third identity and $y = \log_m (z)$ with \begin{equation} \notag \frac{\mathrm{d} y}{\mathrm{d} z} = \frac{1}{z \prod_{k=1} ^{m-1} \log_k (z) } \end{equation} for the forth identity. In particular, we have \begin{equation} \notag \begin{split} &\log_{m+1} \left (\frac{M}{\|v(t) \|_{L^2 (\T^2 )}^2 } \right )\\ & \ge \log_{m+1} \left (\frac{M}{\|v_0 \|_{L^2 (\T^2 )}^2 } \right ) - C \| \o_0 \|_{Y_{\mathrm{ul } } ^\Theta } t = \log \left ( \log_m\left (\frac{M}{\|v_0 \|_{L^2 (\T^2 )}^2 } \right ) e^{-C t \| \o_0 \|_{Y_{\mathrm{ul } }^\Theta} } \right ), \end{split} \end{equation} and taking $e_{m+1}$ and reciprocal gives the desired conclusion. 
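For orientation, we record the simplest iterated case $m=1$ of \eqref{Rateftn} (this is only an unpacking of the formula above, not an additional claim): since $\log_1 = \log$ and $e_1 = \exp$, \begin{equation} \notag \mathrm{Rate}(\o_0 ; \beta) = \frac{M}{\exp \left ( \left ( \log \left ( \frac{M}{\beta^2 \| \o_0 \|_{L^2 (\T^2 )}^2 } \right ) \right )^{e^{-C \| \o_0 \|_{Y_{\mathrm{ul}}^\Theta} T} } \right )}, \end{equation} which tends to $0$ as $\beta \rightarrow 0^+$, but more slowly than any positive power of $\beta$, since the exponent $e^{-C \| \o_0 \|_{Y_{\mathrm{ul}}^\Theta} T}$ lies in $(0,1)$.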
Indeed, $\mathrm{Rate}(\o_0;\beta)$ is a continuous function of $\beta$, and it converges to $0$ as $\beta \rightarrow 0^+$ since $\mathcal{M}(0)=\infty$. \end{proof} \subsection{Convergence of $\o^\beta$} \begin{proposition}\label{theo:convergence} For any fixed $\mathfrak{p} \in [1,\infty]$, suppose $\o_0 \in L^\mathfrak{p} (\mathbb{T}^2)$. Recall the regularization of the initial data $\o_0^\beta$ in \eqref{vorticity_beta}. Let $(u^\beta, \o^\beta)$ and $(u, \o)$ be Lagrangian solutions of \eqref{vorticity_eqtn}-\eqref{BS_beta} and \eqref{vorticity}-\eqref{BS}, respectively. For any $T>0$ and the subsequence $\{\beta^\prime\} \subset \{\beta\}$ in Lemma \ref{lemma_AL}, we have \begin{equation} \label{stab_o_beta} \sup_{t \in [0,T]}\| \o^{\beta^\prime} (t, \cdot)- \o (t, \cdot) \| _{L^\mathfrak{p} (\mathbb{T}^2)} \rightarrow 0 \ \ \text{as} \ \ \beta^\prime \rightarrow 0. \end{equation} \end{proposition} \begin{proof} For the subsequence $\{\beta^\prime\} \subset \{\beta\}$ in Lemma \ref{lemma_AL}, \begin{align} &|\o (t,x) - \o^{\beta^\prime} (t,x)|\notag\\ & = |\o_0(X(0;t,x)) - \o^{\beta^\prime} _0 (X^{\beta^\prime} (0;t,x))|\notag\\ & \leq |\o_0(X(0;t,x)) - \o^\ell_0 (X(0;t,x))| + |\o_0^\ell (X^{\beta^\prime} (0;t,x)) - \o_0^{\beta^\prime} (X^{\beta^\prime} (0;t,x))| \label{diff_o_1} \\ &+ |\o^\ell_0(X(0;t,x)) - \o^\ell_0 (X^{\beta^\prime} (0;t,x))|. \label{diff_o_2} \end{align} Using the compressibility \eqref{compression}, we derive that, for $\mathfrak{p} \in [1,\infty]$, \begin{equation} \label{est:diff_o_1} \| \eqref{diff_o_1}\|_{L^\mathfrak{p}} \leq 2 \mathfrak{C} \| \o_0 - \o_0^\ell \|_{L^\mathfrak{p}}.
\end{equation} For the last term, we need the stability of the Lagrangian flows: \begin{equation} \begin{split}\label{est:diff_o_2} \|\eqref{diff_o_2}\|_{L^{\mathfrak{p}} (\mathbb{T}^2)} &\leq \| \nabla \o_0^\ell \|_{L^\infty} \|X(0;t,\cdot) - X^{\beta^\prime} (0;t,\cdot)\|_{L^\mathfrak{p} (\mathbb{T}^2)}\\ & \leq \| \nabla \varphi^\ell \|_{L^\infty} \| \o_0 \|_{L^1} \|X(0;t,\cdot) - X^{\beta^\prime} (0;t,\cdot)\|_{L^\mathfrak{p} (\mathbb{T}^2)}\\ & \leq \frac{1}{ \ell^3} \| \nabla \varphi \|_{L^\infty (\mathbb{T}^2)} \| \o_0 \|_{L^1} \|X(0;t,\cdot) - X^{\beta^\prime} (0;t,\cdot)\|_{L^\mathfrak{p} (\mathbb{T}^2)}, \end{split} \end{equation} where we have used \eqref{growth:Do_ell}. For $\mathfrak{p} >1$, we use \eqref{stab_rLf} in Proposition \ref{prop_stab} and Lemma \ref{lemma_AL} to obtain \begin{equation} \begin{split}\label{est1:diff_o_2} \eqref{est:diff_o_2} & \lesssim \frac{1}{\ell^3} \frac{1+ \| \nabla u^{\beta^\prime}\|_{L^1((0,T) ; L^\mathfrak{p}(\mathbb{T}^2))} }{|\log \| u - u^{\beta ^\prime} \|_{L^1((0,T) ; L^1(\mathbb{T}^2))} |}. \end{split} \end{equation} Now we choose \begin{equation} \label{choice_ell} \ell= \ell(\beta^\prime) \sim |\log \| u - u^{\beta ^\prime} \|_{L^1((0,T) ; L^1(\mathbb{T}^2))} |^{-\frac{1}{10}} \ \ \text{for each } \beta^\prime, \end{equation} such that \begin{equation} \notag \begin{split} \ell= \ell(\beta^\prime) \downarrow 0 \ \ \text{as} \ \beta^\prime \downarrow 0,\\ \ell^3 |\log \| u - u^{\beta ^\prime} \|_{L^1((0,T) ; L^1(\mathbb{T}^2))} | \rightarrow \infty \ \ \text{as} \ \beta^\prime \downarrow 0. \end{split} \end{equation} Therefore, for $\mathfrak{p}>1$, we conclude that $\eqref{est1:diff_o_2} \rightarrow 0$ as $\beta^\prime \downarrow 0$. Combining this with \eqref{est:diff_o_1}, we conclude \eqref{stab_o_beta} for $\mathfrak{p} >1$.
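To make the role of the exponent $\tfrac{1}{10}$ in \eqref{choice_ell} explicit (our bookkeeping), the prefactor in \eqref{est1:diff_o_2} behaves like

```latex
% With \ell \sim |\log \| u - u^{\beta'} \|_{L^1_t L^1_x}|^{-1/10}:
\begin{equation*}
\frac{1}{\ell^3 \, |\log \| u - u^{\beta ^\prime} \|_{L^1((0,T) ; L^1(\mathbb{T}^2))} |}
\sim |\log \| u - u^{\beta ^\prime} \|_{L^1((0,T) ; L^1(\mathbb{T}^2))} |^{-\frac{7}{10}}
\rightarrow 0 \ \ \text{as} \ \beta^\prime \downarrow 0,
\end{equation*}
% since \| u - u^{\beta'} \|_{L^1 L^1} \rightarrow 0 along the subsequence
% of Lemma \ref{lemma_AL}.
```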
For $\mathfrak{p}=1$, for any $\e>0$ there exists $C_\e>0$ such that \begin{equation} \begin{split}\label{stab_1} & \mathscr{L}^2 (\{x \in \mathbb{T}^2: |X^{\beta_1}(s;t,x)- X^{\beta_2}(s;t,x)|>\gamma\}) \\ &\leq \frac{e^{\frac{4C_\e}{\e}}}{\frac{4C_\e}{\e}} \frac{ \| u^{\beta_1} - u^{\beta_2}\|_{L^1 ((0,T); L^1(\mathbb{T}^2))}}{\gamma} + \e \ \ \ \text{for any} \ \ \gamma >0. \end{split} \end{equation} Using \eqref{stab_1}, we have \begin{equation} \begin{split}\notag &\|X(0;t,\cdot) - X^{\beta^\prime}(0;t,\cdot)\|_{L^1 (\mathbb{T}^2)}\\ & \leq \int_{|X(0;t,\cdot) - X^{\beta^\prime} (0;t,\cdot)| \leq \gamma} |X(0;t,x) - X^{\beta^\prime} (0;t,x)| \mathrm{d} x\\ & \ \ + \int_{|X(0;t,\cdot) - X^{\beta^\prime}(0;t,\cdot)| \geq \gamma} |X(0;t,x) - X^{\beta^\prime} (0;t,x)| \mathrm{d} x\\ & \leq \gamma + \frac{e^{\frac{4C_\e}{\e}}}{\frac{4C_\e}{\e}} \frac{ \| u - u^{\beta^\prime} \|_{L^1 ((0,T); L^1(\mathbb{T}^2))}}{\gamma} + \e , \end{split} \end{equation} and hence \begin{equation} \label{est:diff_X} \eqref{est:diff_o_2}\lesssim \frac{1}{\ell^3}\Big\{ \gamma + \frac{e^{\frac{4C_\e}{\e}}}{\frac{4C_\e}{\e}} \frac{ \| u - u^{\beta^\prime} \|_{L^1 ((0,T); L^1(\mathbb{T}^2))}}{\gamma} + \e \Big\}. \end{equation} For each $\e>0$, we choose $\gamma=\e$ and $\ell= \e^{\frac{1}{10}}$, and then let $\beta^\prime \downarrow 0$, so that $\frac{e^{\frac{4C_\e}{\e}}}{\frac{4C_\e}{\e}} \frac{1}{\e^{\frac{13}{10}}} \| u - u^{\beta^\prime} \|_{L^1 ((0,T); L^1(\mathbb{T}^2))} \rightarrow 0$. Combining with \eqref{est:diff_o_1}, we conclude \eqref{stab_o_beta} for $\mathfrak{p}=1$.\end{proof} \subsubsection{When $\o_0$ has no regularity} If $\o_0 \in Y_{\mathrm{ul}} ^\Theta (\T^2)$ and no additional regularity is assumed, one cannot expect a convergence rate which is uniform in $\o_0$: the rate crucially depends on how fast $\o_0 ^\beta$ converges to $\o_0$. Suppose that $\o(t)$ is the Lagrangian solution with initial data $\o_0$.
Then we have \begin{align} &|\o (t,x) - \o^\beta (t,x) | = | \o_0 (X(0; t, x) ) - \o_0 ^\beta (X^\beta (0; t, x)) | \notag\\ &\le |\o_0 (X(0; t, x)) - \o_0 ^\ell (X(0; t, x))| + |\o_0 ^\ell (X^\beta (0; t, x)) - \o_0 ^\beta (X^\beta (0; t, x))| \notag \\ & + |\o_0 ^\ell (X(0; t, x)) - \o_0 ^\ell (X^\beta (0; t, x))|,\notag \end{align} where $\o_0 ^\ell$ is the initial data regularization of $\o_0$ with parameter $\ell$. Therefore, by the compression property we have \begin{align} &\| \o(t) - \o^\beta (t) \|_{L^\mathfrak{p} (\T^2 ) } \le \mathfrak{C} \| \o_0 - \o_0 ^\ell \|_{L^\mathfrak{p} (\T^2) } + \| \o_0 ^\ell - \o_0 ^\beta \|_{L^\mathfrak{p} (\T^2 ) } \notag\\ &+ \| \o_0 ^\ell (X(0; t, \cdot )) - \o_0 ^\ell (X^\beta (0; t, \cdot ) ) \|_{L^\mathfrak{p} (\T^2)}.\notag \end{align} By the triangle inequality, the first two terms are bounded as \begin{equation} \mathfrak{C} \| \o_0 - \o_0 ^\ell \|_{L^\mathfrak{p} (\T^2) } + \| \o_0 ^\ell - \o_0 ^\beta \|_{L^\mathfrak{p} (\T^2 ) } \le (\mathfrak{C} +1 ) \| \o_0 - \o_0 ^\ell \|_{L^\mathfrak{p} (\T^2) } + \| \o_0 ^\beta - \o_0 \|_{L^\mathfrak{p} (\T^2 ) }.\notag \end{equation} The last term is estimated by \eqref{est:diff_o_2} and \eqref{est1:diff_o_2}: \begin{equation} \| \o_0 ^\ell (X(0; t, \cdot )) - \o_0 ^\ell (X^\beta (0; t, \cdot ) ) \|_{L^\mathfrak{p} (\T^2)} \le \frac{C (1 + \mathfrak{p} \| \o_0 \|_{L^\mathfrak{p} (\T^2) } t )}{ \ell^3 | \log \mathrm{Rate}(\o_0; \beta) | }.\notag \end{equation} Choosing $\ell = |\log \mathrm{Rate} (\o_0; \beta) |^{-\frac{1}{4}}$ gives, for $t \in [0, T]$, \begin{equation} \label{Rateftnvort} \begin{split} \| \o(t) - \o^\beta (t) \|_{L^\mathfrak{p} (\T^2) } &\lesssim \| \o_0 ^\beta - \o_0 \|_{L^\mathfrak{p} (\T^2)} + \| \o_0 - \o_0 ^{|\log \mathrm{Rate} (\o_0; \beta) |^{-\frac{1}{4}} } \|_{L^\mathfrak{p} (\T^2) } + \frac{1+ \mathfrak{p} \| \o_0 \|_{L^\mathfrak{p} (\T^2 ) } T}{|\log \mathrm{Rate} (\o_0; \beta ) |^{\frac{1}{4} } } \\ & =: \mathrm{Rate}_\o (\o_0; \beta).
\end{split} \end{equation} Since there is no explicit rate for the convergence of $\| \o_0 ^\beta - \o_0 \|_{L^\mathfrak{p} (\T^2 ) }$, the first two terms dominate the rate of convergence in general. \subsubsection{When $\o_0$ has some regularity} An important class of localized Yudovich vorticities belongs to Besov spaces of positive regularity index: for example, $f(x) = \log( \log |x| ) \varphi(x) \in Y_{\mathrm{ul} }^\Theta$ with $\Theta (\mathfrak{p}) = \log \mathfrak{p}$, where $\varphi(x)$ is a smooth cutoff function, belongs to $W^{1, r} (\T^2)$ for $r < 2$, and thus to the Besov space $B^s_{2, \infty}$ for $s < 1$. Of course, vortex patches $\chi_D$ whose boundary has box-counting dimension $d_F (\partial D)<2$ belong to $B^{\frac{2-d_F (\partial D)}{p}}_{p, \infty}$ for $1 \le p <\infty$ (\cite{CW1996}), and thus vortex patches with a mild singularity in the interior of $D$ also belong to a certain Besov space of positive regularity. In this subsection, we provide the rate of convergence of the vorticity when $\o_0 \in Y_{\mathrm{ul}} ^\Theta (\T^2) \cap B^s_{2, \infty} (\T^2)$ or $\o_0 \in L^\infty (\T^2) \cap B^s_{2, \infty} (\T^2)$. Unlike the Yudovich case $\o_0 \in L^\infty (\T^2)$, if $\o_0$ is in the localized Yudovich class $Y_{\mathrm{ul} }^\Theta (\T^2)$, then even if the initial vorticity has additional Besov regularity, that is, $\o_0 \in Y_{\mathrm{ul}}^\Theta (\T^2) \cap B^s_{2, \infty} (\T^2)$ for some $s>0$, the Besov regularity of the vorticity $\o(t)$ may not propagate, even in a losing manner. The key obstruction is that the standard propagation-of-regularity results do not generalize to this class. We will explain this after proving the result, following the arguments of \cite{CDE2019}, \cite{BCD2011}, and \cite{V1999}.
\begin{proposition} If $\o_0 \in Y_{\mathrm{ul}}^\Theta (\T^2) \cap B^s _{2, \infty } (\T^2)$ for some $s>0$, then we have \begin{equation} \label{vorticityratelocYud} \begin{split} &\| \o(t) - \o^\beta (t) \|_{L^2 (\T^2 ) } \\ &\le C(T, \| \o_0 \|_{L^2 (\T^2) }, \| \o_0 \|_{B^s_{2, \infty } (\T^2 ) } ) \left (\beta^{\frac{s'}{1+s'}} + \left ( \frac{1}{|\log \mathrm{Rate} (\o_0; \beta ) | } \right )^{\frac{s'}{3+4s'} } \right ) \\ & =: \mathrm{Rate}_{\o, s, loc-Y} (\beta) \end{split} \end{equation} for any $s' \in (0, s)$ and $t \in [0, T]$. Moreover, if $\o_0 \in L^\infty (\T^2) \cap B^s _{2, \infty } (\T^2)$, \begin{equation} \label{vorticityrateYud} \| \o^\beta (t) - \o(t) \|_{L^2 (\T^2) } \le C(s, T, \|\o_0\|_{B^s_{2, \infty} (\T^2 ) } ) \beta^{C(s) e^{-C(\|\o_0 \|_{L^\infty (\T^2 ) }) T } } =: \mathrm{Rate}_{\o, s, Y} (\beta). \end{equation} In particular, if $\o_0$ is Yudovich with some Besov regularity, the vorticity converges with an algebraic rate $\beta^\alpha$ for some $\alpha>0$. \end{proposition} \begin{proof} First, we prove the rate for $\o_0 \in Y_{\mathrm{ul}}^\Theta (\T^2) \cap B^s _{2, \infty } (\T^2)$. We rely on the above rate: \begin{equation} \notag \| \o(t) - \o^\beta (t) \|_{L^2 (\T^2) } \le C ( \| \o_0 - \o_0 ^\ell \|_{L^2 (\T^2 ) } + \| \o_0 - \o_0 ^\beta \|_{L^2 (\T^2 ) } ) + \frac{C(1+T \| \o_0 \|_{L^2 (\T^2 ) } )}{ \ell^3 | \log \mathrm{Rate}(\beta) | }. \end{equation} Since $\o_0 \in B^s _{2, \infty} (\T^2)$, we may use the following interpolation: \begin{align} &\| \o_0 - \o_0 ^\beta \|_{L^2 (\T^2)}\notag \\ &\le \| \o_0 - \o_0 ^\beta \|_{H^{-1} (\T^2)} ^{\frac{s'}{1+s'} }\| \o_0 - \o_0 ^\beta \|_{H^{s'} (\T^2)} ^{\frac{1}{1+s'} } \le \| \o_0 - \o_0 ^\beta \|_{H^{-1} (\T^2)} ^{\frac{s'}{1+s'} }\| \o_0 - \o_0 ^\beta \|_{B^s_{2, \infty} (\T^2) } ^{\frac{1}{1+s'} },\notag \end{align} for arbitrary $s' \in (0, s)$, where we have used that $H^s = B^s_{2,2}$ and $B^s_{p,q} (\T^2) \subset B^{s'}_{p, q'} (\T^2)$ for $s'<s$ and arbitrary $q, q'$.
(The proof for the whole space, which is standard, translates easily to the periodic domain $\T^2$.) Since \begin{equation} \| \o_0 - \o_0 ^\beta \|_{H^{-1} (\T^2)} \le \| u_0 - u_0 ^\beta \|_{L^2 (\T^2 ) } \le C \beta \| \o_0 \|_{L^2 (\T^2 )}, \notag \end{equation} we have \begin{equation} \| \o_0 - \o_0 ^\beta \|_{L^2 (\T^2)} \le C \beta^{\frac{s'}{1+s'}} \| \o_0 \|_{L^2 (\T^2 ) }^{\frac{s'}{1+s'}} \| \o_0 \|_{B^s_{2,\infty} (\T^2) } ^{\frac{1}{1+s'}},\notag \end{equation} and similarly \begin{equation} \| \o_0 - \o_0 ^\ell \|_{L^2 (\T^2)} \le C \ell^{\frac{s'}{1+s'}} \| \o_0 \|_{L^2 (\T^2 ) }^{\frac{s'}{1+s'}} \| \o_0 \|_{B^s_{2,\infty} (\T^2) } ^{\frac{1}{1+s'}}.\notag \end{equation} Finally, we match $\ell$ with $\beta$ to find a rate of convergence: we choose $\ell$ so that \begin{equation} \frac{1}{\ell^3 | \log \mathrm{Rate}(\beta) | } = \ell^{\frac{s'}{1+s'} }.\notag \end{equation} Then we have \begin{equation} \notag \ell^{\frac{s'}{1+s'}} = \frac{1}{\ell^3 | \log \mathrm{Rate}(\beta) | } = \left ( \frac{1}{|\log \mathrm{Rate} (\beta ) | } \right )^{\frac{s'}{3+4s'} } \rightarrow 0, \end{equation} as $\beta\rightarrow 0$. To summarize, we have \begin{equation} \notag \| \o(t) - \o^\beta (t) \|_{L^2 (\T^2 ) } \le C(T, \| \o_0 \|_{L^2 (\T^2) }, \| \o_0 \|_{B^s_{2, \infty } (\T^2 ) } ) \left (\beta^{\frac{s'}{1+s'}} + \left ( \frac{1}{|\log \mathrm{Rate} (\beta ) | } \right )^{\frac{s'}{3+4s'} } \right ), \end{equation} as desired. Note that in the Yudovich class, $\mathrm{Rate}(\beta) = \beta^C$, and thus this rate is dominated by $\frac{1}{|\log \beta |^\alpha}$, which is much slower than the algebraic rate $\beta^\alpha$. Next, we prove the improved rate for Yudovich initial data $\o_0 \in L^\infty(\T^2)$. First, we estimate the distance $|X^\beta (0; t, x) - X^\beta (0; t, y)|$ in terms of $|x-y|$, uniformly in $\beta$.
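Before proceeding, let us record the elementary solve behind the matching of $\ell$ and $\beta$ above (our bookkeeping):

```latex
% Solving \ell^{s'/(1+s')} = (\ell^3 |\log \mathrm{Rate}(\beta)|)^{-1}:
\begin{equation*}
\ell^{\,3 + \frac{s'}{1+s'}} = \ell^{\frac{3+4s'}{1+s'}}
= \frac{1}{|\log \mathrm{Rate} (\beta ) |}
\quad \Longrightarrow \quad
\ell = |\log \mathrm{Rate} (\beta ) |^{-\frac{1+s'}{3+4s'}},
\qquad
\ell^{\frac{s'}{1+s'}} = |\log \mathrm{Rate} (\beta ) |^{-\frac{s'}{3+4s'}}.
\end{equation*}
```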
For later use, we calculate the rate for the localized Yudovich class as well; $m=0$ corresponds to $\o_0 \in L^\infty (\T^2)$. If $\o_0 \in Y_{\mathrm{ul}}^\Theta (\T^2) \cap B^s_{2, \infty} (\T^2)$, then $\o_0^\beta \in Y_{\mathrm{ul}}^\Theta (\T^2) \cap B^s_{2, \infty} (\T^2)$ as well, with \begin{equation} \notag \sup_\beta ( \| \o_0^\beta \|_{Y_{\mathrm{ul}}^\Theta (\T^2)} + \|\o_0 ^\beta \|_{B^s_{2, \infty} (\T^2)} ) \le \| \o_0 \|_{Y_{\mathrm{ul}}^\Theta (\T^2)} + \|\o_0 \|_{B^s_{2, \infty} (\T^2)} . \end{equation} We first estimate the modulus of continuity for $u^\beta$ with $\o_0 \in Y_{\mathrm{ul}}^\Theta (\T^2)$, given by Theorem \ref{thm:localYudovichwellposedness}: the modulus $\varphi_\Theta$ satisfies \begin{equation} \notag \varphi_\Theta (r) \le \begin{cases} 0, & r=0, \\ r (1 - \log r) \prod_{k=1} ^m \log_k (1 - \log r), & 0 < r < \frac{1}{e^{e_m(1) -1 } }, \\ C(\Theta), & r \ge \frac{1}{e^{e_m(1) -1 } }, \end{cases} \end{equation} where $C(\Theta)$ is a constant depending on $\Theta$. We have \begin{align}\notag &| X^\beta (0; t, x)- X^\beta (0; t, y) | \le |x- y| + \int_0 ^t \left |\frac{d}{ds} X^\beta(s; t, x)- \frac{d}{ds} X^\beta(s; t, y)\right | \mathrm{d} s \notag\\ &= |x-y| + \int_0 ^t | u(X^\beta (s; t, x), s) - u(X^\beta(s; t, y) , s) | \mathrm{d} s \notag\\ & \le |x-y| + \int_0 ^t \varphi_\Theta (|X^\beta(s; t, x)- X^\beta(s; t, y) | ) B \mathrm{d} s.\notag \end{align} Here, by Theorem \ref{thm:localYudovichwellposedness}, the constant $B$ (specified below) is uniform in $\beta$.
Then by Osgood's lemma, we have \begin{equation} \notag -\mathcal{M} ( |X^\beta (0; t, x) - X^\beta(0; t, y) | ) + \mathcal{M} (|x-y| ) \le Bt, \end{equation} where \begin{equation} \notag \mathcal{M} (x) = \int_x ^1 \frac{\mathrm{d} r}{\varphi_\Theta (r) } = \begin{cases} \int_x ^{\frac{1}{e^{e_m(1) - 1 } }} \frac{1}{r (1-\log r) \prod_{k=1} ^m \log_k (1-\log r) } \mathrm{d} r + \int_{\frac{1}{e^{e_m(1) - 1 } }} ^1 \frac{\mathrm{d} r}{\varphi_\Theta (r) }, & x < \frac{1}{e^{e_m(1) - 1 }}, \\ \int_{x } ^1 \frac{\mathrm{d} r}{\varphi_\Theta (r) }, & x \ge \frac{1}{e^{e_m(1) - 1 }}, \end{cases} \end{equation} and $B$ is an upper bound for $\|u^\beta \|_{L^\infty ([0, T]; C_b ^{0, \varphi_\Theta} (\T^2, \mathbb{R}^2 ) ) }$. For future use, we take $B$ so that $e^{BT} > e_m (1)$. Thus, if $x \ge \frac{1}{e^{e_m(1) - 1 }}$, then $\mathcal{M} (x) \le C_0$ for some positive constant $C_0$. If $x < \frac{1}{e^{e_m(1) - 1 }}$, then \begin{equation} \notag \int_x ^{\frac{1}{e^{e_m(1) - 1 } }} \frac{1}{r (1-\log r) \prod_{k=1} ^m \log_k (1-\log r) } \mathrm{d} r = \log_{m+1} (1- \log x) \end{equation} using the substitution $y = \log_m (1-\log r)$, and thus \begin{equation} \notag \mathcal{M} (x) \in [\log_{m+1} (1-\log x), \log_{m+1} (1-\log x) + C_0] \end{equation} for a (possibly larger) positive constant $C_0$.
Therefore, if $|x-y|$ is sufficiently small so that $\log_{m+1} (1 - \log |x-y|) - BT > C_0$, then since \begin{equation} \notag \mathcal{M} ( | X^\beta (0; t, x)- X^\beta(0; t, y) |) \ge \mathcal{M} (|x-y|) - Bt \ge \log_{m+1} (1 - \log |x-y| ) - BT, \end{equation} we have $|X^\beta (0; t, x)- X^\beta(0; t, y) |< \frac{1}{e^{e_m(1) - 1 }}$, and therefore \begin{equation} \notag \log_{m+1} (1 - \log (|X^\beta (0; t, x)- X^\beta(0; t, y) | ) ) \ge \log_{m+1} (1 - \log |x-y| ) - BT - C_0, \end{equation} which gives \begin{equation} \notag 1 - \log (|X^\beta (0; t, x)- X^\beta(0; t, y) | ) \ge e_{m+1} (\log_{m+1} (1 - \log |x-y| ) - BT - C_0), \end{equation} or \begin{equation} \notag |X^\beta (0; t, x)- X^\beta(0; t, y) | \le e \exp \left ( - e_{m+1} \left (\log_{m+2} \left (\frac{e}{|x-y|} \right ) - BT - C_0\right ) \right ), \end{equation} which is uniform in $\beta$. From now on, we assume $m=0$. We closely follow the proof of \cite{CDE2019} (and \cite{BCD2011}). We rewrite the above as \begin{equation} \notag |X^\beta (0; t, x)- X^\beta(0; t, y) | \le e \left ( \frac{|x-y| }{e} \right )^{e^{-(BT + C_0 ) }} =: C(T) (|x-y|)^{\alpha(T)}, \end{equation} where $\alpha(T) = \exp (-(BT + C_0) )$, which deteriorates in time, and $C(T) = \exp(1 - e^{-(BT+ C_0) } )$, which increases in time.
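For $m=0$, this rewriting is the elementary computation (recall $e_1 = \exp$ and $\log_2 = \log \circ \log$):

```latex
% e_1( \log_2(e/r) - BT - C_0 ) = \log(e/r) e^{-(BT+C_0)}, so with
% r = |x-y| and \alpha(T) = e^{-(BT+C_0)}:
\begin{equation*}
e \exp \left ( - e_{1} \left ( \log_{2} \left (\frac{e}{r} \right ) - BT - C_0 \right ) \right )
= e \exp \left ( - \alpha(T) \log \frac{e}{r} \right )
= e \left ( \frac{r}{e} \right )^{\alpha(T)}
= C(T) \, r^{\alpha(T)},
\end{equation*}
% with C(T) = e^{1-\alpha(T)} = \exp(1 - e^{-(BT+C_0)}).
```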
Next, we introduce the space $F^s_\mathfrak{p} (\T^2)$, which belongs to the family of Triebel-Lizorkin spaces $F^s_\mathfrak{p} = F^s_{\mathfrak{p}, \infty}$ for $\mathfrak{p}>1$: \begin{equation} \label{TriebelLizorkin} \begin{split} F^s_{\mathfrak{p}} (\T^2) &= \{ f \in L^\mathfrak{p} (\T^2) | \text{ there exists } g \in L^\mathfrak{p} (\T^2) \text{ such that for every } x, y \in \T^2,\\ & \ \ \ \ \frac{|f(x) - f(y) | }{|x-y|^s} \le g(x) + g(y) \},\end{split} \end{equation} and its seminorm $[\cdot ]_{F^s_\mathfrak{p}}$ is defined by \begin{equation} \notag [f]_{F^s_\mathfrak{p}} := \inf_{g \in L^\mathfrak{p} (\T^2)} \{ \| g \|_{L^\mathfrak{p} (\T^2)} | |f(x) - f(y) |\le (|x-y|)^s (g(x) + g(y) ), \text{ for every } x, y \in \T^2 \}. \end{equation} The norm on $F^s_{\mathfrak{p}} (\T^2)$ is naturally defined by $\| \cdot \|_{L^\mathfrak{p} (\T^2 ) } + [\cdot ]_{F^s_\mathfrak{p}}$. Now we argue that a solution in the Yudovich class propagates Besov regularity. First, we use the following embeddings: for $s_3 > s_2 > s_1$, we have the continuous embeddings (the proof for the whole space, which is standard, translates easily to the periodic domain $\T^2$) \begin{equation} \label{Besovembedding} B_{\mathfrak{p}, \infty} ^{s_3} (\T^2) \subset B_{\mathfrak{p}, 1} ^{s_2} (\T^2) \subset W^{s_2, \mathfrak{p}} (\T^2) \subset F_{\mathfrak{p}} ^{s_1} (\T^2) \subset B_{\mathfrak{p}, \infty} ^{s_1} (\T^2). \end{equation} Therefore, since $\o_0 \in B^s_{2, \infty} (\T^2)$ for some $s>0$, we have $\o_0 \in F^{s_1}_{2} $ for some $s_1 \in (0, s)$, and so is each $\o_0 ^\beta$, with uniform bounds on the $F^{s_1}_2$ norm.
Then for any $\beta \ge 0$ (we introduce the convention that $X^0 = X$ and $\o^0 = \o$) we have \begin{align} &\frac{|\o^\beta (x,t) - \o^\beta (y, t) |}{(|x-y|)^{s_1 \alpha (T)} } = \frac{| \o_0 ^\beta (X^\beta (0; t, x) ) - \o_0 ^\beta (X^\beta (0; t, y) ) | } { (|x-y|)^{s_1 \alpha (T)} } \notag\\ &= \frac{| \o_0 ^\beta (X^\beta (0; t, x) ) - \o_0 ^\beta (X^\beta (0; t, y) ) | } {|X^\beta (0; t, x) - X^\beta (0; t, y) | ^{s_1} } \frac{(|X^\beta (0; t, x)- X^\beta (0; t, y) | )^{s_1} }{ (|x-y|)^{s_1 \alpha (T)} } \notag \\ & \le \left (g(X^\beta (0; t, x)) + g(X^\beta (0; t, y) ) \right ) C(T),\notag \end{align} for any $g \in L^2 (\T^2)$ satisfying \eqref{TriebelLizorkin}. Therefore, $C(T)\, g \circ X^\beta (0; t , \cdot )$, which belongs to $L^2 (\T^2)$ by the compressibility \eqref{compression}, satisfies the defining condition in \eqref{TriebelLizorkin}, and thus $\o^\beta (t) \in F^{s_1 \alpha(T)}_2$ with \begin{equation} \| \o^\beta (t) \|_{F^{s_1 \alpha(T) } _2 } \le C(T) \| \o_0 \|_{F^{s_1}_2}.\notag \end{equation} Therefore, using \eqref{Besovembedding}, we have \begin{equation} \notag \| \o^\beta (t) \|_{B^{s_1 \alpha (T) } _{2, \infty } (\T^2) } \le C \| \o^\beta (t) \|_{F^{s_1 \alpha(T) } _2 (\T^2) } \le C(T) \| \o_0 \|_{F^{s_1}_2 (\T^2) } \le C(T) \| \o_0 \|_{B^s_{2, \infty} (\T^2)}. \end{equation} Now we use the interpolation inequality: \begin{equation} \notag \| \o^\beta (t) - \o(t) \|_{L^2 (\T^2) } \le \| \o^\beta (t) - \o(t) \|_{H^{-1} (\T^2) } ^{\frac{s_0}{1+s_0 } } \| \o^\beta (t) - \o(t) \|_{B^{s_1 \alpha(T) } _{2, \infty } }^{\frac{1}{1+s_0 } }, \end{equation} for some $s_0 < s_1 \alpha(T)$.
Therefore, we have \begin{equation} \notag \| \o^\beta (t) - \o(t) \|_{L^2 (\T^2) } \le \| u^\beta (t) - u(t) \|_{L^2 (\T^2 ) } ^{\frac{s_0}{1+s_0 } } C(T, \|\o_0\|_{B^s_{2, \infty} (\T^2 ) } ) \le C(T, \|\o_0\|_{B^s_{2, \infty} (\T^2 ) } ) \beta^{C e^{-C(\|\o_0 \|_{L^\infty (\T^2 ) }) T } }, \end{equation} by noting that the rate function for the Yudovich case is algebraic: that is, $\mathrm{Rate}(\beta) = \beta^{2 e^{-C(\|\o_0 \|_{L^\infty (\T^2 ) }) T }}$. \end{proof} \begin{remark} One may naturally ask if one can obtain a faster rate than \eqref{vorticityratelocYud}, analogous to \eqref{vorticityrateYud}. It seems that the argument we presented for \eqref{vorticityrateYud} does not extend to the localized Yudovich class. First, if $m>0$, the modulus of continuity given by \begin{equation} \label{modcontinuityX} \mu (|x-y|, T) = \exp \left ( - e_{m+1} \left ( \log_{m+2} \frac{e}{|x-y|} - (BT + C_0 ) \right ) \right ) \end{equation} cannot be bounded by any H\"{o}lder modulus $C|x-y|^\alpha$ with $\alpha \in (0,1)$. Thus, we cannot continue the argument from there. To see this, suppose that there exist $\alpha>0$ and $C>0$ such that \begin{equation} \notag \mu(r, T) \le C r^\alpha, \end{equation} for all sufficiently small $r>0$. This amounts to saying that \begin{equation} \notag \log_{m+2} \frac{e}{r} - \log_{m+2} \frac{1}{Cr^\alpha} \ge BT + C_0. \end{equation} Taking exponentials, we have \begin{equation} \notag \frac{\log_{m+1} \frac{e}{r} }{\log_{m+1} \frac{1}{Cr^\alpha} } \ge e^{BT + C_0}.
\end{equation} Since both the numerator and the denominator diverge as $r \rightarrow 0^+$, we may apply L'H\^{o}pital's rule: \begin{align} &\frac{d}{dr} \log_{m+1} \frac{e}{r} = \frac{1}{\prod_{k=1} ^m \log_k \frac{e}{r} } \left ( -\frac{1}{r} \right ),\notag \\ &\frac{d}{dr} \log_{m+1} \frac{1}{Cr^\alpha} = \frac{1}{\prod_{k=1} ^m \log_k \frac{1}{Cr^\alpha} } \left ( -\frac{\alpha}{r} \right ).\notag \end{align} Inductively, we have \begin{align} &\lim_{r\rightarrow 0^+} \frac{\log_{0+1} \frac{e}{r} }{\log_{0+1} \frac{1}{Cr^\alpha} } = \frac{1}{\alpha},\notag \\ &\lim_{r\rightarrow 0^+} \frac{\log_{1+1} \frac{e}{r} }{\log_{1+1} \frac{1}{Cr^\alpha} } = \lim_{r\rightarrow 0^+} \frac{\log_1 \frac{1}{Cr^\alpha} }{\log_1 \frac{e}{r} } \frac{1}{\alpha} = \frac{\alpha}{\alpha} = 1,\notag \\ &\cdots \notag \\ &\lim_{r\rightarrow 0^+} \frac{\log_{m+1} \frac{e}{r} }{\log_{m+1} \frac{1}{Cr^\alpha} } = \lim_{r\rightarrow 0^+} \prod_{k=1} ^m \frac{\log_k \frac{1}{Cr^\alpha} } {\log_k \frac{e}{r} } \frac{1}{\alpha} = 1.\notag \end{align} Therefore, for $m \ge 1$ the ratio tends to $1 < e^{BT + C_0}$, a contradiction; only for $m=0$ is the limit $\frac{1}{\alpha}$, which can exceed $e^{BT+C_0}$. In other words, for any $\alpha>0$ and $C>0$, there exists small $r>0$ such that $\mu(r, T) > Cr^\alpha$. Thus, control of the vorticity in the Triebel-Lizorkin space $F_\mathfrak{p} ^{s(t)}$ is not available. There are other methods for propagation of regularity (in a losing manner), but it seems that they also suffer from a similar issue: flows generated by the localized Yudovich class do not propagate enough regularity.
The argument of \cite{BCD2011} does not extend to the localized Yudovich class either: when $\o_0$ is in the localized Yudovich class, the modulus of continuity for $u$ is weaker than log-Lipschitz. It is known that the norm defined by \begin{equation} \notag \|u\|_{LL'} = \| u\|_{L^\infty} + \sup_{j \ge 0} \frac{\| \nabla S_j u \|_{L^\infty} }{(j+1)}, \end{equation} where $S_j u = \sum_{k=-1} ^j \Delta_k u$, is equivalent to the norm of the log-Lipschitz space (Proposition 2.111 of \cite{BCD2011}, which is stated for the whole space but can be adapted to the periodic domain easily). However, if $\o_0 \in Y_{\mathrm{ul} }^\Theta$, then $u$ has the modulus of continuity $\varphi_\Theta$, and the norm for $C_b ^{0, \varphi_\Theta} (\T^2)$ is equivalent to \begin{equation} \notag \|u\|_{L^\infty} + \sup_{j \ge 0} \frac{\| \nabla S_j u \|_{L^\infty} }{\prod_{k=1} ^{m+1} \log_k (e 2^j ) } , \end{equation} which is a weaker norm than $\|u\|_{LL'}$. However, the critical growth rate for the denominator in applying the linear loss-of-regularity result (for example, Theorem 3.28 of \cite{BCD2011}) is $j+1$, which is the rate of the log-Lipschitz norm. Therefore, we cannot rely on the argument of \cite{BCD2011} to conclude that $\o(t)$ has certain Besov regularity. Finally, the borderline Besov space $B_\Gamma$, introduced by Vishik in \cite{V1999}, propagates a certain regularity (in the sense that $B_\Gamma$ restricts the growth rate of the frequency components), but it is not clear how to use this to obtain a convergence rate for the vorticity. For simplicity, we focus on one particular form of growth function: let \begin{equation} \notag \Gamma (r) = (r+2) \frac{\log (r+3)}{\log 2}, \quad \Gamma_1 (r) = \frac{\log (r+3)}{\log 2} \end{equation} for $r \ge -1$ and $\Gamma(r) = \Gamma_1 (r) = 1$ for $r \le -1$.
We define the space $B_\Gamma$ by \begin{equation} \notag B_\Gamma =\left \{ f | \| f \|_{\Gamma} := \sup_{N \ge -1} \frac{\sum_{j=-1} ^N\| \Delta_j f \|_{L^\infty }}{\Gamma(N)} < \infty \right \} \end{equation} and we define $B_{\Gamma_1}$ in a similar manner. In \cite{V1999}, the following was proved: \begin{theorem}[\cite{V1999}] If $\o_0 \in L^{\mathfrak{p}_0} \cap L^{\mathfrak{p}_1} \cap B_{\Gamma_1}$, for $1 < \mathfrak{p}_0 < 2 < \mathfrak{p}_1 < \infty$, then for any $T>0$ there exists a unique weak solution $\o(t)$ of the Euler equation satisfying \begin{equation} \notag \| \o(t) \|_{\Gamma} \le \lambda(t), \end{equation} where $\lambda(t)$ depends only on the bounds on $\|\o_0 \|_{L^{\mathfrak{p}_0} \cap L^{\mathfrak{p}_1} \cap B_{\Gamma_1}}$. \end{theorem} Therefore, one can prove uniform boundedness of the vorticity in the $B_{\Gamma}$ space. However, it is not clear how one can interpolate between the $B_{\Gamma}$ space and the velocity space (where we have a rate of convergence) to obtain a rate for the $L^\mathfrak{p}$ norm of the vorticity. Indeed, it was recently shown that if the velocity field is worse than Lipschitz ($u \in W^{1,p}$ for $p<\infty$), then it is possible for smooth data to lose all Sobolev regularity instantaneously under transport by $u$ \cite{ACM2019}. Instead, only a logarithm of a derivative can be preserved (see, e.g., \cite{BN2018}), and this loss of regularity prohibits faster convergence.
\end{remark} \section{Proof of the Main Theorems} \begin{lemma} We have \begin{equation} \label{est:dec:F-M} \begin{split} & \left\| \frac{ F^\e(t) - M_{1, \e u(t) , 1}}{\e \sqrt{M_{1,0,1}}} \right\|_{L^p_xL^2_v} \\ & \lesssim e^{ \frac{ \e^2}{4} \| u^\beta \|^2_{ \infty }} \Big\{ \|u^\beta (t)- u (t)\|_{L^p _x} e^{\e^2 \|u-u^\beta\|^2_{\infty}} + \kappa^{ \min \{1, \frac{p+2}{2p}\}} \sqrt{\mathcal{E} (t)} + \e \kappa V(\beta) \Big\}.\end{split} \end{equation} \begin{equation} \label{est:dec:DF-M} \begin{split} & \left\| \frac{ \nabla_x (F^\e - M_{1, \e u(t) , 1})}{\e (1+ |v|) \sqrt{M_{1,0,1}}} \right\|_{L^p_xL^2_v} \\ & \lesssim \big\{ \|\nabla_x u^\beta -\nabla_x u\|_{L^p_x} + \e \|\nabla_x u \|_{L^p_x} + \e \|\nabla_x u^\beta \|_{L^p_x} \big\} e^{\e^2 \|u-u^\beta\|^2_{\infty}} e^{\e^2 \| u^\beta\|^2_\infty}\\ &+ e^{ \frac{ \e^2 \| u^\beta \|^2_{L^\infty (\T^2)}}{4}} \{\kappa^{ \min \{\frac{1}{p}, \frac{1}{2}\}} \sqrt{\mathcal{E}(t)}+ \e \kappa V(\beta) \} . \end{split} \end{equation} \end{lemma} \begin{proof} \hide Recall the notation of a Maxwellian associated to the Lagrangian solutions $u^\beta$ of \eqref{vorticity_eqtn}-\eqref{vorticity_beta}: \begin{equation} \label{mu_beta} \mu =\mu (t,x,v) = M_{1, \e u^\beta (t,x), 1} (v). \end{equation} For simplicity, we will use the following notation for the global Maxwellian: \begin{equation} \label{mu_0} \mu_0 = \mu_0 (v)= M_{1,0,1} (v). \end{equation} \textit{Step 1. Proof of \eqref{est:dec:F-M}. } We decompose \begin{equation} \label{dec:F-M} \begin{split} &\left\| \frac{ F^\e - M_{1, \e u , 1}}{\e \sqrt{M_{1,0 ,1}}} \right\|_{L^p_x L^2_v } \\ & \leq \left\| \frac{ M_{1, \e u^\beta , 1} - M_{1, \e u , 1} }{\e \sqrt{M_{1,0 ,1}}} \right\|_{L^p_x L^2_v }+ \left\| \sqrt{ \frac{M_{1, \e u^\beta, 1}}{ M_{1,0,1}} } \right\|_{L^\infty_{x,v} } \left\| \frac{ F^\e - M_{1, \e u^\beta , 1}}{\e \sqrt{M_{1,\e u^\beta ,1}}} \right\|_{L^p_x L^2_v } \\ & = \eqref{dec:F-M}_1 + \eqref{dec:F-M}_2 \eqref{dec:F-M}_3.
\end{split} \end{equation} The bound of $\eqref{dec:F-M}_1$ raises the need for consideration of $\sqrt{M_{1, A , 1}/ M_{1,0,1}}$ for $A \in \mathbb{R}^3$: \begin{equation} \label{M/M_0} \sqrt{M_{1, A , 1}/ M_{1,0,1}} \lesssim e^{\frac{-|v-A|^2 + |v|^2}{4}} \leq e^{\frac{|A|^2}{4}} \end{equation} Using \eqref{M/M_0} and the Taylor expansion, we derive that, \begin{equation} \label{taylor} \begin{split} &\frac{| M_{1, \e u^\beta ,1} -M_{1, \e u , 1} |}{ \e \sqrt{M_{1,0,1}}}\\ &= \frac{1}{\e}\Big| \int^\e_0 \big( (v-\e u) + a (u- u^\beta ) \big) \cdot (u^\beta - u) \frac{M_{1, \e u - a (u- u^\beta),1}}{\sqrt{M_{1,0,1}}} \mathrm{d} a \Big|\\ &\leq |u^\beta - u| \frac{1}{\e} \int^\e_0 |(v-\e u) + a (u- u^\beta ) | | M_{1, \e u - a (u- u^\beta),1} |^{\frac{1}{2}} e^{\frac{ |\e u -a (u-u^\beta)|^2 }{4 }} \mathrm{d} a \\ &\lesssim |u^\beta - u| e^{\e^2 |u-u^\beta|^2} e^{\e^2 | u^\beta|^2} \frac{1}{\e} \int^\e_0 | M_{1, \e u - a (u- u^\beta),1 } (v)|^{\frac{1 }{4}} \mathrm{d} a , \end{split} \end{equation} where we have used $ |(v-\e u) + a (u- u^\beta ) | |M_{1, \e u - a (u- u^\beta),1}|^{\frac{1 }{2}} \lesssim |M_{1, \e u - a (u- u^\beta),1}|^{\frac{1 }{4}} $ and $|\e u -a (u-u^\beta)| = |(\e -a) u - (\e -a ) u^\beta + \e u^\beta| \leq |\e -a | |u- u^\beta| + \e |u^\beta| \leq \e \{|u- u^\beta| + |u^\beta|\}$. Now taking an $L^p_x L^2_v$-norm to \eqref{taylor}, we conclude that \begin{equation} \label{est:dec:F-M_1} \begin{split} \eqref{dec:F-M}_1 &\lesssim \|u^\beta - u\|_{L^p_x} e^{\e^2 \|u-u^\beta\|^2_{\infty} +\e^2 \| u^\beta\|^2_\infty} \frac{1}{\e} \sup_{x \in \T^2}\Big( \int^\e_0 \| M_{1, \e u - a (u- u^\beta),1 } \|_{L^2 (\mathbb{R}^3)}^{\frac{1 }{4}} \mathrm{d} a\Big)\\ &\lesssim \|u^\beta - u\|_{L^p _x} e^{\e^2 \|u-u^\beta\|^2_{\infty}} e^{\e^2 \| u^\beta\|^2_\infty} \sup_{x \in \T^2} \| M_{1, 0, 1} \|_{L^2(\mathbb{R}^3)}^{1/4}\\ &\lesssim \|u^\beta - u\|_{L^p _x} e^{\e^2 \|u-u^\beta\|^2_{\infty}} e^{\e^2 \| u^\beta\|^2_\infty} . 
\end{split}\end{equation} From \eqref{M/M_0}, clearly we have \begin{equation} \label{est:dec:F-M_2} \eqref{dec:F-M}_2 \lesssim e^{ \frac{ \e^2 \| u^\beta \|^2_{L^\infty (\T^2)}}{4}}. \end{equation} Using the expansion \eqref{F_e}, we can bound $\eqref{dec:F-M}_3$: \begin{equation} \label{est:dec:F-M_3} \eqref{dec:F-M}_3 \lesssim \| f_R \|_{L^p _x L^2_v } + \e \kappa V(\beta) \lesssim \| \nabla_x f_R \|_{L^2_{x, v }}^{\frac{p-2}{p}} \| f_R \|_{L^2_{x, v }} ^{\frac{2}{p}}+ \e \kappa V(\beta) \end{equation} \unhide We only prove \eqref{est:dec:DF-M}, as the proof of \eqref{est:dec:F-M} is similar and simpler. We decompose \begin{equation} \label{dec:F-M} \begin{split} &\left\| \frac{ \nabla_x (F^\e - M_{1, \e u , 1})}{\e (1+ |v|) \sqrt{M_{1,0 ,1}}} \right\|_{L^p_x L^2_v } \\ & \leq \left\| \frac{ \nabla_x ( M_{1, \e u^\beta , 1} - M_{1, \e u , 1}) }{\e (1+ |v|) \sqrt{M_{1,0 ,1}}} \right\|_{L^p_x L^2_v }+ \left\| \sqrt{ \frac{M_{1, \e u^\beta, 1} ^{1+o(1) }}{ M_{1,0,1}} } \right\|_{L^\infty_{x,v} } \left\| \frac{ \nabla_x (F^\e - M_{1, \e u^\beta , 1})}{\e (1+ |v|) \sqrt{M_{1,\e u^\beta ,1}^{1+o(1) } }} \right\|_{L^p_x L^2_v } \\ & = \eqref{dec:F-M}_1 + \eqref{dec:F-M}_2 \eqref{dec:F-M}_3. 
\end{split} \end{equation} The bound of $\eqref{dec:F-M}_1$ raises the need for consideration of $\sqrt{M_{1, A , 1}^{1+o(1) } / M_{1,0,1}}$ for $A \in \mathbb{R}^3$: \begin{equation} \label{M/M_0} \sqrt{M_{1, A , 1}^{1+o(1) } / M_{1,0,1}} \lesssim e^{\frac{-(1+o(1) )|v-A|^2 + |v|^2}{4}} \leq e^{\frac{|A|^2}{4}} . \end{equation} Using \eqref{M/M_0} and the Taylor expansion, we derive that \begin{equation} \label{taylor} \begin{split} &\frac{| \nabla_x (M_{1, \e u^\beta ,1} -M_{1, \e u , 1} )|}{ \e \sqrt{M_{1,0,1}}}\\ &= \frac{1}{\e}\Big| \int^\e_0 \nabla_x \Big( \big( (v-\e u) + a (u- u^\beta ) \big) \cdot (u^\beta - u) \frac{M_{1, \e u - a (u- u^\beta),1}}{\sqrt{M_{1,0,1}}}\Big) \mathrm{d} a \Big|\\ &\lesssim \{|\nabla_x u^\beta - \nabla_x u| + \e|\nabla_x u| + \e |\nabla_x u^\beta|\} e^{\e^2 |u-u^\beta|^2} e^{\e^2 | u^\beta|^2} \frac{1}{\e} \int^\e_0 | M_{1, \e u - a (u- u^\beta),1 } (v)|^{\frac{1 }{4}} \mathrm{d} a , \end{split} \end{equation} where we have used $ |(v-\e u) + a (u- u^\beta ) | |M_{1, \e u - a (u- u^\beta),1}|^{\frac{1 }{2} - o(1)/2 } \lesssim |M_{1, \e u - a (u- u^\beta),1}|^{\frac{1 }{4}} $ and $|\e u -a (u-u^\beta)| = |(\e -a) u - (\e -a ) u^\beta + \e u^\beta| \leq |\e -a | |u- u^\beta| + \e |u^\beta| \leq \e \{|u- u^\beta| + |u^\beta|\}$. Now taking the $L^p_x L^2_v$-norm of \eqref{taylor}, we conclude that \begin{equation} \label{est:dec:F-M_1} \begin{split} \eqref{dec:F-M}_1 &\lesssim \big\{ \|\nabla_x u^\beta -\nabla_x u\|_{L^p_x} + \e \|\nabla_x u \|_{L^p_x} + \e \|\nabla_x u^\beta \|_{L^p_x} \big\} e^{\e^2 \|u-u^\beta\|^2_{\infty}} e^{\e^2 \| u^\beta\|^2_\infty} . \end{split}\end{equation} From \eqref{M/M_0}, clearly we have \begin{equation} \label{est:dec:F-M_2} \eqref{dec:F-M}_2 \lesssim e^{ \frac{ \e^2 \| u^\beta \|^2_{L^\infty (\T^2)}}{4}}.
\end{equation} Using the expansion \eqref{F_e}, we can bound $\eqref{dec:F-M}_3$: \begin{equation} \label{est:dec:F-M_3} \begin{split} \eqref{dec:F-M}_3& \lesssim \| \nabla_x f_R \|_{L^p_x L^2_v}+ \e \| u^\beta \|_{\infty}\| f_R \|_{L^p_x L^2_v}+ \e \kappa V(\beta) \\ & \lesssim \| \nabla_x^2 f_R \|_{L^2_{x, v }}^{\frac{p-2}{p}} \| \nabla_x f_R \|_{L^2_{x, v }} ^{\frac{2}{p}}+ \e \| u^\beta \|_{\infty} \| \nabla_x f_R \|_{L^2_{x, v }}^{\frac{p-2}{p}} \| f_R \|_{L^2_{x, v }} ^{\frac{2}{p}}+ \e \kappa V(\beta)\\ & \lesssim \kappa^{ \min \{\frac{1}{p}, \frac{1}{2}\}} \sqrt{\mathcal{E}(t)}+ \e \kappa V(\beta). \end{split} \end{equation} We finish the proof by applying \eqref{est:dec:F-M_1}-\eqref{est:dec:F-M_3} to \eqref{dec:F-M}. \end{proof} Next, we claim the following. \begin{lemma} We have \begin{equation} \| \o_B^\e(t) - \o(t) \|_{L^\mathfrak{p} (\mathbb{T}^2)} \lesssim \| \o^\beta (t) - \o(t) \|_{L^\mathfrak{p} (\mathbb{T}^2)} + \kappa^{ \min \{\frac{1}{2}, \frac{1}{\mathfrak{p}}\}} \sqrt{\mathcal{E} (t)} + \e \kappa V(\beta). \end{equation} \end{lemma} \begin{proof}Recall $F^\e$ in \eqref{F_e}. Note that \begin{align} & \o^\e_B(t,x)- \o (t,x) = \nabla^\perp \cdot u^\e_B(t,x) - \nabla^\perp \cdot u (t,x) \notag\\ & = \frac{1}{\e} \int_{\mathbb{R}^3} \munderbar{v} \cdot \nabla^\perp (F^\e(t,x,v) - M_{1,\e u ,1} (v)) \mathrm{d} v\notag \\ & = \frac{1}{\e} \int_{\mathbb{R}^3} \munderbar{v} \cdot \nabla^\perp ( M_{1, \e u^\beta,1} (v) - M_{1,\e u ,1} (v)) \mathrm{d} v \ \ \ (=\o^\beta - \o) \label{diff:o1} \\ & \ \ + \int_{\mathbb{R}^3} \nabla^\perp f_R(t,x,v) \cdot \munderbar{v} \sqrt{\mu} \mathrm{d} v \label{diff:o3}\\ & \ \ + \nabla^\perp \cdot \int_{\mathbb{R}^3} \{ \e^2 p^\beta \mu - \e^2 \kappa (\nabla_x u^\beta) : \mathfrak{A} \sqrt{\mu} + \e \kappa \tilde{u}^\beta \cdot (v-\e u^\beta) \mu + \e^2 \kappa \tilde{p}^\beta \mu \} \mathrm{d} v .
\label{diff:o2} \end{align} Clearly \begin{equation} \notag \| \eqref{diff:o1}\|_{L^\mathfrak{p} (\mathbb{T}^2)} =\| \o^\beta(t) - \o(t)\|_{L^\mathfrak{p} (\mathbb{T}^2)} . \end{equation} From Theorem \ref{theo:remainder}, we conclude that \begin{equation} \notag \| \eqref{diff:o3} \|_{L^\mathfrak{p}(\mathbb{T}^2)} \lesssim \begin{cases} \| \nabla_x f_R (t) \|_{L^2 (\mathbb{T}^2 \times \mathbb{R}^3)} \lesssim \sqrt{\kappa} \sqrt{\mathcal{E} (t)} & \text{for } \mathfrak{p} \in [1,2],\\ \| \nabla_x^2 f_R(t) \|_{L^2 (\mathbb{T}^2 \times \mathbb{R}^3) }^{\frac{\mathfrak{p} -2}{\mathfrak{p}}} \| \nabla_x f_R(t) \|_{L^2 (\mathbb{T}^2 \times \mathbb{R}^3) }^{\frac{2}{\mathfrak{p}}} \lesssim \kappa^{\frac{1}{\mathfrak{p}}} \sqrt{\mathcal{E} (t)} & \text{for } \mathfrak{p} \in (2, \infty), \end{cases} \end{equation} where we have used (anisotropic) Gagliardo-Nirenberg interpolation for the second case, whose proof is analogous to Lemma \ref{anint}. Using Theorem \ref{Vbound}, we get that $ \| \eqref{diff:o2} \|_{L^\mathfrak{p} (\T^2)} \lesssim \e \kappa V(\beta). $\end{proof} Equipped with Proposition \ref{thm:Dptu}, Proposition \ref{theo:convergence}, and Proposition \ref{theo:remainder}, we are ready to prove the main theorem of this paper. \begin{theorem}\label{main_theorem1} Choose an arbitrary $T \in (0, \infty)$. Suppose $(u_0, \o_0) \in L^2(\T^2) \times L^\mathfrak{p} (\T^2)$ for $\mathfrak{p} \in [1, \infty)$ and let $(u,\o)$ be a Lagrangian solution of \eqref{vorticity}-\eqref{BS}-\eqref{vorticity_initial}. Assume the initial data $F_0$ to \eqref{eqtn_F} satisfies the conditions in Theorem \ref{theo:remainder}. Then there exists a family of Boltzmann solutions $F^\e(t,x,v)$ to \eqref{eqtn_F} in $[0,T]$ such that \begin{equation} \sup_{t \in [0,T]} \left\| \frac{ F^\e(t) - M_{1, \e u(t) , 1}}{\e \sqrt{M_{1,0,1}}} \right\|_{L^2 (\mathbb{T}^2 \times \mathbb{R}^3)} \rightarrow 0.
\end{equation} Moreover, the Boltzmann vorticity converges to the Lagrangian solution $\o$: \begin{equation} \label{conv:vorticity_B} \sup_{0 \leq t \leq T} \|\o^\e_B(t,\cdot)- \o (t,\cdot)\|_{L^\mathfrak{p}(\mathbb{T}^2)} \rightarrow 0. \end{equation} \end{theorem} \begin{theorem}\label{main_theorem2} Choose an arbitrary $T \in (0, \infty)$. Suppose $\o_0 \in Y_{\mathrm{ul}}^{\Theta} (\mathbb{T}^2)$ for some $\Theta$ in \eqref{Thetap} with $m \in \mathbb{Z}_{\ge 0}$, and let $(u, \o)$ be the unique weak solution of \eqref{vorticity}-\eqref{BS}-\eqref{vorticity_initial}. Assume the initial data $F_0$ to \eqref{eqtn_F} satisfies the conditions in Theorem \ref{theo:remainder}. Then there exists a family of Boltzmann solutions $F^\e (t,x,v)$ to \eqref{eqtn_F} in $[0, T]$ such that \begin{equation} \sup_{t \in [0,T]} \left\| \frac{ F^\e(t) - M_{1, \e u(t) , 1}}{\e \sqrt{M_{1,0,1}}} \right\|_{L^2 (\mathbb{T}^2 \times \mathbb{R}^3)} \rightarrow 0. \end{equation} Moreover, the Boltzmann velocity and vorticity converge to the solution $(u, \o)$ with explicit rates as in \eqref{Rateftn} and \eqref{Rateftnvort}: \begin{equation} \label{conv:vorticity_B_rate} \begin{split} \sup_{0 \leq t \leq T} \| u^\e_B (t, \cdot) - u(t, \cdot ) \|_{L^2 (\mathbb{T}^2 ) } &\lesssim \mathrm{Rate} (\beta), \\ \sup_{0 \leq t \leq T} \| \o^\e_B (t, \cdot) - \o(t, \cdot ) \|_{L^\mathfrak{p} (\mathbb{T}^2 ) } &\lesssim \mathrm{Rate}_\o (\beta).
\end{split} \end{equation} Furthermore, if $\o_0 \in Y_{\mathrm{ul}}^{\Theta} (\mathbb{T}^2) \cap B^s_{2, \infty} (\mathbb{T}^2)$ for some $s>0$, the Boltzmann vorticity converges to the solution $\o$ with a rate that is uniform in $\o_0$, as in \eqref{vorticityratelocYud}, \eqref{vorticityrateYud}: \begin{equation} \label{conv:vorticity_B_uniform} \begin{split} \sup_{0 \leq t \leq T} \| \o^\e_B (t, \cdot) - \o(t, \cdot ) \|_{L^\mathfrak{p} (\mathbb{T}^2 ) } &\lesssim \mathrm{Rate}_{\o,s,loc-Y} (\beta), \quad m>0 \text{ (localized Yudovich)},\\ \sup_{0 \leq t \leq T} \| \o^\e_B (t, \cdot) - \o(t, \cdot ) \|_{L^\mathfrak{p} (\mathbb{T}^2 ) } &\lesssim \mathrm{Rate}_{\o,s,Y} (\beta), \quad m=0 \text{ (Yudovich)}. \end{split} \end{equation} \end{theorem}
\section*{Acknowledgement} TM is supported by the MEXT Q-LEAP, JST FOREST, JST PRESTO No.JPMJPR176A, and the Grant-in-Aid for Scientific Research (B) No.JP19H04066 of JSPS. \section{PKE with Publicly Verifiable Certified Deletion and Classical Communication}\label{sec:WE_plus_OSS} In this section, we define the notion of public key encryption with publicly verifiable certified deletion with classical communication, and construct it from witness encryption and one-shot signatures. In \cref{sec:pk_pv_cd_def_classical_com}, we present the definition of public key encryption with publicly verifiable certified deletion with classical communication. In \cref{sec:pk_pv_cd_cc_construction}, we give a construction and show its security. \subsection{Definition of PKE with Publicly Verifiable Certified Deletion with Classical Communication} \label{sec:pk_pv_cd_def_classical_com} In this section, we consider PKE with publicly verifiable certified deletion with classical communication. It is a publicly verifiable version of the notion given in \cref{def:pk_cert_del_classical_com}. The construction of \cref{sec:PKE_cd_cc_construction} is not publicly verifiable, because the verification key $\keys{vk}$ (which is the trapdoor $\{\keys{td}_i\}_i$) must be kept secret from the adversary. On the other hand, in \cref{sec:pk_pv_cd_cc_construction} we construct a publicly verifiable one, which means that security holds even if the verification key $\keys{vk}$ (which is the CRS and the public key $\mathsf{oss}.\keys{pk}$ of the one-shot signature in our construction) is given to the adversary. The definition (syntax) is the same as that of the non-publicly-verifiable one (\cref{def:pk_cert_del_classical_com}). Furthermore, its correctness requirements, i.e., decryption correctness and verification correctness, are also the same as those of the non-publicly-verifiable one (\cref{def:pk_cd_correctness_classical_com}).
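To build intuition for how witness encryption and one-shot signatures interact in the construction of \cref{sec:pk_pv_cd_cc_construction}, the following is a purely illustrative Python sketch. All names here (\texttt{oss\_keygen}, \texttt{we\_enc}, etc.) are ours, the NCE layer and the interactivity of $\algo{Enc}$ are omitted, and the classical stand-ins are insecure: in particular, nothing below enforces the one-shot property of the signing key, which is exactly what the quantum primitive provides.

```python
import hashlib
import os

def H(x):
    return hashlib.sha256(x).digest()

# Toy "one-shot signature": pk = H(sk); a signature on bit b is (sk, b).
# INSECURE stand-in: a real OSS secret key is a quantum state that can
# sign only a single message; a classical mock cannot enforce that.
def oss_keygen():
    sk = os.urandom(16)
    return H(sk), sk                  # (pk, sk)

def oss_sign(sk, bit):
    return (sk, bit)

def oss_vrfy(pk, sigma, bit):
    sk, b = sigma
    return H(sk) == pk and b == bit

# Toy "witness encryption": the ciphertext carries the predicate and the
# plaintext; decryption succeeds only on a satisfying witness.  A real WE
# ciphertext hides the plaintext; this mock does not.
def we_enc(pred, m):
    return (pred, m)

def we_dec(ct, w):
    pred, m = ct
    return m if pred(w) else None

# The Enc/Dec/Del/Vrfy flow of the WE+OSS idea (NCE layer omitted).
def enc(m):
    oss_pk, oss_sk = oss_keygen()                    # receiver's move
    stmt = lambda sigma: oss_vrfy(oss_pk, sigma, 0)  # the statement x
    vk = oss_pk                                      # public verification key
    return vk, (we_enc(stmt, m), oss_sk)

def dec(ct):
    we_ct, oss_sk = ct
    return we_dec(we_ct, oss_sign(oss_sk, 0))        # signing 0 decrypts

def delete(ct):
    _, oss_sk = ct
    return oss_sign(oss_sk, 1)                       # signing 1 is the certificate

def vrfy(vk, cert):
    return oss_vrfy(vk, cert, 1)                     # publicly checkable
```

In the real scheme, producing the certificate (a signature on $1$) destroys the quantum signing key and hence the ability to sign $0$, i.e., to decrypt; the sketch only mirrors the message flow, not this guarantee.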
The certified deletion security is the same as that of the non-publicly-verifiable one (\cref{def:pk_certified_del_classical_com}), except that the challenger also sends $\keys{vk}$ (which is $(\mathsf{crs},\mathsf{oss}.\keys{pk})$ in our construction) to the adversary. \subsection{Construction} \label{sec:pk_pv_cd_cc_construction} We construct a PKE scheme with publicly verifiable certified deletion with classical communication $\Sigma_{\mathsf{pvcccd}}=(\algo{KeyGen},\algo{Enc},\algo{Dec},\algo{Del},\algo{Vrfy})$ from a public key NCE scheme $\Sigma_\mathsf{nce}=\mathsf{NCE}.(\algo{KeyGen},\algo{Enc},\algo{Dec},\algo{Fake},\algo{Reveal})$, a witness encryption scheme $\Sigma_\mathsf{we}=\mathsf{WE}.(\algo{Enc},\algo{Dec})$, and a one-shot signature scheme $\Sigma_\mathsf{oss}=\mathsf{OSS}.(\algo{Setup},\algo{KeyGen},\algo{Sign},\algo{Vrfy})$. \begin{description} \item[$\algo{KeyGen}(1^\lambda)$:] $ $ \begin{itemize} \item Generate $(\mathsf{nce}.\keys{pk},\mathsf{nce}.\keys{sk},\mathsf{nce}.\mathsf{aux})\leftarrow \mathsf{NCE}.\algo{KeyGen}(1^\lambda)$. \item Output $(\keys{pk},\keys{sk})=(\mathsf{nce}.\keys{pk},\mathsf{nce}.\keys{sk})$. \end{itemize} \item[$\algo{Enc}\langle \mathcal{S}(\keys{pk},m),\mathcal{R}\rangle$:] This is an interactive protocol between a sender $\mathcal{S}$ with input $(\keys{pk},m)$ and a receiver $\mathcal{R}$ without input that works as follows. \begin{itemize} \item $\mathcal{S}$ parses $\keys{pk}=\mathsf{nce}.\keys{pk}$. \item $\mathcal{S}$ generates $\mathsf{crs}\leftarrow\mathsf{OSS}.\algo{Setup}(1^\lambda)$, and sends $\mathsf{crs}$ to $\mathcal{R}$. \item $\mathcal{R}$ generates $(\mathsf{oss}.\keys{pk},\mathsf{oss}.\keys{sk})\leftarrow \mathsf{OSS}.\algo{KeyGen}(\mathsf{crs})$, sends $\mathsf{oss}.\keys{pk}$ to $\mathcal{S}$, and keeps $\mathsf{oss}.\keys{sk}$. \item $\mathcal{S}$ computes $\mathsf{we}.\keys{CT}\leftarrow \mathsf{WE}.\algo{Enc}(1^\lambda,x,m)$ with the statement $x$ that ``$\exists \sigma$ s.t.
$\mathsf{OSS}.\algo{Vrfy}(\mathsf{crs},\mathsf{oss}.\keys{pk},\sigma,0)=\top$". \item $\mathcal{S}$ computes $\mathsf{nce}.\keys{CT}\leftarrow\mathsf{NCE}.\algo{Enc}(\mathsf{nce}.\keys{pk},\mathsf{we}.\keys{CT})$, and sends $\mathsf{nce}.\keys{CT}$ to $\mathcal{R}$. \item $\mathcal{S}$ outputs $\keys{vk}=(\mathsf{crs},\mathsf{oss}.\keys{pk})$. $\mathcal{R}$ outputs $\keys{CT}=(\mathsf{nce}.\keys{CT},\mathsf{oss}.\keys{sk})$. \end{itemize} \item[$\algo{Dec}(\keys{sk},\keys{CT})$:] $ $ \begin{itemize} \item Parse $\keys{sk}=\mathsf{nce}.\keys{sk}$ and $\keys{CT}=(\mathsf{nce}.\keys{CT} ,\mathsf{oss}.\keys{sk})$. \item Compute $\sigma\leftarrow \mathsf{OSS}.\algo{Sign}(\mathsf{oss}.\keys{sk},0)$. \item Compute $m'\leftarrow \mathsf{NCE}.\algo{Dec}(\mathsf{nce}.\keys{sk},\mathsf{nce}.\keys{CT})$. \item Compute $m\leftarrow \mathsf{WE}.\algo{Dec}(m',\sigma)$. \item Output $m$. \end{itemize} \item[$\algo{Del}(\keys{CT})$:] $ $ \begin{itemize} \item Parse $\keys{CT}=(\mathsf{nce}.\keys{CT},\mathsf{oss}.\keys{sk})$. \item Compute $\sigma\leftarrow \mathsf{OSS}.\algo{Sign}(\mathsf{oss}.\keys{sk},1)$. \item Output $\keys{cert}=\sigma$. \end{itemize} \item[$\algo{Vrfy}(\keys{vk},\keys{cert})$:] $ $ \begin{itemize} \item Parse $\keys{vk}=(\mathsf{crs},\mathsf{oss}.\keys{pk})$ and $\keys{cert}=\sigma$. \item Compute $b\leftarrow \mathsf{OSS}.\algo{Vrfy}(\mathsf{crs},\mathsf{oss}.\keys{pk},\sigma,1)$. \item Output $b$. \end{itemize} \end{description} \paragraph{Correctness.} The decryption and verification correctness easily follow from the correctness of $\Sigma_\mathsf{nce}$, $\Sigma_\mathsf{we}$, and $\Sigma_\mathsf{oss}$. \paragraph{Security.} We show the following theorem. \begin{theorem}\label{theorem:WEOSS} If $\Sigma_\mathsf{nce}$ is RNC secure, $\Sigma_\mathsf{we}$ has extractable security, and $\Sigma_\mathsf{oss}$ is secure, then $\Sigma_{\mathsf{pvcccd}}$ is IND-CPA-CD secure.
\end{theorem} \begin{proof} For clarity, we describe how $\expb{\Sigma_{\mathsf{pvcccd}},\mathcal{A}}{pvccpk}{cert}{del}(\lambda, b)$ (which we call $\hybi{0}(b)$ for simplicity) works below. \begin{enumerate} \item The challenger generates $(\mathsf{nce}.\keys{pk},\mathsf{nce}.\keys{sk},\mathsf{nce}.\mathsf{aux})\leftarrow\mathsf{NCE}.\algo{KeyGen}(1^\lambda)$, and sends $\mathsf{nce}.\keys{pk}$ to $\mathcal{A}$. \item $\mathcal{A}$ sends $(m_0,m_1)\in \mathcal{M}^2$ to the challenger. \item The challenger generates $\mathsf{crs}\leftarrow\mathsf{OSS}.\algo{Setup}(1^\lambda)$, and sends $\mathsf{crs}$ to $\mathcal{A}$. \item $\mathcal{A}$ sends $\mathsf{oss}.\keys{pk}$ to the challenger. \item The challenger computes $\mathsf{we}.\keys{CT}\leftarrow \mathsf{WE}.\algo{Enc}(1^\lambda,x,m_b)$, where $x$ is the statement that ``$\exists\sigma$ s.t. $\mathsf{OSS}.\algo{Vrfy}(\mathsf{crs},\mathsf{oss}.\keys{pk},\sigma,0)=\top$''. The challenger computes $\mathsf{nce}.\keys{CT}\leftarrow \mathsf{NCE}.\algo{Enc}(\mathsf{nce}.\keys{pk},\mathsf{we}.\keys{CT})$. The challenger sends $\mathsf{nce}.\keys{CT}$ to $\mathcal{A}$. \item $\mathcal{A}$ sends $\keys{cert}=\sigma$ to the challenger. \item The challenger computes $v\leftarrow\mathsf{OSS}.\algo{Vrfy}(\mathsf{crs},\mathsf{oss}.\keys{pk},\sigma,1)$. If $v=\bot$, the challenger sends $\bot$ to $\mathcal{A}$. If $v=\top$, the challenger sends $\mathsf{nce}.\keys{sk}$ to $\mathcal{A}$. \item $\mathcal{A}$ outputs $b'$. The output of the experiment is $b'$. \end{enumerate} We define the following hybrid.
\begin{description} \item[$\mathsf{Hyb}_1(b)$:] This is identical to $\hybi{0}(b)$ except that $\mathsf{nce}.\keys{CT}$ and $\mathsf{nce}.\keys{sk}$ are generated as \begin{align} \mathsf{nce}.\keys{CT}&\leftarrow \mathsf{NCE}.\algo{Fake}(\mathsf{nce}.\keys{pk},\mathsf{nce}.\keys{sk},\mathsf{nce}.\mathsf{aux}),\\ \mathsf{nce}.\keys{sk}&\leftarrow \mathsf{NCE}.\algo{Reveal}(\mathsf{nce}.\keys{pk},\mathsf{nce}.\keys{sk},\mathsf{nce}.\mathsf{aux},\mathsf{nce}.\keys{CT},\mathsf{we}.\keys{CT}). \end{align} \end{description} \begin{proposition}\label{prop:WEOSS_hyb_1} If $\Sigma_{\mathsf{nce}}$ is RNC secure, $\abs{\Pr[\hybi{0}(b)=1] - \Pr[\mathsf{Hyb}_1(b)=1]} \le {\mathsf{negl}}(\lambda)$. \end{proposition} \begin{proof} To show this, we assume that $\abs{\Pr[\hybi{0}(b)=1] - \Pr[\mathsf{Hyb}_1(b)=1]}$ is non-negligible, and construct an adversary $\mathcal{B}_\mathsf{nce}$ that breaks the RNC security of $\Sigma_{\mathsf{nce}}$. Let $\mathcal{A}$ be the distinguisher for $\hybi{0}(b)$ and $\hybi{1}(b)$. First, $\mathcal{B}_\mathsf{nce}$ receives $\mathsf{nce}.\keys{pk}$ from the challenger of the RNC security game. $\mathcal{B}_\mathsf{nce}$ then sends $\mathsf{nce}.\keys{pk}$ to $\mathcal{A}$. $\mathcal{B}_\mathsf{nce}$ receives $(m_0,m_1)$ from $\mathcal{A}$. $\mathcal{B}_\mathsf{nce}$ generates $\mathsf{crs}\leftarrow \mathsf{OSS}.\algo{Setup}(1^\lambda)$ and sends $\mathsf{crs}$ to $\mathcal{A}$. $\mathcal{B}_\mathsf{nce}$ receives $\mathsf{oss}.\keys{pk}$ from $\mathcal{A}$. $\mathcal{B}_\mathsf{nce}$ generates $\mathsf{we}.\keys{CT}\leftarrow\mathsf{WE}.\algo{Enc}(1^\lambda,x,m_b)$. $\mathcal{B}_\mathsf{nce}$ sends $\mathsf{we}.\keys{CT}$ to the challenger of the RNC security game, and receives $(\mathsf{nce}.\keys{CT}^*,\mathsf{nce}.\keys{sk}^*)$ from the challenger of the RNC security game. 
Here, \begin{align} (\mathsf{nce}.\keys{CT}^*,\mathsf{nce}.\keys{sk}^*)= (\mathsf{NCE}.\algo{Enc} (\mathsf{nce}.\keys{pk},\mathsf{we}.\keys{CT}),\mathsf{nce}.\keys{sk}) \end{align} if the challenger's bit is 0, and \begin{align} (\mathsf{nce}.\keys{CT}^*,\mathsf{nce}.\keys{sk}^*)= (\mathsf{NCE}.\algo{Fake} (\mathsf{nce}.\keys{pk},\mathsf{nce}.\keys{sk},\mathsf{nce}.\mathsf{aux}),\mathsf{NCE}.\algo{Reveal}(\mathsf{nce}.\keys{pk},\mathsf{nce}.\keys{sk},\mathsf{nce}.\mathsf{aux},\mathsf{nce}.\keys{CT}^*,\mathsf{we}.\keys{CT})) \end{align} if the challenger's bit is 1. $\mathcal{B}_\mathsf{nce}$ sends $\mathsf{nce}.\keys{CT}^*$ to $\mathcal{A}$. $\mathcal{B}_\mathsf{nce}$ receives $\keys{cert}=\sigma$ from $\mathcal{A}$. If $\mathsf{OSS}.\algo{Vrfy}(\mathsf{crs},\mathsf{oss}.\keys{pk},\sigma,1)=\bot$, $\mathcal{B}_\mathsf{nce}$ sends $\bot$ to $\mathcal{A}$, and otherwise sends $\mathsf{nce}.\keys{sk}^*$ to $\mathcal{A}$. By assumption, $\mathcal{A}$ can distinguish $\hybi{0}(b)$ and $\hybi{1}(b)$, and therefore $\mathcal{B}_\mathsf{nce}$ can guess the bit of the challenger of the RNC security game with non-negligible advantage, which breaks the RNC security of $\Sigma_\mathsf{nce}$. \end{proof} \begin{proposition}\label{prop:WEOSS_hyb_2} If $\Sigma_{\mathsf{oss}}$ is secure and $\Sigma_{\mathsf{we}}$ has extractable security, $\abs{\Pr[\mathsf{Hyb}_1(0)=1] - \Pr[\mathsf{Hyb}_1(1)=1]} \le {\mathsf{negl}}(\lambda)$. \end{proposition} \begin{proof} Let $\mathsf{good}$ (resp. $\mathsf{bad}$) be the event that the adversary in $\hybi{1}(b)$ sends a valid (resp. invalid) $\keys{cert}$.
Then, \begin{align} |\Pr[\hybi{1}(0)=1]-\Pr[\hybi{1}(1)=1]| &= |\Pr[\hybi{1}(0)=1\wedge\mathsf{good}]+\Pr[\hybi{1}(0)=1\wedge\mathsf{bad}]\\ &-\Pr[\hybi{1}(1)=1\wedge\mathsf{good}]-\Pr[\hybi{1}(1)=1\wedge\mathsf{bad}]|\\ &\le |\Pr[\hybi{1}(0)=1\wedge\mathsf{good}] -\Pr[\hybi{1}(1)=1\wedge\mathsf{good}]|\\ &+|\Pr[\hybi{1}(0)=1\wedge\mathsf{bad}] -\Pr[\hybi{1}(1)=1\wedge\mathsf{bad}]|\\ &= |\Pr[\hybi{1}(0)=1\wedge\mathsf{good}] -\Pr[\hybi{1}(1)=1\wedge\mathsf{good}]|. \end{align} Assume that $|\Pr[\hybi{1}(0)=1]-\Pr[\hybi{1}(1)=1]|$ is non-negligible, which means that there exists an infinite subset $I\subseteq{\mathbb N}$ and a polynomial $p$ such that $|\Pr[\hybi{1}(0)=1\wedge\mathsf{good}]-\Pr[\hybi{1}(1)=1\wedge\mathsf{good}]|\ge1/p(\lambda)$ for all $\lambda\in I$. Because \begin{align} |\Pr[\hybi{1}(0)=1\wedge\mathsf{good}]-\Pr[\hybi{1}(1)=1\wedge\mathsf{good}]|= |\Pr[\hybi{1}(0)=1|\mathsf{good}]-\Pr[\hybi{1}(1)=1|\mathsf{good}]|\Pr[\mathsf{good}], \end{align} it follows that $|\Pr[\hybi{1}(0)=1|\mathsf{good}]-\Pr[\hybi{1}(1)=1|\mathsf{good}]|\ge1/p(\lambda)$ and $\Pr[\mathsf{good}]\ge1/p(\lambda)$ for all $\lambda\in I$. Let $\mathsf{aux}$ be the adversary's internal state after outputting $\keys{cert}$ conditioned on $\mathsf{good}$. Then, due to the extractability of the witness encryption, there is a QPT extractor $\mathcal{E}$ and a polynomial $q$ such that the probability that $\mathcal{E}(1^\lambda,x,\mathsf{aux})$ outputs a valid $\sigma'$ such that $\mathsf{OSS}.\algo{Vrfy}(\mathsf{crs},\mathsf{oss}.\keys{pk},\sigma',0)=\top$ is at least $1/q(\lambda)$ for all $\lambda\in I$. We can construct an adversary $\mathcal{B}_{\mathsf{oss}}$ that breaks the security of the one-shot signature scheme as follows. $\mathcal{B}_{\mathsf{oss}}$ receives $\mathsf{oss}.\mathsf{crs}\leftarrow \mathsf{OSS}.\algo{Setup}(1^\lambda)$.
It generates $(\mathsf{nce}.\keys{pk},\mathsf{nce}.\keys{sk},\mathsf{nce}.\mathsf{aux})\leftarrow \mathsf{NCE}.\algo{KeyGen}(1^\lambda)$, and sends $\mathsf{nce}.\keys{pk}$ to the adversary of $\hybi{1}(b)$. $\mathcal{B}_{\mathsf{oss}}$ receives $m_0,m_1$ from the adversary of $\hybi{1}(b)$, and returns $\mathsf{oss}.\mathsf{crs}$. $\mathcal{B}_{\mathsf{oss}}$ receives $\mathsf{oss}.\keys{pk}$ from the adversary of $\hybi{1}(b)$. $\mathcal{B}_{\mathsf{oss}}$ sends $\mathsf{nce}.\keys{CT}^*\leftarrow\mathsf{NCE}.\algo{Fake}(\mathsf{nce}.\keys{pk},\mathsf{nce}.\keys{sk},\mathsf{nce}.\mathsf{aux})$ to the adversary of $\hybi{1}(b)$, and receives $\keys{cert}$. $\mathcal{B}_{\mathsf{oss}}$ simulates $\mathcal{E}$ and gets its output $\sigma'$. $\mathcal{B}_{\mathsf{oss}}$ outputs $(\mathsf{oss}.\keys{pk},1,\keys{cert},0,\sigma')$. Because the two signed messages $0$ and $1$ are distinct and \begin{align} &\Pr[\algo{Vrfy}(\mathsf{oss}.\mathsf{crs},\mathsf{oss}.\keys{pk},\sigma',0)=\top\wedge \algo{Vrfy}(\mathsf{oss}.\mathsf{crs},\mathsf{oss}.\keys{pk},\keys{cert},1)=\top]\\ &=\Pr[\mathsf{good}]\Pr[\mathcal{E}(1^\lambda,x,\mathsf{aux})\mbox{ outputs valid } \sigma'] \ge\frac{1}{p(\lambda)}\frac{1}{q(\lambda)} \end{align} for all $\lambda\in I$, $\mathcal{B}_{\mathsf{oss}}$ breaks the security of the one-shot signature scheme. \end{proof} By \cref{prop:WEOSS_hyb_1} and \cref{prop:WEOSS_hyb_2}, we obtain \cref{theorem:WEOSS}. \end{proof} \section{Attribute-Based Encryption with Certified Deletion}\label{sec:abe_cd} In this section, we define the notion of attribute-based encryption (ABE) with certified deletion, which is a natural extension of ABE and PKE with certified deletion, and present how to achieve ABE with certified deletion from OT-CD secure SKE, IO, and OWFs. In~\cref{sec:abe_cd_def}, we present the definition of ABE with certified deletion and non-committing ABE (NCABE), which is a crucial tool to achieve ABE with certified deletion. In~\cref{sec:NCABE_from_IO}, we present how to achieve NCABE from IO and standard ABE. In~\cref{sec:const_abe_cd_from_sk}, we present how to achieve ABE with certified deletion from NCABE and OT-CD secure SKE with certified deletion. \subsection{Definition of ABE with Certified Deletion}\label{sec:abe_cd_def} The definition of ABE with certified deletion is a natural combination of ABE and PKE with certified deletion. \begin{definition}[Attribute-Based Encryption with Certified Deletion (Syntax)]\label{def:abe_cert_del} An ABE scheme with certified deletion is a tuple of QPT algorithms $(\algo{Setup},\algo{KeyGen},\algo{Enc},\algo{Dec},\algo{Del},\algo{Vrfy})$ with plaintext space $\mathcal{M}$, attribute space $\mathcal{X}$, and policy space $\mathcal{P}$.
\begin{description} \item[$\algo{Setup}(1^\lambda)\rightarrow (\keys{pk},\keys{msk})$:] The setup algorithm takes as input the security parameter $1^\lambda$ and outputs a public key $\keys{pk}$ and a master secret key $\keys{msk}$. \item[$\algo{KeyGen} (\keys{msk},P) \rightarrow \keys{sk}_P$:] The key generation algorithm takes as input $\keys{msk}$ and a policy $P\in \mathcal{P}$, and outputs a secret key $\keys{sk}_P$. \item[$\algo{Enc}(\keys{pk},X,m) \rightarrow (\keys{vk},\keys{CT}_X)$:] The encryption algorithm takes as input $\keys{pk}$, an attribute $X\in\mathcal{X}$, and a plaintext $m \in \mathcal{M}$, and outputs a verification key $\keys{vk}$ and a ciphertext $\keys{CT}_X$. \item[$\algo{Dec}(\keys{sk}_P,\keys{CT}_X) \rightarrow m^\prime \mbox{ or } \bot$:] The decryption algorithm takes as input $\keys{sk}_P$ and $\keys{CT}_X$, and outputs a plaintext $m^\prime \in \mathcal{M}$ or $\bot$. \item[$\algo{Del}(\keys{CT}_X) \rightarrow \keys{cert}$:] The deletion algorithm takes as input $\keys{CT}_X$ and outputs a certificate $\keys{cert}$. \item[$\algo{Vrfy}(\keys{vk},\keys{cert})\rightarrow \top \mbox{ or }\bot$:] The verification algorithm takes as input $\keys{vk}$ and $\keys{cert}$, and outputs $\top$ or $\bot$. \end{description} \end{definition} \begin{definition}[Correctness for ABE with Certified Deletion]\label{def:abe_cd_correctness} There are two types of correctness. One is decryption correctness and the other is verification correctness. \begin{description} \item[Decryption correctness:] For any $\lambda\in \mathbb{N}$, $m\in\mathcal{M}$, $P\in\mathcal{P}$, and $X\in\mathcal{X}$ such that $P(X)=\top$, \begin{align} \Pr\left[ \algo{Dec}(\keys{sk}_P,\keys{CT}_X)\ne m \ \middle | \begin{array}{ll} (\keys{pk},\keys{msk})\leftarrow \algo{Setup}(1^\lambda)\\ \keys{sk}_P \leftarrow \algo{KeyGen}(\keys{msk},P)\\ (\keys{vk},\keys{CT}_X) \leftarrow \algo{Enc}(\keys{pk},X,m) \end{array} \right] \le{\mathsf{negl}}(\lambda).
\end{align} \item[Verification correctness:] For any $\lambda\in \mathbb{N}$, $P\in\mathcal{P}$, $X\in\mathcal{X}$, $m\in\mathcal{M}$, \begin{align} \Pr\left[ \algo{Vrfy}(\keys{vk},\keys{cert})=\bot \ \middle | \begin{array}{ll} (\keys{pk},\keys{msk})\leftarrow \algo{Setup}(1^\lambda)\\ (\keys{vk},\keys{CT}_X) \leftarrow \algo{Enc}(\keys{pk},X,m)\\ \keys{cert} \leftarrow \algo{Del}(\keys{CT}_X) \end{array} \right] \le{\mathsf{negl}}(\lambda). \end{align} \end{description} \end{definition} \begin{definition}[ABE Certified Deletion Security]\label{def:abe_certified_del} Let $\Sigma=(\algo{Setup}, \algo{KeyGen}, \algo{Enc}, \algo{Dec}, \algo{Del}, \algo{Vrfy})$ be an ABE with certified deletion. We consider the following security experiment $\expc{\Sigma,\mathcal{A}}{ind}{cpa}{cd}(\lambda,b)$. \begin{enumerate} \item The challenger computes $(\keys{pk},\keys{msk}) \leftarrow \algo{Setup}(1^\lambda)$ and sends $\keys{pk}$ to $\mathcal{A}$. \item $\mathcal{A}$ sends a key query $P \in\mathcal{P}$ to the challenger and it returns $\keys{sk}_{P} \leftarrow \algo{KeyGen}(\keys{msk},P)$ to $\mathcal{A}$. This process can be repeated polynomially many times. \item $\mathcal{A}$ sends $X^\ast \in \mathcal{X}$ and $(m_0,m_1) \in \mathcal{M}^2$ to the challenger where $X^\ast$ must satisfy $P(X^\ast)=\bot$ for all key queries $P$ sent so far. \item The challenger computes $(\keys{vk}_b,\keys{CT}_b) \leftarrow \algo{Enc}(\keys{pk},X^\ast,m_b)$ and sends $\keys{CT}_b$ to $\mathcal{A}$. \item Again, $\mathcal{A}$ can send key queries $P$ that must satisfy $P(X^\ast)=\bot$. \item $\mathcal{A}$ computes $\keys{cert} \leftarrow \algo{Del}(\keys{CT}_b)$ and sends $\keys{cert}$ to the challenger. \item The challenger computes $\algo{Vrfy}(\keys{vk}_b,\keys{cert})$. If the output is $\bot$, the challenger sends $\bot$ to $\mathcal{A}$. If the output is $\top$, the challenger sends $\keys{msk}$ to $\mathcal{A}$. 
\item Again, $\mathcal{A}$ can send key queries $P$ that must satisfy $P(X^\ast)=\bot$.\footnote{Such queries are useless if $\mathcal{A}$ obtains $\keys{msk}$ in the previous item, but may be useful if the challenger returns $\bot$ there.} \item $\mathcal{A}$ outputs its guess $b'\in \{0,1\}$. \end{enumerate} We say that $\Sigma$ is IND-CPA-CD secure if for any QPT adversary $\mathcal{A}$, it holds that \begin{align} \advd{\Sigma,\mathcal{A}}{ind}{cpa}{cd}(\lambda)\coloneqq \abs{\Pr[ \expc{\Sigma,\mathcal{A}}{ind}{cpa}{cd}(\lambda, 0)=1] - \Pr[ \expc{\Sigma,\mathcal{A}}{ind}{cpa}{cd}(\lambda, 1)=1] }\le {\mathsf{negl}}(\lambda). \end{align} \end{definition} Next, we define receiver non-committing ABE, which is a receiver non-committing encryption version of ABE. \begin{definition}[Receiver Non-Committing Attribute-Based Encryption (Syntax)]\label{def:ncabe_syntax} A receiver non-committing (key policy) attribute-based encryption (NCABE) scheme is a tuple of PPT algorithms $(\algo{Setup},\algo{KeyGen},\algo{Enc},\algo{Dec},\algo{FakeSetup},\algo{FakeSK},\allowbreak \algo{FakeCT},\algo{Reveal})$ with plaintext space $\mathcal{M}$, attribute space $\mathcal{X}$, and policy space $\mathcal{P}$. \begin{description} \item [$\algo{Setup}(1^\lambda)\rightarrow (\keys{pk},\keys{msk})$:] The setup algorithm takes as input the security parameter $1^\lambda$ and outputs a public key $\keys{pk}$ and a master secret key $\keys{msk}$. \item [$\algo{KeyGen}(\keys{msk}, P)\rightarrow \keys{sk}_P$:] The key generation algorithm takes as input $\keys{msk}$ and a policy $P\in \mathcal{P}$, and outputs a secret key $\keys{sk}_P$. \item [$\algo{Enc}(\keys{pk},X,m)\rightarrow \keys{CT}$:] The encryption algorithm takes as input $\keys{pk}$, an attribute $X\in \mathcal{X}$, and a plaintext $m\in\mathcal{M}$, and outputs a ciphertext $\keys{CT}$.
\item [$\algo{Dec}(\keys{sk},\keys{CT})\rightarrow m^\prime \mbox{ or }\bot$:] The decryption algorithm takes as input $\keys{sk}$ and $\keys{CT}$ and outputs a plaintext $m^\prime\in\mathcal{M}$ or $\bot$. \item [$\algo{FakeSetup}(1^\lambda)\rightarrow (\keys{pk},\mathsf{aux})$:] The fake setup algorithm takes as input the security parameter $1^\lambda$, and outputs a public key $\keys{pk}$ and an auxiliary information $\mathsf{aux}$. \item [$\algo{FakeCT}(\keys{pk},\mathsf{aux},X)\rightarrow \widetilde{\ct}$:] The fake encryption algorithm takes $\keys{pk}$, $\mathsf{aux}$, and $X\in\mathcal{X}$, and outputs a fake ciphertext $\widetilde{\ct}$. \item [$\algo{FakeSK}(\keys{pk},\mathsf{aux},P)\rightarrow \widetilde{\keys{sk}}$:] The fake key generation algorithm takes $\keys{pk}$, $\mathsf{aux}$, and $P\in \mathcal{P}$, and outputs a fake secret key $\widetilde{\keys{sk}}$. \item [$\algo{Reveal}(\keys{pk},\mathsf{aux},\widetilde{\ct},m)\rightarrow \widetilde{\keys{msk}}$:] The reveal algorithm takes $\keys{pk},\mathsf{aux}$, a fake ciphertext $\widetilde{\ct}$, and a plaintext $m\in\mathcal{M}$, and outputs a fake master secret key $\widetilde{\keys{msk}}$. \end{description} \end{definition} Correctness is the same as that of ABE. \begin{definition}[RNC Security for ABE]\label{def:ncabe_security} An NCABE scheme is RNC secure if it satisfies the following. Let $\Sigma=(\algo{Setup},\algo{KeyGen}, \algo{Enc}, \algo{Dec}, \algo{FakeSetup},\algo{FakeCT},\algo{FakeSK},\algo{Reveal})$ be an NCABE scheme. We consider the following security experiment $\expa{\Sigma,\mathcal{A}}{rnc}{cpa}(\lambda,b)$. \begin{enumerate} \item The challenger does the following. \begin{itemize} \item If $b=0$, the challenger computes $(\keys{pk},\keys{msk}) \leftarrow \algo{Setup}(1^\lambda)$ and sends $\keys{pk}$ to $\mathcal{A}$.
\item If $b=1$, the challenger computes $(\keys{pk},\mathsf{aux}) \leftarrow \algo{FakeSetup}(1^\lambda)$ and sends $\keys{pk}$ to $\mathcal{A}$. \end{itemize} \item $\mathcal{A}$ sends a query $P \in \mathcal{P}$ to the challenger. \begin{itemize} \item If $b=0$, the challenger returns a secret key $\keys{sk}_{P} \leftarrow \algo{KeyGen}(\keys{msk},P)$. \item If $b=1$, the challenger returns a fake secret key $\widetilde{\keys{sk}}_{P} \leftarrow \algo{FakeSK}(\keys{pk},\mathsf{aux},P)$. \end{itemize} $\mathcal{A}$ can send polynomially many key queries. \item At some point, $\mathcal{A}$ sends the target attribute $X^\ast\in \mathcal{X}$ and message $m \in \mathcal{M}$ to the challenger where $X^\ast$ must satisfy $P(X^\ast)=\bot$ for all key queries $P$ sent so far. The challenger does the following. \begin{itemize} \item If $b =0$, the challenger generates $\keys{CT}^\ast \leftarrow \algo{Enc}(\keys{pk},X^\ast,m)$ and returns $(\keys{CT}^*,\keys{msk})$ to $\mathcal{A}$. \item If $b=1$, the challenger generates $\widetilde{\ct}^\ast \leftarrow \algo{FakeCT}(\keys{pk},\mathsf{aux},X^\ast)$ and $\widetilde{\keys{msk}} \leftarrow \algo{Reveal}(\keys{pk},\mathsf{aux},\widetilde{\ct}^\ast,m)$ and returns $(\widetilde{\ct}^\ast,\widetilde{\keys{msk}})$ to $\mathcal{A}$. \end{itemize} \item Again, $\mathcal{A}$ can send key queries $P$ that must satisfy $P(X^\ast)=\bot$. \item $\mathcal{A}$ outputs $b'\in \{0,1\}$. \end{enumerate} We say that $\Sigma$ is RNC secure if for any QPT adversary, it holds that \begin{align} \advb{\Sigma,\mathcal{A}}{rnc}{cpa}(\lambda)\coloneqq \abs{\Pr[ \expa{\Sigma,\mathcal{A}}{rnc}{cpa}(\lambda, 0)=1] - \Pr[ \expa{\Sigma,\mathcal{A}}{rnc}{cpa}(\lambda, 1)=1] }\leq {\mathsf{negl}}(\lambda).
\end{align} \end{definition} \subsection{Non-Committing ABE from IO}\label{sec:NCABE_from_IO} In this section, we construct an NCABE scheme with plaintext space $\{0,1\}^{\ell_m}$, attribute space $\mathcal{X}$, where $\ell_m$ is some polynomial, and policy space $\mathcal{P}$ from IO for $\compclass{P}/\compclass{poly}$ and an ABE scheme with plaintext space $\{0,1\}$, attribute space $\mathcal{X}$, and policy space $\mathcal{P}$. \paragraph{Our NCABE scheme.} Let $\Sigma_{\mathsf{abe}}=\ABE.(\algo{Setup},\algo{KeyGen},\algo{Enc},\algo{Dec})$ be an IND-CPA secure ABE scheme on the message space $\{0,1\}$ and $\Pi_{\mathsf{nizk}}$ be a NIZK proof for the $\compclass{NP}$ language $\mathcal{L}$ corresponding to the following relation $\mathcal{R}$. \begin{align} \mathcal{R} &\coloneqq \setbk{((\keys{pk},\setbk{\keys{CT}_{i,0},\keys{CT}_{i,1}}_{i\in [\ell_m]},X),\setbk{(m[i],r_{i,0},r_{i,1})}_{i\in[\ell_m]}) \mid \forall i\forall b\ \keys{CT}_{i,b}=\ABE.\algo{Enc}(\mathsf{abe}.\keys{pk}_{i,b},X,m[i];r_{i,b})} \end{align} where $\keys{pk}= \setbk{\mathsf{abe}.\keys{pk}_{i,0},\mathsf{abe}.\keys{pk}_{i,1}}_{i\in [\ell_m]}$. We construct an NCABE scheme $\Sigma_{\mathsf{ncabe}}=(\algo{Setup},\algo{KeyGen},\algo{Enc},\algo{Dec},\algo{FakeSetup},\algo{FakeCT},\algo{FakeSK},\algo{Reveal})$ as follows. \begin{description} \item[$\algo{Setup}(1^\lambda):$]$ $ \begin{enumerate} \item Generate $(\mathsf{abe}.\keys{pk}_{i,b},\mathsf{abe}.\keys{msk}_{i,b}) \leftarrow \ABE.\algo{Setup}(1^\lambda)$ for every $i\in[\ell_m]$ and $b\in\{0,1\}$. \item Choose $z \leftarrow \{0,1\}^{\ell_m}$. \item Compute $\mathsf{crs} \leftarrow \algo{NIZK}.\algo{Setup}(1^\lambda)$. \item Output $\keys{pk} \coloneqq (\setbk{\mathsf{abe}.\keys{pk}_{i,b}}_{i\in[\ell_m],b\in\{0,1\}},\mathsf{crs})$ and $\keys{msk}\coloneqq (\keys{pk},\setbk{\mathsf{abe}.\keys{msk}_{i,z[i]}}_{i\in[\ell_m]},z)$.
\end{enumerate} \item[$\algo{KeyGen}(\keys{msk},P)$:]$ $ \begin{enumerate} \item Parse $\keys{msk} = (\keys{pk},\setbk{\mathsf{abe}.\keys{msk}_{i,z[i]}}_{i\in[\ell_m]},z)$. \item Generate $\keys{sk}_{i}\leftarrow \ABE.\algo{KeyGen}(\mathsf{abe}.\keys{msk}_{i,z[i]},P)$ for every $i\in[\ell_m]$. \item Generate and output $\keys{sk}_P \coloneqq i\mathcal{O}(\mathsf{D}[\mathsf{crs},\setbk{\keys{sk}_{i}}_{i\in[\ell_m]},z])$, where circuit $\mathsf{D}$ is described in~\cref{fig:LR_dec_circuit}. \end{enumerate} \item[$\algo{Enc}(\keys{pk},X,m):$]$ $ \begin{enumerate} \item Parse $\keys{pk} = (\setbk{\mathsf{abe}.\keys{pk}_{i,b}}_{i\in[\ell_m],b\in\{0,1\}},\mathsf{crs})$. \item Generate $\keys{CT}_{i,b}\leftarrow \ABE.\algo{Enc}(\mathsf{abe}.\keys{pk}_{i,b},X,m[i];r_{i,b})$ for every $i\in[\ell_m]$ and $b\in\{0,1\}$ where $r_{i,b}$ is uniformly chosen from the randomness space for $\ABE.\algo{Enc}$. \item Generate $\pi \leftarrow \algo{NIZK}.\algo{Prove}(\mathsf{crs},d,w)$ where $d = (\setbk{(\mathsf{abe}.\keys{pk}_{i,0},\mathsf{abe}.\keys{pk}_{i,1},\keys{CT}_{i,0},\keys{CT}_{i,1})}_{i\in[\ell_m]},X)$ and $w = (m,\setbk{r_{i,0},r_{i,1}}_{i\in[\ell_m]})$. \item Output $\keys{CT}_X \coloneqq (\setbk{\keys{CT}_{i,0},\keys{CT}_{i,1}}_{i\in[\ell_m]},\pi)$. \end{enumerate} \item[$\algo{Dec}(\keys{sk}_P,\keys{CT}_X):$]$ $ \begin{enumerate} \item Parse $\keys{sk}_P = \widetilde{\mathsf{D}}$. \item Compute and output $m\coloneqq \widetilde{\mathsf{D}}(\keys{CT}_X)$. \end{enumerate} \item[$\algo{FakeSetup}(1^\lambda):$]$ $ \begin{enumerate} \item Generate $(\mathsf{abe}.\keys{pk}_{i,b},\mathsf{abe}.\keys{msk}_{i,b}) \leftarrow \ABE.\algo{Setup}(1^\lambda)$ for every $i\in[\ell_m]$ and $b\in\{0,1\}$. \item Choose $z^\ast \leftarrow \{0,1\}^{\ell_m}$. \item Computes $(\widetilde{\mathsf{crs}},\keys{td}) \leftarrow \algo{Sim}_1(1^\lambda)$. 
\item Output $\keys{pk} \coloneqq (\setbk{\mathsf{abe}.\keys{pk}_{i,b}}_{i\in[\ell_m],b\in\{0,1\}},\widetilde{\mathsf{crs}})$ and $\mathsf{aux}\coloneqq (\keys{pk},\keys{td}, \setbk{\mathsf{abe}.\keys{msk}_{i,b}}_{i\in[\ell_m],b\in\{0,1\}},z^\ast)$. \end{enumerate} \item[$\algo{FakeSK}(\keys{pk},\mathsf{aux},P):$]$ $ \begin{enumerate} \item Parse $\mathsf{aux}=(\keys{pk},\keys{td}, \setbk{\mathsf{abe}.\keys{msk}_{i,b}}_{i\in[\ell_m],b\in\{0,1\}},z^\ast)$. \item Generate $\keys{sk}_i^{0} \leftarrow \ABE.\algo{KeyGen}(\mathsf{abe}.\keys{msk}_{i,0},P)$ for every $i \in [\ell_m]$ and set $\keys{sk}_P^0 \coloneqq \setbk{\keys{sk}_i^{0}}_{i\in[\ell_m]}$. \item Generate and output $\widetilde{\keys{sk}} \coloneqq i\mathcal{O}(\mathsf{D}_0[\widetilde{\mathsf{crs}},\keys{sk}_P^0])$, where circuit $\mathsf{D}_0$ is described in~\cref{fig:L_dec_circuit}. \end{enumerate} \item[$\algo{FakeCT}(\keys{pk},\mathsf{aux},X):$]$ $ \begin{enumerate} \item Parse $\keys{pk}= (\setbk{\mathsf{abe}.\keys{pk}_{i,b}}_{i\in[\ell_m],b\in\{0,1\}},\widetilde{\mathsf{crs}})$ and $\mathsf{aux} = (\keys{pk},\keys{td}, \setbk{\mathsf{abe}.\keys{msk}_{i,b}}_{i\in[\ell_m],b\in\{0,1\}},z^\ast)$. \item Compute $\keys{CT}^\ast_{i,z^\ast [i]}\leftarrow \ABE.\algo{Enc}(\mathsf{abe}.\keys{pk}_{i,z^\ast [i]},X,0)$ and $\keys{CT}^\ast_{i,1-z^\ast [i]}\leftarrow \ABE.\algo{Enc}(\mathsf{abe}.\keys{pk}_{i,1-z^\ast [i]},X,1)$ for every $i\in[\ell_m]$. \item Compute $\widetilde{\pi} \leftarrow \algo{Sim}_2(\widetilde{\mathsf{crs}},\keys{td},d^\ast)$ where $d^\ast = (\setbk{(\mathsf{abe}.\keys{pk}_{i,0},\mathsf{abe}.\keys{pk}_{i,1},\keys{CT}^\ast_{i,0},\keys{CT}^\ast_{i,1})}_{i\in[\ell_m]},X)$. \item Output $\widetilde{\ct}_X \coloneqq (\setbk{\keys{CT}^\ast_{i,0},\keys{CT}^\ast_{i,1}}_{i\in[\ell_m]},\widetilde{\pi})$.
\end{enumerate} \item[$\algo{Reveal}(\keys{pk},\mathsf{aux},\widetilde{\ct}_X,m):$]$ $ \begin{enumerate} \item Parse $\mathsf{aux} =(\keys{pk},\keys{td}, \setbk{\mathsf{abe}.\keys{msk}_{i,b}}_{i\in[\ell_m],b\in\{0,1\}},z^\ast)$. \item Outputs $\widetilde{\keys{msk}} \coloneqq (\keys{pk},\setbk{\mathsf{abe}.\keys{msk}_{i,z^*[i]\oplus m[i]}}_{i\in[\ell_m]},z^\ast \oplus m)$. \end{enumerate} \end{description} \protocol{Left-or-Right Decryption Circuit $\mathsf{D}$ } {The description of the left-or-right decryption circuit} {fig:LR_dec_circuit} { \begin{description} \item[Input:] A ciphertext $\keys{CT}_X $. \item[Hardwired value:] $\mathsf{crs}$, $z$, and $\setbk{\keys{sk}_i}_{i\in[\ell_m]}$. \end{description} \begin{enumerate} \item Parse $\keys{CT}_X = (\setbk{\keys{CT}_{i,0},\keys{CT}_{i,1}}_{i\in[\ell_m]},\pi)$ \item If $\algo{NIZK}.\algo{Vrfy}(\mathsf{crs},d,\pi)\ne \top$, output $\bot$. \item Compute $m[i] \leftarrow \ABE.\algo{Dec}(\keys{sk}_{i},\keys{CT}_{i,z[i]})$ for $i\in [\ell_m]$. \item Output $m \coloneqq m[1]\| \cdots \| m[\ell_m]$. \end{enumerate} } \protocol{Left Decryption Circuit $\mathsf{D}_0$ } {The description of the left decryption circuit} {fig:L_dec_circuit} { \begin{description} \item[Input:] A ciphertext $\keys{CT}_X $. \item[Hardwired value:] $\widetilde{\mathsf{crs}}$ and $\keys{sk}_P^{0}= \setbk{\keys{sk}_{i}^{0}}_{i\in[\ell_m]}$. \end{description} \begin{enumerate} \item Parse $\keys{CT}_X = (\setbk{\keys{CT}_{i,0},\keys{CT}_{i,1}}_{i\in[\ell_m]},\pi)$ \item If $\algo{NIZK}.\algo{Vrfy}(\widetilde{\mathsf{crs}},d,\pi)\ne \top$, output $\bot$. \item Compute $m[i] \leftarrow \ABE.\algo{Dec}(\keys{sk}_{i}^{0},\keys{CT}_{i,0})$ for $i\in [\ell_m]$. \item Output $m \coloneqq m[1]\| \cdots \| m[\ell_m]$. \end{enumerate} } \paragraph{Correctness.} Correctness of $\Sigma_{\mathsf{nce}}$ easily follows from correctness of $\Sigma_{\mathsf{abe}}$ and completeness of $\Pi_\mathsf{nizk}$. \paragraph{Security.} We prove the following theorem. 
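Before turning to the proof, the equivocation mechanism behind $\algo{FakeCT}$ and $\algo{Reveal}$ can be sanity-checked with a small script. The sketch below is a toy model (all names and the parameter $\ell_m = 8$ are hypothetical, and each ABE ciphertext is replaced by the plain bit it encrypts): it checks that a single fake ciphertext, whose slot $z^\ast[i]$ encrypts $0$ and whose slot $1-z^\ast[i]$ encrypts $1$, decrypts to any desired $m$ under the revealed key string $z = z^\ast \oplus m$.

```python
# Toy sanity check of the Reveal equivocation trick (illustrative only:
# the real ABE/NIZK components are replaced by trivial stubs).
import secrets

ELL = 8  # toy plaintext length, standing in for ell_m


def fake_ct(z_star):
    # Per position i, slot z*[i] "encrypts" 0 and slot 1 - z*[i] "encrypts" 1,
    # modeled as a pair (bit in slot 0, bit in slot 1).
    return [(0, 1) if z_star[i] == 0 else (1, 0) for i in range(ELL)]


def reveal(z_star, m):
    # The revealed "master secret key" is just the string z = z* XOR m.
    return [z_star[i] ^ m[i] for i in range(ELL)]


def dec(z, ct):
    # Honest decryption reads slot z[i] of the i-th ciphertext pair.
    return [ct[i][z[i]] for i in range(ELL)]


z_star = [secrets.randbelow(2) for _ in range(ELL)]
ct = fake_ct(z_star)
for _ in range(10):
    m = [secrets.randbelow(2) for _ in range(ELL)]
    # One fixed fake ciphertext opens consistently to every message m.
    assert dec(reveal(z_star, m), ct) == m
```

The check mirrors the case analysis in the proof: if $m[i]=0$ then $z[i]=z^\ast[i]$ and the selected slot encrypts $0$; if $m[i]=1$ then $z[i]=1-z^\ast[i]$ and the selected slot encrypts $1$.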
\begin{theorem}\label{thm:ncabe_from_abe_io} If $\Sigma_{\mathsf{abe}}$ is perfectly correct and IND-CPA secure, $i\mathcal{O}$ is a secure IO for $\compclass{P}/\compclass{poly}$, and $\Pi_{\mathsf{nizk}}$ is a NIZK proof system for $\compclass{NP}$, then $\Sigma_{\mathsf{nce}}$ is an RNC secure NCABE scheme. \end{theorem} \begin{proof} Let $\mathcal{A}$ be a QPT adversary. We define the following sequence of hybrid games. \begin{itemize} \item $\hybi{0}$: This is the same as $\expa{\Sigma_\mathsf{nce},\mathcal{A}}{rnc}{cpa}(\lambda,0)$. Let $X^\ast$ and $m$ be the target attribute and message, respectively, as in~\cref{def:ncabe_security}. \item $\hybi{1}$: This is the same as $\hybi{0}$ except that the challenger uses the circuit $\mathsf{D}_0[\widetilde{\mathsf{crs}},\keys{sk}_P^0]$ instead of $\mathsf{D}[\mathsf{crs},\setbk{\keys{sk}_i}_{i\in[\ell_m]},z]$ to generate secret keys for key queries. That is, it returns $\widetilde{\keys{sk}} = i\mathcal{O}(\mathsf{D}_0[\widetilde{\mathsf{crs}},\keys{sk}_P^0])$ instead of $\keys{sk} =i\mathcal{O} (\mathsf{D}[\mathsf{crs},\setbk{\keys{sk}_i}_{i\in[\ell_m]},z])$. This change is indistinguishable by IO security and the statistical soundness of $\Pi_{\mathsf{nizk}}$. Note that secret keys do not depend on $z$ in this game. \item $\hybi{2}$: This is the same as $\hybi{1}$ except that the challenger generates the common reference string and the proof of the NIZK by using the simulators. That is, it generates $(\widetilde{\mathsf{crs}},\keys{td})\leftarrow \algo{Sim}_1(1^\lambda)$ and $\widetilde{\pi} \leftarrow \algo{Sim}_2(\widetilde{\mathsf{crs}},\keys{td},d^\ast)$ where $d^\ast=(\setbk{\mathsf{abe}.\keys{pk}_{i,0},\mathsf{abe}.\keys{pk}_{i,1},\keys{CT}^\ast_{i,0},\keys{CT}^\ast_{i,1}}_{i\in[\ell_m]},X^\ast)$. This change is indistinguishable by the computational zero-knowledge property of $\Pi_{\mathsf{nizk}}$. \item $\hybi{3}$: This is the same as $\hybi{2}$ except that the challenger generates an inconsistent target ciphertext.
That is, it generates $\keys{CT}^\ast_{i,1-z[i]}\leftarrow \ABE.\algo{Enc}(\mathsf{abe}.\keys{pk}_{i,1-z[i]},X^\ast,1-m[i])$ and $\keys{CT}^\ast_{i,z[i]}\leftarrow \ABE.\algo{Enc}(\mathsf{abe}.\keys{pk}_{i,z[i]},X^\ast,m[i])$ instead of encrypting $m[i]$ under both keys for all $i$. Note that the NIZK proof in the target ciphertext is generated by the simulator in this game. \item $\hybi{4}$: This is the same as $\hybi{3}$ except that the challenger chooses $z^\ast\leftarrow \zo{\ell_m}$, computes $\keys{CT}_{i,z^\ast[i]}^\ast \leftarrow \ABE.\algo{Enc}(\mathsf{abe}.\keys{pk}_{i,z^\ast[i]},X^\ast,0)$ and $\keys{CT}_{i,1-z^\ast [i]}^\ast \leftarrow \ABE.\algo{Enc}(\mathsf{abe}.\keys{pk}_{i,1-z^\ast[i]},X^\ast,1)$, and sets $\widetilde{\keys{msk}} \coloneqq (\keys{pk},\allowbreak\setbk{\mathsf{abe}.\keys{msk}_{i,z^\ast[i] \oplus m[i]}}_{i\in[\ell_m]},z^\ast \oplus m)$ as a master secret key. \end{itemize} We prove~\cref{prop:ncabe_nizk_czk,prop:ncabe_hybrid_sk,prop:ncabe_abe_ind,prop:ncabe_conceptual}. \begin{proposition}\label{prop:ncabe_hybrid_sk} If $\Sigma_{\mathsf{abe}}$ is perfectly correct, $\Pi_{\mathsf{nizk}}$ is statistically sound, and $i\mathcal{O}$ is secure, then $\abs{\Pr[\hybi{0}=1] - \Pr[\hybi{1}=1]}\le {\mathsf{negl}}(\lambda)$. \end{proposition} \begin{proof} We define more hybrid games. Let $q$ be the total number of key queries. \begin{description} \item [$\hybij{0}{j}$:] This is the same as $\hybi{0}$ except that \begin{itemize} \item for $j < k \le q$, the challenger generates $\keys{sk} = i\mathcal{O}(\mathsf{D}[\mathsf{crs},\setbk{\keys{sk}_i}_{i\in[\ell_m]},z])$ for the $k$-th key query. \item for $1\le k \le j$, the challenger generates $\widetilde{\keys{sk}} =i\mathcal{O}(\mathsf{D}_0[\widetilde{\mathsf{crs}},\keys{sk}_P^0])$ for the $k$-th key query. \end{itemize} Clearly, $\hybij{0}{0} = \hybi{0}$ and $\hybij{0}{q}=\hybi{1}$.
\end{description} Let $\mathsf{Invalid}$ be the event that there exist $d^\dagger\notin \mathcal{L}$ and $\pi^\dagger$ such that $\algo{NIZK}.\algo{Vrfy}(\mathsf{crs},d^\dagger,\pi^\dagger)=\top$. By the statistical soundness of $\Pi_{\mathsf{nizk}}$, this happens with negligible probability. If $\mathsf{Invalid}$ does not occur, by the definitions of $\mathsf{D}$ and $\mathsf{D}_0$ and the perfect correctness of $\Sigma_{\mathsf{abe}}$, their functionalities are equivalent for all inputs. Therefore, the adversary's advantage in distinguishing the obfuscations of these two circuits is negligible by IO security. The difference between $\hybij{0}{j-1}$ and $\hybij{0}{j}$ is that the $j$-th key query answer is generated by $\mathsf{D}_0$ instead of $\mathsf{D}$. Therefore we have $\abs{\Pr[\hybij{0}{j-1}=1] - \Pr[\hybij{0}{j}=1]} \le {\mathsf{negl}}(\lambda)$. By a standard hybrid argument, we obtain~\cref{prop:ncabe_hybrid_sk}. \end{proof} \begin{proposition}\label{prop:ncabe_nizk_czk} If $\Pi_{\mathsf{nizk}}$ is computationally zero-knowledge, $\abs{\Pr[\hybi{1}=1] - \Pr[\hybi{2}=1]}\le {\mathsf{negl}}(\lambda)$. \end{proposition} \begin{proof} The only difference between these two games is how the common reference string and the proof of the NIZK are generated. Thus, there is a straightforward reduction to the zero-knowledge property of $\Pi_{\mathsf{nizk}}$. Specifically, we assume that $\abs{\Pr[\hybi{1}=1] - \Pr[\hybi{2}=1]}$ is non-negligible and construct an adversary $\mathcal{B}_\mathsf{nizk}$ that breaks the zero-knowledge property of $\Pi_\mathsf{nizk}$. Let $d = (\setbk{(\mathsf{abe}.\keys{pk}_{i,0},\mathsf{abe}.\keys{pk}_{i,1},\keys{CT}_{i,0},\keys{CT}_{i,1})}_{i\in[\ell_m]},X^\ast)$ and $w = (m,\setbk{r_{i,0},r_{i,1}}_{i\in[\ell_m]})$ be as in $\hybi{1}$ (or $\hybi{2}$, equivalently). $\mathcal{B}_\mathsf{nizk}$ is given $(\mathsf{crs}^\ast,\pi^\ast)$, which is generated either by the real setup and proving algorithms or by the simulators.
$\mathcal{B}_\mathsf{nizk}$ runs $\hybi{1}$ for $\mathcal{A}$ while embedding $(\mathsf{crs}^\ast,\pi^\ast)$ in the appropriate part. Finally, $\mathcal{B}_\mathsf{nizk}$ outputs whatever $\mathcal{A}$ outputs. \begin{itemize} \item If $(\mathsf{crs}^\ast,\pi^\ast)$ is the real one, i.e., it is generated by $\mathsf{crs}^\ast \leftarrow \algo{NIZK}.\algo{Setup}(1^\lambda)$ and $\pi^\ast \leftarrow \algo{NIZK}.\algo{Prove}(\mathsf{crs}^\ast,d,w)$, $\mathcal{B}_\mathsf{nizk}$ perfectly simulates $\hybi{1}$. \item If $(\mathsf{crs}^\ast,\pi^\ast)$ is the simulated one, i.e., it is generated by $(\mathsf{crs}^\ast,\keys{td}) \leftarrow \algo{Sim}_1(1^\lambda)$ and $\pi^\ast \leftarrow \algo{Sim}_2(\mathsf{crs}^\ast,\keys{td},d)$, $\mathcal{B}_\mathsf{nizk}$ perfectly simulates $\hybi{2}$. \end{itemize} Thus, if $\mathcal{A}$ distinguishes these two hybrids, $\mathcal{B}_\mathsf{nizk}$ breaks the computational zero-knowledge property of $\Pi_{\mathsf{nizk}}$. This completes the proof. \end{proof} \begin{proposition}\label{prop:ncabe_abe_ind} If $\Sigma_\mathsf{abe}$ is IND-CPA secure, $\abs{\Pr[\hybi{2}=1] - \Pr[\hybi{3}=1]}\le {\mathsf{negl}}(\lambda)$. \end{proposition} \begin{proof} We define more hybrid games. Recall that $\ell_m$ is the length of plaintexts. \begin{description} \item [$\hybij{2}{j}$:] This is the same as $\hybi{2}$ except that \begin{itemize} \item for $j < i \le \ell_m$, the challenger generates $\keys{CT}_{i,b}^\ast \leftarrow \ABE.\algo{Enc}(\mathsf{abe}.\keys{pk}_{i,b},X^\ast,m[i])$ for $b\in\zo{}$. \item for $1\le i \le j$, the challenger generates $\keys{CT}_{i,1- z[i]}^\ast \leftarrow \ABE.\algo{Enc}(\mathsf{abe}.\keys{pk}_{i,1-z[i]},X^\ast,1-m[i])$ and $\keys{CT}^\ast_{i,z[i]}\leftarrow \ABE.\algo{Enc}(\mathsf{abe}.\keys{pk}_{i,z[i]},X^\ast,m[i])$. \end{itemize} Clearly, $\hybij{2}{0} = \hybi{2}$ and $\hybij{2}{\ell_m} = \hybi{3}$.
\end{description} The difference between $\hybij{2}{j}$ and $\hybij{2}{j-1}$ is whether the $j$-th component of the target ciphertext is valid or invalid. We can show that this is indistinguishable by observing that the master secret key is set to be $\keys{msk}\coloneqq (\keys{pk},\setbk{\mathsf{abe}.\keys{msk}_{i,z[i]}}_{i\in[\ell_m]},z)$ in these games and $\setbk{\mathsf{abe}.\keys{msk}_{i,1-z[i]}}_{i\in [\ell_m]}$ is never revealed to the adversary. Specifically, we can construct an adversary $\mathcal{B}_\mathsf{abe}$ that breaks the IND-CPA security of $\Sigma_{\mathsf{abe}}$ under the key $\mathsf{abe}.\keys{pk}_{j,1-z[j]}$ assuming that $\mathcal{A}$ distinguishes these two games. $\mathcal{B}_\mathsf{abe}$ receives $\mathsf{abe}.\keys{pk}$ from the challenger of $\expa{\Sigma_\mathsf{abe},\mathcal{B}_\mathsf{abe}}{ind}{cpa}(\lambda,b')$ for $b'\in \{0,1\}$, and sets $\mathsf{abe}.\keys{pk}_{j,1-z[j]}\coloneqq \mathsf{abe}.\keys{pk}$. For the other public keys (that is, $\setbk{\mathsf{abe}.\keys{pk}_{i,b}}_{i\in[\ell_m],b\in\{0,1\}}\setminus \setbk{\mathsf{abe}.\keys{pk}_{j,1-z[j]}}$) and $\mathsf{crs}$, $\mathcal{B}_\mathsf{abe}$ generates them by itself. $\mathcal{B}_\mathsf{abe}$ sends $\keys{pk} \coloneqq (\setbk{\mathsf{abe}.\keys{pk}_{i,b}}_{i\in[\ell_m],b\in\{0,1\}},\mathsf{crs})$ to $\mathcal{A}$.
When $\mathcal{A}$ sends a key query $P$, $\mathcal{B}_\mathsf{abe}$ passes $P$ to the challenger and receives $\keys{sk}_{j,1-z[j]} \leftarrow \ABE.\algo{KeyGen}(\mathsf{abe}.\keys{msk}_{j,1-z[j]},P)$.\footnote{In fact, $\mathcal{B}_\mathsf{abe}$ need not query the challenger when $z[j]=0$ since $\keys{sk}_{j,1}$ is not needed for generating $\mathsf{D}_0[\widetilde{\mathsf{crs}},\keys{sk}_P^0]$.} For the other secret keys for $P$ (that is, $\setbk{\keys{sk}_{i,b}\leftarrow \ABE.\algo{KeyGen}(\mathsf{abe}.\keys{msk}_{i,b},P)}_{i\in[\ell_m],b\in\{0,1\}}\setminus \setbk{\keys{sk}_{j,1-z[j]}}$), $\mathcal{B}_\mathsf{abe}$ generates them by itself since it has $\setbk{\mathsf{abe}.\keys{msk}_{i,b}}_{i\in[\ell_m],b\in\{0,1\}}$ except for $\mathsf{abe}.\keys{msk}_{j,1-z[j]}$. Thus, $\mathcal{B}_\mathsf{abe}$ can compute $\widetilde{\keys{sk}} = i\mathcal{O}(\mathsf{D}_0[\widetilde{\mathsf{crs}},\keys{sk}_P^0])$. At some point, $\mathcal{A}$ declares the target attribute $X^\ast$ and message $m$. $\mathcal{B}_\mathsf{abe}$ sends $(m[j],1-m[j])$ to the challenger and receives $\keys{CT}_{j,1-z[j]}^\ast$. For $(i,b)\in ([\ell_m]\times \zo{})\setminus \setbk{(j,1-z[j])}$, $\mathcal{B}_\mathsf{abe}$ generates $\keys{CT}_{j,z[j]}^\ast \leftarrow \ABE.\algo{Enc}(\mathsf{abe}.\keys{pk}_{j,z[j]},X^\ast,m[j])$ and $\setbk{\keys{CT}_{i,b}}_{i \in [\ell_m]\setminus\setbk{j},b\in\zo{}}$ as in $\hybij{2}{j}$ and $\hybij{2}{j-1}$. Note that the difference between the two games is the $j$-th component (and in particular the $(j,1-z[j])$ part) of the target ciphertext. Again, $\mathcal{B}_\mathsf{abe}$ simulates answers for secret key queries as above. $\mathcal{B}_\mathsf{abe}$ outputs whatever $\mathcal{A}$ outputs. \begin{itemize} \item If $b'=0$, i.e., $\keys{CT}_{j,1-z[j]}^\ast \leftarrow \ABE.\algo{Enc}(\mathsf{abe}.\keys{pk}_{j,1-z[j]},X^\ast,m[j])$, $\mathcal{B}_\mathsf{abe}$ perfectly simulates $\hybij{2}{j-1}$.
\item If $b'=1$, i.e., $\keys{CT}_{j,1-z[j]}^\ast \leftarrow \ABE.\algo{Enc}(\mathsf{abe}.\keys{pk}_{j,1-z[j]},X^\ast,1-m[j])$, $\mathcal{B}_\mathsf{abe}$ perfectly simulates $\hybij{2}{j}$. \end{itemize} Thus, if $\mathcal{A}$ distinguishes these two games, $\mathcal{B}_\mathsf{abe}$ breaks the IND-CPA security of $\Sigma_{\mathsf{abe}}$. This completes the proof. \end{proof} \begin{proposition}\label{prop:ncabe_conceptual} $\Pr[\hybi{3}=1]=\Pr[\hybi{4}=1]$. \end{proposition} \begin{proof} This is a conceptual change. The advantage of distinguishing these two games is $0$ since these two games are identical if we set $z\coloneqq z^\ast \oplus m$. Note that secret keys do not depend on $z$ in these games. \end{proof} Clearly, $\hybi{4}=\expa{\Sigma_\mathsf{nce},\mathcal{A}}{rnc}{cpa}(\lambda,1)$. Therefore, we complete the proof by~\cref{prop:ncabe_nizk_czk,prop:ncabe_hybrid_sk,prop:ncabe_abe_ind,prop:ncabe_conceptual}. \end{proof} \subsection{ABE with Certified Deletion from NCABE and SKE with Certified Deletion}\label{sec:const_abe_cd_from_sk} In this section, we construct an ABE scheme with certified deletion from an NCABE scheme and an OT-CD secure SKE scheme with certified deletion. \paragraph{Our ABE with certified deletion scheme.} We construct an ABE with certified deletion scheme $\Sigma_{\mathsf{cd}} =(\algo{Setup},\algo{KeyGen},\allowbreak\algo{Enc},\algo{Dec},\algo{Del},\algo{Vrfy})$ with plaintext space $\mathcal{M}$, attribute space $\mathcal{X}$, and policy space $\mathcal{P}$ from an NCABE scheme $\Sigma_{\mathsf{nce}}=\mathsf{NCE}.(\algo{Setup},\algo{KeyGen},\algo{Enc},\algo{Dec},\algo{FakeSetup},\algo{FakeSK},\algo{FakeCT},\algo{Reveal})$ with plaintext space $\{0,1\}^{\ell}$, attribute space $\mathcal{X}$, and policy space $\mathcal{P}$ and an SKE with certified deletion scheme $\Sigma_{\mathsf{skcd}}=\SKE.(\algo{Gen},\algo{Enc},\algo{Dec},\algo{Del},\algo{Vrfy})$ with plaintext space $\mathcal{M}$ and key space $\{0,1\}^\ell$.
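The construction below follows a hybrid-encryption pattern: the NCABE component transports a fresh SKE key, the SKE-with-certified-deletion component carries the message, and deletion and verification are delegated to the SKE component. As a rough classical sketch (the stubs below are hypothetical stand-ins for the two building blocks; the quantum deletion and verification algorithms are omitted):

```python
# Hybrid composition sketch for ABE with certified deletion.
# All primitives here are toy stubs, not the actual NCABE/SKE-CD schemes.
import hashlib
import secrets


def ske_gen():
    return secrets.token_bytes(16)


def ske_enc(sk, m: bytes):
    # One-time-pad stand-in for the SKE component (messages up to 32 bytes).
    pad = hashlib.sha256(sk).digest()[: len(m)]
    return bytes(a ^ b for a, b in zip(m, pad))


def ske_dec(sk, ct):
    # The XOR pad is its own inverse.
    return ske_enc(sk, ct)


def nce_enc(pk, ske_sk):
    # NCABE stand-in: "encrypt" the SKE key under pk (attribute checks elided).
    return (pk, ske_sk)


def nce_dec(nce_sk, ct):
    # Stub decryption ignores the secret key and returns the payload.
    _pk, payload = ct
    return payload


def enc(pk, x_attr, m):
    # Enc(pk, X, m): fresh SKE key, NCABE carries the key, SKE carries m.
    ske_sk = ske_gen()
    ct = (nce_enc(pk, ske_sk), ske_enc(ske_sk, m))
    return ct, ske_sk  # (CT_X, vk); the SKE key doubles as verification key


def dec(nce_sk, ct):
    nce_ct, ske_ct = ct
    return ske_dec(nce_dec(nce_sk, nce_ct), ske_ct)


ct, vk = enc(b"pk", "X", b"hello")
assert dec(b"sk", ct) == b"hello"
```

The point of the sketch is only the data flow: whoever can open the NCABE ciphertext recovers the SKE key and hence the message, while $\algo{Del}$ and $\algo{Vrfy}$ act solely on the SKE part, exactly as in the formal description that follows.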
\begin{description} \item[$\algo{Setup}(1^\lambda)$:]$ $ \begin{itemize} \item Generate $(\mathsf{nce}.\keys{pk},\mathsf{nce}.\keys{msk})\leftarrow \mathsf{NCE}.\algo{Setup}(1^\lambda)$. \item Output $(\keys{pk},\keys{msk}) \coloneqq (\mathsf{nce}.\keys{pk},\mathsf{nce}.\keys{msk})$. \end{itemize} \item[$\algo{KeyGen}(\keys{msk},P)$:] $ $ \begin{itemize} \item Generate $\mathsf{nce}.\keys{sk}_P \leftarrow \mathsf{NCE}.\algo{KeyGen}(\mathsf{nce}.\keys{msk},P)$ and output $\keys{sk}_P \coloneqq \mathsf{nce}.\keys{sk}_P$. \end{itemize} \item[$\algo{Enc}(\keys{pk},X,m)$:] $ $ \begin{itemize} \item Parse $\keys{pk} = \mathsf{nce}.\keys{pk}$. \item Generate $\mathsf{ske}.\keys{sk} \leftarrow \SKE.\algo{Gen}(1^\lambda)$. \item Compute $\mathsf{nce}.\keys{CT}_X \leftarrow \mathsf{NCE}.\algo{Enc}(\mathsf{nce}.\keys{pk},X,\mathsf{ske}.\keys{sk})$ and $\mathsf{ske}.\keys{CT} \leftarrow \SKE.\algo{Enc}(\mathsf{ske}.\keys{sk},m)$. \item Output $\keys{CT}_X \coloneqq (\mathsf{nce}.\keys{CT}_X,\mathsf{ske}.\keys{CT})$ and $\keys{vk} \coloneqq \mathsf{ske}.\keys{sk}$. \end{itemize} \item[$\algo{Dec}(\keys{sk}_P,\keys{CT}_X)$:] $ $ \begin{itemize} \item Parse $\keys{sk}_P = \mathsf{nce}.\keys{sk}_P$ and $\keys{CT}_X = (\mathsf{nce}.\keys{CT}_X,\mathsf{ske}.\keys{CT})$. \item Compute $\keys{sk}^\prime \leftarrow \mathsf{NCE}.\algo{Dec}(\mathsf{nce}.\keys{sk}_P,\mathsf{nce}.\keys{CT}_X)$. \item Compute and output $m^\prime \leftarrow \SKE.\algo{Dec}(\keys{sk}^\prime,\mathsf{ske}.\keys{CT})$. \end{itemize} \item[$\algo{Del}(\keys{CT})$:] $ $ \begin{itemize} \item Parse $\keys{CT}_X = (\mathsf{nce}.\keys{CT}_X,\mathsf{ske}.\keys{CT})$. \item Generate $\mathsf{ske}.\keys{cert} \leftarrow \SKE.\algo{Del}(\mathsf{ske}.\keys{CT})$. \item Output $\keys{cert} \coloneqq \mathsf{ske}.\keys{cert}$. \end{itemize} \item[$\algo{Vrfy}(\keys{vk},\keys{cert})$:] $ $ \begin{itemize} \item Parse $\keys{vk} = \mathsf{ske}.\keys{sk}$ and $\keys{cert} = \mathsf{ske}.\keys{cert}$. 
\item Output $b \leftarrow \SKE.\algo{Vrfy}(\mathsf{ske}.\keys{sk},\mathsf{ske}.\keys{cert})$. \end{itemize} \end{description} \paragraph{Correctness.} Correctness easily follows from correctness of $\Sigma_\mathsf{skcd}$ and $\Sigma_\mathsf{nce}$. \begin{theorem}\label{thm:abe_cd_from_sk_cd_and_ncabe} If $\Sigma_{\mathsf{nce}}$ is an RNC secure NCABE scheme and $\Sigma_{\mathsf{skcd}}$ is OT-CD secure, then $\Sigma_{\mathsf{cd}}$ is an IND-CPA-CD secure ABE scheme. \end{theorem} \begin{proof} Let $\mathcal{A}$ be a QPT adversary and $b$ be a bit. We define the following hybrid game $\hybi{}(b)$. \begin{description} \item[$\hybi{}(b)$:] This is the same as $\expb{\Sigma_{\mathsf{cd}},\mathcal{A}}{ind}{cpa}{cd}(\lambda,b)$ except for the following differences: \begin{enumerate} \item The challenger generates the public key as $(\mathsf{nce}.\keys{pk},\mathsf{nce}.\mathsf{aux})\leftarrow \mathsf{NCE}.\algo{FakeSetup}(1^\lambda)$. \item The challenger generates the challenge ciphertext as follows. It generates $\mathsf{ske}.\keys{sk} \leftarrow \SKE.\algo{Gen}(1^\lambda)$, $\mathsf{nce}.\keys{CT}_{X^\ast}^\ast \leftarrow \mathsf{NCE}.\algo{FakeCT}(\mathsf{nce}.\keys{pk},\mathsf{nce}.\mathsf{aux},X^\ast)$, and $\mathsf{ske}.\keys{CT}^\ast \leftarrow \SKE.\algo{Enc}(\mathsf{ske}.\keys{sk},m_b)$. The challenge ciphertext is $\keys{CT}_{X^\ast}^\ast \coloneqq (\mathsf{nce}.\keys{CT}_{X^\ast}^\ast,\mathsf{ske}.\keys{CT}^\ast)$. \item The challenger generates secret keys as $\mathsf{nce}.\widetilde{\keys{sk}}_P \leftarrow \mathsf{NCE}.\algo{FakeSK}(\mathsf{nce}.\keys{pk},\allowbreak\mathsf{nce}.\mathsf{aux},P)$. \item The challenger reveals $\mathsf{nce}.\widetilde{\keys{msk}} \leftarrow \mathsf{NCE}.\algo{Reveal}(\mathsf{nce}.\keys{pk},\mathsf{nce}.\mathsf{aux},\mathsf{nce}.\keys{CT}_{X^\ast}^\ast,\mathsf{ske}.\keys{sk})$ instead of $\mathsf{nce}.\keys{msk}$.
\end{enumerate} \end{description} \begin{proposition}\label{prop:abecd_hyb_one} If $\Sigma_{\mathsf{nce}}$ is RNC secure, $\abs{\Pr[\expb{\Sigma_{\mathsf{cd}},\mathcal{A}}{ind}{cpa}{cd}(\lambda, b)=1] - \Pr[\sfhyb{}{}(b)=1]} \le {\mathsf{negl}}(\lambda)$. \end{proposition} \begin{proof} We construct an adversary $\mathcal{B}_\mathsf{nce}$ that breaks the RNC security of $\Sigma_{\mathsf{nce}}$ by using $\mathcal{A}$ that distinguishes them. $\mathcal{B}_\mathsf{nce}$ receives $\mathsf{nce}.\keys{pk}$ from the challenger of $\expa{\Sigma_\mathsf{nce},\mathcal{B}_\mathsf{nce}}{rnc}{cpa}(\lambda, b')$ for $b'\in \{0,1\}$ and sends $\mathsf{nce}.\keys{pk}$ to $\mathcal{A}$. When $\mathcal{A}$ makes a key query $P$, $\mathcal{B}_\mathsf{nce}$ passes $P$ to the challenger, receives $\keys{sk}_P$, and passes it to $\mathcal{A}$. At some point, $\mathcal{A}$ sends the target attribute $X^\ast$ and messages $(m_0,m_1)$. $\mathcal{B}_\mathsf{nce}$ generates $\mathsf{ske}.\keys{sk} \leftarrow \SKE.\algo{Gen}(1^\lambda)$, sends the target attribute $X^\ast$ and message $\mathsf{ske}.\keys{sk}$ to the challenger, and receives $\mathsf{nce}.\keys{CT}_{X^\ast}^\ast$ and $\mathsf{nce}.\keys{msk}^\ast$ from the challenger. $\mathcal{B}_\mathsf{nce}$ generates $\mathsf{ske}.\keys{CT} \leftarrow \SKE.\algo{Enc}(\mathsf{ske}.\keys{sk},m_b)$ and sends $\keys{CT}_{X^\ast}^\ast\coloneqq (\mathsf{nce}.\keys{CT}_{X^\ast}^\ast,\mathsf{ske}.\keys{CT})$ to $\mathcal{A}$. Again, when $\mathcal{A}$ makes a key query $P$, $\mathcal{B}_\mathsf{nce}$ responds similarly as above. At some point, $\mathcal{A}$ sends a certificate $\keys{cert}$. If $\SKE.\algo{Vrfy}(\mathsf{ske}.\keys{sk},\keys{cert}) = \top$, $\mathcal{B}_\mathsf{nce}$ sends $\mathsf{nce}.\keys{msk}^\ast$ to $\mathcal{A}$. Otherwise, $\mathcal{B}_\mathsf{nce}$ sends $\bot$ to $\mathcal{A}$. Finally, $\mathcal{B}_\mathsf{nce}$ outputs whatever $\mathcal{A}$ outputs. 
We can see that $\mathcal{B}_\mathsf{nce}$ perfectly simulates $\expb{\Sigma_{\mathsf{cd}},\mathcal{A}}{ind}{cpa}{cd}(\lambda, b)$ if $b'=0$ and $\sfhyb{}{}(b)$ if $b'=1$. Thus, if $\mathcal{A}$ distinguishes the two hybrids, $\mathcal{B}_\mathsf{nce}$ breaks the RNC security. This completes the proof. \end{proof} \begin{proposition}\label{prop:abecd_hyb_end} If $\Sigma_{\mathsf{skcd}}$ is OT-CD secure, $\abs{\Pr[\sfhyb{}{}(0)=1] - \Pr[\sfhyb{}{}(1)=1] } \le {\mathsf{negl}}(\lambda)$. \end{proposition} \begin{proof} We construct an adversary $\mathcal{B}_\mathsf{skcd}$ that breaks the OT-CD security of $\Sigma_{\mathsf{skcd}}$ assuming that $\mathcal{A}$ distinguishes these two hybrids. $\mathcal{B}_\mathsf{skcd}$ plays the experiment $\expb{\Sigma_\mathsf{skcd},\mathcal{B}_\mathsf{skcd}}{otsk}{cert}{del}(\lambda,b')$ for some $b'\in \{0,1\}$. $\mathcal{B}_\mathsf{skcd}$ generates $(\mathsf{nce}.\keys{pk},\mathsf{nce}.\mathsf{aux}) \leftarrow \mathsf{NCE}.\algo{FakeSetup}(1^\lambda)$ and sends $\mathsf{nce}.\keys{pk}$ to $\mathcal{A}$. When $\mathcal{A}$ sends a key query $P$, $\mathcal{B}_\mathsf{skcd}$ generates $\widetilde{\keys{sk}}_P \leftarrow \mathsf{NCE}.\algo{FakeSK}(\mathsf{nce}.\keys{pk},\mathsf{nce}.\mathsf{aux},P)$ and returns it to $\mathcal{A}$. When $\mathcal{A}$ sends the target attribute $X^\ast$ and messages $(m_0,m_1)$, $\mathcal{B}_\mathsf{skcd}$ sends $(m_0,m_1)$ to the challenger of $\expb{\Sigma_\mathsf{skcd},\mathcal{B}_\mathsf{skcd}}{otsk}{cert}{del}(\lambda,b')$, receives $\mathsf{ske}.\keys{CT}^\ast$, and generates $\mathsf{nce}.\widetilde{\ct}_X \leftarrow \mathsf{NCE}.\algo{FakeCT}(\mathsf{nce}.\keys{pk},\mathsf{nce}.\mathsf{aux},X^\ast)$. $\mathcal{B}_\mathsf{skcd}$ sends $(\mathsf{nce}.\widetilde{\ct}_X,\mathsf{ske}.\keys{CT}^\ast)$ to $\mathcal{A}$ as the challenge ciphertext. Again, when $\mathcal{A}$ makes a key query $P$, $\mathcal{B}_\mathsf{skcd}$ responds similarly as above. At some point, $\mathcal{A}$ outputs $\keys{cert}$.
$\mathcal{B}_\mathsf{skcd}$ passes $\keys{cert}$ to the challenger of $\expb{\Sigma_\mathsf{skcd},\mathcal{B}_\mathsf{skcd}}{otsk}{cert}{del}(\lambda,b')$. If the challenger returns $\mathsf{ske}.\keys{sk}$, $\mathcal{B}_\mathsf{skcd}$ generates $\widetilde{\keys{msk}} \leftarrow \mathsf{NCE}.\algo{Reveal}(\mathsf{nce}.\keys{pk},\mathsf{nce}.\mathsf{aux},\mathsf{nce}.\widetilde{\ct}_X,\mathsf{ske}.\keys{sk})$ and sends $\widetilde{\keys{msk}}$ to $\mathcal{A}$. Otherwise, $\mathcal{B}_\mathsf{skcd}$ sends $\bot$ to $\mathcal{A}$. Finally, $\mathcal{B}_\mathsf{skcd}$ outputs whatever $\mathcal{A}$ outputs. \begin{itemize} \item If $b'=0$, i.e., $\mathsf{ske}.\keys{CT}^\ast = \SKE.\algo{Enc}(\mathsf{ske}.\keys{sk},m_0)$, $\mathcal{B}_\mathsf{skcd}$ perfectly simulates $\hybi{}(0)$. \item If $b'=1$, i.e., $\mathsf{ske}.\keys{CT}^\ast = \SKE.\algo{Enc}(\mathsf{ske}.\keys{sk},m_1)$, $\mathcal{B}_\mathsf{skcd}$ perfectly simulates $\hybi{}(1)$. \end{itemize} Thus, if $\mathcal{A}$ distinguishes the two hybrids, $\mathcal{B}_\mathsf{skcd}$ breaks the OT-CD security of $\Sigma_\mathsf{skcd}$. This completes the proof. \end{proof} By~\cref{prop:abecd_hyb_one,prop:abecd_hyb_end}, we immediately obtain~\cref{thm:abe_cd_from_sk_cd_and_ncabe}. \end{proof} \paragraph{Summary of this section.} Since IO and OWFs imply a computational NIZK proof for $\compclass{NP}$~\cite{TCC:BitPan15} and an IND-CPA secure ABE scheme for circuits, we immediately obtain the following corollary by using~\cref{thm:ncabe_from_abe_io,thm:abe_cd_from_sk_cd_and_ncabe,thm:ske_cert_del_no_assumption,thm:ABE_circuits_from_LWE_or_IO}. \begin{corollary}\label{cor:existence_abe_cd} If there exist secure IO for $\compclass{P}/\compclass{poly}$ and OWFs against QPT adversaries, there exists an ABE scheme with certified deletion for circuits.
\end{corollary} \section{Public Key Encryption with Certified Deletion}\label{sec:pk_cd} In this section, we define the notion of PKE with certified deletion, which is a natural extension of SKE with certified deletion, and present how to achieve PKE with certified deletion from OT-CD secure SKE and IND-CPA secure (standard) PKE. \subsection{Definition of PKE with Certified Deletion}\label{sec:pk_cd_def} The definition of PKE with certified deletion is an extension of that of SKE with certified deletion. Note that a verification key for verifying a certificate is generated by the encryption algorithm. \begin{definition}[PKE with Certified Deletion (Syntax)]\label{def:pk_cert_del} A PKE scheme with certified deletion is a tuple of QPT algorithms $(\algo{KeyGen},\algo{Enc},\algo{Dec},\algo{Del},\algo{Vrfy})$ with plaintext space $\mathcal{M}$. \begin{description} \item[$\algo{KeyGen} (1^\lambda) \rightarrow (\keys{pk},\keys{sk})$:] The key generation algorithm takes as input the security parameter $1^\lambda$ and outputs a classical key pair $(\keys{pk},\keys{sk})$. \item[$\algo{Enc}(\keys{pk},m) \rightarrow (\keys{vk},\keys{CT})$:] The encryption algorithm takes as input the public key $\keys{pk}$ and a plaintext $m\in\mathcal{M}$ and outputs a classical verification key $\keys{vk}$ and a quantum ciphertext $\keys{CT}$. \item[$\algo{Dec}(\keys{sk},\keys{CT}) \rightarrow m^\prime \mbox{ or } \bot$:] The decryption algorithm takes as input the secret key $\keys{sk}$ and the ciphertext $\keys{CT}$, and outputs a classical plaintext $m^\prime$ or $\bot$. \item[$\algo{Del}(\keys{CT}) \rightarrow \keys{cert}$:] The deletion algorithm takes as input the ciphertext $\keys{CT}$ and outputs a classical certificate $\keys{cert}$. \item[$\algo{Vrfy}(\keys{vk},\keys{cert})\rightarrow \top \mbox{ or }\bot$:] The verification algorithm takes the verification key $\keys{vk}$ and the certificate $\keys{cert}$, and outputs $\top$ or $\bot$.
\end{description} \end{definition} \begin{definition}[Correctness for PKE with Certified Deletion]\label{def:pk_cd_correctness} There are two types of correctness. One is decryption correctness and the other is verification correctness. \begin{description} \item[Decryption correctness:] For any $\lambda\in \mathbb{N}$, $m\in\mathcal{M}$, \begin{align} \Pr\left[ \algo{Dec}(\keys{sk},\keys{CT})\ne m \ \middle | \begin{array}{ll} (\keys{pk},\keys{sk})\leftarrow \algo{KeyGen}(1^\lambda)\\ (\keys{vk},\keys{CT}) \leftarrow \algo{Enc}(\keys{pk},m) \end{array} \right] \le{\mathsf{negl}}(\lambda). \end{align} \item[Verification correctness:] For any $\lambda\in \mathbb{N}$, $m\in\mathcal{M}$, \begin{align} \Pr\left[ \algo{Vrfy}(\keys{vk},\keys{cert})=\bot \ \middle | \begin{array}{ll} (\keys{pk},\keys{sk})\leftarrow \algo{KeyGen}(1^\lambda)\\ (\keys{vk},\keys{CT}) \leftarrow \algo{Enc}(\keys{pk},m)\\ \keys{cert} \leftarrow \algo{Del}(\keys{CT}) \end{array} \right] \le{\mathsf{negl}}(\lambda). \end{align} \end{description} \end{definition} \begin{definition}[Certified Deletion Security for PKE]\label{def:pk_certified_del} Let $\Sigma=(\algo{KeyGen}, \algo{Enc}, \algo{Dec}, \algo{Del}, \algo{Vrfy})$ be a PKE with certified deletion scheme. We consider the following security experiment $\expb{\Sigma,\mathcal{A}}{pk}{cert}{del}(\lambda,b)$. \begin{enumerate} \item The challenger computes $(\keys{pk},\keys{sk}) \leftarrow \algo{KeyGen}(1^\lambda)$ and sends $\keys{pk}$ to $\mathcal{A}$. \item $\mathcal{A}$ sends $(m_0,m_1)\in \mathcal{M}^2$ to the challenger. \item The challenger computes $(\keys{vk}_b,\keys{CT}_b) \leftarrow \algo{Enc}(\keys{pk},m_b)$ and sends $\keys{CT}_b$ to $\mathcal{A}$. \item At some point, $\mathcal{A}$ sends $\keys{cert}$ to the challenger. \item The challenger computes $\algo{Vrfy}(\keys{vk}_b,\keys{cert})$. If the output is $\bot$, it sends $\bot$ to $\mathcal{A}$. If the output is $\top$, it sends $\keys{sk}$ to $\mathcal{A}$. 
\item $\mathcal{A}$ outputs its guess $b'\in \{0,1\}$. \end{enumerate} We say that $\Sigma$ is IND-CPA-CD secure if for any QPT adversary $\mathcal{A}$, it holds that \begin{align} \advc{\Sigma,\mathcal{A}}{pk}{cert}{del}(\lambda)\coloneqq \abs{\Pr[ \expb{\Sigma,\mathcal{A}}{pk}{cert}{del}(\lambda, 0)=1] - \Pr[ \expb{\Sigma,\mathcal{A}}{pk}{cert}{del}(\lambda, 1)=1] }\leq {\mathsf{negl}}(\lambda). \end{align} \end{definition} \subsection{PKE with Certified Deletion from PKE and SKE with Certified Deletion}\label{sec:const_pk_cd_from_sk} In this section, we present how to construct a PKE scheme with certified deletion from an SKE scheme with certified deletion and an NCE scheme, which can be constructed from standard IND-CPA secure PKE schemes. \paragraph{Our PKE Scheme.} We construct $\Sigma_{\mathsf{pkcd}} =(\algo{KeyGen},\algo{Enc},\algo{Dec},\algo{Del},\algo{Vrfy})$ with plaintext space $\mathcal{M}$ from an SKE with certified deletion scheme $\Sigma_{\mathsf{skcd}}=\SKE.(\algo{Gen},\algo{Enc},\algo{Dec},\algo{Del},\algo{Vrfy})$ with plaintext space $\mathcal{M}$ and key space $\mathcal{K}$ and a public key NCE scheme $\Sigma_{\mathsf{nce}}=\mathsf{NCE}.(\algo{KeyGen},\algo{Enc},\algo{Dec},\algo{Fake},\algo{Reveal})$ with plaintext space $\mathcal{K}$. \begin{description} \item[$\algo{KeyGen}(1^\lambda)$:] $ $ \begin{itemize} \item Generate $(\mathsf{nce}.\keys{pk},\mathsf{nce}.\keys{sk},\mathsf{nce}.\mathsf{aux})\leftarrow \mathsf{NCE}.\algo{KeyGen}(1^\lambda)$ and output $(\keys{pk},\keys{sk}) \coloneqq (\mathsf{nce}.\keys{pk},\mathsf{nce}.\keys{sk})$. \end{itemize} \item[$\algo{Enc}(\keys{pk},m)$:] $ $ \begin{itemize} \item Parse $\keys{pk} = \mathsf{nce}.\keys{pk}$. \item Generate $\mathsf{ske}.\keys{sk} \leftarrow \SKE.\algo{Gen}(1^\lambda)$.
\item Compute $\mathsf{nce}.\keys{CT} \leftarrow \mathsf{NCE}.\algo{Enc}(\mathsf{nce}.\keys{pk},\mathsf{ske}.\keys{sk})$ and $\mathsf{ske}.\keys{CT} \leftarrow \SKE.\algo{Enc}(\mathsf{ske}.\keys{sk},m)$. \item Output $\keys{CT} \coloneqq (\mathsf{nce}.\keys{CT},\mathsf{ske}.\keys{CT})$ and $\keys{vk} \coloneqq \mathsf{ske}.\keys{sk}$. \end{itemize} \item[$\algo{Dec}(\keys{sk},\keys{CT})$:] $ $ \begin{itemize} \item Parse $\keys{sk} = \mathsf{nce}.\keys{sk}$ and $\keys{CT} = (\mathsf{nce}.\keys{CT},\mathsf{ske}.\keys{CT})$. \item Compute $\keys{sk}^\prime \leftarrow \mathsf{NCE}.\algo{Dec}(\mathsf{nce}.\keys{sk},\mathsf{nce}.\keys{CT})$. \item Compute and output $m^\prime \leftarrow \SKE.\algo{Dec}(\keys{sk}^\prime,\mathsf{ske}.\keys{CT})$. \end{itemize} \item[$\algo{Del}(\keys{CT})$:] $ $ \begin{itemize} \item Parse $\keys{CT}= (\mathsf{nce}.\keys{CT},\mathsf{ske}.\keys{CT})$. \item Generate $\mathsf{ske}.\keys{cert} \leftarrow \SKE.\algo{Del}(\mathsf{ske}.\keys{CT})$. \item Output $\keys{cert} \coloneqq \mathsf{ske}.\keys{cert}$. \end{itemize} \item[$\algo{Vrfy}(\keys{vk},\keys{cert})$:] $ $ \begin{itemize} \item Parse $\keys{vk} = \mathsf{ske}.\keys{sk}$ and $\keys{cert} = \mathsf{ske}.\keys{cert}$. \item Output $b \leftarrow \SKE.\algo{Vrfy}(\mathsf{ske}.\keys{sk},\mathsf{ske}.\keys{cert})$. \end{itemize} \end{description} \paragraph{Correctness.} The decryption and verification correctness easily follow from the correctness of $\Sigma_{\mathsf{nce}}$ and $\Sigma_{\mathsf{skcd}}$. \paragraph{Security.} We prove the following theorem. \begin{theorem}\label{thm:pke_cd_from_sk_cd_and_pke} If $\Sigma_{\mathsf{nce}}$ is RNC secure and $\Sigma_{\mathsf{skcd}}$ is OT-CD secure, $\Sigma_{\mathsf{pkcd}}$ is IND-CPA-CD secure. \end{theorem} \begin{proof} Let $\mathcal{A}$ be a QPT adversary and $b\in \{0,1\}$ be a bit. We define the following hybrid game $\sfhyb{}{}(b)$. 
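For intuition, the hybrid structure of $\Sigma_{\mathsf{pkcd}}$ above (an NCE ciphertext carrying a fresh SKE key, an SKE-with-certified-deletion ciphertext carrying the message, and $\keys{vk}\coloneqq \mathsf{ske}.\keys{sk}$) can be sketched in toy Python pseudocode. Every primitive below is a hypothetical classical placeholder of our own (XOR as ``encryption'', a hash as ``certificate''); the sketch illustrates only the data flow of $\algo{KeyGen}/\algo{Enc}/\algo{Dec}/\algo{Del}/\algo{Vrfy}$ and has none of the security properties of the actual scheme:

```python
# Toy structural sketch of the hybrid construction Sigma_pkcd.
# CAUTION: all primitives are hypothetical classical placeholders
# (XOR "encryption", a hash as "deletion certificate"); this shows
# only the data flow, not any security property.
import os
import hashlib

N = 16  # byte length of keys and messages in this toy


def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))


# --- placeholder for the NCE scheme (Fake/Reveal omitted) ---
def nce_keygen():
    k = os.urandom(N)
    return k, k  # (pk, sk); in this toy they coincide


def nce_enc(pk, m):
    return xor(m, pk)


def nce_dec(sk, ct):
    return xor(ct, sk)


# --- placeholder for one-time SKE with certified deletion ---
def ske_gen():
    return os.urandom(N)


def ske_enc(k, m):
    # ciphertext body plus a toy "deletion token" derived from k
    return (xor(m, k), hashlib.sha256(k).digest())


def ske_dec(k, ct):
    return xor(ct[0], k)


def ske_del(ct):
    return ct[1]  # toy "certificate"


def ske_vrfy(k, cert):
    return cert == hashlib.sha256(k).digest()


# --- the hybrid construction Sigma_pkcd ---
def keygen():
    pk, sk = nce_keygen()
    return pk, sk


def enc(pk, m):
    ske_sk = ske_gen()                       # fresh session key
    ct = (nce_enc(pk, ske_sk),               # NCE part carries ske_sk
          ske_enc(ske_sk, m))                # SKE part carries m
    vk = ske_sk                              # verification key is the SKE key
    return vk, ct


def dec(sk, ct):
    nce_ct, ske_ct = ct
    ske_sk = nce_dec(sk, nce_ct)             # recover the session key
    return ske_dec(ske_sk, ske_ct)


def delete(ct):
    return ske_del(ct[1])                    # delete only the SKE part


def vrfy(vk, cert):
    return ske_vrfy(vk, cert)
```

Running `keygen`, `enc`, `dec`, `delete`, and `vrfy` in sequence exercises exactly the interfaces used in the hybrid argument that follows.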
\begin{description} \item[$\sfhyb{}{}(b)$:] This is the same as $\expb{\Sigma_{\mathsf{pkcd}},\mathcal{A}}{pk}{cert}{del}(\lambda,b)$ except that the challenger generates the target ciphertext as follows. It generates $\mathsf{ske}.\keys{sk} \leftarrow \SKE.\algo{Gen}(1^\lambda)$ and computes $\mathsf{nce}.\keys{CT}^\ast \leftarrow \mathsf{NCE}.\algo{Fake}(\mathsf{nce}.\keys{pk},\mathsf{nce}.\keys{sk},\mathsf{nce}.\mathsf{aux})$ and $\mathsf{ske}.\keys{CT}^\ast \leftarrow \SKE.\algo{Enc}(\mathsf{ske}.\keys{sk},m_b)$. The target ciphertext is $\keys{CT}^\ast \coloneqq (\mathsf{nce}.\keys{CT}^\ast,\mathsf{ske}.\keys{CT}^\ast)$. In addition, the challenger reveals $\widetilde{\keys{sk}} \leftarrow \mathsf{NCE}.\algo{Reveal}(\mathsf{nce}.\keys{pk},\mathsf{nce}.\keys{sk},\mathsf{nce}.\mathsf{aux},\mathsf{nce}.\keys{CT}^\ast,\mathsf{ske}.\keys{sk})$ instead of $\mathsf{nce}.\keys{sk}$. \end{description} \begin{proposition}\label{prop:pkecd_hyb_one} If $\Sigma_{\mathsf{nce}}$ is RNC secure, $\abs{\Pr[\expb{\Sigma_{\mathsf{pkcd}},\mathcal{A}}{pk}{cert}{del}(\lambda, b)=1] - \Pr[\sfhyb{}{}(b)=1]} \le {\mathsf{negl}}(\lambda)$. \end{proposition} \begin{proof} We construct an adversary $\mathcal{B}_\mathsf{nce}$ that breaks the RNC security of $\Sigma_{\mathsf{nce}}$ assuming that $\mathcal{A}$ distinguishes these two experiments. First, $\mathcal{B}_\mathsf{nce}$ is given $\mathsf{nce}.\keys{pk}$ from the challenger of $\expa{\Sigma_\mathsf{nce},\mathcal{B}_\mathsf{nce}}{rec}{nc}(\lambda, b')$ for $b'\in \{0,1\}$. $\mathcal{B}_\mathsf{nce}$ generates $\mathsf{ske}.\keys{sk} \leftarrow \SKE.\algo{Gen}(1^\lambda)$ and sends $\mathsf{nce}.\keys{pk}$ to $\mathcal{A}$.
When $\mathcal{A}$ sends $(m_0,m_1)$, $\mathcal{B}_\mathsf{nce}$ sends $\mathsf{ske}.\keys{sk}$ to the challenger of $\expa{\Sigma_\mathsf{nce},\mathcal{B}_\mathsf{nce}}{rec}{nc}(\lambda, b')$, receives $(\mathsf{nce}.\keys{CT}^\ast,\widetilde{\keys{sk}})$, and generates $\mathsf{ske}.\keys{CT}^\ast \leftarrow \SKE.\algo{Enc}(\mathsf{ske}.\keys{sk},m_b)$. $\mathcal{B}_\mathsf{nce}$ sends $(\mathsf{nce}.\keys{CT}^\ast,\mathsf{ske}.\keys{CT}^\ast)$ to $\mathcal{A}$ as the challenge ciphertext. At some point, $\mathcal{A}$ outputs $\keys{cert}$. If $\SKE.\algo{Vrfy}(\mathsf{ske}.\keys{sk},\keys{cert}) = \top$, $\mathcal{B}_\mathsf{nce}$ sends $\widetilde{\keys{sk}}$ to $\mathcal{A}$. Otherwise, $\mathcal{B}_\mathsf{nce}$ sends $\bot$ to $\mathcal{A}$. Finally, $\mathcal{B}_\mathsf{nce}$ outputs whatever $\mathcal{A}$ outputs. \begin{itemize} \item If $b'=0$, i.e., $(\mathsf{nce}.\keys{CT}^\ast,\widetilde{\keys{sk}}) = (\mathsf{NCE}.\algo{Enc}(\mathsf{nce}.\keys{pk},\mathsf{ske}.\keys{sk}),\mathsf{nce}.\keys{sk})$, $\mathcal{B}_\mathsf{nce}$ perfectly simulates $\expb{\Sigma_{\mathsf{pkcd}},\mathcal{A}}{pk}{cert}{del}(\lambda, b)$. \item If $b'=1$, i.e., $(\mathsf{nce}.\keys{CT}^\ast,\widetilde{\keys{sk}}) = (\mathsf{NCE}.\algo{Fake}(\mathsf{nce}.\keys{pk},\mathsf{nce}.\keys{sk},\mathsf{nce}.\mathsf{aux}),\mathsf{NCE}.\algo{Reveal}(\mathsf{nce}.\keys{pk},\mathsf{nce}.\keys{sk},\mathsf{nce}.\mathsf{aux},\mathsf{nce}.\keys{CT}^\ast,\allowbreak\mathsf{ske}.\keys{sk}))$, $\mathcal{B}_\mathsf{nce}$ perfectly simulates $\sfhyb{}{}(b)$. \end{itemize} Thus, if $\mathcal{A}$ distinguishes the two experiments, $\mathcal{B}_\mathsf{nce}$ breaks the RNC security of $\Sigma_\mathsf{nce}$. This completes the proof. \end{proof} \begin{proposition}\label{prop:pkecd_hyb_end} If $\Sigma_{\mathsf{skcd}}$ is OT-CD secure, $\abs{\Pr[\sfhyb{}{}(0)=1] - \Pr[\sfhyb{}{}(1)=1] } \le {\mathsf{negl}}(\lambda)$.
\end{proposition} \begin{proof} We construct an adversary $\mathcal{B}_\mathsf{skcd}$ that breaks the OT-CD security of $\Sigma_{\mathsf{skcd}}$ assuming that $\mathcal{A}$ distinguishes these two experiments. $\mathcal{B}_\mathsf{skcd}$ plays the experiment $\expb{\Sigma_\mathsf{skcd},\mathcal{B}_\mathsf{skcd}}{otsk}{cert}{del}(\lambda,b')$ for some $b'\in \{0,1\}$. First, $\mathcal{B}_\mathsf{skcd}$ generates $(\mathsf{nce}.\keys{pk},\mathsf{nce}.\keys{sk},\mathsf{nce}.\mathsf{aux}) \leftarrow \mathsf{NCE}.\algo{KeyGen}(1^\lambda)$ and sends $\mathsf{nce}.\keys{pk}$ to $\mathcal{A}$. When $\mathcal{A}$ sends $(m_0,m_1)$, $\mathcal{B}_\mathsf{skcd}$ sends $(m_0,m_1)$ to the challenger of $\expb{\Sigma_\mathsf{skcd},\mathcal{B}_\mathsf{skcd}}{otsk}{cert}{del}(\lambda,b')$, receives $\mathsf{ske}.\keys{CT}^\ast$, and generates $\mathsf{nce}.\widetilde{\ct} \leftarrow \mathsf{NCE}.\algo{Fake}(\mathsf{nce}.\keys{pk},\mathsf{nce}.\keys{sk},\mathsf{nce}.\mathsf{aux})$. $\mathcal{B}_\mathsf{skcd}$ sends $(\mathsf{nce}.\widetilde{\ct},\mathsf{ske}.\keys{CT}^\ast)$ to $\mathcal{A}$ as the challenge ciphertext. At some point, $\mathcal{A}$ outputs $\keys{cert}$. $\mathcal{B}_\mathsf{skcd}$ passes $\keys{cert}$ to the challenger of OT-CD SKE. If the challenger returns $\mathsf{ske}.\keys{sk}$, $\mathcal{B}_\mathsf{skcd}$ generates $\widetilde{\keys{sk}} \leftarrow \mathsf{NCE}.\algo{Reveal}(\mathsf{nce}.\keys{pk},\mathsf{nce}.\keys{sk},\mathsf{nce}.\mathsf{aux},\allowbreak \mathsf{nce}.\widetilde{\ct},\mathsf{ske}.\keys{sk})$ and sends $\widetilde{\keys{sk}}$ to $\mathcal{A}$. Otherwise, $\mathcal{B}_\mathsf{skcd}$ sends $\bot$ to $\mathcal{A}$. Finally, $\mathcal{B}_\mathsf{skcd}$ outputs whatever $\mathcal{A}$ outputs. \begin{itemize} \item If $b'=0$, i.e., $\mathsf{ske}.\keys{CT}^\ast = \SKE.\algo{Enc}(\mathsf{ske}.\keys{sk},m_0)$, $\mathcal{B}_\mathsf{skcd}$ perfectly simulates $\sfhyb{}{}(0)$. 
\item If $b'=1$, i.e., $\mathsf{ske}.\keys{CT}^\ast = \SKE.\algo{Enc}(\mathsf{ske}.\keys{sk},m_1)$, $\mathcal{B}_\mathsf{skcd}$ perfectly simulates $\sfhyb{}{}(1)$. \end{itemize} Thus, if $\mathcal{A}$ distinguishes the two experiments, $\mathcal{B}_\mathsf{skcd}$ breaks the OT-CD security. This completes the proof. \end{proof} By~\cref{prop:pkecd_hyb_one,prop:pkecd_hyb_end}, we immediately obtain~\cref{thm:pke_cd_from_sk_cd_and_pke}. \end{proof} By~\cref{thm:ske_cert_del_no_assumption,thm:indcpa-pke_to_rnc-pke,thm:pke_cd_from_sk_cd_and_pke}, we immediately obtain the following corollary. \begin{corollary} If there exists IND-CPA secure PKE against QPT adversaries, there exists IND-CPA-CD secure PKE with certified deletion. \end{corollary} \paragraph{Reusable SKE with certified deletion.} We can construct a secret key variant of $\Sigma_{\mathsf{pkcd}}$ above (that is, reusable SKE with certified deletion) by replacing $\Sigma_\mathsf{nce}$ with a secret key NCE scheme. We omit the proof since it is almost the same as that of~\cref{thm:pke_cd_from_sk_cd_and_pke}. By~\cref{thm:indcpa-pke_to_rnc-pke} and the fact that OWFs imply (reusable) SKE~\cite{SIAMCOMP:HILL99,JACM:GolGolMic86}, we also obtain the following theorem. \begin{theorem}\label{thm:reusable_SKE_cd} If there exist OWFs against QPT adversaries, there exists IND-CPA-CD secure SKE with certified deletion. \end{theorem} See the definition and construction of reusable SKE with certified deletion in~\cref{sec:reusable_SKE_cd}. \section{Introduction}\label{sec:intro} The no-cloning theorem, which states that an unknown quantum state cannot be copied in general, is one of the most fundamental principles in quantum physics. As any classical information can be trivially copied, this indicates a fundamental difference between classical and quantum information.
The no-cloning theorem has been the basis of many quantum cryptographic protocols, including quantum money \cite{Wiesner83} and quantum key distribution~\cite{BB84}. Broadbent and Islam \cite{TCC:BroIsl20} used the principle to construct \emph{quantum encryption with certified deletion}. In this primitive, a sender encrypts a classical message to generate a quantum ciphertext. A receiver in possession of the quantum ciphertext and a classical decryption key can either decrypt the ciphertext or ``delete'' the encrypted message by generating a classical certificate. Once a valid certificate of deletion has been generated, no adversary can recover the message \emph{even if the decryption key is given}.\footnote{We note that if the adversary is given the decryption key before the deletion, it can decrypt the ciphertext to obtain the message and keep it even after the deletion, but such an ``attack'' is unavoidable.} We remark that this functionality is classically impossible to achieve since one can copy a classical ciphertext and keep it so that s/he can decrypt it at any later time. They prove the security of their construction without relying on any computational assumption, which ensures information-theoretic security. Although they achieved this exciting new functionality, their construction is limited to the one-time symmetric key encryption (SKE) setting. In one-time SKE, a sender and receiver have to share a common key in advance, and the key can be used only once. A possible application scenario of quantum encryption with certified deletion is the following. A user uploads encrypted data to a quantum cloud server. Whenever the user wishes to delete the data, the cloud generates a deletion certificate and sends it to the user. After the user verifies the validity of the certificate, s/he is convinced that the data cannot be recovered even if the decryption key is accidentally leaked later.
Such quantum encryption could prevent data retention and help to implement the right to be forgotten~\cite{GDPR16}. In this scenario, one-time SKE is quite inconvenient. Due to the one-time restriction, the user has to locally keep as many decryption keys as the number of encrypted data items in the cloud, in which case there seems to be no advantage in uploading the data to the cloud server: if the user has such large storage, s/he could have just locally kept the messages rather than uploading encryptions of them to the cloud. Also, in some cases, a party other than the decryptor may want to upload data to the cloud. This usage would be possible if we could extend quantum encryption with certified deletion to public key encryption (PKE). Note that the one-time restriction is automatically resolved for PKE by a simple hybrid argument. Even more flexibly, a single piece of encrypted data on the cloud may need to be decrypted by multiple users according to some access control policy. Such access control has been realized by attribute-based encryption (ABE)~\cite{EC:SahWat05,CCS:GPSW06} in classical cryptography. Thus, it would be useful to have ABE with certified deletion. Our first question in this work is: \begin{center} \emph{Can we achieve PKE and ABE with certified deletion?} \end{center} Moreover, a sender needs to send quantum states (random BB84 states~\cite{BB84}) over a quantum channel in the construction by Broadbent and Islam~\cite{TCC:BroIsl20}. Although generating and sending random BB84 states are not particularly difficult tasks (and in fact they are already possible with current technologies), a classical sender and communication over only a classical channel are of course much easier. Besides, communicating over a classical channel is desirable in the application scenario above since many parties want to upload data to a cloud.
In addition to these practical motivations, achieving certified deletion over a classical channel is also an interesting theoretical research direction given the fact that many quantum cryptographic protocols have been ``dequantized'' recently~\cite{FOCS:Mahadev18a,AC:CCKW19,CoRR:RadSat19,STOC:AGKZ20,EPRINT:KitNisYam20}. Thus, our second question in this work is: \begin{center} \emph{Can we achieve PKE with certified deletion, a classical sender, and classical communication?} \end{center} In the definition by Broadbent and Islam~\cite{TCC:BroIsl20}, a verification key for a deletion certificate must be kept secret (privately verifiable). If the verification key is revealed, the security is no longer guaranteed in their scheme. We can also consider public verifiability, which means that the security holds even if a verification key is revealed to adversaries. Broadbent and Islam left the following question as an open problem: \begin{center} \emph{Is publicly verifiable encryption with certified deletion possible?} \end{center} \subsection{Our Result} We answer the three questions above affirmatively in this work. \paragraph{PKE and ABE with certified deletion and quantum communication.} We present formal definitions of PKE and ABE with certified deletion, and give constructions of them: \begin{itemize} \item We construct a PKE scheme with certified deletion assuming the existence of (classical) IND-CPA secure PKE. We also observe that essentially the same construction gives a reusable SKE scheme with certified deletion if we use IND-CPA secure SKE, which exists under the existence of one-way functions (OWFs), instead of PKE. \item We construct a (public-key) ABE scheme with certified deletion assuming the existence of indistinguishability obfuscation (IO)~\cite{JACM:BGIRSVY12} and OWFs.
This construction satisfies collusion resistance and adaptive security, i.e., it is secure against adversaries that adaptively select a target attribute and obtain arbitrarily many decryption keys. \end{itemize} We note that our constructions rely on computational assumptions and thus are not information-theoretically secure, unlike the construction in \cite{TCC:BroIsl20}. This is unavoidable since even plain PKE or ABE cannot be information-theoretically secure. We also note that the constructions above are privately verifiable, as in the definition of one-time SKE by Broadbent and Islam~\cite{TCC:BroIsl20}. Our main technical insight is that we can combine the one-time secure SKE with certified deletion of \cite{TCC:BroIsl20} and plain PKE to construct PKE with certified deletion by a simple hybrid encryption technique if the latter satisfies \emph{receiver non-committing} (RNC) security~\cite{STOC:CFGN96,EC:JarLys00,TCC:CanHalKat05}. Since it is known that PKE/SKE with RNC security can be constructed from any IND-CPA secure PKE/SKE \cite{TCC:CanHalKat05,C:KNTY19}, our first result follows. For the second result, we first give a suitable definition of RNC security for ABE that suffices for our purpose. Then we construct an ABE scheme with RNC security based on the existence of IO and OWFs. By combining this with one-time SKE with certified deletion via hybrid encryption, we obtain an ABE scheme with certified deletion. \paragraph{PKE with certified deletion, a classical sender, and classical communication.} We also present formal definitions of PKE with certified deletion and classical communication, and give two constructions: \begin{itemize} \item We construct a PKE scheme with privately verifiable certified deletion and classical communication in the quantum random oracle model (QROM)~\cite{AC:BDFLSZ11}. Our construction is secure under the LWE assumption in the QROM.
\item We construct a PKE scheme with publicly verifiable certified deletion and classical communication. Our construction uses one-shot signatures~\cite{STOC:AGKZ20} and extractable witness encryption~\cite{STOC:GGSW13,C:GKPVZ13}. This solves the open problem by Broadbent and Islam~\cite{TCC:BroIsl20}. \end{itemize} In both constructions, a sender is a classical algorithm, but needs to interact with a receiver during ciphertext generation. In the classical communication case, an encryption algorithm must be interactive even if we consider computationally bounded adversaries (and even in the QROM). The reason is that a malicious QPT receiver can generate two copies of a quantum ciphertext from the classical messages sent from a sender, one of which is used for generating a deletion certificate while the other is used for decryption. Moreover, both constructions rely on computational assumptions and thus are not information-theoretically secure, unlike the construction by Broadbent and Islam~\cite{TCC:BroIsl20}. This is unavoidable even if an encryption algorithm is interactive (and even in the QROM). The reason is that a computationally unbounded malicious receiver can classically simulate its honest behavior to get a classical description of the quantum ciphertext. For the first construction, we use a new property of noisy trapdoor claw-free (NTCF) functions, \emph{the cut-and-choose adaptive hardcore property} (\cref{lem:cut_and_choose_adaptive_hardcore}), which we introduce in this work. We prove that the cut-and-choose adaptive hardcore property is reduced to the adaptive hardcore bit property~\cite{FOCS:BCMVV18} and injective invariance~\cite{FOCS:Mahadev18a}. Those properties hold under the LWE assumption~\cite{FOCS:BCMVV18,FOCS:Mahadev18a}. This new technique is of independent interest. The idea of the second construction is to encrypt a plaintext by witness encryption so that a valid witness is a one-shot signature for bit $0$.
We use a valid one-shot signature for bit $1$ as a deletion certificate. The one-shot property of one-shot signatures prevents decryption of the witness encryption after issuing a valid deletion certificate. Georgiou and Zhandry~\cite{EPRINT:GeoZha20} used a similar combination of one-shot signatures and witness encryption to construct unclonable decryption keys. \subsection{Related work}\label{sec:related_work} Before the work by Broadbent and Islam~\cite{TCC:BroIsl20}, Fu and Miller~\cite{FM18} and Coiteux-Roy and Wolf~\cite{CRW19} also studied the concept of certifying deletion of information in different settings. (See~\cite{TCC:BroIsl20} for the comparison with these works.) The quantum encryption scheme with certified deletion by Broadbent and Islam~\cite{TCC:BroIsl20} is based on Wiesner's conjugate coding, which is the backbone of quantum money~\cite{Wiesner83} and quantum key distribution~\cite{BB84}. A similar idea has been used in many constructions in quantum cryptography, which include (but are not limited to) revocable quantum timed-release encryption~\cite{JACM:Unruh15}, uncloneable quantum encryption~\cite{TQC:BroLor20}, single-decryptor encryption~\cite{EPRINT:GeoZha20}, and copy protection/secure software leasing~\cite{CMP20}. Among them, revocable quantum timed-release encryption is conceptually similar to quantum encryption with certified deletion. In this primitive, a receiver can decrypt a quantum ciphertext only after spending a certain amount of time $T$. The receiver can also choose to return the ciphertext before the time $T$ is over, in which case it is ensured that the message can no longer be recovered. As observed by Broadbent and Islam~\cite{TCC:BroIsl20}, an essential difference from quantum encryption with certified deletion is that revocable quantum timed-release encryption does not have a mechanism to generate a \emph{classical} certificate of deletion.
Moreover, the construction by Unruh~\cite{JACM:Unruh15} heavily relies on the random oracle heuristic~\cite{C:BelRog97,AC:BDFLSZ11}, and there is no known construction without random oracles. Kundu and Tan~\cite{KunduTan} constructed (one-time symmetric key) quantum encryption with certified deletion with device-independent security, i.e., the security holds even if quantum devices are untrusted. Moreover, they showed that their construction satisfies composable security. The notion of NTCF functions was first introduced by Brakerski et al.~\cite{FOCS:BCMVV18}, and was further extended by Mahadev to construct a classical verification protocol for quantum computation~\cite{FOCS:Mahadev18a}. (See also a related primitive called QFactory~\cite{AC:CCKW19}.) The adaptive hardcore bit property of NTCF functions was also used for semi-quantum money~\cite{CoRR:RadSat19} and secure software leasing with classical communication~\cite{EPRINT:KitNisYam20}. Ananth and Kaleoglu concurrently and independently presented reusable secret key and public key uncloneable encryption schemes~\cite{EPRINT:AnaKal21}. Uncloneable encryption~\cite{TQC:BroLor20} is related to but different from quantum encryption with certified deletion. Uncloneable encryption prevents adversaries from creating multiple ciphertexts whose plaintext is the same as that of the original ciphertext. Their constructions are based on a similar idea to one of our main ideas. Specifically, their construction is obtained by combining one-time secret key uncloneable encryption and standard SKE/PKE with the ``fake-key property'', which is similar to the RNC security. \input{sec_tech_overview_quantum} \input{sec_tech_overview_classical} \section{Preliminaries}\label{sec:prelim} \subsection{Notations and Mathematical Tools}\label{sec:notation} We introduce basic notations and mathematical tools used in this paper.
In this paper, $x \leftarrow X$ denotes selecting an element from a finite set $X$ uniformly at random, and $y \gets \algo{A}(x)$ denotes assigning to $y$ the output of a probabilistic or deterministic algorithm $\algo{A}$ on an input $x$. When we explicitly show that $\algo{A}$ uses randomness $r$, we write $y \gets \algo{A}(x;r)$. When $D$ is a distribution, $x \leftarrow D$ denotes sampling an element from $D$. Let $[\ell]$ denote the set of integers $\{1, \ldots, \ell \}$, $\lambda$ denote a security parameter, and $y \coloneqq z$ denote that $y$ is set, defined, or substituted by $z$. For a string $s \in \zo{\ell}$, $s[i]$ denotes the $i$-th bit of $s$. QPT stands for quantum polynomial time. PPT stands for (classical) probabilistic polynomial time. For a subset $S\subseteq W$ of a set $W$, $\overline{S}$ is the complement of $S$, i.e., $\overline{S}\coloneqq W\setminus S$. A function $f: \mathbb{N} \rightarrow \mathbb{R}$ is a negligible function if for any constant $c$, there exists $\lambda_0 \in \mathbb{N}$ such that for any $\lambda>\lambda_0$, $f(\lambda) < \lambda^{-c}$. We write $f(\lambda) \leq {\mathsf{negl}}(\lambda)$ to denote $f(\lambda)$ being a negligible function. A function $g: \mathbb{N} \rightarrow \mathbb{R}$ is a noticeable function if there exist constants $c$ and $\lambda_0$ such that for any $\lambda \ge \lambda_0$, $g(\lambda) \ge \lambda^{-c}$. The trace distance between two states $\rho$ and $\sigma$ is given by $\parallel \rho-\sigma \parallel_{tr}$, where $\parallel A\parallel_{tr}\coloneqq \mathrm{Tr}\sqrt{A^{\dagger}A}$ is the trace norm. We call a function $f$ a density on $X$ if $f:X\rightarrow[0,1]$ and $\sum_{x\in X}f(x)=1$. For two densities $f_0$ and $f_1$ over the same finite domain $X$, the Hellinger distance between $f_0$ and $f_1$ is ${\bf H}^2(f_0,f_1)\coloneqq 1-\sum_{x\in X}\sqrt{f_0(x)f_1(x)}$. \subsection{Cryptographic Tools}\label{sec:basic_crypto} In this section, we review cryptographic tools used in this paper.
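As a quick numerical illustration of the Hellinger distance defined above, the following Python sketch evaluates ${\bf H}^2(f_0,f_1)= 1-\sum_{x\in X}\sqrt{f_0(x)f_1(x)}$ on two toy densities of our own choosing (the densities and names below are illustrative, not taken from the paper):

```python
# Numerical illustration of H^2(f0, f1) = 1 - sum_x sqrt(f0(x) * f1(x))
# for two toy densities over the same finite domain X = {0, 1, 2, 3}.
import math


def hellinger_sq(f0, f1):
    """H^2 between two densities given as dicts over the same domain."""
    return 1.0 - sum(math.sqrt(f0[x] * f1[x]) for x in f0)


f_unif = {x: 0.25 for x in range(4)}          # uniform density on X
f_skew = {0: 0.7, 1: 0.1, 2: 0.1, 3: 0.1}     # a skewed density on X

# Identical densities have distance 0; distinct ones lie in (0, 1].
d = hellinger_sq(f_unif, f_skew)
```

For identical densities the sum of $\sqrt{f_0(x)f_1(x)}$ telescopes to $\sum_x f_0(x)=1$, so the distance is $0$, as the sketch confirms.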
\paragraph{Public key encryption.} \begin{definition}[Public Key Encryption (Syntax)]\label{definition:public key} A public key encryption scheme $\Sigma=(\algo{KeyGen},\algo{Enc},\algo{Dec})$ is a triple of PPT algorithms, a key generation algorithm $\algo{KeyGen}$, an encryption algorithm $\algo{Enc}$ and a decryption algorithm $\algo{Dec}$, with plaintext space $\mathcal{M}$. \begin{description} \item[$\algo{KeyGen}(1^\lambda)\rightarrow (\keys{pk},\keys{sk})$:] The key generation algorithm takes as input the security parameter $1^\lambda$ and outputs a public key $\keys{pk}$ and a secret key $\keys{sk}$. \item[$\algo{Enc}(\keys{pk},m) \rightarrow \keys{CT}$:] The encryption algorithm takes as input $\keys{pk}$ and a plaintext $m \in \mathcal{M}$, and outputs a ciphertext $\keys{CT}$. \item[$\algo{Dec}(\keys{sk},\keys{CT}) \rightarrow m^\prime \mbox{ or } \bot$:] The decryption algorithm takes as input $\keys{sk}$ and $\keys{CT}$, and outputs a plaintext $m^\prime$ or $\bot$. \end{description} \end{definition} \begin{definition}[Correctness for PKE]\label{def:pke_correctness} For any $\lambda\in \mathbb{N}$, $m\in\mathcal{M}$, \begin{align} \Pr\left[ \algo{Dec}(\keys{sk},\keys{CT})\ne m \ \middle | \begin{array}{ll} (\keys{pk},\keys{sk})\leftarrow \algo{KeyGen}(1^\lambda)\\ \keys{CT} \leftarrow \algo{Enc}(\keys{pk},m) \end{array} \right] \le{\mathsf{negl}}(\lambda). \end{align} \end{definition} \begin{definition}[OW-CPA security]\label{definition:OW-CPA} Let $\Sigma =(\algo{KeyGen},\algo{Enc},\algo{Dec})$ be a PKE scheme. For QPT adversaries $\mathcal{A}$, we define the following security experiment $\expa{\Sigma,\mathcal{A}}{ow}{cpa}(\lambda)$. \begin{enumerate} \item The challenger generates $(\keys{pk},\keys{sk})\leftarrow \algo{KeyGen}(1^{\lambda})$, chooses $m\leftarrow \mathcal{M}$, computes $\keys{CT}\leftarrow \algo{Enc}(\keys{pk},m)$, and sends $(\keys{pk},\keys{CT})$ to $\mathcal{A}$. \item $\mathcal{A}$ outputs $m'$. 
The experiment outputs $1$ if $m'=m$ and otherwise $0$. \end{enumerate} We say that $\Sigma$ is OW-CPA secure if for any QPT $\mathcal{A}$, it holds that \begin{align} \advb{\Sigma,\mathcal{A}}{ow}{cpa}(\lambda) \coloneqq \Pr[\expa{\Sigma,\mathcal{A}}{ow}{cpa}(\lambda)=1]\leq {\mathsf{negl}}(\lambda). \end{align} Note that we assume $1/\abs{\mathcal{M}}$ is negligible. \end{definition} \begin{definition}[IND-CPA security]\label{definition:IND-CPA} Let $\Sigma =(\algo{KeyGen},\algo{Enc},\algo{Dec})$ be a PKE scheme. For QPT adversaries $\mathcal{A}$, we define the following security experiment $\expa{\Sigma,\mathcal{A}}{ind}{cpa}(\lambda,b)$. \begin{enumerate} \item The challenger generates $(\keys{pk},\keys{sk})\leftarrow \algo{KeyGen}(1^{\lambda})$, and sends $\keys{pk}$ to $\mathcal{A}$. \item $\mathcal{A}$ sends $(m_0,m_1)\in\mathcal{M}^2$ to the challenger. \item The challenger computes $\keys{CT}_b \leftarrow \algo{Enc}(\keys{pk},m_b)$, and sends $\keys{CT}_b$ to $\mathcal{A}$. \item $\mathcal{A}$ outputs $b'\in\{0,1\}$. This is the output of the experiment. \end{enumerate} We say that $\Sigma$ is IND-CPA secure if for any QPT $\mathcal{A}$, it holds that \begin{align} \advb{\Sigma,\mathcal{A}}{ind}{cpa}(\lambda) \coloneqq \abs{\Pr[\expa{\Sigma,\mathcal{A}}{ind}{cpa}(\lambda,0)=1] - \Pr[\expa{\Sigma,\mathcal{A}}{ind}{cpa}(\lambda,1)=1]} \leq {\mathsf{negl}}(\lambda). \end{align} \end{definition} It is well known that IND-CPA security implies OW-CPA security. There are many IND-CPA secure PKE schemes against QPT adversaries under standard cryptographic assumptions. A famous one is the Regev PKE scheme, which is IND-CPA secure if the learning with errors (LWE) assumption holds against QPT adversaries~\cite{JACM:Regev09}. See the references for the LWE assumption and constructions of post-quantum secure PKE~\cite{JACM:Regev09,STOC:GenPeiVai08}.
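The IND-CPA experiment above can be made concrete as a small game in Python. The ``scheme'' below is a hypothetical placeholder of our own (a fresh one-time pad per encryption, so the challenge ciphertext is independent of $b$) and the adversary guesses at random; both serve only to exercise the experiment's interface, not to model any construction in this paper:

```python
# Sketch of the IND-CPA experiment Exp^{ind-cpa}(lambda, b) as a game.
# The "scheme" is a hypothetical placeholder: a fresh one-time pad per
# encryption, so the ciphertext carries no information about b.
import os
import random


def keygen():
    sk = os.urandom(16)
    return b"pk", sk  # toy public key; carries no information


def enc(pk, m):
    pad = os.urandom(len(m))  # fresh pad; hides m perfectly
    return bytes(x ^ y for x, y in zip(m, pad))


def ind_cpa_experiment(adversary, b):
    pk, sk = keygen()                      # step 1: keys, pk to adversary
    m0, m1 = adversary.choose(pk)          # step 2: adversary picks (m0, m1)
    ct = enc(pk, m0 if b == 0 else m1)     # step 3: challenge ciphertext
    return adversary.guess(ct)             # step 4: adversary outputs b'


class GuessingAdversary:
    def choose(self, pk):
        return b"message-0!", b"message-1!"

    def guess(self, ct):
        return random.randrange(2)


def advantage(adversary, trials=2000):
    # Empirical |Pr[Exp(., 0) = 1] - Pr[Exp(., 1) = 1]|.
    p0 = sum(ind_cpa_experiment(adversary, 0) for _ in range(trials)) / trials
    p1 = sum(ind_cpa_experiment(adversary, 1) for _ in range(trials)) / trials
    return abs(p0 - p1)
```

Since this toy ciphertext is statistically independent of $b$, any adversary's empirical advantage concentrates near $0$ as the number of trials grows.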
\begin{definition}[Indistinguishability Obfuscator~\cite{JACM:BGIRSVY12}]\label{def:io} A PPT algorithm $i\mathcal{O}$ is an IO for a circuit class $\{\mathcal{C}_\lambda\}_{\lambda \in \mathbb{N}}$ if it satisfies the following two conditions. \begin{description} \item[Functionality:] For any security parameter $\lambda \in \mathbb{N}$, circuit $C \in \mathcal{C}_\lambda$, and input $x$, we have that \begin{align} \Pr[C'(x)=C(x) \mid C' \leftarrow i\mathcal{O}(C)] = 1\enspace. \end{align} \item[Indistinguishability:] For any QPT distinguisher $\mathcal{D}$ and for any pair of circuits $C_0, C_1 \in \mathcal{C}_\lambda$ such that for any input $x$, $C_0(x) = C_1(x)$ and $\abs{C_0}=\abs{C_1}$, it holds that \begin{align} \adva{i\mathcal{O},\mathcal{D}}{io}(\lambda) \coloneqq \abs{ \Pr\left[\mathcal{D}(i\mathcal{O}(C_0))= 1\right] - \Pr\left[\mathcal{D}(i\mathcal{O}(C_1))= 1\right] } \leq {\mathsf{negl}}(\lambda)\enspace. \end{align} \end{description} \end{definition} There exist candidate constructions of IO secure against QPT adversaries~\cite{STOC:GayPas21,EC:WeeWic21,EPRINT:BDGM20b}. \paragraph{Attribute-based encryption.} We review the notion of (key-policy) attribute-based encryption (ABE)~\cite{EC:SahWat05,CCS:GPSW06}. \begin{definition}[Attribute-Based Encryption (Syntax)]\label{def:abe_syntax} An ABE scheme is a tuple of PPT algorithms $(\algo{Setup},\algo{KeyGen},\algo{Enc},\algo{Dec})$ with plaintext space $\mathcal{M}$, attribute space $\mathcal{X}$, and policy space $\mathcal{P}$. \begin{description} \item[$\algo{Setup}(1^\lambda)\rightarrow (\keys{pk},\keys{msk})$:] The setup algorithm takes as input the security parameter $1^\lambda$ and outputs a public key $\keys{pk}$ and a master secret key $\keys{msk}$. \item[$\algo{KeyGen} (\keys{msk},P) \rightarrow \keys{sk}_P$:] The key generation algorithm takes as input $\keys{msk}$ and a policy $P \in \mathcal{P}$, and outputs a secret key $\keys{sk}_P$.
\item[$\algo{Enc}(\keys{pk},X,m) \rightarrow \keys{CT}_X$:] The encryption algorithm takes as input $\keys{pk}$, an attribute $X\in\mathcal{X}$, and a plaintext $m \in \mathcal{M}$, and outputs a ciphertext $\keys{CT}_X$. \item[$\algo{Dec}(\keys{sk}_P,\keys{CT}_X) \rightarrow m^\prime \mbox{ or } \bot$:] The decryption algorithm takes as input $\keys{sk}_P$ and $\keys{CT}_X$, and outputs a plaintext $m^\prime$ or $\bot$. \end{description} \end{definition} \begin{definition}[Perfect Correctness for ABE]\label{def:abe_correctness} For any $\lambda\in \mathbb{N}$, $m\in\mathcal{M}$, $P\in\mathcal{P}$, and $X\in\mathcal{X}$ such that $P(X)=\top$, \begin{align} \Pr\left[ \algo{Dec}(\keys{sk}_P,\keys{CT}_X)\ne m \ \middle | \begin{array}{ll} (\keys{pk},\keys{msk})\leftarrow \algo{Setup}(1^\lambda)\\ \keys{sk}_P \leftarrow \algo{KeyGen}(\keys{msk},P)\\ \keys{CT}_X \leftarrow \algo{Enc}(\keys{pk},X,m) \end{array} \right] =0. \end{align} \begin{remark} Though we allow negligible decryption error for other primitives in this paper, we require perfect correctness for ABE. This is needed for the security proof of the non-committing ABE scheme in~\cref{sec:NCABE_from_IO}. \end{remark} \end{definition} \begin{definition}[(Adaptive) IND-CPA Security for ABE]\label{def:abe_ind-cpa} Let $\Sigma=(\algo{Setup}, \algo{KeyGen}, \algo{Enc}, \algo{Dec})$ be an ABE scheme with plaintext space $\mathcal{M}$, attribute space $\mathcal{X}$, and policy space $\mathcal{P}$. We consider the following security experiment $\expa{\Sigma,\mathcal{A}}{ind}{cpa}(\lambda,b)$. \begin{enumerate} \item The challenger computes $(\keys{pk},\keys{msk}) \leftarrow \algo{Setup}(1^\lambda)$ and sends $\keys{pk}$ to $\mathcal{A}$. \item $\mathcal{A}$ sends a query $P\in \mathcal{P}$ to the challenger and it returns $\keys{sk}_P \leftarrow \algo{KeyGen}(\keys{msk},P)$ to $\mathcal{A}$. This process can be repeated polynomially many times. 
\item $\mathcal{A}$ sends $X^\ast \in \mathcal{X}$ and $(m_0,m_1) \in \mathcal{M}^2$ to the challenger where $X^\ast$ must satisfy $P(X^\ast)=\bot$ for all key queries $P$ sent so far. \item The challenger computes $\keys{CT}_b \leftarrow \algo{Enc}(\keys{pk},X^\ast,m_b)$ and sends $\keys{CT}_b$ to $\mathcal{A}$. \item Again, $\mathcal{A}$ can send key queries $P$ that must satisfy $P(X^\ast)=\bot$. \item Finally, $\mathcal{A}$ outputs $b'\in \{0,1\}$. \end{enumerate} We say that $\Sigma$ is IND-CPA secure if for any QPT adversary $\mathcal{A}$, it holds that \begin{align} \advb{\Sigma,\mathcal{A}}{ind}{cpa}(\lambda)\coloneqq \abs{\Pr[ \expa{\Sigma,\mathcal{A}}{ind}{cpa}(\lambda, 0)=1] - \Pr[ \expa{\Sigma,\mathcal{A}}{ind}{cpa}(\lambda, 1)=1] }\leq {\mathsf{negl}}(\lambda). \end{align} \end{definition} If $\mathcal{X} =\zo{\ell}$ where $\ell$ is some polynomial and $\mathcal{P}$ consists of all polynomial-sized Boolean circuits, we call it ABE for circuits in this paper. It is known that \emph{selectively secure} ABE for circuits exists under the LWE assumption, which can be upgraded into an adaptively secure one by complexity leveraging if we assume subexponential hardness of the LWE problem~\cite{JACM:GorVaiWee15}. Alternatively, if there exist IO and OWFs, there exists adaptively secure functional encryption for $\compclass{P}/\compclass{poly}$~\cite{C:Waters15,C:ABSV15}, which can be trivially downgraded into an (adaptively) IND-CPA secure ABE scheme for circuits. \begin{theorem}\label{thm:ABE_circuits_from_LWE_or_IO} If one of the following holds, there exists an (adaptively) IND-CPA secure (key-policy) ABE scheme for circuits against QPT adversaries. \begin{itemize} \item the LWE problem is subexponentially hard against QPT adversaries~\cite{JACM:GorVaiWee15}. \item there exist IO and OWFs secure against QPT adversaries~\cite{C:Waters15,C:ABSV15}.
\end{itemize} \end{theorem} \paragraph{Encryption with certified deletion.} Broadbent and Islam introduced the notion of encryption with certified deletion~\cite{TCC:BroIsl20}. Their notion is for secret key encryption (SKE). They consider a setting where a secret key is used only once (that is, one-time SKE). Although it is easy to extend the definition to the reusable secret key setting, we describe the definition for the one-time setting in this section. We provide a definition adapted to the reusable setting in~\cref{sec:reusable_SKE_cd}. \begin{definition}[One-Time SKE with Certified Deletion (Syntax)]\label{def:sk_cert_del} A one-time secret key encryption scheme with certified deletion is a tuple of QPT algorithms $(\algo{KeyGen},\algo{Enc},\algo{Dec},\algo{Del},\algo{Vrfy})$ with plaintext space $\mathcal{M}$ and key space $\mathcal{K}$. \begin{description} \item[$\algo{KeyGen} (1^\lambda) \rightarrow \keys{sk}$:] The key generation algorithm takes as input the security parameter $1^\lambda$ and outputs a secret key $\keys{sk} \in \mathcal{K}$. \item[$\algo{Enc}(\keys{sk},m) \rightarrow \keys{CT}$:] The encryption algorithm takes as input $\keys{sk}$ and a plaintext $m\in\mathcal{M}$ and outputs a ciphertext $\keys{CT}$. \item[$\algo{Dec}(\keys{sk},\keys{CT}) \rightarrow m^\prime \mbox{ or }\bot$:] The decryption algorithm takes as input $\keys{sk}$ and $\keys{CT}$ and outputs a plaintext $m^\prime \in \mathcal{M}$ or $\bot$. \item[$\algo{Del}(\keys{CT}) \rightarrow \keys{cert}$:] The deletion algorithm takes as input $\keys{CT}$ and outputs a certificate $\keys{cert}$. \item[$\algo{Vrfy}(\keys{sk},\keys{cert})\rightarrow \top \mbox{ or }\bot$:] The verification algorithm takes as input $\keys{sk}$ and $\keys{cert}$ and outputs $\top$ or $\bot$. \end{description} \end{definition} \begin{definition}[Correctness for One-Time SKE with Certified Deletion]\label{def:sk_cd_correctness} There are two types of correctness.
One is decryption correctness and the other is verification correctness. \begin{description} \item[Decryption correctness:] For any $\lambda\in \mathbb{N}$, $m\in\mathcal{M}$, \begin{align} \Pr\left[ \algo{Dec}(\keys{sk},\keys{CT})\ne m \ \middle | \begin{array}{ll} \keys{sk}\leftarrow \algo{KeyGen}(1^\lambda)\\ \keys{CT} \leftarrow \algo{Enc}(\keys{sk},m) \end{array} \right] \le{\mathsf{negl}}(\lambda). \end{align} \item[Verification correctness:] For any $\lambda\in \mathbb{N}$, $m\in\mathcal{M}$, \begin{align} \Pr\left[ \algo{Vrfy}(\keys{sk},\keys{cert})=\bot \ \middle | \begin{array}{ll} \keys{sk}\leftarrow \algo{KeyGen}(1^\lambda)\\ \keys{CT} \leftarrow \algo{Enc}(\keys{sk},m)\\ \keys{cert} \leftarrow \algo{Del}(\keys{CT}) \end{array} \right] \le{\mathsf{negl}}(\lambda). \end{align} \end{description} \end{definition} \begin{definition}[Certified Deletion Security for One-Time SKE]\label{def:sk_certified_del} Let $\Sigma=(\algo{KeyGen}, \algo{Enc}, \algo{Dec}, \algo{Del}, \algo{Vrfy})$ be a secret key encryption scheme with certified deletion. We consider the following security experiment $\expb{\Sigma,\mathcal{A}}{otsk}{cert}{del}(\lambda,b)$. \begin{enumerate} \item The challenger computes $\keys{sk} \leftarrow \algo{KeyGen}(1^\lambda)$. \item $\mathcal{A}$ sends $(m_0,m_1)\in\mathcal{M}^2$ to the challenger. \item The challenger computes $\keys{CT}_b \leftarrow \algo{Enc}(\keys{sk},m_b)$ and sends $\keys{CT}_b$ to $\mathcal{A}$. \item $\mathcal{A}$ sends $\keys{cert}$ to the challenger. \item The challenger computes $\algo{Vrfy}(\keys{sk},\keys{cert})$. If the output is $\bot$, the challenger sends $\bot$ to $\mathcal{A}$. If the output is $\top$, the challenger sends $\keys{sk}$ to $\mathcal{A}$. \item $\mathcal{A}$ outputs $b'\in \{0,1\}$.
\end{enumerate} We say that $\Sigma$ is OT-CD secure if for any QPT $\mathcal{A}$, it holds that \begin{align} \advc{\Sigma,\mathcal{A}}{otsk}{cert}{del}(\lambda)\coloneqq \abs{\Pr[ \expb{\Sigma,\mathcal{A}}{otsk}{cert}{del}(\lambda, 0)=1] - \Pr[ \expb{\Sigma,\mathcal{A}}{otsk}{cert}{del}(\lambda, 1)=1] }\leq {\mathsf{negl}}(\lambda). \end{align} \end{definition} We sometimes call a scheme one-time SKE with certified deletion if it satisfies OT-CD security. \begin{remark} \cref{def:sk_certified_del} intuitively means that once a valid certificate is issued, decrypting the ciphertext becomes impossible. One might think that it would also be possible to define the inverse: once the ciphertext is decrypted, a valid certificate can no longer be issued. This property is, however, impossible to achieve due to decryption correctness (\cref{def:sk_cd_correctness}). In fact, if the quantum decryption algorithm $\algo{Dec}$ on a quantum ciphertext $\keys{CT}$ succeeds with probability at least $1-{\mathsf{negl}}(\lambda)$, then the gentle measurement lemma guarantees that $\keys{CT}$ is only negligibly disturbed, so a valid certificate can still be issued afterwards. \end{remark} We emphasize that in the existing construction of SKE with certified deletion, the secret key is a classical string, though the ciphertext must be a quantum state. Broadbent and Islam prove the following theorem. \begin{theorem}[\cite{TCC:BroIsl20}]\label{thm:ske_cert_del_no_assumption} There exists an OT-CD secure SKE scheme with certified deletion with $\mathcal{M}= \{0,1\}^{\ell_m}$ and $\mathcal{K}= \{0,1\}^{\ell_k}$, where $\ell_m$ and $\ell_k$ are some polynomials, unconditionally. \end{theorem} \paragraph{Receiver non-committing encryption.} We introduce the notion of (public key) receiver non-committing encryption (RNCE)~\cite{STOC:CFGN96,EC:JarLys00,TCC:CanHalKat05}, which is used in~\cref{sec:const_pk_cd_from_sk,sec:PKE_cd_cc_construction,sec:pk_pv_cd_cc_construction}.
We sometimes simply write NCE to mean RNCE since we consider only RNCE in this paper. See~\cref{sec:reusable_SKE_cd} for the definition of secret key NCE. \begin{definition}[RNCE (Syntax)]\label{def:nce_syntax} An NCE scheme is a tuple of PPT algorithms $(\algo{KeyGen},\algo{Enc},\algo{Dec},\algo{Fake},\algo{Reveal})$ with plaintext space $\mathcal{M}$. \begin{description} \item [$\algo{KeyGen}(1^\lambda)\rightarrow (\keys{pk},\keys{sk},\mathsf{aux})$:] The key generation algorithm takes as input the security parameter $1^\lambda$ and outputs a key pair $(\keys{pk},\keys{sk})$ and auxiliary information $\mathsf{aux}$. \item [$\algo{Enc}(\keys{pk},m)\rightarrow \keys{CT}$:] The encryption algorithm takes as input $\keys{pk}$ and a plaintext $m\in\mathcal{M}$ and outputs a ciphertext $\keys{CT}$. \item [$\algo{Dec}(\keys{sk},\keys{CT})\rightarrow m^\prime \mbox{ or }\bot$:] The decryption algorithm takes as input $\keys{sk}$ and $\keys{CT}$ and outputs a plaintext $m^\prime$ or $\bot$. \item [$\algo{Fake}(\keys{pk},\keys{sk},\mathsf{aux})\rightarrow \widetilde{\ct}$:] The fake encryption algorithm takes as input $\keys{pk}$, $\keys{sk}$, and $\mathsf{aux}$, and outputs a fake ciphertext $\widetilde{\ct}$. \item [$\algo{Reveal}(\keys{pk},\keys{sk},\mathsf{aux},\widetilde{\ct},m)\rightarrow \widetilde{\keys{sk}} $:] The reveal algorithm takes as input $\keys{pk},\keys{sk},\mathsf{aux},\widetilde{\ct}$, and $m$, and outputs a fake secret key $\widetilde{\keys{sk}}$. \end{description} \end{definition} Correctness is the same as that of PKE. \begin{definition}[Receiver Non-Committing (RNC) Security]\label{def:nce_security} Let $\Sigma=(\algo{KeyGen}, \algo{Enc}, \algo{Dec}, \algo{Fake},\algo{Reveal})$ be an NCE scheme. We consider the following security experiment $\expa{\Sigma,\mathcal{A}}{rec}{nc}(\lambda,b)$.
\begin{enumerate} \item The challenger computes $(\keys{pk},\keys{sk},\mathsf{aux}) \leftarrow \algo{KeyGen}(1^\lambda)$ and sends $\keys{pk}$ to $\mathcal{A}$. \item $\mathcal{A}$ sends a query $m \in \mathcal{M}$ to the challenger. \item The challenger does the following. \begin{itemize} \item If $b =0$, the challenger generates $\keys{CT} \leftarrow \algo{Enc}(\keys{pk},m)$ and returns $(\keys{CT},\keys{sk})$ to $\mathcal{A}$. \item If $b=1$, the challenger generates $\widetilde{\ct} \leftarrow \algo{Fake}(\keys{pk},\keys{sk},\mathsf{aux})$ and $\widetilde{\keys{sk}} \leftarrow \algo{Reveal}(\keys{pk},\keys{sk},\mathsf{aux},\widetilde{\ct},m)$ and returns $(\widetilde{\ct},\widetilde{\keys{sk}})$ to $\mathcal{A}$. \end{itemize} \item $\mathcal{A}$ outputs $b'\in \{0,1\}$. \end{enumerate} We say that $\Sigma$ is RNC secure if for any QPT adversary $\mathcal{A}$, it holds that \begin{align} \advb{\Sigma,\mathcal{A}}{rec}{nc}(\lambda)\coloneqq \abs{\Pr[ \expa{\Sigma,\mathcal{A}}{rec}{nc}(\lambda, 0)=1] - \Pr[ \expa{\Sigma,\mathcal{A}}{rec}{nc}(\lambda, 1)=1] }\leq {\mathsf{negl}}(\lambda). \end{align} \end{definition} \begin{theorem}[{\cite{C:KNTY19}}]\label{thm:indcpa-pke_to_rnc-pke} If there exists an IND-CPA secure SKE/PKE scheme (against QPT adversaries), there exists an RNC secure secret/public key NCE scheme (against QPT adversaries) with plaintext space $\{0,1\}^{\ell}$, where $\ell$ is some polynomial, respectively. \end{theorem} Note that Kitagawa, Nishimaki, Tanaka, and Yamakawa~\cite{C:KNTY19} prove the theorem above for the SKE case, but it is easy to extend their theorem to the PKE setting. We also note that the core idea of Kitagawa et al. is based on the observation by Canetti, Halevi, and Katz~\cite{TCC:CanHalKat05}. \paragraph{Non-interactive zero-knowledge.} We review non-interactive zero-knowledge (NIZK) which is used in~\cref{sec:NCABE_from_IO}.
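As a toy sanity check on the NCE syntax and RNC security notion above (our own illustration, not a construction from the cited works), note that in the one-time secret-key setting the one-time pad already admits $\algo{Fake}$ and $\algo{Reveal}$ algorithms: a fake ciphertext is a uniformly random string, and the revealed key is chosen so that the fake ciphertext decrypts to the queried plaintext. The following Python sketch mirrors the syntax of \cref{def:nce_syntax}; all function names are illustrative.

```python
import secrets

N = 128  # plaintext/key length in bits

def keygen():
    # sk: a uniformly random one-time pad
    return secrets.randbits(N)

def enc(sk, m):
    # CT = sk XOR m
    return sk ^ m

def dec(sk, ct):
    return sk ^ ct

def fake():
    # fake CT: uniformly random, generated without knowing m
    return secrets.randbits(N)

def reveal(fake_ct, m):
    # fake sk that "explains" fake_ct as an encryption of m
    return fake_ct ^ m

m = 0x5EED
sk = keygen()
ct = enc(sk, m)
assert dec(sk, ct) == m            # b = 0 branch: real (CT, sk)

fake_ct = fake()
fake_sk = reveal(fake_ct, m)
assert dec(fake_sk, fake_ct) == m  # b = 1 branch: fake (CT, sk) also opens to m
```

In both branches the adversary's view is a uniformly random pair $(\keys{CT},\keys{sk})$ subject to $\keys{CT}\oplus\keys{sk}=m$, so the $b=0$ and $b=1$ distributions coincide for a single message. Handling many messages, or the public-key setting of \cref{def:nce_security}, requires the computational constructions behind \cref{thm:indcpa-pke_to_rnc-pke}.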
\begin{definition}[Non-Interactive Zero-Knowledge Proofs (Syntax)] \label{def:NIZKP} A non-interactive zero-knowledge (NIZK) proof for an $\compclass{NP}$ language $\mathcal{L}$ consists of PPT algorithms $(\algo{Setup}, \algo{Prove}, \algo{Vrfy})$. \begin{description} \item[$\algo{Setup}(1^\lambda) \rightarrow \mathsf{crs}$:] The setup algorithm takes as input the security parameter $1^\lambda$ and outputs a common reference string $\mathsf{crs}$. \item[$\algo{Prove}(\mathsf{crs}, x, w) \rightarrow \pi$:] The prover's algorithm takes as input a common reference string $\mathsf{crs}$, a statement $x$, and a witness $w$ and outputs a proof $\pi$. \item[$\algo{Vrfy} (\mathsf{crs}, x, \pi) \rightarrow \top \mbox{ or }\bot$:] The verifier's algorithm takes as input a common reference string $\mathsf{crs}$, a statement $x$, and a proof $\pi$ and outputs $\top$ to indicate acceptance of the proof and $\bot$ otherwise. \end{description} A non-interactive proof must satisfy the following requirements. \begin{description} \item[Completeness:] For all $\lambda\in\mathbb{N}$ and all pairs $(x, w) \in \mathcal{R}_\mathcal{L}$, where $\mathcal{R}_\mathcal{L}$ is the witness relation corresponding to $\mathcal{L}$, we have \begin{align} \Pr[\algo{Vrfy}(\mathsf{crs}, x, \pi) = \top \mid \mathsf{crs} \leftarrow \algo{Setup}(1^\lambda), \pi \leftarrow \algo{Prove}(\mathsf{crs}, x, w)] = 1. \end{align} \item[Statistical Soundness:] For all unbounded-time adversaries $\mathcal{A}$, we have \begin{align} \Pr [x \not \in \mathcal{L} \land \algo{Vrfy}(\mathsf{crs}, x, \pi) = \top \mid \mathsf{crs} \leftarrow \algo{Setup}(1^\lambda), (x, \pi) \leftarrow \mathcal{A}(1^\lambda, \mathsf{crs})] \leq {\mathsf{negl}}(\lambda).
\end{align} \item[(Computational) Zero-Knowledge:] There exists a PPT simulator $\algo{Sim} = (\algo{Sim}_1,\algo{Sim}_2)$ such that for all QPT adversaries $\mathcal{A}$ and for all $(x,w)\in\mathcal{R}_\mathcal{L}$, we have \begin{align} \left| \Pr\left[ \mathcal{A}(1^\lambda, \mathsf{crs},x,\pi) = 1 \ \middle | \begin{array}{ll} \mathsf{crs} \leftarrow \algo{Setup}(1^\lambda),\\ \pi\leftarrow\algo{Prove}(\mathsf{crs},x,w) \end{array} \right] - \Pr\left[ \mathcal{A}(1^\lambda, \widetilde{\mathsf{crs}},x,\pi) = 1 \ \middle | \begin{array}{ll} (\widetilde{\mathsf{crs}},\keys{td})\leftarrow \algo{Sim}_1(1^\lambda,x),\\ \pi\leftarrow \algo{Sim}_2(\widetilde{\mathsf{crs}},\keys{td},x) \end{array} \right] \right| \leq {\mathsf{negl}}(\lambda). \end{align} \end{description} \end{definition} \begin{theorem}\label{thm:NIZK_from_LWE_or_IO} If one of the following holds, then there exists a computational NIZK proof for $\compclass{NP}$ against QPT adversaries. \begin{itemize} \item the LWE assumption holds against QPT adversaries~\cite{C:PeiShi19}. \item there exist IO and OWFs~\cite{TCC:BitPan15}. \end{itemize} \end{theorem} \paragraph{One-shot signature.} We review the one-shot signature scheme that will be used in \cref{sec:pk_pv_cd_cc_construction}. The one-shot signature primitive was introduced, and a construction relative to a classical oracle was given, in \cite{STOC:AGKZ20}. \begin{definition}[One-Shot Signature (Syntax)]\label{definition:OSS} A one-shot signature scheme is a tuple of QPT algorithms $(\algo{Setup},\algo{KeyGen},\allowbreak\algo{Sign},\algo{Vrfy})$ with the following syntax: \begin{description} \item[$\algo{Setup}(1^\lambda)\rightarrow \mathsf{crs}$:] The setup algorithm takes as input a security parameter $1^\lambda$, and outputs a classical common reference string $\mathsf{crs}$.
\item[$\algo{KeyGen}(\mathsf{crs})\rightarrow (\keys{pk},\keys{sk})$:] The key generation algorithm takes as input a common reference string $\mathsf{crs}$, and outputs a classical public key $\keys{pk}$ and a quantum secret key $\keys{sk}$. \item[$\algo{Sign}(\keys{sk},m) \rightarrow \sigma$:] The signing algorithm takes as input the secret key $\keys{sk}$ and a message $m$, and outputs a classical signature $\sigma$. \item[$\algo{Vrfy}(\mathsf{crs},\keys{pk},\sigma,m) \rightarrow \top \mbox{ or } \bot$:] The verification algorithm takes as input the common reference string $\mathsf{crs}$, the public key $\keys{pk}$, the signature $\sigma$, and the message $m$, and outputs $\top$ or $\bot$. \end{description} \end{definition} \begin{definition}[Correctness for One-Shot Signature] We say that a one-shot signature scheme is correct if \begin{align} \Pr[\algo{Vrfy}(\mathsf{crs},\keys{pk},\algo{Sign}(\keys{sk},m),m)=\top \mid \mathsf{crs}\leftarrow\algo{Setup}(1^\lambda), (\keys{pk},\keys{sk})\leftarrow \algo{KeyGen}(\mathsf{crs}) ]\ge1-{\mathsf{negl}}(\lambda) \end{align} for any message $m$. \end{definition} \begin{definition}[Security for One-Shot Signature] We say that a one-shot signature scheme is secure if for any QPT adversary $\mathcal{A}$, \begin{align} \Pr[ m_0\neq m_1 \wedge \algo{Vrfy}(\mathsf{crs},\keys{pk},\sigma_0,m_0)=\top \wedge \algo{Vrfy}(\mathsf{crs},\keys{pk},\sigma_1,m_1)=\top]\le {\mathsf{negl}}(\lambda) \end{align} where the probability is taken over $\mathsf{crs}\leftarrow \algo{Setup}(1^\lambda)$ and $(\keys{pk},m_0,\sigma_0,m_1,\sigma_1)\leftarrow \mathcal{A}(\mathsf{crs})$. \end{definition} \paragraph{Witness encryption.} We review the notion of witness encryption, which will be used in \cref{sec:pk_pv_cd_cc_construction}. Witness encryption was introduced in \cite{STOC:GGSW13}. Extractable witness encryption was introduced in \cite{C:GKPVZ13}.
\begin{definition}[Witness Encryption for NP (Syntax)]\label{definition:WE} A witness encryption scheme for an $\compclass{NP}$ language $\mathcal{L}$ is a pair of algorithms $(\algo{Enc},\algo{Dec})$ with the following syntax: \begin{description} \item[$\algo{Enc}(1^\lambda,x,m) \rightarrow \keys{CT}$:] The encryption algorithm takes as input the security parameter $1^\lambda$, a statement $x$, and a message $m$, and outputs a ciphertext $\keys{CT}$. \item[$\algo{Dec}(\keys{CT},w) \rightarrow m \mbox{ or }\bot$:] The decryption algorithm takes as input the ciphertext $\keys{CT}$ and a witness $w$, and outputs $m$ or $\bot$. \end{description} \end{definition} \begin{definition}[Correctness for Witness Encryption] We say that a witness encryption scheme is correct if the following holds with overwhelming probability over the randomness of $\algo{Enc}$ and $\algo{Dec}$. For any $(x,w)\in R_\mathcal{L}$, where $R_\mathcal{L}$ is the corresponding witness relation, and any message $m$, it holds that $\algo{Dec}(\algo{Enc}(1^\lambda,x,m),w)=m$. \end{definition} \begin{definition}[Security for Witness Encryption] We say that a witness encryption scheme is secure if for any instance $x\notin \mathcal{L}$ and any two messages $m_0$ and $m_1$, it holds that for any QPT adversary $\mathcal{A}$, \begin{align} |\Pr[\mathcal{A}(\algo{Enc}(1^\lambda,x,m_0))=1]-\Pr[\mathcal{A}(\algo{Enc}(1^\lambda,x,m_1)) = 1]| \le {\mathsf{negl}}(\lambda). 
\end{align} \end{definition} \begin{definition}[Extractable Security for Witness Encryption] We say that a witness encryption scheme satisfies extractable security if the following holds. For any QPT adversary $\mathcal{A}$, polynomial $p$, and messages $m_0$ and $m_1$, there is a QPT extractor $\mathcal{E}$ and a polynomial $q$ such that for any mixed state $\mathsf{aux}$ potentially entangled with an external register, if $|\Pr[\mathcal{A}(\mathsf{aux},\algo{Enc}(1^\lambda,x,m_0))=1]-\Pr[\mathcal{A}(\mathsf{aux},\algo{Enc}(1^\lambda,x,m_1)) = 1]| \ge \frac{1}{p(\lambda)}$, then $\Pr[(x,\mathcal{E}(1^\lambda,x,\mathsf{aux}))\in R_\mathcal{L}] \ge\frac{1}{q(\lambda)}$. \end{definition} \ifnum0=1 \else \subsection{Noisy Trapdoor Claw-Free Functions}\label{sec:NTCF} We define noisy trapdoor claw-free function families and their injective invariance following \cite{FOCS:BCMVV18,FOCS:Mahadev18a}. \begin{definition}[NTCF Family]\label{def:NTCF} Let $\mathcal{X}$, $\mathcal{Y}$ be finite sets, $\mathcal{D_Y}$ the set of probability distributions over $\mathcal{Y}$, and $\mathcal{K_F}$ a finite set of keys. A family of functions \begin{align} \mathcal{F}=\{f_{\mathsf{k},b}:\mathcal{X}\rightarrow\mathcal{D_Y} \}_{\mathsf{k}\in\mathcal{K_F},b\in\{0,1\}} \end{align} is a noisy trapdoor claw-free function (NTCF) family if the following holds. \begin{itemize} \item{\bf Efficient Function Generation:} There exists a PPT algorithm $\algo{Gen}_{\mathcal{F}}$ which takes the security parameter $1^\lambda$ as input and outputs a key $\mathsf{k}\in\mathcal{K_F}$ and a trapdoor $\keys{td}$. \item{\bf Trapdoor Injective Pair:} For all keys $\mathsf{k}\in\mathcal{K_F}$, the following holds. \begin{enumerate} \item Trapdoor: For all $b\in\{0,1\}$ and $x\neq x' \in\mathcal{X}$, $\mathrm{Supp}(f_{\mathsf{k},b}(x))\cap \mathrm{Supp}(f_{\mathsf{k},b}(x'))=\emptyset$.
In addition, there exists an efficient deterministic algorithm $\algo{Inv}_{\mathcal{F}}$ such that for all $b\in\{0,1\}$, $x\in\mathcal{X}$ and $y\in\mathrm{Supp}(f_{\mathsf{k},b}(x)),\algo{Inv}_{\mathcal{F}}(\keys{td},b,y)=x$. \item Injective pair: There exists a perfect matching relation $\mathcal{R}_{\mathsf{k}}\subseteq \mathcal{X}\times\mathcal{X}$ such that $f_{\mathsf{k},0}(x_0)=f_{\mathsf{k},1}(x_1)$ if and only if $(x_0,x_1)\in\mathcal{R}_{\mathsf{k}}$. \end{enumerate} \item{\bf Efficient Range Superposition:} For all keys $\mathsf{k}\in\mathcal{K_F}$ and $b\in\{0,1\}$, there exists a function $f'_{\mathsf{k},b}:\mathcal{X}\rightarrow\mathcal{D_Y}$ such that the following holds. \begin{enumerate} \item For all $(x_0,x_1)\in \mathcal{R}_{\mathsf{k}}$ and $y\in\mathrm{Supp}(f'_{\mathsf{k},b}(x_b))$, $\algo{Inv}_{\mathcal{F}}(\keys{td},b,y)=x_b$ and $\algo{Inv}_{\mathcal{F}}(\keys{td},b\oplus1,y)=x_{b\oplus1}$. \item There exists an efficient deterministic algorithm $\algo{Chk}_{\mathcal{F}}$ that takes as input $\mathsf{k},b\in\{0,1\}$, $x\in\mathcal{X}$, and $y\in\mathcal{Y}$ and outputs 1 if $y\in\mathrm{Supp}(f'_{\mathsf{k},b}(x))$ and 0 otherwise. Note that this algorithm does not take the trapdoor $\keys{td}$ as input. \item For all $\mathsf{k}\in\mathcal{K}_{\mathcal{F}}$ and $b\in\{0,1\}$, \begin{align} \mathbb{E}_{x\leftarrow\mathcal{X}}[ {\bf H}^2(f_{\mathsf{k},b}(x),f'_{\mathsf{k},b}(x))]\leq {\mathsf{negl}}(\lambda). \end{align} Here ${\bf H}^2$ is the Hellinger distance (see Section~\ref{sec:notation}). In addition, there exists a QPT algorithm $\algo{Samp}_{\mathcal{F}}$ that takes as input $\mathsf{k}$ and $b\in\{0,1\}$ and prepares the quantum state \begin{align} |\psi '\rangle= \frac{1}{\sqrt{|\mathcal{X}|}}\sum_{x\in\mathcal{X}, y\in\mathcal{Y}}\sqrt{(f'_{\mathsf{k},b}(x))(y)}|x\rangle|y\rangle.
\end{align} This property immediately implies that \begin{align} \parallel |\psi\rangle\langle\psi|- |\psi'\rangle\langle\psi'|\parallel_{tr}\leq {\mathsf{negl}}(\lambda) \end{align} where $|\psi\rangle=\frac{1}{\sqrt{|\mathcal{X}|}} \sum_{x\in\mathcal{X},y\in\mathcal{Y}}\sqrt{(f_{\mathsf{k},b}(x))(y)}|x\rangle|y\rangle$. \end{enumerate} \item{\bf Adaptive Hardcore Bit:} For all keys $\mathsf{k}\in\mathcal{K}_\mathcal{F}$, the following holds for some integer $w$ that is a polynomially bounded function of $\lambda$. \begin{enumerate} \item For all $b\in\{0,1\}$ and $x\in\mathcal{X}$, there exists a set $G_{\mathsf{k},b,x}\subseteq \zo{w}$ such that $\Pr_{d\leftarrow \zo{w}}[d \notin G_{\mathsf{k},b,x}] \le {\mathsf{negl}}(\lambda)$. In addition, there exists a PPT algorithm that checks for membership in $G_{\mathsf{k},b,x}$ given $\mathsf{k},b,x$, and $\keys{td}$. \item There is an efficiently computable injection $J: \mathcal{X} \rightarrow \zo{w}$ such that $J$ can be inverted efficiently on its range, and such that the following holds. Let \begin{align} H_\mathsf{k} & \coloneqq \setbracket{(b,x_b,d, d\cdot(J(x_0)\oplus J(x_1))) \mid b\in\zo{},(x_0,x_1)\in \mathcal{R}_\mathsf{k}, d \in G_{\mathsf{k},0,x_0} \cap G_{\mathsf{k},1,x_1}},\\ \overline{H}_\mathsf{k} & \coloneqq \setbracket{(b,x,d, e)\mid (b,x,d,e\oplus 1)\in H_\mathsf{k}}, \end{align} then for any QPT $\entity{A}$, it holds that \begin{align} \abs{\Pr_{(\mathsf{k},\keys{td})\gets \algo{Gen}_{\mathcal{F}} (1^\lambda)}[\entity{A}(\mathsf{k})\in H_\mathsf{k}] - \Pr_{(\mathsf{k},\keys{td})\gets \algo{Gen}_{\mathcal{F}} (1^\lambda)}[\entity{A}(\mathsf{k})\in \overline{H}_\mathsf{k}]} \le {\mathsf{negl}}(\lambda). \end{align} \end{enumerate} \end{itemize} \end{definition} It is known that we can amplify the adaptive hardcore property by parallel repetition in the following sense.
\begin{lemma}[Amplified Adaptive Hardcore Property~\cite{CoRR:RadSat19,EPRINT:KitNisYam20}]\label{lem:amplified_adaptive_hardcore} Any NTCF family $\mathcal{F}$ satisfies the amplified adaptive hardcore property, which means that for any QPT adversary $\mathcal{A}$ and $n=\omega(\log\lambda)$, \begin{align} \Pr\left[ \begin{array}{ll} \forall i\in[n]~ \algo{Chk}_{\mathcal{F}}(\mathsf{k}_i,b_i,x_i,y_i)=1,\\ d_i\in G_{\mathsf{k}_i,0,x_{i,0}} \cap G_{\mathsf{k}_i,1,x_{i,1}},\\ e_i=d_i\cdot (J(x_{i,0})\oplus J(x_{i,1})) \end{array} \middle | \begin{array}{ll} (\mathsf{k}_i,\keys{td}_i)\gets \algo{Gen}_{\mathcal{F}}(1^\lambda) \text{~for~} i\in[n] \\ \{(b_i,x_i,y_i,d_i,e_i)\}_{i\in[n]}\gets \entity{A}(\{\mathsf{k}_i\}_{i\in[n]})\\ x_{i,\beta} \gets \algo{Inv}_{\mathcal{F}} (\keys{td}_i,\beta,y_i)\text{~for~}(i,\beta)\in[n]\times \{0,1\} \end{array} \right]\leq {\mathsf{negl}}(\lambda). \end{align} \end{lemma} We call the procedures in the right half of the above probability the \emph{amplified adaptive hardcore game} and we say that $\entity{A}$ wins the game if the conditions in the left half of the probability are satisfied. By using this terminology, the above lemma says that no QPT adversary wins the amplified adaptive hardcore game with non-negligible probability. \begin{remark} A similar lemma is presented in \cite{EPRINT:KitNisYam20} with the difference that the first condition $\algo{Chk}_{\mathcal{F}}(\mathsf{k}_i,b_i,x_i,y_i)=1$ is replaced with $x_i=x_{i,b_i}$. Since $\algo{Chk}_{\mathcal{F}}(\mathsf{k}_i,b_i,x_i,y_i)=1$ implies $x_i=x_{i,b_i}$ by the first and second items of the efficient range superposition property, the lemma in the above form also follows. \end{remark} Next, we define injective invariance for an NTCF family. To define it, we first define a trapdoor injective function family.
\begin{definition}[Trapdoor Injective Function Family]\label{def:TIF} Let $\mathcal{X}$, $\mathcal{Y}$ be finite sets, $\mathcal{D_Y}$ the set of probability distributions over $\mathcal{Y}$, and $\mathcal{K_G}$ a finite set of keys. A family of functions \begin{align} \mathcal{G}=\{g_{\mathsf{k},b}:\mathcal{X}\rightarrow\mathcal{D_Y} \}_{\mathsf{k}\in\mathcal{K_G},b\in\{0,1\}} \end{align} is a trapdoor injective function family if the following holds. \begin{itemize} \item{\bf Efficient Function Generation:} There exists a PPT algorithm $\algo{Gen}_{\mathcal{G}}$ which takes the security parameter $1^\lambda$ as input and outputs a key $\mathsf{k}\in\mathcal{K_G}$ and a trapdoor $\keys{td}$. \item{\bf Disjoint Trapdoor Injective Pair:} For all keys $\mathsf{k}\in\mathcal{K_G}$, for all $b,b'\in \{0,1\}$, and $x,x'\in \mathcal{X}$, if $(b,x)\neq (b',x')$, then $\mathrm{Supp}(g_{\mathsf{k},b}(x))\cap \mathrm{Supp}(g_{\mathsf{k},b'}(x'))=\emptyset$. Moreover, there exists an efficient deterministic algorithm $\algo{Inv}_{\mathcal{G}}$ such that for all $b\in \{0,1\}$, $x\in\mathcal{X}$ and $y\in\mathrm{Supp}(g_{\mathsf{k},b}(x)),\algo{Inv}_{\mathcal{G}}(\keys{td},y)=(b,x)$. \item{\bf Efficient Range Superposition:} For all keys $\mathsf{k}\in\mathcal{K_G}$ and $b\in\{0,1\}$, \begin{enumerate} \item There exists an efficient deterministic algorithm $\algo{Chk}_{\mathcal{G}}$ that takes as input $\mathsf{k},b\in\{0,1\}$, $x\in\mathcal{X}$, and $y\in\mathcal{Y}$ and outputs 1 if $y\in\mathrm{Supp}(g_{\mathsf{k},b}(x))$ and 0 otherwise. Note that this algorithm does not take the trapdoor $\keys{td}$ as input. \item There exists a QPT algorithm $\algo{Samp}_{\mathcal{G}}$ that takes as input $\mathsf{k}$ and $b\in\{0,1\}$ and outputs the quantum state \begin{align} \frac{1}{\sqrt{|\mathcal{X}|}}\sum_{x\in\mathcal{X}, y\in\mathcal{Y}}\sqrt{(g_{\mathsf{k},b}(x))(y)}|x\rangle|y\rangle.
\end{align} \end{enumerate} \end{itemize} \end{definition} \begin{definition}[Injective Invariance]\label{def:injective_invariance} We say that an NTCF family $\mathcal{F}$ is injective invariant if there exists a trapdoor injective function family $\mathcal{G}$ such that: \begin{enumerate} \item The algorithms $\algo{Chk}_\mathcal{F}$ and $\algo{Samp}_{\mathcal{F}}$ are the same as the algorithms $\algo{Chk}_\mathcal{G}$ and $\algo{Samp}_{\mathcal{G}}$. In this case, we simply write $\algo{Chk}$ and $\algo{Samp}$ to mean them. \item For any QPT algorithm $\mathcal{A}$, we have \begin{align} \abs{\Pr[\mathcal{A}(\mathsf{k})=1:(\mathsf{k},\keys{td})\leftarrow \algo{Gen}_{\mathcal{F}}(1^\lambda)]-\Pr[\mathcal{A}(\mathsf{k})=1:(\mathsf{k},\keys{td})\leftarrow \algo{Gen}_{\mathcal{G}}(1^\lambda)]}\leq {\mathsf{negl}}(\lambda). \end{align} \end{enumerate} \end{definition} \begin{lemma}[\cite{FOCS:Mahadev18a}] If the LWE assumption holds against QPT adversaries, there exists an injective invariant NTCF family. \end{lemma} \subsection{Quantum Random Oracle Model}\label{sec:QROM} In \cref{sec:classical_com}, we rely on the quantum random oracle model (QROM)~\cite{AC:BDFLSZ11}. In the QROM, a uniformly random function with a certain domain and range is chosen at the beginning, and quantum access to this function is given to all parties, including the adversary. Zhandry showed that quantum access to a random function can be efficiently simulated by using the so-called compressed oracle technique~\cite{C:Zhandry19}. We review the one-way to hiding lemma \cite{JACM:Unruh15,C:AmbHamUnr19}, which is often useful when analyzing schemes in the QROM. The following form of the lemma is based on \cite{C:AmbHamUnr19}. \begin{lemma}[{One-Way to Hiding Lemma~\cite{C:AmbHamUnr19}}] \label{lem:o2h} Let $S \subseteq \mathcal{X}$ be random.
Let $G,H \colon \mathcal{X} \to \mathcal{Y}$ be random functions satisfying $\forall x \not\in S~[G(x) = H(x)]$. Let $z$ be a random classical bit string or quantum state. ($S,G,H,z$ may have an arbitrary joint distribution.) Let $\mathcal{A}$ be an oracle-aided quantum algorithm that makes at most $q$ quantum queries. Let $\mathcal{B}$ be an algorithm that on input $z$ chooses $i\leftarrow [q]$, runs $\mathcal{A}^H(z)$, measures $\mathcal{A}$'s $i$-th query, and outputs the measurement outcome. Then we have \[ \abs{\Pr[\mathcal{A}^{G}(z)=1]-\Pr[\mathcal{A}^H(z)=1]}\leq 2q\sqrt{\Pr[\mathcal{B}^{H}(z)\in S]}. \] \end{lemma} \begin{remark} In \cite{C:AmbHamUnr19}, $z$ is assumed to be classical. However, as observed in \cite{AC:HhaXagYam19}, the lemma holds even if $z$ is quantum since any quantum state can be described by an exponentially large classical string, and there is no restriction on the size of $z$ in the lemma. \end{remark} \fi \section{Reusable SKE with Certified Deletion}\label{sec:reusable_SKE_cd} We present the definition and construction of reusable SKE with certified deletion in this section. First, we give the definition of secret key NCE. \begin{definition}[Secret Key NCE (Syntax)]\label{def:sk_nce_syntax} A secret key NCE scheme is a tuple of PPT algorithms $(\algo{KeyGen},\algo{Enc},\algo{Dec},\allowbreak\algo{Fake},\algo{Reveal})$ with plaintext space $\mathcal{M}$. \begin{description} \item [$\algo{KeyGen}(1^\lambda)\rightarrow (\keys{ek},\keys{dk},\mathsf{aux})$:] The key generation algorithm takes as input the security parameter $1^\lambda$ and outputs a key pair $(\keys{ek},\keys{dk})$ and auxiliary information $\mathsf{aux}$. \item [$\algo{Enc}(\keys{ek},m)\rightarrow \keys{CT}$:] The encryption algorithm takes as input $\keys{ek}$ and a plaintext $m\in\mathcal{M}$ and outputs a ciphertext $\keys{CT}$.
\item [$\algo{Dec}(\keys{dk},\keys{CT})\rightarrow m^\prime \mbox{ or }\bot$:] The decryption algorithm takes as input $\keys{dk}$ and $\keys{CT}$ and outputs a plaintext $m^\prime$ or $\bot$. \item [$\algo{Fake}(\keys{ek},\mathsf{aux})\rightarrow \widetilde{\ct}$:] The fake encryption algorithm takes as input $\keys{ek}$ and $\mathsf{aux}$, and outputs a fake ciphertext $\widetilde{\ct}$. \item [$\algo{Reveal}(\keys{ek},\mathsf{aux},\widetilde{\ct},m)\rightarrow \widetilde{\keys{dk}} $:] The reveal algorithm takes as input $\keys{ek},\mathsf{aux},\widetilde{\ct}$, and $m$, and outputs a fake decryption key $\widetilde{\keys{dk}}$. \end{description} \end{definition} Correctness is similar to that of PKE, so we omit it. \begin{definition}[Receiver Non-Committing (RNC) Security for SKE]\label{def:sk_nce_security} Let $\Sigma=(\algo{KeyGen}, \algo{Enc}, \algo{Dec}, \algo{Fake},\algo{Reveal})$ be a secret key NCE scheme. We consider the following security experiment $\expb{\Sigma,\mathcal{A}}{sk}{rec}{nc}(\lambda,b)$. \begin{enumerate} \item The challenger computes $(\keys{ek},\keys{dk},\mathsf{aux}) \leftarrow \algo{KeyGen}(1^\lambda)$ and sends $1^\lambda$ to the adversary $\mathcal{A}$. \item $\mathcal{A}$ sends an encryption query $m$ to the challenger. The challenger computes and returns $\keys{CT}\leftarrow \algo{Enc}(\keys{ek},m)$ to $\mathcal{A}$. This process can be repeated polynomially many times. \item $\mathcal{A}$ sends a query $m \in \mathcal{M}$ to the challenger. \item The challenger does the following. \begin{itemize} \item If $b =0$, the challenger generates $\keys{CT} \leftarrow \algo{Enc}(\keys{ek},m)$ and returns $(\keys{CT},\keys{dk})$ to $\mathcal{A}$.
\item If $b=1$, the challenger generates $\widetilde{\ct} \leftarrow \algo{Fake}(\keys{ek},\mathsf{aux})$ and $\widetilde{\keys{dk}} \leftarrow \algo{Reveal}(\keys{ek},\mathsf{aux},\widetilde{\ct},m)$ and returns $(\widetilde{\ct},\widetilde{\keys{dk}})$ to $\mathcal{A}$.
\end{itemize}
\item Again, $\mathcal{A}$ can send encryption queries.
\item $\mathcal{A}$ outputs $b'\in \{0,1\}$.
\end{enumerate}
Let $\advc{\Sigma,\mathcal{A}}{sk}{rec}{nc}(\lambda)$ be the advantage of the experiment above. We say that $\Sigma$ is RNC secure if for any QPT adversary $\mathcal{A}$, it holds that
\begin{align}
\advc{\Sigma,\mathcal{A}}{sk}{rec}{nc}(\lambda)\coloneqq \abs{\Pr[ \expb{\Sigma,\mathcal{A}}{sk}{rec}{nc}(\lambda, 0)=1] - \Pr[ \expb{\Sigma,\mathcal{A}}{sk}{rec}{nc}(\lambda, 1)=1] }\leq {\mathsf{negl}}(\lambda).
\end{align}
\end{definition}
\begin{definition}[Reusable SKE with Certified Deletion (Syntax)]\label{def:reusable_sk_cert_del}
A secret key encryption scheme with certified deletion is a tuple of quantum algorithms $(\algo{KeyGen},\algo{Enc},\algo{Dec},\algo{Del},\algo{Vrfy})$ with plaintext space $\mathcal{M}$ and key space $\mathcal{K}$.
\begin{description}
\item[$\algo{KeyGen} (1^\lambda) \rightarrow \keys{sk}$:] The key generation algorithm takes as input the security parameter $1^\lambda$ and outputs a secret key $\keys{sk} \in \mathcal{K}$.
\item[$\algo{Enc}(\keys{sk},m) \rightarrow (\keys{vk},\keys{CT})$:] The encryption algorithm takes as input $\keys{sk}$ and a plaintext $m\in\mathcal{M}$ and outputs a verification key $\keys{vk}$ and a ciphertext $\keys{CT}$.
\item[$\algo{Dec}(\keys{sk},\keys{CT}) \rightarrow m^\prime$:] The decryption algorithm takes as input $\keys{sk}$ and $\keys{CT}$ and outputs a plaintext $m^\prime \in \mathcal{M}$ or $\bot$.
\item[$\algo{Del}(\keys{CT}) \rightarrow \keys{cert}$:] The deletion algorithm takes as input $\keys{CT}$ and outputs a certificate $\keys{cert}$.
\item[$\algo{Vrfy}(\keys{vk},\keys{cert})\rightarrow \top \mbox{ or }\bot$:] The verification algorithm takes $\keys{vk}$ and $\keys{cert}$ and outputs $\top$ or $\bot$.
\end{description}
\end{definition}
A difference between the definition by Broadbent and Islam and ours is that $\algo{Enc}$ outputs not only $\keys{CT}$ but also $\keys{vk}$, and $\keys{vk}$ is used in $\algo{Vrfy}$ instead of $\keys{sk}$.
\begin{definition}[Correctness for Reusable SKE with Certified Deletion]\label{def:reusable_sk_cd_correctness}
There are two types of correctness. One is decryption correctness and the other is verification correctness.
\begin{description}
\item[Decryption correctness:] For any $\lambda\in \mathbb{N}$, $m\in\mathcal{M}$,
\begin{align}
\Pr\left[
\algo{Dec}(\keys{sk},\keys{CT})\ne m
\ \middle |
\begin{array}{ll}
\keys{sk}\leftarrow \algo{KeyGen}(1^\lambda)\\
(\keys{vk},\keys{CT}) \leftarrow \algo{Enc}(\keys{sk},m)
\end{array}
\right] 
\le{\mathsf{negl}}(\lambda).
\end{align}
\item[Verification correctness:] For any $\lambda\in \mathbb{N}$, $m\in\mathcal{M}$,
\begin{align}
\Pr\left[
\algo{Vrfy}(\keys{vk},\keys{cert})=\bot
\ \middle |
\begin{array}{ll}
\keys{sk}\leftarrow \algo{KeyGen}(1^\lambda)\\
(\keys{vk},\keys{CT}) \leftarrow \algo{Enc}(\keys{sk},m)\\
\keys{cert} \leftarrow \algo{Del}(\keys{CT})
\end{array}
\right] 
\le{\mathsf{negl}}(\lambda).
\end{align}
\end{description}
\end{definition}
\begin{definition}[Certified Deletion Security for Reusable SKE]\label{def:reusable_sk_certified_del}
Let $\Sigma=(\algo{KeyGen}, \algo{Enc}, \algo{Dec}, \algo{Del}, \algo{Vrfy})$ be a secret key encryption scheme with certified deletion. We consider the following security experiment $\expb{\Sigma,\mathcal{A}}{sk}{cert}{del}(\lambda,b)$.
\begin{enumerate}
\item The challenger computes $\keys{sk} \leftarrow \algo{KeyGen}(1^\lambda)$.
\item $\mathcal{A}$ sends an encryption query $m$ to the challenger.
The challenger computes $(\keys{vk},\keys{CT})\leftarrow \algo{Enc}(\keys{sk},m)$ and returns $(\keys{vk},\keys{CT})$ to $\mathcal{A}$. This process can be repeated polynomially many times.
\item $\mathcal{A}$ sends $(m_0,m_1)\in\mathcal{M}^2$ to the challenger.
\item The challenger computes $(\keys{vk}_b,\keys{CT}_b) \leftarrow \algo{Enc}(\keys{sk},m_b)$ and sends $\keys{CT}_b$ to $\mathcal{A}$.
\item Again, $\mathcal{A}$ can send encryption queries.
\item At some point, $\mathcal{A}$ sends $\keys{cert}$ to the challenger.
\item The challenger computes $\algo{Vrfy}(\keys{vk}_b,\keys{cert})$. If the output is $\bot$, the challenger sends $\bot$ to $\mathcal{A}$. If the output is $\top$, the challenger sends $\keys{sk}$ to $\mathcal{A}$.
\item If the challenger sends $\bot$ in the previous item, $\mathcal{A}$ can send encryption queries again.
\item $\mathcal{A}$ outputs $b'\in \{0,1\}$.
\end{enumerate}
Let $\advc{\Sigma,\mathcal{A}}{sk}{cert}{del}(\lambda)$ be the advantage of the experiment above. We say that $\Sigma$ is IND-CPA-CD secure if for any QPT $\mathcal{A}$, it holds that
\begin{align}
\advc{\Sigma,\mathcal{A}}{sk}{cert}{del}(\lambda)\coloneqq \abs{\Pr[ \expb{\Sigma,\mathcal{A}}{sk}{cert}{del}(\lambda, 0)=1] - \Pr[ \expb{\Sigma,\mathcal{A}}{sk}{cert}{del}(\lambda, 1)=1] }\leq {\mathsf{negl}}(\lambda).
\end{align}
\end{definition}
\paragraph{Our reusable SKE scheme.}
We construct $\Sigma_{\mathsf{r}\mathsf{skcd}} =(\algo{KeyGen},\algo{Enc},\algo{Dec},\algo{Del},\algo{Vrfy})$ with plaintext space $\mathcal{M}$ from a one-time SKE with certified deletion scheme $\Sigma_{\mathsf{o}\mathsf{skcd}}=\SKE.(\algo{Gen},\algo{Enc},\algo{Dec},\algo{Del},\algo{Vrfy})$ with plaintext space $\mathcal{M}$ and key space $\mathcal{K}$ and a secret key NCE scheme $\Sigma_{\mathsf{nce}}=\mathsf{NCE}.(\algo{KeyGen},\algo{Enc},\algo{Dec},\algo{Fake},\algo{Reveal})$ with plaintext space $\mathcal{K}$.
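Before the formal description, the hybrid (key-encapsulation) structure of $\Sigma_{\mathsf{r}\mathsf{skcd}}$ can be sketched in Python. This is a purely classical toy sketch: all primitives below (the \texttt{oske\_*} and \texttt{nce\_*} helpers) are hypothetical one-time-pad stand-ins for the actual quantum one-time SKE with certified deletion and the secret key NCE scheme, so it illustrates only the wiring of keys and ciphertexts, not any security property.

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Hypothetical stand-ins for the building blocks; both are one-time pads here.
N = 16  # key/plaintext length in bytes for this toy sketch

def oske_gen():       return os.urandom(N)   # SKE.Gen
def oske_enc(sk, m):  return xor(sk, m)      # SKE.Enc
def oske_dec(sk, ct): return xor(sk, ct)     # SKE.Dec

def nce_keygen():     return os.urandom(N)   # NCE.KeyGen (ek = dk in this toy)
def nce_enc(ek, m):   return xor(ek, m)      # NCE.Enc
def nce_dec(dk, ct):  return xor(dk, ct)     # NCE.Dec

def keygen():
    k = nce_keygen()
    return (k, k)                            # sk := (nce.ek, nce.dk)

def enc(sk, m):
    ek, _ = sk
    oske_sk = oske_gen()                     # fresh one-time key per encryption
    nce_ct = nce_enc(ek, oske_sk)            # encapsulate the one-time key
    oske_ct = oske_enc(oske_sk, m)           # encrypt the plaintext under it
    return oske_sk, (nce_ct, oske_ct)        # (vk, CT) with vk := oske.sk

def dec(sk, ct):
    _, dk = sk
    nce_ct, oske_ct = ct
    oske_sk = nce_dec(dk, nce_ct)            # recover the one-time key
    return oske_dec(oske_sk, oske_ct)
```

Each call to \texttt{enc} encapsulates a fresh one-time key under the long-term NCE key, which is what makes the one-time scheme reusable; $\algo{Del}$ and $\algo{Vrfy}$ act only on the one-time component and are omitted from the sketch.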
\begin{description} \item[$\algo{KeyGen}(1^\lambda)$:] $ $ \begin{itemize} \item Generate $(\mathsf{nce}.\keys{ek},\mathsf{nce}.\keys{dk},\mathsf{nce}.\mathsf{aux})\leftarrow \mathsf{NCE}.\algo{KeyGen}(1^\lambda)$ and output $\keys{sk} \coloneqq (\mathsf{nce}.\keys{ek},\mathsf{nce}.\keys{dk})$. \end{itemize} \item[$\algo{Enc}(\keys{sk},m)$:] $ $ \begin{itemize} \item Parse $\keys{sk} = (\mathsf{nce}.\keys{ek},\mathsf{nce}.\keys{dk})$. \item Generate $\mathsf{oske}.\keys{sk} \leftarrow \SKE.\algo{Gen}(1^\lambda)$. \item Compute $\mathsf{nce}.\keys{CT} \leftarrow \mathsf{NCE}.\algo{Enc}(\mathsf{nce}.\keys{ek},\mathsf{oske}.\keys{sk})$ and $\mathsf{oske}.\keys{CT} \leftarrow \SKE.\algo{Enc}(\mathsf{oske}.\keys{sk},m)$. \item Output $\keys{CT} \coloneqq (\mathsf{nce}.\keys{CT},\mathsf{oske}.\keys{CT})$ and $\keys{vk} \coloneqq \mathsf{oske}.\keys{sk}$. \end{itemize} \item[$\algo{Dec}(\keys{sk},\keys{CT})$:] $ $ \begin{itemize} \item Parse $\keys{sk} = (\mathsf{nce}.\keys{ek},\mathsf{nce}.\keys{dk})$ and $\keys{CT} = (\mathsf{nce}.\keys{CT},\mathsf{oske}.\keys{CT})$. \item Compute $\keys{sk}^\prime \leftarrow \mathsf{NCE}.\algo{Dec}(\mathsf{nce}.\keys{dk},\mathsf{nce}.\keys{CT})$. \item Compute and output $m^\prime \leftarrow \SKE.\algo{Dec}(\keys{sk}^\prime,\mathsf{oske}.\keys{CT})$. \end{itemize} \item[$\algo{Del}(\keys{CT})$:] $ $ \begin{itemize} \item Parse $\keys{CT}= (\mathsf{nce}.\keys{CT},\mathsf{oske}.\keys{CT})$. \item Generate $\mathsf{oske}.\keys{cert} \leftarrow \SKE.\algo{Del}(\mathsf{oske}.\keys{CT})$. \item Output $\keys{cert} \coloneqq \mathsf{oske}.\keys{cert}$. \end{itemize} \item[$\algo{Vrfy}(\keys{vk},\keys{cert})$:] $ $ \begin{itemize} \item Parse $\keys{vk} = \mathsf{oske}.\keys{sk}$ and $\keys{cert} = \mathsf{oske}.\keys{cert}$. \item Output $b \leftarrow \SKE.\algo{Vrfy}(\mathsf{oske}.\keys{sk},\mathsf{oske}.\keys{cert})$. 
\end{itemize}
\end{description}
\begin{theorem}\label{thm:reusable_ske_cd_from_sk_cd_and_ske}
If $\Sigma_{\mathsf{nce}}$ is RNC secure and $\Sigma_{\mathsf{o}\mathsf{skcd}}$ is OT-CD secure, $\Sigma_{\mathsf{r}\mathsf{skcd}}$ is IND-CPA-CD secure.
\end{theorem}
We omit the proof of this theorem since it is almost the same as the proof of~\cref{thm:pke_cd_from_sk_cd_and_pke}.
\section{PKE with Certified Deletion and Classical Communication}\label{sec:classical_com}
In this section, we define the notion of public key encryption with certified deletion with classical communication, and construct it from the LWE assumption \revise{in the QROM}. In \cref{sec:pk_cd_def_classical_com}, we present the definition of PKE with certified deletion with classical communication. In \cref{sec:cut_and_choose_adaptive_hardcore}, we introduce what we call the {\it cut-and-choose adaptive hardcore property}, which is used in the security proof of the PKE with certified deletion with classical communication. In \cref{sec:PKE_cd_cc_construction}, we construct a PKE with certified deletion with classical communication and show its security.
\subsection{Definition of PKE with Certified Deletion and Classical Communication}\label{sec:pk_cd_def_classical_com}
We define PKE with certified deletion with classical communication. Note that the encryption algorithm of a PKE with certified deletion with classical communication is interactive, unlike PKE with certified deletion (with quantum communication) as defined in \cref{def:pk_cert_del}. It is easy to see that the interaction is necessary if we only allow classical communication.
\begin{definition}[PKE with Certified Deletion with Classical Communication (Syntax)]\label{def:pk_cert_del_classical_com}
A public key encryption scheme with certified deletion with classical communication is a tuple of quantum algorithms $(\algo{KeyGen},\algo{Enc},\algo{Dec},\algo{Del},\algo{Vrfy})$ with plaintext space $\mathcal{M}$.
\begin{description}
\item[$\algo{KeyGen} (1^\lambda) \rightarrow (\keys{pk},\keys{sk})$:] The key generation algorithm takes as input the security parameter $1^\lambda$ and outputs a classical key pair $(\keys{pk},\keys{sk})$.
\item[$\algo{Enc}\langle \mathcal{S}(\keys{pk},m),\mathcal{R}\rangle \rightarrow (\keys{vk},\keys{CT})$:] This is an interactive process between a classical sender $\mathcal{S}$ with input $\keys{pk}$ and a plaintext $m\in\mathcal{M}$, and a quantum receiver $\mathcal{R}$ without input. After exchanging classical messages, $\mathcal{S}$ outputs a classical verification key $\keys{vk}$ and $\mathcal{R}$ outputs a quantum ciphertext $\keys{CT}$.
\item[$\algo{Dec}(\keys{sk},\keys{CT}) \rightarrow m^\prime \mbox{ or } \bot$:] The decryption algorithm takes as input the secret key $\keys{sk}$ and the ciphertext $\keys{CT}$, and outputs a plaintext $m^\prime$ or $\bot$.
\item[$\algo{Del}(\keys{CT}) \rightarrow \keys{cert}$:] The deletion algorithm takes as input the ciphertext $\keys{CT}$ and outputs a classical certificate $\keys{cert}$.
\item[$\algo{Vrfy}(\keys{vk},\keys{cert})\rightarrow \top \mbox{ or }\bot$:] The verification algorithm takes the verification key $\keys{vk}$ and the certificate $\keys{cert}$, and outputs $\top$ or $\bot$.
\end{description}
\end{definition}
\begin{definition}[Correctness for PKE with Certified Deletion with Classical Communication]\label{def:pk_cd_correctness_classical_com}
There are two types of correctness. One is decryption correctness and the other is verification correctness.
\begin{description}
\item[Decryption correctness:] For any $\lambda\in \mathbb{N}$, $m\in\mathcal{M}$,
\begin{align}
\Pr\left[
\algo{Dec}(\keys{sk},\keys{CT})\ne m
\ \middle |
\begin{array}{ll}
(\keys{pk},\keys{sk})\leftarrow \algo{KeyGen}(1^\lambda)\\
(\keys{vk},\keys{CT}) \leftarrow \algo{Enc}\langle \mathcal{S}(\keys{pk},m),\mathcal{R}\rangle
\end{array}
\right] 
\le{\mathsf{negl}}(\lambda).
\end{align} \item[Verification correctness:] For any $\lambda\in \mathbb{N}$, $m\in\mathcal{M}$, \begin{align} \Pr\left[ \algo{Vrfy}(\keys{vk},\keys{cert})=\bot \ \middle | \begin{array}{ll} (\keys{pk},\keys{sk})\leftarrow \algo{KeyGen}(1^\lambda)\\ (\keys{vk},\keys{CT}) \leftarrow \algo{Enc}\langle \mathcal{S}(\keys{pk},m),\mathcal{R}\rangle\\ \keys{cert} \leftarrow \algo{Del}(\keys{CT}) \end{array} \right] \le{\mathsf{negl}}(\lambda). \end{align} \end{description} \end{definition} \begin{definition}[Certified Deletion Security for PKE with Classical Communication]\label{def:pk_certified_del_classical_com} Let $\Sigma=(\algo{KeyGen}, \algo{Enc}, \algo{Dec}, \algo{Del}, \algo{Vrfy})$ be a PKE with certified deletion scheme with classical communication. We consider the following security experiment $\expb{\Sigma,\mathcal{A}}{ccpk}{cert}{del}(\lambda,b)$. \begin{enumerate} \item The challenger computes $(\keys{pk},\keys{sk}) \leftarrow \algo{KeyGen}(1^\lambda)$ and sends $\keys{pk}$ to $\mathcal{A}$. \item $\mathcal{A}$ sends $(m_0,m_1)\in \mathcal{M}^2$ to the challenger. \item The challenger and $\mathcal{A}$ jointly execute $(\keys{vk}_b,\keys{CT}_b) \leftarrow \algo{Enc}\langle\mathcal{S}(\keys{pk},m_b),\mathcal{A}(\keys{pk})\rangle$ where the challenger plays the role of the sender and $\mathcal{A}$ plays the role of the receiver. \item At some point, $\mathcal{A}$ sends $\keys{cert}$ to the challenger. \item The challenger computes $\algo{Vrfy}(\keys{vk}_b,\keys{cert})$. If the output is $\bot$, the challenger sends $\bot$ to $\mathcal{A}$. If the output is $\top$, the challenger sends $\keys{sk}$ to $\mathcal{A}$. \item $\mathcal{A}$ outputs its guess $b'\in \{0,1\}$. \end{enumerate} Let $\advc{\Sigma,\mathcal{A}}{ccpk}{cert}{del}(\lambda)$ be the advantage of the experiment above. 
We say that $\Sigma$ is IND-CPA-CD secure if for any QPT adversary $\mathcal{A}$, it holds that
\begin{align}
\advc{\Sigma,\mathcal{A}}{ccpk}{cert}{del}(\lambda)\coloneqq \abs{\Pr[ \expb{\Sigma,\mathcal{A}}{ccpk}{cert}{del}(\lambda, 0)=1] - \Pr[ \expb{\Sigma,\mathcal{A}}{ccpk}{cert}{del}(\lambda, 1)=1] }\leq {\mathsf{negl}}(\lambda).
\end{align}
\end{definition}
\subsection{Preparation}
\label{sec:cut_and_choose_adaptive_hardcore}
We prove that any injective invariant NTCF family satisfies a property that we call the \emph{cut-and-choose adaptive hardcore property}, which is used in the security proof of our PKE with certified deletion with classical communication. To prove the cut-and-choose adaptive hardcore property, we first prove a simple combinatorial lemma and its corollary.
\begin{lemma}\label{lem:subset_probability}
Let $n,k$ be positive integers and $T\subseteq [2n]$ be a subset. Let $S\subseteq [2n]$ be a uniformly random subset conditioned on $|S|=n$.
If $k\leq |T| $, we have
\[
\Pr\left[S\cap T=\emptyset\right]\leq \left(\frac{1}{2}\right)^{k}.
\]
If $|T|\leq k\leq n$, we have
\[
\Pr\left[S\cap T=\emptyset\right]> \left(\frac{n-k}{2n-k}\right)^{k}.
\]
\end{lemma}
\begin{proof}
When $|T|\geq n+1$, we have $\Pr\left[S\cap T=\emptyset\right]=0$ by the pigeonhole principle and thus the first inequality trivially holds. This case is irrelevant for the second inequality, where we assume $|T|\leq n$. In the following, we assume $|T|\leq n$. Let $k'\coloneqq |T|$. Then we have
\begin{align}
\Pr\left[S\cap T=\emptyset\right]=\frac{\binom{2n-k'}{n}}{\binom{2n}{n}}
=\frac{(2n-k')!}{(n-k')!n!}\cdot\frac{n!n!}{(2n)!}=\prod_{i=0}^{k'-1}\frac{n-i}{2n-i}.
\end{align}
For all $0\leq i\leq k'-1$, we have
\begin{align}
\frac{n-k'}{2n-k'}
<\frac{n-i}{2n-i}\leq \frac{1}{2}.
\end{align}
Therefore we have
\begin{align}
\left(\frac{n-k'}{2n-k'}\right)^{k'}<\Pr\left[S\cap T=\emptyset\right]\leq \left(\frac{1}{2}\right)^{k'}.
\end{align}
Then, \cref{lem:subset_probability} follows from $\left(\frac{1}{2}\right)^{k'}\leq \left(\frac{1}{2}\right)^{k}$ for $k\leq k'$ and $\left(\frac{n-k}{2n-k}\right)^{k}\leq \left(\frac{n-k'}{2n-k'}\right)^{k'}$ for $k'\leq k \leq n$.
\end{proof}
\begin{corollary}\label{cor:subset_probability_two}
Let $n,k$ be positive integers and $T\subseteq [4n]$ be a subset. Let $S\subseteq [4n]$ be a uniformly random subset conditioned on $|S|=2n$ and $U\subseteq S$ be a uniformly random subset conditioned on $|U|=n$. If $k\leq |T| $, we have
\[
\Pr\left[S\cap T=\emptyset\right]\leq \left(\frac{1}{2}\right)^{k}.
\]
For any $S^*\subseteq [4n]$ such that $|S^*|=2n$, if $|T|<k<n$, we have
\[
\Pr\left[U\cap T=\emptyset|S=S^*\right]> \left(\frac{n-k}{2n-k}\right)^{k}.
\]
\end{corollary}
\begin{proof}
The former part immediately follows from \cref{lem:subset_probability} by considering $2n$ as $n$ in \cref{lem:subset_probability}.
For the latter part, suppose that $S$ is fixed to be $S^*$. We have $U\cap T=U\cap (S^*\cap T)$ and $|S^*\cap T|\leq |T|<k<n$. By considering a one-to-one map from $S^*$ to $[2n]$, we can apply the latter statement of \cref{lem:subset_probability}, where we think of the images of $U$ and $S^*\cap T$ as $S$ and $T$ in \cref{lem:subset_probability}, respectively, to obtain \cref{cor:subset_probability_two}. \end{proof} Then we define the cut-and-choose adaptive hardcore property and prove that any injective invariant NTCF family satisfies it. \begin{lemma}[Cut-and-Choose Adaptive Hardcore Property]\label{lem:cut_and_choose_adaptive_hardcore} Let $\mathcal{F}$ be an injective invariant NTCF family and $\mathcal{G}$ be the corresponding trapdoor injective family. Then $\mathcal{F}$ and $\mathcal{G}$ satisfy what we call the \emph{cut-and-choose adaptive hardcore property} defined below. For a QPT adversary $\mathcal{A}$ and a positive integer $n$, we consider the following experiment $\expb{(\mathcal{F},\mathcal{G}),\mathcal{A}}{cut}{and}{choose}(\lambda,n)$. \begin{enumerate} \item The challenger chooses a uniform subset $S\subseteq [4n]$ such that $|S|=2n$.\footnote{We can also take $S\subseteq [2n]$ such that $|S|=n$, but we do as above just for convenience in the proof.} \item The challenger generates $(\mathsf{k}_i,\keys{td}_i)\leftarrow \algo{Gen}_{\mathcal{G}}(1^\lambda)$ for all $i\in S$ and $(\mathsf{k}_i,\keys{td}_i)\leftarrow \algo{Gen}_{\mathcal{F}}(1^\lambda)$ for all $i\in \overline{S}$ and sends $\{\mathsf{k}_i\}_{i\in[4n]}$ to $\mathcal{A}$. \item $\mathcal{A}$ sends $\{y_i,d_i,e_i\}_{i\in[4n]}$ to the challenger. \item The challenger computes $x_{i,\beta}\leftarrow \algo{Inv}_{\mathcal{F}}(\keys{td}_i,\beta,y_i)$ for all $(i,\beta)\in \overline{S}\times \{0,1\}$ and checks if $d_i\in G_{\mathsf{k}_i,0,x_{i,0}} \cap G_{\mathsf{k}_i,1,x_{i,1}}$ and $e_i=d_i\cdot (J(x_{i,0})\oplus J(x_{i,1}))$ hold for all $i\in \overline{S}$. 
If they do not hold for some $i\in \overline{S}$, the challenger immediately aborts and the experiment returns $0$. \item \label{step:reveal_S} The challenger sends $S$ to $\mathcal{A}$. \item $\mathcal{A}$ sends $\{b_i,x_i\}_{i\in S}$ to the challenger. \item The challenger checks if $\algo{Chk}_{\mathcal{G}}(\mathsf{k}_i,b_i,x_i,y_i)=1$ holds for all $i\in S$. If this holds for all $i\in S$, the experiment returns $1$. Otherwise, it returns $0$. \end{enumerate} Then for any $n$ such that $n\leq {\mathrm{poly}}(\lambda)$ and $n=\omega(\log \lambda)$, it holds that \begin{align} \advc{(\mathcal{F},\mathcal{G}),\mathcal{A}}{cut}{and}{choose}(\lambda,n)\coloneqq \Pr[ \expb{(\mathcal{F},\mathcal{G}),\mathcal{A}}{cut}{and}{choose}(\lambda,n)=1] \leq {\mathsf{negl}}(\lambda). \end{align} \end{lemma} \begin{proof} We consider a modified experiment $\wtexpb{\mathcal{F},\mathcal{A}}{cut}{and}{choose}(\lambda,n)$ that works similarly to $\expb{(\mathcal{F},\mathcal{G}),\mathcal{A}}{cut}{and}{choose}(\lambda,n)$ except that the challenger generates $(\mathsf{k}_i,\keys{td}_i)$ by $\algo{Gen}_{\mathcal{F}}$ for all $i\in[4n]$. Since the challenger in these experiments does not use $\keys{td}_i$ for $i\in S$ at all, the injective invariance implies \begin{align} \abs {\Pr[ \wtexpb{\mathcal{F},\mathcal{A}}{cut}{and}{choose}(\lambda,n)=1] -\Pr[ \expb{(\mathcal{F},\mathcal{G}),\mathcal{A}}{cut}{and}{choose}(\lambda,n)=1]} \leq {\mathsf{negl}}(\lambda) \end{align} by a straightforward hybrid argument. Therefore, it suffices to prove \begin{align} \Pr[ \wtexpb{\mathcal{F},\mathcal{A}}{cut}{and}{choose}(\lambda,n)=1] \leq {\mathsf{negl}}(\lambda). \end{align} In the following, we reduce this to the amplified adaptive hardcore property (\cref{lem:amplified_adaptive_hardcore}). For the sake of contradiction, we assume that $\Pr[ \wtexpb{\mathcal{F},\mathcal{A}}{cut}{and}{choose}(\lambda,n)=1]$ is non-negligible. 
Then there exists a polynomial $p$ such that
\begin{align}
\Pr[ \wtexpb{\mathcal{F},\mathcal{A}}{cut}{and}{choose}(\lambda,n)=1] \geq \frac{1}{p(\lambda)}
\end{align}
for infinitely many $\lambda\in \mathbb{N}$. Let $k$ be the minimal integer such that
\[
\left(\frac{1}{2}\right)^{k}\leq \frac{1}{2p(\lambda)}.
\]
In $\wtexpb{\mathcal{F},\mathcal{A}}{cut}{and}{choose}(\lambda,n)$, let $T\subseteq [4n]$ be the subset consisting of $i\in [4n]$ such that $d_i\notin G_{\mathsf{k}_i,0,x_{i,0}} \cap G_{\mathsf{k}_i,1,x_{i,1}}$ or $e_i\neq d_i\cdot (J(x_{i,0})\oplus J(x_{i,1}))$ where $x_{i,\beta}\leftarrow \algo{Inv}_{\mathcal{F}}(\keys{td}_i,\beta,y_i)$ for $\beta \in \{0,1\}$.\footnote{Note that $x_{i,\beta}$ was defined only for $(i,\beta)\in \overline{S}\times \{0,1\}$ in $\wtexpb{\mathcal{F},\mathcal{A}}{cut}{and}{choose}(\lambda,n)$, but we naturally generalize it to all $(i,\beta)\in [4n]\times \{0,1\}$. We remark that this is well-defined since the challenger uses $\algo{Gen}_{\mathcal{F}}$ on all positions in $\wtexpb{\mathcal{F},\mathcal{A}}{cut}{and}{choose}(\lambda,n)$ unlike in $\expb{(\mathcal{F},\mathcal{G}),\mathcal{A}}{cut}{and}{choose}(\lambda,n)$.} Let $\mathtt{Bad}$ be the event that $|T|\geq k$. When $\wtexpb{\mathcal{F},\mathcal{A}}{cut}{and}{choose}(\lambda,n)$ returns $1$, we must have $\overline{S}\cap T= \emptyset$. Therefore, by \cref{cor:subset_probability_two}, we have
\begin{align}
&\Pr[\wtexpb{\mathcal{F},\mathcal{A}}{cut}{and}{choose}(\lambda,n)=1 \land \mathtt{Bad}]\\
&\leq \Pr[\overline{S}\cap T= \emptyset \land \mathtt{Bad}]\\
&\leq \Pr[\overline{S}\cap T= \emptyset | \mathtt{Bad}]\\
&\leq \left(\frac{1}{2}\right)^k\\
&\leq \frac{1}{2p(\lambda)}.
\end{align}
We remark that we can apply \cref{cor:subset_probability_two} to get the third inequality above since $|T|\geq k$ when $\mathtt{Bad}$ occurs, no information about $S$ is given to $\mathcal{A}$ before Step \ref{step:reveal_S} in $\wtexpb{\mathcal{F},\mathcal{A}}{cut}{and}{choose}(\lambda,n)$, and thus $\overline{S}$ is independent of $\mathtt{Bad}$.
On the other hand, we have
\begin{align}
\Pr[ \wtexpb{\mathcal{F},\mathcal{A}}{cut}{and}{choose}(\lambda,n)=1]
=\Pr[ \wtexpb{\mathcal{F},\mathcal{A}}{cut}{and}{choose}(\lambda,n)=1 \land \mathtt{Bad}]
+\Pr[ \wtexpb{\mathcal{F},\mathcal{A}}{cut}{and}{choose}(\lambda,n)=1 \land \overline{\mathtt{Bad}}]
\geq \frac{1}{p(\lambda)}
\end{align}
for infinitely many $\lambda$. Therefore we have
\begin{align}
\Pr[\wtexpb{\mathcal{F},\mathcal{A}}{cut}{and}{choose}(\lambda,n)=1 \land \overline{\mathtt{Bad}}]\geq \frac{1}{2p(\lambda)}
\end{align}
for infinitely many $\lambda$. This naturally gives an adversary $\mathcal{B}$, described below, that breaks the amplified adaptive hardcore property.
\begin{description}
\item [$\mathcal{B}(\{\mathsf{k}^*_j\}_{j\in [n]})$:] Given a problem instance $\{\mathsf{k}^*_j\}_{j\in [n]}$, it works as follows.
\begin{enumerate}
\item Choose a uniform subset $S\subseteq [4n]$ such that $|S|=2n$.
\item Choose a uniform subset $U\subseteq S$ such that $|U|=n$. Let $i_1,...,i_n$ be the elements of $U$.
\item Set $\mathsf{k}_{i_j}:=\mathsf{k}^*_{j}$ for all $j\in [n]$ and generate $(\mathsf{k}_i,\keys{td}_i)\leftarrow \algo{Gen}_{\mathcal{F}}(1^\lambda)$ for all $i\in \overline{U}$.
\item Send $\{\mathsf{k}_i\}_{i\in [4n]}$ to $\mathcal{A}$ and receive the response $\{y_i,d_i,e_i\}_{i\in[4n]}$ from $\mathcal{A}$.
\item Send $S$ to $\mathcal{A}$ and receive the response $\{b_i,x_i\}_{i\in S}$ from $\mathcal{A}$.
\item Output $\{b_i,x_i,y_i,d_i,e_i\}_{i\in U}$.
\end{enumerate} \end{description} We can see that $\mathcal{B}$ perfectly simulates $\wtexpb{\mathcal{F},\mathcal{A}}{cut}{and}{choose}(\lambda,n)$ for $\mathcal{A}$, and $\mathcal{B}$ wins the amplified adaptive hardcore game whenever we have $\wtexpb{\mathcal{F},\mathcal{A}}{cut}{and}{choose}(\lambda,n)=1$ and $U\cap T=\emptyset$ in the simulated experiment. We have $k=O(\log \lambda)$ by the definition of $k$ and $n=\omega(\log \lambda)$ by the assumption. Therefore we have $k<\frac{n}{2}$ for sufficiently large $\lambda$. Then, for sufficiently large $\lambda$, we have \begin{align} &\Pr[U\cap T=\emptyset|\wtexpb{\mathcal{F},\mathcal{A}}{cut}{and}{choose}(\lambda,n)=1 \land\overline{\mathtt{Bad}}]\\ &=\frac{\Pr[U\cap T=\emptyset \land \wtexpb{\mathcal{F},\mathcal{A}}{cut}{and}{choose}(\lambda,n)=1 \land\overline{\mathtt{Bad}}]}{\Pr[\wtexpb{\mathcal{F},\mathcal{A}}{cut}{and}{choose}(\lambda,n)=1 \land\overline{\mathtt{Bad}}]}\\ &=\sum_{S^*\subseteq [4n]\text{~s.t.~}|S^*|=2n}\sum_{T^*\subseteq [4n]\text{~s.t.~}|T^*|<k}\frac{\Pr[U\cap T^*=\emptyset \land \wtexpb{\mathcal{F},\mathcal{A}}{cut}{and}{choose}(\lambda,n)=1 \land\overline{\mathtt{Bad}}\land S=S^* \land T=T^*]}{\Pr[\wtexpb{\mathcal{F},\mathcal{A}}{cut}{and}{choose}(\lambda,n)=1 \land\overline{\mathtt{Bad}}]}\\ &=\sum_{S^*\subseteq [4n]\text{~s.t.~}|S^*|=2n}\sum_{T^*\subseteq [4n]\text{~s.t.~}|T^*|<k}\frac{\Pr[U\cap T^*=\emptyset]\cdot\Pr[\wtexpb{\mathcal{F},\mathcal{A}}{cut}{and}{choose}(\lambda,n)=1 \land\overline{\mathtt{Bad}}\land S=S^* \land T=T^*]}{\Pr[\wtexpb{\mathcal{F},\mathcal{A}}{cut}{and}{choose}(\lambda,n)=1 \land\overline{\mathtt{Bad}}]}\\ &> \sum_{S^*\subseteq [4n]\text{~s.t.~}|S^*|=2n}\sum_{T^*\subseteq [4n]\text{~s.t.~}|T^*|<k}\left(\frac{n-k}{2n-k}\right)^{k}\cdot \frac{\Pr[\wtexpb{\mathcal{F},\mathcal{A}}{cut}{and}{choose}(\lambda,n)=1 \land\overline{\mathtt{Bad}}\land S=S^* \land T=T^*]}{\Pr[\wtexpb{\mathcal{F},\mathcal{A}}{cut}{and}{choose}(\lambda,n)=1 \land\overline{\mathtt{Bad}}]}\\ 
&= \left(\frac{n-k}{2n-k}\right)^{k}\\ &\geq \frac{1}{{\mathrm{poly}}(\lambda)} \end{align} where the first equality follows from the definition of conditional probability, the second and fourth equalities follow from the fact that $|T|<k$ when $\overline{\mathtt{Bad}}$ occurs, the third equality follows from the fact that $U$ is independent of the events $\wtexpb{\mathcal{F},\mathcal{A}}{cut}{and}{choose}(\lambda,n)=1$, $\overline{\mathtt{Bad}}$, and $T=T^*$ when we fix $S$, the first inequality follows from \cref{cor:subset_probability_two} and $|T|<k<n/2<n$ for sufficiently large $\lambda$, and the second inequality follows from $k<\frac{n}{2}$ for sufficiently large $\lambda$, in which case we have $\frac{n-k}{2n-k}> \frac{1}{3}$, and $k=O(\log \lambda)$. Therefore we have \begin{align} \Pr[\mathcal{B}\text{~wins}] &\geq \Pr[\wtexpb{\mathcal{F},\mathcal{A}}{cut}{and}{choose}(\lambda,n)=1 \land U\cap T=\emptyset]\\ &\geq \Pr[\wtexpb{\mathcal{F},\mathcal{A}}{cut}{and}{choose}(\lambda,n)=1 \land U\cap T=\emptyset \land \overline{\mathtt{Bad}}]\\ &=\Pr[\wtexpb{\mathcal{F},\mathcal{A}}{cut}{and}{choose}(\lambda,n)=1 \land \overline{\mathtt{Bad}}]\cdot \Pr[ U\cap T=\emptyset |\wtexpb{\mathcal{F},\mathcal{A}}{cut}{and}{choose}(\lambda,n)=1 \land \overline{\mathtt{Bad}}]\\ &\geq \frac{1}{{\mathrm{poly}}(\lambda)} \end{align} for infinitely many $\lambda$. This contradicts the amplified adaptive hardcore property (\cref{lem:amplified_adaptive_hardcore}). Therefore, our assumption that $\Pr[ \wtexpb{\mathcal{F},\mathcal{A}}{cut}{and}{choose}(\lambda,n)=1]$ is non-negligible is false, which completes the proof of \cref{lem:cut_and_choose_adaptive_hardcore}. 
\end{proof}
\subsection{Construction}
\label{sec:PKE_cd_cc_construction}
We construct a PKE scheme with certified deletion with classical communication $\Sigma_{\mathsf{cccd}} =(\algo{KeyGen},\algo{Enc},\algo{Dec},\algo{Del},\algo{Vrfy})$ with plaintext space $\mathcal{M}=\{0,1\}^\ell$ from the following building blocks:
\begin{itemize}
\item an NTCF family $\mathcal{F}$ with the corresponding trapdoor injective family $\mathcal{G}$, for which we use the notations given in~\cref{sec:NTCF};
\item a public key NCE scheme $\Sigma_{\mathsf{nce}}=\mathsf{NCE}.(\algo{KeyGen},\algo{Enc},\algo{Dec},\algo{Fake},\algo{Reveal})$ with plaintext space $\{S\subseteq [4n]: |S|=2n\}$, where $n$ is a positive integer such that $n\leq {\mathrm{poly}}(\lambda)$ and $n=\omega(\log \lambda)$, and we just write $S$ to mean the description of the set $S$ by abuse of notation;
\item an OW-CPA secure PKE scheme $\Sigma_{\mathsf{ow}}=\mathsf{OW}.(\algo{KeyGen},\algo{Enc},\algo{Dec})$ with plaintext space $\{0,1\}^\lambda$; and
\item a hash function $H$ from $\{0,1\}^{\lambda} \times (\{0,1\}\times \mathcal{X})^{2n}$ to $\{0,1\}^\ell$ modeled as a quantumly-accessible random oracle.
\end{itemize}
\begin{description}
\item[$\algo{KeyGen}(1^\lambda)$:] $ $
\begin{itemize}
\item Generate $(\mathsf{nce}.\keys{pk},\mathsf{nce}.\keys{sk},\mathsf{nce}.\mathsf{aux})\leftarrow \mathsf{NCE}.\algo{KeyGen}(1^\lambda)$ and $(\mathsf{ow}.\keys{pk},\mathsf{ow}.\keys{sk})\leftarrow \mathsf{OW}.\algo{KeyGen}(1^\lambda)$ and output $(\keys{pk},\keys{sk}) \coloneqq ((\mathsf{nce}.\keys{pk},\mathsf{ow}.\keys{pk}),(\mathsf{nce}.\keys{sk},\mathsf{ow}.\keys{sk}))$.
\end{itemize}
\item[$\algo{Enc}\langle\mathcal{S}(\keys{pk},m),\mathcal{R} \rangle$:] This is an interactive protocol between a sender $\mathcal{S}$ with input $(\keys{pk},m)$ and a receiver $\mathcal{R}$ without input that works as follows.
\begin{itemize}
\item $\mathcal{S}$ parses $\keys{pk}=(\mathsf{nce}.\keys{pk},\mathsf{ow}.\keys{pk})$.
\item $\mathcal{S}$ chooses a uniform subset $S\subseteq [4n]$ such that $|S|=2n$, generates \begin{align} (\mathsf{k}_i,\keys{td}_i)\leftarrow \begin{cases} \algo{Gen}_{\mathcal{G}}(1^\lambda)~~~&i \in S\\ \algo{Gen}_{\mathcal{F}}(1^\lambda)~~~~&i \in \overline{S} \end{cases} \end{align} for $i\in [4n]$, and sends $\{\mathsf{k}_i\}_{i\in[4n]}$ to $\mathcal{R}$. \item For $i\in [4n]$, $\mathcal{R}$ generates a quantum state \begin{align} \ket{\psi'_{i}} = \begin{cases} \frac{1}{\sqrt{|\mathcal{X}|}}\sum_{x\in\mathcal{X}, y\in\mathcal{Y},b\in\{0,1\}}\sqrt{(g_{\mathsf{k}_i,b}(x))(y)}|b,x\rangle|y\rangle ~~~&(i\in S)\\ \frac{1}{\sqrt{\abs{\mathcal{X}}}}\sum_{x\in \mathcal{X}, y\in\mathcal{Y}, b\in\zo{}}\sqrt{(f'_{\mathsf{k}_i,b}(x))(y)}\ket{b,x}\ket{y} & (i \in \overline{S}) \end{cases} \end{align} by using $\algo{Samp}$, measure the last register to obtain $y_i\in\mathcal{Y}$, and let $\ket{\phi'_i}$ be the post-measurement state where the measured register is discarded. Note that this can be done without knowing $S$ since $\algo{Samp}_{\mathcal{F}}=\algo{Samp}_{\mathcal{G}}$, which is just denoted by $\algo{Samp}$, as required in \cref{def:injective_invariance}. By \cref{def:NTCF,def:TIF}, we can see that for all $i\in [4n]$, $\ket{\phi'_i}$ has a negligible trace distance from the following state: \begin{align} \ket{\phi_{i}} = \begin{cases} \ket{b_i}\ket{x_{i}} ~~~&(i\in S)\\ \frac{1}{\sqrt{2}}\left(\ket{0}\ket{x_{i,0}}+\ket{1}\ket{x_{i,1}}\right) & (i \in \overline{S}) \end{cases} \end{align} where $(x_{i},b_i)\leftarrow \algo{Inv}_{\mathcal{G}}(\keys{td}_i,y_i)$ for $i\in S$ and $x_{i,\beta}\leftarrow \algo{Inv}_{\mathcal{F}}(\keys{td}_i,\beta,y_i)$ for $(i,\beta)\in \overline{S}\times \{0,1\}$.\footnote{Indeed, $\ket{\psi'_i}=\ket{\psi_i}$ for $i\in S$.} $\mathcal{R}$ sends $\{y_i\}_{i\in[4n]}$ to $\mathcal{S}$ and keeps $\{\ket{\phi'_i}\}_{i\in[4n]}$. 
\item $\mathcal{S}$ chooses $K\leftarrow \{0,1\}^\lambda$ and computes $(b_i,x_i)\leftarrow \algo{Inv}_{\mathcal{G}}(\keys{td}_i,y_i)$ for all $i\in S$. If $\algo{Chk}_{\mathcal{G}}(\mathsf{k}_i,b_i,x_i,y_i)=0$ for some $i\in S$, $\mathcal{S}$ returns $\bot$ to $\mathcal{R}$. Otherwise, let $i_1,...,i_{2n}$ be the elements of $S$ in ascending order. $\mathcal{S}$ sets $Z\coloneqq (K,(b_{i_1},x_{i_1}),(b_{i_2},x_{i_2}),...,(b_{i_{2n}},x_{i_{2n}}))$, computes \begin{align} &\mathsf{nce}.\keys{CT}\leftarrow \mathsf{NCE}.\algo{Enc}(\mathsf{nce}.\keys{pk},S),\\ &\mathsf{ow}.\keys{CT}\leftarrow \mathsf{OW}.\algo{Enc}(\mathsf{ow}.\keys{pk},K),\\ &\keys{CT}_{\mathsf{msg}}\coloneqq m\oplus H(Z), \end{align} and sends $(\mathsf{nce}.\keys{CT},\mathsf{ow}.\keys{CT},\keys{CT}_{\mathsf{msg}})$ to $\mathcal{R}$. \item $\mathcal{S}$ outputs $\keys{vk}\coloneqq \{\keys{td}_i,y_i\}_{i\in \overline{S}}$ and $\mathcal{R}$ outputs $\keys{CT}\coloneqq (\{\ket{\phi'_i}\}_{i\in[4n]},\mathsf{nce}.\keys{CT},\mathsf{ow}.\keys{CT},\keys{CT}_{\mathsf{msg}})$. \end{itemize} \item[$\algo{Dec}(\keys{sk},\keys{CT})$:] $ $ \begin{itemize} \item Parse $\keys{sk} = (\mathsf{nce}.\keys{sk},\mathsf{ow}.\keys{sk})$ and $\keys{CT} = (\{\ket{\phi'_i}\}_{i\in[4n]},\mathsf{nce}.\keys{CT},\mathsf{ow}.\keys{CT},\keys{CT}_{\mathsf{msg}})$. \item Compute $S^\prime \leftarrow \mathsf{NCE}.\algo{Dec}(\mathsf{nce}.\keys{sk},\mathsf{nce}.\keys{CT})$. \item Compute $K^\prime \leftarrow \mathsf{OW}.\algo{Dec}(\mathsf{ow}.\keys{sk},\mathsf{ow}.\keys{CT})$. \item For all $i\in S'$, measure $\ket{\phi'_i}$ in the computational basis and let $(b^\prime_i,x^\prime_i)$ be the outcome.
\item Compute and output $m^\prime \coloneqq \keys{CT}_{\mathsf{msg}} \oplus H(K^\prime,(b_{i_1}^\prime, x_{i_1}^\prime),(b_{i_2}^\prime, x_{i_2}^\prime),...,(b_{i_{2n}}^\prime, x_{i_{2n}}^\prime))$, where $i_1,...,i_{2n}$ are the elements of $S'$ in ascending order.\footnote{If $S'=\bot$ or $K'=\bot$, output $\bot$.} \end{itemize} \item[$\algo{Del}(\keys{CT})$:] $ $ \begin{itemize} \item Parse $\keys{CT} = (\{\ket{\phi'_i}\}_{i\in[4n]},\mathsf{nce}.\keys{CT},\mathsf{ow}.\keys{CT},\keys{CT}_{\mathsf{msg}})$. \item For all $i\in [4n]$, evaluate the function $J$ on the second register of $\ket{\phi'_i}$. That is, apply to $\ket{\phi'_i}$ the isometry that maps $\ket{b,x}$ to $\ket{b,J(x)}$. (Note that this can be done efficiently since $J$ is injective and efficiently invertible.) Let $\ket{\phi''_i}$ be the resulting state. \item For all $i\in [4n]$, measure $\ket{\phi''_i}$ in the Hadamard basis and let $(e_i,d_i)$ be the outcome. \item Output $\keys{cert} \coloneqq \{(e_i,d_i)\}_{i\in [4n]}$. \end{itemize} \item[$\algo{Vrfy}(\keys{vk},\keys{cert})$:] $ $ \begin{itemize} \item Parse $\keys{vk} = \{\keys{td}_i,y_i\}_{i\in \overline{S}}$ and $\keys{cert} =\{(e_i,d_i)\}_{i\in [4n]}$. \item Compute $x_{i,\beta}\leftarrow \algo{Inv}_{\mathcal{F}}(\keys{td}_i,\beta,y_i)$ for all $(i,\beta)\in \overline{S}\times \{0,1\}$. \item Output $\top$ if $d_i\in G_{\mathsf{k}_i,0,x_{i,0}} \cap G_{\mathsf{k}_i,1,x_{i,1}}$ and $e_i=d_i\cdot (J(x_{i,0})\oplus J(x_{i,1}))$ hold for all $i\in \overline{S}$ and output $\bot$ otherwise. \end{itemize} \end{description} \paragraph{Correctness.} As observed in the description, $\ket{\phi'_i}$ in the ciphertext has a negligible trace distance from $\ket{\phi_i}$. Therefore, it suffices to prove correctness assuming that $\ket{\phi'_i}$ is replaced with $\ket{\phi_i}$. After this replacement, decryption correctness clearly holds assuming correctness of $\Sigma_{\mathsf{nce}}$ and $\Sigma_{\mathsf{ow}}$.
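The classical layer of this construction is a one-time pad under the random oracle: $\keys{CT}_{\mathsf{msg}}=m\oplus H(Z)$, undone in $\algo{Dec}$ by recomputing $Z$ from $S'$, $K'$, and the measured preimages. The following toy Python sketch illustrates only this masking step; it is not the scheme itself. The NTCF states and the $\mathsf{NCE}$/$\mathsf{OW}$ components are abstracted away, $H$ is instantiated with SHAKE-256 as a stand-in for the random oracle, and the preimages are random placeholder strings.

```python
# Toy sketch of the classical masking layer only: CT_msg = m XOR H(Z),
# where Z packs the key K and the preimages (b_i, x_i) for i in S.
# SHAKE-256 stands in for the random oracle; the quantum states and the
# NCE / OW-PKE components of the real scheme are abstracted away.
import hashlib
import os

def H(Z: bytes, out_len: int) -> bytes:
    # Random-oracle stand-in with variable output length.
    return hashlib.shake_256(Z).digest(out_len)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(s ^ t for s, t in zip(a, b))

# Sender side: K and placeholder preimages (b_i, x_i) recovered via Inv_G.
K = os.urandom(16)
preimages = [(i % 2, os.urandom(8)) for i in range(4)]  # placeholders
Z = K + b"".join(bytes([b]) + x for b, x in preimages)

m = b"certified deletion"
CT_msg = xor(m, H(Z, len(m)))

# Receiver side: recomputes Z from S', K', and its computational-basis
# measurement outcomes (here, the same placeholders) and unmasks CT_msg.
m_prime = xor(CT_msg, H(Z, len(CT_msg)))
assert m_prime == m
```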
We prove verification correctness below. For $i\in \overline{S}$, if we apply $J$ to the second register of $\ket{\phi_i}$ and then apply the Hadamard transform to both registers as in $\algo{Del}$, the resulting state can be written as \begin{align} &2^{-\frac{w+2}{2}}\sum_{d,b,e} (-1)^{d\cdot J(x_{i,b})\oplus eb}\ket{e}\ket{d}\\ & = 2^{-\frac{w}{2}}\sum_{d \in \zo{w}}(-1)^{d\cdot J(x_{i,0})}\ket{d\cdot (J(x_{i,0})\oplus J(x_{i,1}))}\ket{d}. \end{align} Therefore, the measurement result is $(e_i,d_i)$ such that $e_i=d_i\cdot (J(x_{i,0})\oplus J(x_{i,1}))$ for a uniform $d_i\leftarrow \{0,1\}^w$. By the first item of the adaptive hardcore property in~\cref{def:NTCF}, it holds that $d_i \in G_{\mathsf{k}_i,0,x_{i,0}}\cap G_{\mathsf{k}_i,1,x_{i,1}}$ except with negligible probability. Hence, the certificate $\keys{cert}=\{(e_i,d_i)\}_{i\in[4n]}$ passes the verification by $\algo{Vrfy}$ with overwhelming probability. \paragraph{Security.} We prove the following theorem. \begin{theorem}\label{thm:pke_cccd} If $\Sigma_{\mathsf{nce}}$ is RNC secure, $\Sigma_{\mathsf{ow}}$ is OW-CPA secure, and $\mathcal{F}$ is an injective invariant NTCF family with the corresponding injective trapdoor family $\mathcal{G}$, $\Sigma_{\mathsf{cccd}}$ is IND-CPA-CD secure in the QROM where $H$ is modeled as a quantumly-accessible random oracle. \end{theorem} \begin{proof} What we need to prove is that for any QPT adversary $\mathcal{A}$, it holds that \begin{align} \advc{\Sigma_{\mathsf{cccd}},\mathcal{A}}{ccpk}{cert}{del}(\lambda)\coloneqq \abs{\Pr[ \expb{\Sigma_{\mathsf{cccd}},\mathcal{A}}{ccpk}{cert}{del}(\lambda, 0)=1] - \Pr[ \expb{\Sigma_{\mathsf{cccd}},\mathcal{A}}{ccpk}{cert}{del}(\lambda, 1)=1] }\leq {\mathsf{negl}}(\lambda). \end{align} Let $q={\mathrm{poly}}(\lambda)$ be the maximum number of $\mathcal{A}$'s random oracle queries. For clarity, we describe how $\expb{\Sigma_{\mathsf{cccd}},\mathcal{A}}{ccpk}{cert}{del}(\lambda, b)$ works below.
\begin{enumerate} \item A uniformly random function $H$ from $\{0,1\}^{\lambda} \times (\{0,1\}\times \mathcal{X})^{2n}$ to $\{0,1\}^\ell$ is chosen, and $\mathcal{A}$ can make arbitrarily many quantum queries to $H$ at any time in the experiment. \item The challenger generates $(\mathsf{nce}.\keys{pk},\mathsf{nce}.\keys{sk},\mathsf{nce}.\mathsf{aux})\leftarrow \mathsf{NCE}.\algo{KeyGen}(1^\lambda)$ and $(\mathsf{ow}.\keys{pk},\mathsf{ow}.\keys{sk})\leftarrow \mathsf{OW}.\algo{KeyGen}(1^\lambda)$ and sends $\keys{pk} \coloneqq (\mathsf{nce}.\keys{pk},\mathsf{ow}.\keys{pk})$ to $\mathcal{A}$. \item $\mathcal{A}$ sends $(m_0,m_1)\in \mathcal{M}^2$ to the challenger. \item The challenger chooses a uniform subset $S\subseteq [4n]$ such that $|S|=2n$, generates \begin{align} (\mathsf{k}_i,\keys{td}_i)\leftarrow \begin{cases} \algo{Gen}_{\mathcal{G}}(1^\lambda)~~~&i \in S\\ \algo{Gen}_{\mathcal{F}}(1^\lambda)~~~~&i \in \overline{S} \end{cases} \end{align} for $i\in [4n]$, and sends $\{\mathsf{k}_i\}_{i\in[4n]}$ to $\mathcal{A}$. \item $\mathcal{A}$ sends $\{y_i\}_{i\in [4n]}$ to the challenger. \item \label{step:send_ct} The challenger chooses $K\leftarrow \{0,1\}^\lambda$ and computes $(b_i,x_i)\leftarrow \algo{Inv}_{\mathcal{G}}(\keys{td}_i,y_i)$ for all $i\in S$. If $\algo{Chk}_{\mathcal{G}}(\mathsf{k}_i,b_i,x_i,y_i)=0$ for some $i\in S$, the challenger sets $Z\coloneqq\mathsf{null}$ and returns $\bot$ to $\mathcal{A}$, where $\mathsf{null}$ is a special symbol indicating that $Z$ is undefined. Otherwise, let $i_1,...,i_{2n}$ be the elements of $S$ in ascending order.
The challenger sets $Z\coloneqq (K,(b_{i_1},x_{i_1}),(b_{i_2},x_{i_2}),...,(b_{i_{2n}},x_{i_{2n}}))$, computes \begin{align} &\mathsf{nce}.\keys{CT}\leftarrow \mathsf{NCE}.\algo{Enc}(\mathsf{nce}.\keys{pk},S),\\ &\mathsf{ow}.\keys{CT}\leftarrow \mathsf{OW}.\algo{Enc}(\mathsf{ow}.\keys{pk},K),\\ &\keys{CT}_{\mathsf{msg}}\coloneqq m_b\oplus H(Z), \end{align} and sends $(\mathsf{nce}.\keys{CT},\mathsf{ow}.\keys{CT},\keys{CT}_{\mathsf{msg}})$ to $\mathcal{A}$. \item \label{step:send_cert} $\mathcal{A}$ sends $\keys{cert}=\{(e_i,d_i)\}_{i\in [4n]}$ to the challenger. \item \label{step:reveal_sk} The challenger computes $x_{i,\beta}\leftarrow \algo{Inv}_{\mathcal{F}}(\keys{td}_i,\beta,y_i)$ for all $(i,\beta)\in \overline{S}\times \{0,1\}$. If $d_i\in G_{\mathsf{k}_i,0,x_{i,0}} \cap G_{\mathsf{k}_i,1,x_{i,1}}$ and $e_i=d_i\cdot (J(x_{i,0})\oplus J(x_{i,1}))$ hold for all $i\in \overline{S}$, the challenger sends $\keys{sk}\coloneqq(\mathsf{nce}.\keys{sk},\mathsf{ow}.\keys{sk})$ to $\mathcal{A}$; otherwise it sends $\bot$ to $\mathcal{A}$. \item $\mathcal{A}$ outputs $b'$. The output of the experiment is $b'$. \end{enumerate} We define the following sequence of hybrids. \begin{description} \item[$\mathsf{Hyb}_1(b)$:] Let $\mathtt{Reveal}_{\keys{sk}}$ be the event that the challenger sends $\keys{sk}$ in Step~\ref{step:reveal_sk}. $\mathsf{Hyb}_1(b)$ is identical to $\expb{\Sigma_{\mathsf{cccd}},\mathcal{A}}{ccpk}{cert}{del}(\lambda, b)$ except that $K$ is chosen at the beginning and the oracle given to $\mathcal{A}$ before $\mathtt{Reveal}_{\keys{sk}}$ occurs is replaced with $H_{K\|\ast \rightarrow H'}$, which is $H$ reprogrammed according to $H'$ on inputs whose first entry is $K$, where $H'$ is another independent random function.
More formally, $H_{K\|\ast \rightarrow H'}$ is defined by \begin{align} H_{K\|\ast \rightarrow H'}(K',(b_1,x_1),...,(b_{2n}, x_{2n}))\coloneqq \begin{cases} H(K',(b_1,x_1),...,(b_{2n}, x_{2n}))~~~&(K'\neq K)\\ H'(K',(b_1,x_1),...,(b_{2n}, x_{2n})) ~~~&(K'= K) \end{cases}. \end{align} We note that the challenger still uses $H$ to generate $\keys{CT}_{\mathsf{msg}}$ and that, after $\mathtt{Reveal}_{\keys{sk}}$ occurs, the oracle is still $H$, as in the real experiment. On the other hand, if $\mathtt{Reveal}_{\keys{sk}}$ does not occur, the oracle $H_{K\|\ast \rightarrow H'}$ is used throughout the experiment except for the generation of $\keys{CT}_{\mathsf{msg}}$. \item[$\mathsf{Hyb}_2(b)$:] This is identical to $\mathsf{Hyb}_1(b)$ except that $\mathsf{nce}.\keys{CT}$ and $\mathsf{nce}.\keys{sk}$ that may be sent to $\mathcal{A}$ in Steps~\ref{step:send_ct}~and~\ref{step:reveal_sk} are replaced by \begin{align} &\mathsf{nce}.\widetilde{\ct}\leftarrow \mathsf{NCE}.\algo{Fake}(\mathsf{nce}.\keys{pk},\mathsf{nce}.\keys{sk},\mathsf{nce}.\mathsf{aux}),\\ &\mathsf{nce}.\widetilde{\keys{sk}}\leftarrow \mathsf{NCE}.\algo{Reveal}(\mathsf{nce}.\keys{pk},\mathsf{nce}.\keys{sk},\mathsf{nce}.\mathsf{aux},\mathsf{nce}.\widetilde{\ct},S). \end{align} \item[$\mathsf{Hyb}_3(b)$:] This is identical to $\mathsf{Hyb}_2(b)$ except that the oracle given to $\mathcal{A}$ after $\mathtt{Reveal}_{\keys{sk}}$ occurs is replaced with $H_{Z \rightarrow r}$, which is $H$ reprogrammed to output $r$ on input $Z=(K,(b_{i_1},x_{i_1}),...,(b_{i_{2n}},x_{i_{2n}}))$, where $r$ is an independent uniformly random $\ell$-bit string. More formally, $H_{Z \rightarrow r}$ is defined by \begin{align} H_{Z \rightarrow r}(Z')\coloneqq \begin{cases} H(Z')~~~&(Z'\neq Z)\\ r ~~~&(Z'= Z) \end{cases}. \end{align} Note that we have $H_{Z \rightarrow r}= H$ if $Z=\mathsf{null}$, i.e., if $\algo{Chk}_{\mathcal{G}}(\mathsf{k}_i,b_i,x_i,y_i)=0$ for some $i\in S$ in Step \ref{step:send_ct}.
\end{description} \begin{proposition}\label{prop:pkecccd_hyb_one} If $\Sigma_{\mathsf{ow}}$ is OW-CPA secure, $\abs{\Pr[\expb{\Sigma_{\mathsf{cccd}},\mathcal{A}}{ccpk}{cert}{del}(\lambda, b)=1] - \Pr[\mathsf{Hyb}_1(b)=1]} \le {\mathsf{negl}}(\lambda)$. \end{proposition} \begin{proof} \input{Proof1.tex} \end{proof} \begin{proposition}\label{prop:pkecccd_hyb_two} If $\Sigma_{\mathsf{nce}}$ is RNC secure, $\abs{\Pr[\mathsf{Hyb}_1(b)=1]-\Pr[\mathsf{Hyb}_2(b)=1]} \le {\mathsf{negl}}(\lambda)$. \end{proposition} \begin{proof} \input{Proof2.tex} \end{proof} \begin{proposition}\label{prop:pkecccd_hyb_three} If $\mathcal{F}$ and $\mathcal{G}$ satisfy the cut-and-choose adaptive hardcore property (\cref{lem:cut_and_choose_adaptive_hardcore}), $\abs{\Pr[\mathsf{Hyb}_2(b)=1]-\Pr[\mathsf{Hyb}_3(b)=1]} \le {\mathsf{negl}}(\lambda)$. \end{proposition} \begin{proof} \input{Proof3.tex} \end{proof} \begin{proposition}\label{prop:pkecccd_hyb_final} It holds that $\Pr[\mathsf{Hyb}_3(0)=1]=\Pr[\mathsf{Hyb}_3(1)=1]$. \end{proposition} \begin{proof} In $\hybi{3}$, the challenger queries $H$ while the adversary queries $H_{K\|*\rightarrow H'}$ or $H_{Z\rightarrow r}$. Therefore, $H(Z)$ is used only for generating $\keys{CT}_{\mathsf{msg}}$ in $\hybi{3}$, and thus, from the adversary's view, $\keys{CT}_{\mathsf{msg}}$ is a uniformly random string independent of $b$. Hence, \cref{prop:pkecccd_hyb_final} holds. \end{proof} By combining \cref{prop:pkecccd_hyb_one,prop:pkecccd_hyb_two,prop:pkecccd_hyb_three,prop:pkecccd_hyb_final}, \cref{thm:pke_cccd} is proven. \end{proof} \subsection{Technical Overview Part II: Classical Communication Case}\label{sec:technical_overview_classical} We provide an overview of how to achieve privately verifiable and publicly verifiable PKE with certified deletion using classical communication in this section. We note that both of them rely on interactive encryption algorithms.
\paragraph{Privately verifiable construction.} For realizing a privately verifiable construction with classical communication, we rely on \emph{NTCF functions}~\cite{FOCS:BCMVV18,FOCS:Mahadev18a}. In this overview, we consider an idealized version, noise-free trapdoor claw-free permutations, for simplicity. A trapdoor claw-free permutation is $f:\{0,1\} \times \{0,1\}^w \rightarrow \{0,1\}^w$ such that (1) $f(0,\cdot)$ and $f(1,\cdot)$ are permutations over $\{0,1\}^w$, (2) given the description of $f$, it is hard to find $x_0$ and $x_1$ such that $f(0,x_0)=f(1,x_1)$, and (3) there is a trapdoor $\keys{td}$ that enables one to efficiently find $x_0$ and $x_1$ such that $f(0,x_0)=f(1,x_1)=y$ for any $y$. In addition, prior work showed that (a noisy version of) it satisfies a property called the \emph{adaptive hardcore bit property} under the LWE assumption~\cite{FOCS:BCMVV18}. To explain this, suppose that one generates the state $\sum_{b,x}\ket{b}\ket{x}\ket{f(b,x)}$ and measures the third register in the computational basis to get a result $y$. Then the first and second registers collapse to the state $\frac{1}{\sqrt{2}}\left(\ket{0}\ket{x_0}+\ket{1}\ket{x_1}\right)$ with $f(0,x_0)=f(1,x_1)=y$. If one measures the state in the computational basis, the measurement outcome is $(0,x_0)$ or $(1,x_1)$. If, on the other hand, one measures the state in the Hadamard basis, the measurement outcome is $(e,d)$ such that $e=d\cdot(x_0\oplus x_1)$. The adaptive hardcore bit property roughly means that once one gets $(0,x_0)$ or $(1,x_1)$, one cannot output $(e,d)$ such that $d\neq 0$ and $e=d\cdot(x_0\oplus x_1)$ with probability better than $1/2+{\mathsf{negl}}(\lambda)$. Note that this is a tight bound since $e=d\cdot(x_0\oplus x_1)$ holds with probability $1/2$ if we randomly choose $e$.
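The relation $e=d\cdot(x_0\oplus x_1)$ for Hadamard-basis outcomes can be checked by direct enumeration: the amplitude of an outcome $(e,d)$ is proportional to $(-1)^{d\cdot x_0}+(-1)^{d\cdot x_1\oplus e}$, which is nonzero exactly when $e=d\cdot(x_0\oplus x_1)$. The following Python sanity check (a classical enumeration of these amplitudes, for an assumed small width $w=8$) verifies this; no quantum simulation is involved.

```python
# Classical sanity check of the Hadamard-measurement relation: for the
# state (|0,x0> + |1,x1>)/sqrt(2), the amplitude of outcome (e, d) after
# Hadamard-transforming all w+1 qubits is proportional to
# (-1)^{d.x0} + (-1)^{d.x1 XOR e}, nonzero iff e = d.(x0 XOR x1).
import random

w = 8  # small width so that full enumeration is feasible

def ip2(d: int, x: int) -> int:
    # Inner product over GF(2): parity of the bitwise AND.
    return bin(d & x).count("1") & 1

x0, x1 = random.getrandbits(w), random.getrandbits(w)

# Outcomes with nonzero amplitude.
support = [(e, d) for e in (0, 1) for d in range(2 ** w)
           if (-1) ** ip2(d, x0) + (-1) ** (ip2(d, x1) ^ e) != 0]

# Every outcome in the support satisfies e = d.(x0 XOR x1), and for each
# of the 2^w values of d exactly one e survives (so d is uniform).
assert all(e == ip2(d, x0 ^ x1) for e, d in support)
assert len(support) == 2 ** w
```

This also explains the tightness of the $1/2$ bound: for a uniformly guessed $e$, the relation holds for exactly half of the choices.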
Existing works showed that this property can be amplified by parallel repetition~\cite{CoRR:RadSat19,EPRINT:KitNisYam20}: Specifically, let $(0,x_{i,0})$ and $(1,x_{i,1})$ be the preimages of $y_i$ under $f_i$ for $i\in[n]$, where $n=\omega(\log \lambda)$. Then once one gets a sequence $\{b_i,x_{i,b_i}\}_{i\in[n]}$ for some $b_1\|...\| b_n \in \{0,1\}^n$, it can get a sequence $\{e_i,d_i\}_{i\in [n]}$ such that $d_i\neq 0$ and $e_i=d_i\cdot(x_{i,0}\oplus x_{i,1})$ only with negligible probability. We use this property to construct an encryption scheme with certified deletion. A natural idea would be as follows: The sender sends $n$ functions $\{f_i\}_{i\in [n]}$ to the receiver, the receiver generates $\{y_i\}_{i\in [n]}$ along with states $\{\frac{1}{\sqrt{2}}\left(\ket{0}\ket{x_{i,0}}+\ket{1}\ket{x_{i,1}}\right)\}_{i\in [n]}$ as above and sends $\{y_i\}_{i\in [n]}$ to the sender, and the sender sends the receiver a ciphertext $\keys{CT}$ decryptable only when $\{b_i,x_{i,b_i}\}_{i\in[n]}$ for some $b_1\|...\| b_n \in \{0,1\}^n$ is available. We discuss how to implement such a ciphertext later. We use $\{e_i,d_i\}_{i\in [n]}$ such that $e_i=d_i\cdot (x_{i,0}\oplus x_{i,1})$ as a deletion certificate. The receiver can decrypt the ciphertext by measuring the states in the computational basis, and once it outputs a valid deletion certificate, it must ``forget'' preimages by the amplified adaptive hardcore property and thus cannot decrypt the ciphertext. This idea can be implemented in a straightforward manner if we generate $\keys{CT}$ by (extractable) witness encryption~\cite{STOC:GGSW13,C:GKPVZ13} under the corresponding $\compclass{NP}$ language. However, since witness encryption is a strong assumption, we want to avoid this. Indeed, we can find the following candidate construction using a hash function $H$ modeled as a random oracle.
We set the ciphertext as $\keys{CT}\coloneqq \{\keys{CT}_{i,b}\}_{i\in[n],b\in\{0,1\}}$, where $\{m_i\}_{i\in[n]}$ is an $n$-out-of-$n$ secret sharing of the message $m$ and $\keys{CT}_{i,b}\coloneqq m_i\oplus H(b\| x_{i,b})$. The intuition is that an adversary has to get $m_i$ for all $i\in[n]$ to get $m$ and it has to know $(0,x_{i,0})$ or $(1,x_{i,1})$ to know $m_i$. Therefore, it seems that any adversary that gets any information about $m$ can be used to extract a sequence $\{b_i,x_{i,b_i}\}_{i\in[n]}$ for some $b_1\|...\| b_n \in \{0,1\}^n$. If this is shown, then it is straightforward to prove that the adversary can get no information about $m$ once it submits a valid deletion certificate by the amplified adaptive hardcore property as explained above. However, turning this intuition into a formal proof seems difficult. A common technique to extract information from the adversary's random oracle queries is the one-way to hiding lemma~\cite{JACM:Unruh15,C:AmbHamUnr19}, which roughly claims that if the adversary distinguishes $H(X)$ from random, then we can obtain $X$ with non-negligible probability by measuring a randomly chosen query. Here, a problem is that we have to extract $n$ strings $\{b_i,x_{i,b_i}\}_{i\in[n]}$ simultaneously. On the other hand, the extraction by the one-way to hiding lemma disturbs the adversary's state by a measurement, and thus we cannot use this technique sequentially.\footnote{A recent work by Coladangelo, Majenz, and Poremba \cite{CMP20} studied what is called ``simultaneous one-way to hiding lemma", but their setting is different from ours and their lemma cannot be used in our setting.} The difficulty above comes from the fact that the sender cannot know which of $(0,x_{i,0})$ and $(1,x_{i,1})$ the receiver will get, and thus it has to send a ciphertext that can be decrypted in either case.
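The candidate above can be sketched in Python as a toy, with the claw pairs $(x_{i,0},x_{i,1})$ replaced by random placeholder strings and $H$ instantiated with SHAKE-256: the message is split into an $n$-out-of-$n$ XOR secret sharing $\{m_i\}$, each share is masked both ways, and a receiver holding either preimage per index recovers $m$.

```python
# Toy sketch of the candidate construction (placeholders, not the real
# claw-free pairs): split m into an n-out-of-n XOR secret sharing {m_i}
# and set CT_{i,b} = m_i XOR H(b || x_{i,b}), so knowing either preimage
# per index i suffices to reconstruct m.
import hashlib
import os
import random

def H(data: bytes, out_len: int) -> bytes:
    return hashlib.shake_256(data).digest(out_len)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(s ^ t for s, t in zip(a, b))

n, m = 4, b"secret"
x = [(os.urandom(8), os.urandom(8)) for _ in range(n)]  # (x_{i,0}, x_{i,1})

# n-out-of-n sharing: m_1,...,m_{n-1} uniform, m_n fixes the XOR to m.
shares = [os.urandom(len(m)) for _ in range(n - 1)]
last = m
for s in shares:
    last = xor(last, s)
shares.append(last)

CT = [(xor(shares[i], H(bytes([0]) + x[i][0], len(m))),
       xor(shares[i], H(bytes([1]) + x[i][1], len(m))))
      for i in range(n)]

# A receiver holding one preimage per index (any choice of b_i) gets m.
b = [random.getrandbits(1) for _ in range(n)]
rec = bytes(len(m))
for i in range(n):
    rec = xor(rec, xor(CT[i][b[i]], H(bytes([b[i]]) + x[i][b[i]], len(m))))
assert rec == m
```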
To resolve this issue, we rely on the injective invariance, which roughly says that there is an injective function $g$ that is computationally indistinguishable from $f$~\cite{FOCS:Mahadev18a}. First, suppose that we just use $g$ instead of $f$ in the above idea. Since $g$ is injective, there is a unique preimage $(b_i,x_i)$ of $y_i$, in which case the sender knows that the receiver will get $\{(b_i,x_i)\}_{i\in[n]}$ by the standard basis measurement. In this case, the aforementioned problem can be easily resolved by setting $\keys{CT}\coloneqq m\oplus H(b_1\| x_1\|...\| b_n\| x_n)$ as the ciphertext. Then it is easy to prove that we can extract $\{b_i,x_i\}_{i\in[n]}$ if an adversary obtains some information about $m$ by applying the standard one-way to hiding lemma. However, an obvious problem is that the deletion certificate no longer works for $g$ since the receiver's state collapses to a classical state after the measurement of $\{y_i\}_{i\in [n]}$ and thus the Hadamard basis measurement results in just uniform bits. Our idea is to take advantage of both of them. Specifically, the sender sends functions $\{\eta_i\}_{i\in[n]}$, where $\eta_i$ is a $g$-type function for $i\in S$ and an $f$-type function for $i\in [n]\setminus S$ for a certain set $S\subset[n]$. The receiver generates a set of states each of which is a superposition of two preimages of an $f$-type function or a state encoding the unique preimage of a $g$-type function. The preimages of $g$-type functions are used for encryption/decryption, and the Hadamard measurement results are used as the deletion certificate, whose validity is only checked on positions where $f$-type functions are used. We also include an encryption of the description of the subset $S$ in the ciphertext so that a legitimate receiver can know which positions should be used in decryption.
More precisely, we set $\keys{CT}\coloneqq (\algo{Enc}(S), m\oplus H(\{b_i,x_i\}_{i\in S}))$, where $\algo{Enc}$ is a PKE scheme with the RNC security.\footnote{We require $\algo{Enc}$ to satisfy the RNC security for a reason similar to that in~\cref{sec:technical_overview_quantum}, which we omit here.}\footnote{In the actual construction, there is an additional component that is needed for preventing an adversary from decrypting the ciphertext \emph{before} outputting a valid deletion certificate without the decryption key. This is just standard PKE security and can be added easily. Thus, we omit this and focus on the security \emph{after} outputting a valid deletion certificate.} A deletion certificate $\{e_i,d_i\}_{i\in[n]}$ is valid if we have $d_i\neq 0$ and $e_i=d_i\cdot (x_{i,0}\oplus x_{i,1})$ for all $i\in [n]\setminus S$. For the security proof of this construction, the amplified adaptive hardcore property cannot be directly used, because it is a property about $f$-type functions whereas the above construction mixes $f$-type functions and $g$-type functions, and what we want to have is the mutually exclusive property between preimages of $g$-type functions and deletion certificates of $f$-type functions. To solve the problem, we introduce a new property, which we call {\it the cut-and-choose adaptive hardcore property} (\cref{lem:cut_and_choose_adaptive_hardcore}). The cut-and-choose adaptive hardcore property intuitively means that once the receiver issues a deletion certificate $\{e_i,d_i\}_{i\in [n]}$ that is valid for all $i\in [n]\setminus S$ before knowing $S$, it can no longer generate correct preimages $\{b_i,x_i\}_{i\in S}$ even if it receives $S$ later. Intuitively, this holds because the only way to obtain such $\{e_i,d_i\}_{i\in [n]}$ before knowing $S$ would be to measure the states in the Hadamard basis for all $i\in[n]$, in which case the receiver should forget all preimages.
We show that the cut-and-choose adaptive hardcore property can be reduced to the adaptive hardcore bit property and injective invariance. The new property itself is of independent interest, and we believe it will be useful in many other applications of quantum cryptography. Because the only known construction of NTCF functions~\cite{FOCS:BCMVV18,FOCS:Mahadev18a} assumes the LWE assumption, our construction of the PKE with privately verifiable certified deletion with classical communication is also based on the LWE assumption, and our security proof is done in the QROM. We note that the construction only achieves private verification because verification of deletion certificates requires both of the two preimages of $f$-type functions, which cannot be made public. \paragraph{Publicly verifiable construction.} The above construction is not publicly verifiable because the verification of the validity of $(e_i,d_i)$ requires both preimages $x_{i,0}$ and $x_{i,1}$, which cannot be made public. One might notice that the validity check of the preimage can be done publicly, and might suggest the following construction: preimages are used as the deletion certificate, and Hadamard measurement outcomes $\{e_i,d_i\}_{i\in [n]}$ are used as the decryption key of the encryption. Because a valid $\{e_i,d_i\}_{i\in [n]}$ is a witness of an $\compclass{NP}$ statement, we could use (extractable) witness encryption~\cite{STOC:GGSW13,C:GKPVZ13} to ensure that a receiver can decrypt the message only if it knows a valid $\{e_i,d_i\}_{i\in [n]}$. This idea, however, does not work, because the statement of the witness encryption contains private information (i.e., preimages), and witness encryption ensures nothing about privacy of the statement under which a message is encrypted. Our idea to solve the problem is to use one-shot signatures~\cite{STOC:AGKZ20}.
Roughly speaking, a one-shot signature scheme (with message space $\{0,1\}$) enables one to generate a classical public key $\keys{pk}$ along with a quantum secret key $\keys{sk}$, which can be used to generate either a signature $\sigma_0$ for the message $0$ or a signature $\sigma_1$ for the message $1$, but not both. We note that a signature can be verified publicly. We combine one-shot signatures with extractable witness encryption.\footnote{We note that a combination of one-shot signatures and extractable witness encryption appeared in the work of Georgiou and Zhandry \cite{EPRINT:GeoZha20} in a related but different context.} The encryption $\algo{Enc}(m)$ of a message $m$ in our construction is a ciphertext of witness encryption of the message $m$ under the statement corresponding to the verification of a one-shot signature for the message $0$. The deletion certificate is, on the other hand, a one-shot signature for the message $1$. Once a valid signature of $1$ is issued, a valid signature of $0$, which is a decryption key of our witness encryption, can no longer be generated due to the security of the one-shot signature. This intuitively ensures the certified deletion security of our construction. Because signatures are publicly verifiable, the verification of our construction is also publicly verifiable. In the actual construction, in order to prevent an adversary from decrypting the ciphertext before issuing the deletion certificate, we add an additional layer of encryption, for which we use RNCE for a reason similar to that in \cref{sec:technical_overview_quantum}. Unfortunately, the only known construction of one-shot signatures relies on classical oracles. It is an open question whether we can construct a PKE with publicly verifiable certified deletion with classical communication based on only standard assumptions such as the LWE assumption.
\subsection{Technical Overview Part I: Quantum Communication Case}\label{sec:technical_overview_quantum} We provide an overview of how to achieve PKE and ABE with certified deletion using quantum communication in this section. To explain our idea, we introduce the definition of PKE with certified deletion. \paragraph{Definition of quantum encryption with certified deletion.} A PKE with certified deletion consists of the following algorithms. \begin{description} \item [$\algo{KeyGen}(1^\lambda) \rightarrow (\keys{pk},\keys{sk})$:] This is a key generation algorithm that generates a pair of public and secret keys. \item[$\algo{Enc}(\keys{pk},m)\rightarrow (\keys{vk},\keys{CT})$:] This is an encryption algorithm that generates a ciphertext of a plaintext and a verification key for this ciphertext. \item[$\algo{Dec}(\keys{sk},\keys{CT}) \rightarrow m^\prime$:] This is a decryption algorithm that decrypts a ciphertext. \item[$\algo{Del}(\keys{CT})\rightarrow \keys{cert}$:] This is a deletion algorithm that generates a certificate to guarantee that the ciphertext $\keys{CT}$ was deleted. \item[$\algo{Vrfy}(\keys{vk},\keys{cert})\rightarrow \top$ or $\bot$:] This is a verification algorithm that checks the validity of a certificate $\keys{cert}$ by using a verification key. For correctness, we require that this algorithm returns $\top$ (i.e., it accepts) if $\keys{cert}$ was honestly generated by $\algo{Del}(\keys{CT})$ and $(\keys{vk},\keys{CT})$ was honestly generated by $\algo{Enc}$. \end{description} Roughly speaking, certified deletion security requires that no quantum polynomial-time (QPT) adversary given $\keys{pk}$ and $\keys{CT}$ can obtain any information about the plaintext in $\keys{CT}$ \emph{even if $\keys{sk}$ is given after a valid certificate $\keys{cert} \leftarrow \algo{Del}(\keys{CT})$ is generated}. The difference between PKE and reusable SKE with certified deletion is that, in reusable SKE, $\algo{KeyGen}$ outputs only $\keys{sk}$.
In the one-time SKE case by Broadbent and Islam~\cite{TCC:BroIsl20}, $\algo{Enc}$ does not output $\keys{vk}$ and $\algo{Vrfy}$ uses $\keys{sk}$ instead of $\keys{vk}$. \paragraph{Our idea for PKE.} We use the construction of one-time SKE with certified deletion by Broadbent and Islam~\cite{TCC:BroIsl20}. However, we do not need to know the details of the SKE scheme since we use it in a black-box way in our PKE scheme. What we need to understand about the SKE scheme are the following abstracted properties: (1) A secret key and a plaintext are classical strings. (2) A ciphertext is a quantum state. (3) The encryption algorithm does not output a verification key since the verification key is equal to the secret key. (4) It satisfies the verification correctness and certified deletion security explained above. Our idea is to convert the SKE with certified deletion scheme into a PKE with certified deletion scheme by combining it with a standard PKE scheme (the standard hybrid encryption technique). This conversion is possible since a secret key of the SKE scheme is a classical string. Let $\PKE.(\algo{KeyGen},\algo{Enc},\algo{Dec})$ and $\SKE.(\algo{KeyGen},\algo{Enc},\algo{Dec},\algo{Del},\algo{Vrfy})$ be normal PKE and one-time SKE with certified deletion schemes, respectively. Our PKE with certified deletion scheme is described as follows. \begin{description} \item [$\algo{KeyGen}(1^\lambda)$:] This outputs $(\mathsf{pke}.\keys{pk},\mathsf{pke}.\keys{sk})\leftarrow \PKE.\algo{KeyGen}(1^\lambda)$. \item[$\algo{Enc}(\keys{pk},m)$:] This generates $\mathsf{ske}.\keys{sk} \leftarrow \SKE.\algo{KeyGen}(1^\lambda)$, $\mathsf{ske}.\keys{CT} \leftarrow \SKE.\algo{Enc}(\mathsf{ske}.\keys{sk},m)$, and $\mathsf{pke}.\keys{CT}\leftarrow \PKE.\algo{Enc}(\mathsf{pke}.\keys{pk},\allowbreak \mathsf{ske}.\keys{sk})$, and outputs $\keys{vk} \coloneqq \mathsf{ske}.\keys{sk}$ and $\keys{CT} \coloneqq (\mathsf{ske}.\keys{CT},\mathsf{pke}.\keys{CT})$.
\item[$\algo{Dec}(\keys{sk},\keys{CT})$:] This computes $\mathsf{ske}.\keys{sk}^\prime \leftarrow \PKE.\algo{Dec}(\mathsf{pke}.\keys{sk},\mathsf{pke}.\keys{CT})$ and $m^\prime \leftarrow \SKE.\algo{Dec}(\mathsf{ske}.\keys{sk}^\prime,\mathsf{ske}.\keys{CT})$, and outputs $m^\prime$. \item[$\algo{Del}(\keys{CT})$:] This generates and outputs $\keys{cert} \leftarrow \SKE.\algo{Del}(\mathsf{ske}.\keys{CT})$. \item[$\algo{Vrfy}(\keys{vk},\keys{cert})$:] This outputs the output of $\SKE.\algo{Vrfy}(\mathsf{ske}.\keys{sk},\keys{cert})$ (note that $\keys{vk} =\mathsf{ske}.\keys{sk}$). \end{description} At first glance, this naive idea seems to work since even if $\mathsf{pke}.\keys{sk}$ is given to an adversary after a valid $\keys{cert}$ is generated, $\mathsf{ske}.\keys{CT}$ does not leak information about the plaintext by certified deletion security of the SKE scheme. Note that PKE is used to encrypt $\mathsf{ske}.\keys{sk}$ (not $m$). One-time SKE is sufficient since $\mathsf{ske}.\keys{sk}$ is freshly generated in $\algo{Enc}$. The proof outline is as follows. First, we use IND-CPA security of normal PKE to erase information about $\mathsf{ske}.\keys{sk}$. Then, we use the one-time certified deletion security of $\SKE$. Unfortunately, we do not know how to prove the first step above because we must give $\mathsf{pke}.\keys{sk}$ to an adversary in a security reduction. In the first step, we need to show that if a distinguisher detects that $\PKE.\algo{Enc}(\mathsf{pke}.\keys{pk}, \mathsf{ske}.\keys{sk})$ is changed to $\PKE.\algo{Enc}(\mathsf{pke}.\keys{pk},0^{\abs{\mathsf{ske}.\keys{sk}}})$, we can break IND-CPA security of the normal PKE. However, to run the distinguisher, we need to give $\mathsf{pke}.\keys{sk}$ to the distinguisher after it sends a valid certificate for deletion. The reduction has no way to give $\mathsf{pke}.\keys{sk}$ to the distinguisher since the reduction is trying to break the PKE scheme! 
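The wiring of the hybrid construction above can be sketched in Python. This is structure only: both building blocks are replaced by classical one-time-pad-style stand-ins (a toy "PKE" with $\keys{pk}=\keys{sk}$ and an OTP "SKE"), so it offers no security, let alone certified deletion; in the real scheme the SKE ciphertext is a quantum state, and $\algo{Del}$/$\algo{Vrfy}$ simply delegate to $\SKE.\algo{Del}$/$\SKE.\algo{Vrfy}$.

```python
# Structure-only sketch of the hybrid construction: a fresh one-time
# SKE key encrypts m, and the PKE encrypts that SKE key. The stand-ins
# below are classical toys with no security; Del/Vrfy (which delegate
# to the SKE component) are omitted since a classical SKE cannot offer
# a deletion guarantee.
import hashlib
import os

def stretch(k: bytes, n: int) -> bytes:
    return hashlib.shake_256(k).digest(n)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(s ^ t for s, t in zip(a, b))

# Toy "PKE" (pk = sk, stream-cipher style) and toy "SKE" (one-time pad).
def pke_keygen():
    k = os.urandom(16)
    return k, k

def pke_enc(pk, m): return xor(m, stretch(pk, len(m)))
def pke_dec(sk, c): return xor(c, stretch(sk, len(c)))

def ske_keygen(n): return os.urandom(n)
def ske_enc(k, m): return xor(k, m)
def ske_dec(k, c): return xor(k, c)

# Hybrid PKE with certified deletion: wiring of KeyGen / Enc / Dec.
def KeyGen():
    return pke_keygen()

def Enc(pk, m):
    ske_sk = ske_keygen(len(m))              # fresh one-time SKE key
    ct = (ske_enc(ske_sk, m),                # ske.CT (quantum in reality)
          pke_enc(pk, ske_sk))               # pke.CT encrypts ske.sk, not m
    return ske_sk, ct                        # vk := ske.sk

def Dec(sk, ct):
    ske_ct, pke_ct = ct
    return ske_dec(pke_dec(sk, pke_ct), ske_ct)

pk, sk = KeyGen()
vk, ct = Enc(pk, b"hello")
assert Dec(sk, ct) == b"hello"
```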
To solve this problem, we use RNC encryption (RNCE)~\cite{EC:JarLys00,TCC:CanHalKat05}. RNCE consists of algorithms $(\algo{KeyGen},\algo{Enc},\algo{Dec},\algo{Fake},\algo{Reveal})$. The key generation algorithm outputs not only a key pair $(\keys{pk},\keys{sk})$ but also auxiliary trapdoor information $\mathsf{aux}$. The fake ciphertext generation algorithm $\algo{Fake}(\keys{pk},\keys{sk},\mathsf{aux})$ can generate a fake ciphertext $\widetilde{\ct}$ that does not include information about a plaintext. The reveal algorithm $\algo{Reveal}(\keys{pk},\keys{sk},\mathsf{aux},\widetilde{\ct},m)$ can generate a fake secret key that decrypts $\widetilde{\ct}$ to $m$. The RNC security notion requires that $(\widetilde{\ct}=\algo{Fake}(\keys{pk},\keys{sk},\mathsf{aux}),\algo{Reveal}(\keys{pk},\keys{sk},\mathsf{aux},\widetilde{\ct},m))$ is computationally indistinguishable from $(\algo{Enc}(\keys{pk},m),\keys{sk})$. RNCE perfectly fits the scenario of certified deletion. We use an RNCE scheme $\mathsf{NCE}.(\algo{KeyGen},\algo{Enc},\algo{Dec},\algo{Fake},\algo{Reveal})$ instead of a normal PKE in the PKE with certified deletion scheme above. To erase $\mathsf{ske}.\keys{sk}$, we use the RNC security. We change $\mathsf{NCE}.\algo{Enc}(\mathsf{nce}.\keys{pk},\mathsf{ske}.\keys{sk})$ and $\mathsf{nce}.\keys{sk}$ into $\mathsf{nce}.\widetilde{\ct} =\mathsf{NCE}.\algo{Fake}(\mathsf{nce}.\keys{pk},\mathsf{nce}.\keys{sk},\mathsf{nce}.\mathsf{aux})$ and $\mathsf{NCE}.\algo{Reveal}(\mathsf{nce}.\keys{pk},\mathsf{nce}.\keys{sk},\mathsf{nce}.\mathsf{aux},\mathsf{nce}.\widetilde{\ct},\mathsf{ske}.\keys{sk})$, respectively. Thus, as long as $\mathsf{ske}.\keys{sk}$ is given after a valid certificate is generated, we can simulate the secret key of the PKE with certified deletion scheme. Using RNCE solves the problem above since the reduction obtains both a target ciphertext and a secret key (real or fake) in the RNC security game.
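The equivocation property of RNCE is cleanly illustrated by the one-time pad, a perfectly non-committing secret-key, one-time analogue (public-key RNCE requires the cited constructions): a fake ciphertext is a uniform string, and $\algo{Reveal}$ outputs the key that "explains" it as an encryption of any desired message.

```python
# One-time-pad illustration of the RNCE interface (secret-key, one-time
# analogue only): Fake outputs a uniform string, and Reveal equivocates
# it to any message m by outputting the key fake_ct XOR m. The fake
# (ct, sk) pair is distributed identically to an honest (enc(sk,m), sk).
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(s ^ t for s, t in zip(a, b))

def keygen(n):          return os.urandom(n)   # sk
def enc(sk, m):         return xor(sk, m)
def dec(sk, c):         return xor(sk, c)
def fake(n):            return os.urandom(n)   # fake ct: uniform
def reveal(fake_ct, m): return xor(fake_ct, m) # fake sk explaining m

m = b"late-chosen message"
fake_ct = fake(len(m))
fake_sk = reveal(fake_ct, m)
# The fake pair decrypts to m, chosen after the "ciphertext" was fixed.
assert dec(fake_sk, fake_ct) == m
```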
To complete the security proof, we use the certified deletion security of $\SKE$. Here, the point is that the reduction can simulate a secret key by $\algo{Reveal}$ since the reduction is given $\mathsf{ske}.\keys{sk}$ after a valid certificate is sent in the certified deletion security game. If we use secret key RNCE instead of public key RNCE, we can achieve reusable SKE with certified deletion via the design idea above. Secret/public key RNCE can be constructed from IND-CPA SKE/PKE, respectively~\cite{TCC:CanHalKat05,C:KNTY19}, and SKE with certified deletion exists unconditionally~\cite{TCC:BroIsl20}. Thus, we can achieve PKE (resp. reusable SKE) with certified deletion from IND-CPA PKE (resp. OWFs). Note that the RNCE technique above is the fundamental technique in this work.\footnote{Ananth and Kaleoglu concurrently and independently present essentially the same technique in the uncloneable encryption setting~\cite{EPRINT:AnaKal21}.} We use this technique both in the quantum communication case and in the classical communication case. \paragraph{Our idea for ABE.} We can extend the idea for PKE to the ABE setting. In this work, we focus on key-policy ABE, where a policy (resp. attribute) is embedded in a secret key (resp. ciphertext). The crucial tool is (receiver) non-committing ABE (NCABE), which we introduce in this work. Although the definition of NCABE is basically a natural extension of that of RNCE, we describe algorithms of NCABE for clarity. It helps readers who are not familiar with normal ABE. The first four algorithms below are algorithms of normal ABE. \begin{description} \item[$\algo{Setup}(1^\lambda)\rightarrow (\keys{pk},\keys{msk})$:] This is a setup algorithm that generates a public key and a master secret key. \item[$\algo{KeyGen}(\keys{msk},P)\rightarrow \keys{sk}_P$:] This is a key generation algorithm that generates a secret key for a policy $P$. 
\item[$\algo{Enc}(\keys{pk},X,m)\rightarrow \keys{CT}_X$:] This is an encryption algorithm that generates a ciphertext of $m$ under an attribute $X$. \item[$\algo{Dec}(\keys{sk}_P,\keys{CT}_X)\rightarrow m^\prime$ or $\bot$:] This is a decryption algorithm that decrypts $\keys{CT}_X$ if $P(X)=\top$. If $P(X)=\bot$, it outputs $\bot$. \item[$\algo{FakeSetup}(1^\lambda)\rightarrow (\keys{pk},\mathsf{aux})$:] This is a fake setup algorithm that generates a public key and trapdoor auxiliary information $\mathsf{aux}$. \item[$\algo{FakeCT}(\keys{pk},\mathsf{aux},X)\rightarrow \widetilde{\ct}_X$:] This is a fake ciphertext generation algorithm that generates a fake ciphertext $\widetilde{\ct}_X$ under an attribute $X$. \item[$\algo{FakeSK}(\keys{pk},\mathsf{aux},P)\rightarrow \widetilde{\keys{sk}}_P$:] This is a fake key generation algorithm that generates a fake secret key $\widetilde{\keys{sk}}_P$ for $P$. \item[$\algo{Reveal}(\keys{pk},\mathsf{aux},\widetilde{\ct},m)\rightarrow \widetilde{\keys{msk}}$:] This is a reveal algorithm that generates a fake master secret key $\widetilde{\keys{msk}}$. \end{description} Roughly speaking, the NCABE security notion requires that the fake public key, master secret key, ciphertext, and secret keys are computationally indistinguishable from the normal public key, master secret key, ciphertext, and secret keys. It is easy to see that the hybrid encryption approach works in the ABE setting as well. Thus, the goal is to achieve an NCABE scheme. Our NCABE construction follows the RNCE construction based on IND-CPA PKE~\cite{TCC:CanHalKat05,C:KNTY19}. However, the crucial difference between the PKE and ABE settings is that, in the ABE setting, adversaries are given many secret keys for queried policies (that is, we consider collusion-resistance). There is an obstacle to achieving collusion resistance because secret keys for policies depend on the master secret key.
Note that adversaries can send secret key queries \emph{both before and after} the target ciphertext is given. First, we explain the RNCE scheme from PKE. Although we explain the $1$-bit plaintext case, it is easy to extend to the multi-bit case. The idea is the simple double encryption technique by Naor and Yung~\cite{STOC:NaoYun90}, but we do not need non-interactive zero-knowledge (NIZK). We generate two key pairs $(\keys{pk}_0,\keys{sk}_0)$ and $(\keys{pk}_1,\keys{sk}_1)$ and set $\keys{pk}\coloneqq (\keys{pk}_0,\keys{pk}_1)$, $\keys{sk}\coloneqq \keys{sk}_z$, and $\mathsf{aux}=(\keys{sk}_0,\keys{sk}_1,z^\ast)$ where $z,z^\ast\leftarrow\zo{}$. A ciphertext consists of $\algo{Enc}(\keys{pk}_0,b)$ and $\algo{Enc}(\keys{pk}_1,b)$. We can decrypt the ciphertext by using $\keys{sk}_z$. A fake ciphertext $\widetilde{\ct}$ is $(\algo{Enc}(\keys{pk}_{z^\ast},0),\algo{Enc}(\keys{pk}_{1-z^\ast},1))$. To generate a fake secret key for a plaintext $m^\ast$, the reveal algorithm outputs $\keys{sk}_{z^\ast \oplus m^\ast}$. It is easy to see that decrypting $\widetilde{\ct}$ by $\keys{sk}_{z^\ast \oplus m^\ast}$ yields $m^\ast$. Our NCABE is based on the idea above. That is, we use two key pairs $(\keys{pk}_0,\keys{msk}_0)$ and $(\keys{pk}_1,\keys{msk}_1)$ of a normal ABE scheme $\ABE.(\algo{Setup},\algo{KeyGen},\algo{Enc},\algo{Dec})$, and a ciphertext consists of $(\ABE.\algo{Enc}(\keys{pk}_0,X,b),\ABE.\algo{Enc}(\keys{pk}_1,X,b))$ where $X$ is an attribute. Our reveal algorithm outputs $\keys{msk}_{z^\ast \oplus m^\ast}$ for a plaintext $m^\ast$ as in the PKE case. The problem is the secret key for a policy $P$. A naive idea is that a key generation algorithm outputs $\keys{sk}_P \leftarrow \ABE.\algo{KeyGen}(\keys{msk}_z,P)$ where $z \leftarrow \zo{}$ is chosen in the setup algorithm, and a fake key generation algorithm outputs $\widetilde{\keys{sk}}_P \leftarrow \ABE.\algo{KeyGen}(\keys{msk}_{z^\ast \oplus m^\ast},P)$.
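The $\algo{Fake}$/$\algo{Reveal}$ consistency of the double-encryption RNCE for PKE described above can be checked mechanically with a toy one-bit implementation. The ``PKE'' below is a one-bit pad with $\keys{pk}=\keys{sk}$ and offers no security whatsoever; it only exercises the bookkeeping with $z$, $z^\ast$, and $z^\ast \oplus m^\ast$.

```python
import random

def pke_keygen():
    sk = random.randint(0, 1)
    return sk, sk                     # toy: pk == sk, no security

def pke_enc(pk, b):  return b ^ pk
def pke_dec(sk, c):  return c ^ sk

def rnce_keygen():
    pk0, sk0 = pke_keygen()
    pk1, sk1 = pke_keygen()
    z, z_star = random.randint(0, 1), random.randint(0, 1)
    pk = (pk0, pk1)
    sk = (z, (sk0, sk1)[z])           # sk = sk_z, tagged with z
    aux = (sk0, sk1, z_star)
    return pk, sk, aux

def rnce_enc(pk, b):
    # double encryption: the same bit under both public keys
    return (pke_enc(pk[0], b), pke_enc(pk[1], b))

def rnce_dec(sk, ct):
    z, sk_z = sk
    return pke_dec(sk_z, ct[z])

def rnce_fake(pk, sk, aux):
    _, _, z_star = aux
    ct = [None, None]
    ct[z_star] = pke_enc(pk[z_star], 0)         # component z* encrypts 0
    ct[1 - z_star] = pke_enc(pk[1 - z_star], 1) # component 1-z* encrypts 1
    return tuple(ct)

def rnce_reveal(pk, sk, aux, fake_ct, m):
    sk0, sk1, z_star = aux
    j = z_star ^ m                    # fake secret key is sk_{z* xor m}
    return (j, (sk0, sk1)[j])
```

Decrypting the fake ciphertext with the revealed key $\keys{sk}_{z^\ast \oplus m^\ast}$ indeed yields $m^\ast$ for both choices of $m^\ast$, exactly as claimed in the text.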
However, this naive idea does not work, since $\widetilde{\keys{sk}}_P$ depends on $m^\ast$. Unless $\widetilde{\keys{sk}}_P$ is independent of $m^\ast$, we cannot use NCABE to achieve ABE with certified deletion because $\mathsf{ske}.\keys{sk}$ of SKE with certified deletion is sent \emph{after} a valid certificate is generated ($\mathsf{ske}.\keys{sk}$ would be a plaintext of ABE in the hybrid encryption). To make the fake key generation independent of $m^\ast$, we need to hide which master secret key is used to generate a secret key for $P$. If a secret key leaks information about which secret key (extracted from $\keys{msk}_0$ or $\keys{msk}_1$) is used, we cannot adaptively select a fake master secret key in the reveal algorithm. IO helps us to overcome this hurdle. Our idea is as follows. A key generation algorithm outputs an obfuscated circuit of a circuit $\mathsf{D}[\keys{sk}_z]$ that takes a ciphertext $(\mathsf{abe}.\keys{CT}_0,\mathsf{abe}.\keys{CT}_1)\coloneqq (\ABE.\algo{Enc}(\keys{pk}_0,X,b),\ABE.\algo{Enc}(\keys{pk}_1,X,b))$ and outputs $\ABE.\algo{Dec}(\keys{sk}_z,\mathsf{abe}.\keys{CT}_z)$ where $z\leftarrow \zo{}$ and $\keys{sk}_z \leftarrow \ABE.\algo{KeyGen}(\keys{msk}_z,P)$ is hard-coded in $\mathsf{D}$. A fake key generation algorithm outputs an obfuscated circuit of a circuit $\mathsf{D}_0[\keys{sk}_0]$ that takes $(\mathsf{abe}.\keys{CT}_0,\mathsf{abe}.\keys{CT}_1)$ and outputs $\ABE.\algo{Dec}(\keys{sk}_0,\mathsf{abe}.\keys{CT}_0)$ where $\keys{sk}_0 \leftarrow \ABE.\algo{KeyGen}(\keys{msk}_0,P)$ is hard-coded in $\mathsf{D}_0$. Note that the fake secret key cannot be used to decrypt a fake ciphertext $(\mathsf{abe}.\keys{CT}_{z^\ast},\mathsf{abe}.\keys{CT}_{1-z^\ast})\coloneqq (\ABE.\algo{Enc}(\keys{pk}_{z^\ast},X,0),\ABE.\algo{Enc}(\keys{pk}_{1-z^\ast},X,1))$ where $z^\ast \leftarrow \zo{}$ since $P(X)=\bot$ must hold by the requirement on ABE security.
Since the decryption circuits $\mathsf{D}$ and $\mathsf{D}_0$ are obfuscated, adversaries have no idea which secret key ($\keys{sk}_0$ or $\keys{sk}_1$) is used for decryption. This idea is inspired by the functional encryption (FE) scheme by Garg et al.~\cite{SICOMP:GGHRSW16}. The final issue is that adversaries can detect whether a secret key is real or fake if they use an invalid ciphertext $(\ABE.\algo{Enc}(\keys{pk}_0,X,b),\ABE.\algo{Enc}(\keys{pk}_1,X,1-b))$ as an input to the obfuscated circuits. To prevent this attack, we use statistically sound NIZK to check the consistency of double encryption, as in the FE scheme by Garg et al.~\cite{SICOMP:GGHRSW16}. By the statistical soundness of NIZK, we can guarantee that the obfuscated decryption circuit does not accept invalid ciphertexts, and $\mathsf{D}$ and $\mathsf{D}_0$ are functionally equivalent. Note that a secret key for policy $P$ outputs $\bot$ for the target ciphertext since the target attribute $X^\ast$ in the target ciphertext satisfies $P(X^\ast)=\bot$. Unlike the FE scheme by Garg et al., we do not need simulation-soundness, for the following reason. In the FE scheme, plain PKE schemes are used for the double encryption technique and a secret key $\keys{sk}_0$ or $\keys{sk}_1$ is hard-coded in a functional decryption key. Before we use PKE security under $\keys{pk}_b$, we need to switch decryption from $\keys{sk}_b$ to $\keys{sk}_{1-b}$ by IO security. During this phase, we need to use a simulated NIZK proof. Thus, simulation-soundness is required. However, in our ABE setting, a secret key for $P$ (not the master secret keys $\keys{msk}_0,\keys{msk}_1$) is hard-coded in $\mathsf{D}$ (or $\mathsf{D}_0$) above. Thanks to the ABE key oracle, $\keys{sk}_0$ and $\keys{sk}_1$ for $P$ are always available in reductions. We can first use IO security to switch from $\mathsf{D}$ to $\mathsf{D}_0$. After that, we change a real NIZK proof into a fake one.
Thus, our NCABE scheme does not need simulation-soundness. This observation enables us to achieve adaptive security rather than selective security, unlike the FE scheme by Garg et al.\footnote{In the initial version of this work~\cite{EPRINT:NisYam21}, we achieved only selective security because we used statistically simulation-sound NIZK as in the FE scheme by Garg et al.~\cite{SICOMP:GGHRSW16}. We improve on that result.} See~\cref{sec:NCABE_from_IO} for the details. Thus, we can achieve NCABE from IO and OWFs since adaptively secure standard ABE can be constructed from IO and OWFs.
\section{Introduction} In paper \cite{JAP} an approach to calculating the mechanical properties of levitation systems with melt-processed high temperature superconductors (MP HTS) was introduced. The approach starts from an `ideally hard superconductor' approximation, which assumes that the penetration depth $\delta$ of the alternating magnetic field is zero. Within this `zero' approximation the stiffness or resonance frequencies in the permanent magnet (PM)--MP HTS system can be calculated analytically and appear to be in good agreement with experiment \cite{Met}, but to calculate the energy loss due to PM motion \cite{MSEB} or the hysteresis of the levitation force \cite{APL2} the next (`first') approximation has to be used and finite values of $\delta$ have to be considered. In \cite{MSEB} we have shown that the energy loss $W$ in the PM--HTS system during PM oscillations is mostly determined by the AC loss in the HTS undersurface layer $\delta = (c/4\pi)h_r/J_\mathrm{c}$, where $h_r$ is the tangential component of the AC field at the HTS surface $S$ and $J_\mathrm{c}$ is the critical current density in the $ab$-plane for field parallel to this plane, and that for initial MP samples it can be subdivided into two parts: $W = \int_S dS \left(\alpha h_r^3 + \beta h_r^2 \right)$. The first part is the well-known bulk hysteretic loss within the critical state model, from which $J_\mathrm{c}$ \cite{MSEB} and even its profiles \cite{PhyC} can be determined. In this paper we consider the second part of $W$ and investigate the effect of surface treatment (polishing) on the $W(h)$ dependence. We discuss the possibility of detecting thermal activation through the surface barrier and introduce the idea of a dynamic surface barrier. \section{Experiment and Discussion} Fig.\ \ref{fig1} represents the experimental dependencies of the inverse $Q$-factor of PM forced oscillations at resonance frequency $\omega$ on the PM amplitude $A \propto h$.
$Q^{-1} = 2\pi W/W_0 \propto W/h^2$, where $W_0 \propto A^2$ is the stored energy. Symbols represent the data for the MP HTS sample with a polished top surface; the dotted line shows the $Q^{-1}(A)$ dependence for the depolished sample. To explain the presence of the part of $W$ which is $\propto A^2$, a motion of vortices perpendicular to the surface with an amplitude $s(A)$ was considered. In the axially symmetrical configuration ${\bf r} = (r,z)$, due to the small value of $\delta$, we can say that the normal-to-the-surface AC magnetic field component $b_z$ is determined by the $h_r(r)$ distribution: $b_z = (1/2\pi r)(\mathrm{d}\Delta\Phi_r/\mathrm{d}r)$, where $\Delta\Phi_r = (c r/4 J_\mathrm{c})h_r^2$ is the variation of the magnetic flux parallel to the surface. This is true for $b_z \ll b_r$, which in our case is reinforced by anisotropy: $J_\mathrm{c}({\bf B}\|c) \ll J_\mathrm{c}({\bf B}\|ab)$. The function $s(r,A)$ can be obtained from the equation \begin{eqnarray} rB_z(r) - (r-s) B_z(r-s)=-r b_z(r,A), \label{1} \end{eqnarray} \noindent where $B_z(r)$ is the distribution of the normal component of the `frozen' magnetic field. In this way we have shown that, for an HTS with uniform bulk properties, both parts of the AC loss are related to vortex motion in the HTS volume. By polishing the surface of the sample we introduce a surface barrier for flux entry, which causes a field jump $\Delta h (B_r)$ in the undersurface `vortex-free region' \cite{Clem1}. The influence of $\Delta h$ on $W$ can be taken into account by substituting $h_r - \Delta h$ for $h_r$ and adding the surface loss, as was done by Clem \cite{Clem2}. The dependence $Q^{-1}(A)$ obtained in this way is shown in Fig.\ \ref{fig1} as a dashed line.
So, we can deduce that the experimental data show a transition from the absence of a barrier at $A=0$ to a distinct barrier at $A>0$. The first reason which comes to mind here is thermally activated flux penetration over the barrier \cite{Burl}, but it seems to be impossible to describe the experimentally observed barrier disappearance at small $A$ using the relations from \cite{Burl}. This leads us to suppose another possible mechanism suppressing the surface effect at low amplitudes. It is quite natural to expect that the barrier leads to $\Delta h$ not over the whole surface but only over a fraction $\varepsilon$ of it. The reason for this is nonuniform flux penetration: when a part of a vortex or vortex bundle has penetrated into the HTS, further penetration can take place without surmounting the barrier, through vortex propagation along the surface. Fixing the vortex velocity, the fraction of vortices that penetrate through the barrier can be calculated by minimizing the energy loss. We have found $\varepsilon = (1 + \zeta/h)^\mu$, where $\mu = -1/2$ and $\zeta \approx 25$ Oe. Then the AC loss is \begin{eqnarray} W(h)=\varepsilon W(h,\Delta h) + (1-\varepsilon)W(h,0). \label{2} \end{eqnarray} \noindent The dependence (\ref{2}) is represented in Fig.\ \ref{fig1} by the solid line. The dashed and dotted lines represent the dependencies $W(h)$ for $\varepsilon = 0$ and 1, respectively. \begin{figure}[t] \begin{center}\leavevmode \includegraphics[width=1\linewidth]{fig2.eps} \caption{ Inverse $Q$-factor vs. PM amplitude: experimental and calculated data. }\label{fig1}\end{center}\end{figure}
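As an aside, Eq.~(\ref{1}) can be solved pointwise for $s(r,A)$ by a simple bisection. The profiles $B_z$ and $b_z$ below are hypothetical stand-ins, chosen only so that the equation has a unique root; they are not taken from the measurements reported here.

```python
# Hypothetical model profiles (illustration only; arbitrary units).
def Bz(u):
    """Stand-in for the 'frozen' normal field distribution."""
    return 1.0 + u / 10.0

def bz(r, A):
    """Stand-in for the normal AC field component at amplitude A."""
    return -0.05 * A

def solve_s(r, A, s_lo=-2.0, s_hi=2.0, tol=1e-12):
    """Bisection solve of Eq. (1): r Bz(r) - (r-s) Bz(r-s) = -r bz(r, A)."""
    def g(s):
        return r * Bz(r) - (r - s) * Bz(r - s) + r * bz(r, A)
    lo, hi = s_lo, s_hi
    assert g(lo) * g(hi) < 0, "bracket must straddle the root"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```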
\section{Introduction}\label{sec:intro} This paper proposes a microscopic (agent-based) model for describing the dynamics of pedestrians in a very special situation characterized by a moving crowd whose members gradually reverse their direction of motion, thus colliding with the newly incoming people still moving in the original direction. This situation creates a nonstandard self-regulating counter-flow with rather complex dynamics, which can be investigated by suitably tuning the parameters of the model. The final goal is to predict (and prevent) the formation of dangerous congestion in real mass events. \medskip \emph{Relevant literature.} Modelling the behavior of people in a crowd is a difficult task, since it requires identifying the most important behavioral rules, which greatly vary from person to person. For that reason, the study of crowds can be regarded as a multidisciplinary area, which has for many years attracted the interest of mathematicians, physicists, engineers, and psychologists. Crowd modelling has a long-standing tradition, starting from the pioneering papers by Hirai and Tarui \cite{hirai1975}, Okazaki \cite{okazaki1979TAIJa}, and Henderson \cite{henderson1974} in the '70s. Since then, all types of models have been proposed, spanning from microscale to macroscale, including multiscale ones, both differential and nondifferential (e.g., cellular automata). Models can be first-order (i.e.\ velocity based) or second-order (i.e.\ acceleration based), with local or nonlocal interactions, with metric or topological interactions, with or without contact-avoidance features. The presence of social groups can also be taken into account.
A number of review papers \cite{aghamohammadi2020, bellomo2011, dong2020, duives2013, eftimie2018, haghani2020, martinez2017, papadimitriou2009}, and books \cite{cristiani2014book, rosini2013book, kachroo2008book, maurybook2019} are now available; we refer the interested reader to these references for an introduction to the field. It is also useful to mention that models for pedestrians often stem from those developed in the context of vehicular traffic \cite{helbing2001, rosini2013book}. Moreover, there is a close connection between pedestrian modeling and control theory, including mean-field games, see, e.g., \cite{albi2020, cristiani2021pp, cristiani2014book} and references therein. In this paper we deal specifically with the \emph{counter-flow} dynamics: such dynamics occur when two groups of pedestrians move in opposite directions, therefore each group has to find a way to pass through the other. The importance of accurate modeling and simulation of counter-flow dynamics is supported by evidence from crowd disaster analysis \cite{helbing2012crowd}. In recent years, major accidents have often occurred due to overcrowding and counter-flow phenomena taking place inside event areas, or in the proximity of the entrance and exit points. For all these reasons, the literature on counter-flow is quite rich; see, among others, \cite{helbing1995social, hoogendoorn2003simulation, heliovaara2012counterflow} in the context of microscopic differential models, \cite{cristiani2011MMS} for a multiscale differential model, and \cite{weng2006cellular, nowak2012quantitative} in the context of cellular automata. Moreover, it is now well established that counter-flow dynamics lead to the so-called \emph{lane formation}: in order to avoid collision, pedestrians arrange in alternate lanes ($\leftrightarrows$) having the same walking direction.
The behavior displayed by numerical simulations is in good agreement with observations of real people in both artificial and natural environments \cite{Kretz2006experimental, hoogendoorn2003extracting, helbing2001self, murakami2021mutual}. Finally, let us discuss the impact of \emph{social groups} on the dynamics of crowds: although most models assume that each pedestrian moves in a crowd on its own, real crowds are typically formed by small subgroups, such as friends or families. The impact of social groups on crowd dynamics has been explored since the '70s in a number of papers \cite{aveni1977not, moussaid2010walking, von2017empirical, singh2009modelling}, showing that the presence of groups is far from negligible. In particular, in the context of bi-directional counter-flow dynamics, theoretical and experimental observations suggest that the presence of groups slows down the formation of lanes, which are more fragmented \cite{zanlungo2020effect, crociani2017micro}. \medskip \emph{Paper contribution.} In this paper, we consider a specific scenario related to the counter-flow dynamics: a crowd (with social groups) in a corridor, initially moving in one direction towards an open gate, is at some moment no longer able to proceed because of the closure of the gate. After that, people have to decide whether to stop \& wait for a possible re-opening, or to move back. This is the case, e.g., of an inflow of people towards an area dedicated to a mass event, which is interrupted by the organizers when the area capacity has been reached. From the modelling point of view, the main novelty is that the decision to stay or to move back is taken \emph{dynamically} by each group, on the basis of the behaviour of the surrounding groups. More precisely, we assume that the decision is taken whenever \emph{the group leader is no longer able to move forward for a certain time}.
This can happen either because the leader has reached the gate, or has reached the other people queuing in front of the gate, or is hit by the people going back and blocking the way. The overall dynamics is also complicated by the fact that people who decide to stay do not want to be overtaken by the other people approaching the gate, since they do not want to lose their priority in the queue. On the other hand, staying people want to facilitate the passage of reversing people, because the latter leave free space for the former. From the mathematical point of view, we propose a microscopic differential model inspired by the well-known Helbing's Social Force Model \cite{helbing1995social}, based on a large system of ODEs. We introduce several variants with respect to the original model: the two most important of them are that a) we consider a first-order (velocity-based) model and b) we consider topological interactions, taking into account the first neighbour only. The latter choice greatly speeds up the numerical code. The final goal of this research, which is especially dedicated to practitioners who are involved in the organization of real mass events, is the prevention of critical situations which could arise in the reversing-flow scenario under consideration. Moreover, since digital twins are commonly used to support the safety plan development, highlighting critical aspects that need to be solved before and during the event (cf.\ \cite{scozzari2018modeling}), we also propose a crowd control strategy based on the optimal placing of signals/stewards, aiming at informing people in due time about the gate status (open or close). \medskip \emph{Paper organization.} The paper is organized as follows. In Section \ref{sec:model} we present the model for the crowd dynamics. In Section \ref{sec:casestudy} we present our case study: we describe the geometry and the mechanism underlying the behavioural choices of pedestrians.
In Section \ref{sec:tests} we discuss the results obtained from numerical simulations. We end the paper with some conclusions and future perspectives. \section{The model}\label{sec:model} \subsection{General principles} All force models share the same structure as Newtonian dynamics, namely \begin{equation}\label{structure} \left\{ \begin{array}{l} \dot{\textbf{X}}_k(t)= \textbf{V}_k(t) \\ [2mm] \dot{\textbf{V}}_k(t)= \textbf{F}_k(t,\textbf{X},\textbf{V}) \end{array}, \qquad k=1,\ldots,N \right. \end{equation} where $N$ is the total number of agents, $\textbf{X}_k(t),\textbf{V}_k(t)\in \mathbb{R}^{2}$ denote the position and velocity of agent $k$ at time $t$, respectively, and $\textbf{X}=\left(\textbf{X}_1,\ldots,\textbf{X}_N\right)$, $\textbf{V}=\left(\textbf{V}_1,\ldots,\textbf{V}_N\right)$. The function $\mathbf{F}_k$, the so-called \emph{social force}, models the total force exerted on agent $k$, and gathers all the physical, psychological and behavioral aspects of pedestrian dynamics. The social force is not a real force, but rather an empirical mathematical tool which translates all these aspects into formulas. In its minimal form, it takes into account the following three contributions: \begin{itemize} \item[i)] An individual desired velocity term: the velocity that a single pedestrian would keep if it were alone in the domain. \item[ii)] A repulsion term: pedestrians tend to maintain a certain distance with respect to other members of the crowd, obstacles and walls present in the environment, in order to avoid collisions. \item[iii)] An attraction term: pedestrians who are not moving on their own tend to stay close to the members of their social group (friends, family members). \end{itemize} Social force models can be enhanced by adding random fluctuations due to unpredictable behavioral variations or further small-scale interactions. In this regard, it can be useful to note that the (generally undesired) numerical instabilities often play the same role.
\medskip \emph{First-order models.} In pedestrian dynamics, much more than in vehicular dynamics, accelerations are almost instantaneous (at least compared with a typical reference time scale), so inertia-based effects are negligible. Therefore, first-order models of the form \begin{equation}\label{structure-firstorder} \dot\textbf{X}_k(t)=\textbf{V}_k(t,\textbf{X}),\qquad k=1,\ldots,N \end{equation} are also suitable; see, e.g., \cite{cristiani2011MMS}. We think that such models are easier to calibrate and computationally less expensive; for these reasons, in this paper we adopt a model of this kind. This means that all the aspects of the dynamics which were encapsulated in the force $\textbf{F}$ are now inserted directly in the velocity vector $\textbf{V}$. \medskip \emph{Social groups.} In order to account for the presence of social groups in the crowd, such as families or friends, we assume that each pedestrian is part of a group. Each group has at least two members; we do not consider the presence of lonely people. Moreover, each group has a leader, who never changes in the time frame of the simulation. The leader of the group takes decisions about the common target of the whole group. Groups tend to stay together, but they can temporarily break up and then reunite. The leader does not necessarily walk in front of the group because all group members know the destination (decided by the leader) and are able to reach it independently. \medskip \emph{Groups status.} We assume that at any given time, each group (identified with its leader) has a unique \emph{status} which corresponds to its target and, more generally, to its behaviour. The four possible statuses will be detailed later on in Sect.\ \ref{sec:behavior}.
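The explicit time stepping of the first-order model \eqref{structure-firstorder} can be sketched as follows; the velocity law used here is a uniform rightward placeholder, while the actual interaction terms are specified below.

```python
import numpy as np

def step(X, t, V, dt):
    """One explicit Euler step of the first-order model X' = V(t, X)."""
    return X + dt * V(t, X)

def simulate(X0, V, dt, n_steps):
    """Integrate the positions X (an N x 2 array) over n_steps Euler steps."""
    X, t = X0.copy(), 0.0
    for _ in range(n_steps):
        X = step(X, t, V, dt)
        t += dt
    return X
```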
\medskip \emph{Topological interactions.} We consider topological, rather than metric, interactions, meaning that each agent $k$ interacts with a fixed number of agents at the same time, regardless of their distance from agent $k$. More precisely, we assume that each leader interacts with the first neighbour outside its social group only, while followers (i.e.\ not leaders) interact with the first neighbour inside their social group, the first neighbour outside, and their leader. As in molecular dynamics, the fact that a particle interacts with a few other particles at a time does not mean that the interactions are limited to them. The first neighbour changes continuously and after a few time iterations all the agents reasonably close to each agent have interacted with it. While the standard social force model assumes simultaneous interactions with neighbours within a certain threshold distance, we prefer to consider fewer interactions at a time, recovering the same results, on average, over a longer, but still short, time period. This choice stabilizes the dynamics and is convenient from the computational point of view. \subsection{Mathematical details} As we said, we consider a first-order social force model with social groups and topological interactions. Agents have no dimension and we do not consider contact-avoidance features. Let us start with the dynamics of \emph{group leaders}.
We set \begin{equation}\label{leaderdynamics} \textbf{V}_k(\textbf{X};\S)= \textbf{V}^d(\S_k)+ \textbf{V}^{R}(\textbf{X}_k,\textbf{X}_{k^{**}};\S_k,\S_{k^{**}})+ \textbf{V}^o(\textbf{X}_k) \end{equation} where $\textbf{V}^d$ is the desired velocity, which only depends on the status $\S_k$ of the group the agent $k$ belongs to; $\textbf{V}^{R}$ accounts for the repulsion from strangers, and depends on the position $\textbf{X}_k$ of the agent $k$ itself, the position $\textbf{X}_{k^{**}}$ of the agent $k^{**}$, defined as the nearest neighbour of $k$ \emph{outside} its social group, and the statuses of the two agents; $\textbf{V}^o$ is the repulsion from obstacles. Clearly we have $\S:=(\S_1,\ldots,\S_N)$. The dynamics of \emph{followers} is instead \begin{equation}\label{followerdynamics} \textbf{V}_k(\textbf{X};\S)= \textbf{V}^d(\S_k)+ \textbf{V}^{r}(\textbf{X}_k,\textbf{X}_{k^*})+ \textbf{V}^{R}(\textbf{X}_k,\textbf{X}_{k^{**}};\S_k,\S_{k^{**}})+ \textbf{V}^a(\textbf{X}_k,\textbf{X}_{k^L})+ \textbf{V}^o(\textbf{X}_k) \end{equation} where $\textbf{V}^{r}$ accounts for the repulsion from group members, and depends on the position $\textbf{X}_k$ of the agent $k$ itself, the position $\textbf{X}_{k^*}$ of the agent $k^*$, defined as the nearest neighbour of $k$ \emph{inside} its social group; $\textbf{V}^a$ accounts for the attraction towards the group's leader, whose index is denoted by $k^L$. Note that group leaders are not attracted by group mates, the cohesion of the group being totally left to followers. Moreover, leaders are not repulsed by group mates. This avoids artifacts in the dynamics and self-propulsion of the group. The reason why we distinguish internal and external repulsion is that pedestrians tend to stay close to group mates, while keeping a larger distance from strangers.
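A minimal sketch of the topological neighbour selection and of the velocity assembly \eqref{leaderdynamics}--\eqref{followerdynamics}, assuming the inverse-distance repulsion and linear leader attraction used in this section (the constants below are illustrative, not calibrated values, and the obstacle term $\textbf{V}^o$ is omitted):

```python
import numpy as np

C_r, C_a, DIST2_MIN = 0.4, 0.3, 0.05   # illustrative constants

def nearest(k, X, candidates):
    """Index of the candidate topologically nearest to agent k."""
    d2 = [np.sum((X[j] - X[k]) ** 2) for j in candidates]
    return candidates[int(np.argmin(d2))]

def repulsion(xk, xj, C):
    """Inverse-distance repulsion, with a threshold on 1/||.||^2."""
    diff = xj - xk
    d2 = max(np.dot(diff, diff), DIST2_MIN)
    return -C * diff / d2

def leader_velocity(k, X, strangers, v_des, C_R):
    k_2star = nearest(k, X, strangers)          # first neighbour outside the group
    return v_des + repulsion(X[k], X[k_2star], C_R)

def follower_velocity(k, X, group, leader, strangers, v_des, C_R):
    k_star = nearest(k, X, [j for j in group if j != k])   # inside the group
    k_2star = nearest(k, X, strangers)                     # outside the group
    return (v_des
            + repulsion(X[k], X[k_star], C_r)
            + repulsion(X[k], X[k_2star], C_R)
            + C_a * (X[leader] - X[k]))         # attraction towards the leader
```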
\medskip The repulsion is defined in order to be inversely proportional to the distance between the agents, \begin{equation} \textbf{V}^{r}= -C^{r}\frac{\textbf{X}_{k^*} - \textbf{X}_k}{\left\| \textbf{X}_{k^*} - \textbf{X}_k \right\|^2}, \qquad \textbf{V}^{R}= -C^{R}(\S_k,\S_{k^{**}})\frac{\textbf{X}_{k^{**}} - \textbf{X}_k}{\left\| \textbf{X}_{k^{**}} - \textbf{X}_k \right\|^2}, \end{equation} where the parameter $C^{r}>0$ is constant, whereas $C^{R}(\S_k,\S_{k^{**}})>0$ depends on the behavioral statuses of the two interacting agents; see Sect.\ \ref{sec:behavior}. To avoid numerical issues in case two agents get temporarily too close to each other, a threshold is imposed on the value $1/\|\cdot\|^2$. The follower-leader attraction, instead, is proportional to the distance between the two agents, \begin{equation} \textbf{V}^a= C^a (\textbf{X}_{k^L} - \textbf{X}_k), \end{equation} where the parameter $C^{a}>0$ is constant. \section{Description of the case study} \label{sec:casestudy} \subsection{Geometry} We model the access road to the area of a mass event as a two-dimensional corridor of length $L$ and height $H$. People initially move along the corridor from left to right and enter the area of the event through a gate, placed at the end of the corridor; see Fig.\ \ref{fig:geometry}. At a certain time ($t=0$ in the simulation) the gate closes for safety reasons so that people are forced to change target (staying or going back). In the following numerical simulations we focus on the behavior of the crowd after the gate closure, which involves a risk of high densities. In particular, we measure the pedestrian density along the corridor by means of four square regions of side $R$, equally spaced along the corridor. Regions are numbered from left to right; see Fig.\ \ref{fig:geometry}. \begin{figure}[h!] \centering \includegraphics[width=12cm, height=2.5cm]{figure/corridor_regions_name} \caption{Geometry of the simulated domain.
The gate is placed on the far right. Pedestrian density evolution is monitored within the four square regions placed along the corridor.} \label{fig:geometry} \end{figure} \subsection{Behavior}\label{sec:behavior} To complete the description of the model we need to specify how the groups take decisions; in particular, how and when they decide to move towards the gate, stop, or go back. We define four statuses $\S_k\in\{1,2,3,4\}$ at the group level (i.e.\ all members of a group share the same status, decided by the group leader), as follows: \begin{itemize} \item $\S_k=1$: move rightward towards the gate; \item $\S_k=2$: doubt phase, decision in progress; \item $\S_k=3$: decision taken, go back moving leftward; \item $\S_k=4$: decision taken, queuing in the corridor. \end{itemize} The group status affects the desired velocity and the interactions with the others. Status changes necessarily happen in this order: $1 \rightarrow 2 \rightarrow \{3,4\}$, and once the group is in status 3 or 4 it does not change further. The change 1 $\rightarrow$ 2 occurs if \begin{equation} t>\delta t \quad \text{ and } \quad X^1_k(t)-X^1_k(t-\delta t)\leq\overline{\delta\ell}, \end{equation} where $\delta t$, $\overline{\delta \ell}>0$ are two additional parameters and $X^1_k$ is the horizontal component of the position of the agent $k$. This means that the group leader was not able to move forward for more than $\overline{\delta\ell}$ in a period of time of length $\delta t$. In practice, $\delta t$ represents a degree of willingness to get to the gate. Status 2 has a fixed duration $D$. After $D$ time units, the change 2 $\rightarrow$ $\{3,4\}$ occurs randomly in such a way that, on average, $p$\% of the groups fall into status 3 and $(100-p)$\% into status 4. \section{Numerical tests}\label{sec:tests} \subsection{Parameters} Let us begin by setting the values of the parameters.
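Before listing the parameter values, the status-transition rules described above admit a compact sketch (a minimal Python version for a single group leader; the bookkeeping of \texttt{time\_in\_doubt} and the NumPy generator \texttt{rng} are left to the caller):

```python
import numpy as np

def next_status(status, x_now, x_past, t, time_in_doubt,
                delta_t, dl_bar, D, p, rng):
    """Status chain 1 -> 2 -> {3, 4} for a group leader.
    x_now, x_past: horizontal positions at times t and t - delta_t."""
    if status == 1 and t > delta_t and x_now - x_past <= dl_bar:
        return 2                                   # blocked: enter doubt phase
    if status == 2 and time_in_doubt >= D:
        # on average p% of the groups go back, (100 - p)% stay and queue
        return 3 if rng.random() < p / 100.0 else 4
    return status                                  # statuses 3 and 4 are final
```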
Parameters are primarily chosen in order to reproduce realistic (observed) behaviour in situations of equilibrium. In particular, we match observed crowds in the initial phase, when all people are in status 1 and are walking rightward normally, and in the final phase, when only people in status 4 are present and waiting. In these two phases the forces which rule the interpersonal distances are in equilibrium, and the relative positions of the pedestrians do not vary (apart from small, negligible oscillations). In the numerical tests we discuss the effect of five parameters, namely $N$, $\delta t$, $p$, $D$, $H$. We also investigate the effect of the desired velocity of pedestrians with status 4. The model \eqref{structure-firstorder} is numerically solved by means of the explicit Euler scheme, with time step $\Delta t$. \medskip \emph{Fixed parameters.} $\Delta t=0.01$ s, $\overline{\delta\ell}=1.5$ m, $L=130$ m, $R=10$ m, $\textbf{V}^d(1)=(1,0)$ m/s, $\textbf{V}^d(2)=(0,0)$ m/s, $\textbf{V}^d(3)=(-1.2,0)$ m/s. The 16 possible values of $C^R$ are summarized in the following matrix, where the entry $ij$ is the intensity of the force exerted by an agent in status $j$ on an agent in status $i$: $$ \left[ \begin{array}{cccc} 2.0 & 2.5 & 1.0 & 2.5 \\ 2.0 & 2.0 & 2.0 & 2.0 \\ 0.75 & 0.75 & 0.75 & 0.75 \\ 2.0 & 2.0 & 0.75 & 4.5 \end{array} \right] $$ Some comments on the repulsion forces are in order: \begin{enumerate} \item People in status 3 (coming back) are only weakly repelled by all the others. In fact, they accept being close to other people, even strangers, because the contact is expected to be of short duration. \item People in status 4 (staying) are strongly repelled by people of the same status because they want to reach a large comfort distance. \item People in status 4 (staying) are also repelled by people in status 1 (moving rightward) because they want to keep their priority in the queue and keep the newly arriving people behind them.
In other words, they do not leave room for others to move rightward. Conversely, they are only weakly repelled by people in status 3 (coming back), because waiting people benefit from their departure: people going back leave free space and allow waiting people to get even closer to the gate. All these features have an important consequence: staying people are compressed towards the gate, and the closer they are to the gate, the more compressed they are. \end{enumerate} \emph{Variable parameters.} $N$ = 400 or 800 or 1200, $\delta t$ = 7 or 20 s, $p$ = 25 or 75, $D$ = 10 or 40 s, $\textbf{V}^d(4)$=$(0.5,0)$ or (0,0) m/s, $H$ = 7 or 10 or 13 m. \medskip In all simulations the initial density of pedestrians is 0.8 p/m$^2$ (the larger $N$, the larger the area initially occupied). The initial positions are randomly chosen. The number of members of each group is uniformly random in the set $\{2,3,4,5,6\}$. \subsection{Numerical results} Fig.\ \ref{fig:screenshot} shows some screenshots of a reference simulation with 1200 agents: \begin{figure}[t!] \centering {\footnotesize (a)}\subfigure{\includegraphics[width=13.3cm, height=1.9cm]{figure/dinamica_6s.png}}\\ {\footnotesize (b)}\subfigure{\includegraphics[width=13.3cm, height=1.9cm]{figure/dinamica_35s.png}}\\ {\footnotesize (c)}\subfigure{\includegraphics[width=13.3cm, height=1.9cm]{figure/dinamica_92s.png}}\\ {\footnotesize (d)}\subfigure{\includegraphics[width=13.3cm, height=1.9cm]{figure/dinamica_150s.png}}\\ {\footnotesize (e)}\subfigure{\includegraphics[width=13.3cm, height=1.9cm]{figure/dinamica_240s.png}} \caption{Five screenshots of a simulation with $N=1200$ pedestrians, with $\delta t$ = 7 s, $p$=25, $D=10$ s, $\textbf{V}^d(4)$=$(0.5,0)$ m/s, $H$ = 10 m. Status 1 is blue, status 2 is magenta, status 3 is red, and status 4 is black. Circled agents are the leaders of their groups.} \label{fig:screenshot} \end{figure} (a) At the beginning all agents are walking rightward.
As we said before, the initial density, repulsion forces and desired velocity are compatible with each other, in the sense that people can move forward at constant velocity, nobody slows down due to excessive proximity to their neighbors, and no queue forms. As soon as the first pedestrians reach the closed gate the situation changes: people near the gate stop and the people behind start slowing down. (b) A queue begins to form and, gradually, the people standing still take a decision about their new status. At this point all four statuses are present at the same time and the dynamics becomes quite complex. Some people in status 1 (moving rightward, blue) are able to overtake people in status 4 (staying, black) before changing status themselves. (c) Lanes are formed as in more standard counter-flow dynamics (cf.\ Sect.\ \ref{sec:intro}), although they are quite perturbed by the presence of the social groups. Social groups sometimes disperse for a while but then reunite. (d) Gradually, people in status 3 (coming back, red) succeed in passing through the staying crowd and leave the corridor. While they move leftward, people in status 4 (staying, black) can move a bit backward to leave them free space. (e) Finally, only people in status 4 (staying, black) are present in the corridor. They reach an equilibrium characterized by a nonconstant density, due to the natural tendency to stay close to the gate. \medskip Fig.\ \ref{fig:densities_new} shows instead the evolution of the density in the four regions of interest, varying $N$. \begin{figure}[t!]
\centering \subfigure[][$N_p=400$]{\includegraphics[width=4.2cm, height=4.2cm]{figure/dubbio10_Np400}} \hspace{0 mm} \subfigure[][$N_p=800$]{\includegraphics[width=4.2cm, height=4.2cm]{figure/dubbio10_Np800}} \hspace{0 mm} \subfigure[][$N_p=1200$]{\includegraphics[width=4.2cm, height=4.2cm]{figure/dubbio10_Np1200}} \caption{Evolution of the density in the four regions of interest, for $N_p=400, 800, 1200$ and other parameters as in Fig.\ \ref{fig:screenshot}.} \label{fig:densities_new} \end{figure} The comparison of the plots brings to light some facts: \begin{enumerate} \item The most crowded region is the fourth one, the region closest to the gate. \item In all cases a density peak is clearly visible in regions 3-2-1, which decreases and shifts forward in time as the region moves away from the gate (see, e.g., Fig.\ \ref{fig:densities_new}(c)). This moving peak is due to the reversing people encountering the newly arriving people. \item As $N$ increases, the average densities increase in all regions, but less than one could expect. We think that the reason for this stability is that \emph{the crowd is able to self-regulate}: the more people in the corridor, the earlier (in both space and time) people come to a decision about what to do, and, if they decide to leave, they do so before being trapped by the others. We observe a sort of compensation phenomenon: the more people there are, the sooner they leave, leading overall to small differences in the density evolution. \end{enumerate} Now we consider the numerical setting of Fig.\ \ref{fig:densities_new}(c) as the \emph{reference case} for investigating the role of the variable parameters of the model. The results are shown in Fig.\ \ref{fig:densities_aggiunta}. \begin{figure}[t!]
\centering \subfigure[][$H=7$ m]{\includegraphics[width=4.2cm, height=4.2cm]{figure/H7}} \hspace{0 mm} \subfigure[][$H=13$ m]{\includegraphics[width=4.2cm, height=4.2cm]{figure/H13}} \hspace{0 mm} \subfigure[][$D=40$ s]{\includegraphics[width=4.2cm, height=4.2cm]{figure/dubbio40}}\\ \subfigure[][$\delta t=20$ s]{\includegraphics[width=4.2cm, height=4.2cm]{figure/dt_20}} \hspace{0 mm} \subfigure[][$p=75$]{\includegraphics[width=4.2cm, height=4.2cm]{figure/Pinversa}} \hspace{0 mm} \subfigure[][$\textbf{V}^d(4)=(0,0)$]{\includegraphics[width=4.2cm, height=4.2cm]{figure/V4_nulla}} \caption{Same simulation as in Fig.\ \ref{fig:densities_new}(c) with (a) $H$ decreased from 10 to 7 m, (b) $H$ increased from 10 to 13 m, (c) $D$ increased from 10 to 40 s, (d) $\delta t$ increased from 7 to 20 s, (e) $p$ increased from 25 to 75, (f) $\textbf{V}^d(4)$ decreased from $(0.5,0)$ to $(0,0)$ m/s.} \label{fig:densities_aggiunta} \end{figure} Figs.\ \ref{fig:densities_aggiunta}(a,b) show the results obtained by modifying the height $H$ of the corridor. In both cases we keep the initial density of people at 0.8 p/m$^2$. As expected, we observe the largest average densities in the case of the narrowest corridor. The difference is remarkable in the regions farthest from the gate. Fig.\ \ref{fig:densities_aggiunta}(c) shows the results obtained for an increased duration of the doubt phase, namely $D=40$ s. We do not observe a relevant increase of the maximum values of the densities, but the peaks are further delayed. Fig.\ \ref{fig:densities_aggiunta}(d) shows the results obtained for an increased value of the time needed to decide whether to enter the doubt phase or to keep going to the gate, namely $\delta t=20$ s. People are now more reluctant to renounce reaching the gate and therefore necessarily spend more time in the corridor. Here the density peak in region 4 is much greater than before and reaches the dangerous level of 5 p/m$^2$ \cite{helbing2012crowd}.
This is mainly due to the longer time spent there by people in statuses 1--4 all together. Overall, the parameter $\delta t$ is the most effective in increasing the density. Fig.\ \ref{fig:densities_aggiunta}(e) shows the effect of increasing $p$ from 25 to 75. In this case 75\% of the people decide to leave the corridor and only 25\% decide to stay. We see that at the beginning the densities in all regions are larger than those in the reference solution, because more people are trying to go back and form a big group which moves very slowly. Conversely, at the final time all regions are almost empty and have similar densities. Fig.\ \ref{fig:densities_aggiunta}(f) shows the effect of setting the desired velocity of people with status 4 to zero. Although those people are just waiting in the corridor, a constant rightward desired velocity is needed to reproduce the will to stay close to the gate and not to be overtaken by newly incoming people in status 1. If $\textbf{V}^d(4)=(0,0)$ the dynamics change completely: people in status 1 continuously take the place of waiting people, and waiting people move back. At the final time, the result is an almost constant density all along the corridor. \subsection{Congestion control via Control \& Information Points} As final tests, we try to lower the congestion in the four regions by adding some Control \& Information Points (CIPs) along the corridor. The idea is that people walking in the corridor are informed about the closure of the gate by some signals or stewards. In this way, people can enter the doubt phase much earlier (in both space and time) than they would without any knowledge of the gate status. More precisely, we assume that anyone in status 1 moves immediately to status 2 as soon as it crosses a CIP. The question then arises of when and where it is most convenient to locate CIPs.
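The CIP rule just stated admits a one-line sketch. In this hypothetical fragment each CIP is a pair (abscissa, activation time), and "crossing" is detected simply as being at or past the CIP abscissa after its activation:

```python
def apply_cip(status, x, t, cips):
    """An agent in status 1 switches to status 2 (doubt phase) as soon as it
    crosses an active Control & Information Point.
    cips: list of (x_cip, t_activation) pairs."""
    if status == 1 and any(t >= t_on and x >= x_cip for x_cip, t_on in cips):
        return 2
    return status
```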
Starting from the setting shown in Fig.\ \ref{fig:densities_aggiunta}(d), which is the most critical among the investigated ones, we have run a brute-force optimization procedure to test the effect of the presence of a single CIP; we have found that the best option is positioning the CIP in region 3, activated 20 s after the gate closes. The result is shown in Fig.\ \ref{fig:signals}(a). \begin{figure}[h!] \centering \subfigure[][Single CIP in region 3]{\includegraphics[width=4.2cm, height=4.2cm]{figure/X100_N20}} \hspace{25 mm} \subfigure[][Two CIPs in regions 3 and 1]{\includegraphics[width=4.2cm, height=4.2cm]{figure/two_signals}} \caption{Density evolution in the four regions of interest. 20 s after the gate closed, pedestrians are informed by (a) one CIP located in region 3, (b) two CIPs, simultaneously activated, located in regions 3 and 1.} \label{fig:signals} \end{figure} The effect of the CIP is clear, since the maximal density decreases from $\sim$5 to $\sim$3 p/m$^2$, and the density evolution of region 3 becomes very similar to that of region 4. Fig.\ \ref{fig:signals}(b) shows the result after adding a second CIP in region 1, again activated 20 s after the gate closes. The maximal densities are not further decreased, but the densities fluctuate less in all regions. This can be an advantage in terms of pedestrian safety. \section{Conclusions and future perspectives} We have conducted a numerical investigation of a particular situation characterized by a complex self-interaction of a crowd in a corridor. Although the model has a large number of parameters, we have seen that it is quite sensitive to only some of them. In particular, the parameter $N$ does not greatly affect the maximal densities, as if the crowd were able to self-regulate. This ability comes from the dynamical way people change status, the decision depending on the crowd itself. The parameter $\delta t$, instead, affects the maximal density near the gate more than the size $N$ of the crowd itself.
This conveys the idea that large crowds are not an issue \emph{per se}; rather, high densities arise whenever people keep moving towards a blocked crowd. This creates a compression which is then very hard to resolve. Finally, numerical simulations suggest that it is possible to prevent the formation of congestion by informing people along the corridor about the status of the gate. Even one information point is able to drastically reduce the maximal density, since it prevents the encounter between the first people who have reached the closed gate and those who are still arriving. \medskip In the near future we will further investigate the role of the geometry of the domain in crowd dynamics, in particular by testing different shapes of the corridor. We will also evaluate the impact of the visibility of the signals which inform people about the status of the gate. Most importantly, we will validate the results of the simulations by comparing them with the real dynamics of people observed during mass events by means of video cameras. Overall, we are strongly convinced that the adoption of predictive tools for congestion formation and of technologies/automated systems for informing the crowd should be a further subject of study in event planning and crowd safety management. \section*{Funding} This work was carried out within the research project ``SMARTOUR: Intelligent Platform for Tourism'' (No. SCN\_00166) funded by the Ministry of University and Research with the Regional Development Fund of European Union (PON Research and Competitiveness 2007--2013). E.C. would also like to thank the Italian Ministry of Instruction, University and Research (MIUR) for supporting this research with funds from the PRIN Project 2017 (No. 2017KKJP4X, ``Innovative numerical methods for evolutionary partial differential equations and applications''). E.C.\ and M.M.\ are members of the INdAM Research group GNCS. \section*{Authors' contribution} G.A.
proposed the research topic, suggested the case study, provided some real data for calibration, interpreted the results, and proofread the paper. E.C. and M.M. developed the model, wrote the numerical code, performed the numerical tests, interpreted the results, and wrote the paper. \baselineskip=0.9\normalbaselineskip \phantomsection\addcontentsline{toc}{section}{\numberline{}References} \section{Introduction}\label{sec:intro} This paper proposes a microscopic (agent-based) model for describing the dynamics of pedestrians in a very special situation, characterized by a moving crowd whose members gradually reverse their direction of motion, thus colliding with the newly incoming people still moving in the original direction. This situation creates a nonstandard self-regulating counter-flow with rather complex dynamics, which can be investigated by suitably tuning the parameters of the model. The final goal is to predict (and prevent) the formation of dangerous congestion in real mass events. \medskip \emph{Relevant literature.} Modelling the behavior of people in a crowd is a difficult task, since it requires identifying the most important behavioral rules, which vary greatly from person to person. For that reason, the study of crowds can be regarded as a multidisciplinary area, which has for many years attracted the interest of mathematicians, physicists, engineers, and psychologists. Crowd modelling has a long-standing tradition, starting from the pioneering papers by Hirai and Tarui \cite{hirai1975}, Okazaki \cite{okazaki1979TAIJa}, and Henderson \cite{henderson1974} in the '70s. Since then, all types of models have been proposed, spanning from the microscale to the macroscale, including multiscale ones, both differential and nondifferential (e.g., cellular automata).
Models can be first-order (i.e.\ velocity-based) or second-order (i.e.\ acceleration-based), with local or nonlocal interactions, with metric or topological interactions, with or without contact-avoidance features. The presence of social groups can also be taken into account. A number of review papers \cite{aghamohammadi2020, bellomo2011, dong2020, duives2013, eftimie2018, haghani2020, martinez2017, papadimitriou2009} and books \cite{cristiani2014book, rosini2013book, kachroo2008book, maurybook2019} are now available; we refer the interested reader to these references for an introduction to the field. It is also useful to mention that models for pedestrians often stem from those developed in the context of vehicular traffic \cite{helbing2001, rosini2013book}. Moreover, there is a close connection between pedestrian modeling and control theory, including mean-field games; see, e.g., \cite{albi2020, cristiani2021pp, cristiani2014book} and references therein. In this paper we deal specifically with \emph{counter-flow} dynamics: such dynamics occur when two groups of pedestrians move in opposite directions, so that each group has to find a way to pass through the other. The importance of accurate modeling and simulation of counter-flow dynamics is supported by evidence from crowd disaster analysis \cite{helbing2012crowd}. Over the last few years, major accidents have often occurred due to overcrowding and counter-flow phenomena taking place inside the area of the events, or in the proximity of the entrance and exit points. For all these reasons, the literature on counter-flow is quite rich; see, among others, \cite{helbing1995social, hoogendoorn2003simulation, heliovaara2012counterflow} in the context of microscopic differential models, \cite{cristiani2011MMS} for a multiscale differential model, and \cite{weng2006cellular, nowak2012quantitative} in the context of cellular automata.
Moreover, it is now well established that counter-flow dynamics lead to the so-called \emph{lane formation}: in order to avoid collisions, pedestrians arrange themselves in alternating lanes ($\leftrightarrows$) with the same walking direction. The behavior displayed by numerical simulations is in good agreement with observations of real people in both artificial and natural environments \cite{Kretz2006experimental, hoogendoorn2003extracting, helbing2001self, murakami2021mutual}. Finally, let us discuss the impact of \emph{social groups} on the dynamics of crowds: although most models assume that each pedestrian moves in a crowd on its own, real crowds are typically formed by small subgroups, such as friends or families. The impact of social groups on crowd dynamics has been explored since the '70s in a number of papers \cite{aveni1977not, moussaid2010walking, von2017empirical, singh2009modelling}, showing that the presence of groups is far from negligible. In particular, in the context of bi-directional counter-flow dynamics, theoretical and experimental observations suggest that the presence of groups slows down the formation of lanes, which are more fragmented \cite{zanlungo2020effect, crociani2017micro}. \medskip \emph{Paper contribution.} In this paper, we consider a specific scenario related to counter-flow dynamics: a crowd (with social groups) in a corridor, initially moving in one direction towards an open gate, at some moment is no longer able to proceed because of the closure of the gate. After that, people have to decide whether to stop \& wait for a possible re-opening, or to move back. This is the case, e.g., of an inflow of people towards an area dedicated to a mass event, which is interrupted by the organizers when the area capacity has been reached.
From the modelling point of view, the main novelty is that the decision to stay or to move back is taken \emph{dynamically} by each group, on the basis of the behaviour of the surrounding groups. More precisely, we assume that the decision is taken whenever \emph{the group leader is no longer able to move forward for a certain time}. This can happen either because it has reached the gate, or because it has reached the other people queuing in front of the gate, or because it is blocked by the people going back. The overall dynamics is further complicated by the fact that people who decide to stay do not want to be overtaken by the other people approaching the gate, since they do not want to lose their priority in the queue. On the other hand, staying people want to facilitate the passage of reversing people, because the latter leave free space for the former. From the mathematical point of view, we propose a microscopic differential model inspired by the well-known social force model by Helbing \cite{helbing1995social}, based on a large system of ODEs. We introduce several variants with respect to the original model, the two most important being that a) we consider a first-order (velocity-based) model and b) we consider topological interactions, taking into account the first neighbour only. The latter choice greatly speeds up the numerical code. The final goal of this research, which is especially dedicated to practitioners involved in the organization of real mass events, is the prevention of critical situations which could arise in the reversing-flow scenario under consideration. Moreover, since digital twins are commonly used to support the development of safety plans, highlighting critical aspects that need to be solved before and during the event (cf.\ \cite{scozzari2018modeling}), we also propose a crowd control strategy based on the optimal placement of signals/stewards, aiming at informing people in due time about the gate status (open or closed).
\medskip \emph{Paper organization.} The paper is organized as follows. In Section \ref{sec:model} we present the model for the crowd dynamics. In Section \ref{sec:casestudy} we present our case study: we describe the geometry and the mechanism underlying the behavioural choices of pedestrians. In Section \ref{sec:tests} we discuss the results obtained from the numerical simulations. We end the paper with some conclusions and future perspectives. \section{The model}\label{sec:model} \subsection{General principles} All force models share the structure of Newtonian dynamics, namely \begin{equation}\label{structure} \left\{ \begin{array}{l} \dot{\textbf{X}}_k(t)= \textbf{V}_k(t) \\ [2mm] \dot{\textbf{V}}_k(t)= \textbf{F}_k(t,\textbf{X},\textbf{V}) \end{array} \right. \qquad k=1,\ldots,N \end{equation} where $N$ is the total number of agents, $\textbf{X}_k(t),\textbf{V}_k(t)\in \mathbb{R}^{2}$ denote the position and velocity of agent $k$ at time $t$, respectively, and $\textbf{X}=\left(\textbf{X}_1,\ldots,\textbf{X}_N\right)$, $\textbf{V}=\left(\textbf{V}_1,\ldots,\textbf{V}_N\right)$. The function $\mathbf{F}_k$, the so-called \emph{social force}, models the total force exerted on agent $k$, and gathers all the physical, psychological and behavioral aspects of pedestrian dynamics. The social force is not a real force, but rather an empirical mathematical tool which translates all these aspects into formulas. In its minimal form, it takes into account the following three contributions: \begin{itemize} \item[i)] An individual desired velocity term: the velocity that a single pedestrian would keep if it were alone in the domain. \item[ii)] A repulsion term: pedestrians tend to maintain a certain distance from other members of the crowd and from the obstacles and walls present in the environment, in order to avoid collisions.
\item[iii)] An attraction term: pedestrians who are not moving on their own tend to stay close to the members of their social group (friends, family members). \end{itemize} Social force models can be enhanced by adding random fluctuations due to unpredictable behavioral variations or further small-scale interactions. In this regard, it can be useful to note that (generally undesired) numerical instabilities often play the same role. \medskip \emph{First-order models.} In pedestrian dynamics, much more than in vehicular dynamics, accelerations are almost instantaneous (at least compared with a typical reference time scale), and hence inertia-based effects are negligible. Therefore, first-order models of the form \begin{equation}\label{structure-firstorder} \dot{\textbf{X}}_k(t)=\textbf{V}_k(t,\textbf{X}),\qquad k=1,\ldots,N \end{equation} are also suitable, see, e.g., \cite{cristiani2011MMS}. We think that such models are easier to calibrate and computationally less expensive; for these reasons, in this paper we adopt a model of this kind. This means that all the aspects of the dynamics which were encapsulated in the force $\textbf{F}$ are now inserted directly into the velocity vector $\textbf{V}$. \medskip \emph{Social groups.} To account for the presence of social groups in the crowd, such as families or friends, we assume that each pedestrian is part of a group. Each group has at least two members; we do not consider the presence of lonely people. Moreover, each group has a leader, who never changes within the time frame of the simulation. The leader of the group takes decisions about the common target of the whole group. Groups tend to stay together, but they can temporarily break up and then reunite. The leader does not necessarily walk in front of the group, because all group members know the destination (decided by the leader) and are able to reach it independently.
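For concreteness, a first-order model of the form \eqref{structure-firstorder} is advanced in time with the explicit Euler scheme used later in the numerical tests. A minimal Python sketch, with a placeholder velocity field in place of the full social-force terms:

```python
import numpy as np

def euler_step(X, t, velocity_field, dt=0.01):
    """One explicit Euler step for dX_k/dt = V_k(t, X):  X <- X + dt * V(t, X)."""
    return X + dt * velocity_field(t, X)

# placeholder field: every agent walks rightward at 1 m/s
V = lambda t, X: np.tile([1.0, 0.0], (len(X), 1))

X = np.zeros((3, 2))                 # three agents at the origin
for n in range(100):                 # integrate over 1 s with dt = 0.01 s
    X = euler_step(X, n * 0.01, V)
```

In the full model, `velocity_field` would assemble the desired-velocity, repulsion and attraction terms for each agent.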
\medskip \emph{Group status.} We assume that at any given time each group (identified with its leader) has a unique \emph{status}, which corresponds to its target and, more generally, to its behaviour. The four possible statuses will be detailed later on in Sect.\ \ref{sec:behavior}. \medskip \emph{Topological interactions.} We consider topological, rather than metric, interactions, meaning that each agent $k$ interacts with a fixed number of agents at a time, regardless of their distance from the agent $k$. More precisely, we assume that each leader interacts only with its first neighbour outside its social group, while followers (i.e.\ non-leaders) interact with their first neighbour inside their social group, their first neighbour outside it, and their leader. As happens in molecular dynamics, the fact that a particle interacts with a few other particles at a time does not mean that the interactions are limited to them: the first neighbour changes continuously, and after a few time iterations all the agents reasonably close to a given agent have interacted with it. While the standard social force model assumes simultaneous interactions with all neighbours within a certain threshold distance, we prefer to consider fewer interactions at a time, recovering the same results, on average, over a longer, but still short, time period. This choice stabilizes the dynamics and is convenient from the computational point of view. \subsection{Mathematical details} As we said, we consider a first-order social force model with social groups and topological interactions. Agents have no dimension and we do not consider contact-avoidance features. Let us start with the dynamics of \emph{group leaders}.
We set \begin{equation}\label{leaderdynamics} \textbf{V}_k(\textbf{X};\S)= \textbf{V}^d(\S_k)+ \textbf{V}^{R}(\textbf{X}_k,\textbf{X}_{k^{**}};\S_k,\S_{k^{**}})+ \textbf{V}^o(\textbf{X}_k) \end{equation} where $\textbf{V}^d$ is the desired velocity, which only depends on the status $\S_k$ of the group the agent $k$ belongs to; $\textbf{V}^{R}$ accounts for the repulsion from strangers, and depends on the positions $\textbf{X}_k$ of the agent $k$ itself, the position $\textbf{X}_{k^{**}}$ of the agent $k^{**}$, defined as the nearest neighbour of $k$ \emph{outside} its social group, and the statuses of the two agents; $\textbf{V}^o$ is the repulsion from obstacles. Clearly we have $\S:=(\S_1,\ldots,\S_N)$. The dynamics of \emph{followers} is instead \begin{equation}\label{followerdynamics} \textbf{V}_k(\textbf{X};\S)= \textbf{V}^d(\S_k)+ \textbf{V}^{r}(\textbf{X}_k,\textbf{X}_{k^*})+ \textbf{V}^{R}(\textbf{X}_k,\textbf{X}_{k^{**}};\S_k,\S_{k^{**}})+ \textbf{V}^a(\textbf{X}_k,\textbf{X}_{k^L})+ \textbf{V}^o(\textbf{X}_k) \end{equation} where $\textbf{V}^{r}$ accounts for the repulsion from group members, and depends on the positions $\textbf{X}_k$ of the agent $k$ itself, the position $\textbf{X}_{k^*}$ of the agent $k^*$, defined as the nearest neighbour of $k$ \emph{inside} its social group; $\textbf{V}^a$ accounts for the attraction towards the group's leader, whose index is denoted by $k^L$. Note that group leaders are not attracted by group mates, the cohesion of the group being totally left to followers. Moreover, leaders are not repulsed by group mates. This avoids artifacts in the dynamics and self-propulsion of the group. The reason why we distinguish internal and external repulsion is that pedestrians tend to stay close to group mates, while keeping a larger distance from the strangers. 
\medskip The repulsion is defined in order to be inversely proportional to the distance between the agents, \begin{equation} \textbf{V}^{r}= -C^{r}\frac{\textbf{X}_{k^*} - \textbf{X}_k}{\left\| \textbf{X}_{k^*} - \textbf{X}_k \right\|^2}, \qquad \textbf{V}^{R}= -C^{R}(\S_k,\S_{k^{**}})\frac{\textbf{X}_{k^{**}} - \textbf{X}_k}{\left\| \textbf{X}_{k^{**}} - \textbf{X}_k \right\|^2}, \end{equation} where the parameter $C^{r}>0$ is constant, whereas $C^{R}(\S_k,\S_{k^{**}})>0$ depends on the behavioral statuses of the two interacting agents, see Sect.\ \ref{sec:behavior}. To avoid numerical issues in the case two agents get temporarily too close to each other, a threshold is imposed to the value $1/\|\cdot\|^2$. The attraction follower-leader, instead, is proportional to the distance between the two agents \begin{equation} \textbf{V}^a= C^a (\textbf{X}_{k^L} - \textbf{X}_k), \end{equation} where the parameter $C^{a}>0$ is constant. \section{Description of the case study} \label{sec:casestudy} \subsection{Geometry} We model the access road to the area of a mass event as a two-dimensional corridor of length $L$ and height $H$. People initially move along the corridor from left to right and enter the area of the event through a gate, placed at the end of the corridor, see Fig.\ \ref{fig:geometry}. At a certain time ($t=0$ in the simulation) the gate closes for safety reasons so that people are forced to change target (staying or going back). In the following numerical simulations we focus on the behavior of the crowd after the gate closure, which involves a risk of high densities. In particular, we measure the pedestrian density along the corridor by means of four square regions of side $R$, equally spaced along the corridor. Regions are numbered from left to right, see Fig.\ \ref{fig:geometry}. \begin{figure}[h!] \centering \includegraphics[width=12cm, height=2.5cm]{figure/corridor_regions_name} \caption{Geometry of the simulated domain. 
The gate is placed on the far right. Pedestrian density evolution is monitored within the four square regions placed along the corridor.} \label{fig:geometry} \end{figure} \subsection{Behavior}\label{sec:behavior} In order to complete the description of the model we need to specify how the groups make decisions; in particular, how and when they decide to move towards the gate, stop, or go back. We define four statuses $\S_k\in\{1,2,3,4\}$ at the group level (i.e.\ all components of a group share the same status, decided by the group leader), defined as follows: \begin{itemize} \item $\S_k=1$: move rightward towards the gate; \item $\S_k=2$: doubt phase, decision in progress; \item $\S_k=3$: decision taken, go back moving leftward; \item $\S_k=4$: decision taken, queuing in the corridor. \end{itemize} The group status affects the desired velocity and the interactions with the others. Status changes necessarily happen in this order: $1 \rightarrow 2 \rightarrow \{3,4\}$, and once the group is in status 3 or 4 it will not change further. The change 1 $\rightarrow$ 2 occurs if \begin{equation} t>\delta t \quad \text{ and } \quad X^1_k(t)-X^1_k(t-\delta t)\leq\overline{\delta\ell}, \end{equation} where $\delta t$, $\overline{\delta \ell}>0$ are two additional parameters and $X^1_k$ is the horizontal component of the position of the agent $k$. This means that the group leader was not able to move forward by more than $\overline{\delta\ell}$ over a time interval of length $\delta t$. In practice, $\delta t$ represents a degree of willingness to get to the gate. Status 2 has a fixed duration, set to $D$. After $D$ time units, the change 2 $\rightarrow$ $\{3,4\}$ occurs randomly in such a way that, on average, $p\%$ of the groups fall into status 3 and $(100-p)\%$ into status 4. \section{Numerical tests}\label{sec:tests} \subsection{Parameters} Let us begin with setting the values of the parameters.
Parameters are primarily chosen in order to get realistic (observed) behaviour in situations of equilibrium. In particular we match observed crowds in the initial phase, where all people are in status 1 and are normally walking rightward, and in the final phase, when only people with status 4 are present and waiting. In these two phases the forces which rule the interpersonal distances are in equilibrium, and the relative positions of persons do not vary (apart from small negligible oscillations). In the numerical tests we discuss the effect of five parameters, namely $N$, $\delta t$, $p$, $D$, $H$. We also investigate the effect of the desired velocity of pedestrians with status 4. Equation \eqref{sec:model} is numerically solved by means of the explicit Euler scheme, with time step $\Delta t$. \medskip \emph{Fixed parameters.} $\Delta t=0.01$ s, $\overline{\delta\ell}=1.5$ m, $L=130$ m, $R=10$ m, $\textbf{V}^d(1)=(1,0)$ m/s, $\textbf{V}^d(2)=(0,0)$ m/s, $\textbf{V}^d(3)=(-1.2,0)$ m/s. The 16 possible values of $C^R$ are summarized in the following matrix, where the entry $ij$ is the force exerted by an agent in status $j$ on an agent in status $i$, $$ \left[ \begin{array}{cccc} 2.0 & 2.5 & 1.0 & 2.5 \\ 2.0 & 2.0 & 2.0 & 2.0 \\ 0.75 & 0.75 & 0.75 & 0.75 \\ 2.0 & 2.0 & 0.75 & 4.5 \end{array} \right] $$ Some comments on the repulsion forces are in order: \begin{enumerate} \item People in status 3 (coming back) are repelled from all the others with little intensity. In fact, they accept being close to other people, even to strangers, because the contact is supposed to be of short duration. \item People in status 4 (staying) are strongly repelled by people of the same status because they want to reach a large comfort distance. \item People in status 4 (staying) are also repelled by people in status 1 (moving rightward) because they want to keep their priority in the queue and keep the newly coming people behind them.
In other words, they do not leave room for others to move rightward. Conversely, they are little repelled by people in status 3 (coming back), because waiting people take advantage of their departure. People going back leave free space and allow waiting people to get even closer to the gate. All these features have an important consequence: staying people are compressed towards the gate, and the closer they are to the gate, the more compressed they are. \end{enumerate} \emph{Variable parameters.} $N$ = 400 or 800 or 1200, $\delta t$ = 7 or 20 s, $p$ = 25 or 75, $D$ = 10 or 40 s, $\textbf{V}^d(4)$=$(0.5,0)$ or (0,0) m/s, $H$ = 7 or 10 or 13 m. \medskip In all simulations the initial density of pedestrians is 0.8 p/m$^2$ (the larger $N$, the larger the area occupied). The initial positions are randomly chosen. The number of members of the groups is uniformly random in the set $\{2,3,4,5,6\}$. \subsection{Numerical results} Fig.\ \ref{fig:screenshot} shows some screenshots of a reference simulation with 1200 agents: \begin{figure}[t!] \centering {\footnotesize (a)}\subfigure{\includegraphics[width=13.3cm, height=1.9cm]{figure/dinamica_6s.png}}\\ {\footnotesize (b)}\subfigure{\includegraphics[width=13.3cm, height=1.9cm]{figure/dinamica_35s.png}}\\ {\footnotesize (c)}\subfigure{\includegraphics[width=13.3cm, height=1.9cm]{figure/dinamica_92s.png}}\\ {\footnotesize (d)}\subfigure{\includegraphics[width=13.3cm, height=1.9cm]{figure/dinamica_150s.png}}\\ {\footnotesize (e)}\subfigure{\includegraphics[width=13.3cm, height=1.9cm]{figure/dinamica_240s.png}} \caption{Five screenshots of a simulation with $N=1200$ pedestrians, with $\delta t$ = 7 s, $p$=25, $D=10$ s, $\textbf{V}^d(4)$=$(0.5,0)$ m/s, $H$ = 10 m. Status 1 is blue, status 2 is magenta, status 3 is red, and status 4 is black. Circled agents are the leaders of their groups.} \label{fig:screenshot} \end{figure} (a) At the beginning all agents are walking rightward.
As we said before, initial density, repulsion forces and desired velocity are compatible with each other, in the sense that people can move forward at constant velocity, nobody slows down due to excessive proximity to their neighbors, and no queue is formed. As soon as the first pedestrians reach the closed gate the situation changes: people in proximity of the gate stop and people behind start slowing down. (b) A queue begins to form and, gradually, the people standing still make a decision about their new status. At this point all four statuses are present at the same time and the dynamics becomes quite complex. Some people in status 1 (moving rightward, blue) are able to overtake people in status 4 (staying, black) before changing status themselves. (c) Lanes are formed as in more standard counter-flow dynamics (cf.\ Sect. \ref{sec:intro}), although they are quite perturbed by the presence of the social groups. Social groups sometimes disperse for a while but then reunite. (d) Gradually, people in status 3 (coming back, red) succeed in passing through the staying crowd and leave the corridor. While they move leftward, people in status 4 (staying, black) can move a bit backward to leave them free space. (e) Finally, only people in status 4 (staying, black) are present in the corridor. They reach an equilibrium characterized by a nonconstant density, due to the natural tendency to stay close to the gate. \medskip Fig.\ \ref{fig:densities_new} shows instead the evolution of the density in the four regions of interest, varying $N$. \begin{figure}[t!]
\centering \subfigure[][$N_p=400$]{\includegraphics[width=4.2cm, height=4.2cm]{figure/dubbio10_Np400}} \hspace{0 mm} \subfigure[][$N_p=800$]{\includegraphics[width=4.2cm, height=4.2cm]{figure/dubbio10_Np800}} \hspace{0 mm} \subfigure[][$N_p=1200$]{\includegraphics[width=4.2cm, height=4.2cm]{figure/dubbio10_Np1200}} \caption{Evolution of the density in the four regions of interest, for $N_p=400, 800, 1200$ and other parameters as in Fig.\ \ref{fig:screenshot}.} \label{fig:densities_new} \end{figure} The comparison of the plots brings to light some facts: \begin{enumerate} \item The most crowded region is the fourth one, the region closest to the gate. \item In all cases, a peak of the density is clearly visible in regions 3-2-1; it decreases and shifts forward in time as the region moves away from the gate (see, e.g., Fig.\ \ref{fig:densities_new}(c)). This moving peak is due to the people going back encountering the newly arriving people. \item As $N$ increases, average densities increase in all regions, but less than one could expect. We think that the reason for this stability is that \emph{the crowd is able to self-regulate}: the more people in the corridor, the earlier (in both space and time) people get to make a decision about what to do, and, if they decide to leave, they do so before remaining trapped by the others. We observe a sort of compensation phenomenon: the more people there are, the sooner they leave, leading overall to small differences in the density evolution. \end{enumerate} Now we consider the numerical setting of Fig.\ \ref{fig:densities_new}(c) as \emph{reference case} for investigating the role of the variable parameters of the model. The results are shown in Fig.\ \ref{fig:densities_aggiunta}. \begin{figure}[t!]
\centering \subfigure[][$H=7$ m]{\includegraphics[width=4.2cm, height=4.2cm]{figure/H7}} \hspace{0 mm} \subfigure[][$H=13$ m]{\includegraphics[width=4.2cm, height=4.2cm]{figure/H13}} \hspace{0 mm} \subfigure[][$D=40$ s]{\includegraphics[width=4.2cm, height=4.2cm]{figure/dubbio40}}\\ \subfigure[][$\delta t=20$ s]{\includegraphics[width=4.2cm, height=4.2cm]{figure/dt_20}} \hspace{0 mm} \subfigure[][$p=75$]{\includegraphics[width=4.2cm, height=4.2cm]{figure/Pinversa}} \hspace{0 mm} \subfigure[][$\textbf{V}^d(4)=(0,0)$]{\includegraphics[width=4.2cm, height=4.2cm]{figure/V4_nulla}} \caption{Same simulation as in Fig.\ \ref{fig:densities_new}(c) with (a) $H$ decreased from 10 to 7 m, (b) $H$ increased from 10 to 13 m, (c) $D$ increased from 10 to 40 s, (d) $\delta t$ increased from 7 to 20 s, (e) $p$ increased from 25 to 75, (f) $\textbf{V}^d(4)$ decreased from $(0.5,0)$ to $(0,0)$ m/s.} \label{fig:densities_aggiunta} \end{figure} Figs.\ \ref{fig:densities_aggiunta}(a,b) show the results obtained by modifying the height $H$ of the corridor. In both cases we maintain the initial density of people at 0.8 p/m$^2$. As expected, we observe the largest average densities in the case of the smallest corridor. The difference is remarkable in the regions farthest from the gate. Fig.\ \ref{fig:densities_aggiunta}(c) shows the results obtained for an increased duration of the doubt phase, namely $D=40$ s. We do not observe a relevant increase of the maximum values of the densities, but the peaks are delayed further. Fig.\ \ref{fig:densities_aggiunta}(d) shows the results obtained for an increased value of the time needed to decide either to enter the doubt phase or to keep going to the gate, namely $\delta t=20$ s. People are now more reluctant to give up reaching the gate and therefore necessarily spend more time in the corridor. Here the peak of density in region 4 is much greater than before and reaches the dangerous level of 5 p/m$^2$ \cite{helbing2012crowd}.
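The density curves shown in these figures are obtained by counting agents inside the four monitoring regions introduced in Sect.\ \ref{sec:casestudy}. A minimal sketch of this measurement (illustrative names; square regions of side $R$ centered on the corridor axis):

```python
def region_densities(positions, region_centers_x, center_y, R):
    """Pedestrian density (p/m^2) in square monitoring regions of side R,
    one region centered at (cx, center_y) for each cx in region_centers_x."""
    densities = []
    for cx in region_centers_x:
        count = sum(1 for (x, y) in positions
                    if abs(x - cx) <= R / 2 and abs(y - center_y) <= R / 2)
        densities.append(count / (R * R))
    return densities
```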
This is mainly due to the longer time spent there by people in statuses 1--4 all together. Overall, the parameter $\delta t$ is the most effective in increasing the density. Fig.\ \ref{fig:densities_aggiunta}(e) shows the effect of increasing $p$ from 25 to 75. In this case 75\% of the people decide to leave the corridor and only 25\% decide to stay. We see that at the beginning the densities in all regions are larger than those in the reference solution, because more people are trying to go back and form a big group which moves very slowly. Conversely, at the final time all regions are almost empty and have similar densities. Fig.\ \ref{fig:densities_aggiunta}(f) shows the effect of setting to zero the desired velocity of people with status 4. Although those people are just waiting in the corridor, a rightward constant desired velocity is needed to reproduce the will to stay close to the gate and not to be overtaken by newly incoming people in status 1. If $\textbf{V}^d(4)=(0,0)$ the dynamics change completely: people in status 1 continuously take the place of waiting people, and waiting people move back. At the final time, the result is an almost constant density all along the corridor. \subsection{Congestion control via Control \& Information Points} As final tests, we try to lower the congestion in the four regions by adding some Control \& Information Points (CIPs) along the corridor. The idea is that people walking in the corridor are informed about the closure of the gate by some signals or stewards. In this way, people can enter the doubt phase much earlier (both in space and time) than they would without any knowledge of the status of the gate. More precisely, we assume that anyone in status 1 moves immediately to status 2 as soon as they cross a CIP. The question then arises of when and where it is most convenient to locate the CIPs.
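The decision rules above (the stall test for the change $1\rightarrow 2$, the fixed doubt duration $D$ with the random split into statuses 3 and 4, and the CIP override) can be sketched as follows (illustrative names; a group is represented by its leader):

```python
import random

def update_status(group, t, delta_t, dl_bar, D, p, cips=(), rng=random):
    """One status update for a group. `group` holds 'status', the leader's
    horizontal position 'x', its position 'x_prev' a time delta_t ago, and
    't2', the time at which status 2 was entered. `cips` lists the horizontal
    positions of active Control & Information Points."""
    s = group['status']
    if s == 1:
        crossed_cip = any(group['x'] >= xc for xc in cips)
        stalled = t > delta_t and group['x'] - group['x_prev'] <= dl_bar
        if crossed_cip or stalled:
            group['status'], group['t2'] = 2, t
    elif s == 2 and t - group['t2'] >= D:
        # after D time units of doubt: go back (3) with probability p/100,
        # otherwise queue in the corridor (4)
        group['status'] = 3 if rng.random() < p / 100.0 else 4
    return group['status']
```

Statuses 3 and 4 are absorbing, so the function leaves them unchanged.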
Starting from the setting shown in Fig.\ \ref{fig:densities_aggiunta}(d), which is the most critical among the investigated ones, we have run a brute-force optimization procedure to test the effect of the presence of a single CIP; we have found that the best option is positioning the CIP in region 3, 20 s after the gate closed. The result is shown in Fig.\ \ref{fig:signals}(a). \begin{figure}[h!] \centering \subfigure[][Single CIP in region 3]{\includegraphics[width=4.2cm, height=4.2cm]{figure/X100_N20}} \hspace{25 mm} \subfigure[][Two CIPs in regions 3 and 1]{\includegraphics[width=4.2cm, height=4.2cm]{figure/two_signals}} \caption{Density evolution in the four regions of interest. 20 s after the gate closed, pedestrians are informed by (a) one CIP located in region 3, (b) two CIPs, simultaneously activated, located in regions 3 and 1.} \label{fig:signals} \end{figure} The effect of the CIP is clear, since the maximal density decreases from $\sim$5 to $\sim$3 p/m$^2$, and the density evolution of region 3 becomes very similar to that of region 4. Fig.\ \ref{fig:signals}(b) shows the result after adding a second CIP in region 1, again 20 s after the gate closed. Maximal densities are not further decreased, but the densities fluctuate less in all regions. This can be an advantage in terms of pedestrian safety. \section{Conclusions and future perspectives} We have conducted a numerical investigation of a particular situation characterized by a complex self-interaction of a crowd in a corridor. Although the model has a large number of parameters, we have seen that it is quite sensitive only to some of them. In particular, the parameter $N$ does not affect the maximal densities very much, as if the crowd were able to self-regulate. This ability comes from the dynamical way people change status, the decision depending on the crowd itself. The parameter $\delta t$, instead, affects the maximal density near the gate more than the size $N$ of the crowd itself.
This conveys the idea that large crowds are not an issue \emph{per se}; high densities actually arise whenever people keep moving towards a blocked crowd. This creates a compression which is then very hard to resolve. Finally, numerical simulations suggest that it is possible to prevent the formation of congestion by informing people along the corridor about the status of the gate. Even one information point is able to drastically reduce the maximal density, since it prevents the encounter between the first people who have reached the closed gate and those who are arriving. \medskip In the near future we will further investigate the role of the geometry of the domain in crowd dynamics, in particular testing different shapes of the corridor. We will also evaluate the impact of the visibility of the signals which inform people about the status of the gate. Even more importantly, we will validate the results of the simulations by comparing them with the real dynamics of people observed during mass events by means of video cameras. Overall, we are strongly convinced that the adoption of predictive tools for congestion formation and technologies/automated systems for informing the crowd should be a further subject of study in event planning and crowd safety management. \section*{Funding} This work was carried out within the research project ``SMARTOUR: Intelligent Platform for Tourism'' (No. SCN\_00166) funded by the Ministry of University and Research with the Regional Development Fund of the European Union (PON Research and Competitiveness 2007--2013). E.C. would also like to thank the Italian Ministry of Instruction, University and Research (MIUR) for supporting this research with funds coming from the PRIN Project 2017 (No. 2017KKJP4X, entitled ``Innovative numerical methods for evolutionary partial differential equations and applications''). E.C.\ and M.M.\ are members of the INdAM Research group GNCS. \section*{Authors' contribution} G.A.
proposed the research topic, suggested the case study, provided some real data for calibration, interpreted the results, and proofread the paper. E.C. and M.M. developed the model, wrote the numerical code, performed numerical tests, interpreted the results, and wrote the paper. \baselineskip=0.9\normalbaselineskip \phantomsection\addcontentsline{toc}{section}{\numberline{}References}
\section{Introduction} The supermembrane theory was derived in \cite{bst}. Its $SU(N)$ regularization was introduced in \cite{hoppe}, and in \cite{dwhn,dwmn} the $SU(N)$ regularized Hamiltonian in the light cone gauge was obtained. The zero mode eigenfunction can be described in terms of the $\mathrm{D}=11$ supergravity multiplet; however, the existence of the ground state of the Hamiltonian requires a proof of the existence of a unique nontrivial eigenfunction for the nonzero modes. Moreover, in order to be identified with the 11D supergravity multiplet, it must be invariant under $SO(9)$. The existence of this ground state is still an elusive open problem. For an (incomplete) list of contributions towards its solution, mainly in asymptotic regimes, cf. \cite{hoppe,hasler,fh,michishita,hl,hlt,frolich}. In 11D supermembrane theory, the zero modes associated with the center of mass and the non-zero modes associated with the internal excitations decouple. The ground state of the Hamiltonian with zero eigenvalue and its associated eigenfunction can be described in terms of the $\mathrm{D}=11$ supergravity multiplet once the existence of a unique nontrivial eigenfunction for the non-zero modes, invariant under the R-symmetry $SO(9)$, is proven \cite{dwhn}. The $SU(N)$ regularized Hamiltonian for nonzero modes coincides with the Hamiltonian of the BFSS matrix model \cite{bfss}. This Hamiltonian was first obtained as the $0+1$ reduction of 10D super Yang-Mills \cite{claudson,halpern}. Hence, the existence of the ground state for this matrix model turns out to be exactly the missing step in the proof of the existence of the ground state for the $\mathrm{D}=11$ supermembrane. In \cite{sethi-stern} a prescription to compute the index of non-Fredholm operators was presented.
Although a precise definition of the domain of the non-Fredholm operator (which is certainly not the whole Hilbert space) is not given, and hence the trace involved in the evaluation of the index is not precisely defined, they introduce a prescription which, they claim, allows one to compute an index of the non-Fredholm operator. Using this result they claim to obtain a unique ground state for the Hamiltonian of the BFSS matrix model with gauge group $SU(2)$. In this paper we emphasise the bounds on the ground state wave function along the flat directions, which are absent in previous works. Although we consider the $SU(2)$ case, our approach indicates a new way to analyse the $SU(N)$ model as $N$ goes to infinity. This is the relevant gauge group for the $\mathrm{D}=11$ supermembrane. Due to its complexity, a natural approach for the solution of this problem is to divide it into three parts, \cite{bgmrO,bgmrSU2,bgmrGS}. Firstly, determine the existence and uniqueness of a solution for the Dirichlet problem on a bounded region of arbitrary radius. Secondly, determine the existence and uniqueness of the solution for the Dirichlet problem on the unbounded complementary region. Thirdly, determine if both these solutions match one another and can be smoothly patched into a single solution of the full problem. The overall state will then be the ground state of the Hamiltonian of the non-zero modes of the $\mathrm{D}=11$ supermembrane. In \cite{bgmrGS} (see also \cite{bgmrO,bgmrSU2}) we settled the first step. The present work is about the second step. Our proof of existence and uniqueness for bounded regions relied on two fundamental properties of the Hamiltonian: i) its supersymmetric structure as $H=\{Q,Q^{\dag}\}$ and ii) the polynomial form of the potential expression as a function of the bosonic coordinates. We combined these two properties with iii) the Rellich-Kondrashov compact embedding theorem.
Then the existence and uniqueness followed from ellipticity and the Lax-Milgram theorem for strongly coercive sesquilinear forms. The Rellich-Kondrashov compact embedding theorem holds true for every bounded region of $\mathbb{R}^n$, but it might fail in general on unbounded regions. For the second step, one requires an estimate for the contribution of the potential to the mean value of the Hamiltonian, taking into account that this potential is unbounded from below along the sub-varieties where the bosonic potential vanishes. An estimate must therefore be obtained on the ``valleys'', denoted by $\Omega$ below, surrounding these sub-varieties. In the complement of $\Omega$, the bosonic potential is the dominant part of the potential; it is strictly positive and tends to infinity at infinity. In these ``good'' regions, the existence and uniqueness of the solution to the Dirichlet problem follows from general arguments, similar to those used for non-relativistic Schr{\"o}dinger operators. We expect that the ground state (if it exists) should extend along $\Omega$ and decay rapidly to zero in the complement of $\Omega$. Hard work, however, has to be conducted in the interior of $\Omega$. That is, to show the existence and uniqueness of the solution to the Dirichlet problem on $\Omega$ minus a ball of finite radius. In order to achieve this goal, we devote this letter to establishing that the Rellich-Kondrashov compact embedding theorem holds true on $\Omega$ for $\mathrm{D}\geq 5$ on Sobolev spaces defined following \cite{berger-schechter}. Concretely, we show that the measure (Lebesgue measure) of the unbounded set $$\Omega=\{x\in \mathbb{R}^n: V_{B}(x)<1\}$$ is finite and decays at infinity for any $\mathrm{D}\geq 5$. See Lemma~\ref{L2}. This includes, for the bosonic potential, the important cases of the $\mathrm{D}=7$ and $\mathrm{D}=11$ supermembrane.
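As a purely illustrative numerical check (not part of the proof), the decay of the valley measure in the $SU(2)$ case can be probed by sampling points uniformly on spheres of increasing radius in $\mathbb{R}^{3\mathrm{D}}$ and recording the fraction that lands in $\{V_B<1\}$. A hedged Python sketch for $\mathrm{D}=5$ (all names illustrative):

```python
import math
import random

def bosonic_potential(X):
    """V_B = (1/2) sum_{i,j} |X_i x X_j|^2 for a list of three-vectors (SU(2) case)."""
    V = 0.0
    for a in X:
        for b in X:
            cx = a[1] * b[2] - a[2] * b[1]
            cy = a[2] * b[0] - a[0] * b[2]
            cz = a[0] * b[1] - a[1] * b[0]
            V += cx * cx + cy * cy + cz * cz
    return 0.5 * V

def valley_fraction(radius, D=5, samples=20000, seed=0):
    """Fraction of points, uniform on the sphere of the given radius in R^{3D},
    that lie in the valley {V_B < 1}."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        g = [rng.gauss(0.0, 1.0) for _ in range(3 * D)]
        norm = math.sqrt(sum(v * v for v in g))
        X = [[radius * g[3 * i + k] / norm for k in range(3)] for i in range(D)]
        if bosonic_potential(X) < 1.0:
            hits += 1
    return hits / samples
```

Since $V_B$ scales as the fourth power of the radius, the fraction at radius $a$ equals the angular measure of $\{V_B<a^{-4}\}$ on the unit sphere, so it shrinks rapidly as $a$ grows, consistent with the decay stated above.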
Consequently, with respect to properties i), ii) and iii), arguments analogous to i) and iii) can be made on $\Omega$. Property ii) is not valid in $\Omega$, but it might be possible to consider an estimate of the fermionic contribution to the mean value of the Hamiltonian which allows a different version of coercivity. We hope to report on this eventually. \section{Formulation of the problem} Before establishing our main current contribution, let us summarize the formulation of the problem. We follow the seminal work \cite{dwhn}. The $\mathrm{D}=11$ supermembrane is described in terms of the membrane coordinates $X^m$ and fermionic coordinates $\theta _{\alpha}$, transforming as a Majorana spinor on the target space. Both fields are scalars under worldvolume transformations. When the theory is formulated in the light cone gauge, the residual symmetries are global supersymmetry, the R-symmetry $SO(9)$ and a gauge symmetry: the area-preserving diffeomorphisms of the base manifold. The fields of the Hamiltonian and the wavefunction are decomposed according to the symmetry group $SO(9)$ in such a way that the Majorana spinor is expressed in terms of the linear representations of the subgroup $SO(7)\times U(1)\subset SO(9)$. The bosonic coordinates $X^{M}$ are decomposed as $(X^m,Z,\overline{Z})$, where $X^m$ for $m=1,\dots,7$ are the components of an $SO(7)$ vector, and $Z,\overline{Z}$ are the complex scalars \[Z=\frac{1}{\sqrt{2}}(X^8+iX^9)\quad \text{and} \quad \overline{Z}=\frac{1}{\sqrt{2}}(X^8-iX^9),\] which transform under $U(1)$.
The corresponding bosonic canonical momentum is accordingly decomposed into an $SO(7)$ vector with components $P_m$ and a complex $U(1)$ momentum $\mathcal{P}$ together with its conjugate $\overline{\mathcal{P}}$: $P_{M}=(P_{m},\mathcal{P},\overline{\mathcal{P}})$, where \[\mathcal{P}=\frac{1}{\sqrt{2}}(P^8-iP^9) \quad \text{and} \quad \overline{\mathcal{P}}=\frac{1}{\sqrt{2}}(P^8+iP^9).\] Denoting by $\lambda_{\alpha}$ the invariant $SO(7)$ spinor of the operator associated to the fermionic coordinates, we can express it in terms of an eight-component complex spinor $\theta^{\pm}$, eigenstate of $\gamma_9$ with $\gamma_9\theta^{\pm}=\pm\theta^{\pm}$, such that \[\lambda^{\dag}=2^{1/4}(\theta^+-i\theta^{-})\quad \text{and} \quad \lambda=2^{1/4}(\theta^++i\theta^{-}),\] where $\lambda^{\dag}$ is the fermionic canonical conjugate momentum to $\lambda$. Once the theory is regularized by means of the group $SU(N)$, the field operators are labeled by an $SU(N)$ index $A$ and they transform in the adjoint representation of the group. The wavefunctions are realized in terms of the $2^{8(N^2-1)}$-dimensional irreducible representation of the Clifford algebra spanned by $(\lambda^{\dag}+\lambda)$ and $i(\lambda^{\dag}-\lambda)$ in the fermion Fock space. The Hilbert space of physical states consists of the wavefunctions which take values in the fermion Fock space. Once it is shown that the zero mode states transform under $SO(9)$ as a $[(44\oplus 84)_{\mathrm{bos}}\oplus 128_{\mathrm{fer}}]$ representation, which corresponds to the massless $\mathrm{D}=11$ supergravity supermultiplet, the construction of the ground state wave function reduces to finding a nontrivial solution to \begin{equation*}\label{A}H\Psi=0\end{equation*} where $H=\frac{1}{2}M^2$ and $\Psi\equiv \Psi^{\mathrm{non-zero}}$. The latter is required to be a singlet under $SO(9)$, and $M$ is the mass operator of the supermembrane.
The Hamiltonian associated to the regularized mass operator of the supermembrane \cite{dwhn} is \[ \begin{aligned} H&=\frac{1}{2}M^2=-\Delta+V_B+V_F\\ \Delta&=\frac{1}{2}\frac{\partial^2}{\partial X^i_A\partial X_i^A}+\frac{1}{2}\frac{\partial^2}{\partial Z_A\partial \overline{Z}^A}\\ V_{B}&=\frac{1}{4}f_{AB}^Ef_{CDE}\{X_i^AX_j^BX^{iC}X^{jD}+4X_i^AZ^BX^{iC}\overline{Z}^{D}+2Z^A\overline{Z}^B\overline{Z}^{C}Z^{D}\}\\ V_F&=if_{ABC}X_i^A\lambda_{\alpha}^B\Gamma_{\alpha\beta}^i\frac{\partial}{\partial\lambda_{\beta C}} +\frac{1}{\sqrt{2}}f_{ABC}(Z^A\lambda_{\alpha}^B\lambda_{\alpha}^C-\overline{Z}^A\frac{\partial}{\partial \lambda_{\alpha B}}\frac{\partial}{\partial\lambda_{\alpha C}}). \end{aligned} \] The generators of the local $SU(N)$ symmetry are $$\varphi^A =f^{ABC}\left( X_{i}^B\partial_{X_i^C}+Z_B\partial_{Z^C}+\overline{Z}_B\partial_{\overline{Z}^C} +\lambda_{\alpha}^{B}\partial_{\lambda_{\alpha}^C}\right).$$ From the supersymmetric algebra, it follows that the Hamiltonian can be expressed in terms of the supercharges as $$H =\{Q_{\alpha},Q^{\dagger}_{\alpha}\}$$ on the physical subspace of solutions, given by the kernel of the first-class constraint $\varphi^A$ of the theory, that is, \begin{equation*} \label{constraint} \varphi^A\Psi=0.\end{equation*} The supercharges associated to modes invariant under $SO(7)\times U(1)$ are given explicitly in \cite{dwhn} as \[ \begin{aligned}Q_{\alpha}&=\left\{-i\Gamma_{\alpha\beta}^i\partial_{X_i^A}+\frac{1}{2}f_{ABC}X_i^B X_j^C \Gamma^{ij}_{\alpha\beta}-f_{ABC} Z^B\overline{Z}^C\delta_{\alpha\beta}\right\}\lambda_{\beta}^A\\ &+\sqrt{2}\left\{\delta_{\alpha\beta}\partial_{Z^A}+i f_{ABC}X_i^B \overline{Z}^C \Gamma^i_{\alpha\beta} \right\}\partial_{\lambda_{\beta}^A}\end{aligned}\] and \[ \begin{aligned}Q_{\alpha}^{\dagger}&=\left\{i\Gamma_{\alpha\beta}^i\partial_{X_i^A}+\frac{1}{2}f_{ABC}X_i^B X_j^C \Gamma^{ij}_{\alpha\beta}+f_{ABC} Z^B\overline{Z}^C\delta_{\alpha\beta}\right\}\partial_{\lambda_{\beta}^A}\\
&+\sqrt{2}\left\{-\delta_{\alpha\beta}\partial_{Z^A}+i f_{ABC}X_i^B \overline{Z}^C \Gamma^i_{\alpha\beta} \right\}\lambda_{\beta}^A.\end{aligned}\] The corresponding superalgebra satisfies \cite{dwhn} \[\begin{aligned}&\{Q_{\alpha},Q_{\beta}\}=2\sqrt{2}\delta_{\alpha\beta}\overline{Z}^A\varphi_A,\\ &\{Q_{\alpha}^{\dagger},Q_{\beta}^{\dagger}\}=2\sqrt{2}\delta_{\alpha\beta}Z^A\varphi_A,\\ &\{Q_{\alpha},Q^{\dagger}_{\beta}\}=2\delta_{\alpha\beta}H-2i\Gamma^i_{\alpha\beta}X_i^A\varphi_A.\end{aligned}\] These constraints must annihilate the physical states. The Hamiltonian $H$ is a positive operator which annihilates $\Psi$, on the physical subspace, if and only if $\Psi$ is a \emph{singlet} under supersymmetry\footnote{$\Psi_0$, the zero mode wave function, in distinction is a supermultiplet under supersymmetry.}. In such a case, $$Q_{\alpha}\Psi=0\quad \text{and}\quad Q_{\alpha}^{\dagger}\Psi=0.$$ The latter ensures that the wavefunction is massless; however, it does not guarantee that the ground-state wave function is the corresponding supermultiplet associated to supergravity. For this, $\Psi$ must also be a singlet under $SO(9)$. The spectrum of $H$ in $L^2(\mathbb{R}^n)$ is continuous \cite{dwln}, comprising the segment $[0,\infty)$. The supersymmetric structure above implies the following lemma; this is where property i) described above enters. The case of current interest is $\Sigma=\Omega$. \begin{lemma} \label{lemma2} Let $\Sigma\subset \mathbb{R}^n$ be a region with smooth boundary. If $u\in H_0^1(\Sigma)$ satisfies $Qu=Q^{\dag}u=0$ in $\Sigma$ then $u=0$ in $\overline{\Sigma}$. \end{lemma} \begin{proof} The argument is the same as in \cite{bgmrGS}. If $u$ satisfies $Qu=Q^{\dag}u=0$, then $u$ is analytic in $\Sigma$, since the potential is analytic in $x$. Hence $Qu=Q^{\dag}u=0$ also on the boundary $\partial\Sigma$. Then, using the explicit expressions of $Q$ and $Q^{\dag}$, we obtain that the normal derivative of $u$ on $\partial \Sigma$ is also zero.
We thus have $u=0$ and $\partial_n u=0$ simultaneously on $\partial\Sigma$. By virtue of the Cauchy-Kowalevski theorem with data on $\partial\Sigma$, $u=0$ in a neighborhood of $\partial\Sigma$. Since $u$ is analytic, we conclude that $u=0$ in $\overline{\Sigma}$. \end{proof} \section{Analysis of the Lebesgue measure of the bosonic valleys} We simplify the proof of our main result below by denoting the bosonic coordinates with $X^A_i$, for $i=1,\dots,\mathrm{D}$, where $A=1,2,3$ is the $SU(2)$ index. We will denote a vector of $3\times \mathrm{D}$ components by means of $\mathrm{D}$ vectors of $3$ components: $\vec{X}_i\in \mathbb{R}^3$, $i=1,\dots,\mathrm{D}.$ We denote with a single bar, $\vert\cdot\vert$, the Euclidean norm on any number of components. The bosonic potential reduces to \[ V_B=\frac{1}{2}\sum_{i,j=1}^{\mathrm{D}}\vert \vec{X}_i\wedge \vec{X}_j\vert^2. \] Below we repeatedly use the following property without further mention: if $R$ is any rotation of $\mathbb{R}^3$, then \[ V_B(\vec{X}_1,\ldots,\vec{X}_{\mathrm{D}})= V_B(R\vec{X}_1,\ldots,R\vec{X}_{\mathrm{D}}). \] For $a_0\geq 0$, let \[ \Omega_{a_0}=\{ (\vec{X}_1,\ldots,\vec{X}_{\mathrm{D}}): V_B<1, \vert (\vec{X}_1,\ldots,\vec{X}_{\mathrm{D}})\vert \geq a_0\}, \] so that $\Omega=\Omega_0$. We denote the (Lebesgue) measure of any of these sets by $\mu(\Omega_{a_0})$. \begin{lemma}\label{L2} For $\mathrm{D}\geq 5$, $\mu(\Omega)$ is finite and \[\lim_{a_0\to\infty} \mu(\Omega_{a_0})= 0.\] \end{lemma} \begin{proof} We first notice that we can exchange the orders of integration below, because $V_B$ is a polynomial in its components. Fix one direction $\vec{e}$ and consider the change of variables that rotates $\vec{X}_1$ to $a\vec{e}$.
Then \[ \mu(\Omega_{a_0})=4\pi \int_{a_0}^\infty a^2 \mathrm{d}a \int_{\hat{\Omega}_{a}} \prod_{j=2}^{\mathrm{D}}\mathrm{d} x_{j}^1 \mathrm{d} x_{j}^2 \mathrm{d} x_{j}^3 \] where \[ \hat{\Omega}_{a}=\{(a\vec{e},\vec{X}_2,\ldots,\vec{X}_{\mathrm{D}}): V_B<1, \vert (a\vec{e},\vec{X}_2,\ldots,\vec{X}_{\mathrm{D}})\vert \geq a_0\}. \] Considering \[\vec{X}_1= a\vec{e},\quad \vec{X}_i=b_i\vec{e}+X_i^{\perp} \] where $\vec{e}\cdot X_i^{\perp}=0,\, i=2,\dots,\mathrm{D}$, we have \[ \vert \vec{X}_i\wedge\vec{X}_j\vert^2 =\vert b_i{X}_j^{\perp}- b_j{X}_i^{\perp}\vert^2 +\vert {X}_i^{\perp}\wedge {X}_j^{\perp}\vert^2. \] Now write $X_i^{\perp}=c_i\vec{e}_2+d_i\vec{e}_3$ with $\vec{e}_2\cdot\vec{e}_3=0$ and $\vert\vec{e}_2\vert=\vert\vec{e}_3\vert=1$. Substituting in $V_B$ we get \begin{equation} \label{potBCD} \begin{aligned}V_B= a^2 (\vert C\vert^2+& \vert D\vert^2)+ \\ & \vert B\vert^2\vert C\vert^2-(B\cdot C)^2+ \\ & \vert B\vert^2\vert D\vert^2-(B\cdot D)^2+ \\ & \vert C\vert^2\vert D\vert^2-(C\cdot D)^2 \end{aligned} \end{equation} where we have denoted by $B;C;D$ the points in $\mathbb{R}^{\mathrm{D}-1}$ with components \[ B=\begin{pmatrix} b_2\\b_3\\ \vdots \\ b_{\mathrm{D}} \end{pmatrix};\quad C=\begin{pmatrix} c_2\\c_3\\ \vdots \\ c_{\mathrm{D}} \end{pmatrix};\quad D=\begin{pmatrix} d_2\\d_3\\ \vdots \\ d_{\mathrm{D}} \end{pmatrix}. \] Then \begin{equation} \label{lebeme} \mu(\Omega_{a_0})=4\pi\int_{a_0}^\infty a^2 \mathrm{d}a \int_{\tilde{\Omega}_a} \prod_{i=2}^{\mathrm{D}}\mathrm{d}b_i\prod_{j=2}^{\mathrm{D}}\mathrm{d}c_j\prod_{k=2}^{\mathrm{D}}\mathrm{d}d_k \end{equation} where \[ \tilde{\Omega}_a=\{(B;C;D):\text{RHS of }\eqref{potBCD}<1,\, a^2+|B|^2+|C|^2+|D|^2\geq a_0^2\}. \] In order to estimate the integrals in \eqref{lebeme} we change variables to \[ C=\alpha \frac{B}{|B|} + C_{\perp}, \qquad D=\beta \frac{B}{|B|} + D_{\perp}, \] where $B\cdot C_{\perp}=B\cdot D_{\perp}=0$, so that $\alpha=C\cdot \frac{B}{|B|}$ and $\beta = D\cdot \frac{B}{|B|}$.
In these variables the potential becomes \begin{align*} V_B&=a^2(|C|^2+|D|^2)+|B|^2(|C_{\perp}|^2+|D_\perp|^2)+|C|^2|D|^2-(C\cdot D)^2 \\ &\geq (a^2+|B|^2)(|C_\perp|^2+|D_{\perp}|^2)+a^2(\alpha^2+\beta^2). \end{align*} For $a$ and $B$ fixed, the region \[ E_{a,B}=\{(\alpha,C_{\perp},\beta,D_{\perp}):(a^2+|B|^2)(|C_\perp|^2+|D_{\perp}|^2)+a^2(\alpha^2+\beta^2)<1\} \] is an ellipsoid which contains \[ \{(\alpha,C_{\perp},\beta,D_{\perp}):V_B<1\}. \] Then \begin{align*} \mu(\Omega_{a_0})& \leq k_1(\mathrm{D})\int_{a_0}^\infty a^2 \mathrm{d}a \int_{B\in \mathbb{R}^{\mathrm{D}-1}} \mu(E_{a,B}) \prod_{i=2}^{\mathrm{D}}\mathrm{d}b_i \\ &= k_2(\mathrm{D}) \int_{a_0}^\infty \mathrm{d}a \int_{B\in \mathbb{R}^{\mathrm{D}-1}} \frac{\prod_{i=2}^{\mathrm{D}}\mathrm{d}b_i}{(a^2+|B|^2)^{\mathrm{D}-2}} \\ &= k_3(\mathrm{D}) \int_{a_0}^\infty \int_{0}^{\infty} \frac{u^{\mathrm{D}-2}\,\mathrm{d}u\,\mathrm{d}a}{(a^2+u^2)^{\mathrm{D}-2}} \\ &= k_4(\mathrm{D}) \int_{a_0}^\infty \frac{\mathrm{d}a}{a^{\mathrm{D}-3}} \end{align*} where the $k_l(\mathrm{D})$ are constants. Finally, notice that the right hand side is finite for all $a_0> 0$ and decreases to $0$ as $a_0\to\infty$, whenever $\mathrm{D}\geq 5$. \end{proof} We denote by $H^p (\Omega)$ and $H^p_0(\Omega)$, respectively, the Sobolev spaces $H^{p,2}(\Omega)$ and $\mathring{W}^{p,2}(\Omega)$ in the notation of \cite{berger-schechter}. A crucial observation here is the fact that these spaces are amenable to patching inner and outer domains in the solution of Dirichlet problems. We recall that $H^p(\Omega)$ is the Hilbert space arising from restricting to $\Omega$ functions in the Sobolev space $H^p(\mathbb{R}^n)$, the norm being the infimum of the Sobolev norm over all possible extensions. We also recall that $H^p_0(\Omega)$ is the completion with respect to this norm of all smooth functions with compact support in $\Omega$.
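The last step in the proof of Lemma~\ref{L2} rests on the scaling $\int_0^\infty u^{\mathrm{D}-2}(a^2+u^2)^{-(\mathrm{D}-2)}\,\mathrm{d}u \propto a^{3-\mathrm{D}}$. The following Python sketch is a numerical sanity check of this scaling (it is not part of the proof); for the illustrative choice $\mathrm{D}=5$, the substitution $u=at$ gives the exact constant $\int_0^\infty t^3(1+t^2)^{-3}\,\mathrm{d}t=1/4$.

```python
# Numerical check of: int_0^inf u^(D-2) / (a^2 + u^2)^(D-2) du = c(D) * a^(3-D).
# For D = 5 the substitution u = a t gives c(5) = int_0^inf t^3/(1+t^2)^3 dt = 1/4.

def inner_integral(a, D=5, u_max_factor=200.0, n=200_000):
    """Trapezoidal estimate of the inner u-integral for a given a."""
    u_max = u_max_factor * a          # integrand tail falls off like u^(2-D)
    h = u_max / n
    total = 0.0
    for i in range(n + 1):
        u = i * h
        f = u**(D - 2) / (a * a + u * u)**(D - 2)
        total += f if 0 < i < n else 0.5 * f   # half-weight at the endpoints
    return total * h

for a in (1.0, 2.0, 4.0):
    print(a, inner_integral(a), 0.25 / a**2)   # estimate vs. c(5) * a^(3-5)
```

The printed estimates reproduce $0.25/a^2$ to the accuracy of the quadrature, confirming the $a^{3-\mathrm{D}}$ decay that makes the final $a$-integral converge for $\mathrm{D}\geq 5$.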
By combining Lemma~\ref{L2} with Theorem~2.8 of \cite{berger-schechter}, we immediately obtain the remarkable property that both $H^p (\Omega)$ and $H^p_0(\Omega)$ are compactly embedded into $L^2(\Omega)$ for $\mathrm{D}\geq 5$. \section{Bounds for the fermionic potential} In order to prove the existence and uniqueness of the solution to the outer Dirichlet problem we need a bound on the contribution of the mean value of the fermionic potential $(u,V_Fu)_{L^2(\Omega)}$. We notice that the fermionic potential is linear in the bosonic coordinates. Then $$\vert(u,V_F u)_{L^2(\Omega)}\vert\le C(u,\rho u)_{L^2 (\Omega)}$$ for some $C>0$, where $\rho^2 =\vert x\vert^2\equiv a^2+\vert B\vert^2+\vert C\vert^2+\vert D\vert^2 $. From Lemma~\ref{L2}, it is possible to derive the following bound, $$\vert(u,V_F u)_{L^2(\Omega)}\vert\le C \left\|u\right\|^2_{L^{\infty}(\Omega)}\int_{\Omega}\rho,$$ provided that \begin{equation} \label{intcond} \int_{\Omega}\rho<\infty. \end{equation} In turn, we have the following result, which follows from arguments similar to those of Lemma~\ref{L2}. \begin{lemma} If $\mathrm{D}>5$, then $\int_{\Omega}\rho^2<\infty$. \end{lemma} As a corollary of this lemma, since $\mu(\Omega)$ is finite, the Cauchy-Schwarz inequality shows that \eqref{intcond} also holds true for $\mathrm{D}>5$. We hope to discuss a sharp bound of the form $(u,V_Fu)_{L^2 (\Omega)}\le k\left\|u\right\|^2 _{H^1(\Omega)}$ in future work. \section{Conclusions} We have shown that the volume of the valleys, the set $\Omega$, is finite when the number of transverse dimensions of the target space on which the supermembrane theory is formulated is greater than or equal to five. This includes the important 7- and 11-dimensional supermembranes. Using a framework due to Berger and Schechter, we have shown that on $\Omega$ the embeddings of $H^1(\Omega)$ and $H^1_0(\Omega)$ into $L^2(\Omega)$ are compact.
Notice that this property is not related to the zero-point energy of the bosonic membrane, which is the main observation in the argument showing that the bosonic membrane has a discrete spectrum; indeed, the present result does not depend on the target-space dimension. Furthermore, we have presented supporting evidence for bounds on the mean value of the fermionic potential. We have thus established properties i) and iii) proposed in the introduction, and we claim that it is possible to obtain an appropriate bound for the mean value of the fermionic potential on any wave function in $H_0^1(\Omega)$. The complete proof of the three statements would establish the existence and uniqueness of the solution of the outer Dirichlet problem on the valleys of the bosonic potential. \section{Acknowledgements} AR and MPGM were partially supported by Projects Fondecyt 1161192 (Chile). LB kindly acknowledges support from MINEDUC-UA project code ANT1755. This research was initiated during a visit by this author to the Universidad de Antofagasta in April~2018 and completed while he was on a study leave at the {\v C}esk{\'e} Vysok{\'e} U{\v c}en{\'i} Technick{\'e} v Praze in November~2018.
\section{Introduction} Recently, there has been considerable interest in field theories in which Lorentz invariance is explicitly violated by terms containing higher order spatial derivatives. The presence of these terms leads to softer ultraviolet behaviour while preserving unitarity (which is typically lost in the presence of Lorentz invariant higher derivative terms because of ghosts associated with higher time derivatives). At very high energies, these Lorentz invariance violating terms dominate, leading to Lifshitz-like anisotropic scaling symmetry (in the classical theory) in which time and space scale differently: $x \to x/a, t \to t/a^z$. The exponent $z$ characterizes the scaling symmetry. Lifshitz-like field theories with anisotropic scaling have been used in condensed matter systems to describe quantum criticality \cite{Hornreich:1975zz}-\cite{Son:2007ja}. Recently they have also been discussed in string theory in the context of possible applications of AdS/CFT duality \cite{Son:2008ye}-\cite{Hartnoll:2009sz} to condensed matter systems involving strongly interacting constituents. In a separate development, the idea that a relativistic theory at low energies may have a Lorentz non-invariant ultraviolet completion was suggested in \cite{Horava:2008jf}. This idea has been further explored in \cite{Visser:2009fg}-\cite{Chao:2009dw}. The suggestion that an ultraviolet completion of quantum gravity may be similarly formulated \cite{Horava:2009uw} has serious difficulties \cite{Charmousis:2009tc}-\cite{Papazoglou:2009fj} because gravity elevates Lorentz invariance to a local gauge symmetry, which cannot be broken except by some kind of Higgs mechanism. In the present work we will focus only on non-gravitational theories. 
Lifshitz-like field theories with Lorentz invariance violations (LIV) have also recently been discussed in the context of applications to particle physics \cite{Anselmi:2008bt,Anselmi:2009vz,Dhar:2009dx,Chao:2009dw,Kawamura:2009re,Kaneta:2009ci}. In \cite{Anselmi:2009vz,Dhar:2009dx} it was argued that a $z=3$ Lifshitz-like ultraviolet completion of the Nambu$-$Jona-Lasinio (NJL) model \cite{Nambu:1961tp} in $3+1$ space-time dimensions has the required properties to replace the Higgs sector of the electro-weak theory. The four-fermion coupling in this model is asymptotically free, leading to dynamical mass generation for the fermions and chiral symmetry breaking. In an appropriately gauged version, fluctuations of the magnitude of the fermion bilinear order parameter\footnote{The phase of the fermion bilinear is the Goldstone mode which combines with the gauge field as usual to make it massive.} can be interpreted as the Higgs field. This model obviates the need for fine tuning at the expense of introducing LIV at high energies. The hope is that for a sufficiently large LIV scale, the low-energy theory could be consistent with experimental constraints on LIV\footnote{For current situation on experimental searches of Lorentz symmetry violations, see \cite{Liberati:2009pf}-\cite{Scully:2008jp}.}. This may, however, require a new fine tuning of parameters \cite{Iengo:2009ix}. The main purpose of this work is to analyze renormalization group (RG) flows in the model of \cite{Dhar:2009dx} to understand in detail the possible emergence of Lorentz invariance at low energies. The analysis has been performed in the leading large-$N$ approximation. Our findings can be summarized as follows. \medskip \noindent $\bullet$ At low energies, in the fermionic sector, the theory recovers approximate Lorentz invariance, violations being of order $E^2/\mu^2$, where $\mu$ is the energy scale associated with LIV. 
However, in the bosonic sector, in the broken chiral symmetry phase, the induced kinetic terms violate Lorentz invariance at ${\cal{O}}(1)$ level. The origin of these violations is simple to understand $-$ they arise from fermion modes with energies higher than $\mu$ propagating in a loop. These violations can be made small by imposing an effective cut-off on the theory and arranging $\mu$ to be much larger than the cut-off (see the last paragraph of Section \ref{sec-low}). \medskip \noindent $\bullet$ The RG flows reveal a new nontrivial fixed point, apart from the $z=3$ ultraviolet fixed point. The theory flows down from high energies to this fixed point at low energies. The above mentioned approximate Lorentz invariance in the fermionic sector and ${\cal{O}}(1)$ LIV in the bosonic sector are characteristic of this new fixed point. If one works with a fixed finite cut-off $\L$, and sends the LIV scale $\mu$ much above $\L$, the Lorentz violations become smaller and smaller, leading to an approximately Lorentz invariant theory which is identical to an effective theory derived from the NJL model at low energies, with a cut-off $\L$. \medskip The plan of the paper is as follows. In section 2 we deform the z=3 action with all possible marginal and relevant couplings (from the point of view of $z=3$ scaling) and study their effect on the vacuum solution. We find that only one of the three possible four-fermion (marginal) couplings has a nontrivial flow. Moreover, there is only one relevant coupling that affects the low energy physics, namely the coupling that determines the scale of LIV. This had already been remarked in \cite{Dhar:2009dx}, but section 2 provides a detailed justification for it. In section 3, we derive the effective action at a scale much smaller than the scale of LIV. 
We find that at low energies, in the broken symmetry phase, the kinetic terms induced by fermion loops for the massive scalar bound state of the fermions violate Lorentz invariance at ${\cal{O}}(1)$ level, which cannot be corrected by any fine-tuning of parameters. In section 4 we study the RG flow of the couplings and locate fixed points. We compare this fixed point structure with that in the relativistic NJL model. We end with some concluding remarks in section 5. Details of some of the calculations have been given in three Appendices. \section{The action and vacuum solutions} The model discussed in \cite{Dhar:2009dx} consists of 2N species of fermions, $\psi_{ai}(t,\vec x)$, where the index $a$ runs over the values $1,~2$ and the index $i$ runs from 1 to $N$. Each of these fermions is an $SU(2)$ spinor, where $SU(2)$ is the double cover of the spatial rotation group $SO(3)$. It is useful to view the index $a$ as denoting the two Weyl components of a Dirac fermion in a four dimensional theory with Lorentz invariance. The action we consider will have the following symmetries: a global $U(1)_1 \times U(1)_2$ symmetry\footnote{This is the analogue of chiral symmetry in the corresponding Lorentz invariant four fermi model.} under which the fermions transform as \bea \psi_{ai} \to e^{i\alpha_a} \psi_{ai},\hspace{0.5cm} a=1,2; \label{chiral} \eea and a global $U(N)$ symmetry under which the fermions transform in the fundamental representation \bea \psi_{ai} \to U_{ij}\psi_{aj}, \hspace{0.5cm} i=1,...,N. \eea In addition to these symmetries, we will ensure that the action is invariant under the interchange $\psi_{1i}(t, \vec x) \to \psi_{2i}(t,-\vec x) $. This is the analogue of the parity operation in the relativistic Dirac theory. A general action which is consistent with the above symmetries and contains all the relevant and marginal couplings is given by\footnote{All other possible four-fermion terms, like e.g. 
$(\psi_{1i}^\dagger \psi_{2j})(\psi_{2j}^\dagger \psi_{1i})$, can be related to the terms in \eq{relevant} and/or terms involving vector bilinears like $(\psi_{1i}^\dagger \vec{\sigma} \psi_{1i}).(\psi_{2j}^\dagger \vec{\sigma}\psi_{2j})$. The latter do not affect vacuum solutions since the vevs of such bilinears vanish because of invariance under spatial rotations.} \bea \!\!\!S =\!\! \int\!\!\mm d^3\vec x\ dt \left[ {\psi^\dagger}_{\!1i} \left\{i \del_t - i \vec \del. \vec \s\ \left(g_0(-i\vec{\del})^2+ g_1\right) - g_2 \vec{\del}^2 \right\} \psi_{1i} \right. \nn \mm + {\psi^\dagger}_{\!2i} \left\{i \del_t + i \vec \del. \vec \s\ \left(g_0(-i\vec{\del})^2+ g_1\right) - g_2 \vec{\del}^2 \right\} \psi_{2i} + g_3 \left( {\psi^\dagger}_{\!1i} \psi_{1i} + {\psi^\dagger}_{\!2i} \psi_{2i} \right) \nn \mm \left. + g_4^2 \left\{\left({\psi^\dagger}_{\!\!1i}\psi_{1i}\right)^2 + \left({\psi^\dagger}_{\!\!2i} \psi_{2i}\right)^2\right\} + g_5^2 \left( {\psi^\dagger}_{\!\!1i}\psi_{1i} {\psi^\dagger}_{2j}\psi_{2j} \right) + g^2 \left( {\psi^\dagger}_{\!\!1i}\psi_{2i} {\psi^\dagger}_{2j}\psi_{1j} \right) \right]. \nn \label{relevant} \eea At high energies, this action has the scale invariance of a $z=3$ Lifshitz-like theory in which space and time have the dimensions $[x]=-1$ and $[t]=-3$. At low energies, the relevant term with coupling $g_1$ dominates, so one expects the model to flow down to an approximately Lorentz invariant theory at low energies. We will study the dynamics of this action in the large $N$ limit. It is useful to employ the notation of Dirac matrices and rewrite the action \eq{relevant} in terms of the four-component spinor \bea \Psi_i = \left(\begin{array}{c} \psi_{1i} \\ \psi_{2i} \end{array} \right) \label{dirac1} \eea and its Weyl components $P_{L,R}\Psi_i={\Psi_i}_{L,R}=\psi_{1,2i}$. 
We will also use the Dirac gamma matrices \bea \gamma^0 = \s_1 \otimes {\bf 1}, \quad \g^i = i\s_2 \otimes \s_i, \quad \g^5 =i\g^0 \g^1 \g^2 \g^3 = \s_3 \otimes {\bf 1}, \label{dirac2} \eea where the $\sigma$'s are standard Pauli matrices satisfying $[\sigma_i,\sigma_j]=2i\epsilon_{ijk}\sigma_k$ and ${\bf 1}$ is the $2\times 2$ identity matrix. In terms of these matrices, the Weyl projection operators are given by $P_{L,R}=\frac{1}{2}(1 \pm \gamma^5)$. In this notation, the above action takes the form \begin{eqnarray} S=\int \mm d^3\vec x\ dt \biggl[ \bar{\Psi}_i \biggl\{\gamma^0 \left(i\partial_0-g_2 \vec{\partial}^2+g_3\right) +i \vec{\gamma}.\vec{\partial} \left(g_1-g_0\vec{\partial}^2\right) \biggr\} \Psi_i \nonumber\\ \mm +g_4^2 \biggl\{(\bar{\Psi}_{Li} \gamma^0 \Psi_{Li})^2+ (\bar{\Psi}_{Ri} \gamma^0 \Psi_{Ri})^2\biggr\}+ g_5^2(\bar{\Psi}_{Li} \gamma^0 \Psi_{Li})(\bar{\Psi}_{Rj} \gamma^0 \Psi_{Rj}) \nonumber\\ \mm +g^2 (\bar{\Psi}_{Li}\Psi_{Rj})(\bar{\Psi}_{Rj}\Psi_{Li}) \biggr]. \label{actiondirac} \end{eqnarray} As is usual for actions with four-fermion interactions, we now introduce auxiliary scalar fields to rewrite the above in the completely equivalent form of an action quadratic in fermions: \bea S=\int \mm d^3\vec x\ dt \biggl[ \bar{\Psi}_i \biggl\{\gamma^0 \left(i\partial_0-g_2 \vec{\partial}^2+g_3\right) +i \vec{\gamma}.\vec{\partial} \left(g_1-g_0\vec{\partial}^2\right) +\left(\phi P_L +\phi^{\ast} P_R\right)\nonumber\\ \mm+\gamma^0 \left(\alpha P_L+\beta P_R\right)\biggr\} \Psi_i -\frac{\rho^2+\rho\eta}{g_4^2}-\frac{g_4^2\eta^2}{g_5^2} -\frac{|\phi|^2}{g^2}\biggr]. \label{actionboson} \eea Here $\rho$ and $\eta$ are real scalar fields and $\phi$ is a complex scalar field. We have also defined \bea 2\rho+\eta \equiv \alpha, \qquad \frac{g_5^2}{g_4^2}\rho +2\frac{g_4^2}{g_5^2}\eta \equiv \beta.
\label{scalars} \eea The scalars are fermion bilinear composites, as can be easily derived from their equations of motion: \bea \rho=g_4^2(\bar{\Psi}_{Li} \gamma^0 \Psi_{Li}), \qquad \eta=g_5^2(\bar{\Psi}_{Ri} \gamma^0 \Psi_{Ri}), \qquad \phi=g^2(\bar{\Psi}_{Li} \Psi_{Ri}). \label{bilinears} \eea \subsection{Vacuum solutions} Since the action \eq{actionboson} is quadratic in fermions, one can integrate these out to get an effective action for the scalar fields. For vacuum solutions, it is sufficient to consider only the homogeneous modes, $\rho_0,~\eta_0$ and $\phi_0$, of the scalar fields. Moreover, without loss of generality, one may take $\phi_0$ to be real. In this case, the effective action for the scalars is given by \begin{eqnarray} S_{eff}=-i \mm N \ {\rm Tr} \ln \biggl\{l_0\gamma^0-\vec{l}.\vec{\gamma} +\phi_0+\gamma^0(\alpha_0 P_L+\beta_0 P_R)\biggr\}\nonumber\\ \mm-\left(\frac{\rho_0^2+\rho_0\eta_0}{g_4^2}+\frac{g_4^2\eta_0^2}{g_5^2} +\frac{\phi_0^2}{g^2}\right) V, \label{effective action} \end{eqnarray} where $V$ denotes the volume of space-time. Also, in the momentum space representation, we have \begin{eqnarray} l_0(k_0,\vec{k})=k_0+g_2 k^2+g_3, \qquad \vec{l}(\vec{k})=\vec{k}(g_0k^2+g_1), \qquad k=|\vec{k}|. \label{l} \end{eqnarray} The equations of motion are obtained by varying this action with respect to $\rho_0,~\eta_0$ and $\phi_0$. The calculation can be simplified by noticing that due to the ``parity'' symmetry $\Psi_{Li}(t, \vec x) \to \Psi_{Ri}(t,-\vec x)$, which we assume is unbroken, and the relations \eq{bilinears}, the vacuum solutions $\rho_0$ and $\eta_0$ are related, i.e. \bea \rho_0/g_4^2=\eta_0/g_5^2. \label{vacuumrel} \eea Thus, there are really only two independent variables, $\rho_0$ and $\phi_0$ . 
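Before solving these equations, it is worth recording the elementary energy integral on which the cut-off computation below is based, \[ \int_{-M^3}^{M^3} \mathrm{d}k_0\, \frac{k_0-g}{(k_0-g)^2+m^2} = \frac{1}{2}\ln\frac{m^2+(M^3-g)^2}{m^2+(M^3+g)^2}, \] which, with $m^2=l^2+\phi_0^2$, is the identity behind \eq{rhointegral} below. The following Python sketch is a quick numerical check (the variable $L$ stands in for $M^3$; all parameter values are illustrative, not taken from the model):

```python
# Check: int_{-L}^{L} (k0 - g) / ((k0 - g)^2 + m^2) dk0
#      = 1/2 * ln[(m^2 + (L - g)^2) / (m^2 + (L + g)^2)],
# where L plays the role of M^3 and g, m are illustrative shorthands.
import math

def numeric(L, g, m, n=200_000):
    """Trapezoidal estimate of the left-hand side."""
    h = 2.0 * L / n
    s = 0.0
    for i in range(n + 1):
        k0 = -L + i * h
        f = (k0 - g) / ((k0 - g)**2 + m * m)
        s += f if 0 < i < n else 0.5 * f   # half-weight at the endpoints
    return s * h

def closed_form(L, g, m):
    """Right-hand side of the identity."""
    return 0.5 * math.log((m * m + (L - g)**2) / (m * m + (L + g)**2))

print(numeric(10.0, 1.3, 0.7), closed_form(10.0, 1.3, 0.7))
```

For $g=0$ the integrand is odd and both sides vanish, which the sketch also reproduces numerically.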
Using this relation after varying \eq{effective action} with respect to $\rho_0,~\eta_0$ and $\phi_0$, the two independent equations of motion can be written as \bea &&\frac{\rho_0}{\lambda_4}=-2i \int \frac{d^4k}{(2\pi)^4} \frac{(l_0+\alpha_0)}{(l_0+\alpha_0)^2-\vec{l}^2-\phi_0^2} \label{rho},\\ &&\frac{1}{\lambda}=2i\int \frac{d^4k}{(2\pi)^4} \frac{1}{(l_0+\alpha_0)^2-\vec{l}^2-\phi_0^2}.\label{phi} \eea where we have defined the 't Hooft couplings, \bea g_{4,5}^2N \equiv \lambda_{4,5}, \qquad g^2N \equiv \lambda, \label{thooft} \eea and made use of the relation \bea \alpha_0=\beta_0=\left(2+\frac{\lambda_5}{\lambda_4}\right)\rho_0, \label{alphabeta} \eea which follows from equations \eq{scalars} and \eq{vacuumrel}. The equations \eq{rho} and \eq{phi} are exact in the usual large N limit in which the 't Hooft couplings are held fixed. Let us first consider equation \eq{rho}. The momentum integral on the right hand side has a power divergence, so the dependence on $\alpha_0$ cannot be removed by shifting $k_0$. To do the integral, we need to introduce a regulator. We will use a simple cut-off regulator\footnote{Note that the regulator used here explicitly breaks Lorentz invariance. This does not affect vacuum solutions. However, in sections 3 and 4, where we consider the question of restoration of Lorentz invariance at low energies, it will be important to choose a suitably different regulator that allows for such a possibility.}. 
Continuing to Euclidean momenta, we may then write \bea \tilde{\rho}_0 &=& \frac{\lambda_4}{2\pi^3} \int_{0}^{M} dk \ k^2 \int_{-M^3}^{M^3} dk_0 \ \frac{k_0-g(k)}{\left(k_0-g(k)\right)^2+l^{2}+\phi_0^2} \nonumber \\ &=& \frac{\lambda_4}{4\pi^3}\int_{0}^{M} dk \ k^2 \ \ln \left \{\frac{l^{2}+\phi_0^2+\left(M^3-g(k)\right)^2} {l^{2}+\phi_0^2+\left(M^3+g(k)\right)^2}\right\}, \label{rhointegral} \eea where $l=|\vec{l}|=k|g_0k^2+g_1|$, $g(k)=(\tilde{g}_2k^2+\tilde{g}_3+\tilde{\alpha_0})$ and $\tilde{\rho}_0=i\rho_0$, $\tilde{g}_2=i g_2$ and $\tilde{g}_3=i g_3$ are Euclidean continuations of respectively $\rho_0$, $g_2$ and $g_3$. In the limit of large $M$, the integral in \eq{rhointegral} can be done by expanding both the numerator and denominator of the argument of the `ln' function around $(M^6+g_0^2k^6)$. Discarding terms that vanish as $M \rightarrow \infty$, we get \begin{eqnarray} \rho_0= \frac{\lambda_4}{\pi^3} \biggl\{-M^2 g_2 I_0-(g_3+\alpha_0) I_1 +\left(2 g_0 g_1-g_2^2\right)g_2 I_2 +\frac{4}{3} g_2^3 I_3\biggr\}, \label{rho-0} \end{eqnarray} where the coefficients $I_0,~I_1,~I_2$ and $I_3$ are functions of $g_0$ only and are listed in Appendix 1. In writing the above, we have continued the various parameters and fields back to Minkowski signature. Now, using relation \eq{alphabeta}, the above equation can be solved for $\rho_0$: \begin{eqnarray} && \rho_0 \left\{1 + \left(2\lambda_4+\lambda_5 \right)\frac{I_1}{\pi^3} \right\} \nonumber \\ && =\frac{\lambda_4}{\pi^3}\left\{-(g_3+M^2 g_2I_0/I_1)I_1 +\left(2 g_0 g_1-g_2^2\right)g_2 I_2+\frac{4}{3} g_2^3 I_3 \right\}. \label{rhosolution} \end{eqnarray} Notice that $\rho_0$ vanishes if $g_2=g_3=0$. If $g_2=0$ but $g_3 \neq 0$, then $\rho_0$ is nonzero, but we may take all the couplings appearing in the solution as fixed, independent of the cut-off. In case $g_2 \neq 0$, then the right hand side of \eq{rhosolution} diverges quadratically with the cut-off. 
However, this divergence can be removed by shifting $g_3$, i.e. we set $g_3=(g_3'-M^2 g_2I_0/I_1)$, where $g_3'$ is independent of the cut-off $M$. This determines $\rho_0$ in terms of the $M$-independent couplings $g_2$, $g_3'$, $g_4$, etc. The other vacuum parameter, $\phi_0$, is determined by equation \eq{phi}. In this case, the integral over $k_0$ on the right hand side is convergent and the entire integral is only logarithmically divergent. So a shift of the integration variable $k_0$ is allowed. Doing this enables us to get rid of the couplings $g_2,~g_3,~g_4,~g_5$ and $\rho_0$ from this gap equation. After a Euclidean continuation ($k_0=ik_4$) it then takes the form \bea \frac{1}{\lambda}=2\int \frac{d^4k}{(2\pi)^4} \frac{1}{(k_4^2+l^2(k)+\phi_0^2)}. \label{gapeqn} \eea As discussed in \cite{Dhar:2009dx}, this equation signals the breaking of the global $U(1)_1 \times U(1)_2$ ``chiral'' symmetry \eq{chiral} down to $U(1)$. It also requires that the coupling $\lambda$ have a nontrivial RG flow, which depends on the couplings $g_0$ and $g_1$. We will analyze this flow in detail in section 4. We end this section with the following comment. It should be clear from the above discussion that in the present model all the interesting dynamics arises from the NJL type of four fermi interaction (conjugate to the 't Hooft coupling $\lambda$) and its RG flow is decoupled from the dynamics of all the other marginal couplings. Therefore, for calculational simplicity, in the rest of the paper we will set the marginal couplings $g_4$ and $g_5$ to zero. We will also set $g_2=g_3=0$ since these couplings also do not affect the RG flow of $\lambda$. \section{\label{sec-low} Low energy effective action} Setting $g_2=g_3=g_4=g_5=0$ in \eq{actiondirac} as discussed above, and going over to the bosonic variable $\phi(x)$ as in \eq{actionboson}, we get the action \bea S= \int d^4x \biggl[\mm\bar{\Psi}_i\biggl\{i\gamma^0\partial_0+i\vec{\gamma}. 
\vec{\partial}\left(g_1-g_{0} \vec{\partial}^2\right)+(\phi P_L +\phi^* P_R) \biggr\}\Psi_i-\frac{N}{\lambda}|\phi|^2\biggr]. \nonumber \\ \label{simpleaction} \eea We are interested in the low energy effective action for this system in the phase in which the symmetry \eq{chiral} is broken. To derive the low energy action, we substitute in the above $\phi(x)=(\phi_0+\frac{\sigma(x)}{\sqrt{N}}) e^{i\frac{\theta(x)}{\sqrt{N}}}$, where $\phi_0$ is the vacuum solution and $\sigma(x)$ and $\theta(x)$ are respectively the magnitude and phase of the fluctuation of $\phi(x)$ around the vacuum. We get, \bea S= \int d^4x \biggl[\mm\bar{\Psi}_i\biggl\{i\gamma^0\partial_0+i\vec{\gamma}. \vec{\partial}\left(g_1-g_{0} \vec{\partial}^2\right)+\biggl(\phi_0+ \frac{\sigma}{\sqrt{N}}\biggr) \biggl(P_L e^{i\frac{\theta}{\sqrt{N}}}+ P_R e^{-i\frac{\theta}{\sqrt{N}}}\biggr) \biggr\}\Psi_i \nonumber\\ \mm -\frac{1}{\lambda} \sigma^2-\frac{2\sqrt{N}} {\lambda}\phi_0 \sigma-\frac{N}{\lambda}\phi_0^2\biggr]. \label{simpleraction} \eea The phase $\theta$ is the Goldstone mode\footnote{If the $U(1)_1 \times U(1)_2$ symmetry is gauged (in such a manner that there are no anomalies), then this mode will be eaten up by the gauge fields by the usual Higgs mechanism.}. It can be ``rotated'' away from the Yukawa-type coupling by making the change of variables $\Psi_{iL} \rightarrow e^{-i\frac{\theta}{2\sqrt{N}}} \Psi_{iL}$ and $\Psi_{iR} \rightarrow e^{i\frac{\theta}{2\sqrt{N}}} \Psi_{iR}$. Its derivatives will, however, now appear from the fermion kinetic terms. After this change of variables, the action takes the form \bea S= \int d^4x \biggl[\mm\bar{\Psi}_i\biggl\{i\gamma^0\partial_0+i\vec{\gamma}. 
\vec{\partial}\left(g_1 -g_{0} \vec{\partial}^2\right)+\phi_0+ \frac{\sigma}{\sqrt{N}} \biggr\}\Psi_i+{\cal O}(\bar{\Psi} \del\theta \Psi) \nonumber \\ \mm -\frac{1}{\lambda} \sigma^2 -\frac{2\sqrt{N}} {\lambda}\phi_0 \sigma-\frac{N}{\lambda}\phi_0^2\biggr], \label{simplestaction} \eea where ${\cal O}(\bar{\Psi} \del \theta \Psi)$ indicates terms involving derivatives of the Goldstone mode $\theta$. In the following we will ignore these terms and retain only the mode $\sigma$. There is no basic difficulty in retaining $\theta$ also, but this is not essential for describing the RG flow of $\lambda$ and LIV properties of the low energy action\footnote{There are no terms in the low energy action which mix $\theta$ and $\sigma$ at the quadratic level. Moreover, the induced kinetic terms for both the fields are finite in the limit of infinite UV cut-off. Only the coefficient of the $\sigma^2$ term diverges and needs to be renormalized. Hence, for simplicity, in the following we have set $\theta$ to a constant.}. To proceed further, it is convenient to rewrite the action \eq{simplestaction} in momentum space. We have \begin{eqnarray} S = \mm\int \frac{d^4k}{(2\pi)^4} \int \frac{d^4q}{(2\pi)^4} \bar\Psi_i(k) \biggl\{\left(\gamma^0k_0-\vec{\gamma}.\vec{l}(\vec{k})+\phi_0\right)(2\pi)^4 \delta^4(k-q) +\frac{1}{\sqrt{N}}\sigma(k-q)\biggr\}\Psi_i(q)\nonumber\\ \mm -\int \frac{d^4k}{(2\pi)^4} \frac{1}{\lambda} \ |\sigma(k)|^2 -\frac{2\sqrt{N}}{\lambda}\phi_0 \sigma(k=0)-\frac{VN}{\lambda}\phi_0^2, \label{momaction} \end{eqnarray} where $V$ denotes the volume of space-time and $\vec{l}(\vec{k})$ is given by \eq{l}. As discussed in the next section, although this action depends on the two parameters $g_0$ and $g_1$, the physics it describes actually depends only on a particular combination of these, namely \bea \mu=|g_1^3/g_0|^{\frac{1}{2}}. \label{masslv} \eea $\mu$ has the dimensions of energy.
As discussed below, its significance is that it is the energy scale at which Lorentz violations become important. The action \eq{momaction} can, in fact, be explicitly written in terms of this combination. However, two different versions of the action are needed to cover the entire line of real values of $\mu$, from zero to infinity. In the form \eq{momaction}, the action is valid for all values, but at the expense of having a redundant variable. We will discuss this issue further in the next section after we have obtained the low energy action. The action at low energies is obtained by integrating out high energy modes from this action. To carry out this procedure, we will need to regularize divergences in loop momentum integrals, which we will do by using a cut-off that scales like energy. However, the way in which the cut-off is imposed needs to be chosen carefully, because an arbitrarily chosen cut-off procedure (in a theory with LIV) might preclude the possibility of an approximate restoration of Lorentz invariance at low energies. We choose to impose a cut-off on the expression $(k_4^2+l^2(k))$, where $k_4=-ik_0$ is the Euclidean continuation of energy. To understand why it is natural to adopt this cut-off procedure, recall the definition of $l(k)=|\vec{l}(\vec{k})|=k|g_0k^2+g_1|$, given in \eq{l}. Now, for $g_1 \neq 0$, we can completely scale it out of this expression by the scaling $k \rightarrow k/|g_1|$. This gives $l \rightarrow k|\epsilon \mu^{-2}k^2+1|$, where $\epsilon=\pm 1$ is the relative sign of $g_0$ with respect to $g_1$. For $k << \mu$, $(k_4^2+l^2(k))$ can be well approximated by the $SO(4)$-invariant form $(k_4^2+k^2)$. Thus, the proposed cut-off is naturally consistent with the possibility of Lorentz symmetry restoration at low energies\footnote{By contrast, a cut-off such as the one used in \eq{rhointegral} is intrinsically Lorentz non-invariant. 
With such a cut-off, possible emergence of Lorentz invariance at low energies will not be manifest.}. In the following we will use the notation $\int [d^4k]_{E \rightarrow \L}$ to indicate that the integral over Euclidean momenta is restricted to the region $E^2 \leq (k_4^2+l^2(k)) \leq \L^2$, i.e. \bea && \int [d^4k]_{E \rightarrow \L} \ \cdots \ \equiv \nonumber \\ &&~~~~~~~\int d^4k \ \theta\left(\L^2-(k_4^2+l^2(k))\right) \theta\left((k_4^2+l^2(k))-E^2\right) \ \cdots, \eea where the $\theta$ is the usual step function. Note that both $\L$ and $E$ have dimensions of energy. For the special case of $E=0$, it will be convenient to use the more compact notation $\int [d^4k]_\L$. Let us now integrate out from \eq{momaction} all the modes between $(k_4^2+l^2(k))=\L^2$ and $(k_4^2+l^2(k))=E^2$, where $\L$ is the cut-off and $E < \L$. The action with the lower energy cut-off $E$ is given by \begin{eqnarray} S\kern-10pt &=& \kern-5pt \int \frac{[d^4k]_E}{(2\pi)^4} \kern-5pt \int \frac{[d^4q]_E}{(2\pi)^4} \ \bar\Psi_i(k) \biggl\{\kern-2pt (2\pi)^4 \biggl(k_0 \gamma^0- \vec{l}(\vec{k}).\vec{\gamma} +\phi_0\biggr)\delta^4(k-q)+\frac{\sigma(k-q)}{\sqrt{N}} \kern-2pt\biggr\} \Psi_i(q)\nonumber\\ && -\int \frac{[d^4k]_E}{(2\pi)^4} \left(C_0+C_1 k_0^2-C_2 k^2 \right)|\sigma(k)|^2 -2C_3\sqrt{N}\phi_0 \sigma(k=0)-\frac{\phi_0^2 NV}{\lambda} \nonumber\\ && +{\rm ~classical~part}. \label{lea} \end{eqnarray} Details of the calculation leading to \eq{lea} as well as the calculations of the coefficients $C_n~(n=0,1,2,3)$ have been given in Appendix B. We have retained terms only up to ${\cal O}(1)$ in large $N$ and up to two derivatives of $\sigma(x)$. The ``classical part'' refers to the contribution which comes from the action evaluated on the classical solution. Notice that even though the scalar field started out as an auxiliary field, it has now developed kinetic terms. 
As can be seen from equations \eq{c1}-\eq{dimless}, these terms imply a maximum attainable velocity at low energies for the $\sigma$ particle, which is given by \bea c_\sigma^2=\frac{C_2}{C_1}=c_\psi^2 \ \frac{\bar{C_2}}{\bar{C_1}}, \label{LIV} \eea where $c_\psi=|g_1|$ is the maximum attainable velocity of the fermions at low energies. The quantities ${\bar C_{1,2}}$ have been defined in \eq{dimless}. In general $c_\sigma \neq c_\psi$, leading to LIV at low energies. As a check on our calculations, it is not difficult to see that for $g_0=0$, $c_\sigma=|g_1|$. Thus, we recover properties of the usual relativistic NJL model in the appropriate limit. Note that for $g_0 \neq 0$, the model is renormalizable since the momentum integrals involved in the computation of $C_{1,2}$ are finite in the limit $\L \rightarrow \infty$. However, for $g_0=0$, a finite $\L$ is essential since in this usual, nonrenormalizable, relativistic NJL case the momentum integrals diverge. What is the extent of the LIV at low energies? It follows from the above discussion that the magnitude of LIV depends on the deviation of the ratio $c_\sigma/c_\psi$ from unity. From equations \eq{c1}-\eq{dimless} in Appendix B, we see that this quantity depends on the various energy scales only through the dimensionless ratios\footnote{The reason for the subscript ``R'' will be clear in the next section where we discuss the renormalization of the various couplings.} \bea \mu_R \equiv \mu/E, \qquad \bar{\mu} \equiv \mu/\L. \label{muel} \eea In Figure \ref{fig} we have plotted $c_\sigma/c_\psi$ as a function of $\mu_R^{-1}$ for different values of $\bar{\mu}$. \begin{figure}[htb] \centering \includegraphics[height=5cm, width=8cm]{fig1.eps} \caption{The variation of the ratio $c_\sigma/c_\psi$, or equivalently $({\bar C_2}/{\bar C_1})^{1/2}$, {\em cf.} \eq{LIV}, with $\mu_R^{-1}$ for different values of $\bar{\mu}$. This ratio measures Lorentz invariance violations. 
The dashed curve corresponds to $\bar{\mu}=0$. The other two cases have $\bar{\mu}=1$ and $5$. We have used $\phi_0/\mu=10^{-3}$.} \label{fig} \end{figure} In these plots, we have chosen the relative sign of $g_0$ and $g_1$ to be positive, i.e. $\epsilon=+1$. The dashed line in Figure \ref{fig} corresponds to the case with $\bar{\mu}=0$, i.e. the case in which the cut-off $\L$ is infinite. We see that even at very large values of $\mu_R$ there is violation of Lorentz invariance at ${\cal O}(1)$ level. The origin of these LIVs becomes clear from the other two lines in Figure \ref{fig}. The full line corresponds to the case with $\bar{\mu}=1$, while for the dotted line $\bar{\mu}=5$. We see that the LIVs are dramatically reduced in these cases, being smaller for larger values of $\bar{\mu}$. Clearly, fermion modes with energy larger than $\mu$ propagating in the loop (Eq. \eq{log-det}) make Lorentz violating contributions to $C_{1,2}$ at ${\cal O}(1)$ level. These modes can be removed from the loop by imposing a cut-off which is smaller than the LIV scale, i.e. for $\bar{\mu} > 1$; the smaller the cut-off, the stronger the suppression should be. This is precisely what we see in the calculations shown in Figure \ref{fig}. It should be emphasized that the ${\cal O}(1)$ violations of Lorentz invariance at low energies that we have found in the bosonic sector of the present model cannot be tuned away by adjusting any parameter. This makes (an appropriately gauged version of) the present model an unsuitable candidate for an alternative solution to the hierarchy problem. A way out could be provided by a supersymmetric extension of the model. If supersymmetry is broken at a scale $M_s$ much smaller than the Lorentz invariance violating scale $\mu$, then a cancellation with bosonic partners would remove the Lorentz violating contributions from fermionic modes with energy larger than $M_s$ propagating in the loop.
This essentially means that the role of the cut-off $\L$ would then be played by $M_s$. The residual low energy LIV would then be controlled by $M_s^2/\mu^2$, which can be made small by tuning the scale at which supersymmetry is broken. Existing constraints on LIV coming from bounds on the maximum attainable velocity of various particles ($\delta c < 10^{-24}$) \cite{Liberati:2009pf}-\cite{Scully:2008jp} imply $M_s/\mu < 10^{-12}$. For $\mu \sim 10^{17}$ GeV, this means that supersymmetry can be broken at a much higher scale, $M_s \sim 100$ TeV, than in the currently popular scenarios\footnote{For a recent review, see \cite{Altarelli:2009bz}.}. The present scenario might become a serious possibility, should the LHC see a composite Higgs but no supersymmetry. \section{RG flows and fixed points} Classically, at high enough energies (where all masses can be ignored), the action in \eq{simplestaction} has the scaling symmetry under which $x^0 \rightarrow x^0/a^3, \ \vec{x} \rightarrow \vec{x}/a, \ \Psi \rightarrow a^{3/2}\Psi, \ \sigma \rightarrow a^{3}\sigma$, where $a$ is the parameter of the scaling transformation. This is the $z=3$ Lifshitz-like fixed point. For $g_0=0$, classically the free fermi theory has the scaling symmetry $x^0 \rightarrow x^0/a, \ \vec{x} \rightarrow \vec{x}/a, \ \Psi \rightarrow a^{3/2}\Psi$. This is the familiar Lorentz invariant case, which can be described in the above language as a $z=1$ fixed point. At energies much smaller than the scale $\mu$, Lorentz violations are small even for $z=3$, and so classically one recovers an approximately Lorentz invariant theory at low energies. In the following, we will study RG flow in the quantum theory from the $z=3$ fixed point in the ultraviolet to find out what theory it flows to in the infrared. \subsection{Determination of renormalized parameters} In order to study the flow from high to low energies, we need to find out how the various couplings get renormalized.
The starting point in the determination of the renormalization of the couplings in action \eq{momaction}, in the leading large $N$ approximation, is the low energy action \eq{lea}. To implement the Wilson RG procedure, we need to rescale the cut-off $E$ in \eq{lea} back to the original cut-off $\L$. As discussed above equation \eq{masslv}, our cut-off procedure imposes the restriction $(k_4^2+l^2(k)) \leq \L^2$ on Euclidean space momentum integrals. Writing $E=b\L$, we see that the cut-off $E$ on the low energy action \eq{lea} can be rescaled to $\L$ by the scale transformations (change of variables) $k_4 \rightarrow b k_4,\ k \rightarrow a k$, followed by the scalings of the couplings \bea a^3 g_0=b g_0', \qquad a g_1=b g_1'. \label{grg2} \eea These give the renormalized couplings and RG flow in the free fermion theory. Notice that the scaling parameter, $b$, for the energy is, a priori, unrelated to the scaling parameter, $a$, for the momenta. The $z=3$ fixed point behaviour corresponds to choosing $a=b^{1/3}$, and then the couplings scale as $g_0'=g_0$ and $g_1'=b^{-2/3}g_1$. Since in this case $g_0$ is invariant under the RG flow, we can set it to unity by scaling $k \rightarrow |g_0|^{-1/3} k$, leaving only one independent coupling, namely $g_1$. Choosing $a=b$ instead, one gets $g_0'=b^{2}g_0$ and $g_1'=g_1$, which are the scalings appropriate for a $z=1$ fixed point. In this case, $g_1$ is invariant under the RG flow and so we can scale it away by $k \rightarrow k/|g_1|$. Once again we are left with only one independent coupling. The two fixed point behaviours discussed above can be treated together by setting $a=b^{1/z}$ where $z=3$ or $1$ \footnote{Away from the fixed points, in general the RG equations \eq{grg2} describe a flow in two parameters, namely $a$ and $b$. More generally, in a theory with anisotropy in $n$ different directions, the RG equations will describe an $n$-parameter flow. It would be interesting to explore such more general flows.
Here we will confine ourselves to a more traditional view of RG as a flow in a single scale parameter.}. Then, using the cut-off $\L$ to define the dimensionless renormalized couplings, we get \bea g_{0R} \equiv \L^{\frac{3}{z}-1}g_0'=E^{\frac{3}{z}-1}g_0, \qquad g_{1R} \equiv \L^{\frac{1}{z}-1}g_1'=E^{\frac{1}{z}-1}g_1. \label{dimlessgrg} \eea They satisfy the RG equations \bea \dot{g}_{0R}=-\left(\frac{3}{z}-1\right)g_{0R}, \qquad \dot{g}_{1R}=-\left(\frac{1}{z}-1\right) g_{1R}, \label{grg} \eea where a dot denotes a derivative with respect to ($-{\rm ln}E)$ \footnote{Note that in this convention, the RG flow is from high to low energies. This is opposite to the convention generally used in high energy physics.}. Using these two couplings, we can define the renormalized version of \eq{masslv}: \bea |g_{1R}^3/g_{0R}|^{\frac{1}{2}}=\mu/E=\mu_R. \label{mur} \eea This is precisely the quantity defined in \eq{muel}. In the leading large-$N$ approximation, the free field renormalization \eq{grg} is not affected by the Yukawa coupling. However, the 't Hooft coupling $\lambda$ does receive quantum corrections. Its renormalization can be deduced from the term proportional to $C_0$ in the low energy action \eq{lea}. Scaling the cut-off $E$ back to $\L$ in \eq{lea} and using the expression for $C_0$ given in \eq{c0}, we get \bea \frac{1}{\lambda'}=\frac{b}{a^3}\biggl\{\frac{1}{\lambda}- 2\int \frac{[d^4k]_{E \rightarrow \L}}{(2\pi)^4} \ \frac{k_4^2+l^2(k)-\phi_0^2}{(k_4^2+l^2(k)+\phi_0^2)^2} \biggr\}, \label{lrg0} \eea Now, using $(\int [d^4k]_{E \rightarrow \L} \ \cdots)=(\int [d^4k]_\L \ \cdots)-(\int [d^4k]_E \ \cdots)$ and substituting $a=b^{1/z}$, we can simplify this equation to get \bea \frac{1}{\lambda'}=b^{1-\frac{3}{z}} \biggl\{\xi+ 2\int \frac{[d^4k]_E}{(2\pi)^4} \ \frac{k_4^2+l^2(k)-\phi_0^2}{(k_4^2+l^2(k)+\phi_0^2)^2} \biggr\}, \label{lrg1} \eea where\footnote{In the broken phase, $\xi=0$ by the gap equation \eq{gapeqn}. 
In the unbroken phase, $\xi$ is a non-zero constant, independent of $E$. For this reason, it turns out that the RG equation obtained in \eq{lrg3}, for the dimensionless coupling $\lambda_R$ defined in \eq{lrg2}, does not depend on $\xi$. Consequently the RG equation in the unbroken phase can be obtained from \eq{lrg3} by specializing to $\phi_R=0$.} \bea \xi \equiv \frac{1}{\lambda}-2\int \frac{[d^4k]_\L}{(2\pi)^4} \ \frac{k_4^2+l^2(k)-\phi_0^2}{(k_4^2+l^2(k)+\phi_0^2)^2}. \label{xi} \eea So, for the dimensionless renormalized coupling, $\lambda_R \equiv \L^{\frac{3}{z}-1}\lambda'$, we get \bea \frac{1}{\lambda_R} =\frac{1}{E^{\frac{3}{z}-1}} \biggl\{\xi+2\int \frac{[d^4k]_E}{(2\pi)^4} \ \frac{k_4^2+l^2(k)-\phi_0^2}{(k_4^2+l^2(k)+\phi_0^2)^2} \biggr\}. \label{lrg2} \eea This leads to the RG equation \bea \dot{\lambda}_R=-\left(\frac{3}{z}-1\right)\lambda_R+ \frac{(1-\phi_R^2)}{(1+\phi_R^2)^2} \ \frac{\lambda_R^2}{\pi^3 |g_{0R}|} \int_0^{h_0} ds \ \frac{s^2}{\sqrt{1-s^2(\epsilon s^2+\mu_R^{\frac{2}{3}})^2}} \label{lrg3} \eea where $\epsilon=\pm 1$ is the relative sign of $g_0$ and $g_1$, $\phi_R \equiv \phi_0/E$ is the dimensionless renormalized coupling corresponding to the vev (with the RG equation $\dot{\phi}_R=\phi_R$) and \bea h_0=\left(\frac{1}{2}+\sqrt{\frac{1}{4}+ \frac{\mu_R^2}{27}}\right)^{\frac{1}{3}}- \frac{\mu_R^{\frac{2}{3}}}{3} \left(\frac{1}{2}+\sqrt{\frac{1}{4}+\frac{\mu_R^2}{27}}\right)^{-\frac{1}{3}} \label{h0} \eea Note that the form in which the right-hand side of \eq{lrg3} has been written is inappropriate for the special case $g_{0R}=0$. In this case one must use the alternative, but entirely equivalent, form: \bea \dot{\lambda}_R=-\left(\frac{3}{z}-1\right)\lambda_R+ \frac{(1-\phi_R^2)}{(1+\phi_R^2)^2} \ \frac{\lambda_R^2}{\pi^3 |g_{1R}|^3} \int_0^{h_1} ds \ \frac{s^2}{\sqrt{1-s^2(\epsilon \mu_R^{-2}s^2+1)^2}}, \label{lrg4} \eea where $h_1=h_0\mu_R^{\frac{2}{3}}$. It is easy to see that $h_1 \rightarrow 1$ as $\mu_R \rightarrow \infty$. 
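As a quick numerical cross-check (an illustrative sketch added here, not part of the original calculation), one can verify that the Cardano-type expression \eq{h0} is the positive root of $h^3+\mu_R^{2/3}\,h=1$, i.e. the value of $s$ at which the square root in the integrand of \eq{lrg3} vanishes for $\epsilon=+1$, and that $h_1=h_0\,\mu_R^{2/3}$ indeed tends to $1$ as $\mu_R\rightarrow\infty$:

```python
import math

def h0(mu_R):
    # Cardano-form expression for h_0, eq. (h0) of the text: it should be
    # the positive root of  h^3 + mu_R^(2/3) * h = 1,  i.e. the point where
    # the square root in the lambda_R RG integrand vanishes for epsilon = +1.
    A = 0.5 + math.sqrt(0.25 + mu_R**2 / 27.0)
    return A**(1.0 / 3.0) - (mu_R**(2.0 / 3.0) / 3.0) * A**(-1.0 / 3.0)

def h1(mu_R):
    # h_1 = h_0 * mu_R^(2/3); the text states h_1 -> 1 as mu_R -> infinity
    return h0(mu_R) * mu_R**(2.0 / 3.0)

# h_0 solves the cubic for a range of mu_R values
for mu_R in (0.1, 1.0, 10.0, 1000.0):
    h = h0(mu_R)
    assert abs(h**3 + mu_R**(2.0 / 3.0) * h - 1.0) < 1e-9

# h_1 approaches 1 for large mu_R
assert abs(h1(1.0e5) - 1.0) < 1e-4
```

The check simply confirms that \eq{h0} is the standard Cardano solution of the depressed cubic $h^3+\mu_R^{2/3}h-1=0$.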
\subsection{The renormalized action} In terms of the dimensionless renormalized couplings, the low energy action \eq{lea} can be written as \bea S &=& \int \frac{[d^4k]_\L}{(2\pi)^4} \int \frac{[d^4q]_\L}{(2\pi)^4} \ \bar\Psi_{iR}(k) \biggl\{(2\pi)^4 \L \biggl(\tilde{k}_0 \gamma^0- \vec{l_R}(\vec{\tilde{k}}).\vec{\gamma} +\phi_R\biggr)\delta^4(k-q) \nonumber \\ && +\frac{1}{\sqrt{N}}\sigma_R(k-q)\biggr\} \Psi_{iR}(q)-\L^{\frac{3}{z}-1} \int \frac{[d^4k]_\L}{(2\pi)^4} \left(\frac{1}{\lambda_R}+C_{1R}\tilde{k}_0^2-C_{2R}\tilde{k}^2 \right) |\sigma_R(k)|^2 \nonumber \\ && -2\frac{\L^{\frac{3}{z}}}{\lambda_R}\sqrt{N}\phi_R \sigma_R(k=0) \ + \ \text{classical part}, \label{rlea} \eea where $\vec{l_R}(\vec{\tilde{k}})=\vec{\tilde{k}} (g_{0R}\tilde{k}^2+g_{1R}),\ \tilde{k}_0=k_0/\L,\ \tilde{k}=k\L^{-\frac{1}{z}}$. Moreover, the renormalized fields are related to the bare fields by $\Psi_{iR}=b^{\frac{3}{2z}+1}\Psi_i,\ \sigma_R=b^{\frac{3}{z}}\sigma$. The coefficients $C_{1,2R}$ are related to $C_{1,2}$ and have been defined in \eq{rdimless2}. This action seems to depend separately on the two couplings $g_{0R},\ g_{1R}$, but actually the physics described by it depends only on the combination $\mu_R$, \eq{mur}. To see this more explicitly, let us make the change of variables $\tilde{k} \rightarrow \tilde{k}/|g_{1R}|,\ \Psi_{iR} \rightarrow |g_{1R}|^{\frac{3}{2}}\Psi_{iR},\ \sigma_R \rightarrow |g_{1R}|^{3}\sigma_R,\ \lambda_R \rightarrow \lambda_1=\lambda_R/|g_{1R}|^{3}$. 
After this change of variables, the action takes the form \bea S &=& \int \frac{[d^4k]_\L}{(2\pi)^4} \int \frac{[d^4q]_\L}{(2\pi)^4} \ \bar\Psi_{iR}(k) \biggl\{(2\pi)^4 \L \biggl(\tilde{k}_0 \gamma^0- \vec{\tilde{l}}_R(\vec{\tilde{k}}).\vec{\gamma} +\phi_R\biggr)\delta^4(k-q) \nonumber \\ && +\frac{1}{\sqrt{N}}\sigma_R(k-q)\biggr\} \Psi_{iR}(q)-\L^{\frac{3}{z}-1} \int \frac{[d^4k]_\L}{(2\pi)^4} \left(\frac{1}{\lambda_1}+\bar{C}_1\tilde{k}_0^2- \bar{C}_2\tilde{k}^2 \right)|\sigma_R(k)|^2 \nonumber \\ && -2\frac{\L^{\frac{3}{z}}}{\lambda_1}\sqrt{N}\phi_R \sigma_R(k=0) \ + \ \text{classical part}, \label{rlea1} \eea where $\vec{\tilde{l}}_R(\vec{\tilde{k}})= \vec{\tilde{k}} (\epsilon \tilde{k}^2/\mu_R^2+1)$ and $\bar{C}_{1,2}$ are given by \eq{rdimless1}. This form of the action makes it explicit that physics depends only on the combination $\mu_R$ since any separate dependence on $g_{0R},\ g_{1R}$ has now disappeared. The form \eq{rlea1} of the low energy action is not suitable for small values of $\mu_R$ (equivalently for small values of $g_{1R}$ or large values of $g_{0R}$). In this case, a more suitable change of variables in the action \eq{rlea} is $\tilde{k} \rightarrow \tilde{k}|g_{0R}|^{-\frac{1}{3}},\ \Psi_{iR} \rightarrow |g_{0R}|^{\frac{1}{2}}\Psi_{iR},\ \sigma_R \rightarrow |g_{0R}|\sigma_R,\ \lambda_R \rightarrow \lambda_3=\lambda_R/|g_{0R}|$. 
After this change of variables, the action takes the form \bea S &=& \int \frac{[d^4k]_\L}{(2\pi)^4} \int \frac{[d^4q]_\L}{(2\pi)^4} \ \bar\Psi_{iR}(k) \biggl\{(2\pi)^4 \L \biggl(\tilde{k}_0 \gamma^0- \vec{\tilde{l}}_R'(\vec{\tilde{k}}).\vec{\gamma} +\phi_R\biggr)\delta^4(k-q) \nonumber \\ && +\frac{1}{\sqrt{N}}\sigma_R(k-q)\biggr\} \Psi_{iR}(q)-\L^{\frac{3}{z}-1} \int \frac{[d^4k]_\L}{(2\pi)^4} \left(\frac{1}{\lambda_3}+C_1'\tilde{k}_0^2-C_2'\tilde{k}^2 \right) |\sigma_R(k)|^2 \nonumber \\ && -2\frac{\L^{\frac{3}{z}}}{\lambda_3}\sqrt{N}\phi_R \sigma_R(k=0) \ + \ \text{classical part}, \label{rlea2} \eea where $\vec{\tilde{l}}_R'(\vec{\tilde{k}})= \vec{\tilde{k}} (\epsilon \tilde{k}^2+\mu_R^{\frac{2}{3}})$ and $C_1'=\bar{C}_1\mu_R^{-2}, \ C_2'=\bar{C}_2\mu_R^{-2/3}$. It can be shown that $C_{1,2}'$ have a finite limit as $\mu_R \rightarrow 0$; see equations \eq{dimless}-\eq{dimless3}. This form of the low energy action is now suitable for small values of $\mu_R$. We have thus found two equally valid descriptions of the physics of the 4-fermi theory. One is that given by the action in \eq{rlea}, which is valid for all the values of the renormalized coupling $\mu_R$. However, in this form the action depends on two couplings, $g_{0R}$ and $g_{1R}$. In the form \eq{rlea1} and \eq{rlea2}, the low energy action depends only on the combination $\mu_R$ of these, but two different descriptions are needed to cover the entire range of possible values of $\mu_R$. \subsection{Fixed points} As we have argued above, the relevant renormalized coupling constants in the low energy theory are $\lambda_1=\lambda_R/|g_{1R}|^{3}$ and $\lambda_3=\lambda_R/|g_{0R}|$, with $\lambda_3=\mu_R^2 \lambda_1$. The RG equations for these can be obtained from equations \eq{lrg3} and \eq{lrg4} using \eq{grg}. We get \bea \dot{\lambda}_3 &=& \frac{(1-\phi_R^2)}{(1+\phi_R^2)^2} \ \frac{\lambda_3^2}{\pi^3} \int_0^{h_0} ds \ \frac{s^2}{\sqrt{1-s^2(\epsilon s^2+\mu_R^{\frac{2}{3}})^2}}. 
\label{lrg31} \\ \dot{\lambda}_1 &=& -2 \lambda_1+ \frac{(1-\phi_R^2)}{(1+\phi_R^2)^2} \ \frac{\lambda_1^2}{\pi^3} \int_0^{h_1} ds \ \frac{s^2}{\sqrt{1-s^2(\epsilon \mu_R^{-2}s^2+1)^2}}, \label{lrg41} \eea Together with these, we also have the RG equation for $\mu_R$, namely \bea \dot{\mu_R}=\mu_R, \qquad ({\mu_R^{-1}})^{\hbox{$\cdot$}} =-\mu_R^{-1}. \label{mur1} \eea The second of these is appropriate for large $\mu_R$. Equations \eq{lrg31}-\eq{mur1} constitute the set that describes the RG flows in this model\footnote{Note that $\phi_R$ is not an independent variable since it is determined in terms of $\lambda_{1,3}$ and $\mu_R$ by the gap equation in the broken phase, while in the unbroken phase it vanishes.}. We emphasize that the explicit dependence on $z$ has dropped out of these equations. This is nice since one expects that specific values of $z$ should characterize only the end points of an RG trajectory, not the trajectory itself. Now, let us first consider the case of small $\mu_R$. In this case, the appropriate equation is \eq{lrg31}. We see that there is a possible fixed point at $\lambda_3=0$. For this to be a fixed point, we must also have $\mu_R=0$ and $\phi_R=0$. This is what we have been describing as the $z=3$ Lifshitz-like fixed point. The case $\mu_R \rightarrow \infty$ is more interesting. In this case we must use \eq{lrg41}, which in this limit reduces to \bea \dot{\lambda}_1=-2\lambda_1+ \frac{(1-\phi_R^2)}{4\pi^2(1+\phi_R^2)^2}\lambda_1^2. \label{l3rg} \eea For a fixed point we must have $\phi_R=0$ and one of the two possibilities: $\lambda_1=0, \ 8\pi^2$. The first of these is the free field (Gaussian) fixed point and the second is a new Lorentz invariance violating fixed point. Figure \ref{fig2} shows a plot of the RG flows near the three fixed points we have found. The data for this figure have been obtained using the exact RG equations \eq{lrg31} and \eq{lrg41}. Note that we have used $\epsilon=+1$ in these calculations.
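The fixed-point value quoted above can be checked numerically (an illustrative sketch, not taken from the paper): for $\mu_R\rightarrow\infty$ the integral in \eq{lrg41} becomes $\int_0^1 ds\, s^2/\sqrt{1-s^2}=\pi/4$, so that at $\phi_R=0$ the beta function is $-2\lambda_1+\lambda_1^2/(4\pi^2)$, whose nontrivial zero sits at $\lambda_1=8\pi^2$:

```python
import math

# Midpoint-rule evaluation of the mu_R -> infinity limit of the integral
# in the lambda_1 RG equation:  I = int_0^1 s^2 / sqrt(1 - s^2) ds.
# Substituting s = sin(t) removes the endpoint singularity:
# the integrand becomes sin(t)^2 on t in [0, pi/2].
n = 100000
dt = (math.pi / 2.0) / n
I = sum(math.sin((i + 0.5) * dt)**2 for i in range(n)) * dt
assert abs(I - math.pi / 4.0) < 1e-8

# Beta function of lambda_1 at phi_R = 0 in this limit:
#   -2*lam + lam^2 * I / pi^3  =  -2*lam + lam^2 / (4*pi^2)
beta = lambda lam: -2.0 * lam + lam**2 * I / math.pi**3
# The nontrivial zero sits at lambda_1 = 8*pi^2, the LIV fixed point
assert abs(beta(8.0 * math.pi**2)) < 1e-4
```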
\begin{figure}[htb] \centering \includegraphics[height=6cm, width=6.5cm]{lfp.eps}~~\includegraphics[height=6cm, width=6.5cm]{njl.eps} \caption{In the figure on the left are plotted RG flows in the $(\lambda_3,\ \mu_R)$ plane and in the one on the right are flows in the $(\lambda_1,\ 1/\mu_R)$ plane. $\mu_R$ increases from left to right in both figures. The chiral symmetry broken (unbroken) phase is indicated by the letter B (U). The dotted line is the critical curve on which $\phi_0$ vanishes.} \label{fig2} \end{figure} In the broken phase, on the critical line $\phi_R=0$, the RG flow from UV ends on the Lorentz invariance violating fixed point in the IR. This can be seen directly from \eq{lrg2}. Making the change of variables $k_4 \rightarrow Ek_4,\ k \rightarrow Ek/|g_1|$ in this equation, we get \bea \frac{1}{\lambda_1}=2\int \frac{[d^4k]_1}{(2\pi)^4} \ \frac{k_4^2+\tilde{l}_R^2(k)-\phi_R^2} {(k_4^2+\tilde{l}_R^2(k)+\phi_R^2)^2}+\frac{|g_1|^3}{E^2}\xi, \label{rgsol} \eea where, as before, $\tilde{l}_R(k)=k(\epsilon k^2/\mu_R^2+1)$ and $\int [d^4k]_1 \ \cdots=\int d^4k \ \theta(1-(k_4^2+\tilde{l}_R^2(k))) \ \cdots$. Now, in the broken phase, the gap equation implies $\xi=0$. So, for $\mu_R^{-1}=\phi_R=0$, the right-hand side of the above equation evaluates to $1/8\pi^2$. What happens for $\mu_R^{-1}=0$, but $\phi_R \neq 0$? In this case, for small values of $\phi_R$, the coupling increases as $\lambda_1 \sim 8\pi^2/(1-\phi_R^2 {\rm ln}\phi_R^{-2})$. So, these trajectories diverge to larger values of the coupling, doing so faster for larger values of $\phi_R$. Figure \ref{fig2} confirms this for trajectories in the broken phase. Note that there is no fixed point for $\lambda_1 \rightarrow \infty$ since the beta function of $\lambda_1^{-1}$ does not vanish at $\lambda_1^{-1}=0$. In the unbroken phase, $|g_1|^3 \xi$ is a non-zero constant, independent of the flow parameter $E$, while $E \rightarrow 0$ in the IR.
Thus, in the unbroken phase the RG flow will terminate at $\lambda_1=0$ in the IR. What does the theory look like at these two fixed points? Consider first the nontrivial fixed point at $\lambda_1=8\pi^2$. The fixed point action can be obtained from \eq{rlea1} by setting $\phi_R=0$ and taking the limit $\mu_R \rightarrow \infty$. For $\phi_R=0$, the fermions become massless. In the $\sigma$ kinetic terms, the coefficients $\bar{C}_{1,2}$ grow logarithmically in the limit $\mu_R \rightarrow \infty$, as shown in \eq{c12r1}. The implication is that as we approach the fixed point $\lambda_1=8\pi^2$, the kinetic terms for $\sigma$ grow and eventually dominate the mass term. This can be seen more directly by the rescaling $\sigma \rightarrow \sigma/\sqrt{\bar{C}_1}$. In the limit $\mu_R \rightarrow \infty$, the mass term and the Yukawa interaction disappear, leaving behind a massless scalar decoupled from the fermions. So the theory at this fixed point has free massless fermions and a free massless scalar, the maximum attainable velocity of the latter being different from the former, unless $\bar{\mu} \gg 1$, in which case Lorentz invariance is restored near the fixed point. Note that our analysis implies the existence of a fixed point in the usual relativistic NJL model as well. This can be established as follows. The RG equation for the 4-fermi coupling in the NJL model can be obtained by setting $\bar{\mu} \gg 1$ and $z=1$ in \eq{lrg4}. Since $\mu_R \gg \bar{\mu}$, this also implies $\mu_R \gg 1$. For large $\mu_R$ and $z=1$, the equation for $\lambda_1=\lambda_R/|g_{1R}|^3$ is precisely \eq{l3rg}. Moreover, from the low energy action \eq{rlea1}, we see that in this parameter regime the Lorentz violating piece in the fermion kinetic term vanishes and the coefficients $\bar{C}_{1,2}$ work out to be those appropriate for a relativistic NJL model with a cut-off $\L$, as we have argued below equation \eq{c12r1}. 
The other fixed point, that at $\lambda_1=0$, is described by just a free massless fermion. This is because near this fixed point the $\sigma$ mass goes to infinity as $\mu_R^2$ because of the manner in which $\lambda_1$ approaches the fixed point, which is described by equation \eq{rgsol}. Therefore, this time the rescaling $\sigma \rightarrow \sigma/\sqrt{\bar{C}_1}$ leaves the mass term as dominant, with the mass going to infinity. Hence $\sigma$ decouples at the fixed point, leaving behind free massless relativistic fermions. This is the theory that one gets from the original four-fermi model at the trivial (Gaussian) fixed point. \section{Concluding remarks} In this paper we have analysed RG flows in a $z=3$ Lifshitz-like four fermi model, which is ultraviolet complete in $3+1$ dimensions. The model flows in the infrared to a theory in which Lorentz invariance is violated at ${\cal O}(1)$ level, which cannot be tuned away by adjusting a parameter. The origin of these violations can be traced to fermions with energies higher than the Lorentz violating energy scale, propagating in loops and contributing to the induced kinetic terms for the composite boson, in the chiral symmetry broken vacuum. However, if one works with a finite cut-off, which is taken to be much smaller than the Lorentz invariance violating scale, then the model flows in the infrared to an approximately Lorentz invariant theory even in the bosonic sector, which is similar to the low energy limit of the usual Nambu$-$Jona-Lasinio model in the broken phase. A physical way of interpreting the cut-off could be as supersymmetry breaking scale in a supersymmetric version of this model. In this case, the offending contributions of fermions in loops would be cancelled by their supersymmetric partners and the Lorentz violations would be controlled by the ratio of the Lorentz violating scale to the supersymmetry breaking scale, which can, in principle, be made small. 
Possible applications of the present model to the Higgs sector of the Standard Model would then put constraints on these two scales for consistency with data. A remarkable feature of the general RG equations we have obtained, \eq{grg2} and \eq{lrg0}, is that they describe flow in two scaling parameters, namely $a$ and $b$. More generally, in a theory with anisotropy in $n$ different directions, the RG equations will describe flow in $n$ parameters. The parameters presumably get related near a fixed point, as in the present example in which we found that $a=b^{\frac{1}{z}}$ near the fixed point labeled by the exponent $z$. Away from the fixed points, however, a more general flow in multiple scaling parameters would seem to be more appropriate. It would be interesting to explore such more general flows. \section{Acknowledgements} We would like to thank Spenta Wadia for a collaboration at an early stage of this work and for numerous discussions. We would also like to thank Sumit Das, Alfred Shapere, Juan Maldacena, Shiraz Minwalla and Michael Peskin for discussions. G.M. would like to thank the organizers of the Benasque conference on Gravity (July 2009), the organizers of the QTS6 meeting in Lexington, the University of Kentucky, Lexington and the School of Natural Sciences, IAS, Princeton for hospitality during part of this project.
\section{Introduction} Promising grand unification, a solution to the electroweak hierarchy problem as well as a consistent quantum theory of gravity, string theory is strongly motivated from the bottom-up as a UV completion of the SM. At the same time, it is crucial to ensure that the SM can indeed be consistently incorporated in a concrete realization of string theory and to expose the constraints and predictions that may arise from such a derivation. A particularly advanced setup is provided by orbifold compactifications of the heterotic string \cite{Ibanez:1986tp,Ibanez:1987sn,Gross:1984dd,Gross:1985fr,Dixon:1985jw,Dixon:1986jc} which can not only consistently host the (supersymmetric) SM~\cite{Buchmuller:2005jr} but may also shed light on one of its most pressing puzzles by automatically including family repetition and flavor symmetries~\cite{Kobayashi:2006wq,Olguin-Trejo:2018wpw}. Since a ``theory of everything'' has to be, in particular, a theory of flavor, there is a strong motivation to understand the flavor puzzle of the SM from such a top-down perspective. In this talk (see~\cite{Trautner:2022biw} for an earlier version), we present new progress achieved in consistently deriving the complete flavor symmetry from concrete string theory models, including the role of the different sources that can contribute to the flavor symmetry breaking in the infrared (IR). As a proof of principle, we present the first consistent string-theory-derived model that gives rise to potentially realistic low-energy flavor phenomenology~\cite{Baur:2022hma}. The example is a heterotic string theory compactified on a $\mathbbm{T}^2/\Z3$ orbifold. Our results show that this model provides a successful fit to all available experimental data while giving rise to concrete predictions for so-far undetermined parameters. Corrections from the K\"ahler potential turn out to be instrumental in obtaining a successful simultaneous fit to quark and lepton data.
While in our effective description there are still more parameters than observables, this gives a proof of principle of the existence of consistent global explanations of flavor in the quark and lepton sector from a top-down perspective. In the end we will also point out possible lessons for bottom-up flavor model building and important open problems. \section{Types of discrete flavor symmetries and the eclectic symmetry} The action of the 4D effective $\mathcal{N}=1$ SUSY theory can schematically be written as (here, $K$: K\"ahler potential, $W$: Superpotential, $x$: spacetime, $\theta$: superspace, $\Phi:$ superfields, $T$: modulus) \begin{equation} \mathcal{S}=\int d^4x\,d^2\theta\,d^2\bar{\theta}\,K(T,\bar{T},\Phi,\bar{\Phi})+ \int d^4x\,d^2\theta\,W(T,\Phi)+ \int d^4x\,d^2\bar{\theta}\,\bar{W}(\bar{T},\bar{\Phi})\;. \end{equation} There are four categories of possible symmetries that differ by their effect on fields and coordinates: \begin{itemize} \item ``Traditional'' flavor symmetries ``$G_\mathrm{traditional}$'', see e.g.~\cite{Ishimori:2010au}: $\Phi\mapsto \rho(\mathsf{g})\Phi$\,,\quad$\mathsf{g}\in G_\mathrm{traditional}$. \item Modular flavor symmetries ``$G_\mathrm{modular}$'' \cite{Feruglio:2017spp}: (partly cancel between $K$ and $W$) \begin{equation} \gamma:=\begin{pmatrix}a&b\\c&d\end{pmatrix}\in\mathrm{SL}(2,\Z{})\;,\quad\Phi\xmapsto{\gamma}(c\,T+d)^n \rho(\gamma)\Phi\;,\quad T\xmapsto{\gamma}\frac{a\,T+b}{c\,T+d}\;. \end{equation} In this case couplings are promoted to modular forms: $Y=Y(T)$, $Y(\gamma T)=\left(c\,T+d\right)^{k_Y}\rho_Y(\gamma)\,Y(T)$. \item R flavor symmetries ``$G_R$'' that differ for fields and their superpartners~\cite{Chen:2013dpa}~(cancel between $W$ and $d^2\theta$). 
\item General symmetries of the ``\ensuremath{\mathcal{CP}}\xspace'' type~\cite{Baur:2019kwi,Novichkov:2019sqv}: (partly cancel between $K$ and $W$ and $d^4x$) \begin{equation} \det\left[\,\bar{\gamma}\in\mathrm{GL}(2,\Z{})\,\right]=-1\;,\quad\Phi~\xmapsto{\bar{\gamma}}~(c\bar{T}+d)^n \rho(\bar{\gamma}) \bar{\Phi}\;,\quad T~\xmapsto{\bar{\gamma}}~\frac{a\bar{T}+b}{c\bar{T}+d}\;. \end{equation} \end{itemize} All of these symmetries are individually known from bottom-up model building, see~\cite{Feruglio:2019ybq}. In explicit top-down constructions we find that \textit{all of these arise at the same time} in a non-trivially unified fashion~\cite{Baur:2019iai,Baur:2019kwi,Nilles:2020nnc,Nilles:2020kgo,Nilles:2020tdp,Nilles:2020gvu,Ohki:2020bpo,Nilles:2021ouu}, which we call the ``eclectic'' flavor symmetry~\cite{Nilles:2020nnc} \begin{equation}\label{eq:eclectic} \fcolorbox{red}{yellow}{\Large ~$G_\mathrm{\mathbf{eclectic}} ~=~ G_\mathrm{traditional} ~\cup~ G_\mathrm{modular} ~\cup~ G_\mathrm{R} ~\cup~ \ensuremath{\mathcal{CP}}\xspace\,.$\vspace{0.2cm}~} \end{equation} \section{Origin of the eclectic flavor symmetry in heterotic orbifolds} A new insight is that in the Narain lattice formulation of compactified heterotic string theory~\cite{Narain:1985jj,Narain:1986am,GrootNibbelink:2017usl} the complete unified eclectic flavor symmetry can be unambiguously derived from the outer automorphisms~\cite{Trautner:2016ezn} of the Narain lattice space group~\cite{Baur:2019kwi, Baur:2019iai}. These outer automorphisms contain modular transformations, including the well-known T-duality transformation and the so-called mirror symmetry (permutation of different moduli) of string theory, but also symmetries of the \ensuremath{\mathcal{CP}}\xspace-type as well as traditional flavor symmetries and, therefore, naturally yield the unification shown in Eq.~\eqref{eq:eclectic}.
The eclectic transformations also automatically contain the previously manually derived so-called ``space-group selection rules'' \cite{Hamidi:1986vh,Dixon:1986qv,Ramos-Sanchez:2018edc} and non-Abelian ``traditional'' flavor symmetries~\cite{Kobayashi:2006wq}. \section[The eclectic flavor symmetry of T2/Z3]{\boldmath The eclectic flavor symmetry of $\mathbbm T^2/\Z3$ \unboldmath} Let us now focus on a specific example model~\cite{Baur:2021bly} in which the six extra dimensions of ten-dimensional heterotic string theory are compactified in such a way that two of them obey the $\mathbbm T^2/\Z3$ orbifold geometry. The discussion of this $D=2$ subspace involves a K\"ahler and complex structure modulus $T$ and $U$, respectively, with the latter being fixed to $\langle U\rangle=\exp(\nicefrac{2\pi\mathrm{i}}3)=:\omega$ by the orbifold action. The outer automorphisms of the corresponding Narain space group yield the full eclectic group of this setting, which is of order $3888$ and given by\footnote{% Finite groups are denoted by $\mathrm{SG}\left[\cdot,\cdot\right]$ where the first number is the order of the group and the second is their GAP SmallGroup ID~\cite{GAP4url}. }~\cite{Nilles:2020tdp,Nilles:2020gvu} \begin{equation} G_\mathrm{eclectic} ~=~ \Omega(2)\rtimes\Z2^\ensuremath{\mathcal{CP}}\xspace\,,\qquad\text{with}\quad \Omega(2)\cong \mathrm{SG}[1944,3448]\,. \end{equation} More specifically, $G_\mathrm{eclectic}$ contains \begin{itemize} \item a $\Delta(54)$ traditional flavor symmetry, \item the $\mathrm{SL}(2,\Z{})_T$ modular symmetry of the $T$ modulus, which acts as a $\Gamma'_{3}\cong T'$ finite modular symmetry on matter fields and their couplings, \item a $\Z9^R$ discrete R symmetry as a remnant of $\mathrm{SL}(2,\Z{})_U$, and \item a $\Z2^\ensuremath{\mathcal{CP}}\xspace$ \ensuremath{\mathcal{CP}}\xspace-like transformation. \end{itemize} These symmetries and their interplay are shown in table~\ref{tab:Z3FlavorGroups}.
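As an elementary consistency check of the quoted group orders (an illustrative sketch; the group-theoretic statements themselves are taken from the cited literature), Lagrange's theorem requires the orders of these subgroups to divide $|\Omega(2)|=1944$ and $|G_\mathrm{eclectic}|=2\times1944=3888$:

```python
# Lagrange-theorem sanity check on the group orders quoted in the text
# (illustrative arithmetic only).
omega2 = 1944                     # |Omega(2)| = |SG[1944, 3448]|
eclectic = 2 * omega2             # semidirect product with Z2^CP
assert eclectic == 3888           # order quoted for G_eclectic

subgroup_orders = {
    "Delta(54)": 54,              # traditional flavor symmetry
    "T' = Gamma'_3": 24,          # finite modular symmetry
    "Z9^R": 9,                    # discrete R symmetry
}
# every subgroup order must divide |Omega(2)| (and hence |G_eclectic|)
for name, order in subgroup_orders.items():
    assert omega2 % order == 0, name
```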
Twisted strings localized at the three fixed points of the $\mathbbm{T}^2/\Z3$ orbifold form three generations of massless matter fields in the effective IR theory with transformations under the various symmetries summarized in table~\ref{tab:Representations}. Explicit representation matrices of the group generators are shown in the slides of the talk and in the papers~\cite{Baur:2021bly,Baur:2022hma}. Examples for complete string theory realizations are known, see~\cite{Carballo-Perez:2016ooy,Ramos-Sanchez:2017lmj} and \cite{Baur:2021bly,Baur:2022hma}, and we show the derived charge assignment of the SM-like states in one particular example in table~\ref{tab:Z3xZ3configurations}. \begin{table}[t!] \center \resizebox{\textwidth}{!}{ \begin{tabular}{|c|c||c|c|c|c|c|c|} \hline \multicolumn{2}{|c||}{nature} & outer automorphism & \multicolumn{5}{c|}{\multirow{2}{*}{flavor groups}} \\ \multicolumn{2}{|c||}{of symmetry} & of Narain space group & \multicolumn{5}{c|}{}\\ \hline \hline \parbox[t]{3mm}{\multirow{6}{*}{\rotatebox[origin=c]{90}{eclectic}}} &\multirow{2}{*}{modular} & rotation $\mathrm{S}~\in~\SL{2,\Z{}}_T$ & $\Z{4}$ & \multicolumn{3}{c|}{\multirow{2}{*}{$T'$}} &\multirow{6}{*}{$\Omega(2)$}\\ & & rotation $\mathrm{T}~\in~\SL{2,\Z{}}_T$ & $\Z{3}$ & \multicolumn{3}{c|}{} & \\ \cline{2-7} & & translation $\mathrm{A}$ & $\Z{3}$ & \multirow{2}{*}{$\Delta(27)$} & \multirow{3}{*}{$\Delta(54)$} & \multirow{4}{*}{$\Delta'(54,2,1)$} & \\ & traditional & translation $\mathrm{B}$ & $\Z{3}$ & & & & \\ \cline{3-5} & flavor & rotation $\mathrm{C}=\mathrm{S}^2\in\SL{2,\Z{}}_T$ & \multicolumn{2}{c|}{$\Z{2}^R$} & & & \\ \cline{3-6} & & rotation $\mathrm{R}\in\SL{2,\Z{}}_U$ & \multicolumn{3}{c|}{$\Z{9}^R$} & & \\ \hline \end{tabular} } \caption{\label{tab:Z3FlavorGroups} Eclectic flavor group $\Omega(2)$ for six-dimensional orbifolds that contain a $\mathbbm T^2/\Z{3}$ orbifold sector~\cite{Nilles:2020tdp}. 
} \end{table} \begin{table}[h] \center \resizebox{\textwidth}{!}{ \begin{tabular}{|c||c||c|c|c|c||c|c|c|c||c|} \hline \multirow{3}{*}{sector} &\!\!matter\!\!& \multicolumn{9}{c|}{eclectic flavor group $\Omega(2)$}\\ &fields & \multicolumn{4}{c||}{modular $T'$ subgroup} & \multicolumn{4}{c||}{traditional $\Delta(54)$ subgroup} & $\Z{9}^R$ \\ &$\Phi_n$ & \!\!irrep $\rep{s}$\!\! & $\rho_{\rep{s}}(\mathrm{S})$ & $\rho_{\rep{s}}(\mathrm{T})$ & $n$ & \!\!irrep $\rep{r}$\!\! & $\rho_{\rep{r}}(\mathrm{A})$ & $\rho_{\rep{r}}(\mathrm{B})$ & $\rho_{\rep{r}}(\mathrm{C})$ & $R$\\ \hline \hline bulk & $\Phi_{\text{\tiny 0}}$ & $\rep1$ & $1$ & $1$ & $0$ & $\rep1$ & $1$ & $1$ & $+1$ & $0$ \\ & $\Phi_{\text{\tiny $-1$}}$& $\rep1$ & $1$ & $1$ & $-1$ & $\rep1'$ & $1$ & $1$ & $-1$ & $3$ \\ \hline $\theta$ & $\Phi_{\nicefrac{-2}{3}}$ & $\rep2'\oplus\rep1$ & $\rho(\mathrm{S})$ & $\rho(\mathrm{T})$ & $\nicefrac{-2}{3}$& $\rep3_2$ & $\rho(\mathrm{A})$& $\rho(\mathrm{B})$ & $+\rho(\mathrm{C})$ & $1$\\ & $\Phi_{\nicefrac{-5}{3}}$ & $\rep2'\oplus\rep1$ & $\rho(\mathrm{S})$ & $\rho(\mathrm{T})$ & $\nicefrac{-5}{3}$& $\rep3_1$ & $\rho(\mathrm{A})$& $\rho(\mathrm{B})$ & $-\rho(\mathrm{C})$ & $-2$\\ \hline $\theta^2$& $\Phi_{\nicefrac{-1}{3}}$ & $\rep2''\oplus\rep1$& $(\rho(\mathrm{S}))^*$& $(\rho(\mathrm{T}))^*$& $\nicefrac{-1}{3}$& $\crep3_1$& $\rho(\mathrm{A})$& $(\rho(\mathrm{B}))^*$& $-\rho(\mathrm{C})$ & $2$\\ & $\Phi_{\nicefrac{+2}{3}}$ & $\rep2''\oplus\rep1$& $(\rho(\mathrm{S}))^*$& $(\rho(\mathrm{T}))^*$& $\nicefrac{+2}{3}$& $\crep3_2$& $\rho(\mathrm{A})$& $(\rho(\mathrm{B}))^*$& $+\rho(\mathrm{C})$ & $5$\\ \hline \hline super- & \multirow{2}{*}{$W$} & \multirow{2}{*}{$\rep1$} & \multirow{2}{*}{$1$} & \multirow{2}{*}{$1$} & \multirow{2}{*}{$-1$} & \multirow{2}{*}{$\rep1'$} & \multirow{2}{*}{$1$} & \multirow{2}{*}{$1$} & \multirow{2}{*}{$-1$} & \multirow{2}{*}{$3$}\\ \!\!potential\!\! 
& & & & & & & & & & \\ \hline \end{tabular} } \caption{\label{tab:Representations} $T'$, $\Delta(54)$ and $\Z9^R$ representations of massless matter fields $\Phi_n$ with modular weights $n$ in semi-realistic heterotic orbifold compactifications with a $\mathbbm{T}^2/\Z3$ sector~\cite{Nilles:2020kgo}. } \end{table} \begin{table}[t] \centering \begin{tabular}{cllllllllll} \toprule & $\ell$ & $\bar e$ & $\bar\nu$ & $q$ & $\bar u$ & $\bar d$ & $H_u$ & $H_d$ & $\varphi_\mathrm{f}$ & $\phi^0_\mathrm{f}$\\ \midrule Model A & $\Phi_{\nicefrac{-2}3}$ & $\Phi_{\nicefrac{-2}3}$ & $\Phi_{\nicefrac{-2}3}$ & $\Phi_{\nicefrac{-2}3}$ & $\Phi_{\nicefrac{-2}3}$ & $\Phi_{\nicefrac{-2}3}$ & $\Phi_{0}$ & $\Phi_{0}$ & $\Phi_{\nicefrac{-2}3}$ & $\Phi_{0}$\\ \bottomrule \end{tabular} \caption{\label{tab:Z3xZ3configurations} Flavor symmetry representations of MSSM quark ($q,\bar u,\bar d$), lepton ($\ell,\bar e,\bar\nu$), Higgs and flavon fields ($\varphi$,$\phi$) in an example of a consistent string theory configuration with a $\mathbbm T^2/\Z3$ orbifold sector. Following the notation of table~\ref{tab:Representations}, representations are \textit{entirely} determined by stating the respective modular weight. The \textit{complete} gauge symmetry and field content of the string derived model, incl.\ exotic vector-like fields and others which are irrelevant for this analysis, is given in~\cite[Appendix C]{Baur:2022hma}. } \end{table} Generic $\Omega(2)$ compliant super- and K\"ahler potentials have been derived in~\cite{Nilles:2020kgo} and their explicit form can be found in~\cite{Baur:2022hma}. 
For our example model A, the superpotential reads \begin{equation}\label{eq:superpotential} \begin{split} W ~=~ \phi^0 & \left[ \left(\phi^0_\mathrm{u}\,\varphi_\mathrm{u}\right) Y_\mathrm{u}\, H_\mathrm{u}\,\bar{u}\,q + \left(\phi^0_\mathrm{d}\,\varphi_\mathrm{e}\right) Y_\mathrm{d}\, H_\mathrm{d}\, \bar{d}\, q\, + \left(\phi^0_\mathrm{e}\,\varphi_\mathrm{e}\right) Y_\ell\, H_\mathrm{d}\, \bar{e}\, \ell\right] \\ & + \left(\phi^0\varphi_\nu\right) Y_\nu\, H_\mathrm{u}\,\bar\nu\, \ell + \phi^0_\mathrm{M}\,\varphi_\mathrm{e}\,Y_\mathrm{M}\,\bar\nu\,\bar\nu \,. \end{split} \end{equation}\enlargethispage{0.5cm} Two important empirical observations can be made in this top-down setting: (i) While matter fields can have fractional modular weights, they always combine in such a way that all Yukawa couplings are modular forms of \textit{integer} weight. (ii) The charge assignments under the eclectic symmetry are \textit{uniquely} fixed in one-to-one fashion by the modular weight of a field. The latter also holds for all other known top-down constructions, see~\cite{Kikuchi:2021ogn,Baur:2020jwc,Baur:2021mtl,Almumin:2021fbk,Ishiguro:2021ccl}, and can be conjectured to be a general feature of top-down models~\cite{Baur:2021bly}. \section{Sources of eclectic flavor symmetry breaking} The eclectic flavor symmetry is broken by both the vacuum expectation value (VEV) of the modulus $\langle T\rangle$ and the VEVs of flavon fields. This is unlike virtually all current bottom-up models, where only one or the other breaking mechanism is implemented. Note that all VEVs $\langle T\rangle$ have non-trivial stabilizers in the eclectic symmetry that lead to enhancements of the residual traditional flavor symmetry beyond what has been previously known in the literature. This situation is depicted in figure~\ref{fig:Xi22breaking}. \begin{figure}[t!]
\centering \includegraphics[width=0.4\linewidth]{planeTmodT.png} \caption{\label{fig:Xi22breaking} Residual symmetries of the eclectic flavor symmetry $\Omega(2)$ as a function of the modulus VEV $\langle T\rangle$ in the bulk of the fundamental domain and at symmetry-enhanced special points. The point $\vev{T}=\mathrm{i}\infty$ is dual (equivalent by a modular transformation) to the highlighted point $\vev{T}=1$. } \end{figure} \begin{figure} \includegraphics[width=1.0\linewidth]{H321_marked.png} \caption{\label{fig:H321breaking} Possible flavon-VEV-induced breaking patterns of the linearly realized unified flavor symmetry $H(3,2,1)$ at $\vev{T}=\mathrm{i}\infty$ or $\vev{T}=1$. In red, we highlight the breaking path followed in our specific example model, with misaligned flavons of type $\Phi_{\nicefrac{-2}3}$. } \end{figure} For a realistic phenomenology the residual traditional flavor symmetry has to be broken further by the VEVs of flavon fields. In figure~\ref{fig:H321breaking} we show the possible breaking of the residual flavor symmetry $H(3,2,1)$ at $\langle T\rangle=\mathrm{i}\infty$ by differently (mis-)aligned VEVs of different flavons~\cite{Baur:2021bly}. Since different residual symmetries are possible for different sectors of the theory, the overall symmetry can be completely broken even if the modulus and flavon VEVs were stabilized at symmetry-enhanced points. In a model with a single modulus and only one type of flavon, we can achieve complete flavor symmetry breaking by misaligning their VEVs slightly away from the symmetry-enhanced points. In our specific example model A, we follow one specific breaking path that we selected by hand in order to successfully reproduce the experimental data. The path is illustrated in red in figure~\ref{fig:H321breaking} and in more detail in figure~\ref{fig:BreakdownPattern}.
Our model has only flavons of type $\Phi_{\nicefrac{-2}3}$, which transform as $\rep{3}_2$ under the traditional flavor symmetry $\Delta(54)$, as can be inferred from table~\ref{tab:Representations}. We parametrize the effective (dimensionless) flavon VEVs and the misaligned modulus VEV as \begin{align} &\vev{\tilde\varphi_{\rep{3}_2}}=\left(\lambda_1, \lambda_2, 1\right)\;,& &\epsilon:=\mathrm{e}^{2\pi\mathrm{i}\vev{T}}\;.& \end{align} Exact alignment of the flavon and modulus to the symmetry-enhanced point would give rise to a $\Z{3}^{(2)}\times\Z{3}^{(3)}$ residual symmetry, with factors \begin{align} &\Z{3}^{(2)}~\subset~G_\mathrm{traditional}& &\text{generated by}& \rho_{\rep{3}_2,\mathrm{i}\infty}(\mathrm{ABA^2})~&=~ \begin{pmatrix}\omega&0&0\\0&\omega^2&0\\0&0&1\end{pmatrix}\;,& \\ &\Z{3}^{(3)}~\subset~G_\mathrm{modular}& &\text{generated by}& \rho_{\rep{3}_2,\mathrm{i}\infty}(\mathrm{T})~&=~ \begin{pmatrix}\omega^2&0&0\\0&1&0\\0&0&1\end{pmatrix}\;.& \end{align} The stepwise breaking of these symmetries, see figure~\ref{fig:BreakdownPattern}, gives rise to technically natural small parameters \begin{equation} \epsilon, \lambda_1\ll \lambda_2\ll 1 \;, \end{equation} which will allow us to control the mass and mixing hierarchies analytically.
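These residual generators and their action on the flavon VEV can be verified with a few lines of standard-library Python (an illustrative check only, following the breakdown pattern of figure~\ref{fig:BreakdownPattern}): both generators have order three, the aligned VEV $(0,0,1)$ preserves both $\Z3$ factors, and the misaligned VEV $(0,\lambda_2,1)$ breaks $\Z{3}^{(2)}$ but not $\Z{3}^{(3)}$.

```python
import cmath

w = cmath.exp(2j * cmath.pi / 3)  # omega = exp(2*pi*i/3)

# Residual generators at the symmetry-enhanced point, in the 3_2 representation
rho_ABA2 = [[w, 0, 0], [0, w**2, 0], [0, 0, 1]]   # generates Z3^(2)
rho_T    = [[w**2, 0, 0], [0, 1, 0], [0, 0, 1]]   # generates Z3^(3)

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def close(u, v, tol=1e-12):
    return all(abs(a - b) < tol for a, b in zip(u, v))

# Both generators have order 3: g^3 acts as the identity on any vector
v = [0.3 + 0.1j, -0.7, 1.2j]
for g in (rho_ABA2, rho_T):
    assert close(matvec(g, matvec(g, matvec(g, v))), v)

# The aligned VEV (0, 0, 1) is invariant -> Z3^(2) x Z3^(3) unbroken
vev = [0, 0, 1]
assert close(matvec(rho_ABA2, vev), vev)
assert close(matvec(rho_T, vev), vev)

# A misaligned VEV (0, lambda2, 1) breaks Z3^(2) but preserves Z3^(3)
vev2 = [0, 0.06, 1]
assert not close(matvec(rho_ABA2, vev2), vev2)
assert close(matvec(rho_T, vev2), vev2)
```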
\begin{figure} \begin{tikzpicture}[node distance=0.2cm and 2.27cm, rounded corners, >=stealth] \node[minimum height=22pt, draw, rectangle,fill=lightgray] (Omega2) {$\Omega(2)$ }; \node[minimum height=22pt, draw, rectangle, right=of Omega2,fill=lightgray] (H321) { $H(3,2,1)$ }; \node[minimum height=22pt, draw, rectangle, right=of H321,fill=lightgray] (Z3xZ3) { $\;\!\Z{3}^{(2)} \!\times \Z{3}^{(3)}$ }; \node[minimum height=22pt, draw, rectangle, right = of Z3xZ3,fill=lightgray] (Z3) { $\;\!\Z3^{(3)}$ }; \node[minimum height=22pt, draw, rectangle, above right=of Z3,fill=lightgray] (empty1) { $~\emptyset~$ }; \node[minimum height=22pt, draw, rectangle, below right=of Z3,fill=lightgray] (empty2) { $~\emptyset~$ }; \draw[->, >=stealth, shorten >=2pt, shorten <=2pt] (Omega2.east) -- node [fill=white,rectangle,midway,align=center] {\resizebox*{30pt}{!}{$\vev{T}=\mathrm{i}\infty$}} (H321.west); \draw[->, >=stealth, shorten >=2pt, shorten <=2pt] (H321.east) -- node [fill=white,rectangle,midway,align=center] {\resizebox*{35pt}{!}{$\vev{\tilde\varphi}=\begin{pmatrix}0\\0\\1\end{pmatrix}$}} (Z3xZ3.west); \draw[->, >=stealth, shorten >=2pt, shorten <=2pt] (Z3xZ3.east) -- node [fill=white,rectangle,midway,align=center] {\resizebox*{38pt}{!}{$\vev{\tilde\varphi}=\begin{pmatrix}0\\\lambda_2\\1\end{pmatrix}$}} (Z3.west); \draw[->, >=stealth, shorten >=2pt, shorten <=2pt] (Z3.east) -- node [draw=black, fill=white,rectangle,midway,align=center] {\resizebox*{45pt}{!}{$\epsilon=\mathrm{e}^{2\pi\mathrm{i}\vev{T}} \neq 0$}} (empty2.west); \draw[->, >=stealth, shorten >=2pt, shorten <=2pt] (Z3.east) -- node [draw=black, fill=white,rectangle,midway,align=center] {\resizebox*{30pt}{!}{$\vev{\tilde\varphi}=\begin{pmatrix}\lambda_1\\\lambda_2\\1\end{pmatrix}$}}(empty1.west); \end{tikzpicture} \caption{\label{fig:BreakdownPattern} Breakdown pattern of the eclectic flavor symmetry $\Omega(2)$ of a $\mathbbm T^2/\Z3$ orbifold model triggered by the VEVs of the modulus $T$ and (dimensionless) 
flavons $\tilde\varphi$. All flavons transform in the $\rep3_2$ representation of $\Delta(54)$, see table~\ref{tab:Representations}.} \end{figure} \section{Mass matrices} For model A, all terms in the superpotential~\eqref{eq:superpotential} have the generic structure \begin{equation} \Phi_0 \dots \Phi_0 \,\hat{Y}^{(1)}(T)\, \Phi^{(1)}_{\nicefrac{-2}{3}}\,\Phi^{(2)}_{\nicefrac{-2}{3}}\,\Phi^{(3)}_{\nicefrac{-2}{3}}\;. \end{equation} Schematically, this is given by \begin{equation} \text{``singlet flavon(s)} \times \text{modular form} \times \text{\textbf{triplet} matter} \times \text{\textbf{triplet} matter} \times \text{\textbf{triplet} flavon''}\;. \end{equation} Hence, the resulting mass matrices for quarks, charged leptons and neutrinos can all be written as~\cite{Nilles:2020kgo,Nilles:2020gvu} \begin{equation} \left(\Phi^{(1)}_{\nicefrac{-2}{3}}\right)^\mathrm{T} \; M\left(T, c, \Phi_{\nicefrac{-2}{3}}^{(3)}\right) \;\;\; \Phi^{(2)}_{\nicefrac{-2}{3}}\;, \end{equation} with \begin{equation}\label{eq:massmatrix} M\left(T,c,\Phi_{\nicefrac{-2}{3}}^{(3)}\right) ~=~ c~ \begin{pmatrix} \phantom{-}\hat{Y}_2(T) \,X &- \dfrac{\hat{Y}_1(T)}{\sqrt{2}}\, Z&- \dfrac{\hat{Y}_1(T)}{\sqrt{2}} \, Y\\[6pt] - \dfrac{\hat{Y}_1(T)}{\sqrt{2}} \,Z &\phantom{-}\hat{Y}_2(T) \,Y &- \dfrac{\hat{Y}_1(T)}{\sqrt{2}} \, X\\[6pt] - \dfrac{\hat{Y}_1(T)}{\sqrt{2}} \,Y&- \dfrac{\hat{Y}_1(T)}{\sqrt{2}} \,X&\phantom{-}\hat{Y}_2(T) \,Z \end{pmatrix}\;. \end{equation} Here we have parametrized the effective flavon as $\Phi_{\nicefrac{-2}{3}}^{(3)}\equiv\left(X,Y,Z\right)$, and used the modular form \begin{equation} \hat{Y}^{(1)}(T)\equiv \begin{pmatrix} \hat{Y}_1(T) \\ \hat{Y}_2(T) \end{pmatrix} \equiv \frac{1}{\eta(T)} \begin{pmatrix} -3\sqrt{2}\,\eta^3(3\,T)\\ 3\eta^3(3\,T) + \eta^3(T/3) \end{pmatrix}, \end{equation} where $\eta$ is the Dedekind eta function.
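The asymptotic behavior of this modular form near $\vev{T}=\mathrm{i}\infty$, which is the origin of the $\epsilon^{1/3}$ suppressions in the hierarchical mass matrices, can be checked numerically from the product formula of the Dedekind eta function (a standalone sketch; the truncation order \texttt{nmax} is an ad hoc choice):

```python
import cmath

def eta(tau, nmax=60):
    """Dedekind eta function via eta(tau) = q^(1/24) * prod_n (1 - q^n), q = e^{2 pi i tau}."""
    q = cmath.exp(2j * cmath.pi * tau)
    result = cmath.exp(2j * cmath.pi * tau / 24)
    for n in range(1, nmax + 1):
        result *= 1 - q**n
    return result

def Yhat(T):
    """The doublet (Y1, Y2) of the lowest modular form Yhat^(1)(T)."""
    return (-3 * 2**0.5 * eta(3 * T)**3 / eta(T),
            (3 * eta(3 * T)**3 + eta(T / 3)**3) / eta(T))

T = 3j  # a point deep in the fundamental domain, so epsilon = e^{2 pi i T} is tiny
Y1, Y2 = Yhat(T)
eps_third = cmath.exp(2j * cmath.pi * T / 3)  # epsilon^(1/3)

# As Im T -> infinity: Y2 -> 1 and Y1 -> -3*sqrt(2)*epsilon^(1/3)
assert abs(Y2 - 1) < 1e-3
assert abs(Y1 + 3 * 2**0.5 * eps_third) < 1e-6
```

This numerically confirms that one doublet component stays of order one while the other is suppressed by $\epsilon^{1/3}$.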
In the vicinity of the symmetry-enhanced points discussed above, the mass matrices all take the form \begin{equation} M\left(\vev T,\Lambda,\vev{\tilde{\varphi}}\right)=\Lambda\, \begin{pmatrix} \lambda_1 & 3\,\epsilon^{1/3} & 3\,\lambda_2\,\epsilon^{1/3}\\ 3\,\epsilon^{1/3} & \lambda_2 & 3\,\lambda_1\,\epsilon^{1/3}\\ 3\,\lambda_2\,\epsilon^{1/3} & 3\,\lambda_1\,\epsilon^{1/3} & 1 \end{pmatrix} \,+\,\mathcal{O}(\epsilon)\;. \end{equation} The exact values of the parameters $\lambda_{1,2}$ and the overall scale $\Lambda$ differ between the different sectors, but this form makes the analytic control over the hierarchical entries of the mass matrices manifest. \section{Numerical analysis: fit to data} To give a proof of existence of functioning top-down models, we fit the parameters of our model to the observed data in the lepton and quark sectors. As input, we take the mass ratios and $1\sigma$ errors for charged lepton masses, quark masses and quark mixings at the GUT scale, see e.g.~\cite{Antusch:2013jca}, assuming RGE running with benchmark parameters $\tan\beta=10$, $M_\mathrm{SUSY}=10$\,TeV, and $\bar\eta_b=0.09375$, as is common practice in bottom-up constructions~\cite{Feruglio:2017spp,Chen:2021zty,Ding:2021zbg}. The data on lepton mixing were taken from the global analysis $\mathrm{NuFIT v}5.1$~\cite{Esteban:2020cvm}, including the full $\Delta\chi^2$ profiles. While the leptonic mixing parameters are given at the low scale, the correction from RGE running in our type-I seesaw scenario is expected to be smaller than the experimental errors, see e.g.~\cite{Antusch:2003kp}, such that we ignore the effect of running for them. We define a $\chi^2$ function \begin{equation} \chi^2(x) := \sum_i \left(\dfrac{\mu_{i,\mathrm{exp}} - \mu_{i,\mathrm{model}}(x)}{\sigma_i}\right)^2\;, \end{equation} where $\mu_{i,\mathrm{exp}}$ and $\sigma_i$ are the experimental best-fit value and $1\sigma$ error, while $\mu_{i,\mathrm{model}}$ is the model prediction.
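The minimization of such a $\chi^2$ of squared pulls can be illustrated with a standard-library toy (a hypothetical one-parameter model with stand-in data, not the actual multi-parameter fit): a coarse parameter scan provides a starting point, which is then refined by a golden-section search.

```python
# Toy chi^2 fit: one free parameter x and two hypothetical "observables".
# All numbers here are stand-ins for illustration only.

mu_exp = [0.0586, 0.303]   # stand-in experimental best-fit values
sigma  = [0.0005, 0.012]   # stand-in 1 sigma errors

def mu_model(x):
    """Hypothetical model predictions as a function of one parameter x."""
    return [0.06 * x, 0.3 * x * x]

def chi2(x):
    """Sum of squared pulls (experiment minus model, in units of sigma)."""
    return sum(((e - m) / s) ** 2
               for e, m, s in zip(mu_exp, mu_model(x), sigma))

# Coarse scan for a starting point, then golden-section refinement
x0 = min((0.5 + 0.001 * k for k in range(1000)), key=chi2)
a, b = x0 - 0.001, x0 + 0.001
phi = (5 ** 0.5 - 1) / 2
for _ in range(60):
    c, d = b - phi * (b - a), a + phi * (b - a)
    a, b = (a, d) if chi2(c) < chi2(d) else (c, b)
x_best = 0.5 * (a + b)

assert 0.9 < x_best < 1.1
assert chi2(x_best) <= chi2(x0) + 1e-9
```

In the actual analysis this role is played by the dedicated optimization and sampling packages named below.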
In order to fix the free parameters of our model we numerically minimize $\chi^2$ using~\texttt{lmfit}~\cite{lmfit}. Subsequently we explore each minimum with the Markov chain Monte Carlo sampler~\texttt{emcee}~\cite{emcee}. \subsection{Lepton sector} For the fit to the lepton sector there are effectively only $7$ parameters, given by \begin{equation} x~=~ \left\{\re\,\vev T,\, \im\,\vev T,\, \vev{\tilde\varphi_{\mathrm{e},1}},\, \vev{\tilde\varphi_{\mathrm{e},2}},\, \vev{\tilde\varphi_{\nu,1}},\, \vev{\tilde\varphi_{\nu,2}},\, \Lambda_\nu \right\}\,. \end{equation} The best-fit results are shown in table~\ref{tab:LeptonFitParameters}. The fit is bimodal, as clearly seen in figure~\ref{fig:modulispace}, which also shows the best-fit point for the expectation value of the modulus. The corresponding values of the experimental parameters and their best-fit values in our model are collected in table~\ref{tab:FitLeptons}. A successful fit to the data is only possible if \begin{enumerate} \item atmospheric mixing lies in the lower octant $\theta_{23}^{\ell}<45^\circ$, \item neutrino masses obey a normal ordering, with masses at $1\sigma$ predicted to be $3.9~\mathrm{meV} < m_1 < 4.9~\mathrm{meV}$, $9.5~\mathrm{meV} < m_2 < 9.9~\mathrm{meV}$, $50.1~\mathrm{meV} < m_3 < 50.5~\mathrm{meV}$, and \item the Majorana phases are close to the CP conserving values $\eta_{1,2}\approx\pi$. \end{enumerate} These can be considered predictions of this scenario. The corresponding posteriors are shown in figure~\ref{fig:neutrinomasses}, together with the allowed effective neutrino mass for $0\nu\beta\beta$-decay on the lower right. Gray-shaded areas are excluded by KamLAND-Zen~\cite{KamLAND-Zen:2022tow} or cosmology~\cite{GAMBITCosmologyWorkgroup:2020rmf,Planck:2018vyg}. Future generations of $0\nu\beta\beta$-decay experiments such as CUPID-1T~\cite{CUPID:2022wpt} are expected to probe the available parameter space.
The model does not constrain the \ensuremath{\mathcal{CP}}\xspace violating phase $\delta_{\mathrm{\ensuremath{\mathcal{CP}}\xspace}}^\ell$ better than the combined experimental information. \begin{table}[t] \centering\vspace{-0.5cm} \resizebox{\textwidth}{!}{ \begin{tabular}{llclc} \toprule &\multicolumn{2}{c}{right green region} & \multicolumn{2}{c}{left green region}\\\cmidrule(r{4pt}l{4pt}){2-3}\cmidrule(l{4pt}r{4pt}){4-5} parameter~~ & best-fit value~ & $1\sigma$ interval & best-fit value~ & $1\sigma$ interval \\\midrule $\re\, \vev T$ & $\phantom{-}0.02279$ & $0.01345 \rightarrow 0.03087$ & $-0.04283$ & $-0.05416 \rightarrow -0.02926$\\ $\im\, \vev T$ & $\phantom{-}3.195$ & $3.191 \rightarrow 3.199$ & $\phantom{-}3.139$ & $3.135 \rightarrow 3.142$\\ $\vev{\tilde\varphi_{\mathrm{e,1}}}$ & $-4.069 \cdot 10^{-5}$ & $-4.321 \cdot 10^{-5}\rightarrow -3.947 \cdot 10^{-5}$ & $\phantom{-}2.311 \cdot 10^{-5}$ & $2.196 \cdot 10^{-5}\rightarrow 2.414 \cdot 10^{-5}$\\ $\vev{\tilde\varphi_{\mathrm{e,2}}}$ & $\phantom{-}0.05833$ & $0.05793 \rightarrow 0.05876$ & $\phantom{-}0.05826$ & $0.05792 \rightarrow 0.05863$\\ $\vev{\tilde\varphi_{\mathrm{\nu,1}}}$ & $\phantom{-}0.001224$ & $0.001201 \rightarrow 0.001248$ & $-0.001274$ & $-0.001304 \rightarrow -0.001248$ \\ $\vev{\tilde\varphi_{\mathrm{\nu,2}}}$ & $-0.9857$ & $-1.0128 \rightarrow -0.9408$ & $\phantom{-}0.9829$ & $0.9433 \rightarrow 1.0122$\\ $\Lambda_\nu~[\mathrm{eV}]$ & $\phantom{-}0.05629$ & $0.05442 \rightarrow 0.05888$ & $\phantom{-}0.05591$ & $0.05408 \rightarrow 0.05850$ \\\midrule $\,\chi^2$ & $\phantom{-}0.08$ && $\phantom{-}0.45$ & \\ \bottomrule \end{tabular}} \caption{Best-fit values for the free model parameters in the lepton sector and their corresponding $1\sigma$ intervals for the two best-fit regions (green) also visible in figure~\ref{fig:modulispace}. 
\label{tab:LeptonFitParameters}} \end{table} \begin{figure}[t] \centering \includegraphics[width=0.96\linewidth]{Modulispace_compressed.pdf} \caption{Regions in the fundamental domain of $\Gamma(3)$ that yield fits for $\vev{T}$ with $\chi^2\leq25$. The green, yellow, and orange regions show the $1$, $2$, and $3\sigma$ confidence intervals. The best-fit value of the model lies in the upper right green region. \label{fig:modulispace}} \end{figure} \begin{table} \centering \resizebox{\textwidth}{!}{ \begin{tabular}{lrrrrrr} \toprule & \multicolumn{3}{c}{model} &\multicolumn{3}{c}{experiment}\\\cmidrule(r{4pt}l{4pt}){2-4}\cmidrule(l{4pt}r{4pt}){5-7} observable & best fit & $1\sigma$ interval & $3\sigma$ interval & best fit & $1\sigma$ interval & $3\sigma$ interval\\\midrule $m_\mathrm{e}/m_\mu$ & $0.00473$ & $0.00470\rightarrow0.00477$ & $0.00462\rightarrow0.00485$ & $0.00474$ & $0.00470\rightarrow0.00478$ & $0.00462\rightarrow0.00486$\\ $m_\mu/m_\tau$ & $0.0586$ & $0.0581\rightarrow0.0590$ & $0.0572\rightarrow0.0600$ & $0.0586$ & $0.0581\rightarrow0.0590$ & $0.0572\rightarrow0.0600$\\\midrule $\sin^2\theta_{12}$ & $0.303$ & $0.294\rightarrow0.315$ & $0.275\rightarrow0.335$ & $0.304$ & $0.292\rightarrow0.316$ & $0.269\rightarrow0.343$ \\ $\sin^2\theta_{13}$ & $0.02254$ & $0.02189\rightarrow0.02304$ & $0.02065\rightarrow0.02424$ & $0.02246$ & $0.02184\rightarrow0.02308$ & $0.02060\rightarrow0.02435$\\ $\sin^2\theta_{23}$ & $0.449$ & $0.436\rightarrow0.468$ & $0.414\rightarrow0.593$ & $0.450$ & $0.434 \rightarrow 0.469$ & $0.408 \rightarrow 0.603$ \\\midrule $\delta^\ell_\ensuremath{\mathcal{CP}}\xspace/\pi$ & $1.28$ & $1.15\rightarrow1.47$ & $0.81\rightarrow1.94$ & $1.28$ & $1.14 \rightarrow 1.48$ & $0.80 \rightarrow 1.94$ \\ $\eta_1/\pi \mod 1$ & $0.029$ & $0.018\rightarrow0.048$ & $-0.031\rightarrow0.090$ & - & - & - \\ $\eta_2/\pi \mod 1$ & $0.994$ & $0.992\rightarrow0.998$ & $0.935\rightarrow1.004$ & - & - & - \\ $J_\ensuremath{\mathcal{CP}}\xspace$ & $-0.026$ & 
$-0.033\rightarrow-0.015$ & $-0.035\rightarrow0.019$ & $-0.026$ & $-0.033\rightarrow-0.016$ & $-0.033\rightarrow0.000$ \\ $J_\ensuremath{\mathcal{CP}}\xspace^\mathrm{max}$ & $0.0335$ & $0.0330\rightarrow0.0341$ & $0.0318\rightarrow0.0352$ & $0.0336$ & $0.0329\rightarrow0.0341$ & $0.0317\rightarrow0.0353$ \\\midrule $\Delta m_{21}^2/10^{-5}~[\mathrm{eV}^2]$ & $7.39$ & $7.35\rightarrow7.49$ & $7.21\rightarrow7.65$ & $7.42$ & $ 7.22\rightarrow 7.63$ & $6.82 \rightarrow 8.04$ \\ $\Delta m_{31}^2/10^{-3}~[\mathrm{eV}^2]$ & $2.508$ & $2.488\rightarrow2.534$ & $2.437\rightarrow2.587$ & $2.521$ & $ 2.483\rightarrow 2.537$ & $2.430 \rightarrow 2.593$ \\ $m_1~[\mathrm{eV}]$ & $0.0042$ & $0.0039\rightarrow0.0049$ & $0.0034\rightarrow0.0131$ & $<0.037$ & - & - \\ $m_2~[\mathrm{eV}]$ & $0.0095$ & $0.0095\rightarrow0.0099$ & $0.0092\rightarrow0.0157$ & - & - & - \\ $m_3~[\mathrm{eV}]$ & $0.0504$ & $0.0501\rightarrow0.0505$ & $0.0496\rightarrow0.0519$ & - & - & - \\ $\sum_i m_i~[\mathrm{eV}]$ & $0.0641$ & $0.0636\rightarrow0.0652$ & $0.0628\rightarrow0.0806$ & $<0.120$ & - & - \\ $m_{\beta\beta}~[\mathrm{eV}]$ & $0.0055$ & $0.0045\rightarrow0.0064$ & $0.0040\rightarrow0.0145$ & $<0.036$ & - & - \\ $m_{\beta}~[\mathrm{eV}]$ & $0.0099$ & $0.0097\rightarrow0.0102$ & $0.0094\rightarrow0.0159$ & $<0.8$ & - & - \\\midrule $\chi^2$ & $0.08$ & & & & & \\ \bottomrule \end{tabular} } \caption{Lepton sector best-fit values of our model compared to the experimental data. 
\label{tab:FitLeptons}} \end{table} \begin{figure} \centering \begin{minipage}[c]{0.5\linewidth} \includegraphics[width=1.0\linewidth]{Pictures/CP_vs_23_2.jpg} \end{minipage}% \begin{minipage}[c]{0.50\linewidth} \hspace{0.25cm}\includegraphics[width=0.97\linewidth]{Pictures/Chi2_vs_mnu_2.jpg} \end{minipage} \\ \begin{minipage}[c]{0.50\linewidth} \includegraphics[width=0.95\linewidth]{Pictures/eta1_vs_eta2_2.jpg} \end{minipage}% \begin{minipage}[c]{0.50\linewidth} \includegraphics[width=0.94\linewidth]{Pictures/mbeta_vs_m1_2.jpg} \end{minipage} \caption{\label{fig:neutrinomasses} Best-fit points of our model and projections on experimentally observable parameters of the lepton sector, see text for details.} \end{figure} \subsection{Simultaneous fit to the quark sector and importance of K\"ahler corrections} Next, we extend our fit to the quark sector. As visible from the superpotential~\eqref{eq:superpotential}, the up-type quark Yukawa couplings include an additional flavon triplet $\varphi_\mathrm{u}$, while the down-type Yukawa couplings share the flavon triplet of the charged leptons, $\varphi_\mathrm{e}$ (this is a specific feature of model~A and cannot be changed, as it is determined by the underlying string theory). Since the structure of the mass matrices is tightly fixed, see equation~\eqref{eq:massmatrix}, this implies that at leading order in the EFT, the masses of charged leptons and down-type quarks would only differ by their overall scale, which contradicts experimental observation. However, this is only a leading-order statement, and the superpotential~\eqref{eq:superpotential}, in principle, is subject to corrections originating from a non-canonical K\"ahler potential. Including these corrections allows us to obtain a successful fit to the quark and lepton sectors simultaneously.
K\"ahler corrections have usually not been taken into account in bottom-up constructions, even though they are unconstrained there~\cite{Chen:2019ewa} and, therefore, potentially destabilize predictions. Unlike in pure modular flavor theories, the traditional flavor symmetry present in the full eclectic picture allows one to keep control of the K\"ahler potential. In particular, $K$ is canonical at leading order~\cite{Nilles:2020kgo}. As discussed in detail in~\cite{Baur:2022hma}, there are considerable off-diagonal corrections to the K\"ahler metric at next-to-leading order if flavons develop VEVs that break the traditional flavor symmetry. While K\"ahler corrections, in principle, can affect both lepton and quark sectors, we only take into account the K\"ahler corrections to quarks, for simplicity, and ignore the corrections to the lepton sector. This is no worse than the common assumption of total absence of these corrections in bottom-up constructions. In any case, it is conceivable that the inclusion of additional parameters would only make our fit better, not worse. For the quark sector of model A the corrections are essential and must be included to obtain a good fit. The following discussion of K\"ahler corrections is specific to model A.
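The qualitative effect of the rank-one corrections appearing below can be sketched in a few lines of Python (a toy check, not part of the derivation; it drops the overall normalization $\chi^{(f)}$ and the $B$-term, and borrows the best-fit $\alpha_i^{\mathrm{u}}$ values quoted later): a corrected K\"ahler metric of the form $K_{ij}=\delta_{ij}+\alpha_i\alpha_j$ stays positive definite for any real $\alpha$, as follows from the matrix determinant lemma $\det(\mathbbm{1}+\alpha\alpha^\mathrm{T})=1+|\alpha|^2$.

```python
# Rank-one Kahler correction K_ij = delta_ij + alpha_i * alpha_j.
# Pure-Python check of det(1 + a a^T) = 1 + |a|^2 and positive definiteness.

alpha = [-0.94917, 0.0016906, 0.31472]  # alpha_i^u best-fit values from the fit table

K = [[(1.0 if i == j else 0.0) + alpha[i] * alpha[j] for j in range(3)]
     for i in range(3)]

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

norm2 = sum(a * a for a in alpha)
assert abs(det3(K) - (1 + norm2)) < 1e-12  # matrix determinant lemma

# Leading principal minors positive -> K positive definite (Sylvester's criterion)
assert K[0][0] > 0
assert K[0][0] * K[1][1] - K[0][1] * K[1][0] > 0
assert det3(K) > 0
```

Hence the corrections deform the metric, and thereby the canonically normalized Yukawa couplings, without spoiling its consistency.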
Schematically, the corrections at leading order (LO) and next-to-leading order (NLO) are given by~\cite{Baur:2022hma} \begin{align} K_\mathrm{LO} &\supset -\log(-\mathrm{i} T+\mathrm{i} \bar T) + \sum_{\Phi}\left[(-\mathrm{i} T+ \mathrm{i}\bar T)^{\nicefrac{-2}3} + (-\mathrm{i} T+ \mathrm{i}\bar T)^{\nicefrac13}|\hat Y^{(1)}(T)|^2\right]|\Phi|^2\;,& \\ K_\mathrm{NLO} &\supset \sum_{\Psi,\varphi} \left[(-\mathrm{i} T+\mathrm{i} \bar T)^{\nicefrac{-4}3}\sum_a |\Psi\varphi|^2_{\rep1,a} +(-\mathrm{i} T+\mathrm{i}\bar T)^{\nicefrac{-1}3}\sum_a |\hat Y^{(1)}(T)\Psi\varphi|^2_{\rep1,a}\right]\;.& \end{align} For a given quark flavor $f=\left\{\mathrm{u,d,q}\right\}$, \begin{equation} K^{(f)}_{ij} \approx \chi^{(f)} ~\left[\delta_{ij}+ \lambda_{\varphi_\mathrm{eff}}^{(f)}\, \left(A_{ij}^{(f)}+ \kappa_{\varphi_\mathrm{eff}}^{(f)}\, B_{ij}^{(f)}\right)\right]\;, \end{equation} with flavor space structures $A=A(\varphi,T)$ and $B=B(\varphi,T)$ that are fixed by group theory but depend on \textit{all} flavon fields. We can define ``effective flavons'' such that \begin{align} \sum_\varphi \lambda^{(f)}_\varphi \, A_{ij}(\varphi) ~&=:~ \lambda^{(f)}_{\varphi_{\mathrm{eff}}} \, A_{ij}(\tilde\varphi_{\mathrm{eff}}^{(A,f)})& &\equiv\lambda^{(f)}_{\varphi_{\mathrm{eff}}} \, A^{(f)}_{ij}\;,&\\ \sum_\varphi \lambda^{(f)}_\varphi \, \kappa^{(f)}_\varphi \, B_{ij}(\varphi) ~&=:~ \lambda^{(f)}_{\varphi_{\mathrm{eff}}} \, \kappa^{(f)}_{\varphi_{\mathrm{eff}}} \, B_{ij}(\tilde\varphi_{\mathrm{eff}}^{(B,f)})& &\equiv\lambda^{(f)}_{\varphi_{\mathrm{eff}}} \, \kappa^{(f)}_{\varphi_{\mathrm{eff}}} \, B^{(f)}_{ij}\;. 
\end{align} The tilde indicates that the overall scale has been factored out of the flavon directions, \begin{equation} \tilde\varphi_{\mathrm{eff}}^{(A,B)} := \varphi_{\mathrm{eff}}^{(A,B)}/\Lambda_{\varphi_{\mathrm{eff}}^{(A,B)}} \quad\text{such that}\quad \tilde\varphi_{\mathrm{eff}}^{(A,B)} ~=~ \left(\tilde\varphi_{\mathrm{eff},1}^{(A,B)},\,\tilde\varphi_{\mathrm{eff},2}^{(A,B)},\,1\right)^\mathrm{T}\,. \end{equation} Finally, we can define the parameters \begin{align} \alpha_i^{(f)} &:= \sqrt{\lambda_{\varphi_{\mathrm{eff}}}^{(f)}} \, \vev{\tilde{\varphi}_{\mathrm{eff},i}^{(A,f)}}\;,& \beta_i^{(f)} &:= \sqrt{\lambda_{\varphi_{\mathrm{eff}}}^{(f)}} \, \vev{\tilde{\varphi}_{\mathrm{eff},i}^{(B,f)}}\;,& \end{align} and one can show that \begin{align} \lambda_{\varphi_{\mathrm{eff}}}^{(f)} A_{ij}^{(f)} &= \alpha_i^{(f)}\,\alpha_j^{(f)}\;,& \lambda_{\varphi_{\mathrm{eff}}}^{(f)} B_{ij}^{(f)} &\approx \beta_i^{(f)}\,\beta_j^{(f)}\;.& \end{align} The parameters $\alpha_i^{(f)}$ and $\beta_i^{(f)}$ provide a good measure of the size of the K\"ahler corrections. Altogether, the parameters of the quark sector are given by the components of the up-type flavon triplet \begin{equation} \label{eq:uflavon} \vev{\tilde\varphi_\mathrm{u}} ~=~ \Big( \vev{\tilde\varphi_{\mathrm{u},1}} \,\exp\;\!\!\left(\mathrm{i}\vev{\vartheta_{\mathrm{u},1}}\right), \vev{\tilde\varphi_{\mathrm{u},2}} \,\exp\;\!\!\left(\mathrm{i}\vev{\vartheta_{\mathrm{u},2}}\right), 1\Big)\;, \end{equation} and the K\"ahler corrections additionally introduce $9$ parameters $\alpha_i^{(f)}$, $9$ parameters $\beta_i^{(f)}$ and $3$ parameters $\kappa_{\varphi_\mathrm{eff}}^{(f)}$. To reduce the number of parameters we impose the constraints: \begin{itemize} \item $\kappa_{\varphi_\mathrm{eff}}^{(f)}=1$ for all $f\in\{\mathrm{u,d,q}\}$, \item $\alpha_i^{(f)} = \beta_i^{(f)}$ for all $f$ and $i\in\{1,2,3\}$, and \item all $\alpha_i^{(f)}\in\mathbb{R}$.
\end{itemize} We recall that the philosophy here is not to scan the full parameter space but to identify a region in the parameter space that agrees with realistic phenomenology in the first place. Including the constraints, we arrive at a total of 13 quark parameters that we include in our fit of both leptons and quarks. The resulting best-fit values are collected in table~\ref{tab:SimultaneousFit}. The magnitude of all required K\"ahler corrections satisfies $|\alpha^{(f)}_i|<1$. The modulus VEV $\vev{T}$ and the VEVs of the charged lepton and neutrino flavons $\vev{\tilde\varphi_{\mathrm{e},i}}$ and $\vev{\tilde\varphi_{\nu,i}}$ stay at the values obtained in the exclusive lepton fit, see table~\ref{tab:LeptonFitParameters}. Table~\ref{tab:SimultaneousFitObservables} shows our best fit compared to the experimental input values of quark and lepton parameters. The best fit to all fermion mass ratios, mixing angles and \ensuremath{\mathcal{CP}}\xspace phases yields $\chi^2=0.11$. Even though the quark sector fit is not predictive, we have fulfilled our goal to show that the eclectic scenario arising from a string compactification can fit the observed data well. \begin{table}[!t]
\vspace*{10pt} \begin{minipage}{0.25\textwidth}% \centering \subfloat[(a)][]{ \label{tab:SimultaneousFitParameters} \centering \scalebox{0.8}{ \begin{tabular}{cr|l} \toprule \multicolumn{2}{r|}{parameter} & best-fit value \\\midrule &$\im\, \vev T$ & $\phantom{-}3.195$ \\ &$\re\,\vev T$ & $\phantom{-}0.02279$ \\ &$\vev{\tilde{\varphi}_{\mathrm{u},1}}$ & $\phantom{-}2.0332\cdot10^{-4}$ \\ &$\vev{\vartheta_{\mathrm{u},1}}$ & $\phantom{-}1.6481$ \\ &$\vev{\tilde{\varphi}_{\mathrm{u},2}}$ & $\phantom{-}6.3011\cdot10^{-2}$ \\ &$\vev{\vartheta_{\mathrm{u},2}}$ & $-1.5983$ \\ &$\vev{\tilde{\varphi}_{\mathrm{e},1}}$ & $-4.069\cdot 10^{-5}$ \\ &$\vev{\tilde{\varphi}_{\mathrm{e},2}}$ & $\phantom{-}5.833\cdot10^{-2}$ \\ &$\vev{\tilde{\varphi}_{\nu,1}}$ & $\phantom{-}1.224\cdot10^{-3}$ \\ &$\vev{\tilde{\varphi}_{\nu,2}}$ & $-0.9857$ \\ \multirow{-11}{10pt}{\rotatebox{90}{superpotential}}&$\Lambda_\nu~[\mathrm{eV}]$ & $\phantom{-}0.05629$\\\midrule &$\alpha_1^{\mathrm{u}}$ & $-0.94917$ \\ &$\alpha_2^{\mathrm{u}}$ & $\phantom{-}0.0016906$ \\ &$\alpha_3^{\mathrm{u}}$ & $\phantom{-}0.31472$ \\ &$\alpha_1^{\mathrm{d}}$ & $\phantom{-}0.95067$ \\ &$\alpha_2^{\mathrm{d}}$ & $\phantom{-}0.0077533$ \\ &$\alpha_3^{\mathrm{d}}$ & $\phantom{-}0.30283$ \\ &$\alpha_1^{\mathrm{q}}$ & $-0.96952$ \\ &$\alpha_2^{\mathrm{q}}$ & $-0.20501$ \\ \multirow{-9}{10pt}{\rotatebox{90}{K\"ahler potential}}&$\alpha_3^{\mathrm{q}}$ & $\phantom{-}0.041643$ \\ \bottomrule \end{tabular}} } \vspace*{37mm} \end{minipage}% \hspace*{28pt} \begin{minipage}{0.7\textwidth}% \centering \subfloat[(b)][]{ \label{tab:SimultaneousFitObservables} \centering \scalebox{0.8}{ \begin{tabular}{cr|llc} \toprule \multicolumn{2}{r|}{observable} & model best fit & exp. best fit & exp. 
$1\sigma$ interval \\\midrule &$m_\mathrm{u}/m_\mathrm{c}$ & $\phantom{-}0.00193$ & $\phantom{-}0.00193$ & $0.00133 \rightarrow 0.00253$ \\ &$m_\mathrm{c}/m_\mathrm{t}$ & $\phantom{-}0.00280$ & $\phantom{-}0.00282$ & $0.00270 \rightarrow 0.00294$\\ &$m_\mathrm{d}/m_\mathrm{s}$ & $\phantom{-}0.0505$ & $\phantom{-}0.0505$ & $0.0443 \rightarrow 0.0567$\\ &$m_\mathrm{s}/m_\mathrm{b}$ & $\phantom{-}0.0182$ & $\phantom{-}0.0182$ & $0.0172 \rightarrow 0.0192$ \\\cmidrule{2-5} &$\vartheta_{12}~[\mathrm{deg}]$ & $\,\;\!13.03$ & $\,\;\!13.03$ & $12.98 \rightarrow 13.07$\\ &$\vartheta_{13}~[\mathrm{deg}]$ & $\phantom{-}0.200$ & $\phantom{-}0.200$ & $0.193 \rightarrow 0.207$ \\ &$\vartheta_{23}~[\mathrm{deg}]$ & $\phantom{-}2.30$ & $\phantom{-}2.30$ & $2.26 \rightarrow 2.34$\\ \multirow{-8}{10pt}{\rotatebox{90}{quark sector}} &$\delta_\ensuremath{\mathcal{CP}}\xspace^\mathrm{q}~[\mathrm{deg}]$ & $\,\;\!69.2$ & $\,\;\!69.2$ & $66.1 \rightarrow 72.3$\\\midrule &$m_\mathrm{e}/m_\mu$ & $\phantom{-}0.00473$ & $\phantom{-}0.00474$ & $0.00470\rightarrow0.00478$ \\ &$m_\mu/m_\tau$ & $\phantom{-}0.0586$ & $\phantom{-}0.0586$ & $0.0581\rightarrow0.0590$ \\\cmidrule{2-5} &$\sin^2\theta_{12}$ & $\phantom{-}0.303$ & $\phantom{-}0.304$ & $0.292\rightarrow0.316$ \\ &$\sin^2\theta_{13}$ & $\phantom{-}0.0225$ & $\phantom{-}0.0225$ & $0.0218\rightarrow0.0231$ \\ &$\sin^2\theta_{23}$ & $\phantom{-}0.449$ & $\phantom{-}0.450$ & $0.434 \rightarrow 0.469$ \\\cmidrule{2-5} &$\delta_\ensuremath{\mathcal{CP}}\xspace^\ell/\pi$ & $\phantom{-}1.28$ & $\phantom{-}1.28$ & $1.14 \rightarrow 1.48$ \\ &$\eta_1/\pi$ & $\phantom{-}0.029$ & ~~~\,~- & - \\ &$\eta_2/\pi$ & $\phantom{-}0.994$ & ~~~\,~- & - \\ &$J_\ensuremath{\mathcal{CP}}\xspace$ & $-0.026$ & $-0.026$ & $-0.033\rightarrow-0.016$ \\ &$J_\ensuremath{\mathcal{CP}}\xspace^\mathrm{max}$ & $\phantom{-}0.0335$ & $\phantom{-}0.0336$ & $0.0329\rightarrow0.0341$ \\\cmidrule{2-5} &$\Delta m_{21}^2/10^{-5}~[\mathrm{eV}^2]$ & $\phantom{-}7.39$ & 
$\phantom{-}7.42$ & $ 7.22\rightarrow 7.63$ \\ &$\Delta m_{31}^2/10^{-3}~[\mathrm{eV}^2]$ & $\phantom{-}2.521$ & $\phantom{-}2.510$ & $ 2.483\rightarrow 2.537$ \\ &$m_1~[\mathrm{eV}]$ & $\phantom{-}0.0042$ & $\!<\!0.037$ & - \\ &$m_2~[\mathrm{eV}]$ & $\phantom{-}0.0095$ & ~~~\,~- & - \\ &$m_3~[\mathrm{eV}]$ & $\phantom{-}0.0504$ & ~~~\,~- & - \\ &$\sum_i m_i~[\mathrm{eV}]$ & $\phantom{-}0.0641$ & $\!<\!0.120$ & - \\ &$m_{\beta\beta}~[\mathrm{eV}]$ & $\phantom{-}0.0055$ & $\!<\!0.036$ & - \\ \multirow{-18}{10pt}{\rotatebox{90}{lepton sector}}&$m_{\beta}~[\mathrm{eV}]$ & $\phantom{-}0.0099$ & $\!<\!0.8$ & - \\\midrule & $\chi^2$ & $\phantom{-}0.11$ & & \\ \bottomrule \end{tabular} } } \end{minipage}% \caption{\label{tab:SimultaneousFit} Best fit results of our model in a simultaneous fit to quark and lepton sectors. (a) Best-fit values for the free model parameters. The lepton sector and modulus parameters agree with those of the exclusive lepton fit shown in table~\ref{tab:LeptonFitParameters}. (b) Best fit points of our model as compared to experimentally determined parameters.} \end{table} \section{Possible lessons for consistent bottom-up model building} Given an explicit example of a complete top-down model, we make some empirical observations that might be taken as useful guidelines for bottom-up constructions: (i) Neither modular nor traditional flavor symmetries arise alone. They arise as mutually overlapping parts of the full eclectic flavor symmetry, including also \ensuremath{\mathcal{CP}}\xspace-type and R symmetries, see~\eqref{eq:eclectic}. (ii) Modular weights of matter fields are fractional, while modular weights of (Yukawa) couplings are integer. (iii) Modular weights are $1:1$ ``locked'' to other flavor symmetry representations.
This holds true for all known top-down constructions~\cite{Kikuchi:2021ogn,Baur:2020jwc,Baur:2021mtl,Almumin:2021fbk,Ishiguro:2021ccl} and might be conjectured to be a general feature of top-down models~\cite{Baur:2021bly}. (iv) Different sectors of the theory may have different moduli and/or different residual symmetries, allowing for what has been called ``local flavor unification''~\cite{Baur:2019kwi}. Should all these features indeed be confirmed in other UV-complete top-down constructions, one may expect that, in modern language, the modular flavor ``swampland'' is much bigger than anticipated. \section{Important open problems} We now point out directions in which our discussion can be generalized, as well as important open problems. The additional compact dimensions in string theory may give rise to non-trivially interlinked extra tori with additional moduli, leading to metaplectic groups and their corresponding flavor symmetries~\cite{Ding:2020zxw,Ding:2021iqp,Nilles:2021glx}. Also, it would be important to investigate other consistent string configurations for possibly realistic eclectic flavor scenarios, in order to get a grasp of the \mbox{``size of the realistic `landscape' ''.} Note also that, for the sake of our discussion, we have taken the VEVs of the flavon fields as well as the size of the K\"ahler corrections as free parameters of our model. However, in a full string model the computation of the flavon potential and the dynamical stabilization of the flavon VEVs are in principle achievable. The same is true for the full constraints on the K\"ahler potential, see~\cite{Chen:2019ewa,Almumin:2021fbk}, including the computation of the potential of $T$, which corresponds to the ``evergreen'' problem of moduli stabilization; in the present context see~\cite{Novichkov:2022wvg} and references therein. None of these tasks has been accomplished so far; they remain open questions for our model.
Finally, note that while our investigation here was focused on the flavor structure, the framework we work in has been shown in many earlier influential works to be capable of realistic phenomenology also with respect to many other questions of particle physics and cosmology. Examples include grand unification with symmetry-based explanations for proton stability and the suppression of the $\mu$-term, mechanisms for supersymmetry breakdown and a successful solution to the hierarchy problem, an origin of dark matter, \textit{etc.}; see the references in~\cite{Baur:2022hma}. It would be attractive to extend our construction to other relevant phenomenological questions as well, such as identifying the cause of inflation or the origin of the baryon asymmetry of the Universe. \section{Summary} There are explicit models of compactified heterotic string theory that reproduce in the IR the MSSM+eclectic flavor symmetry+flavon fields. The complete eclectic flavor symmetry here can be unambiguously computed from the outer automorphisms of the Narain space group, and it non-trivially unifies previously discussed traditional, modular, R and \ensuremath{\mathcal{CP}}\xspace-type flavor symmetries, see equation~\eqref{eq:eclectic}. The eclectic flavor symmetry is broken by vacuum expectation values of the moduli and of the flavon fields. While residual symmetries are common, their breaking and subsequent approximate nature can help to naturally generate hierarchies in masses and mixing matrix elements. This allows analytic control over the generated hierarchies. Here we have identified one example of a heterotic string theory model compactified on $\mathbbm{T}^2/\Z3$ that can give rise to a realistic flavor structure of quark and lepton sectors. To show this, we have derived the super- and K\"ahler potential and identified vacua that give rise to non-linearly realized symmetries which protect potentially realistic hierarchical flavor structures.
Using the parameters of the effective superpotential, the non-canonical K\"ahler potential, as well as the vacuum expectation values of the flavons and the $T$ modulus field, we have performed a simultaneous fit to all experimentally determined quark- and lepton-sector parameters. All observables can be accommodated, and several to date undetermined parameters in the lepton sector are predicted by the fit. Nonetheless, we stress that our goal was not primarily the derivation of these predictions (which are likely very model specific) but to demonstrate, as a proof of principle, that a realistic SM flavor structure can be obtained in the tightly symmetry-constrained and predictive framework of UV-complete string theory models. Further topics to be investigated include the extra tori, the computation of the flavon potential, as well as moduli stabilization. \section*{Acknowledgements}\noindent I would like to thank my collaborators Alexander Baur, Hans Peter Nilles, Saul Ramos-S\'anchez, and Patrick Vaudrevange. A special thanks goes to Eleftheria Malami and Maria Laura Piscopo for supporting the experimental application of Monte Carlo methods during a visit to the Casino Baden-Baden.
\section{Introduction} In this paper we exhibit some virtual snowflakes, or {\it snowfakes\/}, generated by a natural, fully three-dimensional algorithm for snow crystal evolution. The present study extends our earlier work on growth and deposition \cite{GG1, GG2, GG3}, and other previous efforts in this direction \cite{Pac, Rei}. The key features of our model are {\it diffusion\/} of vapor, anisotropic {\it attachment\/} of water molecules, and a narrow {\it semi-liquid layer\/} at the boundary. All three ingredients seem to be essential for faithful emulation of the morphology observed in nature. The algorithm assumes a mesoscopic (micron) scale of basic units for the ice crystal and water vapor, which eliminates inherent randomness in the diffusion and the attachment mechanism. This brings the process within reach of realistic simulation; by contrast, any three-dimensional approach based on microscopic dynamics is completely beyond the scope of present computing technology. We refer the reader to \cite{GG3} for a brief history of snow crystal observation and modeling, background on our approach in a two-dimensional setting, and many references to the literature. See also \cite{NR} for another attempt at spatial mesoscopic modeling. There are many papers and books, for a variety of audiences, dealing with snowflake photography and classification, the underlying physics, or some combination thereof, so we will not offer a comprehensive review here. Excellent introductions to the subject include {\it the\/} classic book by Nakaya \cite{Nak}, early empirical studies and classification schemes \cite{BH} and \cite{ML}, and more recent papers and books by K.~Libbrecht \cite{Lib1, Lib2, Lib3, Lib4, Lib5, LR}. Among research papers that attempt to decipher the three-dimensional aspects of snow crystals, the standout reference is \cite{TEWF}; also worth mentioning are \cite{Iwa}, \cite{NK} and \cite{Nel}.
The single most convenient resource for comparison of our simulations to physical crystals is Libbrecht's field guide \cite{Lib6}. \vskip0.2cm \hskip-0.5cm \begin{minipage}{8cm} \vskip-0.15cm \includegraphics[trim=0.0cm 0cm 0.0cm 0.1cm, clip, height=1.5in]{introfig_1.jpg} \end{minipage} \vskip-0.25cm \begin{minipage}[b]{8cm} \includegraphics[trim=0cm 0cm 0cm 0cm, clip, height=2.25in, angle=0]{cs0gonda21km_1.jpg} \end{minipage} \hskip0.4cm \begin{minipage}[b]{8cm} \includegraphics[trim=0cm 0cm 0cm 0cm, clip, height=1.8in, angle=0]{cs0gonda21kmbottom.jpg} \end{minipage} \vskip-.3cm {\bf Fig.~1.} Tip instability and oblique top ({\it left\/}) and bottom ({\it right\/}) views of the final crystal. As a preview of the capabilities of our model, let us illustrate the crystal tip instability and initiation of side branching studied in the laboratory by Gonda and Nakahara \cite{GN}. A sequence of four still frames from their paper was reproduced in \cite{GG3} so we will not show it here. But Fig.~1 depicts the top view of a corresponding snowfake at four different times (12, 15, 18, and 21 thousand), and oblique views of the crystal's top and bottom at the final time. The parameters are: $\beta_{01}=2.8$, $\beta_{10}=\beta_{20}=2.2$, $\beta_{11}=\beta_{21}=1.6$, $\beta_{30}=\beta_{31}=1$, $\kappa\equiv .005$, $\mu_{10}=\mu_{20}=.001$, $\mu=.0001$ otherwise, $\phi=.01$, and $\rho=.12$. Their role, and that of the initial state, will be described in Section 2. Similarity between the real and simulated sequences is striking: in both instances a defect arises at a characteristic distance from the crystal tip, becomes more pronounced, and later gives rise to a side branch with its own ridge structure similar to that of the main branch. Note also that our snowfake has its ridges and most of its markings on the top side; the bottom is almost featureless. This is due to a small downward drift in our model, an aspect we will discuss later in more detail. 
The direction of the drift represents the motion of the crystal in the opposite direction --- we prefer upward motion because interesting features then appear on top, although this would obviously correspond to the bottom of a falling snowflake. We should also note that the drift value means that, during its evolution, our simulated crystal moved for about 200 space units, which is comparable to the diameter it reached. This is typical of drift values that erase features on one side without otherwise significantly changing the morphology. Our model thus predicts that a significantly larger range of motion during growth is not possible for most interesting physical snow crystals, such as dendrites or plates. Another example of our algorithm's potential to make new predictions about basic aspects of snow crystal growth is the location of markings. From micrographs, it is almost impossible to tell whether these are on the top, bottom, or inside a given physical specimen, so little attention has been paid to this issue to date. We have gathered a considerable amount of evidence that inside markings are quite common (cf.~Sections 7, 8 and 9). Our account will focus on seven case studies that reproduce many features commonly observed in actual snowflakes: ridges, ribs, flumes and other ``hieroglyphs,'' formation of side branches, emergence of sandwich plates, hollow columns, hollow prism facets, and so forth. We also explore dependence on the density of vapor, and the aforementioned effect of drift, and inhibition of side branches by the semi-liquid layer. Varying meteorological conditions during growth are considered very important \cite{Lib6} so we include several examples, such as plates with dendritic tips and capped columns, that are believed to arise due to sudden changes in the weather. However, we will encounter snowfakes that grew in a homogeneous environment but give the impression that they did not. 
We will occasionally address dependence of the final crystal on its early development, and conclude with a few eccentric examples that may be too brittle to occur in nature. These typically arise near a phase boundary, when the dominant direction of growth is precarious. A complete collection of snowfakes from our case studies (with some additional information, such as simulation array sizes), and a slide show are available for download from: \centerline{\tt http://psoup.math.wisc.edu/Snowfakes.htm} The first order of business, in the next section, is to describe the snowfake algorithm in detail. Four subsequent sections discuss computer implementation and visualization tools, mathematical foundations, parameter tuning, and extensions of the model. The remainder of the paper is then devoted to the case studies. \section{The algorithm for three-dimensional snow crystal growth} Our basic assumptions are as follows: \begin{itemize} \item[A1.] The mesoscopic (micron-scale) building blocks are (appropriately scaled) translates of the {\it fundamental prism\/}, which has hexagonal base of side length $1/\sqrt 3$ and height $1$; \item[A2.] In its early stages of growth, from microscopic to mesoscopic, the crystal forms a hexagonal prism, and then it maintains this simple polyhedral shape until it reaches the size of a few microns across. \item[A3.] Diffusion outside the growing crystal is isotropic except possibly for a small drift in the ${\mathbb Z}$-direction; \item[A4.] Crystallization and attachment rates depend on the direction and local convexity at the boundary; \item[A5.] There is a melting rate at the boundary, creating a quasi-liquid layer. \end{itemize} \noindent Note that the side (rectangular) faces of the fundamental prism are commonly referred to as {\it prism\/} faces, while the top and bottom (hexagonal) ones are called {\it basal\/} faces. 
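A quick geometric check on assumption A1: a regular hexagon with side $1/\sqrt 3$ has across-flats width exactly 1, so fundamental prisms centered on lattice points with unit bond length tile space with unit spacing in both the horizontal and vertical directions. A minimal sketch (illustrative only; the variable names are ours):

```python
import math

s = 1.0 / math.sqrt(3.0)   # side length of the hexagonal base (assumption A1)
h = 1.0                    # height of the fundamental prism

# Across-flats width of a regular hexagon with side s is sqrt(3)*s;
# with s = 1/sqrt(3) this equals 1, matching the unit lattice spacing.
width = math.sqrt(3.0) * s

# Base area is (3*sqrt(3)/2)*s^2 = sqrt(3)/2, so the prism volume is
# sqrt(3)/2 as well, since the height is 1.
volume = 1.5 * math.sqrt(3.0) * s**2 * h
```

In particular, a layer of such prisms has unit thickness, consistent with the unit vertical spacing of the lattice introduced below.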
The lattice for our model is ${\mathbb T}\times {\mathbb Z}$, where ${\mathbb T}$ is the planar triangular lattice (see Fig.~2). This is not precisely the crystalline lattice of hexagonal ice {\it Ih\/}, which is obtained by removing certain edges and sites from ${\mathbb T}\times {\mathbb Z}$, and then applying a periodic deformation \cite{NR}, but we are constructing a mesoscopic model that should obscure such fine details. Therefore, each $x\in {\mathbb T}\times {\mathbb Z}$ has 8 neighbors, 6 in the ${\mathbb T}$-direction and 2 in the ${\mathbb Z}$-direction. At each discrete time $t=0,1,2,\dots$ and with each site $x\in {\mathbb T}\times {\mathbb Z}$, we associate a Boolean variable and two varieties of mass: the state of the system at time $t$ at site $x$ is $\xi_t(x)=(a_t(x), b_t(x), d_t(x))$ where the attachment flag $$ a_t(x)= \begin{cases} 1 &\quad\text{if $x$ belongs to the crystal at time $t$},\\ 0 &\quad\text{otherwise;} \end{cases} $$ and $$ \begin{aligned} &b_t(x)=\text{the {\it boundary mass\/} at $x$ at time $t$}\quad &&\text{({\it frozen\/} if $a_t(x)=1$, {\it quasi-liquid\/} if $a_t(x)=0$)\/}, \\ &d_t(x)=\text{the {\it diffusive mass\/} at $x$ at time $t$}\quad&&\text{({\it vapor\/})}. \end{aligned} $$ Our dynamics assumes that the diffusive and the quasi-liquid mass both change to ice when the site joins the crystal, and stay in that state thereafter. The two types of mass can coexist on the boundary of the snowfake, but only boundary mass persists inside the snowfake while only diffusive mass occurs outside and away from the boundary. The initial state will consist of frozen mass 1 at each site of some finite set, on which also $a_0\equiv 1$, with $a_0$ and $b_0 \equiv 0$ and $d_0\equiv\rho$ everywhere else. In keeping with assumption (A2), the most natural choice for this finite set, a singleton at the origin, often does not work well, as its ${\mathbb Z}$-direction neighbors see 7 neighbors off the crystal's boundary. 
This means that it is common, even for low $\rho$, that the dynamics immediately triggers a rapid expansion in the ${\mathbb Z}$-direction. To prevent this singularity, our canonical initial state consists of a hexagon of radius 2 and thickness 1, consisting of 20 sites. Other non-symmetric initial states will be discussed later. \newcommand\orig{\text{\bf 0}} Let us now describe the update rule of our snowflake simulator, which performs steps ({\it i\/})--({\it iv\/}) below in order every discrete time unit. The reader should observe that total mass is conserved by each step, and hence by the dynamics as a whole. Write ${\mathcal N}_x^{{\mathbb T}}=\{x\}\cup\{y: y$ is a neighbor of $x$ in the ${\mathbb T}$-direction$\}$, ${\mathcal N}_x^{{\mathbb Z}}=\{x\}\cup\{y: y$ is a neighbor of $x$ in the ${\mathbb Z}$-direction$\}$ for the ${\mathbb T}$-{\it neighborhood\/} and ${\mathbb Z}$-{\it neighborhood\/} of $x$, respectively. We also let ${\mathcal N}_x= {\mathcal N}_x^{{\mathbb T}}\cup{\mathcal N}_x^{{\mathbb Z}}$, and set $$ \begin{aligned} &A_t=\{x: a_t(x)=1\}=\text{the snowfake at time $t$};\\ &\partial A_t=\{x\notin A_t: a_t(y)=1\text{ for some }y\in {\mathcal N}_x\}= \text{the boundary of the snowfake at time $t$};\\ &{\bar A}_t=A_t\cup \partial A_t. \end{aligned} $$ The complement of a set $A$ is denoted by $A^c$. Also, we use $^\circ$ (degree) and $'$ (prime) notation to denote amounts of mass before and after a step or substep is completed. If there is more than one intermediate step, we use double primes. This is necessary since some mass allocations may change more than once during a single cycle of the steps. At the end of each cycle the time $t$ advances to $t+1$. \vskip0.25cm \noindent{\bf Steps of the update rule:} \noindent{\it i\/. Diffusion} Diffusive mass evolves on $A_t^c$ in two, or possibly three, substeps. The first substep is by discrete diffusion with uniform weight $\frac 17$ on the center site and each of its ${\mathbb T}$-neighbors. 
Reflecting boundary conditions are used at the edge of the crystal. In other words, for $x\in {\bar A}_t^c$, \begin{equation} d_t'(x)=\frac 17\sum_{y\in {\mathcal N}_x^{{\mathbb T}}}d_t^\circ(y). \tag{1a} \end{equation} The second substep does the same in the ${\mathbb Z}$-direction: \begin{equation} d_t''(x)=\frac 47 d_t'(x)+\frac 3{14}\sum_{y\in {\mathcal N}_x^{{\mathbb Z}},y\ne x}d_t'(y). \tag{1b} \end{equation} For $x\in\partial A_t$ any term in the sum in (1a) (resp.~(1b)) corresponding to $y\in A_t$ is replaced by $d_t^\circ(x)$ (resp.~$d_t'(x)$). The reason for the weights in (1b) is as follows. Imagine we tessellate ${\mathbb R}^3$ with translates of the fundamental prism and scale the lattice ${\mathbb T}\times {\mathbb Z}$ so that the lattice points are in the centers of these prisms. The ``bonds'' in the top left frame of Fig.~2 thus all have unit length and we eventually visualize the crystal by drawing prisms that are centered about sites of $A_t$. Rule (1b) ensures that diffusion on the scaled lattice is isotropic, in agreement with assumption A3: one sweep of (1a)--(1b) gives mean squared displacement $3/7$ along each of the three coordinate axes. As mentioned in the Introduction, there is also good reason to consider the more general case of diffusion with drift in the ${\mathbb Z}$-direction, corresponding to downward (or upward) motion of the snowflake. The third diffusion substep is thus: \begin{equation} d_t'''(x)= (1-\phi\cdot(1-a_t(x-e_3)))\cdot d_t''(x)+\phi\cdot(1-a_t(x+e_3))\cdot d_t''(x+e_3), \tag{1c} \end{equation} where $e_3=(0,0,1)$ is the third basis vector. Parameter $\phi$ measures the strength of the drift, and needs to be small for the dynamics to remain diffusion-limited. \vskip0.25cm \noindent{\it ii\/. Freezing} Assume that $x\in \partial A_t$, and denote \begin{equation} n_t^{{\mathbb T}}(x)=\#\{y\in {\mathcal N}_x^{{\mathbb T}}: a_t^\circ(y)=1\}\wedge 3, \quad n_t^{{\mathbb Z}}(x)=\#\{y\in {\mathcal N}_x^{{\mathbb Z}}: a_t^\circ(y)=1\}\wedge 1.
\tag {2a} \end{equation} Proportion $1-\kappa(n_t^{{\mathbb T}}(x), n_t^{{\mathbb Z}}(x))$ of the diffusive mass at $x$ becomes boundary mass. That is, \begin{equation} \begin{aligned} &b_t'(x)=b_t^\circ(x)+(1-\kappa(n_t^{{\mathbb T}}(x), n_t^{{\mathbb Z}}(x)))d_t^\circ(x), \\ &d_t'(x)=\kappa(n_t^{{\mathbb T}}(x), n_t^{{\mathbb Z}}(x)) d_t^\circ(x).\\ \end{aligned} \tag{2b} \end{equation} The seven parameters $\kappa(i,j)$, $i\in \{0,1,2,3\}$, $j\in \{0,1\}$, $i+j>0$, where $i$ is the capped ${\mathbb T}$-count and $j$ the ${\mathbb Z}$-count as in the labels of Fig.~2, constitute one of the ingredients that emulate the dynamics of the quasi-liquid layer at the boundary of the crystal. The other ingredient, $\mu$, appears in step {\it iv\/} below. We assume that $\kappa$ decreases in each coordinate since ``more concave corners'' at the boundary $\partial A_t$, i.e., those with more neighbors in $A_t$, should catch diffusing particles more easily. \vskip0.25cm \noindent{\it iii\/. Attachment} Assume again that $x\in \partial A_t$ and define the neighborhood counts as in (2a). Then $x$ needs boundary mass at least $\beta(n_t^{{\mathbb T}}(x), n_t^{{\mathbb Z}}(x))$ to join the crystal: \begin{equation} \text{ If $b_t^\circ(x)\ge \beta(n_t^{{\mathbb T}}(x), n_t^{{\mathbb Z}}(x))$, then $a_t'(x)=1$.} \tag{3} \end{equation} Again, we have seven parameters $\beta(i,j)$, $i\in \{0,1,2,3\}$, $j\in \{0,1\}$, $i+j>0$, and the assignment only makes physical sense if $\beta$ decreases in each coordinate. In addition, we assume that $a_t'(x)=1$ automatically whenever the uncapped counts satisfy $\#\{y\in {\mathcal N}_x^{{\mathbb T}}: a_t^\circ(y)=1\}\ge 4$ and $n_t^{{\mathbb Z}}(x)\ge 1$. This last rule fills holes and makes the surface of the crystal smoother, without altering essential features of the dynamics. At sites $x$ for which $a_t'(x)=1$, the diffusive mass becomes boundary mass: $b_t'(x)=b_t^\circ(x)+d_t^\circ(x)$, $d_t'(x)=0$. Attachment is permanent, and there are no further dynamics at attached sites.
Thus we do not model sublimation, although it may play a significant role in the last stages of snow crystal evolution (cf.~p.~27 of \cite{Lib6}). \vskip0.25cm \noindent{\it iv\/. Melting} Proportion $\mu(n_t^{{\mathbb T}}(x), n_t^{{\mathbb Z}}(x))$ of the boundary mass at each boundary site becomes diffusive mass. Thus, for $x\in\partial A_t$, \begin{equation} \begin{aligned} &b_t'(x)=(1-\mu(n_t^{{\mathbb T}}(x), n_t^{{\mathbb Z}}(x)))b_t^\circ(x),\\ &d_t'(x)=d_t^\circ(x)+ \mu(n_t^{{\mathbb T}}(x), n_t^{{\mathbb Z}}(x)) b_t^\circ(x). \end{aligned} \tag{4} \end{equation} Again, $\mu$ is decreasing in each coordinate. \vskip0.25cm Fig.~2 summarizes our model in three frames. At the upper left is a portion of the underlying lattice ${\mathbb T}\times {\mathbb Z}$. The central site represented as a larger black ball has its neighborhood indicated in black, and a translate of the fundamental prism is centered at that site. In the upper right detail, blue translates of the fundamental prism are drawn around each site of a small crystal. Seven boundary sites are depicted in red and each is labeled by its boundary configuration. For example, the ``21'' site has 2 horizontal (${\mathbb T}$-) neighbors and 1 vertical (${\mathbb Z}$-) neighbor, and consequently needs boundary mass $\beta_{21}$ to join the crystal. Finally, the lower panel shows a flowchart for the algorithm. There are three epochs in the life of a site. Away from the crystal's boundary, it only exchanges diffusive mass $d_t$ with its neighbors. Once the crystal grows to reach the site's neighborhood, two additional effects, melting and freezing, promote exchange between diffusive mass $d_t$ and boundary mass $b_t$. Final changes occur once boundary mass exceeds the threshold $\beta$ (which depends on the neighborhood configuration): the site attaches and the two types of mass merge into $b_t$. 
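The three boundary substeps in this flowchart can be transcribed almost literally at the level of a single boundary site. The following sketch is illustrative only (the function names are ours; {\tt kappa}, {\tt beta} and {\tt mu} stand for lookup tables keyed by the capped neighbor-count pair, with the ${\mathbb T}$-count first as in the labels of Fig.~2); each function returns an updated triple $(a,b,d)$ and preserves the total mass $b+d$:

```python
def counts(nT_raw, nZ_raw):
    """Capped neighbor counts of (2a): T-count capped at 3, Z-count at 1."""
    return min(nT_raw, 3), min(nZ_raw, 1)

def freeze(a, b, d, n, kappa):
    """Step ii, eq. (2b): proportion 1 - kappa of the diffusive mass freezes."""
    k = kappa[n]
    return a, b + (1.0 - k) * d, k * d

def attach(a, b, d, nT_raw, nZ_raw, beta):
    """Step iii, eq. (3): attach once b >= beta; also automatically when the
    uncapped counts satisfy nT >= 4 and nZ >= 1 (hole filling)."""
    if b >= beta[counts(nT_raw, nZ_raw)] or (nT_raw >= 4 and nZ_raw >= 1):
        return 1, b + d, 0.0   # remaining diffusive mass merges into b
    return a, b, d

def melt(a, b, d, n, mu):
    """Step iv, eq. (4): proportion mu of the boundary mass melts back."""
    m = mu[n]
    return a, (1.0 - m) * b, d + m * b
```

For instance, with the Fig.~1 values $\kappa\equiv.005$, $\beta_{21}=1.6$ and $\mu_{21}=.0001$, a ``21'' site freezes $99.5\%$ of the diffusive mass it sees in each step, attaches only once its accumulated boundary mass reaches $1.6$, and returns a fraction $10^{-4}$ of its boundary mass to the vapor.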
\begin{comment} \begin{center} {\includegraphics[trim=0cm 0cm 0cm 0cm, clip, height=10.5cm]{fig2psp.jpg}} \end{center} \vskip-0.25cm \end{comment} \vskip-0.1cm \hskip-0.5cm \begin{minipage}{8cm} \resizebox{16cm}{!}{\input{test2.pdf_t}} \end{minipage} \vskip-0.1cm {\bf Fig.~2.} The stacked triangular lattice ${\mathbb T}\times {\mathbb Z}$ ({\it top left\/}), coding of boundary configurations ({\it top right\/}), and a flowchart for the growth algorithm ({\it bottom\/}). \vskip0.5cm \section{Notes on computation and visualization} Following the same strategy as for our previous two-dimensional model \cite{GG3}, the dynamics actually run on the cubic lattice ${\mathbb Z}^3$, which can be mapped onto ${\mathbb T}\times {\mathbb Z}$. Our basic computational engine is written in C, but MATLAB is used for mapping and visualization. As mentioned previously, the snowfakes are depicted by drawing visible boundaries of translates of the fundamental prism centered on sites of $A_t$. Since this straightforward procedure makes jagged vertical boundaries, we apply a smoothing algorithm at the boundary that enlarges the crystal by no more than one mesoscopic unit. (This algorithm is {\it not\/} applied to the small snowfake in Fig.~2.) MATLAB's {\tt patch} routine renders the faces. For better results we then emphasize edges using the {\tt line} routine. MATLAB's visualization tools certainly provide adequate representations for detailed investigation of the resulting crystals. They do not, however, give a satisfactory comparison with the best snowflake photographs \cite{LR,Lib5,Lib6}, typically taken from directly above the (predominantly two-dimensional) crystal, which is in turn illuminated from below. This viewpoint can be effectively simulated by ray-tracing, as implemented here by the POV-Ray software \cite{POV}. Our program automatically outputs a file with a triangulation of the crystal's boundary, which is then used by the {\tt mesh2} command in POV-Ray.
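For rendering, each lattice site must be assigned physical coordinates. The following axial-coordinate convention is one self-contained possibility (our illustration, not necessarily the mapping used by the C engine); it embeds the lattice so that all 8 bonds have unit length, as required for centering fundamental prisms on lattice sites:

```python
import math

# Axial offsets of the 6 planar (T-direction) neighbors of site (i, j).
T_NEIGHBORS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def embed(i, j, k):
    """Physical coordinates of lattice site (i, j, k): the triangular
    lattice is sheared into the plane, the Z-direction is vertical."""
    return (i + 0.5 * j, (math.sqrt(3.0) / 2.0) * j, float(k))

# All six T-neighbors of the origin sit at distance 1, as do the two
# Z-neighbors (0, 0, +-1); prisms drawn around sites then tile space.
dists = [math.dist(embed(0, 0, 0), embed(di, dj, 0))
         for di, dj in T_NEIGHBORS]
```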
We would like to point out that both the algorithm and visualization procedures require considerable computing power and memory. At present (fall 2007), our simulations are very time consuming, barely feasible on commercial personal computers. (In fact, an adaptive resolution algorithm is necessary to make the boundary descriptions manageable.) Progress in studying snowfakes is therefore quite slow, precluding systematic classification of the dynamics. Our goal has been to find representative examples that seem to replicate physical snow crystals and thereby shed light on their evolution. For computational efficiency, if the diffusion step is isotropic one can exploit symmetry by taking the finite lattice to be a discrete hexagonal prism with patched wrap edge conditions. When $\phi=0$ and the initial state has complete symmetry, it then suffices to compute the dynamics on $\frac 1{24}$ of the whole space. There are two good reasons for giving up complete symmetry of the rule. First, the initial state may not be symmetric, and second, the diffusion may have a drift. To retain some of this efficiency, we only give up reflectional symmetry around the $xy$-plane (recall that the drift is only in the ${\mathbb Z}$-direction), allowing the initial state to depend on the $z$ coordinate, but retaining its hexagonal symmetry in the $x$ and $y$ coordinates. This increases the space and time demands of the fully symmetric program by a factor of 2. The program stops automatically when the density at the edge of the lattice falls below a given proportion of the initial density (typically $2\rho/3$ or $\rho/2$), or when the crystal gets too close to the edge (snowfake radius greater than 80\% of the radius of the system).
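The $1/24$ reduction is available precisely because the weights in (1a)--(1b) make the diffusion step isotropic. As a sketch, one sweep on a small periodic array might look as follows (axial indexing of the planar lattice; the reflecting boundary conditions at the crystal and the drift substep (1c) are omitted for brevity, and the code is an illustration rather than our C engine):

```python
import numpy as np

# Axial offsets of the 6 planar (T-direction) neighbors.
T_NEIGHBORS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def diffuse(d):
    """One diffusion sweep on a periodic array d[i, j, k]: substep (1a)
    averages with weight 1/7 over a site and its 6 T-neighbors; substep
    (1b) then uses weights 4/7 (center) and 3/14 (each Z-neighbor)."""
    d1 = d.copy()
    for di, dj in T_NEIGHBORS:
        d1 += np.roll(d, (di, dj), axis=(0, 1))
    d1 /= 7.0
    return (4.0 / 7.0) * d1 + (3.0 / 14.0) * (
        np.roll(d1, 1, axis=2) + np.roll(d1, -1, axis=2))
```

Both substeps are convex averages, so a uniform field $d\equiv\rho$ is invariant and total diffusive mass is conserved, as required for the model's overall mass conservation.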
This is a partial differential equation (pde) in which the crystal is represented by a growing set $A_t$ and the density (i.e., supersaturation) of vapor outside it by $u=u(x,t)$. Then $u$ is 0 on the boundary $\partial A_t$, and satisfies the diffusion equation outside the crystal \begin{equation} \frac {\partial u}{\partial t}=\Delta u,\quad x\in A_t^c. \tag {1.1} \end{equation} The velocity of the boundary at a point $x\in\partial A_t$ with outside normal $\nu$ is given by a function \begin{equation} w\left( \frac{\partial u}{\partial \nu}, \nu\right). \tag{1.2} \end{equation} Given the slow growth of $A_t$, the diffusion equation (1.1) may be simplified to its equilibrium counterpart $\Delta u=0$ \cite{Lib2, Lib3, Lib4}, which makes this into an anisotropic version of the {\it Hele-Shaw\/} problem. Presumably, under diffusion scaling, in which space is scaled by $\epsilon$, time by $\epsilon^{-2}$, and $\epsilon\to 0$, the density field and the occupied set in our model converge to a solution of the Stefan problem. We hope to provide rigorous justification for this connection, and identification of the limit $w$ in terms of model parameters, in future work. The boundary velocity function $w=w(\lambda,\nu)$ is defined for $\lambda\ge 0$ and three-dimensional unit vectors $\nu\in S^2$. In order to develop a rigorous mathematical theory, the most convenient assumptions are that $w$ is continuous in both variables, nondecreasing in $\lambda$, and satisfies $w(\lambda,\nu)\le C\lambda$ for some constant $C$ independent of $\lambda$ and $\nu$. Under these conditions, the non-isotropic Stefan problem (1.1--1.2) has a unique viscosity solution at all times $t\ge 0$, starting from any smooth initial crystal. This is proved in \cite{Kim} for the isotropic case (when $w$ is constant); assuming the listed properties of $w$, the proof extends to our general setting.
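For orientation, when $w$ is linear in its first argument (as for the model studied here, where the attachment and melting rates do not depend on the vapor density), the boundary condition (1.2) takes the familiar anisotropic one-phase Stefan form
\begin{equation*}
V_\nu=w\left( \frac{\partial u}{\partial \nu}, \nu\right)=c(\nu)\,\frac{\partial u}{\partial \nu}, \qquad x\in\partial A_t,
\end{equation*}
where $V_\nu$ denotes the outward normal velocity of $\partial A_t$; the factorization and the mobility notation $c(\nu)$ are ours and serve only as an illustration.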
We should note, however, that it has long been known that the crystal's boundary will not remain smooth \cite{SB}. Indeed, this will be no mystery once we present our simulations, which feature a considerable variety of singularities and instabilities. Presumably these make direct numerical computation with the pde very challenging, explaining why numerical pde-based models for snow crystal growth have not been satisfactory (cf.~\cite{Sch}). For further mathematical theory and references, we refer the reader to \cite{Kim, CK}. For the model studied here, $w(\lambda, \nu)$ will be linear in $\lambda$, since the attachment and melting rates are independent of the vapor density. This may not always be the case; in fact, some of the literature even considers the possibility that $w$ is non-monotone in $\lambda$ \cite{Lib3, GG3}. Analysis of such cases would present new theoretical challenges, and from simulations of our 3d model it appears that nonmonotonicity is not needed to reproduce the phenomena observed in nature. Monotone nonlinearity, arising from monotone density-dependent rates, is harder to dismiss and worth further investigation -- for instance, it is possible that $w$ vanishes for very small $\lambda$. Once we accept that our scheme approximates the viscosity solution of (1.1--1.2), the macroscopic evolution of the crystal is uniquely determined by its initial state and the velocity function $w$. In turn, $w$ is determined by very few physical parameters, perhaps just two: temperature and atmospheric pressure \cite{Lib2, Lib3, Lib4}. Therefore, possible evolutions from a fixed seed comprise a three-dimensional manifold (its coordinates being the supersaturation level, temperature, and pressure) in an infinite-dimensional space of possible velocities $w$. 
Much of the ongoing snow crystal research constitutes an attempt to understand the structure of this manifold, a daunting task since the underlying (perhaps quantum) attachment physics is very poorly understood, controlled homogeneous environments are hard to design, and crystal evolution is difficult to record. Our model does not have these problems. Instead, its main weakness is the number of free parameters that need to be tuned to approximate $w$ at a particular temperature and pressure. It helps that our parameters have intuitive meaning, but finding a particular realistic snowfake involves approximating an a priori infinite-dimensional object $w$ by one of finite but high dimensionality. The challenge is compounded by very incomplete information -- all that is typically observable in nature is the final crystal, which may have been subjected to numerous changes in conditions and orientation during growth, as well as sublimation and perhaps even artifacts of the recording process. It is thus no surprise that our parameter selection is an arduous and imprecise task. In the next section we will describe some ad hoc rules that we have used to generate our case studies, but the issue of parameter selection is in dire need of further investigation. What we can say is that the best examples are quite sensitive to perturbations in $w$. Thus they require good approximations and a large number of judicious parameter choices. In addition, the dependence on the initial seed is often quite dramatic. These observations underscore both the marvel and the fragility of natural snowflakes. At the same time, we wish to emphasize the conceptual simplicity of our model. The large parameter space is a consequence of geometry rather than an excessive number of modeling ingredients. 
Apart from the two scalar parameters --- density $\rho$ and drift $\phi$ --- we have only three vector parameters --- attachment threshold $\beta$, freezing rate $1-\kappa$, and melting rate $\mu$ --- whose high dimensionality arises from the many possible boundary arrangements. The parameter set can be reduced, but {\it some\/} tuning will always be necessary, as illustrated by the ``random'' crystal in Fig.~3. This was obtained by choosing $\kappa\equiv .1$, $\mu\equiv .001$, $\rho=.1$, $\phi=0$, and all $\beta$'s equal to 1 except $\beta_{01}=1.73$ and $\beta_{10}=\beta_{20}=1.34$. These values are in a sensible neighborhood of the parameter space, but the last two attachment thresholds were selected by chance. The result has some physically reasonable features, but one immediately notices an excessive density of branches and inordinately high ridges. \begin{center} {\includegraphics[trim=0cm 0cm 0cm 0cm, clip, height=5cm]{figran.jpg}} \end{center} \vskip-0.5cm {\bf Fig.~3.} A ``failed'' snowfake. \vskip1cm \section{Effective choice of parameters for simulations} While the optimal choice of parameters requires considerable guesswork, there are a few guidelines we have developed. Some come from mathematical arguments, others from experimentation; both are described in this section. Our simulator represents diffusion by discrete averaging in time $t$, which is also discrete. The bulk effects of this operation expand at the rate $\sqrt t$, although the extreme radius of its influence (or {\it light cone\/}) grows linearly in $t$. If the initial density $\rho$ of our discrete vapor field is too large, then the crystal may expand in some direction as fast as the light cone, or perhaps fall behind it by $\mathcal O(\sqrt t)$. We call parameter sets leading to this behavior the {\it Packard regime\/}; it is clearly not physical, as it depends on the discrete nature of the averaging. 
However, systems of this sort are able to generate fractal plates reminiscent of Packard snowflakes \cite{Pac, GG3} and exhibit one variety of faceting (cf.~\cite{NR}). In our simulations we systematically avoid the Packard regime by keeping the density low. For the extremal points of our snowfakes not to expand at light speed, the conditions are $$ (1-\kappa_{01})\rho<\beta_{01},\quad (1-\kappa_{10})\rho< \beta_{10}, $$ as is easy to see from the description of the rule. Our densities are typically considerably smaller, since large densities generate expansion that is too rapid to be realistic, at least in its initial stages. As mentioned previously, a surprisingly important role is also played by the choice of initial seed. On the other hand, it is clear that a very large melting rate will stop growth altogether. This happens if the flow out of the boundary mass exceeds the flow in just before that mass exceeds the threshold for attachment. A sufficient condition for continual growth in all directions is therefore $$ \mu_{01}\beta_{01}<(1-\kappa_{01})\rho, \quad \mu_{10}\beta_{10}<(1-\kappa_{10})\rho, $$ since the 01 and 10 boundary arrangements always have the slowest potential growth. In the great majority of examples we will present, parameters for the 20 and 10 arrangements agree. In this case, the last condition is necessary as well --- if it does not hold, then the growth is convex-confined in the ${\mathbb T}$-direction. Let us now describe a few rules of thumb when searching for snowfakes that emulate nature. We commonly start with a {\it reduced\/} parameter set. Namely, we set the $\kappa$'s to a common value, say, $\kappa\equiv .1$. Then we select two different $\beta$ parameters, $\beta_{01}$ and $\beta_{10}=\beta_{20}=\beta_{11}$, with all the remaining $\beta$'s fixed to 1. The size of $\beta_{20}$ controls the strength of the convexifying mechanism, assumed to be the same in both the $xy$ and $z$ directions. 
Indeed, if $\beta_{20}$ is large, then the crystal will remain a perfect hexagonal prism for a long time. The only other parameters are the common value of all $\mu$'s and the vapor density $\rho$. This is a more manageable four-parameter space that encodes four essential elements of three-dimensional snowflake growth, each with a single tunable parameter: diffusing supersaturation level ($\rho$), convexifying strength ($\beta_{20}$), quasi-liquid layer smoothing ($\mu$), and preference for the ${\mathbb Z}$-direction over the ${\mathbb T}$-direction ($\beta_{01}/\beta_{20}$). This scheme is used to identify the neighborhood of a desired morphological type in phase space. Then parameters are perturbed for added realism. One of the most important lessons of our two-dimensional model \cite{GG3} was that the melting parameter $\mu$ inhibits side branching and is therefore important for dendrite formation. When $\mu\equiv 0$, it seems impossible to avoid an excessive density of branches. Indeed, this role of $\mu$ is easily understood. Namely, $\mu$ creates a positive density at the boundary, due to flow out of the boundary layer. This density has the effect of reducing the ambient vapor density by a fixed amount, independent of location, and hence disproportionately affects regions of smaller density. (To a very rough first approximation \cite{Lib4}, the expansion speed is proportional to $\sqrt\rho/\sqrt t$ when $\mu\equiv 0$.) Since there is clearly less mass between branches than at the tips, growth and side branching there get stunted by increasing $\mu$. Realistic ``classic'' dendrites occur for a relatively narrow range of choices for $\mu$, once the other parameters are held fixed. Typically, though, the other parameters need to be perturbed along with $\mu$; increasing $\mu$ alone tends to erode all complex structure. The markings seen on snow crystal plates are sometimes called {\it hieroglyphs\/}. 
These often have fairly regular geometric forms, such as ridges, flumes, ribs, and circular shapes, but can also exhibit more chaotic patterns. In photomicrograph collections \cite{BH, LR, Lib5, Lib6} it is usually unclear whether the marks are on the outside of the crystal or within what we call sandwich plates. In our experiments, the inner structures are {\it much\/} more prevalent, so we are glad to observe that they are abundant in nature \cite{EMP}. To obtain nice {\it outer\/} markings, the ratio $\beta_{01}/\beta_{20}$ needs to be sufficiently large, but there is then a tendency for the crystal to become too three-dimensional. Again, the correct choice is often rather delicate. Inner markings occur generically for small values of this ratio. Finally, different $\kappa$'s may appear to be a more natural mechanism to enforce anisotropy than different $\beta$'s, as they directly correspond to sticking, or {\it killing\/}, of particles at the crystal's boundary. However, for this effect to be significant, the $\kappa$'s need to be very close to 1; otherwise the killing at the crystal boundary is too rapid to make a difference, and then the already slow growth proceeds at an even more sluggish pace. While less physically appealing, we view the $\beta$'s as a reasonable compromise for the sake of computational efficiency. \vskip0.5cm \section{Variants and extensions of the model} \subsection{Uniform snowfakes} Since attachment thresholds $\beta$ vary, the mass of the final crystal is not uniform. There is a variant of our algorithm that removes this defect with little change in observed morphology. Assume that there is no automatic filling of holes; instead, boundary mass exactly 1 is needed for attachment when $n_t^{{\mathbb T}}(x)\ge 4$ and $n_t^{{\mathbb Z}}(x)\ge 1$. Then a uniform crystal is obtained by performing the following additional step just after step {\it iii} in the simulator: \noindent{\it iii'\/. 
Post-attachment mass redistribution} To redistribute any excess mass from the attached site to its unattached neighbors, let $$ n_t^{c}(x)=\#\{y\in {\mathcal N}_x: a_t^\circ(y)=0\} $$ be the number of non-attached neighbors. Then, for every $x$ with $a_t^\circ(x)=0$, $$ b_t'(x)=b_t^\circ(x)+\sum_{y\in {\mathcal N}_x:\, a_t^\circ(y)=1} \frac {b_t^\circ(y)-1}{n_t^c(y)}. $$ \subsection{Simulation without symmetry} As explained in Section 3, at the cost of a 24-fold slowdown compared to our fully symmetric model, implementation of the algorithm without exploiting symmetry makes it possible to study the evolution from arbitrary initial seeds. Such an extension is necessary in order to produce snowfakes corresponding to exotic forms such as triangular crystals, split stars, and bullets. We have conducted a few experiments along these lines with our planar model \cite{GG3}, but in three dimensions a simulator dramatically faster than our current one is needed. We plan to develop a suitably high-performance parallel version. \subsection{Random dynamics} Our only three-dimensional snowfakes to date are deterministic, since randomness would also require the simulation without symmetry just discussed. We propose to include an additional parameter $\epsilon$ representing residual noise on the mesoscopic scale, as we did in the two-dimensional setting \cite{GG3}. Again, $\epsilon$ would need to be quite small, say on the order of $10^{-5}$. The random perturbation of diffusive mass from \cite{GG3} is not suitable in 3d since it is not physical to violate mass conservation. Instead, a small random slowdown in the diffusion rate is more appropriate. To this end, first denote the (linear) operation on the field $d_t^\circ$ in (1a--1c) by ${\mathcal D}$; thus step {\it i\/} can be written as $d_t'''={\mathcal D}(d_t^\circ)$. Next, let $\xi_t(x)$, $t\ge0$, $x\in {\mathbb T}\times{\mathbb Z}$, be independent random variables, equal to $\epsilon>0$ or 0, each with probability $1/2$. 
Here the field $\xi$ represents the proportion of particles that refuse to diffuse at position $x$ and time $t$. The randomized step {\it i\/} now reads $$ d_t'''={\mathcal D}((1-\xi_t)d_t^\circ)+\xi_t d_t^\circ={\mathcal D}(d_t^\circ)+\xi_t d_t^\circ- {\mathcal D}(\xi_td_t^\circ). $$ In a natural way, this represents small random temperature fluctuations in space and time. Similarly, one could introduce a small proportion of particles that refuse to freeze in (2b), or melt in (4); e.g., (2b) would be replaced by \begin{equation*} \begin{aligned} &b_t'(x)=b_t^\circ(x)+(1-\kappa(n_t^{{\mathbb T}}(x), n_t^{{\mathbb Z}}(x)))d_t^\circ(x)(1-\xi_t(x)), \\ &d_t'(x)=\kappa(n_t^{{\mathbb T}}(x), n_t^{{\mathbb Z}}(x)) d_t^\circ(x)(1-\xi_t(x))+d_t^\circ(x)\xi_t(x).\\ \end{aligned} \end{equation*} \vskip0.5cm \section{Case study $i$ : ridges and plates} Our prototypical snowfake has $\rho=.1$ and the canonical initial state of radius 2 and thickness 1. Fig.~4 depicts the crystal after 70000 time steps, when its radius is about 350. Its parameters are $\beta_{01}=2.5$, $\beta_{10}=\beta_{20}=\beta_{11}=2$, $\beta_{30}=\beta_{21}=\beta_{31}=1$, $\kappa\equiv .1$, $\mu\equiv .001$, and $\phi=0$. \vskip-0.3cm \null\hskip-0.6cm \begin{minipage}[b]{8cm} \includegraphics[trim=8cm 1cm 8cm 2cm, clip, height=2.9in]{cs1t70000m.jpg} \end{minipage} \hskip0.5cm \begin{minipage}[b][7cm][t]{15cm} \vskip-0.25cm \includegraphics[trim=1.2cm 0cm 1.2cm 0cm, clip, height=2.5in]{cs1t70000rt.jpg} \end{minipage} \vskip-0.75cm {\bf Fig.~4.} The oblique (MATLAB-rendered) and top (ray-traced) views of the crystal. \vskip0.5cm \vfill\eject We invite the reader to compare the simulated crystal with some of the photographs at \cite{Lib5} and especially with Fig.~1(h) in \cite{TEWF}, a snowflake obtained at temperature about $-13^\circ$C. We think of our length unit as about $1\mu$m, so even the sizes of the two objects roughly match. 
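As a quick sanity check (a sketch in our notation, not the simulator's own code), one can verify that this prototype parameter set satisfies both families of inequalities from the section on effective parameter choice: the sub-light-speed conditions and the sufficient conditions for continual growth.

```python
# Check the prototype parameters of case study i against the inequalities
# from the section on effective parameter choice.  The dictionary layout
# is ours, chosen for this illustration.

rho = 0.1
kappa = 0.1     # common value of all kappa's
mu = 0.001      # common value of all mu's
beta = {"01": 2.5, "10": 2.0, "20": 2.0, "11": 2.0,
        "30": 1.0, "21": 1.0, "31": 1.0}

# sub-light-speed: (1 - kappa_01) rho < beta_01  and  (1 - kappa_10) rho < beta_10
sub_light_speed = ((1 - kappa) * rho < beta["01"]
                   and (1 - kappa) * rho < beta["10"])

# continual growth: mu_01 beta_01 < (1 - kappa_01) rho
#              and  mu_10 beta_10 < (1 - kappa_10) rho
continual_growth = (mu * beta["01"] < (1 - kappa) * rho
                    and mu * beta["10"] < (1 - kappa) * rho)

print(sub_light_speed, continual_growth)   # True True
```

With $(1-\kappa)\rho = 0.09$, both margins are comfortable: $0.09 < 2$ and $0.0025 < 0.09$, so the prototype sits well inside the physically sensible regime.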
Perhaps the most striking features shared by the snowfake in Fig.~4 and physical ones are the {\it ridges\/}, elevations in the middle of each main branch, with less pronounced counterparts on the side branches. We begin by illustrating how these ridges are formed and maintained. In the process we also encounter the {\it branching instability\/}, when the initial growth of a thin hexagonal plate is no longer viable and it gives birth to the six main branches. As shown in Fig.~5, ridges are formed quite early in the evolution, by mesoscopic bumps known as {\it macrosteps\/} that are near, but not too near, the center of the plate. This is how the ridges grow ({\it very\/} slowly) in the vertical direction --- compare with times 4044 and 7099, which also feature such bumps. The ridges spread to a characteristic width, but sharpen to a point near the branch tip. One can also see the commonly observed {\it flumes\/} (called {\it grooves\/} in \cite{Lib6}) that form on both sides of a ridge. \begin{center} {\includegraphics[trim=18cm 6cm 17cm 5cm, clip, height=3cm]{cs1t820.jpg} \includegraphics[trim=18cm 6cm 16.5cm 5cm, clip, height=3cm]{cs1t863.jpg} \includegraphics[trim=17.5cm 6cm 16cm 5cm, clip, height=3cm]{cs1t1600.jpg} \includegraphics[trim=16cm 6cm 14cm 5cm, clip, height=3cm]{cs1t4044.jpg} \includegraphics[trim=14cm 6cm 13cm 5cm, clip, height=3cm]{cs1t5500.jpg} \includegraphics[trim=13.5cm 6cm 12cm 5cm, clip, height=3cm]{cs1t7099.jpg} \includegraphics[trim=11cm 6cm 10cm 5cm, clip, height=3cm]{cs1t9500.jpg} } \end{center} \vskip-0.5cm {\bf Fig.~5.} The crystal at times 820, 863, 1600, 4044, 5500, 7099, and 9500. \vskip0.5cm The small indentation that emerges in the middle of each prism facet at time 5500, due to lower vapor density, has appeared several times before. However, this is the first instance when the growth is unable to repair it. 
Instead, the growth there virtually stops, while the six main arms continue to grow and eventually produce two types of side branches: common, relatively thick double-plated branches that we call {\it sandwich plates\/}, and more unusual thin plates with their own ridges. The tip of each arm assumes its characteristic shape by the final frame of Fig.~5. It is perhaps surprising how dramatically this scenario depends on the initial (micron scale) state. Keeping everything else the same, we change the initial prism to one with radius 2 and thickness 3. The previous rather complex and aesthetically pleasing evolution is replaced by a growing double plate (Fig.~6). (Remarkably, even adding a small drift does not help matters much.) This dichotomy arises frequently in our model --- within a neighborhood of the parameter space that produces planar crystals there are two stable attractors: one with outside ridges and the other a split plate with ridges on the inside. As much of the literature points out, split plates are extremely common in physical crystals (cf.~\cite{Iwa}). \vskip-0.25cm \begin{minipage}[b]{7cm} \includegraphics[trim=8cm 1cm 8cm 2cm, clip, height=2.5in]{cs1init22.jpg} \end{minipage} \hskip0.05cm \begin{minipage}[b][5cm][t]{3cm} \includegraphics[trim=5cm 8cm 0cm 6cm, clip, height=1.0in]{cs1init22_side.jpg} \end{minipage} \vskip-1cm {\bf Fig.~6.} Oblique and side views of the crystal from a different initial state. \vskip0.2cm Finally, let us experiment with changing the density $\rho$. We exhibit five crystals, each with the canonical initial condition and all other parameters of the prototype unchanged, but at different densities and different final times. Dramatically lower density does promote faceting (\cite{Lib6, LR}), but a moderate perturbation seems to mainly promote slower growth, without a change in morphology. 
\null\hskip-0.6cm \begin{minipage}[b]{8cm} \includegraphics[trim=8cm 1cm 8cm 2cm, clip, height=2.9in]{cs1dens15m.jpg} \end{minipage} \hskip0.5cm \begin{minipage}[b][7cm][t]{15cm} \vskip-0.25cm \includegraphics[trim=1.2cm 0cm 1.2cm 0cm, clip, height=2.5in]{cs1dens15rt.jpg} \end{minipage} \hfill\break \vskip-2.25cm \null\hskip5cm \includegraphics[trim=14.5cm 6cm 12.5cm 4cm, clip, height=1.75in]{cs1dens15detail.jpg} \vskip-0.2cm {\bf Fig.~7.} At density $\rho=.15$, the side branches have particularly well-defined ridges. \vskip0.5cm \vfill\eject \null\hskip-0.6cm \begin{minipage}[b]{8cm} \includegraphics[trim=8cm 1cm 8cm 2cm, clip, height=2.9in]{cs1dens09m.jpg} \end{minipage} \hskip0.5cm \begin{minipage}[b][7cm][t]{15cm} \vskip-0.25cm \includegraphics[trim=0.7cm 0cm 0.7cm 0cm, clip, height=2.5in]{cs1dens09rt.jpg} \end{minipage} \hfill\break \vskip-2.25cm \null\hskip5cm \includegraphics[trim=14cm 2cm 11cm 6cm, clip, height=1.75in]{cs1dens09detail.jpg} \vskip-0.6cm {\bf Fig.~8.} At density $\rho=.09$, the flumes are well-delineated. \vskip1.5cm \null\hskip-0.6cm \begin{minipage}[b]{8cm} \includegraphics[trim=8cm 1cm 8cm 2.0cm, clip, height=2.9in]{cs1dens05m.jpg} \end{minipage} \hskip0.5cm \begin{minipage}[b][7cm][t]{15cm} \vskip-0.25cm \includegraphics[trim=0.7cm 0cm 0.7cm 0cm, clip, height=2.5in]{cs1dens05rt.jpg} \end{minipage} \vskip-0.75cm {\bf Fig.~9.} Density $\rho=.05$ results in sectored plates. \vskip1cm \null\hskip-0.6cm \begin{minipage}[b]{8cm} \includegraphics[trim=8cm 1cm 8cm 2cm, clip, height=2.9in]{cs1dens045m.jpg} \end{minipage} \hskip0.5cm \begin{minipage}[b][7cm][t]{15cm} \vskip-0.25cm \includegraphics[trim=0.7cm 0cm 0.7cm 0cm, clip, height=2.5in]{cs1dens045rt.jpg} \end{minipage} \vskip-0.75cm {\bf Fig.~10.} Density $\rho=.045$ results in sectored branches. 
\vskip1.5cm \null\hskip-0.6cm \begin{minipage}[b]{8cm} \includegraphics[trim=8cm 1cm 8cm 2cm, clip, height=2.9in]{cs1dens04m.jpg} \end{minipage} \hskip0.5cm \begin{minipage}[b][7cm][t]{15cm} \vskip-0.25cm \includegraphics[trim=0.7cm 0.0cm 0.7cm 0.0cm, clip, height=2.5in]{cs1dens04rt.jpg} \end{minipage} \vskip-0.75cm {\bf Fig.~11.} Density $\rho=.04$ results in sandwich plates with inner ridges. \vskip.75cm The example in Fig.~11 (pictured at time $120000$) never undergoes the branching instability illustrated in Fig.~5, although it does develop fairly standard ridges that persist until about time $40000$. This is the time shown in the first frame of Fig.~12; subsequent frames show the evolution in time increments of 10000. We observe that a completely different {\it sandwich instability\/} takes place: first the tips and then the sides of the snowfake thicken and develop sandwich plates. It is also clear from the time sequence that this morphological change is accompanied by a significant slowdown in growth. We should emphasize that this slowdown is {\it not\/} due to the depletion of mass on a finite system: much larger systems give rise to the same sandwich instability well before the edge density diminishes significantly. Neither is this slowdown accompanied by a significant growth in the ${\mathbb Z}$-direction --- in the period depicted, the radius in the ${\mathbb Z}$-direction increases from 6 to 7, whereas the radius in the ${\mathbb T}$-direction increases from 67 to 87. Instead, much of the growth fills space between the ridges, the remnants of which end up almost completely below the surface. 
\begin{center} {\includegraphics[trim=13cm 5cm 11cm 4cm, clip, height=2.8cm]{cs1dens04_t40000.jpg}} {\includegraphics[trim=12cm 5cm 10.5cm 4cm, clip, height=2.8cm]{cs1dens04_t50000.jpg} } {\includegraphics[trim=11cm 5cm 10cm 4cm, clip, height=2.8cm]{cs1dens04_t60000.jpg} } {\includegraphics[trim=11cm 5cm 10cm 4cm, clip, height=2.8cm]{cs1dens04_t70000.jpg} } {\includegraphics[trim=11cm 5cm 9.5cm 4cm, clip, height=2.8cm]{cs1dens04_t80000.jpg} } \end{center} \vskip-0.5cm {\bf Fig.~12.} The crystal of Fig.~11 at earlier times. \vskip0.5cm Note that the snowfake of Fig.~10 is also experiencing the sandwich instability at about the capture time. The difference in that case is that the growing crystal also experienced the branching instability earlier in its development. \vskip1.25cm \section{Case study $ii$ : classic dendrites} \null\hskip-0.6cm \begin{minipage}[b]{8cm} \hskip0.5cm \includegraphics[trim=8cm 1cm 8cm 2cm, clip, height=2.9in]{cs2dens105m.jpg} \end{minipage} \hskip1cm \begin{minipage}[b][7cm][t]{15cm} \vskip-0.25cm \includegraphics[trim=1.4cm 0cm 1.4cm 0cm, clip, height=2.5in]{cs2dens105rt.jpg} \end{minipage} \vskip-0.8cm {\bf Fig.~13.} $\rho=.105$ : a fern dendrite. \vskip0.5cm For this series of snowfakes, $\beta_{01}=1.6$, $\beta_{10}=\beta_{20}=1.5$, $\beta_{11}=1.4$, $\beta_{30}=\beta_{21}=\beta_{31}=1$, $\kappa\equiv .1$, all $\mu\equiv .008$, $\phi=0$, and growth starts from the canonical initial state. We will again look at how morphology is affected by different vapor densities $\rho$. The simulations argue persuasively that the frequency of side branches decreases with decreasing $\rho$. When $\rho=.105$, the branches are so dense that the crystal is rightly called a fern, while the examples with $\rho=.1$ and $\rho=.095$ have the classic look of winter iconography. These are our largest crystals, with radii around 400. 
A more substantial decrease in $\rho$ eliminates any significant side branching on this scale, resulting in a simple star for $\rho=.09$. As should be expected from Section 7, further decrease finally produces a sandwich instability at the tips, resulting in thick double plates. In this instance, slow growth at the branch tips is accompanied by significant fattening in the ${\mathbb Z}$-direction. \vskip1cm \null\hskip-0.6cm \begin{minipage}[b]{8cm} \includegraphics[trim=8cm 1cm 8cm 2cm, clip, height=2.9in]{cs2dens1m.jpg} \end{minipage} \hskip0.5cm \begin{minipage}[b][7cm][t]{15cm} \vskip-0.25cm \includegraphics[trim=1.4cm 0cm 1.4cm 0cm, clip, height=2.5in]{cs2dens1rt.jpg} \end{minipage} \vskip-0.75cm {\bf Fig.~14.} $\rho=.1$ : a classic dendrite. \vskip2cm \null\hskip-0.6cm \begin{minipage}[b]{8cm} \includegraphics[trim=8cm 1cm 8cm 2cm, clip, height=2.9in]{cs2dens095m.jpg} \end{minipage} \hskip0.5cm \begin{minipage}[b][7cm][t]{15cm} \vskip-0.25cm \includegraphics[trim=1.4cm 0cm 1.4cm 0cm, clip, height=2.5in]{cs2dens095rt.jpg} \end{minipage} \vskip-0.75cm {\bf Fig.~15.} $\rho=.095$ : fewer side branches. \vskip0.5cm \null\hskip-0.6cm \begin{minipage}[b]{8cm} \includegraphics[trim=8cm 1cm 8cm 2cm, clip, height=2.9in]{cs2dens09m.jpg} \end{minipage} \hskip0.5cm \begin{minipage}[b][7cm][t]{15cm} \vskip-0.25cm \includegraphics[trim=1.4cm 0cm 1.4cm 0cm, clip, height=2.5in]{cs2dens09rt.jpg} \end{minipage} \vskip-0.75cm {\bf Fig.~16.} $\rho=.09$ : no significant side branches on this scale. \vskip0.0cm \null\hskip-0.6cm \begin{minipage}[b]{8cm} \includegraphics[trim=8cm 1cm 8cm 2cm, clip, height=2.9in]{cs2dens082m.jpg} \end{minipage} \hskip0.5cm \begin{minipage}[b][7cm][t]{15cm} \vskip-0.25cm \includegraphics[trim=1.2cm 0cm 1.2cm 0cm, clip, height=2.5in]{cs2dens082rt.jpg} \end{minipage} \vskip-0.75cm {\bf Fig.~17.} $\rho=.082$ : the tip undergoes a sandwich instability. \vskip0.5cm The crystal in Fig.~17 is captured at about time 60000. 
The series of close-ups in Fig.~18 provides another illustration of the sandwich instability --- snapshots of the same snowfake are shown at time intervals of 1000, starting from time 37000. \begin{center} {\includegraphics[trim=14cm 10cm 12cm 0cm, clip, height=2.8cm]{cs2t37K.jpg}} {\includegraphics[trim=14cm 10cm 12cm 0cm, clip, height=2.8cm]{cs2t38K.jpg}} {\includegraphics[trim=14cm 10cm 12cm 0cm, clip, height=2.8cm]{cs2t39K.jpg}} {\includegraphics[trim=14cm 10cm 12cm 0cm, clip, height=2.8cm]{cs2t40K.jpg}} {\includegraphics[trim=14cm 10cm 12cm 0cm, clip, height=2.8cm]{cs2t41K.jpg}} {\includegraphics[trim=14cm 10cm 12cm 0cm, clip, height=2.8cm]{cs2t42K.jpg}} \end{center} \vskip-0.5cm {\bf Fig.~18.} Close-up of the sandwich instability at $\rho=.082$. \vskip0.5cm Our final example, with $\rho=.081$, demonstrates that a further decrease in density makes the crystal increasingly three-dimensional. \null\hskip-0.6cm \begin{minipage}[b]{8cm} \includegraphics[trim=8cm 1cm 8cm 2cm, clip, height=2.9in]{cs2dens081m.jpg} \end{minipage} \hskip0.5cm \begin{minipage}[b][7cm][t]{15cm} \vskip-0.25cm \includegraphics[trim=0.7cm 0cm 0.7cm 0cm, clip, height=2.5in]{cs2dens081rt.jpg} \end{minipage} \vskip-0.75cm {\bf Fig.~19.} Fattening from the tip inward at $\rho=.081$. \vskip1cm \section{Case study $iii$ : sandwich plates} When growth in the ${\mathbb Z}$-direction is much slower than in the $xy$-plane, outer ridges never develop. Instead, the dynamics grows a featureless prism, which, when sufficiently thick, undergoes a sandwich instability producing inner ridges. Much later the crystal experiences the branching instability, with plate-like branches that bear a superficial resemblance to Packard snowflakes \cite{Pac, GG2} during early stages. Throughout the evolution the external surface of the crystal has few markings, whereas inside features include ridges and {\it ribs\/}, which signify gradual thinning of the plates from the center outward before the branching instability. 
The sole surface designs are {\it reverse shapes\/}, which occur when the crystal grows in the ${\mathbb Z}$-direction from buds that arise close to the tips. These macrostep nuclei result in rapid growth of a single layer in the ${\mathbb T}$-direction until this layer outlines a nearly circular hole near the crystal's center; the hole then proceeds to shrink much more slowly. We note that this observation provides a convincing explanation for the circular markings seen on many snow crystal photographs \cite{Lib6, LR}. It also suggests that ribs are predominantly inner structures. While outer ribs could occur due to instabilities or changing conditions (cf.~Fig.~11), there is scant evidence of them in electron microscope photographs \cite{EMP}, which completely obscure inner structure. On the other hand, those photos reveal an abundance of sandwich plates, which appear as the crystal centers, at the tips of the six main arms, and as side branches. We now present two examples. Both start from the canonical seed. In the first, depicted in Fig.~20, $\beta_{01}=6$, $\beta_{10}=\beta_{20}=2.5$, $\beta_{11}=2$, and the remaining $\beta$'s are 1. All $\kappa$'s are .1, except that $\kappa_{01}=.5$, $\mu\equiv .0001$, and $\rho=.08$. The final radius of the crystal at the capture time 100000 is about 150. Note that the main ridge is interrupted: while initially it connects the two plates (and it has darker color in the ray-traced image as the background can be seen through it), it later splits and each plate has its own ridge. There is a suggestion of this phenomenon in real crystals (e.g., on p.~26 of \cite{Lib6}). \null\hskip-0.6cm \begin{minipage}[b]{8cm} \includegraphics[trim=8cm 1cm 8cm 2cm, clip, height=2.9in]{cs3plate1m.jpg} \end{minipage} \hskip0.5cm \begin{minipage}[b][7cm][t]{15cm} \vskip-0.25cm \includegraphics[trim=0.7cm 0cm 0.7cm 0cm, clip, height=2.5in]{cs3plate1rt.jpg} \end{minipage} \vskip-0.75cm {\bf Fig.~20.} A sandwich plate. 
\vskip1cm Our second example (Figs.~21 and 22) has interrupted main ridges and a few ribs. The parameter set now has $\beta_{01}=6.5$, $\beta_{10}=\beta_{20}=2.7$, and $\rho=.15$. The remaining values are as before, and the final sizes (this one at $t=36100$) are comparable. We provide a few intermediate stages and a detail of the inner structure. Observe the buds at times 25883 and 31671; also note that the outermost rib at time 19000 later disappears. \null\hskip-0.6cm \begin{minipage}[b]{8cm} \includegraphics[trim=8cm 1cm 8cm 2cm, clip, height=2.9in]{cs3plate2m.jpg} \end{minipage} \hskip0.5cm \begin{minipage}[b][7cm][t]{15cm} \vskip-0.25cm \includegraphics[trim=0.7cm 0cm 0.7cm 0cm, clip, height=2.5in]{cs3plate2rt.jpg} \end{minipage} \vskip-0.75cm {\bf Fig.~21.} Another sandwich plate. \vskip1cm \begin{center} {\includegraphics[trim=12cm 5cm 12cm 4cm, clip, height=2.5cm]{cs3t19000.jpg}} {\includegraphics[trim=12cm 5cm 11cm 4cm, clip, height=2.5cm]{cs3t25883.jpg}} {\includegraphics[trim=12cm 5cm 11cm 4cm, clip, height=2.5cm]{cs3t25900.jpg}} {\includegraphics[trim=11cm 5cm 11cm 4cm, clip, height=2.5cm]{cs3t25950.jpg}} {\includegraphics[trim=11cm 5cm 11cm 4cm, clip, height=2.5cm]{cs3t26000.jpg}} {\includegraphics[trim=10cm 5cm 10cm 4cm, clip, height=2.5cm]{cs3t31028.jpg}} \end{center} \begin{center} \vskip-0.25cm {\includegraphics[trim=0cm 0cm 0cm 0cm, clip, height=3cm]{cs3t19Kdetail_1.jpg}} \end{center} \vskip-0.3cm {\bf Fig.~22.} The plate of Fig.~21 at $t=19000, 25883, 25900, 25950, 26000, 31671$. The detail is from the first time, obtained by cutting the crystal along the plane $z=0$ and zooming in on the bottom half of the upper portion. \vskip1cm \section{Case study $iv$: the roles of drift and melting} From some of the electron micrographs at \cite{EMP}, it appears possible that the basal facets may have ridges and other markings on one side only, while the other side is nearly featureless. 
As far as we are aware, no attempt has been made to ``turn over'' these specimens and confirm the asymmetry, but \cite{NK,Nel} offer a theoretical explanation. They suggest that the one-sided structure is a consequence of early growth and that ridges are actually vestiges of the skeleton of hollow prisms such as Fig.~31 in Section 11 (see Fig.~3 of \cite{Nel}). In fact, it is widely held that the micron-scale prism from which a prototypical snowflake evolves develops slight asymmetries in the radii of its two basal facets, and that the larger facet acquires an increasing advantage from the feedback effect of diffusion-limited growth. As a result many crystals have a stunted hexagonal plate at their center. In \cite{Nak} this effect is described on p.~206 and in sketch 15 of Fig.~369. Another potential source of asymmetry in the ${\mathbb Z}$-direction is identified in Section 3.5 of \cite{Iwa} and on p.~18 of \cite{TEWF}, based on cloud tunnel experiments in the laboratory. Planar snowflakes evidently assume a preferred orientation parallel to the ground as they slowly fall, resulting in a small upward drift of the diffusion field relative to the crystal. We emulate these aspects of asymmetric growth by means of the drift $\phi$ in step (1c) of our algorithm and asymmetry of the initial seed as mentioned in Section 3. Consider first the snowfake of Fig.~1 and the closely related sectored plate in Fig.~23. The former starts from our fundamental prism and never undergoes the sandwich instability, but develops ridges on the bottom side and an almost featureless top due to the presence of $\phi = .01$. The dynamic parameters of the sectored plate below are identical, but growth starts from a mesoscopic prism that is 5 cells high, with radius 7 at the top and 3 at the bottom. The idea here is to mimic the situation where the upper basal plate has established an advantage over the lower basal plate early in the evolution. 
As is clear from the side view, in contrast to Fig.~6, growth of the lower facet stops completely due to diffusion limitation even though the small drift offers a slight advantage in the early stages. (According to \cite{Iwa}, falling snowflakes prefer the more aerodynamically stable orientation of Fig.~23.) Very many photos of physical snow crystals show evidence of such a stunted simple plate at the center; see \cite{Lib6}, pp.~75--76, for further discussion. \vskip0.3cm \begin{minipage}[b]{7cm} \includegraphics[trim=0.7cm 0cm 0.7cm 0cm, clip, height=2.5in]{cs4stuntedgnrt.jpg} \end{minipage} \hskip0.5cm \begin{minipage}[b][4cm][t]{7cm} \includegraphics[trim=0cm 0cm 0cm 0cm, clip, height=0.85in]{cs4stuntedgnm_1.jpg} \end{minipage} {\bf Fig.~23.} A sectored plate with a stunted double, from the top ({\it left\/}) and side ({\it right\/}). \vskip0.4cm \begin{comment} \null\hskip-0.75cm \begin{minipage}[b][0cm][b]{7cm} \includegraphics[trim=0cm 0cm 0cm 0cm, clip, height=0.85in]{cs4stuntedgnm_1.jpg} \end{minipage} \hskip1.5cm \begin{minipage}[b][6.5cm][b]{15cm} \vskip-0.0cm \includegraphics[trim=0.7cm 0cm 0.7cm 0cm, clip, height=2.5in]{cs4stuntedgnrt.jpg} \end{minipage} \vskip-0.0cm {\bf Fig.~23.} A sectored plate with a stunted double, from the side ($l$) and bottom ($r$). \vskip0.4cm \end{comment} \null\hskip -0.6cm \begin{minipage}[b]{8cm} \includegraphics[trim=8cm 1cm 8cm 2cm, clip, height=2.9in]{cs4mu005m.jpg} \end{minipage} \hskip0.5cm \begin{minipage}[b][7cm][t]{15cm} \vskip-0.25cm \includegraphics[trim=0.7cm 0cm 0.7cm 0cm, clip, height=2.5in]{cs4mu005rt.jpg} \end{minipage} \vskip-0.75cm {\bf Fig.~24.} A fern dendrite for $\mu_{10}=\mu_{20}=.005$. \vskip0.4cm The remaining examples of this section also start from slightly asymmetric seeds, experience a small drift, and have almost all their external markings on one side. 
Our goal is to explore the role of the melting rate, in much the same way we studied density dependence in Section 7, by varying $\mu$ in a series of snowfakes with all other parameters held fixed. In each instance, the seed has height 3, lower radius 2, and upper radius 1. For the next four crystals, $\beta_{01}=3$, $\beta_{10}=\beta_{20}=\beta_{11}=1.4$, $\beta_{30}=\beta_{21}=\beta_{31}=1$, $\kappa\equiv .1$, $\phi=.01$, and $\rho=.14$. Moreover $\mu_{01}=.002$, $\mu_{30}=\mu_{11}=\mu_{21}=\mu_{31}=.001$ and we vary only the common value of $\mu_{10}=\mu_{20}$. This value governs the speed of tips and --- as explained in Section 5 --- has more effect in regions of low density, so an increase inhibits side branching. Like the sectored plates just discussed, these are relatively rare snowfakes with outside ridges on the main arms and most side branches. All our modeling experience suggests that crystal tips tend to symmetrize with respect to the ${\mathbb T}$-direction, managing to avoid the sandwich instability only under quite special environmental conditions. We have seen little evidence in our simulations for the mechanism of ridge formation proposed in \cite{NK, Nel}, so we feel that drift is a more likely explanation of one-sided structures in snowflakes. \null\hskip-0.6cm \begin{minipage}[b]{8cm} \includegraphics[trim=8cm 1cm 8cm 2cm, clip, height=2.9in]{cs4mu008m.jpg} \end{minipage} \hskip0.5cm \begin{minipage}[b][7cm][t]{15cm} \vskip-0.25cm \includegraphics[trim=0.7cm 0cm 0.7cm 0cm, clip, height=2.5in]{cs4mu008rt.jpg} \end{minipage} \vskip-0.75cm {\bf Fig.~25.} Reduced side branching for $\mu_{10}=\mu_{20}=.008$. 
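For concreteness, the melting-rate sweep described above can be encoded as a small configuration table. The dictionary layout and key names below are hypothetical (the text does not specify the simulator's input format); the numerical values are the ones listed for the four crystals of Figs.~24--27, where only the common value $\mu_{10}=\mu_{20}$ is varied.

```python
# Hypothetical encoding of the fixed parameter set for the melting-rate
# sweep; values are taken from the text, the simulator's real input
# format may differ.
base_params = {
    "beta": {"01": 3.0, "10": 1.4, "20": 1.4, "11": 1.4,
             "30": 1.0, "21": 1.0, "31": 1.0},
    "kappa": 0.1,   # common value for all kappa's
    "phi": 0.01,    # small drift
    "rho": 0.14,    # vapor density
    "mu": {"01": 0.002, "30": 0.001, "11": 0.001,
           "21": 0.001, "31": 0.001},
}

def sweep_configs(mu_prism_values):
    """One configuration per swept value of mu_10 = mu_20."""
    configs = []
    for m in mu_prism_values:
        p = {**base_params, "mu": {**base_params["mu"], "10": m, "20": m}}
        configs.append(p)
    return configs

# The four crystals of Figs. 24-27 correspond to:
configs = sweep_configs([0.005, 0.008, 0.009, 0.01])
```

Raising the swept value while holding everything else fixed is what progressively suppresses side branching in the figures that follow.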
\vskip0.25cm \null\hskip-0.6cm \begin{minipage}[b]{8cm} \includegraphics[trim=8cm 1cm 8cm 2cm, clip, height=2.9in]{cs4mu009m.jpg} \end{minipage} \hskip0.5cm \begin{minipage}[b][7cm][t]{15cm} \vskip-0.25cm \includegraphics[trim=0.7cm 0cm 0.7cm 0cm, clip, height=2.5in]{cs4mu009rt.jpg} \end{minipage} \vskip-0.75cm {\bf Fig.~26.} Further reduction in the number of side branches for $\mu_{10}=\mu_{20}=.009$. \vskip1cm Starting with the classic fern of Fig.~24, the common prism facet melting threshold $\mu_{10}=\mu_{20}$ is gradually increased to twice the original value in Figs.~25--27. Stellar dendrites with fewer and fewer side branches result, until the final snowfake has only a few short sandwich plates on the sides of each arm. \null\hskip-0.6cm \begin{minipage}[b]{8cm} \includegraphics[trim=8cm 1cm 8cm 2cm, clip, height=2.9in]{cs4mu010m.jpg} \end{minipage} \hskip0.5cm \begin{minipage}[b][7cm][t]{15cm} \vskip-0.25cm \includegraphics[trim=0.7cm 0cm 0.7cm 0cm, clip, height=2.5in]{cs4mu010rt.jpg} \end{minipage} \vskip-0.75cm {\bf Fig.~27.} When $\mu_{10}=\mu_{20}=.01$, very few side branches remain. \vskip1cm The final example of this section is a classic simple star, a crystal with no side branches at all and a characteristic parabolic shape to its tips (cf.~\cite{Lib6}, p.~57 bottom). This elegant snowfake required considerable tweaking of parameters; they are: $\beta_{01}=3.1$, $\beta_{10}=1.05$, $\beta_{20}=1.03$, $\beta_{11}=1.04$, $\beta_{30}=1.02$, $\beta_{21}=1.01$, $\beta_{31}=1$, $\kappa\equiv .01$, $\mu_{01}=\mu_{30}=\mu_{11}=\mu_{21}=\mu_{31}=.01$, $\mu_{10}=\mu_{20}=.03$, $\phi=.005$, and $\rho=.16$. \null\hskip-0.6cm \begin{minipage}[b]{8cm} \includegraphics[trim=8cm 1cm 8cm 2cm, clip, height=2.9in]{cs4simplestarm.jpg} \end{minipage} \hskip0.5cm \begin{minipage}[b][7cm][t]{15cm} \vskip-0.25cm \includegraphics[trim=0.7cm 0cm 0.7cm 0cm, clip, height=2.5in]{cs4simplestarrt.jpg} \end{minipage} \vskip-0.75cm {\bf Fig.~28.} A simple star.
\vskip1cm \vskip0.5cm \section{Case study $v$ : needles and columns} Let us now turn to the common but less familiar snow crystals that expand primarily in the ${\mathbb Z}$-direction. As one would expect, these have $\beta_{01}$ small compared to $\beta_{10}$ and $\beta_{20}$, but a surprisingly small advantage often suffices. We offer three snowfakes that emulate their physical counterparts quite well. All start from the canonical seed. Our first example, with a substantial bias toward attachment on the basal facets, is a (simple) {\it needle\/}. In Fig.~29, $\beta_{01}=2$, $\beta_{10}=\beta_{20}=\beta_{11}=4$, $\beta_{30}=\beta_{21}=\beta_{31}=1$, $\kappa\equiv .1$, $\mu\equiv .001$, $\phi=0$, and $\rho=.1$. This snowfake reproduces structure observed in nature and the laboratory: slender hollow tubes, often with cable-like protuberances at the ends (cf.~Fig.~135 of \cite{Nak}, pp.~67--68 of \cite{Lib6}). \null\hskip-0.7cm \begin{minipage}[b]{8cm} \includegraphics[trim=8cm 1cm 8cm 2cm, clip, height=2.9in]{cs5needlem.jpg} \end{minipage} \hskip-1.5cm \begin{minipage}[b][7cm][t]{15cm} \vskip-0.25cm \includegraphics[trim=0cm 0cm 0cm 0cm, clip, height=2.5in]{cs5needlert.jpg} \end{minipage} \vskip-0.5cm {\bf Fig.~29.} A needle. \vskip0.6cm \null\hskip-0.5cm \begin{minipage}[b]{8cm} \includegraphics[trim=8cm 1cm 8cm 2cm, clip, height=2.9in]{cs5columnm.jpg} \end{minipage} \hskip0.25cm \begin{minipage}[b][7.5cm][t]{15cm} \vskip-0.0cm \includegraphics[trim=0cm 0cm 0cm 0cm, clip, height=2.7in]{cs5columnrt.jpg} \end{minipage} \vskip-0.5cm {\bf Fig.~30.} A hollow column. \vskip1cm Next, Fig.~30 simulates the common type of snow crystal known as a {\it hollow column\/}. Here the bias toward attachment on the basal facets is not as pronounced. The parameter set is: $\beta_{01}=1$, $\beta_{10}=\beta_{20}=2$, $\beta_{30}=\beta_{11}=\beta_{21}=.5$, $\beta_{31}=1$, $\kappa\equiv .1$, $\mu\equiv .01$, $\phi=0$, and $\rho=.1$. Evidently, the hole starts developing early on.
See pp.~64--66 of \cite{Lib6} for photos of actual hollow columns and a qualitative description of their growth. The final example of this section is a column whose facets are hollow as well. The morphology of Fig.~31 occurs when the rates of expansion in the two directions are not very different. Photos and a description of this sort of snowflake appear on pp.~35--37 of \cite{Lib6}. Here $\beta_{01}=1.5$, $\beta_{10}=\beta_{20}=1.6$, $\beta_{11}=\beta_{30}=\beta_{21}=\beta_{31}=1$, $\kappa\equiv .1$, $\mu\equiv .015$, $\phi=0$, and $\rho=.1$. \null\hskip0.5cm \begin{minipage}[b]{8cm} \includegraphics[trim=0cm 0cm 0cm 0cm, clip, height=2.75in]{cs5hollowfacem.jpg} \end{minipage} \hskip-.75cm \begin{minipage}[b][5.7cm][t]{15cm} \vskip-1cm \includegraphics[trim=0cm 0cm 0cm 0cm, clip, height=2.5in]{cs5hollowfacert.jpg} \end{minipage} \vskip-0.25cm {\bf Fig.~31.} A column with hollow prism facets. \vskip1cm \section{Case study $vi$ : change of environment} In his pioneering research, Nakaya \cite{Nak} reproduced several of the most striking types found in nature by subjecting the cold chamber in his lab to a precisely controlled schedule of temperature and humidity changes, either sudden or gradual. Based on such experiments, he argued that {\it plates with dendritic extensions\/}, for example, are formed when a snowflake's early growth occurs in the upper atmosphere and then it drops to another layer more conducive to branching (\cite{Nak}, p.~16). In this section we mimic such varying environments by considering the effect of an abrupt change of parameters on some of our previous snowfakes. Let us begin with two examples of the type cited in the last paragraph: plates with dendritic extensions. Both start from a prism that is 3 cells high with radius 2 at the top and 1 at the bottom. The first stage for both is a simple plate similar to the snowfake of Fig.~1, but with a delayed branching instability.
The initial parameters are: $\beta_{01}=3.5$, $\beta_{10}=\beta_{20}=\beta_{11}=2.25$, $\beta_{30}=\beta_{21}=\beta_{31}=1$, $\kappa\equiv .005$, $\mu\equiv .001$, $\phi=.01$, and $\rho=.12$. The first stage runs until time 8000 in the first example, and until time 12500 in the second. At that time most parameters remain the same, but in order to promote branching we change $\beta_{10}=\beta_{20}=\beta_{11}$ to 1.15 (resp.~1.4) and $\mu_{10}=\mu_{20}$ to .006 (resp.~.004). The results, once the two snowfakes have reached a radius of 200 cells, are shown in Figs.~32--33. Predictably, the first example has more branching in its dendritic phase since the prism facet attachment threshold is lower. The large image on the cover of {\cite{Lib6}} shows a beautiful natural example of this type. \vskip 0.5cm \null\hskip-0.6cm \begin{minipage}[b]{8cm} \includegraphics[trim=8cm 1cm 8cm 2cm, clip, height=2.9in]{cs6dendritetips1m.jpg} \end{minipage} \hskip0.5cm \begin{minipage}[b][7cm][t]{15cm} \vskip-0.25cm \includegraphics[trim=0.7cm 0cm 0.7cm 0cm, clip, height=2.5in]{cs6dendritetips1rt.jpg} \end{minipage} \vskip-0.75cm {\bf Fig.~32.} A plate with fern extensions. \vskip1cm \null\hskip-0.6cm \begin{minipage}[b]{8cm} \includegraphics[trim=8cm 1cm 8cm 2cm, clip, height=2.9in]{cs6dendritetips3m.jpg} \end{minipage} \hskip0.5cm \begin{minipage}[b][7cm][t]{15cm} \vskip-0.25cm \includegraphics[trim=0.7cm 0cm 0.7cm 0cm, clip, height=2.5in]{cs6dendritetips3rt.jpg} \end{minipage} \vskip-0.75cm {\bf Fig.~33.} A plate with dendrite extensions. \vskip1cm A hybrid evolution at the opposite end of the spectrum is described in \cite{Lib6}, pp.~51--53, and many of the most striking snowflakes in \cite{LR} are of this type. As presumably in nature, conditions need to be just right for the corresponding snowfake to evolve. In this vein, we present three snowfakes that begin as stellar dendrites with minimal branching and later encounter an environment promoting plates.
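The abrupt change of environment used for the plate-with-extensions morphology of Fig.~32 can be sketched as a time-dependent parameter schedule. The function below is a hypothetical encoding (the simulator interface is not specified in the text); the values are those of the first example, with the switch at time 8000 lowering the attachment thresholds and raising the prism melting rate to promote branching.

```python
# Hypothetical two-stage schedule for the plate with fern extensions:
# simple-plate dynamics until the switch time, then parameters that
# promote branching. Names mirror the text's notation; the actual
# simulator input format may differ.
def params_at(t, switch_time=8000):
    beta = {"01": 3.5, "10": 2.25, "20": 2.25, "11": 2.25,
            "30": 1.0, "21": 1.0, "31": 1.0}
    mu = dict.fromkeys(["01", "10", "20", "30", "11", "21", "31"], 0.001)
    if t >= switch_time:                 # abrupt change of environment
        for k in ("10", "20", "11"):
            beta[k] = 1.15               # lower thresholds: more branching
        mu["10"] = mu["20"] = 0.006
    return {"beta": beta, "mu": mu,
            "kappa": 0.005, "phi": 0.01, "rho": 0.12}

early, late = params_at(0), params_at(8000)
```

The second example follows the same pattern with a switch at 12500 and the milder values 1.4 and .004.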
All start from a prism of height 5 with top radius 6 and bottom radius 2. The first stage runs the simple star dynamics of Fig.~28 until time 4000, 3000, or 2000, respectively. Then, for each of the three experiments, new parameters with higher attachment thresholds are run until times 24000, 20000, and close to 20000, respectively. Common parameters are: $\beta_{30}=\beta_{31}=1$, $\kappa\equiv .1$, $\rho=.16$. In Fig.~34, the remaining parameters are $\beta_{01}=3.0$, $\beta_{10}=\beta_{20}=2.2$, $\beta_{11}=2.0$, $\beta_{21}=1.1$, $\mu\equiv .01$, $\phi=.005$. Note that in this instance the branches of the star broaden considerably after the change of environment, and the tips form sandwich plates. \vskip 0.25cm \null\hskip-0.6cm \begin{minipage}[b]{8cm} \includegraphics[trim=8cm 1cm 8cm 2cm, clip, height=2.9in]{cs6explodingtips1m.jpg} \end{minipage} \hskip0.5cm \begin{minipage}[b][7cm][t]{15cm} \vskip-0.25cm \includegraphics[trim=0.7cm 0cm 0.7cm 0cm, clip, height=2.5in]{cs6explodingtips1rt.jpg} \end{minipage} \vskip-0.75cm {\bf Fig.~34.} A broad-branched stellar crystal with sandwich-plate extensions. \vskip1cm \null\hskip-0.6cm \begin{minipage}[b]{8cm} \includegraphics[trim=8cm 1cm 8cm 2cm, clip, height=2.9in]{cs6explodingtips2m.jpg} \end{minipage} \hskip0.5cm \begin{minipage}[b][7cm][t]{15cm} \vskip-0.25cm \includegraphics[trim=0.7cm 0cm 0.7cm 0cm, clip, height=2.5in]{cs6explodingtips2rt.jpg} \end{minipage} \vskip-0.75cm {\bf Fig.~35.} A broad-branched stellar crystal with sectored-plate extensions. \vskip1cm By raising the attachment thresholds somewhat we avoid the sandwich instability and obtain instead the sectored-plate extensions with outside ridges seen in Fig.~35. Here $\beta_{01}=3.5$, $\beta_{10}=\beta_{20}=2.45$, $\beta_{11}=2.25$, $\beta_{21}=1.1$, $\mu_{10}=\mu_{20}=.002$, $\mu= .001$ otherwise, $\phi=.015$. Our final broad-branched example interpolates between the previous two.
The values of $\beta$ are large enough to avoid the sandwich instability, but small enough that side branching leads to sectored plate structure of the extensions. Here $\beta_{01}=3.0$, $\beta_{10}=\beta_{20}=2.25$, $\beta_{11}=2.05$, $\beta_{21}=1.05$, $\mu\equiv .001$, $\phi=.015$. \null\hskip-0.6cm \begin{minipage}[b]{8cm} \includegraphics[trim=8cm 1cm 8cm 2cm, clip, height=2.9in]{cs6explodingtips3m.jpg} \end{minipage} \hskip0.5cm \begin{minipage}[b][7cm][t]{15cm} \vskip-0.25cm \includegraphics[trim=0.7cm 0cm 0.7cm 0cm, clip, height=2.5in]{cs6explodingtips3rt.jpg} \end{minipage} \vskip-0.75cm {\bf Fig.~36.} Another broad-branched stellar crystal. \vskip0.75cm We conclude this case study with two crystals that combine a three-dimensional column and two-dimensional plates. These are the {\it tsuzumi\/}, or {\it capped columns\/}, described on pp.~69--74 of \cite{Lib6}. They are thought to arise when crystals are transported to higher and colder regions of the atmosphere by a passing storm. Without a preferred orientation, it is most reasonable to model these as driftless. Both our snowfakes use the canonical seed and evolve with the parameters for the hollow column of Fig.~30 until time 20000. Then they run with new parameters that promote planar growth, until time 80000 for the first example, 60000 for the second. Common values for the two examples are: $\beta_{01}=5$, $\beta_{30}=\beta_{21}=\beta_{31}=1$, $\kappa\equiv .1$, $\mu\equiv .001$, $\phi=0$, and $\rho=.1$. The difference is that the common value $\beta_{10}=\beta_{20}=\beta_{11}$ is $2.4$ in Fig.~37, and $2.1$ in Fig.~38. Higher attachment thresholds delay the branching instability in the first capped column so the caps are simple plates, as opposed to sectored plates in the second. The transition period from column to cap in lab tsuzumi is described in some detail by Nakaya (\cite{Nak}, p.~221; see also the sketch on p.~222). We remark that our snowfake versions evolve in the same way.
Namely, for a considerable time after the change of environment, outward growth occurs almost exclusively along the 18 edges of the hexagonal column. This is a diffusion-limited effect similar to the hollowing in Fig.~31. Then, rather suddenly, growth in the ${\mathbb T}$-direction takes over. \null\hskip1cm \begin{minipage}[b]{8cm} \includegraphics[trim=0cm 0cm 0cm 0cm, clip, height=2.5in]{cs6cappedcolumn1m.jpg} \end{minipage} \hskip-1cm \begin{minipage}[b][5.7cm][t]{15cm} \vskip-0.8cm \includegraphics[trim=0cm 0cm 0cm 0cm, clip, height=2.5in]{cs6cappedcolumn1rt.jpg} \end{minipage} \vskip0cm {\bf Fig.~37.} A column capped with hexagonal plates. \vskip1cm \null\hskip1cm \begin{minipage}[b]{8cm} \includegraphics[trim=0cm 0cm 0cm 0cm, clip, height=2.5in]{cs6cappedcolumn2m.jpg} \end{minipage} \hskip-1cm \begin{minipage}[b][5.7cm][t]{15cm} \vskip-0.8cm \includegraphics[trim=0cm 0cm 0cm 0cm, clip, height=2.5in]{cs6cappedcolumn2rt.jpg} \end{minipage} \vskip0cm {\bf Fig.~38.} A column capped with sectored plates. \vskip1cm \vskip0.5cm \section{Case study $vii$ : eccentric crystals} This last section features snowfakes that result from a careful search through parameter space and are quite sensitive to any change. They are close-to-critical, near the phase boundary between dominant growth in the ${\mathbb Z}$-direction and the ${\mathbb T}$-direction. Consequently they may be rare in nature, though variants of some of the forms have been observed, and even represent morphological types in the Magono-Lee classification \cite{ML}. All our final examples start from the canonical seed. As mentioned in Section 2, starting from a single cell our algorithm has a strong tendency to grow rapidly in the ${\mathbb Z}$-direction due to the immediate onset of a {\it needle instability\/}. Even if the initial mesoscopic prism is wider in the ${\mathbb T}$-direction, it is still quite common for this instability to arise later on if the dynamics are close to critical. 
After an initial phase of typical planar growth, needles suddenly nucleate at concentric locations scattered over the central plate or arms. Fig.~137 of \cite{Nak} shows an excellent example of this type in nature, and our first two examples illustrate a similar phenomenon in our model. The conventional explanation for such hybrid types, called {\it stellar crystals with needles\/} in \cite{ML}, involves a sudden change in the environment, but this is one of several cases where our algorithm suggests that homogeneous conditions can sometimes produce the same effect. Fig.~39 has features like a classic planar snowflake that has developed {\it rime\/} from attachment of surrounding water droplets. In fact these protrusions are potential needle instabilities --- the two symmetric rings close to the center and the tips are stunted needles, whereas the intermediate needles have successfully nucleated. The parameters of this snowfake are: $\beta_{01}=1.58$, $\beta_{10}=\beta_{20}=\beta_{11}=1.5$, $\beta_{30}=\beta_{21}=\beta_{31}=1$, $\kappa\equiv .1$, $\mu\equiv .006$, $\phi=0$ and $\rho=.1$. Partial symmetry of bumps in many natural crystals, statistically unlikely to be the result of rime, often indicates vestiges of rims and ribs after sublimation, but can also be due to nascent needles, as in the middle specimen of Plate 116 in \cite{Nak}. Since the locations where needles nucleate are quite sensitive to changes in parameters, residual randomness in the mesoscopic dynamics is apt to degrade the symmetry. \null\hskip-0.4cm \begin{minipage}[b]{8cm} \includegraphics[trim=0cm 0cm 0cm 0cm, clip, height=2.3in]{cs7needleinstabilitym.jpg} \end{minipage} \hskip0.0cm \begin{minipage}[b][5.7cm][t]{15cm} \vskip-0.1cm \includegraphics[trim=0cm 0cm 0cm 0cm, clip, height=2.2in]{cs7needleinstabilityrt.jpg} \end{minipage} \vskip-0.1cm {\bf Fig.~39.} A stellar dendrite with stunted and nucleating needles. 
\vskip 0.5cm The next three examples have $\beta\equiv 1$, $\mu\equiv .03$, $\kappa_{10}=\kappa_{20}=.1$, $\kappa_{30}=.05$, and $\kappa_{11}=\kappa_{21}=\kappa_{31}=.01$. The remaining parameters for Fig.~40 are $\kappa_{01}=.11$ and $\rho=.06$. This snowfake is a rather extreme instance of a stellar crystal with needles in which the planar portion is a thick but very narrow simple star. \begin{comment} In \cite{ML}, this snowfake would be called a {\it stellar crystal with needles\/}. As mentioned in Section 2, starting from a single cell our algorithm has a strong tendency to grow rapidly in the ${\mathbb Z}$-direction due to the immediate onset of a {\it needle instability\/}. Even if the initial mesoscopic prism is wider in the ${\mathbb T}$-direction and the dynamics are close to critical, it is still quite common for this instability to arise later on. After an initial phase of typical planar growth, needles suddenly nucleate at concentric locations scattered over the central plate or arms. Fig.137 of \cite{Nak} shows an excellent example of this type in nature. The snowfake pictured here is a rather extreme instance in which the planar portion is a thick but very narrow simple star. More representative sectored plates with nucleating needles are not hard to find. The conventional explanation for hybrid types such as stellar crystals with needles involves a sudden change in the environment, but this is one of several cases where our algorithm suggests that homogeneous conditions can sometimes produce the same effect. \end{comment} \null\hskip-0.5cm \begin{minipage}[b]{8cm} \includegraphics[trim=8cm 1cm 8cm 2cm, clip, height=3in]{cs7simplestarwithneedlesm.jpg} \end{minipage} \hskip0.5cm \begin{minipage}[b][7cm][t]{15cm} \vskip0.25cm \includegraphics[trim=0cm 0cm 0cm 0cm, clip, height=2in]{cs7simplestarwithneedlesrt.jpg} \end{minipage} \vskip-1.75cm {\bf Fig.~40.} A simple star with needles. 
\vskip1cm Our next two examples seem never to have been seen at all, and it is clear why: even if they managed to grow, their thin plates would be extremely brittle and susceptible to random fluctuations. They are characterized by very small differences in the growth rates. After starting as planar crystals, they suddenly nucleate thin structures extending into the third dimension. In Fig.~41 $\kappa_{01}=.12$ and $\rho=.057$; in Fig.~42 $\kappa_{01}=.116$ and $\rho=.06$. For obvious reasons, we call these {\it butterflakes\/}. They are idealizations of the {\it stellar crystals with spatial plates\/} in \cite{ML}; chaotic snow crystals with thin plates growing every which way are relatively common. \null\hskip0.5cm \begin{minipage}[b]{8cm} \includegraphics[trim=0cm 0cm 0cm 0cm, clip, height=2.4in]{cs7butterflakeinstabilitym.jpg} \end{minipage} \hskip-0.5cm \begin{minipage}[b][5.7cm][t]{15cm} \vskip-0.2cm \includegraphics[trim=0cm 0cm 0cm 0cm, clip, height=2.2in]{cs7butterflakeinstabilityrt.jpg} \end{minipage} \vskip-0.1cm {\bf Fig.~41.} A butterflake with wings in the directions of the main arms. \vskip-0.4cm \null\hskip-0.7cm \begin{minipage}[b]{8cm} \includegraphics[trim=8cm 1cm 8cm 2cm, clip, height=2.8in]{cs7butterflakem.jpg} \end{minipage} \hskip-0.0cm \begin{minipage}[b][7cm][t]{15cm} \vskip 0.5cm \includegraphics[trim=0cm 0cm 0cm 0cm, clip, height=2.1in]{cs7butterflakert.jpg} \end{minipage} \vskip-1.1cm {\bf Fig.~42.} A butterflake with side wings. \vskip0.5cm We conclude the paper with a family of five related examples. The first is a common sandwich plate (cf.~p.~39, lower right, in \cite{Lib6}) with parameter values $\beta_{01}=1.41$, $\beta_{10}=\beta_{20}=1.2$, $\beta_{11}=\beta_{30}=\beta_{21}=\beta_{31}=1$, $\kappa\equiv .1$, $\mu\equiv .025$, $\phi=0$, and $\rho=.09$.
\null\hskip-0.7cm \begin{minipage}[b]{8cm} \includegraphics[trim=8cm 1cm 8cm 2cm, clip, height=2.7in]{cs7platem.jpg} \end{minipage} \hskip0.3cm \begin{minipage}[b][7cm][t]{15cm} \vskip0.45cm \includegraphics[trim=0.7cm 0cm 0.7cm 0cm, clip, height=2.2in]{cs7platert.jpg} \end{minipage} \vskip-0.9cm {\bf Fig.~43.} A sandwich plate with broad branches. \vskip0.5cm The remaining four are minor perturbations, which nevertheless look quite different. Namely, even though their model parameters are constant over time, they undergo ``exploding tips'' quite similar to crystals such as the one in Fig.~35 that results from inhomogeneous environmental conditions. The principle behind all four variants is the same: eventually, the growing tip thickens and slows down considerably. Usually this happens close to the beginning of the evolution (as, in fact, occurred in the dynamics leading to Fig.~43), so the snowfake is unremarkable. But with some experimentation we find cases when the onset of the sandwich instability is delayed and the final picture can be quite dramatic. The complex inner patterns are the result of extraordinarily intricate dynamics. Parameter values that differ from those of Fig.~43 are given in the captions. \vskip-0.5cm \null\hskip-0.7cm \begin{minipage}[b]{8cm} \includegraphics[trim=8cm 1cm 8cm 2cm, clip, height=2.7in]{cs7fattip1m.jpg} \end{minipage} \hskip0.3cm \begin{minipage}[b][7cm][t]{15cm} \vskip0.45cm \includegraphics[trim=0.7cm 0cm 0.7cm 0cm, clip, height=2.2in]{cs7fattip1rt.jpg} \end{minipage} \vskip-0.9cm {\bf Fig.~44.} Perturbed parameters: $\beta_{01}=1.25$, $\rho=.091$. \vskip.05cm \null\hskip-0.7cm \begin{minipage}[b]{8cm} \includegraphics[trim=8cm 1cm 8cm 2cm, clip, height=2.7in]{cs7fattip2m.jpg} \end{minipage} \hskip0.3cm \begin{minipage}[b][7cm][t]{15cm} \vskip0.45cm \includegraphics[trim=0.7cm 0cm 0.7cm 0cm, clip, height=2.2in]{cs7fattip2rt.jpg} \end{minipage} \vskip-0.9cm {\bf Fig.~45.} Perturbed parameter: $\beta_{01}=1.5$. 
\vskip.05cm \null\hskip-0.7cm \begin{minipage}[b]{8cm} \includegraphics[trim=8cm 1cm 8cm 2cm, clip, height=2.7in]{cs7fattip3m.jpg} \end{minipage} \hskip0.3cm \begin{minipage}[b][7cm][t]{15cm} \vskip0.45cm \includegraphics[trim=0.7cm 0cm 0.7cm 0cm, clip, height=2.2in]{cs7fattip3rt.jpg} \end{minipage} \vskip-0.9cm {\bf Fig.~46.} Perturbed parameter: $\beta_{01}=1.19$. \vskip-.25cm \null\hskip-0.7cm \begin{minipage}[b]{8cm} \includegraphics[trim=8cm 1cm 8cm 2cm, clip, height=2.7in]{cs7moosem.jpg} \end{minipage} \hskip0.3cm \begin{minipage}[b][7cm][t]{15cm} \vskip0.45cm \includegraphics[trim=0.7cm 0cm 0.7cm 0cm, clip, height=2.2in]{cs7moosert.jpg} \end{minipage} \vskip-0.9cm {\bf Fig.~47.} Perturbed parameter: $\beta_{01}=1.25$. \vskip0cm \newpage
\section{Introduction} Time-series data analytics using deep neural networks \cite{ismail2019deep} enable many real-world applications in health-care \citep{tripathy2018use}, finance \citep{yang2021novel}, smart grids \citep{zheng2017wide}, and intrusion detection \citep{kim2017method}. However, safe and reliable deployment of such machine learning (ML) systems requires the ability to detect time-series data which does not follow the distribution of training data, aka in-distribution (ID). This task is referred to as out-of-distribution (OOD) detection. If the ML model encounters OOD inputs, it can output wrong predictions with high confidence. For example, a circumstantial event for an epilepsy patient or a sudden surge in a smart grid branch results in sensor readings which deviate from the training distribution. Another important application of OOD detection for the time-series domain is synthetic data generation. Many time-series applications suffer from limited or imbalanced data, which motivates methods to generate synthetic data \citep{smith2020conditional}. A key challenge is to automatically assess the validity of synthetic data, which can be alleviated using accurate OOD detectors. There is a growing body of work on OOD detection for the image domain \citep{hendrycks2017baseline,liu2020energy,liang2017enhancing,xiao2020regret,YY2020roblocallip,cao2020benchmark} and other types of data such as genomic sequences \citep{jie2019likelihood}. These methods can be categorized into: \begin{itemize} \item Supervised methods which fine-tune the ML system or perform specific training to distinguish examples from ID and OOD. \item Unsupervised methods which employ Deep Generative Models (DGMs) or unlabeled data to perform OOD detection.
\end{itemize} However, time-series data with its unique characteristics (e.g., sparse peaks, fast oscillations) poses unique challenges that are not encountered in the image domain: \begin{itemize} \item Spatial relations between pixels are not similar to the temporal relations across different time-steps of time-series signals. \item Pixel variables follow a categorical distribution of values $\{0,1,\cdots,255\}$, whereas time-series variables follow a continuous distribution. \item The semantics of images (e.g., background, edges) do not apply to time-series data. \item Humans can identify OOD images for fine-tuning purposes, but this task is challenging for time-series data. \end{itemize} Hence, prior OOD methods are not suitable for the time-series domain. \begin{figure*}[!t] \centering \includegraphics[width=\linewidth]{Figures/Framework.jpg} \caption{Overview of the seasonal ratio (SR) scoring algorithm. The semantic component $S_{\hat{y}}$ for the predicted output $\hat{y}$ is obtained from the training stage via Seasonal and Trend decomposition using Loess (STL). The semantic component $S_{\hat{y}}$ is subtracted from the time-series $x$ to obtain the remainder $r$. The trained CVAE models $\mathcal{M}_x$ and $\mathcal{M}_r$ are used to compute the SR score. If the SR score is within the threshold interval $[\tau_l, \tau_u]$ identified during training, then $x$ is classified as ID. Otherwise, it is flagged as OOD.} \label{fig:framework} \end{figure*} This paper proposes a novel OOD detection algorithm for the time-series domain referred to as Seasonal Ratio Scoring (SRS). {\em To the best of our knowledge, this is the first work on OOD detection over time-series data}. SRS employs the Seasonal and Trend decomposition using Loess (STL) \citep{cleveland1990stl} on time-series signals from the ID data to create a class-wise semantic pattern and remainder for each signal.
For example, in a human activity recognition system, SRS would extract a pattern ``running'' that semantically describes all the ``running'' windows. If the person trips and falls, SRS would detect that this event does not belong to the pre-defined classes of activity and flag it as OOD. For this purpose, we train two separate DGMs over this STL decomposition to estimate the class-wise conditional likelihoods for the time-series signal and the remainder. The seasonal ratio score for each time-series signal from the ID data is computed from these two estimates. A threshold interval is estimated from the statistics of all these scores over ID data. At test time, given a new time-series input and a classifier, the SRS approach computes the seasonal ratio score for the predicted output and classifies the time-series signal as an OOD example if the score lies outside the threshold interval. Figure~\ref{fig:framework} illustrates the SRS algorithm. The effectiveness of SRS critically depends on the extraction of accurate class-wise semantic components. Since time-series data is prone to warping and time-shifts, we also propose a new alignment approach based on dynamic time warping to improve the output accuracy of STL decomposition. Our experiments on diverse real-world time-series datasets demonstrate that the SRS method is well-suited for time-series OOD detection when compared to prior methods. \vspace{1.0ex} \noindent {\bf Contributions.} The main contribution of this paper is the development and evaluation of the Seasonal Ratio Scoring (SRS) algorithm for OOD detection in the time-series domain. Specific contributions include: \begin{enumerate} \item Principled algorithm based on STL decomposition and deep generative models to compute a seasonal ratio score to detect OOD time-series examples. \item Novel time-series alignment algorithm based on dynamic time warping to improve the effectiveness of seasonal ratio score based OOD detection.
\item Formulation of an experimental setting for time-series OOD detection. Experimental evaluation of the SRS algorithm on real-world datasets and comparison with state-of-the-art baselines. \item Open-source code and data for the SRS method: \href{https://github.com/tahabelkhouja/SRS}{https://github.com/tahabelkhouja/SRS}. \end{enumerate} \section{Problem Setup} Suppose $\mathcal{D}_{in}$ is an in-distribution (ID) time-series dataset with $d$ examples $\{(x_i, y_i)\}$ sampled from the distribution $P^*$ defined on the joint space of input-output pairs $(\mathcal{X}, \mathcal{Y})$. Each $x_i \in \mathbb{R}^{n \times T}$ from $\mathcal{X}$ is a multi-variate time-series input, where $n$ is the number of channels and $T$ is the window-size of the signal. $y_i \in \mathcal{Y}=\{1,\cdots,C\}$ represents the class label for time-series signal $x_i$. We consider a time-series classifier $F_{\theta}: \mathbb{R}^{n \times T} \rightarrow \{1, \cdots, C\}$ learned using $\mathcal{D}_{in}$. For example, in a health monitoring application using physiological sensors for patients diagnosed with cardiac arrhythmia, we use the measurements from wearable devices to predict the likelihood of a cardiac failure. OOD samples $(x,y)$ are typically generated from a distribution other than $P^*$. Specifically, we consider a sample $(x,y)$ to be OOD if the class label $y$ is different from the set of in-distribution class labels, i.e., $y \notin \mathcal{Y}$. The classifier $F_{\theta}(x)$ learned using $\mathcal{D}_{in}$ will assign one of the $C$ class labels from $\mathcal{Y}$ when encountering an OOD sample $(x,y)$. Our goal is to detect such OOD examples for safe and reliable real-world deployment of time-series classifiers. We provide a summary of the mathematical notations used in this paper in Table \ref{tab:notation}.
\begin{table}[h] \caption{Mathematical notations used in this paper.} \label{tab:notation} \centering \resizebox{.8\linewidth}{!}{% \centering \begin{tabular}{|c|l|} \hline \textbf{Variable} & \textbf{Definition} \\ \hline $\mathcal{D}_{in}$ & Dataset of in-distribution time-series signals\\ \hline $P^*$ & True distribution of the time-series dataset \\ \hline $x_i$ & Input time-series signal \\ \hline $\mathcal{Y}$ & Set of output class labels $y \in \{1,\cdots,C\}$ \\ \hline $F_{\theta}$ & Classifier that maps an input $x\in \mathbb{R}^{n \times T}$ to a class label $y\in \mathcal{Y}$ \\ \hline $S_y$ & Semantic pattern of a class $y$ according to STL \\ \hline $SR$ & Seasonal Ratio score \\ \hline \end{tabular} } \end{table} \noindent \textbf{Challenges of time-series data.} The unique characteristics of time-series data (e.g., temporal relations across time-steps, fast oscillations, continuous distribution of variables) pose unique challenges not seen in the image domain. Real-world time-series datasets are typically small (relative to image datasets) and exhibit high class imbalance \cite{dau2019ucr}. Therefore, estimating a good approximation of the in-distribution $P^*$ is hard, which results in the failure of prior OOD methods. Indeed, our experiments demonstrate that prior OOD methods are not suited for the time-series domain. As a prototypical example, Figure \ref{fig:histoverlap} shows the limitation of the Likelihood Regret score \citep{xiao2020regret} in identifying OOD examples: the ID and OOD scores of real-world time-series examples overlap. \section{Background and Preliminaries} In this section, we provide the necessary background on conditional VAEs and STL decomposition needed to understand the proposed seasonal ratio score based OOD detection approach. \vspace{1.0ex} \noindent {\bf Conditional VAE.} Variational Auto-Encoders (VAEs) are a class of likelihood-based generative models with many real-world applications \citep{doersch2016tutorial}.
They rely on the encoding of raw input $x$ as a latent Gaussian variable $z$ to estimate the likelihood of $x$. The latent variable $z$ is used to compute the likelihood of the training data: $p_{\theta}(x)=\int p_{\theta}(x|z)p(z)dz$. Since the direct computation of this likelihood is intractable, the principle of the evidence lower bound (ELBO) \citep{xiao2020regret} is employed. In this work, we consider the ID data $\mathcal{D}_{in}$ as $d$ input-output samples of the form $(x_i, y_i)$. We want to model the in-distribution using both $x_i$ and $y_i$. Therefore, we propose to use a conditional VAE (CVAE) for this purpose. CVAEs are a class of likelihood-based generative models \citep{doersch2016tutorial}. They rely on the encoding of raw input $(x,y)$ as a latent Gaussian variable $z$ to estimate the conditional likelihood of $x$ given the class label $y$. A CVAE is similar to a VAE, with the key difference being the use of conditional probability over both $x_i$ and $y_i$. The ELBO objective of the CVAE is: \begin{equation*} \mathcal{L}_{\text{ELBO}}\overset{\Delta}{=} \mathbb{E}_{q_{\phi}(z|x,y)}\big[\log p_{\theta}(x|z,y)\big] - D_{\text{KL}}\big[q_{\phi}(z|x,y)||p(z|y)\big] \end{equation*} where $q_{\phi}(z|x,y)$ is the variational approximation of the true posterior distribution $p_{\theta}(z|x,y)$. As the CVAE only computes a lower bound on the log-likelihood of a given input, the exact log-likelihood is estimated using Monte-Carlo importance sampling with $z^m \sim q_{\phi}(z|x,y)$, as shown below: \begin{equation} \mathcal{L}_M=\log \dfrac{1}{M} \sum_{m=1}^M \dfrac{p_{\theta}(x|z^m,y)p(z^m)}{q_{\phi}(z^m|x,y)} \label{eq:cvae_mc} \end{equation} The intuitive expectation from a DGM learned using training data is that it assigns a high likelihood to ID samples and a low likelihood to OOD samples. However, recent research showed that DGMs tend to assign highly unreliable likelihoods to OOD samples regardless of the different semantics of the ID and OOD data \citep{xiao2020regret}.
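To make the Monte-Carlo estimate of Equation \ref{eq:cvae_mc} concrete, the following minimal sketch (our illustration, not code from the paper) applies the same importance-sampling identity to a toy one-dimensional latent-variable model whose exact marginal likelihood is known in closed form, so the estimate can be checked against the truth; in the actual method, a trained CVAE supplies the densities.

```python
import numpy as np

def log_normal(x, mu, var):
    # Log-density of a univariate Gaussian N(mu, var).
    return -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

# Toy stand-in for the CVAE: prior z ~ N(0, 1), decoder x|z ~ N(z, s2).
# The exact marginal is then p(x) = N(x; 0, 1 + s2).
s2, x = 0.5, 1.3
exact = log_normal(x, 0.0, 1.0 + s2)

# Variational posterior q(z|x); here we use the exact posterior, so the
# importance-sampling estimate has essentially zero variance.
q_mu, q_var = x / (1.0 + s2), s2 / (1.0 + s2)

M = 5000
rng = np.random.default_rng(0)
z = rng.normal(q_mu, np.sqrt(q_var), size=M)
# log (1/M) sum_m p(x|z^m) p(z^m) / q(z^m|x), computed stably in log-space.
log_w = log_normal(x, z, s2) + log_normal(z, 0.0, 1.0) - log_normal(z, q_mu, q_var)
estimate = np.logaddexp.reduce(log_w) - np.log(M)
print(exact, estimate)
```

With an imperfect $q_{\phi}$, the estimate remains consistent as $M$ grows, which is why the exact log-likelihood is estimated this way rather than by relying on the ELBO alone.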
Indeed, our experimental results shown in Table \ref{tab:OODrecons} demonstrate that this observation also holds in the time-series domain. \begin{figure}[t] \centering \begin{minipage}{.38\linewidth} \centering \includegraphics[width=\linewidth]{Figures/Pattern_ERing5_1.png} \end{minipage}% \hspace{2ex} \begin{minipage}{.38\linewidth} \centering \includegraphics[width=\linewidth]{Figures/Pattern_ERing2_2.png} \end{minipage} \caption{Illustration of the STL method for two different classes from the ERing dataset. Dotted signals are natural time-series signals $x$ and the red signal is the semantic pattern $S_y$.} \label{fig:stlexample} \end{figure} \vspace{1.0ex} \noindent \textbf{STL decomposition.} STL \citep{cleveland1990stl} is a statistical method for decomposing a given time-series signal $x$ into three different components: 1) the seasonality $x_s$ is a fixed regular pattern that recurs in the data; 2) the trend $x_t$ is the long-term increase or decrease of the data over time; and 3) the residual $x_r$ represents random additive noise. STL employs Loess (LOcal regrESSion) smoothing in an iterative process to estimate the seasonality component $x_s$ \citep{mckinney2011time}. The remainder is the additive residual of the input $x$ after subtracting both $x_s$ and $x_t$. For the proposed SRS algorithm, we assume that there is a fixed semantic pattern $S_{y}$ for every class label $y \in \mathcal{Y}$, and this pattern recurs in all examples $(x_i, y_i)$ from $\mathcal{D}_{in}$ with the same class label, i.e., $y_i$=$y$. We elaborate on this assumption, and on a reformulation of the problem that can be used when the assumption is violated, in the next section. Hence, every time-series example has the following two elements: $x_i = S_{y_i} + r_i$, where $S_{y_i}$ is the pattern for the class label $y$=$y_i$ and $r_i$ is the remainder noise w.r.t $S_{y_i}$. For time-series classification tasks, ID samples are assumed to be stationary.
Therefore, we propose to average the trend $x_t$ component observed during training and include it in the semantic pattern $S_{y}$. Figure \ref{fig:stlexample} illustrates the above-mentioned decomposition for two different classes from the ERing dataset. \section{Seasonal Ratio Scoring Approach for OOD Detection} \vspace{1.0ex} \noindent {\bf Overview of SRS algorithm.} The training stage proceeds as follows. We employ STL decomposition to get the semantic component $S_y$ for each class label $y \in \mathcal{Y}$=$\{1,\cdots,C\}$ from the given in-distribution (ID) data $\mathcal{D}_{in}$. To improve the accuracy of STL decomposition, we apply a time-series alignment method based on dynamic time warping to address scaling, warping, and time-shifts. We train two CVAE models $\mathcal{M}_x$ and $\mathcal{M}_r$ to estimate the class-wise conditional likelihood of each time-series signal $x_i$ and its remainder component $r_i$ w.r.t the semantic component $S_{y_i}$. The seasonal ratio score for each ID example $(x_i, y_i)$ from $\mathcal{D}_{in}$ is computed as the ratio of the class-wise conditional likelihood estimates for $x_i$ and its remainder $r_i$: $SR_i(x_i, y_i) \overset{\Delta}{=} \frac{p(x_i|y=y_i)}{p(r_i|y=y_i)}$. We compute the SR scores for all in-distribution examples from $\mathcal{D}_{in}$ to estimate the threshold interval $[\tau_l, \tau_u]$ for OOD detection. During the inference stage, given a time-series signal $x$ and a trained classifier $F(x)$, we compute the SR score of $x$ with the predicted output $\hat{y}$=$F(x) \in \mathcal{Y}$ and identify it as an OOD example if the SR score lies outside the threshold interval $[\tau_l, \tau_u]$. Figure \ref{fig:framework} provides a high-level illustration of the SRS algorithm. \vspace{1.0ex} Below we first provide an intuitive explanation to motivate the seasonal ratio score. Next, we describe the complete details of the SRS algorithm including both training and inference stages. 
Finally, we motivate and describe a time-series alignment approach based on dynamic time warping to improve the effectiveness of SRS. \subsection{Intuition for Seasonal Ratio Score} We explain the intuition behind the proposed SRS algorithm using STL decomposition of time-series signals and CVAE models for likelihood estimation. Current research shows that DGMs alone can fail to identify OOD samples \citep{xiao2020regret}. They not only assign high likelihood to OOD samples, but they also exhibit good reconstruction quality. In fact, we show in Table \ref{tab:OODrecons} that CVAEs trained on given ID data generally exhibit a low reconstruction error on most OOD samples. Furthermore, we show in Table \ref{tab:auroc} that using a trained CVAE's likelihood output for OOD detection fails to perform well. These results motivate the need for a new OOD scoring method for the time-series domain. \vspace{1.0ex} \noindent {\bf Class-wise seasonality via STL decomposition.} The proposed SRS algorithm relies on the following assumption to analyze the time-series space for OOD detection. \bigbreak \fbox{% \begin{minipage}{.96\linewidth} \centering \begin{assump}Each time-series example $(x_i, y_i)$ from the in-distribution data $\mathcal{D}_{in}$ consists of two components: 1) a class-wise semantic pattern $S_y$ for each class label $y \in \mathcal{Y}$ representing the meaningful semantics of the class label $y$; and 2) a remainder noise $r_i$ representing an additive perturbation to the semantic portion. Hence, $\forall (x_i, y_i) \in \mathcal{D}_{in}:~ x_i = S_{y_i} + r_i$ \label{asp:main}% \end{assump}% \end{minipage} }% \bigbreak \noindent We propose to employ STL decomposition to estimate the semantic pattern $S_y$ (as illustrated in Figure \ref{fig:stlexample}) and deduce the remainder noise, which can be due to several factors including errors in sensor measurements and noise in the communication channel.
These two components are analogous to the foreground and the background of an image, where the foreground is the interesting segment of the input that describes it, and the background may not necessarily be related to the foreground. In spite of this analogy, prior methods for the image domain are not suitable for time-series data, as explained in the related work. In this decomposition, we cannot assume that $S_y$ and $r$ are independent for a given time-series example $(x, y)$, as $S_y$ is class-dependent and $r$ is the remainder of the input $x$ given $S_y$. Hence, we present their conditional likelihoods in the following observation. \begin{observation} Let $x \in \mathbb{R}^{n \times T}$ be a time-series signal and $y_i \in \mathcal{Y}$=$\{1,\cdots,C\}$ be the corresponding class label. As $x$ = $S_{y_i} + r$, we have: \begin{equation} p(x|y_i) = p(r|y_i)p(S_{y_i}|y_i) \end{equation} \label{th:likelihood} \end{observation} \textbf{Proof of Observation 1.} As $x = S_{y_i} + r$, it is intuitive to think that $p(x|y=y_i) = p(S_{y_i})\times p(r)$. However, we cannot assume that $S_{y_i}$ and $r$ are independent, as $S_{y_i}$ is class-dependent and $r$ is the remainder of the input $x$ given $S_{y_i}$. \vspace{1.0ex} \noindent Therefore, we make use of the conditional probabilities of the components. The likelihood $p(x|y=y_i)$ can be decomposed as follows: \begin{equation*} \begin{split} p(x|y=y_i) &= p(S_{y_i},r|y=y_i)\\ &= \frac{p(S_{y_i},r,y=y_i)}{p(y=y_i)}\\ &= \frac{p(r,y=y_i)p(S_{y_i}|r,y=y_i)}{p(y=y_i)} \end{split} \end{equation*} For the conditional probability $p(S_{y_i}|r,y=y_i)$, as only the pattern $S_{y_i}$ depends on the class label, and since we have defined $r$ as non-meaningful noise added to the input, we can assume that \textbf{$S_{y_i}$ and $r$ are conditionally independent given the class $y_i$}. Therefore, we have the following.
\begin{equation*} \begin{split} p(x|y=y_i) &= \frac{p(r,y=y_i)p(S_{y_i}|r,y=y_i)}{p(y=y_i)}\\ &= \frac{p(r,y=y_i)p(S_{y_i}|y=y_i)}{p(y=y_i)}\\ &= \frac{p(r|y=y_i)p(y=y_i)p(S_{y_i}|y=y_i)}{p(y=y_i)}\\ &= p(r|y=y_i)p(S_{y_i}|y=y_i)\\ \end{split} \end{equation*} \vspace{1.0ex} \noindent {\bf OOD detection using CVAEs.} Observation \ref{th:likelihood} shows the relationship between the conditional likelihood of the input $x$ and that of its remainder $r$. Since both likelihoods are conditional, we employ CVAEs. Recall that OOD examples come from an unknown distribution that is different from the in-distribution $P^*$ and do not belong to any pre-defined class label from $\mathcal{Y}$. Therefore, we propose to use the following observation for OOD detection in the time-series domain. \begin{observation} Let $x \in \mathbb{R}^{n \times T}$ be a time-series signal and $y \in \mathcal{Y}$=$\{1,\cdots,C\}$ be the corresponding class label. As $x$ = $S_{y} + r$, $x$ is an OOD example if $p(x|y) \neq p(r|y)$ and an in-distribution example if $p(x|y)$ = $p(r|y)$. \label{th:lemmaOOD} \end{observation} Observation \ref{th:lemmaOOD} shows how we can exploit the relationship between the estimated conditional likelihood of the time-series signal $x$ and that of its remainder $r$ to predict whether $x$ is an OOD example or not. This observation relies on the assumption that $p(S_y|y)=1$ for in-distribution data. For ID data, the semantic pattern $S_y$ is a class-dependent signal that defines the class label $y$. Since the semantic component is guaranteed to be $S_y$ for any time-series example with class label $y$, we have $p(S_y|y)$=1. On the other hand, OOD examples do not belong to any class label from $\mathcal{Y}$, i.e., $p(S_y|y) \neq 1$ for any $y \in \mathcal{Y}$. To estimate $p(x|y \in \mathcal{Y})$ and $p(r|y \in {\mathcal Y})$ in Observation \ref{th:lemmaOOD}, we train two separate CVAE models using the in-distribution data $\mathcal{D}_{in}$.
\vspace{1.0ex} \noindent {\bf Discussion on Assumption 1.} We acknowledge that this assumption may fail to hold in some real-world scenarios. However, our experimental results shown in Table \ref{tab:patternerr} strongly corroborate this key assumption: the distance between each time-series signal $x_i$ and its semantic pattern $S_{y_i}$ is very small. The strong OOD performance of the SRS algorithm in our diverse experiments demonstrates the effectiveness of a simple approach based on this assumption. Suppose the assumption does not hold and some class label $y$ possesses $K > 1$ different semantics $\{S^k_y\}_{k\le K}$. Taking a human activity recognition example, it is plausible that a certain activity (e.g., running or walking) will have $K>1$ different patterns (e.g., athletic runners vs. young runners). Therefore, the decomposition in Assumption \ref{asp:main} for a given time-series example $(x_i, y_i)$ will result in a semantic pattern describing the patterns of the different sub-categories (e.g., a pattern that describes both athletic runners and young runners). By using Loess smoothing, the STL seasonal component extracted over a multiple-pattern class is a pattern $S_y$ that is a linear combination of $\{S^k_y\}_{k\le K}$ (for our example, it describes the combination of both athletic runs and young runs). While the condition $p(S_y|y)$=1 of Observation \ref{th:lemmaOOD} will not hold exactly for an in-distribution example, $p(S_y|y)$ is likely to be well-defined from $p(S^k_y|y)$ as $\{S^k_y\}_{k\le K}$ are fixed and natural for the class label $y$. Hence, we can still rely on the CVAEs to estimate this distribution and to perform successful OOD detection. Alternatively, we can use a simple reformulation of the problem by clustering the time-series signals of a class label $y$ (for which the assumption is not satisfied) to identify sub-classes and apply the SRS algorithm on the transformed data.
Since we found the assumption to hold in all our experimental scenarios (see Table \ref{tab:patternerr}), we did not need to apply this reformulation. \begin{algorithm}[!h] \caption{Seasonal Ratio Scoring Algorithm for OOD Detection} \label{alg:sr} \begin{algorithmic}[1] \Input $\mathcal{D}_{in} = \{(x_i, y_i)\}_{i=1}^{n}$, Input training/validation time-series samples; $\mathcal{M}_x$, CVAE for training data with likelihood $\mathcal{L}_{\mathcal{M}_x}$; $\mathcal{M}_r$, CVAE for data remainders with likelihood $\mathcal{L}_{\mathcal{M}_r}$; $F_{\theta}$, Time-series classifier; $X_{test}$, testing time-series signal; \Output OOD Boolean decision. \preproc \State Cluster the time-series data in $\mathcal{D}_{in}$, one cluster for each class label ${y \in \mathcal{Y}=\{1,\cdots,C\}}$ \State Perform STL decomposition to compute class-wise semantics $\{S_y\}$ \State Compute remainder component $r_i$ for each time-series example $(x_i, y_i) \in \mathcal{D}_{in}$ \State Train CVAEs $\mathcal{M}_r$ and $\mathcal{M}_x$ using the decomposed data \State Estimate the threshold interval $[\tau_l, \tau_u]$ from sample SR scores of in-distribution data \endpreproc \State Compute the predicted class label: $\hat{y}$=$F_{\theta}(X_{test})$ \State Compute the remainder component: $r_{test}$=$X_{test}-S_{y=\hat{y}}$ \State Estimate the input log-likelihood $l_x=\mathcal{L}_{\mathcal{M}_x}(X_{test}, \hat{y})$ \State Estimate the remainder log-likelihood $l_r=\mathcal{L}_{\mathcal{M}_r}(r_{test}, \hat{y})$ \State Compute the seasonal ratio score: $SR(X_{test})$ = $l_x/l_r$ \If{$SR(X_{test}) \in [\tau_l, \tau_u]$} \State \Return FALSE \Else \State \Return TRUE \EndIf \end{algorithmic} \end{algorithm} \subsection{OOD Detection Approach} One key advantage of the SRS method is that it can be executed directly at the inference stage: unlike prior VAE-based methods such as Likelihood Regret scoring, it does not require additional optimization for each test input.
\vspace{1.0ex} \noindent {\bf Training stage.} Our overall training procedure for time-series OOD detection is as follows: \begin{enumerate} \item Train a CVAE $\mathcal{M}_x$ using in-distribution data $\mathcal{D}_{in}$ to estimate the conditional likelihood $p(x|y \in \mathcal{Y})$ of a time-series signal $x$. \item Create one group $\mathcal{D}_y$ with all examples from in-distribution data $\mathcal{D}_{in}$ that have class label $y \in \mathcal{Y}$. Execute STL decomposition on each group $\mathcal{D}_y$ after serializing the data over the $T$ dimension to extract the corresponding semantic component $S_y \in \mathbb{R}^{n \times T}$. \item Create the remainder for each training example $(x_i, y_i) \in \mathcal{D}_{in}$ using the pattern for its class label: $r_i = x_i - S_{y_i}$. We train another CVAE $\mathcal{M}_r$ using all these remainders to estimate the conditional likelihood $p(r|y \in {\mathcal Y})$. \item Compute the seasonal ratio score for each $(x_i, y_i) \in \mathcal{D}_{in}$ using the trained CVAEs $\mathcal{M}_x$ and $\mathcal{M}_r$: \begin{equation} SR_i(x_i, y_i) \overset{\Delta}{=} \dfrac{p(x_i|y=y_i)}{p(r_i|y=y_i)} \end{equation} \item Compute the mean $\mu_{SR}$ and standard deviation $\sigma_{SR}$ over the SR scores of all in-distribution examples seen during training. Set the OOD detection threshold interval as $[\tau_l, \tau_u]$ such that $\tau_l$ = $\mu_{SR} - \lambda \times \sigma_{SR}$ and $\tau_u$ = $\mu_{SR} + \lambda \times \sigma_{SR}$, where $\lambda$ is a hyper-parameter. \item Tune the hyper-parameter $\lambda$ on the validation data to maximize OOD detection accuracy. \end{enumerate} \vspace{1.0ex} The choice of $[\tau_l, \tau_u]$ for OOD detection is motivated by the fractional nature of the seasonal ratio scores. The SRS algorithm assumes that in-distribution examples satisfy $p(x|y)= p(r|y)$. Hence, we characterize in-distribution examples by an SR score close to 1.
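A minimal sketch of the thresholding and decision steps (our illustration; the two CVAEs are abstracted here as pre-computed log-likelihoods, and, mirroring Algorithm \ref{alg:sr}, the score is taken as the ratio $l_x/l_r$):

```python
import numpy as np

def fit_threshold(sr_scores, lam=2.0):
    # Interval [tau_l, tau_u] = mu +/- lambda * sigma over ID scores.
    mu, sigma = np.mean(sr_scores), np.std(sr_scores)
    return mu - lam * sigma, mu + lam * sigma

def is_ood(l_x, l_r, tau_l, tau_u):
    # Inference: SR = l_x / l_r (as in Algorithm 1); flag OOD when the
    # score falls outside the in-distribution interval.
    sr = l_x / l_r
    return not (tau_l <= sr <= tau_u)

# Toy ID scores concentrated near 1, as expected when p(x|y) ~ p(r|y).
rng = np.random.default_rng(0)
id_scores = 1.0 + 0.02 * rng.normal(size=500)
tau_l, tau_u = fit_threshold(id_scores, lam=2.0)
print(is_ood(-120.0, -118.0, tau_l, tau_u))  # SR near 1 -> in-distribution
print(is_ood(-300.0, -120.0, tau_l, tau_u))  # SR far from 1 -> OOD
```

In practice the interval is fit on the ID scores collected during training and $\lambda$ is tuned on validation data, as described above.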
This design choice is based on the fact that the SR score is a quotient ideally centered around the value 1, so in-distribution examples are identified by SR scores close to the mean score recorded during training, bounded from the left ($\tau_l$) and the right ($\tau_u$). Indeed, we observe in Figure \ref{fig:taus} that the SR score for OOD examples can fall on either the left or the right side of the SR scores for in-distribution examples. Ideally, the SR score for in-distribution examples is closest to $\mu_{SR}$, as illustrated in Figure \ref{fig:framework}. $\lambda$ is tuned to define the valid range of SR scores for in-distribution examples from $\mathcal{D}_{in}$. We note that the score can easily be changed to consider the quantiles of the ratios estimated during the training stage and use them to separate the regions of OOD and ID scores. In this case, we redefine $\tau_l = (0.5 - \lambda)$ as the $\tau_l$-quantile for the lower limit of the ID scores and $\tau_u = (0.5 + \lambda)$ as the $\tau_u$-quantile for the upper limit of the ID scores. Given this definition, we need to tune the hyper-parameter $0<\lambda\le 0.5$ on the validation data to maximize OOD detection accuracy. We have observed in our experiments that both settings give similar performance. Therefore, we only consider $\tau_{u,l}$ = $\mu_{SR} \pm \lambda \times \sigma_{SR}$ for simplicity in our experimental evaluation. \begin{figure}[t] \centering \begin{minipage}{.38\linewidth} \centering \includegraphics[width=\linewidth]{Figures/muvarHist1.png} \end{minipage}% \begin{minipage}{.38\linewidth} \centering \includegraphics[width=\linewidth]{Figures/muvarHist2.png} \end{minipage} \caption{Histogram showing the ID and OOD scores along the seasonal ratio score axis.
The seasonal ratio scores for OOD examples can be either greater or less than the seasonal ratio scores for ID examples.} \label{fig:taus} \end{figure} \vspace{1.0ex} \noindent {\bf Inference stage.} Given a time-series signal $x$, our OOD detection approach works as follows. \begin{enumerate} \item Compute the predicted class label $\hat{y}$ using the classifier $F_{\theta}(x)$. \item Create the remainder component of $x$ with the predicted label $\hat{y}$: $r$ = $x - S_{\hat{y}}$. \item Compute the conditional likelihoods $p(x|\hat{y})$ and $p(r|\hat{y})$ from the trained CVAE models $\mathcal{M}_x$ and $\mathcal{M}_r$. \item Compute the seasonal ratio score using the conditional likelihoods: \begin{equation*} SR(x, \hat{y})=\frac{p(x|y=\hat{y})}{p(r|y=\hat{y})} \end{equation*} \item If the seasonal ratio score $SR(x, \hat{y})$ does not lie within the threshold interval $[\tau_l, \tau_u]$, classify $x$ as an OOD example. Otherwise, classify $x$ as an in-distribution example. \end{enumerate} Algorithm \ref{alg:sr} shows the complete pseudo-code, including the offline training stage and the online inference stage for new time-series signals. For a given time-series signal $x$ at the inference stage, we first compute the seasonal ratio score. If the score is within $[\tau_l, \tau_u]$, the time-series signal is classified as in-distribution. Otherwise, we flag it as an OOD time-series signal. \subsection{Alignment method for improving the accuracy of the SRS algorithm} In this section, we first motivate the need for pre-processing raw time-series signals to improve the accuracy of the SRS algorithm. Subsequently, we describe a novel time-series alignment method based on dynamic time warping to achieve this goal. \vspace{1.0ex} \noindent \textbf{Motivation.} The effectiveness of the SRS algorithm depends critically on the accuracy of the STL decomposition. The STL method employs a fixed-length window over the serialized data to estimate the recurring pattern.
This is a challenge for real-world time-series signals as they are prone to scaling, warping, and time-shifts. We illustrate in Figure \ref{fig:patternalign} the challenge of scaling, warping, and time-shift occurrences in time-series data. The top-left figure depicts a set of time-series signals with a clear ECG pattern. Due to their misalignment, if we subtract one fixed ECG pattern from every time-series signal, the remainder will be inaccurate. The figures in the left and right column show the difference in the remainder components between the {\em natural} data (Left) and the {\em aligned} version of the time-series data (Right). We can clearly observe that the remainder components from the aligned data are more accurate. If input time-series data is not aligned, it can significantly affect the estimation of $p(r_i|y=y_i)$ and the effectiveness of SRS for OOD detection. Hence, we propose a novel alignment method using the class-wise semantic for the in-distribution data $\mathcal{D}_{in}$ during both training and inference stages. \begin{figure}[!h] \centering \begin{minipage}{.4\linewidth} \centering \includegraphics[width=\linewidth]{Figures/N_aligned.png} \end{minipage}% \begin{minipage}{.4\linewidth} \centering \includegraphics[width=\linewidth]{Figures/Aligned.png} \end{minipage} \begin{minipage}{.4\linewidth} \centering \includegraphics[width=\linewidth]{Figures/R_N_aligned.png} \end{minipage}% \begin{minipage}{.4\linewidth} \centering \includegraphics[width=\linewidth]{Figures/R_aligned.png} \end{minipage} \begin{minipage}{.4\linewidth} \centering Before alignment \end{minipage}% \hfill \begin{minipage}{.4\linewidth} \centering After alignment \end{minipage} \caption{Illustration of the challenges in time-series data for STL decomposition: semantic component and remainder. (Left column) Set of natural time-series signals with an ECG wave as semantic component $S_y$ and the corresponding remainders w.r.t $S_y$. 
(Right column) Time-series signals and remainders from STL decomposition after applying the alignment procedure.} \label{fig:patternalign} \end{figure} \vspace{1.0ex} \noindent \textbf{Time-series alignment algorithm.} The overall goal of our approach is to produce class-wise aligned time-series signals using the ID data $\mathcal{D}_{in}$ so that the STL algorithm will produce accurate semantic components $S_{y}$ for all $y \in \mathcal{Y}$. We propose to employ dynamic time warping (DTW) \citep{muller2007dynamic} based optimal alignment to achieve this goal. The optimal DTW alignment describes the warping between two time-series signals that makes them aligned in time. It overcomes warping and time-shift issues by developing a one-to-many match over time-steps. There are two key steps in our alignment algorithm. First, we compute the semantic components $S_{y}$ for all $y \in \mathcal{Y}$ from $\mathcal{D}_{in}$ using STL decomposition. For each in-distribution example $(x_i, y_i) \in \mathcal{D}_{in}$, we compute the optimal DTW alignment between $S_{y_i}$ and $x_i$. Second, we use an appropriate time-series transformation for each in-distribution example $(x_i, y_i)$ to improve the DTW alignment from the first step. Specifically, we use the time-steps of the longest one-to-many, many-to-one, or sequential one-to-one sequence match to select the Expand, Reduce, or Translate transformation, as illustrated in Figure \ref{fig:talign}. \vspace{1.0ex} \noindent We define these three time-series transformations below. Let $X^1=(t^1_1, t^1_2, \cdots, t^1_T)$ and $X^2=(t^2_1, t^2_2, \cdots, t^2_T)$ be two time-series signals of length $T$. \begin{itemize} \item {\bf Expand$(X^1, X^2)$}: We employ this transformation for a one-to-many time-step matching ($t^1_i$ is matched with $[t^2_j, \cdots, t^2_{j+k}]$). It duplicates the time-step $t^1_i$ $k$ times.
\item {\bf Reduce$(X^1, X^2)$}: We employ this transformation in the case of a many-to-one time-step matching ($[t^1_i, \cdots, t^1_{i+k}]$ is matched with $t^2_j$). It replaces the time-steps $[t^1_i, \cdots, t^1_{i+k}]$ by a single averaged value. \item {\bf Translate$(X^1, X^2)$}: We employ this transformation in the case of a sequential one-to-one time-step matching ($[t^1_i, \cdots, t^1_{i+k}]$ is matched one-to-one with $[t^2_j, \cdots, t^2_{j+k}]$). It translates $X^1$ to ensure that $t^1_i=t^2_j$. \end{itemize} \begin{figure}[t] \centering \begin{minipage}{.29\linewidth} \centering \includegraphics[width=\linewidth]{Figures/one_to_many.png} \end{minipage}% \begin{minipage}{.29\linewidth} \centering \includegraphics[width=\linewidth]{Figures/many_to_one.png} \end{minipage}% \begin{minipage}{.29\linewidth} \centering \includegraphics[width=\linewidth]{Figures/one_to_one.png} \end{minipage} \begin{minipage}{.29\linewidth} \centering One-to-Many \end{minipage}% \begin{minipage}{.29\linewidth} \centering Many-to-One \end{minipage}% \begin{minipage}{.29\linewidth} \centering One-to-One \end{minipage} \caption{Illustration of the use of an appropriate transformation to adjust the alignment between two time-series signals $X^1$ (blue signal) and $X^2$ (green signal).} \label{fig:talign} \end{figure} We illustrate in Figure \ref{fig:dtwalign} two examples of transformation choices for a time-series signal $x$ aligned with a pattern $S$. The alignment on the left shows that the longest consecutive matching sequence is a one-to-many match ($x_4$ is matched with $[S_2, \cdots, S_7]$), while the alignment on the right shows that the longest consecutive matching sequence is a sequential one-to-one match ($[x_4, \cdots,x_8]$ is matched with $[S_3, \cdots, S_7]$).
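The DTW step and the longest-match heuristic that selects a transformation can be sketched as follows (our illustrative implementation of textbook DTW on univariate signals, not the code released with the paper):

```python
import numpy as np

def dtw_path(a, b):
    # Classic O(|a||b|) dynamic time warping; returns the optimal
    # warping path as a list of matched index pairs (i, j).
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = abs(a[i - 1] - b[j - 1]) + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    path, (i, j) = [], (n, m)
    while (i, j) != (1, 1):
        path.append((i - 1, j - 1))
        i, j = min([(i - 1, j), (i, j - 1), (i - 1, j - 1)], key=lambda p: D[p])
    path.append((0, 0))
    return path[::-1]

def longest_match_type(path):
    # Classify the longest consecutive stretch of identical path steps:
    # (0,1) -> one-to-many, (1,0) -> many-to-one, (1,1) -> one-to-one.
    # This is the cue used to choose Expand, Reduce, or Translate.
    label = {(0, 1): 'one-to-many', (1, 0): 'many-to-one', (1, 1): 'one-to-one'}
    steps = [(i1 - i0, j1 - j0) for (i0, j0), (i1, j1) in zip(path, path[1:])]
    best, best_kind, run = 1, label[steps[0]], 1
    for prev, cur in zip(steps, steps[1:]):
        run = run + 1 if cur == prev else 1
        if run > best:
            best, best_kind = run, label[cur]
    return best_kind, best

# x has a stretched plateau that collapses onto the compact peak of S,
# producing a many-to-one stretch (a case for the Reduce transformation).
x = [0.0, 1.0, 1.0, 1.0, 1.0, 0.0]
S = [0.0, 1.0, 0.0]
print(longest_match_type(dtw_path(x, S)))
```

The quadratic DTW recurrence is adequate for illustration; windowed or pruned DTW variants would be preferable for long signals.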
\begin{figure}[!h] \centering \begin{minipage}{.8\linewidth} \begin{minipage}{.35\linewidth} \centering \includegraphics[width=\linewidth]{Figures/Slide1.JPG} \end{minipage}% \hfill \begin{minipage}{.35\linewidth} \centering \includegraphics[width=\linewidth]{Figures/Slide2.JPG} \end{minipage} \end{minipage} \caption{Illustration of two transformation choices for a time-series $x$ aligned with a pattern $S$. (Left) One-to-many as the longest match, calling for the $Expand$ transformation. (Right) Sequential one-to-one as the longest match, calling for the $Translate$ transformation.} \label{fig:dtwalign} \end{figure} \section{Related Work} \vspace{1.0ex} \noindent \textbf{OOD detection via pre-trained models.} Employing pre-trained deep neural networks (DNNs) to detect OOD examples was justified by the observation that DNNs with ReLU activation can produce arbitrarily high softmax confidence for OOD examples \citep{hendrycks2017baseline}. The maximum probability over class labels has been used \citep{hendrycks2017baseline} to improve OOD detection accuracy. Building on the success of this method, temperature scaling and controlled perturbations were used \citep{liang2017enhancing} to further increase the performance. The Mahalanobis-based scoring method \citep{lee2018unified} identifies OOD examples using class-conditional Gaussian distributions. Gram matrices \citep{sastry20gram} were used to detect OOD examples based on the features learned from the training data. The effectiveness of these prior methods depends critically on the availability of a highly accurate DNN for the classification task. However, this requirement is challenging for the time-series domain, as real-world datasets are typically small and exhibit high class imbalance, resulting in inaccurate DNNs \cite{wen2020time,Huang_2016_CVPR}.
\vspace{1.0ex} \noindent \textbf{OOD detection via synthetic data.} During the training phase, it is impossible to anticipate the OOD examples that would be encountered during the deployment of DNNs \citep{hendrycks2019scaling}. Hence, unsupervised methods \citep{yu2019unsupervised} are employed, or synthetic data based on generative models is created \citep{lee2018training, lin2020using} to explicitly regularize the DNN weights over potential OOD examples. Creating synthetic data is much more challenging in the time-series domain due to limited data and the difficulty for human experts to validate synthetic examples. \vspace{1.0ex} \noindent \textbf{OOD detection via deep generative models.} The overall idea of using deep generative models (DGMs) for OOD detection is as follows: 1) DGMs are trained to directly estimate the in-distribution $P^*$; and 2) The learned DGM flags samples that lie in a low-density region as OOD. Prior work has used auto-regressive generative models \citep{jie2019likelihood} or GANs \citep{wang2020further} and proposed scoring metrics such as likelihood estimates to obtain good OOD detectors. DGMs are effective at estimating the likelihood of input data and the data distribution, which makes them a good candidate to identify OOD examples with high accuracy. However, as shown by \citep{nalisnick2018deep}, DGMs can assign a high likelihood to OOD examples. Likelihood ratio \citep{jie2019likelihood} and likelihood regret \citep{xiao2020regret} were proposed to improve OOD detection. While the likelihood regret method can generalize to different types of data, the likelihood ratio is limited to categorical data distributions with the assumption that the data contains background units (background pixels for images and background sequences for genomes).
Likelihood ratio cannot be applied to the time-series domain for two reasons: 1) We need to deal with continuous distributions; and 2) We cannot assume that time-steps (the information unit) can be independently classified as background or semantic content. \vspace{1.0ex} \noindent \textbf{OOD detection via time-series anomaly detection.} Generic Anomaly Detection (AD) algorithms \citep{ruff2021unifying} can be employed to solve OOD problems for time-series data. We note that some AD methods can cover the same setting as the OOD problem for the time-series domain. However, the two settings are still considered distinct frameworks with different goals \cite{yang2021generalized}. By definition, AD aims to detect and flag anomalous samples that deviate from a pre-defined normality \citep{laptev2015generic,canizo2019multi} estimated during training. Under the AD assumption of normality, such samples only originate from a covariate shift in the data distribution \citep{ruff2021unifying}. Semantically, such samples do not qualify as OOD samples \citep{yang2021generalized}. For example, consider an intelligent system trained to identify the movement of a person (e.g., run, stand, walk, swim), where stumbling may occur during running. Such an event would be classified as an anomaly, as the activity running is still taking place, but in an irregular manner. However, if the runner slips and falls, that activity should be flagged as OOD because it does not belong to any of the pre-defined activity classes. In other words, OOD samples must originate from a different class distribution ($y_{\text{OOD}} \notin \mathcal{Y}$) than in-distribution examples, while anomalies typically originate from the same underlying distribution but with anomalous behavior. Additionally, anomalies can manifest as a single time-step or a window of varying length, but generally not as a complete time-series example in itself.
Such differences can be critical for users and practitioners, which necessitates the study of separate algorithms for AD and OOD. Unlike anomaly detection, OOD detection focuses on identifying test samples with non-overlapping labels with in-distribution data and can generalize to the multi-class setting \citep{yang2021generalized}. The main limitations of time-series AD algorithms \cite{challu2022deep} for the OOD detection task are: \begin{itemize} \item OOD samples cannot be used as labeled anomalous examples during training due to the general definition of the OOD space. For various AD methods, such as nearest-neighbor and distance-based methods, fine-tuning the cut-off threshold between ``normal'' and ``anomalous'' examples requires anomaly labels during training. In particular, window-based techniques~\cite{chandola2010anomaly} require both normal and anomalous sequences during training, and if there are none, anomalous examples are randomly generated. Such a requirement is not practical for OOD problem settings, as the OOD distribution is ill-defined and cannot be sampled from. \item AD assumes that normal samples are homogeneous in their observations. This assumption helps the AD algorithm to detect anomalies. Such an assumption does not hold across the different classes of the in-distribution space in multi-class settings. Therefore, time-series AD algorithms are prone to fail at detecting OOD samples. Indeed, our experiments demonstrate the failure of state-of-the-art time-series AD methods. \end{itemize} \section{Experiments and Results} In this section, we present experimental results comparing the proposed SRS algorithm and prior methods on diverse real-world time-series datasets. \subsection{Experimental Setup} \vspace{1.0ex} \noindent \textbf{Datasets.} We employ the multivariate benchmarks from the UCR time-series repository \citep{dau2019ucr}.
Due to space constraints, we present results on representative datasets from six different pre-defined domains: \textit{Motion}, \textit{ECG}, \textit{HAR}, \textit{EEG}, \textit{Audio}, and \textit{Other}. The list of datasets includes Articulary Word Recognition (AWR), Stand Walk Jump (SWJ), Cricket (Ckt), Hand Movement Direction (HMD), Heartbeat (Hbt), and ERing (ERg). We employ the standard training/validation/testing splits from these benchmarks. \vspace{1.0ex} \noindent \textbf{OOD experimental setting.} Prior work formalized the OOD experimental setting for different domains such as computer vision \citep{hendrycks2017baseline}. However, there is no established OOD setting for the time-series domain. In what follows, we explain the challenges for the time-series domain and propose a concrete OOD experimental setting for it. The first challenge with the time-series domain is the dimensionality of signals. Let the ID space be $\mathbb{R}^{n_i\times T_i}$ and the OOD space be $\mathbb{R}^{n_o\times T_o}$. Since we train CVAEs on the ID space, ${n_o\times T_o}$ needs to match ${n_i\times T_i}$. Hence, if $n_o>n_i$ or $T_o>T_i$, we window-clip the respective OOD dimension in order to have $n'_o=n_i$ or $T'_o=T_i$. If $n_o<n_i$ or $T_o<T_i$, we zero-pad the respective OOD dimension in order to have $n'_o=n_i$ or $T'_o=T_i$. Zero-padding is based on the assumption that the additional dimension exists but takes {\em null} values. The second challenge is in defining OOD examples. Since the number of datasets in the UCR repository is large, conducting experiments on all combinations of datasets as ID and OOD is impractical and repetitive (600 distinct configurations for the 25 different datasets considered in this paper). Hence, we propose two settings using the notion of domains. \begin{itemize} \item \textbf{In-domain OOD}: Both ID and OOD datasets belong to the same domain.
This setting helps in understanding the behavior of OOD detectors when real-world OOD examples come from the same application domain. For example, a detector of \textit{Epileptic} time-series signals should consider signals resulting from sports activity (\textit{Cricket}) as OOD. \item \textbf{Cross-domain OOD:} The ID and OOD datasets come from two different domains. This configuration is more intuitive for OOD detectors, as time-series signals from different application domains should not confuse the ML model (e.g., \textit{Motion} and \textit{HAR} data). \end{itemize} Our intuition is that the in-domain OOD setting is more likely to occur in real-world deployments. Hence, we propose to do separate experiments by treating every dataset from the same domain as OOD. For the cross-domain OOD setting, we believe that a single representative dataset from the domain can be used as OOD. In this work, we focus on real-world OOD detection for the time-series domain. Since random noise does not inherit the characteristics of time-series data, methods from the computer vision literature have good potential for detecting random noise. For improved readability and ease of understanding, we provide Table \ref{tab:domainlabel} and Table \ref{tab:dslabel} to explain the domain labels and dataset labels used in the experimental section of our paper along with the corresponding UCR domain name and dataset name. \begin{itemize} \item Table \ref{tab:domainlabel} shows the label used to represent a given domain for the \textbf{Cross-Domain OOD} setting.
\begin{table}[t] \centering \caption{List of domain labels used in the experimental section and the corresponding UCR domain name.} \label{tab:domainlabel} \begin{tabular}{|c|c|} \hline \textbf{Domain label} & \textbf{Domain name} \\ \hline D1 & Motion \\ \hline D2 & ECG \\ \hline D3 & HAR \\ \hline D4 & EEG \\ \hline D5 & Audio \\ \hline D6 & Other \\ \hline \end{tabular} \end{table} \item Table \ref{tab:dslabel} shows the label used to represent the dataset used as an OOD source against a given ID dataset for the \textbf{In-Domain OOD} setting. For example, while reading Table \ref{tab:auroc} in the main paper, when AWR is the ID distribution, according to Table \ref{tab:dslabel}, DS1 represents the CharacterT. dataset. On the other hand, if HMD is the ID distribution, DS1 represents the FingerM. dataset. \end{itemize} \begin{table*}[t] \centering \setlength\extrarowheight{2.5pt} \caption{Reference table for the In-Domain dataset labels used in the experimental section and the corresponding UCR dataset name. The second column shows the average CVAE normalized reconstruction Mean Absolute Error (MAE) with a negligible variance $\le$ 0.001 on the in-distribution data.} \label{tab:dslabel} \resizebox{\linewidth}{!}{% \begin{tabular}{|c|c|ccccccc|} \hline \multirow{2}{*}{\textbf{In-distribution Dataset name}} & \multirow{2}{*}{MAE} & \multicolumn{7}{c|}{\textbf{OOD Dataset label}} \\ & & DS1 & DS2 & DS3 & DS4 & DS5 & DS6 & DS7 \\ \hline ArticW. (Motion) & 0.025 & CharacterT. & EigenW. & PenD. & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ \\\hline EigenW. (Motion) & 0.000 & ArticW. & CharacterT. & PenD. & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ \\\hline PenD. (Motion) & 0.001 & ArticW. & CharacterT. & EigenW. & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ \\\hline AtrialF. (ECG) & 0.005 & StandW. & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ \\\hline StandW. (ECG) & 0.012 & AtrialF.
& $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ \\\hline BasicM. (HAR) & 0.024 & Cricket & Epilepsy & Handw. & Libras & NATOPS & RacketS. & UWaveG.\\\hline Cricket (HAR) & 0.010 & BasicM. & Epilepsy & Handw. & Libras & NATOPS & RacketS. & UWaveG.\\\hline Epilepsy (HAR) & 0.030 & BasicM. & Cricket & Handw. & Libras & NATOPS & RacketS. & UWaveG.\\\hline Handw. (HAR) & 0.006 & BasicM. & Cricket & Epilepsy & Libras & NATOPS & RacketS. & UWaveG.\\\hline Libras (HAR) & 0.003 & BasicM. & Cricket & Epilepsy & Handw. & NATOPS & RacketS. & UWaveG.\\\hline NATOPS (HAR) & 0.046 & BasicM. & Cricket & Epilepsy & Handw. & Libras & RacketS. & UWaveG.\\\hline RacketS. (HAR) & 0.026 & BasicM. & Cricket & Epilepsy & Handw. & Libras & NATOPS & UWaveG.\\\hline UWaveG. (HAR) & 0.015 & BasicM. & Cricket & Epilepsy & Handw. & Libras & NATOPS & RacketS.\\\hline EthanolC. (Other) & 0.001 & ER. & LSST & PEMS-SF & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ \\\hline ER. (Other) & 0.044 & EthanolC. & LSST & PEMS-SF & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ \\\hline LSST (Other) & 0.008 & EthanolC. & ER. & PEMS-SF & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ \\\hline PEMS-SF (Other) & 0.525 & EthanolC. & ER. & LSST & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ \\\hline FingerM. (EEG) & 0.048 & HandM. & MotorI. & SelfR1. & SelfR2. & $\emptyset$ & $\emptyset$ & $\emptyset$ \\\hline HandM. (EEG) & 0.006 & FingerM. & MotorI. & SelfR1. & SelfR2. & $\emptyset$ & $\emptyset$ & $\emptyset$ \\\hline MotorI. (EEG) & 0.543 & FingerM. & HandM. & SelfR1. & SelfR2. & $\emptyset$ & $\emptyset$ & $\emptyset$ \\\hline SelfR1. (EEG) & 0.009 & FingerM. & HandM. & MotorI. & SelfR2. & $\emptyset$ & $\emptyset$ & $\emptyset$ \\\hline SelfR2. (EEG) & 0.012 & FingerM. & HandM. & MotorI. & SelfR1. & $\emptyset$ & $\emptyset$ & $\emptyset$ \\\hline Heartbeat (Audio) & 0.011 & JapaneseV. & SpokenA. 
& $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ \\\hline \end{tabular} } \end{table*} \vspace{1.0ex} \noindent \textbf{Evaluation metrics.} We employ the following two standard metrics in our experimental evaluation. {\bf 1) AUROC score:} The area under the receiver operating characteristic curve is a threshold-independent metric. This metric (higher is better) is equal to 1.0 for a perfect detector and 0.5 for a random detector. {\bf 2) F1 score:} It is the harmonic mean of precision and recall. Due to the threshold dependence of the F1 score, we use the highest F1 score obtained with a variable threshold. This score has a maximum of 1.0 in the case of perfect precision and recall. \vspace{1.0ex} \noindent \textbf{Configuration of algorithms.} We employ a 1D-CNN architecture for the CVAE models required for the seasonal ratio scoring (SR) method. We consider a naive baseline where the CVAE is trained on the ID data and the likelihood (LL) is used to detect OOD samples. We also consider a variant of SR scoring (SR$_a$) that works on aligned time-series data using the method explained in Section 4.3. We evaluate both SR and SR$_a$ against state-of-the-art baselines and employ their publicly available code: Out-of-Distribution Images in Neural networks (ODIN) \citep{liang2017enhancing} and Gram Matrices (GM) \citep{sastry20gram}, which have been shown to outperform most of the existing baselines; the recently proposed Likelihood Regret (LR) score \citep{xiao2020regret}; and an adaptation of a very recent time-series AD method, Deep generative model with hierarchical latent factors (HL) \citep{challu2022deep}, that does not require labeled anomalies for training purposes. We chose HL as the main baseline to represent time-series AD under the OOD setting as it is the state-of-the-art time-series AD algorithm.
HL for time-series was shown \citep{challu2022deep} to outperform nearest-neighbor based methods, LSTM-based methods, and other methods \citep{blazquez2021review,braei2020anomaly} in various AD settings. \begin{itemize} \item \textbf{Choice of architecture:} We experimented with three different types of CVAE architecture to decide on the most suitable one for our OOD experiments. We evaluated 1) fully connected, 2) convolutional, and 3) LSTM-based architectures using the reconstruction error as the performance metric. We observed that fully connected networks generally suffer from poor reconstruction performance, especially on high-dimensional data. We also observed that the LSTM's runtime during training and inference is relatively longer than that of the other architectures. In contrast, CNN-based CVAEs delivered both good reconstruction performance and fast runtime. \item \textbf{1D-CNN CVAE details:} To evaluate the effectiveness of the proposed seasonal ratio (SR) score, we employed a CVAE that is based on 1D-CNN layers. The encoder of the CVAE is composed of 1) a minmax normalization layer, 2) a series of 1D-CNN layers, and 3) a fully-connected layer. At the end of the encoder, the parameters $\mu_{\text{CVAE}}$ and $\sigma_{\text{CVAE}}$ are computed to estimate the posterior distribution. A random sample is then generated from this distribution and passed on to the CVAE decoder along with the class label. The decoder of the CVAE is composed of 1) a fully-connected layer, 2) a series of transposed convolutional layers, and 3) a denormalization layer. \item \textbf{CVAE Training:} We use the standard training, validation, and testing split on the benchmark datasets to train both CVAEs $\mathcal{M}_x$ and $\mathcal{M}_r$. Both CVAEs are trained to maximize the ELBO on the conditional log-likelihood defined in Section 3 using the Adam optimizer with a learning rate of $10^{-4}$. We employ a maximum number of training iterations equal to $500$.
To ensure the reliability of the proposed CVAEs, we report in Table \ref{tab:dslabel} the test reconstruction error of the trained CVAE on ID data using the Mean Absolute Error (MAE). We clearly observe that the proposed CVAE learns the ID space well, as the reconstruction error is relatively low. To compute the semantic patterns and remainders for in-distribution examples for training $\mathcal{M}_x$ and $\mathcal{M}_r$, we use the STLdecompose\footnote{https://github.com/jrmontag/STLDecompose.git} python package. \item {\bf Implementation of the baselines:} The baseline methods ODIN\footnote{https://github.com/facebookresearch/odin.git}, GM\footnote{https://github.com/VectorInstitute/gram-ood-detection.git}, HL\footnote{https://github.com/cchallu/dghl.git} and LR\footnote{https://github.com/XavierXiao/Likelihood-Regret.git} were implemented using their respective publicly available code with the recommended settings. To employ ODIN and GM, we trained two different DNN models, a 1D-CNN and an LSTM, for classification tasks with different settings. We report the average performance of the baseline OOD detectors in our experimental setting. To repurpose the HL method from the AD setting to the OOD setting, we serialized the training data and used it during the training of the generator. For OOD detection at inference time, we serialize both the test ID data and OOD data and shuffle them. By setting the window size equal to the time-steps dimension of the original in-distribution inputs, we execute the HL anomaly detection algorithm and report every anomaly as an OOD sample. We employed the default parameters of the generator. As recommended by the authors, we use a hierarchical level equal to 4 and 500 iterations for training and inference. We lower the learning rate to $10^{-6}$ to prevent the exploding gradients that occurred with the default $10^{-3}$ value.
For a fair comparison, the VAE for Likelihood Regret (LR) has the same architecture as the CVAE used to estimate the seasonal ratio (SR) and the naive LL score. \end{itemize} \subsection{Results and Discussion} \vspace{1.0ex} \noindent \textbf{Reconstruction error of DGMs.} Table \ref{tab:OODrecons} shows the test reconstruction error of the trained CVAE on ID data using the Mean Absolute Error (MAE). We clearly observe that the CVAE model is able to learn the ID space, as the reconstruction error is relatively low. Table \ref{tab:OODrecons} also shows analogous results for the same CVAE on OOD data. We observe that DGMs perform well on OOD samples regardless of the different semantics of the ID and OOD data. The pre-trained CVAEs performed well on the OOD AWR dataset with a reconstruction error $\le 0.1$. For the OOD FingerMovement (Fmv) dataset, only two out of the six CVAEs exhibited the intuitively expected high reconstruction error. \vspace{1.0ex} \noindent \textbf{OOD detection via pre-trained classifier and DGMs.} Our first hypothesis is that pre-trained DNN classifiers are not well-suited for OOD detection. To test this hypothesis, we train two DNN models: a 1D-CNN and an RNN classifier. We use these models for OOD detection with the ODIN and GM baselines. Table \ref{tab:OODpretrained} shows that AUROC is low on all datasets. For datasets such as HMD and SWJ, the AUROC score does not exceed 0.6 for any experimental setting. The accuracy of DNNs for time-series classification is not as high as that for the image domain for the reasons explained earlier. Hence, we believe that this uncertainty of DNNs causes the baselines ODIN and GM to fail in OOD detection.
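Both evaluation metrics reported in the tables can be computed directly from the scores assigned to ID and OOD test examples. The following is a minimal sketch (function names are ours), assuming higher scores indicate OOD and treating OOD as the positive class:

```python
def auroc(id_scores, ood_scores):
    """Threshold-independent AUROC: the probability that a random OOD
    example scores higher than a random ID example (ties count 0.5).
    1.0 for a perfect detector, 0.5 for a random one."""
    wins = sum((o > i) + 0.5 * (o == i)
               for o in ood_scores for i in id_scores)
    return wins / (len(id_scores) * len(ood_scores))


def best_f1(id_scores, ood_scores):
    """Highest F1 (harmonic mean of precision and recall) over a
    variable detection threshold."""
    best = 0.0
    for t in sorted(set(id_scores) | set(ood_scores)):
        tp = sum(s >= t for s in ood_scores)   # OOD correctly flagged
        fp = sum(s >= t for s in id_scores)    # ID wrongly flagged
        fn = len(ood_scores) - tp
        if tp:
            best = max(best, 2 * tp / (2 * tp + fp + fn))
    return best


print(auroc([0.1, 0.2], [0.8, 0.9]))    # perfectly separated scores
print(best_f1([0.1, 0.2], [0.8, 0.9]))
```

Sweeping only the observed score values is sufficient for the best F1, since the confusion counts change only at those thresholds.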
\begin{table}[!h] \setlength\extrarowheight{2pt} \caption{Average AUROC results for ODIN and GM.} \label{tab:OODpretrained} \begin{tabular}{cc||cccccc} & & AWR & SWJ & Ckt & HMD & Hbt & ERg \\ \hline \multirow{2}{*}{\shortstack{\textbf{In-}\\ \textbf{Domain}}}& ODIN& 0.65$\pm 0.03$ & 0.54$\pm 0.01$ & 0.65$\pm 0.03$& 0.55$\pm 0.01$ & 0.71$\pm 0.03$ & 0.70$\pm 0.03$\\ & GM & 0.70$\pm 0.03$ & 0.58$\pm 0.01$ & 0.64$\pm 0.03$& 0.58$\pm 0.02$ & 0.80$\pm 0.01$ & 0.75$\pm 0.03$ \\ \hline \multirow{2}{*}{\shortstack{\textbf{Cross-}\\\textbf{Domain}}}&ODIN& 0.55$\pm 0.01$ & 0.54$\pm 0.01$ & 0.55$\pm 0.01$& 0.50 & 0.70$\pm 0.01$ & 0.70$\pm 0.01$ \\ & GM & 0.56$\pm 0.01$ & 0.54$\pm 0.01$ & 0.65$\pm 0.03$& 0.55$\pm 0.01$ & 0.75$\pm 0.03$ & 0.78$\pm 0.02$ \end{tabular} \end{table} Our second hypothesis is that the tendency of DGMs to assign high likelihood to OOD samples also applies to time-series. While the results in Table \ref{tab:OODrecons} corroborated this hypothesis, we also report the performance of a pre-trained CVAE used directly for OOD detection (LL) in Table \ref{tab:auroc}. We observe that the AUROC score of LL does not outperform any of the other baselines. Hence, a new scoring method is necessary for CVAE-based OOD detection. \begin{table}[t] \centering \caption{Average reconstruction error of CVAE is small on both ID and OOD data.
The variance is $\le 0.001$.} \label{tab:OODrecons} \begin{tabular}{cc||cccccc} \multicolumn{2}{c||}{\textbf{ID Train}} & AWR & SWJ & Ckt & HMD & Hbt & ERg \\ \hline \multicolumn{2}{c||}{\textbf{ID Test}} & 0.025 & 0.012 & 0.010 & 0.118 & 0.011 & 0.045 \\ \hline \multirow{2}{*}{\textbf{OOD}} & AWR & $\emptyset$ & 0.018 & 0.035 & 0.039 & 0.002 & 0.137 \\ & Fmv & 1.658 & 0.146 & 0.146 & 0.039 & 0.071 & 6.132 \end{tabular} \end{table} \begin{figure}[t] \centering \begin{minipage}{.42\linewidth} \centering \includegraphics[width=\linewidth]{Figures/OverlapHist_RacketSports.png} \end{minipage}% \begin{minipage}{.42\linewidth} \centering \includegraphics[width=\linewidth]{Figures/OverlapHist_UWaveGestureLibrary.png} \end{minipage} \begin{minipage}{.42\linewidth} \centering \includegraphics[width=\linewidth]{Figures/SR_OverlapHist_RacketSports.png} \end{minipage}% \begin{minipage}{.42\linewidth} \centering \includegraphics[width=\linewidth]{Figures/SR_OverlapHist_UWaveGestureLibrary.png} \end{minipage} \caption{Histogram showing the non-separability of ID and OOD LR scores (Top row) and the separability using the seasonal ratio method on real-world time-series data (Bottom row).} \label{fig:histoverlap} \end{figure} \vspace{1.0ex} \noindent \textbf{Random Noise as OOD.} An existing experimental setting for OOD detection tasks is to detect random noise. For this setting, we generate random noise as an input sampled from a Gaussian or a Uniform distribution. Table \ref{tab:noiseOOD} shows the performance of the LR baseline on detecting random noise as OOD examples. We observe in Table \ref{tab:noiseOOD} that LR has excellent performance on this task. This is expected, as random noise does not exhibit the characteristics of real time-series data. Hence, the existing baselines can perform strongly on such OOD examples. We motivate our seasonal ratio scoring approach for OOD detection based on real-world examples.
We have shown in the main paper that existing baselines perform poorly at detecting real-world OOD examples, where SR has significantly better performance. \begin{table}[!h] \centering \caption{Average performance of LR on OOD examples sampled from Gaussian/Uniform distribution.} \label{tab:noiseOOD} \begin{tabular}{c||cccccc} & AWR & SWJ & Ckt & HMD & Hbt & ERg \\ \hline \textbf{LR}& 1.00 & 1.00 & 1.00 & 1.00 & 0.95 & 1.00 \\ \end{tabular} \end{table} \begin{table}[!h] \centering \caption{Results for the validity of Assumption \ref{asp:main}. Average distance (MAE and DTW measures) between the semantic pattern from STL $S_y$ and time-series example $x$ with label $y$ from the testing data (with a negligible variance $\le 0.001$).} \label{tab:patternerr} \begin{tabular}{c||cccccc} & AWR & SWJ & Ckt & HMD & Hbt & ERg \\ \hline \textbf{MAE} & 0.047 & 0.031 & 0.029 & 0.078 & 0.002 & 0.086 \\ \hline \textbf{DTW} & 0.039 & 0.018 & 0.022 & 0.068 & 0.002 & 0.069 \end{tabular} \end{table} \vspace{1.0ex} \noindent {\bf Results for SR score.} The effectiveness of the SR score depends on the validity of Assumption \ref{asp:main}. Table \ref{tab:patternerr} shows both MAE and DTW measures between the semantic pattern $S_y$ from STL and different time-series examples of the same class $y$. We observe that the average difference measure is low. These results strongly demonstrate that Assumption \ref{asp:main} holds empirically. For qualitative results, Fig. \ref{fig:histoverlap} contrasts the score distributions of SR with those of Likelihood Regret (LR). This illustration shows that SRS provides significantly better OOD separability. \begin{table*}[b] \setlength\extrarowheight{2.5pt} \caption{AUROC results for the baselines, SR, and SR with time-series alignment (SR$_a$) on different datasets for both in-domain and cross-domain OOD setting.
{\bf DSi} is a label given to the dataset used as real-world OOD data compared to the ID dataset in the first cell of every column. {\bf Di} is a label given to the domain used to define the OOD distribution. $\emptyset$ denotes a non-existent setting, e.g., AWR's corresponding domain has only three other datasets. The last two rows show the percentage of total experiments where SR$_a$ outperforms the baseline methods, and where SR$_a$ improves over $SR$ performance, respectively.} \label{tab:auroc} \resizebox{\linewidth}{!}{% \begin{tabular}{|cc|ccccccc||cccccc|} \cline{3-15} \multicolumn{2}{c|}{ } & \multicolumn{7}{c||}{\textbf{In-domain OOD}} & \multicolumn{6}{c|}{\textbf{Cross-domain OOD}} \\ \cline{3-15} \multicolumn{2}{c|}{ } & DS1 & DS2 & DS3 & DS4 & DS5 & DS6 & DS7 & D1 & D2 & D3 & D4 & D5 & D6 \\ \hline \multirow{3}{*}{\shortstack{AWR\\(Motion)}} & LL & 0.80 & 0.85 & 0.54 & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & 0.81 & 0.80 & 0.57 & 0.59 & 0.81 \\ & HL & \emph{0.50} & 0.96 & 0.94 & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & \emph{0.50} & 0.75 & 0.98 & \emph{0.50} & 0.56 \\ & LR & 0.90 & 0.95 & 0.66 & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & 0.61 & 0.84 & 0.56 & 0.72 & 0.61 \\ & SR & 0.90 & 0.97 & 0.95 & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & 1.00 & 0.97 & 1.00 & 1.00 & 1.00 \\ & SR$_a$ & 0.90 & \textbf{0.97}& \textbf{0.95}& $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & \textbf{1.00}& \textbf{1.00}& \textbf{1.00}& \textbf{1.00} & \textbf{1.00} \\ \hline \multirow{3}{*}{\shortstack{SWJ\\(ECG)}} & LL & 0.55 & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & 0.61 & $\emptyset$ & 0.51 & 1.00 & 0.77 & 0.52 \\ & HL & \emph{0.50} & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & \emph{0.50} & $\emptyset$ & \emph{0.50} & \emph{0.50} & \emph{0.50} & \emph{0.50} \\ & LR
& \textbf{0.97} & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & 0.64 & $\emptyset$ & \emph{0.50} & 1.00 & 0.67 & \textbf{0.99} \\ & SR & 0.65& $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & 0.70 & $\emptyset$ & 0.61 & 1.00 & 0.96 & 0.61 \\ & SR$_a$ & 0.67 & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & \textbf{0.70} & $\emptyset$ & \textbf{0.61} & 1.00 & \textbf{0.96} & 0.61 \\ \hline \multirow{3}{*}{\shortstack{Ckt\\(HAR)}} & LL & 0.89 & 0.85 & 0.84 & 0.81 & 0.82 & 0.91 & 0.79 & 0.79 & 0.82 & $\emptyset$ & 0.95 & 0.90 & 0.81 \\ & HL & 0.97 & \emph{0.50} & 0.99 & \emph{0.50} & \emph{0.50} & 0.51 & \emph{0.50} & \emph{0.50} & \emph{0.50} & $\emptyset$ & 0.52 & 0.94 & \emph{0.50} \\ & LR & 0.81 & 0.75 & 0.74 & 0.71 & 0.74 & 1.00 & 0.78 & 0.77 & 0.72 & $\emptyset$ & 0.95 & 0.88 & 0.71 \\ & SR & 0.99 & 0.98 & 0.99 & 0.99 & 0.99 & 0.98 & 0.98 & 0.98 & 0.99 & $\emptyset$ & 0.99 & 0.98 & 0.98 \\ & SR$_a$ & \textbf{0.99} & \textbf{0.99} & \textbf{1.00} & \textbf{1.00} & \textbf{1.00} & 1.00& \textbf{0.98} & \textbf{0.98} & \textbf{0.99} & $\emptyset$ & \textbf{1.00} & \textbf{1.00} & \textbf{1.00} \\ \hline \multirow{3}{*}{\shortstack{HMD\\(EEG)}} & LL & 0.88 & 0.88 & 0.89 & 0.89 & $\emptyset$ & $\emptyset$ & $\emptyset$ & 0.89 & 0.80 & 0.87 & $\emptyset$ & 0.90 & 0.91 \\ & HL & \textbf{0.93} & 0.57 & 0.78 & 0.87 & $\emptyset$ & $\emptyset$ & $\emptyset$ & \textbf{0.97} & \emph{0.50} & \textbf{0.98} & $\emptyset$ & 0.58 & 0.66 \\ & LR & 0.68 & 0.68 & 0.68 & 0.68 & $\emptyset$ & $\emptyset$ & $\emptyset$ & 0.68 & 0.68 & 0.68 & $\emptyset$ & 0.80 & 0.68 \\ & SR & 0.75 & 0.75 & 0.75 & 0.75 & $\emptyset$ & $\emptyset$ & $\emptyset$ & 0.75 & 0.75 & 0.75 & $\emptyset$ & 0.83 & 0.75 \\ & SR$_a$ & 0.90& \textbf{0.90} & \textbf{0.90} & \textbf{0.90} & $\emptyset$ & $\emptyset$ & $\emptyset$ & 0.75& \textbf{0.84} & 0.89& $\emptyset$ & \textbf{0.97} & 0.91 \\ \hline
\multirow{3}{*}{\shortstack{Hbt\\(Audio)}} & LL & 1.00 & 1.00 & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & 0.90 & 0.95 & 0.94 & 0.85 & $\emptyset$ & 0.98 \\ & HL & \emph{0.50} & \emph{0.50} & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & 0.96 & \emph{0.50} & 0.78 & 0.62 & $\emptyset$ & 0.82 \\ & LR & 1.00 & 1.00 & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & 0.93 & 0.94 & 0.94 & 0.75 & $\emptyset$ & 0.98 \\ & SR & 1.00 & 1.00 & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & 0.96 & 0.97 & 0.94 & 1.00 & $\emptyset$ & 1.00 \\ & SR$_a$ & 1.00 & 1.00 & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & 0.96& \textbf{0.97}& 0.94 & \textbf{1.00}& $\emptyset$ & \textbf{1.00} \\ \hline \multirow{3}{*}{\shortstack{ERg\\(Other)}} & LL & 0.83 & 0.77 & 0.75 & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & 0.88 & 0.86 & 0.82 & 0.77 & 0.76 & $\emptyset$\\ & HL & \emph{0.50} & \emph{0.50} & \emph{0.50} & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & \emph{0.50} & \emph{0.50} & \emph{0.50} & \emph{0.50} & \emph{0.50} & $\emptyset$\\ & LR & 0.83 & 0.72 & 0.78 & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & 0.88 & 0.78 & 0.81 & 0.71 & 0.78 & $\emptyset$\\ & SR & 1.00 & 0.98 & 0.99 & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & 0.89 & 0.99 & 0.94 & 0.99 & 0.99 & $\emptyset$\\ & SR$_a$ & \textbf{ 1.00}& \textbf{1.00}& \textbf{1.00}& $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & \textbf{0.95}& \textbf{1.00}& \textbf{0.95}& \textbf{1.00}& \textbf{1.00}& $\emptyset$ \\ \hline \multicolumn{2}{|c|}{Absolute } & \multicolumn{2}{c}{LL 00.0\%} & \multicolumn{2}{c}{HL 05.0\%} & \multicolumn{3}{c||}{LR 05.0\%} & \multicolumn{2}{c}{LL 0.00\%} & \multicolumn{2}{c}{HL 06.7\%} & \multicolumn{2}{c|}{LR 03.30\%} \\ \multicolumn{2}{|c|}{ Wins (\%)} & \multicolumn{3}{c}{Ties 20.0\%} & \multicolumn{4}{c||}{\textbf{SR$_a$ 70.0\%}} & 
\multicolumn{3}{c}{Ties 16.7\%} & \multicolumn{3}{c|}{\textbf{SR$_a$ 73.3\%}} \\ \hline \multicolumn{2}{|c|}{$SR_a$ Improvement (\%)} & \multicolumn{7}{c||}{55.0\%} & \multicolumn{6}{c|}{40.0\%} \\ \hline \end{tabular} } \end{table*} \vspace{1.0ex} \noindent {\bf SR score vs. Baselines.} Table \ref{tab:auroc} shows the OOD results for SRS and the baseline methods. For a fair comparison, we use the same architecture for the VAEs computing the LL, LR, and SR scores. We make the following observations. {\bf 1)} The naive LL method fails to outperform any other approach, which demonstrates that DGMs are not reliable on their own, as they produce high likelihood for OOD samples. {\bf 2)} The time-series anomaly detection method HL fails drastically in various OOD settings, as reflected by the poor AUROC score of 0.5. This demonstrates that AD methods are not appropriate for OOD detection in the multi-class setting. {\bf 3)} The SR score outperforms the LR score in identifying OOD examples in 80\% of the total experiments, which means the improvement is due to a better scoring function. {\bf 4)} For the in-domain OOD setting, the AUROC score of LR is always lower than that of SRS. {\bf 5)} For the cross-domain setting, SRS outperforms LR in all cases except one experiment on a single dataset (SWJ). {\bf 6)} LR and SRS have the same performance in 20\% of the total experiments. Therefore, we conclude that SRS is better than LR in terms of both OOD performance and execution time (LR requires new training for every single testing input, unlike SR). \vspace{1.0ex} \noindent \textbf{Alignment improves the accuracy of SR score.} Our hypothesis is that the extraction of an accurate semantic component using STL results in improved OOD detection accuracy. To test this hypothesis, we compare SR and SR$_a$ (SR with aligned time-series data). Table \ref{tab:auroc} shows the AUROC scores of SR and SR$_a$. SR$_a$ improves the performance of SR in around 50\% of the overall experiments.
For example, on the HMD dataset, we observe that SR$_a$ enhances the performance of SR by an average of 15\% under the in-domain OOD setting. These results strongly corroborate our hypothesis that alignment improves OOD performance. \vspace{1.0ex} \noindent \textbf{SR performance using F1-score.} In addition to the AUROC score, we employ the F1 score to assess the effectiveness of the SR score in detecting OOD examples. Table \ref{tab:f1sc} provides the results comparing the SR score and the LR score. As with the AUROC evaluation, we make similar observations for the F1 score. {\bf 1)} The SR score outperforms the LR score in identifying OOD examples in 60\% of the total experiments, which means the improvement is due to the better scoring function. {\bf 2)} For the in-domain OOD setting, the F1 score of LR is mostly lower than that of SR. {\bf 3)} For the cross-domain setting, SR outperforms LR in 66\% of the cases. Hence, we conclude that SR is better than LR in terms of OOD performance measured by the F1 metric. \begin{table*}[!h] \setlength\extrarowheight{2pt} \caption{F1 metric results of LR and $SR_a$ on the different datasets for both the In-Domain and Cross-Domain settings.
The last three rows show the percentage of experiments in which the $LR$ score wins, the scores tie, or the $SR_a$ score wins.} \label{tab:f1sc} \resizebox{\linewidth}{!}{% \begin{tabular}{|cc|ccccccc||cccccc|} \cline{3-15} \multicolumn{2}{c|}{ } & \multicolumn{7}{c||}{\textbf{In-Domain OOD}} & \multicolumn{6}{c|}{\textbf{Cross-Domain OOD}} \\ \cline{3-15} \multicolumn{2}{c|}{ } & DS1 & DS2 & DS3 & DS4 & DS5 & DS6 & DS7 & D1 & D2 & D3 & D4 & D5 & D6 \\ \hline \multirow{2}{*}{\shortstack{AWR\\(Motion)}} & LR & 0.58 & \textbf{0.99} & 0.80 & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & 0.53 & 0.79 & 0.61 & 0.44 & 0.67 \\ & SR$_a$ & \textbf{0.69} & 0.89 & \textbf{0.97} & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & \textbf{0.97} & \textbf{0.85} & \textbf{1.00} & \textbf{0.99} & \textbf{1.00} \\ \hline \multirow{2}{*}{\shortstack{SWJ\\(ECG)}} & LR & 0.69 & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & 0.98 & $\emptyset$ & 0.82 & 1.00 & 0.97 & 0.97 \\ & SR$_a$ & 0.69 & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & 0.98 & $\emptyset$ & \textbf{0.86} & 1.00 & 0.97 & \textbf{0.98} \\ \hline \multirow{2}{*}{\shortstack{Ckt\\(HAR)}} & LR & 0.70 & 0.90 & 0.97 & 0.92 & 0.93 & 0.91 & 0.95 & 0.96 & 0.48 & $\emptyset$ & \textbf{1.00} & 0.94 & 0.93 \\ & SR$_a$ & \textbf{0.98} & \textbf{0.96} & \textbf{0.99} & \textbf{0.98} & \textbf{0.99} & \textbf{0.99} & \textbf{0.98} & \textbf{0.98} & \textbf{0.94} & $\emptyset$ & 0.99 & \textbf{0.97} & \textbf{1.00} \\ \hline \multirow{2}{*}{\shortstack{HMD\\(HMD)}} & LR & \textbf{0.82} & \textbf{0.81} & 0.84 & 0.81 & $\emptyset$ & $\emptyset$ & $\emptyset$ & 0.84 & 0.45 & 0.68 & $\emptyset$ & 0.80 & 0.82 \\ & SR$_a$ & 0.80 & 0.75 & 0.84 & 0.81 & $\emptyset$ & $\emptyset$ & $\emptyset$ & \textbf{0.94} & \textbf{0.65} & 0.68 & $\emptyset$ & \textbf{0.90} & \textbf{0.92} \\ \hline \multirow{2}{*}{\shortstack{Hbt\\(Audio)}} & LR & 1.00 & 1.00 &
$\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & 0.93 & 0.88 & 0.94 & 0.75 & $\emptyset$ & 0.98 \\ & SR$_a$ & 1.00 & 1.00 & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & 0.93 & 0.88 & 0.94 & \textbf{1.00} & $\emptyset$ & 0.98 \\ \hline \multirow{2}{*}{\shortstack{ERg\\(Other)}} & LR & 0.44 & 0.85 & 0.76 & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & \textbf{0.88} & 0.90 & 0.79 & 0.83 & 0.88 & $\emptyset$ \\ & SR$_a$ & \textbf{0.99} & \textbf{0.99} & \textbf{0.95} & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & 0.87 & \textbf{0.94} & \textbf{0.88} & \textbf{0.98} & \textbf{1.00} & $\emptyset$ \\ \hline \multirow{3}{*}{Wins (\%)} & LR & \multicolumn{7}{c||}{15.0\%} & \multicolumn{6}{c|}{6.7\%} \\ & Ties & \multicolumn{7}{c||}{25.0\%} & \multicolumn{6}{c|}{26.7\%} \\ & SR$_a$ & \multicolumn{7}{c||}{60.0\%} & \multicolumn{6}{c|}{66.6\%} \\ \hline \end{tabular} } \end{table*} \vspace{1.0ex} \noindent \textbf{AUROC performance of SR scoring on the full multivariate UCR dataset.} For the sake of completeness, Table \ref{tab:auroc2} provides additional results for the performance of SR on all the UCR multivariate datasets in terms of the AUROC score. These results demonstrate that the proposed SR scoring approach is general and highly effective for all time-series datasets. \input{bigtable.tex} \section{Summary and Future Work} We introduced a novel seasonal ratio (SR) score to detect out-of-distribution (OOD) examples in the time-series domain. SR scoring relies on Seasonal and Trend decomposition using Loess (STL) to extract class-wise semantic patterns and remainders from time-series signals, and on estimating class-wise conditional likelihoods for both input time-series and remainders using deep generative models. The SR score of a given time-series signal, together with the threshold interval estimated from the in-distribution data, enables OOD detection.
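The pipeline just described — decompose each series, score the input and its remainder under class-conditional likelihood models, and take their ratio — can be sketched minimally as follows. This is an illustrative stand-in, not the paper's implementation: a per-phase-mean decomposition replaces STL, fixed Gaussians replace the trained deep generative models, and all function names and parameters are hypothetical.

```python
import numpy as np

def decompose(x, period):
    """Crude seasonal decomposition standing in for STL: the seasonal
    component is the per-phase mean; the remainder is what is left
    (the trend is absorbed into the seasonal term for simplicity)."""
    phases = np.arange(len(x)) % period
    per_phase_mean = np.array([x[phases == p].mean() for p in range(period)])
    seasonal = per_phase_mean[phases]
    return seasonal, x - seasonal

def gauss_loglik(x, mean, std):
    # Log-likelihood of x under an i.i.d. Gaussian model (stand-in for a DGM).
    return float(np.sum(-0.5 * np.log(2 * np.pi * std ** 2)
                        - (x - mean) ** 2 / (2 * std ** 2)))

def seasonal_ratio(x, period, p_x, p_r):
    """SR score in log space: log-likelihood of the input minus that of its
    remainder. p_x and p_r are placeholder (mean, std) parameters that would
    come from class-conditional models fit offline on in-distribution data."""
    _, remainder = decompose(x, period)
    return gauss_loglik(x, *p_x) - gauss_loglik(remainder, *p_r)
```

A test input whose score falls outside the threshold interval estimated from in-distribution SR scores would then be flagged as OOD.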
Our strong experimental results demonstrate the effectiveness of the SR score and the alignment method in detecting time-series OOD examples compared to prior methods. Immediate future work includes applying seasonal-ratio-based OOD detection to the generation of synthetic time-series data in small-data settings. \section{Acknowledgments} The authors would like to thank Alan Fern for useful discussions regarding the key assumption behind the seasonal ratio scoring approach. \clearpage \bibliographystyle{apalike}
\section{Introduction} After the upgrade of the Large Hadron Collider (LHC) at CERN in 2026~\cite{apollinari2017high}, the accelerator aims to deliver \num{5} - \num{7} times the nominal lu\-mi\-no\-sity of \SI{1e34}{\per\square\centi\meter\per\second}. This results in new challenges in terms of data rate capabilities and radiation tolerance, especially for detector layers close to the interaction point of the colliding proton beams. Therefore, the ATLAS and CMS experiments will replace their currently installed tracking detectors with detectors featuring larger areas of silicon and decreased pixel pitch \cite{CERN-LHCC-2017-021, CERN-LHCC-2017-009}. In particular, the ATLAS Inner Detector \cite{Aad_2008} is going to be replaced by an all-silicon tracking detector, the ATLAS Inner Tracker (ITk) \cite{CERN-LHCC-2017-021}. The expected \SI{1}{\mega\electronvolt} neutron equivalent fluence\footnote{after an integrated luminosity of \SI{2000}{\per\femto\barn}} for the innermost layer is approximately \SI{1e16}{\neq\per\square\centi\meter}, for the outer layers fluences\footnote{after an integrated luminosity of \SI{4000}{\per\femto\barn}} from \SI{2e15}{\neq\per\square\centi\meter} up to \SI{5e15}{\neq\per\square\centi\meter} are expected \cite{CERN-LHCC-2017-021}. Since it was demonstrated in the past that hybrid pixel detectors \cite{rossi2006pixel} can be successfully operated in such harsh radiation environments at the LHC, it is foreseen to employ this technology also for the upgraded pixel detector. However, the surface of the new ATLAS pixel detector is increased from approximately \SI{2}{\square\meter} to \SI{13}{\square\meter} \cite{CERN-LHCC-2017-021}, demanding large-area solutions with cost-effective designs. 
An approach employing monolithic (active) CMOS pixel detectors \cite{PERIC2007876,WERMES2016483}, which combine the sensing and electronic processing functions and thus avoid the time-consuming and expensive hybridisation process, has been investigated \cite{Wang_2018, Caicedo_2019}. This CMOS pixel development has also made attractive the option of utilising commercial CMOS process lines for the fabrication of planar sensors as the sensing part of hybrid pixel detectors. CMOS fabrication offers a high production throughput at comparatively low cost. Further benefits arise from the fact that several features are available to enhance the sensor design. For instance, polysilicon layers can be used to connect each pixel to a bias grid, making it possible to test the functionality of the sensor at wafer level. Also, MIM (metal-insulator-metal) capacitors can be used to AC-couple the sensor pixels to the readout chip pixels, preventing leakage current from flowing into the readout channels. Moreover, several metal layers are available that can be exploited as re-distribution layers such that enlarged pixels in the inter-chip gap (between two readout chips) can be avoided. These sensors are called \textit{passive} CMOS sensors since they do not have any active components implemented. Passive CMOS sensors in a large pixel pitch design ($\num{50}\times\SI{250}{\square\micro\meter}$ pixels) have already been investigated \cite{pohl_cmos}. The characterisation of the small pixel pitch design ($\num{50}\times\SI{50}{\square\micro\meter}$ pixels) before irradiation can be found in \cite{dieter2020}. In the following, the performance of irradiated passive CMOS sensors is studied to demonstrate their radiation tolerance and suitability for the upgrades of the LHC experiments.
\section{LFoundry passive CMOS pixel sensor} \subsection{Pixel sensor design} \begin{figure}[t] \begin{center} \includegraphics[width=0.65\linewidth]{LF_passive_pixel.pdf} \caption{Simplified schematic cross section of an n-in-p pixel of the LFoundry passive CMOS pixel sensor. The charge collection electrode is an n-well (optionally with an additional deep n-well) with varying implant size between \SI{15}{\micro\meter} and \SI{30}{\micro\meter}. The polysilicon layer is omitted. For details see Fig.~\ref{fig:lfcmos_layout}. Small p-implantations (p-stop) isolate the pixels from each other.} \label{fig:lfcmos_pixel} \end{center} \end{figure} Passive CMOS n-in-p pixel sensors in \SI{150}{\nano\meter} LFoundry \cite{lfoundry} technology were manufactured on high-resistivity Czochralski wafers. The resistivity of the substrate is at least \SI{2}{\kilo\ohm\centi\meter}, as specified by the foundry. Measurements suggest that the resistivity is between \num{5} and \SI{6}{\kilo\ohm\centi\meter} \cite{pohl_cmos}. The substrate was thinned to a thickness of \SI{100}{\micro\meter}. The backside was processed by Ion Beam Services (IBS) \cite{IBS}, including backside implantation as well as a backside me\-ta\-llisation, allowing for backside bias application. The sensor consists of $\num{64} \times \num{64}$ pixels with a size of $\num{50} \times \SI{50}{\square\micro\meter}$, and has a total area of $\num{3.8} \times \SI{3.8}{\square\milli\meter}$. Fig.~\ref{fig:lfcmos_pixel} shows a simplified schematic cross section of one pixel, and Fig.~\ref{fig:lfcmos_layout} depicts the layout of the pixel matrix. In order to investigate the charge collection properties and the pixel capacitance, various pixel designs were implemented. The left half of the pixel matrix consists of pixels with a regular n-implantation (n-well, denoted as NW), whereas the right half of the pixel matrix consists of pixels with an additional deep n-implantation (deep n-well, denoted as DNW).
The size of the n-implantations varies in both dimensions between \SI{30}{\micro\meter} (top of the matrix) and \SI{15}{\micro\meter} (bottom of the matrix). To isolate the pixels from each other a small p-implantation (p-stop) is used. Moreover, a fine-pitch polysilicon layer encloses the n-implantations with the intention to increase the breakdown voltage, especially after irradiation. The pixel matrix is surrounded by an n-implantation confining the active pixel region. In addition, six guard-rings isolate the pixels from the high voltage at the sensor edge. The sensor is bump-bonded via solder bumps (by Fraunhofer IZM \cite{izm}) to the RD53A readout chip \cite{Garcia-Sciveres:2287593, Monteil:2019niy}, a prototype readout chip for the ATLAS ITk pixel detector. \begin{figure}[t] \begin{center} \includegraphics[width=0.78\linewidth]{LFCMOS_layout_summary.pdf} \caption{Left: Layout of the LFoundry passive CMOS pixel sensor. The left half of the pixel matrix consists of pixels with a n-implantation (NW), whereas the right half consists of pixels with an additional deep n-implantation (DNW). The size of the n-implantations varies in both dimensions between \SI{30}{\micro\meter} (top of the matrix) and \SI{15}{\micro\meter} (bottom of the matrix). Guard-rings isolate the pixels from the high voltage at the sensor edge. Right: Enlarged view of the different pixel designs.} \label{fig:lfcmos_layout} \end{center} \end{figure} \subsection{Pixel sensor irradiation} The studied pixel detector has been step-wise irradiated to the target fluence to investigate the performance at different irradiation levels. In the first step, the detector was irradiated to a fluence of \SI{5e15}{\neq\per\square\centi\meter} at the MC40 cyclotron of the University of Birmingham \cite{Allport_2017} using \SI{27}{\mega\electronvolt} protons. 
In the second step, the detector was irradiated to a total fluence of \SI{1e16}{\neq\per\square\centi\meter} at the Proton Irradiation Site at the Bonn Isochronous Cyclotron \cite{wolf_ma} using \SI{14}{\mega\electronvolt} protons. The irradiations were performed uniformly in a cold environment and the device was left unpowered during irradiation. After each irradiation step the device was annealed for \SI{80}{\minute} at \SI{60}{\celsius}. The co\-rres\-ponding total ionising doses\footnote{Important for surface damage affecting the readout chip.} created by protons were estimated to be approximately \SI{660}{\mega\radian} (Birmingham) and \SI{580}{\mega\radian} (Bonn). \section{Leakage current measurements} \begin{figure}[t] \begin{center} \includegraphics[width=0.7\linewidth]{IV_curves_official_v9.pdf} \caption{Leakage current as a function of the (reverse) applied bias voltage before (solid line) and after irradiation (dashed lines) to fluences of \SI{5e15}{\neq\per\square\centi\meter} and \SI{1e16}{\neq\per\square\centi\meter}. The grey dashed line indicates the maximum allowed leakage current of \SI{35}{\micro\ampere\per\square\centi\meter} after a fluence of \SI{5e15}{\neq\per\square\centi\meter} according to the ATLAS ITk specifications. The leakage current is normalised to the total area of the sensor.} \label{fig:iv_curve} \end{center} \end{figure} To test the functionality of the sensors and to determine their maximum operational voltage, the leakage current is measured as a function of the (reverse) bias voltage (IV-curve). Fig.~\ref{fig:iv_curve} shows the IV-curves before and after irradiation of the tested sensor after bump-bonding. The IV-curve before irradiation was measured at room temperature, whereas the IV-curves after irradiation were measured at an environmental temperature of \SI{-25}{\celsius}. Before irradiation the maximum operational voltage is approximately \SI{220}{\volt}.
After irradiation the sensor is still functional and no breakdown is visible up to the maximum tested operational voltage of \SI{350}{\volt}. Furthermore, the leakage current after a fluence of \SI{5e15}{\neq\per\square\centi\meter} is approximately \SI{23}{\micro\ampere\per\square\centi\meter} and meets the requirements for ATLAS ($<\SI{35}{\micro\ampere\per\square\centi\meter}$). As expected, the leakage current at a fluence of \SI{1e16}{\neq\per\square\centi\meter} is approximately twice as large as for a fluence of \SI{5e15}{\neq\per\square\centi\meter}. The power dissipation of the sensor needed for a hit-detection efficiency larger than \SI{97}{\percent} (see Sec.~\ref{sec:eff}) is about \SI{7}{\milli\watt\per\square\centi\meter} (\SI{35}{\micro\ampere\per\square\centi\meter} at \SI{200}{\volt}) after a fluence of \SI{1e16}{\neq\per\square\centi\meter}. This is comparable to the power dissipation reported for 3D sensors \cite{Terzo:2744099}. \section{Electronic noise} An important parameter to quantify the performance of a sensor is the equivalent noise charge (ENC). The ENC distributions of the investigated pixel detector at different irradiation steps can be seen in Fig.~\ref{fig:noise_map}. Before irradiation, an ENC of \SI{73}{\electrons} is measured. This is a value comparable to other planar sensor designs read out with the same amplifier chip. After irradiation, the ENC increases to about \SI{100}{\electrons}. The reason for that is most likely an increase in shot noise due to the higher leakage current after irradiation (approximately \SI{90}{\micro\ampere}, corresponding to \SI{22}{\nano\ampere} per pixel\footnote{Measured at a temperature of \SI{-17}{\celsius}.}). In addition, the performance of the analogue front-end is degraded by irradiation (i.e. the transconductance $g_m$ decreases) and is likely responsible for an unspecifiable additional noise contribution.
Further, it cannot be excluded that the detector capacitance increases after irradiation, which would also lead to an increase in noise. Comparing different pixel designs, no significant difference in noise is observed before irradiation, although the measured pixel capacitances\footnote{Including contributions due to routing and bump-bonds.} are different for the various pixel geometries: \SI[separate-uncertainty=true]{33.5(2)}{\femto\farad} for NW30 pixels and \SI[separate-uncertainty=true]{22.4(2)}{\femto\farad} for NW15 pixels \cite{Kr_ger_2021}. At a fluence of \SI{1e16}{\neq\per\square\centi\meter}, the noise of NW30 pixels is approximately \SI{8}{\percent} higher than that of NW15 pixels. Since this difference is small, the benefits in terms of noise reduction for pixels featuring small implants do not outweigh the disadvantages that arise in terms of hit-detection efficiency (see Sec.~\ref{sec:eff}). \begin{figure}[t] \begin{center} \includegraphics[width=0.7\linewidth]{Noise.pdf} \caption{Equivalent noise charge distributions at different irradiation levels. The noise distributions for different pixel geometries are shown, with the NW30 and NW15 pixels as examples. The distributions are fitted with a Gaussian function to extract the mean noise. The uncertainties of the estimated means are approximately \SI{2}{\electrons}.} \label{fig:noise_map} \end{center} \end{figure} \section{Hit-detection efficiency measurement}\label{sec:eff} A crucial detector property is the hit-detection efficiency, i.e. the probability with which a hit (a particle traversing a pixel) is recognised by the detector. For the application as a tracking detector the hit-detection efficiency has to be high for efficient hit finding and track reconstruction. Especially after irradiation, the hit-detection efficiency is of interest, and for ATLAS ITk it is required to be above \SI{97}{\percent} \cite{CERN-LHCC-2017-021}.
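For reference, the hit-detection efficiency quoted throughout is the standard binomial estimator (this definition is not spelled out in the text; it is the conventional one):

```latex
\varepsilon = \frac{N_\mathrm{matched}}{N_\mathrm{tracks}},
\qquad
\sigma_\varepsilon \approx \sqrt{\frac{\varepsilon\,(1-\varepsilon)}{N_\mathrm{tracks}}},
```

where $N_\mathrm{matched}$ counts reconstructed tracks with an associated hit in the device under test and $N_\mathrm{tracks}$ all tracks traversing it. The binomial uncertainty $\sigma_\varepsilon$ is what "purely statistical" error bars on efficiency points conventionally refer to.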
To measure the hit-detection efficiency, the device under test (DUT) is placed in a beam-telescope setup consisting of six high-resolution Mimosa26 planes (EUDET-type beam telescope) \cite{Jansen2016} and an ATLAS FE-I4 \cite{GARCIASCIVERES2011S155} re\-fe\-rence plane. The Mimosa26 planes provide a high spatial resolution of approximately \SI{3}{\micro\meter} \cite{Jansen2016}, allowing a precise track reconstruction. However, the time resolution of the Mimosa26 sensors is limited due to their rolling shutter readout (duration of \SI{115.2}{\micro\second}). In contrast, the FE-I4 reference plane provides a very good time-stamping capability with a precision better than \SI{25}{\nano\second}. The challenge during track reconstruction is to ensure a correct time assignment of the Mimosa26 tracks, which is needed for a proper hit-detection efficiency measurement. Therefore, the ATLAS FE-I4 plane is used as a time reference plane such that the tracking hits in the Mimosa26 planes spatially match the hit in the timing reference plane, which thus provides the time stamp for the track. The hit-detection efficiency of the investigated sensor was measured using a minimum ionising electron beam provided by the ELSA test beam facility~\cite{Heurich:2016ilc} (\SI{2.5}{\giga\electronvolt}) and the DESY II test beam facility \cite{Diener_2019} (\SI{5}{\giga\electronvolt}). A scin\-ti\-lla\-tor in front of the telescope setup generates a trigger signal when particles traverse the setup. An EUDET-type Trigger Logic Unit (TLU) \cite{tlu} is used to distribute and synchronise the trigger signals with the different readout systems. The Mimosa26 telescope is read out without being triggered in continuous rolling shutter readout mode using the \textit{pymosa} software~\cite{pymosa}. The DUT is read out triggered using the \textit{BDAQ53} software~\cite{Daas_2021} and the ATLAS FE-I4 plane is read out triggered using the \textit{PyBAR} software~\cite{pybar}.
The data is analysed using the \textit{beam telescope analysis} software~\cite{bta} including clustering, detector alignment, and track reconstruction as well as the final result analysis of the hit-detection efficiency and charge collection behaviour. For all following measurements, the detector was tuned to a threshold of approximately \SI{1000}{\electrons} with a noise occupancy of less than \num{e-6} per pixel. \begin{figure} \begin{center} \includegraphics[width=0.75\linewidth]{Residuals.pdf} \caption{(Unbiased) residual distribution in one dimension at the DUT. The data is shown on a logarithmic scale. The distribution is fitted with a Gaussian function. The grey dashed line illustrates the maximum distance between a hit and track intersection (with the DUT) at which a hit contributes to the efficiency.} \label{fig:residuals} \end{center} \end{figure} Fig.~\ref{fig:residuals} illustrates the (unbiased) residual distribution (distance between hit and track intersection) in the y-dimension at the DUT. The residuals are centred around zero which indicates a correct alignment of the detector planes. Due to multiple scattering the residual distribution is smeared out and can be approximated with a Gaussian function. The deviation from the Gaussian function towards the tails originates from the fact that the probability for large scattering angles is enhanced as described in Molière's theory~\cite{moliere}. From a fit a residual width (1-$\sigma$ width) of \SI{18.7}{\micro\meter} is extracted. This is in agreement with the expectation since the residual width for unbiased tracks is the quadratic sum of the intrinsic resolution of the DUT ($\frac{\mathrm{pixel\,\,pitch}}{\sqrt{12}}$) and the pointing resolution at the DUT (a few \si{\micro\meter}). The pointing resolution in this setup is slightly worsened by the additional material of the cooling infrastructure (cooling box for DUT and PCB cooling plate). 
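Spelling out the quadratic sum mentioned above (a standard relation, not given explicitly in the text), with the \SI{50}{\micro\meter} pixel pitch the intrinsic term alone gives

```latex
\sigma_\mathrm{res} = \sqrt{\sigma_\mathrm{DUT}^2 + \sigma_\mathrm{point}^2},
\qquad
\sigma_\mathrm{DUT} = \frac{\SI{50}{\micro\meter}}{\sqrt{12}} \approx \SI{14.4}{\micro\meter},
```

which is consistent with the measured \SI{18.7}{\micro\meter} once the pointing-resolution contribution is added in quadrature.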
However, the resolution is still sufficient for in-pixel studies. \begin{figure} \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=1.0\linewidth]{efficiency_irrad_vs_non_irrad_NW_30_25_larger_fontsize.pdf} \caption{Hit-detection efficiency of NW-flavors} \label{fig:eff_vs_bias_nw} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=1.0\linewidth]{efficiency_irrad_vs_non_irrad_DNW_30_25_larger_fontsize.pdf} \caption{Hit-detection efficiency of DNW-flavors} \label{fig:eff_vs_bias_dnw} \end{subfigure} \caption{Hit-detection efficiency as a function of bias voltage for different pixel flavors and different irradiation levels. The grey dashed line represents the requirement of an (in-time) efficiency larger than \SI{97}{\percent}. For the sake of clarity, not all pixel designs are shown. The quoted error bars are purely statistical. Left: NW-flavors. Right: DNW-flavors.} \label{fig:eff_vs_bias} \end{figure} In order to reject noise hits (spatially uncorrelated) which artificially increase the hit-detection efficiency, hits are only considered as efficient if the residual of the track is smaller than a given distance (of \SI{120}{\micro\meter}). This efficiency search radius is illustrated in Fig.~\ref{fig:residuals}. Fig.~\ref{fig:eff_vs_bias_nw} shows the hit-detection efficiency before and after irradiation as a function of the applied (reverse) bias voltage for the NW30 and NW25 pixel designs (regular n-implantation with a size of \num{30} and \SI{25}{\micro\meter}), Fig.~\ref{fig:eff_vs_bias_dnw} shows this for the DNW30 and DNW25 pixel designs (additional deep n-implantation with a size of \num{30} and \SI{25}{\micro\meter}). For the sake of clarity, other pixel designs are omitted here since they have a lower efficiency after irradiation. 
\begin{figure} \begin{center} \includegraphics[width=0.8\linewidth]{in_pixel_eff_1e16_summary.png} \caption{In-pixel efficiency map after a fluence of \SI{1e16}{\neq\per\square\centi\meter} for two different pixel flavors (left: NW30 and right: NW15) at a bias voltage of \SI{400}{\volt}.} \label{fig:in_pixel_eff} \end{center} \end{figure} Before irradiation, an efficiency larger than \SI{99.5}{\percent} is achieved already at a bias voltage of \SI{5}{\volt}, and no significant difference across the various pixel designs is visible. After irradiation, the efficiency increases with increasing bias voltage. The measured efficiency after a fluence of \SI{5e15}{\neq\per\square\centi\meter} and a bias voltage of \SI{350}{\volt} is \SI[separate-uncertainty=true]{99.89(1)}{\percent} for the NW30 flavor and \SI[separate-uncertainty=true]{99.88(1)}{\percent} for the DNW30 flavor. This is well above the requirement of \SI{97}{\percent} after irradiation (grey dashed line in Fig.~\ref{fig:eff_vs_bias}). After a fluence of \SI{1e16}{\neq\per\square\centi\meter} the efficiency decreases further (especially for low bias voltages). However, at \SI{400}{\volt} an efficiency of \SI[separate-uncertainty=true]{99.79(1)}{\percent} for the NW30 and \SI[separate-uncertainty=true]{99.78(1)}{\percent} for the DNW30 flavor can still be achieved. In particular, at the highest measured fluence, flavors with a smaller implant size show a slightly lower efficiency\footnote{This is also true for the omitted (D)NW20 and (D)NW15 flavors.}. Furthermore, at low bias voltages, the hit-detection efficiency of designs with a deep n-well (same implant size) is higher compared to designs with only the standard n-well geometry, especially for smaller implant sizes. Fig.~\ref{fig:in_pixel_eff} shows in-pixel efficiency maps (all data mapped onto a single pixel) at a fluence of \SI{1e16}{\neq\per\square\centi\meter} for two different pixel flavors (NW30 and NW15) at a bias voltage of \SI{400}{\volt}.
One can see that, for pixel designs with a smaller implant size, the efficiency is low at the pixel corners, which is due to the low electric field (and charge sharing) in this region. \section{Charge measurements and charge-collection efficiency} In addition to the hit-detection efficiency, the charge collection behaviour was studied during test beams. The readout chip already provides internal charge information, called ToT (time over threshold). However, the precision of this measurement needed for a charge calibration (using radioactive sources) or charge measurements during test beams is not sufficient. This problem is circumvented by using the so-called \textit{TDC method} \cite{pohl_phd}. This method makes use of the chip's HitOR signal (logical OR of all discriminator outputs), whose length is proportional to the collected charge. This signal is sampled with a \SI{640}{\mega\hertz} clock (corresponding to a time bin of about \SI{1.6}{\nano\second}) provided externally by the readout system, thus enabling a precise charge measurement. \begin{figure} \begin{center} \includegraphics[width=0.8\linewidth]{in_pixel_charge_1e16_summary.png} \caption{In-pixel charge map in electrons after a fluence of \SI{1e16}{\neq\per\square\centi\meter} for two different pixel flavors (left: NW30 and right: NW15) at a bias voltage of \SI{400}{\volt}.} \label{fig:in_pixel_charge} \end{center} \end{figure} In Sec.~\ref{sec:eff} it was shown that, after irradiation, the efficiency for pixel designs with smaller implant sizes is low in the pixel corners where the electric field is low. The same behaviour (low charge in pixel corners) is observed for the collected charge. The corresponding in-pixel charge maps (only events with cluster size of 1), after a fluence of \SI{1e16}{\neq\per\square\centi\meter}, can be seen in Fig.~\ref{fig:in_pixel_charge}.
This is in agreement with the expectations and explains the efficiency loss: a lower electric field after irradiation leads to more charge carrier trapping, and thus to a smaller collected charge, which in turn results in a lower efficiency for a given threshold. Fig.~\ref{fig:charge_spectra} depicts the charge distributions for the different irradiation levels, measured using the NW30 pixel design. The distributions follow a Landau function convoluted with a Gaussian function due to electronic noise. Furthermore, it is visible that the most probable value (MPV) extracted from a fit decreases with increasing fluence. The reasons for this are that a) after irradiation the sensor can no longer be fully depleted at reasonable bias voltages, and b) charge carriers are trapped during charge collection due to radiation damage of the bulk. \begin{figure} \begin{center} \includegraphics[width=0.7\linewidth]{Charge_irrad_vs_non_irrad_NW30.pdf} \caption{Measured charge distributions for different irradiation levels for the NW30 flavor. The distributions are fitted with a Landau-Gauss convolution to extract the most probable value (MPV). Before irradiation an MPV of \SI[separate-uncertainty=true]{6590(170)}{\electrons} is measured at \SI{80}{\volt}. At a fluence of \SI{5e15}{\neq\per\square\centi\meter} an MPV of \SI[separate-uncertainty=true]{5030(50)}{\electrons} is measured at \SI{350}{\volt}, and at a fluence of \SI{1e16}{\neq\per\square\centi\meter} an MPV of \SI[separate-uncertainty=true]{3670(50)}{\electrons} is measured at \SI{400}{\volt}.} \label{fig:charge_spectra} \end{center} \end{figure} The collected charge was also studied as a function of the bias voltage, as illustrated in Fig.~\ref{fig:charge_vs_bias}. Each data point corresponds to the most probable value extracted from a fit to the measured charge distribution. The charge signal increases with bias voltage since the depleted volume extends with increasing bias voltage.
Before irradiation, the amount of collected charge starts to saturate at approximately \SI{40}{\volt}, leading to a charge signal of about \SI{6600}{\electrons}. This indicates that the sensor is fully depleted at approximately \SI{40}{\volt}. Assuming that \num{73} electrons per \si{\micro\meter} (extracted from a GEANT4 simulation) are created within the depletion zone, this yields a silicon bulk thickness of approximately \SI{90}{\micro\meter}. This value is reasonable since the nominal thickness of \SI{100}{\micro\meter} also includes the metal layers with a thickness of a few \si{\micro\meter}. After irradiation, the amount of collected charge decreases due to the fact that the sensor can no longer be fully depleted and charge carrier trapping sets in. The measured charge signal after a fluence of \SI{1e16}{\neq\per\square\centi\meter} is approximately \SI{3700}{\electrons} at the highest measured bias voltage of \SI{400}{\volt}. \begin{figure} \begin{center} \includegraphics[width=0.7\linewidth]{Charge_irrad_vs_non_irrad_bias_voltage_NW30_NW25.pdf} \caption{Collected charge as a function of the bias voltage for different irradiation levels. Data points are the most probable values extracted from a fit to the measured charge distributions. The error bars originate from the fit. The y-axis on the right-hand side shows the charge collection efficiency (CCE).} \label{fig:charge_vs_bias} \end{center} \end{figure} The measured charge signal can be translated to a charge collection effi\-ciency (CCE), as shown in Fig.~\ref{fig:charge_vs_bias} (axis on the right-hand side). The charge collection efficiency is obtained by dividing the measured charge by the maximum measured charge before irradiation (\SI{6600}{\electrons}). This yields a maximum charge collection efficiency of approximately \SI{80}{\percent} at a fluence of \SI{5e15}{\neq\per\square\centi\meter} and \SI{55}{\percent} at a fluence of \SI{1e16}{\neq\per\square\centi\meter}, respectively.
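As a cross-check, both numbers follow from simple arithmetic on the values quoted above:

```latex
d \approx \frac{\SI{6600}{\electrons}}{\num{73}\;e^-/\si{\micro\meter}} \approx \SI{90}{\micro\meter},
\qquad
\mathrm{CCE} = \frac{\mathrm{MPV}_\mathrm{irr}}{\mathrm{MPV}_0}
\approx \frac{\SI{3670}{\electrons}}{\SI{6600}{\electrons}} \approx 0.56.
```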
\section{Conclusion and outlook} The radiation tolerance of $\SI{100}{\micro\meter}$ thin passive CMOS sensors fabricated in \SI{150}{\nano\meter} LFoundry technology has been investigated. The sensors remain functional even after a fluence of \SI{1e16}{\neq\per\square\centi\meter} and can be operated successfully. At this fluence a hit-detection efficiency of \SI[separate-uncertainty=true]{99.79(1)}{\percent} is measured (at \SI{400}{\volt}) for the NW30 design. The charge collection efficiency is measured to be approximately \SI{55}{\percent} at the maximum tested bias voltage of \SI{400}{\volt} after the highest fluence. In addition, the power dissipation of the sensor needed to meet the ATLAS ITk requirements in terms of efficiency is comparable to that of 3D sensors. This demonstrates that passive CMOS sensors are radiation tolerant and withstand a fluence of \SI{1e16}{\neq\per\square\centi\meter}, the fluence expected for the future innermost ATLAS pixel detector layer. The performance of passive CMOS sensors in terms of noise and hit-detection efficiency equals that of conventional planar sensors. Full-size (RD53-sized) passive CMOS sensors using the NW30 geometry for the ATLAS and CMS experiments at the HL-LHC have already been manufactured and are currently under investigation. \section*{Acknowledgements} We would like to thank LFoundry and Ion Beam Services (IBS) for the fabrication and the processing of the backside of the sensors. We also thank Laura Gonella for making the irradiation at the Birmingham Irradiation Facility possible. Further, we would like to thank the HISKP group for making the irradiation at the Proton Irradiation Site in Bonn possible. This project has received funding from the Deutsche Forschungsgemeinschaft DFG, under grant agreement no. WE 976/4-1, the German Ministerium f\"ur Bildung, Wissenschaft, Forschung und Technologie (BMBF) under contract no. 05H15PDCA9, the H2020 project AIDA-2020, under grant agreement no.
654168, and from a Marie Sklodowska-Curie ITN Fellowship of the European Union's Horizon 2020 program under grant agreement no. 675587-STREAM. The measurements leading to these results have been performed at the Test Beam Facility at DESY Hamburg (Germany), a member of the Helmholtz Association (HGF).
\section{Introduction} \label{sec1} In recent decades, the applications of fractional partial differential equations (FPDEs) have attracted considerable interest in numerous fields such as control systems \cite{machado2001discrete}, quantum mechanics \cite{laskin2000fractional}, stochastic dynamics \cite{gao2014mean} and image processing \cite{bai2007fractional}. Closed-form analytical solutions of FPDEs can be obtained only in a few special cases \cite{podlubny1998fractional}, and such solutions are usually impractical. It thus becomes imperative to study the numerical solutions of FPDEs, and numerous reliable numerical methods have been developed \cite{Gu2017fast, Gao2016Two, li2017galerkin, luo2016quadratic, Mao2016Efficient, hao2016finite, Liu2014A, cui2015compact, ccelik2012crank, lei2013circulant, zhao2014fourth, pang2012multigrid}. Due to the nonlocality of the fractional operators, using the finite difference method to solve space/time-space fractional differential equations leads to a time-stepping scheme with dense coefficient matrices. Conventional time-stepping schemes based on Gaussian elimination require a computational cost of $\mathcal{O}(N^3)$ and storage of $\mathcal{O}(N^2)$ at each time step, where $N$ is the spatial grid number. To reduce the computational complexity, numerous fast algorithms \cite{Gu2017fast, li2017galerkin, lei2013circulant, pang2012multigrid, Gu2014On, gu2015strang, wang2010direct, zhao2018delay} have been designed. From another point of view, if all time steps are stacked in a vector, we obtain an all-at-once system, i.e., a block lower triangular system. Ke et al. \cite{ke2015fast} combined the block forward substitution (BFS) method with the divide-and-conquer strategy to solve the block lower triangular Toeplitz-like with tri-diagonal blocks (BL3TB-like) system.
The complexity and storage requirement of their method are $\mathcal{O}(M N \log^2 M)$ and $\mathcal{O}(MN)$, respectively, where $M$ is the number of time steps. Lu et al. \cite{lu2015approximate} proposed a fast approximate inversion method, whose computational cost is $\mathcal{O}(M N \log M)$ and storage requirement is $\mathcal{O}(MN)$, for the block lower triangular Toeplitz with tri-diagonal blocks (BL3TB) matrix. The idea of this method is to approximate the coefficient matrix by a block $\epsilon$-circulant matrix, which can be block-diagonalized by the fast Fourier transform (FFT). Additionally, the error estimate given in \cite{lu2015approximate} shows that their method has high accuracy. Since the sufficient condition provided in \cite{lu2015approximate} is difficult to verify in practice, Lu et al. \cite{lu2018approximate} proposed a new sufficient condition, which is easier to check and can be applied to several existing numerical schemes. Huang et al. \cite{huang2017fast} combined the divide-and-conquer technique with the circulant-and-skew-circulant representation of Toeplitz matrix inversion to solve the nonsingular block lower triangular Toeplitz with dense Toeplitz blocks (BLDTB) system. Their method has a computational complexity of $\mathcal{O}\left( MN \log M \left( \log M + \log N \right) \right)$. In this work, we concentrate on the fast solution of the block lower triangular Toeplitz (BLTT) system arising from the time-space fractional diffusion equation (TSFDE): \begin{equation} \begin{cases} \sideset{_0^C}{^\alpha_t}{\mathop{\mathcal{D}}} u(x,t) = e_{1} \sideset{_0}{^\beta_x}{\mathop{\mathcal{D}}} u(x,t) + e_{2} \sideset{_x}{^\beta_L}{\mathop{\mathcal{D}}} u(x,t) + f(x,t), & 0 < t \leq T, ~0 \leq x \leq L, \\ u(x,0) = u_{0}(x), & 0 \leq x \leq L,\\ u(0,t) = u(L,t) = 0, & 0 \leq t \leq T, \end{cases} \label{eq1.1} \end{equation} where $e_1, e_2 > 0$.
The time and space fractional derivatives are introduced in the Caputo and Riemann-Liouville sense \cite{podlubny1998fractional}, respectively, i.e., \begin{equation*} \sideset{_0^C}{^\alpha_t}{\mathop{\mathcal{D}}} u(x,t) = \frac{1}{\Gamma(1 - \alpha)} \int_{0}^{t} (t - \eta)^{-\alpha} \frac{\partial u(x,\eta)}{\partial \eta} d\eta, ~0 < \alpha < 1, \end{equation*} \begin{equation*} \sideset{_0}{^\beta_x}{\mathop{\mathcal{D}}} u(x,t) = \frac{1}{\Gamma(2 - \beta)} \frac{d^{2}}{dx^{2}} \int_{0}^{x} \frac{u(\eta,t)}{(x - \eta)^{\beta - 1}} d \eta, ~1 < \beta < 2, \end{equation*} \begin{equation*} \sideset{_x}{^\beta_L}{\mathop{\mathcal{D}}} u(x,t) = \frac{1}{\Gamma(2 - \beta)} \frac{d^{2}}{dx^{2}} \int_{x}^{L} \frac{u(\eta,t)}{(\eta - x)^{\beta - 1}} d \eta, ~1 < \beta < 2, \end{equation*} where $\Gamma(\cdot)$ denotes the Gamma function. In this study, we adopt the preconditioned biconjugate gradient stabilized (PBiCGSTAB) method \cite{van1992bicgstab} and the flexible generalized minimal residual (FGMRES) method \footnote{The preconditioned sub-system is solved inexactly at each preconditioned iteration step, which matches the characteristics of the FGMRES method. Thus the FGMRES method is chosen in this study.} \cite{saad2003iter} to solve the BLTT system efficiently. The main contributions of this work are as follows: (i) a block bi-diagonal Toeplitz (B2T) preconditioner, whose storage requirement is $\mathcal{O}(N)$, is developed to solve the BLTT system; (ii) a new skew-circulant preconditioner is designed to efficiently apply the inverse of the B2T preconditioner to a vector. Furthermore, numerical experiments indicate that our skew-circulant preconditioner performs slightly better than Strang's circulant preconditioner \cite{ng2004iterative,chan2007introduction}. The rest of this paper is organized as follows.
In Section 2, the BLTT system is established through the $L2$-$1_\sigma$ \cite{Alikhanov2015A} and weighted and shifted Gr\"{u}nwald difference (WSGD) \cite{tian2015class} formulae. In Section 3, the B2T preconditioner and skew-circulant preconditioner are proposed and analyzed. In Section 4, numerical examples are provided to demonstrate the efficiency of the two proposed preconditioners. Some conclusions are drawn in Section 5. \section{Finite difference discretization and the BLTT system} \label{sec2} In this section, the finite difference method is employed to discretize (\ref{eq1.1}) in both time and space. Then the BLTT system is derived based on the obtained time-marching scheme. \subsection{The time-marching scheme} \label{sec2.1} First of all, the WSGD operator is used to approximate the left- and right- Riemann-Liouville derivatives \cite{tian2015class} (in this paper $(p,q) = (1,0)$). Let $h = \frac{L}{N}$ be the grid spacing for the positive integer $N$. Hence the space domain is covered by $\bar{\omega}_{h} = \{ x_i = i h | 0 \leq i \leq N \}$, and approximations of the left- and right- Riemann-Liouville derivatives can be expressed respectively as: \begin{equation} \sideset{_0}{^\beta_x}{\mathop{\mathcal{D}}} u(x,t)\mid_{x = x_i} \approx \frac{1}{h^{\beta}}\sum\limits_{k = 0}^{i +1} \omega_{k}^{(\beta)} u_{i - k + 1}, \qquad \sideset{_x}{^\beta_L}{\mathop{\mathcal{D}}} u(x,t)\mid_{x = x_i} \approx \frac{1}{h^{\beta}}\sum\limits_{k = 0}^{N - i +1} \omega_{k}^{(\beta)} u_{i + k - 1}, \label{eq2.1} \end{equation} where $u_{i}$ is the numerical approximation to $u(x_i, t)$, \begin{equation*} \omega_{0}^{(\beta)} = \frac{\beta}{2} g_{0}^{(\beta)}, \qquad \omega_{k}^{(\beta)} = \frac{\beta}{2} g_{k}^{(\beta)} + \frac{2 - \beta}{2} g_{k - 1}^{(\beta)}, ~ k \geq 1 \end{equation*} and \begin{equation*} g_{0}^{(\beta)} = 1, \qquad g_{k}^{(\beta)} = \left( 1 - \frac{\beta + 1}{k} \right) g_{k - 1}^{(\beta)}, ~ k = 1, 2, \cdots. \end{equation*} Substituting Eq. 
(\ref{eq2.1}) into Eq. (\ref{eq1.1}), the semi-discretized system of fractional ordinary differential equations is expressed as: \begin{equation} \begin{cases} h^\beta \sideset{_0^C}{^\alpha_t}{\mathop{\mathcal{D}}} \bm{u}(t) = K_{N}\bm{u}(t) + h^\beta \bm{f}(t), & 0 < t \leq T, \\ u(x,0) = u_{0}(x), & 0 \leq x \leq L,\\ \end{cases} \label{eq2.2} \end{equation} where $\bm{u}(t) = \left[ u_{1}, u_{2}, \cdots, u_{N - 1} \right]^{T}$, $\sideset{_0^C}{^\alpha_t}{\mathop{\mathcal{D}}} \bm{u}(t) = \left[ \sideset{_0^C}{^\alpha_t}{\mathop{\mathcal{D}}} u_1, \cdots, \sideset{_0^C}{^\alpha_t}{\mathop{\mathcal{D}}} u_{N - 1} \right]^{T}$, $\bm{f}(t) = \left[ f_{1}, f_{2}, \cdots, f_{N - 1} \right]^{T}$ with $f_i = f(x_i,t)$ ($1 \leq i \leq N - 1$), $K_{N} = e_1 G_\beta + e_2 G_\beta^T$, and the Toeplitz matrix $G_\beta$ is given by \begin{equation*} G_{\beta} = \begin{bmatrix} \omega_1^{(\beta)} & \omega_0^{(\beta)} & 0 & \cdots & 0 & 0 \\ \omega_2^{(\beta)} & \omega_1^{(\beta)} & \omega_0^{(\beta)} & 0 & \cdots & 0 \\ \vdots & \omega_2^{(\beta)} & \omega_1^{(\beta)} & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots & 0 \\ \omega_{N-2}^{(\beta)} & \ddots & \ddots & \ddots & \omega_1^{(\beta)} & \omega_0^{(\beta)} \\ \omega_{N-1}^{(\beta)} & \omega_{N-2}^{(\beta)} & \cdots & \cdots & \omega_2^{(\beta)} & \omega_1^{(\beta)} \end{bmatrix} \in \mathbb{R}^{(N - 1) \times (N - 1)}. \end{equation*} For a positive integer $M$, the temporal partition is defined as $\bar{\omega}_{\tau} = \{ t_j = j \tau, ~j = 0, 1, \cdots, M; ~t_M = T \}$ and let $u_i^j \approx u(x_i,t_j)$ be the approximate solution.
Utilizing the $L2$-$1_\sigma$ formula \cite{Alikhanov2015A}, the temporal fractional derivative $\sideset{_0^C}{^\alpha_t}{\mathop{\mathcal{D}}} u(x,t)$ can be discretized as: \begin{equation} \sideset{_0^C}{^\alpha_t}{\mathop{\mathcal{D}}} u(x,t)\mid_{(x, t) = (x_i, t_{j + \sigma})} = \sum\limits^{j}_{s = 0} c^{(\alpha,\sigma)}_{j - s} \left( u_i^{s + 1} - u_i^{s} \right) + \mathcal{O}(\tau^{3 - \alpha}), \label{eq2.3} \end{equation} in which $\sigma = 1 - \alpha/2$ and for $j = 0$, $c^{(\alpha,\sigma)}_{0} = \frac{\tau^{-\alpha}}{\Gamma(2 - \alpha)} a^{(\alpha,\sigma)}_{0}$, for $j \geq 1$, \begin{equation*} c^{(\alpha,\sigma)}_{s}= \frac{\tau^{-\alpha}}{\Gamma(2 - \alpha)} \cdot \begin{cases} a^{(\alpha,\sigma)}_{0} + b^{(\alpha,\sigma)}_{1}, & s = 0,\\ a^{(\alpha,\sigma)}_{s} + b^{(\alpha,\sigma)}_{s + 1} - b^{(\alpha,\sigma)}_{s}, & 1\leq s \leq j - 1,\\ a^{(\alpha,\sigma)}_{j} - b^{(\alpha,\sigma)}_{j}, & s = j \end{cases} \end{equation*} with \begin{equation*} a^{(\alpha,\sigma)}_{0} = \sigma^{1 - \alpha}, \qquad a^{(\alpha,\sigma)}_{l} = (l + \sigma)^{1 - \alpha} - (l - 1 + \sigma)^{1 - \alpha}~ ( l \geq 1), \end{equation*} \begin{equation*} b^{(\alpha,\sigma)}_{l} = \frac{1}{2 - \alpha} \left[(l + \sigma)^{2 - \alpha} - (l - 1 + \sigma)^{2 - \alpha} \right] - \frac{1}{2} \left[ (l + \sigma)^{1 - \alpha} - (l - 1 + \sigma)^{1 - \alpha} \right]~ ( l \geq 1). \end{equation*} Readers may refer to \cite{Alikhanov2015A} for a thorough discussion. Substituting Eq. (\ref{eq2.3}) into Eq.
(\ref{eq2.2}) and omitting the truncation error term, the discretized time-marching scheme is established as below \begin{equation} h^\beta \sum\limits^{j}_{s = 0} c^{(\alpha,\sigma)}_{j - s} \left( \bm{u}^{s + 1} - \bm{u}^{s} \right) = K_N \bm{u}^{j + \sigma} + h^\beta \bm{f}^{j + \sigma},~j = 0, 1, \cdots, M - 1 \label{eq2.4} \end{equation} with initial condition $u_i^0 = u_0(x_i)~(0 \leq i \leq N)$, where $\bm{u}^{j + \sigma} = \sigma \bm{u}^{j + 1} + (1 - \sigma) \bm{u}^j$, $\bm{u}^j = \left[ u_1^j, u_2^j, \cdots, u_{N - 1}^j \right]^T$, $\bm{f}^{j + \sigma} = \left[ f_1^{j + \sigma}, f_2^{j + \sigma}, \cdots, f_{N - 1}^{j + \sigma} \right]^T$ and $f_i^{j + \sigma} = f(x_i, t_{j + \sigma})~(1 \leq i \leq N - 1)$. Furthermore, the stability and second-order convergence of the time-marching scheme (\ref{eq2.4}) have been discussed in \cite{zhao2017fast}. \subsection{The block lower triangular Toeplitz system} \label{sec2.2} Before deriving the BLTT system, several auxiliary symbols are introduced: $\bm{0}$ and $I$ represent zero and identity matrices of suitable orders, respectively. $A_0 = h^\beta c_0^{(\alpha,\sigma)}I - \sigma K_N$, $\bm{y}_0 = B \bm{u}^0 + h^\beta \bm{f}^{\sigma}$, \begin{equation*} A = \frac{\tau^{-\alpha} h^\beta}{\Gamma(2 - \alpha)} a_0^{(\alpha,\sigma)} I - \sigma K_N, \quad B = \frac{\tau^{-\alpha} h^\beta}{\Gamma(2 - \alpha)} a_0^{(\alpha,\sigma)} I + (1 - \sigma) K_N, \end{equation*} \begin{equation*} A_1 = h^\beta \left( c_1^{(\alpha,\sigma)} - c_0^{(\alpha,\sigma)} \right) I - (1 - \sigma) K_N, \quad A_k = h^\beta \left( c_k^{(\alpha,\sigma)} - c_{k - 1}^{(\alpha,\sigma)} \right) I~(2 \leq k \leq M - 2). \end{equation*} To avoid misunderstanding, let $v^{(\alpha,\sigma)}_{j} = \frac{\tau^{-\alpha}}{\Gamma(2 - \alpha)} \left( a^{(\alpha,\sigma)}_{j} - b^{(\alpha,\sigma)}_{j} \right)$.
Then some other notations are given: \begin{equation*} \bm{y}_1 = -\left[ h^\beta \left( v^{(\alpha,\sigma)}_{1} - c^{(\alpha,\sigma)}_{0} \right) I - (1 - \sigma) K_N \right] \bm{u}^1 + h^\beta \left( v^{(\alpha,\sigma)}_{1} \bm{u}^0 + \bm{f}^{1 + \sigma} \right), \end{equation*} \begin{equation*} \bm{y}_k = -h^\beta \left( v^{(\alpha,\sigma)}_{k} - c^{(\alpha,\sigma)}_{k - 1} \right) \bm{u}^1 + h^\beta \left( v^{(\alpha,\sigma)}_{k} \bm{u}^0 + \bm{f}^{k + \sigma} \right)~(2 \leq k \leq M - 1). \end{equation*} With the help of Eq. (\ref{eq2.4}), the BLTT system can be written as: \begin{subnumcases}{} A \bm{u}^1 = \bm{y}_0, \label{eq2.5a}\\ W \bm{u} = \bm{y}, \label{eq2.5b} \end{subnumcases} where $\bm{y} = \left[ \bm{y}_1, \bm{y}_2, \cdots, \bm{y}_{M - 1} \right]^T$, \begin{equation*} \bm{u} = \begin{bmatrix} \bm{u}^2 \\ \bm{u}^3\\ \vdots \\ \bm{u}^{M} \end{bmatrix}, \quad W = \begin{bmatrix} A_0 & \bm{0} & \bm{0} & \cdots & \bm{0} \\ A_1 & A_0 & \bm{0} & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ A_{M - 3} & \ddots & \ddots & \ddots & \bm{0} \\ A_{M - 2} & A_{M - 3} & \cdots & \cdots & A_0 \end{bmatrix}. \end{equation*} If the Kronecker product ``$\otimes$'' is introduced, then Eq.
(2.5) is equivalent to \begin{equation*} \begin{cases} & A \bm{u}^1 = \bm{y}_0, \\ & \tilde{W} \bm{u} = \bm{y}, \end{cases} \end{equation*} in which $\tilde{W} = h^\beta \left( \tilde{A} \otimes I \right)- \tilde{B} \otimes K_N$ with \begin{equation*} \tilde{A} = \begin{bmatrix} c^{(\alpha,\sigma)}_{0} & 0 & 0 & \cdots & 0 & 0 \\ c_1^{(\alpha,\sigma)} - c_0^{(\alpha,\sigma)} & c^{(\alpha,\sigma)}_{0} & 0 & 0 & \cdots & 0 \\ \vdots & c_1^{(\alpha,\sigma)} - c_0^{(\alpha,\sigma)} & c^{(\alpha,\sigma)}_{0} & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots & 0 \\ c_{M - 3}^{(\alpha,\sigma)} - c_{M - 4}^{(\alpha,\sigma)} & \ddots & \ddots & \ddots & c^{(\alpha,\sigma)}_{0} & 0 \\ c_{M - 2}^{(\alpha,\sigma)} - c_{M - 3}^{(\alpha,\sigma)} & c_{M - 3}^{(\alpha,\sigma)} - c_{M - 4}^{(\alpha,\sigma)} & \cdots & \cdots & c_1^{(\alpha,\sigma)} - c_0^{(\alpha,\sigma)} & c^{(\alpha,\sigma)}_{0} \end{bmatrix} \end{equation*} and $\tilde{B} = \textrm{tridiag}(1 - \sigma, \sigma, 0)$. If Gaussian elimination is adopted for the BFS method \cite{huang2017fast} to solve (2.5), the matrices $K_N$, $A$, $A_0$, $A_1$ and $B$ must be stored explicitly. Hence, the computational complexity and storage requirement of such a method are $\mathcal{O}(M N^3 + M N^2)$ and $\mathcal{O}(N^2)$, respectively. To optimize the computational complexity, we prefer to employ preconditioned Krylov subspace methods to solve (2.5). The key point of such preconditioned methods is to find an efficient preconditioner. In the following section, two economical preconditioners are developed based on the special structures of $W$ and $A_0$, and several properties of them are investigated. \section{Two preconditioners and their spectra analysis} \label{sec3} In this section, two economical preconditioners are designed for solving Eq. (2.5). The spectra of the preconditioned matrices are also analyzed.
\subsection{A block bi-diagonal Toeplitz preconditioner} \label{sec3.1} \begin{figure}[ht] \centering \subfigure[] {\includegraphics[width=3.0in,height=2.2in]{fig0.eps}} \subfigure[] {\includegraphics[width=3.0in,height=2.2in]{fig1.eps}} \caption{The sparsity pattern (Left) and decay elements (Right) of matrix $W \in \mathbb{R}^{100 \times 100}$, when $M = N = 11$.} \label{fig1} \end{figure} To construct a good approximation of the coefficient matrix $W$, an example of $W$ with $h = \tau = \frac{1}{11}$ is plotted in Fig.~\ref{fig1}. Fig.~\ref{fig1}(a) shows the sparsity pattern of $W$. From Fig.~\ref{fig1}(b), it is noticeable that the entries of $W$ decay quickly away from the main diagonal, i.e., the main information of $W$ is clustered in the first two nonzero block diagonals. Inspired by this observation, a block bi-diagonal Toeplitz preconditioner $P_W$ is developed for the linear system (\ref{eq2.5b}), which only preserves the first two nonzero block diagonals of $W$; more precisely, \begin{equation} P_W = \begin{bmatrix} A_0 & \bm{0} & \bm{0} & \cdots & \bm{0} & \bm{0} \\ A_1 & A_0 & \bm{0} & \bm{0} & \cdots & \bm{0} \\ \bm{0} & A_1 & A_0 & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots & \bm{0} \\ \bm{0} & \ddots & \ddots & \ddots & A_0 & \bm{0} \\ \bm{0} & \bm{0} & \cdots & \cdots & A_1 & A_0 \end{bmatrix}. \label{eq3.1} \end{equation} Clearly, $P_W$ is a block-Toeplitz matrix with Toeplitz blocks; thus its memory requirement is $\mathcal{O}(N)$. Several properties of $\omega^{(\beta)}_k$ are reviewed in the following lemma, which is helpful for analyzing the nonsingularity of $P_W$.
\begin{lemma}(\cite{zhao2017fast, feng2016high}) Suppose that $1< \beta <2$. Then the coefficients $\omega_{k}^{(\beta)}$ satisfy \begin{equation*} \begin{cases} \omega_{0}^{(\beta)} = \frac{\beta}{2} > 0, ~~ \omega_{1}^{(\beta)} = \frac{2 - \beta - \beta^2}{2} < 0, ~~ \omega_{2}^{(\beta)} = \frac{\beta (\beta^2 + \beta - 4)}{4}, \\ 1 \geq \omega_{0}^{(\beta)} \geq \omega_{3}^{(\beta)} \geq \omega_{4}^{(\beta)} \geq \cdots \geq 0, ~~ \omega_{0}^{(\beta)} + \omega_{2}^{(\beta)} > 0, \\ \sum\limits_{k = 0}^{\infty} \omega_{k}^{(\beta)} = 0, ~~ \sum\limits_{k = 0}^{N} \omega_{k}^{(\beta)} < 0, ~~N \geq 2. \end{cases} \end{equation*} \label{lemma3.1} \end{lemma} With Lemma \ref{lemma3.1} in hand, we proceed to analyze the nonsingularity of $P_W$. \begin{theorem} $P_W$ given in (\ref{eq3.1}) is nonsingular. \label{th3.1} \end{theorem} \textbf{Proof.} Since $P_W$ is a block lower bi-diagonal matrix, it suffices to prove that $A_0$ is nonsingular. Firstly, we show that all eigenvalues of the matrix $H = \frac{K_N + K_N^T}{2}$ are strictly negative. From the definition of $K_N$ in (\ref{eq2.2}), we have $H = \frac{e_1 + e_2}{2} \left( G_\beta + G_\beta^T \right)$. Then according to the Gershgorin circle theorem \cite{varga2004gersgorin}, the $i$-th Gershgorin disc of $H $ is centered at $\left( e_1 + e_2 \right) \omega_1^{(\beta)} < 0$ with radius \begin{equation*} r_i^\beta = \frac{e_1 + e_2}{2} \left( \sum\limits_{k = 0, k \neq 1}^{i} \omega_k^{(\beta)} + \sum\limits_{k = 0, k \neq 1}^{N - i} \omega_k^{(\beta)} \right) \leq \left( e_1 + e_2 \right) \sum\limits_{k = 0, k \neq 1}^{N} \omega_k^{(\beta)} < -\left( e_1 + e_2 \right) \omega_1^{(\beta)} ~ (1 \leq i \leq N - 1), \end{equation*} in which Lemma \ref{lemma3.1} is adopted. Hence $H$ is negative definite, so $\mathrm{Re}\left( \bm{x}^{*} K_N \bm{x} \right) = \bm{x}^{*} H \bm{x} < 0$ for any nonzero vector $\bm{x}$. Since $h^\beta c_0^{(\alpha,\sigma)} > 0$, the real parts of all eigenvalues of $A_0 = h^\beta c_0^{(\alpha,\sigma)} I - \sigma K_N$ are strictly positive, and $A_0$ is nonsingular. The proof of Theorem \ref{th3.1} is completed. \hfill $\Box$ Theorem \ref{th3.1} also implies that the matrices $A$ and $W$ are invertible.
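The key step of this proof, the negative definiteness of $H$, is easy to check numerically. The following Python sketch (with arbitrarily chosen illustrative parameters, not taken from the paper) assembles $K_N$ from the WSGD weights and verifies the sign of the largest eigenvalue of $H$:

```python
import numpy as np

def wsgd_weights(beta, n):
    """WSGD weights omega_k^{(beta)}, k = 0..n, for (p, q) = (1, 0)."""
    g = np.empty(n + 2)
    g[0] = 1.0
    for k in range(1, n + 2):
        g[k] = (1.0 - (beta + 1.0) / k) * g[k - 1]
    w = np.empty(n + 1)
    w[0] = 0.5 * beta * g[0]
    w[1:] = 0.5 * beta * g[1:n + 1] + 0.5 * (2.0 - beta) * g[:n]
    return w

def toeplitz(col, row):
    """Dense Toeplitz matrix from its first column and first row."""
    n = len(col)
    return np.array([[col[i - j] if i >= j else row[j - i]
                      for j in range(n)] for i in range(n)])

# Arbitrary illustrative parameters (not from the paper).
N, beta, e1, e2 = 64, 1.5, 1.0, 2.0
w = wsgd_weights(beta, N)
col = w[1:N]                                    # [omega_1, ..., omega_{N-1}]
row = np.concatenate(([w[1], w[0]], np.zeros(N - 3)))
G = toeplitz(col, row)                          # the matrix G_beta
K = e1 * G + e2 * G.T                           # K_N
H = 0.5 * (K + K.T)
print(np.linalg.eigvalsh(H).max() < 0)          # True: H is negative definite
```

This is only a sanity check for one parameter choice; the proof above covers all $1 < \beta < 2$ and $e_1, e_2 > 0$.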
Now, the eigenvalues of the preconditioned matrix $P_W^{-1} W$ can be studied. \begin{theorem} The eigenvalues of the preconditioned matrix $P_W^{-1} W$ are all equal to $1$. \label{th3.2} \end{theorem} \textbf{Proof.} It is known that the product of two block lower triangular matrices is also a block lower triangular matrix. A direct calculation shows that \begin{equation*} P_W^{-1} W = \begin{bmatrix} I & \bm{0} & \cdots & \cdots & \bm{0} \\ \bm{0} & I & \ddots & \ddots & \vdots \\ J_2 & \ddots & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & \bm{0} \\ J_{M - 2} & J_{M - 3} & \cdots & \cdots & I \end{bmatrix} \end{equation*} is a block lower triangular matrix, where $J_2 = A_0^{-1} A_2$, $J_k = A_0^{-1} \left( A_k - A_1 J_{k - 1} \right)~(3 \leq k \leq M - 2)$. From the above equality, the diagonal blocks of $P_W^{-1} W$ are identity matrices, so all eigenvalues of $P_W^{-1} W$ equal $1$, which completes the proof. \hfill $\Box$ \begin{remark} The preconditioned Krylov subspace methods require us to compute $P_W^{-1} \bm{v}$, where $\bm{v}$ is a vector. In this work, the Thomas method is employed to compute such matrix-vector multiplications. Hence, only $A_0^{-1} \bm{v}$ needs to be computed. In practical computation, the Toeplitz inversion formula \cite{ng2003recursive} combined with Krylov subspace methods is used to calculate $A_0^{-1} \bm{v}$, and this will be discussed in Section \ref{sec3.2}. \label{remark1} \end{remark} For the sake of clarity, the Thomas method for calculating $P_W^{-1} \bm{v}$ is given as below.
\begin{algorithm}[H] \caption{Compute $\bm{z} = P_W^{-1} \bm{v}$} \begin{algorithmic}[1] \STATE Reshape $\bm{v}$ into an $(N - 1) \times M$ matrix $\check{V}$ \STATE Compute $\bm{\hat{b}}_1 = A_0^{-1} \check{V}(:,1)$ via Algorithm \ref{alg2} in Section \ref{sec3.2} \FOR {$k = 2, \cdots, M$} \STATE $\bm{\varphi} = \check{V}(:,k) - A_1 \hat{\bm{b}}_{k - 1}$ \STATE $\bm{\hat{b}}_k = A_0^{-1} \bm{\varphi}$ via Algorithm \ref{alg2} in Section \ref{sec3.2} \ENDFOR \STATE Stack $\bm{\hat{b}}_k~(k = 1,\cdots,M)$ in a vector $\bm{z}$ \end{algorithmic} \label{alg1} \end{algorithm} In line 4 of Algorithm \ref{alg1}, the matrix-vector multiplications can be done via FFTs in $\mathcal{O}(N \log N)$ operations \cite{ng2004iterative,chan2007introduction}. As for the storage requirement, $\bm{v}$, $\bm{\hat{b}}_k$, $\bm{\varphi}$, and the first column and first row of $A_1$ must be stored. Thus only $\mathcal{O}(M N)$ memory is needed in Algorithm \ref{alg1}. \subsection{A skew-circulant preconditioner} \label{sec3.2} According to the Toeplitz inversion formula in \cite{ng2003recursive}, two Toeplitz systems \begin{equation} \begin{cases} A_0 \bm{\xi} = \bm{q}_1, \\ A_0 \bm{\eta} = \bm{q}_{N - 1} \end{cases} \label{eq3.2} \end{equation} need to be solved, where $\bm{\xi} = [\xi_1, \cdots,\xi_{N - 1}]^T$, $\bm{\eta} = [\eta_1, \cdots,\eta_{N - 1}]^T$, and $\bm{q}_1$ and $\bm{q}_{N - 1}$ are the first and last columns of the identity matrix of order $(N - 1)$, respectively. As mentioned in Remark \ref{remark1}, Krylov subspace methods are chosen to solve (\ref{eq3.2}). However, when $A_0$ is ill-conditioned, Krylov subspace methods converge very slowly. To remedy this difficulty, in this subsection, a new skew-circulant preconditioner $P_{sk}$ is designed and the spectrum of $P_{sk}^{-1} A_0$ is discussed.
The expression of our skew-circulant preconditioner $P_{sk}$ is given as follows \begin{equation} P_{sk} = h^\beta c_0^{(\alpha,\sigma)}I - \sigma sk(K_N), \label{eq3.3} \end{equation} where $sk(K_N) = e_1 sk(G_\beta) + e_2 sk(G_\beta)^T$ with \begin{equation*} sk(G_{\beta}) = \begin{bmatrix} \omega_1^{(\beta)} & \omega_0^{(\beta)} & -\omega_{N-2}^{(\beta)} & \cdots & -\omega_{2}^{(\beta)} \\ \omega_2^{(\beta)} & \omega_1^{(\beta)} & \omega_0^{(\beta)} & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & -\omega_{N-2}^{(\beta)} \\ \omega_{N-2}^{(\beta)} & \ddots & \ddots & \ddots & \omega_0^{(\beta)} \\ -\omega_{0}^{(\beta)} & \omega_{N-2}^{(\beta)} & \cdots & \omega_2^{(\beta)} & \omega_1^{(\beta)} \end{bmatrix} \in \mathbb{R}^{(N - 1) \times (N - 1)}. \end{equation*} In analogy with Theorem \ref{th3.1}, the following theorem provides an essential property of $P_{sk}$ in (\ref{eq3.3}). \begin{theorem} The matrix $P_{sk}$ given in (\ref{eq3.3}) is invertible. \label{th3.3} \end{theorem} \textbf{Proof.} Firstly, we prove that all eigenvalues of the matrix $\hat{H} = -\frac{sk(K_N) + sk(K_N)^T}{2}$ are strictly positive. Based on the definition of $sk(G_\beta)$ and the Gershgorin circle theorem \cite{varga2004gersgorin}, all the Gershgorin discs of the matrix $\hat{H}$ are centered at $-\left( e_1 + e_2 \right) \omega_1^{(\beta)} > 0$ with radius \begin{equation*} r = \frac{e_1 + e_2}{2} \left[ 2\left( \omega_0^{(\beta)} + \omega_2^{(\beta)} \right) + \sum\limits_{k = 3}^{N - 2} \left| \omega_k^{(\beta)} - \omega_{N + 1 - k}^{(\beta)} \right| \right] \leq \left( e_1 + e_2 \right) \sum\limits_{k = 0, k \neq 1}^{N} \omega_k^{(\beta)} \leq -\left( e_1 + e_2 \right) \omega_1^{(\beta)}. \end{equation*} Thus, the real parts of all eigenvalues of $P_{sk}$ are strictly positive. The desired result then follows.
\hfill $\Box$ An $n \times n$ skew-circulant matrix $\mathcal{C}$ has the spectral decomposition \cite{ng2004iterative, chan2007introduction}: \begin{equation*} \mathcal{C} = \Omega^* F^* \Lambda F \Omega, \end{equation*} here $\Omega = \textrm{diag} \left[ 1, (-1)^{-1/n}, \cdots, (-1)^{-(n - 1)/n} \right]$, $F$ is the discrete Fourier matrix, $F^*$ represents the conjugate transpose of $F$, and $\Lambda$ is a diagonal matrix containing all eigenvalues of $\mathcal{C}$. Let $sk(G_\beta) = \Omega^* F^* \Lambda_s F \Omega$, then $sk(G_\beta)^T = \Omega^* F^* \bar{\Lambda}_s F \Omega$ and $P_{sk} = \Omega^* F^* \Lambda F \Omega$, where $\Lambda = h^\beta c_0^{(\alpha,\sigma)}I - \sigma \left( e_1 \Lambda_s + e_2 \bar{\Lambda}_s \right)$ and $\bar{\Lambda}_s$ is the complex conjugate of $\Lambda_s$. With the help of the decomposition of $P_{sk}$, the following result is obtained immediately. \begin{lemma} Suppose $0 < \hat{v} < h^\beta c_0^{(\alpha,\sigma)}$, then $\parallel P_{sk}^{-1} \parallel_2 \leq \frac{1}{\hat{v}}$. \label{lemma3.2} \end{lemma} \textbf{Proof.} By Theorem \ref{th3.3}, we obtain $Re([\Lambda_s]_{k,k}) < 0$, where $Re([\Lambda_s]_{k,k})$ means the real part of $[\Lambda_s]_{k,k}$. Then \begin{equation*} \left| [\Lambda]_{k,k} \right| \geq Re([\Lambda]_{k,k}) = h^\beta c_0^{(\alpha,\sigma)} - \sigma \left( e_1 Re([\Lambda_s]_{k,k}) + e_2 Re([\bar{\Lambda}_s]_{k,k}) \right) \geq \hat{v}, ~k = 1, 2, \cdots, N - 1. \end{equation*} Therefore \begin{equation*} \parallel P_{sk}^{-1} \parallel_2 = \frac{1}{\min\limits_{1\leq k\leq N - 1} \left| [\Lambda]_{k,k} \right| } \leq \frac{1}{\hat{v}}. \end{equation*} \hfill $\Box$ To analyze the spectrum of $P_{sk}^{-1} A_0$, we first prove that the generating function of the Toeplitz matrix $K_N$ is in the Wiener class \cite{chan2007introduction}. \begin{lemma} The generating function of the sequence $\left\{ K_N \right\}_{N = 2}^{\infty}$ is in the Wiener class. 
\label{lemma3.3} \end{lemma} \textbf{Proof.} For the Toeplitz matrix $K_N$ in (\ref{eq2.2}), its generating function is \begin{equation*} p(\theta) = \sum\limits_{k = -\infty}^{\infty} \ell_k e^{\bm{i} k \theta} = \sum\limits_{k = -1}^{\infty} \omega_{k + 1}^{(\beta)} \left( e_1 e^{\bm{i} k \theta} + e_2 e^{-\bm{i} k \theta} \right), \end{equation*} where $\bm{i} = \sqrt{-1}$ and $\theta \in [-\pi, \pi]$. By the properties of $\omega_k^{(\beta)}$, it yields \begin{equation*} \sum\limits_{k = -\infty}^{\infty} \left| \ell_k \right| \leq \left( e_1 + e_2 \right) \sum\limits_{k = -1}^{\infty} \left| \omega_{k + 1}^{(\beta)} \right| = \left( e_1 + e_2 \right) \left(- 2\omega_{1}^{(\beta)} + \left| \omega_{2}^{(\beta)} \right| - \omega_{2}^{(\beta)} \right) < \infty. \end{equation*} Thus, the generating function $p(\theta)$ is in the Wiener class. \hfill $\Box$ According to Lemma \ref{lemma3.3}, the following result is true. \begin{lemma} Let $p(\theta)$ be the generating function of $K_N$. Then for any $\varepsilon > 0$, there exists an $N' > 0$, such that for all $N > N' + 1$, $A_0 - P_{sk} = \tilde{U} + \tilde{V}$, where $rank(\tilde{U}) < 2N'$ and $\parallel \tilde{V} \parallel_2 \leq \varepsilon$. \label{lemma3.4} \end{lemma} \textbf{Proof.} Define $D_{sk} = A_0 - P_{sk} = \sigma \left( sk(K_N) - K_N \right)$. It can be checked that $D_{sk}$ is a Toeplitz matrix, and its first column and first row are respectively \begin{equation*} \begin{split} &-\sigma [0, 0, e_2 \omega_{N - 2}^{(\beta)}, \cdots, e_2 \omega_{3}^{(\beta)}, e_1 (\omega_{0}^{(\beta)} + \omega_{N - 2}^{(\beta)}) + e_2 \omega_{2}^{(\beta)} ]^T,\\ &-\sigma [0, 0, e_1\omega_{N - 2}^{(\beta)}, \cdots, e_1 \omega_{3}^{(\beta)}, e_1 \omega_{2}^{(\beta)} + e_2 (\omega_{0}^{(\beta)} + \omega_{N - 2}^{(\beta)})]. \end{split} \end{equation*} Using Lemma \ref{lemma3.3}, we know that $p(\theta)$ is in the Wiener class. 
Then for any $\varepsilon > 0$, there exists an $N' > 0$ such that $\sum\limits_{k = N' + 1}^{\infty} \left| \ell_k \right| = e_2 \sum\limits_{k = N' + 1}^{\infty} \left| \omega_{k + 1}^{(\beta)} \right| \leq \frac{ e_2 \varepsilon}{\sigma (e_1 + e_2)}$. Let $\tilde{V}$ be the $(N - 1)$-by-$(N - 1)$ matrix that agrees with $D_{sk}$ on its $(N - 1 - N')$-by-$(N - 1 - N')$ leading principal submatrix and is zero elsewhere. Hence the leading $(N - 1 - N') \times (N - 1 - N')$ block of $\tilde{V}$ is a Toeplitz matrix. Thus \begin{equation*} \begin{split} \parallel \tilde{V} \parallel_1 & = \sigma \max\left\{ \sum\limits_{k = N' + 1}^{N - 3} \left| \ell_{k} \right|, \sum\limits_{k = N' + 1}^{N - 3} \left| \ell_{-k} \right|, \max\limits_{3 \leq j \leq N - 3 - N'} \left( \sum\limits_{k = N' + j}^{N - 3} \left| \ell_k \right| + \sum\limits_{k = N - j}^{N - 3} \left| \ell_{-k} \right| \right) \right\}\\ & \leq \sigma (e_1 + e_2) \sum\limits_{k = N' + 1}^{\infty} \left| \omega_{k + 1}^{(\beta)} \right| \leq \varepsilon. \end{split} \end{equation*} Similarly, $\parallel \tilde{V} \parallel_\infty \leq \varepsilon$. Thus $\parallel \tilde{V} \parallel_2 \leq \left( \parallel \tilde{V} \parallel_1 \cdot \parallel \tilde{V} \parallel_\infty \right)^{1/2} \leq \varepsilon$. Let $\tilde{U} = D_{sk} - \tilde{V}$. It is obvious that $\tilde{U}$ is an $(N - 1) \times (N - 1)$ matrix obtained from $D_{sk}$ by replacing the $(N - 1 - N') \times (N - 1 - N')$ leading principal submatrix of $D_{sk}$ by the zero matrix. Hence $rank(\tilde{U}) \leq 2 N'$. \hfill $\Box$ Combining Lemmas \ref{lemma3.2} and \ref{lemma3.4}, the spectrum of $P_{sk}^{-1} A_0 - I$ is discussed. \begin{theorem} Suppose $0 < \hat{v} < h^\beta c_0^{(\alpha,\sigma)}$. Then for any $\varepsilon > 0$, there exists an $N' > 0$, such that for all $N - 1 > N'$, $P_{sk}^{-1} A_0 - I = U + V$, where $rank(U) < 2N'$ and $\parallel V \parallel_2 \leq \frac{\varepsilon}{\hat{v}}$.
\label{th3.4} \end{theorem} \textbf{Proof.} According to Lemma \ref{lemma3.4}, for any $\varepsilon > 0$, there exists an $N' > 0$, such that for all $N - 1 > N'$, \begin{equation*} P_{sk}^{-1} A_0 - I = P_{sk}^{-1} \left( A_0 - P_{sk} \right) = U + V, \end{equation*} where $U = P_{sk}^{-1} \tilde{U}$ and $V = P_{sk}^{-1} \tilde{V}$. Applying Lemma \ref{lemma3.2} yields \begin{equation*} \parallel V \parallel_2 = \parallel P_{sk}^{-1} \tilde{V} \parallel_2 \leq \parallel P_{sk}^{-1} \parallel_2 \parallel \tilde{V} \parallel_2 \leq \frac{\varepsilon}{\hat{v}}. \end{equation*} On the other hand, $rank(U) = rank(P_{sk}^{-1} \tilde{U}) < 2N'$. \hfill $\Box$ \begin{remark} Since the matrix $A$ is only slightly different from $A_0$, the preconditioner $P_{sk}$ in (\ref{eq3.3}) still works for solving (\ref{eq2.5a}). Hence, in this work, $P_{sk}$ is also applied to solve (\ref{eq2.5a}). \end{remark} For convenience, our strategy in this subsection is summarized in the following algorithm. \begin{algorithm} \caption{Compute $\bm{\tilde{z}} = A_0^{-1} \bm{v}$} \begin{algorithmic}[1] \STATE {Solve $A_0 \bm{\xi} = \bm{q}_1$ via FGMRES/PBiCGSTAB with $P_{sk}$ \\ Solve $A_0 \bm{\eta} = \bm{q}_{N - 1}$ via FGMRES/PBiCGSTAB with $P_{sk}$} \STATE {$\bm{s}_1 = [\eta_{N - 1}, -\eta_1,\cdots, -\eta_{N - 2}]^T$, $\bm{s}_2 = [\eta_{N - 1}, \eta_1,\cdots, \eta_{N - 2}]^T$} \STATE {$\Lambda^{(1)} = \textrm{diag}(F \bm{\xi})$, ~$\Lambda^{(2)} = \textrm{diag}(\Omega^{*} F \bm{s}_1)$, \\ $\Lambda^{(3)} = \textrm{diag}(F \bm{s}_2)$, $\Lambda^{(4)} = \textrm{diag}(\Omega^{*} F \bm{\xi})$} \STATE {$\bm{\tilde{v}} = F \Omega \bm{v}$} \STATE {$\bm{z}_1 = \Omega^{*} F^{*} \Lambda^{(2)}\bm{\tilde{v}}$, ~$\bm{z}_2 = \Omega^{*} F^{*} \Lambda^{(4)}\bm{\tilde{v}}$, \\ $\bm{z}_3 = \Lambda^{(1)} F \bm{z}_1$, \quad~ $\bm{z}_4 = \Lambda^{(3)} F \bm{z}_2$} \STATE $\bm{\tilde{z}} = \frac{1}{2 \xi_1} F^{*} (\bm{z}_3 + \bm{z}_4)$ \end{algorithmic} \label{alg2} \end{algorithm} In this algorithm, ten fast Fourier transforms are
needed. Thus, the complexity and storage requirement are $\mathcal{O}(N \log N)$ and $\mathcal{O}(N)$, respectively. \section{Numerical experiments} \label{sec4} In this section, one example is reported to show the performance of the preconditioners proposed in Section \ref{sec3}. In order to illustrate the efficiency of $P_{sk}$, Strang's circulant preconditioner \cite{ng2004iterative,chan2007introduction} is also tested, which can be written as \begin{equation*} P_{s} = h^\beta c_0^{(\alpha,\sigma)}I - \sigma s(K_N), \end{equation*} where $s(K_N) = e_1 s(G_\beta) + e_2 s(G_\beta)^T$. More precisely, the first columns of circulant matrices $s(G_\beta)$ and $s(G_\beta)^T$ are $\left[ \omega_1^{(\beta)}, \cdots, \omega_{\lfloor N/2 \rfloor}^{(\beta)}, 0, \cdots, 0, \omega_0^{(\beta)} \right]^T$ and $\left[ \omega_1^{(\beta)}, \omega_0^{(\beta)}, 0, \cdots, 0, \omega_{\lfloor N/2 \rfloor}^{(\beta)}, \cdots, \omega_2^{(\beta)} \right]^T$, respectively. The PBiCGSTAB and FGMRES methods for solving (2.5) terminate if the relative residual error satisfies $\frac{\| \bm{r}^{(k)} \|_2}{\| \bm{r}^{(0)} \|_2} < 10^{-8}$ or the number of iterations exceeds $1000$, where $\bm{r}^{(k)}$ denotes the residual vector at the $k$-th iteration, and the initial guess is chosen as the zero vector. Since $P_W$ only serves as a preconditioner for solving (2.5), it is not necessary to compute $P^{-1}_W \bm{v}$ accurately. Hence the stopping criterion of the PBiCGSTAB and FGMRES methods in Algorithm \ref{alg2} is $\frac{\| \bm{r}^{(k)} \|_2}{\| \bm{r}^{(0)} \|_2} < 10^{-3}$, and the initial guess is also chosen as the zero vector. All of the symbols shown below will be used in what follows. All experiments are carried out via MATLAB 2017a on a Windows 10 (64 bit) PC with the configuration: Intel(R) Core(TM) i7-7700T CPU 2.90 GHz and 8 GB RAM.
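The $\mathcal{O}(N \log N)$ cost quoted above rests on the fact that a skew-circulant matrix is diagonalized by the FFT after a diagonal scaling. The following is a minimal NumPy sketch of that mechanism (our own illustration with a generic random matrix, not the authors' MATLAB code and not the actual $P_{sk}$):

```python
import numpy as np

def skew_circulant(s):
    """Dense skew-circulant (negacyclic) matrix with first column s."""
    n = len(s)
    return np.array([[s[(j - k) % n] * (1 if j >= k else -1)
                      for k in range(n)] for j in range(n)])

def skew_circulant_solve(s, v):
    """Solve S x = v in O(n log n) operations via the factorization
    S = D^{-1} C D, with D = diag(w^0, ..., w^{n-1}), w = exp(-i*pi/n),
    and C the circulant matrix whose first column is (w^j s_j)_j."""
    n = len(s)
    d = np.exp(-1j * np.pi * np.arange(n) / n)
    return np.conj(d) * np.fft.ifft(np.fft.fft(d * v) / np.fft.fft(d * s))

rng = np.random.default_rng(0)
n = 64
s = rng.standard_normal(n)
s[0] += n                      # make the matrix safely invertible
v = rng.standard_normal(n)
x = skew_circulant_solve(s, v)
print(np.allclose(skew_circulant(s) @ x, v))   # True
```

Only three FFTs of length $n$ are used per solve, which is the same arithmetic pattern that Algorithm \ref{alg2} exploits.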
\begin{center}\footnotesize\tabcolsep=2.5pt \begin{tabular}{|l|l|} \hline Symbol & Explanation \\ \hline BS & The MATLAB's backslash method to solve (2.5) \\ BFSM & The BFS method to solve (2.5) \\ SK2-PBiCGSTAB & The PBiCGSTAB method with the preconditioners $P_W$ and $P_{sk}$ to solve (2.5) \\ SK2-FGMRES & The FGMRES method with the preconditioners $P_W$ and $P_{sk}$ to solve (2.5) \\ S2-PBiCGSTAB & The PBiCGSTAB method with the preconditioners $P_W$ and $P_{s}$ to solve (2.5) \\ S2-FGMRES & The FGMRES method with the preconditioners $P_W$ and $P_{s}$ to solve (2.5) \\ $\mathrm{Iter1}$ & The number of iterations required for solving (\ref{eq2.5a}) \\ $\mathrm{Iter2}$ & The number of iterations required for solving (\ref{eq2.5b}) \\ $\mathrm{Iter3}$ & The number of iterations required for solving (\ref{eq3.2}) \\ $\mathrm{Iter}$ & $\mathrm{Iter1} + \mathrm{Iter2}$ \\ $\textrm{Time}$ & Total CPU time in seconds for solving the whole BLTT system (2.5) \\ \dag & Out of memory \\ \hline \end{tabular} \end{center} \begin{figure}[ht] \centering \subfigure[Eigenvalues of $W$] {\includegraphics[width=3.0in,height=2.2in]{fig2-a.eps}} \subfigure[Eigenvalues of $P_{W}^{-1} W$] {\includegraphics[width=3.0in,height=2.25in]{fig2-b.eps}}\\ \subfigure[Eigenvalues of $W$] {\includegraphics[width=3.0in,height=2.2in]{fig2-c.eps}} \subfigure[Eigenvalues of $P_{W}^{-1} W$] {\includegraphics[width=3.0in,height=2.2in]{fig2-d.eps}} \caption{Spectra of $W$ and $P_{W}^{-1} W$, when $M = N = 2^6$ in Example 1. Top row: $(\alpha, \beta) = (0.1, 1.1)$; Bottom row: $(\alpha, \beta) = (0.7, 1.4)$.} \label{fig2} \end{figure} \noindent \textbf{Example 1.} Considering Eq. 
(\ref{eq1.1}) with diffusion coefficients $e_1 = 20$, $e_2 = 0.02$, and the source term \begin{equation*} \begin{split} f(x,t) = & 2 t^{1 - \alpha} E_{1, 2 - \alpha}(2t) x^2 (1 - x)^2 - e^{2t} \Bigg\{ \frac{\Gamma(3)}{\Gamma(3 - \beta)} \left[ e_1 x^{2 - \beta} + e_2 (1 - x)^{2 - \beta} \right] \\ & - \frac{2 \Gamma(4)}{\Gamma(4 - \beta)} \left[ e_1 x^{3 - \beta} + e_2 (1 - x)^{3 - \beta} \right] + \frac{\Gamma(5)}{\Gamma(5 - \beta)} \left[ e_1 x^{4 - \beta} + e_2 (1 - x)^{4 - \beta} \right] \Bigg\}, \end{split} \end{equation*} in which $E_{\mu, \nu}(z)$ is the Mittag-Leffler function \cite{podlubny1998fractional} with two parameters defined by \begin{equation*} E_{\mu, \nu}(z) = \sum_{k = 0}^{\infty} \frac{z^{k}}{\Gamma(\mu k + \nu)}. \end{equation*} The exact solution of the TSFDE problem (\ref{eq1.1}) is $u(x,t) = e^{2 t} x^2 (1 - x)^2$. \begin{figure}[ht] \centering \subfigure[Eigenvalues of $A_0$] {\includegraphics[width=3.0in,height=2.2in]{fig3-a.eps}} \subfigure[Eigenvalues of $P_{sk}^{-1} A_0$ ({\color{blue} $*$}) and $P_{s}^{-1} A_0$ ({\color{magenta} $\times$})] {\includegraphics[width=3.0in,height=2.2in]{fig3-b.eps}}\\ \subfigure[Eigenvalues of $A_0$] {\includegraphics[width=3.0in,height=2.2in]{fig3-c.eps}} \subfigure[Eigenvalues of $P_{sk}^{-1} A_0$ ({\color{blue} $*$}) and $P_{s}^{-1} A_0$ ({\color{magenta} $\times$})] {\includegraphics[width=3.0in,height=2.2in]{fig3-d.eps}} \caption{Spectra of $A_0$, $P_{sk}^{-1} A_0$ and $P_{s}^{-1} A_0$, when $M = N = 2^8$ in Example 1.
Top row: $(\alpha, \beta) = (0.1, 1.1)$; Bottom row: $(\alpha, \beta) = (0.7, 1.4)$.} \label{fig3} \end{figure} \begin{table}[th]\footnotesize\tabcolsep=2.0pt \begin{center} \caption{Results of different methods when $M = N$ for Example 1.} \centering \begin{tabular}{cccccccccccc} \hline & & \rm{BS} & \rm{BFSM} & \multicolumn{2}{c}{\rm{SK2-PBiCGSTAB}} & \multicolumn{2}{c}{\rm{S2-PBiCGSTAB}} & \multicolumn{2}{c}{\rm{SK2-FGMRES}} & \multicolumn{2}{c}{\rm{S2-FGMRES}} \\ [-2pt] \cmidrule(lr){5-6} \cmidrule(lr){7-8} \cmidrule(lr){9-10} \cmidrule(lr){11-12}\\ [-11pt] ($\alpha$, $\beta$) & $N$ & $\mathrm{Time}$ & $\mathrm{Time}$ &($\mathrm{Iter}$, $\mathrm{Iter3}$) & $\mathrm{Time}$ & ($\mathrm{Iter}$, $\mathrm{Iter3}$) & $\mathrm{Time}$ & ($\mathrm{Iter}$, $\mathrm{Iter3}$) & $\mathrm{Time}$ & ($\mathrm{Iter}$, $\mathrm{Iter3}$) & $\mathrm{Time}$ \\ \hline (0.1, 1.1) & 64 & 0.213 & 0.007 & (4+2, 5) & 0.014 & (5+2, 5) & 0.015 & (6+5, 5) & 0.020 & (6+5, 6) & 0.021 \\ & 128 & 3.469 & 0.044 & (4+2, 5) & 0.056 & (5+2, 5) & 0.057 & (6+5, 5) & 0.077 & (6+6, 5) & 0.092 \\ & 256 & 237.015 & 0.234 & (5+2, 5) & 0.142 & (5+2, 5) & 0.144 & (6+6, 5) & 0.234 & (6+7, 5) & 0.273 \\ & 512 & \dag & 1.839 & (5+2, 5) & 0.995 & (5+2, 5) & 0.998 & (6+7, 5) & 1.912 & (6+8, 5) & 2.185 \\ & 1024 & \dag & 19.839 & (5+2, 5) & 2.635 & (5+2, 6) & 2.672 & (6+9, 5) & 6.672 & (6+10, 5) & 7.480\\ \\ (0.4, 1.7) & 64 & 0.185 & 0.009 & (4+2, 5) & 0.014 & (6+2, 6) & 0.015 & (6+5, 7) & 0.021 & (7+5, 6) & 0.022 \\ & 128 & 2.993 & 0.043 & (4+2, 5) & 0.057 & (6+2, 5) & 0.058 & (6+6, 6) & 0.090 & (7+5, 6) & 0.078 \\ & 256 & 232.214 & 0.235 & (6+2, 5) & 0.140 & (6+2, 5) & 0.141 & (6+7, 6) & 0.268 & (7+6, 5) & 0.233 \\ & 512 & \dag & 1.840 & (6+3, 5) & 1.486 & (6+3, 5) & 1.485 & (6+7, 5) & 1.906 & (7+6, 5) & 1.664 \\ & 1024 & \dag & 19.838 & (6+3, 5) & 3.887 & (6+3, 5) & 3.878 & (6+8, 5) & 5.983 & (7+7, 5) & 5.248 \\ \\ (0.7, 1.4) & 64 & 0.183 & 0.009 & (4+3, 5) & 0.020 & (5+3, 5) & 0.020 & (6+6, 7) & 0.024 & (6+6, 
8) & 0.025 \\ & 128 & 2.969 & 0.040 & (5+3, 5) & 0.081 & (5+3, 5) & 0.083 & (6+7, 6) & 0.104 & (7+8, 6) & 0.119 \\ & 256 & 237.030 & 0.238 & (5+4, 5) & 0.279 & (5+4, 5) & 0.279 & (6+8, 6) & 0.300 & (7+9, 6) & 0.342 \\ & 512 & \dag & 1.842 & (5+4, 5) & 1.975 & (5+4, 5) & 1.988 & (6+10, 5) & 2.688 & (7+11, 6) & 2.971 \\ & 1024 & \dag & 19.847 & (5+5, 5) & 6.429 & (5+5, 5) & 6.526 & (6+11, 5) & 8.174 & (7+14, 5) & 10.540 \\ \\ (0.9, 1.9) & 64 & 0.176 & 0.009 & (4+2, 5) & 0.015 & (6+2, 5) & 0.016 & (5+5, 5) & 0.200 & (6+5, 5) & 0.021 \\ & 128 & 2.950 & 0.043 & (6+3, 5) & 0.081 & (6+3, 5) & 0.082 & (6+6, 5) & 0.091 & (6+6, 5) & 0.092 \\ & 256 & 233.143 & 0.209 & (6+3, 5) & 0.209 & (6+3, 5) & 0.214 & (6+7, 5) & 0.267 & (6+7, 5) & 0.271 \\ & 512 & \dag & 1.837 & (6+4, 5) & 1.968 & (6+4, 5) & 1.986 & (6+8, 5) & 2.164 & (6+8, 5) & 2.182 \\ & 1024 & \dag & 19.853 & (6+4, 5) & 5.211 & (6+4, 5) & 5.276 & (6+10, 5) & 7.505 & (6+10, 5) & 7.447 \\ \hline \end{tabular} \label{tab1} \end{center} \end{table} \begin{table}[H]\footnotesize\tabcolsep=2.0pt \begin{center} \caption{Results of different methods when $M = 257$ for Example 1.} \centering \begin{tabular}{cccccccccccc} \hline & & \rm{BS} & \rm{BFSM} & \multicolumn{2}{c}{\rm{SK2-PBiCGSTAB}} & \multicolumn{2}{c}{\rm{S2-PBiCGSTAB}} & \multicolumn{2}{c}{\rm{SK2-FGMRES}} & \multicolumn{2}{c}{\rm{S2-FGMRES}} \\ [-2pt] \cmidrule(lr){5-6} \cmidrule(lr){7-8} \cmidrule(lr){9-10} \cmidrule(lr){11-12}\\ [-11pt] ($\alpha$, $\beta$) & $N$ & $\mathrm{Time}$ & $\mathrm{Time}$ &($\mathrm{Iter}$, $\mathrm{Iter3}$) & $\mathrm{Time}$ & ($\mathrm{Iter}$, $\mathrm{Iter3}$) & $\mathrm{Time}$ & ($\mathrm{Iter}$, $\mathrm{Iter3}$) & $\mathrm{Time}$ & ($\mathrm{Iter}$, $\mathrm{Iter3}$) & $\mathrm{Time}$ \\ \hline (0.1, 1.1) & 65 & 3.198 & 0.077 & (4+2, 5) & 0.053 & (5+2, 5) & 0.055 & (6+5, 5) & 0.075 & (6+5, 6) & 0.072 \\ & 129 & 13.545 & 0.115 & (4+2, 5) & 0.079 & (5+2, 5) & 0.079 & (6+5, 5) & 0.111 & (6+6, 5) & 0.138 \\ & 257 & 277.409 & 0.209 & 
(5+2, 5) & 0.138 & (5+2, 5) & 0.138 & (6+6, 5) & 0.236 & (6+7, 5) & 0.269 \\ & 513 & \dag & 0.819 & (5+2, 5) & 0.236 & (5+2, 5) & 0.237 & (6+7, 5) & 0.462 & (6+8, 5) & 0.527 \\ & 1025 & \dag & 4.613 & (5+2, 5) & 0.405 & (5+2, 6) & 0.418 & (6+9, 5) & 1.078 & (6+10, 5) & 1.205 \\ \\ (0.4, 1.7) & 65 & 3.116 & 0.072 & (4+2, 5) & 0.050 & (6+2, 6) & 0.059 & (6+5, 7) & 0.066 & (7+6, 6) & 0.077 \\ & 129 & 13.397 & 0.119 & (4+2, 5) & 0.080 & (6+2, 5) & 0.084 & (6+6, 6) & 0.131 & (7+6, 6) & 0.137 \\ & 257 & 263.419 & 0.210 & (6+2, 5) & 0.139 & (6+2, 5) & 0.139 & (6+7, 6) & 0.271 & (7+6, 5) & 0.238 \\ & 513 & \dag & 0.816 & (6+2, 5) & 0.233 & (6+2, 5) & 0.232 & (6+7, 5) & 0.474 & (7+6, 5) & 0.421 \\ & 1025 & \dag & 4.613 & (6+2, 5) & 0.410 & (6+3, 5) & 0.615 & (6+7, 5) & 0.840 & (7+6, 5) & 0.760 \\ \\ (0.7, 1.4) & 65 & 3.056 & 0.073 & (4+4, 5) & 0.093 & (5+4, 5) & 0.100 & (6+8, 6) & 0.101 & (6+8, 7) & 0.109 \\ & 129 & 13.421 & 0.115 & (5+4, 5) & 0.155 & (5+4, 5) & 0.165 & (6+8, 6) & 0.172 & (7+9, 6) & 0.199 \\ & 257 & 251.611 & 0.214 & (5+4, 5) & 0.269 & (5+4, 5) & 0.277 & (6+8, 6) & 0.298 & (7+9, 6) & 0.334 \\ & 513 & \dag & 0.833 & (5+4, 5) & 0.450 & (5+4, 5) & 0.457 & (6+9, 6) & 0.593 & (7+10, 6) & 0.658 \\ & 1025 & \dag & 4.397 & (5+4, 5) & 0.792 & (5+4, 5) & 0.793 & (7+10, 5) & 1.203 & (7+11, 6) & 1.332 \\ \\ (0.9, 1.9) & 65 & 3.057 & 0.070 & (4+3, 5) & 0.074 & (6+3, 5) & 0.071 & (6+7, 5) & 0.088 & (6+7, 5) & 0.102 \\ & 129 & 13.393 & 0.118 & (4+3, 5) & 0.116 & (6+3, 5) & 0.124 & (6+7, 5) & 0.152 & (6+7, 5) & 0.160 \\ & 257 & 257.493 & 0.211 & (6+3, 5) & 0.201 & (6+3, 5) & 0.213 & (6+7, 5) & 0.263 & (6+7, 5) & 0.274 \\ & 513 & \dag & 0.828 & (6+3, 5) & 0.340 & (6+3, 5) & 0.365 & (6+7, 5) & 0.465 & (6+7, 5) & 0.482 \\ & 1025 & \dag & 4.625 & (6+3, 5) & 0.587 & (6+3, 5) & 0.613 & (6+7, 5) & 0.854 & (6+7, 5) & 0.860 \\ \hline \end{tabular} \label{tab2} \end{center} \end{table} \begin{table}[t]\footnotesize\tabcolsep=2.0pt \begin{center} \caption{Results of different methods 
when $N = 257$ for Example 1.} \centering \begin{tabular}{cccccccccccc} \hline & & \rm{BS} & \rm{BFSM} & \multicolumn{2}{c}{\rm{SK2-PBiCGSTAB}} & \multicolumn{2}{c}{\rm{S2-PBiCGSTAB}} & \multicolumn{2}{c}{\rm{SK2-FGMRES}} & \multicolumn{2}{c}{\rm{S2-FGMRES}} \\ [-2pt] \cmidrule(lr){5-6} \cmidrule(lr){7-8} \cmidrule(lr){9-10} \cmidrule(lr){11-12}\\ [-11pt] ($\alpha$, $\beta$) & $M$ & $\mathrm{Time}$ & $\mathrm{Time}$ &($\mathrm{Iter}$, $\mathrm{Iter3}$) & $\mathrm{Time}$ & ($\mathrm{Iter}$, $\mathrm{Iter3}$) & $\mathrm{Time}$ & ($\mathrm{Iter}$, $\mathrm{Iter3}$) & $\mathrm{Time}$ & ($\mathrm{Iter}$, $\mathrm{Iter3}$) & $\mathrm{Time}$ \\ \hline (0.1, 1.1) & 65 & 4.065 & 0.048 & (5+2, 5) & 0.035 & (5+2, 5) & 0.037 & (6+6, 5) & 0.059 & (6+7, 5) & 0.067 \\ & 129 & 23.278 & 0.091 & (5+2, 5) & 0.063 & (5+2, 5) & 0.070 & (6+6, 5) & 0.113 & (6+7, 5) & 0.135 \\ & 257 & 277.409 & 0.209 & (5+2, 5) & 0.138 & (5+2, 5) & 0.138 & (6+6, 5) & 0.236 & (6+7, 5) & 0.269 \\ & 513 & \dag & 0.569 & (5+2, 5) & 0.257 & (5+2, 5) & 0.266 & (6+6, 5) & 0.463 & (6+7, 5) & 0.550 \\ & 1025 & \dag & 1.702 & (5+2, 5) & 0.530 & (5+2, 5) & 0.542 & (6+6, 5) & 0.958 & (6+7, 5) & 1.095 \\ \\ (0.4, 1.7) & 65 & 4.057 & 0.047 & (6+2, 5) & 0.036 & (6+2, 5) & 0.039 & (6+7, 6) & 0.069 & (7+6, 5) & 0.067 \\ & 129 & 22.929 & 0.088 & (5+2, 5) & 0.068 & (6+2, 5) & 0.071 & (6+7, 6) & 0.125 & (7+6, 5) & 0.118 \\ & 257 & 263.419 & 0.210 & (6+2, 5) & 0.139 & (6+2, 5) & 0.139 & (6+7, 6) & 0.271 & (7+6, 5) & 0.238 \\ & 513 & \dag & 0.585 & (6+3, 5) & 0.390 & (6+3, 5) & 0.396 & (6+7, 6) & 0.534 & (7+6, 5) & 0.476 \\ & 1025 & \dag & 1.854 & (6+3, 5) & 0.782 & (6+3, 5) & 0.810 & (6+7, 6) & 1.101 & (7+6, 5) & 0.968 \\ \\ (0.7, 1.4) & 65 & 4.069 & 0.048 & (5+3, 5) & 0.051 & (5+3, 5) & 0.550 & (6+9, 6) & 0.088 & (7+8, 6) & 0.079 \\ & 129 & 23.217 & 0.090 & (5+3, 5) & 0.099 & (5+3, 5) & 0.106 & (6+8, 6) & 0.146 & (7+8, 6) & 0.151 \\ & 257 & 251.611 & 0.214 & (5+4, 5) & 0.269 & (5+4, 5) & 0.277 & (6+8, 6) & 0.298 & (7+9, 6) & 
0.334 \\ & 513 & \dag & 0.578 & (4+4, 5) & 0.521 & (5+4, 5) & 0.542 & (6+9, 6) & 0.680 & (7+10, 6) & 0.776 \\ & 1025 & \dag & 1.763 & (4+5, 5) & 1.301 & (5+5, 5) & 1.318 & (6+11, 5) & 1.680 & (7+12, 6) & 1.829 \\ \\ (0.9, 1.9) & 65 & 4.081 & 0.043 & (6+2, 5) & 0.036 & (6+2, 5) & 0.039 & (6+5, 5) & 0.049 & (6+5, 5) & 0.053 \\ & 129 & 23.093 & 0.095 & (6+3, 5) & 0.098 & (6+3, 5) & 0.106 & (6+6, 5) & 0.112 & (6+6, 5) & 0.119 \\ & 257 & 257.493 & 0.211 & (6+3, 5) & 0.201 & (6+3, 5) & 0.213 & (6+7, 5) & 0.263 & (6+7, 5) & 0.274 \\ & 513 & \dag & 0.552 & (6+4, 5) & 0.513 & (6+4, 5) & 0.539 & (6+8, 5) & 0.612 & (6+8, 5) & 0.642 \\ & 1025 & \dag & 1.725 & (4+4, 5) & 1.033 & (6+4, 5) & 1.082 & (6+10, 5) & 1.528 & (6+10, 5) & 1.567 \\ \hline \end{tabular} \label{tab3} \end{center} \end{table} \begin{table}[H]\footnotesize\tabcolsep=3.0pt \begin{center} \caption{Comparison results of the SK2-PBiCGSTAB method and Huang-Lei's method for Example 1, where $M = 257$.} \centering \begin{tabular}{cccccccc} \hline & & \multicolumn{3}{c}{\rm{Huang-Lei's method}} & \multicolumn{3}{c}{\rm{SK2-PBiCGSTAB}} \\ [-2pt] \cmidrule(lr){3-5} \cmidrule(lr){6-8} \\ [-11pt] ($\alpha$, $\beta$) & $N$ & $\mathrm{Time}$ & $\mathrm{Error1}$ & $\mathrm{Error2}$ & $\mathrm{Time}$ & $\mathrm{Error1}$ & $\mathrm{Error2}$\\ \hline (0.1, 1.1) & 65 & 0.058 & 8.3526E-04 & 5.9916E-04 & 0.053 & 8.3526E-04 & 5.9916E-04 \\ & 129 & 0.065 & 2.1165E-04 & 1.5173E-04 & 0.079 & 2.1165E-04 & 1.5173E-04 \\ & 257 & 0.078 & 5.2851E-05 & 3.7902E-05 & 0.138 & 5.2852E-05 & 3.7903E-05 \\ & 513 & 0.116 & 1.2783E-05 & 9.2066E-06 & 0.236 & 1.2778E-05 & 9.2035E-06 \\ & 1025 & 0.233 & 2.7253E-06 & 2.0070E-06 & 0.405 & 2.7131E-06 & 1.9997E-06 \\ \\ (0.4, 1.7) & 65 & 0.055 & 5.4781E-04 & 3.8003E-04 & 0.050 & 5.4781E-04 & 3.8003E-04 \\ & 129 & 0.063 & 1.3690E-04 & 9.5128E-05 & 0.080 & 1.3689E-04 & 9.5126E-05 \\ & 257 & 0.076 & 3.2744E-05 & 2.2885E-05 & 0.139 & 3.2743E-05 & 2.2884E-05 \\ & 513 & 0.115 & 6.6208E-06 &
4.7452E-06 & 0.233 & 6.6207E-06 & 4.7454E-06 \\ & 1025 & 0.224 & 1.5886E-06 & 4.9796E-07 & 0.410 & 1.5886E-06 & 4.9764E-07 \\ \\ (0.7, 1.4) & 65 & 0.053 & 7.0888E-04 & 4.9767E-04 & 0.093 & 7.0888E-04 & 4.9767E-04 \\ & 129 & 0.064 & 1.7789E-04 & 1.2502E-04 & 0.155 & 1.7790E-04 & 1.2502E-04 \\ & 257 & 0.078 & 4.3826E-05 & 3.0074E-05 & 0.269 & 4.3825E-05 & 3.0076E-05 \\ & 513 & 0.120 & 1.1377E-05 & 6.1321E-06 & 0.450 & 1.1376E-05 & 6.1350E-06 \\ & 1025 & 0.220 & 2.9060E-06 & 5.7145E-07 & 0.792 & 2.9113E-06 & 5.4756E-07 \\ \\ (0.9, 1.9) & 65 & 0.053 & 4.4937E-04 & 3.1623E-04 & 0.074 & 4.4937E-04 & 3.1623E-04 \\ & 129 & 0.064 & 1.1041E-04 & 7.7685E-05 & 0.116 & 1.1043E-04 & 7.7700E-05 \\ & 257 & 0.092 & 2.5058E-05 & 1.7763E-05 & 0.201 & 2.5028E-05 & 1.7741E-05 \\ & 513 & 0.126 & 3.8914E-06 & 2.8666E-06 & 0.340 & 3.8553E-06 & 2.8317E-06 \\ & 1025 & 0.220 & 1.7111E-06 & 1.0294E-06 & 0.587 & 1.7104E-06 & 1.0289E-06 \\ \hline \end{tabular} \label{tab4} \end{center} \end{table} \begin{table}[t]\footnotesize\tabcolsep=2.0pt \begin{center} \caption{The condition numbers of $W$, $P_{W}^{-1} W$, $A_0$, $P_{s}^{-1} A_0$ and $P_{sk}^{-1} A_0$ in Example 1.} \centering \begin{tabular}{ccccccc} \hline ($\alpha$, $\beta$) & ($N$, $M$) & $W$ & $P_{W}^{-1} W$ & $A_0$ & $P_{s}^{-1} A_0$ & $P_{sk}^{-1} A_0$ \\ \hline (0.1, 1.1) & (32, 32) & 27.98 & 1.01 & 25.28 & 99.15 & 14.16 \\ & (64, 32) & 57.43 & 1.01 & 51.90 & 212.95 & 27.82 \\ & (128, 32) & 120.74 & 1.01 & 109.09 & 457.09 & 57.03 \\ \\ (0.4, 1.7) & (32, 32) & 214.57 & 1.02 & 132.85 & 223.71 & 49.84 \\ & (64, 32) & 696.64 & 1.02 & 431.24 & 725.02 & 152.98 \\ & (128, 32) & 2262.94 & 1.02 & 1400.75 & 2348.38 & 484.23 \\ \\ (0.7, 1.4) & (32, 32) & 89.65 & 1.05 & 39.59 & 40.06 & 18.52 \\ & (64, 32) & 236.56 & 1.05 & 104.15 & 102.99 & 45.01 \\ & (128, 32) & 624.16 & 1.05 & 274.49 & 268.20 & 114.37 \\ \\ (0.9, 1.9) & (32, 32) & 51.45 & 1.15 & 233.76 & 211.90 & 74.67 \\ & (64, 32) & 3063.80 & 1.02 & 872.64 & 774.19 & 259.89 \\ & (128, 32) 
& 11438.08 & 1.02 & 3256.96 & 2854.62 & 932.00 \\ \hline \end{tabular} \label{tab5} \end{center} \end{table} In Tables \ref{tab1}-\ref{tab3}, compared with the BS method, the four preconditioned iterative methods (i.e., SK2-PBiCGSTAB, SK2-FGMRES, S2-PBiCGSTAB and S2-FGMRES) greatly reduce the computational cost in terms of both CPU time and memory requirements. When $M = N = 2^6, 2^7$ and $2^8$ in Table \ref{tab1}, although the four preconditioned iterative methods are slower than the BFSM method, they do not need to deal with $M$ systems. After further investigating Tables \ref{tab1}-\ref{tab3}, we have found that there is little difference in the CPU time and number of iterations between SK2-PBiCGSTAB and S2-PBiCGSTAB (or between SK2-FGMRES and S2-FGMRES). However, the $\textrm{Time}$ and number of iterations needed by SK2-FGMRES (or S2-FGMRES) are slightly larger than those of SK2-PBiCGSTAB (or S2-PBiCGSTAB). In Table \ref{tab4}, the SK2-PBiCGSTAB method is compared with the method proposed in \cite{huang2017fast} (referred to as Huang-Lei's method) in terms of CPU cost and accuracy of solutions. Here and hereafter, $\textrm{Error1} = \max\limits_{1 \leq j \leq M} \| \bm{\zeta}^j \|_\infty$ and $\textrm{Error2} = \max\limits_{1 \leq j \leq M} \| \bm{\zeta}^j \|$, where $\| \cdot \|$ is the $L_2$-norm, and $\bm{\zeta}^j$ is a vector representing the absolute error between the exact solution and numerical solution at $t = t_j$. As seen from Table \ref{tab4}, the SK2-PBiCGSTAB method needs more CPU time when solving Eq. (2.5). However, the Error2 obtained by the SK2-PBiCGSTAB method is slightly smaller than that of Huang-Lei's method as $N$ grows large. In Table \ref{tab5}, the condition numbers of $W$, $P_{W}^{-1} W$, $A_0$, $P_{s}^{-1} A_0$ and $P_{sk}^{-1} A_0$ are listed to further illustrate the effectiveness of $P_{W}$ and $P_{sk}$. The table shows that both $P_{W}$ and $P_{sk}$ reduce the condition numbers greatly, and $P_{sk}$ performs better than $P_{s}$.
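Strang's construction $s(\cdot)$ simply copies the central diagonals of a Toeplitz matrix into a circulant. The following NumPy sketch of the generic construction (our own illustration, using a toy symmetric Toeplitz matrix rather than the actual WSGD matrix $G_\beta$) builds $s(T)$ and reports condition numbers before and after preconditioning:

```python
import numpy as np

def strang_first_col(t_col, t_row):
    """First column of Strang's circulant approximation s(T) of the
    Toeplitz matrix T with first column t_col and first row t_row."""
    n = len(t_col)
    return np.array([t_col[k] if k <= n // 2 else t_row[n - k]
                     for k in range(n)])

# Toy symmetric, diagonally dominant Toeplitz matrix (not G_beta).
n = 128
t = np.r_[2.0, 1.0 / (1.0 + np.arange(1, n)) ** 2]
T = np.array([[t[abs(j - k)] for k in range(n)] for j in range(n)])

c = strang_first_col(t, t)          # symmetric: first row == first column
P = np.array([[c[(j - k) % n] for k in range(n)] for j in range(n)])

# For Wiener-class symbols, P^{-1} T typically has eigenvalues
# clustered at 1, so the condition number usually drops sharply.
print(np.linalg.cond(T), np.linalg.cond(np.linalg.solve(P, T)))
```

In practice one would never form $P$ densely; its inverse is applied with FFTs, exactly as for the skew-circulant case.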
Meanwhile, it is also interesting to notice that the condition number of $P_{s}^{-1} A_0$ is even larger than that of $A_0$ when $(\alpha, \beta) = (0.1, 1.1)$ and $(0.4, 1.7)$. Furthermore, Fig. \ref{fig2} shows the eigenvalues of $W$ and $P_{W}^{-1} W$, when $M = N = 2^6$ and $(\alpha, \beta) = (0.1, 1.1), (0.7, 1.4)$. Fig. \ref{fig3} is plotted to further illustrate that $P_{sk}$ is slightly better than Strang's preconditioner $P_{s}$. \section{Concluding remarks} \label{sec5} The BLTT system (2.5) arising from TSFDE (\ref{eq1.1}) is studied. Firstly, the $L2$-$1_\sigma$ and WSGD formulae are adopted to discretize (\ref{eq1.1}). Secondly, in order to solve the obtained BLTT system (2.5) rapidly, two preconditioners (i.e., $P_W$ and $P_{sk}$) are proposed and analyzed, respectively. Finally, numerical experiments show that our proposed SK2 strategy is efficient for the fast solution of the BLTT system. Meanwhile, the numerical experiments also indicate that the performance of our skew-circulant preconditioner $P_{sk}$ is slightly better than Strang's circulant preconditioner $P_s$. Based on this research, we suggest three future research directions: (i) Notice that the preconditioner $P_W$ only compresses the temporal component. Hence, it is valuable to develop a preconditioner which compresses both the temporal and spatial components; (ii) $P_W$ is not suitable for parallel computing. Thus, it is interesting to design an efficient and parallelizable preconditioner; (iii) Some other applications of our new skew-circulant preconditioner are worth considering. \section*{Acknowledgments} \addcontentsline{toc}{section}{Acknowledgments} \label{sec6} \textit{The authors would like to thank Dr. Jiwei Zhang and Dr. Meng Li for helpful discussions. We would like to express our sincere thanks to the referees and our editor Prof. Michael Ng for insightful comments and invaluable suggestions that greatly improved the presentation of this paper.
We are also grateful to Dr. Siu-Long Lei for sharing with us the MATLAB codes of Ref. \cite{huang2017fast}. This research is supported by the National Natural Science Foundation of China (Nos. 61876203, 61772003 and 11801463) and the Fundamental Research Funds for the Central Universities (Nos. ZYGX2016J132 and JBK1809003). }
\section{The framework and one example} We deal here with an oriented graph whose vertices are the elements of $\mathbb{Z}^d$, and whose edges are the couples $(x,y)$ such that $y-x$ belongs to a given finite set denoted by $\text{Dir}$. Hence, if $E$ denotes the set of edges, one has $$E=\{(x,y)\in\mathbb{Z}^d\times\mathbb{Z}^d:\; y-x\in \text{Dir}\}.$$ For a given parameter $p \in(0,1)$, we endow the set $\Omega=\{0,1\}^E$ with the Bernoulli product $\P_p=\text{Ber}(p)^{\otimes E}$: under this probability measure, the edges are independently open (state $1$) with probability $p$ or closed (state $0$) with probability $1-p$, and we are interested in the connectivity properties of the random graph $G(\omega)$ whose edges are the ones that are open in $\omega$. For $x\in\mathbb{Z}^d$, we denote by $C_+(x)$ the set of points that can be reached from $x$ by a path in the random graph $G$, \emph{i.e.} the points $y$ such that there exists a sequence $(x_0,\dots,x_n)$ with $x_0=x$, $x_n=y$ and such that the edge $(x_i,x_{i+1})\in E$ is open for each $i\in\{0,\dots,n-1\}$. For $u\in\mathbb{R}^d \backslash \{0\}$, we define $$D_u(x)=\sup_{y\in C_+(x)} \langle y-x,u\rangle.$$ The field $(D_u(x))_{x\in\mathbb{Z}^d}$ is stationary and ergodic. We set $$\theta_u(p)=\P_p(D_u(0)=+\infty)\text{ and }p_c(u)=\inf\{p>0: \;\theta_u(p)>0\}.$$ The quantity $D_u(x)$ measures the extension of the oriented open cluster issued from $x$ in direction $u$ and $p_c(u)$ is the critical parameter for the existence of an oriented open cluster that is unbounded in direction $u$. \subsection*{An example} We take here $d=2$, we fix some positive integer $M$ and we choose \begin{align*} \text{Dir}=\{(0,-1);(-M,1),(-M+1,1),\dots,(-1,1),(0,1),(1,1),\dots,(M,1)\}.
\end{align*} In other words, the only allowed communications are the following: for all $x,x',y \in \mathbb{Z}^2$, \begin{itemize} \item $(x,y)\to (x',y+1)$ if $|x-x'|\le M$ \item $(x,y)\to (x,y-1)$ \end{itemize} Let us denote by $(e_1,e_2)$ the canonical basis for $\mathbb{R}^2$: with this set of edges, we give an advantage to direction $e_2$ when compared to direction $-e_2$. We first observe that for $M$ large enough, there exist values for the opening parameter $p$ such that there is percolation in direction $e_2$ but not in direction $-e_2$: \begin{theo} $\;$ \begin{itemize} \item For $M\ge 2$, $\displaystyle p_c(-e_2)\ge\frac1{2\sqrt{2M+1}}$. \item For $M\ge 5$, $\displaystyle p_c(e_2)\le 1-(1-\overrightarrow{p_c}(2))^{2/M}<\frac{-2\ln( 1-\overrightarrow{p_c}(2))}{M}\le \frac{2 \ln 3}M$, where $\overrightarrow{p_c}(2)$ denotes the critical value for classical oriented percolation on $\mathbb{Z}_{+}^2$. \end{itemize} In particular, for $M\ge 37$, $p_c(e_2)<p_c(-e_2).$ \end{theo} \begin{proof} For a fixed integer $\ell$, the graph $(\mathbb{Z}^2,E)$ contains exactly ${2\ell+n\choose \ell}(2M+1)^{\ell}$ paths from $(0,0)$ to $\mathbb{Z}\times\{-n\}$ that contain $\ell$ steps upwards and $\ell+n$ steps downwards. Then, the mean number of open self-avoiding paths from $(0,0)$ to the line $y=-n$ is no more than \begin{align*} &\sum_{\ell=0}^{+\infty} {2\ell+n\choose \ell} (2M+1)^{\ell} p^{2\ell+n} \le \sum_{\ell=0}^{+\infty}(2M+1)^{\ell} (2p)^{2\ell+n}= \frac{(2p)^n}{1-4p^2(2M+1)}, \end{align*} as soon as $4p^2(2M+1)<1$. It follows that for $p<\frac1{2\sqrt{2M+1}}$, the number of self-avoiding paths from $(0,0)$ to $\{(x,y)\in\mathbb{Z}^2:\; y\le 0\}$ is integrable, therefore it is almost surely finite. This gives the first inequality. For the second inequality, we build a dynamic independent directed percolation from block events with length $M/2$ that partition the horizontal lines. Remember that $M\ge 5$.
The probability that a given point $(x,y)$ in the segment $(\frac{M}2 \overline{x}+[-M/4,M/4))\times \{y\}$ can be linked to some point in $(\frac{M}2 (\overline{x}+1)+[-M/4,M/4))\times \{y+1\}$ is larger than $1-(1-p)^{M/2}$. So is the probability that one can link this point to some point in $(\frac{M}2 (\overline{x}-1)+[-M/4,M/4))\times \{y+1\}$. Hence, we obtain a dynamic percolation of blocks in the spirit of Grimmett and Marstrand~\cite{Grimmett-Marstrand} (see also Grimmett~\cite{grimmett-book}), that stochastically dominates an independent directed bond percolation on $\mathbb{Z}^2$, with parameter $1-(1-p)^{M/2}$. Then, percolation in direction $e_2$ is possible as soon as $1-(1-p)^{M/2}>\overrightarrow{p_c}(2)$, whence \begin{align*} p_c(e_2)&\le 1-\exp \left(\frac{2}M\ln(1-\overrightarrow{p_c}(2)) \right)<-\frac{2}M\ln(1-\overrightarrow{p_c}(2))\le \frac{2\log 3}M, \end{align*} where the last inequality comes from Liggett's bound~\cite{MR1359822}: $\overrightarrow{p_c}(2)\le 2/3$. The desired result follows. \end{proof} \begin{figure} \begin{tabular}{cc} \frame{\includegraphics[scale=0.05]{percoorienteeA-051.eps}}&\frame{\includegraphics[scale=0.05]{percoorienteeA-055.eps}}\\ \end{tabular} \caption{Oriented percolation with $M=1$, $p=0.51$ on the left and $p=0.55$ on the right. The pictures are centered at the origin. The points are colored according to their distance to the origin. The coloring is performed by the Dijkstra algorithm until one hits the border.} \end{figure} \section{A sharp percolation transition} We now come back to our general framework. Let $\Psi:\mathbb{Z}^d\to\mathbb{R}$ be a subadditive function, i.e. such that for any $x,y \in \mathbb{Z}^d$, $\Psi(x+y)\le\Psi(x)+\Psi(y)$. We define $$\forall x \in \mathbb{Z}^d \quad r_\Psi(x)=\sup_{y\in C_+(x)}\Psi(y-x).$$ The graph $(\mathbb{Z}^d,E)$ being translation-invariant, the distribution of $r_\Psi(x)$ does not depend on $x$.
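The path count used at the start of the proof of the theorem above can be checked by brute force for small parameters: a sequence of $2\ell+n$ moves ends on the line $y=-n$ exactly when it uses $\ell$ upward steps, each offering $2M+1$ horizontal offsets. A short Python check (our own illustration):

```python
from itertools import product
from math import comb

def count_paths_to_line(M, n, ell):
    """Brute-force count of move sequences of length 2*ell + n ending on
    the line y = -n; moves are (0,-1) or (dx,+1) with |dx| <= M."""
    moves = [(0, -1)] + [(dx, 1) for dx in range(-M, M + 1)]
    T = 2 * ell + n
    return sum(1 for seq in product(moves, repeat=T)
               if sum(dy for _, dy in seq) == -n)

M, n, ell = 2, 2, 2
print(count_paths_to_line(M, n, ell))               # 375
print(comb(2 * ell + n, ell) * (2 * M + 1) ** ell)  # 375
```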
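The asymmetry between directions $e_2$ and $-e_2$ quantified by the theorem can also be explored numerically. Below is a hedged Monte-Carlo sketch (our own illustration, not from the paper; the window truncation, parameters and trial counts are arbitrary choices, so the estimates are only indicative):

```python
import random

def reaches_depth(p, M, n, width=100, seed=None):
    """Depth-first search from (0,0): is there an open directed path to
    the line y = n inside the window |x| <= width, |y| <= n?  Allowed
    edges: (x,y)->(x+dx,y+1) with |dx| <= M, and (x,y)->(x,y-1); each is
    open independently with probability p."""
    rng = random.Random(seed)
    state = {}                      # lazily sampled edge states
    def is_open(edge):
        if edge not in state:
            state[edge] = rng.random() < p
        return state[edge]
    stack, seen = [(0, 0)], {(0, 0)}
    while stack:
        x, y = stack.pop()
        if y == n:
            return True
        for z in [(x + dx, y + 1) for dx in range(-M, M + 1)] + [(x, y - 1)]:
            if (abs(z[0]) <= width and -n <= z[1] <= n
                    and z not in seen and is_open(((x, y), z))):
                seen.add(z)
                stack.append(z)
    return False

# Crude estimates of the probability of reaching depth n upwards.
M, n, trials = 5, 20, 20
for p in (0.05, 0.5):
    hits = sum(reaches_depth(p, M, n, seed=i) for i in range(trials))
    print(p, hits / trials)
```

Replacing the upward target line by $y=-n$ (and enlarging the window) gives the analogous estimate in direction $-e_2$, where percolation requires a much larger $p$.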
If $A,B,S$ are subsets of $\mathbb{Z}^d$, the event $A\stackrel{S}{\rightarrow} B$ means that there exists a path $(x_0,\dots,x_n)$ with $x_0\in A$, $x_n\in B$, $x_i\in S$ for $i\in\{1,\dots,n-1\}$ and the bonds $(x_i,x_{i+1})$ are all open. For $p\in[0,1]$ and $0\in S\subset \mathbb{Z}^d$, we define \begin{align} \phi_p(S) & :=p \sum_{(x,y)\in \partial^+ S} \P_p (0\stackrel{S}{\rightarrow} x), \label{eq:16} \\ \tilde p_c(\Psi) &:= \sup\left\{ \begin{array}{c} p \in [0,1] : \; \text{there exists a set $S$ s.t. } 0\in S \subset \mathbb{Z}^d \\ \text{with } \phi_p(S)<1 \text{ and } \sup_{S}\Psi <+\infty \end{array}\right\}, \label{eq:17} \\ p_c(\Psi)&:=\sup\{p \in [0,1] : \; \P_p(r_\Psi(0)=\infty)=0\}\nonumber. \end{align} Note that in the above definition, the set $S$ may be infinite. Then, we have the following result: \begin{theo} \label{thm:perco} Fix $d\ge 2$. Let $\Psi:\mathbb{Z}^d \to \mathbb{R}$ be a subadditive function. \begin{enumerate} \item \label{item:1} For $p<\tilde p_c(\Psi)$, there exists $c=c(\Psi,p)>0$ such that for each $n\ge 1$, $$\P_p(r_\Psi(0)\ge n)\le e^{-c n}.$$ \item\label{item:2} For $p>\tilde p_c(\Psi)$, $ \displaystyle \P_p(r_\Psi(0)=+\infty)\ge \frac{p-\tilde p_c(\Psi)}{p(1-\tilde p_c(\Psi))}$. \end{enumerate} In particular, \eqref{item:1} and~\eqref{item:2} imply that $\tilde p_c(\Psi)=p_c(\Psi)$. \end{theo} Note that $ \Psi_u(x)=\langle u,x\rangle$ is linear and thus subadditive, and, for this map, $ p_c(\Psi_u)=p_c(u)$. \begin{proof} $\bullet$ First, let us prove that~\eqref{item:1} and~\eqref{item:2} imply $\tilde p_c(\Psi)=p_c(\Psi)$. If $p<\tilde{p}_c(\Psi)$, then for each $n\ge 1$, we have $\P_p(r_\Psi(0)=+\infty)\le \P_p(r_\Psi(0)\ge n)\le e^{-cn}$; letting $n$ go to infinity, we get $\P_p(r_\Psi(0)=+\infty)=0$. So $\tilde{p}_c(\Psi)\le p_c(\Psi)$. But~\eqref{item:2} implies that $\P_p(r_\Psi(0)=+\infty)>0$ for $p> \tilde{p}_c(\Psi)$, thus $\tilde p_c(\Psi)\ge p_c(\Psi)$.
\medskip $\bullet$ Proof of~\eqref{item:1}: it is very similar to Duminil-Copin--Tassion~\cite{MR3477351,MR3605816,MR3783562}. Since it is short, we include it to keep the paper self-contained. Let $p<\tilde{p}_c(\Psi)$. By the very definition of $\tilde{p}_c(\Psi)$, we can find $S \subset \mathbb{Z}^d$ that contains the origin and such that $\phi_p(S)<1$ and $\sup_S{\Psi}<+\infty$. Fix a positive integer $L\ge \sup_{S \cup \text{Dir}} {\Psi}$. We set $$\Lambda_n=\{x\in\mathbb{Z}^d: \; \Psi(x)\le n\}.$$ Thus, $\{r_\Psi(0)>n\}=\{0\to\Lambda_n^c\}$. For $k \ge 1$, an open path starting from $0$ and escaping from $\Lambda_{kL}$ eventually leaves $S$. Then, \begin{align*} \{0\to \Lambda_{2kL}^c\}=\miniop{}{\cup}{(x,y)\in\partial^+ S}\{0\stackrel{S}{\rightarrow} x,\omega_{(x,y)}=1,y \stackrel{S^c}{\rightarrow}\Lambda_{2kL}^c\} \end{align*} By independence, we get \begin{align*} \P_p(r_\Psi(0)>2kL) &\le \miniop{}{\sum}{(x,y)\in\partial^+ S} \P_p(0\stackrel{S}{\rightarrow} x) \, p\, \P_p(y \stackrel{S^c}{\rightarrow}\Lambda_{2kL}^c). \end{align*} Note that \begin{itemize} \item If $(x,y)\in\partial^+ S$, then $\Psi(y)\le \Psi(x)+\max_{\text{Dir}} \Psi \le 2L$ \item $\{ y \stackrel{S^c}{\rightarrow}\Lambda_{2kL}^c\} \subset \{\exists z \in C_+(y): \; \Psi(z)>2kL \}$; \item thus if $(x,y)\in\partial^+ S$ and $z \in C_+(y)$ is such that $\Psi(z)>2kL$, then $$\Psi(z-y) \ge \Psi(z)-\Psi(y) >2kL-2L =2(k-1)L.$$ \end{itemize} We thus obtain \begin{align*} \P_p(r_\Psi(0)>2kL) &\le \miniop{}{\sum}{(x,y)\in\partial^+ S}\P_p(0\stackrel{S}{\rightarrow} x) \, p \, \P_p(r_\Psi(y)> 2(k-1)L)\\ &\le\phi_p(S)\P_p(r_\Psi(0)> 2(k-1)L) \end{align*} It follows that $\P_p(r_\Psi(0)>2kL)\le\phi_p(S)^k$, which gives the desired result. \end{proof} \medskip $\bullet$ Proof of~\eqref{item:2}. In Duminil-Copin--Tassion, the idea is to use the Russo inequality.
It is a bit more tricky here, because the events $\{0\leftrightarrow\partial\Lambda_n\}$, which correspond to the exit of finite boxes in Duminil-Copin--Tassion, now depend on infinitely many bonds. The proof is cut into three lemmas. We begin with a lemma on a general graph. \begin{lemme}\label{conditionne} Let $G=(V,E)$ be an oriented graph with $V$ finite or denumerable. Let $\P$ denote a Bernoulli product measure on $\{0,1\}^E$. Let $X$ and $Y$ be disjoint subsets of $V$, with $\P(X\to Y)>0$. For each $S\subset V$, and each $(x,y)\in E$, we set \begin{align*} r_X^{(x,y)}(S) &=1\hspace{-1.3mm}1_{X \subset S} 1\hspace{-1.3mm}1_{(x,y)\in\partial^+ S}\P(X\stackrel{S}{\rightarrow} x). \end{align*} We denote by $\mathcal{T}_Y$ the $\sigma$-field generated by the events $\{x\to Y\}$, for $x\in V$. We denote by $B_Y$ the random subset of $V$ consisting of the points from which $Y$ cannot be reached. Then, for any $e \in E$, $$\P(e\text{ pivotal for }X\to Y, X\not\to Y \; | \; \mathcal{T}_Y)=r_X^e(B_Y).$$ \end{lemme} Remember that $e$ is said to be pivotal for an event $A\in\mathcal{B}(\{0,1\}^E)$ in the configuration $\omega\in \{0,1\}^E$ if $1\hspace{-1.3mm}1_A(0_e\omega_{E\backslash\{e\}})\ne 1\hspace{-1.3mm}1_A(1_e\omega_{E\backslash\{e\}})$. \begin{proof}[Proof of Lemma \ref{conditionne}] Let us denote by $\Gamma$ the set of oriented paths from a point in $Y^c$ to a point in $Y$ whose vertices, except the final one, lie in $Y^c$. Then the subsets $\cap_{\gamma \in A}\cap_{e \in\gamma}\{\omega_e=1\}$, for $A \subset \Gamma$, form a $\pi$-system that generates $\mathcal{T}_Y$, so it is enough to prove that for each $A \subset \Gamma$, one has \begin{align} \label{amtr} \P \left( \begin{array}{c} e\text{ pivotal for }X\to Y, X\not\to Y, \\ \forall \gamma\in A,\, \forall f \in \gamma, \, \omega_f=1 \end{array} \right) &=\mathbb{E}\left( r_X^e(B_Y)\prod_{\gamma\in A}\prod_{f\in \gamma}\omega_f \right).
\end{align} The quantities that appear on each side of~\eqref{amtr} are the limits of analogous quantities for a sequence of finite subgraphs of $G$. So, by dominated convergence, it is sufficient to prove~\eqref{amtr} for a finite graph. From now on, we assume that $G$ is finite. Decomposing on the (finite number of) possible values of $B_Y$, we thus only have to prove that for any subset $S$ of vertices such that $X \subset S \subset Y^c$, \begin{align*} & \P \left( \begin{array}{c} e\text{ pivotal for }X\to Y, \; B_Y=S \\ \forall \gamma\in A,\, \forall f \in \gamma, \, \omega_f=1 \end{array} \right) =\mathbb{E}\left( r_X^e(S) 1\hspace{-1.3mm}1_{\{B_Y=S\}}\prod_{\gamma\in A}\prod_{f\in \gamma}\omega_f \right). \end{align*} Fix a set $S$ such that $X \subset S \subset Y^c$. Let us set \begin{align*} E_1 & = \{ (x,y) \in E: \; x,y \in S\}, \\ E_2 & =\partial^+ S = \{ (x,y) \in E: \; x \in S, \, y \in S^c\}, \\ E_3 & = \{ (x,y) \in E \backslash (E_1 \cup E_2): \;\exists (u,v) \in \partial^+ S, \, v \stackrel{S^c}{\rightarrow} x \text{ and } y \stackrel{S^c}{\rightarrow}Y\}. \end{align*} Note that on the event $B_Y=S$, as $X \subset S$, pivotal edges for $X \to Y$ are necessarily in $E_2$, and that when $e \notin E_2$, both sides vanish. The event $B_Y=S$ is measurable with respect to the states of the edges in $E_2 \cup E_3$, and implies that all edges in $E_2$ are closed. Thus both sides also vanish if some path of $A$ contains an edge outside $E_3$. Denote by $A_3$ the set of possible configurations of the edges in $E_3$ that correspond to $B_Y=S$. Finally, we thus have to prove that for any $S$ such that $X \subset S \subset Y^c$, for any $e=(x,y) \in E_2$, for any $\xi \in A_3$, \begin{align*} & \P \left( \begin{array}{c} e\text{ pivotal for }X\to Y, \\ \forall f\in E_3,\omega_f=\xi_f, \; B_Y=S \end{array} \right) =\mathbb{E}\left( r_X^e(S) 1\hspace{-1.3mm}1_{\{B_Y=S\}} \prod_{f\in E_3}1\hspace{-1.3mm}1_{\{\omega_f=\xi_f\}} \right).
\end{align*} But now, by independence, \begin{align*} & \P \left( \begin{array}{c} e\text{ pivotal for }X\to Y, \\ \forall f\in E_3,\omega_f=\xi_f, \; B_Y=S \end{array} \right) = \P \left( \begin{array}{c} X\stackrel{S}{\rightarrow} x , \, \forall f \in E_2, \, \omega_f=0 \\ \forall f\in E_3,\omega_f=\xi_f \end{array} \right) \\ = & \P(X\stackrel{S}{\rightarrow} x) \P(\forall f \in E_2, \, \omega_f=0, \, \forall f\in E_3,\omega_f=\xi_f) \\ = & \P(X\stackrel{S}{\rightarrow} x) \P(B_Y=S, \, \forall f\in E_3,\omega_f=\xi_f), \end{align*} which is indeed the mean value of $ r_X^e(S) 1\hspace{-1.3mm}1_{\{B_Y=S\}} \prod_{f\in E_3}1\hspace{-1.3mm}1_{\{\omega_f=\xi_f\}}$. \end{proof} We come back to the case of a graph on $\mathbb{Z}^d$. \begin{lemme}\label{proba} Let $p\in [0,1]$. For every natural number $n$, we set $f_n(p)=\P_p(0 \to \Lambda_n^c)$ and $c_n=\inf_{S\subset\Lambda_n,0\in S}\phi_p(S)$. Then, for each $p\in[0,1[$, $$\liminf_{h\to 0^+}\frac{f_n(p+h)-f_n(p)}h\ge\frac1{p(1-p)} c_n (1-f_n(p)).$$ \end{lemme} \begin{proof}[Proof of Lemma~\ref{proba}] The event $\{0\to \Lambda^c_n\}$ depends on infinitely many bonds, so one cannot directly apply the Russo formula. However, since $\{0\to \Lambda_n^c\}$ is an increasing event, the following inequality still holds (see for example Grimmett~\cite{grimmett-book}, page 43): \begin{align*} \liminf_{h\to 0^+}\frac{f_n(p+h)-f_n(p)}h&\ge \sum_{e\in E}\P_p(e\text{ is pivotal for } 0 \to\Lambda^c_n ) \\ &=\sum_{e\in E}\frac1{1-p}\P_p(e\text{ is pivotal for } 0 \to\Lambda^c_n,0\not \to\Lambda^c_n). \end{align*} Now consider the random set $S_n$ of points from which $\Lambda_n^c$ cannot be reached. Note that $\{0\not\to\Lambda_n^c\}=\{0\in S_n\}$. For each $S\subset\mathbb{Z}^d$ and $(x,y)\in E$, we define the random variable \begin{align*}r^{(x,y)}_p(S)&=1\hspace{-1.3mm}1_{(x,y)\in\partial^+ S}\P_p(0\stackrel{S}{\rightarrow} x).
\end{align*} Integrating the result of Lemma~\ref{conditionne}, we have for each $e\in E$: $$\P_p(e\text{ is pivotal for } 0 \to\Lambda^c_n, 0\not \to\Lambda^c_n)=\mathbb E_p\left( 1\hspace{-1.3mm}1_{0 \in S_n} r^e_p(S_n)\right).$$ Then, we get \begin{align*} \sum_{e\in E}\mathbb E_p\left( 1\hspace{-1.3mm}1_{0 \in S_n} r^e_p(S_n)\right)&=\mathbb E_p \left( 1\hspace{-1.3mm}1_{ \{0\not \to\Lambda^c_n \} } \sum_{e\in E}r^e_p(S_n) \right) = \mathbb E_p \left( 1\hspace{-1.3mm}1_{ \{0\not \to\Lambda^c_n \} } \frac{\phi_p(S_n)}p \right) \\ &\ge \mathbb E_p\left( 1\hspace{-1.3mm}1_{ \{0\not \to\Lambda^c_n \} } \frac{c_n}p\right)=c_n\frac{1-f_n(p)}p, \end{align*} which gives the desired inequality. \end{proof} \begin{lemme} \label{analyse} Let $I\subset\mathbb{R}$ be an open interval of $\mathbb{R}$ and let $f$ and $h$ be real-valued functions defined on $I$ such that \begin{itemize} \item $f$ is upper semi-continuous from the left on $I$: $\forall x \in I, \; f(x)\ge\liminf_{t\to x^-}f(t)$; \item $h$ is continuous on $I$; \item for each $x\in I$, $$\liminf_{t\to 0^+}\frac{f(x+t)-f(x)}t\ge h(x).$$ \end{itemize} Then, for any $a$ and $b$ in $I$ with $a\le b$, we have $\displaystyle f(b)-f(a)\ge\int_a^b h(x)\ dx.$ \end{lemme} \begin{proof}[Proof of Lemma \ref{analyse}] Let $a,b\in I$ with $a<b$. We fix $\epsilon>0$ and define on $[a,b]$: $F_{\epsilon}(x)=f(x)-\int_a^x h(t)\ dt +\epsilon x$. It is sufficient to prove that $F_{\epsilon}$ is non-decreasing for each $\epsilon>0$. Indeed, it will imply that $$f(b)-\int_a^b h(t)\ dt +\epsilon b=F_{\epsilon}(b)\ge F_{\epsilon}(a)=f(a)+\epsilon a,$$ which gives the lemma when $\epsilon$ tends to $0$. Let $x\in [a,b]$. By definition of $F_\epsilon$, $$\liminf_{t\to 0^+} \frac{F_{\epsilon}(x+t)-F_{\epsilon}(x)}t=\liminf_{t\to 0^+}\frac{f(x+t)-f(x)}t-h(x)+\epsilon\ge \epsilon.$$ So there exists $\eta_x>0$ such that for any $t \in (0,\eta_x)$, $\frac{F_{\epsilon}(x+t)-F_{\epsilon}(x)}t\ge \epsilon/2\ge 0$.
Let $B=\{x\in [a,b]: \; F_{\epsilon}(x)<F_{\epsilon}(a)\}$. Assume by contradiction that $B\ne\varnothing$ and define $c=\inf B$. Let $(x_n)$ be a sequence in $B$ that tends to $c$. By the previous observation, the inequality $F_{\epsilon}(x_n)\ge F_{\epsilon}(c)$ holds for $n$ large enough. Since $x_n\in B$, by definition of $B$, $F_\epsilon(a)>F_{\epsilon}(x_n)$. Thus $F_\epsilon(a)>F_{\epsilon}(c)$. As $F_{\epsilon}$ is the sum of a function which is upper semi-continuous from the left and of a continuous function, it is still upper semi-continuous from the left. So $$F_{\epsilon}(c)\ge\liminf_{t\to c^-} F_{\epsilon}(t),$$ and by definition of $c$, $F_{\epsilon}(t)\ge F_{\epsilon}(a)$ for each $t\in ]a,c[$, so $F_{\epsilon}(c)\ge F_{\epsilon}(a)$. This yields a contradiction. \end{proof} \begin{proof}[End of the proof of Theorem \ref{thm:perco}: proof of~\eqref{item:2}] Fix $p'\in ]\tilde{p}_c(\Psi),1[$ and define on $[0,1)$ the function $g(x)=-\log(1-x)$: it is non-decreasing and convex. Let $p\in[p', 1)$ and $h\in (0, 1-p)$: $$\frac{g(f_n(p+h))-g(f_n(p))}{f_n(p+h)-f_n(p)}\frac{f_n(p+h)-f_n(p)}{h}\ge g'(f_n(p)) \frac{f_n(p+h)-f_n(p)}{h}.$$ With Lemma \ref{proba} (note that as $p>\tilde{p}_c(\Psi)$, $c_n\ge 1$), we obtain that $$\liminf_{h\to 0^+}\frac{g(f_n(p+h))-g(f_n(p))}h\ge\frac{c_n}{p(1-p)}\ge\frac{1}{p(1-p)}.$$ We can now apply Lemma~\ref{analyse} on $[p',1[$: as $f_n$ and $g$ are non-decreasing, $g\circ f_n$ is non-decreasing, so it is clearly upper semi-continuous from the left: for any $p>p'$, $$g(f_n(p))\ge g(f_n(p))-g(f_n(p'))\ge\int_{p'}^{p} \frac{dx}{x(1-x)}=\log\frac{p(1-p')}{p'(1-p)}=g\left(\frac{p-p'}{p(1-p')}\right).$$ It follows that $f_n(p)\ge \frac{p-p'}{p(1-p')}$; then, letting $p'$ tend to $\tilde{p}_c(\Psi)$, we get $$f_n(p)\ge \frac{p-\tilde{p}_c(\Psi)}{p(1-\tilde{p}_c(\Psi))}.$$ Finally, we obtain~\eqref{item:2} by letting $n$ go to infinity.
\end{proof} \section{Links with first-passage percolation} \subsection{Percolation and first-passage percolation on the (unoriented) edges of~$\mathbb{Z}^d$} Consider first $\mathbb{Z}^d$ endowed with the set $E_d$ of edges between nearest neighbors. In the first-passage percolation model, i.i.d.\ non-negative and integrable random variables $(t_e)_{e \in E_d}$ are associated to the edges. Let us denote by $\nu$ their common law. We refer the reader to the recent review paper on first-passage percolation by Damron et al.~\cite{MR3729447}. For each path $\gamma$ in the graph $(\mathbb{Z}^d,E_d)$, we define \begin{align} \label{DEF:t} t(\gamma)=\sum_{e\in\gamma} t_e, \quad \text{and} \quad \forall x,y \in \mathbb{Z}^d, \; t(x,y)=\inf_{\gamma:x \to y} t(\gamma), \end{align} which can be seen as a random pseudo-distance on $\mathbb{Z}^d$. Kingman's subadditive ergodic theorem allows one to define \begin{align} \label{DEF:mu} \forall x \in \mathbb{Z}^d \quad \mu_{\nu}(x)&=\lim_{n\to +\infty} \frac{t(0,nx)}n, \end{align} where the limit holds almost surely and in $L^1$. The functional $\mu_{\nu}$ is homogeneous and subadditive, and can be extended to a symmetric semi-norm on $\mathbb{R}^d$. With some extra integrability assumption, we obtain the analytic form of the asymptotic shape theorem: \begin{align} \label{PROP:FA} \lim_{\|x\|\to +\infty} \frac{t(0,x)-\mu_{\nu}(x)}{\|x\|}&=0\quad \P\text{ a.s.} \end{align} The subadditivity and the symmetries of the lattice imply quite simply that $\mu_{\nu}$ is a norm if and only if $\mu_{\nu}((1,0,\dots,0))>0$. Moreover, it has long been known (see for example Cox--Durrett~\cite{MR624685} or Kesten~\cite{kesten}) that $\mu_{\nu}$ is a norm if and only if $\nu(0)<p_c(\mathbb{Z}^d)$, where $p_c(\mathbb{Z}^d)$ is the critical percolation parameter for independent percolation on the edges of $\mathbb{Z}^d$.
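Since the passage time $t(x,y)$ of \eqref{DEF:t} is a shortest-path distance for the edge weights $(t_e)$, it can be computed on a finite box with Dijkstra's algorithm. The sketch below is only an illustration: the exponential law for the passage times and the box size are arbitrary assumptions, and the printed ratio is merely a finite-volume proxy for $\mu_\nu((1,0))$.

```python
import heapq
import random

def passage_times(n, seed=0):
    """t(0, x) for all x in an n-by-n box of Z^2, i.i.d. Exp(1) edge times."""
    rng = random.Random(seed)
    w = {}
    for i in range(n):
        for j in range(n):
            for a, b in ((i + 1, j), (i, j + 1)):
                if a < n and b < n:
                    t = rng.expovariate(1.0)
                    w[((i, j), (a, b))] = t  # unoriented edge: the same time
                    w[((a, b), (i, j))] = t  # is stored in both directions
    dist = {(i, j): float("inf") for i in range(n) for j in range(n)}
    dist[(0, 0)] = 0.0
    heap = [(0.0, (0, 0))]
    while heap:                              # Dijkstra from the origin
        d, x = heapq.heappop(heap)
        if d > dist[x]:
            continue
        i, j = x
        for y in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if (x, y) in w and d + w[(x, y)] < dist[y]:
                dist[y] = d + w[(x, y)]
                heapq.heappush(heap, (dist[y], y))
    return dist, w

dist, w = passage_times(30)
print(dist[(20, 0)] / 20)  # finite-volume proxy for mu_nu((1, 0))
```

In this unoriented toy box every site is reached, so all passage times are finite; the oriented analogue of the next subsection would restrict the neighbor loop to the directions of $\text{Dir}$.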
Our idea here is to find, in oriented percolation on $(\mathbb{Z}^d,E)$, an analogous characterization of the directions of percolation in terms of the semi-norm of an associated oriented first-passage percolation on $(\mathbb{Z}^d,E)$. Things are necessarily more intricate, since we saw that for oriented percolation the critical probability may depend on the direction. \subsection{Oriented percolation and first-passage percolation on $(\mathbb{Z}^d,E)$} We suppose that to each oriented bond $e\in E$ is associated a random variable $t_e$, the $(t_e)$'s being i.i.d.\ integrable non-negative random variables with common distribution $\nu$; we denote by $p$ the probability $p=\P(t_e=0)=\nu(0)$. In this section, we assume that the semi-group generated by $\text{Dir}$ is the whole set $\mathbb{Z}^d$. Then, the graph $(\mathbb{Z}^d,E)$ is transitive. As in the classical setting, we can define the passage time of an oriented path as in \eqref{DEF:t} and use Kingman's subadditive ergodic theorem to define the associated functional $\mu_{\nu}$ as in \eqref{DEF:mu}, which is now positively homogeneous and subadditive but not necessarily symmetric. By subadditivity, $$\forall x,y \in \mathbb{Z}^d \quad |\mu_{\nu}(x+y)-\mu_{\nu}(x) | \le \|y\|_1 \max\{\mu_{\nu}(\epsilon e_i): \; 1 \le i \le d, \; \epsilon \in\{-1,1\} \}.$$ Thus $\mu_{\nu}$ can be extended in the usual way to a non-symmetric semi-norm on $\mathbb{R}^d$. Finally, we get, under some extra integrability assumption, the analytic form of the asymptotic shape theorem as in \eqref{PROP:FA}. Our hope is to characterize the directions of percolation in $(\mathbb{Z}^d,E)$ when edges are open with probability $p$, i.e. the $u\in \mathbb{R}^d$ such that $$\displaystyle D_u(0)=\sup_{y\in C_+(0)} \langle y,u\rangle=+\infty$$ with the help of the semi-norm $\mu_{\nu}$ for some law $\nu$ of the passage times of the edges.
Since the only relevant parameter here is $\nu(0)=p$, we take from now on $$\nu_p=p\delta_0+(1-p)\delta_1; $$ we denote by $\mu_p$ the associated semi-norm on $\mathbb{R}^d$ and we set $$A_{p}=\{x\in\mathbb{R}^d: \; \mu_{p}(x)\le 1\},$$ which is a closed and convex set, but not necessarily bounded. We thus need some basics from the theory of convex sets. \subsection{Convex sets} As $A_p$ is closed and convex, we can associate to $A_{p}$ two non-empty closed convex cones: $\bullet$ The recession cone\footnote{sometimes called the characteristic cone or the asymptotic cone} of $A_{p}$ is $$\displaystyle 0^+(A_{p})=\{u\in\mathbb{R}^d: \; A_{p}+\mathbb{R}_+u\subset A_{p}\}= \{x\in\mathbb{R}^d:\; \mu_{p}(x)=0\}.$$ $\bullet$ The barrier cone of $A_{p}$ is $$\displaystyle \text{Bar}(A_{p})=\{u\in\mathbb{R}^d: \; \miniop{}{\sup}{x\in A_{p}} \langle x,u\rangle <+\infty\}=\{x\in\mathbb{R}^d:\; b_p(x)>0\},$$ where $b_p(u)=\inf\{\mu_{p}(x):\;x \in \mathbb{R}^d \text{ such that } \langle u,x\rangle=1\} $. The polar cone of a closed non-empty convex cone $C$ is defined by $$C^{\circ}=\{u\in\mathbb{R}^d:\; \forall x\in C\quad \langle x,u\rangle\le 0\}.$$ The map $C\mapsto C^{\circ}$ is an involution on the set of closed non-empty convex cones. Note also that $C\cap C^{\circ}=\{0\}$. Here, $0^+ (A_{p})$ is the polar cone associated to $\text{Bar}(A_{p})$ (see Rockafellar~\cite{MR0274683}, Corollary 14.2.1, p.~123). In other words, characterizing the directions $x\in\mathbb{R}^d$ such that $\mu_{p}(x)=0$ is equivalent to characterizing the directions $y\in\mathbb{R}^d$ such that $b_p(y)>0$. \subsection{Results} Let us define, for $p\in[0,1]$, \begin{align*} \mathrm{BG}(p)=\left\{u \in \mathbb{R}^d: \;\P_p \left( \sup_{y\in C_+(0)} \langle y,u\rangle=+\infty \right)=0\right\}. \end{align*} Note that $\mathrm{BG}(p)$ is non-increasing in $p$. The set $\mathrm{BG}(p)$ collects the directions in which the growth of the cluster issued from $0$ is bounded.
It is thus natural to make the following conjecture: \begin{conjecture} $\forall p\in [0,1] \quad \mathrm{Bar}(A_{p})= \mathrm{BG}(p).$ \end{conjecture} For the moment, we have only managed to prove the following result: \begin{theo} For every $p \in [0,1]$, $$\mathrm{int}(\mathrm{Bar}(A_{p}))\subset \mathrm{BG}(p) \qquad \text{and} \qquad \cup_{q>p} \mathrm{BG}(q) \subset \mathrm{Bar}(A_p).$$ \end{theo} This result will be a direct consequence of Corollaries \ref{coco1} and \ref{coco2}. \medskip As in the classical setting, we can describe the asymptotic behavior of the point-to-hyperplane passage times with $\mu_p$. For $u \in\mathbb{R}^d\backslash\{0\}$ and $n \ge 0$, set \begin{align*} H_n(u)&=\{x\in\mathbb{R}^d:\;\langle x, u\rangle\ge n\} \quad \text{and} \quad t(0,H_n(u))=\inf_{x \in H_n(u)} t(0,x). \end{align*} \begin{theo} \label{shapedir2} For each $u \not\in \mathrm{fr}(\mathrm{Bar}(A_{p}))$, we have the almost sure convergence: $$ \lim_{n \to +\infty} \frac{t(0,H_n(u))}{n}= b_p(u) .$$ \end{theo} \begin{proof} As in the unoriented case, it will follow from the analytic form of the shape theorem. However, the existence of directions in which $\mu_p$ vanishes requires some attention. $\bullet$ Let $L>b_p(u)$. There exists $x \in \mathbb{R}^d$ with $\langle u,x\rangle=1$ and $\mu_{p}(x)\le L$. \\ For $n \ge 1$, denote by $x_n$ a vertex in $H_n(u)$ closest to $nx$. \\ Then $\mu_p(x_n) \le n\mu_p(x) +O(1)$.\\ Since $t(0,H_n(u))\le t(0,x_n)$, we have $\displaystyle \limsup \frac{t(0,H_n(u))}{n}\le\limsup \frac{ \mu_{p}(x_n)}n \le \mu_p(x)\le L$. \\ Letting $L$ go to $b_p(u)$, we obtain that $\displaystyle \limsup \frac{t(0,H_n(u))}{n}\le b_p(u)$. $\bullet$ If $u \not \in \mathrm{Bar}(A_{p})$, then $b_p(u)=0$ and the desired convergence is clear. $\bullet$ If $u \in \mathrm{int}(\mathrm{Bar}(A_{p}))$, there exists $\epsilon>0$ such that the open ball centered at $u$ with radius $\epsilon$ is included in $\mathrm{Bar}(A_{p})$; moreover, $b_p(u)>0$.
By contradiction, assume that there exists $\ell \in(0,b_p(u))$ such that $$\miniop{}{\liminf}{n\to +\infty} \frac{t(0,H_n(u))}{n}\le\ell< b_p(u).$$ Then, one can build an increasing sequence of integers $(n_k)$ and sites $(x_k)$ such that $t(0,x_k)\le\ell n_k$ and $\langle u,x_k\rangle = n_k+O(1)$. By a compactness argument, we can assume that $\frac{x_k}{\|x_k\|}\to x$. Then, $\frac{n_k}{\|x_k\|}=\langle \frac{x_k}{\|x_k\|},u\rangle +O(1/ \|x_k\|)\to \langle x,u\rangle$. By the asymptotic shape theorem, $\frac{t(0,x_k)}{\|x_k\|}$ tends to $\mu_{p}(x)$, and we get the inequality $$\mu_{p}(x)\le\ell {\langle u,x\rangle}.$$ Assume that ${\langle u,x\rangle}=0$; then $\mu_{p}(x)=0$, so $x\in 0^+(A_{p})$. But $\mathrm{Bar}(A_{p})$ is the polar cone of $ 0^+(A_{p})$: by definition of $\epsilon$, $u+\epsilon x/2 \in \mathrm{Bar}(A_{p})$, so $0 \ge \langle u+\epsilon x/2,x\rangle = \epsilon/2$, which is a contradiction. \\ So assume that ${\langle u,x\rangle}\ne 0$: we can define $\tilde x=\frac{x}{\langle u,x\rangle}$ and then $\langle u,\tilde{x}\rangle=1$ and $\mu_{p}(\tilde x)\le\ell$, which contradicts the definition of $b_p(u)$. \end{proof} \begin{coro} \label{coco1} $\mathrm{int}(\mathrm{Bar}(A_{p}))\subset \mathrm{BG}(p).$ \end{coro} \begin{proof} Assume that $u\not\in\mathrm{BG}(p)$. Then, $\theta_u(p)>0$. On the event $$\displaystyle \sup_{y\in C_+(0)} \langle y,u\rangle=+\infty,$$ for each $n\ge 1$, one can find $x_n\in C_+(0)$ with $\langle x_n,u\rangle\ge n$. Then, $x_n\in H_n(u)$ and $t(0,H_n(u))=0$. We then apply Theorem~\ref{shapedir2}: if $u$ were in $\mathrm{int}(\mathrm{Bar}(A_{p}))$, the ratio $t(0,H_n(u))/n$ would converge almost surely to $b_p(u)>0$, which is contradicted on this event of positive probability; hence $u\not\in\mathrm{int}(\mathrm{Bar}(A_{p}))$. \end{proof} \begin{theo} \label{exposoupc} Let $p<p_c(u)$. There exist constants $c,\alpha>0$ such that $$\forall n \ge 0 \quad \P_p(t(0,H_n(u))\le cn)\le e^{-\alpha n}.$$ \end{theo} \begin{proof} Note that $ \Psi_u(x)=\langle u,x\rangle$ is linear and thus subadditive, and, for this map, $ p_c(\Psi_u)=p_c(u)$.
Thus, since $p<p_c(u)=\tilde p_c(\Psi_u)$ by Theorem~\ref{thm:perco}, we can find $S\subset\mathbb{Z}^d$ containing the origin such that $\phi_p(S)<1$ and $\sup_{x \in S}\langle u,x\rangle<+\infty$. For $(x,y)\in\partial^+S$, let us define $$t_S(x,y)=t_{(x,y)}+ \inf_{\gamma:0 \to x, \gamma \subset S} t(\gamma).$$ By monotone convergence, \begin{align*} & \lim_{\alpha\to +\infty}\mathbb{E}_p\left(\sum_{(x,y)\in\partial^+S} e^{-\alpha t_S(x,y)}\right) = \mathbb{E}_p \left(\sum_{(x,y)\in\partial^+S} \mathbf{1}_{ \{t_S(x,y)=0 \}} \right) \\ = & \sum_{(x,y)\in\partial^+S} \P_p (t_S(x,y)=0) = \sum_{(x,y)\in\partial^+S} \P_p (0 \stackrel{S}{\rightarrow} x, \; \omega_{(x,y)}=1)=\phi_p(S), \end{align*} so we can find $\alpha>0$ such that \begin{align*} K_{S,\alpha}&=\miniop{}{\sum}{(x,y)\in\partial^+S} \mathbb{E}[e^{-\alpha t_S(x,y)}]<1. \end{align*} Let us assume that $t(0,H_n(u))\le cn$ and consider a path $\gamma=(0,\dots,f)$ linking $0$ to $H_n(u)$, whose travel time does not exceed $cn$. We can assume without loss of generality that $\gamma$ has no loop. Then, we set $y_0=0$ and we build a finite sequence $e_1=(x_1,y_1)$, \dots, $e_q=(x_q,y_q)$ of edges such that $e_1$ is the first edge in $\gamma$ that does not lie within $S$, $e_2$ is the first edge in $\gamma$ after $e_1$ which does not lie within $y_1+S$,\dots. Once we arrive at the end of the path $\gamma$, we stop and define $x_{q+1}$ as the last vertex of the path $\gamma$. The travel time of $\gamma$ is at least $t^*_S(0, e_1,e_2, \dots, e_q, x_{q+1 })$, which is the infimum of the travel times of paths starting in $0$, ending in $x_{q+1}$, passing in this order through the edges $e_1,\dots,e_q$, using no edge twice, and remaining in $y_i+S$ between $y_i$ and $x_{i+1}$. Note that $\displaystyle n \le \langle x_{q+1}, u \rangle = \sum_{i=0}^{q}\langle y_{i+1}-y_i, u \rangle \le (q+1) M_u$, where we set $y_{q+1}=x_{q+1}$ and $\displaystyle M_u=\max_{(x,y)\in\partial^+S}\{ \langle y, u \rangle \}>0$, so that $q \ge \frac{n}{M_u}-1$.
Then, for a finite sequence $e=(e_i)_{1 \le i \le q}\in (\partial^+ S)^{q}$, we define the translated edges $\tilde e_i=(x_i,y_i)$ by $\tilde e_1=e_1$ and, recursively, $\tilde e_{i+1}=y_i+e_{i+1}$ (both endpoints of $e_{i+1}$ being translated by $y_i$). Then $$\{t(0,H_n(u))\le cn\}\subset\miniop{}{\cup}{q\ge\frac{n}{M_u}-1}\miniop{}{\cup}{e\in (\partial^+ S)^{q}} \{ t^*_S(0,\tilde e_1,\dots, \tilde e_q, y_q)\le cn\}. $$ An extension of the van den Berg--Kesten inequality for the disjoint occurrence of increasing events established by Alexander (Theorem 2.3 in ~\cite{MR1202516}) ensures that $t^*_S(0,\tilde e_1,\dots, \tilde e_q, y_q)$ stochastically dominates the sum of independent copies of $t_S(e_1)$, $t_S(e_2)$, \dots, $t_S(e_q)$, so the Markov inequality leads to \begin{align*} \P_p(t(0,H_n(u))\le cn)&\le \miniop{}{\sum}{q \ge \frac{n}{M_u}-1}\miniop{}{\sum}{e\in (\partial^+ S)^{q}} e^{\alpha cn} \prod_{i=1}^{q}\mathbb{E}[e^{-\alpha t_S(e_i)}]\\ &\le e^{\alpha cn} \miniop{}{\sum}{q \ge \frac{n}{M_u}-1} K_{S,\alpha}^{q}\\ & \le C \left(e^{\alpha c} K_{S,\alpha}^{1/M_u}\right)^n, \end{align*} where $C$ only depends on $S$ and $\alpha$. Hence, for $c<\frac{\log(1/ K_{S,\alpha})}{\alpha M_u}$, we get the desired exponential decay. \end{proof} \begin{coro} \label{coco2} \label{hb} For each $u\in\mathbb{R}^d\backslash\{0\}$, $p<p_c(u)\Longrightarrow b_p(u)>0.$ \end{coro} \begin{proof} Let $p<p_c(u)$. By Theorem~\ref{exposoupc}, there exist $c,\alpha>0$ such that for each $n\ge 1$, $\P_p(t(0,H_n(u))\le cn)\le e^{-\alpha n}$. Then, with the Borel--Cantelli lemma and Theorem~\ref{shapedir2}, we get $b_p(u)\ge c$. \end{proof} \begin{coro}\label{untheoqdmm} $\displaystyle \cup_{q>p} \mathrm{BG}(q) \subset \mathrm{Bar}(A_p)$. \end{coro} \begin{proof} Consider $u\in \displaystyle \cup_{q>p} \mathrm{BG}(q)$: there exists $q>p$ such that $u\in \mathrm{BG}(q)$, so $\theta_u(q)=0$, so $p<p_c(u)$. We conclude with Corollary~\ref{hb}.
\end{proof} \def\refname{References} \bibliographystyle{plain}
\section{Review of the RSOS model in the representation theoretical formulation} \par Here we review the formulation of the RSOS model given by Jimbo-Miwa-Ohta \cite{JMO} and introduce notation. We adopt the notation of \cite{JMO} unless otherwise stated. Let us fix integers $k$ and $N$ satisfying $1\leq N\leq k-1$ and set $l=k-N$. The spin variable of the RSOS model takes values in the set of level $k$ dominant integral weights. The Boltzmann weight is given by $$ \begin{array}{ccc} \lambda & - & \mu \\ \vert & & \vert \\ \mu' & - & \nu \end{array} = W^N_k( \begin{array}{cc} \lambda & \mu \\ \mu' & \nu \end{array} \vert z), $$ where $W^N_k( \begin{array}{cc} \lambda & \mu \\ \mu' & \nu \end{array} \vert z)$ is determined from the commutation relations of the vertex operators, $$ \check{\bar{R}}_{NN}({z_1 \over z_2}) \Phi^{\nu V^{(N)}_1}_{\mu}(z_1) \Phi^{\mu V^{(N)}_2}_{\lambda}(z_2) = \sum_{\mu'}\Phi^{\nu V^{(N)}_2}_{\mu}(z_2) \Phi^{\mu V^{(N)}_1}_{\lambda}(z_1) W^N_k( \begin{array}{cc} \lambda & \mu \\ \mu' & \nu \end{array} \vert {z_1\over z_2}). $$ Here the Boltzmann weight is zero unless the pairs $(\lambda,\mu),(\mu,\nu),(\nu,\mu'),(\mu',\lambda)$ are admissible. The admissibility condition is specified by the existence condition of the vertex operators. Let us denote by $\lambda_j^{(k)}=(k-j)\Lambda_0+j\Lambda_1$ the level $k$ dominant integral weight and set $P^0_k=\{\lambda_j^{(k)} \vert 0\leq j\leq k\}$. \begin{definition} The pair $(\lambda_j^{(k)},\lambda_{j'}^{(k)})$ is called admissible if the following conditions are satisfied: \begin{eqnarray} && j-j'\in\{N,N-2,\cdots, -N\}, \quad N\leq j+j'\leq 2k-N. \nonumber \end{eqnarray} \end{definition} \vskip3truemm \noindent Let us define the bijection $\sigma$ of $P^0_N$ by $B(\eta)\otimes B^{(N)}\simeq B(\sigma(\eta))$. Explicitly, if $\eta=\lambda^{(N)}_j$ then $\sigma(\eta)=\lambda^{(N)}_{N-j}$. The admissible pairs are parametrized as described below.
\begin{prop} There is a bijection $$ P^0_{l}\times P^0_{N}\simeq\{\hbox{the admissible pairs}\} $$ given by $$ (\xi,\eta)\mapsto (\xi+\eta,\xi+\sigma(\eta)). $$ \end{prop} \vskip3truemm \noindent Take any $(\xi,\eta),(\tilde{\xi},\tilde{\eta})\in P^0_{l}\times P^0_{N}$. Then we state \begin{definition} $a=(a(n))_{n\in{\bf Z}}$ is called a $(\xi\eta,\tilde{\xi}\tilde{\eta})$ restricted path if \begin{description} \item[(i)] $a(n)\in P^0_k$, and $(a(n),a(n+1))$ is admissible for any $n$. \item[(ii)] $ a(n)=\xi+\sigma^n(\eta)\quad(n\gg 0), \hbox{ and } a(n)=\tilde{\xi}+\sigma^n(\tilde{\eta})\quad(n\ll 0). $ \end{description} \end{definition} \vskip3truemm \noindent Let us set \begin{eqnarray} &&B(\xi,\eta\vert \tilde{\xi},\tilde{\eta}) =B(\xi)\otimes B(\eta) \otimes B(\tilde{\eta})^\ast\otimes B(\tilde{\xi})^\ast, \nonumber \\ &&B_{\xi\eta,\tilde{\xi}\tilde{\eta}} =\{b\in B(\xi,\eta\vert \tilde{\xi},\tilde{\eta}) \vert \ \tilde{e}_i b=\tilde{f}_i b=0 \hbox{ for all $i$}\}, \nonumber \\ &&B_{\xi\eta}^\lambda=\{b\in B(\xi)\otimes B(\eta) \vert \ \tilde{e}_ib=0\hbox{ for all $i$ and ${\rm wt}\,b=\lambda$}\}, \nonumber \\ &&B^{\ast\lambda}_{\tilde{\xi}\tilde{\eta}}=\{b\in B(\tilde{\eta})^{\ast}\otimes B(\tilde{\xi})^{\ast} \vert \ \tilde{f}_ib=0\hbox{ for all $i$ and ${\rm wt}\,b=-\lambda$}\}. \nonumber \end{eqnarray} If we introduce the trivial action of $\tilde{e}_i$ and $\tilde{f}_i$ on $B_{\xi\eta}^\lambda$ and $B^{\ast\lambda}_{\tilde{\xi}\tilde{\eta}}$, we have the isomorphisms of affine crystals, \begin{eqnarray} && B(\xi)\otimes B(\eta)\simeq \sqcup_{\lambda\in P^0_k} B_{\xi\eta}^\lambda \otimes B(\lambda), \quad B(\tilde{\eta})^{\ast}\otimes B(\tilde{\xi})^{\ast}\simeq \sqcup_{\lambda\in P^0_k} B(\lambda)^{\ast}\otimes B_{\tilde{\xi}\tilde{\eta}}^{\ast\lambda}, \nonumber \\ && B_{\xi\eta,\tilde{\xi}\tilde{\eta}}= \sqcup_{\lambda\in P^0_k} B_{\xi\eta}^\lambda \otimes B_{\tilde{\xi}\tilde{\eta}}^{\ast\lambda}.
\nonumber \end{eqnarray} Then \begin{prop}\cite{DJO}\label{singlet} There is a bijection \begin{eqnarray} && B_{\xi\eta,\tilde{\xi}\tilde{\eta}} \simeq \{\hbox{$(\xi\eta,\tilde{\xi}\tilde{\eta})$ restricted paths}\}, \nonumber \end{eqnarray} given by \begin{eqnarray} && b_\xi\otimes b\otimes b_{-\tilde{\xi}} \mapsto (a(n))_{n\in{\bf Z}}, \nonumber \\ && a(n-1)-a(n)=\hbox{wt} \ p(n)\hbox{ for any $n$}, \nonumber \end{eqnarray} where $b=(p(n))_{n\in{\bf Z}}\in B(\eta) \otimes B(\tilde{\eta})^\ast$. \end{prop} \vskip3truemm \noindent In the proof of this proposition, we use the weight multiplicity freeness of $B^{(N)}$. Proposition \ref{singlet} motivates the following definition of the space of states of the RSOS quantum spin chain, \begin{eqnarray} {\cal H}&=&\oplus{\cal H}_{\xi\eta,\tilde{\xi}\tilde{\eta}}, \nonumber \\ {\cal H}_{\xi\eta,\tilde{\xi}\tilde{\eta}} &=& \{\hbox{$U_q(\widehat{sl_2})$ singlet}\}\subset V(\xi,\eta\vert \tilde{\xi},\tilde{\eta}), \quad {\rm and} \nonumber \\ V(\xi,\eta \vert \tilde{\xi},\tilde{\eta}) &=& V(\xi)\otimes V(\eta)\otimes V(\tilde{\eta})^{\ast a}\otimes V(\tilde{\xi})^{\ast a}. \nonumber \end{eqnarray} Here the sum is over all $(\xi,\eta),(\tilde{\xi},\tilde{\eta})\in P^0_l\times P^0_N$. The tensor product is considered to be appropriately completed. Then the crystallized space of states of the RSOS model is $\sqcup B_{\xi\eta,\tilde{\xi}\tilde{\eta}}$, the main object of study in this paper. 
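The parametrization of the admissible pairs in the proposition above can be checked mechanically for small $k$ and $N$. In the sketch below the weight $\lambda_j^{(k)}$ is identified with the integer $j\in\{0,\dots,k\}$; this identification is the only assumption made.

```python
# Check that the map (xi, eta) -> (xi + eta, xi + sigma(eta)) parametrizes
# the admissible pairs, with lambda_j^{(k)} identified with the integer j.
def admissible_pairs(k, N):
    return {(j, jp)
            for j in range(k + 1) for jp in range(k + 1)
            if (j - jp) in range(-N, N + 1, 2) and N <= j + jp <= 2 * k - N}

def parametrized_pairs(k, N):
    l = k - N
    # (xi, eta) = (lambda_i^{(l)}, lambda_m^{(N)}); sigma sends m to N - m
    return {(i + m, i + (N - m)) for i in range(l + 1) for m in range(N + 1)}

k, N = 5, 2
print(admissible_pairs(k, N) == parametrized_pairs(k, N))  # True
print(len(admissible_pairs(k, N)))  # (l + 1)(N + 1) = 4 * 3 = 12
```

The cardinality $(l+1)(N+1)$ confirms that the map is a bijection onto the set of admissible pairs.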
The creation operator is defined in terms of the spin $1/2$ vertex operators\cite{JMO}, \begin{eqnarray} && \varphi_{\xi,\eta}^{\ast \xi',\eta'}(z) = \Phi^{\eta'}_{V^{(1)}\eta}(z) \Phi_{\xi}^{\xi' V^{(1)}}(z) \label{creation} \end{eqnarray} which acts as \begin{eqnarray} V(\xi,\eta\vert \tilde{\xi},\tilde{\eta}) &\stackrel{\Phi_{\xi}^{\xi' V^{(1)}}(z)}{\rightarrow}& V(\xi')\otimes V^{(1)}_z \otimes V(\eta)\otimes V(\tilde{\eta})^{\ast a}\otimes V(\tilde{\xi})^{\ast a} \nonumber \\ &\stackrel{\Phi^{\eta'}_{V^{(1)}\eta}(z)}{\rightarrow}& V(\xi',\eta' \vert \tilde{\xi},\tilde{\eta}). \nonumber \end{eqnarray} This operator obviously preserves the space ${\cal H}$, since it is an intertwiner. \section{Crystalline spinon basis for higher spin XXZ models} \par In this section we recall the results of \cite{NY} in a slightly generalized form which will be needed for the case of the RSOS models. We denote by ${\cal P}^k_{{\rm res},n}(r_1,r_2)$ the set of restricted paths from $r_1$ to $r_2$ of length $n$ in the sense of \cite{NY}. It is understood that ${\cal P}^k_{{\rm res},0}=\{ \phi \}$. We use the expression $B^{\rm XXZ}(p_n,\cdots,p_1)$ to represent $B(p_n,\cdots,p_1)$ of section 2 of \cite{NY}. Let us set \begin{eqnarray} && B^{\rm XXZ}_{\geq m}(p_n,\cdots,p_1) =\{ \varphi^{\ast p_n}_{j_n}\cdots\varphi^{\ast p_1}_{j_1} \in B^{\rm XXZ}(p_n,\cdots,p_1) \vert \ j_1\geq m\}. \nonumber \end{eqnarray} The following theorem is proved in \cite{NY}. \begin{theorem}\label{xxzbase} Let $k$ be a positive integer and $0\leq l,r\leq k$. Then there is an isomorphism of affine crystals \begin{eqnarray} && \sqcup_{n=0}^\infty\sqcup_{(p_n,\cdots,p_1)\in {\cal P}^k_{{\rm res},n}(r,l)} B^{\rm XXZ}(p_n,\cdots,p_1) \simeq B(\lambda_l)\otimes B(\lambda_r)^\ast \nonumber \end{eqnarray} given by \begin{eqnarray} 1&\longmapsto&[[ \ ]]_r=b_{\lambda_r}\otimes b_{-\lambda_r}, \nonumber \\ \varphi^{\ast p_n}_{j_n}\cdots\varphi^{\ast p_1}_{j_1} &\longmapsto& [[j_n-p_n,\cdots,j_1-p_1]].
\nonumber \end{eqnarray} \end{theorem} Although, in \cite{NY}, the statement of this theorem is only made for the case $r=0$, the statement given above is what is actually proved. In fact the bijectivity of the map is obvious, and the condition $r=0$ is not used in the proof of the weight preservation. \begin{cor} The map in Theorem \ref{xxzbase} induces a bijection preserving the affine weights: \begin{eqnarray} && \sqcup_{n=0}^\infty\sqcup_{(p_n,\cdots,p_1)\in {\cal P}^k_{{\rm res},n}(r,l)} B^{\rm XXZ}_{\geq p_1}(p_n,\cdots,p_1) \simeq B(\lambda_l), \nonumber \end{eqnarray} where we make the identification: \begin{eqnarray} B(\lambda_l)&\simeq& B(\lambda_l)\otimes b_{-\lambda_r} \nonumber \\ b&\longrightarrow& b\otimes b_{-\lambda_r}. \nonumber \end{eqnarray} \end{cor} \par \noindent By a calculation similar to that in section 5 of \cite{NY} we have \begin{cor} For any $0\leq r\leq k$, the character ${\rm ch}_j(q,z) \equiv {\rm tr}_{V(\lambda_{j})} (q^{-d} z^{h_1})$ is given by \begin{eqnarray} {\rm ch}_j(q,z)=\sum_{n=0}^\infty \sum_{m=0}^{n} {1 \over (q)_{n-m} (q)_{m}} \sum_{p} q^{h'(p)+mp_1} z^{n+r-2m}, \label{character} \end{eqnarray} where $p=(p_n,\cdots,p_{1})$ runs over all level $k$ restricted fusion paths from $r$ to $j$ and $h'(p)=\sum_{s=1}^{n-1}(n-s)H(p_{s+1},p_s)$. \end{cor} This corollary gives $k+1$ different expressions for the character of $V(\lambda_j)$. \section{Crystalline creation algebra} \par In this section we shall introduce the algebra of creation operators of the RSOS model at $q=0$. Unless otherwise stated we use the notation of \cite{NY}. For $p=0,1$ let us set $c(p)=1-p$.
\begin{definition}\label{cca} The crystalline creation algebra ${\cal A}^{\rm RSOS}$ is the associative algebra with unity generated by $\{\varphi^{\ast pp'}_{2j+c(p)} \ \vert \ j\in{\bf Z},\,p,p'\in\{0,1\}\}\cup\{1\}$ over ${\bf Z}$ subject to the following relations: \begin{eqnarray} && \varphi^{\ast p_2p_2^{\prime}}_{2j_2+c(p_2)} \varphi^{\ast p_1p_1^{\prime}}_{2j_1+c(p_1)}+ \varphi^{\ast p_2p_2^{\prime}}_{2j_1+2s_{21}+c(p_2)} \varphi^{\ast p_1p_1^{\prime}}_{2j_2-2s_{21}+c(p_1)}=0, \label{crels} \end{eqnarray} where \begin{eqnarray} s_{21}=-1+H(p_2,p_1)+H(p_2^{\prime},p_1^{\prime}). \nonumber \end{eqnarray} \end{definition} \vskip3truemm The algebra ${\cal A}^{\rm RSOS}$ is naturally graded by \begin{eqnarray} && {\cal A}^{\rm RSOS}=\oplus_{n=0}^\infty {\cal A}_n^{\rm RSOS} \nonumber \\ && {\cal A}^{\rm RSOS}_n=\sum {\bf Z} \varphi_{2j_n+c(p_n)}^{\ast p_np_n^{\prime}} \cdots \varphi_{2j_1+c(p_1)}^{\ast p_1p_1^{\prime}}, \quad {\cal A}^{\rm RSOS}_0={\bf Z}. \nonumber \end{eqnarray} In the following, we denote the crystalline creation algebra of the spin $l/2$ XXZ model\cite{NY} by ${\cal A}^{\rm XXZ}$, where the integer $l=k-N$ is associated with the RSOS model as in section 1. For $p=(p_n,\cdots,p_1)$ and $p^\prime=(p_n^\prime,\cdots,p_1^\prime)$ in $\{0,1\}^n$, let us set \begin{eqnarray} &&B^{\rm RSOS}(p \vert p^\prime) \nonumber \\ &&=\{ \varphi_{2j_n+c(p_n)}^{\ast p_np_n^{\prime}} \cdots \varphi_{2j_1+c(p_1)}^{\ast p_1p_1^{\prime}} \vert \hbox{ $(j_n,\cdots,j_1)$ satisfies the condition (\ref{nor})}\}, \nonumber \\ &&B^{\rm RSOS}_{\geq0}(p \vert p^\prime) \nonumber \\ &&=\{ \varphi_{2j_n+c(p_n)}^{\ast p_np_n^{\prime}} \cdots \varphi_{2j_1+c(p_1)}^{\ast p_1p_1^{\prime}} \in B^{\rm RSOS}(p \vert p^\prime) \vert j_1\geq H(p_1^{\prime},c(p_1)) \}. \nonumber \end{eqnarray} If $n=0$, we define $(p_n, \cdots, p_1)=\phi$ and $B^{\rm RSOS}(\phi\vert \phi)=\{ 1 \}$.
The condition is \begin{eqnarray} && j_n-I_n-I_n^{\prime}\geq\cdots\geq j_2-I_2-I_2^{\prime}\geq j_1, \label{nor} \end{eqnarray} where \begin{eqnarray} I_m &=& I_m(p_m,\cdots,p_1)=\sum_{s=1}^{m-1}H(p_{s+1},p_s), \nonumber \\ I_m^{\prime} &=& I_m(p_m^{\prime},\cdots,p_1^{\prime})= \sum_{s=1}^{m-1}H(p_{s+1}^{\prime},p_s^{\prime}). \nonumber \end{eqnarray} \noindent If we set \begin{eqnarray} && \psi_A(j)=\varphi^{\ast pp'}_{2j+c(p)}, \quad A=(p,p') \nonumber \end{eqnarray} and $A_i=(p_i,p_i^{\prime})$ $(i=1,2)$, $s_{A_2A_1}=s_{21}$, then the commutation relations in Definition~\ref{cca} can be written as \begin{eqnarray} && \psi_{A_2}(j_2) \psi_{A_1}(j_1) + \psi_{A_2}(j_1+s_{A_2A_1}) \psi_{A_1}(j_2-s_{A_2A_1}) =0. \nonumber \end{eqnarray} \noindent Then the condition (\ref{nor}) is nothing but the normality condition in the sense of Definition 2 in \cite{NY}. Hence by Corollary 2 of \cite{NY} we have \begin{theorem}\label{linear} $\sqcup_{p,p'\in\{0,1\}^n}B^{\rm RSOS}(p \vert p^\prime)$ is a ${\bf Z}$ linear base of ${\cal A}^{\rm RSOS}_n$. \end{theorem} \begin{definition} Let us define the weight of an element of $B^{\rm RSOS}(p \vert p^\prime)$ by \begin{eqnarray} wt(\varphi_{2j_n+c(p_n)}^{\ast p_np_n^{\prime}} \cdots \varphi_{2j_1+c(p_1)}^{\ast p_1p_1^{\prime}})&=& \sum_{s=1}^nwt(\varphi_{2j_s+c(p_s)}^{\ast p_sp_s^{\prime}}) \nonumber \\ wt(\varphi_{2j+c(p)}^{\ast pp^{\prime}})&=& -j\delta, \nonumber \\ wt1&=&0. \nonumber \end{eqnarray} \end{definition} \noindent We introduce the structure of a crystal in $B^{\rm RSOS}(p \vert p^\prime)$ such that $\tilde{e}_i$, $\tilde{f}_i$ $(i=0,1)$ act as $0$ on any element, and $B^{\rm RSOS}(p \vert p^\prime)$ has the weights defined above. Note that the commutation relations (\ref{crels}) are the same as those of $\varphi^{\ast p_2^{\prime}}_{2j_2+c(p_2)}$ and $\varphi^{\ast p_1^{\prime}}_{2j_1+c(p_1)}$.
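In the $\psi$-notation the straightening of a quadratic monomial can be carried out mechanically. The following sketch is our own illustration, not part of the original text: the energy function values $H(1,0)=1$ and $H=0$ otherwise are an assumption (the convention consistent with the examples in the later sections), and mapping the self-exchanged boundary case $j_2-j_1=s_{A_2A_1}$, for which the relation forces $x+x=0$, to zero is likewise an assumption.

```python
# Illustration only: straightening psi_{A2}(j2) psi_{A1}(j1) into normal form
# using s_{A2 A1} = -1 + H(p2, p1) + H(p2', p1') and the exchange relation.
# Assumption: H(1,0) = 1 and H = 0 otherwise; the self-exchanged case
# j2 - j1 = s (a monomial equal to its own negative) is mapped to zero.

def H(a, b):
    return 1 if (a, b) == (1, 0) else 0

def s(A2, A1):
    (p2, p2p), (p1, p1p) = A2, A1
    return -1 + H(p2, p1) + H(p2p, p1p)

def normal_order(A2, j2, A1, j1):
    """Return (sign, j2', j1') with j2' - j1' >= s_{A2 A1} + 1 (a normal
    monomial), or (0, None, None) if the monomial vanishes."""
    d = s(A2, A1)
    if j2 - j1 >= d + 1:
        return (1, j2, j1)        # already satisfies the normality condition
    if j2 - j1 == d:
        return (0, None, None)    # exchange fixes the monomial: x = -x
    return (-1, j1 + d, j2 - d)   # one exchange produces a normal monomial
```

Since the exchange is an involution, a non-normal quadratic monomial is straightened in a single step; for longer words the same move is applied to adjacent pairs until the normality condition (\ref{nor}) holds.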
Hence, by Theorem 1 in \cite{NY} and Theorem \ref{linear} above, we have \begin{theorem}\label{embed} There is an embedding of the algebra ${\cal A}^{\rm RSOS}\rightarrow {\cal A}^{\rm XXZ}$ given by \begin{eqnarray} && \varphi_{2j+c(p)}^{\ast pp^{\prime}} \mapsto \varphi_{2j+c(p)}^{\ast p^{\prime}}. \nonumber \end{eqnarray} Under this embedding $B^{\rm RSOS}(p \vert p^\prime)$ is mapped into $B^{\rm XXZ}(p^\prime)$ for $p,p^\prime\in\{0,1\}^n$. Moreover, the weight in $d$ is preserved under this embedding. \end{theorem} \section{Crystalline spinon basis for the RSOS models} \par In this section we give a new parametrization of a base of the set of highest weight vectors in the tensor products of two integrable highest weight $U_q(\widehat{sl_2})$ modules in terms of crystalline spinons. Let us consider the integers $k,N$ and $l$ as introduced in section 1. We now state the main theorem of this paper. \begin{theorem}\label{base} For $r_1,l_1\in\{0,\cdots,l\}$ and $r_2,l_2\in\{0,\cdots,N\}$, there is an isomorphism of affine crystals \begin{eqnarray} && \sqcup_{n=0}^\infty \sqcup_{p\in {\cal P}^l_{{\rm res},n}(r_1,l_1)} \sqcup_{p^\prime\in {\cal P}^N_{{\rm res},n}(r_2,l_2)} B^{\rm RSOS}(p \vert p^\prime) \simeq B_{\lambda_{l_1}^{(l)}\lambda_{l_2}^{(N)},\lambda_{r_1}^{(l)}\lambda_{r_2}^{(N)}} \nonumber \end{eqnarray} given by \begin{eqnarray} 1&\longmapsto&[[ \ ]]_{r_1,r_2}= b_{\lambda_{r_1}^{(l)}}\otimes b_{\lambda_{r_2}^{(N)}}\otimes b_{-\lambda_{r_2}^{(N)}}\otimes b_{-\lambda_{r_1}^{(l)}}, \nonumber \\ \varphi_{2j_n+c(p_n)}^{\ast p_np_n^{\prime}} \cdots \varphi_{2j_1+c(p_1)}^{\ast p_1p_1^{\prime}} &\longmapsto& b_{\lambda_{l_1}^{(l)}}\otimes \varphi_{2j_n+c(p_n)}^{\ast p_n^{\prime}} \cdots \varphi_{2j_1+c(p_1)}^{\ast p_1^{\prime}} \otimes b_{-\lambda_{r_1}^{(l)}}. 
\nonumber \end{eqnarray} \end{theorem} \vskip4truemm \noindent By restricting the above map to the set of elements of the form $b\otimes b^\prime\otimes b_{-\lambda_{r_2}^{(N)}}\otimes b_{-\lambda_{r_1}^{(l)}}$ in $B_{\lambda_{l_1}^{(l)}\lambda_{l_2}^{(N)},\lambda_{r_1}^{(l)}\lambda_{r_2}^{(N)}}$, we have \begin{cor}\label{hbase} The map in Theorem \ref{base} induces the bijection preserving the affine weights: \begin{eqnarray} && \sqcup_{n=0}^\infty \sqcup_{p\in {\cal P}^l_{{\rm res},n}(r_1,l_1)} \sqcup_{p^\prime\in {\cal P}^N_{{\rm res},n}(r_2,l_2)} B^{\rm RSOS}_{\geq0}(p \vert p^\prime) \simeq B^{\lambda_{r}^{(k)}}_{\lambda_{l_1}^{(l)}\lambda_{l_2}^{(N)}}, \nonumber \end{eqnarray} where $r=r_1+r_2$, and we make the identification: \begin{eqnarray} B^{\lambda_{r}^{(k)}}_{\lambda_{l_1}^{(l)}\lambda_{l_2}^{(N)}}&\simeq& B^{\lambda_{r}^{(k)}}_{\lambda_{l_1}^{(l)}\lambda_{l_2}^{(N)}}\otimes b_{-\lambda_{r_2}^{(N)}}\otimes b_{-\lambda_{r_1}^{(l)}}, \nonumber \\ b&\longrightarrow& b\otimes b_{-\lambda_{r_2}^{(N)}}\otimes b_{-\lambda_{r_1}^{(l)}}. \nonumber \end{eqnarray} \end{cor} \par Let us define \begin{eqnarray} && V_{l_1,l_2}^{r}= \{ v\in V(\lambda_{l_1}^{(l)})\otimes V(\lambda_{l_2}^{(N)}) \vert e_i v=0 \ (i=0,1), \ \hbox{wt}v=\lambda_r^{(k)} \}. \nonumber \end{eqnarray} By calculations similar to those in section 5 of \cite{NY} we have \begin{cor} Assume the same conditions as those in Theorem \ref{base}. Then we have \begin{eqnarray} && \hbox{tr}_{V_{l_1,l_2}^{r}}(q^{-d}) = \sum_{n=0}^\infty {1\over (q)_n} \sum_{p\in {\cal P}^l_{{\rm res},n}(r_1,l_1)} \sum_{p'\in {\cal P}^N_{{\rm res},n}(r_2,l_2)} q^{h'(p)+h'(p')+nH(p_1',c(p_1))} . \nonumber \end{eqnarray} In particular, if $r_1r_2=0$, then \begin{eqnarray} && \hbox{tr}_{V_{l_1,l_2}^{r}}(q^{-d})= \sum_{n=0}^\infty{1\over (q)_n} K^{l}_{r_1,l_1}(n)K^{N}_{r_2,l_2}(n), \nonumber \end{eqnarray} where the polynomial $K^k_{r,j}(n)$ is that defined in (\ref{poly}) of section 5. 
\end{cor} \vskip5truemm We remark that the branching coefficient\cite{KW} is obtained from $\hbox{tr}_{V_{l_1,l_2}^{r}}(q^{-d})$ as $q^{ s_{l_1}^{(l)}+ s_{l_2}^{(N)}- s_{r}^{(k)} } \hbox{tr}_{V_{l_1,l_2}^{r}}(q^{-d})$, where $s^{(l)}_m={(m+1)^2\over 4(l+2)}-{1\over8}$. \vskip3truemm {\bf Example.} For the simplest case, $l=N=1$, one has \begin{eqnarray} &&\hbox{tr}_{V_{0,0}^{0}}(q^{-d})= \sum_{n=0}^{\infty} {q^{2 n^2} \over (q)_{2n}}, \nonumber \\ &&\hbox{tr}_{V_{1,1}^{0}}(q^{-d})= \sum_{n=0}^{\infty} {q^{2n(n+1)} \over (q)_{2n+1}}, \nonumber \\ &&\hbox{tr}_{V_{0,1}^{1}}(q^{-d})= \sum_{n=0}^{\infty} {q^{n(2n-1)} \over (q)_{2n}}= \sum_{n=0}^{\infty} {q^{n(2n+1)} \over (q)_{2n+1}}. \nonumber \end{eqnarray} These are well known fermionic character formulas for the Ising model with $h=0,1/2$ and $1/16$ up to the normalization $q^{h-1/48}$ \cite{KMM}. \vskip3truemm \noindent Theorem \ref{base} follows from Theorem \ref{embed} and the following lemma. \begin{lemma}\label{singlecity} Let $ b= b_{\lambda_{l_1}^{(l)}}\otimes \varphi_{2j_n+c(\epsilon_n)}^{\ast p_n^{\prime}} \cdots \varphi_{2j_1+c(\epsilon_1)}^{\ast p_1^{\prime}} \otimes b_{-\lambda_{r_1}^{(l)}} $ be an element of $B(\lambda_{l_1}^{(l)},\lambda_{l_2}^{(N)}\vert \lambda_{-r_1}^{(l)},\lambda_{-r_2}^{(N)})$. Then $\tilde{x}_ib=0$ for $i=0,1$ and $x=e,f$ if and only if the path $(\epsilon_n,\cdots,\epsilon_1)$ beginning at $r_1$ is an element of ${\cal P}^N_{{\rm res},n}(r_1,l_1)$. \end{lemma} \noindent The following statement is easily proved. \begin{lemma}\label{sub} Let $b'=b^{(1)}_{c(\mu_n)}\otimes\cdots \otimes b^{(1)}_{c(\mu_1)}$ be an element of $B^{(1)\otimes n}$. Then \begin{description} \item[(i)] If $\tilde{e}_1b'=\tilde{f}_1b'=0$, the path $(\mu_n,\cdots,\mu_1)$ beginning at $0$ never falls below $0$. \item[(ii)] If $\tilde{e}_0b'=\tilde{f}_0b'=0$, the path $(\mu_n,\cdots,\mu_1)$ beginning at $N$ never rises above $N$. 
\end{description} \end{lemma} \vskip5truemm \noindent Proof of Lemma \ref{singlecity}. Note that the condition $\tilde{e}_ib=\tilde{f}_ib=0$ for $i=0,1$ is equivalent to \begin{eqnarray} && \tilde{x}_1( b^{(1)\otimes l_1}_0\otimes b^{(1)}_{c(\epsilon_n)}\otimes \cdots \otimes b^{(1)}_{c(\epsilon_1)} \otimes b^{(1)\otimes r_1}_1)=0 \hbox{ for $x=e,f$}, \nonumber \\ && \tilde{x}_0( b^{(1)\otimes N-l_1}_1\otimes b^{(1)}_{c(\epsilon_n)}\otimes \cdots \otimes b^{(1)}_{c(\epsilon_1)} \otimes b^{(1)\otimes N-r_1}_0)=0 \hbox{ for $x=e,f$}. \nonumber \end{eqnarray} Then Lemma \ref{singlecity} follows from Lemma \ref{sub}. $\Box$. \section{Some other applications} To this point we have shown that the crystalline spinon picture can be naturally extended to the case of RSOS models. In this section, we discuss some additional applications of our construction. First, we discuss various formulas for the string function of integrable highest weight ${\widehat {sl_2}}$ modules which follow from the spinon character formulas of \cite{NY}. Then, considering the limiting forms of these formulas, one can obtain the $sl_2$ parafermion characters for $c=3k/(k+2)-1$ as suitable large $n$ limits of the polynomials $K^k_{r,j}(n)$. Finally, we comment on another large $n$ limit of the polynomials $K^k_{r,j}(n)$ and their relation to the Virasoro minimal model characters with $c=1-6/(k+1)(k+2)$. Let us define $K^k_{r,j}(n)$, ($0 \leq r,j \leq k$). \begin{eqnarray} && K^{k}_{r,j}(n)=\sum_{p \in {\cal P}^{(k)}_{{\rm res},n}(r,j)} q^{h'(p)}, \label{poly} \end{eqnarray} where $h'(p)$ is given by $$ h'(p)=\sum_{i=1}^{n-1} (n-i) H(p_{i+1},p_{i}). $$ By definition, $K^k_{r,j}(n)$ is a polynomial in $q$ with non-negative integer coefficients. One can prove the following formula \begin{eqnarray} &&K^k_{r,j}(n)=G^{k+2}_{r,j+1}(n)-G^{k+2}_{r,-j-1}(n) \nonumber \\ &&G^l_{r,s}(n)=\sum_{m \in {\bf Z}} q^{m s+m^2 l} \left[ \begin{array}{c} n \\ {s+2ml+n-r-1 \over 2} \end{array} \right]. 
\nonumber \end{eqnarray} The string function\footnote{We normalize it as $c^{\lambda}_{\mu}(q)=1+{\cal O}(q)$.} $c^{j}_{\mu}(q)$ is the character of the weight $h_1=\mu$ subspace of the integrable module $V(\lambda_j)$. It is easy to read the string functions from the spinon character formula\cite{BLS2,NY}, $$ c^{j}_{\mu}(q)= q^a \sum_{n=\vert \mu \vert}^{\infty} {K^k_{0,j}(n) \over (q)_{n-\mu \over 2}(q)_{n+\mu \over 2}}, $$ where $q^a$ is a suitable normalization factor depending on $\mu$. It should be noted that in these formulas the symmetry relations $$ c^j_\mu=c^j_{-\mu}=c^j_{\mu+2 k {\bf Z}}=c^{k-j}_{k-\mu} $$ are not manifest. Considering this fault as a virtue, one can derive infinitely many $q$-series identities. For instance, at level $k=1$, all string functions are equal, and one obtains $$ \sum_{n=0}^{\infty} {q^{n(n+m)} \over (q)_n (q)_{n+m}}={1 \over (q)_{\infty}}, $$ for any $m \geq 0$. The expression on the rhs may be identified with the $m \rightarrow \infty$ limit of the lhs. Similarly, for $k=2$, one has \begin{eqnarray} &&(q)_{\infty} c^0_0= \lim_{n \rightarrow \infty} q^{-2 n^2} K^2_{0,0}(4 n), \nonumber \\ &&(q)_{\infty} c^0_2= \lim_{n \rightarrow \infty} q^{-(2 n^2+2n+1)} K^2_{0,0}(4 n+2), \nonumber \\ &&(q)_{\infty} c^1_1= \lim_{n \rightarrow \infty} q^{-(2 n^2+n)} K^2_{0,1}(4 n+1). \nonumber \end{eqnarray} These are essentially the Virasoro characters for $c=1/2$ and $h=0,1/2$ and $1/16$. In general, the large $n$ behavior of $K^k_{r,j}(n)$ is described by \begin{lemma} For $(a)$ $0 \leq i < k-j$ and $(b)$ $k-j \leq i < k$, put \begin{eqnarray} &(a)& K^k_{0,j}(2nk+2i+j)= q^{k n^2+j n+i(2n+1)} {\bar K}^k_{0,j}(2nk+2i+j), \nonumber \\ &(b)& K^k_{0,j}(2nk+2i+j)= q^{k n^2+j n+i(2n+2)+j-k} {\bar K}^k_{0,j}(2nk+2i+j). \nonumber \end{eqnarray} Then, ${\bar K}^k_{0,j}(2nk+2i+j)=1+{\cal O}(q)$, and has a limit as $n \rightarrow \infty$ in the form of formal power series in $q$. \end{lemma} Proof. 
The lowest degree term comes from the following path \begin{eqnarray} &(0^k 1^k)^n 0^{i+j} 1^i, &(0 \leq i < k-j) \nonumber \\ &(0^k 1^k)^n 0^k 1^i 0^{i+j-k}, &(k-j \leq i < k) \nonumber \end{eqnarray} and the number of paths of fixed degree is finite and independent of $n$ (for $n \gg 0$), since those paths can differ from the lowest degree path only near the end points. $\Box$ By this lemma, the limiting form of the infinite sequence $$ c^j_\mu=c^j_{\mu+2k}=c^j_{\mu+4k}=\cdots, $$ takes the form $$ c^j_\mu= {1 \over (q)_{\infty} } \lim_{n \rightarrow \infty} {\bar K}^k_{0,j}(2kn+\mu) . $$ In particular $\lim_{n \rightarrow \infty} {\bar K}^k_{0,j}(2kn+\mu)$ is an analytic function on $\vert q \vert<1$. Since $(q)_{\infty} c^j_\mu$ is the parafermion character \footnote{This is also normalized as ${\rm ch}^j_{\mu}=1+{\cal O}(q)$.} ${\rm ch}^j_\mu$ with $c=3k/(k+2)-1$, we have \begin{prop} The large $n$ limit of the lower degree terms in $K^k_{0,j}(n)$ gives the parafermion character $$ {\rm ch}^j_\mu=(q)_{\infty} c^j_\mu= \lim_{n \rightarrow \infty} {\bar K}^k_{0,j}(2kn+\mu). $$ \end{prop} The sum in $K^k_{r,j}(n)$ looks like the usual path realization of Virasoro minimal model characters \cite{ABF}. Hence it is natural to seek the relation between these two. To do this, let us first introduce the 1D configuration sum of the ABF model in regime II or III by \begin{eqnarray} && X^k_{j,r}(n,q)=\sum_{p\in {\cal P}^k_{{\rm res},n}(j,r)} q^{\omega'(c(p))}, \nonumber \end{eqnarray} where \begin{eqnarray} && \omega'(p)=\sum_{i=1}^{n-1} i \tilde{H}(p_{i+1},p_i). \nonumber \end{eqnarray} Here $\tilde{H}(0,1)=-1$, $\tilde{H}(0,0)=\tilde{H}(1,1)=\tilde{H}(1,0)=0$, and $c(p)=(c(p_i))$.
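The two path sums just introduced can be tabulated by direct enumeration. The following sketch is our own illustration, not part of the original text; the energy function convention $H(1,0)=1$ and $H=0$ otherwise is an assumption (it reproduces the Ising specialisation given above), while $\tilde{H}$ is exactly as stated in the text. The enumeration also confirms, for small parameters, the reciprocity $K^k_{j,r}(n,q)=X^k_{r,j}(n,q^{-1})$ proved as (i) of the proposition below.

```python
# Illustration only: brute-force tabulation of the path sums K^k_{r,j}(n) of
# eq. (poly) and the ABF 1D configuration sum X^k_{j,r}(n,q).  Assumed energy
# function: H(1,0) = 1 and H = 0 otherwise; tilde H is as stated in the text:
# tilde H(0,1) = -1, all other values 0.
from itertools import product

def restricted_paths(k, n, start, end):
    """Level-k restricted paths (p_1,...,p_n): heights begin at `start`,
    step +1 for p_s = 0 and -1 for p_s = 1, stay within [0, k], end at `end`."""
    paths = []
    for steps in product((0, 1), repeat=n):
        h, ok = start, True
        for p in steps:
            h += 1 if p == 0 else -1
            if not 0 <= h <= k:
                ok = False
                break
        if ok and h == end:
            paths.append(steps)
    return paths

def H(a, b):                      # assumed: H(1,0) = 1, else 0
    return 1 if (a, b) == (1, 0) else 0

def tilde_H(a, b):                # as in the text: tilde H(0,1) = -1, else 0
    return -1 if (a, b) == (0, 1) else 0

def K(k, r, j, n):
    """K^k_{r,j}(n) = sum_p q^{h'(p)} over p in P^k_{res,n}(r,j), with
    h'(p) = sum_{s=1}^{n-1} (n-s) H(p_{s+1}, p_s); a dict {exponent: coeff}."""
    poly = {}
    for p in restricted_paths(k, n, r, j):
        e = sum((n - s) * H(p[s], p[s - 1]) for s in range(1, n))
        poly[e] = poly.get(e, 0) + 1
    return poly

def X(k, j, r, n):
    """ABF 1D configuration sum X^k_{j,r}(n,q) = sum_p q^{omega'(c(p))}."""
    poly = {}
    for p in restricted_paths(k, n, j, r):
        cp = [1 - x for x in p]                    # c flips 0 <-> 1
        e = sum(s * tilde_H(cp[s], cp[s - 1]) for s in range(1, n))
        poly[e] = poly.get(e, 0) + 1
    return poly

# Example: K(1, 0, 0, 4) == {4: 1}, the unique alternating path, cf. the
# Ising specialisation of the traces above.
```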
Let us next set \begin{eqnarray} && b^j_{r,s} (q)={\rm tr}_{V^j_{r, s}} (q^{-d}), \nonumber \end{eqnarray} which is the branching coefficient up to the power of $q$, as mentioned in section 4 for $$ V(\lambda_r^{(k-1)})\otimes V(\lambda_{s}^{(1)})\simeq \oplus_{j}V_{r,s}^{j}\otimes V(\lambda_{j}^{(k)}), $$ where $j \equiv r+s$ (mod. 2). We write $K^k_{j,r}(n,q)=K^k_{j,r}(n)$. Then we have \begin{prop} \begin{description} \item[] \item[(i)] $K^k_{j,r}(n,q)=X^k_{r,j}(n,q^{-1})$. \item[(ii)] For $\vert q \vert<1$ we have \begin{eqnarray} && b^j_{r,s} (q)= \lim_{n\rightarrow\infty}q^{n(n-r)}K^k_{r+s,j}(2n,q^{-1}). \nonumber \end{eqnarray} \end{description} \end{prop} \par \noindent Proof. The proof of (i) follows immediately from the definitions. Let us prove (ii). Using the formulation of \cite{DJO} we have \begin{eqnarray} && b^j_{r, s}(q)= \lim_{n \rightarrow \infty} \sum_{p \in {\cal P}^{(k)}_{{\rm res},2n}(j,r+s)} q^{{\omega'}(c(p))-{\omega'}(p_{gr}^r)}= \lim_{n \rightarrow \infty} q^{-{\omega'}(p_{gr}^r)} X^k_{j,r+s}(2n,q), \nonumber \end{eqnarray} where $p_{gr}^r(n)=\epsilon(n+r)$ and $\epsilon(n)={1\over2}(1-(-1)^n)$. Since $\omega'(p_{gr}^r)=-n(n-r)$, (ii) follows from (i). $\Box$ \vskip3truemm \noindent It follows from (i) of this proposition that $\lim_{n \rightarrow \infty}{\bar K}^k_{0,j}(2kn+\mu)$ discussed above is the 1D configuration sum in the thermodynamic limit of the ABF model in regime II. \section{Discussion} In this paper we introduced the crystalline creation algebra for RSOS models. Using this we have given a description of the set of highest weight elements in the tensor product of crystals of integrable highest weight $U_q(\widehat{sl_2})$ modules in terms of crystalline spinons. This crystalline spinon basis leads us to the fermionic type formulas of the branching functions. 
We have found that the infinite system of string functions obtained from the spinon basis converges to the one dimensional configuration sums (divided by $(q)_\infty$) of the ABF model in regime II. This allows a path description of the parafermion characters. The fermionic formulas for the branching coefficients associated with the ABF model were first proved in \cite{Berk}. In that work, however, the relation of the character formula to the underlying quasi-particle structure is rather implicit, and no description was given of the path counting with weights. Recently O. Warnaar gave another proof by counting weighted paths based on the Fermi gas picture. He also discussed its relation with the Bethe Ansatz solutions \cite{Wa}. Our results give a rigorous generalization of the character formulas presented in \cite{Berk,Wa} to the case of general RSOS models, directly related to the quasi-particle structure of the model. The relation between the one dimensional configuration sums of the ABF model in regime II and the parafermion characters found here is consistent with the results of \cite{BR}. It is an interesting problem to determine whether this relation can be generalized to the case of general RSOS models and whether there are corresponding spinon type series.
\section{Introduction} \label{sec:introduction} \iffalse \textbf{The} global community is increasingly becoming a data-driven environment, in which systems are generating vast quantities of data outside of the traditional data centers. Cisco anticipates that global Internet traffic will be 396 Exabytes (EB) per month in 2022, up from 122 EB per month in 2017 \cite{cisco2022}. The need to collect, process, and transfer vast data to the central Cloud is becoming a bottleneck in many mission-critical use-cases \cite{ray2019EDGE, sapienza2016solving}. However, EDGE computing acts as a high-performance bridge from local systems to private and public Clouds. EDGE computing, which typically has relatively small hardware and memory footprint, can provide a valuable infrastructure at the network's periphery. EDGE can perform tasks such as collecting, filtering, and light-weight computation over the raw data. Consequently, the provision of such processing at the EDGE means that only the necessary data needs to be transferred to the centralized data center instead of the initial raw data \cite{sharma2017live}. \fi \textbf{The} global community is increasingly becoming a data-driven environment, in which systems are generating vast quantities of data outside of the traditional data centers. Cisco anticipates that global Internet traffic will be 396 Exabytes (EB) per month in 2022, up from 122 EB per month in 2017 \cite{cisco2022}. This enormous amount of data has a positive impact on Artificial Intelligence (AI) applications. In particular, Deep Learning (DL) techniques rely on the availability of large quantities of data \cite{aggarwal2022has,lv2022look}. In recent years, DL has shown promising progress in natural language processing, computer vision, and big data analysis. 
Examples include natural language processing tasks where models like BERT, Megatron-LM, GPT-3, Gopher, etc.~\cite{devlin2018bert,rae2021scaling,he2021fastmoe,dale2021gpt} are reaching human-level understanding of textual data. Other examples include DL techniques that have exceeded human performance on object classification tasks such as ImageNet \cite{barbu2019objectnet,kwabena2022capsule} or have outperformed humans in Multiplayer Online Battle Arena games without any human intervention \cite{wu2019hierarchical}. \iffalse Training and inference for Artificial Intelligence (AI) models commonly require substantial memory and computational resources. For example, training the BERT BASE model containing 110M parameters on 64 V100 graphical processing units (GPUs) requires 12,041.51W of power, and 79 hours of training emits 1438lbs of carbon dioxide (CO2) \cite{strubell2019energy}. Performing inference on many of these deep learning models is also computationally heavy. For example, suppose the input sequence is 256 words at inference time, and the BERTBase model of twelve transformer blocks is selected for processing. In that case, the BERTBase model has to process $12 \times 256 = 3072$ word vectors for any input to make an inference \cite{goyal2020power}. Because of the high memory and computational resources required for most AI models, training and inference activities are often partially or fully offloaded to Cloud data centers. This offloading involves transferring data from source devices to the data center, contributing to network congestion and increased application latency \cite{8473376}, and also raises issues regarding data privacy. One possible way to mitigate the challenge is to utilise Tiny Machine Learning (TinyML). TinyML is a field in which machine learning applications, including their hardware requirements, are analysed and operated on extremely low-power (battery-operated) EDGE devices \cite{dutta2021tinyml}.
Even though industry and academia are pushing research in this field for different use-cases, there is still a gap in the performance of TinyML models compared to deeper neural network-based models \cite{soro2021tinyml}. These performance differences drive researchers to continuously devise architectures that can fully exploit the infrastructure available at the EDGE. \fi DL requires significant compute resources to facilitate both the training and inference phases; this is mainly achieved by utilizing Cloud or in-house computing infrastructures. The need to collect, process, and transfer the vast quantity of data to the central Cloud can become a bottleneck in many mission-critical use-cases \cite{ray2019EDGE,sulieman2022EDGE}. However, EDGE computing acts as a high-performance bridge from local systems to private and public Clouds. EDGE computing, which typically has a relatively small hardware and memory footprint, can provide a valuable infrastructure at the network's periphery. EDGE computing has typically performed tasks such as collection, filtering, and light-weight computation of raw data. Consequently, the provision of such processing at the EDGE means that only the necessary processed data needs to be transferred to the centralized data center, rather than all of the raw data as was previously required \cite{alsalemi2022innovative}. In addition, with progress in Deep Learning-based architectures and algorithms, training and inference are now being pushed to the network's EDGE.
In a multi-hierarchy Cloud environment and given the dynamic nature of the real-world application, it is not easy to maintain the credibility of the data flowing between the End-user device/systems and server nodes \cite{mukherjee2017security}. \cite{mukherjee2017security} highlight that the use of certificates and Public-Key-Mechanism (PKI) mechanisms, which were traditionally use to ensure the integrity of the data in such environments, are no longer viable as such techniques can not be facilitated in typical IoT devices due to resource constraints. Furthermore, multitenancy in Cloud computing which is used for sharing the server resources among different customers, demands new solutions and architecture to preserve the privacy of data coming from the end user \cite{odun2017Cloud}. \subsection{Latency challenge} Over the last decade, significant improvement in computing power and storage capacity has made deploying AI deep learning models in production. Advancements in technology have also led to leveraging AI models for real-time application \cite{abdulkareem2019review,yang2019application,chen2019augmented}. For example, virtual voice-assistant-based applications such as Siri and Alexa require a real-time response to be generated \cite{hoy2018alexa}. The success of such real-time AI applications is highly dependent on a low latency requirement \cite{pereira2017experimental}. For example, cooperative autonomous driving applications emphasize low latency as a critical requirement for its success \cite{hu2016quantifying}. Latency is becoming a crucial metric of service composition for applications reliant on data-intensive tasks \cite{wang2019enable}. \subsection{Legal Challenge} In recent years, a number of significant data breaches \cite{mcelroy2019data,esteve2017business} has contributed to a renewed emphasis on ensuring users' data privacy. Data protection laws like GDPR and CIPPA redefine the necessity of user privacy. 
Such laws represent significant challenges for Cloud-based operators because of restrictions placed on the movement of the data by governments. The processing and computational tasks offloaded to the Cloud from geographically distributed computing systems lead to serious concerns about how data moves across international boundaries \cite{zhang2018hybrid}. As customers utilizing Cloud resources do not have full control over the physical infrastructure and authorization layer, legal issues arise \cite{krishnan2019legal}, which are again subject to the jurisdiction of the Cloud's physical location. \subsection{Scalability Challenge} The emergence and proliferation of IoT devices has contributed to an exponential surge in the data generated and transmitted across international boundaries. Offloading all the raw data to the Cloud is intractable as it can lead to excessive network congestion and a bottleneck in the network interface \cite{deng2020EDGE}. For example, a real-time application that processes video streams to provide analytics is one of the bandwidth-intensive use-cases, creating congestion in the network \cite{chen2019deep}. \fi The convergence of AI and EDGE computing has given rise to a new paradigm of research, called EDGE Intelligence (EI) or EDGE AI \cite{wang2019EDGE,han2015learning}, with the goal of facilitating AI modelling closer to the data generation source. In recent years, there has been a significant increase in the number of research papers published in the domain of EDGE Intelligence, with a 150\% increase in published papers between 2016 and 2021. Figure~\ref{fig98} depicts the interest of academic researchers in the field of EDGE intelligence~\cite{dimensionsai}.
These opportunities have been recognized by both industry and academia. Companies such as Google, IBM, and Microsoft have developed more powerful EDGE servers \cite{charyyev2020latency}, while DL at the EDGE is being used across various application domains, including video analytics \cite{rocha2021leveraging}, healthcare \cite{amin2020EDGE}, natural language processing \cite{uddin2020emotion}, network functions \cite{xiao2020optimizing} and virtual and augmented reality \cite{lodhi2020state}. \begin{figure}[h] \centering \setlength \fboxsep{0pt} \setlength\fboxrule{0.25pt} \fbox{\includegraphics[width=3.0in]{image/chart.png}} \caption{Publication volume over time for the topic EDGE Intelligence.} \label{fig98} \end{figure} EI is more focused on decentralized and distributed architectures in order to utilize EDGE servers, which are available in high numbers and close to the point of data generation. Therefore, while an individual server may have relatively low computing power for facilitating DL training and inference, a collection of EDGE servers can be leveraged to effectively train and infer from DL techniques at the EDGE. A few prominent examples are Distributed Machine Learning-based technologies \cite{zhou2019EDGE}, Federated Learning \cite{abreha2022federated} and Split Learning \cite{gu2021server}, which are explained in more detail in this paper in section~\ref{EnablingTechnologies}. Also, performing DL inference on resource-constrained EDGE servers usually requires adapting the DL model, either by compressing the model's size or by applying conditional computation to DL training and inference, as explained in section~\ref{modelAdaption}. The adoption of EI also helps to mitigate a range of specific challenges inherent in the traditional EDGE-Cloud architecture.
These include addressing data privacy issues, challenges around DL model latency, scalability limitations, and legal challenges. EI helps to safeguard the privacy of end-user data as it enables data to be processed at the EDGE closer to the source of data generation, negating the necessity to transfer potentially sensitive end-user data across the network \cite{mao2018privacy,chen2019deep,boulemtafes2020review}. EI also facilitates the deployment of AI models at the EDGE, helping to achieve faster inference and thereby meeting the latency requirements of real-time applications \cite{martin2019edge}. In terms of legalities, the EI architecture provides enough flexibility to define where DL computation is offloaded, so that it can adhere to government guidelines on sharing data across geographical regions. Furthermore, EI also provides well-defined strategies \cite{xu2020survey}, such as task offloading and cache deployment, to mitigate the scalability challenge that arises due to the proliferation of Internet of Things (IoT) devices. All of the above solutions fall within one particular EI paradigm, the \emph{All-in EDGE paradigm}, which addresses the data privacy, latency, and scalability problems. The All-in EDGE paradigm is a logical paradigm that arises when only EDGE servers are utilised for DL training and inference; a detailed definition is given in section~\ref{EIAllInEdge}. Different researchers focus on different metrics while developing a DL application in the All-in EDGE paradigm. This survey paper therefore also presents the key performance indicators (KPIs) that should be measured while developing DL solutions in the All-in EDGE paradigm. These KPIs provide the evaluation metrics needed to compare one algorithm to another. A detailed discussion of KPIs is provided in section~\ref{sec:util}. The proliferation of IoT devices has led to a paradigm shift in the use of DL.
More specifically, there is a movement from centralized DL deployment based solely on the Cloud to a decentralized All-in EDGE DL computing paradigm \cite{atieh2021next,alimi20216g}, which is the foundation for this survey. This survey addresses the following problems, which arise when DL is pushed to the All-in EDGE paradigm: \begin{enumerate} \item How do we formally define the emergent All-in EDGE paradigm? \item How do different architectures (centralised, decentralised and distributed) work in the All-in EDGE paradigm? \item What is the state of the art in training (and inference) enablers for the All-in EDGE paradigm? \item What are the standard key performance metrics to compute for the All-in EDGE paradigm? \end{enumerate} Unlike prior surveys \cite{deng2020EDGE,wang2020convergence,chen2019deep,park2019wireless,zhou2019EDGE,murshed2021machine} summarised in Table~\ref{tab:summary}, this survey focuses on providing insights into enablers for DL in the All-in EDGE paradigm. To the best of our knowledge, none of the mentioned surveys examined the enablers and challenges from the All-in EDGE paradigm perspective. All in all, considering the aforementioned questions, the contributions of this survey paper are the following: \begin{enumerate} \item The survey provides clear distinctions between computing paradigms and levels of EDGE intelligence. This is required to identify the networking infrastructure that can be exploited at different computing paradigms as well as at different levels of EI. \item A state-of-the-art review of the enablers required for deep learning training and inference from the All-in EDGE paradigm perspective. \item A discussion of the key performance metrics for the All-in EDGE paradigm. These metrics enable the selection of suitable methods for evaluating deep learning models in the All-in EDGE paradigm.
\item Identification of open challenges in the All-in EDGE paradigm from an operational perspective for the attention of the research community in academia and industry. \end{enumerate} \iffalse This survey focuses on training and inference of the real time DNN applications at the All-in EDGE paradigm. Unlike related work \cite{deng2020EDGE,wang2020convergence,chen2019deep,park2019wireless,zhou2019EDGE,murshed2021machine} also summarised in Table~\ref{tab:summary}, this study will focus on the DNN applications of AI exclusively at the EDGE server. \begin{enumerate} \item Comprehensively looks upon deep learning architecture and enabling technologies to facilitate DL training and inference phase at EDGE server. \item Investigate the model adaption techniques for improvement in the convergence of the DL model and low-latency inference in the resource-constrained environment. \item Recommends the key performance metrics, which are essential for evaluating EI at all-in EDGE [all-in EDGE refers to a paradigm, wherein DL model training and inference happens only from EDGE server] paradigm. 
\end{enumerate} \fi \begin{table*}[] \centering \caption{A summary of related surveys.\\ \ding{55}: Not included; \ding{108}: Not considered from All-in EDGE paradigm; \ding{52}: Included} \label{tab:summary} \resizebox{\textwidth}{!}{% \begin{tabular}{|l|l|c|c|c|c|} \hline \hline \multicolumn{1}{|c|}{\textbf{Survey Paper}} & \multicolumn{1}{c|}{\textbf{Takeaway}} & \begin{tabular}[c]{@{}c@{}}\textbf{Discussion on} \\ \textbf{computing paradigm}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Focused on All-in EDGE} \\ \textbf{Enablers}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Focused on All-in EDGE} \\ \textbf{Model Adaptation}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Focused on All-in EDGE} \\ \textbf{Evaluation Metrics} \end{tabular} \\ \hline \hline \begin{tabular}[c]{@{}l@{}}EDGE intelligence: the confluence\\ of EDGE computing and \\ artificial intelligence~\cite{deng2020EDGE}\end{tabular} & \begin{tabular}[c]{@{}l@{}}The survey provided insights into EDGE \\ intelligence. It partitioned EDGE \\ Intelligence into AI for EDGE and AI\\ on EDGE along with a research roadmap.\end{tabular} & \ding{55} & \ding{108} & \ding{108} & \ding{55} \\ \hline \begin{tabular}[c]{@{}l@{}}Convergence of EDGE computing\\ and deep learning: \\ A comprehensive survey~\cite{wang2020convergence}\end{tabular} & \begin{tabular}[c]{@{}l@{}}The survey examined EDGE Intelligence from \\ a machine learning perspective for wireless \\ communication.
It also provides insights \\ into EDGE hardware for DL.\end{tabular} & \ding{108} & \ding{108} & \ding{108} & \ding{55} \\ \hline \begin{tabular}[c]{@{}l@{}}Deep Learning with EDGE \\ computing: A review~\cite{chen2019deep}\end{tabular} & \begin{tabular}[c]{@{}l@{}}The authors reviewed scenarios of EI\\ and techniques to speed up\\ training and inference on EDGE devices.\end{tabular} & \ding{55} & \ding{108} & \ding{108} & \ding{55} \\ \hline \begin{tabular}[c]{@{}l@{}}Wireless network intelligence\\ at the EDGE~\cite{park2019wireless}\end{tabular} & \begin{tabular}[c]{@{}l@{}}The survey provided insights into theoretical \\ and technical enablers of EDGE ML for the\\ training and inference processes.\end{tabular} & \ding{55} & \ding{108} & \ding{108} & \ding{55} \\ \hline \begin{tabular}[c]{@{}l@{}}Machine Learning at the \\ Network Edge: A Survey~\cite{murshed2021machine}\end{tabular} & \begin{tabular}[c]{@{}l@{}}The survey examined the deployment of ML\\ systems at the EDGE of the network, along\\ with the tools, frameworks and hardware.\end{tabular} & \ding{55} & \ding{108} & \ding{108} & \ding{55} \\ \hline \begin{tabular}[c]{@{}l@{}}EDGE Intelligence: Paving \\ the Last Mile of Artificial \\ Intelligence With Edge\\ Computing~\cite{zhou2019EDGE}\end{tabular} & \begin{tabular}[c]{@{}l@{}}The authors propose a six-level rating for EI.
\\ The survey also provides insights into the \\ architectures, frameworks and technologies \\ required for DL deployment over EDGE.\end{tabular} & \ding{52} & \ding{108} & \ding{108} & \ding{108} \\ \hline \begin{tabular}[c]{@{}l@{}}Enabling Deep Learning \\ for All-in EDGE paradigm\\ (Ours)\end{tabular} & \begin{tabular}[c]{@{}l@{}}The survey examines the key architectures, \\ enabling technologies and model adaptation \\ techniques, along with performance metrics, \\ for deep learning training and inference \\ in the All-in EDGE paradigm.\end{tabular} & \ding{52} & \ding{52} & \ding{52} & \ding{52} \\ \hline \hline \end{tabular} } \end{table*} \begin{figure*}[h] \centering \setlength \fboxsep{0pt} \setlength\fboxrule{0.25pt} \fbox{\includegraphics[width=6.2in]{image/PhDSurveyP4.png}} \caption{An overview of the structure of the survey paper.} \label{overview} \end{figure*} An overview of the survey paper's organisation is provided in Figure \ref{overview}. The survey paper is organised as follows: \begin{itemize} \iffalse \item Section I provides an insight into the survey paper, gaps in the existing survey and contributions by current survey paper. \fi \item Section II provides a primer on computing paradigms and EDGE intelligence (EI). This section also defines the All-in EDGE paradigm of EI. \item Section III presents the architectures and enabling technologies for training and inference of deep learning models in the All-in EDGE paradigm, and also examines model adaptation techniques for the effective deployment of deep learning models at the EDGE. \item Section IV reviews the key performance metrics used for evaluating research in All-in EDGE deep learning. \item Section V discusses the open challenges and future directions of research in All-in EDGE deep learning. \item Section VI presents a summary and identifies the primary conclusions and findings of the paper.
\end{itemize} \begin{table}[h] \begin{center} \caption{List of important abbreviations} \label{tab:Abb} \setlength{\tabcolsep}{2pt} \begin{tabular}{|p{85pt}|p{135pt}|} \hline\hline \textbf{Abbreviation}& \textbf{Definition}\\ \hline\hline AI& Artificial Intelligence\\ \hline EDGE& Network EDGE (EDGE computing)\\ \hline EI& EDGE Intelligence\\ \hline DNN& Deep Neural Network\\ \hline GDPR& General Data Protection Regulation\\ \hline CIPAA& Construction Industry Payment and Adjudication Act\\ \hline DL& Deep Learning\\ \hline ANN& Artificial Neural Network\\ \hline DC& Data Center\\ \hline MEC& Mobile (Multi-Access) EDGE Computing\\ \hline CC& Cloud Computing\\ \hline EC& EDGE Computing\\ \hline IoT& Internet of Things\\ \hline ED& EDGE Devices\\ \hline ES/N& EDGE Server or EDGE Node\\ \hline ES& EDGE Server\\ \hline\hline \end{tabular} \end{center} \end{table} \iffalse \begin{table*}[] \centering \caption{Enabling Technologies for model adaptation at EDGE server} \label{tab:modelAdaption} \begin{tabular}{|p{0.3\textwidth}|p{0.5\textwidth}|} \hline\hline \multicolumn{1}{|l|}{\textbf{Model Adaption Category}} & \multicolumn{1}{l|}{\textbf{Model Adaption Technique}} \\ \hline\hline \multirow{3}{*}{Model Compression} & Pruning \cite{qiu2020pre,berthelier2021deep,wang2021emerging,mishra2020survey,liang2021pruning,xu2020convolutional,liu2020pruning,han2015learning,han2017ese,li2020ss,gong2020privacy} \\ \cline{2-2} & Quantization \cite{gholami2021survey,liang2021pruning,berthelier2021deep,naik2021survey,zhong2021fine,menghani2021efficient,zhang2021making,zhou2019EDGE,yang2021dynamic,huang2021mixed} \\ \cline{2-2} & Knowledge Distillation \cite{hinton2015distilling,wang2018not,sharma2018existing,niu2021distant,hazarika2021conversational,wei2021inter,soleimani2021cross,chuang2020lifelong,zhou2020lifelong,yao2020Knowledge
,passban2020alp,inaguma2021alignment,you2020contextualized,chen2021cross,cho2020speech,sun2021unsupervised,si2021speech2video,yuan2021reinforced,wang2021mulde,hao2021model,mirzadeh2020improved,tsunashima2021adversarial} \\ \hline \multirow{5}{*}{Conditional Computation} & Early Exit \cite{matsubara2021split,baccarelli2021learning,tan2020end,passalis2020efficient,tan2021empowering, zhou2020bert,laskaridis2020hapi,xin2020deebert,teerapittayanon2016branchynet,wang2020convergence,li2020deepqtmt} \\ \cline{2-2} & Model Selection \cite{park2015big,zhou2019EDGE,marco2020optimizing,wang2020convergence} \\ \cline{2-2} & Input filtering \cite{kang2017noscope,zhang2018ffs,tao2018esgd,kwak2021study,zheng2020realizing} \\ \cline{2-2} & Result Cache \cite{li2020learning,kumar2020quiver,khudia2021fbgemm,ikram2021cache,krichevsky2021quantifying,romero2021memory,drolia2017cachier,huynh2017deepmon,balasubramanian2021accelerating,cheng2020adaptive,zong2020efficient,yang2020mixed,wang2021diesel+,inci2020deepnvm++} \\ \cline{2-2} & Model Partitioning \cite{zhang2021ex,yan2021optimal,jain2021latency,zhao2018deepthings,kang2017neurosurgeon,cuervo2010maui} \\ \hline\hline \end{tabular} \end{table*} \fi \section{Fundamentals of EDGE Intelligence and the All-in EDGE Level of EI} The centralised nature of the Cloud data centre has certain drawbacks. One of the most significant disadvantages is the distance between the data centres and end (user) devices. On the other hand, EDGE computing provides undeniable benefits in bringing storage and computational resources physically closer to the source of data generation, thereby reducing the prediction latency of AI models. This section discusses the distinction between the Cloud and EDGE computing paradigms. It also defines the different levels of EDGE intelligence based on where the tasks (such as deep learning training and inference) are performed across the discrete computing paradigms (Cloud and EDGE).
\iffalse \begin{figure*}[h] \centering \setlength \fboxsep{0pt} \setlength\fboxrule{0.25pt} \fbox{\includegraphics[width=6.2in]{image/N_EDGELayer.png}} \caption{Illustration of Cloud-EDGE computing-based IoT.} \label{fig1} \end{figure*} \fi \subsection{Computing paradigms} Facilitating deep learning can require significant computational and storage capacity. As such, this requirement can represent a substantive impediment to the deployment of deep learning models on IoT and other resource-constrained devices. Below we evaluate the Cloud and EDGE computing paradigms in the context of deep learning based on their storage capacity, computational power, and proximity to the ED. \subsubsection{Cloud Computing} Cloud servers have significant storage capacity and computational power to handle the vast volumes of data arriving via the backhaul network from end users \cite{salem2020ai,chen2016Cloud}. Thus, Cloud data centres can satisfy the resource requirements for aggregation, pre-processing, and inference for any AI-based application. Cloud resources are concentrated in a small number of data centres, providing global coverage via a backhaul network. The Cloud computing paradigm typically involves end devices offloading data directly to the Cloud for further processing. The end devices mentioned here are the originators of the data. In the Cloud, data can persist for days, months, and years, meaning that long-term data can be collated and processed. For example, Cloud data centres can facilitate forecasting models based on a large amount of historical time series data \cite{puliafito2019fog}. \iffalse With the proliferation of IoT in the prevailing world, data generated have been exponentially rising. To provide almost real-time prediction, the Cloud holds the powerful computing resources necessary for processing data.
\fi Cloud computing is still the appropriate vehicle for modelling and analytical processing if latency requirements and bandwidth consumption are not an issue, provided measures for preserving privacy and security are in place \cite{dantas2020application}. \begin{figure*}[h] \centering \setlength \fboxsep{0pt} \setlength\fboxrule{0.25pt} \fbox{\includegraphics[width=6.2in]{image/PhDDataTraversal.png}} \caption{Data traversal and network consumption of Cloud and EDGE computing.} \label{fig981} \end{figure*} \subsubsection{EDGE Computing} With the surge in the proliferation of IoT devices, traditional centralised Cloud computing struggles to provide an acceptable Quality of Service (QoS) level to end customers \cite{ali2020deep}. EDGE computing, coupled with advancements in networking technology and the advent of 5G, represents a viable mechanism for resolving this issue~\cite{barate20195g, kupriyanovsky2019eu}. The EDGE computing paradigm has emerged from the need to push computation away from the Cloud and towards the network's EDGE. This allows us to place computing resources close to the data sources. EDGE devices (ED), which are inherent components of EDGE computing, are widely distributed across geographical regions, and as such they are in closer proximity to the origin of the data. Although EDGE devices are widely available for processing intermittent, time-sensitive data in the network, the data they hold is transient \cite{narendra2018managing}. In contrast to Cloud computing, the latency incurred with EDGE computing is significantly lower, as the majority of data does not have to travel via a backhaul network to the Cloud \cite{saleem2020latency}. Reduced use of the backhaul network also means that bandwidth consumption is considerably lower. ED are located in close proximity to the data source, as shown in Figure \ref{fig981}.
\subsection{EDGE intelligence} \label{EIAllInEdge} In recent years, there has been a significant amount of research in EDGE computing. EDGE computing can bring computing resources and storage capacity closer to the ED, thereby improving the QoS of real-time applications such as automated driving \cite{milz2018visual} and real-time surveillance \cite{barthelemy2019EDGE}, all of which intrinsically require fast processing and response times \cite{liu2018distributed,hassan2018role}. Meanwhile, over the last decade, significant progress has been made in the AI domain. Technical advancements in high-performance processors \cite{li2020survey}, algorithmic advances in AI, and the availability and maturity of big data processing \cite{sanchez2016workshop} have all contributed to the increase in AI performance. Applications ranging from Apple's Siri and Microsoft's Cortana to AlphaGo demonstrate AI technology's pervasive and diverse impact over the last number of years. AI applications have established themselves as a necessary component of today's data-driven world through their ability to extract meaningful information from data. Over the past number of years it has been widely recognised that the proliferation of EDGE devices, and the subsequent vast amounts of generated data, represent a high-potential application space for AI technology. As stated by the authors in \cite{zhou2019EDGE}: "\textit{EI is a paradigm that fully exploits the available data and resources across the hierarchy of end devices, EDGE nodes, and Cloud datacenters to optimise the overall performance of training and inferencing a DNN model.}" Considerable efforts have been invested in research exploring how EDGE devices can reduce the dependency on Cloud computing. For example, there has been significant research interest in minimising data transmission between EDGE devices and centralised Cloud servers, as this can help significantly alleviate network congestion.
EI can be divided into six levels based on participation and processing, as described in Figure \ref{fig2}. The first level involves participation from the Cloud and the EDGE servers. At this level, the Cloud is solely responsible for the training of the deep learning model. For the inference phase, a Cloud-EDGE based co-inference strategy is used, in which the EDGE server first determines the output with a certain confidence. If the confidence of the result produced by the model on the EDGE server exceeds a fixed threshold probability, the result is provided to the application/end-user. If the probability threshold is not met, the activations of the last layer of the network on the EDGE server are passed to the Cloud for further processing. The second level again involves the participation of the Cloud and EDGE servers. As in level 1, the Cloud is responsible for training the deep learning model. Unlike level 1, the EDGE server is solely responsible for providing the predicted output. There is a range of techniques (discussed in more detail in section \ref{EnablingTechnologies}) that are used to condense deep learning models to facilitate deployment on EDGE servers. The third level involves the participation of the Cloud and the end device. This level utilises the Cloud to train the deep learning model. The model is deployed and used for inference on the user's end device. The fourth level involves closer integration between the Cloud and EDGE servers; it is similar to the first, except that more of the computational burden is pushed towards the EDGE. In contrast to level 1, in level 4, both the Cloud and EDGE servers are responsible for training the deep learning model. The inference phase still utilises a Cloud-EDGE based co-inference strategy. The fifth level involves only EDGE servers, which become solely responsible for training the deep learning model.
Based on the deep learning model's size, either a single EDGE server trains the model, or a group of EDGE servers collaborate to train it. Techniques for training deep learning models at level 5 are described in detail in section \ref{EnablingTechnologies}. Inference at level 5 can be produced either from a single EDGE server or from multiple EDGE servers working collaboratively. This fifth level of EI is referred to as the All-in EDGE paradigm in this survey paper. The sixth level involves only EDGE devices, which are responsible for training the deep learning model. For the inference phase, the output for a given query is generated on the EDGE device. \begin{figure*}[h] \centering \setlength \fboxsep{0pt} \setlength\fboxrule{0.25pt} \fbox{\includegraphics[width=6.2in]{image/PhDInt_Layer_level.png}} \caption{Six-level rating for EDGE Intelligence.} \label{fig2} \end{figure*} As defined by Zhou et al. \cite{zhou2019EDGE}, All-in EDGE (the fifth level) refers to the paradigm where both training and inference of the Deep Neural Network (DNN) take place on EDGE servers (also known as the in-EDGE manner). This level becomes critical when connectivity to the backhaul network is limited, as in remote locations \cite{oderhohwo2020EDGE}. Another use case in which this paradigm becomes essential is latency-critical real-time AI applications. EDGE computing now offers greater computing capability due to significant advancements in the computing power of EDGE servers and the microprocessors used in end devices \cite{mittal2020survey}. Consequently, this improvement in performance means that end devices are now less reliant on Cloud servers for deep learning inference and training. The All-in EDGE paradigm (the fifth level of EI) attempts to provide the end-user with the necessary computation, storage and latency guarantees while lessening the dependence on a centralised Cloud server.
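The Cloud-EDGE co-inference strategy described for levels 1 and 4 can be sketched as follows. This is a minimal illustration, not a system from the surveyed literature: the stub models, the threshold value, and all function names are assumptions.

```python
import numpy as np

CONFIDENCE_THRESHOLD = 0.8  # assumed cut-off, tuned per application

def edge_model(x):
    # Stand-in for the on-EDGE network: returns class probabilities
    # and the activations of its last layer.
    logits = np.array([2.0, 0.5, 0.1])
    probs = np.exp(logits) / np.exp(logits).sum()
    return probs, logits

def cloud_model(activations):
    # Stand-in for the larger Cloud-side network that resumes
    # computation from the EDGE server's last-layer activations.
    return int(np.argmax(activations))

def co_inference(x):
    probs, activations = edge_model(x)
    if probs.max() >= CONFIDENCE_THRESHOLD:
        return int(np.argmax(probs))   # answered at the EDGE
    return cloud_model(activations)    # escalated to the Cloud
```

In this toy run the EDGE server's top probability falls below the threshold, so the last-layer activations are escalated to the Cloud-side model, mirroring the level-1 hand-off described above.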
This survey primarily focuses on the All-in EDGE paradigm. \section{Deep Learning at EDGE} Data generated by end-user devices facilitates the provision of AI-based analytics. More specifically, this data enables us to train deep learning models, which can be used for real-time inference. This section reviews the current state of the art for training deep learning models in the All-in EDGE paradigm. Furthermore, the section explicitly details the different architectures employed for training and the current enabling technologies and frameworks. \subsection{Architecture} \label{modelTrainingArch} The architecture used for DL training at the EDGE server can broadly be divided into three main categories: centralised, decentralised and distributed, as shown in Figure \ref{fig3}. \begin{figure*}[h] \centering \setlength \fboxsep{0pt} \setlength\fboxrule{0.25pt} \fbox{\includegraphics[width=6.2in]{image/N_Distributed.png}} \caption{Architecture for training Deep Learning model in-EDGE. (a) Centralised, (b) Decentralised, and (c) Distributed Architecture.} \label{fig3} \end{figure*} \begin{enumerate} \item Centralised Architecture: In a centralised architecture, a single centralised EDGE server (Figure \ref{fig3}(a)) undertakes the DNN training task. Each participating EDGE server sends the data produced or accumulated from end devices to this central server. Once the centralised EDGE server receives the data, it starts the DNN model training task \cite{rani2022cloud,kong2021real}. In this architecture, the centralised EDGE server is assumed to have sufficient computing power, relatively higher than that of the other EDGE servers participating in the DNN model training. \item Decentralised Architecture: In a decentralised architecture (Figure \ref{fig3}(b)), each EDGE server is responsible for training its own local DNN. Once a local model is trained, a random peer-to-peer connection with another EDGE server in the network is established for that specific iteration to share the local models.
As this random sharing of local models is repeated over iterations, all the EDGE servers reach a consensus (a state in which further updates to the model parameters no longer change the model's estimate for a given classification or regression problem). This architecture does not rely on a central authority to manage the DNN training, although the methodology utilised requires an equal contribution from all network servers towards the DNN training. Enabling technologies like Aggregation Frequency Control and Gossip training, which support this architecture, are explained in more detail in section \ref{EnablingTechnologies}. \item Distributed Architecture: A distributed architecture is a hybrid of the centralised and decentralised architectures. In a distributed architecture, one of the EDGE servers acts as an orchestrator. The remaining participating EDGE servers help train the DNN model. Each EDGE server receives the DNN model from the orchestrator to train on its local data. Once local training of the DNN model ends, the EDGE servers send the trained model back to the orchestrator. The orchestrator then combines all the incoming DNN models to produce a global model. Different techniques can be applied to combine the incoming models, such as a simple aggregation of the model weights. Once the global DNN model is produced, the orchestrator sends it back to the rest of the EDGE servers for subsequent rounds of training. \end{enumerate} \subsection{Enabling technologies} \label{EnablingTechnologies} This section focuses on the technologies that enable the model training process undertaken by the EDGE servers. Model parallelism, aggregation frequency control, gossip training, gradient compression, data parallelism, Federated Learning and Split Learning at the EDGE server are the core technologies underpinning centralised, decentralised and distributed architectures.
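The orchestrator's combination step in the distributed architecture is, in its simplest form, a layer-wise average of the incoming model weights. The sketch below illustrates this with plain Python lists; the layer name and weight values are hypothetical:

```python
def aggregate(models):
    """Combine locally trained models into a global model by averaging
    each layer's weights (one of several possible combination rules;
    averages weighted by local dataset size are also common)."""
    n = len(models)
    global_model = {}
    for layer in models[0]:
        # Pair up corresponding weights across all incoming models.
        stacked = zip(*(m[layer] for m in models))
        global_model[layer] = [sum(ws) / n for ws in stacked]
    return global_model

# Two EDGE servers report locally trained weights for one layer.
server_a = {"dense": [0.0, 1.0]}
server_b = {"dense": [1.0, 0.0]}
print(aggregate([server_a, server_b]))  # {'dense': [0.5, 0.5]}
```

After this step, the orchestrator would redistribute the resulting global model to the participating EDGE servers for the next round of local training.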
Of these, Federated Learning and Split Learning have been favoured in recent years, as is evident from the considerable research interest and citations presented in Table~\ref{tab:DLtraining}. For comparison, we group model parallelism, aggregation frequency control, gossip training, gradient compression and data parallelism together, and refer to them as Group 1. Group 1, Federated Learning and Split Learning are discussed in more detail below, and their main differences are highlighted in Table~\ref{tab:ComparisonOfEnablingTechnologies}. Below, we detail the enabling technologies that facilitate deep learning training at the EDGE. \newline \begin{table*}[] \centering \caption{Enabling Technologies for DL training at EDGE server} \label{tab:DLtraining} \begin{tabular}{|p{0.6\textwidth}|p{0.3\textwidth}|} \hline\hline \multicolumn{1}{|c|}{\textbf{Enabling Technology}} & \multicolumn{1}{c|}{\textbf{Related Research}} \\ \hline Model Parallelism/ DNN Splitting & \cite{kim2016strads,shoeybi2019megatron,geng2019elasticpipe,mao2018privacy,xu2020acceleration,hewett2020linear,xu2021efficient} \\ \hline Aggregation Frequency Control & \cite{hsieh2017gaia} \\ \hline Gossip Training & \cite{hegedHus2021decentralized,dinani2021gossip,kong2021consensus,nikolaidis2021using} \\ \hline Gradient Compression & \cite{tang2018communication,strom2015scalable,tao2018esgd,abrahamyan2021learned,chen2021scalecom} \\ \hline Data Parallelism & \cite{li2020pytorch,li2015malt,szabo2020distributed,yu2021toward,park2020hetpipe,li2020distributed,cheikh2021klessydra} \\ \hline Federated Learning at EDGE server & \cite{bonawitz2019towards,reisizadeh2020fedpaq,xie2020multi,mothukuri2021survey,li2021survey,zhang2021survey,lyu2020threats,romanini2021pyvertical,sun2021vertical,zhou2021privacy,yu2021federated} \\ \hline Split Learning at EDGE server & \cite{abuadbba2020can,gao2020end,ha2021spatio,thapa2021advancements,thapa2020splitfed,pant2021comparison} \\ \hline
\hline \end{tabular} \end{table*} \begin{table*}[] \centering \caption{Comparison of Enabling Technologies for DL training at EDGE server} \label{tab:ComparisonOfEnablingTechnologies} \begin{tabular}{|p{0.22\textwidth}|p{0.22\textwidth}|p{0.22\textwidth}|p{0.22\textwidth}|} \hline\hline \multicolumn{1}{|c|}{Category} & \multicolumn{1}{c|}{Group 1} & \multicolumn{1}{c|}{Federated Learning} & \multicolumn{1}{c|}{Split Learning} \\ \hline \hline Connected devices & Switch, database, access points & Local database, server, access points, multiplexer & Bridge, router, switch, database, access points \\ \hline System Architecture & Centralised, decentralised and distributed architecture & Distributed architecture & Distributed architecture \\ \hline Model & Client-server hosted model (as in decentralised) and client-server synergized model (as in centralised and distributed) & Client and server collaboratively train the model & Client-server shared model architecture with collaborative training \\ \hline Access operational mechanism & LAN, WAN & LAN, WAN & LAN, WAN \\ \hline Inter-communication while model training & Full model parameters exchanged (exception: Model Parallelism) & Full model parameters exchanged & No model parameters are exchanged (only activation vectors from the cut layer are shared) \\ \hline Computational resource requirement for large DNN & High & High at client and server end & Low at client end and comparatively high at server end \\ \hline Data privacy by default & No & Yes & Yes \\ \hline Communication overhead between server and clients & Depends on sample size and model size. & Depends on model size. & Depends on sample size and number of nodes in the cut-layer.
\\ \hline \hline \end{tabular} \end{table*} \iffalse \Figure[t!](topskip=0pt, botskip=0pt, midskip=0pt){EnablingTechnologies.png} {Enabling Technologies for DL training at EDGE server.\label{fig4}} \fi \iffalse \subsubsection{Distributed Training at the EDGE server} Distributed training at the EDGE provides a means by which different EDGE servers combine their computational power to achieve a common task of training a Deep Learning model. In contrast to a single EDGE server, when clusters of EDGE servers are used in parallel for computation of a Deep Learning model it helps to alleviate the significant time and computational resource requirements. While utilising the EDGE servers in the real-world; one can use either select between centralised, distributed or decentralised architecture to train the Deep Learning Model. In distributed architecture, the EDGE servers train their own DL models, and aggregation of the individual DL models happens at one primary EDGE server. The primary EDGE server does the aggregation and becomes responsible for updating the individual DL models present in the cluster of distributed EDGE nodes. Similarly, decentralised architecture with the EDGE server helps break the dependency on the primary EDGE server. In decentralised architecture, individual EDGE server trains their own DL models (similar to centralised architecture). For aggregating DL models to reach a consensus (consensus is a state where all DL models give the same inference in common), each EDGE server shares its DL models update with its randomly selected peer. As DL models are continuously shared amongst the randomly selected peers after a certain interval, they reach a consensus regarding the DL models quality. Underneath, we detail the enabling technologies that facilitate deep learning training at the EDGE. 
\newline \fi \begin{enumerate} \item Model Parallelism/ DNN Splitting: \label{lbl_model_partitioning} Model Parallelism (also referred to as model splitting or DNN splitting) is a technique in which the DNN is split across the EDGE servers in order to overcome their constrained computing resources. By partitioning the DNN model, the technique ensures an optimal distribution of the computational workload during the DL training process. Model splitting can be categorised as horizontally partitioned or vertically partitioned model parallelism, as shown in Figure \ref{fig502}. In the vertical partitioning approach, one or more layers of the DNN are housed on different servers based on the computational requirements of the layers and the available resources of the EDGE servers. In contrast, in horizontal partitioning, neurons from different layers are placed together based on the computational power of the EDGE server. \begin{figure*}[h] \centering \setlength \fboxsep{0pt} \setlength\fboxrule{0.25pt} \fbox{\includegraphics[width=6.2in]{image/ModelDT.png}} \caption{Model Parallelism.} \label{fig502} \end{figure*} In \cite{kim2016strads}, the authors proposed STRADS, a distributed framework for vertically partitioned parallel machine learning. The ML application scheduler introduced in the STRADS framework helped control the updates of the model parameters based on the dependency structure and parameters of the DNN model. The authors also successfully demonstrated 10x faster convergence of a model-parallel LDA topic modelling implementation over a non-model-parallel one. In 2019, research \cite{shoeybi2019megatron} on the training of the Megatron language model also utilised horizontally partitioned model parallelism for the training of a multi-billion parameter language model. In contrast to single-GPU-per-model training, the authors implemented model parallelism on the same PyTorch transformer implementation with a few modifications.
To train such a large model, 512 GPUs were used. The resulting transformer-based model was able to achieve state-of-the-art (SOTA) accuracy on the RACE dataset. \newline \item Aggregated Frequency Control (AFC): Typically, distributed model training involves: (i) distributing a copy of the model to multiple EDGE servers, (ii) each EDGE server training the model locally, and (iii) a centralised authority aggregating the updates from each of the EDGE servers. In AFC, a finite number of discrete clusters of EDGE nodes are formed. The task of each of the discrete clusters is to train an identical DNN model. In each cluster, there is one EDGE server that acts as a parameter server. The task of the parameter server is to provide all other EDGE servers in the cluster with an identical copy of the DNN model. Once each worker node receives its copy, it trains the model using its local data and sends the updated DNN model weights back to the parameter server for aggregation. The parameter server aggregates the weights from each of the individual nodes in the cluster. Once aggregation is done, the parameter server sends the updated DNN model back to all the workers in the cluster. In addition, after each aggregation at the parameter server, a "significance function" is computed. This function determines whether the current aggregation has led to a significant improvement. If the improvement is deemed significant, the current cluster's parameter server shares the new model weights with the parameter servers of each of the other clusters. Hence, each parameter server will have an approximately correct model copy at any given point in time. As shown in Figure \ref{fig6}, Aggregated Frequency Control (AFC) focuses on decoupling the individual nodes' communication from the centralised authority.
\begin{figure}[ht] \centering \setlength \fboxsep{0pt} \setlength\fboxrule{0.25pt} \fbox{\includegraphics[width=3.0in]{image/N_afc.png}} \caption{Aggregated Frequency Control (AFC).} \label{fig6} \end{figure} \iffalse \begin{figure*}[h] \centering \setlength \fboxsep{0pt} \setlength\fboxrule{0.25pt} \fbox{\includegraphics[width=6.2in]{image/N_afc.png}} \caption{Aggregated Frequency Control (AFC).} \label{fig6} \end{figure*} \fi The significance function used in AFC influences the frequency with which updated weights are sent from one parameter server to another. This in turn can reduce the communication overhead in the network. The Approximate Synchronous Parallel (ASP) model \cite{hsieh2017gaia} is one such model, targeting faster convergence for geo-distributed ML training. This research successfully employed an intelligent communication system based on the AFC technique to minimise WAN communication between two data centres. By utilising AFC in the all-in EDGE paradigm, one benefits from reduced communication between EDGE servers that are far apart from each other.\newline \item Gossip Training: \label{lbl:gossip} Gossip Training provides a way to reduce the training time in a decentralised architecture. It is based on the randomised selection of an EDGE server with which to share the gradient weights for aggregation \cite{loizou2016new}. In this technique, each EDGE node randomly selects another node and sends its gradient weight updates to the selected node. Each receiving node then averages the received weights with its own. Gossip training is fully asynchronous and works in a fully decentralised manner. In \cite{blot2016gossip}, researchers demonstrated that GoSGD (Gossip Stochastic Gradient Descent) takes 43\% less time than the EASGD (Elastic Averaging SGD \cite{zhang2014deep}) algorithm to converge to the same training loss in decentralised architecture training.
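A minimal gossip-averaging round, in the spirit of GoSGD, can be sketched as follows (the node names, fixed seed, and simple pairwise averaging rule are illustrative assumptions):

```python
# Illustrative gossip round: every node picks a random peer and sends its
# weights; the receiving peer averages them into its own copy. There is no
# parameter server -- coordination is fully decentralised.
import random

def gossip_round(node_weights, rng):
    ids = list(node_weights)
    for sender in ids:
        receiver = rng.choice([i for i in ids if i != sender])
        node_weights[receiver] = [
            (own + incoming) / 2.0
            for own, incoming in zip(node_weights[receiver],
                                     node_weights[sender])
        ]

rng = random.Random(0)  # fixed seed so the sketch is reproducible
nodes = {"edge_a": [1.0, 2.0], "edge_b": [3.0, 6.0], "edge_c": [5.0, 10.0]}
for _ in range(50):
    gossip_round(nodes, rng)
```

After repeated rounds the randomly exchanged weights drift toward a common consensus value, which is how gossip training reaches agreement without any central aggregator.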
In other research \cite{vsajina2020decentralized}, PeerSGD modified the GoSGD algorithm \cite{blot2016gossip} to work in a decentralised trustless environment. The algorithm was modified at the stage where the random peer is selected to share the update. The peer that receives the update can decide whether to accept the received weights based on the loss difference (a hyper-parameter defined in the research). PeerSGD was evaluated with a varying number of clients ranging from 1 to 100. In the experiment, PeerSGD demonstrated slower convergence when tested with a larger number of clients but still achieved comparable accuracy after a certain number of epochs. The limitation of PeerSGD is its inability to achieve convergence in scenarios where data classes are segregated across multiple clients. Research in \cite{oguni2021communication,han2020accelerating} targeted Wide Area Network and heterogeneous EDGE computing platforms with slightly tweaked versions of GoSGD, achieving results comparable to the original algorithm in these different environments. Gossip training provides a way to train a model without a central authority in the all-in EDGE paradigm. Moreover, as EDGE servers may be provided by different institutes that may or may not be trustworthy, Gossip training can handle both situations.\newline \item Gradient Compression: Gradient Compression (GC) is another approach to reducing communication while training the DL model, and it can be applied to either a distributed or a decentralised architecture. GC minimises the communication overhead by addressing the issue of redundant gradients. The authors of \cite{lin2017deep} found that 99.9\% of the gradients exchanged in distributed stochastic gradient descent are redundant. They proposed a technique called Deep Gradient Compression (DGC), which reduced the communication necessary for training ResNet-50 from 97 MB to 0.35 MB.
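One simple way to shrink each transmitted value is to lower its numeric precision before it crosses the network; the sketch below casts float64 gradients down to IEEE 754 half precision using Python's `struct` module (a generic illustration, not the DGC scheme itself):

```python
# Minimal gradient quantisation sketch: pack each 8-byte float64 gradient
# as a 2-byte half-precision (binary16) value before transmission, and
# unpack it on the receiving side. Precision is traded for bandwidth.
import struct

def quantise(gradients):
    return b"".join(struct.pack("<e", g) for g in gradients)

def dequantise(payload):
    return [struct.unpack_from("<e", payload, i)[0]
            for i in range(0, len(payload), 2)]

grads = [0.123456789, -1.5, 3.0e-3]
wire = quantise(grads)        # 6 bytes on the wire instead of 24
recovered = dequantise(wire)  # close to, but not exactly, the originals
```

The recovered values differ from the originals by roughly the half-precision rounding error, which is the accuracy/bandwidth trade-off that gradient compression accepts.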
In gradient compression, two approaches are used in practice: gradient quantisation and gradient sparsification. In gradient quantisation \cite{tang2018communication}, gradient weights are degraded from a higher-precision representation to a lower-precision one, {\em e.g.}, representing weights using float16 rather than float64. In \cite{du2020high}, the authors proposed high-dimensional stochastic gradient quantization for reducing communication in the Federated Learning setting (explained in \ref{subsec_fedlearning}). In the proposed architecture, the authors decomposed the stochastic gradient into its norm and a normalized stochastic gradient. The norm is compressed via a scalar quantizer, while the high-dimensional normalized stochastic gradient is decomposed into two parts: a set of equal-length unitary vectors and a hinge vector. The hinge vector, when integrated with the unitary vectors, yields the normalized stochastic gradient. The unitary nature of both decomposed parts allows them to be compressed using two Grassmannian quantizers. Through this framework of hierarchical gradient quantization, the authors theoretically reduced the communication overhead while achieving accuracy similar to the SOTA signSGD scheme \cite{bernstein2018signsgd}. \par Another approach to gradient compression is gradient sparsification. This technique allows gradient exchange only if the absolute gradient values are higher than a certain threshold \cite{strom2015scalable}; for example, the threshold in that research ranged from 2 to 15. Only gradient elements whose absolute values exceed the threshold are transmitted. The higher the selected threshold, the lower the communication cost (as the threshold limits the transmission of gradient weights).
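Threshold-based sparsification as described above can be sketched in a few lines; transmitting (index, value) pairs for the surviving elements is an illustrative wire format:

```python
# Hedged sketch of threshold-based gradient sparsification: only elements
# whose magnitude reaches the threshold are transmitted, as (index, value)
# pairs; the receiver rebuilds a dense gradient with zeros elsewhere.

def sparsify(gradients, threshold):
    return [(i, g) for i, g in enumerate(gradients) if abs(g) >= threshold]

def densify(sparse, length):
    dense = [0.0] * length
    for i, g in sparse:
        dense[i] = g
    return dense

grads = [0.01, -4.2, 0.3, 7.5, -0.02, 2.1]
sent = sparsify(grads, threshold=2.0)  # 3 of 6 elements cross the wire
restored = densify(sent, len(grads))
```

Raising the threshold from 2.0 to 5.0 would transmit a single element here, illustrating how the threshold directly trades gradient fidelity for communication cost.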
The thresholding approach of \cite{strom2015scalable} reduced the required communication bandwidth by three orders of magnitude for data-parallel distributed SGD training of DNNs. Recent research \cite{tao2018esgd} found that selecting an appropriate threshold is challenging due to the variation in gradient values, and proposed an alternative approach called the EDGE Stochastic Gradient Descent (eSGD) method. In eSGD, whether a gradient update should be sent over the network is determined by the loss function, which is evaluated against each coordinate of the gradient at time steps $t-1$ and $t$. If the loss value at time step $t$ is smaller than its value at time step $t-1$, the current gradient $g_t$ is exchanged to build the global model. Standard SGD, when applied to MNIST with a batch size of 128 and trained for 200000 epochs, achieves 99.7\% accuracy. In contrast, the eSGD method with the same setting attained accuracies of 95.31\% and 91.22\% with drop out ratios (the \% of gradients that will not be communicated by the EDGE server) of 25\% and 50\%, respectively. In \cite{shi2020communication}, the authors formulated an optimization problem to find the optimal trade-off between the communication that takes place between neighbouring layers of the DNN (on an individual EDGE server) and the computation required for gradient sparsification. The authors further developed an optimal merged gradient sparsification algorithm which achieved a 31\% time-efficiency improvement over the SOTA sparsified SGD \cite{stich2018sparsified}. \iffalse The algorithm is primarily based upon merging the neighbouring layers, leading to performance gain in terms of less computation.\fi In the all-in EDGE paradigm, the messages communicated between servers consume significant bandwidth.
The gradient compression approach helps in reducing the size of the message being communicated from one EDGE server to another, thereby freeing up network bandwidth which can then be utilised by other EDGE applications. \newline \iffalse \item Knowledge Transfer Learning (KTL)/ Knowledge Distillation (KD): Knowledge Transfer Learning (KTL) (also referred to as Knowledge Distillation (KD)) is a technique in Machine Learning that aims at gaining Knowledge over one problem and applying the Knowledge gained to a different but relevant problem\cite{ding2021analyzing}. KTL broadly helps with model compression and Knowledge transfer. In KTL, to achieve the Knowledge transfer, a base DNN model (also known as teacher network) is initially trained on a base dataset (with a significant amount of data points) over a specific task. After completing training, the same model is used to train another DNN model (also known as student network) on another target dataset (with less number of data points) identical to the base dataset but for different tasks—for example, training a DNN model on ships and in the second phase training it over yachts. Thus this technique helps in transitioning from generality towards specificity in respective domain tasks\cite{tang2020understanding,wen2021preparing,wu2021peer,guo2020online}. Furthermore, for model compression in KTL, the architecture of the student model is downsized (consist of fewer parameters compared to the parent model) while maintaining the same level of performance\cite{choi2020data,pan2020meta,sarfraz2021Knowledge ,yang2020model,walawalkar2020online}. As KTL provides a way for model compression and Knowledge transfer but still there exists a lack of theoretical support before it can succeed as a general and practical approach \cite{gou2021Knowledge ,huang2017like}.
Two crucial areas where KTL needs more research are: \begin{enumerate} \item The teacher models can guide the student model on the task ({\em i.e.}, classification, regression etc.) in hand. Still, the student model is not able to learn all the significant Knowledge. The mismatch indicates this in the accuracy during model evaluation. Thereby, optimisation methods are crucial for significant Knowledge absorption from the teacher to the student and require more research \cite{cho2019efficacy,zhang2018deep,gou2021Knowledge }. \item The student models are not able to follow the teacher models. This can happen when the model parameters of the parent and teacher models are significantly different from each other. On the one hand, the teacher model can be very deep in contrast to the shallower network of the student model. On the other hand, it is also seen if the student model is similar to the teacher model architecture, it will produce outputs identical to the teacher \cite{heo2019comprehensive,sun2020mobilebert}. \end{enumerate} \fi \item Data Parallelism: Data parallelism (also referred to as data splitting) is a technique in which the sizeable primary dataset is split to form mutually exclusive smaller datasets. These datasets are then forwarded to the participating ES (secondary servers). In this architecture, as represented in Figure \ref{fig501}, the primary server initially distributes an identical, uninitialised model copy to each of the secondary servers. Each secondary server starts training once the model copy and the associated dataset are received. The primary server holds the responsibility of aggregating the local models residing on the secondary servers. Once the global model is formed by aggregating the local model copies, it is sent back to the secondary servers so that they can update their local models \cite{negi2020distributed,li2020pytorch,szabo2020distributed}.
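One data-parallel round can be sketched as follows; the toy least-squares objective, the learning rate, and the single gradient step per round are illustrative assumptions:

```python
# Illustrative data-parallel round: the primary server shards the dataset
# into mutually exclusive splits, each secondary server takes one gradient
# step on its shard from the shared starting weight, and the primary
# averages the local models into the new global model.

def local_train(weight, shard, lr=0.05):
    # one step on f(w) = mean((w*x - y)^2) over the local shard
    grad = sum(2.0 * (weight * x - y) * x for x, y in shard) / len(shard)
    return weight - lr * grad

dataset = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]  # y = 2x
shards = [dataset[0:2], dataset[2:4]]   # mutually exclusive splits

global_weight = 0.0                     # uninitialised shared model
local_weights = [local_train(global_weight, s) for s in shards]
global_weight = sum(local_weights) / len(local_weights)  # aggregation
```

Repeating the round moves the global weight toward the true slope of 2, with each secondary server only ever touching its own shard of the data.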
\newline \iffalse This technique can be used efficiently when ML algorithm is applied on an independent and identical distribution (iid) assumption on the data samples \cite{DBLP:journals/corr/abs-1909-08329,tang2020communication}. \fi \begin{figure}[h] \centering \setlength \fboxsep{0pt} \setlength\fboxrule{0.25pt} \fbox{\includegraphics[width=3.7in]{image/PhDDistributedTraining.png}} \caption{Data Parallelism.} \label{fig501} \end{figure} \item{Federated Learning:} \label{subsec_fedlearning} Federated Learning (FL) is a popular framework for training distributed DNNs \cite{mcmahan2017communication}. FL provides a practical mechanism to implement DL training across the network hierarchy. Although the native framework treats mobile devices as the clients responsible for training the DL model, it can be extended to EDGE servers \cite{khan2020federated,samarakoon2019distributed}. Federated Learning enables EDGE devices and servers (in our case) to collaboratively learn a shared prediction model while keeping all the training data on the device. As shown in Figure \ref{fig7}, during the first stage, all the (client) servers download the global DNN model from the aggregation server (which is responsible for maintaining the global DNN model). Once the global DNN model is received, each client server trains the DNN model on the private data stored on that server, making it a local DNN model. Once training is completed on a client server, the local model weights are sent to the aggregation server. Once the aggregation server receives the weights from all the participating client servers, they are aggregated to formulate the new global DNN model~\cite{bonawitz2019towards,reisizadeh2020fedpaq,xie2020multi}. After aggregation, the global DNN model is again circulated to the client servers for further training, making the whole approach cyclic.
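The aggregation step of this cycle is commonly a weighted average in the style of FedAvg \cite{mcmahan2017communication}; the sketch below weights each client's model by its number of local samples (the two-client setup and the specific numbers are illustrative assumptions):

```python
# Minimal FedAvg-style aggregation: the aggregation server averages the
# client model weights, weighting each client by the number of private
# training samples it used.

def fed_avg(client_updates):
    # client_updates: list of (num_samples, weight_vector) pairs
    total = sum(n for n, _ in client_updates)
    dim = len(client_updates[0][1])
    return [sum(n * w[i] for n, w in client_updates) / total
            for i in range(dim)]

updates = [
    (100, [0.10, 0.40]),   # client server with 100 private samples
    (300, [0.30, 0.80]),   # client server with 300 private samples
]
global_model = fed_avg(updates)  # pulled toward the larger client
```

The resulting global model is then redistributed to the client servers for the next round of local training.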
This framework must ensure that the performance of the aggregated global model is better than that of any of the individual client-side models \cite{li2021survey} before it is disseminated. Federated Learning Systems (FLS) can be further categorised based on their data partitioning strategy, privacy mechanism and communication architecture \cite{mothukuri2021survey,li2021survey,zhang2021survey,lyu2020threats}. The data partitioning strategy dictates how the data is partitioned across the clients. There are three broad categories of data partitioning: (i) horizontal data partitioned FLS, (ii) vertical data partitioned FLS and (iii) hybrid data partitioned FLS. In horizontal data partitioning, all the clients have the same attributes/features in the dataset needed to train the model but use private local data for training. In vertical data partitioning, the clients have different attributes/features in their datasets, and entity alignment techniques (which help in finding the overlap between different datasets) \cite{romanini2021pyvertical,sun2021vertical,zhou2021privacy} are used to collect the overlapping samples for training machine learning models. \iffalse Wake-word recognition like 'Ok Google' is good example of Horizontal data partitioned FLS \cite{yu2021federated}, and when two different institutions comes together i.e. bank and hospital to deduce about users will fall under Vertical data partitioned FLS \cite{zhang2021survey}. \fi Hybrid data partitioning utilises the best of both worlds. In this category, the entire dataset is divided into horizontal and vertical subsets, so each subset can be seen as an independent dataset with fewer attributes and data points than the entire dataset \cite{zhang2021survey}. For example, suppose a set of hospitals wants to develop a DL model for cancer prediction, and the feature sets (medical examinations of patients) of a few hospitals match each other.
In that case, they can utilise the horizontal partitioning scheme; if the feature sets do not match but do overlap, the vertical data partitioning scheme can be used \cite{li2021survey}. In an FLS, even though confidential data is not shared between the client and the server, there is the possibility that some sensitive information pertaining to the private data could still be leaked while exchanging model parameters. The provision of privacy for FLS is typically either through cryptographic methods or differential privacy. When using cryptographic techniques, both the client and server operate on encrypted messages. Two of the most widely used privacy-preserving algorithms are homomorphic encryption \cite{stripelis2021secure,tian2021secure,jiang2021flashe,fang2021privacy} and multi-party computation \cite{csahinbacs2021secure,mondal2021poster,mou2021verifiable,sotthiwat2021partially}. On the other hand, differential privacy introduces random noise to the data as well as to the model parameters \cite{girgis2021shuffled,zhang2021privacy,truex2020ldp,zhao2020local}. Although random noise is added, the algorithm provides statistical guarantees on privacy while still ensuring that the data being used facilitates effective model development. The design of an FLS can be broadly subdivided into two subcategories: distributed and decentralised designs. In a distributed design, a manager is responsible for collecting the local models, aggregating them and sending the aggregated global model back for retraining. In this design, communication between the clients and the aggregation server can happen in a synchronous \cite{chai2021fedat,zhang2021survey} as well as an asynchronous \cite{chen2020vafl,chai2021fedat,ma2021fedsa,xu2021asynchronous} manner. One of the major risks of the aggregation server in a distributed design is that the server may not treat each client model equally; that is, the aggregation server may be biased toward certain clients.
A decentralised design can mitigate the potential issues of bias in a distributed design. A decentralized design in Federated Learning can be based on a P2P scheme (ex. gossiping scheme as described in section \ref{lbl:gossip}), a blockchain-based system or graph-based system. In decentralized design, none of the participating servers is responsible for being aggregation servers. Therefore, if a gossip scheme is implemented to achieve the decentralized FLS, all the models will randomly share the updates with their neighbours \cite{lo2021flra,chen2021ppt,chen2021bdfl}. In contrast, if a blockchain system is implemented, it leverages Smart Contracts (SC) to coordinate the round delineation, model aggregation, and update tasks in FLS \cite{toyoda2020blockchain,nguyen2021federated,rahman2020secure,lu2020blockchain,li2020blockchain}. Lastly, if graph-based FLS is implemented, each client will utilize the graph neural network model with its neighbours to formulate the global models \cite{barbieri2021decentralized,he2021spreadgnn,xing2020decentralized,liu2021glint}. FLS provides a much-needed way of enabling the deep learning model training and inference at the All-in EDGE paradigm. With a FLS, one can easily integrate multiple low resource EDGE server's to help develop the DL model at the EDGE. Also, based on the resources available at the EDGE and the communication overhead of FLS design, one gets the freedom to select either a distributed or decentralised design. \newline \begin{figure}[h] \centering \setlength \fboxsep{0pt} \setlength\fboxrule{0.25pt} \fbox{\includegraphics[width=3.0in]{image/N_FederatedLearning.png}} \caption{Vanilla Federated Learning.} \label{fig7} \end{figure} \item{Split Learning:} \label{subsec_splitlearning} In Federated Learning each client is responsible for training the whole neural network locally. In contrast, Split Learning provides a way to offload some of this computation to other clients/servers. 
Split Learning divides a neural network into two or more sub-networks. Figure \ref{SplitLearning_1} illustrates the case where we split a seven-layer neural network into two sub-networks using layer 2 as the ``cut layer''. After the split, the two sub-networks are distributed between the client, which trains the initial two layers of the network, and the server, which trains the last five layers of the network. At training time, the client initiates the forward propagation on its confidential data and sends the activations from the cut layer to the server-side sub-network. The server then continues the forward propagation and calculates the loss. During backpropagation over the loss, gradients are computed and propagated initially through the server sub-network and then relayed back to the client-side sub-network. In Split Learning, during training and testing, the server never gets access to the parameters of the client-side network or to the client's data. \begin{figure}[h] \centering \setlength \fboxsep{0pt} \setlength\fboxrule{0.25pt} \fbox{\includegraphics[width=3.0in]{image/SplitLearning_12.png}} \caption{Split Learning.} \label{SplitLearning_1} \end{figure} Split Learning can be broadly categorised into three configurations based on how the input data and labels are shared across the clients and the servers. Figure \ref{Split Learning_Var} shows the three configurations: simple vanilla Split Learning, Split Learning without label sharing and Split Learning for vertically partitioned data. In simple vanilla Split Learning, the main neural network is partitioned into two sub-networks. The initial sub-network, along with the input data for the neural network, remains with the client, whereas the remaining sub-network, along with the labels, resides with the server \cite{thapa2021advancements}. Split Learning without label sharing is identical to vanilla Split Learning, except that the labels remain with the client instead of the server.
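The vanilla client/server exchange can be sketched with a toy two-layer scalar network; the layer weights, learning rate, and squared-error loss are illustrative assumptions:

```python
# Hedged sketch of one vanilla Split Learning step. The client computes up
# to the cut layer and transmits only the activation; the server finishes
# the forward pass, computes the loss, updates its sub-network, and returns
# the gradient at the cut layer so the client can update its own layers.

w_client, w_server, lr = 0.5, 2.0, 0.1
x = 1.0   # private input, never leaves the client
y = 3.0   # label, held by the server in the vanilla configuration

# client-side forward pass up to the cut layer
activation = w_client * x                 # the only value sent to the server

# server-side forward pass, loss, and gradients
pred = w_server * activation
loss = (pred - y) ** 2
grad_pred = 2.0 * (pred - y)
grad_activation = grad_pred * w_server    # returned to the client
w_server -= lr * grad_pred * activation   # server updates its sub-network

# client finishes backpropagation without revealing data or weights
w_client -= lr * grad_activation * x
```

Throughout the step the server sees only the cut-layer activation and its gradient, never `x` or `w_client`, which is the privacy property Split Learning relies on.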
In this configuration, to compute the loss, the activations output from the server-side network are sent back to the client, which holds the last layer of the neural network \cite{abuadbba2020can,gao2020end,ha2021spatio}. The loss is calculated and gradients are computed at the last layer held by the client, then sent back to the server, and backpropagation proceeds in the usual way. For vertically partitioned data, clients train their partial sub-networks and then propagate the activations to the server-side sub-network, which concatenates the activations and feeds them to the remaining sub-network. In this configuration, the labels are also shared with the server \cite{wu2020privacy}. \begin{figure}[h] \centering \setlength \fboxsep{0pt} \setlength\fboxrule{0.25pt} \fbox{\includegraphics[width=3.0in]{image/PhDSplitLearning_Var.png}} \caption{Different configurations of Split Learning: (a) simple vanilla Split Learning, (b) Split Learning without label sharing and (c) Split Learning for vertically partitioned data.} \label{Split Learning_Var} \end{figure} In a Federated Learning system, clients can interact with the server in parallel, which makes training faster than in a Split Learning-based system. In contrast to Federated Learning, Split Learning provides a better means of reducing the computational requirements on the client side. This reduction in computation comes from the fact that, instead of training the whole neural network at the client side (as done in a Federated Learning system), the client has to compute only a sub-network of the whole network (as done in a Split Learning system). Recently, to leverage the advantages of both Split Learning and Federated Learning, a hybrid technique called SplitFed Learning was proposed \cite{thapa2020splitfed}. In SplitFed Learning, a neural network is broken down into sub-networks shared amongst the clients and servers.
In addition, there are separate federated aggregation servers for the clients and for the servers. All the clients perform the forward pass in parallel and independently of each other (as seen in Split Learning). The resulting activations are sent to the server-side sub-network, which performs a forward pass through the remaining sub-network portion. The server then calculates the loss and backpropagates the gradients to the very first layer on the client side (as described earlier for Split Learning). Once this process finishes, the servers send their model weights to a federated aggregation server, which aggregates the independent server-side sub-networks to form a global server-side model. Similarly, the clients send their sub-network weights to another aggregation server. At the end of aggregation, a global model can be developed by combining the aggregated client-side weights with the aggregated server-side weights, as shown in Figure \ref{SplitFed Learning} (a) \cite{thapa2020splitfed,pant2021comparison}. \iffalse In splitfed learning, a neural network is broken down into the sub-network shared amongst the clients and servers. All the clients perform the forward pass in parallel and independent of each other, as seen in the Federated Learning subsection \ref{subsec_fedlearning}. Activation is then sent to the server-side sub-network, which takes activation as input and performs forward pass in the remaining sub-network portion. The servers then calculates the loss and back propagates the gradients till the very first layer on the client-side, as seen in the Split Learning system subsection \ref{subsec_splitlearning}. Once forward and backward pass finishes, server-side sub-networks then send their model weights to the federated aggregation server, aggregating the independent server-side sub-network and forming the global server-side model.
The same aggregation also take place for the client-side sub-network, and at the end of aggregation, a global model can be developed by combining the aggregated client side weights with the aggregated server side weights as show in Figure \ref{SplitFed Learning} (a) \cite{thapa2020splitfed,pant2021comparison}. \fi \begin{figure*}[h] \centering \setlength \fboxsep{0pt} \setlength\fboxrule{0.25pt} \fbox{\includegraphics[width=6.2in]{image/PhDSplitFedLearning.png}} \caption{Variants of SplitFed learning: (a) SplitFed learning with the same number of client- and server-side sub-networks and (b) SplitFed learning with only one copy of the server-side sub-network.} \label{SplitFed Learning} \end{figure*} SplitFed learning can have several variants. In the first, each client has its own corresponding server-side network in the main server, {\em i.e.}, the number of client-side models equals the number of server-side models, as explained in the earlier paragraph. In the second variant, there are multiple clients but only a single server; each client-side model sends its activations to a single common server-side sub-network, thereby reducing the required aggregation steps and the need to keep multiple copies of the server-side network compared to the first variant, as shown in Figure \ref{SplitFed Learning} (b). Moreover, as the server keeps only one copy of the server-side sub-network, it must perform the forward and backward passes sequentially over each client's data (the activations of the cut layer)~\cite{gawali2020comparison,joshi2021splitfed}. \iffalse \begin{enumerate} \item Vanilla Federated Learning: Vanilla Federated Learning allows the DNN model to be trained in a decentralised way. Vanilla Federated Learning is a new approach to distributed machine learning that enables EDGE devices and servers to collaboratively learn a shared prediction model while keeping all the training data on the device.
Approach decouples the need to do machine learning by storing the data in the Cloud. As shown in Figure \ref{fig7}, at the first stage, all the (client) servers download the global DNN model from the aggregation server (responsible for maintaining the global DNN model). Once the global DNN model is received, client workers train the DNN model on the private data stored in the server, making it a local DNN model. Once training is completed on the client-server, local model weights are sent to the aggregation server. Once the aggregated server receives all the weights from the participant client servers, it is then aggregated to formulate the new global DNN model\cite{bonawitz2019towards,reisizadeh2020fedpaq,xie2020multi}. After aggregation, the global DNN model is again circulated to the client servers for further training, making the whole approach cyclic. Vanilla Federated Learning (VFL) allows model training on one type of hardware ({\em \emph{e.g.}}, CPU) and then deploy it to a different kind of hardware ({\em \emph{e.g.}}, GPU) \cite{dinh2020federated,chai2020tifl,khan2021federated,ruan2021towards}. VFL is independent of hardware constraints as long as the infrastructure can train the DNN model. \item Communication-Efficient Federated Learning: In vanilla Federated Learning, client servers need to communicate with the aggregation server for periodic model averaging (FedAvg). Periodic communication thereby introduces the massive communication overhead as the data sent from the client in each iteration is about the same size as the DNN model, which leads to a low communication efficiency\cite{zhou2021communication,yuan2020hierarchical}. Consequently, various proposals focusing on the communication efficiency by reduction between the client servers and the aggregated server and data compression techniques were proposed to minimise the communication overhead of VFL \cite{gao2019hhhfl,ji2020dynamic,zhou2021communication}. 
\par In \cite{liu2020client} found that the increase in communication frequency (number of times client-server are allowed to communicate their local weights to the aggregation server) monotonically decrease the training time to achieve the same test accuracy. Other research \cite{abad2020hierarchical}, authors proposed hierarchical Federated Learning (HFL) across the heterogeneous cellular network (HCN). EDGE device users with private datasets are clustered around the small-cell base stations (SBSs) in this framework. A formed cluster with SBS then performs the distributed training where the model being trained is the same amongst the EDGE devices. SBSs then communicates periodically with the Macro-cell base station (MBS) to seek consensus on the model convergence. To further reduce the communication over network gradient, sparsification was employed in the architecture. As a result, HFL significantly reduces the communication latency on CIFAR-10 Dataset without sacrificing the model accuracy. With a slight variation in HFL, authors in \cite{briggs2020federated} studied the Federated Learning on the hierarchical cluster for local updates. In non-IID settings, the proposed architecture shows a 1.1x to 2.1x better test performance than VFL with 10 Global epochs for each approach. \newline \item Resource-Optimized FL: Federated Learning (FL) has been widely used for DNN model training-on-EDGE in heterogeneous environments (where participating EDGE devices and servers have different computation capabilities). However, when FL deploys a similar DNN model over a heterogeneous EDGE, the one with low computation power significantly delays the synchronised global model aggregation phase. This delay in global model aggregation leads to computational straggler issues ({\em i.e.}, some EDGEs take longer than the others to train the local DNN model). 
Although asynchronous EDGE parameter aggregation in FL can be one potential solution, but it does not eliminate computational stragglers. Therefore, there lies a need for resource-aware FL in a heterogeneous environment. \par In the last few years, many works have been proposed to resolve the issue. ELFISH \cite{xu2019elfish} is one such framework where the author proposed a multi-stage scheme to overcome the stragglers' issue. The initial phase formulates the ML model to estimate the device-specific computation time by building 100 CNN models with different structures ({\em i.e.}, different number of layers and number of neurons) to assess the training time. The ability to determine the training time in devices helps the second phase in masking some of the neurons with significant computation time requirement (determined while performing the DNN model training over the period of time) in the resource-constrained clients. It again masks the neurons with the low contribution to the overall global model (again determined over time while training the DNN model) from the remaining neurons. So a resource-constrained client only trains a subset of the DNN model and send it for aggregation. Each global epoch updates the masked and unmasked neurons with the global model to make the local model in sync with the global model. Also, in each iteration, neurons are selected for masking changes giving a chance to other neurons, to participate in the local model building in one of the iteration. This model shows 2x training speed up as compared to the synchronized and asynchronized version of the FL. Deep reinforcement learning has also been used in research \cite{guo2021efficient}, where authors tuned the CPU cycle of the clients to slow down the training process of the faster clients to synchronize with all other clients. With comparable performance to VFL, this approach could efficiently utilize the straggler client's contribution to the global model's training. 
Other research \cite{guo2021efficient,li2019fair} formulated optimization problems along similar lines, trading off the computation power available at the clients against the time taken to perform one global epoch of DNN model learning in FL. However, each of these optimization formulations requires extra resources to manage and optimise the FL learning process \cite{chen2020joint}. \end{enumerate} \section{All in-EDGE Model Inference } Inference in EDGE AI applications is expected to provide a fast response (the latency requirement) to end-users' queries. The distributed nature of deep learning model deployment in an in-EDGE manner makes inference more critical than in the Cloud, as a high-quality EI service is expected from the EDGE-AI model. In this section, we first discuss the architecture of the infrastructure for in-EDGE inference for AI models. Secondly, we discuss the model adaption techniques, which help deploy DNNs on resource-constrained EDGE servers. \subsection{Architecture} \label{modelInferenceArch} The architecture of the infrastructure that facilitates inference at the EDGE server can be classified as centralised or distributed, as shown in Figure \ref{fig8}. This categorisation is based on how the DNN is deployed to provide the service to the end-users. \begin{figure*}[h] \centering \setlength \fboxsep{0pt} \setlength\fboxrule{0.25pt} \fbox{\includegraphics[width=6.2in]{image/N_Distributed2.png}} \caption{Architecture for inferring from a Deep Learning model in-EDGE. (a) Centralised and (b) distributed.} \label{fig8} \end{figure*} \begin{enumerate} \item Centralised: Figure \ref{fig8}(a) describes a centralised DNN architecture for deployment at the EDGE server. In this architecture, a trained DNN is deployed on an EDGE server responsible for dealing with the end-users' requests. The centralised architecture is the most commonly used architecture.
However, to achieve adequate privacy levels, the centralised architecture is usually accompanied by privacy-preservation protocols. As the EDGE server is also constrained by its available resources, model adaption techniques such as model compression and conditional computation, explained in Section \ref{modelAdaption}, are also used to deploy a model effectively over the EDGE server. \newline \item Distributed: Figure \ref{fig8}(b) describes one way in which a distributed DNN architecture can be deployed at the EDGE server. In this architecture, a trained DNN is deployed across multiple EDGE servers to offload the computing burden from any single EDGE server. This architecture also takes advantage of different protocols for preserving the privacy of the end-user. To partition the DNN and make the model fit into the resource-constrained EDGE servers, various combinations of model adaption techniques are utilised to implement the distributed DNN inference model. \end{enumerate} \fi \end{enumerate} \subsection{Model Adaption at EDGE server} \label{modelAdaption} Model adaption techniques provide a means by which DNN deployment at the EDGE server can deliver high-quality EI services despite a lack of computing resources, storage, and bandwidth. Model adaption techniques can be broadly categorised into model compression and conditional computation techniques, as summarised in Table~\ref{tab:modelAdaption}.
\begin{table*}[] \centering \caption{Enabling Technologies for model adaptation at EDGE server} \label{tab:modelAdaption} \begin{tabular}{|p{0.3\textwidth}|p{0.5\textwidth}|} \hline\hline \multicolumn{1}{|l|}{\textbf{Model Adaption Category}} & \multicolumn{1}{l|}{\textbf{Model Adaption Technique}} \\ \hline\hline \multirow{4}{*}{Model Compression} & Pruning \cite{qiu2020pre,berthelier2021deep,wang2021emerging,mishra2020survey,liang2021pruning,xu2020convolutional,liu2020pruning,han2015learning,han2017ese,li2020ss,gong2020privacy} \\ \cline{2-2} & Quantization \cite{gholami2021survey,liang2021pruning,berthelier2021deep,naik2021survey,zhong2021fine,menghani2021efficient,zhang2021making,zhou2019EDGE,yang2021dynamic,huang2021mixed} \\ \cline{2-2} & Knowledge Distillation \cite{hinton2015distilling,wang2018not,sharma2018existing,niu2021distant,hazarika2021conversational,wei2021inter,soleimani2021cross,chuang2020lifelong,zhou2020lifelong,yao2020Knowledge,passban2020alp,inaguma2021alignment,you2020contextualized,chen2021cross,cho2020speech,sun2021unsupervised,si2021speech2video,yuan2021reinforced,wang2021mulde,hao2021model,mirzadeh2020improved,tsunashima2021adversarial} \\ \cline{2-2} & Low rank factorization \cite{jain2021low,papadimitriou2021data,miles2021network,patrona2021self,han2021learning,lee2021training,yang2021iterative,russo2021dnn} \\ \hline \multirow{3}{*}{Conditional Computation} & Early Exit \cite{matsubara2021split,baccarelli2021learning,tan2020end,passalis2020efficient,tan2021empowering, zhou2020bert,laskaridis2020hapi,xin2020deebert,teerapittayanon2016branchynet,wang2020convergence,li2020deepqtmt} \\ \cline{2-2} & Model Selection \cite{park2015big,zhou2019EDGE,marco2020optimizing,wang2020convergence} \\ \cline{2-2} & Result Cache 
\cite{li2020learning,kumar2020quiver,khudia2021fbgemm,ikram2021cache,krichevsky2021quantifying,romero2021memory,drolia2017cachier,huynh2017deepmon,balasubramanian2021accelerating,cheng2020adaptive,zong2020efficient,yang2020mixed,wang2021diesel+,inci2020deepnvm++} \\ \hline\hline \end{tabular} \end{table*} \iffalse table in conditional compuation took out two model adaption technique: & Input filtering \cite{kang2017noscope,zhang2018ffs,tao2018esgd,kwak2021study,zheng2020realizing} \\ \cline{2-2} \\ \cline{2-2} & Model Partitioning \cite{zhang2021ex,yan2021optimal,jain2021latency,zhao2018deepthings,kang2017neurosurgeon,cuervo2010maui} \fi \iffalse \Figure[t!](topskip=0pt, botskip=0pt, midskip=0pt){ModelAdaption.png} {Enabling Technologies for model adaption at EDGE server.\label{fig9}} \fi \subsubsection{Model Compression:} Model compression techniques facilitate the deployment of resource-hungry AI models on resource-constrained EDGE servers by reducing the complexity of the DNN. Model compression exploits the sparse nature of the gradients and computations involved in training the DNN model. In turn, this has been shown to reduce network latency, memory footprint and energy consumption. This section reviews pruning, quantisation, knowledge distillation and low rank factorization. \begin{enumerate} \item Pruning: Pruning of parameters is the most widely adopted approach to model compression. In this approach, neural network parameters are evaluated against their contribution to predicting the label. Those neurons that make a low contribution to inference are then pruned from the trained DNN. Pruning of parameters reduces the size of a DNN but can also negatively impact network performance. In \cite{han2015learning}, the authors were able to reduce the size of AlexNet and VGG-16 by factors of $9\times$ and $13\times$ respectively, without incurring any loss in accuracy on the ImageNet dataset.
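The core idea can be illustrated with a generic magnitude-based weight pruning sketch (an illustration of the general technique, not the exact procedure of \cite{han2015learning}): weights whose magnitude falls below a percentile threshold are zeroed out, so they no longer need to be stored or computed:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the `sparsity` fraction of weights with the smallest magnitude."""
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

rng = np.random.default_rng(42)
w = rng.normal(size=(4, 4))

pruned, mask = magnitude_prune(w, sparsity=0.5)

# Roughly half of the weights are now exactly zero and need not be stored.
assert np.all(pruned[~mask] == 0.0)
assert np.count_nonzero(pruned) <= w.size
```

In practice, pruning of this kind is followed by retraining so that the remaining weights can recover the lost accuracy.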
In other research~\cite{han2017ese}, the authors utilised pruning to create a compressed speech recognition model on a field-programmable gate array (FPGA). \iffalse The authors also encode and partitions the compressed speech recognition model into multiple processing nodes (PN's) for parallelism to achieve low latency. While partition, due to pruning, some of the PN's computation load became higher as compared to another based on the number of pruned parameters. To mitigate the problem, the authors proposed load-balance-aware pruning, in which pruning in a specific layer takes place only if the number of computations does not go below 15\% of the overall layer computation. This technique compressed the LSTM model by $10\times$ with negligible loss in accuracy. \fi SS-Auto \cite{li2020ss} is a single-shot structured pruning framework. In contrast to earlier pruning approaches, in which individual DNN parameters were selected for pruning, structured pruning prunes the columns and rows of the filter and channel matrices (for CNN-based DNN models) independently. The compressed DNN model produced by the SS-Auto framework did not suffer any degradation in performance, achieving the original performance levels when tested on the CIFAR-10 and CIFAR-100 datasets. Moreover, the compressed VGG-16 model reduced the number of convolutional-layer parameters by 41.4\% for the CIFAR-10 and 17.5\% for the CIFAR-100 dataset. In \cite{gong2020privacy}, the authors proposed a new framework based on weight pruning and compiler optimisation for faster inference while preserving the privacy of the training dataset. This approach initially trains the DNN model as usual on the user's own data. The model then undergoes privacy-preserving-oriented DNN pruning. Synthetically generated data (with no relevance to the training data) is passed through a layer of the user-trained model.
The decision whether to prune a parameter from the current layer is based on how similar (measured by the Frobenius norm) the original output of the layer (without pruning) is to the output of the layer after the parameter has been pruned. If the outputs are close enough, then that parameter is pruned. This pruning technique is based on the alternating direction method of multipliers (ADMM). Experimentally, the framework outperformed state-of-the-art end-to-end frameworks, i.e., TensorFlow-Lite, TVM and MNN, with inference speedups of up to $4.2\times$, $2.5\times$, and $2.0\times$, respectively. \newline \item Quantisation: Data quantisation reduces the precision of the parameters and gradients in the DNN. More specifically, in quantisation, data is represented in a more compact format (a low-precision form). For example, instead of adopting a 32-bit floating-point format, a quantisation approach might utilise a more compact format such as 16-bit to represent layer inputs, weights, or both \cite{zhou2019EDGE}. Quantisation reduces the memory footprint of a DNN and its energy requirements. In contrast, pruning the neurons of a DNN will reduce the network's memory footprint but does not necessarily reduce its energy requirements. For example, if later-stage neurons are pruned in a convolutional network, this will not have a high impact on energy because the initial convolutional layer dominates the energy requirement \cite{zhou2019EDGE}. In \cite{yang2021dynamic}, the authors utilised a dynamic programming-based algorithm in collaboration with parameter quantisation. With the proposed dynamic programming assisted quantisation approach, the authors demonstrated a $16\times$ compression of a ResNet-18 model with less than a 3\% accuracy drop. The authors in \cite{huang2021mixed} proposed a quantisation scheme for DNN inference that targets the weights along with the inputs to the model and the partial sums occurring inside the hardware accelerator.
Experiments showed that the proposed scheme reduced the inference latency and energy consumption by up to $3.89\times$ and $4.84\times$ respectively, while experiencing only a 1.18\% loss in DNN inference accuracy. \newline \item Knowledge Distillation: The knowledge distillation (KD) model compression technique is composed of three key components: knowledge, the distillation algorithm, and the teacher-student architecture \cite{gou2021Knowledge}. Knowledge is the representation learnt by the teacher model, which is usually a large neural network trained on a large amount of data. The knowledge distillation algorithm is used to transfer the knowledge from the teacher model to the student model; examples include Adversarial KD \cite{mirzadeh2020improved,tsunashima2021adversarial}, Multi-Teacher KD \cite{yuan2021reinforced,wang2021mulde,hao2021model}, Cross-modal KD \cite{cho2020speech,sun2021unsupervised,si2021speech2video}, Attention-based KD \cite{passban2020alp,inaguma2021alignment,you2020contextualized,chen2021cross}, Lifelong KD \cite{chuang2020lifelong,zhou2020lifelong,yao2020Knowledge} and Quantized KD \cite{jin2021kdlsq,boo2021stochastic}. Finally, the teacher-student architecture is used to train the student model. A general teacher-student framework for knowledge distillation is shown in Figure~\ref{fig_KD}. In this architecture, the teacher DNN model is first trained on the given dataset. Once the teacher DNN model is trained, it then guides the shallower student DNN model. The student DNN model uses the same dataset that was used to train the teacher DNN model, but the labels for the data points are generated by the teacher DNN model \cite{meng2019conditional}. Thereby, the knowledge distillation technique helps a smaller DNN model imitate the larger DNN model's behaviour.
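A common formulation of the distillation objective \cite{hinton2015distilling} combines a soft-target term (matching the teacher's temperature-softened output distribution) with the usual hard-label cross-entropy. The NumPy sketch below illustrates this loss; the temperature and weighting values are arbitrary choices for the example, not prescribed settings:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T produces softer distributions."""
    z = z / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, hard_label, T=4.0, alpha=0.5):
    """alpha * soft-target cross-entropy (teacher -> student) + (1 - alpha) * hard-label CE."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    soft_ce = -np.sum(p_teacher * np.log(p_student))        # distillation term
    hard_ce = -np.log(softmax(student_logits)[hard_label])  # usual supervised term
    # The T**2 factor keeps gradient magnitudes comparable across temperatures.
    return alpha * (T ** 2) * soft_ce + (1 - alpha) * hard_ce

teacher = np.array([5.0, 1.0, -2.0])   # confident teacher logits
student = np.array([2.0, 1.5, -1.0])   # less confident student logits
loss = distillation_loss(student, teacher, hard_label=0)
assert loss > 0.0
```

Minimising this loss pushes the student towards the teacher's full output distribution rather than only the one-hot labels.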
\begin{figure}[ht] \centering \setlength \fboxsep{0pt} \setlength\fboxrule{0.25pt} \fbox{\includegraphics[width=3.0in]{image/KnowledgeDitillation.png}} \caption{Teacher-student architecture for Knowledge Distillation.} \label{fig_KD} \end{figure} KD provides a viable mechanism of model compression \cite{gou2021Knowledge,huang2017like}. However, a mismatch in accuracy during model evaluation indicates the student's inability to mimic the teacher perfectly \cite{cho2019efficacy,zhang2018deep,gou2021Knowledge}, which requires further research. \newline \iffalse{} KD provides a viable mechanism of model compression \cite{gou2021Knowledge,huang2017like}. However, a number of open research challenges still exist in the area: \begin{enumerate} \item The teacher models can guide the student model on the task ({\em i.e.}, classification, regression etc.) in hand. Still, the student model is not able to learn all the significant Knowledge. The mismatch in the accuracy during model evaluation indicates students' incapability to mimic the teacher. Thereby, optimisation methods are crucial for significant Knowledge absorption from the teacher to the student and require more research \cite{cho2019efficacy,zhang2018deep,gou2021Knowledge}. \item The student models are not able to follow the teacher models. This can happen when the model parameters of teacher models are significantly different from each other. On the one hand, the teacher models can be very deep in contrast to the shallower network of the student model. On the other hand, it is also seen if the student model is similar to the teacher model architecture, it will produce outputs identical to the teacher \cite{heo2019comprehensive,sun2020mobilebert}.
\end{enumerate} \fi \item Low Rank Factorization: Low rank factorization is a technique which helps in condensing the dense parameter weights of a DNN \cite{jain2021low,papadimitriou2021data}, limiting the number of computations done in convolutional layers \cite{miles2021network,patrona2021self,han2021learning,lee2021training}, or both \cite{yang2021iterative,russo2021dnn}. This technique is based on the idea of creating a low-rank matrix that approximates the dense matrices of the parameters of a DNN, its convolutional kernels, or both. Low-rank factorisation can save memory on EDGE servers while also decreasing computational latency because of the resulting compacted size of the models. In \cite{chen2022fpc}, the authors applied low rank factorisation using a singular value decomposition (SVD) method. They demonstrated a substantive reduction in the number of parameters in convolutional kernels, which helped reduce floating-point operations (FLOPs) by 65.62\% in VGG-16 while also increasing accuracy by 0.25\% when applied to the CIFAR-10 dataset. Unlike pruning, which necessitates retraining the DNN model, after the application of low-rank factorisation there is no need to retrain the DNN model. Further research \cite{swaminathan2020sparse} proposed a sparse low-rank approach to obtain the low-rank approximation. The sparse low-rank approach is based on the idea that the neurons in a layer contribute differently to the performance of the DNN model. Entries in the decomposition matrix are therefore made according to a neuron ranking (based on each neuron's contribution to inference). This approach, when applied to the CIFAR-10 dataset with the VGG-16 architecture, achieved a compression ratio $3.6\times$ smaller than that achieved by SVD. Other commonly used methods for low rank factorization are Tucker decomposition (TD) \cite{shi2021low,fu2022low,ma2021fast} and canonical polyadic decomposition (CPD) \cite{phan2021canonical,chantal2021dynamic}.
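The core idea can be sketched with a truncated SVD of a weight matrix (a generic illustration of the technique, not the specific method of \cite{chen2022fpc}): keeping only the top-$k$ singular values replaces an $m\times n$ weight matrix with two factors of sizes $m\times k$ and $k\times n$:

```python
import numpy as np

def low_rank_factorize(W, k):
    """Approximate W (m x n) by the product of two rank-k factors."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :k] * s[:k]          # m x k, columns scaled by singular values
    B = Vt[:k, :]                 # k x n
    return A, B

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))
A, B = low_rank_factorize(W, k=8)

# Parameter count drops from 64*64 = 4096 to 2 * 64 * 8 = 1024.
assert A.shape == (64, 8) and B.shape == (8, 64)
assert A.size + B.size < W.size

# A @ B is the best rank-8 approximation of W in the Frobenius norm.
err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
assert 0.0 < err < 1.0
```

At inference time, the single matrix multiply $Wx$ is replaced by two cheaper multiplies $A(Bx)$, which is where the latency saving comes from.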
\end{enumerate} \subsubsection{Conditional Computation:} Conditional computation approaches alleviate the tension between resource-hungry DNN models and resource-constrained EDGE servers. In conditional computation, the computational load of the DNN deployed over a single ES is distributed across the network hierarchy. \iffalse Distribution of offloading the tasks from the single ES to multiple ES includes model partitioning across the ES (Early exit [\ref{lbl_early_exit}]) where each ES can make their independent inference, saving the recurring inferences [\ref{lbl_result_cache}] and distributing the DNN model across the ES [\ref{lbl_model_partitioning}]. \fi The selection of the appropriate conditional computation technique is based on the EI application's latency, memory, energy, and network requirements. Therefore, depending upon the configuration of the ES and its application requirements, one or a combination of the following techniques (Early Exit, Model Selection and/or Result Cache) is employed to empower high-quality EI services. \begin{enumerate} \item Early Exit: \label{lbl_early_exit} The main idea behind the early exit approach is to find the best tradeoff between a deep DNN structure and the latency requirements for inference. In this approach, a deep neural network trained on a specific task is partitioned across the EDGE servers. The partitioning of the DNN model is layer-wise, such that a single layer or multiple layers can reside on an ES, depending on the computation power provided by that ES. Each ES that hosts one or more layers of the DNN also attaches a shallower model (or side branch classifier) to the output of the final layer on that ES. The model is then trained as shown in Figure \ref{fig10}. The purpose of the side branch classifier is to provide an early prediction, or early exit. During inference, the data is propagated through the network (and each ES host).
Each host will calculate both the output of the hosted layers and the output of the local early exit network. If the output of the early exit network exceeds a defined confidence threshold, then the propagation stops (this is the early exit) and the `early' result is returned. If the prediction from the early exit network falls below the confidence threshold, the output of the larger DNN layers is propagated to the next ES in the chain, which holds the next layer of the larger DNN and another early exit network. This process of propagating a layer's output to the subsequent layer is carried out until one ES infers the class with a sufficiently high confidence score. This process can provide $n-1$ exit points for a DNN with an $n$-layer structure. Thus, if layer 1 of the larger DNN, together with its side branch, can infer the class with the required confidence, that output is given as the response to the end-user, eliminating any further propagation of activation values along the ES chain. \begin{figure}[ht] \centering \setlength \fboxsep{0pt} \setlength\fboxrule{0.25pt} \fbox{\includegraphics[width=3.0in]{image/N_EarlyExit.png}} \caption{Early exit adaption of a Deep Neural Network.} \label{fig10} \end{figure} Researchers in \cite{teerapittayanon2016branchynet} provided the programming framework `BranchyNet', which helps incorporate the early exit approach into a standard DNN. The framework modifies the proposed DNN by adding exit branches at certain layers. With its multiple early exit points, it can also be considered an enabler for localized inference using shallow DNN models \cite{wang2020convergence}. In \cite{li2020deepqtmt}, the authors proposed DeepQTMT to lower the encoding time spent on video compression. In DeepQTMT, the authors utilised a multi-stage early exit mechanism to accommodate the high encoding time.
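The cascaded control flow described above can be sketched as follows (a minimal toy example with random linear layers; the stage and classifier shapes are invented for illustration and do not correspond to any cited system):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def early_exit_inference(x, stages, threshold=0.9):
    """Run a cascade of (layer, exit_classifier) stages, one per EDGE server.

    Each stage transforms the activation and queries its side-branch
    classifier; if the top-class probability exceeds `threshold`,
    propagation stops and the early result is returned.
    """
    for depth, (layer, exit_clf) in enumerate(stages, start=1):
        x = layer(x)
        probs = softmax(exit_clf(x))
        if probs.max() >= threshold:
            return int(probs.argmax()), depth      # early exit at this ES
    return int(probs.argmax()), depth              # final (deepest) exit

# Toy 3-stage cascade: random linear layers with side-branch classifiers.
rng = np.random.default_rng(1)
stages = [(lambda x, W=rng.normal(size=(4, 4)): W @ x,
           lambda x, V=rng.normal(size=(3, 4)): V @ x) for _ in range(3)]

label, exit_depth = early_exit_inference(rng.normal(size=4), stages, threshold=0.5)
assert 1 <= exit_depth <= 3 and 0 <= label <= 2
```

The returned `exit_depth` indicates how many EDGE servers the query actually traversed before a confident enough prediction was found.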
Experimental results showed the encoding time was reduced by 44.65\%--66.88\% with a negligible bit-rate change of 1.32\%--3.18\%. With the early exit strategy, one benefits from low latency in the form of a faster response to the user query. The drawback of the early exit technique is that it increases the memory footprint of the DNN, thus utilising more storage at each individual EDGE server. \newline \iffalse But due to increases in the branches' memory footprint of the DNN increases significantly compared to the vanilla version of the same DNN. An early exit in the All-in EDGE paradigm helps utilise the EDGE servers, which are very close to the source of data generation. If inference does not hold good, it then passes the activation to the subsequent EDGE server, which is far from the point of data generation. Although this technique helps lower the latency to provide faster inference, it utilises more storage at individual EDGE servers. \fi \item Model Selection: The model selection approach selects a specific DNN model for inference from a set of available DNN models based on the latency, precision and energy requirements of the end-user \cite{wang2020convergence}. In a model selection strategy, multiple DNN models with varying structures are trained. The different trained models each have a specific inference latency, energy requirement, and accuracy. Once trained, each of the models is deployed to various servers. The model selection approach then selects the DNN model based on the end-user requirements \cite{zhou2019EDGE}. The model selection approach is similar to the early exit approach, with one difference: in model selection, independent DNN models are trained, whereas in early exit only one DNN is trained, over which multiple exit points are created. The authors in \cite{park2015big} proposed the concept of the BL-DNN (big/Little DNN) based on the model selection approach.
The authors proposed a score margin function, which helps decide whether or not the inference made by the little DNN is valid. The score margin is computed by subtracting the probability of the second most likely prediction from that of the most likely prediction for the given input; it therefore ranges from 0 to 1. The higher the value of the score margin, the higher the confidence that the inference is accurate; the lower the value, the lower that confidence. If the score margin is low, then the big DNN is invoked to make the inference on the same input data. This research showed a 94.1\% reduction in energy consumption on the MNIST dataset with accuracy dropping by only 0.12\%. Recently, in \cite{marco2020optimizing}, an adaptive model selection technique was used to optimise deep learning inference. The proposed framework builds a standard machine learning model which learns to predict the best DNN model to use for inference based on the input feature data. To facilitate the training of the selection model (a standard KNN model in this scenario), different pre-trained models such as Inception \cite{liu2021deep}, ResNet \cite{sarwinda2021deep} and MobileNet \cite{kadam2021detection} were evaluated on the same image dataset. For each image, the DNN model that achieved the highest accuracy is set as the output. The training data for the KNN model comprises the features extracted from each image as input and the optimal DNN model as output. Once the model selector (the KNN) is trained, it is used to determine the DNN model giving the best accuracy on the selected image. In the end, the selected DNN model makes the inference on the image, as shown in Figure \ref{fig:modelselection}. \iffalse For the image classification problem, first of all, 29 features are captured from the image.
Some of these features comprise of average brightness of the image, a seven bin histogram of EDGE lengths and angles, area of the main object, aspect ratio and hue. For training the selection model (which is KNN in this scenario), different pre-trained models like Inception \cite{liu2021deep}, Resnet \cite{sarwinda2021deep}, MobileNet \cite{kadam2021detection} were evaluated on the same image dataset. DNN model with higher accuracy for that image becomes the output from the model. Dataset for the KNN model then has features extracted from the image as input and the model to select as output. Once the model selection is trained, it is then used to determine the DNN model, giving the best accuracy on the selected image. In the end, the selected DNN model makes inference on the image as shown in Figure \ref{fig:modelselection}. \fi \begin{figure}[ht] \centering \setlength \fboxsep{0pt} \setlength\fboxrule{0.25pt} \fbox{\includegraphics[width=3.0in]{image/ModelSelection.png}} \caption{Model Selection of Deep Neural Network.} \label{fig:modelselection} \end{figure} Experimental results validated a reduction in inference time by a factor of $1.8\times$ for the classification task and $1.34\times$ for a machine translation task. Model selection facilitates a decrease in inference time. However, as the number of pre-trained DNN models increases, the memory footprint across the EDGE servers increases significantly. \newline \item Result Cache: \label{lbl_result_cache} Result cache techniques help in decreasing the time required to obtain a prediction from the EDGE server. In this approach, frequent input queries (such as frames in the case of video classification or images in the case of image classification) and the associated DNN-predicted outputs are saved in an archive on the EDGE server. So, before any query is inferred from the DNN model, an intermittent lookup happens.
In the intermittent lookup, if a query is similar to a saved query, the result is inferred from the archive (cache). Otherwise, the query goes to the DNN model for inference. This technique becomes more powerful in environments where the queries can be expected to exhibit similarity. Drolia et al. \cite{drolia2017cachier} proposed a cache-based system that leverages the EDGE server for image classification. When evaluated on image classification applications, the approach yielded up to a $3\times$ speedup in inference for the image recognition task without any drop in the model's performance (accuracy). Another system for video analysis utilised the cached convolution outputs of the CNN layers to reduce the computation needed to make an inference \cite{huynh2017deepmon}. The idea is again based on the similarity of consecutive frames in videos. Initially, in this approach, the activations from each layer of the DNN for a query frame are saved in the cache. For the subsequent frame (query), the query is pushed through the first layer and the resulting activations are compared with the previous activation values of the same layer saved in the cache. Only those activations that differ significantly from the cached version are calculated and propagated further through the network. Activations deemed similar are carried over, with their cached results, to the next layer. In their experiments, the authors showed a significant speedup of $3\times$ to $4\times$ compared to the vanilla CNN model with no change in accuracy. In other research \cite{balasubramanian2021accelerating}, the authors proposed a framework along similar result-caching lines. In this research, queries were initially passed through the DNN, and the activations of each layer were cached (archived) on the EDGE server along with the prediction from the DNN model.
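The lookup-before-inference pattern common to these systems can be sketched as follows (a generic illustration; the similarity test here is a simple distance threshold on the raw query, not the mechanism of any of the cited systems):

```python
import numpy as np

class ResultCache:
    """Lookup-before-inference: reuse a cached prediction for similar queries."""

    def __init__(self, model, tolerance=0.1):
        self.model = model
        self.tolerance = tolerance
        self.entries = []            # list of (query, prediction) pairs

    def infer(self, query):
        for cached_query, prediction in self.entries:
            if np.linalg.norm(query - cached_query) < self.tolerance:
                return prediction, True          # cache hit: skip the DNN
        prediction = self.model(query)           # cache miss: run the DNN
        self.entries.append((query, prediction))
        return prediction, False

model_calls = []
def toy_model(x):
    model_calls.append(1)            # count how often the "DNN" actually runs
    return int(x.sum() > 0)

cache = ResultCache(toy_model, tolerance=0.1)
q = np.array([0.5, -0.2])
cache.infer(q)                       # miss: runs the model
_, hit = cache.infer(q + 0.01)       # near-duplicate query: served from cache
assert hit and len(model_calls) == 1
```

The second, near-identical query never reaches the model, which is exactly the latency saving the cited systems exploit for similar consecutive frames.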
During inference, after passing the image through the layers of the DNN, the activations are checked against the saved activations of a specific layer. If the activation of a particular layer for a query matches an activation in the cache, further propagation is stopped and the cached result is returned for the query. With the VGG-16 architecture on CIFAR, this research yielded a 1.96x latency gain using a CPU and a 1.54x gain using a GPU, with no loss in accuracy. Result caching provides a significant boost in scenarios where the inference queries (e.g., frame processing for boundary identification) do not change significantly. While result caching improves the overall latency of the neural network, it also incurs a larger memory footprint. \iffalse \item Model partitioning: \label{lbl_model_partitioning} Model partitioning works on the concept of offloading computation tasks across independent servers. It leverages the layered architecture of a DNN and forward passes of activation's from initial to subsequent layers to partition the model based on the layers on different servers as shown in Figure \ref{fig11}. \begin{figure*}[h] \centering \setlength \fboxsep{0pt} \setlength\fboxrule{0.25pt} \fbox{\includegraphics[width=6.2in]{image/N_ModelPartitioning.png}} \caption{Model partitioning of Deep Neural Network.} \label{fig11} \end{figure*} Model partitioning can be subdivided into two categories based on the partition of the DNN model across the servers. In the first category, the DNN model is layer-wise partitioned and hosted in different servers. Essentially, the first layer resides in the one ES, and the second layer can reside on the same or another ES and the same hold for the subsequent layers as shown in Figure \ref{fig11}. (a). To decide on how many layers can reside inside one ES, the computational requirement of the DNN layer and computational resources which ES can offer is determined.
Then, based on the values, the number of layers is decided to be in the ES. In the second category, the DNN model is partitioned in the input dimension, which can also be visualised as horizontal partitioning of the DNN model when DNN layers are vertically stacked from left to right, as shown in Figure \ref{fig11}. (b). In \cite{kang2017neurosurgeon}, authors designed a light-weight scheduler called Neurosurgeon to partition the DNN model across two computing ES automatically. The partitioning function for the DNN of Neurosurgeon was based on the layer level partitioning strategy. As a result, Neurosurgeon achieved 1.9x average speed up on the MAUI offloading framework \cite{cuervo2010maui} without any loss inaccuracy. In another research \cite{stahl2021deeperthings}, the authors proposed an approach, DeeperThings, to enable CNN based model deployment across ES. This approach supports the partitioning of the CNN model by utilising input-dimension wise partitioning. With this approach, the authors reduced the communication demands within the DNN layers by up to 28.8\%, resulting in 1.52x faster inference in object detection tasks compared to the DNN model subjected to layer partitioning. \fi \end{enumerate} \section{Key Performance Metrics for AI at the EDGE} \label{sec:util} The application of AI at the EDGE has gathered significant momentum over the last few years. Consequently, significant changes can be seen in the selection of evaluation metrics for EI-based services. The selection of performance metrics for EI services depends on latency, bandwidth (data transferred across the backhaul network), privacy and storage requirements. In addition, some further criteria need to be considered and monitored when training AI models at the EDGE.\par This section discusses the different evaluation metrics that should be considered when developing All-in EDGE based AI models.
\iffalse {Model training at EDGE server} Model training on an EDGE server becomes challenging because of the explicit constraints of the EDGE server. At the same time, DNN models are resource hungry. Hence, it becomes critical to identify the appropriate balance of model performance within the boundaries of the available resources allocated for DNN model training. Underneath four metrics are presented to evaluate the model training at EDGE server. {Model Inference on the EDGE server} Evaluation of an EI service is more dependent on the model inference phase rather than the model learning phase. Different services have different quality of service requirements that must be met, which can be challenging in a constrained environment. Latency is one of the primary key factors which pushed the model deployment to the EDGE server. Other being the bandwidth congestion at the backhaul network due to the proliferation of IoT devices producing a large amount of data. \fi \subsection{Use-case specific metrics} Use-case specific metrics are used to determine the quality of the trained DL model and are dependent on the problem statement. For example, if the use-case is a classification problem, then accuracy, balanced accuracy, roc\_auc, etc. can be evaluated \cite{kristiani2020isec,singh2021deep,yu2021deep}. Similarly, if it is a regression-based use-case, one needs to assess max variance, R-square, root mean squared error, etc. \cite{kumar2021predictive,violos2021predicting,shitole2021optimization,moursi2021iot,farooq2021intelligent}. These metrics are widely used and are essential for the performance comparison of different model architectures and strategies deployed on the same dataset over the EDGE server. \subsection{Training Loss} The process of training a deep learning model requires the optimisation (typically minimisation) of a specific loss function.
The training loss is a metric that captures how well a DNN model fits the training data by quantifying the loss between the predicted output and the ground truth labels. Different metrics are selected based on the type of problem, {\em i.e.}, classification or regression. Some of the widely used loss functions to capture the learning of a DNN at the EDGE during training are Mean Absolute Error \cite{wang2019ecass,gao2020salient,figetakis2021uav}, Mean Square Error Loss \cite{wang2021EDGE,zhu2021learning,yang2021EDGE}, Negative Log-Likelihood Loss \cite{shao2021task,liu2021consistent,liu2021resource}, Cross-Entropy Loss \cite{matsubara2021supervised,li2021intelligent,du2021cracau,fagbohungbe2021efficient} and Kullback-Leibler divergence \cite{deng2021share,goldstein2021decentralized,yang2021learning,sun2021cooperative}. \subsection{Convergence Rate} A convergence rate is primarily computed when using a distributed or decentralised architecture to train a model at the EDGE. One of the primary goals of the distributed/decentralised DNN learning process at the EDGE is to speed up the convergence of models being trained at multiple locations. In the distributed and decentralised architectures, training happens at different EDGE servers (for the all-in EDGE paradigm) but must collectively converge to a consensus, meaning that further updating the model parameters with an iterative algorithm, {\em e.g.}, gradient descent, will not change the model's estimate for a given classification or regression problem \cite{guo2020communication}. The convergence rate metric captures the number of iterations an algorithm takes to converge to an optimal solution \cite{tang2020communication}. In decentralised/distributed architectures it thus becomes crucial, because the different combinations of the selected architecture and synchronisation schemes (synchronous, asynchronous, etc.)
have different convergence rates \cite{wu2020collaborate,nedic2020distributed,jiang2020skcompress,so2021codedprivateml}. \subsection{Latency} When inferring from a model at the EDGE, both computational latency and communication latency become critical key performance metrics. Computational latency provides an estimate of the time a DNN model requires to process a query input and produce an inference \cite{yang2021joint,zeng2021energy,liu2021light}. Communication latency, in contrast, estimates the time from when a query is sent from the originating device/server until the result is returned \cite{li2021slicing,zhang2021making,zhu2021network,shlezinger2021collaborative}. For mission-critical cases \cite{chen20213u}, DNN models with low computational and communication latency are favoured. This metric is critical because one of the most important reasons to move from the Cloud to the All-in EDGE was to reduce the latency incurred during DNN training and inference. \subsection{Communication Cost} \iffalse As a model is trained using a decentralised/distributed architecture, it is necessary to transmit intermediary output between different partitions of the model, which are hosted across multiple EDGE servers. The available network bandwidth represents a constraint on the transmission of these messages. The quantity/size of messages passed between the EDGE servers can thereby cause a bottleneck in the network. To avoid network congestion, the message size and frequency need to be evaluated while training the model. \fi When a DNN model is deployed for inference on an EDGE server, many requests are raised by the end-user(s) to consume the EI service. The volume of data transmitted from the end-user(s) has the potential to create congestion at the network EDGE server. The communication cost metric evaluates the amount of data (message size of each query) flowing to the ES from the end-user \cite{shi2020communication,lim2021decentralized}.
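As a simple illustration of how this metric could be tracked, a sketch that measures the serialized size of each query payload before it leaves the end-user device (the JSON payload format and function names are hypothetical):

```python
import json

def message_size_bytes(payload):
    """Bytes a JSON-serialized query payload would occupy on the wire."""
    return len(json.dumps(payload).encode("utf-8"))

def communication_cost(payloads):
    """Aggregate per-batch communication cost for a list of query payloads."""
    sizes = [message_size_bytes(p) for p in payloads]
    return {"queries": len(sizes),
            "total_bytes": sum(sizes),
            "max_bytes": max(sizes) if sizes else 0}
```

Such counters could feed a monitoring dashboard on the ES; the payload format here is purely illustrative.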
It also takes into account the inference data, which is returned to the end-user. Active monitoring of the communication cost is important to ensure concise data flow and prevent any potential congestion points \cite{welagedara2021review,janakaraj2021towards,tonellotto2021neural}. \subsection{Privacy-preserving Metrics} \label{pm:utilityme} Privacy-preserving metrics provide a means to quantify the degree of privacy enjoyed by users when an EI application offers privacy protection through privacy-preserving technologies \cite{wagner2018technical}. The merits of an enabling technology for preserving privacy can be assessed by its ability to minimise: \begin{itemize} \item Direct leakage, \item Indirect leakage \end{itemize} Direct leakage can occur during the training process, when the privacy of the training data can be compromised, or at inference time, when the client's data is intercepted before reaching the EI application. Protecting the client's data at inference time is managed by well-established encryption algorithms like DES, 3DES, AES, RSA and Blowfish, which do not require evaluation here \cite{vaibhavi2021survey}. However, we do need to measure the privacy of training data used in building an AI model. Depending on the enabling technology utilised to preserve privacy, different metrics can help to evaluate it. Enabling technologies in which activations are transferred from one server to another can leak private data. Technologies such as model parallelism, gradient compression, and Split Learning-based systems preserve privacy by minimising the similarity between the raw data and the intermediary activation vector sent from one server to another. Distance-correlation metrics such as pairwise correlation and the mutual information score \cite{szekely2007measuring,vepakomma2019reducing,vepakomma2020nopeek} can be utilised to quantify the leakage between the raw data and the intermediary activation vector.
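For illustration, a minimal NumPy implementation of the (biased) sample distance correlation in the spirit of \cite{szekely2007measuring}; the data passed in would be raw inputs and the corresponding intermediary activations, and is hypothetical here:

```python
import numpy as np

def distance_correlation(x, y):
    """Biased sample distance correlation between two samples (rows = observations).
    Values near 0 suggest the activations carry little information about the raw data."""
    def centered_dists(a):
        a = np.asarray(a, float).reshape(len(a), -1)
        d = np.linalg.norm(a[:, None, :] - a[None, :, :], axis=-1)
        # double-centre the pairwise distance matrix
        return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()
    A, B = centered_dists(x), centered_dists(y)
    dcov_xy = (A * B).mean()
    dcov_xx = (A * A).mean()
    dcov_yy = (B * B).mean()
    if dcov_xx == 0 or dcov_yy == 0:  # a constant sample carries no information
        return 0.0
    return float(np.sqrt(dcov_xy / np.sqrt(dcov_xx * dcov_yy)))
```
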
This metric ranges from 0 to 1, where 0 implies the raw data are independent of the intermediary activation vector. \par Similarly, indirect leakage happens when servers share model parameters during the training process. This exposes the client's dataset to inference attacks by adversaries \cite{wang2021privacy}. Enabling technologies such as aggregation frequency control, gossip training, data parallelism, and Federated Learning frequently share the model parameters with other participating servers, making them prone to leaking client data through the model parameters. Mutual information as a metric provides a way to quantify the risk of confidential information leakage from the gradients \cite{chen2021mutual}. Mutual information quantifies the amount of common information obtained about the client's data by observing the model parameters \cite{liu2021quantitative}. \iffalse Broadly, privacy is provided by one of three protection methods: Cryptographic Techniques, Perturbative Techniques, and Anonymisation Techniques \cite{shokri2021privacy,tran2019privacy}. Methods used to quantify privacy can be subdivided into the following two categories \cite{weber2021quantifying,sovrano2021survey,tran2019privacy}: \begin{enumerate} \item Privacy metrics pertaining to the privacy of the data used in training the AI model. \item Utility metrics for measuring the usefulness of protected data for building the AI model. \end{enumerate} For privacy metrics, Wagner et al. \cite{wagner2018technical} analysed over 80 existing privacy metrics. Based on the characteristics of the metrics, such as data sources, adversary models, and the input and output for computation by the AI models, all 80 metrics were divided into one of four categories. Privacy metrics categorised under \textbf{data sources} describe the sensitive attributes in the dataset and how that information can be acquired by an adversary.
Metrics falling under the \textbf{adversary models} category make assumptions regarding the motivation and capabilities of the adversary. Privacy metrics are also defined based on the \textbf{input data}, which includes the adversary's prior knowledge (such as its resources) and its estimate, {\em i.e.}, the effort the adversary makes to breach privacy, mostly based on the posterior probability distribution available to it. Lastly, privacy metrics based on \textbf{output} measures rely on eight output properties of the AI model, namely uncertainty \cite{ibrahim2021privacy,fischer2021ai}, information gain/loss \cite{coutinho2021gans,li2021privacy,song2021systematic}, data similarity \cite{sowmiya2021heuristic,guo2021practical,alam2021open,briggs2021review,gao2021privacy,das2021privacy,kiranmayi2021review}, indistinguishability \cite{xu2021artificial,wei2021low,wang2021privacy,jiang2021differential}, time \cite{liu2018location,tang2020privacy,aslanpour2020performance,beres2021blockchain}, error \cite{singh2021machine,fagbohungbe2021efficient,liu2020datamix,wang2021privacy,jasinskaite2021combining}, adversary success probability \cite{meurisch2021data,rajabzadeh2020graph,gong2021privacy,beres2021blockchain,mo2020layer,liu2020datamix} and accuracy/precision \cite{murakami2021designing,imola2021communication,baccour2021pervasive}. \label{pm:utilitymetric} Another aspect of privacy metrics is the utility trade-off. Once data has been processed by a privacy-preservation technique, it should still produce results comparable to those of a model trained on the raw data. Utility metrics help in quantifying the usefulness of protected data for the AI model and can be further divided into general and specific purposes.
For general-purpose utility metrics, information loss metrics are used to quantify the difference between the original and the transformed data (when the privacy-preserving methods are applied to the original data) \cite{jiang2021robust,zhang2021wasserstein,yuvaraj2021privacy}. In the case of specific metrics, the data is anonymised, used for the analytics task, and then evaluated based on the quality of the result produced by the AI model ({\em i.e.}, accuracy or error rate), compared with the results on the original data \cite{salman2021data,nielsen2021deep,tomei2021estimating}. Ensuring the privacy of the end-user is a critical challenge when deploying the EI service. It is the EI service provider's responsibility to minimise the privacy loss of any data flowing from the end-user to the ES. Because of the resource-constrained nature of an ES, deploying privacy-preservation techniques requiring massive computation, such as homomorphic encryption or secure multi-party computation, is not feasible. Other methods, like cryptography, can be used depending on the computational complexity of their implementation. The privacy loss metric can be customised to comprehend both the privacy of the trained model and of the users consuming the EI service. \fi \subsection{Energy consumption} There is a wide range of available DNN models, and their individual energy requirements for computation can vary significantly. In some resource-constrained environments, it becomes infeasible to host models with a larger energy footprint \cite{desislavov2021compute,desislavov2021analysis}. The energy consumption of different models should be evaluated to find the best DNN architecture deployment strategy, and the energy requirement of a DNN model should be considered for both the training and inference phases \cite{nez2021dynamic,mei2021energy,zhu2021green}. Power consumption (measured in watts or kilowatts) can be utilised to determine the energy consumption \cite{liang2020ai}.
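A rough sketch of how energy consumption could be estimated from such a power measurement (the average power draw must come from an external meter or an on-board sensor; function names are ours):

```python
import time

def estimate_energy(run_inference, n_queries, avg_power_watts):
    """Estimate per-query latency (s) and total energy (J) for serving
    n_queries, assuming a roughly constant measured power draw: E = P * t."""
    start = time.perf_counter()
    for _ in range(n_queries):
        run_inference()
    elapsed = time.perf_counter() - start
    return elapsed / n_queries, avg_power_watts * elapsed
```

This treats power as constant over the run; a finer-grained estimate would integrate sampled power readings over time.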
\subsection{Memory footprint/ Model Size} As an EDGE server usually has limited infrastructure resources, it becomes challenging to host a DNN model because of its computational requirements: the bigger the network, the more parameters it has, and each extra parameter increases the memory (RAM) requirement. Model size or memory footprint is typically reported in megabytes (MB) \cite{flamis2021best,varghese2020survey,chen2019neuropilot,merenda2020EDGE,liu2020bringing}. For an image classification problem, MobileNet~V2, with 3.54 million parameters, has a model size of 14~MB, whereas InceptionV4, with 42.74 million parameters, requires 163~MB \cite{liang2020ai}. \section{Open Challenges and future direction} Thus far, we have discussed the deep learning architectures, technologies, adaption techniques and key performance indicators required to facilitate AI in the All-in EDGE paradigm. In this section we highlight existing open challenges and future research directions in the area of AI at the All-in EDGE paradigm. \subsection{Latency} The deployment of AI at the EDGE server enables low-latency inference due to the closer proximity to the end-user. AI applications like image segmentation demand very low latency to be practical in real-world applications. Researchers from academia and industry are actively pursuing methods of decreasing latency by utilising EDGE-based AI models \cite{li2019automatic,khani2021real,ilhan2021offloading,mahendran2021computer,rahman2021internet}. Research on deploying models at the EDGE has so far focused almost exclusively on CNN-based DNN models; to date, a host of other types of DNN models have not been considered. For example, open areas of research include the EDGE deployment of models with looped layers, such as RNNs, and larger-scale models such as transformers.
Deployment of such models will help in building better applications for information retrieval, language modelling, object detection, image segmentation, etc. \iffalse Furthermore, model adaption techniques (described in section \ref{modelAdaption}) propose an innovative way to reduce the latency. Also, some hybrid approaches with different combinations of model adaption techniques and architectures (described in \ref{modelTrainingArch}) are receiving much attention \cite{okuno2021lossless,tonellotto2021neural,broekman2021real,abid2021optimizing}. \fi \subsection{Memory efficiency} DNNs are resource-hungry during both the model training and inference stages, and one of the significant challenges at the EDGE server is the limited availability of computing resources. EDGE servers share computing resources across multiple applications, and resource-hungry DNN applications have the potential to impact the normal operation of, and the resources available to, other applications running on that EDGE server. To address this issue, efforts have been made to update the hardware chipset \cite{piyasena2019reducing} to facilitate higher processing rates. Other research has examined methods of improving communication efficiency (passing data from one EDGE server to another only when it is really necessary) \cite{sattler2019robust}. Further research is needed into approaches that reduce the memory footprint of DNNs at the EDGE server by combining model adaption with improved communication efficiency, thereby meeting the memory-efficiency requirements of end-to-end EI services. \iffalse However, there has been very limited research that focuses on the selection of pre-existing memory-efficient DNN architectures \cite{gamatie2019towards, gong2020intelligent, prado2020bonseyes}. \fi \subsection{Privacy-preservation in EDGE-AI} Providing adequate privacy preservation for EI applications is an area with open research challenges.
To preserve the privacy of clients' data, different enabling technologies are utilised with or without cryptographic techniques, perturbation techniques, and anonymisation techniques \cite{shokri2021privacy,tran2019privacy}. On the one hand, these technologies and techniques provide a means to better safeguard the client's data, but they simultaneously struggle to maintain the effectiveness (accuracy, in the case of a classification problem) of the AI model \cite{kaissis2021end,alkhelaiwi2021efficient,gawali2021comparison}. On the other hand, it is not only effectiveness that is jeopardised: the efficiency (which includes training time \cite{zhang2021adaptive,thapa2021advancements} and inference time \cite{jain2022ppdl,tan2021cryptgpu}) of the AI model is also degraded. Hence, there lies an opportunity to build a platform that preserves privacy without considerably sacrificing the effectiveness and efficiency of the AI model. \iffalse find the trade-off between the EI services such as privacy to performance trade-off (as explained in the privacy utility metric section \ref{pm:utilitymetric}) \cite{yrjanainen2020privacy,aloufi2020paralinguistic,hammoud2020privacy}, and privacy to latency requirement (if privacy is achieved by an encryption technique then encryption and decryption of private data will increase the overall latency while providing the inference) \cite{zhang2021learning,yang2021joint,xing2021privacy,wei2021low}. \fi \subsection{Designing EI Application Framework for All-in EDGE paradigm} The All-in EDGE paradigm requires new ways of designing applications. In section \ref{modelTrainingArch}, we presented different architectures capable of pushing AI to the EDGE server under varying application requirements.
With the enabling technologies (model parallelism, aggregation frequency control, gossip training, gradient compression, data parallelism, Federated Learning and Split Learning, as explained in section \ref{EnablingTechnologies}) and model adaption techniques (section \ref{modelAdaption}), EI application design becomes progressively more complex. The introduction of a microservices-based architecture is another exciting area of research in the provisioning of EI applications \cite{ezzeddine2018restful}. Although various studies have proposed frameworks, they all remain confined to the problem they set out to resolve. For example, \cite{xiao2020toward} provided a framework for self-learning EI, in which the authors proposed GAN-based synthesis of traffic images; the proposed framework remains applicable only to video-based scenarios. \par Similarly, the work in \cite{an2020novel} provides a framework restricted to web traffic anomaly detection. Likewise, other works \cite{li2018EDGE,liu2019e2m,lin2020real} have their own niches, and the proposed frameworks are restricted to solving specific problem types. To the authors' knowledge, OpenEI \cite{zhang2019openei} is the only framework that provides a generic approach to facilitate the development of applications for a wide range of problem domains (computer vision, natural language processing, etc.). Still, this framework lacks the components of hardware (choices in the selection of hardware accelerators that can help in faster DNN computation, {\em e.g.}, \cite{hashemi2021darknight,shehzad2021scalable,zaman2021custom,mittal2021survey}) and of the deployment of the EI services (how to distribute load and develop a global model across the EDGE servers, section \ref{EnablingTechnologies}). Therefore, there is a need for a robust framework that supports the easy deployment of complex EI architectures while striking the best trade-off between application requirements and EDGE server resources.
\section{Conclusion} The exploding volume of data due to the proliferation of EDGE devices, together with advancements in resource-hungry Deep Learning (\emph{e.g.}, Deep Neural Network) models, leads to new challenges that need to be considered to enable Deep Learning in the All-in EDGE paradigm. In this regard, this survey paper focused on the current state of the art facilitating Deep Learning in the All-in EDGE paradigm. We initially performed a thorough review of the various levels of EDGE Intelligence. We subsequently focused on the All-in EDGE paradigm and the motivation behind the adoption of EDGE Intelligence. We presented an overview of the architectures, enabling technologies and model adaption techniques that enable EDGE Intelligence through Deep Learning. Then, we presented the key performance metrics that should be tracked to analyse All-in EDGE services and Deep Learning techniques at the EDGE. Finally, we highlighted open challenges and future research directions. \section*{Acknowledgment} This research was conducted with the financial support of the ADVANCE CRT PhD Cohort under Grant Agreement No. 18/CRT/6222 and at the ADAPT SFI Research Centre at Cork Institute Of Technology. The ADAPT SFI Centre for Digital Media Technology is funded by Science Foundation Ireland through the SFI Research Centres Programme and is co-funded under the European Regional Development Fund (ERDF) through Grant 13/RC/2106. \bibliographystyle{ACM-Reference-Format} \subsection{Experimental setup} Specify the context and setup of your experiments. This includes e.g.~what hardware (VMs) you are running on, what operating system these machines are running, how they are connected, ... Also explain how you generate load for your system and what parameters you used. The general idea is to include enough information for others to reproduce your experiments. To that end, you should provide a detailed set of instructions for repeating your experiments.
These instructions should not be included in the report, but should be provided as part of the source code repository on GitHub, typically as the \texttt{README.md} file, or as Shell scripts or Ansible scripts. If your experiments give strange or unexpected results, analyze, profile and debug your code. \textbf{Do not simply re-run experiments until they give the expected results.} Finally, running experiments is very time consuming and you may need to go through multiple rounds of experiments, debugging and optimization. \textbf{Do not delay running experiments until the end of your project period.} \subsection{Results} The results of your experiments. Compare different variants of your design (e.g.~with and without optimizations) or compare performance to other designs or systems. Plots should show the average over multiple runs (at least 10 as a rule of thumb), including error bars, percentiles or min/max values. Discuss the plot and extract the overall performance. Do not repeat all numbers in the text, but mention relevant differences in numbers, e.g.~our optimization improves throughput by 26\%. Discuss how the results validate or contradict your assumptions. Perform experiments to evaluate your system under normal operating conditions, when experiencing failures or attacks, or with different workloads. Figure~\ref{fig:graph} shows a graph generated with \texttt{pgfplots} from experiment data. \begin{figure} \begin{tikzpicture} \begin{axis}[ xlabel=Throughput ($\times 1000$ Ops/sec), ylabel=Latency (ms), legend entries={baseline, optimized}, ] \addplot table [x=throughput, y=latency, col sep=comma] {data/data-unoptimized.csv}; \addplot table [x=throughput, y=latency, col sep=comma] {data/data-optimized.csv}; \end{axis} \end{tikzpicture} \caption{A graph showing latency and throughput of a baseline and optimized implementation. The axes show latency in milliseconds, and throughput in thousand operations per second.
Data is made up.} \label{fig:graph} \end{figure} \subsection{You can add subsections} You can use numbered subsections to structure your sections or $\backslash\texttt{paragraph}$ to separate paragraphs with a heading using only a few words, as used in Section~\ref{sec:introduction}. \begin{enumerate} \item\label{item:1} This is an item in an enumeration. \begin{itemize} \item This is an item of an unnumbered list. In this case, the lists are nested within each other. \end{itemize} \item\label{item:2} The second point in the enumerated list. \end{enumerate} I can refer to the element of the enumeration above as Point~\ref{item:1}. If you refer to numbered items, e.g.~items from a list or figures or sections, always capitalize the name. For example, this is Section~\ref{sec:latex}. \subsection{Figures} You can include figures. You can include files, as done in Figure~\ref{fig:example}. Avoid including jpeg, gif or bmp files since these do not scale nicely; Figure~\ref{fig:example} is an example of that. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{fig/RegisterOperations} \caption{Figure taken from~\cite{lecture}} \label{fig:example} \end{figure} You can create graphs from your experiment data using \texttt{pgfplots}. See the example in \texttt{tex/evaluation.tex} and the documentation at \url{http://pgfplots.sourceforge.net/pgfplots.pdf}. \subsection{Other tips} \begin{itemize} \item Always use the tilde character between the number and unit, e.g.~100~Mbps or 53~ms. The tilde inserts a space, but prevents a line break between the number and unit. \item Never put SI units in italics. \item Do differentiate between bits (b) and bytes (B), and between powers of 10~(MB) and 2~(MiB). \item Avoid things like: ``We refer the reader to [42].'' That is, don't use citations as nouns. \end{itemize} For further instructions on how to add \textbf{Tables, Algorithms, Theorems see acmguide.pdf}. \subsection{Design} Concisely present your design.
Focus on novel aspects, but avoid implementation details. Use pseudo-code and figures to better explain your design, but also present and explain these in the text. To assist in evaluating your design choices, it may be relevant to describe several distinct \textit{design alternatives} that can later be compared. \subsection{Analysis} Argue for the correctness of your design and why you expect it to perform better than previous work. If applicable, mention how your design relates to theoretical bounds. \subsection{Optimization} Explain how you optimized your design and adjusted it to specific situations. Generally, as important as the final results is to show that you took a structured, organized approach to the optimization and that you explain why you did what you did. Be careful to argue why your optimization does not break the correctness of your design, established above. It is often a good strategy to explain a design or protocol in stepwise refinements, so as to more easily convince the reader of its correctness. \subsection{Implementation} It is not necessary to ``explain'' your code. However, in some cases it may be relevant to highlight additional contributions given by your implementation. Examples of such contributions are: \begin{itemize} \item \emph{Abstractions and modules}: If your implementation is nicely separated into interacting modules with separated responsibilities, you could explain this structure, and why it is good/better than some alternative structure. \item \emph{Optimization}: If you spent significant time optimizing your code, e.g.~using profiling tools, the result of such optimization may be presented as a contribution. In this case, explain why the optimized code works better. \item \emph{Evaluation framework}: If you implemented a framework or application to evaluate your implementation against existing work, or in a specific scenario, this framework may be presented as a contribution.
\end{itemize} Make sure to cite all external resources used.
\section{Introduction} The two-dimensional $\phi^4$ theory is perhaps the simplest quantum field theory (QFT) which is not exactly solvable. It is thus an ideal laboratory for studying approximate solution techniques. In our recent paper \cite{Rychkov:2014eea}, we studied this theory using the method of \emph{Hamiltonian truncation}---a QFT analogue of the Rayleigh-Ritz method in quantum mechanics. In that work, we considered the case of positive bare mass $m^2>0$ and of quartic coupling $g=\bar g m^2$ with $\bar g=O(1)$. The physical particle mass is given by \beq m_{\rm ph} = f(\bar g) m\,, \end{equation} and the function $f(\bar g)$ was determined numerically. We observed that the physical mass vanishes for $\bar g=\bar g_c\approx 3$, signaling the presence of a second order phase transition. In \cite{Rychkov:2014eea}, our focus was mainly on the region below and around the critical coupling $\bar g_c$. In this second work of the series we will instead be interested in the complementary region $\bar g>\bar g_c$. In this range of couplings the theory is massive, but the $\bZ_2$ symmetry, $\phi\to-\phi$, is spontaneously broken. In infinite volume, there are therefore two degenerate vacua, and two towers of massive excitations around them. We will be able to determine the low energy spectrum as a function of $\bar g$. In finite volume the exact degeneracy is lifted, and the energy eigenstates come in pairs split by a small amount, exponentially small if the volume is large. In this paper, as in \cite{Rychkov:2014eea}, we will regulate the theory by putting it in finite volume. In the $\bZ_2$-broken phase, there is also a topologically nontrivial sector of ``kink'' states corresponding, in the semiclassical limit, to field configurations interpolating between the two vacua. In this work we will probe the kink mass by studying the mass splittings in the topologically trivial sector. In the future it would be interesting to study the kink sector directly.
One interesting feature of the theory under study is that it enjoys a weak/strong coupling duality first discussed by Chang \cite{Chang:1976ek}. The dual description exists for all $\bar{g}\geqslant \bar{g}_*\approx 2.26$. As we review below, the duality relates a description in which the theory is quantized around the $\bZ_2$-invariant vacuum state to an equivalent description in which it is quantized around a $\bZ_2$-breaking vacuum. For $\bar{g}$ not much above $\bar{g}_*$ both descriptions are strongly coupled\footnote{This explains why $\bar{g}_*$ need not be equal, and in fact is not equal to the critical coupling $\bar{g}_c$ mentioned above.} and they can be equivalently employed as a starting point for the numerical computations. In section \ref{sec:chang} we present a comparison between the numerical spectra obtained using the two descriptions, serving both as a non-trivial test of the method and as a check of the Chang duality. On the other hand, for $\bar{g} \gg \bar{g}_*$ the dual description becomes weakly coupled, and provides the better starting point. In section \ref{sec:weakcoupling}, we will explain a modification of the method which can be used, among other things, to study this regime (a weakly coupled $\phi^4$ theory with negative $m^2$) efficiently. It is based on a different treatment of the zero mode of the field. We will compare the numerical results with the predictions from perturbation theory and from semiclassical analyses. We conclude in section \ref{sec:conclusions}. Several technical details are relegated to the appendices. Recently, the $\bZ_2$-broken phase of the two-dimensional $\phi^4$ model was studied in Ref.~\cite{Coser:2014lla} using a version of the Truncated Conformal Space Approach \cite{Yurov:1989yu,Lassig:1990xy}. Differences and similarities between our works will be mentioned throughout the paper. 
\section{The Chang duality} \label{sec:chang} \subsection{Formulation and consequences} According to Chang \cite{Chang:1976ek}, the two-dimensional $\phi^4$ theory described by the (Euclidean) Lagrangian \beq \mathcal{L}= {\textstyle\frac 12} (\partial\phi)^2 +{\textstyle\frac 12} m^2 \phi^2 + g\, N_m(\phi^4) \label{eq:L} \end{equation} with $m^2>0$, $g>0$, admits a dual description in terms of a Lagrangian with a different, and negative, value of the squared mass: \beq \mathcal{L'}= {\textstyle\frac 12} (\partial\phi)^2 - {{\textstyle\frac 14}} M^2 \phi^2 + g\, N_M(\phi^4)\,. \label{eq:L'} \end{equation} The actual value of the dual mass will be given below. Note that the duality is between quantum theories in the continuum limit, and to specify this limit one has to subtract the logarithmic divergence of the mass parameters. The divergence is removed by normal-ordering the quartic interaction with respect to the mass indicated in the subscript of the normal ordering sign $N$. The potential in $\mathcal{L'}$ has two minima at $\phi=c=\pm M/\sqrt{8g}$. After the shift $\phi\to\phi+c$ the dual Lagrangian becomes\footnote{Notice that normal ordering is a linear operation, and thus commutes with the field shift.} \beq \label{eq:shiftedpotential} \mathcal{L'} \to {\textstyle\frac 12} (\partial\phi)^2 + {\textstyle\frac 12} M^2 \phi^2 + \sqrt{2g}M\, N_M(\phi^3) + g\, N_M(\phi^4)\,. \end{equation} In this way of writing, interactions of both $\mathcal{L}$ and $\mathcal{L}'$ are normal ordered with respect to the mass appearing in the quadratic part of the Lagrangian. In perturbation theory such normal ordering means that we are simply forbidding diagrams with the lines starting and ending in the same vertex. 
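The shift leading to \eqref{eq:shiftedpotential} is straightforward to verify symbolically; a quick check (Python/SymPy, in our notation):

```python
import sympy as sp

phi, g, M = sp.symbols('phi g M', positive=True)
c = M / sp.sqrt(8 * g)                               # minimum of the dual potential
V = -sp.Rational(1, 4) * M**2 * phi**2 + g * phi**4  # potential term of L'
Vshift = sp.expand(V.subs(phi, phi + c))             # shift to the broken vacuum
Vshift = sp.expand(Vshift - Vshift.subs(phi, 0))     # drop the constant piece
expected = (sp.Rational(1, 2) * M**2 * phi**2
            + sp.sqrt(2 * g) * M * phi**3 + g * phi**4)
assert sp.simplify(Vshift - expected) == 0           # matches the shifted potential
```

In particular, the linear term cancels, confirming that $\phi=c$ is an extremum, and the quadratic term comes out with coefficient $+\frac{1}{2}M^2$.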
To find the dual mass $M^2$, one is instructed to solve the equation: \begin{gather} \label{eq:ch0} F(X)=f(x)\,, \end{gather} where $x=g/m^2$, $X=g/M^2$ are the dimensionless quartic couplings of the two descriptions ($x$ is given and $X$ is the unknown) and \beq f(x)\equiv\log x-\pi/(3 x)\,,\qquad F(X)\equiv\log X+\pi/(6X)\,. \end{equation} This equation is illustrated in Fig.~\ref{fig:eq}. There is no solution for \beq x<x_*=\frac{\pi}{3 W(2/e)}\approx 2.26149\,, \end{equation} where $W(z)$ is the Lambert $W$ function. For $x\geqslant x_*$ there are two solution branches. We are particularly interested in the lower branch $X_1(x)$, which for large $x$ approaches zero: \beq X_1(x)\approx \pi/(6 \log x),\quad x\to \infty\,. \end{equation} The dual description corresponding to this branch becomes weakly coupled in the limit in which the original description becomes more and more strongly coupled. We thus have a weak/strong coupling duality. \begin{figure}[h] \begin{center} \includegraphics[scale=0.4]{fig-eq1.pdf}\hspace{1cm} \includegraphics[scale=0.4]{fig-eq2.pdf} \end{center} \caption{Left panel: the equation $F(X)=f(x)$ has two solutions for $x>x_*$. Right panel: the two solution branches $X_{1,2}(x)$. We are mostly interested in the lower branch $X_1(x)$, which becomes weakly coupled as $x\to\infty$. } \label{fig:eq} \end{figure} Chang \cite{Chang:1976ek} used this duality to show that the $\phi^4$ theory undergoes a phase transition. Indeed, for small $x$ we can use perturbation theory to argue that the theory is in the symmetric phase, with the $\bZ_2$ symmetry $\phi\to -\phi$ unbroken. On the other hand, for large $x$ we use the dual description. Since in that description the potential is a double well, and moreover the dual coupling is weak for $x\gg 1$, we conclude that for large $x$ the $\bZ_2$ symmetry is spontaneously broken. By continuity, there must be a phase transition at an intermediate value of $x$.
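The threshold $x_*$ and the two branches $X_{1,2}(x)$ are straightforward to obtain numerically. A minimal pure-Python sketch (not the code used for the paper's computations), with the Lambert $W$ function evaluated by Newton iteration and the branches by bisection:

```python
# Pure-Python sketch (not the paper's code) of solving the duality relation
# F(X) = f(x): the threshold x_* via the Lambert W function (Newton iteration)
# and the two branches X_1(x) < X_2(x) via bisection.
import math

def f(x):
    return math.log(x) - math.pi / (3.0 * x)

def F(X):
    return math.log(X) + math.pi / (6.0 * X)

def lambert_w(z):
    """Principal branch of Lambert W for z > 0, by Newton iteration."""
    w = math.log(1.0 + z)
    for _ in range(100):
        ew = math.exp(w)
        w -= (w * ew - z) / (ew * (1.0 + w))
    return w

x_star = math.pi / (3.0 * lambert_w(2.0 / math.e))   # ~ 2.26149

def bisect(func, a, b, iters=200):
    fa = func(a)
    for _ in range(iters):
        c = 0.5 * (a + b)
        if fa * func(c) <= 0.0:
            b = c
        else:
            a, fa = c, func(c)
    return 0.5 * (a + b)

def dual_couplings(x):
    """The two roots (X1, X2) of F(X) = f(x) for x >= x_*; F is minimized
    at X = pi/6, so X1 < pi/6 < X2 (and X2 <= x, since F(x) > f(x))."""
    target = f(x)
    Xmin = math.pi / 6.0
    X1 = bisect(lambda X: F(X) - target, 1e-8, Xmin)
    X2 = bisect(lambda X: F(X) - target, Xmin, x)
    return X1, X2
```

For instance, at $x=3$ the weakly coupled branch gives $X_1\approx 0.24$.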
This argument does not establish whether the transition is first or second order. However, as explained in \cite{Chang:1976ek}, a first order transition is excluded by rigorous theorems due to Simon and Griffiths \cite{Simon:1974dg}. So the transition must be second order. This conclusion is supported by Monte Carlo simulations \cite{Loinaz:1997az,Schaich:2009jk,Wozar:2011gu,Bosetti:2015lsa}, as well as by computations using DLCQ \cite{Harindranath:1987db}, density matrix renormalization group \cite{Sugihara:2004qr}, matrix product states \cite{Milsted:2013rxa}, and the Hamiltonian truncation \cite{Lee:2000ac,Lee:2000xna,Rychkov:2014eea}. Nor does the above argument predict the value of $x$ at which the phase transition must happen. In particular, the fact that the dual description exists at $x\geqslant x_*$ does not mean that the phase transition happens at $x=x_*$. Indeed, at $x=x_*$ both the direct and the dual descriptions are strongly coupled, and the fate of the $\bZ_2$ symmetry is not \emph{a priori} clear. In fact, calculations indicate a higher phase transition location at $x_c\approx 2.75 - 3$ \cite{Rychkov:2014eea,Wozar:2011gu,Milsted:2013rxa,Bosetti:2015lsa,Pelissetto:2015yha}. \subsection{Review of the derivation} Here's a quick derivation of the Chang duality, following \cite{Chang:1976ek}. We will work in the Hamiltonian formalism, and consider the normal-ordered Hamiltonians corresponding to $\mathcal{L}$ and $\mathcal{L'}$: \begin{gather} H= \int dx\, N_m\bigl({\textstyle\frac 12} \dot \phi^2 + {\textstyle\frac 12} \phi'^2 + {\textstyle\frac 12} m^2 \phi^2 +g\, \phi^4\bigr) \label{eq:H}\,,\\ H'= \int dx\, N_M\bigl({\textstyle\frac 12} \dot \phi^2 + {\textstyle\frac 12} \phi'^2 -{{\textstyle\frac 14}} M^2 \phi^2 + g\,\phi^4+\Lambda\bigr)\,. \label{eq:H'0} \end{gather} Notice that we are now normal ordering the full Hamiltonian, including the quadratic part. 
This more careful procedure will allow us to establish the correspondence also for the ground state energy. In the dual description it will receive an extra constant contribution, denoted~$\Lambda$ in \reef{eq:H'0}. Recall the Coleman relations \cite{Coleman:1974bu} between normal orderings with respect to different masses: \begin{gather} N_m\bigl({\textstyle\frac 12} \dot \phi^2 + {\textstyle\frac 12} \phi'^2\bigr) = N_M\bigl({\textstyle\frac 12} \dot \phi^2 + {\textstyle\frac 12} \phi'^2\bigr)+ Y\,,\nn\\ N_m(\phi^2) = N_M(\phi^2) + Z\,,\label{eq:coleman}\\ N_m(\phi^4) = N_M(\phi^4) + 6 Z N_M(\phi^2)+ 3 Z^2\,, \nn \end{gather} where $Y = Y(m,M)$ and $Z = Z(m,M)$ are the differences of the normal-ordering constants:\footnote{The expression for $Z$ can also be equivalently derived in the Lagrangian language as the difference of one-loop massive diagrams: $ Z = \int \frac{d^2 k}{(2\pi)^2} \bigl( \frac{1}{k^2 + M^2} - \frac{1}{k^2 + m^2} \bigr)\,. $} \begin{gather} Y(m,M)=\int \frac{dk}{8\pi}\left\{ \frac{2 k^2+M^2}{\sqrt{k^2+M^2}}-(M\to m)\right\}=\frac 1{8\pi}(M^2 - m^2)\,,\nn\\ Z(m,M)=\int \frac{dk}{4\pi} \left\{ \frac{1}{\sqrt{k^2+M^2}}-(M\to m)\right\}=\frac 1{4\pi}\log \frac{m^2}{M^2}\,. \label{eq:YZ} \end{gather} Using these relations, one can see that $H$ maps onto $H'$ provided that \beq \label{eq:M2} {\textstyle\frac 12} m^2+6 Z g =-{{\textstyle\frac 14}} M^2\,, \end{equation} which can be rewritten equivalently as \reef{eq:ch0}. We also find a constant contribution to the ground state energy \beq \label{eq:Lambda} \Lambda = Y +{\textstyle\frac 12} m^2 Z +{3 g} Z^2\,. \end{equation} \subsection{Numerical check of the duality} \label{sec:setup} We will test the Chang duality by comparing the spectra of the direct and dual theories in a finite volume---a circle of length $L$. The spectra will be computed using the Hamiltonian truncation. We will first describe the setup for these computations, and then present the results.
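One can also check numerically that the matching condition \reef{eq:M2}, with $Z$ given by \reef{eq:YZ}, is equivalent to the duality relation \reef{eq:ch0}. An illustrative pure-Python script (not the paper's code; the sample values $m^2=1$, $g=3$ are arbitrary, chosen so that $x>x_*$ and both roots exist):

```python
# Illustrative check (not the paper's code) that the matching condition
#   (1/2) m^2 + 6 Z(m,M) g = -(1/4) M^2,  Z(m,M) = log(m^2/M^2)/(4 pi),
# reproduces the duality relation F(X) = f(x). Sample values m^2 = 1, g = 3,
# i.e. x = 3 > x_*, so that two roots in M^2 exist.
import math

def f(x):
    return math.log(x) - math.pi / (3.0 * x)

def F(X):
    return math.log(X) + math.pi / (6.0 * X)

def matching_residual(M2, m2, g):
    Z = math.log(m2 / M2) / (4.0 * math.pi)
    return 0.5 * m2 + 6.0 * Z * g + 0.25 * M2

def solve_dual_mass_sq(m2, g):
    """Larger-M^2 root of the matching condition (the weakly coupled branch)."""
    lo = 6.0 * g / math.pi        # location of the residual's minimum in M^2
    hi = lo
    while matching_residual(hi, m2, g) < 0.0:
        hi *= 2.0                 # expand until the residual changes sign
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if matching_residual(mid, m2, g) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

m2, g = 1.0, 3.0
M2 = solve_dual_mass_sq(m2, g)
X, x = g / M2, g / m2
# F(X) - f(x) vanishes up to numerical accuracy
```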
\subsubsection{Direct theory} \label{sec:directtheory} By the direct theory we mean \reef{eq:H} put on a circle of length $L$. This is precisely the theory we were studying in \cite{Rychkov:2014eea}, and we will be following the same method. Here we will give just a brief reminder. The finite volume Hamiltonian corresponding to the infinite-volume Hamiltonian \reef{eq:H} is given in \cite{Rychkov:2014eea}, Eq.~(2.19), and has the form: \beq H(L) = H_0 + g[V_4+ 6 \zeta V_2]+ [E_0+3 \zeta^2 g L]\,. \label{eq:HL} \end{equation} Here $H_0$ is the Hamiltonian of the free scalar field on the circle: \beq H_0(L,m)=\sum_k \omega(k) a^\dagger_k a_k,\quad k=(2\pi/L)n,\ n\in\bZ,\quad \omega(k)=\sqrt{m^2+k^2}\,, \end{equation} where $a,a^\dagger$ are the ladder operators appearing in the field mode expansion: \begin{equation} \label{eq:modeexp} \phi(x) = \sum_{k} \frac{1}{\sqrt{2L \omega(k)}} \left( a_k e^{i k x} + a_k^\dagger e^{-i k x}\right)\,. \end{equation} The $V_4$ term is the normal-ordered quartic interaction: \beq V_4(L,m) = \frac{1}{L} \sum_{\sum k_i=0} \frac{1}{\prod \sqrt{2 \omega(k_i)}} \Big[ (a_{k_1}a_{k_2}a_{k_3}a_{k_4} + 4 a^\dagger_{-k_1}a_{k_2}a_{k_3}a_{k_4} +\text{h.c.}) + 6 a^\dagger_{-k_1}a^\dagger_{-k_2}a_{k_3}a_{k_4} \Big]\,. \end{equation} The other terms in \reef{eq:HL} are all exponentially suppressed for $Lm\gg 1$. In particular, \beq \label{eq:E0L} E_0(L,m)=-\frac{1}{\pi L}\int_0^\infty dx \frac{x^2}{\sqrt{m^2L^2+x^2}} \frac{1}{e^{\sqrt{m^2L^2+x^2}}-1} \end{equation} is the Casimir energy of the free scalar field in finite volume. Corrections involving \beq \zeta(L,m)=\frac{1}{\pi }\int_0^\infty \frac{dx}{\sqrt{m^2L^2+x^2}}\frac{1}{e^{\sqrt{m^2L^2+x^2}}-1} \end{equation} are due to a mismatch between the normal-ordering counterterms needed to define the $\phi^4$ operator in infinite space and on the circle.
One of them contributes to the vacuum energy density, and the other is a correction proportional to the mass operator $V_2$: \beq V_2(L,m)=\sum_k \frac{1}{2 \omega_k}( a_k a_{-k} + a^\dagger_k a^\dagger_{-k} +2 a^\dagger_k a_k)\,. \end{equation} In \cite{Rychkov:2014eea} we worked at circle sizes up to $L=10 m^{-1}$, and it was justified to neglect the exponentially small terms proportional to $E_0$ and $\zeta$. Here, in some cases, we will work at smaller circle sizes. In this paper we will therefore always keep these terms, which is actually straightforward in our algorithm. The Hilbert space $\calH$ of the theory is the Fock space generated by the ladder operators $a^\dagger_k$. As in \cite{Rychkov:2014eea}, we will restrict our attention to the subsector of the Hilbert space consisting of the states of zero total momentum $P=0$ and of positive spatial parity $\bP=1$. The Hamiltonian \reef{eq:HL} does not mix states of positive and negative field parity $\bZ_2:\phi\to-\phi$ (i.e.~the states containing an even and an odd number of particles). Thus the $\bZ_2$-even and $\bZ_2$-odd sectors can be studied separately. We will study both of them. Finally, we will truncate the Hilbert space to the subspace $\calH(E_{\rm max})$ of states which have $H_0$ energy below a certain cutoff $E_{\rm max}$. Typically, we choose our cutoff so that the dimension of $\calH(E_{\rm max})$ is $\sim$10000 per $\bZ_2$ sector. The Hamiltonian $H(L)$ restricted to the truncated Hilbert space is called the truncated Hamiltonian $H(L)_{\rm trunc}$. We evaluate the matrix elements of $H(L)_{\rm trunc}$, and the eigenvalues of the resulting finite matrix are then computed numerically. This gives what in \cite{Rychkov:2014eea} is called the ``raw'' spectrum. It converges to the true nonperturbative spectrum with a rate which asymptotically goes as $1/E_{\rm max}^2$. Convergence of the method can be improved by renormalizing the couplings.
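For illustration, the enumeration of truncated-basis states can be sketched as follows. This is a simplified pure-Python count (not the paper's code): unlike the actual computation, it imposes only the $P=0$ constraint, with no spatial- or field-parity refinements.

```python
# Illustrative sketch (not the paper's actual code) of enumerating the
# truncated Fock basis: occupation-number states |{N_k}> with total free
# energy sum_k omega(k) N_k <= Emax and total momentum P = 0.
import math

def basis_size(L, m, Emax):
    """Number of zero-momentum Fock states below the energy cutoff."""
    # highest mode index whose single-particle energy fits under the cutoff
    nmax = int(L / (2.0 * math.pi) * math.sqrt(max(Emax ** 2 - m ** 2, 0.0)))
    modes = list(range(-nmax, nmax + 1))
    omega = {n: math.sqrt(m ** 2 + (2.0 * math.pi * n / L) ** 2) for n in modes}
    count = 0

    def scan(i, energy, momentum):
        nonlocal count
        if i == len(modes):
            if momentum == 0:
                count += 1
            return
        n, occ = modes[i], 0
        while energy + occ * omega[n] <= Emax + 1e-9:   # prune by energy
            scan(i + 1, energy + occ * omega[n], momentum + occ * n)
            occ += 1

    scan(0, 0.0, 0)
    return count

# Example: on a circle of length L = 2*pi with m = 1 the momenta are integers;
# below Emax = 2.1 the P = 0 states are |0>, a0'|0>, and (a0')^2|0>.
```

In the actual computation the basis is further split into $\bZ_2$-even and $\bZ_2$-odd sectors, and the renormalization of the couplings mentioned above is applied on top.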
We refer the reader to \cite{Rychkov:2014eea} for a detailed explanation of the renormalization procedure.\footnote{Similar renormalization procedures were developed in the Truncated Conformal Space Approach literature \cite{Giokas:2011ix,Feverati:2006ni,Watts:2011cr,Lencses:2014tba}. The concrete version used by us shares a lot in common with the one in \cite{Hogervorst:2014rta}; the small differences that exist were stressed in \cite{Rychkov:2014eea}.} In the present work we will use an identical procedure, apart from a technicality that we now explain. In \cite{Rychkov:2014eea}, the leading renormalization coefficients were calculated by extracting the leading non-analytic behavior for $\tau \to 0$ of the quantities \begin{equation} \label{eq:ik} I_k(\tau) = \int_{- L/2}^{L/2} d z\, G_L(z,\tau)^k\,, \end{equation} where $G_L(z,\tau)$ is the two point function in finite volume, which can be expressed through periodization via the two point function in infinite volume: \begin{gather} G_L (z, \tau) = \sum_{n \in \mathds{Z}} G(\sqrt{(z+n L)^2 + \tau^2})\,, \label{eq:gl} \\ G(\rho) \equiv \frac{1}{2 \pi} K_0(m \rho) \,, \quad \rho \equiv \sqrt{z^2+\tau^2}\,. \end{gather} Here $K_0(m \rho)$ is a modified Bessel function of the second kind. Since $G(\rho)$ is exponentially suppressed for $m \rho \gg 1$, the contributions from $n\ne 0$ in (\ref{eq:gl}) can be neglected as long as $m L \gg 1$. This is what we did in \cite{Rychkov:2014eea}. However in the present work we will encounter also the situation $m L = O(1)$. Our procedure will be to approximate: \begin{equation} \label{eq:approxgl} G_L(z, \tau) \simeq G(\rho) + 2 \sum_{n=1}^{\infty} G(n L)\,, \end{equation} which simply adds a constant to the infinite-volume two point function. This approximation is justified because the higher order Taylor expansion terms of $G(\rho)$ around $\rho = n L$ would result in renormalization terms suppressed by powers of $m^2/E_{\rm max}^2 \ll 1$. 
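The exponentially small finite-volume quantities entering these formulas---$E_0(L,m)$, $\zeta(L,m)$, and the constant $2\sum_{n\geqslant 1}G(nL)$---are straightforward to evaluate by direct quadrature. An illustrative pure-Python sketch (not the paper's code), with $K_0$ computed from the integral representation $K_0(z)=\int_0^\infty e^{-z\cosh t}\,dt$:

```python
# Illustrative pure-Python evaluation (not the paper's code) of the
# exponentially small finite-volume constants: the Casimir energy E_0(L,m),
# the normal-ordering mismatch zeta(L,m), and the shifted mass
# m' = m exp(-4 pi sum_n G(nL)) with G(r) = K_0(m r)/(2 pi).
import math

def simpson(func, a, b, n=2000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = func(a) + func(b)
    for i in range(1, n):
        s += func(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3.0

def E0(L, m):
    def integrand(x):
        w = math.sqrt((m * L) ** 2 + x ** 2)
        return x ** 2 / w / (math.exp(w) - 1.0)
    return -simpson(integrand, 0.0, m * L + 60.0) / (math.pi * L)

def zeta(L, m):
    def integrand(x):
        w = math.sqrt((m * L) ** 2 + x ** 2)
        return 1.0 / w / (math.exp(w) - 1.0)
    return simpson(integrand, 0.0, m * L + 60.0) / math.pi

def K0(z):
    # modified Bessel function of the second kind, integral representation
    return simpson(lambda t: math.exp(-z * math.cosh(t)), 0.0, 12.0)

def m_prime(L, m):
    s, n = 0.0, 1
    while True:
        term = K0(n * m * L)      # 2 pi G(nL)
        s += term
        if term < 1e-16:
            break
        n += 1
    return m * math.exp(-2.0 * s)
```

The exponential suppression at $Lm\gg 1$ is manifest: all three quantities decay like $e^{-mL}$, so at $mL=10$ they are numerically negligible, while at $mL=O(1)$ they must be kept.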
The short-distance asymptotics of $G_L$ used to calculate (\ref{eq:ik}) is modified as (cf.~(3.23) in \cite{Rychkov:2014eea}): \begin{equation} G_L(z,\tau) \approx - \frac{1}{2 \pi} \log \left(\frac{e^\gamma}{2} m' \rho \right) \,, \quad m' \equiv m \exp\Bigl[- 4 \pi \sum^\infty_{n=1} G(n L)\Bigr]\,. \end{equation} It is then straightforward to generalize the renormalization procedure of \cite{Rychkov:2014eea} to the case $m L =O(1)$. E.g.~the Hamiltonian renormalized by local counterterms is given by: \beq \label{eq:Hren} H(L)_{\rm ren}=H_{\rm trunc}(L)+\int dx\, N_m(\kappa_0 +\kappa_2 \phi^2 + \kappa_4 \phi^4)\,, \end{equation} where the $\kappa_i$ are given in \cite{Rychkov:2014eea}, (3.34), where one has to put $g_4=g$, $g_2=6\zeta(L)g$, and replace $m\to m'$ in the expressions for the $\mu$-functions in \cite{Rychkov:2014eea}, (3.31). This Hamiltonian allows one to calculate the spectrum with a convergence rate of $1/E_{\rm max}^3$. In the numerical computations in section \ref{sec:result} we will also include subleading, non-local corrections improving the convergence rate up to $1/E_{\rm max}^4$, for which we refer the reader to \cite{Rychkov:2014eea}. \subsubsection{Dual theory} \label{sec:dualtheory} The Hamiltonian for the dual theory in finite volume is most easily derived as follows. Let us rewrite $H'$ in \reef{eq:H'0} by adding and subtracting ${\textstyle\frac 12} M^2\phi^2$: \beq \label{eq:H'} H'= \int dx\, N_M\bigl({\textstyle\frac 12} \dot \phi^2 + {\textstyle\frac 12} \phi'^2 + {\textstyle\frac 12} M^2\phi^2\bigr) +N_M\bigl(-{{\textstyle\frac 34}} M^2 \phi^2 + g\,\phi^4+\Lambda\bigr)\,. \end{equation} This looks like the direct Hamiltonian with $m\to M$ and an extra negative mass squared perturbation. The passage to a finite volume is then analogous to the direct theory.
We get \begin{gather} H'(L) = H_0 + \Bigl [ -{{\textstyle\frac 34}} M^2 + 6 \zeta g\Bigr] V_2+ g V_4+ h\,, \label{eq:H'L0} \\ h = \Lambda L+ E_0+3 \zeta^2 g L -{{\textstyle\frac 34}} M^2 \zeta L. \label{eq:H'L} \end{gather} The building blocks have the same meaning as in section \ref{sec:directtheory}, except that we have to use $M$ instead of $m$ in all expressions: $H_0=H_0(L,M)$, $\zeta=\zeta(L,M)$, etc. \begin{figure}[h!] \begin{center} \includegraphics[scale=.8]{fig_vacCompare_L_5.pdf} \includegraphics[scale=.8]{fig_specCompare_L_5.pdf} \end{center} \caption{The ground state energy (left) and the spectrum of excitations (right) for the direct and the dual theory as a function of $g$ for $m=1$, $L=5$. The excitation plot shows the energies of the $\bZ_2$-odd and $\bZ_2$-even energy levels. See the text for the details.} \label{fig:comparison} \end{figure} \subsubsection{Comparison} \label{sec:result} In figure \ref{fig:comparison} we show the ground state energy $\calE_0$ and the spectrum of excitations $\calE_I-\calE_0$ for $m=1$, $L=5$. We plot them as a function of the direct coupling in the range $0\leqslant g\leqslant 3$. The results for the direct theory are given in the full range of $g$, whereas for the dual theory only for $g\geqslant g_*\approx 2.26$, where the dual description exists. As in \cite{Rychkov:2014eea}, the error (shaded region) is estimated as the variation of the results upon using the ``local'' and ``subleading'' renormalization prescriptions. We see that in the overlapping region the numerical predictions from the two descriptions agree very well. This is an explicit check of the Chang duality. This check is non-trivial, as in both descriptions the Hamiltonian is strongly coupled. To illustrate this, the black dashed lines in the plots represent the tree-level prediction for the vacuum energy and the lightest excitation in the dual description.
\emph{Computational details:} The computation in the direct theory is carried out as described in section \ref{sec:directtheory}. The dual mass $M$ for a given $g\geqslant g_*$ is determined by solving Eq.~\reef{eq:ch0} numerically. We use the solution with the smaller $X$ (and thus the larger $M$). The computation in the dual theory is then done using the Hamiltonian \reef{eq:H'L0} with two couplings $g_2=-{{\textstyle\frac 34}} M^2 + 6 \zeta(L) g$ and $g_4=g$, i.e.~by including $-{{\textstyle\frac 34}} M^2$ into the perturbation. The renormalization procedure in \cite{Rychkov:2014eea} is applicable for such a general perturbation. It's not a problem for the method that $g_2$ is negative and comparable in size to the positive mass squared term in $H_0$. There is in fact a great deal of arbitrariness in how to split the $\phi^2$ coefficient between the zeroth-order Hamiltonian and the perturbation. What we do here is just the simplest possibility, which turns out to be sufficient for the purposes of this section. More sophisticated ways of dealing with the dual theory will be developed in section \ref{sec:weakcoupling}. \section{The $\bZ_2$-broken phase} \label{sec:weakcoupling} In section \ref{sec:chang} we reviewed the Chang duality and tested it numerically in the strongly coupled region by comparing the results obtained from the dual and the direct descriptions. We will now focus on the region $g/m^2 \gg g_*/m^2$, where the theory is in the $\bZ_2$-broken phase. In this range of couplings the direct description is very strongly coupled and it's difficult to achieve good numerical accuracy. On the other hand, the dual Hamiltonian becomes weakly coupled ($g/M^2\ll 1$). Therefore, we will use the dual Hamiltonian (\ref{eq:H'0}) as the starting point for the numerical calculations.
It will be convenient to replace the value of $\Lambda$ given in (\ref{eq:Lambda}) by $\Lambda = {M^4}/({64 g})$, which corresponds to having zero classical vacuum energy density for the dual Hamiltonian. \subsection{Modified zero mode treatment} \label{sec:numres} In section \ref{sec:directtheory} we reviewed the method of \cite{Rychkov:2014eea}, which treats all field modes on an equal footing. This method is adequate in the $\bZ_2$-unbroken phase, and in the $\bZ_2$-broken phase in moderate volumes, as in section \ref{sec:result}. However, it becomes inefficient in the $\bZ_2$-broken phase in large volume. The physical reason is that the zero mode then has very different dynamics from the rest of the modes, acquiring a VEV. It makes sense to take this into account, and to treat the zero mode separately from the rest. We will now explain how this can be done. First of all we will rewrite \reef{eq:H'L0} making explicit the dependence on the zero mode. For the zero mode we will revert from using the oscillators $a_0,a_0^\dagger$ to the field variable \beq \phi_0=(a_0+a_0^\dagger)/\sqrt{2 L M} \end{equation} and the corresponding conjugate momentum $\pi_0$: \begin{equation} \pi_0 = i (a_0^\dagger -a_0) \sqrt{L M/2}\,. \end{equation} Denoting by a bar (resp.~hat) all quantities involving only the nonzero (zero) modes, we have \begin{gather} H_0 = \bar H_0 + \frac{\NO{\pi_0^2}}{2L}+ \frac{L M^2}{2} \NO{\phi_0^2 } \,,\\ V_2 = \bar V_2 + L \NO{\phi_0^2} \,, \quad V_4 = \bar V_4 + 4 \bar V_3 \phi_0 + 6 \bar V_2 \NO{\phi_0^2} + L \NO{\phi_0^4}\,.
\end{gather} Gathering everything we get \beq \label{eq:ham0mode} H'(L) = \bar{H_0} + \hat{H} + W \,, \end{equation} where $\hat{H}$ depends only on the zero mode: \beq \label{eq:ham0mode2} \hat{H} \equiv \frac{\NO{\pi_0^2}}{2L}+ L\left[ -{\textstyle \frac{1}{4}} M^2 + 6 \zeta g \right]\NO{\phi_0^2} + L g\, \NO{\phi_0^4} +h \,, \end{equation} while $W$ involves the interactions between the zero and the nonzero modes, and among the latter: \beq W \equiv \Bigl [ 6 g\NO{\phi_0^2}-{{\textstyle\frac 34}} M^2 + 6 \zeta g\Bigr] \bar V_2+ 4g \phi_0 \bar V_3+ g \bar V_4\,. \label{eq:W} \end{equation} In a large volume and for $g \ll M^2$, the quantum mechanics of (\ref{eq:ham0mode2}) predicts that the wavefunction of $\phi_0$ is peaked around the minima of the potential at $\phi_0^2\approx M^2/(8g)$, with a variance scaling asymptotically as $\langle (\Delta \phi_0)^2 \rangle \sim 1/(L M)$. For these values of $\phi_0$ the coefficient of ${\bar V}_2$ in $W$ vanishes, up to the exponentially small $\zeta$ term. Intuitively this implies that, up to small perturbative corrections induced by the ${\bar V}_3$ and ${\bar V}_4$ terms, the nonzero modes of the field will stay in their vacuum state. This is accurate in a very large volume, and it provides a good starting point for a quantitative description in finite volume. The idea of the method will therefore be to first solve the quantum mechanics of the zero mode, neglecting its interaction with the nonzero modes. Having done so, the full Hamiltonian will be diagonalized in a Hilbert space whose basis wavefunctions are products of the exact zero mode wavefunctions and the harmonic oscillator wavefunctions for the nonzero modes. This is expected to be more efficient than the original method, which would use harmonic oscillator wavefunctions also for the zero mode. Concretely, the procedure goes as follows.
The full Hilbert space can be written as a direct product: \begin{equation} \mathcal{H} = \mathcal{\hat{H}} \otimes \mathcal{\bar{H}}\,, \end{equation} where $\mathcal{\hat{H}}$ and $\mathcal{\bar{H}}$ are the Hilbert spaces of the zero mode and of the nonzero modes, respectively. The truncated Hilbert space is then ($l$ for low) \beq \calH_l = \mathcal{\hat{H}}_l \otimes \mathcal{\bar{H}}_l\,, \end{equation} where the basis of $\mathcal{\bar{H}}_l$ is formed by the harmonic oscillator states for the nonzero modes with energy $\bar{E} \leqslant \bar{E}_{\rm max}$, while $\mathcal{\hat{H}}_l$ is spanned by the first few low-lying eigenfunctions of $\hat{H}$: \begin{equation} \hat{H} \ket{\psi_\alpha} = \hat{\mathcal{E}}_\alpha \ket{\psi_\alpha},\quad \alpha=1\ldots s\,. \end{equation} In practice, it will be sufficient to fix $s=4$ or 5. A separate computation has to be done to find the $\ket{\psi_\alpha}$. We do this using the standard Rayleigh-Ritz method, working in the $S$-dimensional subspace of $\mathcal{\hat{H}}$ spanned by the original harmonic oscillator wavefunctions $(a^\dagger_0)^i \ket{0}$, $i=0\ldots S-1$. The parameter $S\gg s$ can be chosen so large that the numerical error accumulated in this step is insignificant; in practice we choose $S=500$. The eigenstates $\ket{\psi_\alpha}$ are thus found by expanding them in the harmonic oscillator wavefunctions. This facilitates the subsequent computations of the matrix elements involving these states. One can now compute the matrix elements of $H'(L)$ in the truncated Hilbert space and diagonalize it, finding the ``raw'' spectrum. As usual, we will employ a renormalization procedure to improve the precision. The necessary modifications are described in appendix \ref{sec:ren}. {\it Comparison with prior work:} The $\bZ_2$-broken phase of the $\phi^4$ model has been previously studied via a Hamiltonian truncation method in Ref.~\cite{Coser:2014lla}. There are many similarities between our works, and some differences.
The main difference lies in the treatment of the zero mode (see also the discussion in \cite{Rychkov:2014eea}, section 4.5). Ref.~\cite{Coser:2014lla} compactifies the zero mode on a circle of large radius, and uses plane waves on this target-space circle as the basis of trial wavefunctions. Instead, we resolve the zero mode dynamics and pick trial wavefunctions adapted to the quartic potential. Another difference is that they use a conformal, massless basis for the nonzero modes, while we use a massive basis. Matrix elements are easier to compute in the conformal basis, while a massive basis gives, we believe, a better initial approximation. Notice that Ref.~\cite{Coser:2014lla} uses a different parametrization of the Hamiltonian, corresponding to a different normal-ordering prescription. The translation to our parametrization will be given in section \ref{sec:translation}. \subsection{Varying the normal-ordering mass} It turns out that in the regime we will be considering, the most important term inducing the interactions between $\mathcal{\hat{H}}_l$ and $\mathcal{\bar{H}}_l$ is the $\bar{V}_2$ term in \reef{eq:W}. This is because for the volumes that we will be able to consider, the localization of the $\phi_0$ wavefunctions near the potential minima is not very sharp, and the coefficient of $\bar{V}_2$, viewed as a matrix in the space of the $\phi_0$ eigenstates, has significant matrix elements. The $\bar{V}_3$ and $\bar{V}_4$ terms will be suppressed at weak coupling. Empirically, we concluded that the one-loop renormalization procedure, including the modifications to be described in appendix \ref{sec:ren}, is insufficient to fully describe the truncation effects arising from the large $\bar{V}_2$ term. Moreover, estimating the accuracy as the difference between the ``local'' and ``subleading'' renormalized answers was found inadequate in such a situation.
Notice that the $V_2$ term renormalizes at quadratic order only the unit operator coefficient and this correction does not affect the spectrum of excitations \cite{Hogervorst:2014rta,Rychkov:2014eea} (this statement remains approximately true in the scheme with the separated zero mode discussed here). Ideally, to estimate the error one would have to compute the renormalization effects of cubic order in the problematic operator. Here we will resort to an interim alternative technique, which we now describe.\footnote{Another interesting possibility is to incorporate the coefficient of $\bar{V}_2$ into the mass of nonzero modes, making it $\phi_0$-dependent. This creates technical difficulties of its own and was not tried in this work.} In the modified method as described in the previous section, the trial wavefunctions of the nonzero modes are taken to be those of the free massive boson of mass $M$, i.e.~the bare mass appearing in the Lagrangian. We will now consider the formalism in which one can vary the mass parameter $\mu$ of the trial wavefunctions. As in \cite{Lee:2000ac}, this will then be used to control the accuracy of our computations, since the exact spectrum should be independent of $\mu$. Apart from the accuracy issues, varying $\mu$ is also natural from the point of view of searching for an optimal zeroth order approximation to the ground state, in the spirit of variational methods. So we rewrite the infinite-volume Hamiltonian (\ref{eq:H'}) by using the Coleman relations (\ref{eq:coleman}): \begin{gather} \label{eq:normordH} H' = \int d x N_\mu\bigl({\textstyle\frac 12}\dot{\phi}^2+{\textstyle\frac 12} {\phi'}^2+( -{\textstyle\frac{1}{4}} M^2 + 6 g Z )\phi^2 + g \phi^4 + \Lambda_\mu\bigr)\,, \\ \Lambda_\mu = \Lambda -{\textstyle\frac{1}{4}} M^2 Z + 3 g Z^2 + Y\,, \end{gather} where $Z=Z(M,\mu)$, $Y=Y(M,\mu)$ are defined in (\ref{eq:YZ}) with the replacement $M\to\mu, m\to M$. 
We then pass to finite volume as in section \ref{sec:dualtheory}: \begin{gather} H'(L) = H_0 + [ -{\textstyle\frac{1}{4}} M^2 - {\textstyle\frac 12} \mu^2 + 6 (Z+\zeta)g ]V_2+ g V_4 + h_\mu\,,\\ h_\mu = \Lambda_\mu L+ E_0+3 \zeta^2 g L +( -{\textstyle\frac{1}{4}} M^2 - {\textstyle\frac 12} \mu^2 + 6 g Z) \zeta L\,, \end{gather} where $H_0,V_2,V_4,E_0,\zeta$ are defined with respect to $\mu$. Finally, we separate the zero mode as in section \ref{sec:numres}. The final Hamiltonian has the form \reef{eq:ham0mode}, where $\bar H_0=\bar H_0(L,\mu)$ while $\hat H$ and $W$ are given by: \begin{gather} \hat{H} = \frac{\NO{\pi_0^2}}{2L}+ L\left[ -{\textstyle \frac{1}{4}} M^2 + 6 (Z+\zeta) g \right]\NO{\phi_0^2} + L g\, \NO{\phi_0^4} +h_\mu \,,\\ W = \bigl [ 6 g\NO{\phi_0^2}-{\textstyle \frac{1}{4}} M^2 - {\textstyle\frac 12} \mu^2+ 6 (Z+\zeta) g\bigr] \bar V_2+ 4g \phi_0 \bar V_3+ g \bar V_4\,. \end{gather} This is the Hamiltonian which we use for numerical calculations, varying $\mu$ in the range $0.9M$--$1.1M$. This will give an idea of the systematic error due to the truncation. \subsection{Results} From previous estimates, we know that the critical point lies at $g/m^2\approx 2.97(14)$ \cite{Rychkov:2014eea},\footnote{For more precise estimates by different methods see \cite{Wozar:2011gu,Milsted:2013rxa,Bosetti:2015lsa,Pelissetto:2015yha}.} which by making use of the Chang duality corresponds to $g/M^2 \approx 0.26$. Here we will limit ourselves to values $g/M^2 \leqslant 0.2$, as beyond this value it appears difficult to reach the limit $L \to \infty$ and get a stable spectrum. $M$ will be set to $1$ throughout this section, unless stated otherwise. We are now going to present the results for the two sectors of excitations of the theory. First, we will discuss the perturbative sector, which in the $L \to \infty$ limit consists of two decoupled towers of excitations around the two vacua with opposite-sign VEVs for the field.
We will then turn to the non-perturbative sector of ``kink'' states which carry topological charge, interpolating between the two vacua. Given the periodic boundary conditions imposed in our method, the kink sector will be studied here only indirectly, through the splitting of quasi-degenerate perturbative states in finite volume. \subsubsection{Perturbative sector} \label{sec:pertsector} In figure \ref{fig:vsG_L12} we plot the ground state energy density and the low-energy excitation spectrum for $M=1$, $L=12$. For the ground state energy density we show both the ``raw'' and renormalized\footnote{In this section only local renormalization, in the terminology of \cite{Rychkov:2014eea}, was used. Subleading nonlocal corrections were found to be totally negligible.} results, while for the spectrum only the renormalized results, because the raw/renormalized difference is negligible. As explained above, we don't think this difference gives a fair idea of the truncation error in the situation at hand. Instead, we estimate the error for the spectrum by varying the normal-ordering mass $\mu = 0.9$--$1.1$. In making these plots we fixed $s=4$, while the cutoff $\bar{E}_{\rm max}$ was chosen so that $\calH_l$ has dimension around $10000$--$15000$. We checked that increasing $s$ does not change the results significantly. We see that the first excited level is almost degenerate with the ground state. The splittings for the higher-energy levels are larger. This is because for the higher-energy states it's easier to tunnel through the potential barrier separating the two infinite-volume vacua, which has a finite height for a finite $L$. In figure \ref{fig:vsG_L20} we show the same plots for $L=20$. One can see that the energy splitting reduces but the truncation error increases (as one has to reduce $\bar{E}_{\rm max}$ in order to keep the total number of states the same).
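To illustrate the zero-mode step of section \ref{sec:numres} in isolation, one can diagonalize the quartic zero-mode Hamiltonian by the Rayleigh-Ritz method in an oscillator basis. A simplified Python sketch (not the paper's code: normal-ordering constant shifts and the coupling to nonzero modes are neglected here, so the levels are only indicative):

```python
# Simplified sketch (not the paper's code) of the zero-mode Rayleigh-Ritz step:
# diagonalize H_hat ~ pi0^2/(2L) + L(-M^2 phi0^2/4 + g phi0^4 + M^4/(64 g))
# in a basis of S harmonic-oscillator states of frequency mu. Normal-ordering
# constant shifts are neglected; they only displace the levels slightly.
import numpy as np

def zero_mode_levels(L=10.0, M=1.0, g=0.05, mu=1.0, S=200, nlevels=5):
    a = np.diag(np.sqrt(np.arange(1, S, dtype=float)), k=1)  # annihilation op
    ad = a.T
    phi = (a + ad) / np.sqrt(2.0 * L * mu)        # phi0 = (a + a')/sqrt(2 L mu)
    pi2 = -(L * mu / 2.0) * (ad - a) @ (ad - a)   # pi0^2, real and symmetric
    phi2 = phi @ phi
    H = (pi2 / (2.0 * L)
         + L * (-0.25 * M ** 2 * phi2
                + g * phi2 @ phi2
                + M ** 4 / (64.0 * g) * np.eye(S)))
    return np.linalg.eigvalsh(H)[:nlevels]        # lowest Rayleigh-Ritz levels

E = zero_mode_levels()
# E[1]-E[0] is the tiny tunneling splitting; E[2]-E[0] is an O(M) gap.
```

For $L=10$, $g=0.05$ the two lowest levels form a quasi-degenerate doublet split only by tunneling, while the next level lies an $O(M)$ gap above, in line with the near-degeneracies discussed in this section.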
Finally, in figure \ref{fig:vsL_G01} we plot the vacuum energy density and the spectrum for $g=0.1$ as a function of $L$. One can see how effective the renormalization procedure is for the vacuum energy density: its renormalized value reaches a constant for sufficiently large $L$, while its ``raw'' value does not. In the spectrum, the physical mass likewise reaches a constant, as expected. Notice that for sufficiently small $g$ the interaction in the considered model is attractive (the attraction due to the square of the cubic vertex overcomes the repulsion due to the quartic vertex) \cite{Caselle:2000yx,Coser:2014lla}. Therefore the second energy level pair in the spectrum in figure \ref{fig:vsL_G01} is expected to asymptote to $m_2 < 2 m_{\rm ph}$ (where $m_{\rm ph}$ is the single particle mass) as $L \to \infty$, i.e.~it represents a bound state. The numerical results seem consistent with this expectation, although the precision is insufficient to extract $m_2$ accurately. In general, it is hard to extract the perturbative bound state mass from the infinite-volume limit, as the asymptotic convergence sets in at $L \approx (m^2_{\rm ph}-m_2^2/4)^{-1/2}$ \cite{Luscher:1985dn}, which diverges as $g \to 0$. In Appendix \ref{sec:pert} we compare the numerical results for $\Lambda$ and $m_{\rm ph}$ with the predictions from perturbation theory, showing very good agreement at small couplings. It is also interesting to analyze the higher-energy states in the spectrum. In figure \ref{fig:vsL_G005} we redo the previous plot for $g=0.05$, including a few more eigenvalues. Above the stable particle mass and the bound state, one can see the multiparticle states, whose energy depends on $L$ according to the dispersion relations in finite volume.\footnote{See e.g.~the discussion in \cite{Coser:2014lla}, appendix B.} Furthermore, the horizontal line with energy $\approx 2.5 < 3 m_{\rm ph}$ represents a resonance.
Due to the non-integrability of the theory, that state is not stable, as its energy is larger than $2 m_{\rm ph}$. Indeed, the horizontal line does not actually cross the multiparticle states, as it could seem at first glance, due to the phenomenon of avoided crossing. See \cite{Delfino:1996xp} for a discussion of how resonances should appear in the finite-volume spectrum. \begin{figure}[htb!] \begin{center} \includegraphics[scale=0.8]{fig_vacVsGerror_L_12.pdf} \includegraphics[scale=0.8]{fig_specVsGerror_L_12.pdf} \end{center} \caption{The ground state energy density and the low-energy excitation spectrum as a function of $g$ for $L=12$; see the text. Results extracted from \cite{Coser:2014lla} are shown by crosses (whose size does not reflect the uncertainty), see section \ref{sec:translation}.} \label{fig:vsG_L12} \end{figure} \begin{figure}[htb!] \begin{center} \includegraphics[scale=0.8]{fig_vacVsGerror_L_20.pdf} \includegraphics[scale=0.8]{fig_specVsGerror_L_20.pdf} \end{center} \caption{Same as in figure \ref{fig:vsG_L12} but for $L=20$.} \label{fig:vsG_L20} \end{figure} \begin{figure}[htb!] \begin{center} \includegraphics[scale=0.8]{fig_vacVsLerror_g_01.pdf} \includegraphics[scale=0.8]{fig_specVsLerror_g_01.pdf} \end{center} \caption{Results for $g=0.1$ plotted as a function of $L$.} \label{fig:vsL_G01} \end{figure} \begin{figure}[htb!] \begin{center} \includegraphics[scale=0.8]{fig_specVsLerror_g_005.pdf} \end{center} \caption{Same as the right-hand panel of figure \ref{fig:vsL_G01} but for $g=0.05$.} \label{fig:vsL_G005} \end{figure} \subsubsection{Non-perturbative sector} \label{sec:NP} As already mentioned, in finite volume non-perturbative effects lift the degeneracy of the spectrum, both for the ground state and for all the excited states. For small coupling, these effects can be interpreted as tunneling due to the semiclassical field configurations interpolating between the two vacua (``kinks''). The splitting depends on the mass of the kink.
Here we will need the semiclassical prediction for the splitting of the first two energy levels (the ground state, which lives in the $\bZ_2$ even sector, and the $\bZ_2$ odd state just above it). Including the leading semiclassical result and the one-loop fluctuation determinant around it, the splitting for small $g/M^2$ is given by (see appendix \ref{sec:energysplitting}): \begin{equation} \label{eq:splitting} \Delta \calE = \mathcal{E}_1-\mathcal{E}_0 \approx \sqrt{\frac{M^3}{6 \pi g L}} e^{-L M_{\rm kink}-f(ML)} \,, \qquad M_{\rm kink} = \frac{M^3}{12 g} + M \left( \frac{1}{4 \sqrt{3}} - \frac{3}{2 \pi}\right)\,, \end{equation} where $M_{\rm kink}$ is the kink mass in the one-loop approximation, first computed in \cite{Dashen:1974cj}. Corrections are suppressed by $g/M^2$ and by $1/(LM_{\rm kink})$. The function $f(x)$, given in \reef{eq:f(x)}, approaches zero exponentially fast for $LM\gg 1$. Our numerical method allows us to extract $\Delta \calE$ with high precision and to compare it with this formula. In figure \ref{fig:fitkink_G005} we present as an example the renormalized numerical results\footnote{The difference between ``raw'' and renormalized is negligible in the present analysis.} for $M=1$, $g=0.05$. We used $s = 5$, checking that increasing it does not significantly change the numerics, while $\bar{E}_{\rm max}$ was fixed so as to obtain a basis dimension $\sim 10000$ for each $L$. We plot $ \sqrt{L}\,e^{f(ML)}\Delta \calE$ as a function of $L$ on a logarithmic scale in order to observe a linear trend, as expected from (\ref{eq:splitting}), and perform a fit in a region chosen by eye such that the data look close to a straight line: \begin{equation} \log \bigl[\sqrt{L}\,e^{f(ML)}\Delta \calE\bigr] \approx \alpha - M_{*} L\,. \end{equation} The value of $L$ must not be too low, so that the exponential decay sets in, and not too high, otherwise $\Delta \calE$ becomes smaller than the precision of our method.
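As an illustration of this fitting step, the extraction of $\alpha$ and $M_*$ can be sketched in a few lines of Python (a schematic example, not our analysis code: the data here are generated from \reef{eq:splitting} itself, with $f(ML)$ set to zero since it is negligible in the fit range, so the fit recovers the input parameters exactly):

```python
import numpy as np

M, g = 1.0, 0.05
# One-loop kink mass from eq. (splitting) in the text.
M_kink = M**3 / (12 * g) + M * (1 / (4 * np.sqrt(3)) - 3 / (2 * np.pi))

def splitting(L):
    """Leading semiclassical splitting, with f(ML) set to zero."""
    return np.sqrt(M**3 / (6 * np.pi * g * L)) * np.exp(-L * M_kink)

# Fit window "chosen by eye", as in the text.
L = np.linspace(8.0, 14.0, 13)
y = np.log(np.sqrt(L) * splitting(L))      # should be alpha - M_* L
slope, alpha = np.polyfit(L, y, 1)         # linear fit in log scale
M_star = -slope
```

In the actual analysis $y$ is built from the numerically extracted $\Delta\calE$, and the deviation of $M_*$ from $M_{\rm kink}$ measures the quality of the semiclassical prediction.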
We then compare the fitted values of $\alpha$ and $M_*$ with the expectations from \reef{eq:splitting}. We carried out this analysis for several values of the coupling between $0.01$ and $0.1$, finding both $\alpha$ and $M_*$ very close to the expected values. The comparison of $M_*$ with $M_{\rm kink}$ is plotted in figure \ref{fig:fitkink} as a function of $g$. It turns out that, in the range of points where the fit is made, $f(ML)$ is very small and does not influence the fit, except slightly for the smallest considered values of $g$. On the other hand, including the $\sqrt{L}$ prefactor is crucial for reaching agreement. One can see that the agreement with the semiclassical prediction $M_{\rm kink}$ (black line) is very good. \begin{figure}[htb!] \begin{center} \includegraphics[scale=1]{fig_fitKinkError_g_005.pdf} \end{center} \caption{Ground state splitting as a function of $L$ for $g=0.05$; see the text.} \label{fig:fitkink_G005} \end{figure} \begin{figure}[htb!] \begin{center} \includegraphics[scale=1]{fig_mkink.pdf} \end{center} \caption{Comparison between the fitted and the theoretically predicted value of the kink mass; see the text. The green cross represents, with error bars, a result from \cite{Coser:2014lla} as discussed in section \ref{sec:translation}.} \label{fig:fitkink} \end{figure} \subsubsection{Comparison to Ref.~\cite{Coser:2014lla}} \label{sec:translation} For comparison we included in figures \ref{fig:vsG_L12} and \ref{fig:fitkink} a few data points extracted from \cite{Coser:2014lla}. Ref.~\cite{Coser:2014lla} parametrizes the theory by two couplings $G_2,G_4$ which they denote $g_2,g_4$; we capitalized to avoid confusion with our notation in other parts of this paper. Their couplings are not identical to ours: because of a different field normalization, $g=2\pi G_4$. More importantly, their $\phi^4$ operator is normal-ordered differently, by subtracting the normal-ordering constants for all nonzero massless modes in finite volume $L$ (their $R$).
Going to our normal-ordering prescription (in infinite volume), one finds \beq \NO{\phi^4}_{\rm their}\to N_m(\phi^4)-C(mL)N_m(\phi^2)+const.,\qquad C(mL)= -(3/\pi)\log[e^\gamma mL/(4\pi)]\,, \end{equation} where $\gamma$ is the Euler-Mascheroni constant. We ignore the ground state energy renormalization here. To put their Hamiltonian into the canonical form \reef{eq:H} (resp.~\reef{eq:H'0}) one has to solve the two equations \beq G_2-2gC(mL)=m^2\quad (\text{resp.}\quad G_2-2gC(ML)=-M^2/2) \end{equation} for $m$ or $M$ respectively. Keeping $G_{2,4}$ fixed and varying $L$ thus induces a logarithmic variation of the infinite-volume mass parameters. Although for the small quartic couplings considered in \cite{Coser:2014lla} this variation is not huge (order $10\%$), it may be problematic when extracting the spectrum by taking the large-$L$ limit. It would seem more appropriate to vary $G_2$ with $L$ while keeping $m$ or $M$ fixed. The two data points (crosses) in figure \ref{fig:vsG_L12} were extracted from figure 10(b,d) of \cite{Coser:2014lla}, where $G_2=-0.1$, $G_4=1.2\times 10^{-3}$. This corresponds to $g/M^2\approx 0.035$ at $ML=12$. The agreement between their results and ours is good. Their determination of the kink mass for the same $G_{2,4}$ is shown in figure \ref{fig:fitkink}. Here $g/M^2=0.042(3)$, varying within the range of $L$ used in their fit. The large error bars on $M_{\rm kink}$ may be due to this variation. Also, they did not consider the pre-exponential factor in (\ref{eq:splitting}). A remark is in order concerning the discussion in \cite{Mussardo:2006iv,Coser:2014lla}, which views the particles in the topologically trivial sector as bound states of kinks. A semiclassical prediction is given for their masses (eq.~(28) of \cite{Coser:2014lla}): \beq m_{{\rm sc},n}=2M_{\rm kink} \sin(n \pi\xi/2),\quad n=1\ldots[1/\xi], \label{eq:semi} \end{equation} where \beq \xi=M/(\pi M_{\rm kink}) \end{equation} in our notation.
The lightest mass $m_{{\rm sc},1}$ has to be identified with our $m_{\rm ph}$, while the second, $m_{{\rm sc},2}$, with the bound state mass $m_2$ discussed in section \ref{sec:pertsector}. The other masses correspond not to stable particles but to resonances in a non-integrable theory like the one we are considering. The total number of particles is predicted to be $[1/\xi]$. The semiclassical prediction is valid for $\xi\ll1$, but if one could extrapolate it to $\xi=O(1)$ one would naively predict that for $\xi>1$ the topologically trivial sector would be devoid of particles. This would be analogous to the phase of the sine-Gordon model for $4\pi<\beta^2<8\pi$. Of course, it is far from clear whether such an extrapolation is trustworthy. From the kink mass formula \reef{eq:splitting} we have $\xi=1$ for $g/M^2\approx 0.12$, just outside the region that we explored, and well below the critical point at $g_c/M^2\approx 0.26$. It will be interesting to study this range in the future. One minimalistic possibility is that the topologically trivial particles disappear only at the critical point. Indeed, its neighborhood is described by the thermally perturbed 2d Ising model CFT, which is the theory of a free massive Majorana fermion. In the low-temperature phase, the fermionic excitations are naturally identified with the kink states interpolating between the two vacua. There are no bound states since the fermions are free.\footnote{{\bf Note added:} This `minimalistic possibility' can now be ruled out, in favor of the original scenario of \cite{Mussardo:2006iv}, based on the very recent results of \cite{Bajnok:2015bgw}. As this paper shows, the second lightest particle $m_2$ in the topologically trivial spectrum becomes unstable with respect to decay into two kinks for $g/M^2\gtrsim 0.075$, while for the lightest particle $m_{\rm ph}$ this happens for $g/M^2\gtrsim 0.125$.
The possibility of the first of these decays could be observed already from our mass plots in Figs.~\ref{fig:vsG_L20} and \ref{fig:fitkink}.} \section{Conclusions} \label{sec:conclusions} In this paper we followed up on our earlier study \cite{Rychkov:2014eea} of the Hamiltonian truncation technique applied to the $\phi^4$ theory in two dimensions. The main results derived in this work can be summarized as follows: \begin{itemize} \item According to an exact duality, reviewed in section \ref{sec:chang}, the theory under consideration can be expressed via two different Lagrangian formulations. We proved that, even at strong coupling, the Hamiltonian truncation method correctly predicts the same low-energy spectrum of excitations in the two cases, despite the fact that they look totally different at zeroth order. We regard this as a non-trivial check of the method. \item We showed how to modify the method in order to improve its accuracy in the spontaneously broken phase. We found very good agreement with the predictions from perturbation theory and semiclassics in the perturbative and non-perturbative sectors. To approach the critical region as in \cite{Rychkov:2014eea} will require further improvements of the method.\footnote{{\bf Note added:} Rapid progress in this direction should be possible thanks to the technical and conceptual improvements discussed in \cite{Bajnok:2015bgw,Elias-Miro:2015bqk}, which appeared a few weeks after our work.} \end{itemize} We continue to believe that the potential of ``exact diagonalization'' techniques, of which we have implemented a particular realization in the present work, is very large and has to be explored further. Some other representative applications to non-integrable theories to be found in the literature are \cite{Delfino:1996xp,Bajnok:2000ar,Fonseca:2006au,Caux:2012nk,Coser:2014lla,2014arXiv1407.7167B,Konik:2015bia,Lepori:2009ip,Lencses:2015bpa} in $d=2$. In $d>2$ the only work is \cite{Hogervorst:2014rta}.
In the future it would be interesting to extend the present analysis, for instance by studying the topological spectrum of kink-states directly,\footnote{{\bf Note added:} This has just been achieved in \cite{Bajnok:2015bgw}. The authors use a different truncation scheme and diagonalization routine, and they are able to calculate the kink mass up to $g \sim 0.2$ (in our conventions).} or by considering more complicated theories involving scalar-fermion interactions, which should not be too technically challenging.\footnote{See \cite{Brooks:1983sb} for early work.} In the long term, in order to solve numerically higher dimensional theories, it will be necessary at the very least to refine the renormalization technique, as the RG flow becomes more weakly relevant.\footnote{{\bf Note added:} See \cite{Elias-Miro:2015bqk} for recent progress towards the calculation of higher order renormalization coefficients.} The hope is that exact diagonalization techniques can evolve into computationally efficient tools to address difficult problems in quantum field theory. \section*{Acknowledgements} We are grateful to Giuseppe Mussardo for comments on the draft. This research was partly supported by the National Centre of Competence in Research SwissMAP, funded by the Swiss National Science Foundation. The work of L.V. is supported by the Swiss National Science Foundation under grant 200020-150060.
\section{Introduction \label{intro}} In a seminal paper, \cite{Arellano1991} introduced a now well-known one-step generalized method of moments (GMM) panel data estimator. The estimator relies on first-differencing the observations --- the first-difference (FD) transformation --- and then applying GMM. \cite{Bover1995} suggested another transformation could be used. They showed that, under suitable conditions, GMM is invariant to how the data are transformed, and they introduced the forward orthogonal deviations (FOD) transformation to the panel data literature. Moreover, \cite{Bover1995} noted there is a computational advantage to using the FOD transformation when the number of columns in the instrument matrix is large. However, to date, there appears to be no published evidence illustrating how much of a computational advantage is conferred by using the FOD transformation. The purpose of this paper is to fill that gap. I show how the computational complexity --- the amount of computational work required --- of the FD and FOD transformations increases with the length of the time series ($T$) and the number of individuals ($N$). The results reveal that, even when lim$(T/N) = 0$,\footnote{In this case, one-step GMM, based on all available instruments, has no asymptotic bias \citep{Alvarez2003}.} computational complexity is affected much more by the size of $T$ than by the size of $N$. Furthermore, the FD transformation's computational complexity increases with $T$ at a much faster rate than the rate of increase in the FOD transformation's computational complexity. Consequently, when $T$ is not small, the FOD transformation is computationally faster --- orders of magnitude faster --- than the FD transformation. A practical implication of this finding is that computationally intensive work, such as Monte Carlo simulations, can be performed much faster by relying on the FOD rather than the FD transformation.
\section{The computational complexity of the FD and FOD transformations \label{FOD}} In order to compare the computational complexity of the FD and FOD transformations, a simple case was considered --- the first-order autoregressive (AR(1)) panel data model. The model is \begin{equation} y_{it}=\delta y_{i,t-1}+\eta _{i}+v_{it}, \label{ar1_model} \end{equation}% with $y_{i0}$ taken to be the first available observation. If the $v_{it}$s are uncorrelated, then, for instruments, one might use $z_{i1}=y_{i0}$, $\boldsymbol{z}_{i2}^{\prime }=\left( z_{i1},y_{i1}\right) $, $\boldsymbol{z}_{i3}^{\prime }=\left( \boldsymbol{z}% _{i2}^{\prime },y_{i2}\right) $, and so on up to $\boldsymbol{z}% _{i,T-1}^{\prime }=( \boldsymbol{z}_{i,T-2}^{\prime },y_{i,T-2}) $% . For this choice of instruments, one-step GMM based on the FOD transformation is numerically equivalent to one-step GMM based on the FD transformation (see, e.g., Hayakawa and Nagata, 2016). However, although numerically the same, the two transformations are not computationally the same. To see this, consider first one-step GMM estimation of the AR(1) panel data model using the FD transformation. The first-difference transformation matrix is \begin{equation*} \boldsymbol{D}=\left( \begin{array}{ccccc} -1 & 1 & 0 & \cdots & 0 \\ 0 & -1 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & -1 & 1% \end{array}% \right). \end{equation*}% The one-step GMM estimator based on this transformation can be written as follows: Let $\boldsymbol{y}_{i}=\left( y_{i1},\ldots ,y_{iT}\right) ^{\prime }$, $\boldsymbol{y}_{i,-1}=\left( y_{i0},\ldots ,y_{i,T-1}\right) ^{\prime }$, $\boldsymbol{\tilde{y}}_{i}=\boldsymbol{Dy}% _{i}$, and $\boldsymbol{\tilde{y}}_{i,-1}=\boldsymbol{Dy}_{i,-1}$ ($% i=1,\ldots ,N$). Also, let $\boldsymbol{Z}_{i}$ denote a block-diagonal matrix with the vector $\boldsymbol{z}_{it}^{\prime }$ in its $t$th diagonal block ($t=1,\ldots ,T-1$, $% i=1,\ldots ,N$). 
Moreover, let $\boldsymbol{\tilde{s}}=\sum_{i}\boldsymbol{Z}_{i}^{\prime } \boldsymbol{\tilde{y}}_{i}$, $\boldsymbol{\tilde{s}}_{-1}=\sum_{i}\boldsymbol{Z}_{i}^{\prime } \boldsymbol{\tilde{y}}_{i,-1}$, $\boldsymbol{G}=\boldsymbol{D}\boldsymbol{D}^{\prime}$, and $\boldsymbol{A}_{N}=\sum_{i}\boldsymbol{Z}_{i}^{\prime }\boldsymbol{G}\boldsymbol{Z}_{i}$. Finally, set $\boldsymbol{\tilde{a}}=\boldsymbol{\tilde{s}}_{-1}^{\prime }\boldsymbol{A}_{N}^{-1}$. Then the one-step FD GMM estimator is given by \begin{equation} \widehat{\delta }_{D}=\frac{\boldsymbol{\tilde{a}}\boldsymbol{\tilde{s}}}{\boldsymbol{\tilde{a}}\boldsymbol{\tilde{s}}_{-1}} \label{GMM1_delta} \end{equation} \citep{Arellano1991}. For large $T$, the formula in (\ref{GMM1_delta}) is computationally expensive. A measure of computational cost or work is computational complexity, which is the number of floating point operations or flops required.\footnote{A flop consists of an addition, subtraction, multiplication, or division.} The conclusion from counting up the number of flops required to compute $\widehat{\delta}_{D}$ is provided in Lemma \ref{FD_flops}. \begin{lemma} \label{FD_flops} Given $\boldsymbol{D}$, $\boldsymbol{y}_{i}$, $\boldsymbol{y}_{i,-1}$ and $\boldsymbol{Z}_{i}$ ($i=1,\ldots,N$), the number of flops required by the FD formula in (\ref{GMM1_delta}) increases with $N$ linearly, for given $T$, and increases with $T$ at the rate $T^{6}$ increases, for given $N$. \end{lemma} Appendix A.1 provides the flop counts that verify Lemma \ref{FD_flops}. Lemma \ref{FD_flops} shows that, even when $T$ is much smaller than $N$, it can be much more important than $N$ in determining the amount of computational work --- and hence time --- it takes to obtain an estimate via the formula in (\ref{GMM1_delta}). A substantial contribution to the amount of work required is the computation of $\boldsymbol{A}_{N}$ and then inverting it.
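To make the formula concrete, the FD computation can be sketched in a few lines of Python (an illustrative implementation under the instrument choice described above, not optimized code; the function name is introduced here):

```python
import numpy as np

def fd_gmm(y):
    """One-step FD GMM estimate of delta in the AR(1) panel model.
    y : (N, T+1) array holding y_{i0}, ..., y_{iT} for each i."""
    N, T = y.shape[0], y.shape[1] - 1
    # First-difference transformation matrix D, (T-1) x T.
    D = np.eye(T - 1, T, k=1) - np.eye(T - 1, T)
    G = D @ D.T
    m = T * (T - 1) // 2                  # number of moment conditions
    s = np.zeros(m)
    s_lag = np.zeros(m)
    A = np.zeros((m, m))
    for i in range(N):
        # Block-diagonal Z_i: block t holds z_it = (y_{i0}, ..., y_{i,t-1}).
        Z = np.zeros((T - 1, m))
        col = 0
        for t in range(1, T):
            Z[t - 1, col:col + t] = y[i, :t]
            col += t
        s += Z.T @ (D @ y[i, 1:])         # Z_i' D y_i
        s_lag += Z.T @ (D @ y[i, :-1])    # Z_i' D y_{i,-1}
        A += Z.T @ G @ Z                  # builds the large m x m matrix A_N
    a = np.linalg.solve(A, s_lag)         # A_N^{-1} s_{-1}; A_N is symmetric
    return float(a @ s) / float(a @ s_lag)
```

The expensive steps are exactly the ones counted in Appendix A.1: accumulating the $m\times m$ matrix $\boldsymbol{A}_{N}$ and solving the linear system in its place.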
Appendix A.1 shows that the number of flops required to calculate $\boldsymbol{A}_{N}$ is on the order of O($NT^{5}$). Moreover, the work required to invert $\boldsymbol{A}_{N}$ increases even faster with $T$. Standard matrix inversion methods require on the order of another O($T^{6}$) flops to compute $\boldsymbol{A}_{N}^{-1}$. On the other hand, the FOD transformation does not require inverting $\boldsymbol{A}_{N}$. This fact makes it more efficient computationally when $T$ is not small. To see how much more efficient the FOD transformation is, consider again the AR(1) model in (\ref{ar1_model}), and set $\boldsymbol{\ddot{y}}_{i}=\boldsymbol{Fy}_{i}$ and $\boldsymbol{\ddot{y}}_{i,-1}=\boldsymbol{Fy}_{i,-1}$, where $\boldsymbol{F}$ is the FOD transformation matrix given by \begin{eqnarray*} \boldsymbol{F} &=&\text{ diag}\left( \left( \frac{T-1}{T}\right) ^{1/2}, \left( \frac{T-2}{T-1}\right) ^{1/2}, \ldots ,\left( \frac{1}{2}\right) ^{1/2}\right) \notag \\ && \times \left( \begin{array}{ccccccc} 1 & - \frac{1}{T-1} & - \frac{1}{T-1} & \cdots & - \frac{1}{T-1} & - \frac{1}{T-1} & - \frac{1}{T-1} \\ 0 & 1 & - \frac{1}{T-2} & \cdots & - \frac{1}{T-2} & - \frac{1}{T-2} & - \frac{1}{T-2} \\ \vdots & \vdots & \vdots & & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & & 1 & -\frac{1}{2} & -\frac{1}{2} \\ 0 & 0 & 0 & & & 1 & -1% \end{array}% \right) \label{FOD_trans} \end{eqnarray*}% (see Arellano and Bover, 1995). Also, let $\ddot{y}_{it}$ and $\ddot{y}_{i,t-1}$ denote the $t$th entries in $\boldsymbol{\ddot{y}}_{i}$ and $\boldsymbol{\ddot{y}}_{i,-1}$. Then the FOD version of the one-step GMM estimator is \begin{equation} \widehat{\delta }_{F}=\frac{\sum_{t=1}^{T-1}\boldsymbol{\ddot{a}}_{t}% \boldsymbol{\ddot{s}}_{t}}{\sum_{t=1}^{T-1}\boldsymbol{\ddot{a}}_{t}% \boldsymbol{\ddot{s}}_{t-1}}. 
\label{FOD_ar1} \end{equation}%
where $\boldsymbol{\ddot{s}}_{t}=\sum_{i}\boldsymbol{z}_{it}\ddot{y}_{it}$, $\boldsymbol{\ddot{s}}_{t-1}=\sum_{i}\boldsymbol{z}_{it}\ddot{y}_{i,t-1}$, $\boldsymbol{\ddot{a}}_{t}=\boldsymbol{\ddot{s}}_{t-1}^{ \prime }\boldsymbol{S}_{t}^{-1}$, and $\boldsymbol{S}_{t}=\sum_{i}\boldsymbol{z}_{it}\boldsymbol{z}_{it}^{\prime }$ (see, e.g., Alvarez and Arellano, 2003). The formula in (\ref{FOD_ar1}) replaces computing and then inverting one large matrix --- the matrix $\boldsymbol{A}_{N}$ --- with computing and inverting several smaller matrices --- the matrices $\boldsymbol{S}_{t}$ ($t=1,\ldots ,T-1$). The computational savings of this alternative approach are summarized in Lemma \ref{FOD_flops}. \begin{lemma} \label{FOD_flops} Given $\boldsymbol{F}$, $\boldsymbol{y}_{i}$, $\boldsymbol{y}_{i,-1}$, and $\boldsymbol{Z}_{i}$ ($i=1,\ldots,N$), the number of flops required by the FOD formula in (\ref{FOD_ar1}) increases with $N$ linearly, for given $T$, and increases with $T$ at the rate $T^{4}$ increases, for given $N$. \end{lemma} The flop counts are provided in Appendix A.2. Lemmas \ref{FD_flops} and \ref{FOD_flops} show that the computational complexity of one-step GMM based on both the FOD and FD transformations increases much faster with $T$ than with $N$. But the number of flops increases with $T$ at a much slower rate for the FOD transformation. This finding indicates that, for large $T$, computing time will be orders of magnitude faster for the FOD transformation than for the FD transformation. This conjecture is explored in the next section. \section{An illustration\label{MC}} In order to illustrate the reductions in computing time from using the FOD transformation rather than differencing, some experiments were conducted.
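For concreteness, the FOD formula in (\ref{FOD_ar1}) being timed in these experiments can be sketched as follows (an illustrative Python version, not the actual code used for the timing runs; `fod_matrix` is a name introduced here). Note that only the small $t\times t$ matrices $\boldsymbol{S}_{t}$ are ever inverted, and the rows of $\boldsymbol{F}$ are orthonormal and sum to zero, so the fixed effect $\eta_i$ drops out just as with differencing:

```python
import numpy as np

def fod_matrix(T):
    """Forward orthogonal deviations matrix F, (T-1) x T, as displayed above."""
    F = np.zeros((T - 1, T))
    for t in range(T - 1):
        k = T - 1 - t                     # number of remaining future periods
        F[t, t] = 1.0
        F[t, t + 1:] = -1.0 / k           # subtract the mean of future values
        F[t, :] *= np.sqrt(k / (k + 1.0))
    return F

def fod_gmm(y):
    """One-step FOD GMM estimate of delta; y is (N, T+1) with y_{i0},...,y_{iT}."""
    N, T = y.shape[0], y.shape[1] - 1
    F = fod_matrix(T)
    yd = y[:, 1:] @ F.T                   # rows hold (ddot y_{i1},...,ddot y_{i,T-1})
    yd_lag = y[:, :-1] @ F.T              # same transformation applied to y_{i,-1}
    num = den = 0.0
    for t in range(1, T):
        Z = y[:, :t]                      # N x t matrix stacking the z_{it}'
        s_t = Z.T @ yd[:, t - 1]
        s_tm1 = Z.T @ yd_lag[:, t - 1]
        a_t = np.linalg.solve(Z.T @ Z, s_tm1)   # S_t^{-1} s_{t-1}: only a t x t solve
        num += a_t @ s_t
        den += a_t @ s_tm1
    return float(num) / float(den)
```

For the instrument choice above, this sketch returns the same estimate as the FD formula, as noted in section \ref{FOD}, while avoiding the large $m\times m$ inversion.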
For all of the experiments, observations on $y_{it}$ were generated according to the AR(1) model% \begin{equation*} y_{it}=\delta y_{i,t-1}+\eta _{i}+v_{it}\text{, \ \ \ \ \ \ \ \ \ }% t=-49,\ldots ,T,\ i=1,\ldots ,N, \end{equation*}% with $y_{i,-50}=0$. The value for $\delta $ was fixed at $0.5$ for all experiments. The error components $\eta _{i}$ and $v_{it}$ were generated independently as standard normal random variables. The processes were started with $t=-50$ so that, for each $i$, the process was essentially stationary by time $t=0$. As for the sample sizes, $T$ was set to either five, 10, 15, 20, 25, 30, 35, 40, 45, or 50 whereas $N$ was either 100, 200, 300, 400, or 500. After a sample was generated, start-up observations were discarded so that estimation was based on the $T+1$ observations $y_{i0},\ldots ,y_{iT}$ for each $i$ ($i=1,\ldots ,N$). Finally, one-step GMM estimates were calculated both with the FD formula in (\ref{GMM1_delta}) and the FOD formula in (% \ref{FOD_ar1}). The estimates obtained from the two formulas were identical but how long it took to compute them differed.\footnote{All computations were performed using GAUSS. To calculate elapsed times for the FOD and FD algorithms, the GAUSS command hsec was used to calculate the number of hundredths of a second since midnight before and after calculations were executed.} \begin{figure}[t] \noindent \text{\textbf{Fig. 1} Computing time for $T = 5$ and $N$ increasing from 100 to 500 ($N =$ 100, 200, 300, 400,} \\ \text{and 500) over computing time for $T = 5$ and $N = 100$.} \\ \includegraphics[width=\textwidth, height=110mm]{fig_1_pic} \end{figure} Figure 1 plots ratios of computing times as $N$ increases holding $T$ fixed. 
In Figure 1, I plot the time required to calculate GMM estimates for 100 independent samples of size $T=5$ and $N=X$ (% $X=100$, 200, 300, 400, and 500) over the time required to calculate that many one-step GMM estimates for $T=5$ and $N=100$, first using the FD formula in (\ref{GMM1_delta}) and then using the FOD formula in (\ref{FOD_ar1}). The solid line shows how computing time increases with $N$, holding $T$ constant, using differencing, while the dashed line shows how it increases with $N$ using forward orthogonal deviations.% The message of the figure is clear. Because the number of computations required to compute a GMM estimate increases linearly in $N$, regardless of whether differencing or the FOD transformation is used, so too does computing time. For example, in Figure 1, when $N$ is doubled from 100 to 200, computing time approximately doubles, regardless of whether estimates are computed using the differencing or the FOD transformation. When $N$ triples from 100 to 300, computing time approximately triples, and so on.% Figure 2 plots ratios of computing times as $T$ changes, with $N$ held fixed at 100. Specifically, it gives the time required to calculate GMM estimates for 100 independent samples of size $N=100$ and $T=X$ ($X=5$, 10, 15, 20, 25, 30, 35, 40, 45, and 50) over the time required to compute that many estimates for $N=100$ and $T=5$. As in Figure 1, the solid curve shows how computing time increases for the FD transformation, but now as $T$ increases with $N$ held fixed. The dashed curve shows how computing time increases as $T$ increases using the FOD transformation.% \begin{figure}[t] \text{\textbf{Fig. 2} Computing time for $N = 100$ and $T$ increasing from 5 to 50 ($T = $ 5, 10, 15, 20, 25, 30, 35,} \\ \text{40, 45, and 50) over computing time for $N = 100$ and $T = 5$.} \\ \includegraphics[width=\textwidth, height=110mm]{fig_2_pic} \end{figure} Figure 2 shows that computing time does not increase linearly with $T$. 
Instead, when the FD transformation is used, computing time increases much faster than linearly. For example, if the differencing formula in (\ref{GMM1_delta}) is used to calculate one-step GMM estimates, then, for $N=100$, it takes about 5.5 times longer to calculate an estimate, on average, when $T=10$ than when $T=5$. Thus, doubling $T$ increases computing time by 5.5 times. When $T$ increases from 5 to 15, a three-fold increase in $T$, it takes about 36 times longer to calculate an estimate. Finally, if we increase $T$ ten-fold, from 5 to 50, it takes approximately $16,038$ times longer to compute GMM estimates if differencing is used (see Figure 2). When we use the FOD formula in (\ref{FOD_ar1}) to calculate GMM estimates, the relative increase in computing time is much less dramatic, but it is still not linear in $T$. In Figure 2, I also plotted the ratios of computing times required to calculate GMM estimates using the FOD formula in (\ref{FOD_ar1}). For the FOD transformation, how relative computing time increases with $T$ is indicated by the dashed line in the figure. The line hugs the horizontal axis in Figure 2 because the increase in computing time with $T$ is of a much smaller order of magnitude when the FOD transformation is used than when the FD transformation is used. Specifically, for $N=100$, the computing time given $T=10$ is 2.2 times what it is when $T=5$. Thus, doubling $T$ leads to a bit more than double the computing time. However, a ten-fold increase in $T$, from 5 to 50, increases computing time by a factor of only about 24 when the FOD transformation is used. Although the computational complexity of the FOD transformation is not linear in $T$, it increases at a much slower rate in $T$ than is the case when differencing is used.
Consequently, as $T$ increases, using the FOD formula for calculating GMM estimates leads to significant reductions in computing time relative to using the differencing formula. Table 1 shows how large these reductions can be. It reports the time required to compute estimates using the FD transformation over the time required to compute those same estimates using the FOD transformation, for different values of $T$ and $N$. For small $T$, the computations based on first differences are faster. For example, for $T=5$, it takes about half the time to calculate an estimate using the FD formula in (\ref{GMM1_delta}) rather than the FOD formula in (\ref{FOD_ar1}). However, because computational complexity increases in $T$ at a slower rate when the FOD transformation is used rather than differencing, the FOD method is faster --- indeed, much faster --- than differencing for larger values of $T$. For $T$ as small as 10, the FOD transformation is faster, and, for $T=50$, computations based on the FOD transformation are two orders of magnitude faster --- specifically, over 300 times faster --- than computations exploiting the FD transformation. \vspace{10mm} Table 1: Time to compute FD GMM estimates over time to compute FOD GMM estimates.
\begin{tabular}[h]{lccccccccccc} \hline & & & & & & & & & & & \\ & & \multicolumn{10}{c}{$T$} \\ & & $5$ & $10$ & $15$ & $20$ & $25$ & $30$ & $35$ & $40$ & $45$ & $50$ \\ \hline \multicolumn{1}{c}{} & & & & \multicolumn{1}{r}{} & & & & & & & \\ $N=100$ & & 0.47 & 1.18 & 4.82 & 15.24 & 33.30 & 65.05 & 106.93 & 167.56 & 235.44 & 316.90 \\ $N=200$ & & 0.47 & 1.20 & 4.86 & 15.62 & 33.62 & 61.60 & 104.07 & 162.72 & 236.71 & 319.03 \\ $N=300$ & & 0.47 & 1.18 & 4.92 & 15.89 & 33.64 & 61.70 & 104.19 & 163.38 & 237.45 & 320.49 \\ $N=400$ & & 0.47 & 1.19 & 5.19 & 15.97 & 33.89 & 61.55 & 104.39 & 163.49 & 240.04 & 321.96 \\ $N=500$ & & 0.48 & 1.20 & 5.24 & 15.90 & 33.66 & 61.98 & 104.63 & 163.48 & 239.43 & 321.65 \\ & & & & & & & & & & & \\ \hline \end{tabular}% Note: Each estimate is based on 100 samples. \section{Summary and concluding remarks} This paper showed that the computational complexity of one-step GMM estimation based on the FD transformation increases with $N$ linearly but it increases with $T$ dramatically --- at the rate $T^{6}$ increases. On the other hand, when the FOD transformation is used, computational complexity still increases with $N$ linearly, but it increases with $T$ at the rate $T^{4}$ increases. Simulation evidence provided in Section \ref{MC} showed that the reductions in computing time from use of the FOD instead of the FD transformation are dramatic when $T$ is not small. The fact that estimates can be computed so much faster with the FOD transformation implies that Monte Carlo simulations, and other computationally intensive procedures, can be conducted in a fraction of the time if the FOD transformation is used rather than the FD transformation. Consequently, Monte Carlo studies using large values of $T$ and complicated models that may be prohibitively costly for GMM estimation based on the FD transformation may be feasible for GMM based on the FOD transformation. 
\section*{Appendix A: Floating point operations for one-step GMM} A floating point operation (flop) is an addition, subtraction, multiplication, or division. This appendix shows how the number of flops required to calculate $\widehat{\delta }$ via the formulas in (\ref% {GMM1_delta}) and (\ref{FOD_ar1}) depends on $N$ and $T$. To find the number of flops required to calculate $\widehat{\delta }$, the following facts will be used repeatedly throughout this appendix. Let $\boldsymbol{B}$, $\boldsymbol{% E}$, and $\boldsymbol{H}$ be $q\times r$, $q\times r$, and $r\times s$ matrices, and let $d$\ be a scalar. Then $d\boldsymbol{B}$, $\boldsymbol{B}% \pm \boldsymbol{E}$, and $\boldsymbol{BH}$ consist of $qr$, $qr$, and $% qs \left( 2r-1\right) $ flops, respectively (see Hunger, 2007).% \subsection*{Appendix A.1: Floating point operations using differencing} To calculate $\boldsymbol{\tilde{y}}_{i} =\boldsymbol{D}\boldsymbol{y}_{i}$ ($i=1,\ldots,N$), a total of $N\left(T-1\right)\left(2T-1\right)$ flops are needed. After the $\boldsymbol{\tilde{y}}_{i}$s are calculated, the number of flops required to compute $\boldsymbol{\tilde{s}}% =\sum_{i}\boldsymbol{Z}_{i}^{\prime } \boldsymbol{\tilde{y}}_{i}=\left( \boldsymbol{Z}_{1}^{\prime },\ldots \boldsymbol{Z}_{N}^{\prime }\right) \left( \boldsymbol{\tilde{y}}_{1}^{\prime },\ldots , \boldsymbol{\tilde{y}}% _{N}^{\prime }\right) ^{\prime }$ is $m\left[ 2N\left( T-1\right) -1\right] $, where $m=T (T -1) /2$ is the number of moment restrictions. Therefore, the total number of flops required to calculate $\boldsymbol{\tilde{s}}$ is $N\left(T-1\right)\left(2T-1\right)+m\left[ 2N\left( T-1\right) -1\right]$. Given $m$ increases with $T$ at a quadratic rate, the number of flops required to compute $\boldsymbol{\tilde{s}}$ is therefore of order O($NT^{3}$). The same number of flops is needed to compute $\boldsymbol{\tilde{s}}_{-1}$. 
Hence, the number of flops needed to compute $\boldsymbol{\tilde{s}}$ and $\boldsymbol{\tilde{s}}_{-1}$ increases with $N$ linearly, for given $T$, and with $T$ at a cubic rate, for given $N$. To compute $\boldsymbol{A} _{N}$ we must compute $\boldsymbol{G}=\boldsymbol{D}\boldsymbol{D}^{\prime}$, which requires $\left(T-1\right)^{2}\left(2T-1\right)$ flops; the products $\boldsymbol{GZ}_{i}$ $\left( i=1,\ldots ,N\right) $, which require another $Nm\left( T-1\right) \left( 2T-3\right) $ flops; and the products $\boldsymbol{Z}_{i}^{\prime }\left( \boldsymbol{GZ}_{i}\right) $ $\left( i=1,\ldots ,N\right) $, which require $Nm^{2}\left( 2T-3\right) $ flops. Finally, we execute $N-1$ summations of the $m\times m$ matrices $\boldsymbol{Z}_{i}^{\prime }\boldsymbol{GZ}_{i}$ $\left( i=1,\ldots ,N\right) $ for another $\left(N-1\right)m^{2}$ flops. From this accounting, we see that $\left(T-1\right)^{2}\left(2T-1\right)+Nm\left( T-1\right) \left( 2T-3\right) + Nm^{2}\left( 2T-3\right)+\left(N-1\right)m^{2}$ flops are required to compute $\boldsymbol{A} _{N}$. Given $m$ is quadratic in $T$, the number of flops required to compute $\boldsymbol{A} _{N}$ is of order O($NT^{5}$). Hence, the number of flops increases with $N$ linearly, for given $T$, but increases with $T$ at the rate $T^{5}$, for given $N$. The number of flops required to compute $\boldsymbol{A} _{N}^{-1}$ increases with $T$ at the rate $T^{6}$. To see this, note that standard methods for inverting a $q\times q$ matrix require on the order of $q^{3}$ operations (see Hunger, 2007; Strang, 2003, pp. 452--455). The matrix $\boldsymbol{A} _{N}$ is $m\times m$, and $m$ increases with $T$ at the rate $T^{2}$ if all available moment restrictions are exploited. Hence, the number of flops required to invert $\boldsymbol{A} _{N}$ is of order O($T^{6}$). No additional calculations increase with $T$ and $N$ as quickly as computing $\boldsymbol{A}_{N}$ and its inversion.
For example, after $\boldsymbol{A}_{N}^{-1}$ is calculated, $m\left( 2m-1\right)$ flops are required to calculate $\boldsymbol{\tilde{a}}=\boldsymbol{\tilde{s}}_{-1}^{\prime }\boldsymbol{A}_{N}^{-1}$, while computing $\boldsymbol{\tilde{a}}\boldsymbol{\tilde{s}}_{-1}$ and $\boldsymbol{\tilde{a}}\boldsymbol{\tilde{s}}$ requires $2m-1$ flops each.

\subsection*{Appendix A.2: Floating point operations using FOD}

Calculation of $\boldsymbol{\ddot{y}}_{i} =\boldsymbol{F}\boldsymbol{y}_{i}$ ($i=1,\ldots,N$) requires $N\left(T-1\right)\left(2T-1\right)$ flops. An additional $t\left( 2N-1\right)$ flops are needed to calculate $\boldsymbol{\ddot{s}}_{t}=\left( \boldsymbol{z}_{1t},\ldots , \boldsymbol{z}_{Nt}\right) \left( \ddot{y}_{1t},\ldots ,\ddot{y}_{Nt}\right) ^{\prime }$. Therefore, calculation of all of the $\boldsymbol{\ddot{s}}_{t}$s ($t=1,\ldots ,T-1$) requires $\ddot{f}_{1}=N\left(T-1\right)\left(2T-1\right)+\left( 2N-1\right) \sum_{t=1}^{T-1}t=N\left(T-1\right)\left(2T-1\right)+\left( 2N-1\right) T\left( T-1\right) /2$ flops, which is of order O($NT^{2}$). Calculation of $\boldsymbol{\ddot{s}}_{t-1}$ ($t=1,\ldots ,T-1$) requires another $\ddot{f}_{2}=\ddot{f}_{1}$ flops. \sloppy On the other hand, computing $\boldsymbol{S}_{t}=\left( \boldsymbol{z}_{1t},\ldots , \boldsymbol{z}_{Nt}\right)\left( \boldsymbol{z}_{1t},\ldots , \boldsymbol{z}_{Nt}\right)^{\prime}$ requires $t^{2}\left( 2N-1\right)$ flops. Therefore, calculation of $\boldsymbol{S}_{t}$ ($t=1,\ldots ,T-1$) requires $\ddot{f}_{3}=\left( 2N-1\right) \sum_{t=1}^{T-1}t^{2}=\left( 2N-1\right) T\left( 2T-1\right) \left( T-1\right) /6$ flops, which is of order O($NT^{3}$). \fussy The matrix $\boldsymbol{S}_{t}$ is a $t\times t$ matrix, which requires on the order of O($t^{3}$) flops to invert.
Given that there are $T-1$ $\boldsymbol{S}_{t}$ matrices that must be inverted, the number of operations required to invert all of them is on the order of $\ddot{f}_{4}=\sum_{t=1}^{T-1}t^{3}=T^{2}\left( T-1\right) ^{2}/4$ flops. In other words, the number of flops required to invert all of the $\boldsymbol{S}_{t}$ matrices is of order O($T^{4}$). After $\boldsymbol{S}_{t}^{-1}$ ($t=1,\ldots ,T-1$) are computed, computing $\boldsymbol{\ddot{a}}_{t}=\boldsymbol{\ddot{s}}_{t-1}^{\prime }\boldsymbol{S}_{t}^{-1}$ ($t=1,\ldots ,T-1$) requires another $\ddot{f}_{5}=\sum_{t=1}^{T-1}t\left( 2t-1\right) =T\left( T-1\right) \left(4T-5\right) /6$ flops, which is of order O($T^{3}$). Next, calculation of $\boldsymbol{\ddot{a}}_{t}\boldsymbol{\ddot{s}}_{t}$ ($t=1,\ldots ,T-1$) requires $\ddot{f}_{6}=\sum_{t=1}^{T-1}\left( 2t-1\right) =T\left( T-2\right) +1$ flops, and then summing the computed $\boldsymbol{\ddot{a}}_{t}\boldsymbol{\ddot{s}}_{t}$s (i.e., forming $\sum_{t=1}^{T-1}\boldsymbol{\ddot{a}}_{t}\boldsymbol{\ddot{s}}_{t}$) requires another $\ddot{f}_{7}=T-2$ flops. Hence, calculation of $\sum_{t=1}^{T-1} \boldsymbol{\ddot{a}}_{t}\boldsymbol{\ddot{s}}_{t}$ requires $\sum_{j=1}^{7}\ddot{f}_{j}$ flops. This work increases linearly with $N$, for given $T$, and with $T$ at the rate $T^{4}$, for given $N$. Of course, to compute $\widehat{\delta }_{F}$ we must also compute $\sum_{t=1}^{T-1}\boldsymbol{\ddot{a}}_{t}\boldsymbol{\ddot{s}}_{t-1}$, but the $\boldsymbol{\ddot{a}}_{t}$s and $\boldsymbol{\ddot{s}}_{t-1}$s have already been calculated. Therefore, the remaining calculations required to compute $\widehat{\delta }_{F}$ are but a small part of the total number of flops required.
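To make the comparison concrete, the flop counts above can be tallied numerically. The following Python sketch is not part of the paper; the function names are hypothetical, and the $m^{3}$ term stands in for the order-of-magnitude cost of inverting $\boldsymbol{A}_{N}$. It illustrates how quickly the differencing-based count outgrows the FOD-based count as $T$ increases.

```python
def m_moments(T):
    """Number of moment restrictions when all are exploited: m = T(T-1)/2."""
    return T * (T - 1) // 2

def flops_differencing(N, T):
    """Total flops for the differencing-based estimator (Appendix A.1)."""
    m = m_moments(T)
    s_tilde = N * (T - 1) * (2 * T - 1) + m * (2 * N * (T - 1) - 1)
    A_N = ((T - 1) ** 2 * (2 * T - 1)
           + N * m * (T - 1) * (2 * T - 3)
           + N * m ** 2 * (2 * T - 3)
           + (N - 1) * m ** 2)
    inverse = m ** 3                      # order-of-magnitude cost of inverting A_N
    final = m * (2 * m - 1) + 2 * (2 * m - 1)
    return 2 * s_tilde + A_N + inverse + final   # s_tilde and s_tilde_{-1} cost the same

def flops_fod(N, T):
    """Total flops f1 + ... + f7 for the FOD-based estimator (Appendix A.2)."""
    f1 = N * (T - 1) * (2 * T - 1) + (2 * N - 1) * T * (T - 1) // 2
    f2 = f1
    f3 = (2 * N - 1) * T * (2 * T - 1) * (T - 1) // 6
    f4 = T ** 2 * (T - 1) ** 2 // 4
    f5 = T * (T - 1) * (4 * T - 5) // 6
    f6 = T * (T - 2) + 1
    f7 = T - 2
    return f1 + f2 + f3 + f4 + f5 + f6 + f7
```

For fixed $N$, the ratio of the two counts grows with $T$, consistent with the O($NT^{5}$) plus O($T^{6}$) versus O($NT^{3}$) orders derived above.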
\section{Introduction \label{section intro}}

Feature selection is indispensable for predicting clinical or biological outcomes from microbiome data, as researchers are often interested in identifying the most relevant microbial features associated with a given outcome. This task can be particularly challenging in microbiome analyses, as the datasets are typically high-dimensional, underdetermined (the number of features far exceeds the number of samples), sparse (a large number of zeros are present), and compositional (the relative abundances of taxa in a sample sum to one). Current methodological research has focused on developing and identifying the best feature selection methods for handling the above characteristics of microbiome data; however, methods are typically evaluated based on the overall performance of model prediction, such as Mean Squared Error (MSE), R-squared or Area Under the Curve (AUC). While prediction accuracy is important, another, possibly more biologically relevant, criterion for choosing an optimal feature selection method is reproducibility, i.e. how reproducible are the discovered features in unseen (independent) samples? If a feature selection method is identifying true signals in a microbiome dataset, then we would expect those discovered features to be found in other similar datasets using the same method, indicating high reproducibility of the method. If a feature selection method yields a good model fit yet poor reproducibility, then its discovered features will mislead the related biological interpretation. The notion of reproducibility for evaluating feature selection methods seems intuitive and sensible, yet in reality we neither have access to multiple similar datasets to estimate reproducibility, nor a well-defined mathematical formula to define it.
The many available resampling techniques~\citep{efron1994introduction} enable us to utilize well-studied methods, for example bootstrapping, to create replicates of real microbiome datasets for estimating reproducibility. Moreover, given the burgeoning research on reproducibility estimation in the field of computer science~\citep{kalousis2005stability, kalousis2007stability, nogueira2018quantifying}, we can borrow their concept of Stability to approximate the reproducibility of feature selection methods in microbiome data analysis. In this paper, we investigate the performance of a popular model prediction metric, MSE, and the proposed feature selection criterion, Stability, in evaluating four widely used feature selection methods in microbiome analysis (lasso, elastic net, random forests and compositional lasso)~\citep{tibshirani1996regression, zou2005regularization, breiman2001random, lin2014variable}. We evaluate these methods in both extensive simulations and experimental microbiome applications, with a focus on feature selection in the context of continuous outcomes. We find that Stability is a superior feature selection criterion to MSE, as it is more reliable in discovering true and biologically meaningful signals. We thus suggest that microbiome researchers use a reproducibility criterion such as Stability, instead of a model prediction performance metric such as MSE, for feature selection in microbiome data analysis.

\section{Methods\label{method}}
\subsection{Estimation of stability}

The Stability of a feature selection method was defined as the robustness of the feature preferences it produces to differences in training sets drawn from the same generating distribution~\citep{kalousis2005stability}. If the subsets of chosen features are nearly static with respect to data changes, then the feature selection method is a \textit{stable} procedure.
Conversely, if small changes to the data result in significantly different feature subsets, then this method is considered \textit{unstable}, and we should not trust the output as reflective of the true underlying structure influencing the outcome being predicted. In biomedical fields, this is a proxy for reproducible research, in the latter case indicating that the biological features the method has found are likely to be a data artifact, not a real clinical signal worth pursuing with further resources~\citep{lee2013robustness}. \citet{goh2016evaluating} recommend augmenting statistical feature selection methods with concurrent analysis on stability and reproducibility to improve the quality of selected features prior to experimental validation~\citep{sze2016looking, duvallet2017meta}. While the intuition behind the concept of stability is simple, there is to date no single agreed-upon measure for precisely quantifying stability. Up to now, there have been at least~16 different measures proposed to quantify the stability of feature selection algorithms in the field of computer science~\citep{nogueira2017stability}. Given the variety of stability measures published, it is sensible to ask: which stability measure is most valid in the context of microbiome research? A multiplicity of methods for stability assessment may lead to publication bias in that researchers may be drawn toward the metric that extracts their hypothesized features or that reports their feature selection algorithm as more stable~\citep{boulesteix2009stability}. Under the perspective that a useful measure should obey certain properties that are desirable in the domain of application, and provide capabilities that other measures do not, Nogueira and Brown aggregated and generalized the requirements of the literature into a set of five properties~\citep{nogueira2017stability}. 
The first property requires the stability estimator to be fully defined for any collection of feature subsets, thus allowing a feature selection algorithm to return a varying number of features. The second property requires the stability estimator to be a strictly decreasing function of the average variance of the selection of each feature. The third property requires the stability estimator to be bounded by constants not dependent on the overall number of features or the number of features selected. The fourth property states that a stability estimator should achieve its maximum if and only if all chosen feature sets are identical. The fifth property requires that under the null model of feature selection, where we independently draw feature subsets at random, the expected value of a stability estimator should be constant. These five properties are desirable in any reasonable feature selection scenario, and are critical for useful comparison and interpretation of stability values. Among all the existing measures, only Nogueira's stability measure (defined below) satisfies all five properties, thus we adopted this measure in the current work. We assume a data set of $n$ samples $\{x_i,y_i\}_{i=1}^n$ where each $x_i$ is a $p$-dimensional feature vector and $y_i$ is the associated biological outcome. The task of feature selection is to identify a feature subset, of size $k<p$, that conveys the maximum information about the outcome $y$. An ideal approach to measure stability is to first take $M$ data sets drawn randomly from the same underlying population, to apply feature selection to each data set, and then to measure the variability in the $M$ feature sets obtained. 
The collection of the $M$ feature sets can be represented as a binary matrix $Z$ of size $M \times p$, where a row represents a feature set (for a particular data set) and a column represents the selection of a given feature over the $M$ data sets, as follows \begin{equation*} Z = \begin{pmatrix} Z_{1,1} & \cdots & Z_{1,p} \\ \vdots & \ddots & \vdots \\ Z_{M,1} & \cdots & Z_{M,p} \end{pmatrix} \end{equation*} Let $Z_{.f}$ denote the $f^{th}$ column of the binary matrix $Z$, indicating the selection of the $f^{th}$ feature among the $M$ data sets. Then $Z_{.f} \sim \mathrm{Bernoulli}(p_f)$, with $\hat p_f = \frac{1}{M} \sum_{i=1}^M Z_{i,f}$ the observed selection probability of the $f^{th}$ feature. Nogueira defined the stability estimator as \begin{equation} \hat \Phi(Z) = 1- \frac{\frac{1}{p} \sum_{f=1}^p \sigma_f^2}{E [\frac{1}{p} \sum_{f=1}^p \sigma_f^2 |H_0 ]} = 1-\frac{\frac{1}{p} \sum_{f=1}^p \sigma_f^2 }{\frac{\bar k}{p} (1- \frac{\bar k}{p})} \end{equation} where $\sigma_f^2= \frac{M}{M-1} \hat p_f (1-\hat p_f)$ is the unbiased sample variance of the selection of the $f^{th}$ feature, $H_0$ denotes the null model of feature selection (i.e. feature subsets are drawn independently at random), and $\bar k = \frac{1}{M} \sum_{i=1}^M \sum_{f=1}^p Z_{i,f}$ is the average number of selected features over the $M$ data sets. In practice, we usually only have one data sample (not $M$), so a typical approach to measure stability is to first take $M$ bootstrap samples of the provided data set and then apply the procedure described in the previous paragraph. Other data sampling techniques could be used as well, but due to the well-understood properties and familiarity of the bootstrap to the community, we adopt the bootstrap approach.
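As a concrete illustration, $\hat\Phi(Z)$ can be computed in a few lines. The sketch below (Python/NumPy, not part of the paper's code) follows the formula above term by term.

```python
import numpy as np

def nogueira_stability(Z):
    """Nogueira's stability estimator from an M x p binary selection matrix Z.

    Each row of Z records which of the p features a method selected on one
    of the M (bootstrap) data sets. Undefined when every run selects all
    features or none (the H0 denominator is then zero).
    """
    Z = np.asarray(Z, dtype=float)
    M, p = Z.shape
    p_hat = Z.mean(axis=0)                       # selection frequency of each feature
    sigma2 = M / (M - 1) * p_hat * (1 - p_hat)   # unbiased selection variance
    k_bar = Z.sum(axis=1).mean()                 # average subset size
    denom = (k_bar / p) * (1 - k_bar / p)        # expected variance under H0
    return 1 - sigma2.mean() / denom
```

Identical feature sets across all $M$ runs give a stability of exactly 1, while selections at chance level drive the estimator toward 0 (it can be slightly negative in finite samples).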
\subsection{Four selected feature selection methods}

Lasso, elastic net, compositional lasso and random forests were chosen as the benchmark feature selection methods in this paper due to their wide application in the microbiome community~\citep{knights2011supervised}. Lasso is a penalized least squares method imposing an $L_1$-penalty on the regression coefficients~\citep{tibshirani1996regression}. Owing to the nature of the $L_1$-penalty, lasso performs continuous shrinkage and automatic variable selection simultaneously. One limitation of lasso is that if there is a group of variables among which the pairwise correlations are very high, then lasso tends to select one variable from the group and ignore the others. Elastic net is a generalization of lasso, imposing a convex combination of the $L_1$ and $L_2$ penalties, which allows elastic net to select groups of correlated variables when predictors are highly correlated~\citep{zou2005regularization}. Compositional lasso is an extension of lasso to compositional data analysis~\citep{lin2014variable}, and it is one of the most highly cited compositional feature selection methods in microbiome analysis~\citep{kurtz2015sparse, li2015microbiome, shi2016regression, silverman2017phylogenetic}. Compositional lasso, or the sparse linear log-contrast model, considers variable selection via $L_1$ regularization. The log-contrast regression model expresses the continuous outcome of interest as a linear combination of the log-transformed compositions subject to a zero-sum constraint on the regression vector, which leads to the intuitive interpretation of the response as a linear combination of log-ratios of the original composition. Suppose an $n \times p$ matrix $X$ consists of $n$ samples of the composition of a mixture with $p$ components, and suppose $Y$ is a response variable depending on $X$.
The nature of composition makes each row of $X$ lie in the $(p-1)$-dimensional positive simplex $S^{p-1}=\{(x_1,\ldots,x_p ): x_j>0, j=1,\ldots,p \text{ and } \sum_{j=1}^p x_j =1 \}$. The compositional lasso model is then expressed as \begin{equation} y=Z \beta + \epsilon, \quad \sum_{j=1}^p \beta_j =0 \end{equation} where $Z=(z_1,\ldots,z_p )=(\log x_{ij})$ is the $n \times p$ design matrix, and $\beta= (\beta_1,\ldots,\beta_p)^T$ is the $p$-vector of regression coefficients. Applying the $L_1$ regularization approach to this model gives \begin{equation} \hat \beta = \operatorname{argmin}_{\beta} \left(\frac{1}{2n} ||y - Z\beta||_2^2 + \lambda ||\beta||_1\right), \text{ subject to } \sum_{j=1}^p \beta_j = 0 \end{equation} where $\beta = (\beta_1,\ldots, \beta_p)^T$, $\lambda>0$ is a regularization parameter, and $||\cdot||_2$ and $||\cdot||_1$ denote the $L_2$ and $L_1$ norms, respectively. Random forests is regarded as one of the most effective machine learning techniques for feature selection in microbiome analysis \citep{belk2018microbiome, liu2017experimental, namkung2020machine, santo2019clustering, statnikov2013comprehensive}. Random forests is a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest \citep{breiman2001random}. Since random forests does not select features but only assigns importance scores to them, we choose features from random forests using Altmann's permutation test \citep{altmann2010permutation}, in which the response variable is randomly permuted $S$ times, and new random forests and new importance scores are computed each time. The $S$ importance scores are then used to compute the p-value for each feature, derived as the fraction of the $S$ permuted importance scores that are greater than the original importance score.
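This permutation scheme is straightforward to sketch. The example below assumes scikit-learn's \texttt{RandomForestRegressor} as the forest implementation (the paper does not specify one) and follows the p-value definition in the text; it is an illustration, not the authors' code.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def altmann_pvalues(X, y, n_perm=100, seed=0):
    """Permutation p-values for random-forest importances (Altmann et al., 2010).

    The response y is permuted n_perm times, a forest is refit on each
    permuted response, and the p-value of a feature is the fraction of
    permuted importance scores that exceed its original importance.
    """
    rng = np.random.default_rng(seed)
    rf = RandomForestRegressor(n_estimators=100, random_state=seed)
    observed = rf.fit(X, y).feature_importances_
    exceed = np.zeros_like(observed)
    for _ in range(n_perm):
        perm_imp = rf.fit(X, rng.permutation(y)).feature_importances_
        exceed += perm_imp > observed
    return exceed / n_perm
```

Features with small p-values (e.g. below 0.05) are retained as the random-forests selection.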
\subsection{Simulation settings}

We compared the performance of the popular model prediction metric MSE and the proposed criterion Stability in evaluating four widely used feature selection methods across different data scenarios. We simulated features with Independent, Toeplitz and Block correlation structures for datasets with the number of samples and features in all possible combinations of $(50, 100, 500, 1000)$, resulting in the ratio of $p$ (number of features) over $n$ (number of samples) ranging from 0.05 to 20. Our simulated compositional microbiome data are an extension of the simulation settings from \citet{lin2014variable}, as follows: \begin{enumerate} \item Generate an $n \times p$ data matrix $W=(w_{ij})$ from a multivariate normal distribution $N_p(\theta,\Sigma)$. To reflect the fact that the components of a composition in metagenomic data often differ by orders of magnitude, let $\theta = (\theta_j)$ with $\theta_j =\log(0.5p)$ for $j=1,\ldots,5$ and $\theta_j=0$ otherwise. To describe different types of correlations among the components, we generated three general correlation structures: an Independent design, where covariates are independent from each other; a Toeplitz design, where $\Sigma =(\rho^ {|i-j|})$ with $\rho=0.1,0.3,0.5,0.7,0.9$; and a Block design with 5 blocks, where the intra-block correlations are 0.1, 0.3, 0.5, 0.7, 0.9, and the inter-block correlation is 0.09. \item Obtain the covariate matrix $X=(x_{ij})$ by the transformation $x_{ij} = \frac{\exp(w_{ij})}{\sum_{k=1}^p \exp(w_{ik})}$, and the $n \times p$ log-ratio matrix $Z=\log(X)$, which follows a logistic normal distribution \citep{aitchison1982statistical}. \item Generate the responses $y$ according to the model $y=Z \beta^*+ \epsilon$, $\sum_{j=1}^p \beta_j^* =0$, where $\epsilon \sim N(0, 0.5^2)$ and $\beta^*=(1,-0.8,0.6,0,0,-1.5,-0.5,1.2,0,\ldots,0)^T$, indicating that only 6 features are real signals.
\item Repeat steps 1-3 100 times to obtain 100 simulated datasets for each simulation setting, and apply the desired feature selection algorithm with 10-fold cross-validation to the 100 simulated datasets. Specifically, each simulated dataset is separated into training and test sets in the ratio of 8:2; 10-fold cross-validation is applied to the training set ($80\%$ of the data) for parameter tuning and variable selection, and then model prediction (i.e. MSE) is evaluated on the test set ($20\%$ of the data). Stability is then measured according to Nogueira's definition based on the 100 subsets of selected features. Average MSE is calculated as the mean of the MSEs across the 100 simulated datasets, and the average false positive or false negative rate denotes the mean of the false positive or false negative rates across the 100 simulated datasets. \end{enumerate} In summary, a total of 176 simulation scenarios were generated, with 16 for the Independent design and 80 each for the Toeplitz and Block designs, and 100 replicated datasets were simulated for each simulation setting, resulting in 17,600 simulated datasets in total.

\section{Simulation results}

Given that the true numbers of false positive and false negative features are known in simulations, we can utilize their relationships with MSE and Stability to compare the reliability of MSE and Stability in evaluating feature selection methods. In theory, we would expect a positive correlation between MSE and the false positive or false negative rate, and a negative correlation between Stability and the false positive or false negative rate. This is because when the real signals are harder to select (i.e. increasing false positive or false negative rates), a feature selection method would perform worse (i.e. increasing MSE or decreasing Stability).
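For concreteness, steps 1--3 of the simulation design can be reproduced in a few lines of NumPy. The sketch below (Toeplitz design only; a hypothetical helper, not the authors' code) generates one simulated dataset.

```python
import numpy as np

def simulate_compositional(n, p, rho=0.5, seed=0):
    """One simulated dataset following steps 1-3 above (Toeplitz design).

    Returns the log-composition matrix Z, the response y, and the true
    coefficient vector beta*, whose 6 nonzero entries sum to zero.
    """
    rng = np.random.default_rng(seed)
    theta = np.zeros(p)
    theta[:5] = np.log(0.5 * p)          # first 5 components on a larger scale
    idx = np.arange(p)
    Sigma = rho ** np.abs(np.subtract.outer(idx, idx))       # Toeplitz correlation
    W = rng.multivariate_normal(theta, Sigma, size=n)
    X = np.exp(W) / np.exp(W).sum(axis=1, keepdims=True)     # rows sum to one
    Z = np.log(X)
    beta = np.zeros(p)
    beta[:8] = [1, -0.8, 0.6, 0, 0, -1.5, -0.5, 1.2]         # zero-sum beta*
    y = Z @ beta + rng.normal(0, 0.5, size=n)
    return Z, y, beta
```

Repeating this with fresh seeds yields the 100 replicated datasets per setting described in step 4.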
The first column in Figure~\ref{f:fig1} shows the relationship between MSE and false positive rate in the three correlation designs, and the second column in Figure~\ref{f:fig1} shows the relationship between Stability and false positive rate. In contrast to the random pattern in MSE vs. false positive rate (Figure~\ref{f:fig1} A-C-E), where a drastic increase in false positive rate can lead to little change in MSE (e.g. random forests), or a big drop in MSE corresponds to little change in false positive rate (e.g. elastic net), we see a clear negative correlation between Stability and false positive rate (Figure~\ref{f:fig1} B-D-F). Regarding false negative rate, we also observe a random pattern for MSE and a meaningful negative correlation for Stability (Supplementary Figure 1). These results suggest that Stability is a more reliable evaluation criterion than MSE due to its closer reflection of the ground truth in the simulations (i.e. false positive \& false negative rates), and this holds irrespective of the feature selection method used, the features-to-sample-size ratio ($p/n$) or the correlation structure among the features. \begin{figure} \centerline{\includegraphics[width=5in]{fig1_mse_stab_fpr.png}} \caption{Comparing the relationship between MSE and False Positive Rate vs. Stability and False Positive Rate in three correlation structures. Colored dots represent values from different feature selection methods: compositional lasso (red), elastic net (green), lasso (blue) and random forests (purple). The size of the dots indicates the features-to-sample-size ratio $p/n$.} \label{f:fig1} \end{figure} Using the more reliable criterion Stability, we now investigate the best feature selection method in different simulation scenarios.
Based on Stability, compositional lasso has the highest stability in ``easier'' correlation settings (Toeplitz 0.1--0.7 in Supplementary Figure 2, represented by Toeplitz 0.5 in Figure \ref{f:fig2} A due to their similar results; Block 0.9--0.3 in Supplementary Figure 3, represented by Block 0.5 in Figure \ref{f:fig2} C) for all combinations of $n$ (number of samples) and $p$ (number of features). Across all ``easier'' correlation scenarios, compositional lasso has an average stability of 0.76, with its minimum at 0.21 and its maximum close to 1 (0.97), while the second-best method, lasso, has an average stability of only 0.44, with a range from 0.09 to 0.89, and the average stabilities of random forests and elastic net fall as low as 0.24 and 0.17, respectively. In ``extreme'' correlation settings (Toeplitz 0.9 in Figure \ref{f:fig2} B or Block 0.1 in Figure \ref{f:fig2} D), compositional lasso no longer maintains the highest stability across all scenarios, but it still has the highest average stability of 0.42 in Toeplitz 0.9 (surpassing the second-best, lasso, by 0.09), and the second-highest average stability in Block 0.1 (only 0.03 lower than the winner, lasso). Regarding specific scenarios in the ``extreme'' correlation settings, compositional lasso, lasso or random forests can be the best in different combinations of $p$ and $n$. For example, in both Toeplitz 0.9 and Block 0.1, with small $p$ ($p$ = 50 or 100), random forests has the highest stability ($\geq 0.8$) when $n$ is largest ($n=1000$), but lasso or compositional lasso surpasses random forests when $n$ is smaller than 1000, although all methods have poor stability ($\leq 0.4$) when $n \leq 100$. This indicates that the best feature selection method based on Stability depends on the correlation structure among the features, the number of samples and the number of features in each particular dataset; thus there is no single omnibus best, i.e., most stable, feature selection method.
\begin{figure} \centerline{\includegraphics[width=5in]{fig2_methods_stab.png}} \caption{Method comparisons based on Stability in representative correlation structures. Colored bars represent Stability values corresponding to a specific number of samples (x-axis) and number of features ($p$) for different feature selection methods: compositional lasso (red), elastic net (green), lasso (blue) and random forests (purple). Note that Toeplitz 0.1--0.7 has results similar to Toeplitz 0.5 (see Supplementary Figure 2), and Block 0.9--0.3 has results similar to Block 0.5 (see Supplementary Figure 3). Moreover, Stability equals zero when no features are selected by a method (e.g. random forests selects nothing when the number of samples equals 50).} \label{f:fig2} \end{figure} How will the results differ if we use MSE as the evaluation criterion? Using the extreme correlation settings (Toeplitz 0.9 and Block 0.1) as examples, random forests has the lowest MSEs for all combinations of $p$ and $n$ (Figure \ref{f:fig3} A-B). However, Figure \ref{f:fig3} C-D reveals that random forests has the highest false negative rates in all scenarios of Toeplitz 0.9 and Block 0.1, and its false negative rates can reach the maximum of 1, indicating that random forests fails to pick up any real signal despite its low prediction error. Moreover, Figure \ref{f:fig3} E-F shows that random forests can have the highest false positive rates when $p$ is as large as 500 or 1000. All of this highlights the danger of choosing an inappropriate feature selection method based on MSE, where the merit of high predictive power masks high errors in false positives and false negatives. On the other hand, the method with the lowest false positive rates (compositional lasso) (Figure~\ref{f:fig3} E-F) was instead found to have the worst performance by MSE (Figure~\ref{f:fig3} A-B), illustrating another pitfall of using MSE as the evaluation criterion: missing the optimal method.
\begin{figure} \centerline{\includegraphics[width=5in]{fig3_methods_mse.png}} \caption{Method comparisons based on MSE in extreme correlation structures (Toeplitz 0.9 for A, C, E and Block 0.1 for B, D, F). Colored bars represent MSE (A-B), False Negative Rates (C-D), and False Positive Rates (E-F) corresponding to a specific number of samples (x-axis) and features ($p$) for different feature selection methods: compositional lasso (red), elastic net (green), lasso (blue) and random forests (purple). Note that false positive rates are not available for random forests when the number of samples equals 50 because it chooses zero features.} \label{f:fig3} \end{figure} The use of point estimates alone to compare feature selection methods, without incorporating the variability in these estimates, could be misleading. Hence, as a next step, we evaluate the reliability of MSE and Stability across methods using a hypothesis testing framework. This is demonstrated with the cases of $n = 100$ and $p = 1000$ for Toeplitz 0.5 and Block 0.5, where compositional lasso is found to be the best feature selection method based on Stability, while random forests is the best based on MSE. We use the bootstrap to construct $95\%$ confidence intervals to compare compositional lasso vs. random forests based on Stability or MSE. For each simulated dataset (100 in total for Toeplitz 0.5 or Block 0.5), we generate 100 bootstrapped datasets and apply the feature selection methods to each bootstrapped dataset. Then, for each simulated dataset, Stability is calculated based on the 100 subsets of selected features from the bootstrapped replicates, and the variance of Stability is measured as its variability across the 100 simulated datasets. Since MSE can be obtained for each simulated dataset without bootstrapping, we use the variability of MSE across the 100 simulated datasets as its variance.
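One simple way to form such intervals, assuming paired per-dataset values for the two methods, is a percentile bootstrap over the 100 simulated datasets. The sketch below is an illustration of this idea, not the authors' exact procedure.

```python
import numpy as np

def bootstrap_ci_diff(vals_a, vals_b, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the mean difference (method A - method B)
    in a per-dataset criterion such as Stability or MSE, paired over datasets."""
    rng = np.random.default_rng(seed)
    diff = np.asarray(vals_a) - np.asarray(vals_b)
    boot_means = np.array([
        rng.choice(diff, size=diff.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    lo, hi = np.quantile(boot_means, [alpha / 2, 1 - alpha / 2])
    return diff.mean(), (lo, hi)
```

The difference is deemed significant at the 5\% level when the resulting interval excludes zero.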
Based on the 95\% CIs for the difference in Stability between the compositional lasso and random forests methods (Table 1), we see that compositional lasso is better than random forests in terms of the Stability index, and not statistically inferior to random forests in terms of MSE despite its inferior raw value. This suggests that Stability has higher precision (i.e. lower variance). Conversely, MSE has higher variance, which results in wider confidence intervals and its failure to differentiate methods. \begin{table}[h] \centering \caption{Hypothesis testing using the bootstrap to compare compositional lasso (CL) with random forests (RF) based on Stability or MSE in two simulation scenarios (*indicates statistical significance).} \label{t:table1} \begin{tabularx}{\textwidth}{ |X|X|X| } \hline Example ($n = 100$ \& $p = 1000$) & Estimated mean difference (CL – RF) in Stability index with 95\% CI & Estimated mean difference (CL – RF) in MSE with 95\% CI \\ \hline Toeplitz 0.5 & 0.22 (0.19, 0.28)* & 0.23 (-0.62, 1.36) \\ \hline Block 0.5 & 0.23 (0.17, 0.29)* & 0.44 (-0.27, 1.57) \\ \hline \end{tabularx} \end{table} \section{Experimental microbiome data applications\label{data}} To compare the reliability of MSE and Stability in choosing feature selection methods in microbiome data applications, two experimental microbiome datasets were chosen to cover common sample types (human gut and environmental soil samples) and the scenarios of $p \approx n$ and $p \gg n$ (where $p$ is the number of features and $n$ is the number of samples). The human gut dataset represents a cross-sectional study of 98 healthy volunteers conducted to investigate the connections between long-term dietary patterns and gut microbiome composition \citep{wu2011linking}, and we are interested in identifying a subset of important features associated with BMI, which is a widely used gauge of human body fat and is associated with the risk of diseases.
The soil dataset contains 88 samples collected from a wide array of ecosystem types in North and South~America~\citep{lauber2009pyrosequencing}, and we are interested in discovering microbial features associated with the pH gradient, as pH was reported to be a strong driver of fluctuations in the soil microbial communities~\citep{morton2017balance}. Prior to our feature selection analysis, the same filtering procedures were applied to the microbiome count data from these two datasets: only the microbes with a taxonomy assignment at the genus level or lower were retained for interpretation, and microbes present in fewer than 1\% of the total samples were removed. Moreover, the count data were transformed into compositional data after replacing any zeros by the maximum rounding error~0.5~\citep{lin2014variable}. Comparisons of the feature selection methods in these two microbiome datasets are shown in Table 2. The results are consistent with the simulations, in that the best method chosen by MSE or Stability in each dataset can be drastically different. Based on MSE, random forests is the best in the BMI Gut dataset, while being the worst based on Stability. Similarly, in the pH Soil dataset, random forests is the second-best method according to MSE, yet the worst in terms of Stability. If we use Stability as the evaluation criterion, then elastic net is the best in the BMI Gut dataset and compositional lasso is the best in the pH Soil dataset, yet both methods would be the worst if MSE were used as the evaluation criterion. One important note is that the Stability values in these two experimental microbiome datasets are low: none of the feature selection methods exceeds a stability of 0.4, indicating the challenging nature of feature selection in real microbiome applications. However, this possibility of low Stability values was already reflected in our simulated ``extreme'' correlation scenarios.
Another important note, which might be counter-intuitive, is that the dataset with a high $p/n$ ratio (pH Soil) has higher stabilities than the dataset with a $p/n$ ratio close to 1, i.e. similar $p$ and $n$ values (BMI Gut). This might be explained by clearer microbial signals in environmental samples than in human gut samples, but it also highlights the impact of the dataset itself, whose characteristics cannot be easily summarized by the values of $p$ and $n$, on feature selection results. Correlation structures between features, as considered in our simulations, could play an important role, and there may be many other unmeasured factors involved as well. \begin{table}[h] \centering \caption{Method comparisons based on Stability and MSE in experimental microbiome datasets (methods ordered from best to worst MSE/Stability performance, with raw MSE/Stability values in parentheses).} \label{t:table2} \begin{tabularx}{\textwidth}{ |c|c|X|X| } \hline Dataset & $n \times p$ $(p/n)$ & MSE \newline (lower is better) & Stability \newline (higher is better) \\ \hline BMI Gut & 98 * 87 (0.9) & Random forests (4.99) \newline Compositional lasso (21.59) \newline Lasso (24.07) \newline Elastic Net (25.33) & Elastic Net (0.23) \newline Compositional lasso (0.22) \newline Lasso (0.14) \newline Random forests (0.02)\\ \hline pH Soil & 89 * 2183 (24.5) & Elastic Net (0.23) \newline Random forests (0.26) \newline Lasso (0.34) \newline Compositional lasso (0.46) & Compositional lasso (0.39) \newline Lasso (0.31) \newline Elastic Net (0.16) \newline Random forests (0.04)\\ \hline \end{tabularx} \end{table} Apart from the comparisons based on point estimates, we can further compare MSE and Stability with hypothesis testing using a nested bootstrap~\citep{wainer2018nested}.
The outer bootstrap generates 100 bootstrapped replicates of the experimental microbiome datasets, and the inner bootstrap generates 100 bootstrapped datasets for each bootstrapped replicate from the outer bootstrap. Feature selection is performed on each inner bootstrapped dataset with 10-fold cross-validation after an 80:20 split of training and test sets. The variance of Stability is calculated based on the Stability values across the outer bootstrap replicates, and the variance of MSE is calculated across both inner and outer bootstrap replicates, since MSE is available for each bootstrap replicate while Stability has to be estimated based on feature selection results across multiple bootstrap replicates. Using the datasets of BMI Gut and pH Soil, Table 3 confirms the simulation finding that a raw difference in MSE does not indicate a statistically significant difference, yet a difference in Stability does help to differentiate methods due to its higher precision. A comparison between the observed differences in Table 2 and the estimated mean differences from the bootstrap in Table 3 further confirms this discovery. Compared to the estimated mean differences between compositional lasso and random forests based on Stability (0.27 in the BMI Gut and 0.36 in the pH Soil), the observed differences (0.2 in the BMI Gut and 0.35 in the pH Soil) differ by 26\% in the BMI Gut and 3\% in the pH Soil. However, this discrepancy is much more drastic based on MSE. Compared to the estimated mean differences between compositional lasso and random forests based on MSE (11.8 in the BMI Gut and 0.08 in the pH Soil), the observed differences (16.6 in the BMI Gut and 0.2 in the pH Soil) differ by as much as 41\% and 160\% in each dataset, respectively. Hence, Stability is consistently shown to be more appropriate than MSE in experimental data applications, as in simulations.
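The nested bootstrap above can be written schematically as follows; `select_fn` stands in for one full feature-selection fit (the 80:20 split and 10-fold cross-validation are folded into it) and `stability_fn` for the stability estimate over a set of selections. Both are placeholders for this sketch, not the paper's exact pipeline:

```python
import numpy as np

def nested_bootstrap_variance(X, y, select_fn, stability_fn,
                              n_outer=100, n_inner=100, seed=0):
    """Schematic nested bootstrap for the variances of Stability and MSE.

    `select_fn(X, y)` fits one feature selection model and returns
    (selected_mask, test_mse); `stability_fn(masks)` turns the
    (n_inner, p) selection matrix of one outer replicate into a single
    Stability value. Stability variance is taken over outer replicates
    only, while MSE variance is taken over all inner x outer replicates,
    mirroring the text.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    stabilities, mses = [], []
    for _ in range(n_outer):
        outer = rng.integers(0, n, n)          # outer bootstrap resample
        Xo, yo = X[outer], y[outer]
        masks = []
        for _ in range(n_inner):
            inner = rng.integers(0, n, n)      # inner bootstrap resample
            mask, mse = select_fn(Xo[inner], yo[inner])
            masks.append(mask)
            mses.append(mse)
        stabilities.append(stability_fn(np.array(masks)))
    return np.var(stabilities), np.var(mses)
```

Confidence intervals for the mean differences in Table 3 would then come from the empirical distribution of these bootstrap statistics.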
\begin{table}[h] \centering \caption{Hypothesis testing using the bootstrap to compare compositional lasso (CL) with random forests (RF) based on Stability or MSE using two experimental microbiome datasets (* indicates statistical significance).} \label{t:table3} \begin{tabularx}{\textwidth}{ |X|X|X| } \hline Dataset & Estimated mean difference (CL -- RF) in Stability index with 95\% CI & Estimated mean difference (CL -- RF) in MSE with 95\% CI \\ \hline BMI Gut & 0.27 (0.17, 0.34)* & 11.8 (-2.1, 41.2) \\ \hline pH Soil & 0.36 (0.28, 0.44)* & 0.08 (-0.28, 0.95) \\ \hline \end{tabularx} \end{table} \section{Discussion} Reproducibility is imperative for any scientific discovery, but there is a growing alarm about irreproducible research results. According to a survey by Nature Publishing Group of 1,576 researchers in 2016, more than 70\% of researchers have tried and failed to reproduce another scientist's experiments, and more than half have failed to reproduce their own experiments \citep{baker20161}. This ``reproducibility crisis'' in science affects microbiome research as much as any other area, and microbiome researchers have long struggled to make their research reproducible \citep{schloss2018identifying}. Great efforts have been made towards setting protocols and standards for microbiome data collection and processing \citep{thompson2017communal}, but more could be achieved using statistical techniques for reproducible data analysis. Microbiome research findings rely on statistical analysis of high-dimensional data, and feature selection is an indispensable component for discovering biologically relevant microbes. In this article, we focus on discovering a reproducible criterion for evaluating feature selection methods rather than developing a better feature selection method.
We question the common practice of evaluating feature selection methods based on overall performance of model prediction~\citep{knights2011human}, such as Mean Squared Error (MSE), as we detect a stark contrast between prediction accuracy and reproducible feature selection. Instead, we propose to use a reproducibility criterion such as Nogueira's Stability measurement~\citep{nogueira2017stability} for identifying the optimal feature selection method. In both our simulations and experimental microbiome data applications, we have shown that Stability is a preferred evaluation criterion over MSE for feature selection, because of its closer reflection of the ground truth (false positive and false negative rates) in simulations, and its better capacity to differentiate methods due to its higher precision. Hence, if the goal is to identify the underlying true biological signal, we propose to use a reproducibility criterion like Stability instead of a prediction criterion like MSE to choose feature selection algorithms for microbiome data applications. MSE is better suited for problems where prediction accuracy alone is the focus. The strength of our work lies in the comparisons of widely used microbiome feature selection methods using extensive simulations, and experimental microbiome datasets covering various sample types and data characteristics. The comparisons are further confirmed with non-parametric hypothesis testing using the bootstrap. Although Nogueira et al. were able to derive the asymptotic normal distribution of Stability~\citep{nogueira2017stability}, the independence assumption in their two-sample test might not be realistic, because two feature selection methods are applied to the same dataset. Hence our non-parametric hypothesis testing is an extension of their two-sample test for Stability.
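For concreteness, the Stability index of Nogueira et al.\ for a collection of repeated feature selections can be sketched as follows (a minimal transcription of the standard estimator; `Z` is the binary selection matrix over repeated runs):

```python
import numpy as np

def nogueira_stability(Z):
    """Stability of feature selection (Nogueira et al.).

    Z: (M, p) binary matrix; Z[m, f] = 1 if feature f was selected in run m.
    Returns 1 minus the mean unbiased per-feature selection variance,
    normalized by the variance of a random selector with the same
    average subset size. Equals 1 for perfectly repeatable selections
    and can be negative for highly unstable ones.
    """
    Z = np.asarray(Z, dtype=float)
    M, p = Z.shape
    p_hat = Z.mean(axis=0)                      # selection frequency per feature
    s2 = M / (M - 1) * p_hat * (1 - p_hat)      # unbiased sample variance
    k_bar = Z.sum(axis=1).mean()                # average number of selected features
    denom = (k_bar / p) * (1 - k_bar / p)
    return 1.0 - s2.mean() / denom
```

Note that the denominator degenerates when every run selects all or no features, in which case the index is undefined.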
However, our current usage of the bootstrap, especially the nested bootstrap approach for experimental microbiome data applications, is computationally expensive; further theoretical development on hypothesis testing for reproducibility can be done to facilitate more efficient method comparisons based on Stability. Last but not least, although our paper is focused on microbiome data, we do expect the superiority of reproducibility criteria over prediction accuracy criteria in feature selection to apply to other types of datasets as well. We thus recommend that researchers use Stability as an evaluation criterion while performing feature selection in order to yield reproducible results. \section*{Acknowledgements} We gratefully acknowledge support from IBM Research through the AI Horizons Network, and the UC San Diego AI for Healthy Living program in partnership with the UC San Diego Center for Microbiome Innovation. This work was also supported in part by CRISP, one of six centers in JUMP, a Semiconductor Research Corporation (SRC) program sponsored by DARPA. LN was partially supported by NIDDK 1R01DK110541-01A1. \section*{Supporting Information} The code that implements the methodology, simulations and experimental microbiome data applications is available at the GitHub repository https://github.com/knightlab-analyses/stability-analyses. \bibliographystyle{biom} \section*{Supplementary Figures} \begin{suppfigure}[h] \centerline{\includegraphics[width=5in]{s1_mse_stab_fnr.png}} \caption{Comparison of the relationships of MSE vs. False Negative Rate and Stability vs. False Negative Rate in three correlation structures. Colored dots represent values from different feature selection methods: compositional Lasso (red), Elastic Net (green), Lasso (blue) and random forests (purple).
The size of the dots indicates the features-to-sample-size ratio $p/n$.} \label{f:figS1} \end{suppfigure} \begin{suppfigure} \centerline{\includegraphics[width=5in]{s2_stab_easy_toe.png}} \caption{Method comparisons based on Stability in easier Toeplitz correlation structures from 0.1 to 0.7. Colored bars represent Stability values corresponding to specific numbers of samples (x-axis) and numbers of features ($p$) for different feature selection methods: compositional Lasso (red), Elastic Net (green), Lasso (blue) and random forests (purple). Compositional Lasso has the highest Stability in all cases across all correlation strengths. Note that Stability equals zero when no features were selected (e.g., random forests selects nothing when the number of samples equals 50).} \label{f:figS2} \end{suppfigure} \begin{suppfigure} \centerline{\includegraphics[width=5in]{s3_stab_easy_block.png}} \caption{Method comparisons based on Stability in easier Block correlation structures from 0.9 to 0.3. Colored bars represent Stability values corresponding to specific numbers of samples (x-axis) and numbers of features ($p$) for different feature selection methods: compositional Lasso (red), Elastic Net (green), Lasso (blue) and random forests (purple). Compositional Lasso has the highest Stability in all cases across all correlation strengths. Note that Stability equals zero when no features were selected (e.g., random forests selects nothing when the number of samples equals 50).} \label{f:figS3} \end{suppfigure} \label{lastpage} \end{document}
\section{Introduction} In this paper we present a slight modification of the Fourier estimation method for the spot volatility (matrix) process of an It\^o semimartingale. The method was originally introduced by Paul Malliavin and the third author in \cite{MM02,MM09}. The main aim of the modification is to construct an estimator of the matrix which always stays non-negative definite. A motivation of the present study is to ease the implementation of the Fourier method when it is applied to ``dynamic principal component analysis'', an important application of spot volatility estimation (see \cite{LM12,LN12}). Due to the lack of symmetry of the estimated matrices, their eigenvalues are sometimes non-positive, or even worse, non-real. This is not the case with estimators based on finite differences (FD) of the integrated volatility, such as Ngo--Ogawa's method \cite{NO09}. However, as the Fourier method has many advantages over the FD ones, among which robustness against non-synchronous observations counts most, the modification presented in this paper would be important. The modification has a by-product: thanks to the symmetry imposed to obtain non-negativity, our estimator is factorized, which may reduce computational cost considerably. \ The present paper is organized as follows. We first introduce a generic form of Fourier-type estimators (Definition \ref{Generic}), and recall how it works (section \ref{Heurietic}). After remarking that the classical estimator is obtained by a particular choice of the ``fiber'' (Remark \ref{classical}), we introduce a class of such estimators (section \ref{PSD}), each of which is labeled by a positive semi-definite function. As a main result, we prove its positive semi-definiteness (Theorem \ref{PFE}).
In addition, we give a remark (Remark \ref{reduction}) that, under an action of a finite group, some of the newly introduced positive semi-definite estimators reduce to the classical one. In section \ref{Parametrize}, we give a factorized representation of the estimator (Definition \ref{factorize}), which is parameterized by a measure by way of Bochner's correspondence. The use of this expression may reduce the computational cost, as will be exemplified by the simple experiments presented in section \ref{experiments}. Section \ref{Measure} gives an important remark that, as a sequence of estimators, the parameterizing measures should form a delta-approximating kernel. Three examples of such kernels are given (Examples \ref{Cauchy}--\ref{doubleF}), two of which are used in the simple experiments presented in section \ref{experiments}. In the present paper, we will not study limit theorems (consistency or central limit theorems) in detail. More detailed studies in these respects will appear in another paper. \subsection*{Acknowledgment} This work was partially supported by JSPS KAKENHI Grant Numbers 25780213, 23330109, 24340022, 23654056 and 25285102. \section{The Fourier Method Revisited} \subsection{Generic Fourier Estimator} Let $ X = (X^1,\cdots, X^d) $ be a $ d $-dimensional continuous semimartingale. Suppose that its quadratic variation (matrix) process is absolutely continuous in $ t $ almost surely. In this paper, we are interested in a statistical estimation of the so-called {\em spot volatility} process; \begin{equation*} \frac{d[ X^j, X^{j'} ]_t}{dt} (\omega) =: V^{jj'}_t (\omega), \quad 0 \leq t \leq 1, \ 1\leq j , j' \leq d, \end{equation*} as a function in $ t $, especially when $ d \geq 2 $, for given observations of $ X^{j}$ on a partition $\Delta^j : 0 = t^j_0 < \cdots < t^j_{N_j} = 1 $, $ j= 1, \cdots, d$. Here and hereafter we normalize the time interval to $ [0,1] $ for notational simplicity.
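As a toy numerical illustration of the estimation target (a hypothetical constant-volatility example, not part of the paper's setup): when the volatility matrix $V$ is constant, the realized quadratic covariation of the increments over $[0,1]$ should approximate $V$.

```python
import numpy as np

def simulate_increments(V, n, seed=0):
    """Increments of a toy martingale dX_t = sigma dW_t with V = sigma sigma^T,
    observed on an equally spaced grid of n intervals over [0, 1]."""
    rng = np.random.default_rng(seed)
    sigma = np.linalg.cholesky(V)                       # V must be positive definite here
    dW = rng.standard_normal((n, V.shape[0])) * np.sqrt(1.0 / n)
    return dW @ sigma.T

V = np.array([[1.0, 0.5],
              [0.5, 2.0]])
dX = simulate_increments(V, n=20000)
realized = dX.T @ dX   # realized quadratic covariation over [0, 1], approx. V
```

This is the quantity whose time derivative the Fourier method localizes in $t$.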
We start with a generic form of a {\em Fourier estimator} with respect to this observation set, to have a unified view. \begin{defi}\label{Generic} Let $ \mathcal{K} $ be a finite subset of $ \mathbf{Z}$, $ \mathcal{S} =\{ \mathcal{S} (k) \subset_{\mathrm{finite}} \mathbf{Z}^2 : k \in{\mathcal K}, (s,s') \in \mathcal{S}(k) \Rightarrow s+s'=k \} $ be a ``fiber" on $ \mathcal{K} $, and $ c $ be a complex function on $ \mathcal{K} $. A Fourier estimator associated with $ (\mathcal{K}, \mathcal{S}, c) $ is a $ d \times d $ matrix defined entry-wisely by \begin{equation}\label{GF} \begin{split} & (V_{(\mathcal{K}, \mathcal{S}, c)})_{j,j'} (t)\\ &= \sum_{l=1}^{N_j} \sum_{l'=1}^{N_{j'}} \sum_{k \in \mathcal{K}} c(k) e^{2 \pi i k t} \sum_{(s, s') \in \mathcal{S} (k) } e^{-2 \pi i s t_l^j} e^{-2 \pi i s' t_{l'}^{j'}} \Delta X^j_l\Delta X^{j'}_{l'},\\ & \hspace{4cm} ( 1 \leq j,j' \leq d), \end{split} \end{equation} where \begin{equation*} \Delta X^j_l := X^j_{t^j_l} - X^{j}_{t^j_{l-1}}. \end{equation*} \end{defi} \begin{rem}\label{classical} If we take $ \mathcal{K} = \{ 0,\pm 1, \cdots, \pm L \} $ for some positive integer $ L $, $ \mathcal{S}(k) = \{ (s,s') | s+s'=k, s=0, \pm 1, \cdots, \pm M \} $ for some positive integer $ M $, and $ c (k) = ( 1 - \frac{|k|}{L+1} )/(2M+1) $, then the estimator $ V_{(\mathcal{K}, \mathcal{S}, c)} $ coincides with the one introduced in \cite{MM09}. In fact, with these parameters, we have \begin{equation}\label{FEMM09} \begin{split} & (V)_{j,j'} (t) = \sum_{k=-L}^{L} \left( 1 - \frac{|k|}{L+1} \right) e^{2\pi i k t}\\ & \cdot \frac{1}{2M+1} \sum_{s= -M}^M \sum_{l=1}^{N_j} \sum_{l'=1}^{N_{j'}} e^{-2 \pi i s t^j_l} e^{-2 \pi i (k-s) t^{j'}_{l'}} \Delta X^{j}_l \Delta X^{j'}_{l'}. 
\end{split} \end{equation} With the Dirichlet and the Fej\'er kernels, \begin{equation*} D_{M} (x) = \sum_{|s|\leq M} e^{2 \pi i s x} = \frac{\sin (2M+1) \pi x}{\sin \pi x}, \end{equation*} {and} \begin{equation*} \begin{split} K_{L} (x) &:= \frac{1}{L} \sum_{m=0}^{L-1} D_{m} (x) =\sum_{|k| \leq L-1} \left( 1 - \frac{|k|}{L} \right) e^{2\pi i k x}\\ &= \frac{1}{L} \left( \frac{\sin(L\pi x) }{\sin (\pi x)} \right)^2, \end{split} \end{equation*} we can rewrite (\ref{FEMM09}) as \begin{equation}\label{FEMM14} \begin{split} & (V)_{j,j'} (t)\\ &=\frac{1}{(2M+1)} \sum_{l,l'} K_{L+1} (t-t^j_l) D_{M} (t^j_l-t^{j'}_{l'}) \Delta X^j_l\Delta X^{j'}_{l'}\\ &=\frac{1}{(2M+1)(L+1)} \sum_{l,l'} \left( \frac{\sin\{(L+1) \pi (t-t^j_l)\} }{\sin (\pi (t-t^j_l))} \right)^2 \frac{\sin \{ (2M+1) \pi (t^j_l -t^{j'}_{l'})\}}{\sin \pi (t^j_l -t^{j'}_{l'})} \\ & \hspace{4cm} \cdot \Delta X^j_l\Delta X^{j'}_{l'}. \end{split} \end{equation} \end{rem} \subsection{A Heuristic Derivation}\label{Heurietic} Here we give a heuristic explanation of the idea behind the Fourier method, which was originally proposed in \cite{MM02, MM09} and is now extended to (\ref{GF}) to include a class of positive semi-definite estimators that will be introduced in the next section. Looking at (\ref{GF}) or (\ref{FEMM09}) carefully, we notice that, naively, we may suppose \begin{equation}\label{heuristic} \begin{split} &(V_{(\mathcal{K}, \mathcal{S}, c)})_{j,j'} (t) \\ & \sim \sum_{k \in \mathcal{K}} c (k) e^{2 \pi i k t} \sum_{ (s,s') \in \mathcal{S} (k)} \left(\int_0^1 e^{ - 2 \pi i s u} d X^j_u \right) \left(\int_0^1 e^{ - 2 \pi i s' u} d X^{j'}_u \right) \\ &= \sum_{k \in \mathcal{K}} c (k) |\mathcal{S}(k)| e^{2 \pi i k t} \int_0^1 e^{ - 2 \pi i k u} d [X^j, X^{j'}]_u \\ &+ \sum_{k \in \mathcal{K}} c (k) e^{2 \pi i k t} \int_0^1 \int_0^u \sum_{ (s,s') \in \mathcal{S} (k)} ( e^{ - 2 \pi i s u} e^{- 2 \pi i s' v} dX^j_u d X^{j'}_v \\ & \hspace{6cm} + e^{ - 2 \pi i s' u} e^{- 2 \pi i s v} dX^{j'}_u d X^{j}_v ) \\ &=: I + II.
\end{split} \end{equation} The term $ I $ can be understood to be a weighted partial sum of the Fourier series of $ V^{jj'} $, which may converge uniformly if the weight $ c (k) | \mathcal{S} (k) | $ is properly chosen (in the case of (\ref{FEMM09}), it is the Fej\'er kernel). The term $ II $ vanishes asymptotically, roughly because $ \sum_{ (s,s') \in \mathcal{S} (k)} e^{ - 2 \pi i s u} e^{- 2 \pi i s' v} $ behaves like a Dirichlet kernel $ D (u-v) $, which converges weakly to the delta measure. \section{Positive Semi-Definite Fourier Estimators}\label{PSD} In financial applications, we are often interested in computing the rank of the (spot) volatility matrix. Since it is positive semi-definite, the rank is estimated by hypothesis testing on the number of positive eigenvalues. The estimator (\ref{FEMM09}), equivalently given as (\ref{FEMM14}), however, sometimes fails to be symmetric\footnote{This is seen from the following simple observation: $ \sum_{l=1}^2\sum_{l'=1}^2 a_{l,l'} (x^{1}_l x^2_{l'} -x^{2}_l x^1_{l'}) =(a_{1,2} - a_{2,1}) (x_1^1x_2^2-x_1^2 x_2^1) $. } since $ K_{L+1} (t-t_l) D_{M} (t_l - t_{l'}) $ is not symmetric in $ l,l' $, and thus its eigenvalues are not always real numbers. This causes some trouble in estimating the rank of the volatility matrix. Here we propose a class of Fourier estimators that will be proven to be symmetric and positive semi-definite. \begin{rem} We just note that the positive semi-definiteness of the Fourier estimator of the integrated volatility matrix (the $ 0 $-th Fourier coefficient) defined in \cite{MM09} is proved in \cite{MS11}.
\end{rem} \subsection{Positive Fourier Estimators} The main result of the present paper is the following \begin{thm}\label{PFE} Suppose that $ \mathcal{K} = \{ 0, \pm 1, \cdots, \pm 2M \} $ for some positive integer $ M $, $ c $ is a positive semi-definite function on $ \mathcal{K} $, and \begin{equation*} \begin{split} & \mathcal{S} (k) = \\ & \begin{cases} \{ (-M+k+v, M-v) : v=0,\cdots,2M-k \} & 0 \leq k \leq 2M \\ \{ (M+k-v, -M+v) : v=0,\cdots,2M+k \} & -2M \leq k < 0. \end{cases} \end{split} \end{equation*} Then, $ V_{(\mathcal{K}, \mathcal{S}, c)} $ defined in (\ref{GF}) is positive semi-definite. \end{thm} \begin{proof} Let $ a_j $, $ j=1,2,3 $ be arbitrary functions on $ \mathbf{Z} $. From the definitions of $ (\mathcal{K},\mathcal{S}) $, we notice that \begin{equation*} \begin{split} & \sum_{ k \in \mathcal{K} } \sum_{ (s,s') \in \mathcal{S}(k) } a_1 (k) a_2 (s) a_3 (s') \\ &= \sum_{ k =0 }^{2M} \sum_{ v=0 }^{2M-k} a_1 (k) a_2 (-M+k+v) a_3 (M-v) \\ &+ \sum_{ k = -2M }^{-1} \sum_{ v=0 }^{2M+k} a_1 (k) a_2 (M+k-v) a_3 (-M+v) =: A+B. \end{split} \end{equation*} For the first term of the right-hand side, \begin{equation*} \begin{split} A &= \sum_{ k =0 }^{2M} \sum_{ u= k-M }^{M} a_1 (k) a_2 (k-u) a_3 (u) \\ &= \sum_{ u = -M }^{M} \sum_{k=0}^{u+M} a_1 (k) a_2 (k-u) a_3 (u) \\ &= \sum_{ u = -M }^{M} \sum_{u'=-u}^{M} a_1 (u+u') a_2 (u') a_3 (u), \end{split} \end{equation*} where we set $ u = M-v $ in the first line, changed the order of the summations in the second line, and put $ u'= k-u $. Similarly, we have \begin{equation*} \begin{split} B &= \sum_{ k = -2M }^{-1} \sum_{ u= -M }^{M+k} a_1 (k) a_2 (k-u) a_3 (u) \\ &= \sum_{ u = -M }^{M}\sum_{k=u-M}^{-1} a_1 (k) a_2 (k-u) a_3 (u) \\ &= \sum_{ u = -M }^{M} \sum_{u'=-M}^{-u-1} a_1 (u+u') a_2 (u') a_3 (u).
\end{split} \end{equation*} Thus we see that \begin{equation}\label{lm001} \sum_{ k \in \mathcal{K} } \sum_{ (s,s') \in \mathcal{S}(k) } a_1 (k) a_2 (s) a_3 (s') =\sum_{ u = -M }^{M} \sum_{u'=-M}^{M} a_1 (u+u') a_2 (u') a_3 (u). \end{equation} Applying (\ref{lm001}) when $ a_1 (k) = c (k) e^{2 \pi i k t} $, $ a_2 (s) = e^{-2 \pi i s t^{j'}_{l'}} $ and $ a_3 (s') = e^{-2 \pi i s' t^j_l} $, we obtain \begin{equation}\label{PF} \begin{split} & (V_{(\mathcal{K}, \mathcal{S}, c)})_{j,j'} (t)\\ &= \sum_{l=1}^{N_j} \sum_{l'=1}^{N_{j'}} \sum_{u=-M}^M \sum_{u'=-M}^M c(u-u') e^{2 \pi i u (t-t_l^j)} e^{-2 \pi i u' (t-t_{l'}^{j'})} \Delta X^j_l \Delta X^{j'}_{l'} \\ & \hspace{4cm} ( 1 \leq j,j' \leq d). \end{split} \end{equation} Here we used an obvious change of variables $ u' \mapsto -u' $. Now the positive semi-definiteness follows easily. In fact, for $ x \in \mathbb{C}^d $, we have \begin{equation*} \begin{split} & \sum_{j,j'} (V_{(\mathcal{K}, \mathcal{S}, c)})_{j,j'} (t)x_j \overline{x_{j'}} \\ &= \sum_{u=-M}^M \sum_{u'=-M}^M c(u-u') \\ & \cdot \left( \sum_{j=1}^d x_j \sum_{l=1}^{N_j} e^{2 \pi i u (t-t_l^j)} \Delta X^j_l\right) \left( \sum_{j'=1}^d \overline{x_{j'}} \sum_{l'=1}^{N_{j'}} e^{-2 \pi i u' (t-t_{l'}^{j'})} \Delta X^{j'}_{l'} \right) \\ &= \sum_{u=-M}^M \sum_{u'=-M}^M c(u-u') f (u) \overline{f(u')} \geq 0, \end{split} \end{equation*} where we have put \begin{equation*} f (u) := \sum_{j=1}^d x_j \sum_{l=1}^{N_j} e^{2 \pi i u(t- t_l^j)} \Delta X^{j}_{l}. \end{equation*} \end{proof} \begin{rem}\label{reduction} If we set in (\ref{PF}) $ N := N_j=N_{j'} = 2M+1 $, $ \Delta t^j_l \equiv 1/N $, and $ c (k) = 1-\min(|k|, |N-k|)/M $, the estimator (\ref{PF}) coincides with (\ref{FEMM09}) with $ L= M $.
In fact, writing $ t_l = l/N $ for $ l=1, \cdots, N $, we notice that, for $ t = l_0/N $, \begin{equation*} \begin{split} & (V_{(\mathcal{K}, \mathcal{S}, c)})_{j,j'} (t)\\ &= \frac{1}{N} \sum_{l,l'} \Delta X^j_l \Delta X^{j'}_{l'} \sum_{k=-M}^M \sum_{s=-M}^M c(k-s) e^{\frac{2 \pi i k(l_0-l)}{N}} e^{- \frac{2 \pi i s(l_0-l')}{N} }, \\ \end{split} \end{equation*} and by the change of variables $ (k,s) \mapsto (k-s,-s) $, which is an automorphism over $ \mathbb{Z}/N \mathbb{Z} $, we have \begin{equation*} \begin{split} &= \frac{1}{N} \sum_{l,l'} \Delta X^j_l \Delta X^{j'}_{l'} \sum_{k=-M}^M \sum_{s=-M}^M c(k) e^{\frac{2 \pi i k(l_0-l)}{N}} e^{\frac{2 \pi i s(l-l')}{N} }. \end{split} \end{equation*} \end{rem} \subsection{Parameterization by measures}\label{Parametrize} By Bochner's theorem, we know that for each positive semi-definite function $ c $, there exists a bounded measure $ \mu $ on $ \mathbf{R} $ such that \begin{equation}\label{Boch} c (x) = \int_\mathbf{R} e^{2 \pi i y x} \,\mu(dy). \end{equation} Therefore, we may rewrite the positive Fourier estimator (\ref{PF}) using the measure $ \mu $ instead of the positive semi-definite function $ c $. The expression in terms of the measure $ \mu $ will be useful when implementing the Fourier method in estimating a spot volatility matrix. \begin{defi}\label{factorize} Let $ \mu $ be a bounded measure and $ M $ be a positive integer. We associate with $ (\mu, M) $ an estimator of the spot volatility matrix as: \begin{equation}\label{PF2} \begin{split} & (V_{(\mu,M)})_{j,j'} (t) = \int_\mathbf{R} \left( \sum_{l=1}^{N_j} D_M ( t-t_l^j+y) \Delta X^j_l \right)\\ & \hspace{4cm} \cdot \left( \sum_{l'=1}^{N_{j'}} D_M (t-t_{l'}^{j'} + y) \Delta X^{j'}_{l'}\right) \mu(dy),\\ & \hspace{6cm} ( 1 \leq j,j' \leq d). 
\end{split} \end{equation} \end{defi} \begin{rem} Under the assumptions in Theorem \ref{PFE} with the relation (\ref{Boch}), we have that $ V_{(\mathcal{K}, \mathcal{S}, c)} (t) = V_{(\mu, M)} (t) $ for all $ t \in [0,1] $. In fact, we have \begin{equation*}\label{PF3} \begin{split} & (V_{(\mathcal{K}, \mathcal{S}, c)})_{j,j'} (t) \\ &= \sum_{\substack{1 \leq l \leq N_j \\ 1 \leq l' \leq N_{j'}}} \int_\mathbf{R} \sum_{ |k| \leq M } \sum_{|s| \leq M} e^{2 \pi i (t-t_l^{j}+y)k} e^{-2 \pi i(t-t_{l'}^{j'}+y) s} \mu(dy) \Delta X^{j}_l \Delta X^{j'}_{l'}\\ &= \sum_{\substack{1 \leq l \leq N_{j} \\ 1 \leq l' \leq N_{j'}}} \int_\mathbf{R} D_M (t-t^j_l+y) D_M (t-t^{j'}_{l'}+y) \mu(dy) \Delta X^j_l \Delta X^{j'}_{l'}\\ &=(V_{(\mu,M)})_{j,j'} (t). \end{split} \end{equation*} Note that $ V_{(\mu,M)} $ is easily seen to be real symmetric, and thus so is $ V_{(\mathcal{K}, \mathcal{S}, c)} $. Moreover, it is easy to see directly that $ V_{(\mu,M)} $ is positive semi-definite. In fact, for arbitrary $ x = (x_1, \cdots, x_d) \in \mathbf{R}^d $, \begin{equation*} \begin{split} & \sum_{j,j'} (V_{(\mu,M)})_{j,j'} (t) x_j x_{j'} \\ &= \sum_{j,j'}\int_\mathbf{R} \sum_{\substack{1 \leq l \leq N_j \\ 1 \leq l' \leq N_{j'}}} D_M (t-t^j_l+y) D_M (t-t^{j'}_{l'}+y) \Delta X^j_l \Delta X^{j'}_{l'} x_{j} x_{j'} \, \mu(dy)\\ &= \int_\mathbf{R} \sum_{j=1}^d \sum_{1 \leq l \leq N_j} D_M (t-t^j_l+y)\Delta X^j_l x_{j}\\ & \hspace{3cm} \cdot \sum_{j'=1}^d \sum_{1 \leq l' \leq N_{j'}} D_M (t-t^{j'}_{l'}+y) \Delta X^{j'}_{l'} x_{j'} \, \mu(dy) \\ &= \int_\mathbf{R} \left(\sum_{j=1}^d \sum_{1 \leq l \leq N_j} D_M (t-t^j_l+y) \Delta X^j_l x_j \right)^2 \mu(dy) \geq 0.
\end{split} \end{equation*} \end{rem} \subsection{Remarks on the choice of the measure}\label{Measure} From the observation made in (\ref{heuristic}), we should choose a sequence of positive semi-definite functions $ c_N $, where $ N := \max_j N_j $ for simplicity, in such a way that \begin{equation*} c_N(k) \sim \frac{1}{|\mathcal{S}_N (k)|} C_N (k), \end{equation*} where the kernel \begin{equation}\label{kernel0} \sum_{k=-2M_N}^{2M_N} C_N (k)e^{2 \pi i k s} \end{equation} behaves like, or better than, the Fej\'er kernel. The first example is the Fej\'er sum case, where \begin{equation*} C_N (k) = 1- \frac{|k|}{2M_N+1}, \end{equation*} or equivalently \begin{equation*} c_N (k) = \frac{1}{2M_N +1}, \end{equation*} and therefore \begin{equation*} \mu_N = \frac{1}{2M_N +1} \delta_0. \end{equation*} In this case, the convergence of $II$ in (\ref{heuristic}) may not be good, which might be easier to see from the expression (\ref{PF2}). Note that the estimator is completely different from the original one (\ref{FEMM09}), since $ |\mathcal{S} (k)| = 2M -|k|+1 $ in the former while it is always $ 2M+1 $, independent of $ k $, in the latter. The factor $ |\mathcal{S} (k)| $ contributes less to the consistency in the newly introduced positive semi-definite class of estimators. Looking at the above primitive case, however, we notice that a proper choice for the measures would be implied by \begin{equation*} (2 M_N +1)^{-1} \times \text{(delta approximating kernel)}. \end{equation*} Here we list possible choices. \begin{ex}\label{Cauchy} Let \begin{equation}\label{Poisson} C_N (k) = \left(1- \frac{|k|}{2M_N+1}\right) e^{-\gamma_N |k|}, \end{equation} where $ \gamma_N \to 0 $ as $ N \to \infty $. In this case, \begin{equation}\label{Cauchy2} \mu_N (dy) = \frac{1}{2M_N +1} \frac{1}{\pi} \frac{\gamma_N}{y^2 + \gamma_N^2} dy, \end{equation} a Cauchy kernel.
\end{ex} \begin{ex}\label{Gaussian} Let \begin{equation}\label{Gauss} C_N (k) = \left(1- \frac{|k|}{2M_N+1}\right) e^{-\frac{2 \pi^2 k^2}{L_N}}, \end{equation} where $ L_N \to \infty $ as $ N \to \infty $. In this case, \begin{equation}\label{Gauss2} \mu_N (dy) = \frac{1}{2 M_N +1} \sqrt{\frac{L_N}{2\pi}} e^{-L_N y^2} dy, \end{equation} a Gaussian kernel. \end{ex} \begin{ex}\label{doubleF} We let \begin{equation}\label{F1} C_N (k) = \left(1- \frac{|k|}{2M_N+1}\right)^2. \end{equation} In this case, the corresponding measure is the Fej\'er kernel: \begin{equation}\label{F2} \mu_N (\{y\}) = \frac{1}{2M_N+1} \left( \frac{\sin (2M_N+1) \pi y} {\sin \pi y} \right)^2 = K_{2M_N+1} (y), \end{equation} $ y = \frac{k}{2M_N+1}, k= 0,1,\cdots, 2M_N $ if $ 2M_N +1 $ is a prime number. This can be seen by the following relation: \begin{equation*} 1 - \frac{|k|}{L} = \sum_{t= 0}^{L-1} K_{L} (t) e^{-2 \pi i k t}, \end{equation*} which is valid when $ L $ is a prime number and is implied by \begin{equation*} K_{L} (t) = \sum_{k=-(L-1)}^{L-1} \left(1 - \frac{|k|}{L} \right) e^{2 \pi i k t}. \end{equation*} It is notable that in this case we need not discretize the integral with respect to $ \mu_N $ since it is already discrete. \end{ex} In the use of a delta kernel, one needs to properly choose the approximating parameters of the kernel as well as $ M_N $; the delta approximating parameters are $ \gamma_N $ in Example \ref{Cauchy} and $ L_N $ in Example \ref{Gaussian}. (The Fej\'er case of Example \ref{doubleF} is an exception.) Even with a consistency result, which only describes asymptotic behavior, one still needs to optimize the choice using some other criteria. In the next section, we give some simulation results to give a clearer view of this issue.
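Combining Definition \ref{factorize} with, e.g., the Gaussian kernel of Example \ref{Gaussian}, a minimal numerical sketch of the factorized estimator reads as follows; the discretization of the integral over $[-1/2,1/2]$ mirrors the experiments of the next section, and all parameter values and data here are purely illustrative:

```python
import numpy as np

def dirichlet(x, M):
    """Dirichlet kernel D_M(x) = sum_{|s| <= M} exp(2 pi i s x)."""
    s = np.arange(-M, M + 1)
    return np.exp(2j * np.pi * np.outer(np.atleast_1d(x), s)).sum(axis=1)

def spot_volatility(t, times, dX, M, mu_points, mu_weights):
    """Factorized positive semi-definite estimator V_{(mu, M)}(t).

    times: list of d arrays of observation times t_l^j in (0, 1];
    dX:    list of d arrays of increments Delta X_l^j;
    mu_points, mu_weights: a discretization of the bounded measure mu.
    """
    # f[j, m] = sum_l D_M(t - t_l^j + y_m) * dX_l^j  (the factorized inner sums)
    f = np.array([[np.dot(dirichlet(t - tj + y, M), dXj) for y in mu_points]
                  for tj, dXj in zip(times, dX)])
    # V = sum_m w_m f(:, m) f(:, m)^*  is Hermitian PSD whenever all w_m >= 0
    V = (f * mu_weights) @ f.conj().T
    return V.real

# Illustrative data: two non-synchronous observation grids
rng = np.random.default_rng(1)
times = [np.sort(rng.uniform(0, 1, 40)), np.sort(rng.uniform(0, 1, 50))]
dX = [0.1 * rng.standard_normal(40), 0.1 * rng.standard_normal(50)]

# Gaussian-kernel measure, discretized on [-1/2, 1/2]
M, L = 6, 13.0
y = np.linspace(-0.5, 0.5, 2 * M + 1)
w = np.sqrt(L / (2 * np.pi)) * np.exp(-L * y**2) * (y[1] - y[0]) / (2 * M + 1)
V = spot_volatility(0.5, times, dX, M, y, w)
```

Since the estimator is a non-negative mixture of rank-one matrices $f(\cdot,m)f(\cdot,m)^*$, its positive semi-definiteness is immediate from this factorized form, as in the remark above.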
\section{Experimental Results}\label{experiments} In this section, we present some results of simple experiments to exemplify how our method is implemented: \begin{itemize} \item We applied our estimation method to daily data from 31/03/2008 to 26/09/2008 of zero rates implied by Japanese government bond prices, with the twelve semiannual maturities 07/12 and 07/06 of each year from 07/12/2008 to 07/06/2014. \item Therefore, we set $ N = 150 $ ($ = N_j $ for all $ j $; the observation dates are equally spaced) and $ d= 12 $. \item We set $ M = M_N = 15 $ for $ M $ in (\ref{PF2}) and $ M_N $ in (\ref{kernel0}). \item The integral with respect to $ \mu $ is also discretized; we only use $ [-1/2, 1/2] $ instead of the whole real line, which is discretized to $ \{ -1/2 + k/(2M_N+1); k= 0, 1, \cdots, 2 M_N \} $. \item We tested the Cauchy kernel estimator with (\ref{Poisson}) in Example \ref{Cauchy} in Experiments 1 and 2 with different $ \gamma_N $, and the Gaussian kernel ones of Example \ref{Gaussian} in Experiments 3 and 4 with different $ L_N $. \item We used Octave ver. 3.2.4, and a Vaio/SONY, Windows 7 64bit OS laptop PC, with processor Intel(R) Core(TM) i3-2310M CPU @2.10GHz, and RAM 4.00 GB. \end{itemize} All the figures show the results of ``dynamic principal component analysis'', where the graphs, from the top, show the time evolution of the fraction of total variance carried by the largest, the two largest, and the three largest eigenvalues, respectively. Each experiment took about 3 minutes, which is reasonably fast. We see the similarities between Figure \ref{w1} and Figure \ref{w2}, and between Figure \ref{w4} and Figure \ref{w5}. In these experiments, admittedly, the accuracy cannot be fully assessed, but we may say that the order of the delta kernel is important for accuracy.
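The quantities plotted in the figures, i.e., the fractions of total variance carried by the leading eigenvalues of the estimated spot volatility matrix, can be computed as follows (a simple sketch that assumes $V$ is already symmetric positive semi-definite):

```python
import numpy as np

def cumulative_eigenvalue_rates(V, k=3):
    """Fractions of total variance explained by the top-1, ..., top-k
    eigenvalues of a symmetric positive semi-definite matrix V."""
    eig = np.sort(np.linalg.eigvalsh(V))[::-1]   # eigenvalues, largest first
    return np.cumsum(eig[:k]) / eig.sum()
```

Evaluating this at each $t$ on the estimated $V_{(\mu,M)}(t)$ yields the three curves of the dynamic principal component analysis.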
\begin{figure} \caption{Experiment 1; $ \gamma_N = (2M_N+1)^{-1/2} \approx 0.1796 $} \label{w1} \includegraphics[width=12cm]{Pattern1.eps} \end{figure} \begin{figure} \caption{Experiment 2; $ \gamma_N = (2M_N+1)^{-1/4} \approx 0.4238 $} \label{w2} \includegraphics[width=12cm]{Pattern2.eps} \end{figure} \begin{figure} \caption{Experiment 3; $ L_N = 2M_N+1 = 31 $} \label{w4} \includegraphics[width=12cm]{Pattern3.eps} \end{figure} \begin{figure} \caption{Experiment 4; $ L_N = (2M_N+1)^{1/4} \approx 2.36 $} \label{w5} \includegraphics[width=12cm]{Pattern5.eps} \end{figure}
\section{Introduction} The introduction of transformers \cite{attnisallyouneed} from NLP to vision with Vision Transformers (ViTs) by \citet{vit} has rapidly advanced the field of computer vision. However, unlike in NLP, vision has since been dominated by domain-specific transformer hybrids like Swin \cite{swin,cswin} using vision-specific attention, MViT \cite{mvit,mvit2} using vision-specific pooling, or LeViT \cite{levit} using vision-specific conv modules. The reason for this trend is simple: efficiency. Adding vision-specific inductive biases enables transformer hybrids to perform better with less compute. Yet, despite being overtaken in cost to performance, vanilla ViTs still have many desirable qualities: they consist of simple matrix multiplications, making them faster than their raw flop count would suggest; they support powerful self-supervised pre-training techniques such as MAE \cite{mae} that can put up state-of-the-art results while being fast to train; given their lack of assumptions about the data, they can be applied with little or no changes across many modalities \cite{mae-video,mae-audio}; and they scale well with massive amounts of data \cite{scalingvits,swag}, recently obtaining up to 90.94\% top-1 on ImageNet \cite{modelsoup}. However, running these massive models can be troublesome, and reproducing these results with a faster architecture would be difficult. A promising subfield of ViTs has recently emerged where, due to the input-agnostic nature of transformers, tokens can be pruned at runtime to enable a faster model \cite{dynamicvit,avit,adavit,evit,spvit}. Yet, token pruning has several disadvantages: the information loss from pruning limits how many tokens you can reasonably reduce; current methods require re-training the model to be effective (some with extra parameters); most cannot be applied to speed up training; and several prune different numbers of tokens depending on the input content, making batched inference infeasible.
In this work, we present Token Merging (ToMe) to \textit{combine} tokens, rather than prune them. Because of our custom matching algorithm, our method is as fast as pruning while being more accurate. Moreover, our method works \textit{with or without training}, which unlocks its use on huge models with minimal accuracy drop. Using ToMe during training, we observe actual increases in training speed, in some cases cutting the total training time \textit{in half}. And we apply ToMe, without any modifications, to images, video, and audio and find it to be competitive with the SotA in all cases. Our contributions are as follows: we introduce a technique to increase the throughput and real-world training speed of ViT models, both with and without training (Sec.~\ref{sec:approach}) and thoroughly ablate our design choices (Sec.~\ref{subsec:ablations}); we perform extensive experiments on images with several different ViT models (Sec.~\ref{subsec:model_sweep}) and compare to the state of the art in architecture design and token pruning methods (Sec.~\ref{subsec:comparison}); we then repeat these experiments for both video (Sec.~\ref{sec:video_experiments}) and audio (Sec.~\ref{sec:audio_experiments}) and find that ToMe works well across modalities; and finally, we visualize token merging results and find that ToMe merges object parts on images (Fig.~\ref{fig:image_vis}) and merges objects over their entire range of motion on video (Fig.~\ref{fig:video_vis}). We hope ToMe can enable the creation of more powerful, faster ViT models. \section{Related Work} \vspace{-.15cm} \paragraph{Efficient Transformers.} Several works have attempted to create more efficient transformers in both NLP and Vision.
Some focus on faster attention \cite{performers,linearattn_efficient,flashattn,linformer,hydraattn}, some attempt to prune heads or features \cite{adavit,analyzingmultiheadattn,sixteenheadsbetterthanone}, and some attempt to infuse domain-specific modules \cite{mobilevit,levit,swin,swinv2,cswin}. In this paper, we focus on speeding up existing ViT models by merging tokens to match the speed-accuracy trade-off of more complicated domain-specific models, sometimes \textit{without training}. \paragraph{Token Reduction.} Since transformers can operate with any number of tokens, several recent works have attempted to prune tokens from transformers in both NLP \cite{powerbert,lat,learnedtokenpruning,colbertpruningstudy} and Vision \cite{adavit,avit,spvit,cpvit,dynamicvit,ats,unifiedpruning}. However, these methods require training, while our method can be used \textit{without training}. Moreover, most pruning works are \textit{dynamic}, i.e., the number of tokens varies between images or sentences. While this benefits accuracy, it limits practicality, as samples with different numbers of tokens can no longer be batched. To solve this, most pruning papers apply a mask during training rather than removing tokens, which negates the speed-up from pruning. Our method, on the other hand, can be applied during both inference and training, achieving real-world speed-ups in either case. \paragraph{Combining Tokens.} While plenty of works prune tokens, very few combine them. \citet{spvit} and \citet{evit} combine what they prune into a single token. GroupViT \cite{groupvit}, while not focused on efficiency, groups tokens using cross-attention for semantic segmentation. TokenLearner \cite{tokenlearner} uses an MLP to reduce the number of tokens.
Token Pooling \cite{tokenpooling} is the most similar to token merging but uses a slow kmeans-based approach\footnote{Their throughput is only 1.14-1.25$\times$ the baseline because their method can't be parallelized.} that doesn't work on an off-the-shelf model\footnote{In their appendix, they show drops of 10-40\% accuracy when combining tokens without training.}. Until now, no approach has been successful in offering a reasonable speed-accuracy trade-off when combining tokens without training. \section{Token Merging} \label{sec:approach} Our goal is to insert a token merging module into an existing ViT \cite{vit}. By merging \textit{redundant} tokens, we hope to increase both training and inference throughput, without necessarily having to train with our method to get its benefits. \paragraph{Strategy.} \input{figures/0_concept_figure} In each block of a transformer, we merge tokens to \textit{reduce} their number by $r$ per layer. Note that $r$ is a quantity of tokens, not a ratio. Over the $L$ blocks in the network, we gradually merge $rL$ tokens. Varying $r$ gives a speed-accuracy trade-off, as fewer tokens mean lower accuracy but higher throughput. Importantly, we merge $rL$ tokens regardless of the image's content. Some pruning methods \textit{dynamically} vary the number of tokens (e.g., \citet{spvit}). This increases accuracy but is generally impractical, as it prevents batched inference or training without padding tokens. As shown in Fig.~\ref{fig:concept}, we apply our token merging step \textit{between} the attention and MLP branches of each transformer block. This is in contrast to prior works, which tend to place their reduction method at the beginning of the block instead. Our placement allows information to be propagated from tokens that would be merged and enables us to use features within attention to decide what to merge, both of which increase accuracy (see Tab.~\ref{tab:ablation_feature_choice}).
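The placement just described can be sketched with a toy block. This is purely illustrative: the attention and MLP below are identity stand-ins rather than real layers, and \texttt{merge} is a simple pairwise average over the last tokens, not our bipartite matching.

```python
import numpy as np

def attn(x):  # stand-in for the self-attention branch
    return x

def mlp(x):   # stand-in for the MLP branch
    return x

def merge(x, r):
    # toy merge: average the last 2r tokens pairwise into r tokens
    head, tail = x[:-2 * r], x[-2 * r:]
    merged = 0.5 * (tail[0::2] + tail[1::2])
    return np.concatenate([head, merged], axis=0)

def block(x, r):
    x = x + attn(x)  # attention branch
    x = merge(x, r)  # ToMe: reduce the token count by r
    x = x + mlp(x)   # MLP branch
    return x

x = np.ones((197, 768))  # e.g., 196 patch tokens + 1 class token
y = block(x, 8)          # 197 -> 189 tokens
```

The key point is only the position of the merge step: after attention (so merged tokens can first propagate their information) and before the MLP.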
\paragraph{Token Similarity.} Before merging similar tokens, we must first define what ``similar'' means. While it may be tempting to call two tokens similar if the distance between their features is small (as in \citet{tokenpooling}), this is not necessarily optimal. The intermediate feature space in modern transformers is \textit{overparameterized}. For instance, ViT-B/16 has enough features to completely encode the RGB pixel values of each token ($16 \times 16 \times 3 = 768$). This means that the intermediate features have the potential to contain insignificant noise that would confound our similarity calculations. Luckily, transformers natively solve this problem with QKV self-attention \cite{attnisallyouneed}. Specifically, the keys (K) already summarize the information contained in each token for use in dot product similarity. Thus, we use a dot product similarity metric (e.g., cosine similarity) between the keys of each token to determine which contain similar information (see Tab.~\ref{tab:ablation_feature_choice}, \ref{tab:ablation_distance_function}). \paragraph{Bipartite Soft Matching.} With token similarity defined, we need a \textit{fast} way to determine which tokens to \textit{match} in order to reduce the total number by $r$. There are several potential solutions to this problem, such as kmeans clustering \cite{kmeans} or graph cuts \cite{graphcuts}. But we perform this matching $L$ times within the network on potentially thousands of tokens, so its runtime has to be \textit{absolutely negligible}. This is very much not the case for most iterative clustering algorithms. Thus, we propose a more efficient solution. Our design goals are as follows: 1.) we want to avoid anything iterative that cannot be parallelized and 2.) we want the changes merging makes to be \textit{gradual}.
The latter is why we focus on \textit{matching} and not \textit{clustering}, as clustering places no bounds on how many tokens can be merged into one group (which may adversely affect the network), whereas matching leaves most of the tokens unmerged. Our algorithm is as follows (visualized in Fig.~\ref{fig:concept}): \begin{enumerate} \itemsep-0.1em \item Partition the tokens into two sets ${\mathbb{A}}$ and ${\mathbb{B}}$ of roughly equal size. \item Draw \textbf{one} edge from each token in ${\mathbb{A}}$ to its \textit{most similar} token in ${\mathbb{B}}$. \item Keep the $r$ most similar edges. \item Merge tokens that are still connected (e.g., by averaging their features). \item Concatenate the two sets back together. \end{enumerate} Because this creates a bipartite graph and each token in ${\mathbb{A}}$ has only one edge, finding connected components in step 4 is trivial. Moreover, we don't need to compute similarity between every pair of tokens, which, if we choose ${\mathbb{A}}$ and ${\mathbb{B}}$ carefully, isn't a problem for accuracy (see Tab.~\ref{tab:ablation_bipartite_order}). In fact, this ``bipartite soft matching'' is nearly as fast as just dropping tokens randomly (see Tab.~\ref{tab:matching_alg}) and takes only a few lines of code to implement (see Appendix~\ref{appendix:bipartite_impl}). \paragraph{Tracking Token Size.} Once tokens are merged, they no longer represent one input patch each. This can change the outcome of softmax attention: if we merge two tokens with the same key, that key has less effect in the softmax term. We can fix this with a simple change, denoted \textit{proportional attention}: \begin{equation} \label{eq:prop_attn} {\bm{A}} = \text{softmax}\left(\frac{{\bm{Q}}{\bm{K}}^\top}{\sqrt{d}} + \log {\bm{s}} \right) \end{equation} where ${\bm{s}}$ is a row vector containing the \textit{size} of each token (the number of patches the token represents). This performs the same operation as if you had ${s}$ copies of the key.
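Eq.~\ref{eq:prop_attn} can be sketched in a few lines. This is an illustrative NumPy version with our own naming (the paper's models are implemented in PyTorch); \texttt{s} holds each token's size, i.e., the number of input patches it represents.

```python
import numpy as np

# Illustrative sketch of proportional attention (single head, no values):
# add log(s) to the attention logits so a token of size s gets the same
# attention mass as s copies of its key would.
def prop_attention(q, k, s):
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d) + np.log(s)[None, :]
    logits -= logits.max(axis=-1, keepdims=True)  # numerically stable softmax
    a = np.exp(logits)
    return a / a.sum(axis=-1, keepdims=True)

q, k = np.random.randn(4, 8), np.random.randn(4, 8)
# a token of size 2 should act exactly like two copies of its key:
a_merged = prop_attention(q, k, np.array([1.0, 1.0, 2.0, 1.0]))
```

Duplicating key 2 and using unit sizes yields the same attention mass as the single size-2 token, which is exactly what the $\log {\bm{s}}$ term buys.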
We also need to weight tokens by ${\bm{s}}$ any time they would be aggregated, like when merging tokens together (see Tab.~\ref{tab:ablation_combining_method}). \paragraph{Training with Merging.} Each component thus far has been designed so that token merging can be added to an already trained ViT model. Training with ToMe isn't necessary, but it may be desirable to reduce the accuracy drop or to speed up training. To train, we simply treat token merging as a pooling operation and backprop through the merged tokens as if we were using average pooling. We don't find a need to use any gradient tricks such as Gumbel softmax \cite{gumbelsoftmax} as in token pruning (e.g., \citet{spvit}). In fact, we find that the same settings used in training a vanilla ViT are also optimal here (see Appendix~\ref{appendix:hyperparameter_sweep}). Thus ToMe is a drop-in replacement to increase training speed. \section{Image Experiments} \label{sec:image_experiments} \vspace{-.2cm} We perform several experiments on ImageNet-1k \cite{imagenet} using ViT models trained in four different ways: AugReg \cite{augreg}, MAE \cite{mae}, SWAG \cite{swag}, and DeiT \cite{deit}. For all experiments, we either run the model \textit{off-the-shelf} with our method or, in the case of MAE and DeiT, \textit{trained} with our method applied. All throughputs are measured during inference on a V100 GPU with optimal batch size and fp32 unless noted otherwise. \vspace{-.15cm} \subsection{Design Choices} \label{subsec:ablations} \vspace{-.1cm} \input{figures/3_ablations} In Tab.~\ref{tab:ablations}, we ablate the design choices made in our approach. For each ablation, we start from our default parameters marked in purple. Unless otherwise noted, we test on an off-the-shelf ViT-L/16 MAE model without training (acc: 85.96\%, im/s: 93.3) and merge with $r=8$, which gradually removes 98\% of tokens over the 24 layers of the network.
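To make the default setting concrete, the token counts it implies can be checked with a short (illustrative) computation, assuming ViT-L/16 at 224px starts with $14 \times 14 = 196$ patch tokens plus one class token; the function name here is our own.

```python
# Illustrative check of the default ablation setting: a constant schedule
# merging r=8 tokens in each of ViT-L's 24 layers, starting from
# 196 patch tokens + 1 class token = 197 tokens.
def tokens_remaining(n_tokens, r, n_layers):
    counts = [n_tokens]
    for _ in range(n_layers):
        counts.append(counts[-1] - r)
    return counts

counts = tokens_remaining(197, 8, 24)
removed = counts[0] - counts[-1]  # 8 * 24 = 192 tokens merged away
frac = removed / 196              # fraction of the patch tokens removed
```

With these assumptions, 192 of the 196 patch tokens are merged, i.e., roughly the 98\% quoted above, leaving only a handful of tokens by the final block.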
\paragraph{Token Similarity.} The tokens' features (\texttt{X}) are not the best choice in terms of performance (Tab.~\ref{tab:ablation_feature_choice}). Moving the merging operation after attention (\texttt{X} vs. \texttt{X}$_\texttt{pre}$) and using the attention keys (\texttt{K}) is significantly more accurate. Then, cosine similarity is the best measure of token distance, as shown in Tab.~\ref{tab:ablation_distance_function}. Finally, we average \texttt{K} over the attention heads instead of concatenating them (Tab.~\ref{tab:ablation_head_aggregation}) for efficiency. \paragraph{Algorithmic Choices.} After deciding which tokens to merge, we combine them by averaging, weighted by token size ${\bm{s}}$ (see Eq.~\ref{eq:prop_attn}). In Tab.~\ref{tab:ablation_combining_method}, this outperforms keeping just the token in ${\mathbb{B}}$, max pooling, or unweighted average pooling. Then, our bipartite matching algorithm requires splitting the input into two disjoint sets. Because we concatenate the sets afterward, we find that assigning tokens by alternating works best (Tab.~\ref{tab:ablation_bipartite_order}). Filling ${\mathbb{A}}$ and then filling ${\mathbb{B}}$ (sequentially) performs the worst. \paragraph{Proportional Attention.} Once merged, tokens can represent more than one input patch. We address this with proportional attention (Eq.~\ref{eq:prop_attn}), which we ablate in Tab.~\ref{tab:ablation_model_tweaks}. Surprisingly, proportional attention is necessary for supervised models (e.g., AugReg, SWAG, DeiT), but not for MAE models. Since this discrepancy disappears after training, it is likely because MAE already removes tokens during pretraining. Nevertheless, we use proportional attention for all but off-the-shelf MAE models. \input{figures/4_matching_r_ablation} \paragraph{Comparing Matching Algorithms.} In Tab.~\ref{tab:matching_alg} we compare our bipartite matching to different token reduction algorithms, both pruning and merging.
Pruning is fast, but with 98\% of the tokens removed overall, important information is lost. This is true both for pruning randomly and for pruning based on what isn't attended to \cite{learnedtokenpruning}. In contrast, merging tokens only loses information when dissimilar tokens are merged. Thus, it's important to correctly select similar tokens to merge. At first, kmeans \cite{kmeans} may seem like the obvious choice, but on top of being slow, it's only slightly better than pruning. While it may minimize reconstruction error, kmeans allows a large number of tokens to match to the same cluster, which increases the probability of dissimilar tokens being merged. Note that \citet{tokenpooling} perform a similar experiment in their appendix and are unable to obtain better than a 10\% accuracy drop with a kmeans-based approach. Instead, we want a \textit{matching} algorithm that only merges the most similar tokens. We could do this greedily by merging the most similar pair of tokens and then repeating without replacement $r$ times. This is accurate but sequential and thus could get slow with large $r$. Our bipartite matching has the accuracy of this greedy approach and the speed of pruning while having constant runtime w.r.t.\ $r$. \paragraph{Selecting a Merging Schedule.} By default, we merge tokens with a \textit{constant} schedule, i.e., $r$ per layer. To evaluate the optimality of this design, we randomly sample a total of 15,000 merging schedules. For each schedule, we test its accuracy and fp16 throughput on ImageNet-1k val using an off-the-shelf AugReg ViT-B/16 model. In Fig.~\ref{fig:r_ablation}, we plot the results of this experiment and find that a constant schedule is close to optimal, especially as the total number of tokens merged increases. We further analyze the best random samples (see Appendix~\ref{appendix:merging_schedule}) and find that a linearly decreasing schedule works well at throughputs up to $\sim$3x.
Thus, we also define a ``decreasing'' schedule that removes $2r$ tokens in the first layer and $0$ tokens in the last layer, linearly interpolating for the rest. This also removes $rL$ tokens in total, but is faster because more are removed early: \begin{align} \text{Constant Schedule} && x & \text{ per layer}& \text{denoted }&r_x\fslope\\ \text{Decreasing Schedule} && 2x \rightarrow 0 &\text{ per layer} & \text{denoted }&r_x\dslope \end{align} \input{figures/5_model_sweep} \vspace{-.3cm} \subsection{Model Sweep} \label{subsec:model_sweep} In Fig.~\ref{fig:model_sweep}, we apply our token merging method to 11 SotA off-the-shelf ViT models from various sources. For each model, we vary $r$ with a constant schedule to construct throughput vs. accuracy curves, starting from $r=0$ (the no-merging baseline) and increasing $r$ until we run out of tokens to merge. We evaluate each model \textit{off-the-shelf}. That is, by default \textit{we don't train}; we just download the model and change a few lines of code. Models are evaluated on 224px images unless otherwise noted. \paragraph{Supervised Models.} Both AugReg \cite{augreg} and SWAG \cite{swag} are ViT models pretrained on a large supervised (or weakly supervised) pretraining dataset and fine-tuned on ImageNet-1k. AugReg covers optimal ViT training up until ViT-L/16, while SWAG pushes the limits of ImageNet by training huge models with massive image sizes. We apply our method \textit{off-the-shelf} on AugReg models in Fig.~\ref{fig:sweep_augreg} and SWAG models in Fig.~\ref{fig:sweep_swag}. Immediately, we can see that a constant schedule gives up to 2$\times$ the throughput no matter the model. And even though we're compressing 96-98\% of the tokens in each, the largest models have barely any accuracy drop: while ViT-B, S, and Ti all have around a 4-5\% accuracy drop at 2$\times$ speed, ViT-L only suffers a 2\% drop on 224px images and a 0.7\% drop on 384px images with AugReg.
Similarly, with SWAG models, ViT-L on 512px images and ViT-H on 518px images both have a small 0.3\% accuracy drop \textit{without training}. Note this trend is not just because larger models have more tokens, since we always reduce the number of tokens by 96-98\%. Instead, we think this is because large models are deeper and thus allow for more \textit{gradual} change in features, which lessens the impact of merging. \paragraph{Self-Supervised Models.} MAE \cite{mae} is a self-supervised pretraining method for ViT with models pretrained and fine-tuned on ImageNet-1k. In Fig.~\ref{fig:sweep_mae} we apply our method both \textit{off-the-shelf} and \textit{trained} by fine-tuning the public pretrained checkpoints. When fine-tuning, we find that we can use the original training recipes. We don't have to compensate for fewer tokens in later layers (see Appendix~\ref{appendix:hyperparameter_sweep}), likely because our method is already tuned to imitate a model without merging. As expected, in Fig.~\ref{fig:sweep_mae}, we see the same trends as before, except that this time, with training, we can bring the error down to 0.4\% for ViT-H, 0.6\% for ViT-L, and 1.7\% for ViT-B at 2$\times$ throughput. Our approach actually implicitly addresses an issue in MAE: because MAE removes tokens during pretraining, its epoch times are $\sim$4$\times$ faster than training a supervised model. However, normal fine-tuning uses all the tokens and doesn't have this benefit. Our token merging method fixes this issue and allows for roughly $\sim$2$\times$ faster epochs at negligible accuracy drops for large models. This suggests that one could train even bigger models with token merging than was possible before. \paragraph{Re-evaluating.} Note that, while in Fig.~\ref{fig:sweep_mae} we train a new model for each value of $r$, this isn't actually necessary. Instead, we can take a model trained with one value of $r$ and re-evaluate it with another.
In fact, it's possible to actually \textit{improve} performance by doing so. For instance, the baseline ViT-L model we train in Fig.~\ref{fig:sweep_mae} gets 85.7\% accuracy. If we re-evaluate our $r=5$ trained model with $r=0$, we obtain 85.8\% accuracy. Thus, it's feasible to speed up training with ToMe and not apply it during evaluation to produce the same or better results. This means that, while the results of applying ToMe in Fig.~\ref{fig:sweep_swag} and Fig.~\ref{fig:sweep_mae} are similar to e.g. scaling the model size, you only have to train one model with ToMe to obtain models across a large range of scales. \input{figures/6_image_results} \vspace{-.2cm} \subsection{Comparison to Other Works} \label{subsec:comparison} In this section, we compare our trained token merging models to other state-of-the-art work on ImageNet-1k, both in terms of the overall vision space as well as other token reduction methods. \paragraph{Comparison to State of the Art.} In Tab.~\ref{tab:image_sota_comparison}, we compare our MAE fine-tuned models to state-of-the-art models trained on ImageNet-1k without extra data: EfficientNet \cite{efficientnet}, Swin \cite{swin}, SwinV2 \cite{swinv2}, CSWin \cite{cswin}, and MViTv2 \cite{mvit2}. All throughputs are on a single V100. Note that we use MAE pretraining, which is not supported for all transformers but provides an accuracy improvement for some, like Swin/SwinV2. Thus we also include SimMIM \cite{simmim} pre-trained Swin and SwinV2 models for comparison. Nevertheless, token merging with ToMe improves the throughput of ViT models such that ViT-L and ViT-H become comparable in speed to models of a lower tier, without scaling the number of features. Thus we display results of ViT ``advancing a tier'' in Tab.~\ref{tab:image_sota_comparison}. More testing is needed to see whether applying ToMe and model scaling at the same time would produce even better results.
\paragraph{Comparison to Token Pruning.} In Tab.~\ref{tab:image_pruning_comparison}, we compare ToMe to token pruning works that use DeiT-S training\footnote{This comparison is difficult, as many token pruning works use different training strategies, some even claiming improvements in accuracy without a valid baseline. A-ViT fine-tunes on top of DeiT, while DynamicViT starts DeiT training from an existing checkpoint. We, on the other hand, train from scratch.}: A-ViT \cite{avit}, DynamicViT \cite{dynamicvit}, and SP-ViT \cite{spvit}, with throughput measured on a V100. Even though we don't use gradient tricks such as gumbel softmax \cite{gumbelsoftmax}, add extra parameters, or use additional training tricks, we can already match the performance and exceed the throughput of existing, much more complicated token pruning works. Moreover, most token pruning works are forced to use padding tokens or attention masking during training, negating the benefits of pruning in the first place. Our method, on the other hand, doesn't suffer from this issue, and we observe a 1.5$\times$ training speedup with DeiT. But we actually don't need to train at all: if we take an off-the-shelf AugReg ViT-S model and apply the same merging schedule, we can match the performance of the DeiT models \textit{without training}. \input{figures/7_image_visualizations} \vspace{-.2cm} \subsection{Visualizations} In Fig.~\ref{fig:image_vis}, we show the input patches belonging to each merged token at the end of the network. We find that applying ToMe results in token merging that resembles part segmentation \cite{partsegmentation}. In the second image, the husky has different tokens for its legs, body, and face. The monkey in the third image has different tokens for its hand, body, face, eyes, and mouth, while the orange it's holding gets its own token despite representing just one input patch.
In cases where there are multiple instances of the same class, like the dung beetles in the fourth image and the Boston terriers in the last image, the same parts from all instances get merged together. Notably, unlike pruning, ToMe is able to merge a large number of tokens both in the background and the foreground without losing information. See more results and methodology in Appendix~\ref{appendix:more_visualization}. \section{Video Experiments} \label{sec:video_experiments} \vspace{-.2cm} The framework of MAE plus token merging is a powerful strategy across several domains. Video is one of the most promising, because of its high redundancy. Thus, we apply our token merging approach on Spatiotemporal MAE \cite{mae-video} for video classification on Kinetics-400 \cite{kinetics}, both by simply applying our method off-the-shelf \textit{without training} and by applying our method during MAE fine-tuning with the default training recipe, as we did for images. Note that nothing in our method needs to change for video: we use the same code for both. \paragraph{Results.} In Tab.~\ref{tab:video_results}, we show the results of applying our method \textit{off-the-shelf} and during MAE fine-tuning using ViT-L from Spatiotemporal MAE, compared to the relevant state-of-the-art on Kinetics-400 classification: Swin \cite{swin-video} pretrained on ImageNet-21k, MViTv2 \cite{mvit2} pretrained with MaskFeats \cite{maskfeats}, and Spatiotemporal MAE as the \gc{baseline}. Amazingly, ToMe applied to ViT-L with a constant schedule can match the throughput of Swin-B while performing better than MViTv2-L, even when evaluated \textit{without training}. Moreover, with a decreasing schedule, ViT-L $^{\text{MAE}}_{r_{65}\text{\dslope{}}}$ significantly outperforms the baseline ViT-B$^{\text{MAE}}$ model with the same flop count, with or without training, meaning ToMe is better than model scaling here.
Training is not necessary for a constant schedule, but it does help with a decreasing schedule. \paragraph{Throughput.} In Tab.~\ref{tab:video_throughput}, we display the throughput and training time of our method applied to ViT-L. With a constant schedule, we can increase throughput by 2.2$\times$ for a negligible 0.2\% accuracy drop. Moreover, this setting \textit{cuts training time in half}, even with the overhead of syncing across 8 GPUs. \input{figures/8_video_results} \paragraph{Clip Count.} Because each forward pass only sees up to 2 seconds of video, it's standard practice to evaluate video recognition models with multiple clips. In Tab.~\ref{tab:video_results}, we evaluate with multiple clips (1 spatial crop, 10 temporal crops). We don't factor the number of clips into the flop count because this is a hyperparameter every method can tune, usually resulting in only small differences as long as a minimum number of clips is used (i.e., 4 in this case). Thus, we choose the same number of clips as other models for comparison. However, this might compensate for the information loss from token merging. In Fig.~\ref{fig:clip_comparison}, we test if this is the case by sweeping over the number of clips for our method compared to the baseline ViT-L model. For $r=65$, we see some degradation compared to the 4-clip sweet spot ($\sim$0.5\%), but for lower $r$ values, there's no decrease compared to the baseline. \paragraph{Visualization.} We visualize the final tokens for each input patch over multiple frames of video in Fig.~\ref{fig:video_vis} using our trained {ViT-L} $^{\text{MAE}}_{r_{65}\text{\fslope{}}}$ model. Just as ToMe performs primitive part segmentation on images, it is able to perform primitive part \textit{tracking} on video. The same object or part is merged into one token across multiple frames of video, like the ball in Fig.~\ref{fig:video_vis}. Note that the extraneous red patch in the third frame is the reflection of the ball in the glass.
More results are in Appendix~\ref{appendix:more_visualization}. \section{Audio Experiments} \label{sec:audio_experiments} \vspace{-.15cm} We perform experiments on Audio MAE \cite{mae-audio}, where a spectrogram of the audio signal is rasterized and then fed into a ViT model. We use the ViT-B model from \citet{mae-audio} and evaluate on AudioSet-2M \cite{audioset}. \input{figures/10_audio_results} \textbf{Results.} Note that the metric reported is mAP instead of accuracy because of class imbalance. Due to training implementation differences, the baseline model we train has a lower mAP than originally reported in \citet{mae-audio}. Thus, in Tab.~\ref{tab:audio_results}, we compare ToMe without training to the original number, and ToMe with training to our trained baseline. Regardless, on audio we obtain an almost 2$\times$ throughput increase with an mAP drop of only 0.4\%. Full results for this experiment are in Appendix~\ref{appendix:full_results}. \section{Conclusion} \label{sec:conclusion} \vspace{-.15cm} In this work, we introduced Token Merging (ToMe) to increase the throughput of ViT models by gradually merging tokens. ToMe naturally exploits the redundancy in the input modality, allowing its use for any modality with redundancy. In this paper we performed extensive experiments on images, video, and audio, obtaining speeds and accuracies competitive with the state of the art in each case. ToMe can be viewed as a ``natural'' hierarchical model, similar to Swin or MViT but using pure transformer blocks. ToMe could be combined with these methods to create an entirely new type of architecture. Similarly, we focus on classification, but our visualizations show potential on tasks like segmentation. Finally, ToMe works well on large models across domains and cuts down training time and memory usage, meaning it could be a core component of training huge models. We leave these as topics for future work and hope ToMe can lead to the creation of better, more efficient transformers.
\section{Full Results} \label{appendix:full_results} Results for plots and tables in the main paper. For all results, im/s indicates throughput and ``speed'' indicates improvement over the baseline. All throughputs are measured on a V100, but the actual values may differ a little from the main paper as the model may have been benchmarked on a different machine. However, all results in the main paper use the same machine for throughput evaluation. \subsection{Images} For each ImageNet-1k model, we display our full results here. \subsubsection{AugReg Models} Full results listed in Tab.~\ref{tab:appendix_augreg_full}. We make no special changes to any of these models. \input{figures/appendix/augreg_full_results} \subsubsection{SWAG Models} Full results listed in Tab.~\ref{tab:appendix_swag_full}. Again, we make no special changes for these models. \input{figures/appendix/swag_full_results} \subsubsection{MAE Models} We evaluate MAE models both off the shelf and trained with ToMe in Tab.~\ref{tab:appendix_mae_full}. For off-the-shelf evaluation we disable proportional attention as noted in Sec.~\ref{subsec:ablations}, but we enable it for the trained models. Note that we compare to baselines we trained ourselves, which may slightly underperform the official baselines (for ViT-L). When training, we fine-tune from the official pretrained weights and use the original training recipe. Unlike prior work, we intend for ToMe to \textit{replace} standard training, not augment it, in order to receive the benefits of faster training times and less memory usage. \input{figures/appendix/mae_full_results} \subsubsection{DeiT Models} We present DeiT results in Tab.~\ref{tab:appendix_deit_full}. For DeiT, we train from scratch with the default training recipe for 300 epochs. Unlike other token pruning works, we don't use any tricks such as starting from an existing checkpoint or fine-tuning. 
Note that for merging, in addition to not merging the class token, we don't merge the distillation token. In Tab.~\ref{tab:appendix_deit_full}, we don't train for all values of $r$, just the baseline $r=0$ and those between 8 and 16. $r=11$ for DeiT-S didn't finish training. \input{figures/appendix/deit_full_results} \subsection{Video} We run the ViT-L model from \citet{mae-video} off the shelf. In Tab.~\ref{tab:appendix_video_full}, we show the results of this experiment by sweeping over $r$. For each setting, we evaluate with 1 spatial crop and 10 temporal clips. Note that the original baseline is evaluated with 3 spatial crops and 7 temporal clips, while we re-evaluated it with $1\times10$. Thus, the baseline has slightly lower accuracy than in the original paper. As with images, for these off-the-shelf MAE pretrained models we don't use proportional attention. \input{figures/appendix/video_full_results} \subsection{Audio} Full results for our audio experiments can be found in Tab.~\ref{tab:appendix_audio_full}. We used the model from \citet{mae-audio} to evaluate off the shelf. However, for training we use our own implementation, which differs from the paper's. For this reason, in Tab.~\ref{tab:appendix_audio_full}, we list two different baselines (one from the original paper, and the other trained by us). In this case, we don't use proportional attention during off-the-shelf evaluation or training. \input{figures/appendix/audio_full_results} \section{Hyperparameters} \label{appendix:hyperparameter_sweep} In Tab.~\ref{tab:appendix_hyperparam_search}, we perform a limited hyperparameter search on parameters that would be affected by applying ToMe: layer decay, drop path, and the number of epochs. Layer decay reduces the learning rate based on the layer of the network. Since ToMe gradually reduces the number of tokens, the size of gradient updates in later layers might already be lower without layer decay.
However, we find that it's not necessary to change that parameter. Drop path randomly drops out entire attention and MLP blocks with some probability. This has the effect of regularizing layers so that they don't rely on a single block. Because we use the \texttt{K} matrix from blocks that could be dropped out, we test the value of this parameter. Again, we find it is not necessary to change this parameter. We also perform the same experiments on video, except with just layer decay and the number of epochs, testing whether ToMe requires increasing the number of epochs (due to seeing fewer tokens overall). And again, the default parameters work the best. \input{figures/appendix/image_hyperparameter_sweep} \section{Merging Schedule} \label{appendix:merging_schedule} \input{figures/appendix/r_schedule} In Fig.~\ref{fig:appendix_merging_schedule}, we plot the average number of tokens merged in each layer for the most accurate random samples in Fig.~\ref{fig:r_ablation}. Around throughputs of 1600-1800, the best schedule is close to constant, which is why constant is close to optimal in this range. For throughputs beyond that, however, a decreasing schedule is best. For this reason, we define a linearly decreasing schedule in addition to a constant schedule in the main paper. \section{Implementation} \label{appendix:bipartite_impl} The following is an implementation of our ``bipartite soft matching'' in PyTorch \cite{pytorch}:
\begin{python}
import math
from typing import Callable

import torch

def bipartite_soft_matching(k: torch.Tensor, r: int) -> Callable:
    """ Input is k from attention, size [batch, tokens, channels]. """
    k = k / k.norm(dim=-1, keepdim=True)
    a, b = k[..., ::2, :], k[..., 1::2, :]
    scores = a @ b.transpose(-1, -2)
    scores[..., 0, :] = -math.inf  # don't merge cls token

    node_max, node_idx = scores.max(dim=-1)
    edge_idx = node_max.argsort(dim=-1, descending=True)[..., None]

    unm_idx = edge_idx[..., r:, :]  # Unmerged Tokens
    src_idx = edge_idx[..., :r, :]  # Merged Tokens
    dst_idx = node_idx[..., None].gather(dim=-2, index=src_idx)

    unm_idx = unm_idx.sort(dim=-2)[0]  # Sort cls token back to idx 0

    def merge(x: torch.Tensor) -> torch.Tensor:
        """ Input is of shape [batch, tokens, channels]. """
        src, dst = x[..., ::2, :], x[..., 1::2, :]
        n, t1, c = src.shape
        unm = src.gather(dim=-2, index=unm_idx.expand(n, t1 - r, c))
        src = src.gather(dim=-2, index=src_idx.expand(n, r, c))
        dst = dst.scatter_add(-2, dst_idx.expand(n, r, c), src)
        return torch.cat([unm, dst], dim=-2)

    return merge
\end{python}
This returns a function that can be applied to any matrix or vector (i.e. to merge features, to calculate token size, or to calculate source patches). Note how this is done all at once in parallel---there are no sequential loops. \section{More Visualization} \label{appendix:more_visualization} To create the visualizations in Fig.~\ref{fig:image_vis} and Fig.~\ref{fig:video_vis}, we follow each final merged token back to its original input patches. Then for each token, we color its input patches with the average color in that region. To make sure different tokens are distinct from each other, we also assign each token a random border color. Note that tokens do not necessarily represent contiguous input regions. The only spatial signal ToMe has comes from the position encodings. \input{figures/appendix/more_img_results} In Fig.~\ref{fig:appendix_more_image_vis}, we present several more examples of merging on images as a continuation of Fig.~\ref{fig:image_vis}. ToMe's propensity for part and object segmentation appears time and time again across many different images.
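The patch-tracking behind these visualizations uses the fact noted above that the returned merge function applies to any matrix: applying the same reduction to a one-hot ``source'' matrix records which input patches each output token covers. The following is a minimal self-contained sketch of this bookkeeping; \texttt{merge\_adjacent\_pairs} is a toy stand-in for the actual bipartite matching, not ToMe's algorithm.

```python
import torch

# Toy stand-in for ToMe's merge: any token reduction mapping
# [batch, tokens, channels] -> [batch, tokens', channels] works here.
def merge_adjacent_pairs(x: torch.Tensor) -> torch.Tensor:
    return x[..., ::2, :] + x[..., 1::2, :]

n, t, c = 1, 8, 16
feats = torch.randn(n, t, c)
# source[b, i, j] = 1 iff token i currently contains input patch j
source = torch.eye(t).expand(n, t, t)

merged_feats = merge_adjacent_pairs(feats)    # [1, 4, 16]
merged_source = merge_adjacent_pairs(source)  # [1, 4, 8]: row i flags the patches of token i
sizes = merged_source.sum(dim=-1)             # patches per merged token
```

Applying the real \texttt{merge} to such a source matrix once per layer yields the token-to-patch map needed to draw figures like Fig.~\ref{fig:image_vis}.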
\input{figures/appendix/more_vid_results} In Fig.~\ref{fig:appendix_more_vid_vis}, we also display more results of ToMe performing object tracking on video. Note that in \cite{mae-video}, each token represents more than one frame. Namely, the patch size is $2\times16\times16$, so each token corresponds to 2 frames of video. We plot the first of the two frames, as we find it more closely matches the merged tokens.
\section{Introduction} Let $K$ be an algebraically closed field of characteristic $p>0$. For every simply connected simple linear algebraic group over $K$ of rank $l$, the irreducible rational representations in defining characteristic with dimension below a bound proportional to $l^2$ were determined by Liebeck in \cite{liebeck}. Lübeck \cite{lubeck} extended these results taking a bound proportional to $l^3$. For groups of type $B_l$, $C_l$ and $D_l$ this bound was $l^3$, but for type $A_l$ the bound taken was $l^3/8$. This is consistent with the fact that the dimension of the natural module for the group is roughly twice the rank for $B_l$, $C_l$ and $D_l$, but merely $l+1$ for type $A_l$. However, a larger bound for the latter is desirable for some applications (see for example \cite{halasi}, \cite{lee}). Let $G=\mathrm{SL}_{l+1}(K)$. As explained in §\ref{prelim}, the irreducible $KG$-modules can be parameterised by their highest weights, and we denote them by $L(\lambda)$. Their dimensions are readily obtained once the weight multiplicities of the subdominant weights are known. For small ranks, Lübeck has developed software and provided lists of weight multiplicities and dimensions of highest weight modules \cite{lubpage}. For type $A_l$, the lists include all modules with $\dim L(\lambda) \le (l+1)^4$ and ranks $l\le 20$. In this note we first determine for arbitrary rank $l$ the highest weights and dimensions of the irreducible modules of groups of type $A_l$ with dimensions $\le (l+1)^3$, as summarised in the following theorem. Throughout the paper, $\epsilon_p(k)$ will denote 1 if $p$ divides $k$ and $0$ otherwise.\\ \begin{theorem} \label{thmthird} Let $G=\mathrm{SL}_{l+1}(K)$ and $l>18$. Table \ref{tablethird} contains all nonzero $p$-restricted dominant weights $\lambda$ up to duals such that $\dim L(\lambda)\le(l+1)^3$, as well as the dimensions of the corresponding modules $L(\lambda)$. 
\end{theorem} We further extend these techniques, mainly applying results of Seitz \cite{seitz} and Cavallin \cite{cavallin}, for modules with $\dim L(\lambda) \le (l+1)^4$, proving the following.\\ \begin{theorem} \label{thmfourth} Let $G=\mathrm{SL}_{l+1}(K)$ and $l>35$. Tables \ref{tablethird} and \ref{tablefourth} contain all nonzero $p$-restricted dominant weights $\lambda$ up to duals such that $\dim L(\lambda)\le(l+1)^4$, as well as the dimensions of the corresponding modules $L(\lambda)$. \end{theorem} \begin{table} \setlength{\tabcolsep}{3em} \renewcommand{\arraystretch}{1.4} \begin{tabular}{ l l l } \specialrule{.1em}{.05em}{.05em} $\lambda$ & $\dim L(\lambda)$ & Conditions\\ \specialrule{.1em}{.05em}{.05em} $\lambda_1$ & $l+1$& \\ $\lambda_2$& $\binom{l+1}{2}$ &\\ $2\lambda_1$ & $\binom{l+2}{2}$& \\ $\lambda_1+\lambda_l$ & $(l+1)^2-1-\epsilon_p(l+1)$& \\ $\lambda_3$ & $\binom{l+1}{3}$ &\\ $3\lambda_1$ & $\binom{l+3}{3}$& \\ $\lambda_1+\lambda_2$ & $2\binom{l+2}{3}- \epsilon_p(3)\binom{l+1}{3}$& \\ $\lambda_1+\lambda_{l-1}$& $3\binom{l+2}{3}-\binom{l+2}{2}-\epsilon_p(l)(l+1)$ &\\ $2\lambda_1+\lambda_{l}$& $3\binom{l+2}{3}+\binom{l+1}{2}-\epsilon_p(l+2)(l+1)$ & \\ \hline $\lambda_4$ & $\binom{l+1}{4}$ & $l\le 28$\\ \specialrule{.1em}{.05em}{.05em} \end{tabular} \caption{Nonzero $p$-restricted dominant weights $\lambda$ such that $\dim L(\lambda)\le (l+1)^3$ for $l>18$.} \label{tablethird} \bigskip \setlength{\tabcolsep}{1.4em} \renewcommand{\arraystretch}{1.4} \begin{tabular}{ l l l } \specialrule{.1em}{.05em}{.05em} $\lambda$ & $\dim L(\lambda)$ & Conditions\\ \specialrule{.1em}{.05em}{.05em} $\lambda_4$& $\binom{l+1}{4}$& \\ $4\lambda_1$ & $\binom{l+4}{4}$& \\ $2\lambda_2$ & $\binom{l+1}{2}^2-(l+1)\binom{l+1}{3}-\epsilon_p(3)\binom{l+1}{4}$ & \\ $\lambda_1+\lambda_3$ & $3\binom{l+2}{4}-\epsilon_p(2)\binom{l+1}{4} $& \\ $2\lambda_1+\lambda_2$ & $3\binom{l+3}{4}$&\\ $\lambda_1+\lambda_{l-2}$ & $(l-2)\binom{l+2}{3}-\epsilon_p(l-1)\binom{l+1}{2} $ & \\ 
$3\lambda_1+\lambda_l$ & $4\binom{l+3}{4}+\binom{l+2}{3}-\epsilon_p(l+3)\binom{l+2}{2}$& \\ $2\lambda_1+\lambda_{l-1}$ & $\binom{l+3}{2} \binom{l}{2}-\epsilon_p(l+1)((l+1)^2 -2)$ & \\ $\lambda_2+\lambda_{l-1}$ & $\binom{l+1}{2}^2 - (l+1)^2 - \epsilon_p(l-1)((l+1)^2-1)-\epsilon_p(l)$ & \\ $2\lambda_1+2\lambda_l$ & $\binom{l+2}{2}^2-(l+1)^2-\epsilon_p(l+3)((l+1)^2-1)-\epsilon_p(l+2)$ &\\ $\lambda_1+\lambda_2+\lambda_{l}$ & $(l+1)(2\binom{l+1}{3}+l^2-1)-4 \epsilon_p(3) (l-2)(\binom{l+1}{3}-1) $ &\\ & $-\epsilon_p(l) \binom{l+2}{2}-\epsilon_p(l+2)(1-\epsilon_p(3))\binom{l+1}{2}$ &\\ \hline $\lambda_5$& $\binom{l+1}{5}$& $l\le 128$ \\ $\lambda_2+\lambda_{3}$& $\binom{l+1}{2} \binom{l+1}{3} -(l+1) \binom{l+1}{4}-\epsilon_p(2)\binom{l+1}{5}-4\epsilon_p(3)\binom{l+2}{5}$ &$l\le 109$\\ $5\lambda_1$& $\binom{l+5}{5}$& $l\le 108$ \\ $3\lambda_1+\lambda_2$ & $4\binom{l+4}{5}-\epsilon_p(5)(3\binom{l+3}{5}+2\binom{l+2}{4}+\binom{l+1}{3})$&$l\le 108$\\ $\lambda_1+\lambda_{4}$& $4 \binom{l+2}{5}-\epsilon_p(5)\binom{l+1}{5}$ &$l\le 42$\\ \specialrule{.1em}{.05em}{.05em} \end{tabular} \caption{Nonzero $p$-restricted dominant weights $\lambda$ not in Table \ref{tablethird} such that $\dim L(\lambda) \le (l+1)^4$ for $l>35$.} \label{tablefourth} \end{table} \emph{Remark.} In Theorem \ref{thmfourth}, if we relax the condition on $l$ to $l>20$, then the values of $\lambda$ that need to be added to Table \ref{tablefourth} are: \begin{description} \item $2\lambda_1+\lambda_3$, for $l \le 35$, with $\dim L(\lambda)=6 \binom{l+3}{5}-\epsilon_p(5)(3 \binom{l+2}{5}+\binom{l+1}{4})$, \item $\lambda_6$ for $l \le 32$, with $\dim L(\lambda)=\binom{l+1}{6}$, \item $\lambda_1+\lambda_{l-3}$, for $l\le 28$, with $\dim L(\lambda)=(l-3)\binom{l+2}{4}-\epsilon_p(l-2) \binom{l+1}{3}$, \item $\lambda_7$, for $l\le 22$, with $\dim L(\lambda)=\binom{l+1}{7}$. \end{description} This can be shown via a lengthier variant of the proof of Theorem \ref{thmfourth}. 
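The $\epsilon_p$-free parts of the dimension formulas in Tables \ref{tablethird} and \ref{tablefourth} are dimensions of the corresponding Weyl modules, so they can be checked numerically against Weyl's dimension formula for type $A_l$, where the positive roots $e_i-e_j$ give $\dim V(\lambda)=\prod_{1\le i<j\le l+1}\frac{j-i+a_i+\cdots+a_{j-1}}{j-i}$. A minimal sketch in Python; the helper \texttt{lam} is our own shorthand.

```python
from fractions import Fraction
from math import comb

def weyl_dim(a):
    """dim V(lambda) for sl_{l+1}, where lambda = a_1*w_1 + ... + a_l*w_l and a = (a_1, ..., a_l).

    Weyl's formula: product over 1 <= i < j <= l+1 of
    <lambda + rho, e_i - e_j> / <rho, e_i - e_j> = (j - i + a_i + ... + a_{j-1}) / (j - i).
    """
    l = len(a)
    d = Fraction(1)
    for i in range(1, l + 2):
        for j in range(i + 1, l + 2):
            d *= Fraction(j - i + sum(a[i - 1:j - 1]), j - i)
    return int(d)

l = 10
lam = lambda *ks: tuple(ks.count(m + 1) for m in range(l))  # lam(1, 2) = lambda_1 + lambda_2

assert weyl_dim(lam(2)) == comb(l + 1, 2)                             # lambda_2
assert weyl_dim(lam(1, 2)) == 2 * comb(l + 2, 3)                      # lambda_1 + lambda_2
assert weyl_dim(lam(1, 1, l)) == 3 * comb(l + 2, 3) + comb(l + 1, 2)  # 2*lambda_1 + lambda_l
```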
The proofs of Theorems \ref{thmthird} and \ref{thmfourth} appear in sections \ref{firstproof} and \ref{secondproof}. Results similar to Theorem \ref{thmfourth} for types $B_l$, $C_l$ and $D_l$ will appear in forthcoming work. In §\ref{descriptions}, we also provide explicit descriptions of the modules with dimensions $\le (l+1)^3$, as quotients of subspaces of the tensor product $V^{\otimes k}$, by combining the Young symmetrizers construction in \cite{fultonharris} and a result of Cavallin (\cite{cavallin}, Lemma 4.1.2). \section{Preliminaries} \label{prelim} Let $G=\mathrm{SL}_{l+1}(K)$ as in the introduction. Let $T<G$ be a maximal torus of $G$ and $B=UT$ a Borel subgroup of $G$, where $U$ denotes the unipotent radical. Let $X(T)=\mathrm{Hom}(T,K^*)\cong \mathbb{Z}^l$ be the character group of $T$ and fix a set of simple roots $\Pi=\{\alpha_1,...,\alpha_l\} \subset X(T)$, a base of the root system $\Phi$ of $G$. Denote by $\Phi^+$ the set of positive roots. Also let $\{\lambda_1,...,\lambda_l\}$ be the set of fundamental dominant weights corresponding to $\Pi$. Define the partial order $\preccurlyeq$ as follows: for $\lambda, \mu \in X(T)$, $\mu \preccurlyeq \lambda$ if and only if $\lambda-\mu$ is a non-negative linear combination of the simple roots. Let $L$ be a finite dimensional $KG$-module. For $\mu \in X(T)$, let $L_{\mu}=\{v \in L : tv=\mu(t)v\ \forall t \in T\}$. If $L_{\mu}\neq \{0\}$, we say that $\mu$ is a weight of $L$ and that $L_{\mu}$ is its corresponding weight space. The module $L$ is the direct sum of its weight spaces $L_{\mu}$. The Weyl group $W=N_G(T)/T$, which for type $A_l$ is isomorphic to the symmetric group $S_{l+1}$, acts on the set of weights. In every $W$-orbit there is exactly one dominant weight, that is, a weight which is a non-negative linear combination of fundamental weights.
Every irreducible $KG$-module has a unique maximum weight $\lambda$ with respect to the partial order $\preccurlyeq$, called its highest weight, which is in turn dominant. Its weight space is the unique 1-space fixed by the Borel subgroup $B$. Conversely, for every dominant weight $\lambda$, there exists a unique irreducible $KG$-module (up to isomorphism) with highest weight $\lambda$, and every irreducible $KG$-module arises in this way. We thus parameterise the irreducible $KG$-modules by their highest weights as $L(\lambda)$. We also denote $m_{\lambda}(\mu)=\dim L(\lambda)_{\mu}$, the multiplicity of $\mu$ in $L(\lambda)$. We will denote by $V(\lambda)$ the Weyl module with highest weight $\lambda$. The module $L(\lambda)$ occurs as a composition factor of $V(\lambda)$ exactly once. Let $\mu=a_1\lambda_1+...+a_l\lambda_l$ be a dominant weight of $L(\lambda)$. We say that $\mu$ is $p$-restricted if $0 \le a_i < p$ for all $i$. We will only consider $p$-restricted highest weights. The reason for this is Steinberg's tensor product theorem (\cite{steinberg}, §11), which states that all irreducible modules can be obtained as tensor products of twists of modules with $p$-restricted highest weights. It is a basic fact that if $\mu \in X(T)$ is a non-negative linear combination of fundamental weights such that $\mu \prec \lambda$, then $\mu$ is a weight of $V(\lambda)$. Premet's theorem \cite{premet} implies, for type $A_l$, that all such $\mu$ are weights of $L(\lambda)$ as well. We say that such a weight $\mu$ is subdominant. Since weight spaces corresponding to $W$-conjugate weights have equal dimensions, we have the following equality: \begin{equation} \label{dimbound} \dim L(\lambda)=\sum_{\mu \preccurlyeq \lambda} \left\vert{ \mu^W }\right\vert m_\lambda(\mu). \end{equation} Premet's theorem then implies $\dim L(\lambda) \ge \sum_{\mu \preccurlyeq \lambda} \left\vert{ \mu^W }\right\vert$. We also note that $m_{\lambda}(\lambda)=1$.
The size of the orbit of a dominant weight can readily be obtained as follows. Write $\mu=a_1\lambda_1+...+a_l\lambda_l$ and let $i_1<...<i_{N_\mu}$ be the indices in $\{1,...,l\}$ corresponding to the nonzero $a_i$'s. The stabiliser in $W$ of $\mu$ is the parabolic subgroup generated by the reflections along the simple roots $\alpha_i$ such that $a_i=0$. For type $A_l$, this means that $W_\mu \cong S_{i_1} \times S_{i_2-i_1}\times ...\times S_{i_{N_{\mu}}-i_{N_{\mu}-1}} \times S_{l+1-i_{N_\mu}}$ and therefore \begin{equation} \label{orbitsize} \left\vert{ \mu^W }\right\vert=\left\vert{W:W_\mu}\right\vert=\frac{(l+1)!}{i_1!(i_2-i_1)!\cdots(i_{N_{\mu}}-i_{N_{\mu}-1})!(l+1-i_{N_\mu})!}. \end{equation} Finally, the weight multiplicities for the Weyl module $V(\lambda)$ can be obtained using Freudenthal's formula (e.g. \cite{fultonharris}, §25.1). For type $A_l$, there is a combinatorial way of finding them (see Young's rule, \cite{james}, chapter 14). \section{Proof of Theorem \ref{thmthird}} \label{firstproof} The proof has two parts. In the first part, for each dominant weight $\lambda$ in Table \ref{tablethird}, we determine the dimension of $L(\lambda)$. In the second part, we prove that the stated weights are indeed all the $p$-restricted dominant weights that correspond to representations of dimension $\le(l+1)^3$. \subsection{Dimensions of the modules} \label{dimsthird} The first dominant weights in Table $\ref{tablethird}$ correspond to well-known modules: $\lambda_1$ for the natural module (dimension $l+1$); $\lambda_k$ for the $k$-th exterior power (dimension $\binom{l+1}{k}$); $k\lambda_1$ for the $k$-th symmetric power (dimension $\binom{l+k}{k}$) and $\lambda=\lambda_1+\lambda_l$ for the adjoint module, of dimension $(l+1)^2-1-\epsilon_p(l+1)$. We are left with the weights $\lambda_1+\lambda_2$, $\lambda_1+\lambda_{l-1}$ and $2\lambda_1+\lambda_l$.
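Equation (\ref{orbitsize}) is straightforward to implement, which is convenient for checking the orbit sizes that enter equation (\ref{dimbound}) in the computations of this section; a minimal sketch in Python (the name \texttt{orbit\_size} is ours):

```python
from math import factorial

def orbit_size(indices, l):
    """|W : W_mu| from equation (2): indices = increasing support {i_1 < ... < i_N} of mu."""
    parts = [indices[0]] + [b - a for a, b in zip(indices, indices[1:])] + [l + 1 - indices[-1]]
    size = factorial(l + 1)
    for part in parts:
        size //= factorial(part)
    return size

l = 20
assert orbit_size([1], l) == l + 1               # orbit of a_1 * lambda_1
assert orbit_size([2], l) == (l + 1) * l // 2    # orbit of a_2 * lambda_2
assert orbit_size([1, l], l) == (l + 1) * l      # orbit of a_1 * lambda_1 + a_l * lambda_l
# the orbit of lambda_5 first exceeds (l + 1)^3 at l = 15:
assert orbit_size([5], 14) <= 15 ** 3 and orbit_size([5], 15) > 16 ** 3
```

Note that the orbit size depends only on the support of $\mu$, not on the values of the nonzero coefficients.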
We will make extensive use of the following result, which is part of Lemma 8.6 in \cite{seitz}.\\ \begin{lemma}\label{lmseitz} Let $\lambda=a_i \lambda_i+a_j \lambda_j$, $i<j$, $p$-restricted as before with $a_i a_j\neq 0$. Suppose $\mu=\lambda-(\alpha_i+...+\alpha_j)$. Then $m_{\lambda}(\mu)=j-i+1-\epsilon_p(a_i+a_j+j-i)$. \end{lemma} The dimensions stated in Table \ref{tablethird} easily follow from this. We give as an example the weight $\lambda=2\lambda_1+\lambda_l$, as the other cases can be dealt with in a similar fashion. The subdominant weights are $\lambda-\alpha_1$ and $\lambda-(\alpha_1+...+\alpha_l)$. The multiplicity of $\lambda-\alpha_1$ in the Weyl module is 1, as can easily be seen (using Freudenthal's formula or otherwise) and therefore so it is in $L(\lambda)$ (by Premet's theorem). By Lemma \ref{lmseitz} the subdominant weight $\lambda-(\alpha_1+...+\alpha_l)$ has multiplicity $l-\epsilon_p(l+2)$. Using equation (\ref{orbitsize}) for the size of the respective orbits, the RHS of equation (\ref{dimbound}) now yields the stated dimension. \subsection{Dominant weights} \label{proofdom3} Let $\lambda$ be a $p$-restricted dominant weight such that $\dim L(\lambda)\le (l+1)^3$ and $l>18$. Our aim is to show that $\lambda$ appears in Table \ref{tablethird}. Write $\lambda=a_1\lambda_1+...+a_l\lambda_l$, $0 \le a_i < p$, and let $I_\lambda=\{i_1,...,i_{N_\lambda}\}$, $i_1<...<i_{N_\lambda}$ be the set of indices in $\{1,...,l\}$ corresponding to the nonzero $a_i$'s. We define \[\Delta_\lambda=\mathrm{max}\{i_1,i_2-i_1,...,i_{N_\lambda}-i_{N_\lambda-1}, l+1-i_{N_\lambda}\}.\] Notice first that $a_l\lambda_1+...+a_1\lambda_l$ is the highest weight of the dual representation of $L(\lambda)$. This will allow us to consider only one of two cases, whenever $\lambda$ is not self-dual. We start by considering the case where $\Delta_\lambda\le l-4$. 
In order to minimise the RHS of equation (\ref{orbitsize}), we choose $I_\lambda=\{5\}$ (this is valid assuming $l>8$). The size of the orbit is then $\binom{l+1}{5}$, which exceeds $(l+1)^3$ if $l>14$. We therefore discard this case. Now assume $\Delta_\lambda=l-3$. The minimum value $|\lambda^W|$ can attain occurs when $I_\lambda=\{4\}$, assuming $l>6$. However, $\lambda_4$ appears in Table \ref{tablethird}. Thus we have to check two cases. If $\lambda = a_4\lambda_4$ and $a_4>1$, then $\lambda-\alpha_4$ is subdominant and its $\lambda_5$ coefficient is $1$. By the previous paragraph, $\dim L(\lambda)>(l+1)^3$ for $l>14$. Otherwise if $\lambda \neq a_4\lambda_4$, the minimum value of $|\lambda^W|$ now occurs when $I_\lambda=\{1,4\}$. The size of the orbit of $\lambda$ then exceeds $(l+1)^3$ for $l>10$. In the following, we assume $\Delta_\lambda \ge l-2$. The next statements exhaust the remaining possibilities. \begin{enumerate}[label=\alph*)] \item If $a_3>0$, necessarily $\lambda=\lambda_3$. \begin{proof} Note that if $a_1>0$, then $\mu=\lambda-(\alpha_1+\alpha_2+\alpha_3)$ is subdominant and \[\dim L(\lambda) \ge |\lambda^W|+|\mu^W| \ge \frac{(l+1)l(l-1)}{2}+\binom{l+1}{4} > (l+1)^3 \text{ for }l>18.\] Also if $a_2>0$ or $a_3>1$, set respectively $\mu=\lambda-(\alpha_2+\alpha_3)$ or $\mu=\lambda-\alpha_3$. In both cases $\mu$ is subdominant with nonzero $\lambda_4$ coefficient and $\mu \neq \lambda_4$, which we already know yields $|\mu^W| > (l+1)^3$, so we discard these too.\end{proof} We may now assume that $\lambda=a_1\lambda_1+a_2\lambda_2+a_l\lambda_l$ and not consider its dual. \item If $a_2>0$ and $a_1>0$, then $\lambda=\lambda_1+\lambda_2$. \begin{proof} Observe that $\lambda-(\alpha_1+\alpha_2)$ is a subdominant weight with nonzero coefficient of $\lambda_3$. By case a) we require that it equals $\lambda_3$. This yields $\lambda=\lambda_1+\lambda_2$.\end{proof} \item If $a_2>0$ and $a_1=0$, then $a_2=1$. 
\begin{proof} If $a_2>1$ then $\lambda-\alpha_2$ is subdominant with nonzero coefficients of $\lambda_1$ and $\lambda_3$. In view of case a), we can discard this case.\end{proof} \item If $a_2=1$, $a_1=0$ and $a_{l}>0$, then $\lambda=\lambda_2+\lambda_{l}$. \begin{proof} If $a_{l}>1$, $\mu=\lambda-\alpha_{l}$ is subdominant with nonzero $\lambda_{l-1}$ coefficient. Thus $\Delta_\mu=l-3$ but $\mu \neq \lambda_4,\lambda_{l-3}$, a contradiction.\end{proof} Assume now that $\lambda$ has the form $a_1\lambda_1+a_l \lambda_l$ and $a_1 \ge a_l$. \item If $a_1>2$, then $\lambda=3\lambda_1$. \begin{proof} Notice that $\mu=\lambda-\alpha_1$ is as in case b). Setting $\mu=\lambda_1+\lambda_2$ yields $\lambda=3\lambda_1$.\end{proof} \item If $a_1=2$ and $a_l>0$, then $\lambda=2\lambda_1+\lambda_l$. \begin{proof} Note that $\mu=\lambda-\alpha_1$ is as in case d). Hence we require $\mu=\lambda_2+\lambda_{l}$ and solving for $\lambda$ yields $\lambda=2\lambda_1+\lambda_l$.\end{proof} \end{enumerate} The weights not considered at this point already lie in Table \ref{tablethird}. This completes the proof of Theorem \ref{thmthird}. \section{Proof of Theorem \ref{thmfourth}} \label{secondproof} As in the previous section, we split the proof into two: first we establish the dimension of $L(\lambda)$ for the dominant weights in Table \ref{tablefourth}; then we prove that these are all the dominant weights to consider. \subsection{Dimensions of the modules} \label{dimsfourth} In addition to Lemma \ref{lmseitz}, we will need the following results on weight multiplicities. Lemma \ref{testerman121} is essentially due to Seitz (see the proof of 6.7 in \cite{seitz}), but we state it as it appears in Lemma 2.3 from \cite{testerman}. Lemmas \ref{cavallinc111}, \ref{cavallin2221}, \ref{cavallin11001}, \ref{cavallin011} and \ref{cavallin3321} are respectively results 2.3.19, 6.1.3, 6.1.10, 7.4.5 and 6.1.4 from \cite{cavallin}. 
Lemma \ref{cavallin1221} is the same as result 7.5.6 from \cite{cavallin} for $p \neq 2$. In the case $p=2$, the result is still true; for the proof one must add the term $-\nu_p(2) \chi^{\mu}(\mu)$ to the expression $\nu_c^\mu(T_\sigma)$ in the original proof and proceed similarly.\\ \begin{lemma}\label{testerman121} If $\lambda=a_j \lambda_j$, $1<j<l$, $a_j>1$ and $\lambda-\mu=\alpha_{j-1}+2\alpha_j+\alpha_{j+1}$, then $m_\lambda(\mu)=2-\epsilon_p(a_j+1)$.\\ \end{lemma} \begin{lemma} \label{cavallinc111} If $\lambda=a_i\lambda_i+a_j\lambda_j$, $ i<j$, $a_i a_j \neq 0$ and $\lambda-\mu=c\alpha_i+\alpha_{i+1}+...+\alpha_j$ with $0<2c \le a_i+1$, then $m_\lambda(\mu)=j-i+1-\epsilon_p(a_i+a_j+j-i)$.\\ \end{lemma} \begin{lemma} \label{cavallin2221} If $\lambda=a_1\lambda_1+\lambda_j$, $1<j<l$, $a_1>1$ and $\lambda-\mu=2\alpha_1+...+2\alpha_j+\alpha_{j+1}$, then $m_\lambda(\mu)=\binom{j+1}{2}-j\,\epsilon_p(a_1+j)$.\\ \end{lemma} \begin{lemma}\label{cavallin11001} If $\lambda=a_1\lambda_1+a_2\lambda_2+a_l\lambda_l$, $a_1 a_2 a_l \neq 0$, and $\lambda-\mu=\alpha_1+...+\alpha_l$, then $m_\lambda(\mu)=2(l-1)-\epsilon_p(a_1+a_2+1)(l-2)-\epsilon_p(a_2+a_l+l-2)-\epsilon_p(a_1+a_2+a_l+l-1)+\epsilon_p(a_1+a_2+1)\epsilon_p(a_2+a_l+l-2)$.\\ \end{lemma} \begin{lemma}\label{cavallin011} If $\lambda=\lambda_2+\lambda_3$, and $\mu=\lambda_5$, then $m_\lambda(\mu)=5-\epsilon_p(2)-4\epsilon_p(3)$.\\ \end{lemma} \begin{lemma}\label{cavallin3321} If $\lambda=a_1\lambda_1+\lambda_j$, $1<j<l-1$, $a_1 >2$ and $\lambda-\mu=3\alpha_1+...+3\alpha_j+2\alpha_{j+1}+\alpha_{j+2}$, then $m_\lambda(\mu)=\binom{j+2}{3}-\epsilon_p(a_1+j)\binom{j+1}{2}$.\\ \end{lemma} \begin{lemma}\label{cavallin1221} If $\lambda=\lambda_2+\lambda_j$, $2<j<l$, and $\mu=\lambda_{j+2}$ (with the convention $\lambda_{l+1}=0$), then $m_\lambda(\mu)=\binom{j+1}{2}-1-\epsilon_p(j)(j+1)-\epsilon_p(j+1)$.\\ \end{lemma} We now state some facts, see e.g. (\cite{lubeck}, §3) for a more detailed explanation.
Let $\mathscr{L}$ be the complex Lie algebra having the same type as $G$, with Chevalley basis $\{e_{\alpha}, f_{\alpha}=e_{-\alpha}, h_{\alpha}: \alpha \in \Phi^+\}$. Let $v^{\lambda}$ be a highest weight vector of the Weyl module $V(\lambda)$. The weight space $V(\lambda)_{\mu}$ is spanned by the set \[\mathscr{W}_{\lambda,\mu}=\left\{\frac{f_{\beta_1}^{s_1}}{s_1!} \cdots \frac{f_{\beta_N}^{s_N}}{s_N!}v^{\lambda} : N, s_i\in \mathbb{Z}_{\ge 0},\ \beta_i \in \Phi^+, \sum_{i=1}^N s_i\beta_i=\lambda-\mu\right\}.\] Take $v,w \in \mathscr{W}_{\lambda,\mu}$, that is, $v=\frac{f_{\beta_1}^{s_1}}{s_1!} \cdots \frac{f_{\beta_N}^{s_N}}{s_N!}v^{\lambda}$, $w=\frac{f_{\gamma_1}^{t_1}}{t_1!} \cdots \frac{f_{\gamma_M}^{t_M}}{t_M!} v^\lambda$, and define the rational (in fact, integer) $a_{v,w}$ by $\frac{e_{\beta_1}^{s_1}}{s_1!} \cdots \frac{e_{\beta_N}^{s_N}}{s_N!}\frac{f_{\gamma_1}^{t_1}}{t_1!} \cdots \frac{f_{\gamma_M}^{t_M}}{t_M!} v^\lambda=a_{v,w} v^{\lambda}$. The bilinear form $F(\cdot, \cdot)$ on this space defined by $F(v,w)=a_{v,w}$ is non-degenerate, and the following holds (\cite{jantzen}, II, 8.21).\\ \begin{lemma}\label{lmrank} Let $\lambda$ and $\mu$ be dominant weights with $\mu \prec \lambda$, and let $A$ be a matrix of the bilinear form on $V(\lambda)_\mu$ defined above, with respect to some basis of elements in $\mathscr{W}_{\lambda,\mu}$. Then the multiplicity of $\mu$ in $L(\lambda)$ is the number of elementary divisors of $A$ that are not divisible by $p$. \end{lemma} We will use this in the proof of the next lemma. To ease the notation, we denote $f_{i,j}=f_{\alpha_i+...+\alpha_j}$.\\ \begin{lemma}\label{alvaro2222} If $\lambda=2\lambda_1+2\lambda_l$, and $\mu=0$, then $m_\lambda(\mu)=\binom{l+1}{2}-\epsilon_p(l+3)l-\epsilon_p(l+2)$. \end{lemma} \begin{proof} Note first that we are considering $\lambda$ to be $p$-restricted and so $p \neq 2$. We start by finding a linearly independent set in $V(\lambda)_\mu$.
For $1 \le j \le i \le l-1$, let $w_{i,j}=\frac{1}{ 2^{\delta_{i,j}} } f_{1,j}f_{1,i}f_{j+1,l}f_{i+1,l}v^{\lambda} $ and denote (in lexicographic order) $v_1=w_{1,1}$, $v_2=w_{2,1}$, $v_3=w_{2,2}$, $v_4=w_{3,1}$, ..., $v_{\binom{l}{2}}=w_{l-1,l-1}$. Also let $v_{\binom{l}{2}+k}=f_{1,k}f_{1,l}f_{k+1,l}v^{\lambda}$ for all $k$ such that $1 \le k \le l-1$ and $v_{\binom{l+1}{2}}= \frac{1}{2}f_{1,l}^2 v^{\lambda}$. Finally, let $\mathscr{S}=\{v_1,...,v_{\binom{l+1}{2}}\}$. By Freudenthal's formula, the multiplicity of $\mu$ in $V(\lambda)$ is $\binom{l+1}{2}$. Therefore proving that $\mathscr{S}$ is linearly independent will show that it is in fact a basis of $V(\lambda)_\mu$. We do this by computing the matrix $A$ of the bilinear form $F(\cdot, \cdot)$ defined above with respect to $\mathscr{S}$. For some positive integer $k \le l$, define first the $k \times (l-1)$ matrix $C_{k,l-1}$ as having 1s in the $(h,h)$ and $(h,k)$ entries for $1 \le h < k$, a 2 in the $(k,k)$ entry, and zeros elsewhere. Define also the $(l-1) \times (l-1)$ matrix $C_{l-1,l-1}'$ as having 1s in its diagonal entries and the value $-1$ elsewhere. By use of the commutation relations in $\mathscr{L}$, one obtains the matrix $A$, which has the form \[ A= \left( \renewcommand{\arraystretch}{1.08} \begin{array}{c|c|c} 4I_{\binom{l}{2}} & \begin{array}{c} -4C_{1,l-1} \\ \vdots \\ -4C_{l-1,l-1} \\ \end{array} & 2J_{\binom{l}{2}}^T\\ \hline 4C_{1,l-1}^T\ 4C_{2,l-1}^T\ \cdots\ 4C_{l-1,l-1}^T& 4C_{l-1,l-1}' & -6J_{l-1}^T\\ \hline 2J_{\binom{l}{2}} & 6J_{l-1} & 6 \end{array} \right) \] where $I_k$ is the $k \times k$ identity matrix and $J_k$ is the $1 \times k$ vector of ones.
Now define $P$ and $\tilde{P}$ as \[ P= \left( \renewcommand{\arraystretch}{1.1} \begin{array}{c|c} I_{\binom{l}{2}} & \,\makebox[6em]{\begin{tabular}{c|c}$C_{1,l-1}$&\\ $\vdots$ & $J_{\binom{l}{2}}$\\ $C_{l-1,l-1}$ &\end{tabular}} \\ \hline \large0 & \ C_{l,l}\ \end{array} \right) \ \ \ \tilde{P}_{i,j} = \left\{ \begin{array}{@{}l@{\thinspace}l} P_{i,i} & \text{if } i=j\\ P_{i,\binom{l+1}{2}} & \text{if } i=\binom{l+1}{2},\ j\le \binom{l}{2}\\ -P_{j,i} & \text{otherwise.} \end{array} \right. \] An elementary check shows that $\tilde{P}AP = 4I_{\binom{l}{2}} \oplus 4(l+3)I_{l-1} \oplus (l+2)(l+3)I_1$. Hence $\mathscr{S}$ is a basis of $V(\lambda)_\mu$ and the result now follows from Lemma \ref{lmrank}. \end{proof} Now Table \ref{tableproof} contains each dominant weight $\lambda$ in Table \ref{tablefourth}, its subdominant weights, their multiplicities and the results we applied to obtain them. When the multiplicity is $1$ in the Weyl module, we do not cite any results. The dimensions in Table \ref{tablefourth} then follow from equations (\ref{dimbound}) and (\ref{orbitsize}). For the weight $\lambda_1+\lambda_2+\lambda_l$, the applications of Lemma \ref{lmseitz} are valid by the same argument in the proof of the Lemma (\cite{seitz}, 8.6). We do not consider the weights of the form $\lambda_k$ and $k\lambda_1$ as we discussed them in §\ref{dimsthird}. We also note that weights of the form $\lambda_1+\lambda_j$, $1<j<l$, only have one subdominant weight, $\lambda_{j+1}$. This has multiplicity $j-\epsilon_p(j+1)$ due to Lemma \ref{lmseitz}, so we omit this case too. 
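The counting in Lemma \ref{lmrank} can be checked numerically against the diagonal form $4I_{\binom{l}{2}} \oplus 4(l+3)I_{l-1} \oplus (l+2)(l+3)I_1$ obtained in the proof of Lemma \ref{alvaro2222}. A minimal sketch in Python, under the assumption (as in that proof, where $p\neq 2$) that the diagonal entries carry the $p$-divisibility of the elementary divisors of $A$:

```python
def mult_from_divisors(divisors, p):
    """Lemma (rank criterion): multiplicity = number of elementary divisors prime to p."""
    return sum(1 for d in divisors if d % p != 0)

def eps(k, p):  # the paper's epsilon_p
    return 1 if k % p == 0 else 0

# diagonal form of the Gram matrix for lambda = 2*lambda_1 + 2*lambda_l, mu = 0
for l, p in [(12, 5), (12, 7), (19, 3), (19, 7)]:
    divisors = ([4] * (l * (l - 1) // 2)
                + [4 * (l + 3)] * (l - 1)
                + [(l + 2) * (l + 3)])
    expected = (l + 1) * l // 2 - eps(l + 3, p) * l - eps(l + 2, p)
    assert mult_from_divisors(divisors, p) == expected
```

Since $p$ cannot divide both $l+2$ and $l+3$, at most one of the two $\epsilon_p$ corrections is active in any given case.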
\begin{table} \setlength{\tabcolsep}{0.5em} \renewcommand{\arraystretch}{1.2} \begin{tabular}{l l l c} \specialrule{.1em}{.05em}{.05em} $\lambda$ & $\mu$ & $m_\lambda(\mu)$ & Result used\\ \specialrule{.1em}{.05em}{.05em} $2\lambda_2$&$\lambda_1+\lambda_3$ &$1$&\\ & $\lambda_4$ & $2-\epsilon_p(3)$& \ref{testerman121}\\ \hline $\lambda=2\lambda_1+\lambda_2$& $2\lambda_2$ & $1$ &\\ &$\lambda_1+\lambda_3$ & $2$ & \ref{lmseitz}\\ &$\lambda_4$ & $3$& \ref{cavallin2221}\\ \hline $\lambda=3\lambda_1+\lambda_l$& $\lambda_1+\lambda_2+\lambda_l$, $\lambda_3+\lambda_{l}$ & $1$& \\ &$2\lambda_1$ & $l-\epsilon_p(l+3)$& \ref{lmseitz}\\ &$\lambda_2$ & $l-\epsilon_p(l+3)$ & \ref{cavallinc111}\\ \hline $\lambda=2\lambda_1+\lambda_{l-1}$& $\lambda_2+\lambda_{l-1}$ & $1$ &\\ & $\lambda_1+\lambda_{l}$ & $l-1-\epsilon_p(l+1)$& \ref{lmseitz}\\ & $0$ & $\binom{l}{2}-\epsilon_p(l+1)(l-1)$& \ref{cavallin2221}\\ \hline $\lambda_2+\lambda_{l-1}$ & $\lambda_1+\lambda_l$ & $l-2-\epsilon_p(l-1)$& \ref{lmseitz} \\ & $0$ & $\binom{l}{2}-1-\epsilon_p(l-1)l-\epsilon_p(l)$ & \ref{cavallin1221}\\ \hline $\lambda=2\lambda_1+2\lambda_l$& $\lambda_2+2\lambda_l$, $2\lambda_1+\lambda_{l-1}$, $\lambda_2+\lambda_{l-1}$ & $1$& \\ & $\lambda_1+\lambda_l$ & $l-\epsilon_p(l+3)$ & \ref{lmseitz} \\ & $0$ & $\binom{l+1}{2}-\epsilon_p(l+3)l-\epsilon_p(l+2)$ & \ref{alvaro2222}\\ \hline $\lambda=\lambda_1+\lambda_2+\lambda_l$ & $\lambda_3+\lambda_{l}$ & $2-\epsilon_p(3)$& \ref{lmseitz} \\ &$2\lambda_1$ & $l-1-\epsilon_p(l)$& \ref{lmseitz}\\ & $\lambda_2$ & $2(l-1)-\epsilon_p(3)(l-2)-\epsilon_p(l)$\\ & & $-\epsilon_p(l+2)+\epsilon_p(3)\epsilon_p(l+2)$ & \ref{cavallin11001}\\ \hline $\lambda=\lambda_2+\lambda_3$ & $\lambda_1+\lambda_4$ & $2-\epsilon_p(3)$ & \ref{lmseitz}\\ &$\lambda_5$ & $5-\epsilon_p(2)-4\epsilon_p(3)$ & \ref{cavallin011}\\ \hline $\lambda=3\lambda_1+\lambda_2$ &$\lambda_1+2\lambda_2$& $1$& \\ & $2\lambda_1+\lambda_3$ & $2-\epsilon_p(5)$ & \ref{lmseitz} \\ &$\lambda_2+\lambda_3$ & $2-\epsilon_p(5)$ & 
\ref{cavallinc111} \\ &$\lambda_1+\lambda_4$ & $3-2\epsilon_p(5)$ & \ref{cavallin2221} \\ &$\lambda_5$ & $4-3\epsilon_p(5)$ & \ref{cavallin3321}\\ \specialrule{.1em}{.05em}{.05em} \end{tabular} \caption{Some weight multiplicities} \label{tableproof} \end{table} \subsection{Dominant weights} Let $\lambda=a_1\lambda_1+...+a_l\lambda_l$ such that $\dim L(\lambda)\le (l+1)^4$ with $0 \le a_i < p$, and define $I_\lambda$ and $\Delta_\lambda$ as in §\ref{proofdom3}. We show that if $l>35$ then $\lambda$ appears in Table \ref{tablethird} or Table \ref{tablefourth}. We again consider the different possibilities for $\Delta_\lambda$. Assuming $\Delta_\lambda \le l-5$, the minimum value $|\lambda^W|$ can attain occurs when $I_\lambda=\{6\}$, assuming $l>12$, in which case the orbit size exceeds $(l+1)^4$ for $l>32$. Assume now $\Delta_\lambda=l-4$. The minimum value $|\lambda^W|$ can attain then occurs when $I_\lambda=\{5\}$, assuming $l>10$, in which case the orbit size exceeds $(l+1)^4$ for $l>128$. This is attained by $\lambda_5$. If $\lambda = a_5\lambda_5$ with $a_5>1$, $\lambda-\alpha_5$ is subdominant and has nonzero $\lambda_6$ coefficient. By the previous paragraph we can discard this case. Alternatively if $\lambda \neq a_5\lambda_5$, the minimum orbit size is attained when $I_\lambda=\{1,5\}$, and it is greater than $(l+1)^4$ for $l>31$. We can now assume $\Delta_\lambda \ge l-3$. The following statements complete the proof. \begin{enumerate}[label=\alph*)] \item If $a_4>0$ and $a_1>0$, then $\lambda=\lambda_1+\lambda_4$. \label{fourth14} \begin{proof} Note that $\mu=\lambda-(\alpha_1+\alpha_2+\alpha_3+\alpha_4)$ is subdominant with nonzero $\lambda_5$ coefficient. Setting $\mu=\lambda_5$ yields the stated $\lambda$.\end{proof} \item If $a_4>0$ and $a_1=0$, then $\lambda =\lambda_4$. 
\label{fourth4} \begin{proof} For the cases $a_4>1$, $a_3>0$, $a_2>0$, set respectively $\mu=\lambda-\alpha_4$, $\mu=\lambda-(\alpha_3+\alpha_4)$ and $\mu=\lambda-(\alpha_2+\alpha_3+\alpha_4)$. Then $\mu$ has nonzero $\lambda_5$ coefficient but $\mu \neq \lambda_5$, a contradiction.\end{proof} In the following we assume that $a_4=0$ and that $\lambda$ has either the form $a_1\lambda_1+a_2\lambda_2+a_3\lambda_3+a_l \lambda_l$ or $a_1\lambda_1+a_2\lambda_2+a_{l-1}\lambda_{l-1}+a_l \lambda_l$. \item If $a_3>0$ and $a_2>0$, then $\lambda=\lambda_2+\lambda_3$. \label{fourth23} \begin{proof} Clearly $\lambda-(\alpha_2+\alpha_3)$ is subdominant with nonzero $\lambda_1$ and $\lambda_4$ coefficients. Forcing it to equal $\lambda_1+\lambda_4$ yields the stated $\lambda$.\end{proof} \item If $a_3>0$ and $a_1>0$, then $\lambda=\lambda_1+\lambda_3$. \label{fourth13} \begin{proof} We have that $\mu=\lambda-(\alpha_1+\alpha_2+\alpha_3)$ is a subdominant weight with nonzero coefficient of $\lambda_4$, hence by cases \ref{fourth14} and \ref{fourth4} it is either $\lambda_4$ (which yields the stated $\lambda$) or $\lambda_1+\lambda_4$. Setting $\mu=\lambda_1+\lambda_4$ yields $\lambda=2\lambda_1+\lambda_3$. We may need to take this weight into account in subsequent cases, but we can discard it as highest weight as follows. Notice that its subdominant weights are $\lambda_2+\lambda_3$, $\lambda_1+\lambda_4$ and $\lambda_5$. The first has multiplicity 1 and the others are respectively greater than $1$ and $2$ due to Lemmas \ref{lmseitz} and \ref{cavallin2221}. These yield $\dim L(\lambda)>(l+1)^4$ for $l>35$.\end{proof} \item If $a_3>0$, $a_2=0$ and $a_1=0$, then $\lambda=\lambda_3$ or $\lambda=\lambda_3+\lambda_l$. \label{fourth3} \begin{proof} If $a_3>1$, $\lambda-\alpha_3$ has nonzero $\lambda_2$ and $\lambda_4$ coefficients, so we discard it by \ref{fourth14} and \ref{fourth4}. 
If $a_l>1$, $\lambda-\alpha_l$ has $\Delta_{\lambda-\alpha_l}= l-4$ and is not $\lambda_5$ or its dual, so we discard it too.\end{proof} From now on we assume $a_3=0$, so $\lambda=a_1\lambda_1+a_2\lambda_2+a_{l-1}\lambda_{l-1}+a_l \lambda_l$. \item If $a_2>0$ and $a_{l-1}>0$, then $\lambda=\lambda_2+\lambda_{l-1}$. \label{fourth2l-1} \begin{proof} We need only consider the two cases $a_2>1$ and $a_1>0$. Respectively, $\lambda-\alpha_2$ and $\lambda-(\alpha_1+\alpha_2)$ have nonzero $\lambda_3$ and $\lambda_{l-1}$ coefficients. In view of cases \ref{fourth23}, \ref{fourth13} and \ref{fourth3}, we can discard these.\end{proof} We can now assume that $\lambda$ has the form $a_1\lambda_1+a_2\lambda_2+a_l \lambda_l$. \item If $a_2>0$, $a_1>0$ and $a_{l}>0$, then $\lambda=\lambda_1+\lambda_2+\lambda_l$. \label{fourth12l} \begin{proof} Note that $\mu=\lambda-(\alpha_1+\alpha_2)$ is subdominant with nonzero $\lambda_3$ and $\lambda_l$ coefficients. By cases \ref{fourth23}, \ref{fourth13} and \ref{fourth3}, we see that $\mu=\lambda_3+\lambda_l$, yielding the stated $\lambda$.\end{proof} \item If $a_2>0$, $a_1>1$ and $a_{l}=0$, then $\lambda=3\lambda_1+\lambda_2$ or $\lambda=2\lambda_1+\lambda_2$. \label{fourth1112} \begin{proof} The subdominant weight $\mu=\lambda-(2\alpha_1+2\alpha_2+\alpha_3)$ has nonzero $\lambda_4$ coefficient. By cases \ref{fourth14} and \ref{fourth4} we set $\mu=\lambda_4$ and $\mu=\lambda_1+\lambda_4$ and we obtain the stated values.\end{proof} \item If $a_2>0$, $a_1=1$ and $a_{l}=0$, then $\lambda=\lambda_1+\lambda_2$. \label{fourth12} \begin{proof} Suppose $a_2>1$. Then $\mu=\lambda-(2\alpha_1+3\alpha_2+2\alpha_3+\alpha_4)$ is subdominant with nonzero $\lambda_5$ coefficient. Hence $\mu=\lambda_5$, which implies $\lambda=\lambda_1+2\lambda_2$. Computing the rank of the matrix of the form $F(\cdot, \cdot)$ on $V(\lambda)_\mu$ as in Lemma \ref{alvaro2222} (here the matrix is the same for all ranks) shows that $m_\lambda(\mu)= 5- \epsilon_p(3) \ge 4$.
Using this, equation (\ref{dimbound}) yields $\dim L(\lambda) \ge (l+1)^4$ for $l>33$. \end{proof} \item If $a_2>0$, $a_1=0$ and $a_{l}>0$, then $\lambda=\lambda_2+\lambda_l$ or $\lambda=\lambda_2+2\lambda_l$. \label{fourth2l} \begin{proof} If $a_2>1$, the subdominant weight $\mu=\lambda-(\alpha_1+2\alpha_2+\alpha_3)$ has nonzero $\lambda_4$ and $\lambda_l$ coefficients. In view of \ref{fourth14} and \ref{fourth4}, we discard this case. Now if $a_l>2$, $\mu=\lambda-(\alpha_{l-1}+2\alpha_l)$ is subdominant with $\Delta_\mu=l-4$ and nonzero $\lambda_2$ and $\lambda_{l-2}$ coefficients, so we discard this too.\end{proof} \item If $a_2>0$, $a_1=0$ and $a_{l}=0$, then $\lambda=\lambda_2$ or $\lambda=2\lambda_2$. \label{fourth22} \begin{proof} Otherwise if $a_2>2$, $\lambda-(\alpha_1+2\alpha_2+\alpha_3)$ is subdominant with nonzero $\lambda_4$ and $\lambda_2$ coefficients, so not as in \ref{fourth14} or \ref{fourth4}.\end{proof} Finally, we assume $\lambda$ has the form $a_1 \lambda_1+a_l \lambda_l$ and $a_1 \ge a_l$. \item If $a_1>2$, then $\lambda=3\lambda_1$, $\lambda=4\lambda_1$, $\lambda=5\lambda_1$ or $\lambda=3\lambda_1+\lambda_l$. \label{fourth1111} \begin{proof} If $a_l>0$, $\mu=\lambda-2\alpha_1-\alpha_2$ has nonzero $\lambda_3$ coefficient and by cases \ref{fourth23}, \ref{fourth13} and \ref{fourth3} we must have $\mu=\lambda_3+\lambda_l$, which yields $\lambda=3\lambda_1+\lambda_l$. Assuming $a_l=0$, $\dim L(a_1\lambda_1)=\binom{l+a_1}{a_1}$. If $a_1>5$, this exceeds $(l+1)^4$ for $l>17$.\end{proof} \end{enumerate} The cases not considered at this point belong to weights already in Tables \ref{tablethird} or \ref{tablefourth} and so the proof of Theorem \ref{thmfourth} is complete. \section{Explicit descriptions of the modules in Theorem \ref{thmthird}} \label{descriptions} In this section we find constructions of the modules corresponding to the weights in Table \ref{tablethird}. 
We omit the weights $\lambda_k$, $k\lambda_1$ and $\lambda_1+\lambda_l$ as the respective constructions were described in §\ref{dimsthird}. To start, the Young symmetrizers construction (or Weyl's construction, see \cite{fultonharris}, Lecture 6) gives the Weyl module as a subspace of the tensor product $V^{\otimes k}$, where $V=V(\lambda_1)$, the natural module. We denote by $\{e_1, ..., e_{l+1}\}$ the canonical basis of $V$. In order to describe $L(\lambda)$, we find the composition factors of the Weyl module, which, in view of §\ref{dimsthird}, in these cases consist of $L(\lambda)$ and (possibly) another module with multiplicity one and with highest weight $\lambda-(\alpha_i+...+\alpha_j)$. We use the same notation as in §\ref{dimsfourth} for the Lie algebra $\mathscr{L}$ and its elements. The following result is an immediate consequence of 4.1.2 from \cite{cavallin}.\\ \begin{lemma} \label{lindep} With the notation of Lemma \ref{lmseitz}, if $p \mid a_i+a_j+j-i$, then the weight $\mu$ affords the highest weight of a $KG$-composition factor of the Weyl module $V(\lambda)$, with highest weight vector $a_j f_{i,j}v^{\lambda}-\sum_{r=i+1}^{j}f_{i,r-1}f_{r,j}v^{\lambda}$, where $v^{\lambda}$ is a highest weight vector of $V(\lambda)$. \end{lemma} In the following, we use this result to give explicit constructions of the modules. \begin{description} \item{Construction for $\lambda=\lambda_1+\lambda_2$} The Weyl module has the following form (\cite{fultonharris}, Lecture 6): \[V(\lambda)=Span\{v_1 \otimes v_2 \otimes v_3+v_2 \otimes v_1 \otimes v_3-v_3 \otimes v_2 \otimes v_1-v_3 \otimes v_1 \otimes v_2 : v_1, v_2, v_3 \in V\}. \] As noted in \cite{fultonharris}, p.~76, if we identify $V \otimes \Lambda^2 V$ as a subspace of $V^{\otimes 3}$, then $V(\lambda)$ can also be realised as $\mathrm{Ker}(V \otimes \Lambda^2 V \rightarrow \Lambda^3 V)$, the kernel of the canonical map from $V \otimes \Lambda^2 V$ to $\Lambda^3 V$.
Now the highest weight vector of $V(\lambda)$ can be taken as $v^{\lambda}=e_1 \otimes (e_1 \wedge e_2)$. The only subdominant weight is $\lambda_3$. By Lemma \ref{lindep}, $\lambda_3$ affords the highest weight of a composition factor of $V(\lambda)$ as a $KG$-module precisely when $p=3$. In this case the weight space $V(\lambda)_{\lambda_3}$ is spanned by the vector \[ \begin{array} {ll} v_R &:= f_{1,2}v^{\lambda}-f_{1,1}f_{2,2}v^{\lambda}\\ &=2 e_1 \otimes (e_3 \wedge e_2)-e_2 \otimes (e_1 \wedge e_3)+e_3 \otimes (e_1 \wedge e_2)\\ &= e_1 \otimes (e_2 \wedge e_3)-e_2 \otimes (e_1 \wedge e_3)+e_3 \otimes (e_1 \wedge e_2)\\ &=e_1 \wedge e_2 \wedge e_3. \end{array} \] Note that, as expected, $R=\mathscr{L} v_R$ is a submodule isomorphic to $\Lambda^3 V$ (since in fact, $R=\Lambda^3 V$). We conclude that \[ L(\lambda_1+\lambda_2) = \left\{ \begin{array}{@{}l@{\thinspace}l} \mathrm{Ker}(V \otimes \Lambda^2 V \rightarrow \Lambda^3 V) & \text{ if }p \neq 3\\ \mathrm{Ker}(V \otimes \Lambda^2 V \rightarrow \Lambda^3 V)/ \Lambda^3 V & \text{ if }p=3.\\ \end{array} \right. \] \item{Construction for $\lambda=\lambda_1+\lambda_{l-1}$} The Weyl module is in this case spanned by the vectors in the kernel of the canonical map $V \otimes \Lambda^{l-1} V \rightarrow \Lambda^l V$. The highest weight vector can be taken as $v^{\lambda}=e_1 \otimes (e_1 \wedge e_2 \wedge ... \wedge e_{l-1})$. The only subdominant weight is $\lambda_l$. In the case $p\mid l$, in view of Lemma \ref{lindep} we define \[v_R = f_{1,l-1}v^{\lambda}-\sum_{r=2}^{l-1}f_{1,r}f_{r,l-1}v^{\lambda}=\pm e_1 \wedge ... \wedge e_l. \] Now $R=\mathscr{L} v_R = \Lambda^l V $ is a submodule of $V(\lambda)$ and \[ L(\lambda_1+\lambda_{l-1}) \cong \left\{ \begin{array}{@{}l@{\thinspace}l} \mathrm{Ker}(V \otimes \Lambda^{l-1} V \rightarrow \Lambda^l V) & \text{ if }p \nmid l\\ \mathrm{Ker}(V \otimes \Lambda^{l-1} V \rightarrow \Lambda^l V)/ \Lambda^l V & \text{ if } p\mid l.\\ \end{array} \right. 
\] \item{Construction for $\lambda=2\lambda_1+\lambda_{l}$} The Weyl module is again the image in $V^{\otimes l+2}$ of the corresponding Young symmetrizer, which in turn corresponds to the tableau associated to the partition $(3,1,...,1)$. A highest weight vector is $(e_1 \cdot e_1) \otimes (e_1 \wedge ... \wedge e_{l}) $. To see that this lies in $V(\lambda)$, notice that it is the image of the vector $e_{1} \otimes e_{1} \otimes e_{1} \otimes e_{2} \otimes ... \otimes e_{l}$ under the Young symmetrizer. Now, the only subdominant weight that could afford the highest weight of a composition factor is $\lambda_1$. By Lemma \ref{lmseitz} this happens precisely when $p \mid l+2$ and, in that case, by Lemma \ref{lindep}, we define \[v_R = f_{1,l}v^{\lambda}-\sum_{r=2}^{l}f_{1,r-1}f_{r,l}v^{\lambda}=\pm 2\left (e_1 \otimes (e_1 \wedge ... \wedge e_{l+1})+\varphi(e_1 \otimes (e_1 \wedge ... \wedge e_{l+1}))\right),\] where $\varphi : V^{\otimes l+2} \rightarrow V^{\otimes l+2}$ is the linear map that swaps the first two entries of the basis tensors: $\varphi(e_{b_1} \otimes e_{b_2} \otimes e_{b_3} \otimes ... \otimes e_{b_{l+2}})=e_{b_2} \otimes e_{b_1} \otimes e_{b_3} \otimes ... \otimes e_{b_{l+2}}$, for $b_i \in \{1,2,...,l+1\}$. Finally, \[ L(2\lambda_1+\lambda_{l})= \left \{ \begin{array}{@{}l@{\thinspace}l} V(\lambda) & \text{ if }p \nmid l+2\\ V(\lambda)/ R & \text{ if } p\mid l+2.\\ \end{array} \right. \] where $R=\mathscr{L} v_R = Span\{(id + \varphi)(e_i \otimes (e_1 \wedge ... \wedge e_{l+1})): 1 \le i \le l+1 \} $, and as expected, $R \cong V$. \end{description} \section*{Acknowledgements} I wish to thank my supervisor Martin W. Liebeck for his encouragement and priceless advice throughout the work on this paper. I am also grateful for the financial support of the Imperial College UROP Award. \setlength{\bibsep}{2pt}
\subsection{Mirror symmetric tTLG Hofstadter-Moir\'{e} butterfly} The lattice structure of tTLG exhibits mirror symmetry about the middle layer, as indicated in Fig.~\ref{fig:Fig1tTLG} (a). This facilitates a description of energy bands in parity eigenstates~\cite{LSHL,SMAA,VPF}. We denote sublattice $A(B)$ on layer $i$ with $A_i(B_i)$. The even parity orbital combinations are then given by $(A_+,B_+,A_2,B_2)$ while the odd parity orbitals are $(A_-,B_-)$, where $A_{\pm} = (A_{1} \pm A_{3})/\sqrt{2}$, and a similar expression applies for $B_{\pm}$. We take the relative in-plane displacement, $d=0$, and denote the twist angle of the top and bottom layers by $\theta/2$ and that of the middle layer by $-\theta/2$. The band dispersion due to the Moir\'{e} pattern formed at small twist angles can be captured by extensions of the Bistritzer-MacDonald (BM) Hamiltonian~\cite{Bistritzer12233}. The BM model captures the effect of the periodic tunneling between the layers in the AB and AA stacked regions (see Fig.~\ref{fig:Fig1tTLG} (b)), denoted by $w_{AB} = w = 97.5~ {\rm meV}$ and $w_{AA} = \eta w $ with $\eta = 0.82$, respectively (see Appendix A for the tTLG Hamiltonian). At zero displacement fields, due to mirror symmetry, the Hamiltonian can be decomposed into a tBLG-like Hamiltonian with enhanced tunneling parameter $ w \to \sqrt{2} w $, and an MLG-like Dirac band (see Appendix A)~\cite{VPF,PhysRevB.100.085109}. The large Moir\'{e} periodicity of twisted 2D crystals results in fractal Hofstadter-Moir\'{e} (HM) bands at high magnetic fields. We used the parity eigenstate basis to calculate the HM-bands of tTLG with the gauge choice ${\bf A} = B(-y,0)$. The Hamiltonian was expressed in the basis set, $\{ | n, Y_i ,\alpha , \sigma \rangle \}$, where $n$ denotes the Landau level (LL) index at the guiding center positioned at $Y_{i}$ (which corresponds to a lattice site in the unit cell) on the sublattice $\alpha$.
The index $\sigma = 1,2,3$ labels the parity eigenspinors, with the assignments $1=(A_{+},B_{+})$, $2=(A_2,B_{2})$ for even parity and $3=(A_-,B_-)$ for odd parity. The details of the calculation are presented in Appendix A. \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth,clip]{Fig3tTLG.pdf} \caption{Quantum parity Hall phase in the Hofstadter-Moir\'{e} butterfly patterns of tTLG for a) $\theta = 2^{\circ} $, b) $\theta=1.6^{\circ}$. The number of edge states associated with the even-parity tBLG-like bands is shown in blue, while that associated with the odd-parity MLG-like bands is shown in red, in units of $e^2/h$. c) Edge state schematic of the quantum parity Hall phase in a six-terminal Hall-bar geometry at positive charge densities.} \label{fig:QPHtTLG} \end{center} \end{figure} Our calculations for tTLG exhibited rich structures in the HM spectrum, which can be tuned by the electric field and twist angle. Fig.~\ref{fig:tTLGHMbands} shows the HM butterfly for mirror-symmetric tTLG at three representative angles ($\theta = 2^{\circ}, 1.6^{\circ}$ and $1.5^{\circ}$). The Hall conductivity, in units of $e^2/h$, is shown in the spectral gaps. In Fig.~\ref{fig:tTLGHMbands}, the even parity bands are depicted in blue or black, while the odd parity bands are shown in red. The Landau bands originating from the odd parity sector can be distinguished by $\epsilon_n \propto \sqrt{B}$, while the energy of the even parity bands shows a tBLG HM fractal pattern. Similar HM butterfly patterns for tBLG have been reported in Ref.~\onlinecite{HLB}. Our even parity band HM butterfly patterns are consistent with these reports but now occur at twice the magnetic fields due to the effective $\sqrt{2}$ enhancement of the twist angle in the tTLG even parity sector.
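The mirror decomposition invoked above can be illustrated with a small numerical sketch. Here $h_1$, $h_2$ and $T$ are random stand-ins for the outer-layer block, the middle-layer block and the interlayer tunneling (illustrative assumptions, not the actual BM blocks); any layer-space Hamiltonian with this mirror structure block-diagonalizes in the parity basis, with the even sector acquiring a $\sqrt{2}$-enhanced tunneling:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2  # stand-in dimension for each layer's sublattice block

def rand_herm(n):
    m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (m + m.conj().T) / 2

h1 = rand_herm(d)  # outer-layer block (layers 1 and 3 are mirror images)
h2 = rand_herm(d)  # middle-layer block
T = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))  # interlayer tunneling

Z = np.zeros((d, d), dtype=complex)
I = np.eye(d)
# Mirror-symmetric layer-space Hamiltonian, layers ordered (1, 2, 3):
H = np.block([[h1, T, Z],
              [T.conj().T, h2, T.conj().T],
              [Z, T, h1]])

# Parity basis: (|1> + |3>)/sqrt(2) and |2> (even), (|1> - |3>)/sqrt(2) (odd)
s = 1 / np.sqrt(2)
U = np.block([[s * I, Z, s * I],
              [Z, I, Z],
              [s * I, Z, -s * I]])
Hp = U.conj().T @ H @ U

even = Hp[:2 * d, :2 * d]      # tBLG-like sector
odd = Hp[2 * d:, 2 * d:]       # MLG-like sector
coupling = Hp[:2 * d, 2 * d:]  # vanishes by mirror symmetry

print(np.linalg.norm(coupling))                       # ~0: parity sectors decouple
print(np.linalg.norm(even[:d, d:] - np.sqrt(2) * T))  # ~0: tunneling enhanced by sqrt(2)
print(np.linalg.norm(odd - h1))                       # ~0: odd sector is a single Dirac-like block
```

With the actual BM blocks substituted for $h_1$, $h_2$ and $T$, this is the origin of the $w \to \sqrt{2} w$ replacement quoted above.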
\begin{figure*} \begin{center} \includegraphics[width=0.95\textwidth,clip]{Fig4tTLG.pdf} \caption{Hofstadter-Moir\'{e} butterfly patterns in tTLG for an electric displacement field strength $ \Delta_{\perp} =10 ~{\rm meV}$ at the angles a) $\theta = 2^{\circ} $, b) $\theta=1.6^{\circ}$ and c) $\theta = 1.51^{\circ}$ (magic angle) with $w = 97.50~ {\rm meV}$ and $\eta = 0.82 $. The values in the spectral gaps indicate the Hall conductivity $\sigma_{xy}$ in units of $e^2/h$. } \label{fig:tTLGHMEfield} \end{center} \end{figure*} We primarily focused on three angles, each representative of a distinct regime of the HM butterfly. A detailed view of the central bands of the HM butterfly is shown in Fig.~\ref{fig:tTLGHMbands} d), e), and f). The $\theta = 2^{\circ}$ HM butterfly is representative of the twist angle range $\theta \approx 1.7^{\circ}$ to $2.5^{\circ}$. In this regime, we found an emergent Hofstadter pattern similar to the Hofstadter pattern of the tight-binding model for graphene. In contrast, for $\theta = 1.6^{\circ}$, we found a spectral gap for all magnetic fields. Similar results were obtained for the range of angles $\theta \approx 1.65^{\circ}$ to $1.55^{\circ}$, after which the pattern changed significantly. At the magic angle $\theta =1.51^{\circ}$, the HM pattern is modified and bears no resemblance to the Hofstadter pattern in monolayer graphene. The bandwidth of the central bands decreases by nearly an order of magnitude compared to the HM pattern at $\theta = 2^{\circ}$. Below the magic angle, at $\theta=1.45^{\circ}$, a pattern similar to that at $\theta = 1.6^{\circ}$ reemerged. In Fig.~\ref{fig:tTLGHMbands}, the integers in the spectral gaps of the HM butterflies denote the Hall conductivity, $\sigma_{H}$, in units of $e^{2}/h$. The numerically obtained eigenfunctions were used in the Wilson loop procedure~\cite{Chernnumbercalculation} to calculate the Chern numbers and Berry flux (see Appendix B for details of this method).
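The Wilson loop procedure itself is deferred to Appendix B. As an illustrative stand-in (not the code used for the tTLG basis), the sketch below implements the closely related lattice field-strength method of Fukui and Hatsugai on a simple two-band model: link overlaps of Bloch eigenstates are multiplied around each Brillouin-zone plaquette, and the accumulated phase sums to an integer Chern number.

```python
import numpy as np

def chern_lower_band(m, nk=40):
    """Fukui-Hatsugai lattice Chern number of the lower band of
    h(k) = sin(kx) sx + sin(ky) sy + (m + cos(kx) + cos(ky)) sz."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    ks = np.linspace(0, 2 * np.pi, nk, endpoint=False)

    def lower_state(kx, ky):
        h = np.sin(kx) * sx + np.sin(ky) * sy + (m + np.cos(kx) + np.cos(ky)) * sz
        _, v = np.linalg.eigh(h)
        return v[:, 0]  # eigh sorts eigenvalues in ascending order

    u = np.array([[lower_state(kx, ky) for ky in ks] for kx in ks])
    flux = 0.0
    for i in range(nk):
        for j in range(nk):
            a = u[i, j]
            b = u[(i + 1) % nk, j]
            c = u[(i + 1) % nk, (j + 1) % nk]
            e = u[i, (j + 1) % nk]
            # Wilson loop of link overlaps around one BZ plaquette
            loop = np.vdot(a, b) * np.vdot(b, c) * np.vdot(c, e) * np.vdot(e, a)
            flux += np.angle(loop)
    return round(flux / (2 * np.pi))

print(chern_lower_band(1.0))  # topological phase: |C| = 1
print(chern_lower_band(3.0))  # trivial phase: C = 0
```

The result is gauge invariant plaquette by plaquette, which is what makes the method robust on the modest $k$-meshes used for Hofstadter bands.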
We calculated the Hall conductivity within the larger spectral gaps $\gtrsim 5~\rm{meV}$. The Hall conductivity was regularized to vanish at the charge neutrality point, $\sigma_{H} (\epsilon_{F} = 0 ) = 0$, and includes the spin and valley degeneracy. The Chern numbers and Hall conductivity of the emergent HM pattern for $\theta = 2^{\circ}$ in the even parity sector of tTLG are the same as those of the monolayer graphene Hofstadter butterfly. This aspect of the duality for tBLG has been reported in Ref.~\onlinecite{HLB}. However, in tTLG, the Hall conductivity is the sum of the Hall conductivities of the even-parity tBLG-like HM bands and the odd-parity MLG-like LLs. A consequence of mirror-symmetry in tTLG is a symmetry-protected topological (SPT) phase that simultaneously quantizes longitudinal and Hall resistance. This mirror-SPT (mSPT) phase, which we call the quantum parity Hall phase, was identified at neutral charge density in ABA trilayer graphene~\cite{PhysRevLett.121.066602,Stepanov10286}. In tTLG, this state occurs at finite charge density. It is marked by unequal branches of counterpropagating even-parity and odd-parity edge modes associated with tBLG-like HM bands and MLG-like LL bands. In Fig.~\ref{fig:QPHtTLG} a) and b), we label the regions where the quantum parity Hall state appears by the number of edge states associated with each parity sector, blue(red) for even(odd)-parity. In these regions, the Hall conductivity is positive(negative) for negative(positive) energies. Since the neutral charge density is defined at zero energy, this corresponds to a positive(negative) sign of Hall conductance for hole-like(electron-like) charge densities. This is an essential feature of the quantum parity Hall state in tTLG. From our calculations of Chern numbers, we only found one instance of this state.
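The unequal counterpropagating edge branches translate directly into quantized multiterminal resistances via the Landauer--B\"{u}ttiker formalism. The sketch below is a minimal stand-in for the Appendix C calculation, assuming ideal contacts and fully coherent edge channels in a six-terminal Hall bar (as in Fig.~\ref{fig:QPHtTLG} c)), with four even-parity modes circulating one way and two odd-parity modes the other:

```python
import numpy as np

n_plus, n_minus = 4, 2   # even-parity vs. counterpropagating odd-parity channels
N = 6                    # leads 1..6 around the Hall bar (array index 0..5)

# Buttiker equations, in units of e^2/h:
#   I_i = (n_plus + n_minus) V_i - n_plus V_{i-1} - n_minus V_{i+1}
G = np.zeros((N, N))
for i in range(N):
    G[i, i] = n_plus + n_minus
    G[i, (i - 1) % N] -= n_plus
    G[i, (i + 1) % N] -= n_minus

I = np.zeros(N)
I[0], I[3] = 1.0, -1.0   # unit current injected at lead 1, drained at lead 4

# Ground lead 4 (V = 0) and solve the remaining five equations
keep = [0, 1, 2, 4, 5]
V = np.zeros(N)
V[keep] = np.linalg.solve(G[np.ix_(keep, keep)], I[keep])

R_hall = V[1] - V[5]        # R_{14,26}, in units of h/e^2
R_long = abs(V[2] - V[1])   # |R_{14,32}|, in units of h/e^2
print(R_hall, R_long)       # 1/6 and 1/9
```

With ideal probes, this reproduces the simultaneously quantized Hall response $h/6e^2$ and longitudinal response $h/9e^2$ discussed below.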
Still, other types of mSPT phases can be realized in regions with smaller spectral gaps $\lessapprox 5~\rm{meV}$. They can be identified by negative(positive) even-parity tBLG-bands Chern numbers at positive(negative) charge densities. Fig.~\ref{fig:QPHtTLG} c) shows the edge states for the quantum parity Hall phase in a six-terminal Hall bar geometry for positive charge densities. There are $2$ edge modes originating from the odd parity LL bands (shown in red) and $4$ counter-propagating edge modes arising from the even-parity Hofstadter bands (shown in blue). Since the edge states in the mirror sectors have unequal branches of edge modes, they exhibit simultaneous quantized Hall and longitudinal resistances. The edge modes are protected from back-scattering by mirror symmetry. The resistances in the Hall bar geometry can be calculated from the Landauer-B\"{u}ttiker approach~\cite{PhysRevB.38.9375} (see Appendix C) for the quantum parity Hall state, giving \be R_{14,26} = \frac{h}{6e^2} ; \qquad R_{14,32} = \frac{h}{9e^2} ; \qquad R_{14,14} = \frac{4h}{9 e^2}, \ee where $R_{ij,kl}$ is defined as the ratio of the voltage measured between the $k^{th}$ and the $l^{th}$ leads to the current applied from the $i^{th}$ to the $j^{th}$ lead. The edge states of the quantum parity Hall phase and their stability to disorder will be discussed elsewhere. Next, we study the effect of the displacement field on the HM patterns of tTLG. \subsection{Emergent zero-energy state in tTLG} The displacement field breaks mirror symmetry, hybridizing the odd-parity Dirac LLs with the HM bands of the even-parity tBLG-like sector. Fig.~\ref{fig:tTLGHMEfield} $(a), (b)$ \& $(c)$ shows the HM pattern in the presence of a displacement field of strength $\Delta_{\perp} = 10$ meV. The most striking feature in Fig.~\ref{fig:tTLGHMEfield} $(a), (b)$ \& $(c)$ is the emergence of two spectral gaps adjacent to the charge neutrality point.
For all three angle regimes, we observed a fractured fractal pattern in a displacement field when compared to the HM fractal patterns in Fig.~\ref{fig:tTLGHMbands} d), e), and f). This is accompanied by the emergence of a weakly dispersing zero-energy band pinned at the charge neutrality point. This zero-energy band disperses with a small bandwidth $\approx 0.1$--$0.2~{\rm meV}$ for the twist angle $\theta=2^{\circ}$. However, its bandwidth slightly increases at smaller twist angles as a function of the magnetic field. The spectral gap at zero energy is given by $\approx \Delta_{\perp}/2$ for $\theta= 2^{\circ}$, and it is independent of the magnetic field within numerical accuracy. This spectral gap results from a level repulsion mechanism, as discussed below. Another striking feature is the change of the Hall conductance as a function of the twist angle. The Hall conductances adjacent to the zero-energy state, $\sigma_{xy}= -2e^2/h$ and $2e^2/h$ at $\theta = 2^{\circ}$ and $1.6^{\circ}$, change to $\sigma_{xy}= 2e^2/h$ and $-2e^2/h$ at $\theta =1.51^{\circ}$. This topological transition indicates a significant band reconstruction between the twist angles $\theta = 1.6^{\circ}$ and $\theta = 1.51^{\circ}$. These transitions are associated with the tunability of the Berry curvature and band dispersion as a function of the electric field. This phase transition is evident in the corresponding Wannier plots for tTLG (see Appendix B). Particularly striking is the emergence of a zero-energy flat band multiplet in the angle regime $\theta = 1.7^{\circ}$ to $2.5^{\circ}$ under a displacement field. This zero-energy flat band multiplet is $q$-fold degenerate and completely resides in the middle layer. The origin of the zero-energy band can be understood by starting from the chiral limit ($\eta=0$) at zero displacement field and projecting on the $N=0$ LL index.
The chiral limit, defined by $\eta =0$, corresponds to the absence of tunneling between the same orbitals (i.e. $w_{A_+ A_2}=w_{B_+ B_2} =0$) in the even parity tBLG-like bands~\cite{AshwinTMLG}. Therefore, in a magnetic field, when $\eta=0$, the $N=0$ LL in valley ${\bf K}$ lies on the sublattices $A_{+}, A_{2}$ in the even parity sector, and $A_{-}$ in the odd parity sector, while in valley ${\bf K'}$ the zeroth LL lies on the sublattices $B_{+}, B_{2}$ in the even parity sector, and $B_{-}$ in the odd parity sector. Since the $N=0$ LLs are localized on these sublattices, there is no direct coupling between the $N=0$ LLs, as indicated in Fig.~\ref{fig:Zeneprop} a). However, the $N \neq 0 $ LLs are perturbatively coupled to the $N=0$ LLs due to the $w_{A_+ B_2}$ and $w_{B_+ A_2}$ tunneling. In Fig.~\ref{fig:Zeneprop} a), this mixing is indicated by the dashed lines, where we only show the coupling in valley ${\bf K}$. Below, we present the argument for the level-repulsion mechanism in the ${\bf{K}}$ valley; the result for the other valley ${\bf{K'}}$ is obtained by interchanging the sublattices. \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth,clip]{Fig5tTLG.pdf} \caption{a) Schematic representation of the coupling in the chiral limit ($\eta =0$) in valley ${\bf K}$ of the $N=0$ with the $N \neq 0$ LLs. The direct coupling in the $N=0$ LL due to the displacement field is represented by the solid black line, while the perturbative coupling due to tunneling between the $A$ and $B$ sublattices is denoted by the dashed line. b) Band gap as a function of the displacement field for different values of $\phi = p/q$; all energies are in meV. All results are for the twist angle $\theta = 2^{\circ}$. } \label{fig:Zeneprop} \end{center} \end{figure} When a displacement field is applied, the $A_+$ orbital hybridizes with the $A_{-}$ orbital in the odd-parity sector. This direct coupling is shown as the solid line in Fig.~\ref{fig:Zeneprop} a).
The hybridization induced by the displacement field couples the $N=0$ LLs on the sublattice $A_{+}$ with $A_{-}$ in the ${\bf K}$ valley. These states gap out due to level repulsion, leaving the state on the middle-layer orbital $A_{2}$ in ${\bf K}$ at zero energy. Therefore, in the chiral limit $\eta =0$, the emergent zero-energy state in the HM pattern at $\theta =2^{\circ}$ is localized in the middle layer on sublattice $A_{2}$ in the ${\bf K}$ valley. It is essential to point out that this zero-energy level has $N \neq 0 $ LL components due to mixing induced by the $w_{A_+ B_2}$ and $w_{A_2 B_+} $ tunneling terms. In the chiral limit, the calculated projected weight of the emergent zero-energy state averaged over the BZ-mesh on the $N=0$ LL orbital in the middle layer was $\approx 0.8$, indicating mixing with higher LLs in the middle layer. When $\eta =0.82$, the $N=0$ LL in the middle layer is weakly coupled to the $N \neq 0$ LLs by a combination of the displacement field and the tunneling terms $w_{A_+ A_2}\neq 0$, and $w_{B_+ B_2} \neq 0$. However, even when $\eta =0.82$, we found that the emergent zero-energy state entirely resided in the middle layer. We verified our analysis by projecting the wavefunction amplitude of the emergent zero-energy state on the middle layer. The projected amplitude of the emergent zero-energy state, averaged over the BZ-mesh and the multiplet band index, came out to be $\approx 1$ on the middle layer. This projected amplitude was calculated for different values of $ \phi$. The same results are obtained for various displacement fields. Furthermore, the calculated projected weight of the emergent zero-energy state averaged over the BZ-mesh on the $N=0$ LL orbital in the middle layer was $\approx 0.6$, indicating significant mixing with $N\neq 0$ LLs in the middle layer.
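The level-repulsion argument can be condensed into a minimal sketch in the ${\bf K}$-valley $N=0$ subspace spanned by $(A_+, A_-, A_2)$, neglecting the perturbative $N \neq 0$ admixture and taking layer potentials $\pm\Delta_{\perp}/2$ on the outer layers (an illustrative convention). The displacement field then enters only through the matrix element $\langle A_+ | U | A_- \rangle = \Delta_{\perp}/2$, giving
\[
H_{N=0} \simeq \left( \begin{array}{ccc} 0 & \Delta_{\perp}/2 & 0 \\ \Delta_{\perp}/2 & 0 & 0 \\ 0 & 0 & 0 \end{array} \right), \qquad E = \pm\frac{\Delta_{\perp}}{2},\ 0,
\]
so the $A_{\pm}$ pair repels to $\pm\Delta_{\perp}/2$, consistent with the observed gap $\Delta_g \approx \Delta_{\perp}/2$, while the middle-layer $A_2$ state remains pinned at zero energy.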
Further evidence for the level-repulsion mechanism can be inferred from the behavior of $\Delta_g$, the energy gap above the zero-energy state, as a function of the displacement field. The energy gap $ \Delta_g \approx \Delta_{\perp}/2 $ grows linearly as a function of the displacement field (see Fig.~\ref{fig:Zeneprop} b). The bandwidth of the zero-energy state, $\Delta_w \approx 0.1$--$0.4~{\rm meV}$, is much smaller than the bandgap and varies slightly with the electric field. We also found that the Berry curvature deviation of the zero-energy state decreases as a function of the displacement field strength $\Delta_{\perp}$. This tunability of the Berry curvature and isolation of the emergent zero-energy state provide ideal conditions for realizing various interesting many-body interacting ground states~\cite{Xie2021}. Since the emergent zero-energy state has significant mixing of higher LL wavefunctions, the ground state at fractional filling is likely to be a Wigner crystal or charge density wave state~\cite{PhysRevLett.76.499,PhysRevB.54.1853,PhysRevB.55.9326,PhysRevLett.82.394,PhysRevLett.109.126804}. Due to the complexity of the computational basis, these studies must be performed on lattice analogs of the HM pattern of tTLG.
The narrow bandwidth of the zero-energy band indicates the possibility of strongly correlated phases such as quantum Hall ferromagnetism~\cite{PhysRevLett.96.256602,PhysRevLett.109.046803,PhysRevLett.117.076807,Barlas_2012}, possible charge density waves~\cite{PhysRevLett.76.499,PhysRevB.54.1853,PhysRevB.55.9326,PhysRevLett.82.394,PhysRevLett.109.126804}, and fractional topological insulators~\cite{Xie2021,PhysRevB.85.241308,PhysRevX.1.021014,PhysRevB.101.235312,PhysRevResearch.2.023238,PhysRevResearch.2.023237}. Furthermore, the electric field can be used to access topological transitions. This makes it possible to probe the HM butterfly patterns in tTLG in transport or via scanning experiments. In addition, we discovered a symmetry-protected topological phase in the mirror-symmetric case, due to unequal counter-propagating edge modes, exhibiting simultaneous quantized Hall and longitudinal resistances. Interactions within each sector of the quantum parity Hall phase will most likely result in analogs of the exotic correlated quantum Hall phases detected in ABA-stacked trilayer graphene~\cite{PhysRevLett.121.066602,Stepanov10286}. \acknowledgments{M. I. and Y. B. acknowledge the support of UNR/VPRI startup grant PG19012. Y. B. acknowledges support from the Aspen Center for Physics, which is supported by NSF grant PHY-1607611, where part of this work was performed. }
\section{Introduction} \setcounter{equation}{0} \noindent The Ricci flow \begin{equation} \frac{\partial g_{ij}}{\partial t} = -2 R_{ij} \label{eq1.1} \end{equation} was first introduced in the mathematics literature by Richard Hamilton \cite{Hamilton1} in 1982. Almost immediately, it was applied to the classification problem for closed 3-manifolds, and much subsequent work in the subject in the intervening 25 years has been focused on this application, culminating in the recent celebrated results of Perelman \cite{Perelman}. By contrast, Ricci flow on noncompact manifolds has received somewhat less attention. Of course, structures on noncompact manifolds, such as Ricci solitons, are relevant to the compact case, and this has until now been an important motivation for work on the noncompact case. The case of asymptotically flat Ricci flow has remained virtually untouched (nontrivial solitons do not occur in this case \cite{OSW1}). But physics provides considerable motivation to study the asymptotically flat case. Our interest in it arises out of a conjectural scenario in string theory. Equation (\ref{eq1.1}) is the leading-order {\it renormalization group flow equation} for a nonlinear sigma model that describes quantum strings propagating in a background spacetime \cite{Friedan}.\footnote {We ignore the dilaton since it can be decoupled from the metric in renormalization group flow.} What is important to understand from this statement is that fixed points of this equation provide geometric backgrounds in which the low energy excitations of quantum strings can propagate (in the approximation that radii of curvature are large and excitation energies small relative to the so-called string scale). The variable $t$ in renormalization group flow is not time: it is (a constant times) the logarithm of the so-called renormalization scale. However, there are conjectured relationships between renormalization group flow and temporal evolution.
A specific case concerns tachyon condensation, the scenario wherein an unstable string system is balanced at the top of a hill of potential energy (for a review of tachyon condensation, see \cite{HMT}). The system falls off the hill, radiating away energy in gravitational waves. The system comes to rest in a valley representing a stable minimum of potential energy. In open string theory, a more elaborate version of this scenario involving the evaporation of a brane and the formation of closed strings is now well understood, even quantitatively. In closed string theory, much less is known but, conjecturally, the fixed points of the renormalization group flow equation (\ref{eq1.1}) are the possible endpoints of this evolution. Sometimes it is further conjectured that time evolution in closed string theory near the fixed points is determined by renormalization group flow, and then $t$ in (\ref{eq1.1}) does acquire an interpretation as a time. Comparing both sides of this picture, we see that the radiation of positive energy in the form of gravitational waves as the system comes to rest in the valley should produce a corresponding decrease in the mass of the manifold under the Ricci flow. This suggests that we should endeavor to formulate and test a conjecture that mass decreases under Ricci flow, at least if the initial mass is positive. The asymptotically flat case has a well-defined notion of mass, the ADM mass, so this seems an appropriate setting in which to formulate the conjecture. However, the metric entering the renormalization group flow or Ricci flow in this scenario is not the full spacetime metric, for which (\ref{eq1.1}) would not be even quasi-parabolic, but rather the induced Riemannian metric on a suitable spacelike submanifold \cite{GHMS}. Now ADM mass is conserved (between Cauchy surfaces, and in the closed string scenario of \cite{GHMS}), even in the presence of localized sources of radiation. This, we will see, is reflected in the Ricci flow. 
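For reference, the ADM mass of an asymptotically flat metric $g$ on ${\mathbb R}^n$ can be written as
\[
m_{\rm ADM}(g) = \frac{1}{2(n-1)\omega_{n-1}} \lim_{r \rightarrow \infty} \oint_{S_r} \left( \partial_j g_{ij} - \partial_i g_{jj} \right) n^i \, dS ,
\]
where $S_r$ is the coordinate sphere of radius $r$, $n^i$ its outward unit normal, $\omega_{n-1}$ the volume of the unit $(n-1)$-sphere, and repeated indices are summed. Normalization conventions vary between authors; the constant chosen here reduces to the familiar $1/16\pi$ of general relativity when $n=3$.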
The mass of $g$ will not change during evolution by (\ref{eq1.1}). But if energy loss through gravitational radiation occurs, then the {\it quasi-local mass} contained within a compact region {\it should} change along the flow to reflect this.\footnote {We prefer not to phrase the discussion in terms of the Bondi mass, which would require us to pass back to the Lorentzian setting, which is not our focus in this article. See \cite{GHMS} for a discussion in terms of Bondi mass.} In this paper, we focus first on the asymptotically flat case of Ricci flow in general. Section 2 describes asymptotically flat manifolds, with no assumption of rotational symmetry. Continuing with the general asymptotically flat case, in Subsection 3.1 we state and prove our short-term existence result Theorem 3.1, showing that a general asymptotically flat data set on ${\mathbb R}^n$ will always evolve under Ricci flow, remaining smooth and asymptotically flat on a maximal time interval $[0,T_M)$. We will show that the ADM mass remains constant during this interval, at least for non-negative scalar curvature (i.e., the positive mass case, the usual case of physical interest). Moreover, if $T_M < \infty$ then the norm of the Riemann curvature must become unbounded as $t \nearrow T_M$, just as in the compact case. We show this in Subsection 3.2. The short-term existence proof in Section 3 depends on details provided in the appendices. In Appendix A, we derive weighted versions of standard Sobolev estimates such as the Sobolev inequalities and Moser estimates. We then use these estimates in Appendix B to prove local existence and uniqueness in weighted Sobolev spaces for uniformly parabolic systems. We specialize to rotational symmetry in Section 4. In Section 4.1, we pass to a coordinate system well suited to our subsequent assumption that no minimal hyperspheres are present initially. We show in Section 4.3 that this coordinate system remains well-defined on the interval $[0,T_M)$.
This is essentially a consequence of the result, proved in Section 4.2, that no minimal hyperspheres develop during the flow. The absence of minimal spheres allows us to analyse the problem in terms of a single PDE, the master equation (\ref{eq4.18}). From this equation, we derive a number of maximum principles that yield uniform bounds on the curvature, which allow us to conclude that $T_M=\infty$. We obtain these principles in the first two subsections of Section 5. Even better, we obtain not just uniform bounds but decay estimates, from which we can prove convergence to flat Euclidean space. Now given our assumptions, this is the only Ricci-flat fixed point available. That is, the string theory discussion above would lead one to conjecture that: \bigskip \noindent {\it When no minimal hypersphere is present, rotationally symmetric, asymptotically flat Ricci flow is immortal and converges to flat space as $t\to\infty$;} \bigskip \noindent and this is what we show. Though we have motivated this conjecture from string theory for the case of positive initial mass, we will prove that it holds whether or not the initial mass is positive. This is our main theorem, proved in Subsection 5.3, which states: \begin{thm}\label{Thm1.1} Let $\{x^i\}_{i=1}^n$ be a fixed Cartesian coordinate system on ${\mathbb R}^n$, $n\ge 3$. Let ${\hat g}=\hat{g}_{ij}dx^i dx^j$ be an asymptotically flat, rotationally symmetric metric on ${\mathbb R}^n$ of class $H^k_{\delta}$ with $k>n/2 +4$ and $\delta<0$.
If $(\mathbb{R}^n,\hat{g})$ does not contain any minimal hyperspheres, then there exists a solution $g(t,x) \in C^\infty( (0,\infty)\times \mathbb{R}^n)$ to Ricci flow (\ref{eq1.1}) such that \begin{itemize} \item[(i)] $g(0,x)=\hat{g}(x)$, \item[(ii)] $g_{ij} - \delta_{ij} \in C^1([0,T],H^{k-2}_\delta)$ and $g_{ij}-\delta_{ij} \in C^1([T_1,T_2],H^\ell_\delta)$ for any $0<T_1<T_2<\infty$, $0<T<\infty$, $\ell \geq 0$, \item[(iii)] for each integer $\ell \geq 0$ there exists a constant $C_\ell > 0$ such that \eqn{curvest}{ \sup_{x\in\mathbb{R}^n}|\nabla^\ell{\rm Rm}(t,x)|_{g(t,x)}\leq \frac{C_\ell}{(1+t)t^{\ell/2}} \quad \forall\; t>0\, , } \item[(iv)] the flow converges to $n$-dimensional Euclidean space ${\mathbb E}^n$ in the pointed Cheeger-Gromov sense as $t\to\infty$, and \item[(v)] if furthermore $k>n/2 +6$, $\delta<\min\{4-n,1-n/2\}$, $\hat{R}\geq 0$, and $\hat{R}\in L^1$, then the ADM mass of $g(t)$ is well defined and $\text{\rm mass}(g(t)) = \text{\rm mass}(\hat{g})$ for all $t\geq 0$. \end{itemize} \end{thm} When a minimal hypersphere {\it is} present initially, if the neck is sufficiently pinched then we expect long-time existence to fail. To see why, consider rotationally symmetric metrics on $S^n$. If there is a sufficiently pinched minimal $(n-1)$-sphere, the curvature blows up in finite time. This has been shown both rigorously $(n\geq 3)$ \cite{AngKno} and numerically $(n=3)$ \cite{GI}. Our assumption of no minimal spheres in the initial data is intended to prevent this. The ability to make this assumption and to choose coordinates adapted to it is a distinct advantage of the noncompact case. However, we also expect (based, e.g., on \cite{GI}) that for initial data with minimal hyperspheres that have only a mild neck pinching, the flow will continue to exist globally in time as well.
Thus, when a minimal hypersphere is present, we believe there would be considerable interest in determining a precise criterion for global existence in terms of the degree of neck pinching because of the possibility, raised in \cite{GI}, that the critical case on the border between singularity formation and immortality may exhibit universal features such as those observed in critical collapse in general relativity \cite{Choptuik}. The constancy of the ADM mass in statement (v) is not at odds with the conclusion that the flow converges to a flat and therefore massless manifold. This constancy was also noted in \cite{DM}, but we draw different conclusions concerning the limit manifold, owing to our use of the pointed Cheeger-Gromov sense of convergence of Riemannian manifolds.\footnote {The rotationally symmetric, expanding soliton of \cite{GHMS} can be used to illustrate this phenomenon explicitly (albeit in 2 dimensions, whereas our results are for $n\ge 3$ dimensions). For this soliton, one can easily compute the Brown-York quasi-local mass on any ball whose proper radius is fixed in time and see that for each such ball the quasi-local mass tends to zero as $t\to \infty$, and the flow converges to Euclidean 2-space. But the mass at infinity of the soliton (the deficit angle of the asymptotic cone in 2 dimensions) is a constant of the motion which can be set by initial conditions to take any value.} In Subsection 4.4 we define three different kinds of metric balls in $({\mathbb R}^n,g(t))$, $n\ge 3$: balls of fixed radius, fixed volume, and fixed surface area of the bounding hypersphere. To clarify the behaviour of the mass in the limit $t\to\infty$, we express the Brown-York quasi-local mass of these balls in terms of sectional curvature and, by anticipating the decay rate for sectional curvature derived in Section 5, show that these quasi-local masses go to zero as $t\to\infty$, even though the ADM mass, as measured at infinity, is constant.
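For orientation, we recall here the standard $n=3$ form of the Brown-York quasi-local mass, in its usual normalization (the discussion below does not depend on this particular choice of definition). For a ball $B$ with smooth boundary $\partial B$,
\begin{equation}
m_{\rm BY}(B) = \frac{1}{8\pi}\int_{\partial B}\bigl(k_0 - k\bigr)\, d\sigma \ ,
\end{equation}
where $k$ is the mean curvature of $\partial B$ in $(\mathbb{R}^3,g(t))$, $k_0$ is the mean curvature of an isometric embedding of $\partial B$ into Euclidean space, and $d\sigma$ is the induced area element. Heuristically, as the metric flattens on a fixed ball we have $k\to k_0$, so decay of the curvature forces the quasi-local mass of that ball to zero.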
The picture is not strongly dependent on the definition of quasi-local mass, of which the Brown-York definition is but one among many. In rotational symmetry in any dimension, the metric has only one ``degree of freedom''. The study of the evolution of quasi-local mass then reduces to the study of this single degree of freedom, no matter which definition of quasi-local mass one prefers.\footnote {The assumption of spherical symmetry in general relativity precludes gravitational radiation, according to the Birkhoff theorem. But on the string side of our scenario, the picture is one of closed strings existing as perturbations that break the spherical symmetry of the background metric (as well, we should include a dilaton background field that modifies general relativity). Viewed in the string picture, these perturbations create the radiation that is detected as a change in the quasi-local mass of the spherically symmetric Ricci flow.} Although local existence, uniqueness, and a continuation principle for Ricci flow on non-compact manifolds with bounded curvature are known \cite{Shi,CZ}, it does not follow immediately from these results that Ricci flow preserves the class of asymptotically flat metrics. One of the main results of this paper is to show that Ricci flow does in fact preserve this class. Independent of our work, Dai and Ma have recently announced that they have also been able to establish this result \cite{DM}, as has List in his recent thesis \cite{List}. Our approach to the problems of local existence, uniqueness, continuation, and asymptotic preservation is to prove a local existence and uniqueness theorem for quasi-linear parabolic equations with initial data lying in a weighted Sobolev space, and then use it to show that Ricci flow preserves asymptotic flatness.
An important advantage of this approach, rather than appealing to the results of \cite{Shi,CZ,List,DM}, is that we obtain a local existence and uniqueness theorem on asymptotically flat manifolds that is valid for other types of geometric flows to which the results of \cite{Shi,CZ,List,DM} do not immediately apply, and which are of interest in their own right. For example, our local existence results contained in appendix B combined with the DeTurck trick will yield local existence, uniqueness, and a continuation principle for the following flows on asymptotically flat manifolds: \eqn{SEF}{ \left. \begin{array}{l}\partial_t g_{ij} = - 2R_{ij} + 4 \nabla_i u \nabla_j u \\ \partial_t u = \Delta u \end{array} \right\} \quad \text{(static Einstein flow)}, \\ } \eqn{RGF}{ \left. \begin{array}{l} \partial_tg_{ij} = - \alpha^{'} \bigl(R_{ij} + \nabla_i \nabla_j\Psi +\ensuremath{\textstyle\frac{1}{4}} H_{ipq}H_{j}{}^{pq}\bigr) \\ \partial_t \Psi = \frac{\alpha^{'}}{2} (\Delta \Psi - \vert \nabla \Psi \vert^2 +\vert H\vert^2) \\ \partial_t B_{ij} = \frac{\alpha^{'}}{2}( \nabla^{k}H_{kij} - H_{kij}\nabla^{k}\Psi ) \quad ( H:= \text{d} B) \end{array} \right\} \quad \text{($1^{\text{st}}$ order sigma model RG flow),} } and \eqn{RGF2}{ \partial_tg_{ij} = - \alpha^{'} \bigl(R_{ij} + \frac{\alpha^{'}}{2}R_{iklm}R_j{}^{klm}\bigr) \quad \text{($2^{\text{nd}}$ order sigma model RG flow with $B=\Phi=0$).} } We note that the static Einstein flow has been previously considered in the thesis \cite{List}. There a satisfactory local existence theory on noncompact manifolds is developed, and a continuation principle for compact manifolds is also proved. The problem of global existence for rotationally symmetric metrics on $\mathbb{R}^3$ has previously been investigated in \cite{Ivey}. There the assumptions on the initial metric are different from ours. Namely, the initial metric in \cite{Ivey} has positive sectional curvature and the manifold opens up at least as fast as a paraboloid.
Under these assumptions, it is shown that Ricci flow exists for all future times and converges to either a flat metric or a rotationally symmetric Ricci soliton. Finally, throughout we fix the dimension of the manifold to be $n\ge 3$. As well, we usually work with the Hamilton-DeTurck form of the Ricci flow \begin{equation} \frac{\partial g_{ij}}{\partial t} = -2R_{ij}+\nabla_i\xi_j +\nabla_j\xi_i\ , \label{eq1.2} \end{equation} which is obtained from the form (\ref{eq1.1}) by allowing the coordinate basis in which $g_{ij}$ is written to evolve by a $t$-dependent diffeomorphism generated by the vector field $\xi$. \bigskip \noindent{\bf Acknowledgments.} We thank Suneeta Vardarajan for discussions concerning the string theory motivation for this work. EW also thanks Barton Zwiebach for his explanation of the rolling tachyon. This work was begun during a visit by TO to the Dept of Mathematical and Statistical Sciences of the University of Alberta, which he thanks for hospitality. The work was partially supported by a Discovery Grant from the Natural Sciences and Engineering Research Council of Canada. \section{Asymptotically flat manifolds} \setcounter{equation}{0} \noindent The definition of asymptotically flat manifolds that we employ requires the use of weighted Sobolev spaces, which we will now define. Let $V$ be a finite dimensional vector space with inner product $\ipe{\cdot}{\cdot}$ and corresponding norm $\enorm{\cdot}$. For $u\in L^p_{\text{loc}}(\mathbb{R}^n,V)$, $1\leq p \leq \infty$, and $\delta \in \mathbb{R}$, the \emph{weighted $L^{p}$ norm} of $u$ is defined by \leqn{wLpdef}{ \norm{u}_{L^{p}_{\delta}} := \left\{\begin{array}{ll} \norm{\sigma^{-\delta-n/p}\,u}_{L^p} & \text{if $ 1\leq p < \infty$}\\ \\ \norm{\sigma^{-\delta}\,u}_{L^{\infty}} & \text{if $p=\infty$} \end{array} \right. } with \begin{equation} \sigma(x) := \sqrt{1+|x|^2}\ . 
\end{equation} The \emph{weighted Sobolev norms} are then given by \leqn{wSobdef}{ \norm{u}_{W^{k,p}_{\delta}} := \left\{ \begin{array}{ll} \displaystyle{\Bigl(\sum_{|I|\leq k} \norm{D^{I}u}^{p}_{L^{p}_{\delta-|I|} } \Bigr)^{1/p}} & \text{if $1\leq p < \infty$} \\ \\ \displaystyle{\sum_{|I|\leq k} \norm{D^{I}u}_{L^{\infty}_{\delta-|I|} }} & \text{if $p=\infty$} \end{array}\right. } where $k\in \mathbb{N}_0$, $I = (I_{1},\ldots, I_{n}) \in \mathbb{N}_{0}^{n}$ is a multi-index and $D^{I} = \partial_{1}^{I_{1}}\ldots\partial_{n}^{I_{n}}$. Here $\partial_i = \frac{\partial\;}{\partial x^i}$ and $(x^1,\ldots,x^n)$ are the standard Cartesian coordinates on $\mathbb{R}^n$. The weighted Sobolev spaces are then defined as \eqn{wsobdef}{ W^{k,p}_{\delta} = \{\, u \in W^{k,p}_{\text{loc}}(\mathbb{R}^n,V) \, | \, \norm{u}_{W^{k,p}_{\delta}} < \infty \, \}\, . } Note that we have the inclusion \leqn{inclusion}{ W^{k,p}_{\delta_1} \subset W^{\ell,p}_{\delta_2} \quad \text{for $k\geq \ell$, $\delta_{1} \leq \delta_2$}} and that differentiation $\partial_i \: :\:W^{k,p}_{\delta}\rightarrow W^{k-1,p}_{\delta-1}$ is continuous. In the case $p=2$, we will use the alternative notation $H^{k}_{\delta} = W^{k,2}_{\delta}$. The spaces $L^{2}_{\delta}$ and $H^{k}_{\delta}$ are Hilbert spaces with inner products \leqn{L2ip}{ \ip{u}{v}_{L^{2}_{\delta}} := \int_{\mathbb{R}^{n}} \ipe{u}{v} \sigma^{-2\delta-n}d^{n}x} and \leqn{Hkip}{ \ip{u}{v}_{H^{k}_{\delta}} := \sum_{|I|\leq k} \ip{D^{I}u}{D^{I}v}_{L^{2}_{\delta-|I|} }\, , } respectively. As with the Sobolev spaces, we can define weighted versions of the bounded function spaces $C^{k}_{b} := C^k(\mathbb{R}^n,V)\cap W^{k,\infty}$. For a map $u\in C^{0}(\mathbb{R}^{n},V)$ and $\delta\in \mathbb{R}$, let \eqn{rsobdef3.1}{ \norm{u}_{C^{0}_{\delta}} := \sup_{x\in \mathbb{R}^{n}}|\sigma(x)^{-\delta}u(x)| \, .
} Using this norm, we define the $\norm{\cdot}_{C^{k}_{\delta}}$ norm in the usual way: \eqn{rsobdef3.3}{ \norm{u}_{C^{k}_{\delta}} := \sum_{|I|\leq k} \norm{\partial^{I}u}_{C^{0}_{\delta-|I|}} \, . } So then \gath{rsobdef3.5}{ C^{k}_{\delta} := \bigl\{\, u \in C^{k}(\mathbb{R}^n,V) \, | \, \norm{u}_{C^{k}_{\delta}} < \infty \: \bigr\} \, . } We are now ready to define asymptotically flat manifolds. \begin{Def} \label{asymdef} \mnote{[asymdef]} {\em Let $M$ be a smooth, connected, $n$-dimensional manifold, $n\ge 3$, with a Riemannian metric $g$ and let $E_R$ be the exterior region $\{\,\, x\in \mathbb{R}^n\,\,|\,\, |x|>R\}$. Then for $k>n/2$ and $\delta <0$, $(M,g)$ is {\em asymptotically flat of class} $H^k_\delta$ if \begin{itemize} \item[(i)] $g\in H^{k}_{\text{loc}}(M)$, \item[(ii)] there exists a finite collection $\{U_{\alpha}\}_{\alpha=1}^{m}$ of open subsets of $M$ and diffeomorphisms $\Phi_\alpha : E_R \rightarrow U_\alpha$ such that $M\setminus \cup_{\alpha}U_{\alpha}$ is compact, and \item[(iii)] for each $\alpha\in \{1,\ldots,m\}$, there exists an $R>0$ such that $(\Phi_{\alpha}^{*}g)_{ij} - \delta_{ij} \in H^k_\delta(E_R)$, where $(x^1,\ldots,x^n)$ are standard Cartesian coordinates on $\mathbb{R}^n$ and $\Phi_\alpha^*g = (\Phi_{\alpha}^{*}g)_{ij} dx^i dx^j$. \end{itemize} } \end{Def} The integer $m$ counts the number of asymptotically flat ``ends'' of the manifold $M$. As discussed in the introduction, we are interested in manifolds where $M\cong \mathbb{R}^n$ and hence $m=1$. In this case, we can assume that $g=g_{ij}dx^i dx^j$ is a Riemannian metric on $\mathbb{R}^n$ such that \leqn{falloff}{ g_{ij}-\delta_{ij}\, ,\; g^{ij}-\delta^{ij} \in H^{k}_\delta } where $g^{ij}$ are the components of the inverse metric, satisfying $g^{ij}g_{jk} = \delta^{i}_k$. We note that the results of this section and Theorems \ref{LocA}, \ref{LocB}, and \ref{cont} of the next section are easily extended to the general case. We leave the details to the interested reader.
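To illustrate Definition \ref{asymdef}, consider a standard example (it plays no special role in what follows): for $n=3$, the time-symmetric Schwarzschild slice in isotropic coordinates,
\begin{equation}
g_{ij} = \Bigl(1+\frac{m}{2|x|}\Bigr)^{4}\delta_{ij}
\end{equation}
on $E_R$, satisfies $g_{ij}-\delta_{ij} = \frac{2m}{|x|}\delta_{ij} + O(|x|^{-2})$, with each further derivative decaying one additional power of $|x|$. A direct computation with the norm \eqref{wLpdef} shows that $\sigma^{\delta'} \in L^{2}_{\delta}$ precisely when $\delta' < \delta$, so this metric is asymptotically flat of class $H^k_\delta$ for every $k$ and every $\delta\in (-1,0)$, but for no $\delta \leq -1$.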
In the following section, we will need to use diffeomorphisms generated by the flows of time-dependent vector fields and also their actions on the metric and other geometrical quantities. Therefore, we need to understand the effect of composing a map in $H^{k}_\delta(\mathbb{R}^n,V)$ with a diffeomorphism on $\mathbb{R}^n$. Following Cantor \cite{Cantor}, we define \eqn{diffdef}{ \mathcal{D}^{k}_{\delta} := \{\; \psi : \mathbb{R}^n \rightarrow \mathbb{R}^n \, |\, \text{$\psi-\mathord{{\mathrm 1}\kern-0.27em{\mathrm I}}\kern0.35em \in H^{k}_\delta$, $\psi$ is bijective, and $\psi^{-1}-\mathord{{\mathrm 1}\kern-0.27em{\mathrm I}}\kern0.35em \in H^{k}_\delta$} \} } which is the group of diffeomorphisms that are asymptotic to the identity at a rate fast enough so that the difference lies in $H^k_\delta$. We will need to understand not only when composition preserves the $H^k_\delta$ spaces but also when composition $(\psi,u) \mapsto u\circ\psi$ is continuous as a map from $\mathcal{D}^k_\delta \times H^k_\delta$ to $H^k_\delta$. In \cite{Cantor}, Cantor studied this problem under the assumption that $\delta \leq -n/2$. He assumed $\delta \leq -n/2$ because that was what he needed to prove the weighted multiplication lemma (see Lemma \ref{SobB}). However, it is clear from his arguments that the proofs of his results are valid whenever the multiplication lemma holds and $H^{k}_\delta \subset C^{1}_b$. Therefore, by Lemmata \ref{SobA} and \ref{SobB}, his results are valid for $\delta \leq 0$. \begin{thm} {\emph{[Corollary 1.6,\cite{Cantor}]}} \label{Can3} \mnote{[Can3]} For $k > n/2+1$ and $\delta \leq 0$, the map induced by composition \eqn{Can3.1}{ H^k_{\delta} \times \mathcal{D}^k_{\delta} \longrightarrow H^k_{\delta} \: : \: (u,\psi) \longmapsto u \circ \psi } is continuous. 
\end{thm} Cantor also proved the following three useful results: \begin{lem} {\emph{[Lemma 1.7.2,\cite{Cantor}]}} \label{Can3B} \mnote{[Can3B]} If $k > n/2+1$, $\delta \leq 0$, and $f$ is a $C^1_b$ diffeomorphism such that $f-\mathord{{\mathrm 1}\kern-0.27em{\mathrm I}}\kern0.35em \in H^{k}_\delta$ then $f \in \mathcal{D}^k_\delta$. \end{lem} \begin{thm} {\emph{[Theorem 1.7,\cite{Cantor}]}} \label{Can3A} \mnote{[Can3A]} For $k > n/2+1$ and $\delta \leq 0$, $\mathcal{D}^k_\delta$ is an open subset of \eqn{Can3A.1}{ \{ \, f : \mathbb{R}^n \rightarrow \mathbb{R}^n\,|\, f-\mathord{{\mathrm 1}\kern-0.27em{\mathrm I}}\kern0.35em \in H^k_\delta \,\}\,. } \end{thm} \begin{thm} {\emph{[Theorem 1.9,\cite{Cantor}]}} \label{Can2} \mnote{[Can2]} For $k > n/2+1$ and $\delta \leq 0$, $\mathcal{D}^k_\delta$ is a topological group under composition and a smooth Hilbert manifold. Also, right composition is smooth. \end{thm} The following proposition is a straightforward extension of Cantor's work. \begin{prop} \label{Can4} \mnote{[Can4]} If $k> n/2+1$, $\delta \leq 0$, and $u \in H^{k+\ell}_\delta$ $(\ell\geq 0)$ then the map \eqn{Can4A}{ \mathcal{D}^{k}_\delta \longrightarrow H^k_\delta \: : \: \psi \longmapsto u\circ \psi } is of class $C^\ell$. \end{prop} Using these results, it is not difficult to see that the proof of Theorem 3.4 of \cite{EM} generalizes to the $H^k_\delta$ spaces with the result being: \begin{thm} \label{EM1} \mnote{[EM1]} Suppose $\delta \leq 0$, $k > n/2+2$ and $X : (-\kappa,\kappa)\times \mathbb{R}^n \rightarrow \mathbb{R}^n$ $(\kappa >0)$ defines a continuous map \eqn{EM1A}{X : (-\kappa,\kappa)\longrightarrow H^{k+\ell}_\delta(\mathbb{R}^n,\mathbb{R}^n)\: : \: t \longmapsto X(t,\cdot)\quad (\ell \geq 0)\, . } Let $\psi_t$ denote the flow of the time dependent vector field $X(t,x)$ on $\mathbb{R}^n$ that satisfies $\psi_0 = \mathord{{\mathrm 1}\kern-0.27em{\mathrm I}}\kern0.35em$. 
Then there exists a $\kappa_* \in (0,\kappa)$ such that $\psi_t$ $(\,t\in (-\kappa_*,\kappa_*) \,)$ defines a $C^{1+\ell}$ curve in $\mathcal{D}^k_\delta$. \end{thm} \section{Local Existence} \setcounter{equation}{0} \subsection{Existence of General Asymptotically Flat Ricci Flows} \noindent We now prove a local existence result for Ricci flow on asymptotically flat manifolds. \begin{thm} \label{LocA} \mnote{[LocA]} Let $\hat{g}$ be an asymptotically flat metric of class $H^k_\delta$ with $\delta < 0$ and $k > n/2+3$. Then there exists a $T>0$ and a family $\{g(t),t\in [0,T)\}$ of asymptotically flat metrics of class $H^{k-2}_\delta$ such that $g(0) = \hat{g}$, \eqn{LocA1}{ g_{ij}-\delta_{ij}\, ,\; g^{ij}-\delta^{ij} \in C^{1}([0,T),H^{k-2}_\delta)\, , } and $\partial_t g_{ij} = -2R_{ij}$ for all $t\in [0,T)$. Moreover, $g(t,x)\in C^\infty((0,T)\times M)$ and $g_{ij}-\delta_{ij}$, $g^{ij}-\delta^{ij}$ $\in$ $C^{1}([T_1,T_2],H^{\ell}_\delta)$ for any $\ell \geq 0$ and $0<T_1 < T_2 <T$. \end{thm} \begin{proof} Let $\tilde{\Gamma}^{k}_{ij}$ denote the Christoffel symbols for the Euclidean Levi-Civita connection on $M\cong \mathbb{R}^n$. Following the now standard method, see \cite{RF} Sec.~3.3, we first solve the Hamilton-DeTurck flow \leqn{LocA4}{ \partial_t g_{ij} = -2R_{ij} + \nabla_i W_j + \nabla_j W_i \;\; , \quad g(0) = \hat{g} \, , } where \leqn{LocA5}{ W_j = g_{jk} W^k := g_{jk} g^{pq}(\Gamma^{k}_{pq} - \tilde{\Gamma}^{k}_{pq}) \, , } and $\Gamma^{k}_{ij}$ are the Christoffel symbols for the Levi-Civita connection derived from $g$. Since $M\cong \mathbb{R}^n$, we can use global Cartesian coordinates $(x^{1},\ldots,x^n)$ where $\tilde{\Gamma}^{k}_{ij} = 0$.
With respect to the Cartesian coordinates, the initial value problem \eqref{LocA4} becomes, see Lemma 2.1 in \cite{Shi}, \lalign{LocA5}{ \partial_t h_{ij} &= g^{pq}\partial_p\partial_q h_{ij} + \ensuremath{\textstyle\frac{1}{2}} g^{pq}g^{rs}\bigl(\partial_i h_{pr}\partial_j h_{qs} + 2\partial_{p}h_{jr}\partial_{q}h_{is} - 2\partial_{p}h_{jr}\partial_{s}h_{iq} \notag \\ &\qquad \qquad - 2\partial_{j}h_{pr}\partial_{s}h_{iq} - 2\partial_{i}h_{pr}\partial_{s}h_{jq} \bigr) \label{LocA5.1}\, , \\ h_{ij}(0) &= \hat{g}_{ij} -\delta_{ij} \in H^k_\delta \label{LocA5.2} \, , } where $g_{ij} = \delta_{ij} + h_{ij}$. But $k> n/2+3$ and $\delta <0$, so we can apply Theorem \ref{locB} to conclude that the quasi-linear parabolic initial value problem \eqref{LocA5.1}--\eqref{LocA5.2} has a local solution $h_{ij}(t,x)$ that satisfies \leqn{LocA6}{ h_{ij}\, , \;g^{ij}-\delta^{ij} \in C^0([0,T),H^k_\delta) \cap C^{1}([0,T),H^{k-2}_\delta) } for some $T > 0$, \leqn{LocA7}{ h_{ij}(t,x)\, ,\; g^{ij}(t,x)\; \in C^{\infty}((0,T)\times \mathbb{R}^n), } and $h_{ij} \in C^{1}([T_1,T_2],H^\ell_\delta)$ for any $\ell \geq 0$ and $0<T_1<T_2 < T$. The time-dependent vector field $W^k$ is given by \leqn{LocA8}{ W^{k} = g^{ij}\Gamma^{k}_{ij} = \ensuremath{\textstyle\frac{1}{2}} g^{ij}g^{kp}\bigl(\partial_i h_{jp} +\partial_j h_{ip} -\partial_p h_{ij}\bigr) \, , } and $W^{k}$ defines a continuous map from $[0,T)$ to $H^{k-1}_\delta(\mathbb{R}^n,\mathbb{R}^n)$ by \eqref{LocA6} and Lemma \ref{SobB}. Note also that $W^{k}\in C^{\infty}((0,T)\times \mathbb{R}^n)$. Letting $\psi_t(x)=(\psi^{1}_t(x),\ldots, \psi^{n}_t(x))$ denote the flow of $W^{k}$ where $\psi_0 = \mathord{{\mathrm 1}\kern-0.27em{\mathrm I}}\kern0.35em$, Theorem \ref{EM1} implies that, shrinking $T$ if necessary, the map $[0,T) \ni t$ $\mapsto$ $\psi_t$ $\in \mathcal{D}^{k-1}_\delta$ is $C^1$. In particular, this implies that $\psi_t^i(x) = x^i + \phi^i_t(x)$ where the map $[0,T) \ni t$ $\mapsto$ $\phi_t$ $\in H^{k-1}_\delta$ is $C^1$.
But $W^{k}\in C^{\infty}((0,T)\times \mathbb{R}^n)$, so we also get that $\psi(t,x) \in C^{\infty}((0,T)\times \mathbb{R}^n)$. Let $\bar{h}$ denote the pullback of $h$ by the diffeomorphism $\psi_t$ so that \leqn{LocA10}{ \bar{h}_{ij} = \bigl(\psi^{*}_t h \bigr)_{ij} = \bigl(h_{pq}\circ\psi_t\bigr) \partial_i\psi^p_t\partial_j\psi^q_t \, . } Then $\bar{h}_{ij} \in C^{0}([0,T),H^{k-2}_\delta)$ by Theorem \ref{Can3} and Lemma \ref{SobB}. Also, $\bar{h}_{ij}(t,x)\in C^{\infty}((0,T)\times \mathbb{R}^n)$ by \eqref{LocA7}. Differentiating \eqref{LocA10} with respect to $t$ yields \lalign{LocA13}{ \partial_t \bar{h}_{ij} =& \bigl(\partial_t h_{pq}\circ\psi_t\bigr)\partial_i\psi^p_t\partial_j\psi^q_t +\bigl(\partial_r h_{pq}\circ\psi_t\bigr)\partial_t\psi_t^r \partial_i\psi^p_t\partial_j\psi^q_t\notag\\ &+ \bigl(h_{pq}\circ\psi_t\bigr)\bigl( \partial_i\partial_t\psi^p_t\partial_j\psi^q_t + \partial_i\psi^p_t\partial_j\partial_t\psi^q_t\bigr) \, . } Using the same arguments as above, we also find that $\partial_t\bar{h}_{ij} \in C^{0}([0,T),H^{k-2}_\delta)$. Finally, let $\bar{g} = \psi_t^*g$. Then $\bar{g}$ is a solution to the Ricci flow equation $\partial_t \bar{g}_{ij} = -2\bar{R}_{ij}$ (see Ch.~3.3 of \cite{RF}) with initial data $\bar{g}(0) = \hat{g}$. Furthermore, \leqn{LocA14}{ \bar{g}_{ij} -\delta_{ij} = \bar{h}_{ij} + \delta_{pq}\partial_i\psi^p_t\partial_j\psi^q_t -\delta_{ij} = \bar{h}_{ij} + \delta_{pq}\partial_i\phi^p_t\partial_j \phi^{q}_t } and hence $\bar{g}_{ij} -\delta_{ij} \in C^{1}([0,T),H^{k-2}_\delta)$ since we showed above that $\partial_j\phi^{i}_t$, $\bar{h}_{ij}$ $\in$ $C^{1}([0,T),H^{k-2}_\delta)$. Similar arguments show that $\bar{g}_{ij} -\delta_{ij}$, $\bar{g}^{ij}-\delta^{ij}$ $\in$ $C^{1}([T_1,T_2],H^{\ell}_\delta)$ follows from $h_{ij} \in C^{1}([T_1,T_2],H^\ell_\delta)$. Also, $\bar{g}_{ij}\in C^{\infty}((0,T)\times \mathbb{R}^n)$ follows easily from $h_{ij}(t,x)\, , \psi(t,x) \in C^{\infty}((0,T)\times \mathbb{R}^n)$.
\end{proof} \begin{cor} \label{LocBa} Let $k>n/2+4$ and $g(t)$ be the Ricci flow solution from Theorem \ref{LocA}. Then $R_{ij} \in C^1([0,T),H^{k-4}_{\delta-2})$ and $g_{ij}(t) = \hat{g}_{ij} + f_{ij}(t)$ where $f_{ij} \in C^{1}([0,T),H^{k-4}_{\delta-2})$. Moreover, if $k>n/2+6$, $\delta < 4-n$ and $\hat{R} \in L^1$ then $R(t) \in C^{1}([0,T),L^1)$. \end{cor} \begin{proof} Let $h_{ij}= g_{ij}-\delta_{ij}$. Then the Ricci curvature of $g$ has the form $R_{ij} = B_{ij}(g^{pq},\partial_\ell\partial_m h_{rs})$ $+C_{ij}(g^{pq},\partial_q h_{rs})$ where $B_{ij}$ and $C_{ij}$ are analytic functions that are linear and quadratic, respectively, in their second variables. It follows from the weighted multiplication Lemma \ref{SobB} that the map $H^{\ell}_\delta \ni (g^{ij}-\delta^{ij},h_{ij}) \mapsto R_{ij} \in H^{\ell}_{\delta-2}$ is well defined and analytic for $\delta \leq 0$ and $\ell > n/2$. This proves the first statement. Integrating $\partial_t g_{ij} = -2 R_{ij}$ with respect to $t$ yields $g_{ij}(t) - \hat{g}_{ij}$ $=$ $-2\int_{0}^{t}R_{ij}(s)ds$. But $R_{ij} \in C^1([0,T),H^{k-4}_{\delta-2})$, and thus the map $[0,T) \ni t \mapsto -2\int_{0}^{t}R_{ij}(s)ds \in H^{k-4}_{\delta-2}$ is well defined and continuously differentiable. This completes the proof of the second statement. Under Ricci flow, the Ricci scalar satisfies the evolution equation \leqn{LocBa5}{ \partial_t R = \Delta R + 2|\text{Ric}|^2 \, . } Integrating this yields $R(t)$ $=$ $\hat{R} +$ $\int_{0}^{t} \bigl(\Delta R(s) + 2|\text{Ric}|^2(s)\bigr)ds$ . From the first statement and the weighted multiplication lemma \ref{SobB}, we see that $\Delta R + 2|\text{Ric}|^2 \in C^{1}([0,T),H^{k-6}_{\delta-4})$. By the weighted H\"{o}lder and Sobolev inequalities (Lemmata \ref{Holder} and \ref{SobA}), we have $H^{k-6}_{\delta-4}\subset L^\infty_{\delta-4} \subset L^1$. Thus $\int_{0}^{t} \bigl(\Delta R(s) + 2|\text{Ric}|^2(s)\bigr)ds \in L^1$ for all $t\in [0,T)$.
\end{proof} \begin{rem} \label{mass} {\rm In \cite{Bart86} Proposition 4.1, it is established that the mass of an asymptotically flat metric $g$ of class $H^k_\delta \subset W^{2,2n/(n-2)}_\delta$ $(k\geq 3)$ is well defined and given by the formula \leqn{mass1}{ \text{mass}(g) := \int_{S_\infty}\bigl(\partial_j g_{ij} -\partial_i g_{jj} \bigr) \, dS^i } provided $\delta \leq (2-n)/2$ and the Ricci scalar is both non-negative and integrable. So, by the above corollary and the maximum principle, see equation (\ref{LocBa5}), an initial asymptotically flat metric $\hat{g}$ of class $H^k_\delta$, where $k>n/2+6$ and $\delta < \min\{4-n,(2-n)/2\}$, with non-negative and integrable Ricci scalar will yield a flow $g(t)$ for which the Ricci scalar continues to be non-negative and integrable for every $t>0$. Thus the mass of $g(t)$ remains well defined. Furthermore, since $g_{ij}-\hat{g}_{ij}\in H^{k-4}_{\delta-2} \subset W^{1,\infty}_{\delta-2} \subset W^{1,\infty}_{2-n}$, it follows easily from the definition of the mass that \leqn{mass2}{ \text{mass}(g(t)) = \text{mass}(\hat{g})\quad \text{for all $t\geq 0$.} } } \end{rem} \begin{thm} \label{LocB} \mnote{[LocB]} Suppose $k>n/2+4$, $\delta < 0$, and $\tilde{g}(t)$ and $\bar{g}(t)$ are two solutions to the Ricci flow satisfying $\bar{g}(0) = \tilde{g}(0)$ and \eqn{LocB1}{ \tilde{g}_{ij}-\delta_{ij}\, , \; \tilde{g}^{ij}-\delta^{ij}\, , \; \bar{g}_{ij}-\delta_{ij} \, , \bar{g}^{ij} - \delta^{ij} \in C^{1}([0,T),H^k_\delta) . } Then $\bar{g}(t) = \tilde{g}(t)$ for all $t\in [0,T)$. \end{thm} \begin{proof} Fix $k > n/2+4 $ and $\delta < 0$. To prove uniqueness, we use Hamilton's method involving harmonic maps \cite{Ham95} as described in Sec.~3.4 of \cite{RF}. Let $e=\delta_{ij}dx^i dx^j$ denote the Euclidean metric. As before, $(x^1,\ldots,x^n)$ are Cartesian coordinates. 
Given a map $\psi_{0} : M\cong \mathbb{R}^n \rightarrow M : x=(x^1,\ldots,x^n) \mapsto (\psi_0^1(x),\ldots,\psi_0^n(x))$ and a metric $g$, the harmonic map flow with respect to the pair $(g,e)$ of metrics on $M$ is \leqn{LocB3}{ \partial_t \psi = \Delta_{g,e}\psi \quad : \quad \psi(0) = \psi_0 \; } where $\psi_t(x)=(\psi^1_t(x),\ldots,\psi_t^n(x))$ is a time dependent map from $\mathbb{R}^n$ to $\mathbb{R}^n$ and $\Delta_{g,e}\psi$ is defined by \leqn{LocB4}{ (\Delta_{g,e}\psi)^j = g^{pq}\bigl( \partial_{p}\partial_{q} \psi^j - \Gamma^r_{pq}\partial_r \psi^j\bigr) \, . } As above, $\Gamma^r_{pq}$ are the Christoffel symbols of the Levi-Civita connection derived from $g$. If we let \leqn{LocB5}{ \psi^j_t(x) = x^j +\phi^j_t(x) \quad \text{and} \quad \psi^j_0(x) = x^j + \phi^j_0(x)\, , } then we can write \eqref{LocB3} as \leqn{LocB6}{ \partial_t \phi^j = g^{pq}\bigl( \partial_{p}\partial_{q} \phi^j - \Gamma^r_{pq}\partial_r \phi^j -\Gamma^j_{pq}\bigr) \quad , \quad \phi^j(0) = \phi^j_0 \, . } Suppose $g$ is a time dependent metric that satisfies $g_{ij}-\delta_{ij} \in C^{0}([0,T),H^{k}_\delta)$. Then the continuity of the differentiation operator and Lemma \ref{SobB} imply that $\Gamma^r_{pq} \in C^0([0,T),H^{k-1}_\delta)$. So if $\phi_0^j \in H^{k-1}_{\delta}$, then there exists a unique solution $\phi^{j}$ $\in C^{0}([0,T),H^{k-1}_\delta)$$\cap$ $C^1([0,T),H^{k-3}_\delta)$ to \eqref{LocB6} by Theorem \ref{locB}. If $\psi_0 \in \mathcal{D}^{k-1}_\delta$, then Theorem \ref{Can3A} implies, shrinking $T$ if necessary, that $\psi$ $\in C^{0}([0,T),\mathcal{D}^{k-1}_\delta)$ $\cap$ $ C^1([0,T),\mathcal{D}^{k-3}_\delta)$. Suppose $\tilde{g}(t)$ and $\bar{g}(t)$ are two solutions to the Ricci flow such that \eqn{LocB9}{ \tilde{g}_{ij}-\delta_{ij}\, , \; \tilde{g}^{ij}-\delta^{ij}\, , \; \bar{g}_{ij}-\delta_{ij} \, , \; \bar{g}^{ij} - \delta^{ij} \in C^{1}([0,T),H^k_\delta) \, } and $\bar{g}(0) = \tilde{g}(0)$.
Let $\tilde{\psi}$, $\bar{\psi}$ $\in C^{0}([0,T),\mathcal{D}^{k-1}_\delta)$ $\cap C^1([0,T),\mathcal{D}^{k-3}_\delta)$ be solutions to the harmonic map flow with respect to the metric pairs $(\tilde{g},e)$ and $(\bar{g},e)$ with initial conditions $\tilde{\psi}(0)=\bar{\psi}(0) = \mathord{{\mathrm 1}\kern-0.27em{\mathrm I}}\kern0.35em$. Letting $\tilde{h}_{ij}$ $:= (\tilde{\psi}_*\tilde{g})_{ij}-\delta_{ij}$ and $\bar{h}_{ij}$ $:= (\bar{\psi}_*\bar{g})_{ij}-\delta_{ij}$, the same arguments as in the proof of Theorem \ref{LocA} show that \eqn{LocB13}{ \tilde{h}_{ij}\, , \; (\tilde{\psi}_*\tilde{g})^{ij}-\delta^{ij} \, , \; \bar{h}_{ij} \, , \; (\bar{\psi}_*\bar{g})^{ij}-\delta^{ij} \; \in C^{0}([0,T),H^{k-2}_\delta)\cap C^{1}([0,T),H^{k-4}_\delta) \, . } But $\tilde{\psi}_*\tilde{g}$ and $\bar{\psi}_*\bar{g}$ both satisfy the Hamilton-DeTurck flow \eqref{LocA4} (see Sec.~3.4.4 of \cite{RF}), or equivalently $\bar{h}_{ij}$ and $\tilde{h}_{ij}$ both satisfy the parabolic equation \eqref{LocA5.1} with initial condition $\bar{h}_{ij}(0) = \tilde{h}_{ij}(0)$. By uniqueness of solutions to this equation (see Theorem \ref{locB}) we must have $\bar{h}_{ij}(t) = \tilde{h}_{ij}(t)$, or equivalently $(\tilde{\psi}_*\tilde{g})(t)= (\bar{\psi}_*\bar{g})(t)$, for all $t\in [0,T)$. So by Lemma 3.27 in \cite{RF}, the time dependent diffeomorphisms $\tilde{\psi}_t$ and $\bar{\psi}_t$ are flows for the time dependent differential equation $dx^j/dt$ $=$ $W^j(t,x)$ that satisfy $\tilde{\psi}_0$ $=$ $\bar{\psi}_0$ $=$ $\mathord{{\mathrm 1}\kern-0.27em{\mathrm I}}\kern0.35em$. Here, $W^j$ is the vector field defined by $W^j$ $= g^{pq}\Gamma^j_{pq}$. By standard uniqueness theorems for solutions to ordinary differential equations, we can conclude that $\tilde{\psi}_t$ $=$ $\bar{\psi}_t$ for all $t$ $\in$ $[0,T)$. It follows that $\tilde{g}(t)$ $=$ $\bar{g}(t)$ for all $t$ $\in$ $[0,T)$ and the proof is complete.
\end{proof} \subsection{A Continuation Principle} \noindent The following theorem shows that if local existence in time fails to extend indefinitely to give global future existence, then curvature diverges in finite time. \begin{thm} \label{cont} \mnote{[cont]} Suppose $k>n/2 +4$, $\delta < 0$ and $\hat{g}$ is an asymptotically flat metric of class $H^k_\delta$. Then Ricci flow $\partial_t g_{ij} = -2R_{ij}$ with the initial condition $g(0)=\hat{g}$ has a unique solution on a maximal time interval $0\leq t < T_M \leq \infty$. If $T_M < \infty$ then \leqn{cont1} {\limsup_{t\to T_M} \sup_{x\in\mathbb{R}^n}|{\rm Rm}(t,x)|_{g(t,x)} = \infty \, . } Moreover, for any $T \in [0,T_M)$, $K= \sup_{0\leq t\leq T} \sup_{x\in\mathbb{R}^n}|{\rm Rm}(t,x)|_{g(t,x)} < \infty$ and \leqn{cont2}{ e^{-2KT}\hat{g} \leq g(t) \leq e^{2KT}\hat{g} \quad \text{for all $t\in [0,T]$.} } \end{thm} \begin{proof} For $\hat{g}\in H^k_\delta$ with $k>n/2+4$ and $\delta < 0$, let $[0,T_M)$ be the maximal time interval of existence for a solution $g(t)$ to Ricci flow. Suppose that $T_M < \infty$ and that \leqn{cont3}{ K := \sup_{0\leq t< T_M} \sup_{x\in\mathbb{R}^n}|{\rm Rm}(t,x)|_{g(t,x)} < \infty. } For each $t\in [0,T_M)$, the metric $g(t)$ is asymptotically flat and hence is a solution to Ricci flow for which the maximum principle holds. It follows that Proposition 6.48 of \cite{RF} applies. So for each $m\in \mathbb{N}_0$, there exists a constant $c_m$ such that \leqn{cont4}{ |D^I g_{ij}(t,x)|+ |D^I g^{ij}(t,x)| \leq c_m \quad \text{for all $|I|= m$ and $(t,x)\in [0,T_M)\times\mathbb{R}^n$}, } where $g_{ij}$ are the metric components in Cartesian coordinates and $D^I = \partial^{I_1}_1\ldots \partial^{I_n}_n$.
{}From the proof of Theorem \ref{LocB}, we get that for each $\tilde{t}\in [0,T_M)$ there exists an interval $I_{\tilde{t}} := [\tilde{t},T_{\tilde{t}}) \subset [0,T_M)$ and a map $\psi_{\tilde{t}}(t,x) = (\psi_{\tilde{t}}^1(t,x),\ldots,\psi_{\tilde{t}}^n(t,x))$ of $\mathbb{R}^n$ to $\mathbb{R}^n$ such that \eqn{cont5}{ \psi_{\tilde{t}}\in C^{0}(I_{\tilde{t}},\mathcal{D}^{k-1}_{\delta})\cap C^{1}(I_{\tilde{t}},\mathcal{D}^{k-3}_{\delta})} and $\psi_{\tilde{t}}$ satisfies harmonic map flow (i.e. \eqref{LocB3}) with initial condition $\psi_{\tilde{t}}(\tilde{t})=\mathord{{\mathrm 1}\kern-0.27em{\mathrm I}}\kern0.35em$. Since $\psi_{\tilde{t}}$ satisfies a linear equation, see \eqref{LocB3}, $\psi_{\tilde{t}}$ will continue to exist as long as $g(t)$ does. Thus we can solve \eqref{LocB3} on the interval $[\tilde{t},T_M)$, although it may fail to define a diffeomorphism for some time less than $T_M$. Also, the metric $\tilde{g}(t):= (\psi_{\tilde{t}})_*g(t)$ satisfies \eqn{cont6}{ \tilde{g}^{ij} -\delta^{ij}\, , \tilde{g}_{ij}-\delta_{ij} \in C^{0}(I_{\tilde{t}},H^{k-2}_\delta)\cap C^{1}(I_{\tilde{t}},H^{k-4}_\delta) } and $\tilde{h}_{ij} := \tilde{g}_{ij}-\delta_{ij}$ is a solution of Hamilton-DeTurck flow \eqref{LocA5.1} on the time interval $I_{\tilde{t}}$ with initial condition $\tilde{h}_{ij}(\tilde{t})=g_{ij}(\tilde{t})-\delta_{ij}$. We now use the harmonic map flow equation \eqref{LocB3} to derive $C^k_b$ bounds on $\psi_{\tilde{t}}$ to estimate the length of time for which $\psi_{\tilde{t}}$ remains a diffeomorphism. Let $\phi^j(t,x) = \psi^j_{\tilde{t}}(t,x)-x^j$ and define $|\phi|^2 = \delta_{ij}\phi^i\phi^j$. Then from \eqref{LocB3}, or equivalently \eqref{LocB6}, $|\phi|^2$ satisfies \leqn{cont7}{ \partial_t|\phi|^2 = g^{pq}\partial_{p}\partial_{q} |\phi|^2 -2\delta_{ij}g^{pq}\partial_p\phi^i\partial_q\phi^j -g^{pq}\Gamma^r_{pq}\partial_r|\phi|^2 -2\delta_{ij}\phi^i g^{pq}\Gamma^j_{pq} \, .
} So by \eqref{cont4}, there exists a constant $C$ independent of $\tilde{t}$ such that \leqn{cont8}{ \partial_t|\phi|^2 - g^{pq}\partial_p\partial_q |\phi|^2 +g^{pq}\Gamma^r_{pq}\partial_r|\phi|^2 \leq |\phi|^2 + C\, . } Since $\lim_{|x|\rightarrow \infty}|\phi|^2(t,x) = 0$ for all $t\in [\tilde{t},T_M)$ and $|\phi|^2(\tilde{t},x) = 0$, we get via the maximum principle, see Theorem 4.4 in \cite{RF}, that \leqn{cont9}{ |\phi|^2(t,x) \leq C(\exp(t-\tilde{t})-1) \quad \text{ for all $(t,x) \in [\tilde{t},T_M)\times \mathbb{R}^n$.} } Next, differentiating \eqref{LocB3} we find that \lalign{cont10}{ \partial_t|D\psi_{\tilde{t}}|^2 = g^{pq}&\partial_p\partial_q |D\psi_{\tilde{t}}|^2 -2g^{pq}\delta_{jl}\delta^{ki}\partial_q \partial_k\psi_{\tilde{t}}^l\partial_p \partial_i\psi_{\tilde{t}}^j -g^{pq}\Gamma^r_{pq}\partial_r|D\psi_{\tilde{t}}|^2 \notag \\ &+2\partial_i g^{pq}\delta_{jl}\delta^{ki} \partial_k\psi_{\tilde{t}}^l\partial_p \partial_q\psi_{\tilde{t}}^j - 2 \partial_i(g^{pq}\Gamma^r_{pq})\delta_{jl}\delta^{ki} \partial_k\psi_{\tilde{t}}^l \partial_r\psi_{\tilde{t}}^j, } where $|D\psi_{\tilde{t}}|^2 := \delta_{ij}\delta^{kl}\partial_k\psi^i_{\tilde{t}}\partial_l\psi^j_{\tilde{t}}$. Using \eqref{cont4}, we obtain the inequalities \lalign{cont11}{ & \partial_t|D\psi_{\tilde{t}}|^2 - g^{pq}\partial_p\partial_q|D\psi_{\tilde{t}}|^2 + g^{pq}\Gamma^r_{pq}\partial_r|D\psi_{\tilde{t}}|^2\notag\\ & \leq -2g^{pq}\delta_{jl}\delta^{ki}\partial_q \partial_k\psi_{\tilde{t}}^l \partial_p \partial_i \psi^j_{\tilde{t}} + \epsilon|D^2\psi_{\tilde{t}}|^2 + C_1(1+1/\epsilon)|D\psi_{\tilde{t}}|^2 \quad (\epsilon > 0) } and \leqn{cont12}{ -2g^{pq}\delta_{jl}\delta^{ki}\partial_q \partial_k\psi_{\tilde{t}}^l \partial_p \partial_i\psi_{\tilde{t}}^j \leq - C_2 |D^2\psi_{\tilde{t}}|^2 \, } for some constants $C_1$ and $C_2$ that are independent of $\epsilon > 0 $, $\tilde{t}$ and $t\in [\tilde{t},T_M)$.
Setting $\epsilon = C_2$ yields \lalign{cont13}{ \partial_t|D\psi_{\tilde{t}}|^2 - g^{pq}\partial_p\partial_q|D\psi_{\tilde{t}}|^2 + g^{pq}\Gamma^r_{pq}\partial_r|D\psi_{\tilde{t}}|^2 \leq C_1(1+1/C_2)|D\psi_{\tilde{t}}|^2 \, . } Since $\lim_{|x|\rightarrow \infty }|D\psi_{\tilde{t}}|^2(t,x) = n$ for all $t\in [\tilde{t},T_M)$ and $|D\psi_{\tilde{t}}|^2(\tilde{t},x)=n$, the maximum principle implies that there exists a constant $C$ independent of $\tilde{t}$ for which the following estimate holds \leqn{cont14}{ ||D\psi_{\tilde{t}}|^2(t,x)-n| \leq C(\exp(C(t-\tilde{t}))-1) \quad \text{for all $(t,x)\in [\tilde{t},T_M)\times \mathbb{R}^n$.} } Differentiating \eqref{LocB3} again and letting $|D^2\psi_{\tilde{t}}|^2 = \delta_{ij}\delta^{kl}\delta^{pq}\partial_{kp}\psi^i_{\tilde{t}} \partial_{lq}\psi^j_{\tilde{t}}$, we find, using similar arguments, that there exists a constant $C>0$ independent of $\tilde{t}$ such that \leqn{cont15}{|D^2\psi_{\tilde{t}}|^2(t,x) \leq \exp(C(t-\tilde{t})) \quad \text{for all $(t,x)\in [\tilde{t},T_M)\times \mathbb{R}^n$.}} Let $J(\psi_{\tilde{t}})=\det(\partial_j\psi^i_{\tilde{t}})$ denote the Jacobian of the map $\psi_{\tilde{t}}$.
Since $J(\psi_{\tilde{t}})(\tilde{t},\cdot)=1$, the estimates \eqref{cont9} and \eqref{cont14} show that there exists a $\bar{t}\in (0,T_M)$ and a constant $C>1$ such that, taking $\tilde{t}=\bar{t}$, \leqn{cont16}{ 0<1/C \leq J(\psi_{\bar{t}})(t,x) \leq C \quad \text{for all $(t,x) \in [\bar{t},T_M)\times \mathbb{R}^n$.} } Combining this estimate with \eqref{cont9}, \eqref{cont14}, and \eqref{cont15}, we have \leqn{cont17}{ |D^I(\psi^{-1}_{\bar{t}}(t,x)-x)|\leq C \quad \text{for all $|I|\leq 2$ and $(t,x)\in [\bar{t},T_M)\times \mathbb{R}^n$.}} Notice that this estimate along with Lemma \ref{Can3B} shows that $I_{\bar{t}}=[\bar{t},T_M)$ and that \leqn{cont18}{ |D^I \tilde{h}_{ij}(t,x)| + |\tilde{g}^{ij}(t,x)| \leq C \quad \text{for $|I|\leq 1$ and all $(t,x)\in [\bar{t},T_M)\times \mathbb{R}^n$.}} But $\tilde{h}$ satisfies \eqref{LocA5.1}, and so the estimate \eqref{cont18} and the continuation principle of Theorem \ref{locB} imply that there exists a $T>T_M$ such that $\tilde{h}_{ij}(t,x)$ extends to a solution on $[\bar{t},T)\times \mathbb{R}^n$ of the class \leqn{cont19}{ \tilde{h}_{ij}=\tilde{g}_{ij}-\delta_{ij} \, ,\tilde{g}^{ij} -\delta^{ij}\, \in C^{0}([\bar{t},T),H^{k-2}_\delta)\cap C^{1}([\bar{t},T), H^{k-4}_\delta) \, . } By the proofs of Theorems \ref{LocA} and \ref{LocB}, $\tilde{h}_{ij}$ produces a unique solution to Ricci flow satisfying $\bar{g}_{ij}-\delta_{ij}\, , \; \bar{g}^{ij}-\delta^{ij} \in C^{1}([\bar{t},T),H^{k-4}_\delta)$ and $\bar{g}(\bar{t}) = g(\bar{t})$. Thus $\bar{g}(t) = g(t)$ for all $t\in [\bar{t},T_M)$. Since $T>T_M$ this contradicts $T_M$ being the maximal existence time. So we must either have $T_M=\infty$ or $\limsup_{t\nearrow T_M}$ $\sup_{x\in\mathbb{R}^n}|{\rm Rm}(t,x)|_{g(t,x)}$ $=$ $\infty$. This proves the first statement. The second statement follows from a straightforward adaptation of Corollary 6.50 in \cite{RF}.
\end{proof} We note that, as in the compact case, the continuation criterion (\ref{cont1}) can be strengthened to $\lim_{t\nearrow T_M}$ $\sup_{x\in\mathbb{R}^n}|{\rm Rm}(t,x)|_{g(t,x)}$ $=$ $\infty$, but we will not pursue this here. \section{Rotational Symmetry} \setcounter{equation}{0} \subsection{The Coordinate System} \noindent We now restrict our attention to flows evolving from a fixed initial metric that (i) is rotationally symmetric and admits no minimal hyperspheres, and (ii) is asymptotically flat of class $H^k_\delta$ with $\delta <0$ and $k>\frac{n}{2}+4$. In an attempt to manage the several constants that will appear from here onward, we will sometimes use the notation $C^+_x$ to denote a constant that bounds a quantity $x$ from above; dually, $C^-_x$ will sometimes be used to denote a constant that bounds $x$ from below. \begin{rem}\label{locrem} \mnote{[locrem]} $\;$ {\rm \begin{enumerate} \item[(i)] By Theorem \ref{LocA}, there exists a solution $\bar{g}(t)$ to Ricci flow satisfying \leqn{eq4.1}{ \bar{g}_{ij}-\delta_{ij},\; \bar{g}^{ij}-\delta^{ij} \in C^1([0,T_M),H^{k-2}_{\delta}) \, , \quad \bar{g}(t,x) \in C^\infty((0,T_M)\times \mathbb{R}^n), } and $\bar{g}(0)=\hat{g}$. \item[(ii)] From (\ref{eq4.1}) and the weighted Sobolev embedding (see Lemma \ref{SobA}), it follows that $\bar{g}(t) \in C^1([0,T_M),C^2_{\delta})$ and hence there exists a time dependent constant $C(t)$ such that \leqn{eq4.2}{ |D_x^I \bar{g}_{ij}(t,x)| \leq \frac{C(t)}{(1+|x|^2)^{(|\delta|+|I|)/2}} } for all $(t,x)$ $\in$ $[0,T_M)\times \mathbb{R}^n$, and $|I|\leq 2$.
\item[(iii)] Since Ricci flow preserves isometries, each metric $\bar{g}(t)$ is rotationally symmetric and hence \leqn{eq4.3}{ \bar{g}(t,x) = q^2(t,r) dr^2 + h^2(t,r) g_{\rm can} } for functions $q(t,r)$ and $h(t,r)$ which are $C^1$ in $t$, $C^2$ in $r$, $C^\infty$ in $t$ and $r$ for $t > 0$, and satisfy \lgath{}{ q(0,r) = a(r) \, ,\quad \quad h(0,r) = r\, , \label{eq4.4} \\ |\partial_r^s(q^2(t,r)-1)| \leq \frac{C(t)}{(1+r)^{|\delta|+s}} \quad s=0,1,2 \, , \label{eq4.5} \\ |\partial_r^s(r^{-2} h^2(t,r)-1)| \leq \frac{C(t)}{(1+r)^{|\delta|+s}} \quad s=0,1,2 \, . \label{eq4.6} } \end{enumerate} } \end{rem} Since $\partial_r h(0,r) = \partial_r r =1$, it follows that there exist constants $0<C_{\partial_r h}^- \leq 1$, $C_{\partial_r h}^+\geq 1$, such that \leqn{eq4.7}{ 0 < C_{\partial_r h}^- \leq \partial_r h(t,r) \leq C_{\partial_r h}^+ \quad \text{for all $(t,r) \in [0,T]\times (0,\infty)$} } for some $T>0$. Note that $T$ has no {\it a priori} relation to $T_M$, the maximal existence time of the flowing metric (\ref{eq4.3}). However, let ${\tilde T}$ be the largest time such that (\ref{eq4.7}) holds whenever $T<{\tilde T}$. We will show in Subsection 4.3 that we can take ${\tilde T}=T_M$. Letting $(\theta^A)$ denote angular coordinates on the sphere ${\mathbb S}^{n-1}$, the map \leqn{eq4.8}{ \psi_t(r,\theta^A) = (h(t,r),\theta^A) } defines a $C^2$ diffeomorphism on $\mathbb{R}^n$ for each $t \in [0,{\tilde T})$ which is smooth for all $t>0$. So then \leqn{eq4.9}{ \psi^{-1}_t(r,\theta^A) = (\rho(t,r),\theta^A) } for a function $\rho(t,r)$ that is $C^1$ in $t$, $C^2$ in $r$, $C^\infty$ in $r$ and $t$ for $t>0$, and satisfies \leqn{eq4.10}{ h(t,\rho(t,r))=r \, , \quad \rho(t,h(t,r)) = r\, , \quad \text{and} \quad \rho(0,r) =r } for all $(t,r) \in [0,{\tilde T})\times (0,\infty)$. Next, define \leqn{eq4.11}{ g(t) := (\psi_t^{-1})^* \bar{g}(t).
} Then we finally obtain that \leqn{eq4.12}{ g(t) = f^2(t,r) dr^2 + r^2 g_{\rm can} } where \leqn{eq4.13}{ f(t,r) = \frac{q(t,\rho(t,r))}{\partial_r h(t,\rho(t,r))} \quad \text{for all $(t,r) \in [0,{\tilde T})\times (0,\infty)$.} } Note that $f(t,r)$ is $C^1$ in $t$, $C^2$ in $r$, and $C^\infty$ in $r$ and $t$ for $t>0$. As well, \begin{equation} \lim_{r\to\infty}f^2(t,r)=1 \label{eq4.14} \end{equation} (proof: from (\ref{eq4.5}) we have $q^2\to 1$ and from (\ref{eq4.6}) it is easy to check that $\partial_r h\to 1$; then apply these in (\ref{eq4.13})). Finally, note that the mean curvature of constant-$r$ hyperspheres is \begin{equation} H=\frac{1}{rf}\ , \label{eq4.15} \end{equation} so a minimal hypersphere occurs iff $f$ diverges at finite $r$ and some $t\in[0,{\tilde T})$. We show in the following subsection that such a divergence cannot develop. \subsection{Ricci Flow in Area Radius Coordinates} \noindent The metric (\ref{eq4.12}) is a solution of the Hamilton-DeTurck flow (\ref{eq1.2}), at least for $t\in[0,{\tilde T})$. Now from (\ref{eq4.12}) we can directly compute the Ricci curvature and obtain \begin{equation} {\rm Ric} = \frac{(n-1)}{rf(t,r)} \frac{\partial f}{\partial r} dr^2 + \left [ (n-2) \left ( 1 - \frac{1}{f^2(t,r)} \right ) +\frac{r}{f^3(t,r)}\frac{\partial f}{\partial r}\right ] g_{\rm can} \ . \label{eq4.16} \end{equation} We can then use the components of the flow equation (\ref{eq1.2}) normal to $\frac{\partial}{\partial r}$ to determine $\xi$, expressed as a 1-form, to be $\xi=\xi_1(t,r)dr$ where \begin{equation} \xi_1=\left [ \frac{(n-2)}{r} \left ( f^2(t,r) - 1 \right ) +\frac{\frac{\partial f}{\partial r}}{f(t,r)}\right ] \ . \label{eq4.17} \end{equation} We can then write the $rr$-component of (\ref{eq1.2}) as a differential equation for $f$ and use (\ref{eq4.17}) to eliminate $\xi$ from this equation.
The result is \begin{eqnarray} \frac{\partial f}{\partial t}&=&\frac{1}{f^2}\frac{\partial^2 f}{\partial r^2} -\frac{2}{f^3}\left ( \frac{\partial f}{\partial r} \right )^2 + \left ( \frac{(n-2)}{r}-\frac{1}{rf^2} \right ) \frac{\partial f}{\partial r}\nonumber\\ &&-\frac{(n-2)}{r^2f}\left ( f^2 -1 \right )\ . \label{eq4.18} \end{eqnarray} This is our master equation, upon which our global existence proof is based. Obviously $f(t,r)=1$ (flat space) is a solution, as is $f=const\neq 1$ when $n=2$ (flat cone) but not for $n>2$. We will now prove that minimal hyperspheres cannot form along the flow if none are present initially. A variant of this argument will be employed several times over in Section 5. Our technique is to prescribe limits as $r\to\infty$ and as $r\to 0$ on $f(t,r)$ or, depending on the situation, an expression involving $f$ (and, in the next section, its radial derivative as well). These limits constitute time-dependent bounds on the behaviour of the geometry over the time interval $[0,{\tilde T})$. But if the flow exists subject to these limits, then maximum principles will give bounds expressed solely in terms of the initial conditions. The bounds are therefore uniform in time and independent of ${\tilde T}$. To see how this works, express (\ref{eq4.18}) in terms of the variable \begin{equation} w(t,r):=f^2(t,r)-1\ . \label{eq4.19} \end{equation} Then, working from (\ref{eq4.18}), we see that $w$ obeys \begin{equation} \frac{\partial w}{\partial t}= \frac{1}{f^2} \frac{\partial^2 w}{\partial r^2} - \frac{3}{2f^4} \left [ \frac{\partial w}{\partial r} \right ]^2 +\left [ \frac{n-2}{r}-\frac{1}{rf^2} \right ] \frac{\partial w}{\partial r}-\frac{2(n-2)}{r^2}w\ . \label{eq4.20} \end{equation} Since $f(t,r)$ solves (\ref{eq4.18}) and obeys $\lim_{r\to 0} f^2(t,r)=1=\lim_{r\to \infty} f^2(t,r)$, the corresponding $w=f^2-1$ will solve (\ref{eq4.20}) with $\lim_{r\to 0} w(t,r)=0=\lim_{r\to \infty} w(t,r)$.
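The passage from (\ref{eq4.18}) to (\ref{eq4.20}) is routine but error-prone; as a sanity check (again, not part of the argument), it can be verified symbolically for symbolic dimension $n$. A sketch assuming SymPy is available:

```python
# Check that w = f^2 - 1 converts the master equation (4.18) for f
# into the stated equation (4.20) for w, keeping the dimension n symbolic.
import sympy as sp

t, r, n = sp.symbols('t r n', positive=True)
f = sp.Function('f')(t, r)

# right-hand side of (4.18)
f_t = (sp.diff(f, r, 2)/f**2 - 2*sp.diff(f, r)**2/f**3
       + ((n - 2)/r - 1/(r*f**2))*sp.diff(f, r)
       - (n - 2)*(f**2 - 1)/(r**2*f))

w = f**2 - 1
# right-hand side of (4.20), with w-derivatives expanded via w = f^2 - 1
w_t = (sp.diff(w, r, 2)/f**2 - sp.Rational(3, 2)*sp.diff(w, r)**2/f**4
       + ((n - 2)/r - 1/(r*f**2))*sp.diff(w, r)
       - 2*(n - 2)*w/r**2)

# chain rule: d_t w = 2 f d_t f; the difference should vanish identically
residual = sp.simplify(sp.expand(2*f*f_t - w_t))
```

The residual vanishes identically, confirming in particular the coefficient $-\frac{3}{2f^4}$ on the gradient-squared term in (\ref{eq4.20}).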
\begin{prop}\label{Prop4.2} Suppose that $w(t,r)$ is a classical solution of (\ref{eq4.20}) for $(t,r)\in [0,{\tilde T})\times[0,\infty)=:{\tilde D}$ and that $\lim_{r\to 0} w(t,r)=0 =\lim_{r\to \infty} w(t,r)$ for all $t\in[0,{\tilde T})$. Then there exist constants $C^-_w\le 0$ and $C^+_w\ge 0$ such that $C^-_w\le w(t,r)\le C^+_w$ for all $(t,r)\in {\tilde D}$. \end{prop} \begin{proof} First choose positive constants $0<r_1<r_2$ and restrict the domain to $r\in[r_1,r_2]$. Let $T<{\tilde T}$. By the maximum principle, if the maximum of $w$ on $[0,T]\times [r_1,r_2]$ is positive, it must lie on the {\it parabolic boundary} $P$ (which consists of those points where either $t=0$, $r=r_1$, or $r=r_2$). But now take the limits $r_1\to 0$ and $r_2\to\infty$. By assumption, $w(t,r_1)$ and $w(t,r_2)$ tend to zero in these limits, so for $r_1$ small enough and $r_2$ large enough, the maximum, if it is positive, lies on the {\it initial boundary} $\{ (t,r)\vert t=0\}$ (and since $w(0,0)=0$, even when the maximum is zero it is realized on the initial boundary). Finally, take $T\to {\tilde T}$. This proves \begin{equation} C^+_w:=\max_{r\in[0,\infty)} \{w(0,r)\}=\max_{\tilde D} \{ w(t,r) \} \ge 0\ . \label{eq4.21} \end{equation} Dually, by the minimum principle, if the minimum of $w$ on $[0,T]\times [r_1,r_2]$ is negative, it must lie on $P$, and the argument proceeds as before, yielding \begin{equation} C^-_w:=\min_{r\in[0,\infty)} \{w(0,r)\}=\min_{\tilde D} \{ w(t,r) \}\le 0\ . \label{eq4.22} \end{equation} \end{proof} \begin{cor}\label{Cor4.3} Define constants $C^{\pm}_{f^2}$ such that $0<C^-_{f^2}:=\min_{r\in[0,\infty)} \{a^2(r)\}$ and $C^+_{f^2}:=\max_{r\in[0,\infty)} \{a^2(r)\}$ ($a(r)$ is defined in (\ref{eq4.4})). Then \begin{equation} 0<C^-_{f^2}\le f^2(t,r)\le C^+_{f^2} \label{eq4.23} \end{equation} for all $(t,r)\in {\tilde D}=[0,{\tilde T})\times [0,\infty)$.
\end{cor} \begin{proof} Using $w:=f^2-1$ and noting in particular that $w(0,r)=f^2(0,r)-1=a^2(r)-1$, apply Proposition (\ref{Prop4.2}) and use $C^{\pm}_w +1=C^{\pm}_{f^2}$. \end{proof} Now we say that a minimal hypersphere forms along the flow iff $f(t,r)$ diverges in ${\tilde D}=[0,{\tilde T})\times [0,\infty)$. \begin{cor}\label{Cor4.4} If no minimal hypersphere is present initially then none forms. \end{cor} \begin{proof} {}From Corollary \ref{Cor4.3}, the classical solutions $f$ of (\ref{eq4.18}) developing from the initial data $a(r)$ of (\ref{eq4.4}) are bounded uniformly in $t$ on $[0,{\tilde T})$. \end{proof} \subsection{The Continuation Principle in Area Radius Coordinates} \noindent To adapt the continuation principle of Section 3.2 to the rotationally symmetric case, we must deal with the following point. While we can assume the solution of Ricci flow in the coordinate system (\ref{eq4.3}) to exist for all $t<T_M$, the diffeomorphism transforming the coordinates to those of (\ref{eq4.12}) is, so far, only defined for $t<{\tilde T}$, and perhaps ${\tilde T}<T_M$. \begin{prop}\label{Prop4.5} ${\tilde T}= T_M$. \end{prop} \begin{proof} Assume by way of contradiction that ${\tilde T}<T_M$ and fix $T'\in ({\tilde T},T_M)$. Let $K= \sup_{0\leq t \leq T'}\norm{{\rm Rm}}_{L^\infty}$. But $\bar{R}_{ijkl}$ is bounded on $[0,T']$ (indeed, on any closed subinterval of $[0,T_M)$), so we can use (\ref{cont2}), which states that for all $(t,r)\in [0,T']\times [0,\infty)$ \begin{equation} e^{-2KT'}C^-_{f^2}\leq e^{-2KT'}a^2(r)\leq q^2(t,r) \leq e^{2KT'}a^2(r) \leq e^{2KT'}C^+_{f^2}\ . \label{eq4.24} \end{equation} Here the inner two inequalities come from (\ref{cont2}) and the outer two are just the definitions of the constants $C^{\pm}_{f^2}$. Restricting attention to $t\in[0,{\tilde T})$, we can divide (\ref{eq4.24}) by (\ref{eq4.23}).
This yields \begin{equation} 0<e^{-2KT'}\frac{C^-_{f^2}}{C^+_{f^2}}\le \frac{q^2(t,r)}{f^2(t,r)}\le e^{2KT'}\frac{C^+_{f^2}}{C^-_{f^2}} \label{eq4.25} \end{equation} on $[0,{\tilde T})$. Using (\ref{eq4.13}), we can rewrite this as \begin{equation} 0<e^{-2KT'}\frac{C^-_{f^2}}{C^+_{f^2}}\le \frac{\partial h}{\partial r} \le e^{2KT'}\frac{C^+_{f^2}}{C^-_{f^2}} \label{eq4.26} \end{equation} on $[0,{\tilde T})$. We see by comparison of this to (\ref{eq4.7}) that the constants that appear in (\ref{eq4.7}) are independent of $T$. But the $\le$ signs give closed relations so, by relaxing the constant bounds slightly if necessary (keeping the lower bound positive of course), we can extend (\ref{eq4.26}) (equivalently, (\ref{eq4.7})) to $[0,{\tilde T}]$ and then to some interval $[0,T'')\supset [0,{\tilde T}]$ with $T''\le T_M$. This contradicts the maximality of ${\tilde T}$, so the assumption ${\tilde T}<T_M$ must be false; since necessarily ${\tilde T} \le T_M$, we conclude that ${\tilde T}= T_M$. \end{proof} \noindent Thus the diffeomorphism (\ref{eq4.8}--\ref{eq4.11}) is defined for all $t\in[0,T_M)$. The square of the norm of the curvature tensor is given by \leqn{eq4.27}{ |{\rm Rm}|^2 = R_{ijkl}R^{ijkl} = 2(n-1)\lambda^2_1 + (n-1)(n-2)\lambda_2^2 } where \leqn{eq4.28}{ \lambda_1(t,r) = \frac{1}{rf^3(t,r)}\frac{\partial f(t,r)}{\partial r} } and \leqn{eq4.29}{ \lambda_2(t,r) = \frac{1}{r^2} \left ( 1-\frac{1}{f^2(t,r)}\right ) } are the sectional curvatures in planes containing and orthogonal to $dr$, respectively. Now in terms of the curvature tensor $\bar{R}_{ijkl}$ of ${\bar g}(t)$ we have that \leqn{eq4.30}{ |{\rm Rm}| = |\overline{{\rm Rm}}|\circ\psi^{-1}_t \, . } But $\bar{R}_{ijkl}$ is bounded on any interval $[0,T']$ with $T'<T_M$ and thus the sectional curvatures are bounded functions of $(t,r)\in [0,T']\times[0,\infty)$, using Proposition \ref{Prop4.5}.
Thus \begin{eqnarray} C^-_{\lambda_1}(t)\le\lambda_1(t,r)&=& \frac{1}{rf^3}\frac{\partial f}{\partial r} \le C^+_{\lambda_1}(t)\ , \label{eq4.31} \\ C^-_{\lambda_2}(t)\le\lambda_2(t,r) &=& \frac{1}{r^2}\left ( 1-\frac{1}{f^2(t,r)}\right ) \le C^+_{\lambda_2}(t) \ , \label{eq4.32} \end{eqnarray} for all $t\in [0,T_M)$. In particular, the limits $r\to 0$ of these quantities exist at each fixed $t$. It also follows easily from the fall-offs (\ref{eq4.5}, \ref{eq4.6}) that \leqn{eq4.33}{ \lim_{r\rightarrow \infty} r^{-|\delta|-s}\partial_r(f^2(t,r)-1) = 0} for $s=0,1,2$ and all $t\in [0,T_M)$. Thus we have the following. \begin{prop}\label{Prop4.6} The function $f(t,r)$ given by (\ref{eq4.13}) solves the PDE (\ref{eq4.18}) on the region $[0,T_M)\times (0,\infty)$, equals $a(r)$ at time $t=0$, and satisfies the boundary conditions \leqn{eq4.34}{ \lim_{r\to 0} \frac{1-f^2(t,r)}{r^2} = L_1(t) \, , \quad \lim_{r\to 0} \frac{\partial_r f(t,r)}{r} = L_2(t) \, ,} for locally bounded functions $L_1 , L_2 : [0,T_M)\to \mathbb R$ and \leqn{eq4.35}{ \lim_{r\to \infty} r^{-|\delta|-s}\partial_r(f^2(t,r)-1) = 0 \quad (s=0,1,2) } for all $t\in [0,T_M)$ and $\delta<0$. \end{prop} \begin{proof} To obtain the boundary conditions (\ref{eq4.34}), multiply (\ref{eq4.31}) by $f^3$, (\ref{eq4.32}) by $f^2$, take the limit, and use that $f$ is a bounded function of $r$. The fact that $f$ solves (\ref{eq4.18}), subject to these conditions, for all $(t,r)\in[0,T_M)\times [0,\infty)$ follows from the facts that (i) $q$ and $h$ enter (\ref{eq4.3}) which solves Ricci flow (\ref{eq1.1}), (ii) $f$ enters (\ref{eq4.12}) which solves Hamilton-DeTurck flow (\ref{eq1.2}), and (iii) the diffeomorphism (\ref{eq4.8}--\ref{eq4.11}) relating these flows is valid for all such $(t,r)$ (Proposition \ref{Prop4.5}).
\end{proof} \begin{thm}\label{Thm4.7} If there exists a constant $C_{\lambda}>0$ independent of $T_M$ such that \leqn{eq4.36}{ \sup_{0<r<\infty}\bigl(|\lambda_1(t,r)| + |\lambda_2(t,r)|\bigr) \leq C_{\lambda} \, } for all $t\in [0,T_M)$, then $T_M = \infty$. \end{thm} \begin{proof} {}From Proposition (\ref{Prop4.6}), the solution $f$ of (\ref{eq4.18}) exists up to time $T_M$. From (\ref{eq4.27}--\ref{eq4.30}), if the sectional curvatures $\lambda_1$ and $\lambda_2$ are bounded independent of $T_M$, then so is $\vert {\overline{\rm Rm}} \vert^2$, and then by Theorem 3.5 we have $T_M = \infty$. \end{proof} \subsection{Quasi-Local Mass} \noindent This subsection is a brief aside, not necessary for our main results, but intended to relate our results to the motivating discussion in the introduction. One of the more popular quasi-local mass formulations is the Brown-York mass. The Brown-York quasi-local mass contained within a closed hypersurface $\Sigma$ is defined to be \begin{equation} \mu[\Sigma]:=\int_\Sigma(H_0-H)d\Sigma\ , \label{eq4.37} \end{equation} where $H$ is the mean curvature of $\Sigma$ and $H_0$ is the mean curvature of the image of $\Sigma$ under an isometric embedding of $\Sigma$ into flat space (assuming there is such an embedding). In the case of a hypersphere $r=b(t)$, whose coordinate radius we allow to change in time, we have (using (\ref{eq4.15}) and writing $d\Omega$ to represent the canonical volume element on the $(n-1)$-sphere) \begin{eqnarray} \mu(t)&=&\int_{{\mathbb S}^{n-1}}\frac{1}{b(t)} \left (1 - \frac{1}{f(t,b(t))} \right ) b^{n-1}(t) d\Omega\nonumber \\ &=&b^{n-2}(t)\left (1 - \frac{1}{f(t,b(t))} \right ) {\rm vol}\left ( {\mathbb S}^{n-1},{\rm can}\right ) \ . \label{eq4.38} \end{eqnarray} Comparing to (\ref{eq4.32}), we can relate quasi-local mass to sectional curvature by \begin{equation} \frac{1}{b^n(t)}\left (1 + \frac{1}{f(t,b(t))} \right )\mu(t) = \lambda_2(t,b(t)){\rm vol}\left ( {\mathbb S}^{n-1},{\rm can}\right ) \ .
\label{eq4.39} \end{equation} \begin{prop}\label{Prop4.8} The sign of the Brown-York quasi-local mass within the hypersphere $r=b(t)$ at time $t$ is determined by the sign of $\lambda_2(t,b(t))$, and \begin{equation} \lim_{t\to\infty}\lambda_2(t,b(t))=0\ \Leftrightarrow\ \lim_{t\to\infty}\mu(t)=0\ . \label{eq4.40} \end{equation} \end{prop} \begin{proof} Obvious from (\ref{eq4.39}) and (\ref{eq4.23}). \end{proof} Perhaps the three most interesting kinds of hyperspheres are those of \begin{enumerate} \item[(i)] fixed surface area \begin{equation} b(t)=b_0=const>0 \ , \label{eq4.41} \end{equation} \item[(ii)] fixed volume contained within \begin{equation} \int_0^{b(t)} \int_{{\mathbb S}^{n-1}} f(t,r)r^{n-1} drd\Omega =:V_0 =const>0 \ , \ {\rm and} \label{eq4.42} \end{equation} \item[(iii)] fixed proper radius \begin{equation} \int_0^{b(t)}f(t,r)dr =: R_0=const>0 \ . \label{eq4.43} \end{equation} \end{enumerate} In each case, it is easy to see that \begin{equation} 0< C^-_b \le b(t) \le C^+_b \ , \label{eq4.44} \end{equation} where obviously $C_b^{\pm}=b_0$ for the fixed area case, while \begin{equation} C^{\pm}_b = \left ( \frac{nV_0}{(C^{\mp}_{f^2})^{1/2}\,{\rm vol}\left ({\mathbb S}^{n-1},{\rm can}\right )} \right )^{1/n} \label{eq4.45} \end{equation} for the fixed volume case and \begin{equation} C^{\pm}_b:= \frac{R_0}{(C^{\mp}_{f^2})^{1/2}}\label{eq4.46} \end{equation} for the fixed proper radius case. \begin{rem}\label{Rem4.9} {\rm In Subsection 5.1, we prove that $\lambda_2(t,r)\sim 1/t$ for large $t$ and fixed $r$. Thus, for all three kinds of hyperspheres discussed above, the quasi-local mass vanishes like $1/t$ as $t\to\infty$.} \end{rem} \section{Immortality and Convergence} \setcounter{equation}{0} \noindent In the next two subsections we show that the sectional curvatures $\lambda_1$ and $\lambda_2$ are bounded for $t\in[0,T_M)$. (Equivalently, we obtain bounds on the quasi-local mass and its radial derivative.)
This permits us to invoke Theorem \ref{Thm4.7} to conclude that the solution is immortal. In fact, we find bounds that actually decay in time, going to zero in the limit $t\rightarrow \infty$. This implies that the flow converges in the limit to a space with vanishing sectional curvatures; i.e., to a flat space. In Subsection 5.3, we prove that it converges to Euclidean space ${\mathbb E}^n$. In this section, we use $T$ to denote an arbitrary time that is less than the maximal time of existence, i.e., $0<T<T_M$. \subsection{The Decay of $\lambda_2$} \noindent Short-time existence guarantees that $f^2(t,r)-1\in{\cal O}(r^2)$ as $r\to 0$. Specifically, for $0\le t <T_M$, there is a function $C(t)$ such that \begin{equation} \vert w(t,r)\vert = \vert f^2(t,r)-1\vert < C(t) r^2\ . \label{eq5.1} \end{equation} This follows by applying the boundedness of $f^2$ (\ref{eq4.23}) to equation (\ref{eq4.32}) governing $\lambda_2$, which can be written (by choosing $C(t)$ less than optimally perhaps) as \begin{equation} r^2\vert \lambda_2 (t,r) \vert = \left \vert\frac{1}{f^2}-1\right \vert<C(t)r^2 \ . \label{eq5.2} \end{equation} To apply the continuation principle, we need to prove that $C(t)$ is bounded in $t$. In this section we will prove more: we will show that $C(t)$ can be taken to decay in time, converging to zero in the limit $t\to\infty$, so that the sectional curvature $\lambda_2$ decays to zero as well. If $w=f^2-1$ decays, then, based on the parabolic form of (\ref{eq4.20}), one might speculate that this decay would go roughly like $r^2/t$, or inverse ``parabolic time''. If so, then the function $\chi(t,r)(f^2-1)$ should be bounded if we take $\chi\sim t/r^2$. We will show below that this expectation is basically correct. We do not take $\chi=t/r^2$ exactly. For small $t$, we will modify the form $\chi\sim t/r^2$ so that $\chi$ does not vanish at $t=0$. For small $r$, the form $\chi\sim t/r^2$ is problematic because we cannot specify {\it a priori} the behaviour of $\frac{1}{r^2}(f^2-1)$ on approach to $r=0$. This behaviour is governed by $C(t)$, the very quantity we seek to control as the {\it outcome} of the argument, so we cannot specify it as input. We therefore choose instead small-$r$ behaviour of the form $\chi(t,r)\sim 1/r^m$, $m<2$, and only later do we take $m\to 2$. For $m<2$, $\chi(t,r)(f^2-1)$ is very well controlled {\it a priori} for small $r$: it goes to zero. Lastly, as foreshadowed by (\ref{eq5.2}), we need to apply these considerations not only to $f^2-1$ but also to $\frac{1}{f^2}-1$. The same heuristic reasoning leads us then to consider functions of the form $\chi(t,r)(\frac{1}{f^2}-1)$ with the same $\chi(t,r)$. \begin{Def}\label{Def5.1} {\em Let $f$ be defined by (\ref{eq4.13})\footnote {wherein, of course, $q$ and $h$ arise from an asymptotically flat Ricci flow of rotationally symmetric initial data obeying the conditions of Theorem 1.1.} and therefore have all the properties outlined in Section 4. For such an $f$, define the {\em $u_m$ functions}, $m\in (0,2]$, on $[0,T_M) \times [0,\infty)$ by \begin{eqnarray} u_m(t,r)&:=&\left ( \frac{1+t}{r^m+r^2}\right ) \left ( \frac{1}{f^2(t,r)}-1 \right )\ \text{for}\ r>0\ , \label{eq5.3}\\ u_m(t,0)&:=&\lim_{r\to 0} u_m(t,r)\ .\nonumber \end{eqnarray} } \end{Def} The $u_m$ functions have the following properties, which follow from the flow equation (\ref{eq4.18}) for $f$, Proposition \ref{Prop4.6}, and equation (\ref{eq4.14}): \begin{enumerate} \item[(i)] $u_m(t,0)=0$ for all $0<m < 2$ and $\lim_{r\to\infty} u_m(t,r) =0$ for all $0<m \leq 2$. \item[(ii)] For fixed $t$ and $r\neq 0$, the map $m\mapsto u_m(t,r)$ is continuous at $m=2$. \item[(iii)] \begin{equation} \lambda_2=-\frac{2}{1+t}u_2 \ . \label{eq5.4} \end{equation} \item[(iv)] The $u_m$ obey a maximum principle, as we will show below.
\item[(v)] By direct calculation starting from (\ref{eq4.18}), the $u_m$ obey the differential equation: \end{enumerate} \begin{eqnarray} \frac{\partial u_m}{\partial t} &=& \frac{1}{f^2} \frac{\partial^2 u_m}{\partial r^2} - \frac{(r^m+r^2)}{2(1+t)} \left ( \frac{\partial u_m}{\partial r} \right )^2- \frac{(2r+mr^{m-1})}{(1+t)}u_m\frac{\partial u_m}{\partial r}\nonumber\\ &&+\left [ \frac{2\left ( 2r+mr^{m-1}\right )}{(r^m+r^2)f^2} -\frac{1}{rf^2}+\frac{(n-2)}{r} \right ] \frac{\partial u_m}{\partial r}\nonumber\\ &&-\frac{(2-m)(m+n-2)}{r^2(1+r^{2-m})}u_m\nonumber\\ &&+\frac{1}{(1+t)}\biggl \{\frac{1}{1+r^{2-m}}\left [ u_m-\left ( (4-m)(m+n-2)+m(n-2)\right )u_m^2 \right ]\nonumber\\ && +\frac{r^{2-m}}{1+r^{2-m}}\left [ u_m-2(n-1)u_m^2\right ]\nonumber\\ && +\frac{r^{m-2}}{1+r^{2-m}}\left [ (m-2)(m+n-2)-m\left ( \frac{m}{2}+n-2\right ) \right ]u_m^2\biggr \}\ . \label{eq5.5} \end{eqnarray} This PDE is the starting point for the maximum principle, which we now derive. \begin{prop}\label{Prop5.2} For $u_m(t,r)$ defined by Definition \ref{Def5.1}, there is a constant $C^+_u$ which depends only on the initial data $a(r)=f(0,r)$ such that $u_m(t,r)\le C^+_u$ for all $(t,r)\in [0,T_M)\times[0,\infty)$ and all $m\in (0,2)$. \end{prop} \begin{proof} The technique will be to solve (\ref{eq4.18}) for $f$, given initial data obeying the bounds in Corollary \ref{Cor4.3}. {}From this initial data, we can construct initial data for $u_m$, noting from (\ref{eq5.3}) that \begin{equation} u_m(0,r):=\frac{1}{r^m+r^2}\left ( \frac{1}{f^2(0,r)}-1\right )= -\frac{\lambda_2(0,r)}{1+r^{m-2}}\label{eq5.6} \ . \end{equation} Now by the assumed differentiability and asymptotic flatness of the initial metric stated in Theorem \ref{Thm1.1}, the initial sectional curvature $\lambda_2(0,r)$ is bounded. In particular, then by (\ref{eq5.6}) $u_m(0,r)$ is bounded above on $r\in[0,\infty)$ by a constant $C^+_u$ which depends only on the initial metric (thus on $a(r)$ as in (\ref{eq4.4})) and so does not depend on $m$.
Without loss of generality, we choose $C^+_u\ge \frac{1}{2(n-1)}$, for reasons that will become clear. Now it remains to be shown that $u_m(t,r)$ is bounded above for all time $t\ge 0$ by a bound that is dependent only on $u_m(0,r)$. Of course, the initial data $u_m(0,r)$ will vary with $m$ (because of the denominator of (\ref{eq5.6}); but $f(0,r)=a(r)$ and, thus, $\lambda_2(0,r)$ are of course independent of $m$), but $C^+_u$ will always provide an $m$-independent upper bound which will then bound the full solution. First restrict consideration to the compact domain $D=[0,T]\times [r_1,r_2]$, $0<r_1<r_2$, $T<T_M$, with parabolic boundary $P$ (as defined in the proof of Proposition \ref{Prop4.2}). Now consider in (\ref{eq5.5}) the terms that do not contain derivatives. There are three such terms, each consisting of a function of $r$ multiplying a factor in square brackets. One can easily check (e.g., by direct substitution; keep in mind that $m\in(0,2)$ and $n\ge 3$) that in (\ref{eq5.5}) each of these factors in square brackets is negative whenever \begin{equation} u_m>\frac{1}{2(n-1)}\ , \label{eq5.7} \end{equation} and in that case \begin{eqnarray} \frac{\partial u_m}{\partial t} &<& \frac{1}{f^2} \frac{\partial^2 u_m}{\partial r^2} - \frac{(r^m+r^2)}{2(1+t)} \left ( \frac{\partial u_m}{\partial r} \right )^2- \frac{(2r+mr^{m-1})}{(1+t)}u_m\frac{\partial u_m}{\partial r}\nonumber\\ &&+\left [ \frac{2\left ( 2r+mr^{m-1}\right )}{(r^m+r^2)f^2} -\frac{1}{rf^2}+\frac{(n-2)}{r} \right ] \frac{\partial u_m}{\partial r} \label{eq5.8} \end{eqnarray} Applying the usual maximum principle argument to this inequality (i.e., evaluating both sides at a hypothesized local maximum and observing that the inequality cannot then be satisfied), we conclude that $u_m$ has no maximum greater than $\frac{1}{2(n-1)}$ in $D\backslash P$. By the properties of $u_m$ listed above, we have $u_m(t,r)\to 0$ both for $r\to 0$ and for $r\to \infty$.
Thus, as with the proof of Proposition \ref{Prop4.2}, if the maximum of $u_m$ is $>\frac{1}{2(n-1)}$ (or merely positive) and lies on the parabolic boundary with $r_1$ chosen small enough and $r_2$ large enough, it must lie on the initial boundary. Taking the limits $r_1\to 0$ and $r_2\to\infty$, then we see that \begin{equation} u_m(t,r)\le \max \left \{ \frac{1}{2(n-1)}, \sup_{r\in [0,\infty)} \{ u_m(0,r) \} \right \} \le C^+_u \ , \label{eq5.9} \end{equation} for all $(t,r)\in [0,T]\times[0,\infty)$. But this holds for any $T<T_M$, so it holds for $(t,r)\in [0,T_M)\times[0,\infty)$. \end{proof} \begin{cor}\label{Cor5.3} Proposition 5.2 extends to the case $m=2$ and yields \begin{equation} \lambda_2(t,r)\ge -\frac{2C^+_u}{1+t} =: \frac{C^-_{\lambda_2}}{1+t} \ . \label{eq5.10} \end{equation} \end{cor} \begin{proof} As in Proposition \ref{Prop5.2}, we solve (\ref{eq4.18}) with the assumed initial data to find $f$, from which we construct $u_m$ for, say, $0<m\le 2$. Fixing any $t\in[0,T_M)$ and any $r\neq 0$, the map $m\mapsto u_m(t,r) =\frac{(1+t)}{r^2+r^m} \left ( \frac{1}{f^2(t,r)}-1\right )$ is obviously continuous at $m=2$. This and Proposition 5.2 imply that $u_2(t,r)\le C^+_u$ for all $r>0$. By the continuity of $r\to u_2(t,r)$, then $u_2(t,0)\le C^+_u$ as well, for all $t\in[0,T_M)$. Now use (\ref{eq5.4}). \end{proof} Thus $\lambda_2$ is bounded below by a bound that tends to zero in the limit of long times. Next we need a similarly decaying bound from above. To get it, we work with the following class of functions: \begin{Def}\label{Def5.4} {\em Let $f$ be defined by (\ref{eq4.13}) and therefore have all the properties outlined in Section 4. 
For such an $f$, the {\em $v_m$ functions}, $m\in (0,2]$, are defined on $[0,T_M) \times [0,\infty)$ as \begin{eqnarray} v_m(t,r)&:=&\left ( \frac{1+t}{r^m+r^2}\right ) \left ( f^2(t,r)-1 \right )\ \text{for}\ r>0\ , \label{eq5.11}\\ v_m(t,0)&:=&\lim_{r\to 0} v_m(t,r)\ .\nonumber \end{eqnarray} } \end{Def} These functions have essentially the same properties as those listed for the $u_m$, but the relation to $\lambda_2$ is now \begin{equation} v_2(t,r)=\frac{1}{2}(1+t)f^2(t,r)\lambda_2(t,r)\ ,\label{eq5.12} \end{equation} and the $v_m$ obey the PDE (computed directly from (\ref{eq4.18}) and (\ref{eq5.11})) \begin{eqnarray} \frac{\partial v_m}{\partial t}&=&\frac{1}{f^2}\frac{\partial^2 v_m}{\partial r^2} -\frac{3(r^m+r^2)}{2f^4(1+t)}\left ( \frac{\partial v_m}{\partial r} \right )^2\nonumber\\ &&+\left [ \frac{2(mr^{m-1}+2r)}{(r^m+r^2)f^2} -\frac{3(mr^{m-1}+2r)}{(1+t)f^4}+\frac{n-2}{r}-\frac{1}{rf^2} \right ] \frac{\partial v_m}{\partial r}\nonumber\\ &&+\left [ 1-\frac{3(mr^{m-1}+2r)^2}{2(r^m+r^2)f^4}v_m\right ] \frac{v_m}{1+t} \nonumber\\ &&+\frac{(m-2)}{r^2} \left ( \frac{r^m}{r^m+r^2} \right ) \left ( n-2+\frac{m}{f^2} \right ) v_m\ . \label{eq5.13} \end{eqnarray} We must of course prove that the $v_m$ obey a maximum principle. In fact, Proposition \ref{Prop5.2} holds with $v_m$ replacing $u_m$ and with $m$ restricted this time to $1<m<2$. Just as with Corollary \ref{Cor5.3}, the result can be extended to cover $m=2$.
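Both sign claims invoked in these maximum principle arguments can be spot-checked numerically: the square-bracketed factors of (\ref{eq5.5}) are negative once $u_m>\frac{1}{2(n-1)}$, and for $1<m<2$ the coefficient multiplying $v_m$ inside the third bracket of (\ref{eq5.13}) is bounded below by $6/f^4$, which is what permits the passage to (\ref{eq5.14}). The sketch below is illustrative only; the sampled ranges for $m$, $n$, $r$, and $u_m$ are assumptions, and the check is no substitute for the direct substitution mentioned above.

```python
import random

random.seed(0)
for _ in range(20000):
    m = random.uniform(0.05, 1.999)          # m in (0, 2)
    n = random.randint(3, 8)                 # n >= 3
    u = random.uniform(1 / (2 * (n - 1)) + 1e-9, 5.0)   # u_m above the threshold
    # the three square-bracketed factors of (5.5), times their powers of u_m:
    b1 = u - ((4 - m) * (m + n - 2) + m * (n - 2)) * u**2
    b2 = u - 2 * (n - 1) * u**2
    b3 = ((m - 2) * (m + n - 2) - m * (m / 2 + n - 2)) * u**2
    assert b1 < 0 and b2 < 0 and b3 < 0, (m, n, u)
    # passage from (5.13) to (5.14): for 1 < m < 2 the v_m coefficient
    # 3 (m r^(m-1) + 2 r)^2 / (2 (r^m + r^2)) is bounded below by 6
    if m > 1:
        r = 10 ** random.uniform(-3, 3)
        coeff = 3 * (m * r**(m - 1) + 2 * r)**2 / (2 * (r**m + r**2))
        assert coeff >= 6 - 1e-9, (m, r, coeff)
```

Each run samples $2\times 10^4$ admissible parameter points; an assertion failure would report the offending values.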
To prove this, it will help to note that when $v_m\ge 0$, $n\ge 3$, and $1<m<2$, then we can discard most of the nonderivative terms in (\ref{eq5.13}) to obtain \begin{eqnarray} \frac{\partial v_m}{\partial t}&\le&\frac{1}{f^2}\frac{\partial^2 v_m}{\partial r^2} -\frac{3(r^m+r^2)}{2f^4(1+t)}\left ( \frac{\partial v_m}{\partial r} \right )^2\nonumber\\ &&+\left [ \frac{2(mr^{m-1}+2r)}{(r^m+r^2)f^2} -\frac{3(mr^{m-1}+2r)}{(1+t)f^4}+\frac{n-2}{r}-\frac{1}{rf^2} \right ] \frac{\partial v_m}{\partial r}\nonumber\\ &&+\frac{v_m}{(1+t)}\left [ 1 -\frac{6v_m}{f^4} \right ]\ , \qquad v_m>0 \ . \label{eq5.14} \end{eqnarray} \begin{prop}\label{Prop5.5} There is a constant $C^+_v$ which depends only on the initial data $f(0,r)=a(r)$ such that $v_m(t,r)<C^+_v$ for all $(t,r)\in [0,T_M)\times [0,\infty)$ and all $m\in (1,2)$. \end{prop} \begin{proof} The proof follows that of Proposition \ref{Prop5.2}. Consider first the initial data \begin{equation} v_m(0,r)=\left ( \frac{1}{r^m+r^2}\right ) \left ( f^2(0,r)-1 \right ) =\left ( \frac{1}{1+r^{m-2}}\right )\frac{w(0,r)}{r^2}\ \le C^+_v \label{eq5.15} \end{equation} because $\frac{|w(0,r)|}{r^2}$ is bounded, where $C^+_v$ is independent of $m$. This time, we will choose without loss of generality that $C^+_v\ge \frac{1}{6}(C^+_{f^2})^2$, for reasons that will become clear below. Again we work first on the domain $D=[0,T]\times[r_1,r_2]$, $0<r_1<r_2$, with parabolic boundary $P$. Observe that if $v_m> \frac{1}{6}(C^+_{f^2})^2$, the last term in (\ref{eq5.14}) will be negative. As before, elementary arguments applied to (\ref{eq5.14}) imply that this term cannot be negative at a maximum in $D\backslash P$, and thus such a maximum can occur only on $P$. Also as before, we take $r_1\to 0$, $r_2\to \infty$ and since $v_m$ vanishes in both limits, the maximum of $v_m$, if it is greater than $\frac{1}{6}(C^+_{f^2})^2$, must occur on the initial boundary where $t=0$. 
Thus we obtain for any $(t,r)\in [0,T] \times [0,\infty)$ that \begin{equation} v_m(t,r)\le \max \left \{\frac{1}{6}(C^+_{f^2})^2,\max_{r\in[0,\infty)} \{v_m(0,r) \} \right \} \le C^+_v \ , \label{eq5.16} \end{equation} and $C^+_v$ does not depend on $m$. It also does not depend on $T$ and so the result extends to hold for all $(t,r)\in [0,T_M) \times [0,\infty)$. \end{proof} \begin{rem}\label{Rem5.6} {\rm For use in the next subsection, we observe that by virtue of this result $u_m$ is now bounded below, as well as above, on $(t,r)\in [0,T_M) \times [0,\infty)$ by a bound that depends only on the initial data for $f$ and so is independent of $m$. The proof is to observe that $u_m=-v_m/f^2\ge -C^+_v/C^-_{f^2}=:C^-_u$. We define \begin{equation} C_u:=\max \{ \vert C^{\pm}_u\vert \}\ , \label{eq5.17} \end{equation} which bounds $|u_m|$ and is independent of $m$.} \end{rem} \begin{cor}\label{Cor5.7} Proposition \ref{Prop5.5} extends to the case $m=2$ and yields \begin{equation} \lambda_2(t,r)\le \frac{2C^+_v}{C^-_{f^2}(1+t)}=:\frac{C^+_{\lambda_2}}{1+t}\ . \label{eq5.18} \end{equation} \end{cor} \begin{proof} The extension to $m=2$ follows exactly as in Corollary \ref{Cor5.3}. Equation (\ref{eq5.18}) follows directly from (\ref{eq5.12}). \end{proof} \begin{prop}\label{Prop5.8} $\vert \lambda_2 \vert$ is bounded on $[0,T_M)\times [0,\infty)$ and if $T_M=\infty$ then $\lambda_2$ converges uniformly to zero as $t\to\infty$. \end{prop} \begin{proof} Immediate from Corollaries \ref{Cor5.3} and \ref{Cor5.7}. \end{proof} \noindent In this regard, note that by Theorem \ref{Thm4.7} we {\it can} assume $T_M=\infty$ if we can bound $\lambda_1$, which we now proceed to do. \subsection{The Decay of $\lambda_1$} \noindent A lower bound and decay estimate on $\lambda_1$ is now easy to obtain.
It is quickest to work from the flow equation for the scalar curvature, which is \begin{eqnarray} \frac{\partial R}{\partial t} &=& \Delta R +\xi \cdot \nabla R + 2 R_{ij}R^{ij}\nonumber \\ &\ge&\Delta R +\xi \cdot \nabla R + \frac{2}{n} R^2\ , \label{eq5.19} \end{eqnarray} with $\xi=\xi_1dr$ given by (\ref{eq4.17}) and where we used the elementary identity $R^{ij}R_{ij}\ge \frac{1}{n}R^2$. Inequality (\ref{eq5.19}) gives a well-known minimum principle for $R$. Moreover, if we define \begin{equation} {\tilde R}:=(1+t)R\ , \label{eq5.20} \end{equation} we obtain from (\ref{eq5.19}) that \begin{equation} \frac{\partial {\tilde R}}{\partial t} \ge \Delta {\tilde R} + \xi \cdot \nabla {\tilde R} + \frac{1}{(1+t)} \left ( \frac{2}{n} {\tilde R}^2 + {\tilde R} \right )\ , \label{eq5.21} \end{equation} which also has a minimum principle. \begin{prop}\label{Prop5.9} If $R$ is the scalar curvature of a Ricci flow developing from asymptotically flat initial data on a manifold $M$ then there is a constant $C^-_R\le 0$ such that on $[0,T_M)\times[0,\infty)\ni (t,r)$ we have \begin{equation} R\ge \frac{C^-_R}{1+t} \ . \label{eq5.22} \end{equation} \end{prop} For notational convenience, we give the proof for the special case of interest, a rotationally symmetric flow on ${\mathbb R}^n$, but the proof clearly generalizes to arbitrary asymptotically flat flows. \begin{proof} First take $t\in[0,T]$, $T<T_M$. Let $B_0(a)$ be the ball of coordinate radius $r=a$ about the origin $0\in{\mathbb R}^n$ at time $t$. Applying elementary minimum principle arguments to (\ref{eq5.21}), it is clear that either the minimum of ${\tilde R}$ in $[0,T]\times B_0(a)$ occurs on the parabolic boundary $P$ or ${\tilde R}\ge -\frac{n}{2}$. Now the parabolic boundary has an initial component $t=0$ and a spatial component which is a sphere $r=a$ for all $t>0$. By asymptotic flatness, $R\to 0$ as $a\to \infty$ and hence ${\tilde R}\to 0$ as well.
Taking this limit, we conclude that if ${\tilde R}$ is anywhere less than $-\frac{n}{2}$, then the minimum of ${\tilde R}$ over all $(t,x)\in [0,T]\times {\mathbb R}^n$ exists and is realized on the initial boundary. Thus choose $C^-_R=\min \left \{ -\frac{n}{2}, \inf_r \{ R(0,r) \} \right \}$, which is obviously independent of $T$, so finally take $T\to T_M$. Then ${\tilde R}\ge C^-_R \Rightarrow R\ge C^-_R/(1+t)$ for all $(t,r)\in [0,T_M)\times [0,\infty)$. \end{proof} \begin{cor}\label{Cor5.10} $\lambda_1(t,r)$ is bounded below on $[0,T_M)\times [0,\infty)\ni (t,r)$ by \begin{equation} \lambda_1(t,r)\ge \frac{1}{(1+t)} \left ( \frac{C^-_R}{2(n-1)} - \frac{(n-2)C^+_v}{C^-_{f^2}} \right ) =: \frac{C^-_{\lambda_1}}{1+t} \ . \label{eq5.23} \end{equation} \end{cor} \begin{proof} This follows from the formula \begin{equation} R=2(n-1)\lambda_1+(n-1)(n-2)\lambda_2 \label{eq5.24} \end{equation} for the scalar curvature in terms of the sectional curvatures, equation (\ref{eq5.22}), and the upper bound (\ref{eq5.18}) on $\lambda_2$. \end{proof} Now we turn attention to finding an upper bound and decay estimate. We have to work harder than we did for the lower bound, but we can apply essentially the same strategy as we used to prove boundedness and convergence of $\lambda_2$. Once again, the main issue will be control of $\lambda_1$ at $r=0$, and we will be forced to work with a sequence of functions with known behaviour at $r=0$. This time, we have found that a choice well-suited to our purpose is given by \begin{Def}\label{Def5.11} {\em Let $f$ be defined by (\ref{eq4.13}) and therefore have all the properties outlined in Section 4.
For such an $f$, define the {\em $y_m$ functions}, $m\in \left (1,2\right ]$, on $[0,T_M) \times [0,\infty)$ by \begin{eqnarray} y_m(t,r)&:=&\left ( \frac{1+t}{1+r^{2-m}}\right ) \left \{ r \frac{\partial}{\partial r}\left [ \frac{1}{r^m} \left ( \frac{1}{f}-1\right ) \right ] \right \},\quad r>0, \label{eq5.25}\\ y_m(t,0)&:=&\lim_{r\to 0} y_m(t,r)\ . \nonumber \end{eqnarray} } \end{Def} We can extract $\lambda_1$ from the relation \begin{equation} \frac{y_m}{1+t}=\frac{r^2f}{(r^m+r^2)}\left ( \frac{m}{(1+f)}\lambda_2 - \lambda_1 \right ) \ . \label{eq5.26} \end{equation} Notice that $y_m(t,r)\to 0$ as $r\to 0$ whenever $m<2$. Calculating from (\ref{eq4.18}), we find that $y_m(t,r)$ obeys \begin{eqnarray} \frac{\partial y_m}{\partial t} &=& \frac{1}{f^2}\frac{\partial^2 y_m}{\partial r^2}+\frac{1}{r}\alpha_m\frac{\partial y_m}{\partial r} \nonumber\\ &&+\frac{1}{r^2}\biggl \{ \left [ \frac{2}{f}\left ( 2(m-1)r^m +mr^2\right ) y_m +1\right ] \frac{y_m}{1+t} \nonumber\\ &&+\beta_m y_m+(1+t)\gamma_m\biggr \} \ , \label{eq5.27} \end{eqnarray} where some of the coefficients have rather lengthy expressions so we have introduced the abbreviations \begin{eqnarray} \alpha_m&:=& \frac{2(r^{m}+r^2)}{f}\frac{y_m}{(1+t)}+\frac{4m-3}{f^2} -\frac{2m}{f}+n-2\nonumber\\ &&-\frac{2(m-2)r^{2-m}}{f^2(1+r^{2-m})} \ ,\label{eq5.28} \\ \beta_m&:=&\frac{7m^2-14m+4}{f^2} -\frac{m(6m-8)}{f} + (n-2)\left ( m-1-\frac{3}{f^2} \right ) \nonumber\\ &&+\frac{(m-2)r^{2-m}}{1+r^{2-m}}\left [ -\frac{(3m-2)}{f^2} + \frac{2m}{f}-(n-2)\right ]\ , \label{eq5.29} \\ \gamma_m&:=&\frac{1}{(r^m+r^2)} \left ( \frac{1}{f} -1\right ) \biggl \{ \frac{2m(m-1)(m-2)}{f^2} +\frac{2m^2(2-m)}{f}\nonumber\\ &&+(n-2)\left [ -m+\frac{m+2}{f}+\frac{2(1-m)}{f^2}\right ] \biggr \} \ . \label{eq5.30} \end{eqnarray} We now claim that the $y_m(t,r)$ are bounded below on $[0,T_M)\times [0,\infty)$ by a constant that is independent of $m$. 
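The small-$r$ behaviour just noted, $y_m(t,r)\to 0$ as $r\to 0$ for $m<2$, can be illustrated numerically, together with its rate $r^{2-m}$ for the smooth profiles considered here. In the sketch below the profile $f=1+cr^2$ and the values of $t$, $c$, $m$ are assumptions, standing in for the smooth $f$ of Section 4 with $f(t,0)=1$:

```python
import math

# Illustrative profile standing in for the smooth f of Section 4: f(t,0) = 1,
# so that 1/f - 1 = O(r^2) near the axis. t, c, m are assumed sample values.
t, c, m = 1.0, 0.2, 1.5

def y(r):
    # y_m(t,r) = (1+t)/(1+r^(2-m)) * r * d/dr[ r^(-m) (1/f - 1) ]
    h = r * 1e-4
    g = lambda s: (1.0 / (1.0 + c * s**2) - 1.0) / s**m
    dg = (g(r + h) - g(r - h)) / (2 * h)          # central difference
    return (1 + t) / (1 + r**(2 - m)) * r * dg

vals = [abs(y(10.0**-k)) for k in range(1, 5)]
# decay at the rate r^(2-m) = r^(1/2): roughly 10^(-1/2) per decade in r
assert all(b < a for a, b in zip(vals, vals[1:]))
for a, b in zip(vals, vals[1:]):
    assert 0.2 < b / a < 0.5
```

For $m=2$ the prefactor $r^{2-m}$ disappears and the limit at $r=0$ need not vanish, which is why the definition above treats $y_m(t,0)$ as a limit.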
Proceeding in our now usual fashion, let $T$ be such that $0<T<T_M$ and define $D:=[0,T]\times[0,\infty)$. As usual, because $y_m$ tends to zero for $r\to\infty$ and for $r\to 0$, either zero is the lower bound or \begin{equation} \inf_D y_m=:Y=y_m(t_0,r_0)<0 \label{eq5.31} \end{equation} for some $t_0$ and some $r_0>0$. In the latter case, either $t_0=0$ and therefore the minimum depends only on initial data $a(r)=f(0,r)$ and not on $m$ or $T$, or it occurs at some $t_0\in (0,T]$ and then the minimum obeys a quadratic inequality which we now state: \begin{lem}\label{Lem5.12} Let $y_m$ be defined on $[0,T]\times [0,\infty)$, $T<T_M$, by Definition \ref{Def5.11}. For $m<2$, if $y_m$ has a negative infimum $Y<0$, then this infimum is realized as a minimum at some $(t_0,r_0)$ where $r_0>0$ and either $t_0=0$ or \begin{equation} \frac{2(m-1)}{f(t_0,r_0)}Y^2+\left ( \frac{1+t_0}{r_0^m+r_0^2} \right ) \left [ \left ( 1+\beta_m(t_0,r_0)\right )Y+(1+t_0)\gamma_m(t_0,r_0) \right ] <0 \ . \label{eq5.32} \end{equation} \end{lem} \begin{proof} As discussed immediately above, a negative infimum must be realized at some $(t_0,r_0)$ where $r_0>0$. Then it follows by applying standard minimum principle arguments to equation (\ref{eq5.27}) that either the minimum occurs at $t_0=0$ or the nonderivative terms in (\ref{eq5.27}) are governed by the inequality \begin{eqnarray} 0&>&\frac{2}{f(t_0,r_0)}\left ( 2(m-1)r_0^m +mr_0^2\right ) Y^2 +Y \nonumber\\ &&+(1+t_0)\left [ \beta_m(t_0,r_0) Y+(1+t_0)\gamma_m(t_0,r_0)\right ]\ . \label{eq5.33} \end{eqnarray} But in the first term on the right-hand side, use that $\left ( 2(m-1)r_0^m +mr_0^2\right ) Y^2 > 2(m-1)\left ( r_0^m+r_0^2\right )Y^2$ for $m<2$ to replace the former by the latter. Replace the second term (the singleton $Y$) by $(1+t_0)Y$, which is no greater than $Y$ because $Y<0$. These replacements preserve the inequality. Divide by $r_0^m +r_0^2$ to complete the proof.
\end{proof} Now further restrict $m$ to some range of form $1<\kappa\le m<2$, so that the coefficient of $Y^2$ in (\ref{eq5.32}) is not arbitrarily small; for definiteness $\kappa=\frac{3}{2}\le m <2$ will do nicely. Then since the criterion (\ref{eq5.32}) is quadratic in $Y$ with positive coefficient of $Y^2$, it will be violated for $Y$ sufficiently negative. Thus $Y$ cannot be arbitrarily negative, giving a bound on $y_m$ expressed in terms of the coefficients in (\ref{eq5.32}). It remains therefore to manipulate these coefficients to produce a bound that is manifestly independent of $m$ and $T$. The proof is an exercise in elementary manipulation, but we will give the main points. \begin{prop}\label{Prop5.13} Let $\frac{3}{2}\le m<2$. Then for each $m$, the $y_m$ are bounded below on $[0,T_M) \times [0,\infty)$ by an $m$-independent constant. \end{prop} \begin{proof} As usual, we work on $t\in[0,T]$ with $T<T_M$ to obtain a bound which does not depend on $m$ or $T$ and then take $T\to T_M$ when we are done. If the lower bound is zero, which occurs at $r_0=0$ and as $r_0\to\infty$, then obviously it is independent of $m$ and $T$, so assume that the lower bound is negative. Then it is realized as a minimum at some $(t_0,r_0)\in[0,T]\times[0,\infty)$. If $t_0=0$, the lower bound is given by the initial data, so again it is clearly $m$- and $T$-independent. Therefore, assume $t_0>0$. Then the criterion (\ref{eq5.32}) applies. In this last case, we start with (\ref{eq5.32}) and seek to re-express, where possible, factors of the form $\frac{1+t_0}{r^m_0+r^2_0}$ in terms of the bounded quantity $u_m=\left ( \frac{1+t}{r^m+r^2}\right ) \left ( \frac{1}{f^2}-1 \right )$. The boundedness of this quantity is described in Remark \ref{Rem5.6}; since $f$ is also bounded, we can also make use of the equivalent form $\frac{f}{1+f}u_m=\left ( \frac{1+t}{r^m+r^2}\right ) \left ( \frac{1}{f}-1 \right )$.
For example, the term in (\ref{eq5.32}) that is constant in $Y$ can be written as (understanding all quantities to be evaluated at $(t_0,r_0)$) \begin{eqnarray} \frac{(1+t_0)^2}{r^m_0+r^2_0}\gamma_m&=& \left ( \frac{fu_m}{1+f} \right )^2 \biggl [ 2m(m-2)\left ( \frac{m-1}{f}-1 \right ) \nonumber\\ &&\qquad-(n-2)\left ( \frac{2(m-1)}{f}+m-4\right ) \biggr ] \nonumber \\ &&-2(m-2)(m+n-2)\left ( \frac{1+t_0}{r^m_0+r^2_0}\right ) \frac{fu_m}{1+f}\ . \label{eq5.34} \end{eqnarray} We can minimize the term proportional to $u_m^2$ over $\frac{3}{2}\le m \le 2$. In the second term, note that the coefficient $-2(m-2)(m+n-2)$ is positive for $\frac{3}{2}\le m < 2$. Therefore we write $-2(m-2)(m+n-2) \frac{fu_m}{1+f} \ge \left ( \frac{1}{2}-n\right )\frac{f}{1+f} |u_m(t_0,r_0)| \ge \left ( \frac{1}{2}-n\right )\frac{f}{1+f}C_u \ge \left ( \frac{1}{2}-n\right )C_u$, using (\ref{eq5.17}). This yields \begin{eqnarray} \frac{(1+t_0)^2}{r^m_0+r^2_0}\gamma_m&\ge& \left ( \frac{f u_m}{1+f} \right )^2\left [ -\frac{4}{3\sqrt{3}f}-2(n-2)\left ( 1-\frac{1}{f} \right ) \right ]\nonumber\\ &&+\left ( \frac{1}{2}-n\right ) C_u \left ( \frac{1+t_0}{r^m_0+r^2_0} \right )\nonumber\\ &\ge&-k_1+\left ( \frac{1}{2}-n\right )C_u \left ( \frac{1+t_0}{r^m_0+r^2_0} \right )\ , \label{eq5.35} \end{eqnarray} where $k_1$ is a (positive) constant independent of $m$, $T$, and $Y$.\footnote {For example, $k_1=C_u^2\left [ \frac{1}{C^-_f} +2(n-2)\right ]$ would do fine, where we write $C^-_f:=\sqrt{C^-_{f^2}}$.} The second term still contains an unwanted factor of $\frac{1+t_0}{r^m_0+r^2_0}$ with negative coefficient, but for $Y$ sufficiently negative we will be able to dominate this term with positive contributions coming from the part of the criterion (\ref{eq5.32}) that is linear in $Y$. 
To examine the linear term, start from the expression \begin{eqnarray} \left (\frac{1+t_0}{r_0^m+r_0^2}\right ) \left ( 1+\beta_m \right ) Y &=&\left (\frac{1+t_0}{r_0^m+r_0^2}\right )\biggl \{ 1+ \frac{7m^2-14m+4}{f^2} -\frac{m(6m-8)}{f}\nonumber\\ && +(n-2)\left ( m-1-\frac{3}{f^2} \right ) \label{eq5.36}\\ &&+\frac{(2-m)r^{2-m}}{1+r^{2-m}} \left [ \frac{3m-2}{f^2} -\frac{2m}{f}+n-2 \right ] \biggr \} Y\ . \nonumber \end{eqnarray} The terms in the last line simplify since we can use that $Y<0$, $\frac{3}{2}\le m < 2$, and $n\ge 3$ to write \begin{eqnarray} \frac{(2-m)r^{2-m}}{1+r^{2-m}} \left [ \frac{3m-2}{f^2} -\frac{2m}{f}+n-2 \right ] Y &>& \frac{(2-m)r^{2-m}}{1+r^{2-m}} \left [ \frac{3m-2}{f^2} +n-2 \right ] Y\nonumber \\ &>& (2-m)\left [ \frac{3m-2}{f^2} +n-2 \right ] Y\ . \label{eq5.37} \end{eqnarray} Now we can combine this result with (\ref{eq5.36}) and again absorb the factor of $\frac{1+t_0}{r^m_0+r^2_0}$, wherever possible, using $u_m$. We get \begin{eqnarray} \left (\frac{1+t_0}{r_0^m+r_0^2} \right ) \left ( 1+\beta_m \right ) Y &>& \frac{f u_m}{(1+f)} \biggl [ \frac{4m^2-6m-3(n-2)}{f}\nonumber\\ &&\qquad -2m^2 +2m -3(n-2) \biggr ]Y \nonumber\\ &&-\left ( 2m^2-2m +2n -5 \right ) \left ( \frac{1+t_0}{r^m_0+r^2_0} \right ) Y\nonumber\\ &\ge&\frac{fu_m}{(1+f)} \biggl [ \frac{4m^2-6m-3(n-2)}{f}\nonumber\\ &&\qquad -2m^2 +2m -3(n-2) \biggr ] Y \nonumber\\ &&-\left ( 2n-\frac{7}{2}\right ) \left ( \frac{1+t_0}{r^m_0+r^2_0} \right ) Y \ , \label{eq5.38} \end{eqnarray} where in the last line we minimized over $\frac{3}{2}\le m < 2$. It is again evident that this is the sum of a bounded term and a term involving $\frac{1+t_0}{r^m_0+r^2_0}$. Both these terms are linear in $Y$.
That is, \begin{equation} \left (\frac{1+t_0}{r_0^m+r_0^2} \right ) \left ( 1+\beta_m \right ) Y\ge k_2Y-\left ( 2n-\frac{7}{2}\right ) \left ( \frac{1+t_0}{r^m_0+r^2_0} \right ) Y \ , \label{eq5.39} \end{equation} where $k_2$ is a constant independent of $m$, $T$, and $Y$.\footnote {For example, from elementary considerations applied to (\ref{eq5.38}) we obtain that $k_2=8C_u$ is a suitable bound.} Inserting (\ref{eq5.35}) and (\ref{eq5.39}) into the criterion (\ref{eq5.32}) and using that $\frac{2(m-1)}{f(t_0,r_0)}Y^2\ge \frac{3}{f(t_0,r_0)}Y^2$ for $\frac{3}{2}\le m <2$, we obtain the following necessary condition for $Y<0$ to be the minimum value $y_m(t_0,r_0)$ attained at some $t_0>0$: \begin{eqnarray} 0&\ge&\frac{3}{f(t_0,r_0)}Y^2+k_2 Y -k_1\nonumber\\ &&+\left [\left ( \frac{1}{2}-n \right ) C_u - \left ( 2n-\frac{7}{2} \right )Y \right ] \left ( \frac{1+t_0}{r^m_0+r^2_0} \right ) \ . \label{eq5.40} \end{eqnarray} Then a necessary condition for $Y<\frac{1-2n}{4n-7}C_u$ to be the minimum value $y_m(t_0,r_0)$ attained at some $t_0>0$ is \begin{equation} 0>\frac{3}{f(t_0,r_0)}Y^2+k_2 Y -k_1\ , \label{eq5.41} \end{equation} which is clearly violated whenever \begin{equation} Y<C_Y:=\min \left \{ \left ( \frac{1-2n}{4n-7}\right ) C_u, -\frac{C^+_f}{6}\left [ k_2 + \sqrt{k_2^2+\frac{12}{C^-_f} k_1} \right ] \right \} \ , \label{eq5.42} \end{equation} where we use the short-hand $C^{\pm}_f:=\sqrt{C^{\pm}_{f^2}}$. We conclude that \begin{equation} y_m\ge C_y^-:=\min\left \{ C_Y,\inf_r \{ y_m(0,r) \} \right \} \label{eq5.43} \end{equation} on $[0,T]\times [0,\infty)$ and, since these bounds do not depend on $T$, taking $T\to T_M$ we see that they hold as well on $[0,T_M)\times [0,\infty)$. \end{proof} \begin{cor}\label{Cor5.14} There is a constant $C^+_{\lambda_1}$ such that \begin{equation} \lambda_1(t,r)\le \frac{C^+_{\lambda_1}}{1+t} \label{eq5.44} \end{equation} on $[0,T_M)\times [0,\infty)$. \end{cor} \begin{proof} First we prove that $y_2$ is bounded below by $C_y^-$.
As with Corollaries \ref{Cor5.3} and \ref{Cor5.7}, the map $m\mapsto y_m(t,r)$, with fixed $t$ and fixed $r>0$, is continuous, so the bound (\ref{eq5.43}) applies to $y_2(t,r)$ except possibly at $r=0$. Then the continuity of $y_2$ at $r=0$ implies that the bound holds there as well. Next, the $m=2$ case of (\ref{eq5.26}) yields \begin{equation} \lambda_1=\frac{2}{1+f}\lambda_2-\frac{y_2}{1+t}\ . \label{eq5.45} \end{equation} Using (\ref{eq5.18}) and the facts that $C^-_y\le 0$ and $C^+_v\ge 0$, we can write this as \begin{eqnarray} \lambda_1(t,r)&\le& \left (\frac{2}{1+f}\right )\frac{2C^+_v}{(1+t)C^-_{f^2}} -\frac{2C_y^-}{(1+t)f}\nonumber \\ &\le& \frac{1}{1+t} \left [ \frac{4C^+_v}{C^-_{f^2}} -\frac{2C^-_y}{C^-_f} \right ] \label{eq5.46}\ , \end{eqnarray} where we have used that $0<C^-_{f^2}\le f^2$ and $C^-_f:= \sqrt{C^-_{f^2}}$. Now let $C^+_{\lambda_1}$ equal the quantity in square brackets in the last line. \end{proof} We can now prove the main theorem. \subsection{Proof of Theorem 1.1} \noindent {\it Proof of Statement (i).} By Corollaries \ref{Cor5.3}, \ref{Cor5.7}, \ref{Cor5.10}, and \ref{Cor5.14}, the sectional curvatures in $[0,T_M)\times [0,\infty)$ are bounded above and below by bounds of the form \begin{equation} \vert \lambda_{1,2}\vert \le \frac{\vert C^{\pm}_{\lambda_{1,2}}\vert }{1+t} \le \vert C^{\pm}_{\lambda_{1,2}}\vert\ .\label{eq5.47} \end{equation} Thus, by Theorem \ref{Thm4.7}, we can take $T_M=\infty$ and can conclude that there is a constant $C_0$ such that \leqn{eq5.48}{ \sup_{x\in\mathbb{R}^n}|{\rm \overline{Rm}}(x,t)|_{\bar{g}(t,x)} \leq \frac{C_0}{1+t} \quad \forall \; t\geq 0. } This proves the existence for all $t\in[0,\infty)$ of the solution developing from the initial data, which is Statement (i) of the theorem, and also gives the $\ell =0$ estimate of (iii). \medskip \noindent {\it Proof of Statement (ii).} This is immediate from Theorem \ref{LocA}.
\medskip \noindent {\it Proof of Statement (iii).} Follows directly from \eqref{eq5.48} and Theorem 7.1 of \cite{Ham95}. \medskip \noindent{\it Proof of Statement (iv).} This follows from the Compactness Theorem 1.2 of \cite{Hamilton3} and statement (iii), provided the injectivity radius at the origin is $\ge\delta> 0$ for some $\delta$ independent of $t$. Since the metric is uniformly equivalent to the Euclidean metric and the sectional curvatures are uniformly bounded in time, this follows immediately from, for example, the Cheeger-Gromov-Taylor injectivity radius estimate (Theorem 4.7 of \cite{CGT}).\footnote {Even more simply, since the constant-$r$ surfaces are convex throughout the flow, there are no closed geodesics. Then it follows from the sectional curvature bound (\ref{eq5.48}) that ${\rm inj\ }({\mathbb R^n},g(t))\ge \frac{\pi \sqrt{1+t}}{\sqrt{C_0}}$. Since this gives a global bound on the injectivity radius, less powerful convergence theorems (e.g., Theorem 7.1.3 of \cite{Topping}) suffice to finish the proof.} \begin{comment} \noindent{\it Proof of Statement (iv).} Fix any $t$ and any $r=const>0$ hypersphere in $({\mathbb R^n},g(t))$. By rotational symmetry, every point on this hypersurface is umbilic: all of the principal curvatures are equal. By the absence of minimal hyperspheres (Corollary \ref{Cor4.4}), the mean curvature is positive and thus so is each principal curvature. Now assume a closed geodesic exists in $({\mathbb R^n},g(t))$. Then there would be a point where the $r$-coordinate along the geodesic attains a maximum, and there the geodesic would be tangent to a constant-$r$ hypersphere. This is impossible, since one of the principal curvatures would have to be $\le 0$ at the point of tangency. Thus, there are no closed geodesics in $({\mathbb R^n},g(t))$ for any $t\ge 0$. For a manifold with no closed geodesics and with sectional curvatures bounded above by a constant $k>0$, the injectivity radius is bounded below by $\pi/\sqrt{k}$.
In fact, in the present case, we have from (\ref{eq5.48}) that \begin{equation} {\rm inj\ }({\mathbb R^n},g(t))\ge \frac{\pi \sqrt{1+t}}{\sqrt{C_0}} \ . \label{eq5.54} \end{equation} {}From Statement (iii) and (\ref{eq5.54}), for {\it any} increasing, divergent sequence of times $t_i$ starting from some $t_0>0$ and for any convergent sequence of points $p_i\in {\mathbb R}^n$ (say choose all $p_i$ to be the origin), conditions 7.1.1 and 7.1.2 of \cite{Topping} apply. Then convergence follows from Theorem 7.1.3 of \cite{Topping}. \medskip \end{comment} \noindent{\it Proof of Statement (v).} Immediate from Remark \ref{mass}. \hfill $\Box$ \medskip
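For the reader's convenience, the arithmetic behind the footnote's injectivity radius bound is the standard estimate $\mathrm{inj}\ge\pi/\sqrt{k}$ for a manifold with no closed geodesics and sectional curvatures at most $k$; with the bound (\ref{eq5.48}) this reads (a sketch):

```latex
K_{\mathrm{sec}}\bigl(g(t)\bigr)\ \le\ \frac{C_0}{1+t}\ =:\ k(t)
\quad\Longrightarrow\quad
\mathrm{inj}\bigl(\mathbb{R}^n,g(t)\bigr)\ \ge\ \frac{\pi}{\sqrt{k(t)}}
\ =\ \frac{\pi\sqrt{1+t}}{\sqrt{C_0}}\ .
```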
\section{\textit{ab initio} Molecular Dynamics} The Car-Parrinello ab initio MD \cite{AIMD} studies presented herein were performed using Kohn-Sham Density Functional Theory. The CPMD approach is based on the Born-Oppenheimer approximation. It describes the metallic state, but not the magnetic, insulating state, of the cuprate materials. The computations were checked by comparison with the band structures of copper and the 214 material \cite{Pickett2}. The Fig. 1 results were obtained by starting with the Fig. 1a configuration and running to equilibrium. The Fig. 2b results were obtained by constraining the oxygens in the manner shown in Fig. 1a at a specified value of the oxygen displacement, and running to equilibrium; they therefore represent ``ion-relaxed'' potential energy curves. Doping was achieved by varying the K/Ca ratio in the ensemble. Samples of $4\times4$ and $6\times6$ unit cells were run with periodic boundary conditions. \section{Projection of 3-band model onto 1-band model} In matrix notation, consider a $d$-subspace and a $p$-subspace, represented by the Hamiltonians $H^{d}$ and $H^{p}$ respectively, connected by the coupling matrix $V^{pd}$, the Hamiltonian then being \begin{equation} H=\left[ \begin{array} [c]{cc} H^{d} & V^{dp}\\ V^{pd} & H^{p} \end{array} \right] . \end{equation} Projecting onto the $d$-subspace in perturbation theory, \begin{equation} \widetilde{H^{d}}=H^{d}+V^{dp}\left( \epsilon_{d}-H^{p}\right) ^{-1}V^{pd}. \end{equation} If $i,j$ are $d$-sites, and $l,m$ are $p$-orbitals, \begin{equation} \widetilde{H^{d}}_{ij}=\epsilon_{d}\delta_{ij}+\sum_{l,m}V_{il}^{dp}\left( \epsilon_{d}-H^{p}\right) _{lm}^{-1}V_{mj}^{pd}.
\end{equation} \begin{figure} [h] \begin{center} \includegraphics[ trim=0.526754in 0.526999in 0.529947in 0.792938in, height=2.6161in, width=3.6703in ]{Fig_1s.eps} \caption{$2p_{x}$, $2p_{y}$ and $3d_{x^{2}-y^{2}}$ orbitals in CuO$_{2}$ plane, illustrating $2p$ to $3d$ hopping integral $t_{pd}$.} \end{center} \end{figure} Now we shall neglect the $pp$ hopping matrix elements (Emery model), so that $\left( \epsilon_{d}-H^{p}\right) ^{-1}_{lm}$ is diagonal and only $l=m$ contributes; the $V$'s are nearest-neighbor hopping matrix elements of magnitude $t_{pd}>0$ (see Fig. 1). There are 2 processes, \begin{enumerate} \item $i,j$ nearest neighbor $\left\langle i,j\right\rangle $ on the $d$-lattice, when the 2 $V$'s have opposite sign (Fig. 1) \item $i=j$, when the 2 $V$'s have same sign \end{enumerate} giving \begin{equation} \widetilde{H^{d}}=\sum_{i,\sigma}\epsilon_{d}n_{i\sigma}+\sum_{\left\langle i,j\right\rangle ,\sigma}\frac{t_{p_{ij}d}^{2}}{\epsilon_{p_{ij}d}}\left( n_{i\sigma}+n_{j\sigma}\right) -\sum_{\left\langle i,j\right\rangle }\frac{t_{p_{ij}d}^{2}}{\epsilon_{p_{ij}d}}X_{ij}, \label{formulation1} \end{equation} where $\epsilon_{pd}=\epsilon_{d}-\epsilon_{p}>0$ is the ``oxide gap'' between the oxygen $2p$ orbital energy and the higher-lying Cu $3d_{x^{2}-y^{2}}$ orbital energy, $\sigma$ is spin, $p_{ij}$ is the $p$-orbital between $d$-sites $i$ and $j$, and the bond order operator $X_{ij}$\ is \begin{equation} X_{ij}=\sum_{\sigma}\left( c_{i\sigma}^{+}c_{j\sigma}+c_{j\sigma}^{+}c_{i\sigma}\right) .
\end{equation} Let us assume that the oxygen motion in some direction is $x$, and that it enters the $3$-band Hamiltonian via the $pd$ hopping integral \begin{equation} t_{pd}\rightarrow t_{pd}-v_{pd}x^{2},\text{ \ \ where }v_{pd}>0, \end{equation} then to order $v_{pd}$, and defining $t=t_{pd}^{2}/\epsilon_{pd}$, \begin{align} \widetilde{H^{d}} & =\left( \epsilon_{d}+2t\right) \sum_{i,\sigma}n_{i\sigma}-t\sum_{\left\langle i,j\right\rangle }X_{ij} \label{formulation2}\\ & -\frac{2t_{pd}v_{pd}}{\epsilon_{pd}}\sum_{\left\langle i,j\right\rangle ,\sigma}\left( n_{i\sigma}+n_{j\sigma}\right) x_{ij}^{2}+\frac{2t_{pd}v_{pd}}{\epsilon_{pd}}\sum_{\left\langle i,j\right\rangle }X_{ij}x_{ij}^{2}.\nonumber \end{align} Restoring our original notation \cite{FBM1}, $2t_{pd}v_{pd}/\epsilon_{pd}=v/2\sqrt{nn_{s}}$, the coupling $v$ is seen to be \textbf{positive}: \begin{align} \widetilde{H^{d}} & =\left( \epsilon_{d}+2t\right) \sum_{i,\sigma}n_{i\sigma}-t\sum_{\left\langle i,j\right\rangle }X_{ij}\label{formulation2a}\\ & -\frac{v}{2\sqrt{nn_{s}}}\sum_{\left\langle i,j\right\rangle ,\sigma}\left( n_{i\sigma}+n_{j\sigma}\right) x_{ij}^{2}+\frac{v}{2\sqrt{nn_{s}}}\sum_{\left\langle i,j\right\rangle }X_{ij}x_{ij}^{2}.\nonumber \end{align} We retrieve our previous 1-band model (next-nearest and next-next-nearest neighbor hoppings are dropped due to neglect of $t_{pp}$), but with an extra term diagonal in $d$-space. As regards the vibrator, the effect of the new term is to weaken the oxygen parabolic potential linearly with increasing electron occupation of the band (or stiffen the vibrator with increasing hole occupation). The number operator term is dominant over the hopping term ($\langle X\rangle$ maximizes at $\simeq0.6$).
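As a sanity check on the projection, the minimal case of a single $d$-orbital coupled to a single $p$-orbital is exactly solvable, and the projected level $\epsilon_d+t_{pd}^2/\epsilon_{pd}$ should agree with the exact $2\times2$ eigenvalue up to corrections of order $t_{pd}^4/\epsilon_{pd}^3$. A sketch with assumed illustrative values of $\epsilon_d$, $\epsilon_p$, $t_{pd}$ (not fitted cuprate parameters):

```python
import math

# One d-orbital (eps_d) hybridized with one p-orbital (eps_p) by t_pd;
# all three numbers below are illustrative assumptions.
eps_d, eps_p, t_pd = 0.0, -3.0, 0.05
eps_pd = eps_d - eps_p                      # the "oxide gap", > 0

# exact upper eigenvalue of the 2x2 matrix [[eps_d, t_pd], [t_pd, eps_p]]
exact = (eps_d + eps_p) / 2 + math.hypot((eps_d - eps_p) / 2, t_pd)
# second-order (projected) level, as in the text
projected = eps_d + t_pd**2 / eps_pd
assert abs(exact - projected) < 10 * t_pd**4 / eps_pd**3
```

Halving $t_{pd}$ shrinks the discrepancy by roughly a factor of $2^4$, as expected for a fourth-order error.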
Let us now alternatively assume that the oxygen motion enters the $3$-band hamiltonian through the interaction of the electrostatic potential with the charge on the oxygen% \begin{equation} \epsilon_{pd}\rightarrow\epsilon_{pd}+v_{p}x^{2}, \end{equation} where $v_{p}$ depends on a Madelung sum. In an ionic crystal it is arguable that the sign of $v_{p}$ will be positive since the environment of a negative ion typically consists of positive ions, so as the O-ion approaches them the local oxide gap $\epsilon_{pd}$ becomes larger. However in a perovskite structure the issue needs specific calculation. Expanding to first order% \begin{equation} \frac{1}{\epsilon_{pd}+v_{p}x^{2}}=\frac{1}{\epsilon_{pd}}-\frac{v_{p}x^{2}% }{\epsilon_{pd}^{2}}. \end{equation} Returning to Eq. (\ref{formulation1}), we insert the foregoing expansion into the 2 terms to obtain \begin{equation} \Delta\widetilde{H^{d}}\rightarrow-\frac{tv_{p}}{\epsilon_{pd}}\sum _{\left\langle i,j\right\rangle ,\sigma}\left( n_{i\sigma}+n_{j\sigma}\right) x_{ij}^{2}+\frac{tv_{p}}{\epsilon_{pd}}\sum_{\left\langle i,j\right\rangle }X_{ij}x_{ij}^{2}. \label{formulation3}% \end{equation} The effect of the oscillator correction (\ref{formulation3}) from this mechanism can be absorbed into (\ref{formulation2a}), giving the same final result but with \begin{equation} \frac{v}{2\sqrt{nn_{s}}}=\left( 2t_{pd}v_{pd}+tv_{p}\right) /\epsilon_{pd}. \end{equation} The sign of $v$ will be positive if the $t_{pd}v_{pd}$ term in parentheses is dominant, or if $v_{p}$ is positive as argued above. In this section we have formally derived the FBM coupling, showing the approximations involved explicitly, and demonstrated the existence of a new term in the coupling.
\end{equation} In $H$ the Cu sites, which define the unit cell, are defined as 2D integral-component vectors $\mathbf{i}=\left( i_{x},i_{y}\right) $ (lattice constant is taken as unity). The two oxygens in each unit cell $\mathbf{i}$ are located at the sites $\mathbf{i+}\widehat{\mathbf{\alpha}}/2$, where $\widehat{\mathbf{\alpha}}$ is a unit vector along the $x$- or $y$- axes, hence $\widehat{\mathbf{\alpha}}$ defines whether the oxygen is in a Cu-O-Cu bond oriented along the $x$- or $y$- direction. In the vibrator piece $H^{v}$ the oxygen degree of freedom is an $n$-component vector $\mathbf{x}_{\mathbf{i+}\widehat{\mathbf{\alpha}}/2}$, where $n=1$ if a single mode is dominant (as assumed in the manuscript), $n=2$ if the two modes transverse to the Cu-O-Cu bond are roughly equivalent, or $n=3$ if the two transverse modes and the along-bond mode can all be considered equivalent (a case now considered unlikely, as the along-bond mode is found to be weakly coupled). $H^{v}$ is given by \begin{equation} H^{v}=\sum_{\mathbf{i,\alpha=x}}^{\mathbf{y}}\left[ \frac{1}{2m}% p_{\mathbf{i+}\widehat{\mathbf{\alpha}}/2}^{2}+\frac{\chi_{0}}{2}% x_{\mathbf{i+}\widehat{\mathbf{\alpha}}/2}^{2}+\frac{w}{8n}\left( x_{\mathbf{i+}\widehat{\mathbf{\alpha}}/2}^{2}\right) ^{2}\right] . \label{vibrational}% \end{equation} In $H^{v}$ the scalar products $\mathbf{x}_{\mathbf{i+}\widehat{\mathbf{\alpha }}/2}\cdot\mathbf{x}_{\mathbf{i+}\widehat{\mathbf{\alpha}}/2}$ are abbreviated to $x_{\mathbf{i+}\widehat{\mathbf{\alpha}}/2}^{2}$, and a momentum $\mathbf{p}_{\mathbf{i+}\widehat{\mathbf{\alpha}}/2}$\ conjugate to coordinate $\mathbf{x}_{\mathbf{i+}\widehat{\mathbf{\alpha}}/2}$ is introduced, to define the vibrator kinetic energy, with $m$ the oxygen mass ($M$ in the Ms.). The "bare" bond force constant is $\chi_{0}$. The quartic term, with coefficient $w$, is assumed in the degenerate case to be radially ($n=2$) or spherically ($n=3$) symmetric.
The electronic piece $H^{e}$ is \begin{equation} H^{e}=-\frac{1}{2}\sum_{\mathbf{i,j},\sigma}t\left( \mathbf{i}-\mathbf{j}% \right) c_{\mathbf{i},\sigma}^{+}c_{\mathbf{j},\sigma}, \label{electronic}% \end{equation} where $c_{\mathbf{i},\sigma}^{+}$($c_{\mathbf{i},\sigma}$) denote respectively the creation (destruction) operators for the $3d_{x^{2}-y^{2}}$ orbital (or, more rigorously, the $d_{x^{2}-y^{2}}$-type Cu$3d$-O$2p$ antibonding Wannier function) on lattice site $\mathbf{i}$ of spin $\sigma$. The strongest interaction is the nearest neighbor hopping integral $t(\pm1,0)=t(0,\pm1)=t$, ($t$ is positive), followed by the next-nearest neighbor interaction $t(\pm1,\pm1)=t^{\prime}$, ($t^{\prime}$ is negative) and then the 3rd-nearest neighbor interaction $t(\pm2,0)=t(0,\pm2)=t^{\prime\prime}$ ($t^{\prime\prime }$ is positive). The band eigenvalues $\epsilon_{\mathbf{k}}$ of (\ref{electronic}) are \begin{equation} \epsilon_{\mathbf{k}}=-2t(\cos k_{x}+\cos k_{y})-4t^{\prime}\cos k_{x}\cos k_{y}-2t^{\prime\prime}(\cos2k_{x}+\cos2k_{y}). \label{band_structure}% \end{equation} The model band structure has a minimum at $\Gamma$ ($\mathbf{k}=(0,0)$), a maximum at $Z$ ($\mathbf{k}=(\pi,\pi)$), and saddle points (SP)\ at X ($\mathbf{k}=(\pi,0)$) and Y ($\mathbf{k}=(0,\pi)$). As a result of the saddle points, located at $\epsilon_{\mathbf{SP}}=4t^{\prime}-4t^{\prime\prime}$, the density of states (DOS) has a logarithmic peak (van Hove singularity or vHs) at $\epsilon_{\mathbf{SP}}$ which is found from ARPES and band structure calculations for near-optimally doped systems to lie close to the Fermi level \cite{OKA2bs,ZX214} -- the resulting high DOS at the Fermi level strongly enhances the FBM coupling. The total band width is $8t$.
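The stated features of the model band (\ref{band_structure}) are easy to verify numerically. The sketch below uses the illustrative hopping values quoted later in the text ($t=0.25$ eV, $t^{\prime}=-0.05$ eV, $t^{\prime\prime}=27.2$ meV) and checks the saddle-point energy, the band width, and the ordering of the extrema.

```python
import numpy as np

# Sketch: evaluate the tight-binding band of Eq. (band_structure) and
# check: minimum at Gamma, maximum at Z, saddle-point energy 4t'-4t''
# at X = (pi,0), total band width 8t.  Hoppings as quoted in the text.
t, tp, tpp = 0.25, -0.05, 0.0272   # eV

def eps(kx, ky):
    return (-2 * t * (np.cos(kx) + np.cos(ky))
            - 4 * tp * np.cos(kx) * np.cos(ky)
            - 2 * tpp * (np.cos(2 * kx) + np.cos(2 * ky)))

e_G = eps(0.0, 0.0)        # Gamma
e_Z = eps(np.pi, np.pi)    # Z
e_X = eps(np.pi, 0.0)      # saddle point X

assert np.isclose(e_X, 4 * tp - 4 * tpp)   # vHs energy
assert np.isclose(e_Z - e_G, 8 * t)        # band width 8t
assert e_G < e_X < e_Z
```

With these parameters the vHs sits at $4t^{\prime}-4t^{\prime\prime}\simeq-0.31$ eV relative to the band center, well inside the band as expected.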
The electron-vibrator coupling piece is \begin{align} H^{ev} & =\frac{v}{2\sqrt{nn_{s}}}\sum_{\mathbf{i,\alpha=x}}^{\mathbf{y}% }x_{\mathbf{i+}\widehat{\mathbf{\alpha}}/2}^{2}\left[ -\sum_{\sigma}\left( n_{\mathbf{i,\sigma}}+n_{_{\mathbf{i+}\widehat{\mathbf{\alpha}}},\sigma }\right) +X_{\mathbf{i+}\widehat{\mathbf{\alpha}}/2}\right] ;\text{ \ \ \ \ }\label{full_coupling}\\ X_{\mathbf{i+}\widehat{\mathbf{\alpha}}/2} & =\sum_{\sigma}\left( c_{\mathbf{i},\sigma}^{+}c_{\mathbf{i+}\widehat{\mathbf{\alpha}},\sigma }+c_{\mathbf{i+}\widehat{\mathbf{\alpha}},\sigma}^{+}c_{\mathbf{i},\sigma }\right) , \end{align} where the bond order operator $X$ is associated with the oxygen site at the bond center, and we have introduced the mixed degeneracy factor $\left( nn_{s}\right) ^{-1/2}$, where $n_{s}=2$ is the spin degeneracy, to make the term of order $\sqrt{nn_{s}}$, motivated by a version of large-$N$ theory jointly expanding in $1/n$ and $1/n_{s}$. In Ref. \cite{FBM1} only the $X$-piece of (\ref{full_coupling}) was included. The combination $-\sum_{\sigma}\left( n_{\mathbf{i,\sigma}}+n_{_{\mathbf{i+}% \widehat{\mathbf{\alpha}}},\sigma}\right) +X_{\mathbf{i+}\widehat {\mathbf{\alpha}}/2}$ can also be written in more compact form, defining the antibonding orbital $\left\vert a,\mathbf{i+}\widehat{\mathbf{\alpha}% }/2\right\rangle =\left( \left\vert \mathbf{i}\right\rangle -\left\vert \mathbf{i+}\widehat{\mathbf{\alpha}}\right\rangle \right) /\sqrt{2}$, with number operator $n_{\mathbf{i+}\widehat{\mathbf{\alpha}}/2}^{a}$ (summing over spin). In the manuscript $n^{a}$ is simplified to $N$.% \begin{equation} -\sum_{\sigma}\left( n_{\mathbf{i,\sigma}}+n_{\mathbf{i+\widehat {\mathbf{\alpha}},\sigma}}\right) +X_{\mathbf{i+}\widehat{\mathbf{\alpha}}% /2}=-2n_{\mathbf{i+}\widehat{\mathbf{\alpha}}/2}^{a}.
\label{interaction2}% \end{equation} The complete Hamiltonian $H=H^{v}+H^{e}+H^{ev}$ is then% \begin{align} H & =\sum_{\mathbf{i,\alpha=x}}^{\mathbf{y}}\left[ \frac{1}{2m}% p_{\mathbf{i+}\widehat{\mathbf{\alpha}}/2}^{2}+\frac{\chi_{0}}{2}% x_{\mathbf{i+}\widehat{\mathbf{\alpha}}/2}^{2}+\frac{w}{8n}\left( x_{\mathbf{i+}\widehat{\mathbf{\alpha}}/2}^{2}\right) ^{2}\right] -\frac {1}{2}\sum_{\mathbf{i,j},\sigma}t\left( \mathbf{i}-\mathbf{j}\right) c_{\mathbf{i},\sigma}^{+}c_{\mathbf{j},\sigma}\label{Hamiltonian}\\ & -\frac{v}{\sqrt{nn_{s}}}\sum_{\mathbf{i,\alpha=x}}^{\mathbf{y}% }x_{\mathbf{i+}\widehat{\mathbf{\alpha}}/2}^{2}n_{\mathbf{i+}\widehat {\mathbf{\alpha}}/2}^{a}.\nonumber \end{align} Note that in Eq.(\ref{Hamiltonian}) $K=v^{2}/w$ defines a coupling energy. \section{Determination of Coupling $v$} Calculation of the oxygen PE surface as a function of doping is not an ideal approach to calculating the FBM coupling constant for two reasons. First, the PE surface calculations, such as those illustrated in Fig. 2 in the Ms., are relaxed surfaces, i.e. when certain oxygen coordinates are held fixed all other atoms are allowed to find their minimum energy. This differs from the "vertical" couplings entering the FBM Hamiltonian, where atoms other than planar oxygen are assumed fixed in place. Secondly, the coupling in the FBM Hamiltonian is to the number of electrons $n^{a}$ in the antibonding orbital, which mainly involves states at the top of the $d$-band and will be filled mainly by adding electrons rather than holes as was done (for reasons of computational stability) in Fig. 2 of the Ms. The method adopted to calculate the coupling strength $v$ is based on comparing the shift in band structure energies when the oxygen location is perturbed with the same shift deduced from the FBM Hamiltonian. The FBM\ coupling (third term in Eq.(\ref{Hamiltonian})) leads to splittings in the tight-binding band structure.
If all oxygens in the $x$-oriented bonds are globally shifted by $u_{x}$, and all oxygens in the $y$-oriented bonds by $u_{y}$, there is a splitting between the band energy $\epsilon_{X}$ at the saddle point (SP) X$=\left( \pi,0\right) $ and the band energy $\epsilon_{Y}$ at Y$=\left( 0,\pi\right) $, given by $\epsilon_{X}% -\epsilon_{Y}=\sqrt{2/n}v\left( u_{x}^{2}-u_{y}^{2}\right) .$ By numerically calculating the band structure with first the $x$-oxygens displaced, and then the $y$-oxygens, and subtracting the corresponding band structure energies at, say, the SP X, any isotropic shift resulting from displacing a single oxygen can be cancelled out and the coupling $v$ determined. The results are shown in Table I. \section{Mean Field Approximation} Mean field theory is a useful step in investigating the properties of many models. In the FBM, the mean field approximation decouples the electronic and vibrational parts of the Hamiltonian. In the vibrational part, an expectation value of the electronic terms shifts the oscillator harmonic frequency, the expectation value being assumed spatially uniform, but it can be different in the $x$- and $y$- bonds (in this section we return to the notation in the Ms.):% \begin{align} H^{vib} & =\sum_{\left\langle i,j\right\rangle }\frac{p_{ij}^{2}}{2M}% +\frac{1}{2}\sum_{\left\langle i,j\right\rangle }\chi_{0}u_{ij}^{2}+\frac {w}{8}\sum_{\left\langle i,j\right\rangle }u_{ij}^{4} \label{effective_quartic}\\ & +\frac{v}{2\sqrt{2}}\sum_{\left\langle i,j\right\rangle ,\sigma}\left( 2-2p+\left\langle c_{i,\sigma}^{+}c_{j,\sigma}+c_{j,\sigma}^{+}c_{i,\sigma }\right\rangle \right) u_{ij}^{2}.\nonumber \end{align} $H^{vib}$ can easily be diagonalized in a harmonic oscillator basis.
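The diagonalization of the single-bond quartic vibrator in a harmonic oscillator basis can be sketched as follows. This is a generic illustration, not the production calculation: units $\hbar=M=1$ and a reference basis frequency $\omega_{0}=1$ are assumed, and the values of $\chi_{0}$ and $w$ below are illustrative rather than the Table I values.

```python
import numpy as np

# Sketch: diagonalize H = p^2/(2M) + (chi0/2) u^2 + (w/8) u^4 in a
# truncated harmonic-oscillator basis (hbar = M = 1, basis omega0 = 1).
N = 80
n = np.arange(N)
a = np.diag(np.sqrt(n[1:]), 1)            # annihilation operator
x = (a + a.T) / np.sqrt(2.0)              # position operator
p2 = -0.5 * (a.T - a) @ (a.T - a)         # p^2, with p = i(a^dag - a)/sqrt(2)

def spectrum(chi0, w):
    H = 0.5 * p2 + 0.5 * chi0 * (x @ x) + (w / 8.0) * np.linalg.matrix_power(x, 4)
    E, V = np.linalg.eigh(H)
    x2_gs = V[:, 0] @ (x @ x) @ V[:, 0]   # <u^2> in the ground state
    return E, x2_gs

# Harmonic check (w = 0, chi0 = 1): E0 = 1/2 and <u^2> = 1/2 exactly.
E_h, x2_h = spectrum(1.0, 0.0)
assert np.isclose(E_h[0], 0.5, atol=1e-8) and np.isclose(x2_h, 0.5, atol=1e-8)

# Negative chi0 (unstable bare bond) stabilized by the quartic term:
# the ground state spreads over the double well and <u^2> grows.
E, x2 = spectrum(-0.5, 0.4)
assert x2 > 0.5
```

The same routine, with the electronic expectation values folded into an effective $\chi$, is the vibrational half of the self-consistency loop described below.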
In the electronic part, the expectation value of the square of the oscillator amplitude has been taken,% \begin{equation} H^{el}=\sum_{\mathbf{k},\sigma}\epsilon_{\mathbf{k}}n_{\mathbf{k},\sigma }+\frac{v}{2\sqrt{2}}\sum_{\left\langle i,j\right\rangle ,\sigma}\left[ c_{i,\sigma}^{+}c_{j,\sigma}+c_{j,\sigma}^{+}c_{i,\sigma}\right] \left\langle u_{ij}^{2}\right\rangle , \label{effective_el}% \end{equation} giving a band structure problem in which there are new nearest-neighbor hopping terms $\left( v/2\sqrt{2}\right) \left[ c_{i,\sigma}^{+}% c_{j,\sigma}+c_{j,\sigma}^{+}c_{i,\sigma}\right] \left\langle u_{ij}% ^{2}\right\rangle $ (the uniform shift represented by the number operator terms does not change the band structure and is omitted) with the effect of reducing the nearest-neighbor hopping integral. Allowing the oscillator amplitude squared for the $x$-directed $\left\langle u_{ij}^{2}\right\rangle _{x}$ and $y$-directed $\left\langle u_{ij}^{2}\right\rangle _{y}$\ bonds to be unequal (the C4 symmetry-split case), the band structure is changed to \begin{equation} \widetilde{\epsilon}_{\mathbf{k}}=\epsilon_{\mathbf{k}}+\frac{v}{\sqrt{2}% }\left\langle u_{ij}^{2}\right\rangle _{x}\cos k_{x}+\frac{v}{\sqrt{2}% }\left\langle u_{ij}^{2}\right\rangle _{y}\cos k_{y}. \label{eff_bs}% \end{equation} Using the band structure $\widetilde{\epsilon}_{\mathbf{k}}$ (\ref{eff_bs}) the expectation values $\left\langle c_{i,\sigma}^{+}c_{j,\sigma}+c_{j,\sigma }^{+}c_{i,\sigma}\right\rangle $ for $x$-oriented and $y$-oriented bonds are calculated, hence defining two quartic Hamiltonians (\ref{effective_quartic}) whose exact solution yields the squared vibrator amplitudes $\left\langle u_{ij}^{2}\right\rangle _{x}$ and $\left\langle u_{ij}^{2}\right\rangle _{y}$. These interconnected electronic and quartic problems are then solved self-consistently as regards the expectation values. 
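The structure of the mean-field band (\ref{eff_bs}) can be illustrated with a short numerical sketch: the band separates exactly into an isotropic part plus a $\left(\cos k_{x}-\cos k_{y}\right)$ term, and the X--Y saddle-point splitting fixes the magnitude of $v$. The hoppings below follow the text; the values of $v$ and the squared amplitudes $\left\langle u^{2}\right\rangle_{x,y}$ are illustrative numbers, not fitted results.

```python
import numpy as np

# Sketch: decomposition of the mean-field band (eff_bs) into an
# isotropic part plus a d-wave (cos kx - cos ky) part, and the
# saddle-point splitting it produces.  Illustrative parameters.
t, tp, tpp = 0.25, -0.05, 0.0272
v, u2x, u2y = 0.0198, 0.030, 0.022

def eps0(kx, ky):
    return (-2 * t * (np.cos(kx) + np.cos(ky))
            - 4 * tp * np.cos(kx) * np.cos(ky)
            - 2 * tpp * (np.cos(2 * kx) + np.cos(2 * ky)))

def eps_tilde(kx, ky):   # Eq. (eff_bs)
    return eps0(kx, ky) + (v / np.sqrt(2)) * (u2x * np.cos(kx) + u2y * np.cos(ky))

d_ps = (v / np.sqrt(2)) * (u2x - u2y)     # d-wave coefficient

def eps_bar(kx, ky):     # isotropic part (renormalized hopping absorbed)
    return eps0(kx, ky) + (v / (2 * np.sqrt(2))) * (u2x + u2y) * (np.cos(kx) + np.cos(ky))

for kx, ky in [(0.3, 1.1), (np.pi, 0.0), (2.0, -0.7)]:
    assert np.isclose(eps_tilde(kx, ky),
                      eps_bar(kx, ky) + 0.5 * d_ps * (np.cos(kx) - np.cos(ky)))

# |eps_X - eps_Y| = sqrt(2) v |<u^2>_x - <u^2>_y|: the splitting
# determines v from the numerically computed band (sign conventions aside).
split = eps_tilde(np.pi, 0.0) - eps_tilde(0.0, np.pi)
assert np.isclose(abs(split), np.sqrt(2) * v * abs(u2x - u2y))
```

In a full calculation $\left\langle u^{2}\right\rangle_{x,y}$ would of course come out of the self-consistency loop rather than being fixed by hand.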
The parameters used were similar to those in Table I: $v=0.0198$ au, $w=0.085$ au, and the oscillator bare force constant $\chi_{0}=-0.0225$ au. The band structure is parametrized by the (negative of the) hopping matrix elements, the nearest-neighbor hopping matrix element $t=0.25$ eV, next-nearest-neighbor hopping m.e. $t^{\prime}=-0.05$ eV, and third-nearest-neighbor hopping m.e. $t^{\prime\prime}=27.2$ meV. We can rewrite the effective band structure as \begin{equation} \widetilde{\epsilon}_{\mathbf{k}}=\overline{\epsilon_{\mathbf{k}}}+\frac{1}% {2}\Delta_{ps}\left( \cos k_{x}-\cos k_{y}\right) , \end{equation} where $\Delta_{ps}=\left( v/\sqrt{2}\right) \left( \left\langle u_{ij}% ^{2}\right\rangle _{x}-\left\langle u_{ij}^{2}\right\rangle _{y}\right) $ is the pseudogap, and the renormalized nearest-neighbor hopping $\left( v/2\sqrt{2}\right) \left( \left\langle u_{ij}^{2}\right\rangle _{x}+\left\langle u_{ij}^{2}\right\rangle _{y}\right) $ is absorbed into $\overline{\epsilon_{\mathbf{k}}}$. The experimental data \cite{Kohsaka1} show that the pseudogap is not uniform over the sample as we have, for simplicity, assumed, but the coherence length over which the sign of $\Delta_{ps}$ varies is quite short, only a few lattice spacings. Probably as a result of this nanoscopic domain structure, the phase boundary of the pseudogap region is not typically found experimentally to constitute a true, sharp, phase boundary \cite{PGReview}. The variation of pseudogap with doping at low temperature seen in the contour plot (Manuscript Fig. 3) is similar to that seen in experimental data \cite{JinhoLee} (see Fig.
2)% \begin{figure} [ptb] \begin{center} \includegraphics[ trim=3.051982in 1.740398in 0.657645in 0.000000in, height=2.9698in, width=3.2188in ]% {Fig_3s.eps}% \caption{Histograms of measured energy gaps $\Delta$ from a sequence of samples with different dopings, black being strongly overdoped and blue strongly underdoped \cite{JinhoLee}.}% \end{center} \end{figure} \section{Intensity Variation in Experimental $R$-plots} In order to model the experimental behavior in the STM experiments \cite{Kohsaka1} on C4 symmetry-split systems, we calculated the projected DOS for a 3-band model with the basis of oxygen $2p_{x}$ and $2p_{y}$ orbitals and Cu $3d_{x^{2}-y^{2}}$ orbitals shown in Fig. 1. The $pd\sigma$ hopping matrix element is $t_{pd}=1.12$ eV. There are $pp$ hopping matrix elements between nearest-neighbor $2p_{x}$ and $2p_{y}$ orbitals given by $t_{pp}=-0.528$ eV, and an oxide gap $\epsilon_{d}-\epsilon_{p}=6$ eV. A spatially-uniform pseudogap is introduced by modifying the $t_{pd}$ matrix elements to $t_{p_{x}d}=t_{pd}+\Delta t$ (i.e. for the lower vibrational amplitude oxygen) and $t_{p_{y}d}=t_{pd}-\Delta t$ (i.e. for the higher vibrational amplitude oxygen), where $\Delta t=0.0375$ eV (the argument below only depends on these being semiquantitatively correct). The results for the DOS projected into the oxygen $2p_{x}$ orbitals (lying in $x$-oriented Cu-O-Cu bonds - see Fig. 1) and oxygen $2p_{y}$ orbitals are different, as seen in Fig. 3. The DOS peak associated with the van Hove singularity is seen in Fig. 3 to be split, the peak above the Fermi level being localized only on the lower vibrational amplitude oxygen, and the peak below the Fermi level being localized only on the higher vibrational amplitude oxygen. The STM $R$-map technique \cite{Kohsaka1} for detecting the C4 splitting experimentally involves the ratio $R$ of the tunneling current into the empty DOS to the hole current into the filled DOS.
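The splitting mechanism can be illustrated with a minimal 3-band sketch. The phase convention chosen for the $k$-dependent hoppings below is one standard choice and is an assumption (the text does not specify it); the parameters follow the text ($t_{pd}=1.12$ eV, $t_{pp}=-0.528$ eV, oxide gap 6 eV, $\Delta t=0.0375$ eV). At X$=(\pi,0)$ only the $2p_{x}$ orbital couples to $d$, so the antibonding energy there is controlled by $t_{p_{x}d}$, and at Y$=(0,\pi)$ by $t_{p_{y}d}$, which is what splits the vHs peak.

```python
import numpy as np

# Sketch: Emery 3-band model on the basis (d, px, py) with C4-split
# hoppings t_{px d} = t_pd + dt, t_{py d} = t_pd - dt.
# Phase convention is an assumption; parameters follow the text.
ed, ep = 0.0, -6.0                 # oxide gap 6 eV, ed taken as zero
t_pd, t_pp, dt = 1.12, -0.528, 0.0375

def h3(kx, ky, tx, ty):
    sx, sy = np.sin(kx / 2), np.sin(ky / 2)
    return np.array([[ed,            2j * tx * sx,        -2j * ty * sy],
                     [-2j * tx * sx, ep,                   4 * t_pp * sx * sy],
                     [2j * ty * sy,  4 * t_pp * sx * sy,   ep]], dtype=complex)

def anti_band(kx, ky, tx, ty):
    return np.linalg.eigvalsh(h3(kx, ky, tx, ty))[-1]   # antibonding band

e_X = anti_band(np.pi, 0.0, t_pd + dt, t_pd - dt)
e_Y = anti_band(0.0, np.pi, t_pd + dt, t_pd - dt)

# At X the py orbital decouples: the antibonding energy is the 2x2 result
half_gap = (ed - ep) / 2
exact_X = (ed + ep) / 2 + np.sqrt(half_gap**2 + 4 * (t_pd + dt) ** 2)
assert np.isclose(e_X, exact_X)
assert e_X > e_Y                                   # vHs splits for dt != 0
assert np.isclose(anti_band(np.pi, 0.0, t_pd, t_pd),
                  anti_band(0.0, np.pi, t_pd, t_pd))  # no split for dt = 0
```

The larger $t_{p_{x}d}$ pushes the antibonding energy at X up, consistent with the peak above the Fermi level residing on the lower-amplitude (stronger-hopping) oxygen.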
Evidently from Fig. 3, $R$ is predicted to be large on the low amplitude oxygens and small on the high amplitude oxygens, in agreement with the observation \cite{Kohsaka1}, in which the high amplitude oxygens are associated with dark streaks in the $R$-map, while the low amplitude oxygens are associated with bright spots. Note that the C4 splitting is characterized by nanoscale domains \cite{Kohsaka1}.% \begin{figure} [ptb] \begin{center} \includegraphics[ trim=1.736693in 1.083276in 2.837024in 1.097101in, height=2.7691in, width=2.8193in ]% {Fig_2s.eps}% \caption{Oxygen projected $2p$-DOS for oxygens in $x$-oriented and $y$-oriented bonds. The peak above the Fermi level is for the lower vibrational amplitude oxygen, and the peak below the Fermi level is for the higher vibrational amplitude oxygen.}% \end{center} \end{figure} \section{Lagrangian Formalism} Here we wish to derive the pairing interaction in the FBM. For this purpose a simple approach, akin to the familiar RPA, is to utilise the $1/N$ expansion technique, where $N$ is degeneracy, e.g. vibrational degeneracy $n$ or spin degeneracy $n_{s}$. In this approach the exact solution to the $n=1$ quartic vibrator used in the main Ms. will be replaced by the (very similar) mean field solution to the $n>1$ quartic vibrator \cite{FBM1}. The $1/N$ expansion works well e.g. for the Kondo problem \cite{Read-Newns}, the results remaining physical down to spin degeneracy $n_{s}=2$. Here we symmetrically co-expand in the inverse of the mode degeneracy $n$ and the spin degeneracy $n_{s}$ \cite{Coleman}, meaning by expressions such as "$1/N$" the joint orders \thinspace$1/n$ and $1/n_{s}$. The $1/N$ expansion technique is usually implemented within a Lagrangian/path integral formulation \cite{Coleman}, the approach we shall adopt here. We take $\hbar=1$, and omit the sum-normalizing factors $1/N_{x}^{2}$, which can always be restored by inspection.
Within the usual imaginary time (Euclidean) Lagrangian formulation at finite temperature the partition function can be written \begin{equation} Z=\int\mathcal{D}x\mathcal{D}ce^{-\int_{0}^{\beta}\mathcal{L}\left( \tau\right) d\tau}, \end{equation} where $\mathcal{L}\left( \tau\right) $ is the Lagrangian as a function of imaginary time $\tau$, $\beta$ is $1/T$ ($T$ is temperature, taking Boltzmann's constant $k_{B}=1$), $\int\mathcal{D}x\mathcal{D}c$ implies a path integral over the vibrator and fermionic (Grassmann) variables. The Lagrangian, readily derived from the Hamiltonian (\ref{Hamiltonian}), \begin{equation} \mathcal{L}=\mathcal{L}_{e}^{(1)}+\mathcal{L}_{v}^{(1)}+\mathcal{L}_{v}% ^{(2)}+\mathcal{L}_{ev}^{(2)}, \label{Lagrangian}% \end{equation} comprises the previously described terms (\ref{electronic}), (\ref{vibrational}), and (\ref{full_coupling}) \begin{align} \mathcal{L}_{e}^{(1)} & =\sum_{\mathbf{i,}\sigma}c_{\mathbf{i},\sigma}% ^{+}\frac{\partial}{\partial\tau}c_{\mathbf{i,\sigma}}-\frac{1}{2}% \sum_{\mathbf{i,j},\sigma}t\left( \mathbf{i}-\mathbf{j}\right) c_{\mathbf{i},\sigma}^{+}c_{\mathbf{j},\sigma},\\ \mathcal{L}_{v}^{(1)} & =\frac{1}{2}\sum_{\mathbf{i,\alpha}}\left[ m\dot {x}_{\mathbf{i+}\widehat{\mathbf{\alpha}}/2}^{2}+\chi_{0}x_{\mathbf{i+}% \widehat{\mathbf{\alpha}}/2}^{2}\right] ,\nonumber\\ \mathcal{L}_{v}^{(2)} & =\frac{w}{8n}\sum_{\mathbf{i,\alpha}}\left( x_{\mathbf{i+}\widehat{\mathbf{\alpha}}/2}^{2}\right) ^{2},\nonumber\\ \mathcal{L}_{ev}^{(2)} & =-\frac{v}{\sqrt{nn_{s}}}\sum_{\mathbf{i,\alpha=x}% }^{\mathbf{y}}x_{\mathbf{i+}\widehat{\mathbf{\alpha}}/2}^{2}n_{\mathbf{i+}% \widehat{\mathbf{\alpha}}/2}^{a}.\nonumber \end{align} A kinetic energy for the fermion variables is introduced in the first line; otherwise the terms are the same as in Eqs. (\ref{electronic}), (\ref{vibrational}), and (\ref{full_coupling}). In the following we extend the earlier FBM results \cite{FBM1} to include the more complete form of coupling derived in Sec. II.
\section{Results} We implement a Stratonovich decoupling on the bilinear terms $\mathcal{L}% _{v}^{(2)}$ and $\mathcal{L}_{ev}^{(2)}$ in the Lagrangian, generating path integrals over new fields $z_{v,\mathbf{\mathbf{i+}\widehat{\mathbf{\alpha}% }/2}}$ and $z_{e,\mathbf{\mathbf{i+}\widehat{\mathbf{\alpha}}/2}}$ defined on each Cu-O-Cu bond. Later this will make it possible to implement the path integrals over the Boson and Fermion fields under the $z$-path integral. Now the partition function is written in terms of the action $S$% \begin{equation} Z=\int\mathcal{D}z\mathcal{D}x\mathcal{D}ce^{-\beta\mathcal{S}}, \end{equation} which is again decomposed into the terms \begin{equation} S=S_{v}^{(1)}+S_{e}^{(1)}+S_{z}^{(2)}+S_{zv}+S_{ze}, \label{Action_full}% \end{equation} defined by \begin{align} S_{v}^{(1)} & =\frac{m}{2}\sum_{\mathbf{k},m,\alpha}\left( \omega_{m}% ^{2}+\overline{\omega}_{\alpha}^{2}\right) x_{-\mathbf{k},-m,\alpha }x_{\mathbf{k},m,\alpha},\\ S_{e}^{(1)} & =\sum_{\mathbf{k},m,\sigma}\left( \overline{\epsilon }_{\mathbf{k}}-i\nu_{m}\right) c_{\mathbf{k},m,\sigma}^{+}c_{\mathbf{k}% ,m,\sigma},\nonumber\\ S_{z}^{(2)} & =\frac{n_{s}}{2K}\sum_{\mathbf{q},n,\alpha}z_{-\mathbf{q}% ,-n,\alpha}^{e}z_{\mathbf{q},n,\alpha}^{e}-\frac{2\sqrt{nn_{s}}}{v}% \sum_{\mathbf{q},\alpha,n}z_{-\mathbf{q},-n,\alpha}^{e}z_{\mathbf{q},n,\alpha }^{v},\nonumber\\ S_{zv} & =-\sum_{\mathbf{q,k},n,m,\alpha}z_{-\mathbf{q},-n,\alpha}% ^{v}x_{-\mathbf{k},-m,\alpha}x_{\mathbf{k+q},m+n,\alpha}+\sum_{\alpha }z_{\mathbf{0},0,\alpha}^{v}\left\langle x^{2}\right\rangle ,\nonumber\\ S_{ze} & =\sum_{\mathbf{q,k},n,m,\sigma,\alpha}\psi_{\mathbf{k,k}+\mathbf{q}% }^{\alpha}z_{-\mathbf{q},-n,\alpha}^{e}c_{\mathbf{k},m,\sigma}^{+}% c_{\mathbf{k+q},m+n,\sigma}-2\sum_{\alpha}z_{\mathbf{0},0,\alpha}% ^{e}\left\langle n^{a}\right\rangle .\nonumber \end{align} Here% \begin{align} x(t) & =\sum_{n}e^{-i\omega_{n}t}x_{n},\text{ \ }\omega_{n}=2n\pi T;\text{ \ \ \ \ \ \ \ \ \ \ Bose and z fields,}\\ c(t) &
=\sum_{n}e^{-i\nu_{n}t}c_{n},\text{ \ }\nu_{n}=\left( 2n+1\right) \pi T;\text{ \ \ Fermion fields,}\nonumber \end{align} define the Fourier series converting to Bosonic and Fermionic Matsubara frequencies $\omega_{n}$ and $\nu_{n}$, and a coupling amplitude $\psi_{\mathbf{k,k}^{\prime}}^{\alpha}$ different from that in \cite{FBM1} appears \begin{equation} \psi_{\mathbf{k,k}^{\prime}}^{\alpha}=4\sin\left[ k_{\alpha}/2\right] \sin\left[ k_{\alpha}^{\prime}/2\right] . \label{chi}% \end{equation} The band structure $\overline{\epsilon}_{\mathbf{k}}$ and vibrator frequencies $\overline{\omega}_{\alpha}$ now involve mean field quantities, an effective nearest-neighbor hopping integral $t_{\alpha}$ (\ref{t_alpha}) defined by% \begin{equation} t_{\alpha}=t+z_{e}^{\alpha}=t-\frac{v}{2\sqrt{nn_{s}}}\left\langle x^{2}\right\rangle _{\alpha}, \label{t_alpha}% \end{equation} physically meaning that the effective hopping is weakened by the vibrational displacement of the oxygen, and vibration frequencies $\overline{\omega }_{\alpha}$ given by (\ref{freq2})% \begin{align} m\overline{\omega}_{\alpha}^{2} & =\chi_{0}-2z_{v}^{\alpha}\label{freq2}\\ & =\chi_{0}-\frac{2v}{\sqrt{nn_{s}}}\left\langle n^{a}\right\rangle _{\alpha }+\frac{w}{2n}\left\langle x^{2}\right\rangle _{\alpha},\nonumber \end{align} physically meaning that the quasiharmonic vibrator is softened by the number of antibonding electrons $\left\langle n^{a}\right\rangle _{\alpha}$, and stiffened by the effect of the quartic potential.
On implementing the path integrals over the Boson and Fermion fields under the $z$-path integral (here we drop the $\alpha$-dependence of the mean field quantities for simplicity), we obtain to order $(1/N)$ the quasi-harmonic action \begin{equation} S=\frac{1}{2}\sum_{\mathbf{q},n}\left[ z_{-\mathbf{q},-n,x}^{v}% ,z_{-\mathbf{q},-n,y}^{v},z_{-\mathbf{q},-n,x}^{e},z_{-\mathbf{q},-n,y}% ^{e}\right] \mathbf{A}(\mathbf{q},n)\left[ \begin{array} [c]{c}% z_{\mathbf{q},n,x}^{v}\\ z_{\mathbf{q},n,y}^{v}\\ z_{\mathbf{q},n,x}^{e}\\ z_{\mathbf{q},n,y}^{e}% \end{array} \right] , \end{equation} where the matrix $\mathbf{A}$ is given by% \begin{equation} \mathbf{A}(\mathbf{q},n)=\left[ \begin{array} [c]{cccc}% -nD_{2}(n) & 0 & -\frac{2\sqrt{nn_{s}}}{v} & 0\\ 0 & -nD_{2}(n) & 0 & -\frac{2\sqrt{nn_{s}}}{v}\\ -\frac{2\sqrt{nn_{s}}}{v} & 0 & \frac{n_{s}}{K}\left( 1-KR_{xx}% (\mathbf{q},n)\right) & -n_{s}R_{xy}(\mathbf{q},n)\\ 0 & -\frac{2\sqrt{nn_{s}}}{v} & -n_{s}R_{yx}(\mathbf{q},n) & \frac{n_{s}}% {K}\left( 1-KR_{yy}(\mathbf{q},n)\right) \end{array} \right] , \end{equation} and the response functions (RF) are defined by \begin{equation} D_{2}\left( n\right) =\frac{2}{m^{2}\overline{\omega}}\left[ \frac{1}{\left( \omega_{n}^{2}+4\overline{\omega}^{2}\right) }\coth\left( \frac{\overline{\omega}}{2T}\right) +\frac{\delta_{n,0}}{8T\overline{\omega }\sinh^{2}\left( \frac{\overline{\omega}}{2T}\right) }\right] , \label{2phonon}% \end{equation} and ($f(\epsilon)$ is the Fermi function)% \begin{equation} R_{\alpha\beta}(\mathbf{q},n)=-\sum_{\mathbf{k}}\frac{f(\epsilon_{\mathbf{k}% })-f(\epsilon_{\mathbf{k+q}})}{\epsilon_{\mathbf{k}}-\epsilon_{\mathbf{k+q}% }+i\omega_{n}}\psi_{\mathbf{k,k+q}}^{\alpha}\psi_{\mathbf{k+q,k}}^{\beta}. \end{equation} \section{Discussion} The important regions of $k$-space in the Brillouin Zone are the high-density of states regions around the saddle points (SP)\ at $\mathbf{k}=(0,\pi)$ (termed Y), and $(\pi,0)$ (termed X).
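The static limit of the electronic response function can be evaluated directly from its definition; the sketch below does so on a $k$-grid, with the coupling amplitude $\psi^{x}$ from the equation above. The band parameters, temperature, and chemical potential are illustrative assumptions, and near-degenerate terms are replaced by their well-defined limit $-f^{\prime}(\epsilon)$.

```python
import numpy as np

# Sketch: static (n = 0) response function R_xx(q) with
# psi^x_{k,k'} = 4 sin(kx/2) sin(kx'/2), on an N x N k-grid.
# Band, temperature T and chemical potential mu are illustrative.
t, tp, tpp, T, mu = 0.25, -0.05, 0.0272, 0.02, -0.3

def eps(kx, ky):
    return (-2 * t * (np.cos(kx) + np.cos(ky))
            - 4 * tp * np.cos(kx) * np.cos(ky)
            - 2 * tpp * (np.cos(2 * kx) + np.cos(2 * ky)))

def fermi(e):
    return 1.0 / (np.exp(np.clip((e - mu) / T, -500, 500)) + 1.0)

def R_xx(qx, qy, N=64):
    k = 2 * np.pi * np.arange(N) / N - np.pi
    kx, ky = np.meshgrid(k, k, indexing="ij")
    e1, e2 = eps(kx, ky), eps(kx + qx, ky + qy)
    de = e1 - e2
    safe = np.abs(de) > 1e-10
    # Lindhard kernel -(f(e1)-f(e2))/(e1-e2); limit -f'(e) = f(1-f)/T
    kern = np.where(safe,
                    -(fermi(e1) - fermi(e2)) / np.where(safe, de, 1.0),
                    fermi(e1) * (1.0 - fermi(e1)) / T)
    psi2 = (4 * np.sin(kx / 2) * np.sin((kx + qx) / 2)) ** 2
    return np.sum(kern * psi2) / N**2

# The kernel and psi^2 are both non-negative, so R_xx(q) > 0,
# consistent with K R_xx -> 1 signalling the instability in A.
assert R_xx(0.3, 0.1) > 0.0
assert R_xx(np.pi, 0.0) > 0.0
```

Because $\psi^{x}$ weights the zone-boundary region $k_{x}=\pm\pi$, this $R_{xx}$ is dominated by the neighborhood of the SP X, which is the near-diagonal structure invoked in the Discussion.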
The key issue is to understand how the FBM behaves at the SP. Because of the $\psi$-factors, the RF are close to being diagonal if we assume only regions around the SP are important in the $k$-sum. Therefore the $A$-matrix separates into a component having a pole in the $x$-channel, and a component having a pole in the $y$-channel. At zero frequency and wavevector, these poles correspond to the onset of the phase boundary for the PG at $T^{\ast}$. As regards pairing, we consider the interaction for a pair to scatter from $(\mathbf{k,-k})$ to $(\mathbf{k}% ^{\prime}\mathbf{,-k}^{\prime})$, which involves the factor $\left( \psi_{\mathbf{k,k}^{\prime}}^{\alpha}\right) ^{2}$. The pair will scatter either with $\mathbf{k}$ and $\mathbf{k}^{\prime}$ lying in the neighborhood of the point X, which involves the factor $\left( \psi_{\mathbf{k,k}^{\prime}}% ^{x}\right) ^{2}\sim16$, in which case the inverse of the $A$-matrix in the $x$-channel is involved, or in the neighborhood of the point Y, which involves the factor $\left( \psi_{\mathbf{k,k}^{\prime}}^{y}\right) ^{2}\sim16$, in which case the inverse of the $A$-matrix in the $y$-channel is involved. The interaction has the classic form required for $d$-wave pairing, in that the interaction is attractive (in fact close to singular) for $(\mathbf{k-k}% ^{\prime})$ small, and at large $(\mathbf{k-k}^{\prime})$ the attraction dies off (in fact, because of the short-range repulsive interaction $U$, which is flat in $k$-space, the interaction actually becomes repulsive at large $(\mathbf{k-k}^{\prime})$).
The situation for the full FBM Hamiltonian (\ref{Hamiltonian}) is similar to, but different in detail from, that for the interaction which considers only the $X$-term in the electronic coupling (\ref{interaction2}) \cite{FBM1}, where instead of the coupling amplitude $\psi_{\mathbf{k,k}^{\prime}}^{\alpha}$ we have the amplitude% \begin{equation} \chi_{\mathbf{k,k}^{\prime}}^{\alpha}=-2\cos\left( \frac{k_{\alpha}% +k_{\alpha}^{\prime}}{2}\right) . \end{equation} This function leads to a different $A$-matrix \cite{FBM1}, in which only a component with $d$-symmetry (the $z$'s in the $x$- and $y$- channels having opposite sign) is singular, the other, $s$-channel, being nonsingular. Now scattering around either SP X or SP Y is possible with the factor $\left( \chi_{\mathbf{k,k}^{\prime}}^{\alpha}\right) ^{2}\sim4$ for either $\alpha=x$ or $\alpha=y$. However only one combination of channels is singular, so we assume that this can be considered as if there is only one effective channel for scattering around each SP, just as in the case where both terms in the electronic coupling (\ref{interaction2}) are present. Hence we conclude, on the basis of dominance by the SP, that the pairing interaction in the case of the full electronic coupling (\ref{interaction2}) is $4\times$ as large as that in the reduced interaction considered earlier. This is the result cited in the manuscript, which however needs verification by numerical solution of the gap equation.
\newcommand{\sezione}[1]{\section{#1}\setcounter{equation}{0}} \begin{document} \title{Nonexistence of solutions for Dirichlet problems with supercritical growth in tubular domains } \maketitle \vspace{5mm} \begin{center} { {\bf Riccardo MOLLE$^a$,\quad Donato PASSASEO$^b$}} \vspace{5mm} {\em ${\phantom{1}}^a$Dipartimento di Matematica, Universit\`a di Roma ``Tor Vergata'',\linebreak Via della Ricerca Scientifica n. 1, 00133 Roma, Italy.\\ e-mail: [email protected]} \vspace{2mm} {\em ${\phantom{1}}^b$Dipartimento di Matematica ``E. De Giorgi'', Universit\`a di Lecce,\linebreak P.O. Box 193, 73100 Lecce, Italy. } \end{center} \vspace{5mm} {\small {\sc \noindent \ \ Abstract.} - We deal with Dirichlet problems of the form $$ \Delta u+f(u)=0 \mbox{ in }\Omega,\qquad u=0\ \mbox{ on }\partial \Omega $$ where $\Omega$ is a bounded domain of $\mathbb{R}^n$, $n\ge 3$, and $f$ has supercritical growth from the viewpoint of Sobolev embedding. In particular, we consider the case where $\Omega$ is a tubular domain $T_{\varepsilon}(\Gamma_k)$ with thickness ${\varepsilon}>0$ and centre $\Gamma_k$, a $k$-dimensional, smooth, compact submanifold of $\mathbb{R}^n$. Our main result concerns the case where $k=1$ and $\Gamma_k$ is contractible in itself. In this case we prove that the problem does not have nontrivial solutions for ${\varepsilon}>0$ small enough. When $k\ge 2$ or $\Gamma_k$ is noncontractible in itself we obtain weaker nonexistence results. Some examples show that all these results are sharp as regards the assumptions on $k$ and $f$. \vspace{3mm} {\em \noindent \ \ MSC:} 35J20; 35J60; 35J65. \vspace{1mm} {\em \noindent \ \ Keywords:} Supercritical Sobolev exponents. Integral identities. Nonexistence results. Tubular domains.
} \sezione{Introduction} The results we present in this paper are concerned with existence or nonexistence of nontrivial solutions for Dirichlet problems of the form \begin{equation} \label{*} \Delta u+f(u)=0\ \mbox{ in }\ \Omega,\qquad u=0\ \mbox{ on }\ \partial\Omega, \end{equation} where $\Omega$ is a bounded domain of $\mathbb{R}^n$, $n\ge 3$ and $f$ has supercritical growth from the viewpoint of the Sobolev embedding. \noindent Let us consider, for example, the case where $f(t)=|t|^{p-2}t$ $\forall t\in\mathbb{R}$ (this function obviously satisfies the condition (\ref{f}) we use in this paper). In this case, a well known nonexistence result of Pohozaev (see \cite{Po}) says that the Dirichlet problem \begin{equation} \label{p} \Delta u+|u|^{p-2}u=0\ \mbox{ in }\ \Omega,\qquad u=0\ \mbox{ on }\ \partial\Omega \end{equation} has only the trivial solution $u\equiv 0$ when $\Omega$ is starshaped and $p\ge{2n\over n-2}$ (the critical Sobolev exponent). \noindent On the other hand, if $\Omega$ is an annulus it is easy to find infinitely many radial solutions for all $p>1$ (as pointed out by Kazdan and Warner in \cite{KW}). Thus, it is natural to ask whether or not the nonexistence result of Pohozaev can be extended to non starshaped domains and the existence result in the annulus can be extended, for example, to all noncontractible domains of $\mathbb{R}^n$. \noindent Following some stimulating questions raised by Brezis, Nirenberg, Rabinowitz, etc. (see \cite{B,BN}) many results have been obtained, relating nonexistence, existence and multiplicity of nontrivial solutions to the shape of $\Omega$ (see \cite{Di,D88,Pmm89, P93,P94,Pl92,P4092,Ptmna96,Pd98,MPcvpde06,MPaihp06,MPcras02,MPcras2002}, etc.).
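For the reader's convenience, the identity behind the nonexistence result quoted above can be recalled; the following is the standard sketch for a smooth solution of (\ref{p}), obtained multiplying the equation by $x\cdot\nabla u$ and integrating by parts (see \cite{Po}).

```latex
% Pohozaev identity for a smooth solution of problem (\ref{p}):
\[
\frac{n-2}{2}\int_\Omega|\nabla u|^2\,dx-\frac{n}{p}\int_\Omega|u|^p\,dx
+\frac{1}{2}\int_{\partial\Omega}(x\cdot\nu)
\left|\frac{\partial u}{\partial\nu}\right|^2\,d\sigma=0.
\]
% Combined with $\int_\Omega|\nabla u|^2\,dx=\int_\Omega|u|^p\,dx$
% (multiply the equation by $u$ and integrate), this yields
\[
\left(\frac{n-2}{2}-\frac{n}{p}\right)\int_\Omega|u|^p\,dx
=-\frac{1}{2}\int_{\partial\Omega}(x\cdot\nu)
\left|\frac{\partial u}{\partial\nu}\right|^2\,d\sigma\le 0
\]
% when $\Omega$ is starshaped with respect to the origin
% ($x\cdot\nu\ge 0$ on $\partial\Omega$), forcing $u\equiv 0$ for
% $p>\frac{2n}{n-2}$ (and, at the critical exponent, via
% $\partial u/\partial\nu\equiv 0$ on $\partial\Omega$).
```

The integral identities used later in this paper for tubular domains are obtained in the same spirit, with multipliers adapted to the geometry of $T_{\varepsilon}(\Gamma_k)$.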
\noindent In the present paper our aim is to show that, even if the Pohozaev nonexistence result cannot be extended to all the contractible domains of $\mathbb{R}^n$, one can prove that there exist contractible non starshaped domains $\Omega$, which may be very different from the starshaped ones and even arbitrarily close to noncontractible domains, such that the Dirichlet problem (\ref{p}) has only the trivial solution $u\equiv 0$ for all $p>{2n\over n-2}$. \noindent In order to construct such domains, we use suitable Pohozaev type integral identities in tubular domains $\Omega=T_{\varepsilon}(\Gamma_k)$ with thickness ${\varepsilon}>0$ and centre $\Gamma_k$, where $\Gamma_k$ is a $k$-dimensional, compact, smooth submanifold of $\mathbb{R}^n$. \noindent If $k=1$, $\Gamma_k$ is contractible in itself and $p>{2n\over n-2}$, we prove that there exists $\bar{\varepsilon}>0$ such that, for all ${\varepsilon}\in(0,\bar{\varepsilon})$, the Dirichlet problem (\ref{p}) with $\Omega=T_{\varepsilon}(\Gamma_k)$ does not have any nontrivial solution (this nonexistence result follows, as a particular case, from Theorem \ref{T2.1}). \noindent Let us point out that, if $k=1$ but $\Gamma_k$ is noncontractible in itself or if $k>1$, a nonexistence result analogous to Theorem \ref{T2.1} cannot hold under the assumption $p>{2n\over n-2}$. In fact, the method we use in Theorem \ref{T2.1} fails when $k=1$ and $\Gamma_k$ is noncontractible because the multipliers to be used in the Pohozaev type integral identity are not well defined. Using other multipliers, we obtain a weaker nonexistence result which holds only when $n\ge 4$ and $p>{2(n-1)\over n-3}$ (it follows from Theorem \ref{T2.4}). 
On the other hand, this weaker result is sharp because, if $\Gamma_k$ is for example a circle of radius $R$ (that is $T_{\varepsilon}(\Gamma_k)$ is a solid torus), one can easily obtain infinitely many solutions for all ${\varepsilon}\in(0,R)$ when $n=3$ and $p>1$ or $n\ge 4$ and $p\in\left(1,{2(n-1)\over n-3}\right)$. \noindent Propositions \ref{P3.1}, \ref{P3.2} and \ref{P3.3} give examples of existence and multiplicity results of positive and sign changing solutions for some $p\ge{2n\over n-2}$ in tubular domains $T_{\varepsilon}(\Gamma_k)$ with $k\ge 2$ and $\Gamma_k$ contractible in itself. These examples explain why Theorem \ref{T2.1} cannot be extended to the case $k>1$ under the assumption $p>{2n\over n-2}$. \noindent However, in the case $k>1$, with $\Gamma_k$ contractible or not, we prove a weaker nonexistence result (given by Theorem \ref{T3.4}) which holds only when $n>k+2$ and $p>{2(n-k)\over n-k-2}$. \noindent Some existence and multiplicity results, when $n\le k+2$ or $n>k+2$ and $p<{2(n-k)\over n-k-2}$, in tubular domains $T_{\varepsilon}(\Gamma_k)$ with $k\ge 2$ and ${\varepsilon}$ not necessarily small, show that the nonexistence result given by Theorem \ref{T3.4} is also sharp. \noindent Finally, let us point out that if in the equation $\Delta u+f(u)=0$ we replace the Laplace operator $\Delta u$ by the operator $\div(|D u|^{q-2}Du)$ with $1<q<2$, then critical and supercritical nonlinearities arise also for $n=2$ and produce analogous nonexistence results (see \cite{plap,PLap}). These results suggest that if $n=2$, $1<q<2$ and $p>{2q\over 2-q}$, the Pohozaev nonexistence result for starshaped domains can be extended to all the contractible domains of $\mathbb{R}^2$, while this is not possible, for example, if $n\ge 3$, $q=2$ and $p\ge {2n\over n-2}$ because of Propositions \ref{P3.1}, \ref{P3.2} and \ref{P3.3} (see Remark \ref{R3.6}). 
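\noindent For the reader's convenience, we collect here the nonexistence thresholds obtained below for thin tubular domains (this list merely restates the results described above):
\begin{itemize}
\item[(i)] $k=1$ and $\Gamma_1$ contractible in itself: nonexistence for all $p>{2n\over n-2}$ (Theorem \ref{T2.1});
\item[(ii)] $k=1$ and $\Gamma_1$ a smooth circuit: nonexistence for $n\ge 4$ and $p>{2(n-1)\over n-3}$ (Theorem \ref{T2.4});
\item[(iii)] $k\ge 1$ and $\Gamma_k$ an arbitrary $k$-dimensional, compact, smooth submanifold: nonexistence for $n>k+2$ and $p>{2(n-k)\over n-k-2}$ (Theorem \ref{T3.4}).
\end{itemize}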
\sezione{Integral identities and nonexistence results} In order to obtain nonexistence results for nontrivial solutions of problem (\ref{*}), we use the Pohozaev type integral identity given in the following lemma. \begin{lemma} \label{L2.0} Let $\Omega$ be a piecewise smooth bounded domain of $\mathbb{R}^n$, $n\ge 3$, $v=(v_1,\ldots,v_n)\in{\cal C}^1(\overline \Omega,\mathbb{R}^n)$ a vector field in $\overline\Omega$ and $f$ a continuous function in $\mathbb{R}$. Then every solution of problem (\ref{*}) satisfies the integral identity \begin{equation} \label{id} {1\over 2}\int_{\partial\Omega} |Du|^2\, v\cdot\nu\, d\sigma= \int_\Omega dv[Du]\cdot Du\, dx+ \int_\Omega \div v\, \left(F(u)-{1\over 2}|Du|^2\right)\, dx, \end{equation} where $\nu$ denotes the outward normal to $\partial\Omega$, $dv[\xi]=\sum\limits_{i=1}^nD_iv\, \xi_i$ $\forall \xi=(\xi_1,\ldots,\xi_n)\in\mathbb{R}^n$ and $F(t)=\int_0^tf(\tau)d\tau$ $\forall t\in\mathbb{R}$. \end{lemma} \noindent For the proof it is sufficient to apply the Gauss-Green formula to the vector field $(v\cdot Du)\, Du$ and argue as in \cite{Po}. Notice that the Pohozaev identity is obtained for $v(x)=x$. \vspace{2mm} \noindent Now our aim is to find suitable domains $\Omega$ and vector fields $v\in{\cal C}^1(\overline \Omega,\mathbb{R}^n)$ such that the identity (\ref{id}) can be satisfied only by a trivial solution of problem (\ref{*}). \noindent In order to construct $\Omega$ and $v$ with this property, let us consider a curve $\gamma\in{\cal C}^3([a,b],\mathbb{R}^n)$ such that $\gamma'(t)\neq 0$ $\forall t\in[a,b]$ and $\gamma(t_1)\neq\gamma(t_2)$ if $t_1\neq t_2$, $t_1,t_2\in [a,b]$. \noindent For all $t\in[a,b]$ and $r>0$, let us set $N(t)=\{\xi\in\mathbb{R}^n\ :\ \xi\cdot\gamma'(t)=0\}$ and $N_r(t)=\{\xi\in N(t)\ :\ |\xi|\le r\}$. 
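\noindent For the reader's convenience, let us sketch the standard computation behind (\ref{id}), carried out formally for smooth solutions. Applying the Gauss-Green formula to the vector field $(v\cdot Du)\, Du$ and using $\Delta u=-f(u)$ in $\Omega$, one gets
\begin{equation}
\int_{\partial\Omega}(v\cdot Du)\, Du\cdot\nu\, d\sigma=\int_\Omega dv[Du]\cdot Du\, dx+{1\over 2}\int_\Omega v\cdot D\big(|Du|^2\big)\, dx-\int_\Omega v\cdot D\big(F(u)\big)\, dx.
\end{equation}
Integrating by parts the last two terms and taking into account that $u=0$ on $\partial\Omega$ (so that $F(u)=0$ and $Du={\partial u\over\partial\nu}\,\nu$ on $\partial\Omega$, whence $(v\cdot Du)\, Du\cdot\nu=|Du|^2\, v\cdot\nu$), the identity (\ref{id}) follows.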
\noindent Notice that there exists $\bar{\varepsilon}_1>0$ such that, for all ${\varepsilon}\in(0,\bar{\varepsilon}_1]$, \begin{equation} [\gamma(t_1)+N_{\varepsilon}(t_1)]\cap[\gamma(t_2)+N_{\varepsilon}(t_2)]=\emptyset\quad\mbox{ if }t_1\neq t_2,\quad t_1,t_2\in [a,b]. \end{equation} For all ${\varepsilon}\in(0,\bar{\varepsilon}_1)$ let us consider the open, piecewise smooth, bounded domain $T^\gamma_{\varepsilon}$ defined by \begin{equation} \label{Te} T^\gamma_{\varepsilon}=\bigcup_{t\in(a,b)} [\gamma(t)+N_{\varepsilon}(t)]. \end{equation} Then, the following nonexistence result holds for the nontrivial solutions in the domain $\Omega=T^\gamma_{\varepsilon}$. \begin{teo} \label{T2.1} Assume the continuous function $f$ satisfies the condition \begin{equation} \label{f} tf(t)\ge p\int_0^tf(\tau)d\tau\ge 0\qquad\forall t\in\mathbb{R} \end{equation} for a suitable $p>{2n\over n-2}$. Then, there exists $\bar {\varepsilon}>0$ such that for all ${\varepsilon}\in(0,\bar{\varepsilon})$ the Dirichlet problem (\ref{*}) has only the trivial solution $u\equiv 0$ in the domain $\Omega=T^\gamma_{\varepsilon}$. \end{teo} \noindent It is clear that condition (\ref{f}) implies $f(0)=0$, so the function $u\equiv 0$ in $T^\gamma_{\varepsilon}$ is a trivial solution $\forall {\varepsilon}\in(0,\bar{\varepsilon}_1)$. \noindent In order to prove that it is the unique solution for ${\varepsilon}$ small enough, we need some preliminary results. \noindent Notice that if ${\varepsilon}\in(0,\bar{\varepsilon}_1)$, the following property holds: for all $x\in T^\gamma_{\varepsilon}$ there exists a unique $t(x)\in(a,b)$ such that $\mathop{\rm dist}\nolimits(x,\Gamma)=|x-\gamma(t(x))|$, where \begin{equation} \Gamma=\{\gamma(t)\ :\ t \in [a,b]\}. \end{equation} If we set $\xi(x)=x-\gamma(t(x))$, we have $\xi(x)\cdot\gamma'(t(x))=0$ $\forall x\in T^\gamma_{\varepsilon}$. 
Therefore, for all $x\in T^\gamma_{\varepsilon}$ there exists a unique pair $(t(x),\xi(x))$ such that $t(x)\in [a,b]$, $\xi(x)\in N_{\varepsilon}(t(x))$ and $x=\gamma(t(x))+\xi(x)$. \noindent Without any loss of generality, we can assume in addition that $a\le 0\le b$ and $|\gamma'(t)|=1$ $\forall t\in [a,b]$. \noindent For all $\xi\in N_{\varepsilon}(0)$ let us consider the function $\tau\mapsto x(\xi,\tau)$ which solves the Cauchy problem \begin{equation} \left\{ \begin{array}{l} \vspace{2mm} {\partial x\over\partial\tau}=\gamma'(t(x))\\ x(\xi,0)=\gamma(0)+\xi. \end{array} \right. \end{equation} Notice that $\mathop{\rm dist}\nolimits(x(\xi,\tau),\Gamma)=|\xi|$ $\forall \tau\in[a,b]$. Moreover, for all $\xi\in N_{\varepsilon}(0)$, the function $\tau\mapsto t(x(\xi,\tau))$ is increasing. As a consequence, we can consider the inverse function $t\mapsto \tau(\xi,t)$ which satisfies $t(x(\xi,\tau(\xi,t)))=t$ $\forall t\in [a,b]$. \noindent Notice that $\tau(\xi,0)=0$ $\forall \xi\in N_{\varepsilon}(0)$ because $t(x(\xi,0))=0$. For all $\xi\in N_{\varepsilon}(0)$, let us set $\psi(\xi,t)=x(\xi,\tau(\xi,t))-\gamma(t)$. Then, $\psi(\xi,t)\in N_{\varepsilon}(t)$ and $|\psi(\xi,t)|=|\xi|$ $\forall \xi\in N_{\varepsilon}(0)$ $\forall t\in [a,b]$. Moreover, for all $x\in T^\gamma_{\varepsilon}$ there exists a unique $\xi\in N_{\varepsilon}(0)$ such that $\xi(x)=\psi(\xi,t(x))$ and the function $\xi\mapsto\psi(\xi,t)$ is a one-to-one map between $N_{\varepsilon}(0)$ and $N_{\varepsilon}(t)$, satisfying $|\psi(\xi_1,t)-\psi(\xi_2,t)|=|\xi_1-\xi_2|$ $\forall\xi_1,\xi_2\in N_{\varepsilon}(0)$, $\forall t\in[a,b]$. \noindent Now, let us consider the vector field $v$ defined by \begin{equation} \label{v} v(\gamma(t)+\psi(\xi,t))=t\gamma'(t)[1-\psi(\xi,t)\cdot\gamma''(t)]+\psi(\xi,t)\qquad\forall t\in(a,b),\ \forall\xi\in N_{\varepsilon}(0). 
\end{equation} Since $\gamma\in{\cal C}^3([a,b],\mathbb{R}^n)$, we have $v\in{\cal C}^1(\overline T^\gamma_{\varepsilon},\mathbb{R}^n)$, so the integral identity (\ref{id}) holds. \noindent In the following lemma we establish some properties of the vector field $v$. \begin{lemma} \label{L2.2} In the domain $\overline T^\gamma_{\bar{\varepsilon}_1}$, let us consider the vector field $v\in {\cal C}^1(\overline T^\gamma_{\bar{\varepsilon}_1},\mathbb{R}^n)$ defined in (\ref{v}). Then we have \begin{itemize} \item[(a)] $v\cdot\nu>0$ on $\partial \overline T^\gamma_{\varepsilon}$ $\forall{\varepsilon}\in(0,\bar{\varepsilon}_1)$, \item[(b)] $\lim\limits_{{\varepsilon}\to 0}\sup\{|n-\div v(x)|\ :\ x\in T^\gamma_{\varepsilon}\}=0$, \item[(c)] $\lim\limits_{{\varepsilon}\to 0}\sup\{|1- d v (x)[\eta]\cdot \eta|\ :\ x\in T^\gamma_{\varepsilon},\ \eta\in\mathbb{R}^n,\ |\eta|=1\}=0$. \end{itemize} \end{lemma} \mbox {{\underline {\sf Proof}} \hspace{2mm}} Taking into account the choice of $\bar{\varepsilon}_1$, since we are assuming $|\gamma'(t)|=1$ $\forall t\in[a,b]$, we have $[1-\psi(\xi,t)\cdot\gamma''(t)]\ge 0$ $\forall t\in [a,b]$. Therefore, since we are also assuming $a\le 0\le b$, property $(a)$ is a direct consequence of the definition of $T^\gamma_{\varepsilon}$ and $v$. \noindent In order to prove $(b)$, notice that, since $v\in {\cal C}^1(\overline T^\gamma_{\varepsilon},\mathbb{R}^n)$ $\forall {\varepsilon}\in(0,\bar{\varepsilon}_1)$, there exist $t_{\varepsilon}\in[a,b]$ and $\xi_{\varepsilon}\in\overline{N_{\varepsilon}(0)}$ such that \begin{equation} |n-\div v(\gamma(t_{\varepsilon})+\psi(\xi_{\varepsilon},t_{\varepsilon}))|=\max\{|n-\div v(x)|\ :\ x\in \overline T^\gamma_{\varepsilon}\}\quad\forall{\varepsilon}\in(0,\bar{\varepsilon}_1). 
\end{equation} When ${\varepsilon}\to 0$, we obtain (up to a subsequence) $t_{\varepsilon}\to t_0$ for a suitable $t_0\in[a,b]$ while $\xi_{\varepsilon}\to 0$ (because $|\xi_{\varepsilon}|\le{\varepsilon}$) and, as a consequence, also $\psi(\xi_{\varepsilon},t_{\varepsilon})\to 0$ (because $|\psi(\xi_{\varepsilon},t_{\varepsilon})|=|\xi_{\varepsilon}|$). Therefore we get \begin{equation} \lim_{{\varepsilon}\to 0}\max\{|n-\div v(x)|\ :\ x\in \overline T^\gamma_{\varepsilon}\}=|n-\div v(\gamma(t_0))|. \end{equation} Now, notice that \begin{equation} dv(\gamma(t_0))[\gamma'(t_0)]=\gamma'(t_0)+t_0\gamma''(t_0) \end{equation} and \begin{equation} dv(\gamma(t_0))[\psi]=-t_0[\psi\cdot\gamma''(t_0)]\gamma'(t_0)+\psi\qquad\forall\psi\in N(t_0). \end{equation} It follows that $\div v(\gamma(t_0))=n$, so property $(b)$ holds. \noindent In a similar way we can prove property $(c)$. In fact, since $v\in{\cal C}^1(\overline T^\gamma_{\varepsilon},\mathbb{R}^n)$ $\forall {\varepsilon}\in(0,\bar{\varepsilon}_1)$, there exist $\bar t_{\varepsilon}\in[a,b]$, $\bar \xi_{\varepsilon}\in\overline{N_{\varepsilon}(0)}$ and $\bar\eta _{\varepsilon}\in \mathbb{R}^n$ such that $|\bar\eta_{\varepsilon}|=1$ and \begin{equation} |1-dv(\gamma(\bar t_{\varepsilon})+\psi(\bar\xi_{\varepsilon},\bar t_{\varepsilon}))[\bar\eta_{\varepsilon}]\cdot\bar\eta_{\varepsilon}|= \max \{|1- d v (x)[\eta]\cdot \eta|\ :\ x\in \overline T^\gamma_{\varepsilon},\ \eta\in\mathbb{R}^n,\ |\eta|=1\}. \end{equation} Since $|\psi(\bar\xi_{\varepsilon},\bar t_{\varepsilon})|=|\bar\xi_{\varepsilon}|\le{\varepsilon}$ $\forall{\varepsilon}\in(0,\bar{\varepsilon}_1)$, we have $\lim\limits_{{\varepsilon}\to 0}\psi(\bar\xi_{\varepsilon},\bar t_{\varepsilon})=0$. Moreover, there exist $\bar t_0\in [a,b]$ and $\bar\eta_0\in \mathbb{R}^n$ such that (up to a subsequence) $\bar t_{\varepsilon}\to \bar t_0$ and $\bar\eta_{\varepsilon}\to\bar\eta_0$ as ${\varepsilon}\to 0$. 
It follows that \begin{equation} \lim_{{\varepsilon}\to 0}\max \{|1- d v (x)[\eta]\cdot \eta|\ :\ x\in \overline T^\gamma_{\varepsilon},\ \eta\in\mathbb{R}^n,\ |\eta|=1\}=|1- d v (\gamma(\bar t_0))[\bar\eta_0]\cdot \bar\eta_0|. \end{equation} Now, let us set $\bar\psi_0=\bar\eta_0-\bar\eta_0\cdot\gamma'(\bar t_0)\, \gamma'(\bar t_0)$ and notice that $\bar\psi_0\in N(\bar t_0)$. Therefore we have \begin{equation} dv(\gamma(\bar t_0))[\bar\psi_0]=\bar\psi_0-\bar t_0\, \bar\psi_0\cdot\gamma ''(\bar t_0)\, \gamma'(\bar t_0). \end{equation} Thus, since \begin{equation} dv(\gamma(\bar t_0))[\gamma'(\bar t_0)]=\gamma'(\bar t_0)+\bar t_0\gamma''(\bar t_0) \end{equation} and $\gamma'(\bar t_0)\cdot\gamma''(\bar t_0)=0$, we obtain \begin{eqnarray} \nonumber dv(\gamma(\bar t_0))[\bar\eta_0]\cdot\bar\eta_0 & = & dv(\gamma(\bar t_0))[\bar\eta_0\cdot\gamma'(\bar t_0)\, \gamma'(\bar t_0)+\bar \psi_0]\cdot (\bar\eta_0\cdot\gamma'(\bar t_0)\, \gamma'(\bar t_0)+\bar \psi_0) \\ \nonumber & = & \left\{ \bar\eta_0\cdot\gamma'(\bar t_0)\, [\gamma'(\bar t_0)+\bar t_0\gamma''(\bar t_0)]+\bar\psi_0-\bar t_0\, \bar\psi_0\cdot\gamma''(\bar t_0)\, \gamma'(\bar t_0)\right\} \\ & & \cdot (\bar\eta_0\cdot\gamma'(\bar t_0)\, \gamma'(\bar t_0)+\bar\psi_0) \\ \nonumber &= &[\bar\eta_0\cdot\gamma'(\bar t_0)]^2+|\bar\psi_0|^2=|\bar\eta_0|^2=1, \end{eqnarray} which implies property $(c)$. {\hfill {\em q.e.d.}\\\vspace{1mm}} \begin{cor} \label{C2.3} Let $f$ and $F$ be as in Lemma \ref{L2.0}. Let $T^\gamma_{\varepsilon}$ and $v\in {\cal C}^1(\overline T^\gamma_{\varepsilon},\mathbb{R}^n)$ be as in Lemma \ref{L2.2}. Then, every solution $u_{\varepsilon}$ of the Dirichlet problem (\ref{*}) in $\Omega =T^\gamma_{\varepsilon}$ satisfies the inequality \begin{equation} 0\le \left[ 1-{n\over 2}+\mu({\varepsilon})\right]\int_{T^\gamma_{\varepsilon}}|Du_{\varepsilon}|^2dx+\int_{T^\gamma_{\varepsilon}}(\div v)F(u_{\varepsilon})dx, \end{equation} where $\mu({\varepsilon})\to 0$ as ${\varepsilon}\to 0$. 
\end{cor} \noindent The proof follows directly from Lemmas \ref{L2.0} and \ref{L2.2}. \vspace{2mm} {\mbox {{\underline {\sf Proof of Theorem \ref{T2.1}}} \hspace{2mm}}} In order to prove that the trivial solution $u\equiv 0$ in $T^\gamma_{\varepsilon}$ is the unique solution for ${\varepsilon}$ small enough, for every ${\varepsilon}\in(0,\bar{\varepsilon}_1]$, let us consider a solution $u_{\varepsilon}$ of problem (\ref{*}) in $\Omega=T^\gamma_{\varepsilon}$. Taking into account Lemma \ref{L2.0} and condition (\ref{f}), from Lemma \ref{L2.2} and Corollary \ref{C2.3} we obtain \begin{equation} 0\le\left[1-{n\over 2}+\mu({\varepsilon})\right]\int_{T^\gamma_{\varepsilon}}|Du_{\varepsilon}|^2dx+[n+\bar\mu({\varepsilon})]{1\over p}\int_{T^\gamma_{\varepsilon}}u_{\varepsilon} f(u_{\varepsilon})dx, \end{equation} where $\bar\mu({\varepsilon})\to 0$ as ${\varepsilon}\to 0$. On the other hand, since $u_{\varepsilon}$ is a solution of problem (\ref{*}) in $\Omega= T^\gamma_{\varepsilon}$, we have \begin{equation} \int_{T^\gamma_{\varepsilon}}u_{\varepsilon} f(u_{\varepsilon})\, dx=\int_{T^\gamma_{\varepsilon}}|Du_{\varepsilon}|^2dx. \end{equation} Therefore we obtain \begin{equation} 0\le\left[1-{n\over 2}+{n\over p}+\mu({\varepsilon})+\bar\mu({\varepsilon})\right]\int_{T^\gamma_{\varepsilon}}|Du_{\varepsilon}|^2dx. \end{equation} Since $1-{n\over 2}+{n\over p}<0$ for $p>{2n\over n-2}$, there exists $\bar {\varepsilon}\in (0,\bar{\varepsilon}_1)$ such that $1-{n\over 2}+{n\over p}+\mu({\varepsilon})+\bar\mu({\varepsilon})<0$ $\forall {\varepsilon}\in(0,\bar{\varepsilon})$. Therefore, for all ${\varepsilon}\in(0,\bar{\varepsilon})$, we must have $\int_{T^\gamma_{\varepsilon}}|Du_{\varepsilon}|^2dx=0$ which implies $u_{\varepsilon}\equiv 0$ in $T^\gamma_{\varepsilon}$ and completes the proof. 
{\hfill {\em q.e.d.}\\\vspace{1mm}} \noindent Notice that if, instead of the vector field $v$ defined in (\ref{v}), we consider the vector field $\tilde v$ defined by \begin{equation} \tilde v(\gamma(t)+\psi(\xi,t))=\psi(\xi,t)\qquad\forall t\in (a,b),\quad\forall \xi\in \overline{N_{\varepsilon}(0)}, \end{equation} we obtain a nonexistence result for $n\ge 4$ and $p>{2(n-1)\over n-3}$ (the critical Sobolev exponent in dimension $n-1$, which is greater than ${2n\over n-2}$). \noindent Let us point out that the vector field $\tilde v$ is well defined also when $\gamma$ is a smooth circuit, that is $\gamma(a)=\gamma(b)$ and $\Omega$ is the interior of $\overline T^\gamma_{\varepsilon}$. Therefore, also in these domains we can prove nonexistence results for $n\ge 4$ and $p>{2(n-1)\over n-3}$, see Theorem \ref{T2.4}. On the contrary, in these domains the vector field $v$ is not well defined because \begin{equation} v(\gamma(a)+\psi(\xi,a))\neq v(\gamma(b)+\psi(\xi,b))\qquad\forall \xi\in N_{\varepsilon}(0), \end{equation} while $\gamma(a)+\psi(\xi,a)=\gamma(b)+\psi(\xi,b)$ when $\gamma(a)=\gamma(b)$ and $\gamma'(a)=\gamma'(b)$. \noindent On the other hand, in these domains one cannot expect to obtain nonexistence results for $p>{2n\over n-2}$ since it is possible that there exist nontrivial solutions when $n\ge 4$ and ${2n\over n-2}<p<{2(n-1)\over n-3}$ while they do not exist for $p\ge{2(n-1)\over n-3}$, which happens for example in the case of a solid torus (see \cite{Pjfa93,Pdie95,MPcras02}). \noindent In the next theorem we consider the case where $\Omega$ is a tubular domain near a circuit, $n\ge 4$ and condition (\ref{f}) holds with $p>{2(n-1)\over n-3}$ (see Theorem \ref{T3.4} for an extension to more general tubular domains). 
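\noindent Let us also point out where the exponent ${2(n-1)\over n-3}$ comes from. Since $\div\tilde v\approx n-1$ in thin tubes, the identity (\ref{id}) combined with condition (\ref{f}) produces, as in the proof below, the coefficient $1-{n-1\over 2}+{n-1\over p}$, and
\begin{equation}
1-{n-1\over 2}+{n-1\over p}<0
\quad\Longleftrightarrow\quad
{n-1\over p}<{n-3\over 2}
\quad\Longleftrightarrow\quad
n\ge 4\ \mbox{ and }\ p>{2(n-1)\over n-3}.
\end{equation}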
\begin{teo} \label{T2.4} Assume that $\tilde\gamma:[a,b]\to\mathbb{R}^n$ is a smooth curve which satisfies $\tilde\gamma'(t)\neq 0$ $\forall t\in[a,b]$, $\tilde\gamma(a)=\tilde\gamma(b)$, $\tilde\gamma'(a)=\tilde\gamma'(b)$, $\tilde\gamma(t_1)\neq\tilde\gamma(t_2)$ if $t_1,t_2\in(a,b)$ and $t_1\neq t_2$. Let us set \begin{equation} \widetilde\Gamma=\{\tilde\gamma(t)\ :\ t\in[a,b]\} \quad\mbox{ and }\quad \widetilde T_{\varepsilon}(\widetilde\Gamma)=\{x\in\mathbb{R}^n\ :\ \mathop{\rm dist}\nolimits(x,\widetilde\Gamma)<{\varepsilon}\}\quad\forall{\varepsilon}>0. \end{equation} Moreover, assume that $n\ge 4$ and condition (\ref{f}) holds with $p>{2(n-1)\over n-3}$. \noindent Then there exists $\tilde{\varepsilon}>0$ such that, for all ${\varepsilon}\in(0,\tilde{\varepsilon})$, the Dirichlet problem (\ref{*}) has only the trivial solution $u\equiv 0$ in the smooth bounded domain $\Omega=\widetilde T_{\varepsilon}(\widetilde\Gamma)$. \end{teo} \mbox {{\underline {\sf Proof}} \hspace{2mm}} First notice that there exists $\bar{\varepsilon}_1>0$ such that for all ${\varepsilon}\in(0,\bar{\varepsilon}_1)$ and $x\in\widetilde T_{\varepsilon}(\widetilde\Gamma)$ there exists a unique $y\in\widetilde\Gamma$ such that $\mathop{\rm dist}\nolimits(x,\widetilde \Gamma)=|x-y|$. Let us denote this $y$ by $p(x)$ and consider in $\widetilde T_{\varepsilon}(\widetilde\Gamma)$ the vector field $\tilde v$ defined by $\tilde v(x)=x-p(x)$. 
\noindent One can verify by direct computation that \begin{equation} d\tilde v(\tilde\gamma(t))[\tilde\gamma'(t)]=0,\quad d\tilde v(\tilde\gamma(t))[\psi]=\psi\qquad\forall t\in[a,b],\ \forall\psi\in\mathbb{R}^n\ \mbox{ such that }\ \psi\cdot\tilde\gamma'(t)=0 \end{equation} and, as a consequence, \begin{equation} \label{e1} \div\tilde v(\tilde\gamma(t))=n-1\qquad\forall t\in[a,b] \end{equation} \begin{equation} \label{e2} d\tilde v(\tilde\gamma(t))[\eta]\cdot\eta=|\eta|^2-{[\eta\cdot\tilde\gamma'(t)]^2\over |\tilde\gamma'(t)|^2}\qquad \forall t\in[a,b],\ \forall \eta\in\mathbb{R}^n. \end{equation} It follows that \begin{equation} \label{e1'} \lim_{{\varepsilon}\to 0}\sup\{|n-1-\div\tilde v(x)|\ :\ x\in\widetilde T_{\varepsilon}(\widetilde\Gamma)\}=0 \end{equation} as one can easily obtain from (\ref{e1}) arguing as in the proof of assertion $(b)$ of Lemma \ref{L2.2}. Moreover, from (\ref{e2}) we obtain \begin{equation} \label{e2'} \lim_{{\varepsilon}\to 0}\sup\{d\tilde v(x)[\eta]\cdot\eta\ :\ x\in\widetilde T_{\varepsilon}(\widetilde\Gamma),\ \eta\in \mathbb{R}^n,\ |\eta|=1\}=1. \end{equation} In fact, for all ${\varepsilon}\in(0,\bar{\varepsilon}_1)$, choose $x_{\varepsilon}\in \widetilde T_{\varepsilon}(\widetilde\Gamma)$ and $\eta_{\varepsilon}\in\mathbb{R}^n$ such that $|\eta_{\varepsilon}|=1$ and $s_{\varepsilon}-{\varepsilon}\le d\tilde v(x_{\varepsilon})[\eta_{\varepsilon}]\cdot\eta_{\varepsilon}$ where \begin{equation} s_{\varepsilon}=\sup\{d\tilde v(x)[\eta]\cdot\eta\ :\ x\in\widetilde T_{\varepsilon}(\widetilde\Gamma),\ \eta\in \mathbb{R}^n,\ |\eta|=1\}. \end{equation} Since $\mathop{\rm dist}\nolimits(x_{\varepsilon},\widetilde\Gamma)\to 0$ as ${\varepsilon}\to 0$, and $\widetilde\Gamma$ is a compact manifold, from (\ref{e2}) we infer that $\limsup\limits_{{\varepsilon}\to 0} s_{\varepsilon}\le 1$. On the other hand, (\ref{e2}) implies $s_{\varepsilon}\ge 1$ $\forall{\varepsilon}\in(0,\bar{\varepsilon}_1)$, so (\ref{e2'}) is proved. 
\noindent Furthermore, one can easily verify that $\tilde v\cdot\nu>0$ on $\partial \widetilde T_{\varepsilon}(\widetilde\Gamma)$ $\forall{\varepsilon}\in(0,\bar{\varepsilon}_1)$. Thus, taking also into account condition (\ref{f}), from Lemma \ref{L2.0} we infer that every solution $\tilde u_{\varepsilon}$ of problem (\ref{*}) in the domain $\widetilde T_{\varepsilon}(\widetilde\Gamma)$ satisfies \begin{equation} 0\le \left[ 1-{n-1\over 2}+\tilde\mu({\varepsilon})\right]\int_{\widetilde T_{\varepsilon}(\widetilde\Gamma)}|D\tilde u_{\varepsilon}|^2dx+\left[{n-1\over p}+\tilde\mu({\varepsilon})\right]\int_{\widetilde T_{\varepsilon}(\widetilde\Gamma)}\tilde u_{\varepsilon} f(\tilde u_{\varepsilon})\, dx, \end{equation} where $\tilde\mu({\varepsilon})\to 0$ as ${\varepsilon}\to 0$. Since \begin{equation} \int_{\widetilde T_{\varepsilon}(\widetilde\Gamma)}\tilde u_{\varepsilon} f(\tilde u_{\varepsilon})\, dx=\int_{\widetilde T_{\varepsilon}(\widetilde\Gamma)}|D\tilde u_{\varepsilon}|^2dx \end{equation} (because $\tilde u_{\varepsilon}$ solves problem (\ref{*}) in $\widetilde T_{\varepsilon}(\widetilde\Gamma)$) we obtain \begin{equation} \label{le} 0\le\left[1-{n-1\over 2}+{n-1\over p}+2\tilde \mu({\varepsilon})\right]\int_{\widetilde T_{\varepsilon}(\widetilde\Gamma)}|D\tilde u_{\varepsilon}|^2dx \end{equation} where $1-{n-1\over 2}+{n-1\over p}<0$ because $n\ge 4$ and $p>{2(n-1)\over n-3}$. Therefore, there exists $\tilde {\varepsilon}\in(0,\bar{\varepsilon}_1)$ such that for all ${\varepsilon}\in(0,\tilde{\varepsilon})$ (\ref{le}) implies $\tilde u_{\varepsilon}\equiv 0$ in $\widetilde T_{\varepsilon}(\widetilde\Gamma)$. So the proof is complete. 
{\hfill {\em q.e.d.}\\\vspace{1mm}} \sezione{Tubular domains of higher dimension and final remarks} The nonexistence results presented in Section 2 are concerned with domains $\Omega$ which are thin neighbourhoods of 1-dimensional manifolds (with boundary and contractible in Theorem \ref{T2.1}, without boundary and noncontractible in Theorem \ref{T2.4}). In this section we consider the case where $\Omega$ is a thin neighbourhood of a $k$-dimensional smooth, compact manifold $\Gamma_k$ with $k>1$. \noindent If $\Gamma_k$ is a submanifold of $\mathbb{R}^n$ with $n>k$, for all $x\in\Gamma_k$ we set $N(x)=T^\perp(x)$ and $N_{\varepsilon}(x)=\{\xi\in N(x)\ :\ |\xi|<{\varepsilon}\}$, where $T(x)$ is the tangent space to $\Gamma_k$ at $x$ and $N(x)$ is the normal space. Since $\Gamma_k$ is a compact smooth submanifold, there exists $\bar{\varepsilon}_1>0$ such that, for all ${\varepsilon}\in(0,\bar{\varepsilon}_1]$, we have $[x_1+N_{\varepsilon}(x_1)]\cap [x_2+N_{\varepsilon}(x_2)]=\emptyset$ for all $x_1$ and $x_2$ in $\Gamma_k$ such that $x_1\neq x_2$. Then, for all ${\varepsilon}\in(0,\bar{\varepsilon}_1)$, we consider the piecewise smooth, bounded domain $T_{\varepsilon}(\Gamma_k)$ defined as the interior of the set $\cup_{x\in\Gamma_k}[x+N_{\varepsilon}(x)]$ (we say that $T_{\varepsilon}(\Gamma_k)$ is the tubular domain with thickness ${\varepsilon}$ and centre $\Gamma_k$). Our aim is to study existence and nonexistence of nontrivial solutions of problem (\ref{*}) in the domain $\Omega=T_{\varepsilon}(\Gamma_k)$. \noindent Let us point out that when $k>1$ one cannot prove a theorem analogous to Theorem \ref{T2.1}. In fact, if $\Gamma_k$ is a $k$-dimensional manifold contractible in itself and $k>1$, one cannot obtain nonexistence results for nontrivial solutions of problem (\ref{*}) in the domain $\Omega=T_{\varepsilon}(\Gamma_k)$ under the assumption that condition (\ref{f}) holds with $p>{2n\over n-2}$ as in Theorem \ref{T2.1}. 
The reason is explained by the following examples where existence results hold. \begin{ex} \label{E3.0} {\em For all $n\ge k+1$, let us consider the function $\gamma_k:\mathbb{R}^k\to\mathbb{R}^n$ defined as follows: \begin{equation} \begin{array}{rcll} \gamma_{k,i}(x_1,\ldots,x_k)&=& {2x_i\over |x|^2+1}&\qquad \mbox{ for }i=1,\ldots,k \\ \gamma_{k,k+1}(x_1,\ldots,x_k)&=& {|x|^2-1\over |x|^2+1} & \\ \gamma_{k,i}(x_1,\ldots,x_k)&=& 0 &\qquad\mbox{ for }i=k+2,\ldots,n \end{array} \end{equation} ($\gamma_k$ is the stereographic projection of $\mathbb{R}^k$ onto a $k$-dimensional sphere of $\mathbb{R}^n$). \noindent Moreover, for all $r>0$, let us set $\Gamma^r_k=\{\gamma_k(x)\ :\ x\in\mathbb{R}^k,\ |x|<r\}$. } \end{ex} \noindent Then one can easily verify that the domain $T_{\varepsilon}(\Gamma^r_k)$ is contractible in itself for all $r>0$ and ${\varepsilon}\in(0,1)$. Moreover, the following propositions hold. \begin{prop} \label{P3.1} Let $k\ge 2$ and $n\ge k+1$. Assume that $f(t)=|t|^{p-2}t$ with $p\ge{2n\over n-2}$ and that $p<{2(n-k+1)\over n-k-1}$ if $n>k+1$. \noindent Then, there exists $\bar r>0$ such that if $r>\bar r$ and ${\varepsilon}\in(0,1)$, problem (\ref{*}) in the domain $\Omega=T_{\varepsilon}(\Gamma^r_k)$ has positive and sign changing solutions; moreover, under the additional assumption $p>{2n\over n-2}$, for all ${\varepsilon}\in(0,1)$ the number of solutions tends to infinity as $r\to\infty$. \end{prop} \noindent For the proof it suffices to look for solutions having radial symmetry with respect to the first $k$ variables and argue as in \cite{Pmm89,Pd98,P4092,Ptmna96,P94,MPaihp06,MPcvpde06,MPaihp04}. \begin{prop} \label{P3.2} Let $k\ge 2$, $n\ge k+1$, $r>1$, ${\varepsilon}\in(0,1)$. Moreover, assume that $f(t)=|t|^{p-2}t$ $\forall t\in\mathbb{R}$. 
Then, there exists $\bar p>{2n\over n-2}$ such that, if $n=k+1$ and $p\ge \bar p$ or if $n>k+1$ and $p\in\left[\bar p,{2(n-k+1)\over n-k-1}\right)$, problem (\ref{*}) with $\Omega=T_{\varepsilon}(\Gamma^r_k)$ has a solution. \end{prop} \noindent The proof can be carried out arguing for example as in \cite{MPcvpde06} in order to obtain solutions having radial symmetry with respect to the first $k$ variables. \begin{prop} \label{P3.3} Let $k\ge 2$, $n\ge k+1$, $r>1$, ${\varepsilon}\in(0,1)$ and assume that $f(t)=|t|^{p-2}t$ $\forall t\in\mathbb{R}$. Then, there exists $\tilde p>{2n\over n-2}$ such that problem (\ref{*}) with $\Omega=T_{\varepsilon}(\Gamma^r_k)$ has positive solutions for all $p\in\left({2n\over n-2} ,\tilde p\right)$. Moreover, the number of solutions tends to infinity as $p\to{2n\over n-2}$. \end{prop} \noindent The proof is based on a Lyapunov-Schmidt type finite-dimensional reduction method as in \cite{MPcras02,MPaihp04}, etc. \noindent Thus, while Theorem \ref{T2.1} gives a nonexistence result for all $p>{2n\over n-2}$ when $k=1$, $\Gamma_k$ is contractible in itself and $\Omega$ is a thin tubular domain centered in $\Gamma_k$, Propositions \ref{P3.1}, \ref{P3.2} and \ref{P3.3} give examples of existence results for some $p>{2n\over n-2}$ when $\Omega$ is a tubular domain centered in a suitable $k$-dimensional manifold $\Gamma^r_k$, contractible in itself but with $k\ge 2$. It is in this sense that Theorem \ref{T2.1} cannot be extended to the case $k\ge 2$ (see also Remark \ref{R3.5} for more details about the differences between the cases $k=1$ and $k>1$). \noindent However, notice that a weaker nonexistence result holds for all $k\ge 1$ (even if $\Gamma_k$ is noncontractible in itself) when $n>k+2$ and $p>{2(n-k)\over n-k-2}$, as we prove in the following Theorem \ref{T3.4}. 
\noindent If $n\le k+2$ or $n>k+2$ and $p<{2(n-k)\over n-k-2}$, the existence of nontrivial solutions can be proved even if $\Omega$ is a tubular domain $T_{\varepsilon}(\Gamma_k)$ with ${\varepsilon}$ not necessarily small: for example, if $\Gamma_k$ is a $k$-dimensional sphere, we can look for solutions with radial symmetry with respect to $k+1$ variables, so we obtain infinitely many solutions for all ${\varepsilon}\in(0,R)$ where $R$ is the radius of the sphere. \begin{teo} \label{T3.4} Let $k\ge 1$, $n>k+2$ and assume that $\Gamma_k$ is a $k$-dimensional, compact, smooth submanifold of $\mathbb{R}^n$. Moreover, assume that condition (\ref{f}) holds with $p>{2(n-k)\over n-k-2}$. \noindent Then, there exists $\bar{\varepsilon}>0$ such that, for all ${\varepsilon}\in(0,\bar{\varepsilon})$, the Dirichlet problem (\ref{*}) has only the trivial solution $u\equiv 0$ in the tubular domain $\Omega=T_{\varepsilon}(\Gamma_k)$. \end{teo} \mbox {{\underline {\sf Proof}} \hspace{2mm}} Taking into account the definition of the tubular domain $T_{\varepsilon}(\Gamma_k)$, for all ${\varepsilon}\in(0,\bar{\varepsilon}_1)$ and $x\in T_{\varepsilon}(\Gamma_k)$ there exists a unique $y\in\Gamma_k$ such that $x\in y+N_{\varepsilon}(y)$. Then, denote this $y$ by $p_k(x)$ and set $v_k(x)=x-p_k(x)$ $\forall x\in T_{\varepsilon}(\Gamma_k)$. One can easily verify that the vector field $v_k$ satisfies $v_k\cdot \nu\ge 0$ on $\partial T_{\varepsilon}(\Gamma_k)$ $\forall{\varepsilon}\in(0,\bar{\varepsilon}_1)$. \noindent Therefore, from Lemma \ref{L2.0} we infer that every solution $u_{\varepsilon}$ of problem (\ref{*}) in $T_{\varepsilon}(\Gamma_k)$ satisfies \begin{equation} \label{lek} 0\le\int_{T_{\varepsilon}(\Gamma_k)}dv_k[Du_{\varepsilon}]\cdot Du_{\varepsilon}\, dx+\int_{T_{\varepsilon}(\Gamma_k)}\div v_k\left(F(u_{\varepsilon})-{1\over 2}|D u_{\varepsilon}|^2\right)dx. 
\end{equation} Notice that \begin{equation} dv_k(x)[\phi]=0,\qquad dv_k(x)[\psi]=\psi\qquad \forall x\in\Gamma_k,\ \forall\phi\in T(x),\ \forall\psi\in N(x) \end{equation} as one can verify by direct computation. \noindent As a consequence we obtain \begin{equation} \div v_k(x)=n-k,\qquad dv_k(x)[\phi+\psi]\cdot (\phi+\psi)=|\psi|^2 \qquad \forall x\in\Gamma_k,\ \forall\phi\in T(x),\ \forall\psi\in N(x). \end{equation} Since $\Gamma_k$ is a compact manifold, it follows that \begin{equation} \lim_{{\varepsilon}\to 0}\sup\{|n-k-\div v_k(x)|\ :\ x\in T_{\varepsilon}(\Gamma_k)\}=0 \end{equation} and \begin{equation} \lim_{{\varepsilon}\to 0}\sup\{dv_k(x)[\eta]\cdot\eta\ :\ x\in T_{\varepsilon}(\Gamma_k),\ \eta\in\mathbb{R}^n,\ |\eta|=1\}=1 \end{equation} as one can infer arguing as in the proof of Theorem \ref{T2.4}. \noindent Thus, taking also into account that \begin{equation} \int_{T_{\varepsilon}(\Gamma_k)} u_{\varepsilon} f(u_{\varepsilon})\, dx= \int_{T_{\varepsilon}(\Gamma_k)}|Du_{\varepsilon}|^2dx, \end{equation} from condition (\ref{f}) we infer that \begin{equation} 0\le \left[1-{n-k\over 2}+{n-k\over p}+\mu_k({\varepsilon})\right]\int_{T_{\varepsilon}(\Gamma_k)} |Du_{\varepsilon}|^2dx \end{equation} where $\mu_k({\varepsilon})\to 0$ as ${\varepsilon}\to 0$. Since $1-{n-k\over 2}+{n-k\over p}<0$ (because $n>k+2$ and $p>{2(n-k)\over n-k-2}$), it follows that there exists $\bar{\varepsilon}\in(0,\bar{\varepsilon}_1)$ such that, for all ${\varepsilon}\in(0,\bar{\varepsilon})$ we have $u_{\varepsilon}\equiv 0$ in $T_{\varepsilon}(\Gamma_k)$, so the problem has only the trivial solution $u\equiv 0$. 
{\hfill {\em q.e.d.}\\\vspace{1mm}} \begin{rem} \label{R3.5} {\em Proposition \ref{P3.1}, as well as the results reported in \cite{Pmm89,Pd98,P4092,Ptmna96,P94,MPaihp06,MPcvpde06,MPaihp04}, suggests that the existence of nontrivial solutions is related to the property that the domain $\Omega$ is obtained by removing a subset of small capacity from a domain having a different $k$-dimensional homology group with $k\ge 2$. \noindent For example, in the case of domains with small holes, every hole has small capacity and changes the $(n-1)$-dimensional homology group. \noindent In the case of tubular domains $T_{\varepsilon}(\Gamma^r_k)$, the existence results for $k\ge 2$ and $r$ large enough given by Proposition \ref{P3.1} are related to the fact that $\Gamma^r_k$ tends to a $k$-dimensional sphere $S_k$ as $r\to\infty$, the capacity of $T_{\varepsilon}(S_k)\setminus T_{\varepsilon}(\Gamma^r_k)$ tends to 0 as $r\to\infty$ and the domains $T_{\varepsilon}(S_k)$ and $T_{\varepsilon}(\Gamma^r_k)$ have different $k$-dimensional homology groups. \noindent On the contrary, when $k=1$, the capacity of $T_{\varepsilon}(S_1)\setminus T_{\varepsilon}(\Gamma^r_1)$ does not tend to 0 as $r\to\infty$. This fact explains the nonexistence result given by Theorem \ref{T2.1} in the case of the domains $T_{\varepsilon}(\Gamma^r_1)$, when ${\varepsilon}$ is small enough, for all $r>0$. }\end{rem} \begin{rem} \label{R3.6} {\em If $n=2$ we do not have critical or supercritical phenomena for the Laplace operator. But, if we replace it by the $q$-Laplace operator, these phenomena arise and may produce nonexistence results for nontrivial solutions.
For example, if we consider the Dirichlet problem \begin{equation} \label{q} \div(|Du|^{q-2}Du)+|u|^{p-2}u=0\quad\mbox{ in }\Omega,\qquad u=0\quad\mbox{ on }\partial \Omega \end{equation} where $\Omega$ is a bounded domain of $\mathbb{R}^2$, $1<q<2$, $p\ge {2q\over 2-q}$, then one can prove nonexistence results in some bounded contractible domains which can be non starshaped and even arbitrarily close to noncontractible domains (see \cite{plap,PLap}). For example, if $\Omega=T_{\varepsilon}(\Gamma^r_1)$, there exists $\bar{\varepsilon}>0$ such that problem (\ref{q}) has only the trivial solution $u\equiv 0$ for all $r>0$ and ${\varepsilon}\in(0,\bar{\varepsilon})$. \noindent The results obtained in \cite{plap,PLap} suggest that the nonexistence of nontrivial solutions for Dirichlet problem (\ref{q}) might be proved in all the contractible domains of $\mathbb{R}^2$ (while it is not possible for problem (\ref{p}) when $n\ge 3$ and $p\ge {2n\over n-2}$ because of Proposition \ref{P3.1}). }\end{rem} {\small {\bf Acknowledgement}. The authors have been supported by the ``Gruppo Nazionale per l'Analisi Matematica, la Probabilit\`a e le loro Applicazioni (GNAMPA)'' of the {\em Istituto Nazio\-nale di Alta Matematica (INdAM)} - Project: Equazioni di Schrodinger nonlineari: soluzioni con indice di Morse alto o infinito. The second author acknowledges also the MIUR Excellence Department Project awarded to the Department of Mathematics, University of Rome Tor Vergata, CUP E83C18000100006 }
\section{Introduction} \label{sec:intro} \floattable \begin{deluxetable}{cccccc} \tablecaption{Telescope Information \label{tab:teleloc}} \tablecolumns{5} \tablewidth{0pt} \tablehead{\colhead{Number} & \colhead{Telescope} & \colhead{Institution} & \colhead{\# Obs.} & \colhead{Lat.} & \colhead{Lon.}} \startdata 1 & 40-cm Newtonian- & Foggy Bottom Observatory & 135 & $42\degr\ 48\arcmin\ 59\arcsec$ N & $75\degr\ 31\arcmin\ 59\arcsec$ W \\ & Cassegrain& Colgate University, Hamilton, NY, USA & & & \\ 2 & 1.83-m Perkins & Lowell Observatory & 60 & $35\degr\ 05\arcmin\ 53\arcsec$ N & $111\degr\ 32\arcmin\ 12\arcsec$ W \\ & & Flagstaff, AZ, USA & & & \\ 3 & 70-cm AZT-8 & Crimean Astrophysical Observatory & 139 & $44\degr\ 43\arcmin\ 38\arcsec$ N & $34\degr\ 00\arcmin\ 49\arcsec$ E \\ & & Nauchny, Crimea & & & \\ 4 & 40-cm LX-200 & St. Petersburg University & 10 & $59\degr\ 52\arcmin\ 55\arcsec$ N & $29\degr\ 49\arcmin\ 35\arcsec$ E \\ & & St. Petersburg, Russia & & & \\ 5$^{*}$ & 1.54-m Kuiper & Steward Observatory & & $32\degr\ 24\arcmin\ 54\arcsec$ N & $110\degr\ 42\arcmin\ 52\arcsec$ W \\ & & Mt. Bigelow, AZ, USA & & & \\ 6$^{*}$ & 2.3-m Bok & Steward Observatory & 28 & $31\degr\ 57\arcmin 36\arcsec$ N & $111\degr\ 35\arcmin\ 59\arcsec$ W \\ & & Kitt Peak, AZ, USA & & & \\ \enddata \tablecomments{$^*$ Observations from these telescopes were made with the same instrument and are counted together in the number of observations.} \end{deluxetable} The blazar 3C454.3 (3FGL J2254.0+1609, $z = 0.859$) is an optically violent, flat spectrum radio quasar noted for being among the brightest $\gamma$-ray\ sources in the sky. Since the early 2000s, 3C454.3 has undergone a number of extremely energetic and rapidly variable outbursts across the electromagnetic spectrum \citep[e.g.,][]{Villata2006, Jorstad2010, Vercellone2010, Wehrle2012}. 
While the basic cause of the extremely high non-thermal luminosity and rapid variability of the flux and polarization in blazars such as 3C454.3 can be explained by a relativistic jet of high-energy plasma \citep[e.g.,][]{Blandford1979, Marscher1985, Sikora2009}, understanding of the physical processes involved and the mechanism(s) for high energy production remains limited \citep{Jorstad2013}. During the current age of large-scale surveys, long and concentrated observations of individual objects remain vital due to their ability to provide a wealth of detailed information about an object. Such observations are particularly useful when the source can be observed at many different wavebands \citep{Wehrle2012}, leading to calculations and measurements of timescales of variability, apparent speeds of superluminal knots, and the time-evolution of the spectral energy distribution \citep[e.g.,][]{Marscher2010, Jorstad2010, Britto2016}. The highest-amplitude optical outburst of 3C454.3 occurred in 2005, with a peak optical brightness of $R=12$ magnitude, triggering a multi-frequency campaign by the Whole Earth Blazar Telescope \citep{Villata2006}. \citet{Villata2007}, \citet{Raiteri2008}, \citet{Hagenthorn2009}, and \citet{Jorstad2010} have analyzed comprehensive multi-frequency observations of this outburst. Several subsequent, smaller outbursts have been intensely studied across the electromagnetic spectrum, including those in 2008 \citep{HagenThorn2013}, 2009 \citep{Raiteri2011}, 2010 \citep{Bachev2011}, and 2014 \citep{Kushwaha2016}. While 3C454.3 has been observed at optical frequencies as far back as 1899 \citep{Angione1968}, it was first detected at $\gamma$-ray\ frequencies in 1992 by EGRET on the \emph{Compton Gamma-Ray Observatory} \citep{Hartman1999}. 
The blazar was not observed by $\gamma$-ray\ telescopes during the outburst of 2005, but has been routinely detected by the \emph{Astro-rivelatore Gamma a Immagini LEggero} (AGILE) \citep{Tavani2009, Vercellone2012} and \emph{Fermi Gamma-Ray Space Telescope} \citep{Abdo2009Early, Atwood2009} orbiting observatories starting in 2007 and 2008, respectively. The blazar was especially bright during a series of $\gamma$-ray\ outbursts in late 2009, early 2010, and late 2010 \citep{Ackermann2010, Vercellone2011, Coogan2016}. In November 2010, 3C454.3 reached a flux of $F_\gamma^\mathrm{max} = 8.5 \pm 0.5 \times 10^{-5}$ photons cm$^{-2}$ s$^{-1}$, the highest $\gamma$-ray\ flux ever detected from a single, non-transient cosmic source up to that point \citep{Abdo2011}. Analyses of these rich datasets in conjunction with lower-frequency observations serve as a valuable probe into the structure and conditions of the jet within distances of $\lesssim 10$ pc from the central engine \citep{Coogan2016}. The June 2016 optical and $\gamma$-ray\ outburst of 3C454.3 is analyzed in this paper. A time period between 2016 May 1 and 2016 September 30 was selected to concentrate on short-timescale variability using observations from several ground-based telescopes in $R$ band, polarimetric observations, and $\gamma$-ray\ observations obtained with the \emph{Fermi}\ Large Area Telescope (LAT). The observations and data reduction methods are described in $\S$\ref{sec:Obs}. Analyses of the data, including structure and timing of features in the resultant light curves, are given in $\S$\ref{sec:structure}. An investigation into the rapid intraday and micro-variability fluctuations in $R$ band is presented in $\S$\ref{sec:variability}. An analysis of a sequence of Very Long Baseline Array (VLBA) 43 GHz images of 3C454.3 is presented in $\S$\ref{sec:RadioKnots}, yielding a measure of the Doppler factor of an observed radio knot, labeled $K16$. 
This measurement is used in $\S$\ref{sec:MagFieldStrength} and $\S$\ref{sec:shortvariability}, along with the timescales of variability and observed flux values, to calculate important physical parameters of 3C454.3, including an estimate of the speed of turbulence in the jet. The findings are summarized and concluding remarks are made in $\S$\ref{sec:conclusions}.\\ \begin{figure*} \plotone{f1.eps} \caption{Flux and polarization vs. time of 3C454.3. The date 2016 July 1 is RJD: 7570.5. (a) \emph{Fermi}-LAT $\gamma$-ray\ flux with varying time bins; (b) optical light curve in $R$ band; (c) degree of optical linear polarization; (d) position angle ($\chi_\mathrm{opt}$) of optical polarization. In (a), the outer, blue, vertical, solid lines mark the division between one-day and six-hour $\gamma$-ray\ binning, while the inner pair of black, vertical, dashed lines mark the division between six-hour and three-hour binning. Upper limits on 24 \emph{Fermi}-LAT data points are marked with a downward-facing, red arrow. In (d), the horizontal lines correspond to polarization angles that are parallel ($\chi_{\mathrm{opt},\parallel}$, red dashed) and perpendicular ($\chi_{\mathrm{opt},\perp}$, blue dash-dot) to the average parsec-scale jet direction of -$79\degr$ determined using 43 GHz VLBA imaging of the blazar between Jan 2016 and Jun 2017 (see $\S$\ref{sec:RadioKnots}).} \label{fig:lightcurves} \end{figure*} \section{Observations and Data Reduction} \label{sec:Obs} We analyze data obtained from 2016 May 1 to 2016 September 30 at $\gamma$-ray\ energies from 0.1-300 GeV and in the optical Johnson $R$ band. The observations at optical wavelengths, as well as the data reduction at all wavelengths, were performed by the authors. Throughout this paper, dates are referred to using reduced Julian Date, RJD $=$ JD$ - 2,450,000$, as well as the UT date. The analyzed period is RJD: 7509.5 - 7662.5. 
We adopt current cosmological constants from \citet{Planck2014}: $\Omega_M = 0.308,\ \Omega_\Lambda = 0.692,$ and Hubble Parameter $H_0 = 67.8$ km s$^{-1}$ Mpc$^{-1}$. \subsection{Multi-frequency Light Curves} \label{sec:lightcurves} The $\gamma$-ray\ data were collected with the \emph{Fermi}-LAT. Pass 8 photon and spacecraft data were used, along with version \emph{v10r0p5} of the Fermi Science Tools, the \emph{iso\_P8R2\_SOURCE\_V6\_v06.txt} isotropic template, and the \emph{gll\_iem\_v06} Galactic diffuse emission model.\footnote{Provided at \url{https://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html}} We use standard analysis cuts of \texttt{evtype = 3} and \texttt{zmax = 90} for the likelihood analysis. Instead of a single time bin for the entire analysis period, the time binning for the flux over different time periods around the peak of the outburst was modified in order to increase the time resolution of the light curve during the highest levels of activity. For the time periods May 1-June 11 (RJD: 7509.5-7550.5) and July 6-September 30 (RJD: 7575.5-7662.5), a time bin of one day was used to ensure a significant detection despite relatively low flux levels. A six-hour time bin was used for the periods June 11-20 (RJD: 7550.5-7559.5) and July 1-6 (RJD: 7570.5-7575.5). Finally, during the time around the peak of the outburst (June 20-July 1, RJD: 7559.5-7570.5), a time bin of three hours was adopted. A region of radius $25\degr$ centered on 3C454.3 was chosen for this analysis. The $\gamma$-ray-emission from 3C454.3 and other point sources within a 15$\degr$ radius region of interest of the blazar were represented with spectral models as found in the 3FGL catalog of sources detected by the LAT \citep{Acero2015}, creating a standard annulus with a thickness of $10\degr$ around the region of interest. 
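The adaptive binning schedule described above can be summarized in a short sketch (the RJD boundaries are those quoted in the text; the function itself is illustrative and not part of the Fermi Science Tools pipeline):

```python
def gamma_bin_width_days(rjd):
    """Bin width (days) used for the gamma-ray light curve at a given
    reduced Julian Date (RJD = JD - 2450000)."""
    if 7509.5 <= rjd < 7550.5 or 7575.5 <= rjd < 7662.5:
        return 1.0    # low-activity wings: one-day bins
    if 7550.5 <= rjd < 7559.5 or 7570.5 <= rjd < 7575.5:
        return 0.25   # rise and decay of the outburst: six-hour bins
    if 7559.5 <= rjd < 7570.5:
        return 0.125  # peak of the outburst: three-hour bins
    raise ValueError("RJD outside the 2016 May 1 - September 30 window")
```

The finer bins near the peak trade photon statistics for time resolution, which is viable only because the source is bright enough there for a significant detection in each bin.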
Specifically, the energy spectrum of 3C454.3 was modeled as a power-law with an exponential cut-off \citep[see][]{Acero2015} of the form \begin{equation*} \frac{\mathrm{d}N}{\mathrm{d}E} = N_0 \left( \frac{E}{E_0} \right)^{\gamma_1} \exp\left[ -\left( \frac{E}{E_\mathrm{c}} \right)^{\gamma_2} \right]. \end{equation*} \noindent During the unbinned likelihood analysis\footnote{As described in the \emph{Fermi}\ Analysis Threads: \url{https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/python_tutorial.html}} to compute the light curve, the spectral parameters of all sources within the region of interest were kept fixed to the values listed in the 3FGL, as were the spectral parameters of the quasar, which are $E_0 = 412.75$ MeV for the scale factor, $E_\mathrm{c} = 25.65$ MeV for the cutoff energy, and $\gamma_1 = 1.63$ and $\gamma_2 = 0.28$ for the two power-law indices \citep{Acero2015}. The prefactor $N_0$ was allowed to vary for 3C454.3, as were the normalization parameters of the isotropic emission template and Galactic diffuse emission model. This procedure produces a $\gamma$-ray\ light curve with 275 measurements of 3C454.3. The source was considered detected if the test statistic, TS, provided by the maximum-likelihood analysis exceeded 10, which corresponds to approximately a $3\sigma$ detection level \citep{Nolan2012}. Upper limits were calculated using the standard procedure for the 24 data points with TS $<10$. The optical photometric data in $R$ band were collected at various telescopes listed in Table~\ref{tab:teleloc}. The data reduction of observations from the Perkins, CAO, and St. Petersburg telescopes is described in \citet{Larionov2008} and \citet{Jorstad2010}. The reduction of the Steward Observatory data is described in \citet{Smith2009}. The observations from the Colgate University Foggy Bottom Observatory (FBO) are described in more detail below.
Between 2016 May 1 and September 30, 860 images of 3C454.3 on 41 nights were taken at FBO with \emph{Photometrics Star 1} CCD systems. The images are primarily two-minute exposures with the $R$ filter designed by \citet{Beckert1989} to conform to the Johnson-Cousins system (central wavelength $\lambda_c = 640$ nm, bandwidth $\Delta \lambda = 160$ nm, with magnitude-to-flux conversion coefficient for a 15 magnitude star $C_\lambda = 3.08$ mJy). The data were reduced using standard IRAF\footnote{IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.} V2.12 packages and customized scripts written to facilitate the data handling. The images were processed using aperture photometry with the IRAF \emph{apphot} package using a 10\arcsec\ diameter aperture and a sky annulus of inner diameter 24\arcsec\ and outer diameter 44\arcsec. Star 8 of \citet{Smith1998}, with a known magnitude of $R = 13.10 \pm 0.03$, was used as the primary comparison star. A faint star $\sim 10$\arcsec\ west of star 8 contributes $<1\%$ to the brightness of star 8, and is ignored. All error bars presented in this paper are $1\sigma$ uncertainty and were calculated using \emph{apphot}. The validity of these errors was verified by measuring the scatter within a night, as well as over the entire data set for several comparison stars of different magnitudes. All data points of 3C454.3, unless otherwise stated in the captions to figures, represent the average of 12 images if the magnitude of the blazar $R > 15$, or the average of 6 images if $R < 15$. This was done in an attempt to increase the temporal resolution of data obtained at FBO at high flux levels. The range of $R$ binned in this way was $15.84 \geq R \geq 13.03$ (a flux range of $1.42 \leq S_\mathrm{opt} \leq 18.91$ mJy). 
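The magnitude-to-flux conversion implied by the calibration above (a 15.0-magnitude star corresponding to $C_\lambda = 3.08$ mJy) can be sketched as follows; the function name is ours:

```python
C_LAMBDA_MJY = 3.08  # flux density of a 15th-magnitude star in this R band, mJy

def r_mag_to_mjy(r_mag):
    """Convert an R-band magnitude to a flux density in mJy."""
    return C_LAMBDA_MJY * 10.0 ** (-0.4 * (r_mag - 15.0))
```

Applied to the endpoints of the binned range, $R = 15.84$ and $R = 13.03$, this reproduces the quoted flux range of approximately 1.42 and 18.91 mJy.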
Discrepancies between the flux scales of data from the different telescopes are $\sim$0.05 magnitude. However, analysis of data taken at a similar time from telescopes with different reduction methods indicates that no systematic offset is necessary, so no correction is made. No interstellar extinction was factored into the conversion between magnitude and flux. The main comparison stars used have color indices similar to that of the blazar, so we make no correction for atmospheric absorption. Figure~\ref{fig:lightcurves} shows the $\gamma$-ray\ and optical light curves from 2016 May 1 to September 30 (RJD: 7509.5-7662.5) in panels (a) and (b), respectively. Upper limits are denoted by red downward facing arrows in panel (a). Visual inspection of the light curves reveals a $\sim3$ week period of high activity at both $\gamma$-ray\ and optical frequencies, from June 11 to July 1 (RJD: 7550.5-7570.5), with lower-level activity occurring both before and after. \begin{figure*} \plotone{f2.eps} \caption{The observed optical $R$ band light curve of the June 2016 outburst, relative to $S^{\mathrm{max}}_{\mathrm{opt}} = 18.91 \pm 0.08$ mJy and centered on $T^{\mathrm{max}}_{\mathrm{\gamma}} = 7564.3$ (see Table~\ref{tab:flareparam}). Error bars are included, but in many cases are smaller than the symbols.} \label{fig:optcurve} \end{figure*} \subsection{Observations of Polarization} \label{sec:PolObs} Optical linear polarization measurements were performed at telescopes 2-6. The polarization measurements made using telescopes 2 and 3 were obtained in $R$ band, while those from telescope 4 were in white light. The measurements made with telescopes 5 and 6 were spectropolarimetric observations spanning the spectral range 4000-7550 \AA\ at a resolution of 15-20 \AA. Here, we report the polarization averaged over the range of 5000 to 7000 \AA.
The details of optical polarization observations and data reduction for these telescopes can be found in \citet{Larionov2008} and \citet{Jorstad2010}. The spectropolarimetric observations of 3C454.3 at Steward Observatory were obtained as part of a program to monitor bright $\gamma$-ray\ blazars from the \emph{Fermi}\ blazar list during the first 10 years of the \emph{Fermi}\ mission.\footnote{\url{http://james.as.arizona.edu/~psmith/Fermi}} Details of the spectropolarimetric data reduction can be found in \citet{Smith2009}, and the results are discussed in $\S$\ref{sec:opticalpolarization}. The combined optical polarization data obtained from the telescopes used for this study consist of 128 measurements of the degree, $P_\mathrm{opt}$, and electric-vector position angle, $\chi_\mathrm{opt}$. The degree of polarization from the source (Fig.~\ref{fig:lightcurves}c) increased from a few percent to $\sim 20\%$ at roughly the same time as the beginning of the outburst at optical and $\gamma$-ray\ frequencies, near mid-June 2016. The value of $P_\mathrm{opt}$ during the outburst and the post-outburst period was quite variable, ranging between the pre-outburst level to the maximum. The value of $P_\mathrm{opt}$ decreased over a longer timescale than the fluxes at optical and $\gamma$-ray\ frequencies. Over the course of the main outburst, flare $a$ (defined visually as the period of the outburst with the highest $\gamma$-ray flux, from 2016 June 19 to June 25; RJD 7558-7564 -- see $\S$\ref{sec:gammaray}), $\chi_\mathrm{opt}$ varied over a range $\sim 120\degr$, remaining stable at times for only several days (Fig.~\ref{fig:lightcurves}d). During the peak of the outburst, $\chi_\mathrm{opt}$ varied erratically. 
\section{Structure and Timescales of the Outburst} \label{sec:structure} \subsection{Optical Outburst} \label{sec:opticaloutburst} \floattable \begin{deluxetable}{lc|lc} \tablecaption{Parameters of the 2016 Outburst \label{tab:flareparam}} \tablecolumns{5} \tablewidth{0pt} \tablehead{\colhead{$\gamma$-ray\ Parameter} & \colhead{Value} & \colhead{Optical Parameter} & \colhead{Value}} \startdata $M$ & 275 (24) & $M$ & 120 \\ $\Delta T_\gamma$ [days] & 86 & $\Delta T_\mathrm{opt}$ [days] & 20 \\ $\langle S_\gamma \rangle$ [10$^{-6}$ photons cm$^{-2}$ s$^{-1}$] & 5.44 $\pm$ 5.76 & $\langle S_\mathrm{opt} \rangle$ [mJy] & 7.21 $\pm$ 5.15 \\ $\langle \sigma_\gamma \rangle$ [10$^{-6}$ photons cm$^{-2}$ s$^{-1}$] & 0.96 & $\langle \sigma_\mathrm{opt} \rangle$ [mJy] & 0.07 \\ \hline $\Delta T^{a}_\gamma$ [days] & 6.3 & $\Delta T^\mathrm{peak}_\mathrm{opt}$ [days] & 5.82 \\ $T^{\mathrm{max}}_\gamma$ & $\sim$ 2016 Jun 24 19:00 & $T^\mathrm{max}_\mathrm{opt}$ & 2016 Jun 24 05:30 \\ $T^{\mathrm{max}}_\gamma$ [RJD] & 7564.3 & $T^\mathrm{max}_\mathrm{opt}$ [RJD] & 7563.731 \\ $S^{\mathrm{max}}_\gamma$ [10$^{-6}$ photons cm$^{-2}$ s$^{-1}$] & 22.20 $\pm$ 0.18 & $S^\mathrm{max}_\mathrm{opt}$ [mJy] & 18.91 $\pm$ 0.08 \\ $\tau^a_\gamma$ [hr] & 2.63 & $\tau^\mathrm{min}_\mathrm{opt}$ [hr] & 1.97 \\ $f_\gamma^a$ & 3.13 & $f_\mathrm{opt}$ & 1.05 \\ \hline $\Delta T^{\mathrm{pre}}_\gamma$ [days] & 6 & $\dots$ & $\dots$ \\ $S^\mathrm{pre}_\gamma$ [10$^{-6}$ photons cm$^{-2}$ s$^{-1}$] & 2.75 $\pm$ 1.18 & $\dots$ & $\dots$ \\ $\tau^\mathrm{pre}_\gamma$ [hr] & 6.33 & $\dots$ & $\dots$ \\ $f_\gamma^\mathrm{pre}$ & 2.58 & $\dots$ & $\dots$ \\ \hline $\Delta T^{\mathrm{post}}_\gamma$ [days] & 7.5 & $\dots$ & $\dots$ \\ $S^\mathrm{post}_\gamma$ [10$^{-6}$ photons cm$^{-2}$ s$^{-1}$] & 1.63 $\pm$ 0.74 & $\dots$ & $\dots$ \\ $\tau^\mathrm{post}_\gamma$ [hr] & 2.86 & $\dots$ & $\dots$ \\ $f_\gamma^\mathrm{post}$ & 2.86 & $\dots$ & $\dots$ \\ \hline $T^b_\gamma$ & 2016 Jul 28 & $\dots$ & $\dots$ \\ 
$T^b_\gamma$ [RJD] & 7598.0 & $\dots$ & $\dots$ \\ $S^b_\gamma$ [10$^{-6}$ photons cm$^{-2}$ s$^{-1}$] & 4.88 $\pm$ 0.32 & $\dots$ & $\dots$ \\ \hline $T^c_\gamma$ & 2016 Aug 23 & $\dots$ & $\dots$ \\ $T^c_\gamma$ [RJD] & 7623.6 & $\dots$ & $\dots$ \\ $S^c_\gamma$ [10$^{-6}$ photons cm$^{-2}$ s$^{-1}$] & 3.32 $\pm$ 0.32 & $\dots$ & $\dots$ \\ \hline $\tau^\mathrm{min}_\gamma$ [hr] & 2.63 & $\tau^\mathrm{min}_\mathrm{opt}$ [hr] & 1.97 \\ $ T^{\tau_\mathrm{min}}_\gamma$ [RJD] & 7561.06 & $T^{\tau_\mathrm{min}}_\mathrm{opt}$ [RJD] & 7564.512 \\ $ \langle \tau_{\gamma,2} \rangle$ [hr] & 34 $\pm$ 20 & $\langle \tau_{\mathrm{opt},2} \rangle$ [hr] & 38 $\pm$ 14 \\ \enddata \tablecomments{\textbf{$\bf\gamma$-\textbf{ray}\ Parameters:} $M$: Number of observations (number of upper-limits); $\Delta T_\gamma$: Duration of the $\gamma$-ray\ outburst; $\langle S_\gamma \rangle$: The average flux during the outburst, and its standard deviation; $\langle \sigma_\gamma \rangle$: The average 1$\sigma$ uncertainty of an individual measurement during the outburst; $\Delta T^{a}_\gamma$: Duration of the main flare (FWHM, see text Section~\ref{sec:gammaray}); $T^{\mathrm{max}}_\gamma$: The date of maximum of the $\gamma$-ray\ outburst; $S^{\mathrm{max}}_\gamma$: Flux at the peak of the $\gamma$-ray\ outburst over a 3 hour bin; $\tau^a_\gamma$: Minimum timescale of variability of $\gamma$-ray flux during the main flare; $f^a_\gamma$: Factor of the $\gamma$-ray flux change over $\tau^a_\gamma$; $\Delta T^\mathrm{pre}_\gamma$: Duration of the pre-flare plateau for flare $a$; $S^\mathrm{pre}_\gamma$: The average $\gamma$-ray\ flux and its standard deviation over period of $\Delta T^\mathrm{pre}_\gamma$; $\tau^\mathrm{pre}_\gamma$: Minimum timescale of variability of $\gamma$-ray\ flux during $\Delta T^\mathrm{pre}_\gamma$; $f^\mathrm{pre}_\gamma$: Factor of the $\gamma$-ray\ flux change over $\tau^\mathrm{pre}_\gamma$; $\Delta T^\mathrm{post}_\gamma, S^\mathrm{post}_\gamma, 
\tau^\mathrm{post}_\gamma, f^\mathrm{post}_\gamma$: Parameters for the post-flare plateau obtained in the same manner as for the pre-flare plateau; $T^b_\gamma, S^b_\gamma$: Epoch and maximum flux for flare $b$, calculated in the same manner as flare $a$; $T^c_\gamma, S^c_\gamma$: Epoch and maximum flux for flare $c$, calculated in the same manner as flare $a$; $\tau^{\mathrm{min}}_\gamma$ (hr): Minimum timescale of variability of $\gamma$-ray\ flux during an outburst; $T^{\tau_{\mathrm{min}}}_\gamma$: Epoch of the start of an event with minimum timescale of variability; $\langle \tau_{\gamma,2} \rangle$: Typical timescale of flux doubling (see text). \textbf{Optical Parameters:} $M$: Number of observations; $\Delta T_\mathrm{opt}$: Duration of optical outburst; $\langle S_\mathrm{opt} \rangle$: Average flux-density during the outburst and its standard deviation; $\langle \sigma_\mathrm{opt} \rangle$: The average 1$\sigma$ uncertainty of an individual measurement during the outburst; $\Delta T^\mathrm{peak}_\mathrm{opt}$: Duration of the main flare (FWHM); $T^\mathrm{max}_\mathrm{opt}$: Epoch of maximum during the optical outburst; $S^\mathrm{max}_\mathrm{opt}$: Maximum flux-density and error of the optical outburst; $\tau^\mathrm{min}_\mathrm{opt}$: Minimum timescale of variability during the optical outburst; $f_\mathrm{opt}$: Factor of the flux change over $\tau^\mathrm{min}_\mathrm{opt}$; $T^{\tau_\mathrm{min}}_\mathrm{opt}$: Epoch of start of an event with minimum timescale of variability; $\langle \tau_{\mathrm{opt},2} \rangle$: Typical timescale of flux doubling (see text).} \end{deluxetable} A detailed subset of the $R$ band optical light curve of the outburst is presented in Figure~\ref{fig:optcurve}. 
The light curve is normalized to the maximum flux density of the outburst, $S_\mathrm{opt}^\mathrm{max} = 18.91 \pm 0.08$ mJy, and centered on the date of maximum $\gamma$-ray\ flux ($T_\gamma^\mathrm{max} \approx$ June 24 19:00, RJD: 7564.3) with a range of $\pm\ 20$ days. Characteristic parameters for the optical light curve can be found in Table~\ref{tab:flareparam}. The maximum flux density value $S_\mathrm{opt}^\mathrm{max}$ occurred on $T_\mathrm{opt}^\mathrm{max} = $ June 24, 05:30 (RJD: 7563.7314), slightly before the $\gamma$-ray\ maximum. The time of maximum flux occurred late into the outburst; the rising time for the optical outburst was $\sim 14$ days, and the decay occurred on a timescale $< 5$ days. The flare profile has a skewed shape that is also apparent in the $\gamma$-ray\ outburst (see Figure~\ref{fig:gammaonly} and $\S$\ref{sec:gammaray}), and differs from past optical outbursts \citep[see, e.g.,][]{Ogle2011, Jorstad2013, Kushwaha2016}. Also, unlike the optical outbursts analyzed in \citet{Jorstad2013}, there are no evident pre- and post-outburst plateaus in the optical light curve. We use the full width at half-maximum (FWHM) of a Gaussian function that fits the flare profile near the maximum flux density to define the duration of the optical outburst despite the asymmetry of the light curve, $\Delta T_\mathrm{opt}^\mathrm{peak} \sim 5.8$ days. We determine timescales of optical flux variability ($\tau_\mathrm{opt}$) using a formalism suggested by \citet{Burbidge1974} and utilized by \citet{Jorstad2013}: $\tau = \Delta t / \ln{(S_2 / S_1)}$, where $S_i$ is the flux density at epoch $t_i$, with $S_2 > S_1$, and $\Delta t = |t_2 - t_1|$. The timescale of variability was calculated for all possible pairs of flux measurements within 3 days of each other if, for a given pair, $S_2 - S_1 > \frac{3}{2} (\sigma_{S_1} + \sigma_{S_2})$, where $\sigma_{S_i}$ is the uncertainty for an individual measurement. 
The minimum timescale of variability, $\tau_\mathrm{opt}^\mathrm{min}$, is very short: $\sim$2.0 hr, indicative of intraday and perhaps micro-variability (see $\S$\ref{sec:variability}). Such episodes of extreme variability occur infrequently in 3C454.3, with the majority of the active periods exhibiting a timescale of flux doubling between 1 and 2 days. A remarkable feature of the optical light curve is the precipitous decay in flux density over one day on June 25 (RJD: 7564.5). The flux density in $R$ band decreased by a factor of $\sim 4$, from 14.2 mJy to 3.8 mJy. Due to the sampling rate of the light curve, the observed timescale of 24 hr is likely an overestimate, as the bulk of the decay occurred over a 4.5 hr time period when the flux density decreased by a factor of $\sim$2, from 11.5 to 6 mJy. An in-depth discussion of the optical variability is given in $\S$\ref{sec:variability}. \\ \begin{figure} \plotone{f3.eps} \caption{Stokes $q$ and $u$ changes over the course of the 2016 outburst. The time ranges are defined following the $\gamma$-ray\ outburst (see $\S$\ref{sec:gammaray}): Pre-Outburst - from 2016 May 01 to June 12 (RJD: 7509-7551); Flare a - from June 12 to July 4 (RJD: 7551-7573); Flare b and c - from July 4 to September 6 (RJD: 7573-7637); and Post-Outburst - from September 6 to October 4 (RJD: 7637-7665). \label{fig:QUStokes}} \end{figure} \subsection{Optical Polarization} \label{sec:opticalpolarization} While the sampling of the optical polarization during the outburst was significantly less intense than that of the optical flux density, there are a few noteworthy features of the polarization curves presented in Figure~\ref{fig:lightcurves}. Prior to the outburst, the degree of optical polarization was low ($P_\mathrm{opt} = 2.47 \pm 0.39 \%$ during May). However, the large gap in sampling between May and June makes it difficult to accurately describe the nature of the polarization prior to the outburst.
During the outburst, the average polarization was $\langle P_\mathrm{opt} \rangle = 12.77 \pm 5.47\%$, with a maximum value of $P_\mathrm{opt}^\mathrm{max} = 20.03 \pm 0.10 \%$ on June 16 (RJD: 7555.97). This is displaced by 8 days from the peaks of the optical and $\gamma$-ray\ light curves on June 24 (RJD: 7564.3). The quasar 3C454.3 was highly polarized during the June 2016 outburst, while the blazar was in a weakly-polarized state prior to the outburst. The polarization at the peak of the $\gamma$-ray\ and optical light curves was $P_\mathrm{opt} = 12.16 \pm 0.15\%$, near the average for the outburst. While both the $\gamma$-ray\ and optical outbursts experienced dramatic and precipitous decays on June 25 (RJD: 7564.5), a decrease in the polarization was not seen until June 26 (RJD: 7565.5), when $P_\mathrm{opt}$ fell from $14.55 \pm 0.48 \%$ to $2.91 \pm 0.33\%$ over a 24-hour time period. The value of $P_\mathrm{opt}$ later increased up to $>20\%$, a higher level than during the outburst, for a short period of time, before settling down to near pre-outburst levels. According to the available data obtained prior to and after the outburst, changes in the degree of polarization were more chaotic after the outburst than prior to it (despite the sparser sampling in May 2016). One month prior to the start of the outburst, the emission from 3C454.3 was polarized roughly parallel to the jet direction (see $\S$\ref{sec:RadioKnots}). Throughout the course of the outburst, $\chi_\mathrm{opt}$ rotated in an irregular fashion by $\sim 120\degr$. This change took place over a longer timescale than the rise times of either the optical or $\gamma$-ray\ outbursts. It is interesting to note that the changes in $P_\mathrm{opt}$ and $\chi_\mathrm{opt}$ do not coincide. The value of $\chi_\mathrm{opt}$ did not return to a direction nearly parallel to the jet until flare $c$ (described in the following section).
Similar high-amplitude drifts in $\chi_\mathrm{opt}$ in 3C454.3 have been noted previously \citep[e.g.,][]{Jorstad2007}. While $P_\mathrm{opt}$ was chaotic over the course of the outburst, and $\chi_\mathrm{opt}$ rotated in an irregular fashion by $\sim120\degr$, the polarized light from 3C454.3 followed a general trend over the course of the outburst. Prior to the outburst, the average polarization was low, with average Stokes parameters $\langle q_\text{pre} \rangle = -2.70 \pm 1.06\%$ and $\langle u_\text{pre} \rangle = -0.55 \pm 1.05\%$, where $1.06\%$ and $1.05\%$ represent the standard deviations of the average values, while the typical uncertainty on a measurement of $q$ or $u$ is $\langle \sigma \rangle = 0.36\%$. During the main part of the outburst, $\gamma$-ray\ flare $a$ (see $\S$\ref{sec:gammaray}), $q$ and $u$ became much larger in magnitude as well as more random than in the pre-outburst state, with $\langle q_a \rangle = -5.58 \pm 6.63\%$ and $\langle u_a \rangle = 9.45 \pm 5.41\%$ (see Figure~\ref{fig:QUStokes}). Although there is clustering of $u$ at high values during flare $a$, several low values are measured as well. This behavior can be connected with the complex structure of the flare seen in the $R$ band light curve (Fig.~\ref{fig:optcurve}). However, sparser sampling of the polarization data does not allow us to investigate a detailed correlation between the degree of polarization and flux behavior. As the outburst faded through flares $b$ and $c$, $q$ and $u$ became more erratic around a central low polarization, with $\langle q_{bc} \rangle = 0.35 \pm 6.36\%$ and $\langle u_{bc}\rangle = -2.04 \pm 5.66\%$. After the outburst, $\langle q_\text{post} \rangle = -2.30 \pm 3.78\%$ and $\langle u_\text{post} \rangle = 0.61 \pm 3.35\%$. The increase in the standard deviation of $\langle q \rangle$ and $\langle u \rangle$ indicates that the post-outburst state was more turbulent than the pre-outburst state.
The high-amplitude fluctuations of $q$ and $u$ early in the outburst and the clustering of measurements around low polarizations near the end of the outburst are consistent with the interpretation that the jet contains a superposition of ordered and turbulent magnetic fields \citep{Marscher2017}. In addition to the photometric measurements, spectropolarimetric measurements were obtained using the same instrument on telescopes 5 and 6. While a complete analysis of the spectra is beyond the scope of this work, we briefly describe the general trends. In order to describe these trends, we rotate the $q$ and $u$ Stokes spectra so that $u^\prime$ averages to $0$ between $5000$ and $7000$ \AA. In this frame, nearly all of the polarization is given by $q^\prime$, and we can avoid the complications of the statistical bias in measurements of $P_\mathrm{opt}$ arising from that parameter's non-normal error distribution \citep{Wardle1974}. For each spectrum of $q^\prime$ and $u^\prime$, the median values in two widely separated wavelength bins, each 500 \AA\ wide, were taken to analyze $P_\mathrm{opt} (\lambda)$. A blue region centered on $\lambda = 4750$ \AA\ and a red region centered on $\lambda = 7250$ \AA\ were chosen to avoid any major emission line features. We then construct $\Delta q^\prime = q^\prime (\text{red}) - q^\prime (\text{blue})$ and $\Delta u^\prime = u^\prime (\text{red}) - u^\prime (\text{blue})$. In this representation, the wavelength dependence of $P_\mathrm{opt}$ is quantitatively given by $\Delta q^\prime$, while $\Delta u^\prime$ indicates the strength of the dependence of $\chi_\mathrm{opt}$ on wavelength. \begin{figure} \plotone{f4.eps} \caption{The rotated differential Stokes parameters $\Delta q^\prime$ (red circles) and $\Delta u^\prime$ (blue squares) vs. time for the 2016 outburst of 3C454.3. The dot-dash lines show the weighted average of the values. The black dashed line at 0 is included for comparison.
The uncertainties are derived from the value $\text{RMS}/\sqrt{N}$ for each region ($\lambda = 7000$-$7500$\ \AA\ (red) and $\lambda = 4500$-$5000$\ \AA\ (blue)), where $N$ is the number of pixels in the spectral region (126 for both regions) and RMS is the root-mean-square calculated from the pixels within the sample region.\label{fig:DQDU}} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.4\textwidth, trim=100 200 100 225]{f5.eps} \end{center} \caption{Median $q^\prime$ and $u^\prime$ (rotated $q$ and $u$ such that $\langle u^\prime \rangle = 0$ for the wavelength range $5000$-$7000$ \AA) of 3C454.3 for 758 spectropolarimetric observations obtained from 2008-2018 in blue. See text for details. The median optical flux spectrum of 3C454.3 over the same period is shown in red. The symbol $\oplus$ indicates absorption features due to the atmosphere.\label{fig:QWaveDep}} \end{figure} During the outburst, the weighted average (with propagated uncertainty) is $\langle \Delta u^\prime \rangle = -0.24 \pm 0.05\%$, with a standard deviation of $1.01\%$, i.e., the values are distributed roughly evenly around $\Delta u^\prime = 0$, indicating that $\chi_\mathrm{opt}$ does not depend strongly on wavelength. However, $\Delta q^\prime$ shows a bias towards positive values, with $\langle \Delta q^\prime \rangle = 1.22 \pm 0.05\%$ and a standard deviation of $1.54\%$. This bias is likely due to the unpolarized blue bump emission diluting the polarization from the non-thermal emission in the jet \citep{Smith1988}, and shows that $P_\text{opt}$ increases with increasing wavelength. The values of $\Delta q^\prime$ and $\Delta u^\prime$ are shown in Figure~\ref{fig:DQDU}, along with their weighted averages. The majority of the spectropolarimetry data were obtained before and after the main outburst, flare $a$, while the optical emission of the blazar was weak ($<5$ mJy) and the polarization only moderately high ($\sim 10\%$).
As a result, the observed increase of $P_\text{opt}$ with increasing wavelength supports the findings of \citet{Jorstad2013}, who found a similar trend for weak emission/moderate polarization states of 3C454.3. These trends are seen not just in this outburst, but also in the 10-year spectropolarimetric monitoring of 3C454.3, with $\langle \Delta q^\prime_\mathrm{tot} \rangle = 0.827 \pm 0.001\%$ and $\langle \Delta u^\prime_\mathrm{tot} \rangle = -0.063 \pm 0.008\%$, and standard deviations of $1.310\%$ and $1.002\%$, respectively. Figure~\ref{fig:QWaveDep} shows the median $q^\prime$ and $u^\prime$ spectra for all Steward Observatory observations from 2008-2018. As expected from the adopted rotation of the Stokes parameters, $u^\prime \approx 0$ across the spectrum, but $q^\prime$ shows a general decrease towards the blue. In the figure, each $q^\prime$ spectrum was normalized such that the median value in the $5500$-$6500$ \AA\ region was set to 10\%. Fig.~\ref{fig:QWaveDep} also shows the median optical flux spectrum of 3C454.3, with the average flux density in the $5400$-$5600$\ \AA\ range normalized to 1. The decrease in $q^\prime$ at the wavelengths corresponding to the Mg II line emission indicates that the broad-line region is unlikely to have a strong polarization. \begin{figure*} \plotone{f6.eps} \caption{Gamma-ray light curve of the June 2016 outburst, relative to $T^{\mathrm{max}}_\gamma = 7564.3$ and normalized to $S^{\mathrm{max}}_\gamma = (22.20 \pm 0.18) \times 10^{-6}$ photons cm$^{-2}$ s$^{-1}$ (see Table~\ref{tab:flareparam}). Upper limits are denoted with red downward arrows. Flares $a$, $b$, and $c$, pre- and post-flare $a$ plateaus, and pre- and post-outburst times are marked with dotted lines. The inset figure shows the shape of flare $a$ in more detail in the units of the main figure.
Flares $b$ and $c$, while low-amplitude, are comparable to flares $b$ and $c$ presented in \citet{Jorstad2013}.} \label{fig:gammaonly} \end{figure*} \begin{figure*} \plotone{f7.eps} \caption{Flux density vs. time during all nights with a calculated ANOVA confidence level $p < 0.001$ ($>3\sigma$) of variability (a-g). Data points represent individual images obtained with a \emph{Photometrics Star 1} CCD system and Johnson $R$ filter on the 40-cm Newtonian-Cassegrain telescope of FBO. In all panels, flux density (in mJy) is on the y-axis, and time (in UT hours) is on the x-axis. The scale is the same in all panels to facilitate comparison among nights. All nights are within 6 days of the peak of the outburst. Error bars are plotted, but some are smaller than the symbols. $F$ and $p$ values are given in Table~\ref{tab:anova}, and $p$ values for an individual night are in the respective figure. (h) June 30 (RJD: 7569.5) is given as a comparison for a non-variable night, with $p = 0.71$.} \label{fig:varnights} \end{figure*} \subsection{Gamma-Ray Outburst} \label{sec:gammaray} The $\gamma$-ray\ light curve presented in Figure~\ref{fig:gammaonly} is normalized to the maximum photon flux of the outburst ($S_\gamma^\mathrm{max} = (22.20 \pm 0.18) \times 10^{-6}$ ph cm$^{-2}$ s$^{-1}$), with time $t=0$ set to the date of maximum ($\sim$ June 24, 19:00, RJD: 7564.3). As mentioned in $\S$\ref{sec:PolObs}, a main flare, $a$, is identified in the outburst, along with two smaller-amplitude flares, $b$ and $c$. Flare $b$ occurred $\sim 1$ month after flare $a$, and flare $c$ $\sim 1$ month after flare $b$. All three flares are marked in Figure~\ref{fig:gammaonly}. The main flare, $a$, is similar in structure and duration to the optical outburst. Both flux profiles have an asymmetric shape, with the peak occurring late in the flare. The duration of flare $a$ is determined in the same manner as for the optical outburst, as the FWHM of a Gaussian function fit to the flux profile near maximum: $6.3$ days.
This duration is close to that of the optical outburst, $\Delta T_\mathrm{opt}^\mathrm{peak} = 5.8$ days. Also, both flare $a$ and the optical outburst exhibit a precipitous decay from peak to pre-outburst levels. Over a similar 24-hour period on June 25 (RJD: 7564.5), flare $a$ decayed by a factor of 10, from $\sim 2.0 \times 10^{-5}$ to $\sim 2.0\times 10^{-6}$ ph cm$^{-2}$ s$^{-1}$. As in the optical case, the decay of flare $a$ occurred mainly over an even shorter timescale, declining by a factor of $\sim 4$ over only 6 hours. Unlike the optical outburst, flare $a$ has pre- and post-flare ``plateaus" of enhanced $\gamma$-ray\ emission, a common feature of blazar $\gamma$-ray\ flares, first discussed by \citet{Abdo2011}. These plateaus, identified visually, are marked in Figure~\ref{fig:gammaonly}, with durations of $\Delta T_\gamma^\mathrm{pre} \approx 6$ and $\Delta T_\gamma^\mathrm{post} \approx 7.5$ days. In total, the duration of flare $a$ is $\Delta T^a_\gamma = 19.8$ days. Parameters for flare $a$ and the pre- and post-flare plateaus are given in Table~\ref{tab:flareparam}. The timescales of variability are calculated with the same formalism as for the optical light curve. The triple-flare structure of a $\gamma$-ray\ outburst of 3C454.3 has been seen in previous events, such as the late 2009 (Outburst I), early 2010 (Outburst II), and late 2010 (Outburst III) outbursts analyzed by \citet{Jorstad2013}. In those events, the delay between flares $a$ and $b$ was $\sim 30$ days, and the delay between flares $a$ and $c$ was $\sim 47$ days. For the mid-2016 outburst discussed in this paper, flares $b$ and $c$ occurred later, with delays of $\sim 38$ and $\sim 60$ days from flare $a$, respectively. Another difference between the 2016 outburst and these previous three is the shape of flare $a$.
While all four main flares had a pre- and post-flare plateau, the total duration of flare $a$ for the 2016 outburst is less than the duration of the three previous flares by $\sim 10$ days. The extremely short decay of flare $a$ of the 2016 outburst suggests a faster disturbance in our frame or a smaller, more violently variable emission region, as only Outburst III showed a comparable decay range, although over a much longer timescale ($>24$ hours). Despite the minor differences in shape and timescales, the similarity in structure of the $\gamma$-ray\ outbursts argues in favor of similar mechanisms and locations of $\gamma$-ray\ production for all four events. In fact, the triple-flare structure may be the archetypical pattern of outbursts of 3C454.3. \citet{Jorstad2010} noted a triple-flare structure in the optical light curve of 3C454.3 that coincided in time with the passage of superluminal knots through the mm-wave core of the jet. A similar passage of a knot through sections of the jet containing a relatively high magnetic field and/or relativistic electron density (e.g., a series of standing shocks) could produce the triple-flare structure of the $\gamma$-ray\ outbursts. While the measured time interval between the first and third peaks of the earlier events was $\sim 50$ days, it is possible that a similar mechanism/location could result in the structure seen in the 2016 outburst. Parameters of the mid-2016 $\gamma$-ray\ outburst, calculated and presented in a manner similar to the outbursts analyzed in \citet{Jorstad2013}, are given in Table~\ref{tab:flareparam}.
\floattable \begin{deluxetable}{ccccccccc} \tablecaption{Nights with Significant Variability \label{tab:anova}} \tablecolumns{9} \tablewidth{0pt} \tablehead{\colhead{Night} & \colhead{M} & \colhead{$F$} & \colhead{$p$} & \colhead{$\langle S \rangle$} & \colhead{$\langle \sigma \rangle$} &\colhead{$\Delta T$} & \colhead{$\Delta S$} & \colhead{Sky Conditions} \\ UT Date (RJD) & & & & [mJy] & [mJy] & [hrs] & [mJy]} \startdata June 19 (7558.5) & 24 & 29.37 & 6.72$\times 10^{-8}$ & $7.94 \pm 0.41$ & 0.016 & 2.09 & -0.92 & Full Moon, Clear\\ June 20 (7559.5) & 24 & 10.14 & 1.40$\times 10^{-4}$ & $9.83 \pm 0.34$ & 0.21 & 1.35 & 0.91 & Full Moon, Local Haze \\ June 22 (7561.5) & 13 & 17.11 & 5.50$\times 10^{-4}$ & $15.03 \pm 1.09$ & 0.36 & 0.49 & 2.08 & Full Moon, Partial Clouds\\ & & & & & & & &near End of Night \\ June 23 (7562.5)$^a$ & 51 & 113.99&2.94$\times 10^{-23}$& $16.81 \pm 1.52$ & 0.28 & 1.86 & -4.14 & Partial Clouds in Beginning, \\ & & & & & & & & then Clear after 06:30 UT \\ June 24 (7563.5) & 42 & 15.94 & 1.14$\times 10^{-7}$ & $17.69 \pm 0.73$ & 0.17 & 2.23 & -2.09 & Clear \\ June 25 (7564.5) & 47 & 178.43& 9.15$\times 10^{-26}$& $ 7.72 \pm 1.33$ & 0.16 &2.90 & -3.69 & Clear\\ June 26 (7565.5) & 40 & 16.61 & 1.38$\times 10^{-7}$ & $ 4.20 \pm 0.43 $ & 0.21 & 2.05 & -1.19 & Clear \\ June 30 (7569.5) & 48 & 0.53 & 0.71 & $ 2.28 \pm 0.14$ & 0.14 & 0.55 & -0.42 & Clear \\ \enddata \tablecomments{$^a$Due to unfavorable weather conditions, 20 of the first 22 images during the night were 60-second exposures.
The parameters are labeled as follows: $M$: Number of observations during the night; $F$: $F$-value calculated from ANOVA test; $p$: $p$-value calculated from ANOVA test, interpreted as significantly variable if $p \leq 0.001$ ($>3\sigma$); $\langle S \rangle$: Average flux-density during the night, with 1$\sigma$ standard deviation; $\langle \sigma \rangle$: Average error per measurement; $\Delta T$: Time between third-highest and third-lowest flux density levels, in hours; $\Delta S$: Flux density difference between the third-highest and third-lowest flux densities. Negative values indicate that the flux density decreased over the course of the night.} \end{deluxetable} \section{Rapid Optical Flux Variability} \label{sec:variability} 3C454.3 had been observed to have variations in brightness only on the order of 1 magnitude over the course of entire observing seasons \citep{Angione1968, Lloyd1984, Webb1988} until several outbursts during the late 2000s, most notably the unprecedented 2005 outburst \citep{Villata2006}, with a peak brightness of $R = 12$. Inspection of the optical $R$ band light curve during the 2016 outburst reveals several periods of intense variability over the course of a single night. We refer to such events as ``intraday variability" if the light curve appears to connect smoothly with the flux on the preceding and subsequent nights. The term ``micro-variability" is reserved to describe changes during a night when the behavior of the flux deviates from the interday variability trend. This section describes observations of notable intraday variability, as well as one night with clearly evident micro-variability in the form of quasi-periodic oscillation of the optical flux. Only data obtained from FBO are used in this analysis, since none of the other optical telescopes involved in this study observed 3C454.3 continuously over a given night. 
\subsection{Intraday Variability} \label{sec:intraday} Several statistical methods have been developed to quantify variability in light curves and to increase confidence in reports of variability. For example, \citet{DeDiego2010} compared several statistical tests and determined that a one-way analysis of variance (ANOVA) test is a robust method to detect and quantify variations from AGN. Applied to quasar variability, an ANOVA test evaluates the probability that several sample groups share a common mean, with the null hypothesis being non-variability. We utilized a standard ANOVA test instead of a more complicated enhanced $F$-test or Bartels test \citep{DeDiego2014}, as only a single non-variable bright comparison star was used during the photometry, and the light curve on each night is oversampled compared to the timescale of variations being determined. More robust statistical methods can be used \citep[e.g.,][]{DeDiego2015}, but in our case an ANOVA test is sufficient. In this analysis, an ANOVA test was first used to rule out variability of comparison stars in the same field as 3C454.3. The ANOVA test was then applied to each night of data on 3C454.3 collected from FBO. The ANOVA test revealed 8 nights of possible intraday variability at the $p < 0.001$ confidence level ($>3\sigma$): every night of observations between June 19 and June 26 (RJD: 7558.5-7565.5), plus July 21 (RJD: 7590.5). However, since July 21 was after the outburst, we exclude that night's data from the rest of the analysis. On several other nights, the flux varied at a confidence level $<3\sigma$; we also exclude the data from these nights. Light curves for nights between June 19 and June 26 (RJD: 7558.5-7565.5) are given in Figure~\ref{fig:varnights}. June 30 (RJD: 7569.5) is also included as a control, since no variability of 3C454.3 was observed on that night according to the ANOVA test.
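The per-night test can be sketched as follows; the fluxes are synthetic stand-ins for the FBO photometry, and the grouping of consecutive exposures into bins of five is an illustrative choice, not the binning actually used:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

def nightly_anova(flux, group_size=5):
    """One-way ANOVA across groups of consecutive exposures.

    Null hypothesis: all groups share the same mean (non-variable
    source). Returns (F, p); p <= 0.001 (~>3 sigma) is taken as
    significant intranight variability.
    """
    n = len(flux) // group_size * group_size
    groups = np.split(np.asarray(flux[:n]), n // group_size)
    return f_oneway(*groups)

# Synthetic example: a declining light curve vs. a steady one, both
# with ~0.2 mJy measurement noise (illustrative numbers).
t = np.linspace(0.0, 3.5, 40)                             # hours
variable = 11.0 - 1.5 * t + rng.normal(0, 0.2, t.size)    # mJy
steady = 7.9 + rng.normal(0, 0.2, t.size)                 # mJy

F_var, p_var = nightly_anova(variable)
F_con, p_con = nightly_anova(steady)
print(p_var, p_con)
```

With these inputs the declining night is flagged as variable (tiny $p$) while the steady night is not, mirroring the June 19-26 versus June 30 contrast in Table~\ref{tab:anova}.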
Important calculated values for the data on each night are presented in Table~\ref{tab:anova}, along with the sky conditions. The calculated $F$ and $p$ values from each ANOVA test are given in Table~\ref{tab:anova}, and the $p$ values are also included in Figure~\ref{fig:varnights}. Smaller $p$ values indicate a higher probability of the source being variable during the night. The average flux density, $\langle S \rangle$ (with 1$\sigma$ standard deviations), the timescale of variability, $\Delta T$, and the change in flux density of the source, $\Delta S$, were calculated for each night. The change in flux during each night and the time difference between maximum and minimum points were calculated using the third-highest and third-lowest flux density values to avoid the influence of outlying data points in the analysis. The most significant variability was seen on June 23 and 25 (RJD: 7562.5 and 7564.5). The total change in flux density of 3C454.3 during these two nights was $\Delta S = -4.14$ mJy and $\Delta S = -3.69$ mJy, respectively, where the negative values indicate a decrease of flux density during the night. \subsection{Micro-Variability} \label{sec:Micro} The light curve of 3C454.3 on June 25 (RJD: 7564.5) exhibits significant intraday variability (see Fig.~\ref{fig:varnights} (f)), with the flux density decreasing by a factor $\sim2$ over the 3.5 hours of observation. This is the steepest observed rise or fall in optical flux density throughout the outburst. Since the shortest timescales of variability are important for constraining the physical size of the emission region, the variability of 3C454.3 on June 25 is now examined in detail. Following \citet{Valtaoja1999}, we roughly model the optical light curve of 3C454.3 on June 25 with an exponential decay of the form $S = A \exp{(-Bt)} + C$.
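A least-squares fit of this model can be sketched with \texttt{scipy}; the data below are synthetic, generated from the best-fit parameters quoted in the text rather than from the actual June 25 photometry:

```python
import numpy as np
from scipy.optimize import curve_fit

def expo(t, A, B, C):
    """Decaying exponential S = A*exp(-B*t) + C (t in UT hours, S in mJy)."""
    return A * np.exp(-B * t) + C

rng = np.random.default_rng(1)
t = np.linspace(5.0, 8.5, 47)   # illustrative span of UT hours
S = expo(t, 100.0, 0.569, 5.20) + rng.normal(0, 0.1, t.size)

# A trades off strongly against B over such a short span, which is why
# the text warns that only a rough estimate of A is possible.
popt, pcov = curve_fit(expo, t, S, p0=(50.0, 0.5, 5.0))
A, B, C = popt
residuals = S - expo(t, *popt)   # input for the sinusoid analysis below
print(A, B, C)
```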
It is only possible to obtain a rough estimate of the value of $A$ ($\sim100$ mJy), since the amplitude of the exponential is changing drastically over a short period of time. This large value is a consequence of fitting such a small section of the data and is not meant to connect the beginning of the light curve on June 25 with the end of the light curve on the preceding night. The fitted parameters $B = 0.569 \pm 0.004$ hr$^{-1}$ and $C = 5.20 \pm 0.08$ mJy are more robust. Application of this fit and the resulting residuals are shown in Figure~\ref{fig:June25expo}. \begin{figure}[t!] \plotone{f8.eps} \caption{$(a)$ Intraday variability of 3C454.3 on June 25 (UT). The data are parameterized by a single decaying exponential (dashed line). $(b)$ Residuals of the exponential fit. In both panels, error bars are included.} \label{fig:June25expo} \end{figure} The residuals of the fit show oscillations above and below the exponential trend throughout the night. These oscillations are more pronounced than the short-term scatter. We have checked that observational effects, such as weather, atmospheric reddening, or placement of the source on the CCD, had no impact on the observed flux density. The oscillations appear to be largely independent of the aperture used during photometry. Thus, we judge these variations to be intrinsic to the blazar. In order to model these oscillations, we use a decaying sinusoid of the form \begin{equation*} S = \frac{f(t)}{f_0} (A_0 \sin{(B_0 t-C_0)})\ , \end{equation*} \noindent where $f(t)$ is the flux density of the exponential trend, $f_0$ is a reference value chosen to be 6 mJy (the flux density of the blazar at $\sim 8.5$ UT), and $A_0, B_0,$ and $C_0$ are constants. A decaying sinusoid was adopted to better represent the data at the beginning and end of the night, since the amplitude of the flux density oscillation decreased during the night. The sinusoidal fit to the residual data is shown in Figure~\ref{fig:sinonly}.
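The decaying sinusoid can be fit to the residuals in the same way; the residuals below are synthetic, generated from the quoted constants, and the initial guess must lie close to the true frequency because the phase wraps many times across the night:

```python
import numpy as np
from scipy.optimize import curve_fit

F0 = 6.0  # mJy, reference flux density (blazar level at ~8.5 UT)

def trend(t):
    """Exponential trend f(t) built from the text's best-fit parameters."""
    return 100.0 * np.exp(-0.569 * t) + 5.20

def damped_sine(t, A0, B0, C0):
    """Residual model S = (f(t)/f0) * A0*sin(B0*t - C0)."""
    return trend(t) / F0 * A0 * np.sin(B0 * t - C0)

rng = np.random.default_rng(2)
t = np.linspace(5.0, 8.5, 47)   # illustrative UT hours
resid = damped_sine(t, 0.17, 10.39, 0.750) + rng.normal(0, 0.03, t.size)

# p0 near the reported values avoids period-aliasing local minima.
popt, _ = curve_fit(damped_sine, t, resid, p0=(0.2, 10.3, 0.7))
A0, B0, C0 = popt
period_min = 2 * np.pi / B0 * 60.0   # B0 in hr^-1 -> period in minutes
print(A0, B0, period_min)
```

Note that $C_0$ is recovered only modulo $2\pi$, so only $A_0$ and $B_0$ (and hence the period) are meaningful checks here.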
The constants calculated from the fit are $A_0 = 0.17 \pm 0.01$ mJy, $B_0 = 10.39 \pm 0.03$ hr$^{-1}$ (period $= 36.28 \pm 0.09$ minutes), and a phase factor $C_0 = 0.750 \pm 0.002$. There are insufficient data to determine whether the oscillation pattern lined up with any data a few hours before or after the FBO observations. Although a changing frequency of oscillation might more accurately fit the observed oscillations, there are too few data points to justify such a complication. Combining both the exponential and sinusoidal models produces a fit that yields very low residual flux density levels, as seen in Figure~\ref{fig:finalfit}. The residual scatter mostly falls within two standard deviations of the average error bar. The chi-square per degree of freedom of the combined exponential and sinusoid model is $\chi_{\mathrm{dof}}^2 = 1.06$. \begin{figure} \plotone{f9.eps} \caption{Sinusoidal fit (red solid line) to the residuals of the exponential decay on June 25 (black circles). See text for details.} \label{fig:sinonly} \end{figure} \begin{figure*} \plotone{f10.eps} \caption{$(a)$ Exponential and sinusoidal fit combined together to model the behavior of 3C454.3 on June 25 (RJD: 7564.5). See text for details. $(b)$ The residual flux from the fit, on the same scale as the light curve. The solid and dashed black lines in $(b)$ represent the average $1\sigma$ and $2\sigma$ error bars. A $\chi^2$ test for goodness of fit yields $\chi_\mathrm{dof}^2 = 1.06$ for the fit. \label{fig:finalfit}} \end{figure*} The micro-variability oscillations on June 25 show that 3C454.3 can vary significantly on sub-hour timescales, in this case 36 minutes. This severely constrains the size of the emission region, as discussed in $\S$\ref{sec:discuss}. \section{Discussion} \label{sec:discuss} The multi-frequency light curves reveal the extraordinary June 2016 outburst of 3C454.3, with rapid, high-amplitude changes in flux density over short timescales.
At $\gamma$-ray\ energies, 3C454.3 varied on timescales at least as short as $\sim 3$ hours. The necessity of binning data prevents us from detecting any more rapid variations that may have occurred. At optical $R$ band, we observed flux density variations of more than 1 mJy per hour near the peak of the outburst, with a minimum timescale of variability $\tau_\mathrm{opt}^\mathrm{min} \approx 2$ hours. Observations on June 25 (RJD: 7564.5) revealed micro-variability in the form of quasi-periodic oscillations with an estimated period of 36 minutes. \subsection{Overall Variability in the Jet} \label{sec:overallvariability} It is possible to relate the observed timescale of variability $\tau_\mathrm{var}(\mathrm{obs})$ to the intrinsic value $\tau_\mathrm{min}^\mathrm{intr}$ in the rest frame of the blazar using \begin{equation*} \tau_\mathrm{min}^\mathrm{intr} = \frac{\delta \tau_\mathrm{var}(\mathrm{obs})}{1+z}\ , \end{equation*} \noindent where $z$ is the redshift of the host galaxy and $\delta$ is the Doppler factor. A technique developed by \citet{Jorstad2005} derives the Doppler factor $\delta$ through analysis of 43 GHz Very Long Baseline Array (VLBA) images. The movement of bright ``knots" down the jet often coincides with flux outbursts. Knots can have different speeds and values of $\delta$ \citep[e.g.,][]{Jorstad2001, Jorstad2010, Kellermann2004, Lister2009}. \subsubsection{Radio Knots} \label{sec:RadioKnots} \begin{figure} \epsscale{0.775} \plotone{f11.eps} \caption{VLBA total (contours) and polarized (color scale) intensity images of 3C454.3 at 43 GHz showing the evolution of $K16$, convolved with a beam of $0.33 \times 0.14$ mas$^{2}$ at $\mathrm{PA} = -10\degr$ (the bottom left grey-lined oval). The global intensity peak is $7300$ mJy/beam, and contour levels start at 0.1\% of the peak and increase by a factor of 2. 
Black line segments within each image show the direction of linear polarization, while the length of the segment is proportional to the polarized intensity values. The black and navy vertical lines indicate the position of the core and stationary feature $C$ \citep[see][]{Jorstad2017}, respectively, while the red circles indicate the position and size of $K16$ according to modeling. \label{fig:K16}} \end{figure} \begin{figure*} \plotone{f13.eps} \caption{Separation of $K16$ from the core vs. time in the jet of 3C454.3 from the VLBA-BU-BLAZAR monitoring program. The vectors show the position angle of each knot with respect to the core at the corresponding epoch. The dashed lines represent polynomial fits to the motion, as done in \citet{Jorstad2017}. The black dots mark the position of the core, $A0$. The red points correspond to the stationary feature $C$, while the blue points correspond to $K16$. The vertical line segments show the approximate $1\sigma$ positional uncertainties based on the brightness temperature $T_\mathrm{b}$. The horizontal line segment for $K16$ shows uncertainty in the date of ejection of the knot from the core. \label{fig:Distance}} \end{figure*} We have analyzed the VLBA data obtained at 43 GHz within the VLBA-BU-BLAZAR program\footnote{\url{http://www.bu.edu/blazars/VLBAproject.html}} from 2016 January to 2017 June. The data reduction and model fitting are described in \citet{Jorstad2017}. The analysis reveals a knot ejected from the 43 GHz core of the blazar close in time to the 2016 outburst, designated $K16$. The knot was first distinguishable from the 43 GHz core in June 2016, coincident in time with the optical and $\gamma$-ray\ outburst. Figure~\ref{fig:K16} presents a sequence of 43 GHz total and polarized intensity VLBA images of 3C454.3 depicting the evolution of $K16$. Tracking $K16$ until it was no longer visible yields an apparent speed of $v_\mathrm{app} = (20.3 \pm 0.8)c$.
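The apparent speed, Doppler factor, bulk Lorentz factor, and viewing angle are tied together by the standard superluminal-motion relations $\beta_\mathrm{app} = \beta\sin\Theta/(1-\beta\cos\Theta)$ and $\delta = [\Gamma(1-\beta\cos\Theta)]^{-1}$; a quick numerical consistency check on the values derived for $K16$:

```python
import math

def kinematics(Gamma, theta_deg):
    """Apparent speed (in units of c) and Doppler factor for a bulk
    Lorentz factor Gamma and viewing angle theta, using the standard
    relativistic-jet relations."""
    beta = math.sqrt(1.0 - 1.0 / Gamma**2)
    th = math.radians(theta_deg)
    denom = 1.0 - beta * math.cos(th)
    beta_app = beta * math.sin(th) / denom
    delta = 1.0 / (Gamma * denom)
    return beta_app, delta

# Values reported for K16: Gamma = 20.4, Theta = 2.5 deg.
beta_app, delta = kinematics(20.4, 2.5)
print(beta_app, delta)
```

The result is $\beta_\mathrm{app} \approx 20c$ and $\delta \approx 23$, consistent within the quoted uncertainties with the measured $v_\mathrm{app}$ and $\delta$.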
A backward extrapolation of the motion under the assumption of constant speed yields a date of 2016 February 25 (RJD: 7443 $\pm$ 17.5 days) when the brightness centroid of the knot crossed that of the core. Using the method described in \citet{Jorstad2017}, we obtain the Doppler factor $\delta = 22.6 \pm 4.4$, bulk Lorentz factor $\Gamma = 20.4 \pm 0.4$, and viewing angle of the path of $K16$ with respect to the line of sight $\Theta_\circ = 2.5 \degr \pm 0.3\degr$. Knot $K16$ has a wider viewing angle than $K09$ and $K10$ (associated with Outbursts I-III, $\S$\ref{sec:gammaray}), which moved down the jet along paths oriented $1.35 \degr \pm 0.2\degr$ and $0.4\degr \pm 0.1\degr$ from our line of sight \citep{Jorstad2013}. Figure~\ref{fig:Distance} shows the separation of $K16$ from the core, in addition to the stationary feature $C$ located $\sim0.58$ mas from the core. $K16$ was ejected from the core $\sim 4$ months before the outburst. The time delay could be shorter if the knot decelerated as it separated from the core. The angular sizes of $K16$ and the core when the VLBA observations could first resolve them separately were $0.2 \pm 0.02$ mas and $0.1 \pm 0.02$ mas, respectively. Since in 4 months $K16$ moved $\sim 0.1$ mas, the upstream boundary of the knot was still crossing the core when the outburst occurred. In multiwavelength outbursts of other sources \citep[such as the BL Lacertae object AO 0235+164, see][]{Agudo2011}, superluminal knots have been seen as the ``head" of an extended disturbance containing a front-back structure stretched by light-travel delays in the observer's frame \citep[e.g.,][]{Aloy2003}. Then, when the back perturbation encounters the core, particle acceleration causes the observed multiwavelength variability. The timing of $K16$ and of the observed multiwavelength variability is consistent with a lagging upstream end of $K16$ causing the outburst as it interacted with a standing shock in the core.
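The back-extrapolation of the ejection epoch amounts to a linear fit of core separation against epoch, extended to zero separation; the separations below are invented for illustration (they are not the actual $K16$ model fits) and assume a constant angular speed:

```python
import numpy as np

# Hypothetical (epoch in RJD, separation in mas) pairs for a knot
# moving at roughly 0.1 mas per 4 months -- illustrative only.
epochs = np.array([7540.0, 7600.0, 7660.0, 7720.0, 7780.0])
seps = np.array([0.080, 0.130, 0.178, 0.232, 0.279])

# Linear model: sep = mu * (t - t_ej), so the ejection epoch is the
# zero-separation crossing, t_ej = -intercept / slope.
mu, intercept = np.polyfit(epochs, seps, 1)
t_ej = -intercept / mu
print(mu, t_ej)   # proper motion (mas/day) and ejection epoch (RJD)
```

For these invented separations the crossing lands near RJD 7444, i.e., late 2016 February, illustrating how an ejection date like that of $K16$ follows from the fit.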
Analysis of $\gamma$-ray\ data collected with the \emph{Fermi}-LAT in the months preceding the June 2016 outburst reveals a small-amplitude $\gamma$-ray\ flare with $S_\gamma^\mathrm{max} = (8.55 \pm 0.42) \times 10^{-6}$ ph cm$^{-2}$ s$^{-1}$ on $T_\gamma^\mathrm{max} = $ 2016 March 13 (RJD: 7460, see Figure~\ref{fig:ExtraGamma}). The timing of this flare is consistent with the event being caused by a forward section of $K16$ passing through the core. The position angle of the jet projected on the sky for $K16$ and the stationary component $C$ during the 2016 outburst was $-79\degr \pm 1 \degr$. This position angle is outside the 5-year average found in \citet{Jorstad2017} of $-98\degr \pm 10\degr$. However, the jet position angle during the 2016 outburst is consistent with the average position angle of the knot $B10$ \citep{Jorstad2017}. In the range of polarization angles presented in Figure~\ref{fig:lightcurves} (d), values $\chi_\text{opt} \sim 101\degr$ are parallel to the jet direction, while either $\chi_\text{opt} \sim 11\degr$ or $191\degr$ are nearly perpendicular to the jet direction. \subsubsection{Magnetic Field Strength} \label{sec:MagFieldStrength} For this analysis, we use the Doppler factor of $K16$, $\delta = 22.6$. With $z = 0.859$ for 3C454.3, the timescale of variability in the rest frame of the emitting plasma in $R$ band is $\tau_\mathrm{min}^\mathrm{intr} \approx 24$ hours. The maximum size of the emission region is related to the intrinsic variability timescale through relativistic causality: $r \lesssim c\tau_\mathrm{min}^\mathrm{intr} \approx 2.6\times 10^{15}$ cm. The intrinsic timescale of variability can also be used to provide an estimate for the strength of the magnetic field in the jet. For shock-in-jet models of blazar variability \citep[e.g.,][]{Marscher1985}, the shock energizes relativistic electrons as they enter the emitting region behind the shock front.
Both synchrotron and inverse Compton radiative losses then determine the extent of the emission region in the direction of the jet flow. If the spectral energy distribution at infrared-optical and $\gamma$-ray\ frequencies is similar to that of the 2010 November outburst \citep[Fig. 25 of][]{Jorstad2013}, the ratio of inverse Compton ($\gamma$-ray) to synchrotron (IR) luminosity at the peak of the 2016 June outburst is $\sim 5$. (The 2010 November outburst spectral energy distribution is used due to the lack of available X-ray, UV, and infrared data for the 2016 outburst, so the peaks of the spectral energy distribution cannot be determined directly.) In the electron energy-loss equation, the total loss rate depends on the sum of the photon-field energy density and the magnetic energy density. Since these quantities are proportional to the inverse Compton and synchrotron peaks, respectively, if the ratio of the inverse Compton to synchrotron luminosity is $5:1$, then the expression $u_\mathrm{ph} + B^2 / 8\pi$ can be reduced to $6B^2 / 8\pi$. In the observer's frame, the lifetime of electrons emitting at a frequency $\nu_\mathrm{GHz}$ (in GHz) can be related to the magnetic field strength $B_\mathrm{G}$ (in Gauss) through \begin{equation*} B_\mathrm{G} \approx \left(\frac{1+z}{6 \delta \nu_\mathrm{GHz}} \left(\frac{4.75\times10^2}{t_\mathrm{loss,days}} \right)^2 \right)^\frac{1}{3}\ , \end{equation*} \noindent where $t_\mathrm{loss,days}$ is the timescale of energy loss in days \citep[e.g.,][]{Hagenthorn2008}. At $R$ band (central frequency of $\nu_\mathrm{GHz} = 4.69\times 10^5$ GHz), the observed value of $t_\mathrm{loss,days} = \tau_\mathrm{opt,days}^\mathrm{min} = 0.082$ yields $B_\mathrm{G} \approx 1$ Gauss during the outburst. This magnetic field strength is a factor of $\sim2$ higher than other estimates in the optically emitting region of a blazar \citep[e.g.,][]{Hagenthorn2008}, as expected during an outburst.
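The chain of estimates in this subsection — rest-frame timescale, causality size limit, and magnetic field — can be reproduced numerically; the expression for $B_\mathrm{G}$ is the loss-time relation quoted above, with the factor of 6 encoding the assumed $L_\mathrm{IC}/L_\mathrm{sync} \sim 5$:

```python
C_CM_S = 2.998e10        # speed of light, cm/s

z, delta = 0.859, 22.6   # redshift of 3C454.3 and Doppler factor of K16

def intrinsic_timescale_hr(tau_obs_hr):
    """Observed -> rest-frame variability timescale, tau = delta*tau_obs/(1+z)."""
    return delta * tau_obs_hr / (1.0 + z)

def causality_size_cm(tau_intr_hr):
    """Maximum emission-region size from light-crossing, r <~ c*tau."""
    return C_CM_S * tau_intr_hr * 3600.0

def b_field_gauss(nu_ghz, t_loss_days):
    """Magnetic field from the synchrotron loss-time relation in the text,
    with u_ph + B^2/8pi -> 6*B^2/8pi (assumed L_IC/L_sync ~ 5)."""
    return ((1.0 + z) / (6.0 * delta * nu_ghz)
            * (4.75e2 / t_loss_days) ** 2) ** (1.0 / 3.0)

tau_intr = intrinsic_timescale_hr(2.0)   # 2-hr observed timescale -> ~24 hr
r_max = causality_size_cm(tau_intr)      # ~2.6e15 cm
B = b_field_gauss(4.69e5, 0.082)         # R band, t_loss = 0.082 days -> ~1 G
print(tau_intr, r_max, B)
```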
\subsection{Micro-Variability in the Jet} \label{sec:shortvariability} \begin{figure} \plotone{f14.eps} \caption{\emph{Fermi}-LAT $\gamma$-ray\ flux of 3C454.3 with daily time bins prior to the 2016 outburst. Upper limits are marked with a downward facing, red arrow. The downward facing, blue arrow marks the date of ejection of $K16$: 2016 February 25 (RJD: 7443 $\pm$ 17.5 days).} \label{fig:ExtraGamma} \end{figure} Micro-variability, as defined in $\S$\ref{sec:variability}, was observed on 2016 June 25 (RJD: 7564.5) in the $R$ band light curve of 3C454.3. It can be described as quasi-periodic oscillations, with an amplitude of $2 - 3\%$ (corresponding to 0.17 mJy) about an exponentially decreasing trend (from 11 to 6 mJy) over 3.5 hours, with an oscillation period of 36 minutes. The micro-variability cannot be explained by an emission region as large as that derived in $\S$\ref{sec:overallvariability}, $r \lesssim 2.6\times10^{15}$ cm, without violating causality unless one invokes a contrived geometry \citep[see][]{Spada1999}. Instead, for the observed timescale of micro-variability, the intrinsic timescale is $\tau_\mathrm{min}^\mathrm{intr} \approx 7$ hours. The size of an emission region capable of varying on such a timescale is $\lesssim 8 \times 10^{14}$ cm. Statistical analyses of the flux variations in blazars have shown that the variations are governed by noise processes with higher amplitudes on longer timescales \citep[e.g.,][]{Chatterjee2008, Abdo2011}. The presence of a helical magnetic field \citep{Lyutikov2005, Pushkarev2005} can explain the range of the degree and position angle of polarization measured in 3C454.3 during the 2016 outburst \citep[and in blazars in general, e.g.,][]{Jorstad2007}. However, the rapid, seemingly random variations observed (see $\S$\ref{sec:opticalpolarization}) are naturally reproduced in neither a 100\% globally ordered nor completely chaotic (on small scales) field. 
Instead, a more natural explanation for the fluctuations of flux and polarization is the presence of turbulent plasma in the relativistic jets of blazars. This type of emission has been the subject of various simulation studies of blazar variability \citep[e.g.,][]{Marscher2014, Calafut2015, Pollack2016}. If one approximates the pattern of turbulence in terms of $N$ turbulent cells, each with a uniform magnetic field with random orientation, then the degree of linear polarization has an average value of $\langle \Pi \rangle \approx \Pi_\mathrm{max} N^{-1/2}$, where $\Pi_\mathrm{max}$ corresponds to a uniform field case and is typically between 0.7 and 0.8 \citep{Burn1966}. The polarization will vary about the mean with a standard deviation $\sigma_\Pi \approx 0.5 \Pi_\mathrm{max} N^{-1/2}$ if the cells pass into and out of the emission region \citep{Jones1988}. We adopt this model to explain the variation of $P_\mathrm{opt}$ and $\chi_\mathrm{opt}$ over time. The average degree of polarization during the outburst in 3C454.3 (2016 June 10-28, RJD: 7549.5-7567.5) was $\langle \Pi \rangle = 12.5\% \pm 5.4\%$. For the median value $\Pi_\mathrm{max} = 75\%$, the number of turbulent cells during the outburst was $N = (\Pi_\mathrm{max}/\langle \Pi\rangle)^2 \approx (0.75/0.125)^2 \approx 35$ cells. The expected standard deviation of the fluctuations of $P_\mathrm{opt}$ is then $\sigma_\Pi = 6.3\%$, which is similar to the observed standard deviation, 5.4\%. The behavior of $\chi_\mathrm{opt}$ in comparison with this model can provide new insight into the mechanics of the turbulent cells. All cells in the model have their own uniform magnetic field with a random orientation. Prior to the 2016 outburst, a low level of polarization of 3C454.3 was measured (see Fig.~\ref{fig:lightcurves}c), which requires a large number ($\gtrsim 100$) of cells. 
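The cell statistics quoted above follow from elementary arithmetic (a sketch; note that the exact value of $(0.75/0.125)^2$ is $36$, which the text rounds to $\approx 35$):

```python
import math

pi_max = 0.75     # polarization of a uniform field (midpoint of 0.7-0.8, Burn 1966)
pi_mean = 0.125   # mean observed degree of polarization during the outburst

# number of turbulent cells with independently oriented magnetic fields
N = (pi_max / pi_mean) ** 2                  # = 36, quoted as ~35 in the text

# expected scatter of the polarization about the mean (Jones 1988)
sigma = 0.5 * pi_max / math.sqrt(N)          # ~6.3%, vs. the observed 5.4%

print(f"N ~ {N:.0f} cells, sigma_Pi ~ {sigma:.1%}")
```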
Only $\sim 35$ cells participated in the outburst, perhaps via a number of magnetic reconnections of the turbulent magnetic field that rapidly accelerated electrons to energies $\gtrsim 10^4$ mc$^2$ \citep[e.g.,][]{Kadowaki2015}. The turbulence could have been enhanced by feedback between the reconnections and chaotic motions of the plasma \citep{Lazarian2016}. This might have led to clustering of the cells that most efficiently accelerated electrons to cause a more coherent flux outburst. The size of the emission region, $r \lesssim 8 \times 10^{14}$ cm, involving $\sim35$ turbulent cells gives the size of each turbulent cell as $r_\mathrm{cell} \lesssim 2.3 \times 10^{13}$ cm. \citet{Calafut2015} simulated blazar light curves under a variety of conditions with a rotating turbulent cell model for the jet. The turbulent cells were considered roughly spherical, and differential Doppler beaming of the eddies could be responsible for variations in the light curves of blazars ranging from a few percent to large, chaotic outbursts, depending on the speed of the turbulent motions. From the simulations, \citet{Calafut2015} determined that, in order to generate simulated light curves similar to those observed, the rotation speed of the turbulence should be $0.1c \leq v_\mathrm{cells} \leq 0.3c$, and that higher turbulent velocities should not be common. The estimated size of the cells can be combined with the period of the quasi-periodic micro-variability found in $\S$\ref{sec:Micro} to provide an observational measurement of the speed of the turbulent motions in a blazar jet. If we approximate the turbulent cells as rotating cylinders, the period of the oscillations, $P$, can be related to the angular speed, $\omega$, of rotation through $\omega = \frac{2\pi}{P}$. In the rest frame of the blazar, the period of oscillations is $\sim 7$ hours, which yields $\omega \approx 2\times 10^{-4}$ rad s$^{-1}$.
The tangential velocity $v_\mathrm{cell} = \omega r_\mathrm{cell} \simeq 0.2c$. This value of the turbulent speed agrees with results of the simulations in \citet{Calafut2015}, as does the $2-3\%$ level of the quasi-periodic oscillations in flux observed in 3C454.3. \section{Conclusions} \label{sec:conclusions} At both optical and $\gamma$-ray\ frequencies, the flux density from 3C454.3 increased over a 1.5-week rising time and featured a precipitous decay over the course of $\sim 24$ hours. The peaks of the $\gamma$-ray\ and optical light curves are coincident to within a 24-hour binning of the $\gamma$-ray\ light curve, suggesting that the location of the enhanced $\gamma$-ray\ and optical emission is similar. Prior to the outburst, the $R$ band degree of polarization $P_\mathrm{opt}$ was low, and the angle of polarization $\chi_\mathrm{opt}$ was roughly parallel to the radio jet axis. During the outburst, $P_\mathrm{opt}$ changed to $\sim 20\%$, with $\chi_\mathrm{opt}$ changing in an irregular fashion by $\sim 120\degr$. In general, the optical spectropolarimetric observations obtained during the outburst indicate that $P_\mathrm{opt}$ is dependent on wavelength, decreasing towards the blue end of the spectrum, which can be attributed to dilution of the polarization from the unpolarized blue bump emission. The polarization decreases at the Mg II line, indicating that the broad line region does not have a strong polarization. The high time-resolution light curve on 2016 June 25 (RJD:7564.5) reveals micro-variability in the form of quasi-periodic oscillations with an amplitude of $2$-$3\%$ around the mean trend and a period of 36 minutes. Analysis of 43 GHz VLBA maps of the total and polarized intensity of 3C454.3 indicates a ``knot" of plasma, $K16$, ejected from the radio core of the blazar near in time to the 2016 outburst on 2016 Feb 25 (RJD: 7443 $\pm$ 17.5). 
This knot is likely responsible for a small amplitude $\gamma$-ray\ outburst in 2016 March when the ``head" of the knot moved past the standing shock, as well as the main 2016 outburst when the lagging end moved past the shock. From the time dependence of the optical $R$ band flux density, polarization degree, and position angle, the following physical characteristics of the jet of 3C454.3 can be determined. The minimum observed timescale of variability $\tau_\mathrm{opt}^\mathrm{min} \approx 2$ hours. The intrinsic timescale of variability in the rest frame of the emitting plasma $\tau_\mathrm{min}^\mathrm{intr} \approx 24$ hours, based on a Doppler factor $\delta = 22.6$ from the VLBA analysis. Relativistic causality restricts the size of the emission region to $r \lesssim 2.6\times 10^{15}$ cm. If the timescale of flux decline corresponds to the energy loss time of the radiating electrons, the magnetic field in the jet $B_G \approx 1$ Gauss, under the assumption that the ratio of the inverse Compton to synchrotron luminosity is 5:1, as during the November 2010 outburst. A shock-in-jet model with turbulence can naturally explain the observed variability in the light curves. From the micro-variability oscillations, the average optical degree of polarization and its variations during the outburst, we estimate the size of a single turbulent cell to be $r_\mathrm{cell} \lesssim 2.3 \times 10^{13}$ cm. The speed of rotation of the turbulent cell is then $\sim 0.2c$. This value is in agreement with simulations of the effect of turbulence on blazar light curves \citep{Calafut2015}, which predict a change in flux density on the order of a few percent for a turbulent speed $0.1c \leq v \leq 0.3c$. The turbulence could have been responsible for the outburst through a series of magnetic reconnection events that rapidly accelerated electrons to energies capable of producing optical synchrotron and $\gamma$-ray\ inverse Compton photons. 
\acknowledgements ZW gratefully acknowledges support through Colgate University's Justus and Jayne Schlichting Student Research and the Division of Natural Sciences and Mathematics funds. The knowledge of Prof. Sara Buson, Dr. Elizabeth Ferrara, and the rest of the Fermi Team was invaluable in helping to perform the $\gamma$-ray\ analysis. The assistance of the other undergraduate observers at Colgate University during the observing campaign is greatly appreciated. The research at Boston University was supported in part by National Science Foundation grant AST-1615796 and NASA Fermi Guest Investigator grant 80NSSC17K0649. Data from the Steward Observatory spectropolarimetric monitoring program were used. This program is supported by Fermi Guest Investigator grant NNX15AU81G. The St. Petersburg University team acknowledges support from Russian Science Foundation grant 17-12-01029. The VLBA is an instrument of the National Radio Astronomy Observatory. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. We gratefully thank the anonymous referee for comments and useful suggestions that helped to improve this work. This research made use of Astropy, a community-developed core Python package for Astronomy \citep{Astropy2013, Astropy2018}. \facilities{Perkins (PRISM), CrAO: 0.7m AZT-8, SPbU: 0.4m LX-200, SO:Bok, Kuiper (SPOL), Fermi (LAT), Foggy Bottom Observatory, VLBA} \software{IRAF, \emph{Fermi}\ Science Tools, HEASoft, Python, Astropy}
\section{Introduction} \label{intro} We consider the numerical approximation of an abstract evolution equation \begin{align} \begin{aligned} \frac{d\boldsymbol{x}}{dt}(t)+A(t)(\boldsymbol{x}(t))&= \boldsymbol{f}(t)&&\quad\text{ in } V^*, \\ \boldsymbol{x}(0)&={x}_0&&\quad\text{ in }H, \end{aligned} \label{eq:1.1} \end{align} by means of a quasi non-conforming Rothe-Galerkin scheme. Here, ${V{\hookrightarrow} H \cong H^*{\hookrightarrow}V^*}$ is a given evolution triple, $I:=\left(0,T\right)$ a finite time horizon, ${x}_0\in H$ an initial value, ${\boldsymbol{f}\in L^{p'}(I,V^*)}$, $1<p<\infty$, a right-hand side and $A(t):V\rightarrow V^*$, $t\in I$, a family of operators. Unless specified otherwise, we denote elements of Bochner spaces (such as the solution $\boldsymbol{x}(t)$ and the external source $\boldsymbol{f}(t)$) in boldface, to distinguish them from elements of standard Banach spaces, which are denoted by the usual symbols. In order to make~\eqref{eq:1.1} amenable to non-conforming approximation methods, we will additionally require that there exists a further evolution triple $X{\hookrightarrow} Y \cong Y^*{\hookrightarrow}X^*$, such that $V\subseteq X$ with $\|\cdot\|_V=\|\cdot\|_X$ in $V$ and $H\subseteq Y$ with $(\cdot,\cdot)_H=(\cdot,\cdot)_Y$ in $H$, and extensions ${\widehat{A}(t):X\rightarrow X^*}$,~$t\in I$, and $\hat{\boldsymbol{f}}\in L^{p'}(I,X^*)$ of $\{A(t)\}_{t\in I}$ and $\boldsymbol{f}$, resp., i.e., $\langle \widehat{A}(t)v,w\rangle_X=\langle A(t)v,w\rangle_V$ and $\langle \hat{\boldsymbol{f}}(t),v\rangle_X=\langle \boldsymbol{f}(t),v\rangle_V$ for all $v,w\in V$ and almost every $t\in I$. For the sake of readability we set $A(t):=\widehat{A}(t)$ and $\boldsymbol{f}(t):=\hat{\boldsymbol{f}}(t)$ for almost every $t\in I$. \medskip The main aim of this paper is to extend the abstract framework of Bochner pseudo-monotone operators so that it also covers problems arising in the analysis of incompressible non-Newtonian fluids.
As a prototypical application of the abstract results we will consider a fully discrete Rothe-Galerkin scheme for the unsteady $p$-Navier-Stokes equations. This is a system describing the unsteady motion of incompressible shear-dependent fluids in a bounded polygonal Lipschitz domain $\Omega \subseteq \mathbb{R}^d$, $d\ge 2$, in the time interval $I=(0,T)$. The motion is governed by the following initial boundary value problem \begin{align} \begin{split} \begin{alignedat}{2} \partial_t \boldsymbol{u}-\divo \textbf{S}(\cdot,\cdot,\textbf{D}\boldsymbol{u})+\text{div}(\boldsymbol{u}\otimes \boldsymbol{u}) +\nabla\pi&=\boldsymbol{f}&&\quad\text{ in }Q_T, \\ \divo \boldsymbol{u}&=0&&\quad\text{ in }Q_T, \\ \boldsymbol{u}&=\mathbf{0}&&\quad\text{ on }\Gamma_T, \\ \boldsymbol{u}(0)&=\textbf{u}_0&&\quad\text{ in }\Omega. \end{alignedat} \end{split}\label{eq:p-NS} \end{align} Here, $Q_T:=I\times \Omega$ denotes a time-space cylinder, $\Gamma_T:=I\times \partial \Omega$, $\boldsymbol{u}:Q_T\to \mathbb{R}^d$ denotes the velocity, $\boldsymbol{f}:Q_T\to \mathbb{R}^d$ is a given external force, $\textbf{u}_0:\Omega\to \mathbb{R}^d$ an initial condition, $\pi:Q_T\to \mathbb{R}$ the pressure and $\textbf{D}\textbf{u}:=\frac{1}{2}(\nabla \textbf{u}+\nabla \textbf{u}^\top)$ the symmetric gradient. The mapping $\textbf{S}:Q_T\times\mathbb{M}^{d\times d}_{\text{sym}}\to \mathbb{M}^{d\times d}_{\text{sym}}$\footnote{$\mathbb{M}^{d\times d}_{\text{sym}}$ is the vector space of all symmetric $d\times d$ matrices $\mathbf{A}=(A_{ij})_{i,j=1,...,d}$. We equip $\mathbb{M}^{d\times d}_{\text{sym}}$ with the scalar product $\mathbf{A}:\mathbf{B}:=\sum_{i,j=1}^{d}{A_{ij}B_{ij}}$ and the norm $\vert\mathbf{A}\vert:=(\mathbf{A}:\mathbf{A})^{\frac{1}{2}}$. 
By $\textbf{a}\cdot\textbf{b}$ we denote the usual scalar product in $\mathbb{R}^d$ and by $\vert \textbf{a}\vert$ we denote the Euclidean norm.} is supposed to possess a \textit{$(p,\delta)$-structure}, i.e., for some $p\in \left(1,\infty\right)$ and $\delta \ge 0$, the following properties are satisfied: \begin{description}[{\textbf{(S.3)})}] \item[\textbf{(S.1)}]\hypertarget{S.1}{} $\textbf{S}:Q_T\times\mathbb{M}^{d\times d}_{\text{sym}}\to \mathbb{M}^{d\times d}_{\text{sym}}$ is a Carath\'eodory mapping\footnote{$\textbf{S}(\cdot,\cdot,\mathbf{A}):Q_T\to \mathbb{M}^{d\times d}_{\text{sym}}$ is Lebesgue measurable for every $\mathbf{A}\in \mathbb{M}^{d\times d}_{\text{sym}}$ and $\textbf{S}(t,x,\cdot):\mathbb{M}^{d\times d}_{\text{sym}}\to \mathbb{M}^{d\times d}_{\text{sym}}$ is continuous for almost every $(t,x)^\top\!\in Q_T$.}. \item[\textbf{(S.2)}]\hypertarget{S.2}{} $\vert\textbf{S}(t,x,\mathbf{A})\vert \leq \alpha(\delta+\vert \mathbf{A}\vert)^{p-2}\vert\mathbf{A}\vert+\beta$ for all $\mathbf{A}\in \mathbb{M}^{d\times d}_{\text{sym}}$, a.e. $(t,x)^\top\!\in Q_T$. $(\alpha>0,\,\beta\ge 0)$. \item[\textbf{(S.3)}]\hypertarget{S.3}{} $\textbf{S}(t,x,\mathbf{A})\!:\!\mathbf{A}\ge c_0(\delta+\vert \mathbf{A}\vert)^{p-2}\vert\mathbf{A}\vert^2-c_1$ for all $\mathbf{A}\in \mathbb{M}^{d\times d}_{\text{sym}}$, a.e. $(t,x)^\top\!\in Q_T$~${(c_0>0,\,c_1\ge 0)}$. \item[\textbf{(S.4)}]\hypertarget{S.4}{} $(\textbf{S}(t,x,\mathbf{A})-\textbf{S}(t,x,\mathbf{B})): (\mathbf{A}-\mathbf{B})\ge 0$ for all $\mathbf{A},\mathbf{B}\in \mathbb{M}^{d\times d}_{\text{sym}}$, a.e. $(t,x)^\top\in Q_T$. 
\end{description} We define for $p>\frac{3d+2}{d+2}$ the function spaces $X:=W^{1,p}_0(\Omega)^d$, $Y:=L^2(\Omega)^d$, $V:=W^{1,p}_{0,\divo }(\Omega)$ as the closure of $\mathbfcal{V}:=\{\mathbf{v}\in C_0^\infty(\Omega)^d\mid \divo \,\mathbf{v}\equiv 0\}$ in $X$, $H:=L^2_{\divo }(\Omega)$ as the closure of $\mathbfcal{V}$ in $Y$, and the families of operators ${S(t),B:X\to X^*}$, $t\in I$, for all $\textbf{u},\textbf{v}\in X$ and almost every $t\in I$ via \begin{gather}\label{eq:sb} \langle S(t)\textbf{u},\textbf{v}\rangle_X:= \int_\Omega{\textbf{S}(t,\cdot,\textbf{D}\textbf{u}):\textbf{Dv}\,dx} \quad \text{ and } \quad \langle B\textbf{u},\textbf{v}\rangle_X:=-\int_\Omega{\textbf{u}\otimes\textbf{u}:\textbf{Dv}\,dx}. \end{gather} Then, \eqref{eq:p-NS} for $\mathbf{u}_0\in H$ and $\boldsymbol{f} \in L^{p'}(I,X^*)$ can be re-written as the abstract evolution equation \begin{align} \begin{split} \begin{alignedat}{2} \frac{d\boldsymbol{u}}{dt}(t)+S(t)(\boldsymbol{u}(t))+B(\boldsymbol{u}(t))&=\boldsymbol{f}(t)&&\quad\text{ in }V^*, \\ \boldsymbol{u}(0)&=\textbf{u}_0&&\quad\text{ in }H, \label{eq:p-NS2} \end{alignedat} \end{split} \end{align} where, in the notation of~\eqref{eq:1.1}, we set $A(t):=S(t)+B:X\to X^*$~for~almost~every~${t\in I}$. As the construction of finite element spaces $(V_n)_{n\in \mathbb{N}}$, which meet the divergence constraint, i.e., satisfy $V_n\subseteq V$ for all $n\in \mathbb{N}$, highly restricts the flexibility of the approximation, one usually forgoes working with divergence-free finite element spaces and imposes a discrete divergence constraint instead, naturally suggesting the use of non-conforming spaces, i.e., spaces with $V_{n}\not\subseteq V$ that are instead immersed in a larger ambient space $X$.
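For orientation, the prototypical stress $\mathbf{S}(\mathbf{A})=(\delta+\vert\mathbf{A}\vert)^{p-2}\mathbf{A}$ satisfies (\hyperlink{S.1}{S.1})--(\hyperlink{S.4}{S.4}) with $\alpha=c_0=1$ and $\beta=c_1=0$; the following numerical sanity check on random symmetric matrices is a mere illustration of the structure conditions, not part of the analysis:

```python
import math
import random

p, delta, d = 1.8, 1e-4, 3   # shear-thinning exponent, regularization, dimension

def sym(seed):
    """Random symmetric d x d matrix with entries in [-1, 1]."""
    rng = random.Random(seed)
    M = [[rng.uniform(-1, 1) for _ in range(d)] for _ in range(d)]
    return [[0.5 * (M[i][j] + M[j][i]) for j in range(d)] for i in range(d)]

def frob(A):
    """Frobenius norm |A|."""
    return math.sqrt(sum(A[i][j] ** 2 for i in range(d) for j in range(d)))

def dots(A, B):
    """Scalar product A : B."""
    return sum(A[i][j] * B[i][j] for i in range(d) for j in range(d))

def S(A):
    """Prototypical (p, delta)-stress S(A) = (delta + |A|)^(p-2) A."""
    w = (delta + frob(A)) ** (p - 2)
    return [[w * A[i][j] for j in range(d)] for i in range(d)]

for seed in range(100):
    A, B = sym(2 * seed), sym(2 * seed + 1)
    SA, SB = S(A), S(B)
    # (S.2): growth with alpha = 1, beta = 0
    assert frob(SA) <= (delta + frob(A)) ** (p - 2) * frob(A) + 1e-12
    # (S.3): coercivity with c0 = 1, c1 = 0
    assert dots(SA, A) >= (delta + frob(A)) ** (p - 2) * frob(A) ** 2 - 1e-12
    # (S.4): monotonicity
    diffS = [[SA[i][j] - SB[i][j] for j in range(d)] for i in range(d)]
    diffA = [[A[i][j] - B[i][j] for j in range(d)] for i in range(d)]
    assert dots(diffS, diffA) >= -1e-12
print("growth, coercivity and monotonicity hold on 100 random samples")
```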
\subsection{The numerical scheme} A quasi non-conforming Rothe-Galerkin approximation of the initial value problem~\eqref{eq:1.1} usually consists of two parts: The first part is a spatial discretization, often called Galerkin approximation, which consists in the approximation of $V$ by a sequence of closed subspaces $(V_n)_{n\in \mathbb{N}} $ of $X$. We emphasize that we do not require $(V_n)_{n\in \mathbb{N}}$ to be a sequence of subspaces of $V$, which motivates the prefix \textit{non-conforming}. Hence, we do \textit{not} have $V_{n}\subseteq V$ and $V_{n}\uparrow V$ (approximation from below). The prefix \textit{quasi} indicates that in contrast to a fully non-conforming spatial approximation, where the norm on $V_n $ depends on $n$, here the subspaces $V_n$ are equipped with the norm of a space $X$ such that \begin{equation*} V_{n}\not\subseteq V,\qquad V\subseteq X,\qquad V_{n}\subseteq X. \end{equation*} So in this case we have, under appropriate assumptions, a sort of approximation from above of the space $V$. The second part is a temporal discretization, also called Rothe scheme, which consists in the approximation of the unsteady problem \eqref{eq:1.1} by a sequence of piece-wise constant, steady problems. This is achieved by replacing the time derivative $\frac{d}{dt}$ by so-called backwards difference quotients, which are for a given step-size $\tau:=\frac{T}{K}>0$, where $K\in \mathbb{N}$, and a given finite sequence $(x^k)_{k=0,...,K}\subseteq X$ defined via \begin{align*} d_\tau x^k:=\frac{1}{\tau}(x^k-x^{k-1})\quad\text{ in }X\quad\text{ for all }k=1,..., K. 
\end{align*} Moreover, the operator family $A(t):X\to X^*$, $t\in I$, and the right-hand side $\boldsymbol{f}\in L^{p'}(I,X^*)$ need to be discretized as well; this is done by means of the Cl\'ement $0$-order quasi-interpolant: for a given step-size $\tau=\frac{T}{K}>0$, where $K\in \mathbb{N}$, we replace them piece-wise by their local temporal means, i.e., by $[A]^\tau_k:X\to X^*$, $k=1,...,K$, and $([\boldsymbol{f}]^\tau_k)_{k=1,...,K}\subseteq X^*$, resp., for every $k=1,...,K$ and $x\in X$ given via \begin{align*} [A]^\tau_kx:=\fint_{\tau(k-1)}^{\tau k}{A(t)x\,dt},\qquad[\boldsymbol{f}]^\tau_k:=\fint_{\tau(k-1)}^{\tau k}{\boldsymbol{f}(t)\,dt}\qquad\text{ in }X^*. \end{align*} Altogether, using these two levels of approximation, we formulate the following fully discrete or Rothe-Galerkin scheme of the evolution problem \eqref{eq:1.1}: \begin{alg}[quasi non-conforming Rothe-Galerkin scheme] For given $K,n\in \mathbb{N}$ and $x_n^0\in V_n$ the sequence of iterates $(x_n^{k})_{k=0,...,K}\subseteq V_n$ is given via the implicit scheme for $\tau=\frac{T}{K}$ \begin{align} (d_\tau x^k_n,v_n)_Y+\langle [A]^\tau_kx^k_n,v_n\rangle_X = \langle [\boldsymbol{f}]^\tau_k,v_n\rangle_X \quad\text{ for all }v_n\in V_n. \label{eq:fully} \end{align} \end{alg} As soon as the existence of the iterates $(x_n^k)_{k=0,...,K}\subseteq V_n$, solving the scheme \eqref{eq:fully}, is proved for a sufficiently small step-size $\tau=\frac{T}{K}\in \left(0,\tau_0\right)$, where $\tau_0>0$, and $K,n\in \mathbb{N}$, one can check whether the resulting family of piece-wise constant interpolants $\overline{\boldsymbol{x}}^\tau_n\in L^\infty(I,V_n)$, $K,n\in \mathbb{N}$ with $\tau=\frac{T}{K}\in \left(0,\tau_0\right)$ (cf.~\eqref{eq:polant}), converges towards a weak solution of \eqref{eq:1.1}, at least in an appropriate weak sense.
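To illustrate the scheme \eqref{eq:fully} in the simplest possible situation, the following sketch takes $V_n=X=\mathbb{R}$ (so there is no spatial non-conformity) and the model operator $A(t)x=\vert x\vert^{p-2}x$; the implicit step is solved by bisection, the temporal mean $[\boldsymbol{f}]^\tau_k$ is approximated by a midpoint value, and all names are illustrative:

```python
def phi(x, p):
    """Scalar model operator A(x) = |x|^(p-2) x (monotone for p > 1)."""
    return 0.0 if x == 0.0 else abs(x) ** (p - 2) * x

def solve_step(x_prev, tau, f_k, p):
    """One implicit Rothe step: find x with (x - x_prev)/tau + phi(x) = f_k.
    The left-hand side is strictly increasing in x, so bisection applies."""
    g = lambda x: (x - x_prev) / tau + phi(x, p) - f_k
    lo, hi = -10.0, 10.0          # bracket, large enough for the data below
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if g(mid) <= 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def rothe(x0, T, K, f, p=1.8):
    """Iterates x^k of the implicit scheme, with [f]^tau_k replaced by the
    midpoint value of f on ((k-1)tau, k tau) instead of the exact mean."""
    tau, xs = T / K, [x0]
    for k in range(1, K + 1):
        xs.append(solve_step(xs[-1], tau, f((k - 0.5) * tau), p))
    return xs

xs = rothe(x0=1.0, T=1.0, K=64, f=lambda t: 0.0)
print(f"x^0 = {xs[0]}, x^K = {xs[-1]:.4f}")   # with f = 0 the iterates decay toward 0
```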
The main abstract result is then the following one (see Section~\ref{sec:6} for notation and proofs), showing the convergence of the fully discrete approximate solutions. \begin{thm}\label{5.17} Let Assumption \eqref{asum} be satisfied~and~set ${\tau_0\!:=\!\frac{1}{4c_1}}$. Let $(\overline{\boldsymbol{x}}_n)_{n\in \mathbb{N}}:=(\overline{\boldsymbol{x}}^{\tau_n}_{m_n})_{n\in \mathbb{N}}\subseteq L^\infty(I,X)$, where $\tau_n=\frac{T}{K_n}$ and $K_n,m_n\to\infty$ $(n\to\infty)$, be an arbitrary diagonal sequence of piece-wise constant interpolants $\overline{\boldsymbol{x}}_n^\tau\in \mathbfcal{S}^0(\mathcal{I}_{\tau},X)$, $K,n\in \mathbb{N}$ with $\tau=\frac TK\in \left(0,\tau_0\right)$, from Proposition~\ref{apriori}. Then, there exist a not relabelled subsequence and a weak limit $\overline{\boldsymbol{x}}\in L^p(I,V)\cap_{\boldsymbol{j}} L^\infty(I,H)$ such that \begin{align*} \begin{alignedat}{2} \overline{\boldsymbol{x}}_n\;\;&\rightharpoonup\;\;\overline{\boldsymbol{x}}&&\quad\text{ in }L^p(I,X),\\ \overline{\boldsymbol{x}}_n\;\;&\overset{\ast}{\rightharpoondown}\;\;\overline{\boldsymbol{x}}&&\quad\text{ in }L^\infty(I,Y), \end{alignedat}\begin{aligned} \qquad (n\to \infty). \end{aligned} \end{align*} Furthermore, it follows that $\overline{\boldsymbol{x}}\in \mathbfcal{W}^{1,p,q}_e(I,V,H)$ is a weak solution of the initial value~problem~\eqref{eq:1.1}. \end{thm} Traditionally, the verification of the weak convergence of a Rothe-Galerkin scheme like \eqref{eq:fully} to a weak solution of the evolution equation \eqref{eq:1.1} requires a certain effort, and in the case of quasi non-conforming approximations like \eqref{eq:fully} there are, to the best of the authors' knowledge, no abstract results guaranteeing the weak convergence of such a scheme.
Therefore, this article's purpose is (i) to give general and easily verifiable assumptions on both the operator family $A(t):X\to X^*$,~$t\in I$, and the sequence of approximative spaces $(V_n)_{n\in \mathbb{N}}$, which provide the existence of iterates $(x_n^k)_{k=0,...,K}\subseteq V_n$, solving \eqref{eq:fully}, for a sufficiently small step-size $\tau=\frac{T}{K}\in \left(0,\tau_0\right)$, where $\tau_0>0$, and $K,n\in \mathbb{N}$; (ii) to prove the stability of the scheme, i.e., the boundedness of the piece-wise constant interpolants ${\overline{\boldsymbol{x}}^\tau_n\in L^\infty(I,V_n)}$, $K,n\in \mathbb{N}$ with $\tau=\frac{T}{K}$, given via \eqref{eq:polant}, in $L^p(I,X)\cap L^\infty(I,Y)$; (iii) to show the weak convergence of a diagonal subsequence $(\overline{\boldsymbol{x}}^{\tau_n}_{m_n})_{n\in \mathbb{N}}\subseteq L^\infty(I,X)$, where $\tau_n=\frac{T}{K_n} $ and $K_n,m_n\to \infty$~${(n\to\infty)}$, towards a weak solution of \eqref{eq:1.1}.\\[-3mm] A common approach is to require that $(V_n)_{n\in \mathbb{N}}$ forms a \textit{conforming approximation of }$V$, i.e., satisfies the following two conditions: \begin{description} \item[\textbf{(C.1)}] \hypertarget{C.1}{} $(V_n)_{n \in \mathbb N}$ is an increasing sequence of closed subspaces of $V$, i.e., $V_n\subseteq V_{n+1}\subseteq V$~for~all~${n\in \mathbb{N}}$. \item[\textbf{(C.2)}] \hypertarget{C.2}{} $\bigcup_{n\in \mathbb{N}}{V_n}$ is dense in $V$. \end{description} In particular, (\hyperlink{C.1}{C.1}) and (\hyperlink{C.2}{C.2}) allow us to choose $X=V$ above. Surprisingly, even in the conforming case there are only a few contributions with a rigorous convergence analysis of fully discrete Rothe-Galerkin schemes towards weak solutions. Most authors consider only semi-discrete schemes, i.e., either a pure Rothe scheme (cf.~\cite{Rou05}) or a pure Galerkin scheme (cf.~\cite{GGZ}, \cite{Zei90B}, \cite{showalter}).
Many more results are concerned with explicit convergence rates for more regular data and more regular solutions (cf.~\cite{BarLiu94}, \cite{rulla}, \cite{NoSaVe00}, \cite{FeOePr05}, \cite{DiEbRu07}, \cite{P08}, \cite{CHP10}, \cite{BaNoSa14}, \cite{sarah-phd}, \cite{BDN}, \cite{breit-mensah}). Concerning the convergence analysis of a conforming Rothe-Galerkin scheme we are only aware of the early contribution \cite{AL83}, which treats the porous media equations, the recent contributions \cite{tscherpel-phd}, \cite{sueli-tscherpel} treating the $p$-Navier-Stokes equations, and \cite{BR19} dealing with a setting similar to that of the present paper. In fact, if ${A(t):V\to V^*}$,~${t\in I}$, satisfies appropriate assumptions, e.g., \cite[condition (C.1)--(C.4)]{KR19}, one can easily verify the existence of the iterates $(x_n^k)_{k=0,...,K}\subseteq V_n$, for sufficiently small $\tau=\frac{T}{K}\in \left(0,\tau_0\right)$, where $\tau_0>0$, and $K,n\in \mathbb{N}$, solving \eqref{eq:fully}, the boundedness of the corresponding family of piece-wise constant interpolants $\overline{\boldsymbol{x}}_n^\tau\in L^\infty(I,V)$, $K,n\in \mathbb{N}$ with $\tau=\frac{T}{K}\in \left(0,\tau_0\right)$, in $L^p(I,V)\cap L^\infty(I,H)$, and the existence of a diagonal subsequence $(\overline{\boldsymbol{x}}_n)_{n\in \mathbb{N}}:=(\overline{\boldsymbol{x}}_{m_n}^{\tau_n})_{n\in \mathbb{N}}\subseteq L^\infty(I,V)$, where $\tau_n=\frac{T}{K_n}$ and $K_n,m_n\to \infty$ $(n\to \infty)$, and an element $\overline{\boldsymbol{x}}\in L^p(I,V)\cap L^\infty(I,H)$, such that \begin{alignat}{2} \overline{\boldsymbol{x}}_n&\;\;\rightharpoonup\;\;\overline{\boldsymbol{x}}&&\quad\text{ in }L^p(I,V)\qquad \;\,(n\to \infty),\label{eq:1.3}\\ \overline{\boldsymbol{x}}_n&\;\;\overset{\ast}{\rightharpoondown}\;\;\overline{\boldsymbol{x}}&&\quad\text{ in }L^\infty(I,H)\qquad (n\to \infty),\label{eq:1.4}\\ \limsup_{n\to \infty}&\;\langle
\mathbfcal{A}\overline{\boldsymbol{x}}_n,\overline{\boldsymbol{x}}_n&&-\overline{\boldsymbol{x}}\rangle_{L^p(I,V)}\leq 0,\label{eq:1.5} \end{alignat} where $\mathbfcal{A}: L^p(I,V)\cap L^\infty(I,H)\to (L^p(I,V))^*$ denotes the induced operator, which is for every $\boldsymbol{x}\in L^p(I,V)\cap L^\infty(I,H)$ and $\boldsymbol{y}\in L^p(I,V)$ given via $\langle \mathbfcal{A}\boldsymbol{x},\boldsymbol{y}\rangle_{L^p(I,V)}:=\int_I{\langle A(t)(\boldsymbol{x}(t)),\boldsymbol{y}(t)\rangle_V\,dt}$. Apart from that, using methods in \cite{BR19}, we can extract from the scheme \eqref{eq:fully} by means of \eqref{eq:1.3} and \eqref{eq:1.4} the additional convergence \begin{align} \overline{\boldsymbol{x}}_n(t)\;\;\rightharpoonup\;\;\overline{\boldsymbol{x}}(t)\quad\text{ in }H\quad(n\to\infty)\quad\text{ for a.e. }t\in I.\label{eq:1.6} \end{align} In this context, in \cite{KR19} it is proved that $\mathbfcal{A}: L^p(I,V)\cap L^\infty(I,H)\to (L^p(I,V))^*$ is \textit{Bochner pseudo-monotone}, i.e., from \eqref{eq:1.3}--\eqref{eq:1.6} it follows for every $\boldsymbol{y}\in L^p(I,V)$ \begin{align*} \langle \mathbfcal{A}\overline{\boldsymbol{x}},\overline{\boldsymbol{x}}-\boldsymbol{y}\rangle_{L^p(I,V)}\leq \liminf_{n\to\infty}{ \langle \mathbfcal{A}\overline{\boldsymbol{x}}_n,\overline{\boldsymbol{x}}_n-\boldsymbol{y}\rangle_{L^p(I,V)}}, \end{align*} which in a standard manner leads to $\mathbfcal{A}\overline{\boldsymbol{x}}_n\rightharpoonup\mathbfcal{A}\overline{\boldsymbol{x}} $ in $(L^p(I,V))^*$ $(n \to \infty)$, and therefore to the weak convergence of the scheme \eqref{eq:fully}. Without the conditions (\hyperlink{C.1}{C.1}) and (\hyperlink{C.2}{C.2}), i.e., $V\neq X$ and $V_n\not\subseteq V$ for all $n\in \mathbb{N}$, but the family $A(t):X\to X^*$, $t\in I$, still satisfying appropriate assumptions, e.g., \mbox{\cite[{condition (C.1)--(C.4)}]{KR19}}, the situation changes dramatically. 
Even though we can still prove the existence of iterates $(x_n^k)_{k=0,...,K}\subseteq V_n$, solving \eqref{eq:fully}, for sufficiently small $\tau =\frac{T}{K}\in\left(0,\tau_0\right)$, where $\tau_0>0$, and $K,n\in \mathbb{N}$, the boundedness of the corresponding family of piece-wise constant interpolants\linebreak $\overline{\boldsymbol{x}}_n^\tau\in L^\infty(I,X)$, $K,n\in \mathbb{N}$ with $\tau=\frac{T}{K}\in \left(0,\tau_0\right)$, in $L^p(I,X)\cap L^\infty(I,Y)$, and the weak convergence of a diagonal subsequence ${(\overline{\boldsymbol{x}}_n)_{n\in \mathbb{N}}:=(\overline{\boldsymbol{x}}_{m_n}^{\tau_n})_{n\in \mathbb{N}}\subseteq L^\infty(I,X)}$, where $\tau_n=\frac{T}{K_n}$ and $K_n,m_n\to \infty$ $(n\to \infty)$, to a weak limit $\overline{\boldsymbol{x}}\in L^p(I,X)\cap L^\infty(I,Y)$, we can only expect that \begin{align} \begin{alignedat}{2} \overline{\boldsymbol{x}}_n\;\;& \rightharpoonup \;\; \overline{\boldsymbol{x}}&&\quad\text{ in }L^p(I,X), \\ \overline{\boldsymbol{x}}_n\;\;&\overset{\ast}{\rightharpoondown} \;\;\overline{\boldsymbol{x}}&&\quad\text{ in }L^\infty(I,Y), \label{eq:1.8} \end{alignedat} \begin{aligned} \qquad (n\to \infty). \end{aligned} \end{align} Without any further assumptions on the spatial approximation $(V_n)_{n\in \mathbb{N}}$ it is not even clear whether the weak limit lies in the right function space, i.e., whether $\overline{\boldsymbol{x}}\in L^p(I,V)\cap L^\infty(I,H)$. In addition, we neither know whether an inequality like \begin{align} \limsup_{n\to \infty}{\langle \mathbfcal{A}\overline{\boldsymbol{x}}_n,\overline{\boldsymbol{x}}_n-\overline{\boldsymbol{x}}\rangle_{L^p(I,X)}}\leq 0\label{eq:1.9} \end{align} is satisfied, nor whether we can extract from \eqref{eq:fully} by means of \eqref{eq:1.8} an additional convergence similar to \eqref{eq:1.6}.
To guarantee the latter, we make the following assumptions on $(V_n)_{n\in\mathbb{N}}$: \begin{description}[{(ii)}] \item[\textbf{(QNC.1)}] There exists a dense subset $D\subseteq V$, such that for each $v\in D$ there exist elements $v_n\in V_n$, $n\in\mathbb{N}$, such that $v_n\to v $ in $X$~$(n\to\infty)$. \item[\textbf{(QNC.2)}] For each sequence $\boldsymbol{x}_n\in L^p(I,V_{m_n})$, $n\in \mathbb{N}$, where ${(m_n)_{n\in\mathbb{N}}\subseteq\mathbb{N}}$ with $m_n\to \infty$ $(n\to\infty)$, from $\boldsymbol{x}_n\rightharpoonup \boldsymbol{x}$ in $L^p(I,X)$~$(n\to\infty)$, it follows that $\boldsymbol{x}\in L^p(I,V)$. \end{description} In fact, using (QNC.1) and (QNC.2) in this simpler setting, we are able to derive from \eqref{eq:1.8}$_{1}$ that ${\overline{\boldsymbol{x}}\in L^p(I,V)\cap L^\infty(I,H)}$, the inequality \eqref{eq:1.9} and the additional convergence \begin{align} P_H(\overline{\boldsymbol{x}}_n(t))\;\;\rightharpoonup \;\;P_H(\overline{\boldsymbol{x}}(t))\quad\text{ in }H\quad(n\to\infty)\quad\text{ for a.e. }t\in I,\label{eq:1.10} \end{align} where $P_H:Y\to H$ denotes the orthogonal projection of $Y$ into $H$. Note that we have no information on whether $\overline{\boldsymbol{x}}_n(t)\rightharpoonup \overline{\boldsymbol{x}}(t)$ in $Y$ $(n\to \infty)$ for almost every $t\in I$. Consequently, we cannot fall back on the approaches of \cite{KR19}, \cite{BR19}.
However, using anew (QNC.1) and (QNC.2), we are able to adapt and extend the methods in \cite{KR19}, \cite{BR19}, and deduce from \eqref{eq:1.8}--\eqref{eq:1.10}~that~for~all~${\boldsymbol{y}\in L^p(I,X)}$
\begin{align*}
\langle \mathbfcal{A}\overline{\boldsymbol{x}},\overline{\boldsymbol{x}}-\boldsymbol{y}\rangle_{L^p(I,X)}\leq \liminf_{n\to\infty}\langle \mathbfcal{A}\overline{\boldsymbol{x}}_n,\overline{\boldsymbol{x}}_n-\boldsymbol{y}\rangle_{L^p(I,X)},
\end{align*}
which leads to $\mathbfcal{A}\overline{\boldsymbol{x}}_n\rightharpoonup\mathbfcal{A}\overline{\boldsymbol{x}}$ in $(L^p(I,X))^*$ $(n\to\infty)$, i.e., the convergence of the~scheme~\eqref{eq:fully}.

\subsection{The example of the $p$-Navier-Stokes equations}

The case of the $p$-Navier-Stokes equations is the prototypical example that motivated the above abstract setting. The discretely divergence-free finite element approximation, introduced below, fits into the abstract setting of the previous section. Hence, the corresponding convergence Theorem~\ref{rem:7.4} follows just by checking that the hypotheses of the abstract result are satisfied. Let $Z:=L^{p'}(\Omega)$ and for a given family of shape regular triangulations (cf.~\cite{BS08}) $(\mathcal{T}_h)_{h>0}$ of our polygonal Lipschitz domain $\Omega$ and given $m,\ell \in \mathbb N_0$, we denote by\footnote{$\mathcal{P}_m(\mathcal{T}_h)$, $m\in \mathbb{N}_0$, denotes the space of possibly discontinuous scalar functions, which are polynomials of degree at most $m$ on each simplex $K\in \mathcal{T}_h$.} ${X_h\subset\mathcal{P}_m(\mathcal{T}_h)^d\cap X}$ and $Z_h\subset \mathcal{P}_\ell(\mathcal{T}_h)\cap Z$ appropriate finite element spaces. Note that we always consider continuous approximations for the velocity, while we allow for discontinuous approximations for the pressure.
In addition, we define for $h>0$ the \textit{discretely divergence free finite element spaces}
\begin{align*}
V_h:=\{\mathbf{v}_h\in X_h \mid \langle\divo \mathbf{v}_h,\eta_h\rangle_Z=0\text{ for all }\eta_h\in Z_h\}.
\end{align*}
For a null sequence $(h_n)_{n\in \mathbb{N}}\subseteq\left(0,\infty\right)$ and $V_n:=V_{h_n}$, $n\in \mathbb{N}$, one usually formulates the following algorithm of a time-space discrete approximation of \eqref{eq:p-NS}:
\begin{alg}
For given $K,n\in \mathbb{N}$ and $\textbf{u}_n^0\in V_n$ the sequence of iterates $(\textbf{u}_n^{k})_{k=0,...,K}\subseteq V_n$ is given via the implicit Rothe-Galerkin scheme for $\tau=\frac{T}{K}$
\begin{align}
(d_\tau \textbf{u}^k_n,\textbf{v}_n)_Y+\langle [S]^\tau_k\textbf{u}^k_n,\textbf{v}_n\rangle_X+\langle \hat{B}\textbf{u}^k_n,\textbf{v}_n\rangle_X=\langle [\boldsymbol{f}]^\tau_k,\textbf{v}_n\rangle_X\quad\text{ for all }\textbf{v}_n\in V_n,\label{eq:p-NSnon}
\end{align}
where $\hat{B}:X\to X^*$ is given via $\langle \hat{B}\textbf{u},\textbf{v}\rangle_X:=\frac{1}{2}\int_{\Omega} {\textbf{u}\otimes\textbf{v}:\textbf{Du}\,dx}-\frac{1}{2} \int_{\Omega}{\textbf{u}\otimes\textbf{u}:\textbf{Dv}\,dx}$ for all ${\textbf{u},\textbf{v}\in X}$.
\end{alg}
The operator $\hat{B}$ can be viewed as an extension of $B$, as $\langle\hat{B}\textbf{u},\textbf{v}\rangle_X=\langle B\textbf{u},\textbf{v}\rangle_X$ for all $\textbf{u},\textbf{v}\in V$, which in contrast to $B$ fulfills $\langle \hat{B}\textbf{u},\textbf{u}\rangle_X=0$ for all $\textbf{u}\in X$, and therefore guarantees the stability of the scheme \eqref{eq:p-NSnon}. The sequence $(V_n)_{n\in \mathbb{N}}$ violates the conditions (\hyperlink{C.1}{C.1}) and (\hyperlink{C.2}{C.2}). However, $(V_n)_{n\in \mathbb{N}}$ perfectly fits into the framework of quasi non-conforming approximations (cf.~Proposition~\ref{ex:3.5}).
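The stability property asserted for $\hat{B}$ can be verified in one line: inserting $\textbf{v}=\textbf{u}$ into the definition of $\hat{B}$, the two integrals coincide, so that

```latex
\begin{align*}
\langle \hat{B}\textbf{u},\textbf{u}\rangle_X
=\frac{1}{2}\int_{\Omega}{\textbf{u}\otimes\textbf{u}:\textbf{Du}\,dx}
-\frac{1}{2}\int_{\Omega}{\textbf{u}\otimes\textbf{u}:\textbf{Du}\,dx}=0.
\end{align*}
```

In consequence, testing \eqref{eq:p-NSnon} with $\textbf{v}_n=\textbf{u}_n^k$ removes the convective term from the resulting energy estimate.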
To be more precise, we will see that the assumptions (QNC.1) and (QNC.2) on the discrete spaces $(V_n)_{n\in \mathbb{N}}$ are often fulfilled, e.g., if the following assumption on the existence of appropriate projection operators with respect to the discrete spaces $X_h$ and $Z_h$ is satisfied: \begin{asum}[Projection operators]\label{proj} We assume that for every $h>0$ it holds \linebreak ${\mathcal{P}_1(\mathcal{T}_h)^d\subset X_h}$, $\mathbb{R}\subset Z_h$, and that there exist linear projection operators $\Uppi_h^{\divo }:X\to X_h$ and $\Uppi_h^{Z}:Z\to Z_h$ with the following properties: \begin{description}[{(iii)}] \item[(i)] \textbf{Divergence preservation of $\Uppi_h^{\divo }$ in $Z_h^*$:} It holds for all $\textbf{w}\in X$ and $\eta_h\in Z_h$ \begin{align*} \langle \divo \textbf{w},\eta_h\rangle_Z=\langle \divo \Uppi_h^{\divo }\textbf{w},\eta_h\rangle_Z. \end{align*} \item[(ii)] \textbf{$W^{1,1}$-stability of $\Uppi_h^{\divo }$:} There exists a constant $c>0$, independent of $h>0$, such that for every $\textbf{w}\in X$ and $K\in \mathcal{T}_h$\footnote{The neighbourhood $S_K$ of a simplex $K \in \mathcal T_h$ is defined via $S_K := \text{interior } \bigcup _{\{\overline K' \in \mathcal T_h \mid \overline K'\cap \overline K \neq \emptyset\}} \overline K'$.} \begin{align*} \fint_K{\vert \Uppi_h^{\divo }\textbf{w}\vert\,dx}\leq c\fint_K{\vert\textbf{w}\vert\,dx}+c\, h_K\fint_{S_K} {\vert \nabla\textbf{w}\vert\,dx}. \end{align*} \item[(iii)] \textbf{$L^1$-stability of $\Uppi_h^Z$:} There exists a constant $c>0$, independent of $h>0$, such that for every $\eta\in Z$ and $K\in \mathcal{T}_h$ \begin{align*} \fint_K{\vert \Uppi_h^{Z}\eta\vert\,dx}\leq c\fint_{S_K}{\vert\eta\vert\,dx}. \end{align*} \end{description} \end{asum} Certainly, the existence of projection operators $\Uppi_h^{\divo }$ and $\Uppi_h^Z$ satisfying Assumption \ref{proj} depends on the choice of $X_h$ and $Z_h$. 
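One consequence of the divergence preservation (i) is worth recording at this point: $\Uppi_h^{\divo }$ maps divergence-free functions to discretely divergence-free ones. Indeed, for $\textbf{w}\in V$ and all $\eta_h\in Z_h$

```latex
\begin{align*}
\langle \divo \Uppi_h^{\divo }\textbf{w},\eta_h\rangle_Z
=\langle \divo \textbf{w},\eta_h\rangle_Z=0,
\quad\text{ i.e., }\quad \Uppi_h^{\divo }\textbf{w}\in V_h.
\end{align*}
```

This is exactly the mechanism by which (QNC.1) is verified for the spaces $V_h$ below.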
It is shown in \cite{BF91}, \cite{GL01}, \cite{GS03} that $\Uppi_h^{\divo }$ exists for a variety of spaces $X_h$ and $Z_h$, which, e.g., include the Taylor-Hood, the conforming Crouzeix-Raviart, and the MINI element in dimension two and three. Projection operators $\Uppi_h^Z$ satisfying Assumption \ref{proj} (iii) are, e.g., the Cl\'ement interpolation operator (cf.~\cite{clement}) and a version of the Scott-Zhang interpolation operator (cf.~\cite{zhang-scott}). The abstract assumptions allow for an easy extension of our results to other choices of $X_h$ and $Z_h$ in future works.

\bigskip

\textbf{Plan of the paper:} In Section \ref{sec:2} we recall some basic definitions and results concerning the theory of pseudo-monotone operators and evolution equations. In Section \ref{sec:3} we introduce the concept of quasi non-conforming approximations. In Section \ref{sec:4} we introduce the quasi non-conforming Bochner pseudo-monotonicity, and give sufficient and easily verifiable conditions on families of operators such that the corresponding induced operator satisfies this concept. In Section \ref{sec:5} we recall some basic facts about the Rothe scheme. In Section \ref{sec:6} we formulate the scheme of a fully discrete, quasi non-conforming approximation of an evolution equation, prove that this scheme is well-defined, i.e., the existence of iterates, that the corresponding family of piece-wise constant interpolants satisfies certain a priori estimates, and that there exists a diagonal subsequence which weakly converges to a weak solution of the corresponding evolution equation. In Section \ref{sec:7} we apply this approximation scheme to the unsteady $p$-Navier-Stokes equations. In Section \ref{sec:8} we present some numerical experiments.

\section{Preliminaries}
\label{sec:2}

\subsection{Operators}

For a Banach space $X$ with norm $\|\cdot\|_X$ we denote by $X^*$ its dual space equipped with the norm ${\|\cdot\|_{X^*}}$.
The duality pairing is denoted by $\langle\cdot,\cdot\rangle_X$. All occurring Banach spaces are assumed to be real.

\begin{defn}\label{2.7}
Let $X$ and $Y$ be Banach spaces. The operator $A: X\rightarrow Y$ is said to be
\begin{description}[{(iii)}]
\item[{(i)}] \textbf{bounded}, if for all bounded subsets $M\subseteq X$ the image $A(M)\subseteq Y$ is bounded.
\item[{(ii)}] \textbf{pseudo-monotone}, if $Y=X^*$, and for $(x_n)_{n\in \mathbb{N}}\subseteq X$ from $x_n\rightharpoonup x$ in $X$ $(n\to\infty)$ and ${\limsup_{n\to\infty}{\langle Ax_n,x_n-x\rangle_X}\leq 0}$, it follows that $\langle Ax,x-y\rangle_X\leq \liminf_{n\to\infty}{\langle Ax_n,x_n-y\rangle_X}$ for every $y\in X$.
\item[{(iii)}] \textbf{coercive}, if $Y=X^*$ and $ \lim_{\|x\|_X\rightarrow\infty} {\frac{\langle Ax,x\rangle_X}{\|x\|_X}}=\infty$.
\end{description}
\end{defn}

\begin{prop}\label{2.9}
If $X$ is a reflexive Banach space and $A:X\to X^*$ a bounded, pseudo-monotone and coercive operator, then $R(A)=X^*$.
\end{prop}

\begin{proof}
See \cite[Corollary 32.26]{Zei90B}.\hfill$\qed$
\end{proof}

\begin{lem}\label{2.9a}
If $X$ is a reflexive Banach space and $A:X\to X^*$ a locally bounded and pseudo-monotone operator, then $A$ is demi-continuous.
\end{lem}

\begin{proof}
See \cite[Proposition 27.7]{Zei90B}.\hfill$\qed$
\end{proof}

\subsection{Evolution equations}

We call $(V,H,j)$ an \textbf{evolution triple}, if $V$ is a reflexive Banach space, $H$ a Hilbert space and $j:V\to H$ a dense embedding, i.e., $j$ is a linear, injective and bounded operator with $\overline{j(V)}^{\|\cdot \|_H}=H$. Let $R:H\rightarrow H^*$ be the Riesz isomorphism with respect~to~${(\cdot,\cdot)_H}$. As $j$ is a dense embedding, the adjoint \mbox{$j^*:H^*\rightarrow V^*$} and therefore $e:=j^*Rj: V \rightarrow V^*$ are embeddings as well. We call $e$ the \textbf{canonical embedding} of $(V,H,j)$. Note that
\begin{align}\label{eq:2.14}
\langle ev,w\rangle_V=(jv,jw)_H\quad\text{ for all }v,w\in V.
\end{align}
For an evolution triple $(V,H,j)$, $I:=\left(0,T\right)$, $T<\infty$, and $1\leq p\leq q\leq \infty$ we define operators ${\boldsymbol{j}:L^p(I,V)\to L^p(I,H)}\colon \boldsymbol x \mapsto \boldsymbol{j}\boldsymbol{x} $ and $\boldsymbol{j}^*:L^{q'}(I,H^*)\to L^{q'}(I,V^*) \colon \boldsymbol y \mapsto \boldsymbol{j}^*\boldsymbol{y}$, where $\boldsymbol{j}\boldsymbol{x} $ and $\boldsymbol{j}^*\boldsymbol{y} $ are for every $\boldsymbol{x}\in L^p(I,V)$ and ${\boldsymbol{y}\in L^{q'}(I,H^*)}$ given via
\begin{alignat*}{2}
(\boldsymbol{j}\boldsymbol{x})(t)&:=j(\boldsymbol{x}(t))&&\quad\text{ in }H\quad\;\text{ for a.e. }t\in I,\\
(\boldsymbol{j}^*\boldsymbol{y})(t)&:=j^*(\boldsymbol{y}(t))&&\quad\text{ in }V^*\quad\text{ for a.e. }t\in I.
\end{alignat*}
It is shown in \cite[Prop.~2.19]{KR19} that both $\boldsymbol{j}$ and $\boldsymbol{j}^*$ are embeddings, which we call \textbf{induced embeddings}. In particular, note that throughout the entire article we will use bold letters, i.e., $\boldsymbol{x}$, to indicate that a function is a Bochner-Lebesgue function. Moreover, we define the intersection space
\begin{align*}
L^p(I,V)\cap_{\boldsymbol{j}}L^q(I,H):=\{\boldsymbol{x}\in L^p(I,V)\mid \boldsymbol{j}\boldsymbol{x}\in L^q(I,H)\},
\end{align*}
which forms a Banach space equipped with the canonical sum norm
\begin{align*}
\|\cdot\|_{ L^p(I,V)\cap_{\boldsymbol{j}}L^q(I,H)}:=\|\cdot\|_{L^p(I,V)}+\|\boldsymbol{j}(\cdot)\|_{L^q(I,H)}.
\end{align*}
If $1<p\leq q<\infty$, then $L^p(I,V)\cap_{\boldsymbol{j}}L^q(I,H)$ is additionally reflexive.
Furthermore, for each $\boldsymbol{x}^*\in (L^p(I,V)\cap_{\boldsymbol{j}}L^q(I,H))^*$ there exist functions $\boldsymbol{g}\in L^{p'}(I,V^*)$ and $\boldsymbol{h}\in L^{q'}(I,H^*)$, such that for every $\boldsymbol{x}\in L^p(I,V)\cap_{\boldsymbol{j}}L^q(I,H)$ it holds \begin{align} \langle \boldsymbol{x}^*,\boldsymbol{x}\rangle_{L^p(I,V)\cap_{\boldsymbol{j}}L^q(I,H)}=\int_I{\langle\boldsymbol{g}(t)+(\boldsymbol{j}^*\boldsymbol{h})(t),\boldsymbol{x}(t)\rangle_V\,dt},\label{eq:dual} \end{align} and $\|\boldsymbol{x}^*\|_{(L^p(I,V)\cap_{\boldsymbol{j}}L^q(I,H))^*}:=\|\boldsymbol{g}\|_{L^{p'}(I,V^*)}+\|\boldsymbol{h}\|_{L^{q'}(I,H^*)}$, i.e., $(L^p(I,V)\cap_{\boldsymbol{j}}L^q(I,H))^*$ is isometrically isomorphic to the sum $L^{p'}(I,V^*)+\boldsymbol{j}^*(L^{q'}(I,H^*))$ (cf.~\cite[Kapitel I, Bemerkung 5.13 \& Satz 5.13]{GGZ}), which is a Banach space equipped with the norm \begin{align*} \|\boldsymbol{f}\|_{L^{p'}(I,V^*)+\boldsymbol{j}^*(L^{q'}(I,H^*))}:=\min_{\substack{\boldsymbol{g}\in L^{p'}(I,V^*)\\\boldsymbol{h}\in L^{q'}(I,H^*)\\\boldsymbol{f}=\boldsymbol{g}+\boldsymbol{j}^*\boldsymbol{h}}}{\|\boldsymbol{g}\|_{L^{p'}(I,V^*)}+\|\boldsymbol{h}\|_{L^{q'}(I,H^*)}}. \end{align*} \begin{defn}[Generalized time derivative]\label{2.15} Let $(V,H,j)$ be an evolution triple, ${I:=\left(0,T\right)}$, $T<\infty$, and $1< p\leq q<\infty$. A function $\boldsymbol{x}\in L^p(I,V)\cap_{\boldsymbol{j}}L^q(I,H)$ possesses a \textbf{generalized derivative with respect to the canonical embedding $e$ of $(V,H,j)$} if there exists a function $\boldsymbol{x}^*\in L^{p'}(I,V^*)+\boldsymbol{j}^*(L^{q'}(I,H^*))$ such that for all $v\in V$ and $\varphi\in C_0^\infty(I)$ \begin{align*} -\int_I{(j(\boldsymbol{x}(s)),jv)_H\varphi^\prime(s)\,ds}= \int_I{\langle\boldsymbol{x}^*(s),v\rangle_{V}\varphi(s)\,ds}. \end{align*} As this function $\boldsymbol{x}^*\in L^{p'}(I,V^*)+\boldsymbol{j}^*(L^{q'}(I,H^*))$ is unique (cf. 
\cite[Proposition 23.18]{Zei90A}), $\frac{d_e\boldsymbol{x}}{dt}:=\boldsymbol{x}^*$ is well-defined. By \begin{align*} \mathbfcal{W}^{1,p,q}_e(I,V,H):=\bigg\{\boldsymbol{x}\in L^p(I,V)\cap_{\boldsymbol{j}}L^q(I,H)\;\bigg|\;\exists\, \frac{d_e\boldsymbol{x}}{dt}\in L^{p'}(I,V^*)+\boldsymbol{j}^*(L^{q'}(I,H^*))\bigg\} \end{align*} we denote the \textbf{Bochner-Sobolev space with respect to $e$}. \end{defn} \begin{prop}[Formula of integration by parts]\label{2.16} Let $(V,H,j)$ be an evolution triple, $I:=\left(0,T\right)$, $T<\infty$, and $1<p\leq q<\infty$. Then, it holds: \begin{description}[(iii)] \item[(i)] The space $\mathbfcal{W}^{1,p,q}_e(I,V,H)$ forms a Banach space equipped with the norm \begin{align*} \|\cdot\|_{\mathbfcal{W}^{1,p,q}_e(I,V,H)}:=\|\cdot\|_{L^p(I,V)\cap_{\boldsymbol{j}}L^q(I,H)}+\left\|\frac{d_e}{dt}\cdot\right\|_{L^{p'}(I,V^*)+{\boldsymbol{j}^*}(L^{q'}(I,H^*))}. \end{align*} \item[(ii)] Given $\boldsymbol{x}\in \mathbfcal{W}^{1,p,q}_e(I,V,H)$ the function $\boldsymbol{j}\boldsymbol{x}\in L^q(I,H)$ possesses a unique representation $\boldsymbol{j}_c\boldsymbol{x}\in C^0(\overline{I},H)$, and the resulting mapping ${\boldsymbol{j}_c:\mathbfcal{W}^{1,p,q}_e(I,V,H)\rightarrow C^0(\overline{I},H)}$ is an embedding. \item[(iii)] \textbf{Generalized integration by parts formula:} It holds \begin{align*} \int_{t'}^t{\bigg\langle \frac{d_e\boldsymbol{x}}{dt}(s),\boldsymbol{y}(s)\bigg\rangle_V\,ds} =\left[((\boldsymbol{j}_c\boldsymbol{x})(s), (\boldsymbol{j}_c \boldsymbol{y})(s))_H\right]^{s=t}_{s=t'}-\int_{t'}^t{\bigg\langle \frac{d_e\boldsymbol{y}}{dt}(s),\boldsymbol{x}(s)\bigg\rangle_V\,ds} \end{align*} for all $\boldsymbol{x},\boldsymbol{y}\in \mathbfcal{W}^{1,p,q}_e(I,V,H)$ and $t,t'\in \overline{I}$ with $t'\leq t$. 
\end{description}
\end{prop}

\begin{proof}
See \cite[Kapitel IV, Satz 1.16 \& Satz 1.17]{GGZ}.\hfill $\qed$
\end{proof}

For an evolution triple $(V,H,j)$, $I:=\left(0,T\right)$, $T<\infty$, and $1<p\leq q< \infty$ we call an operator $\mathbfcal{A}:L^p(I,V)\cap_{\boldsymbol{j}}L^q(I,H)\to (L^p(I,V)\cap_{\boldsymbol{j}}L^q(I,H))^*$ \textbf{induced} by a family of operators $A(t):V\to V^*$, $t\in I$, if for every $\boldsymbol{x},\boldsymbol{y}\in L^p(I,V)\cap_{\boldsymbol{j}}L^q(I,H)$ it holds
\begin{align}
\langle\mathbfcal{A}\boldsymbol{x},\boldsymbol{y}\rangle_{L^p(I,V)\cap_{\boldsymbol{j}}L^q(I,H)}=\int_I{\langle A(t)(\boldsymbol{x}(t)),\boldsymbol{y}(t)\rangle_V\,dt}.\label{eq:induced}
\end{align}

\begin{rmk}[Need for $L^p(I,V)\cap_{\boldsymbol{j}}L^q(I,H)$]
Note that an operator family $A(t):V\to V^*$, $t\in I$, can define an induced operator in different spaces. In \cite{KR19}, \cite{K19} the induced operator $\mathbfcal{A}$ is considered as an operator from $L^p(I,V)\cap_{\boldsymbol{j}}L^\infty(I,H)$ into $(L^p(I,V))^*$. Here, we consider the induced operator $\mathbfcal{A}$ as an operator from $L^p(I,V)\cap_{\boldsymbol{j}}L^q(I,H)$ into $(L^p(I,V)\cap_{\boldsymbol{j}}L^q(I,H))^*$, which is more general and enables us to consider operator families with significantly worse growth behavior. Here, the so-called \textit{Temam modification} $\widehat{B}:X\to X^*$, tracing back to \cite{Tem68}, \cite{Tem77}, of the convective term $B:X\to X^*$ from \eqref{eq:sb}, defined for $p>\frac{3d+2}{d+2}$ and all $\mathbf{u},\mathbf{v}\in X$ via
\begin{align}
\langle \widehat{B}\mathbf{u},\mathbf{v}\rangle_X= \frac{1}{2}\int_\Omega{\mathbf{u}\otimes\mathbf{v}: \mathbf{D}\mathbf{u}\,dx}- \frac{1}{2}\int_\Omega{\mathbf{u}\otimes\mathbf{u}: \mathbf{D}\mathbf{v}\,dx}\label{temam-mod},
\end{align}
serves as a prototypical example.
In fact, following \cite[Example 5.1]{KR19}, one can prove that ${B:X\to X^*}$ satisfies for $d=3$ and $p\ge \frac{11}{5}$ the estimate
\begin{gather}\label{eq:esti}
\|B\mathbf{u}\|_{X^*}\leq c(1+\|\mathbf{u}\|_Y)(1+\|\mathbf{u}\|_X^{p-1})
\end{gather}
for all $\mathbf{u}\in X$ and that the corresponding induced operator $\mathbfcal{B}$ is well-defined and bounded as an operator from $L^p(I,X)\cap L^\infty(I,Y)$ to $(L^p(I,X))^*$. Regrettably, for the remaining term in Temam's modification, i.e., for the operator $\tilde{B}:=\widehat{B}-\frac{1}{2}B:X\to X^*$, we can prove \eqref{eq:esti} for $d=3$ only for $p>\frac{13}{5}$. In order to reach $p>\frac{11}{5}$ for $d=3$, one is forced to use a larger target space, i.e., we view the induced operator of $\tilde B$ as an operator from $ L^p(I,X)\cap L^q(I,Y)$ to $ (L^p(I,X)\cap L^q(I,Y))^*$, where $q\in [p,\infty)$ is specified in Proposition~\ref{4.7}.
\end{rmk}

\begin{defn}[Weak solution]\label{2.17}
Let $(V,H,j)$ be an evolution triple, $I:=\left(0,T\right)$, $T<\infty$, and $1<p\leq q< \infty$. Moreover, let ${x}_0\in H$ be an initial value, $\boldsymbol{f}\in L^{p'}(I,V^*)$ a right-hand side, and $\mathbfcal{A}:L^p(I,V)\cap_{\boldsymbol{j}}L^q(I,H)\to (L^p(I,V)\cap_{\boldsymbol{j}}L^q(I,H))^*$ induced by a family of operators $A(t):V\to V^*$, $t\in I$. A function $\boldsymbol x \in \mathbfcal{W}^{1,p,q}_e(I,V,H)$ is called \textbf{weak solution} of the initial value problem \eqref{eq:1.1} if $(\boldsymbol{j}_c\boldsymbol{x})(0)={x}_0$ in $H$ and for all $\boldsymbol{\phi}\in C^1_0(I,V)$ there holds
\begin{align*}
\int_I{\bigg\langle\frac{d_e\boldsymbol{x}}{dt}(t),\boldsymbol{\phi}(t)\bigg\rangle_V\,dt}+\int_I{\langle A(t)(\boldsymbol{x}(t)),\boldsymbol{\phi}(t)\rangle_V\,dt}&=\int_I{\langle \boldsymbol{f}(t),\boldsymbol{\phi}(t)\rangle_V\,dt}.
\end{align*} \end{defn} Here, the initial condition is well-defined since due to Proposition \ref{2.16} (ii) there exists the unique continuous representation $\boldsymbol{j}_c\boldsymbol{x}\in C^0(\overline{I},H)$ of $\boldsymbol x \in \mathbfcal{W}^{1,p,q}_e(I,V,H)$. \section{Quasi non-conforming approximation} \label{sec:3} In this section we introduce the concept of quasi non-conforming approximations. \begin{defn}[Quasi non-conforming approximation]\label{3.1} Let $(V,H,j) $ and $(X,Y,j)$ be evolution triples such that $V\subseteq X$ with $\|\cdot\|_V=\|\cdot\|_X$ in $V$ and ${H\subseteq Y}$~with~${(\cdot,\cdot)_H=(\cdot,\cdot)_Y}$~in~${H\times H}$. Moreover, let $I:=\left(0,T\right)$, $T<\infty$, and let $1<p<\infty$. We call a sequence of closed subspaces $(V_n)_{n\in\mathbb{N}}$ of $X$ a \textbf{quasi non-conforming approximation of $V$ in $X$}, if the following properties are satisfied: \begin{description}[{(ii)}] \item[\textbf{(QNC.1)}] \hypertarget{QNC.1}{} There exists a dense subset $D\subseteq V$, such that for each $v\in D$ there exist elements $v_n\in V_n$, $n\in\mathbb{N}$, such that $v_n\to v $ in $X$~$(n\to\infty)$. \item[\textbf{(QNC.2)}] \hypertarget{QNC.2}{} For each sequence $\boldsymbol{x}_n\in L^p(I,V_{m_n})$, $n\in \mathbb{N}$, where ${(m_n)_{n\in\mathbb{N}}\subseteq\mathbb{N}}$ with $m_n\to \infty$ $(n\to\infty)$, from $\boldsymbol{x}_n\rightharpoonup \boldsymbol{x}$ in $L^p(I,X)$~$(n\to\infty)$, it follows that $\boldsymbol{x}\in L^p(I,V)$. \end{description} \end{defn} The next proposition shows that our motivating example, namely the approximation of divergence-free Sobolev functions through discretely divergence-free finite element spaces perfectly fits into the framework of quasi non-conforming approximations. 
\begin{prop}\label{ex:3.5}
For $p\ge \frac{2d}{d+2}$, we set as in the introduction $(V,H,\textup{id}_V):=(W^{1,p}_{0,\divo }(\Omega),L^2_{\divo }(\Omega),\textup{id})$, $ (X,Y,\textup{id}_X):=(W^{1,p}_0(\Omega)^d,L^2(\Omega)^d,\textup{id})$ and $Z:=L^{p'}(\Omega)$. Moreover, for given $m, \ell \in \mathbb N_0$ and all $h>0$ let $X_h\subseteq \mathcal{P}_m(\mathcal{T}_h)^d\cap X$ and $Z_h\subseteq \mathcal{P}_\ell(\mathcal{T}_h)\cap Z$ be finite element spaces satisfying Assumption~\ref{proj}. Then, for a null sequence $(h_n)_{n\in\mathbb{N}}\subseteq\left(0,\infty\right)$ the sequence $(V_n)_{n\in \mathbb{N}}$, for every $n\in \mathbb{N}$ given via
\begin{align*}
V_n:=V_{h_n}:=\{\mathbf{v}_n\in X_{h_n} \mid \langle\divo \mathbf{v}_n,\eta_n\rangle_{Z}=0\text{ for all }\eta_n\in Z_{h_n}\},
\end{align*}
forms a quasi non-conforming approximation of $V$ in $X$.
\end{prop}

\begin{proof}
Clearly, $(V,H,\textup{id}_V)$ and $(X,Y,\textup{id}_X)$ form evolution triples, such that $V\subseteq X$ with ${\|\!\cdot\!\|_V=\|\!\cdot\!\|_X}$ in $V$ and $H\subseteq Y$ with $(\cdot,\cdot)_H=(\cdot,\cdot)_Y$ in $H\times H$. So, let us verify that $(V_n)_{n\in\mathbb{N}}$ satisfies (\hyperlink{QNC.1}{QNC.1}) and (\hyperlink{QNC.2}{QNC.2}):

\textbf{ad (\hyperlink{QNC.1}{QNC.1})} Due to their finite dimensionality, the spaces $(V_n)_{n\in \mathbb{N}}$ are closed. We set $D:=\mathbfcal{V}:=\{\textbf{v}\in C_0^\infty(\Omega)^d\mid\divo \textbf{v}=0\}$. Let $\mathbf{v}\in D$. Then, owing to standard estimates for polynomial projection operators (cf.~\cite[Lemma 2.25]{tscherpel-phd}), the sequence $\mathbf{v}_n:=\Uppi_{h_n}^{\divo }\mathbf{v}\in V_n$, $n\in \mathbb{N}$, satisfies
\begin{align*}
\|\mathbf{v}-\mathbf{v}_n\|_X\leq ch_n\|\mathbf{v}\|_{W^{2,2}(\Omega)^d}\;\;\to\;\; 0\quad(n\to \infty).
\end{align*}

\textbf{ad (\hyperlink{QNC.2}{QNC.2})} Let $\boldsymbol{x}_n\in L^p(I,V_{m_n})$, $n\in \mathbb{N}$, where $(m_n)_{n\in\mathbb{N}}\subseteq\mathbb{N}$ with $m_n\to\infty$ $(n\to\infty)$, be such that $\boldsymbol{x}_n\rightharpoonup \boldsymbol{x}$ in $L^p(I,X)$ $(n\to\infty)$. Let $\eta\in C^\infty_0(\Omega)$ and $\varphi\in C_0^\infty(I)$. As in the previous step we infer that the sequence $\eta_n:=\Uppi_{h_{m_n}}^Z\eta\in Z_{h_{m_n}}$, $n\in\mathbb{N}$, satisfies $\eta_n\to\eta$ in $Z$ $(n\to\infty)$. On the other hand, since $\langle\divo \boldsymbol{x}_n(t),\eta_n\rangle_Z=0$ for almost every $t\in I$ and $n\in \mathbb{N}$, as $\boldsymbol{x}_n(t)\in V_{m_n}$ for almost every $t\in I$ and all $n\in \mathbb{N}$, there holds for every $n\in \mathbb{N}$
\begin{align}
\int_I{\langle\divo \boldsymbol{x}_n(s),\eta_n\rangle_Z\varphi(s)\,ds}=0.\label{eq:3.4}
\end{align}
By passing to the limit $n\to \infty$ in \eqref{eq:3.4}, we obtain for every $\eta \in C_0^\infty(\Omega)$ and $\varphi\in C_0^\infty(I)$
\begin{align*}
\int_I{\langle \divo \boldsymbol{x}(s),\eta\rangle_Z\varphi(s)\,ds}=0,
\end{align*}
i.e., $\boldsymbol{x}\in L^p(I,V)$.
\hfill$\qed$
\end{proof}

The next proposition shows that the notion of quasi non-conforming approximation is indeed a generalization of the usual notion of conforming approximation.

\begin{prop}\label{ex}
Let $(X,Y,j)$ and $(V,H,j)$ be as in Definition \ref{3.1}. Then, it holds:
\begin{description}[{(ii)}]
\item[(i)] The constant approximation $V_n=V$, $n\in \mathbb{N}$, is a quasi non-conforming approximation of $V$ in $X$.
\item[(ii)] If $(V_n)_{n\in \mathbb{N}}$ is a conforming approximation of $V$, i.e., $(V_n)_{n\in \mathbb{N}}$ satisfies (\hyperlink{C.1}{C.1}) and (\hyperlink{C.2}{C.2}), then $(V_n)_{n\in \mathbb{N}}$ is a quasi non-conforming approximation of~$V$~in~$X$.
\end{description}
\end{prop}

\begin{proof}
\textbf{ad (i)} Follows right from the definition.

\textbf{ad (ii)} We set $D:=\bigcup_{n\in \mathbb{N}}{V_n}$.
Then, for each $v\in D$ there exists an integer $n_0\in \mathbb{N}$ such that $v\in V_n$ for every $n\ge n_0$. Therefore, the sequence $v_n\in V_n$, $n\in \mathbb{N}$, given via $v_n:=0$ if $n<n_0$ and $v_n:=v$ if $n\ge n_0$, satisfies $v_n\to v$ in $V$ $(n\to \infty)$, i.e., $(V_n)_{n\in \mathbb{N}}$ satisfies (\hyperlink{QNC.1}{QNC.1}). Apart from that, $(V_n)_{n\in \mathbb{N}}$ obviously fulfills (\hyperlink{QNC.2}{QNC.2}).\hfill$\qed$
\end{proof}

The following proposition will be crucial in verifying that the induced operator $\mathbfcal A$ of a family of operators $(A(t))_{t\in I}$ is quasi non-conforming Bochner pseudo-monotone (cf.~Definition~\ref{3.4}).

\begin{prop}\label{3.2}
Let $(V,H,j) $ and $(X,Y,j)$ be as in Definition \ref{3.1} and let $(V_n)_{n\in\mathbb{N}}$ be a quasi non-conforming approximation of $V$ in $X$. Then, the following statements hold true:
\begin{description}[{(iii)}]
\item[(i)] For a sequence $v_n\in V_{m_n}$, $n\in \mathbb{N}$, where $(m_n)_{n\in\mathbb{N}}\subseteq\mathbb{N}$ with $m_n\to \infty$ $(n\to \infty)$, from $v_n\rightharpoonup v$ in $X$ $(n\to \infty)$, it follows that $v\in V$.
\item[(ii)] For a sequence $v_n\in V_{m_n}$, $n\in \mathbb{N}$, where $(m_n)_{n\in\mathbb{N}}\subseteq\mathbb{N}$ with $m_n\to \infty$ $(n\to \infty)$, with $\sup_{n\in \mathbb{N}}{\|v_n\|_X}<\infty$, and $v\in V$ the following statements are equivalent:
\begin{description}
\item[(a)] $v_n\rightharpoonup v$ in $X$ $(n\to\infty)$.
\item[(b)] $P_Hjv_n\rightharpoonup jv$ in $H$ $(n\to\infty)$, where $P_H\!:\!Y\!\to\! H$ is the orthogonal projection of~$Y$~onto~$H$.
\end{description}
\item[(iii)] For each $h\in H$ there exists a sequence $v_n\in V_{m_n}$, $n\in \mathbb{N}$, where $(m_n)_{n\in\mathbb{N}}\subseteq\mathbb{N}$ with $m_n\to \infty$ $(n\to \infty)$, such that $jv_n\to h$ in $Y$ $(n\to \infty)$.
\end{description}
\end{prop}

\begin{proof}
\textbf{ad (i)} Immediate consequence of (\hyperlink{QNC.2}{QNC.2}).
\textbf{ad (ii)} \textbf{(a) $\boldsymbol{\Rightarrow}$ (b)} Follows from the weak continuity of $j:X\to Y$ and $P_H:Y\to H$.

\textbf{(b) $\boldsymbol{\Rightarrow}$ (a)} From the reflexivity of $X$, we obtain a subsequence $(v_n)_{n\in \Lambda}$, with $\Lambda\subseteq \mathbb{N}$, and an element $\tilde{v}\in X$, such that $v_n\rightharpoonup\tilde{v}$ in $X$ $(\Lambda\ni n\to \infty)$. Due to \textbf{(i)} we infer $\tilde{v}\in V$. From the weak continuity of $j:X\to Y$ and $P_H:Y\to H$ we conclude $P_Hjv_n\rightharpoonup P_Hj\tilde{v}=j\tilde{v}$ in $H$ $(\Lambda\ni n\to \infty)$. In consequence, we have $j\tilde{v}=jv$ in $H$, which by virtue of the injectivity of $j:V\to H$ implies that $\tilde{v}=v$ in $V$, and therefore
\begin{align}
v_n\;\;\rightharpoonup \;\;v\quad\text{ in }X\quad(\Lambda\ni n\to \infty).\label{eq:3.3}
\end{align}
Since this argumentation stays valid for each subsequence of $(v_n)_{n\in \mathbb{N}}\subseteq X$, $v\in V$ is a weak accumulation point of each subsequence of $(v_n)_{n\in \mathbb{N}}\subseteq X$. Therefore, the standard convergence principle (cf.~\cite[Kap. I, Lemma 5.4]{GGZ}) guarantees that \eqref{eq:3.3} remains true even if $\Lambda=\mathbb{N}$.

\textbf{ad (iii)} Since $(V,H,j)$ is an evolution triple, $j(V)$ is dense in $H$. As a result, for fixed $h\in H$ there exists a sequence $(v_n)_{n\in \mathbb{N}}\subseteq V$, such that $\|h-jv_n\|_H\leq 2^{-n}$ for all $n\in \mathbb{N}$. Due to (\hyperlink{QNC.1}{QNC.1}) there exist a sequence $(w_n)_{n\in \mathbb N} \subseteq D$, such that $\|v_n-w_n\|_V\le 2^{-n-1}$ for all $n\in \mathbb{N}$ and a double sequence $(v_k^n)_{n,k\in \mathbb{N}}\subseteq X$, with $v_k^n\in V_k$ for all $k,n\in \mathbb{N}$, such that $v_k^n\to w_n$ in $X$ $(k\to \infty)$ for all $n\in \mathbb{N}$. Thus, for each $n\in \mathbb{N}$ there exists $m_n\in \mathbb{N}$, such that $\|w_n-v_k^n\|_X\leq 2^{-n-1}$ for all $k\ge m_n$.
Then, we have $v_{m_n}^n\in V_{m_n}$ for all $n\in \mathbb{N}$ and $\|h-jv_{m_n}^n\|_Y\leq (1+c)2^{-n}$ for all $n\in \mathbb{N}$, where $c>0$ is the embedding constant of $j$. \hfill$\qed$ \end{proof} \section{Quasi non-conforming Bochner pseudo-monotonicity} \label{sec:4} In this section we introduce an extended notion of Bochner pseudo-monotonicity (cf.~\cite{KR19}, \cite{K19}), which incorporates a given quasi non-conforming approximation $(V_n)_{n\in \mathbb{N}}$. \begin{defn}\label{3.4} Let $(X,Y,j)$ and $(V,H,j) $ be as in Definition \ref{3.1} and let $(V_n)_{n\in\mathbb{N}}$ be a quasi non-conforming approximation of $V$ in $X$, $I:=\left(0,T\right)$, with $0<T<\infty$, and $1<p\leq q<\infty$. An operator $\mathbfcal{A}:L^p(I,X)\cap_{\boldsymbol{j}} L^q(I,Y)\to (L^p(I,X)\cap_{\boldsymbol{j}} L^q(I,Y))^*$ is said to be \textbf{quasi non-conforming Bochner pseudo-monotone with respect to $(V_n)_{n\in\mathbb{N}}$} if for a sequence $\boldsymbol{x}_n\in L^\infty(I,V_{m_n})$, $n\in\mathbb{N}$, where $(m_n)_{n\in\mathbb{N}}\subseteq\mathbb{N}$ with $m_n\to \infty$ $(n\to\infty)$, from \begin{alignat}{2} \boldsymbol{x}_n&\;\;\rightharpoonup\;\; \boldsymbol{x}&&\quad\text{ in }L^p(I,X)\;\quad (n\rightarrow\infty)\label{eq:3.5}, \\ \boldsymbol{j}\boldsymbol{x}_n&\;\;\overset{\ast}{\rightharpoondown}\;\; \boldsymbol{j}\boldsymbol{x}&&\quad\text{ in } L^\infty(I,Y)\quad (n\rightarrow\infty), \label{eq:3.6} \\ P_H(\boldsymbol{j}\boldsymbol{x}_n)(t)&\;\;\rightharpoonup\;\; (\boldsymbol{j}\boldsymbol{x})(t) &&\quad\text{ in }H\quad (n\rightarrow\infty)\quad\text{for a.e. 
}t\in I,\label{eq:3.7} \end{alignat} and \begin{align} \limsup_{n\rightarrow\infty}{\langle \mathbfcal{A}\boldsymbol{x}_n,\boldsymbol{x}_n-\boldsymbol{x}\rangle_{L^p(I,X)\cap_{\boldsymbol{j}} L^q(I,Y)}}\leq 0\label{eq:3.8} \end{align} it follows for all $\boldsymbol{y}\in L^p(I,X)\cap_{\boldsymbol{j}} L^q(I,Y)$ that \begin{align*} \langle \mathbfcal{A}\boldsymbol{x},\boldsymbol{x}-\boldsymbol{y}\rangle_{L^p(I,X)\cap_{\boldsymbol{j}} L^q(I,Y)}\leq \liminf_{n\rightarrow\infty}{\langle \mathbfcal{A}\boldsymbol{x}_n,\boldsymbol{x}_n-\boldsymbol{y}\rangle_{L^p(I,X)\cap_{\boldsymbol{j}} L^q(I,Y)}}. \end{align*} \end{defn} Note that \eqref{eq:3.5} and \eqref{eq:3.6} guarantee that $\boldsymbol{x}\in L^p(I,V)\cap_{\boldsymbol{j}} L^\infty(I,H)$ due to Definition \ref{3.1}. \medskip The basic idea of quasi non-conforming Bochner pseudo-monotonicity, in comparison to the original notion of Bochner pseudo-monotonicity tracing back to \cite{KR19}, consists in incorporating the approximation $(V_n)_{n\in \mathbb{N}}$. We will see in the proof of Theorem \ref{5.17} that \eqref{eq:3.5}--\eqref{eq:3.8} are natural properties of a sequence $\boldsymbol{x}_n\in L^p(I,V_{m_n})$, $n\in \mathbb{N}$, coming from a quasi non-conforming Rothe-Galerkin approximation of \eqref{eq:1.1}, if $\mathbfcal{A}$ satisfies appropriate additional assumptions. In fact, \eqref{eq:3.5} usually is a consequence of the coercivity of $\mathbfcal{A}$, \eqref{eq:3.6} stems from the time derivative, while \eqref{eq:3.7} and \eqref{eq:3.8} follow directly from the approximation scheme. \begin{prop}\label{3.9} Let $(X,Y,j)$ and $(V,H,j) $ be as in Definition \ref{3.1} and let $(V_n)_{n\in\mathbb{N}}$ be a quasi non-conforming approximation of $V$ in $X$, $I:=\left(0,T\right)$, $T<\infty$, and $1<p\leq q<\infty$. 
Moreover, let ${A(t):X\to X^*}$, $t\in I$, be a family of operators with the following properties:
\begin{description}[{(A.3)}]
\item[\textbf{(A.1)}] \hypertarget{A.1}{} $A(t):X\to X^*$ is pseudo-monotone for almost every $t\in I$.
\item[\textbf{(A.2)}] \hypertarget{A.2}{} $A(\cdot)x:I\to X^*$ is Bochner measurable for every $x\in X$.
\item[\textbf{(A.3)}] \hypertarget{A.3}{} For some constants $c_0>0$ and $c_1,c_2\ge 0$ it holds for almost every $t\in I$ and every $x\in X$
\begin{align*}
\langle A(t)x,x\rangle_X\ge c_0\|x\|_X^p-c_1\|jx\|_Y^2-c_2.
\end{align*}
\item[\textbf{(A.4)}] \hypertarget{A.4}{} For constants $\gamma\ge 0$ and $\lambda\in \left(0,c_0\right)$ it holds for almost every $t\in I$ and every $x,y\in X$
\begin{align*}
\vert\langle A(t)x,y\rangle_X\vert \leq \lambda\|x\|_X^p+\gamma\big[1+\|jx\|_Y^q+\|jy\|_Y^q+\|y\|_X^p\big].
\end{align*}
\end{description}
Then, the induced operator $\mathbfcal{A}:L^p(I,X)\cap_{\boldsymbol{j}} L^q(I,Y)\to (L^p(I,X)\cap_{\boldsymbol{j}} L^q(I,Y))^*$,~given~via~\eqref{eq:induced}, is well-defined, bounded and quasi non-conforming Bochner pseudo-monotone~with~respect~to~${(V_n)_{n\in \mathbb{N}}}$.
\end{prop}

\begin{proof}
\textbf{1. Well-definedness:} For $\boldsymbol{x}_1,\boldsymbol{x}_2\in L^p(I,X)\cap_{\boldsymbol{j}} L^q(I,Y)$ there exist sequences of simple functions $(\boldsymbol{s}_n^m)_{n\in \mathbb{N}}\subseteq L^\infty(I,X)$, $m=1,2$, i.e., $\boldsymbol{s}_n^m(t)=\sum_{i=1}^{k_n^m}{s_{n,i}^m\chi_{E_{n,i}^m}(t)}$ for $t\in I$ and $m=1,2$, where $s_{n,i}^m\in X$, $k_n^m\in \mathbb{N}$ and $E_{n,i}^m\in \mathcal{L}^1(I) $ with $\bigcup_{i=1}^{k_n^m}{E_{n,i}^m}=I$ and $E_{n,i}^m\cap E_{n,j}^m=\emptyset$ for $i\neq j$, such that $ \boldsymbol{s}_n^m(t)\to \boldsymbol{x}_m(t)$ in $X$ for almost every $t\in I$ and $m=1,2$.
Moreover, it follows from Lemma~\ref{2.9a} that $A(t):X\to X^*$ is for almost every $t\in I$ demi-continuous, since it is for almost every $t\in I$ pseudo-monotone (cf.~(\hyperlink{A.1}{A.1})) and bounded (cf.~(\hyperlink{A.4}{A.4})). This yields for almost~every~${t\in I}$ \begin{align} \langle A(t)(\boldsymbol{s}_n^1(t)),\boldsymbol{s}_n^2(t)\rangle_{X}=\sum_{i=1}^{k_n^1}{\sum_{j=1}^{k_n^2}{\langle A(t)s_{n,i}^1,s_{n,j}^2\rangle_{X}\chi_{E_{n,i}^1\cap E_{n,j}^2}\!(t)}}\overset{n\to\infty}{\to }\langle A(t)(\boldsymbol{x}_1(t)),\boldsymbol{x}_2(t)\rangle_{X}.\label{eq:3.92} \end{align} Thus, since the functions $(t\mapsto\langle A(t)s_{n,i}^1,s_{n,j}^2\rangle_{X}):I\to \mathbb{R}$, $i=1,...,k_n^1$, $j=1,...,k_n^2$, $n\in \mathbb{N}$, are Lebesgue measurable due to (\hyperlink{A.2}{A.2}), we conclude from \eqref{eq:3.92}~that~${(t\mapsto\langle A(t)(\boldsymbol{x}_1(t)),\boldsymbol{x}_2(t)\rangle_{X})\!:\!I\to \mathbb{R}}$ is Lebesgue measurable. In addition, using (\hyperlink{A.4}{A.4}), we obtain \begin{align} \begin{split} \int_I{\langle A(t)(\boldsymbol{x}_1(t)),\boldsymbol{x}_2(t)\rangle_{X}\,dt}&\leq \lambda \|\boldsymbol{x}_1\|_{L^p(I,X)}^p\\&\quad+\gamma [T+\|\boldsymbol{j}\boldsymbol{x}_1\|_{L^q(I,Y)}^q+\|\boldsymbol{j}\boldsymbol{x}_2\|_{L^q(I,Y)}^q+\|\boldsymbol{x}_2\|_{L^p(I,X)}^p], \end{split}\label{eq:3.91} \end{align} i.e., $\mathbfcal{A}:L^p(I,X)\cap_{\boldsymbol{j}} L^q(I,Y)\to (L^p(I,X)\cap_{\boldsymbol{j}} L^q(I,Y))^*$ is well-defined. \\[-3mm] \textbf{2.
Boundedness:} As $\|\boldsymbol{y}\|_{L^p(I,X)\cap_{\boldsymbol{j}} L^q(I,Y)}\leq 1$ implies that $\|\boldsymbol{y}\|_{L^p(I,X)}^p+\|\boldsymbol{j}\boldsymbol{y}\|_{L^q(I,Y)}^q\leq 2$ for every $\boldsymbol{y}\in L^p(I,X)\cap_{\boldsymbol{j}} L^q(I,Y)$, we infer from \eqref{eq:3.91} for every $\boldsymbol{x}\in L^p(I,X)\cap_{\boldsymbol{j}} L^q(I,Y)$ that \begin{align*} \begin{split} \|\mathbfcal{A}\boldsymbol{x}\|_{(L^p(I,X)\cap_{\boldsymbol{j}} L^q(I,Y))^*}&=\sup_{\|\boldsymbol{y}\|_{L^p(I,X)\cap_{\boldsymbol{j}} L^q(I,Y)}\leq 1}{\langle \mathbfcal{A}\boldsymbol{x},\boldsymbol{y}\rangle_{L^p(I,X)\cap_{\boldsymbol{j}} L^q(I,Y)}} \\&\leq \lambda \|\boldsymbol{x}\|_{L^p(I,X)}^p+\gamma\|\boldsymbol{j}\boldsymbol{x}\|_{L^q(I,Y)}^q+\gamma [T+2], \end{split} \end{align*} i.e., $\mathbfcal{A}:L^p(I,X)\cap_{\boldsymbol{j}} L^q(I,Y)\to (L^p(I,X)\cap_{\boldsymbol{j}} L^q(I,Y))^*$ is bounded. \\[-3mm] \textbf{3. Quasi non-conforming Bochner pseudo-monotonicity:} In principle, we proceed analogously to \cite[Proposition~3.13]{KR19}. However, since only the almost everywhere weak convergence of the orthogonal projections, i.e., \eqref{eq:3.7}, is available in the definition of quasi non-conforming Bochner pseudo-monotonicity (cf.~Definition~\ref{3.4}), the arguments in \cite{KR19} require some slight modifications. In fact, in this context the properties of the quasi non-conforming approximation $(V_n)_{n\in \mathbb{N}}$ come into play. In particular, the role of Proposition \ref{3.2} will be crucial. We split the proof of the quasi non-conforming Bochner pseudo-monotonicity into four steps: \\[-3mm] \textbf{3.1. Collecting information:} Let $\boldsymbol{x}_n\in L^\infty(I,V_{m_n})$, $n\in\mathbb{N}$, where $(m_n)_{n\in\mathbb{N}}\!\subseteq\!\mathbb{N}$~with~${m_n\!\to\!\infty}$ $(n\to\infty)$, be a sequence satisfying \eqref{eq:3.5}--\eqref{eq:3.8}.
We fix an arbitrary $\boldsymbol{y}\in L^p(I,X)\cap_{\boldsymbol{j}} L^q(I,Y)$, and choose a subsequence $(\boldsymbol{x}_n)_{n\in\Lambda}$, with $\Lambda\subseteq\mathbb{N}$, such that \begin{align} \lim_{\substack{n\rightarrow\infty\\n\in\Lambda}}{ \langle \mathbfcal{A}\boldsymbol{x}_n,\boldsymbol{x}_n-\boldsymbol{y}\rangle_{L^p(I,X)\cap_{\boldsymbol{j}} L^q(I,Y)}}= \liminf_{n\rightarrow\infty}{\langle\mathbfcal{A} \boldsymbol{x}_n,\boldsymbol{x}_n-\boldsymbol{y} \rangle_{L^p(I,X)\cap_{\boldsymbol{j}} L^q(I,Y)}}.\label{eq:3.10} \end{align} Due to \eqref{eq:3.7} there exists a subset $E\subseteq I$, with $I\setminus E$ a null set, such that for all $t\in E$ \begin{align} P_H(\boldsymbol{j}\boldsymbol{x}_n)(t)\;\;\rightharpoonup\;\;(\boldsymbol{j}\boldsymbol{x})(t)\quad\text{ in }H\quad(n\to\infty).\label{eq:3.11} \end{align} Using (\hyperlink{A.3}{A.3}) and (\hyperlink{A.4}{A.4}), we get for every $\boldsymbol{z}\in L^p(I,X)\cap_{\boldsymbol{j}} L^q(I,Y)$~and~almost~every~${t\in I}$ \begin{align} \begin{split} \langle A&(t)(\boldsymbol{x}_n(t)),\boldsymbol{x}_n(t) -\boldsymbol{z}(t)\rangle_X \\ &\ge c_0\|\boldsymbol{x}_n(t)\|_X^p- c_1\|j(\boldsymbol{x}_n(t))\|_Y^2-c_2 -\langle A(t)(\boldsymbol{x}_n(t)),\boldsymbol{z}(t)\rangle_X \\ &\ge (c_0-\lambda)\|\boldsymbol{x}_n(t)\|_X^p -c_1K^2-c_2-\gamma\big[1+K^q+\|(\boldsymbol{j}\boldsymbol{z})(t)\|_Y^q+\|\boldsymbol{z}(t)\|_X^p\big], \end{split}\label{eq:3.12} \end{align} where $K:=\sup_{n\in\mathbb{N}}{\|\boldsymbol{j}\boldsymbol{x}_n\|_{L^\infty(I,Y)}}<\infty$ (cf.~\eqref{eq:3.6}). If we set $\mu_{\boldsymbol{z}}(t):=c_1K^2+c_2+\gamma\big[1+K^q+\|(\boldsymbol{j}\boldsymbol{z})(t)\|_Y^q+\|\boldsymbol{z}(t)\|_X^p\big]$, which defines a function $\mu_{\boldsymbol{z}}\in L^1(I)$, then \eqref{eq:3.12} reads \begin{align} \langle A(t)(\boldsymbol{x}_n(t)),\boldsymbol{x}_n(t)- \boldsymbol{z}(t)\rangle_X\ge (c_0-\lambda)\|\boldsymbol{x}_n(t)\|_X^p -\mu_{\boldsymbol{z}}(t) \tag*{$(\ast)_{\boldsymbol{z},n,t}$} \end{align} for almost every $t\in I$ and all $n\in \Lambda$.
Next, we define \begin{align*} \boldsymbol{\mathcal{S}}:= \big \{t\in E \mid A(t):X\rightarrow X^*\text{ is pseudo-monotone}, \vert\mu_{\boldsymbol{x}}(t)\vert<\infty\text{ and }(\ast)_{\boldsymbol{x},n,t}\text{ holds for all }n\in\Lambda\big \}. \end{align*} Clearly, $I\setminus\boldsymbol{\mathcal{S}}$ is a null set. \\[-3mm] \textbf{3.2. Intermediate objective:} Our next objective is to verify for all $t\in\boldsymbol{\mathcal{S}}$ that \begin{align} \liminf_{\substack{n\rightarrow\infty\\n\in\Lambda}} {\langle A(t)(\boldsymbol{x}_n(t)),\boldsymbol{x}_n(t) -\boldsymbol{x}(t)\rangle_X}\ge 0. \tag*{$(\ast\ast )_{t}$} \end{align} To this end, let us fix an arbitrary $t\in\boldsymbol{\mathcal{S}}$ and define \begin{align*} \Lambda_t:=\{n\in\Lambda\mid \langle A(t)(\boldsymbol{x}_n(t)),\boldsymbol{x}_n(t) -\boldsymbol{x}(t)\rangle_X< 0\}. \end{align*} We assume without loss of generality that $\Lambda_t$ is not finite. Otherwise, $(\ast\ast )_{t}$ would already hold true for this specific $t\in\boldsymbol{\mathcal{S}}$ and nothing would be left to do. But if $\Lambda_t$ is not finite, then \begin{align} \limsup_{\substack{n\rightarrow\infty\\n\in\Lambda_t}} {\langle A(t)(\boldsymbol{x}_n(t)),\boldsymbol{x}_n(t)- \boldsymbol{x}(t)\rangle_X}\leq 0.\label{eq:3.13} \end{align} The definition of $\Lambda_t$ and $(\ast)_{\boldsymbol{x},n,t}$ imply for all $n\in\Lambda_t$ \begin{align} (c_0-\lambda)\|\boldsymbol{x}_n(t)\|_X^p\leq\langle A(t) (\boldsymbol{x}_n(t)),\boldsymbol{x}_n(t)- \boldsymbol{x}(t)\rangle_X+ \vert\mu_{\boldsymbol{x}}(t)\vert<\vert\mu_{\boldsymbol{x}}(t)\vert <\infty. \label{eq:3.14} \end{align} This and $\lambda <c_0$ yield that the sequence $(\boldsymbol{x}_n(t))_{n\in\Lambda_t}$ is bounded in $X$. In view of \eqref{eq:3.11}, Proposition \ref{3.2} (ii) implies that \begin{align}\label{eq:psmon} \boldsymbol{x}_n(t)\;\;\rightharpoonup\;\;\boldsymbol{x}(t)\quad\text{ in }X\quad( \Lambda_t\ni n\to \infty).
\end{align} Since $A(t):X\rightarrow X^*$ is pseudo-monotone, we get from \eqref{eq:psmon} and \eqref{eq:3.13} that \begin{align*} \liminf_{\substack{n\rightarrow\infty\\n\in\Lambda_t}} {\langle A(t)(\boldsymbol{x}_n(t)),\boldsymbol{x}_n(t) -\boldsymbol{x}(t)\rangle_X}\ge 0. \end{align*} Due to $\langle A(t)(\boldsymbol{x}_n(t)), \boldsymbol{x}_n(t) -\boldsymbol{x}(t)\rangle_X\ge 0$ for all $n\in\Lambda\setminus\Lambda_t$, $(\ast\ast)_t$ holds for all $t\in\boldsymbol{\mathcal{S}}$. \\[-3mm] \textbf{3.3. Switching to the image space level:} In this step we verify the existence of a subsequence $(\boldsymbol{x}_n)_{n\in\Lambda_0}\subseteq L^p(I,X)\cap_{\boldsymbol{j}} L^\infty(I,Y)$, with $\Lambda_0\subseteq\Lambda$, such that for almost every $t\in I$ \begin{align} \begin{split} \boldsymbol{x}_n(t)\;\;\rightharpoonup\;\; \boldsymbol{x}(t)\quad\text{ in }X\quad(\Lambda_0\ni n\to \infty),\\ \limsup_{\substack{n\rightarrow\infty\\n\in\Lambda_0}} {\langle A(t)(\boldsymbol{x}_n(t)),\boldsymbol{x}_n(t) -\boldsymbol{x}(t)\rangle_X}\leq 0. \end{split} \label{eq:3.15} \end{align} As a consequence, we are in a position to exploit the almost everywhere pseudo-monotonicity of the operator family. Thanks to $\langle A(t)(\boldsymbol{x}_n(t)),\boldsymbol{x}_n(t)-\boldsymbol{x}(t)\rangle_X\ge -\mu_{\boldsymbol{x}}(t)$ for all $t\in\boldsymbol{\mathcal{S}}$ and $n\in\Lambda$ (cf. $(\ast)_{\boldsymbol{x},n,t}$), Fatou's lemma (cf. \cite[Theorem 1.18]{Rou05}) is applicable.
Also using \eqref{eq:3.8}, it yields \begin{align} 0&\leq \int_I{\liminf_{\substack{n\rightarrow\infty\\n\in\Lambda}} {\langle A(s)(\boldsymbol{x}_n(s)),\boldsymbol{x}_n(s)- \boldsymbol{x}(s)\rangle_X}\,ds}\label{eq:3.16} \\ &\leq \liminf_{\substack{n\rightarrow\infty\\n\in\Lambda}}{\int_I{\langle A(s)(\boldsymbol{x}_n(s)),\boldsymbol{x}_n(s)-\boldsymbol{x}(s)\rangle_X\,ds}}\leq \limsup_{n\rightarrow\infty} {\langle\mathbfcal{A}\boldsymbol{x}_n,\boldsymbol{x}_n -\boldsymbol{x}\rangle_{L^p(I,X)\cap_{\boldsymbol{j}} L^q(I,Y)}}\leq 0.\notag \end{align} Let us define $h_n(t):=\langle A(t)(\boldsymbol{x}_n(t)),\boldsymbol{x}_n(t)-\boldsymbol{x}(t)\rangle_{X}$. Then, $(\ast\ast)_t$ and \eqref{eq:3.16} read: \begin{align} \liminf_{\substack{n\rightarrow\infty\\n\in\Lambda}}{h_n(t)}&\ge 0\quad\text{ for all }t\in\boldsymbol{\mathcal{S}}.\label{eq:3.17}\\ \lim_{\substack{n\rightarrow\infty\\n\in\Lambda}}{\int_I{h_n(s)\,ds}}&=0.\label{eq:3.18} \end{align} As $s\mapsto s^-:=\min\{0,s\}$ is continuous and non-decreasing we deduce from \eqref{eq:3.17} that \begin{align*} 0\ge\limsup_{\substack{n\rightarrow\infty\\n\in\Lambda}} {h_n(t)^-}\ge\liminf_{\substack{n\rightarrow\infty\\n\in\Lambda}} {h_n(t)^-}\ge \min\Big\{0, \liminf_{\substack{n\rightarrow\infty\\n\in\Lambda}}{h_n(t)}\Big\}=0, \end{align*} i.e.,~$h_n(t)^-\to 0$ $(\Lambda\ni n\to\infty)$ for all $t\in \boldsymbol{\mathcal{S}}$. Since $0\ge h_n(t)^-\ge -\mu_{\boldsymbol{x}}(t)$ for all $t\in\boldsymbol{\mathcal{S}}$ and $n\in\Lambda$, Vitali's theorem yields $h_n^-\to 0$ in $L^1(I)$ $(\Lambda\ni n\to\infty)$. From the latter, the pointwise identity $\vert h_n\vert=h_n-2h_n^-$ (which follows from $s=s^++s^-$ and $\vert s\vert =s^+-s^-$ with $s^+:=\max\{0,s\}$) and \eqref{eq:3.18}, we conclude that $h_n\to0$ in $L^1(I)$ $(\Lambda\ni n\to\infty)$.
This provides a further subsequence $(\boldsymbol{x}_n)_{n\in\Lambda_0}$ with $\Lambda_0\subseteq\Lambda$ and a subset $F\subseteq I$ such that $ I\setminus F$ is a null set and for all $t\in F$ \begin{align} \lim_{\substack{n\rightarrow\infty\\n\in\Lambda_0}}{\langle A(t)(\boldsymbol{x}_n(t)),\boldsymbol{x}_n(t)-\boldsymbol{x}(t)\rangle_X}= 0. \label{eq:3.19} \end{align} This and \eqref{eq:3.14} imply for all $t\in \boldsymbol{\mathcal{S}}\cap F$ that \begin{align*} \limsup_{\substack{n\rightarrow\infty\\n\in\Lambda_0}} {(c_0-\lambda)\|\boldsymbol{x}_n(t)\|_X^p}\leq \limsup_{\substack{n\rightarrow\infty\\n\in\Lambda_0}} {\langle A(t)(\boldsymbol{x}_n(t)),\boldsymbol{x}_n(t) -\boldsymbol{x}(t)\rangle_X+ \vert\mu_{\boldsymbol{x}}(t)\vert} =\vert\mu_{\boldsymbol{x}}(t)\vert<\infty, \end{align*} i.e., $(\boldsymbol{x}_n(t))_{n\in \Lambda_0}$ is bounded in $X$ for all $t\in \boldsymbol{\mathcal{S}}\cap F$. Thus, \eqref{eq:3.11} and Proposition \ref{3.2} (ii) yield for all $t\in \boldsymbol{\mathcal{S}}\cap F$ \begin{align} \boldsymbol{x}_n(t)\;\;\rightharpoonup\;\;\boldsymbol{x}(t)\quad\text{ in } X\quad(\Lambda_0\ni n \to \infty).\label{eq:3.20} \end{align} The relations \eqref{eq:3.19} and \eqref{eq:3.20} are just \eqref{eq:3.15}. \\[-3mm] \textbf{3.4. Switching to the Bochner-Lebesgue level:} From the pseudo-monotonicity of the operators $A(t):X\rightarrow X^*$ for all $t\in\boldsymbol{\mathcal{S}}\cap F$ we obtain for almost every $t\in I$ \begin{align*} \langle A(t)(\boldsymbol{x}(t)),\boldsymbol{x}(t)- \boldsymbol{y}(t)\rangle_X\leq \liminf_{\substack{n\rightarrow\infty\\n\in\Lambda_0}} {\langle A(t)(\boldsymbol{x}_n(t)),\boldsymbol{x}_n(t) -\boldsymbol{y}(t)\rangle_X}. \end{align*} Due to $(\ast)_{\boldsymbol y,n,t}$, we have $\langle A(t)(\boldsymbol{x}_n(t)),\boldsymbol{x}_n(t) -\boldsymbol{y}(t)\rangle_X\ge-\mu_{\boldsymbol{y}}(t)$ for almost every $t\in I$~and~all~${n\in\Lambda_0}$.
Thus, using the definition of the induced operator \eqref{eq:induced}, Fatou's lemma and \eqref{eq:3.10} we deduce \begin{align*} \langle \mathbfcal{A}\boldsymbol{x},\boldsymbol{x} -\boldsymbol{y}\rangle_{L^p(I,X)\cap_{\boldsymbol{j}} L^q(I,Y)} &\leq \int_I{\liminf_{\substack{n\rightarrow\infty\\n\in\Lambda_0}} {\langle A(s)(\boldsymbol{x}_n(s)),\boldsymbol{x}_n(s) -\boldsymbol{y}(s)\rangle_X}\,ds} \\ &\leq \liminf_{\substack{n\rightarrow\infty\\n\in\Lambda_0}} {\int_I{\langle A(s)(\boldsymbol{x}_n(s)),\boldsymbol{x}_n(s) -\boldsymbol{y}(s)\rangle_X\,ds}} \\ &=\lim_{\substack{n\rightarrow\infty\\n\in\Lambda}} {\langle \mathbfcal{A}\boldsymbol{x}_n,\boldsymbol{x}_n -\boldsymbol{y}\rangle_{L^p(I,X)\cap_{\boldsymbol{j}} L^q(I,Y)}} \\ &= \liminf_{n\rightarrow\infty}{\langle \mathbfcal{A} \boldsymbol{x}_n,\boldsymbol{x}_n-\boldsymbol{y}\rangle_{L^p(I,X)\cap_{\boldsymbol{j}} L^q(I,Y)}}. \end{align*} As $\boldsymbol{y}\in L^p(I,X)\cap_{\boldsymbol{j}} L^q(I,Y)$ was chosen arbitrarily, this completes the proof of Proposition \ref{3.9}.~\hfill$\qed$ \end{proof} \begin{prop}\label{4.7} Let $(X,Y,j)$ be as in Proposition \ref{ex:3.5} with $p>\frac{3d+2}{d+2}$ and $d\ge 2$. Moreover, let ${S(t),\hat{B}:X\to X^*}$, $t\in I$, be defined in \eqref{eq:sb} and \eqref{temam-mod}, respectively. Then, the operator family $A(t):=S(t)+\hat{B}:X\to X^*$, $t\in I$, satisfies (\hyperlink{A.1}{A.1})--(\hyperlink{A.4}{A.4}). \end{prop} \begin{proof} Let us first consider $S(t):X\to X^*$, $t\in I$, separately. From (\hyperlink{S.1}{S.1}) and (\hyperlink{S.2}{S.2}) in conjunction with the standard theory of Nemytski\u{\i} operators {(cf.~\cite[Theorem 1.43]{Rou05})} we deduce for almost every $t\in I$ the well-definedness and continuity of $S(t):X\to X^*$, including the conditions (\hyperlink{A.2}{A.2}) and (\hyperlink{A.3}{A.3}). (\hyperlink{S.4}{S.4}) directly implies for almost every $t\in I$ the monotonicity of $S(t):X\to X^*$.
Hence, $S(t):X\to X^*$ is for almost every $t\in I$ pseudo-monotone, i.e., condition (\hyperlink{A.1}{A.1}) is satisfied. Eventually, it can readily be seen by exploiting (\hyperlink{S.3}{S.3}) that $S(t):X\to X^*$, $t\in I$, satisfies (\hyperlink{A.4}{A.4}). Next, we treat the more delicate part $\hat{B}:X\to X^*$. Here, we limit ourselves to the case $d\ge 3$, as the case $d=2$ simplifies due to better Sobolev embeddings. In the same manner, one can verify by the standard theory of Nemytski\u{\i} operators and Rellich's compactness theorem that $\hat{B}:X\to X^*$ is bounded and pseudo-monotone, i.e., satisfies (\hyperlink{A.1}{A.1}) and (\hyperlink{A.2}{A.2}). In addition, by Hölder's inequality, for every $\mathbf{u},\mathbf{v}\in X$ there holds \begin{align} \vert\langle \hat{B}\mathbf{u},\mathbf{v}\rangle_X\vert\leq \|\mathbf{u}\|_{L^{2p'}(\Omega)^d}^2\|\mathbf{v}\|_X+\|\mathbf{u}\|_{L^{2p'}(\Omega)^d}\|\mathbf{v}\|_{L^{2p'}(\Omega)^d}\|\mathbf{u}\|_X.\label{eq:4.7.1} \end{align} If $p\ge d$, then $p-1\ge 2$, as $d\ge 3$, and $\|\mathbf{u}\|_{L^{2p'}(\Omega)^d}\leq c\|\mathbf{u}\|_X$ for all $\mathbf{u}\in X$ by means of Sobolev embedding. Thus, using $a^2\leq (1+a)^{p-1}\leq 2^{p-2}(1+a^{p-1})$ for all $a\ge 0$ and the weighted $\varepsilon$-Young inequality with constant $c_p(\varepsilon):=(p'\varepsilon)^{1-p}/p$, we obtain for every $\mathbf{u},\mathbf{v}\in X$ and $\varepsilon>0$ \begin{align*} \vert\langle \hat{B}\mathbf{u},\mathbf{v}\rangle_X\vert\leq c\|\mathbf{u}\|_X^2\|\mathbf{v}\|_X\leq c2^{p-2}(1+\|\mathbf{u}\|_X^{p-1})\|\mathbf{v}\|_X\leq c\varepsilon 2^{p(p-2)}\|\mathbf{u}\|_X^p+(c2^{p-2}+c_p(\varepsilon))\|\mathbf{v}\|_X^p, \end{align*} i.e., (\hyperlink{A.4}{A.4}) for $\varepsilon>0$ sufficiently small.
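For the reader's convenience, we recall how the weighted $\varepsilon$-Young inequality with the constant $c_p(\varepsilon)=(p'\varepsilon)^{1-p}/p$ used in this estimate, and used again below with the exponents $\frac{p}{r}$ and $\frac{p}{s}$, follows from the classical Young inequality by rescaling:

```latex
% Classical Young inequality: for all a, b \ge 0 and conjugate exponents
% 1 < \sigma, \sigma' < \infty with \frac{1}{\sigma} + \frac{1}{\sigma'} = 1,
% it holds ab \le \frac{a^\sigma}{\sigma} + \frac{b^{\sigma'}}{\sigma'}.
% Applied to the rescaled product ab = (\delta a)(b/\delta), \delta > 0, this yields
\begin{align*}
  ab \le \frac{\delta^{\sigma} a^{\sigma}}{\sigma}
       + \frac{b^{\sigma'}}{\sigma'\delta^{\sigma'}}.
\end{align*}
% Choosing \delta^{\sigma} := \sigma\varepsilon and using
% \delta^{-\sigma'} = (\sigma\varepsilon)^{-\sigma'/\sigma} = (\sigma\varepsilon)^{1-\sigma'},
% one arrives at
\begin{align*}
  ab \le \varepsilon\, a^{\sigma} + c_{\sigma'}(\varepsilon)\, b^{\sigma'},
  \qquad c_{q}(\varepsilon) := \frac{(q'\varepsilon)^{1-q}}{q},
\end{align*}
% which for \sigma = p' is precisely the form used above, with the
% \varepsilon-term carrying the p-th power of the first factor.
```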
If $p\in (\frac{3d+2}{d+2},d)$, then by interpolation with $\frac{1}{\rho}\!=\!\frac{1-\theta}{p^*}+\frac{\theta}{2}$, where $\rho\!=\!p\frac{d+2}{d}$, $\theta\!=\!\frac{2}{d+2}$ and $p^*\!=\!\frac{dp}{d-p}$, we obtain for all $\mathbf{u}\in X$ \begin{align} \|\mathbf{u}\|_{L^{\rho}(\Omega)^d}\leq \|\mathbf{u}\|_Y^\frac{2}{d+2}\|\mathbf{u}\|_{L^{p^*}(\Omega)^d}^{\frac{d}{d+2}}\leq c\|\mathbf{u}\|_Y^\frac{2}{d+2}\|\mathbf{u}\|_X^{\frac{d}{d+2}}.\label{eq:4.7.2} \end{align} Hence, since also $\rho\ge 2p'$, inserting \eqref{eq:4.7.2} into \eqref{eq:4.7.1}, we further conclude that for all $\mathbf{u},\mathbf{v}\in X$ \begin{align} \vert\langle \hat{B}\mathbf{u},\mathbf{v}\rangle_X\vert &\leq c\|\mathbf{u}\|_Y^\frac{4}{d+2}\|\mathbf{u}\|_X^{\frac{2d}{d+2}}\|\mathbf{v}\|_X+c\|\mathbf{u}\|_Y^\frac{2}{d+2}\|\mathbf{u}\|_X^{\frac{2d+2}{d+2}}\|\mathbf{v}\|_Y^\frac{2}{d+2}\|\mathbf{v}\|_X^{\frac{d}{d+2}}\label{eq:4.7.3}\\&\leq c\|\mathbf{u}\|_Y^\frac{4p'}{d+2}\|\mathbf{u}\|_X^{\frac{2d}{d+2}p'}+c\|\mathbf{u}\|_Y^\frac{2p}{p(d+2)-d}\|\mathbf{u}\|_X^{\frac{p(2d+2)}{p(d+2)-d}}\|\mathbf{v}\|_Y^{2p}+c\|\mathbf{v}\|_X^p,\label{eq:4.7.4} \end{align} where we applied Young's inequality with exponent $p$ in the first summand in \eqref{eq:4.7.3} and with exponent $\frac{d+2}{d}p$ in the second summand. Since $s:=\frac{p(2d+2)}{p(d+2)-d}<p$ and $r:=\frac{2d}{d+2}p'<p$ for $p>\frac{3d+2}{d+2}$, we apply the weighted $\varepsilon$-Young inequality in the first two summands in \eqref{eq:4.7.4} with exponent $\frac{p}{r}>1$ and $\frac{p}{s}>1$, respectively.
In doing so, we obtain for every $\mathbf{u},\mathbf{v}\in X$ and $\varepsilon>0$ that \begin{align*} \vert\langle \hat{B}\mathbf{u},\mathbf{v}\rangle_X\vert& \leq c\big[c_{(\frac{p}{r})'}(\varepsilon)\|\mathbf{u}\|_Y^{\frac{4p'}{d+2}\frac{p}{p-r}}+c_{(\frac{p}{s})'}(\varepsilon)\|\mathbf{u}\|_Y^{\frac{2p}{p(d+2)-d}\frac{p}{p-s}}\|\mathbf{v}\|_Y^\frac{2p^2}{p-s}+2\varepsilon\|\mathbf{u}\|_X^p+\|\mathbf{v}\|_X^p\big]\\& \leq c\big[c_{(\frac{p}{r})'}(\varepsilon)\|\mathbf{u}\|_Y^{\frac{4p'}{d+2}\frac{p}{p-r}}+c_{(\frac{p}{s})'}(\varepsilon)\|\mathbf{u}\|_Y^{\frac{4p}{p(d+2)-d}\frac{p}{p-s}}+\|\mathbf{v}\|_Y^\frac{4p^2}{p-s}+2\varepsilon\|\mathbf{u}\|_X^p+\|\mathbf{v}\|_X^p\big], \end{align*} i.e., (\hyperlink{A.4}{A.4}) for $\varepsilon>0$ sufficiently small and \begin{align*} q:=\max\bigg\{\frac{4p'}{d+2}\frac{p}{p-r},\frac{4p}{p(d+2)-d}\frac{p}{p-s},\frac{4p^2}{p-s}\bigg\}. \end{align*} Altogether, $\hat{B}:X\to X^*$ satisfies (\hyperlink{A.4}{A.4}). As a result, $A(t):X\to X^*$, $t\in I$, satisfies (\hyperlink{A.1}{A.1}), (\hyperlink{A.2}{A.2}) and (\hyperlink{A.4}{A.4}), and thanks to $\langle \hat{B}\mathbf{u},\mathbf{u}\rangle_X=0$ for all $\mathbf{u}\in X$ also (\hyperlink{A.3}{A.3}). \hfill$\qed$ \end{proof} \section{Rothe scheme} \label{sec:5} Let $X$ be a Banach space, and $I:=\left(0,T\right)$, ${T<\infty}$. For $K\in\mathbb{N}$ we set $\tau:=\frac{T}{K}$, \linebreak $I_k^\tau:=\left((k-1)\tau,k\tau\right]$, $k=1,...,K$, and $\mathcal{I}_\tau :=\{I_k^\tau\}_{k=1,...,K}$. Moreover, we denote by \begin{align*} \mathbfcal{S}^0(\mathcal{I}_\tau,X):=\{\boldsymbol{x}:I\to X\mid \boldsymbol{x}(s)=\boldsymbol{x}(t)\text{ in }X\text{ for all }t,s\in I_k^\tau,k=1,...,K\}\subset L^\infty(I,X) \end{align*} the \textbf{space of piece-wise constant functions with respect to $\mathcal{I}_\tau$}.
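To fix ideas, the partition $\mathcal{I}_\tau$ and the representation of an element of $\mathbfcal{S}^0(\mathcal{I}_\tau,X)$ through its $K$ constant values can be illustrated numerically. The following minimal sketch (for $X=\mathbb{R}$; the function names are ours and purely illustrative, not part of the analysis) evaluates such a piece-wise constant function on the half-open intervals $I_k^\tau=\left((k-1)\tau,k\tau\right]$:

```python
import math

# Illustrative sketch (hypothetical names): the equidistant partition
# I_k^tau = ((k-1)*tau, k*tau], k = 1, ..., K, of I with tau = T/K, and a
# piece-wise constant function in S^0(I_tau, R) represented by the finite
# sequence of its values (x^1, ..., x^K).

def partition(T, K):
    """Return tau and the intervals I_k^tau as (left, right] endpoint pairs."""
    tau = T / K
    return tau, [((k - 1) * tau, k * tau) for k in range(1, K + 1)]

def eval_piecewise_constant(values, T, t):
    """Evaluate the piece-wise constant function at t in (0, T]:
    on I_k^tau the function is constant, equal to values[k-1]."""
    K = len(values)
    tau = T / K
    k = math.ceil(t / tau)      # index of the interval I_k^tau containing t
    k = min(max(k, 1), K)       # guard against floating-point endpoint effects
    return values[k - 1]

tau, intervals = partition(1.0, 4)
vals = [2.0, -1.0, 0.5, 3.0]
# On I_2^tau = (0.25, 0.5] the function equals vals[1] = -1.0; note that the
# right endpoint belongs to the interval:
print(eval_piecewise_constant(vals, 1.0, 0.3))   # -1.0
print(eval_piecewise_constant(vals, 1.0, 0.5))   # -1.0
```

The piece-wise constant interpolant $\overline{\boldsymbol{x}}^\tau$ introduced below is of exactly this form.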
For a given finite sequence $(x^k)_{k=0,...,K}\subseteq X$ the \textbf{backward difference quotient} operator is defined via \begin{align*} d_\tau x^k:=\frac{1}{\tau}(x^k-x^{k-1})\quad\text{ in }X,\quad k=1,...,K. \end{align*} Furthermore, we denote for a given finite sequence $(x^k)_{k=0,...,K}\subseteq X $ by $\overline{\boldsymbol{x}}^\tau\in \mathbfcal{S}^0(\mathcal{I}_\tau,X)$ the \textbf{piece-wise constant interpolant}, and by $\hat{\boldsymbol{x}}^\tau\in W^{1,\infty}(I,X)$ the \textbf{piece-wise affine interpolant}, for every $t\in I_k^\tau$ and $k=1,...,K$ given via \begin{align}\label{eq:polant} \overline{\boldsymbol{x}}^\tau(t):=x^k,\qquad\hat{\boldsymbol{x}}^\tau(t):=\Big(\frac{t}{\tau}-(k-1)\Big)x^k+\Big(k-\frac{t}{\tau}\Big)x^{k-1}\quad\text{ in }X. \end{align} In addition, if $(X,Y,j)$ is an evolution triple and $(x^k)_{k=0,...,K}\subseteq X$ a finite sequence, then for all $k,l=0,...,K$ with $k\leq l$ the following \textbf{discrete integration by parts formula} holds: \begin{align} \int_{k\tau}^{l\tau}{\bigg\langle \frac{d_e\hat{\boldsymbol{x}}^\tau}{dt}(t),\overline{\boldsymbol{x}}^\tau(t)\bigg\rangle_X\,dt}\ge \frac{1}{2}\|jx^l\|_Y^2-\frac{1}{2}\|jx^k\|_Y^2,\label{eq:4.2} \end{align} which is an immediate consequence of the identity $\langle d_\tau ex^k,x^k\rangle_X=\frac{1}{2}d_\tau\|jx^k\|_Y^2+\frac{\tau}{2}\|d_\tau jx^k\|_Y^2$ for every $k=1,...,K$. For the discretization of the right-hand side in \eqref{eq:1.1} we use the following construction. Let $X$ be a Banach space, $I=\left(0,T\right)$, $T<\infty$, $K \in \mathbb N$, $\tau:= \frac{T}{K}>0$ and $1<p<\infty$.
The \textbf{Clem\'ent $0$-order quasi-interpolation operator} ${\mathscr{J}_\tau:L^p(I,X)\to \mathbfcal{S}^0(\mathcal{I}_\tau,X)}$ is defined for every $\boldsymbol{x}\in L^p(I,X)$ via \begin{align*} \mathscr{J}_\tau[\boldsymbol{x}]:=\sum_{k=1}^K{[\boldsymbol{x}]_k^\tau\chi_{I_k^\tau}}\quad\text{ in }\mathbfcal{S}^0(\mathcal{I}_\tau,X),\qquad[\boldsymbol{x}]_k^\tau:=\fint_{I_k^\tau}{\boldsymbol{x}(s)\,ds}\in X. \end{align*} \begin{prop}\label{4.4} For every $\boldsymbol{x}\in L^p(I,X)$ it holds: \begin{description}[{(iii)}] \item[(i)] $\mathscr{J}_\tau[\boldsymbol{x}]\to\boldsymbol{x}$ in $L^p(I,X)$ $(\tau\to 0)$, i.e., $\bigcup_{\tau>0}\mathbfcal{S}^0(\mathcal{I}_\tau,X)$ is dense in $L^p(I,X)$. \item[(ii)] $\sup_{\tau>0}{\|\mathscr{J}_\tau[\boldsymbol{x}]\|_{L^p(I,X)}}\leq \|\boldsymbol{x}\|_{L^p(I,X)}$. \end{description} \end{prop} \begin{proof} See \cite[Remark 8.15]{Rou05}.\hfill$\qed$ \end{proof} Since we treat non-autonomous evolution equations, we also need to discretize the time-dependent family of operators in \eqref{eq:1.1}. This will also be obtained by means of the Clem\'ent $0$-order quasi-interpolant. Let $(X,Y,j)$ be an evolution triple, $I:=\left(0,T\right)$, $T<\infty$, $K \in \mathbb N$, $\tau= \frac{T}{K}>0$ and $1<p\leq q<\infty$. Let $A(t):X\to X^*$, $t\in I$, be a family of operators satisfying the conditions (\hyperlink{A.1}{A.1})--(\hyperlink{A.4}{A.4}), and denote by $\mathbfcal{A}:L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y)\to (L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y))^*$ the induced operator (cf.~\eqref{eq:induced}). The \textbf{$k$-th temporal mean} $[A]^\tau_k:X\to X^*$, $k=1,...,K$, of $A(t):X\to X^*$, $t\in I$, is defined for every $x\in X$ via \begin{align*} [A]^\tau_k x:=\fint_{I_k^\tau}{A(s)x\,ds}\quad\text{ in }X^*.
\end{align*} The \textbf{Clem\'ent $0$-order quasi-interpolant} $\mathscr{J}_\tau[A](t):X\to X^*$, $t\in I$, of $A(t):X\to X^*$, $t\in I$, is defined for almost every $t\in I$ and $x\in X$ via \begin{align*} \mathscr{J}_\tau[A](t)x:=\sum_{k=1}^{K}{\chi_{I_k^\tau}(t)[A]^\tau_kx}\quad\text{ in }X^*. \end{align*} The \textbf{Clem\'ent $0$-order quasi-interpolant} $\mathscr{J}_\tau[\mathbfcal{A}]:L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y)\to (L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y))^*$, of ${\mathbfcal{A}:L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y)\to (L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y))^*}$ is for all $\boldsymbol{x},\boldsymbol{y}\in L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y)$ defined via \begin{align*} \langle \mathscr{J}_\tau[\mathbfcal{A}]\boldsymbol{x},\boldsymbol{y}\rangle_{L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y)}:=\int_I{\langle \mathscr{J}_\tau[A] (t)(\boldsymbol{x}(t)),\boldsymbol{y}(t)\rangle_X\,dt}. \end{align*} Note that $\mathscr{J}_\tau[\mathbfcal{A}]$ is the induced operator of the family of operators $\mathscr{J}_\tau[A](t):X\to X^*$, $t \in I$. \begin{prop}[Clem\'ent $0$-order quasi-interpolant for induced operators]\label{4.6}\newline With the above notation we have: \begin{description}[{(iii)}] \item[(i)] $[A]^\tau_k:X\to X^*$ is well-defined, bounded, pseudo-monotone, and satisfies: \\[-3mm] \begin{description}[{(a)}] \item[(i.a)] $\langle [A]^\tau_kx,y\rangle_X\leq \lambda\|x\|_X^p+\gamma[1+\|jx\|_Y^q+\|jy\|_Y^q+\|y\|_X^p]$ for all $x,y\in X$. \\[-3mm] \item[(i.b)] $\langle [A]^\tau_kx,x\rangle_X\ge c_0\|x\|_X^p-c_1\|jx\|_Y^2-c_2$ for all $x\in X$. \\[-3mm] \end{description} \item[(ii)] $\mathscr{J}_\tau[A](t):X\to X^*$, $t\in I$, satisfies the conditions (\hyperlink{A.1}{A.1})--(\hyperlink{A.4}{A.4}).
\\[-3mm] \item[(iii)] $\mathscr{J}_\tau[\mathbfcal{A}]\!:\!L^p(I,X)\cap_{\boldsymbol{j}}\!L^q(I,Y)\!\to\!(L^p(I,X)\cap_{\boldsymbol{j}}\!L^q(I,Y))^*$~is~well-defined,~bounded~and~satisfies: \\[-3mm] \begin{description}[{(a)}] \item[(iii.a)] For all $\boldsymbol{x}_\tau\in \mathbfcal{S}^0(\mathcal{I}_\tau,X)$, $\boldsymbol{y}\in L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y)$ holds \begin{align*} \langle \mathscr{J}_\tau[\mathbfcal{A}]\boldsymbol{x}_\tau,\boldsymbol{y}\rangle_{L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y)}=\langle \mathbfcal{A}\boldsymbol{x}_\tau, \mathscr{J}_\tau[\boldsymbol{y}]\rangle_{L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y)}. \end{align*} \item[(iii.b)] If the functions $\boldsymbol{x}_\tau\in\mathbfcal{S}^0(\mathcal{I}_\tau,X)$, $\tau>0$, are bounded in $L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y)$, then \begin{align*} \mathbfcal{A}\boldsymbol{x}_\tau-\mathscr{J}_\tau[\mathbfcal{A}]\boldsymbol{x}_\tau\;\;\rightharpoonup\;\;\boldsymbol{0}\quad\text{ in }(L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y))^*\quad(\tau\to 0). \end{align*} \item[(iii.c)] If $\boldsymbol{x}_\tau\in\mathbfcal{S}^0(\mathcal{I}_\tau,X)$, then $\|\mathscr{J}_\tau[\mathbfcal{A}]\boldsymbol{x}_\tau\|_{(L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y))^*}\leq \|\mathbfcal{A}\boldsymbol{x}_\tau\|_{(L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y))^*}$. \end{description} \end{description} \end{prop} \begin{proof} \textbf{ad (i)} Let $x\in X$. Due to (\hyperlink{A.2}{A.2}) the function $A(\cdot)x:I\to X^*$ is Bochner measurable. (\hyperlink{A.4}{A.4}) guarantees that $\|A(\cdot)x\|_{X^*}\in L^1(I)$, and thus the Bochner integrability~of $A(\cdot)x:I\to X^*$. As a result, the Bochner integral $[A]^\tau_kx=\fint_{I^\tau_k}{A(s)x\,ds}\in X^*$ exists, i.e., $[A]^\tau_k:X\to X^*$ is well-defined. The inequalities \textbf{(i.a)} and \textbf{(i.b)} are obvious. In particular, we obtain from inequality \textbf{(i.a)} the boundedness of $[A]^\tau_k:X\to X^*$. It remains to show the pseudo-monotonicity.
To this end, let $(x_n)_{n\in \mathbb{N}}\subseteq X$ be a sequence such that \begin{align} x_n\;\;\rightharpoonup\;\; x\quad\text{ in }X\quad(n\to\infty),\label{eq:4.7}\\ \limsup_{n\to\infty}{\langle [A]^\tau_kx_n,x_n-x\rangle_X}\leq 0.\label{eq:4.8} \end{align} If we set $\boldsymbol{x}_n:=x_n\chi_{I_k^\tau}\in L^\infty(I,X)$, $n\in \mathbb{N}$, and $\boldsymbol{x}:=x\chi_{I_k^\tau}\in L^\infty(I,X)$, then \eqref{eq:4.7}, Lebesgue's dominated convergence theorem and the properties of the induced embedding $\boldsymbol{j}$ imply \begin{alignat}{2} \boldsymbol{x}_n&\;\;\rightharpoonup\;\; \boldsymbol{x}&&\quad\text{ in }L^p(I,X)\;\quad(n\to \infty),\label{eq:4.9}\\ \boldsymbol{j}\boldsymbol{x}_n&\;\;\overset{\ast}{\rightharpoonup}\;\;\boldsymbol{j}\boldsymbol{x}&&\quad\text{ in }L^\infty(I,Y)\quad(n\to\infty),\label{eq:4.10}\\ (\boldsymbol{j}\boldsymbol{x}_n)(t)&\;\;\rightharpoonup\;\;(\boldsymbol{j}\boldsymbol{x})(t)&&\quad\text{ in }Y\quad\text{for a.e. }t\in I\quad(n\to\infty).\label{eq:4.11} \end{alignat} In addition, from \eqref{eq:4.8} we infer \begin{align} \limsup_{n\to\infty}{\langle \mathbfcal{A}\boldsymbol{x}_n,\boldsymbol{x}_n-\boldsymbol{x}\rangle_{L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y)}}=\tau\limsup_{n\to\infty}{\langle [A]^\tau_kx_n,x_n-x\rangle_{X}}\leq 0.\label{eq:4.12} \end{align} Since $\mathbfcal{A}:L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y)\to (L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y))^*$ is quasi non-conforming Bochner pseudo-monotone with respect to the constant approximation $V_n=X$, $n\in \mathbb{N}$, by Proposition~\ref{3.9}, we obtain from \eqref{eq:4.9}--\eqref{eq:4.12} that for all $\boldsymbol{y}\in L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y)$ \begin{align} \langle \mathbfcal{A}\boldsymbol{x},\boldsymbol{x}-\boldsymbol{y}\rangle_{L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y)}\leq\liminf_{n\to\infty}{\langle \mathbfcal{A}\boldsymbol{x}_n,\boldsymbol{x}_n-\boldsymbol{y}\rangle_{L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y)}}.\label{eq:4.13} \end{align} If we choose
in \eqref{eq:4.13} $\boldsymbol{y}:=y\chi_{I_k^\tau}\in L^\infty(I,X)$ with $y\in X$ and divide by $\tau>0$, we conclude \begin{align*} \langle [A]^\tau_kx,x-y\rangle_{X }\leq\liminf_{n\to\infty}{\langle [A]^\tau_kx_n,x_n-y\rangle_{X}}. \end{align*} In other words, $[A]^\tau_k:X\to X^*$ is pseudo-monotone. \\[-3mm] \textbf{ad (ii)} The assertion follows immediately from \textbf{(i)} and the definition of $\mathscr{J}_\tau[A](t)$, $t\in I$. \\[-3mm] \textbf{ad (iii)} Since $\mathscr{J}_\tau[\mathbfcal{A}]$ is the induced operator of the family of operators $\mathscr{J}_\tau[A](t)$, $t \in I$, the well-definedness and boundedness of $\mathscr{J}_\tau[\mathbfcal{A}]$ result from \textbf{(ii)} in conjunction with Proposition~\ref{3.9}. \\[-3mm] \textbf{ad (iii.a)} Let $\boldsymbol{x}_\tau\in \mathbfcal{S}^0(\mathcal{I}_\tau,X)$ and $\boldsymbol{y}\in L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y)$. Then, using for every $t,s\in I_k^\tau$, $k=1,...,K$, that $\langle A(s)(\boldsymbol{x}_\tau(t)),\boldsymbol{y}(t)\rangle_X=\langle A(s)(\boldsymbol{x}_\tau(s)),\boldsymbol{y}(t)\rangle_X$ and Fubini's theorem, we infer \begin{align*} \langle \mathscr{J}_\tau[\mathbfcal{A}]\boldsymbol{x}_\tau,\boldsymbol{y}\rangle_{L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y)}&=\int_I{\langle \mathscr{J}_\tau[A](t)(\boldsymbol{x}_\tau(t)),\boldsymbol{y}(t)\rangle_X\,dt}\\&=\sum_{k=1}^{K}{\int_{I_k^\tau}{\bigg\langle\fint_{I_k^\tau}{ A(s)(\boldsymbol{x}_\tau(t))\,ds},\boldsymbol{y}(t)\bigg\rangle_{\!\!X}dt}} \\&=\sum_{k=1}^{K}{\int_{I_k^\tau}{\bigg\langle A(s)(\boldsymbol{x}_\tau(s)),\fint_{I_k^\tau}{\boldsymbol{y}(t)\,dt}\bigg\rangle_{\!\!X}\,ds}}\\&= \int_I{\langle A(s)(\boldsymbol{x}_\tau(s)), \mathscr{J}_\tau[\boldsymbol{y}](s)\rangle_{X}\,ds} \\&=\langle \mathbfcal{A}\boldsymbol{x}_\tau,\mathscr{J}_\tau[\boldsymbol{y}]\rangle_{L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y)}.
\end{align*} \textbf{ad (iii.b)} Let the family $\boldsymbol{x}_\tau\in\mathbfcal{S}^0(\mathcal{I}_\tau,X)$, $\tau>0$, be bounded in $L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y)$. Then, by the boundedness of $\mathbfcal{A}:L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y)\to (L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y))^*$ (cf.~Proposition~\ref{3.9}), the family $(\mathbfcal{A}\boldsymbol{x}_\tau)_{\tau>0}\subseteq (L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y))^*$ is bounded as well. Therefore, also using \textbf{(iii.a)}, we conclude for every $\boldsymbol{y}\in L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y)$ that \begin{align*} \langle \mathbfcal{A}\boldsymbol{x}_\tau-\mathscr{J}_\tau[\mathbfcal{A}]\boldsymbol{x}_\tau,\boldsymbol{y}\rangle_{L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y)}= \langle \mathbfcal{A}\boldsymbol{x}_\tau,\boldsymbol{y}-\mathscr{J}_\tau[\boldsymbol{y}]\rangle_{L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y)}\;\;\to\;\; 0\quad(\tau\to 0), \end{align*} where we also used Proposition \ref{4.4} (i). \\[-3mm] \textbf{ad (iii.c)} Using \textbf{(iii.a)} and Proposition \ref{4.4} (ii), we deduce \begin{align*} \|\mathscr{J}_\tau[\mathbfcal{A}]\boldsymbol{x}_\tau\|_{(L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y))^*}&=\sup_{\|\boldsymbol{y}\|_{L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y)}\leq 1}{\langle \mathscr{J}_\tau[\mathbfcal{A}]\boldsymbol{x}_\tau,\boldsymbol{y}\rangle_{L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y)}}\\&=\sup_{\|\boldsymbol{y}\|_{L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y)}\leq 1}{\langle \mathbfcal{A}\boldsymbol{x}_\tau,\mathscr{J}_\tau[\boldsymbol{y}]\rangle_{L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y)}}\\&\leq \|\mathbfcal{A}\boldsymbol{x}_\tau\|_{(L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y))^*}.\tag*{$\qed$} \end{align*} \end{proof} \section{Fully discrete, quasi non-conforming approximation} \label{sec:6} In this section we formulate the exact framework of a quasi non-conforming Rothe-Galerkin approximation, prove its well-posedness, i.e., the existence of iterates, its stability, i.e., the boundedness of the 
corresponding double sequence of piece-wise constant interpolants, and its weak convergence, i.e., the weak convergence of a diagonal subsequence towards a weak solution of \eqref{eq:1.1}. \begin{asum}\label{asum} Let $I\!:=\!\left(0,T\right)$, $T\!<\!\infty$ and $1\!<\!p\!\leq \!q\!<\!\infty$. We make the following assumptions: \begin{description}[{(iii)}] \item[(i)] \textbf{Spaces:} $(V,H,j)$ and $(X,Y,j)$ are as in Definition \ref{3.1} and $(V_n)_{n\in\mathbb{N}}$ is a quasi non-conforming approximation of $V$ in $X$. \item[(ii)] \textbf{Initial data:} ${x}_0\!\in\!H$ and there is a sequence $x_n^0\!\in\! V_n$, $n\in\mathbb{N}$, such that ${x_n^0\to {x}_0}$~in~$Y$~${(n\!\to\! \infty)}$ and $\sup_{n\in \mathbb{N}}{\|jx_n^0\|_Y}\leq \|{x}_0\|_H$.\footnote{For a quasi non-conforming approximation Proposition \ref{3.2} guarantees the existence of such a sequence.} \item[(iii)] \textbf{Right-hand side:} $\boldsymbol{f}\in L^{p'}(I,X^*)$. \item[(iv)] \textbf{Operators:} $A(t):X\to X^*$, $t\in I$, is a family of operators satisfying (\hyperlink{A.1}{A.1})--(\hyperlink{A.4}{A.4}) and $\mathbfcal{A}:L^p(I,X)\cap_{\boldsymbol{j}}L^\infty(I,Y)\to (L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y))^*$ the corresponding induced operator. \end{description} \end{asum} Furthermore, we set $H_n:=j(V_n)\subseteq Y$ equipped with $(\cdot,\cdot)_Y$, denote by $j_n:V_n\to H_n$ the restriction of $j$ to $V_n$ and by $R_n:H_n\to H_n^*$ the corresponding Riesz isomorphism with respect to $(\cdot,\cdot)_Y$. As $j_n$ is an isomorphism, the triple $(V_n,H_n,j_n)$ is an evolution triple with canonical embedding $e_n:=j_n^*R_nj_n:V_n\to V_n^*$, which satisfies \begin{align} \langle e_nv_n,w_n\rangle_{V_n}=(j_nv_n,j_nw_n)_Y\quad\text{ for all }v_n,w_n\in V_n. \label{eq:iden} \end{align} Putting everything together leads us to the following algorithm: \begin{alg}[Quasi non-conforming Rothe-Galerkin scheme] Let Assumption \ref{asum} be satisfied.
For given $K,n\in \mathbb{N}$ the sequence of iterates $(x_n^{k})_{k=0,...,K}\subseteq V_n$ is given via the implicit Rothe-Galerkin scheme for $\tau=\frac{T}{K}$ \begin{align} (d_\tau jx_n^k,jv_n)_Y+\langle [A]^\tau_k x_n^k,v_n\rangle_X= \langle [\boldsymbol{f}]^\tau_k,v_n\rangle_X\quad\text{ for all }v_n\in V_n.\label{eq:4.15} \end{align} \end{alg} \begin{rmk} Note that the Rothe-Galerkin scheme \eqref{eq:4.15} also covers pure Rothe schemes, i.e., without spatial approximation, and fully discrete conforming approximations: \begin{description}[{(ii)}] \item[(i)] If $X=V$, $Y=H$, and $V_n=X$, $n\in \mathbb{N}$, then \eqref{eq:4.15} forms a pure Rothe scheme. \item[(ii)] If $X=V$, $Y=H$, and the closed subspaces $(V_n)_{n\in \mathbb{N}}$ satisfy (\hyperlink{C.1}{C.1})--(\hyperlink{C.2}{C.2}), then \eqref{eq:4.15} forms a conforming Rothe-Galerkin scheme. \end{description} \end{rmk} \begin{prop}[Well-posedness of \eqref{eq:4.15}]\label{5.1} Let Assumption \ref{asum} be satisfied and set \linebreak ${\tau_0:=\frac{1}{4c_1}}$. Then, for all $K,n\in \mathbb{N}$ with $\tau=\frac{T}{K}<\tau_0$ there exist iterates $(x_n^k)_{k=1,...,K}\subseteq V_n$, solving~\eqref{eq:4.15}. \end{prop} \begin{proof} Using \eqref{eq:iden} and the identity mapping $\textup{id}_{V_n}\colon V_n \to X$, we see that \eqref{eq:4.15} is equivalent to \begin{align} (\textup{id}_{V_n})^*\big ([\boldsymbol{f}]^\tau_k\big )+\frac{1}{\tau}e_nx_n^{k-1} \in R\Big(\frac{1}{\tau}e_n+(\textup{id}_{V_n})^*\circ [A]^\tau_k \circ \textup{id}_{V_n}\Big)\quad\text{for all }k=1,...,K.\label{eq:5.2} \end{align} We fix an arbitrary $k=1,...,K$. Clearly, $\frac{1}{\tau}e_n:V_n\to V_n^*$ is linear and continuous. Using \eqref{eq:iden} and that $j_n$ is an isomorphism, we infer that $\langle \frac{1}{\tau}e_{n}x,x\rangle_{V_n}=\frac{1}{\tau}\|j_nx\|_Y^2> 0$ for all $x\in V_n\setminus\{0\}$, i.e., $\frac{1}{\tau}e_n:V_n\to V_n^*$ is positive definite and, thus, monotone. In consequence, $\frac{1}{\tau}e_n:V_n\to V_n^*$ is pseudo-monotone.
Since the conditions (\hyperlink{A.1}{A.1})--(\hyperlink{A.4}{A.4}) are inherited from $A\colon X\to X^*$ to $(\textup{id}_{V_n})^*\circ A \circ \textup{id}_{V_n}\colon V_n \to V_n^*$ and since $(\textup{id}_{V_n})^*\circ [A]^\tau_k\circ \textup{id}_{V_n}=[(\textup{id}_{V_n})^*\circ A \circ \textup{id}_{V_n}]^\tau_k$, Proposition \ref{4.6} (i) guarantees that the operator $(\textup{id}_{V_n})^*\circ [A]^\tau_k \circ \textup{id}_{V_n}:V_n\to V_n^*$ is bounded and pseudo-monotone. Altogether, we conclude that the sum $\frac{1}{\tau}e_n+(\textup{id}_{V_n})^*\circ [A]^\tau_k\circ \textup{id}_{V_n}:V_n\to V_n^*$ is bounded and pseudo-monotone. In addition, as $\tau<\frac{1}{4c_1}$, combining \eqref{eq:iden} and Proposition~\ref{4.6}~(i.b) provides for all $x\in V_n$ \begin{align*} \Big\langle \Big(\frac{1}{\tau}e_n+(\textup{id}_{V_n})^* \circ [A]^\tau_k\circ \textup{id}_{V_n}\Big)x,x\Big\rangle_{V_n}\ge 3c_1\|j_nx\|_Y^2+c_0\|x\|_X^p-c_2, \end{align*} i.e., $\frac{1}{\tau}e_n+(\textup{id}_{V_n})^*\circ [A]^\tau_k\circ \textup{id}_{V_n}:V_n\to V_n^*$ is coercive. Hence, Proposition~\ref{2.9} proves \eqref{eq:5.2}.\hfill$\qed$ \end{proof} \begin{prop}[Stability of \eqref{eq:4.15}]\label{apriori} Let Assumption \ref{asum} be satisfied and set $\tau_0:=\frac{1}{4c_1}$.
Then, there exists a constant $M>0$ (not depending on $K,n\in \mathbb{N}$), such that the piece-wise constant interpolants $\overline{\boldsymbol{x}}_n^\tau\in \mathbfcal{S}^0(\mathcal{I}_\tau,X)$, $K,n\in \mathbb{N}$ with $\tau=\frac{T}{K}\in \left(0,\tau_0\right)$, and piece-wise affine interpolants $\hat{\boldsymbol{x}}_n^\tau\in W^{1,\infty}(I,X)$, $n\in \mathbb{N}$, $\tau\in (0,\tau_0)$ (cf.~\eqref{eq:polant}) generated by iterates~${(x_n^k)_{k=0,...,K}\subseteq V_n}$, $K,n\in \mathbb{N}$ with $\tau=\frac{T}{K}\in \left(0,\tau_0\right)$, solving \eqref{eq:4.15}, satisfy the following estimates: \begin{align} \|\overline{\boldsymbol{x}}_n^\tau\|_{L^p(I,X)\cap_{\boldsymbol{j}}L^\infty(I,Y)}&\leq M,\label{eq:5.4}\\ \|\boldsymbol{j}\hat{\boldsymbol{x}}_n^\tau\|_{L^\infty(I,Y)}&\leq M,\label{eq:5.5}\\ \|\mathbfcal{A}\overline{\boldsymbol{x}}_n^\tau\|_{(L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y))^*}&\leq M,\label{eq:5.6}\\ \|e_n(\hat{\boldsymbol{x}}_n^\tau-\overline{\boldsymbol{x}}_n^\tau)\|_{L^{q'}(I,V_n^*)}&\leq\tau \big (\|\boldsymbol{f}\|_{L^{p'}(I,X^*)}+M\big ).\label{eq:5.7} \end{align} \end{prop} \begin{proof} We use $v_n=x_n^k\in V_n$, $k=1,...,l$, for arbitrary $l=1,...,K$ in \eqref{eq:4.15}, multiply~by~${\tau\in (0,\tau_0)}$, sum with respect to $k=1,...,l$, use \eqref{eq:4.2} and $\sup_{n\in \mathbb{N}}{\|jx^0_n\|_Y\leq \|{x}_0\|_H}$, to obtain~for~every~${l=1,...,K}$ \begin{align}\begin{split} \frac{1}{2}\|j x_n^l\|_Y^2+\sum_{k=1}^l{\tau\langle[A]^\tau_k x_n^k,x_n^k\rangle_X}\leq \frac{1}{2}\|{x}_0\|_H^2+ \sum_{k=1}^l{\tau\langle[\boldsymbol{f}]^\tau_k ,x_n^k\rangle_X}.
\end{split} \label{eq:5.8} \end{align} Applying the weighted $\varepsilon$-Young inequality with constant $c(\varepsilon):=(p\varepsilon)^{1-p'}/p'$ for all $\varepsilon>0$, using $\|\mathscr{J}_\tau[\boldsymbol{f}]\|_{L^{p'}(I,X^*)}\leq \|\boldsymbol{f}\|_{L^{p'}(I,X^*)}$ (cf.~Proposition~\ref{4.4} (ii)), we deduce for every $l=1,...,K$ \begin{align*} \begin{split} \sum_{k=1}^l{\tau\langle[\boldsymbol{f}]^\tau_k ,x_n^k\rangle_X}=\langle\mathscr{J}_\tau[\boldsymbol{f}],\overline{\boldsymbol{x}}_n^\tau\chi_{\left[0,l\tau\right]}\rangle_{L^p(I,X)}\leq c(\varepsilon)\|\boldsymbol{f}\|_{L^{p'}(I,X^*)}^{p'}+\varepsilon\int _0^{l\tau}{\|\overline{\boldsymbol{x}}^\tau_n(s)\|_X^p\,ds}. \end{split} \end{align*} In addition, using Proposition \ref{4.6} (i.b), we obtain for every $l=1,...,K$ \begin{align} \sum_{k=1}^l{\tau\langle[A]^\tau_k x_n^k,x_n^k\rangle_X}\ge c_0\int_0^{l\tau}{\|\overline{\boldsymbol{x}}^\tau_n(s)\|_X^p\,ds}-\tau c_1\|jx_n^l\|_Y^2-\sum_{k=1}^{l-1}{\tau c_1\|jx_n^k\|_Y^2}-c_2T.\label{eq:5.10} \end{align} We set $\varepsilon:= \frac{c_0}{2}$, $\alpha:=\frac{1}{2}\|{x}_0\|_H^2+ c(\varepsilon)\|\boldsymbol{f}\|_{L^{p'}(I,X^*)}^{p'}+c_2T$, $\beta:=4\tau c_1<1$ and ${y^k_n:=\frac{1}{4}\|jx^k_n\|_Y^2}$ for $k=1,...,K$. Thus, we infer for every $l=1,...,K$ from \eqref{eq:5.8}--\eqref{eq:5.10} that \begin{align} y^l_n+\frac{c_0}{2}\int_0^{l\tau}{\|\overline{\boldsymbol{x}}^\tau_n(s)\|_X^p\,ds}\leq \alpha+\beta\sum_{k=1}^{l-1}{y^k_n}.\label{eq:5.11} \end{align} The discrete Gronwall inequality applied to \eqref{eq:5.11} yields \begin{align*} \frac 14\|\boldsymbol{j}\overline{\boldsymbol{x}}_n^\tau\|^2_{L^\infty(I,Y)} + \frac {c_0}2 \|\overline{\boldsymbol{x}}_n^\tau\|^p_{L^p(I,X)} \le \alpha\exp(K\beta)=\alpha\exp(4Tc_1)=:C_0, \end{align*} which proves \eqref{eq:5.4}.
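For the reader's convenience, we note the elementary form of the discrete Gronwall inequality used in the last step (a standard statement, provable by induction): if $(y^l)_{l=1,...,K}\subseteq\left[0,\infty\right)$, $\alpha\ge 0$ and $\beta\ge 0$ satisfy $y^l\leq \alpha+\beta\sum_{k=1}^{l-1}{y^k}$ for all $l=1,...,K$, then for all $l=1,...,K$ \begin{align*} y^l\leq \alpha(1+\beta)^{l-1}\leq \alpha\exp(\beta(l-1))\leq \alpha\exp(K\beta). \end{align*} With $\beta=4\tau c_1$ and $\tau K=T$, this gives precisely the constant $\alpha\exp(4Tc_1)$ above.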
From the boundedness of $\mathbfcal{A}:L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y)\to (L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y))^*$ (cf.~Proposition \ref{3.9}) and \eqref{eq:5.4} we infer $\|\mathbfcal{A}\overline{\boldsymbol{x}}_n^\tau\|_{(L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y))^*}\leq C_1$ for some $C_1>0$, i.e., \eqref{eq:5.6}. In addition, it holds that ${\|\boldsymbol{j}\hat{\boldsymbol{x}}_n^\tau\|^2_{L^\infty(I,Y)}\leq \|\boldsymbol{j}\overline{\boldsymbol{x}}_n^\tau\|^2_{L^\infty(I,Y)}\leq 4C_0}$ for every $n\in \mathbb{N}$ and $\tau\in(0,\tau_0)$, i.e., \eqref{eq:5.5}. Moreover, since $e_n\big(\hat{\boldsymbol{x}}_n^\tau(t)-\overline{\boldsymbol{x}}_n^\tau(t)\big)=(t-k\tau) d_\tau e_n \hat{\boldsymbol{x}}_n^\tau(t)=(t-k\tau)\frac{d_{e_n}\hat{\boldsymbol{x}}_n^\tau}{dt}(t)$ in $V_n^*$ and $\vert t-k\tau\vert \leq \tau$ for every $t\in I_k^\tau$, $k=1,...,K$, $K,n\in\mathbb{N}$, there holds for every $n\in\mathbb{N}$ and $\tau\in(0,\tau_0)$ \begin{align*} \begin{split} \big\|e_n\big(\hat{\boldsymbol{x}}_n^\tau-\overline{\boldsymbol{x}}_n^\tau\big)\big\|_{L^{q'}(I,V_n^*)}&\leq\tau\left\|\frac{d_{e_n}\hat{\boldsymbol{x}}_n^\tau}{dt}\right\|_{L^{q'}(I,V_n^*)}\\&= \tau\big\|(\textup{id}_{L^q(I,V_n)})^*\big(\mathscr{J}_\tau[\boldsymbol{f}]-\!\mathscr{J}_\tau[\mathbfcal{A}]\overline{\boldsymbol{x}}_n^\tau\big)\big\|_{L^{q'}(I,V_n^*)}\leq \tau\big(\|\boldsymbol{f}\|_{L^{p'}(I,X^*)}\!+C_1\big), \end{split} \end{align*} i.e., the estimate \eqref{eq:5.7}, where we used Proposition \ref{4.4} (ii) and Proposition \ref{4.6} (iii.c). \hfill$\qed$ \end{proof} We can now prove the main abstract convergence result, that is, Theorem~\ref{5.17}. \begin{proof}[Proof of Theorem~\ref{5.17}] We split the proof into four steps:\\[-3mm] \textbf{1.
Convergences:} From the estimates \eqref{eq:5.4}--\eqref{eq:5.7}, the reflexivity of ${L^p(I,X)\cap_{\boldsymbol{j}} L^q(I,Y)}$, also using Proposition \ref{4.6} (iii.b), we obtain not relabelled subsequences $(\overline{\boldsymbol{x}}_n)_{n\in \mathbb{N}},(\hat{\boldsymbol{x}}_n)_{n\in \mathbb{N}}\subseteq L^p(I,X)\cap_{\boldsymbol{j}} L^\infty(I,Y)$, where $\hat{\boldsymbol{x}}_n:=\hat{\boldsymbol{x}}_{m_n}^{\tau_n}$ for all $n\in \mathbb{N}$, as well as $\overline{\boldsymbol{x}}\in L^p(I,X)\cap_{\boldsymbol{j}} L^\infty(I,Y)$, $\boldsymbol{j}\hat{\boldsymbol{x}}\in L^\infty(I,Y)$ and ${\overline{\boldsymbol{x}}^*\in (L^p(I,X)\cap_{\boldsymbol{j}} L^q(I,Y))^*}$ such that \begin{align} \begin{alignedat}{2} \overline{\boldsymbol{x}}_n&\;\;\rightharpoonup\;\;\overline{\boldsymbol{x}}&\quad &\text{ in }L^p(I,X)\;\quad(n\to\infty),\\ \boldsymbol{j}\overline{\boldsymbol{x}}_n&\;\;\overset{\ast}{\rightharpoondown}\;\;\boldsymbol{j}\overline{\boldsymbol{x}}&&\text{ in }L^\infty(I,Y)\quad (n\rightarrow\infty),\\ \boldsymbol{j}\hat{\boldsymbol{x}}_n&\;\; \overset{\ast}{\rightharpoondown}\;\;\boldsymbol{j}\hat{\boldsymbol{x}}&&\text{ in }L^\infty(I,Y)\quad (n\rightarrow\infty),\\ \mathscr{J}_{\tau_n}[\mathbfcal{A}]\overline{\boldsymbol{x}}_n,\mathbfcal{A}\overline{\boldsymbol{x}}_n&\;\;\rightharpoonup\;\;\overline{\boldsymbol{x}}^*&&\text{ in }(L^p(I,X)\cap_{\boldsymbol{j}} L^q(I,Y))^*\quad(n\to \infty). \end{alignedat}\label{eq:5.18} \end{align} From (\hyperlink{QNC.2}{QNC.2}) we immediately obtain that $\overline{\boldsymbol{x}}\in L^p(I,V)\cap_{\boldsymbol{j}} L^\infty(I,H)$. 
In particular, there exists $\boldsymbol{g}\in L^{p'}(I,V^*)+\boldsymbol{j}^*(L^{q'}(I,H^*))$ (cf.~\eqref{eq:dual}), such that for every ${\boldsymbol{v}\in L^p(I,V)\cap_{\boldsymbol{j}} L^q(I,H)}$ \begin{align} \langle \overline{\boldsymbol{x}}^*,\boldsymbol{v}\rangle_{L^p(I,X)\cap_{\boldsymbol{j}} L^q(I,Y)}=\int_I{\langle \boldsymbol{g}(t),\boldsymbol{v}(t)\rangle_V\,dt}.\label{eq:rep} \end{align} Due to $\eqref{eq:5.7}$ there exists a subset $E\subset I$, with $I\setminus E$ a null set, such that for every $t\in E$ \begin{align} \|e_{m_n}(\hat{\boldsymbol{x}}_n(t)-\overline{\boldsymbol{x}}_n(t))\|_{V_{m_n}^*}\;\;\to \;\;0\quad(n\to\infty).\label{eq:5.19} \end{align} Owing to (\hyperlink{QNC.1}{QNC.1}) we can choose for every element $v$ of the dense subset $D\subseteq V$ a sequence $v_n\in V_{m_n}$,~${n\in\mathbb{N}}$, such that $v_n\to v$ in $X$ $(n\to\infty)$. Then, using the definition of $P_H$, \eqref{eq:iden}, \eqref{eq:5.4}, \eqref{eq:5.5} and \eqref{eq:5.19}, we infer for~every~${t\in E}$~that \begin{align} \begin{split} \vert(P_H[(\boldsymbol{j}&\hat{\boldsymbol{x}}_n)(t)-(\boldsymbol{j}\overline{\boldsymbol{x}}_n)(t)],jv)_H\vert= \vert((\boldsymbol{j}\hat{\boldsymbol{x}}_n)(t)-(\boldsymbol{j}\overline{\boldsymbol{x}}_n)(t),jv)_Y\vert \\&\leq \vert\langle e_{m_n}[(\boldsymbol{j}\hat{\boldsymbol{x}}_n)(t)-(\boldsymbol{j}\overline{\boldsymbol{x}}_n)(t)],v_n\rangle_{V_{m_n}}\vert+ \vert\left((\boldsymbol{j}\hat{\boldsymbol{x}}_n)(t)-(\boldsymbol{j}\overline{\boldsymbol{x}}_n)(t),jv-jv_n\right)_Y\vert \\&\leq \|e_{m_n}[\hat{\boldsymbol{x}}_n(t)-\overline{\boldsymbol{x}}_n(t)]\|_{V_{m_n}^*}\|v_n\|_X+ \|(\boldsymbol{j}\hat{\boldsymbol{x}}_n)(t)-(\boldsymbol{j}\overline{\boldsymbol{x}}_n)(t)\|_Y\|jv-jv_n\|_Y \\&\leq \|e_{m_n}[\hat{\boldsymbol{x}}_n(t)-\overline{\boldsymbol{x}}_n(t)]\|_{V_{m_n}^*}\|v_n\|_X+ 2M\|jv-jv_n\|_Y\;\;\to\;\; 0\quad(n\to\infty). 
\end{split}\label{eq:5.20} \end{align} Since $D$ is dense in $V$ and $j(V)$ is dense in $H$, we conclude from \eqref{eq:5.20} for every $t\in E$ that \begin{align} P_H[(\boldsymbol{j}\hat{\boldsymbol{x}}_n)(t)-(\boldsymbol{j}\overline{\boldsymbol{x}}_n)(t)]\;\;\rightharpoonup \;\;0\quad\text{ in }H\quad(n\to\infty).\label{eq:5.21} \end{align} Since the sequences $(P_H\boldsymbol{j}\overline{\boldsymbol{x}}_n)_{n\in\mathbb{N}},(P_H\boldsymbol{j}\hat{\boldsymbol{x}}_n)_{n\in\mathbb{N}}\subseteq L^\infty(I,H)$ are bounded (cf. \eqref{eq:5.4} and \eqref{eq:5.5}), \cite[Prop.~2.15]{KR19} yields, due to \eqref{eq:5.21}, that $P_H(\boldsymbol{j}\hat{\boldsymbol{x}}_n-\boldsymbol{j}\overline{\boldsymbol{x}}_n)\rightharpoonup \boldsymbol 0$ in $L^q(I,H)$ $(n \to \infty)$. From \eqref{eq:5.18}$_{2,3}$ we easily deduce that $P_H(\boldsymbol{j}\hat{\boldsymbol{x}}_n-\boldsymbol{j}\overline{\boldsymbol{x}}_n)\rightharpoonup P_H(\boldsymbol{j}\hat{\boldsymbol{x}}-\boldsymbol{j}\overline{\boldsymbol{x}})$ in $L^q(I,H)$ $(n \to \infty)$. Thus, $P_H(\boldsymbol{j}\hat{\boldsymbol{x}})=P_H(\boldsymbol{j}\overline{\boldsymbol{x}})=\boldsymbol{j}\overline{\boldsymbol{x}}$ in $L^\infty (I,H)$, where we used that $\overline{\boldsymbol{x}}\in L^p(I,V)\cap_{\boldsymbol{j}} L^\infty(I,H)$.\\[-3mm] \textbf{2. Regularity and trace of the weak limit:} \hypertarget{3.2}{} Let $v\in D$ and $v_n\in V_{m_n}$, $n\in\mathbb{N}$, be a sequence such that $v_n\to v$ in $X$ $(n\to\infty)$.
Testing \eqref{eq:4.15} for $n\in \mathbb{N}$ with $v_n\in V_{m_n}$, multiplying by $\varphi\in C^\infty(\overline{I})$ with $\varphi(T)=0$, integrating over $I$, and integrating by parts yields for every $n\in \mathbb{N}$ \begin{align} \begin{split} \langle \mathscr{J}_{\tau_n}[\mathbfcal{A}]\overline{\boldsymbol{x}}_n,v_n\varphi\rangle_{L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y)}&-\int_{I}{\langle\mathscr{J}_{\tau_n}[\boldsymbol{f}](s),v_n\rangle_{X}\varphi(s)\,ds} \\&=\int_I{((\boldsymbol{j}\hat{\boldsymbol{x}}_n)(s),jv_n)_Y\varphi^\prime(s)\,ds} +(x_{m_n}^0,jv_n)_Y\varphi(0). \end{split}\label{eq:5.22} \end{align} Passing to the limit $n\to \infty$ in \eqref{eq:5.22}, using \eqref{eq:5.18}, \eqref{eq:rep}, Proposition \ref{4.4} (i), $P_H(\boldsymbol{j}\hat{\boldsymbol{x}})=\boldsymbol{j}\overline{\boldsymbol{x}}$ in $L^{\infty}(I,H)$, $x_{m_n}^0\to {x}_0$ in $Y$ $(n\to\infty)$ and the density of $D$ in $V$, we obtain that for all $v\in V$ and $\varphi\in C^\infty(\overline{I})$ with $\varphi(T)=0$ there holds \begin{align} \begin{split} \int_{I}{\langle\boldsymbol{g}(s)-\boldsymbol{f}(s),v\rangle_V\varphi(s)\,ds} &=\int_I{((\boldsymbol{j}\hat{\boldsymbol{x}})(s),jv)_Y\varphi^\prime(s)\,ds} +({x}_0,jv)_Y\varphi(0) \\&=\int_I{((\boldsymbol{j}\overline{\boldsymbol{x}})(s),jv)_H\varphi^\prime(s)\,ds} +({x}_0,jv)_H\varphi(0).
\end{split}\label{eq:5.23} \end{align} In the case $\varphi\in C_0^\infty(I)$ in \eqref{eq:5.23}, recalling Definition \ref{2.15}, we conclude that $\overline{\boldsymbol{x}}\in \mathbfcal{W}_e^{1,p,q}(I,V,H)$ with continuous representation $\boldsymbol{j}_c\overline{\boldsymbol{x}}\in C^0(\overline{I},H)$ and \begin{align} \frac{d_e\overline{\boldsymbol{x}}}{dt}=\boldsymbol{f}-\boldsymbol{g}\quad\text{ in } L^{p'}(I,V^*)+ \boldsymbol{j}^*(L^{q'}(I,H^*)).\label{eq:5.24} \end{align} Thus, we are able to apply the generalized integration by parts formula in $\mathbfcal{W}_e^{1,p,q}(I,V,H)$ (cf.~Proposition~\ref{2.16}) in \eqref{eq:5.23} in the case $\varphi\in C^\infty(\overline{I})$ with $\varphi(T)=0$ and $\varphi(0)=1$, which yields for all $v\in V$ \begin{align*} ((\boldsymbol{j}_c\overline{\boldsymbol{x}})(0)-{x}_0,jv)_H=0. \end{align*} As $j(V)$ is dense in $H$ and $(\boldsymbol{j}_c\overline{\boldsymbol{x}})(0)\in H$, we deduce that $(\boldsymbol{j}_c\overline{\boldsymbol{x}})(0)={x}_0$ in $H$. \\[-3mm] \textbf{3. Pointwise weak convergence:} Next, we show that $ P_H(\boldsymbol{j}\hat{\boldsymbol{x}}_n)(t)\rightharpoonup (\boldsymbol{j}_c \overline{\boldsymbol{x}})(t)$ in $H$ ${(n\to\infty)}$ for all $t \in \overline{I}$, which due to \eqref{eq:5.21} in turn yields that $ P_H(\boldsymbol{j}\overline{\boldsymbol{x}}_n)(t) \rightharpoonup (\boldsymbol j \overline{\boldsymbol{x}})(t)$ in $H$ $(n\to\infty)$ for almost every $t \in I$. To this end, let us fix an arbitrary $t\in I$.
From the a priori estimate $\|(\boldsymbol{j}\hat{\boldsymbol{x}}_n)(t)\|_{Y}\leq M$ for all $t\in \overline{I}$ and $n\in\mathbb{N}$ (cf.~\eqref{eq:5.5}) we obtain a subsequence $((\boldsymbol{j}\hat{\boldsymbol{x}}_n)(t))_{n\in\Lambda_t}\subseteq Y$ with $\Lambda_t\subseteq\mathbb{N}$, initially depending on this fixed $t$, and an element $\hat{\boldsymbol{x}}_{\Lambda_t}\in Y$ such that \begin{align} (\boldsymbol{j}\hat{\boldsymbol{x}}_n)(t) \;\;\rightharpoonup\;\; \hat{\boldsymbol{x}}_{\Lambda_t}\quad\text{ in }Y\quad( \Lambda_t\ni n\to \infty).\label{eq:5.27} \end{align} Let $v\in D$ and $v_n\in V_{m_n}$, $n\in\mathbb{N}$, be such that $v_n\to v$ in $X$ $(n\to\infty)$. Then, we test \eqref{eq:4.15} for $n\in \Lambda_t$ by $v_n\in V_{m_n}$, multiply by $\varphi\in C^\infty(\overline{I})$ with $\varphi(0)=0$ and $\varphi(t)=1$, integrate over $\left[0,t\right]$ and integrate by parts, to obtain for all $n\in \Lambda_t$ \begin{align} \langle \mathscr{J}_{\tau_n}[\mathbfcal{A}]\overline{\boldsymbol{x}}_n,v_n\varphi\chi_{\left[0,t\right]}\rangle_{L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y)}&-\int_0^t{\langle\mathscr{J}_{\tau_n}[\boldsymbol{f}](s),v_n\rangle_{X}\varphi(s)\,ds} \label{eq:5.28}\\&=\int_0^t{((\boldsymbol{j}\hat{\boldsymbol{x}}_n)(s),jv_n)_Y\varphi^\prime(s)\,ds} -((\boldsymbol{j}\hat{\boldsymbol{x}}_n)(t),jv_n)_Y.\notag \end{align} Passing to the limit $\Lambda_t\ni n\to\infty$ in \eqref{eq:5.28}, using \eqref{eq:5.18}, \eqref{eq:rep}, Proposition \ref{4.4} (i), \eqref{eq:5.27} and the density of $D$ in $V$,~we~obtain for all $v\in V$ \begin{align} \begin{split} \int_0^t{\langle\boldsymbol{g}(s)-\boldsymbol{f}(s),v\rangle_V\varphi(s)\,ds} =\int_0^t{((\boldsymbol{j}\overline{\boldsymbol{x}})(s),jv)_H\varphi^\prime(s)\,ds} -(\hat{\boldsymbol{x}}_{\Lambda_t},jv)_Y.
\end{split} \label{eq:5.29b} \end{align} From \eqref{eq:5.24}, \eqref{eq:5.29b}, the integration by parts formula in $\mathbfcal{W}_e^{1,p,q}(I,V,H)$ and the properties of $P_H$ we obtain \begin{align} 0=((\boldsymbol{j}_c\overline{\boldsymbol{x}})(t)- \hat{\boldsymbol{x}}_{\Lambda_t},jv)_Y=((\boldsymbol{j}_c\overline{\boldsymbol{x}})(t)- P_H\hat{\boldsymbol{x}}_{\Lambda_t},jv)_H\label{eq:5.29} \end{align} for all $v\in V$. Thanks to the density of $j(V)$ in $H$, \eqref{eq:5.29} yields $(\boldsymbol{j}_c\overline{\boldsymbol{x}})(t)=P_H\hat{\boldsymbol{x}}_{\Lambda_t}$ in $H$, i.e., \begin{align} P_H(\boldsymbol{j}\hat{\boldsymbol{x}}_n)(t)\;\; \rightharpoonup\;\;(\boldsymbol{j}_c\overline{\boldsymbol{x}})(t)\quad\text{ in }H\quad(\Lambda_t\ni n \to \infty).\label{eq:5.30} \end{align} Since this argument remains valid for each subsequence of ${(P_H(\boldsymbol{j}\hat{\boldsymbol{x}}_n)(t) )_{n\in\mathbb{N}}\subseteq H}$, ${(\boldsymbol{j}_c\overline{\boldsymbol{x}})(t)\in H}$ is a weak accumulation point of each subsequence of $(P_H(\boldsymbol{j}\hat{\boldsymbol{x}}_n)(t) )_{n\in\mathbb{N}}\subseteq H$. The standard convergence principle (cf.~\cite[Ch.~I, Lemma 5.4]{GGZ}) yields $\Lambda_t=\mathbb{N}$ in \eqref{eq:5.30}. Therefore, using \eqref{eq:5.21} and that $(\boldsymbol{j}_c\overline{\boldsymbol{x}})(t)=(\boldsymbol{j}\overline{\boldsymbol{x}})(t)$ in $H$ for almost every $t\in I$, there holds for almost every $t\in I$ \begin{align}\label{eq:5.31} P_H(\boldsymbol{j}\overline{\boldsymbol{x}}_n)(t)\;\;\rightharpoonup\;\;(\boldsymbol{j}\overline{\boldsymbol{x}})(t)\quad\text{ in }H\quad(n\to\infty).\\[-3mm]\notag \end{align} \textbf{4.
Identification of $\mathbfcal{A}\overline{\boldsymbol{x}}$ and $\overline{\boldsymbol{x}}^*$}: Inequality \eqref{eq:5.8} in the case $\tau=\tau_n$, $n=m_n$ and $l=K_n$, using Proposition \ref{4.6} (iii.a), $(\boldsymbol{j}_c\overline{\boldsymbol{x}})(0)\!=\!{x}_0$ in $H$, ${\|P_H(\boldsymbol{j}\hat{\boldsymbol{x}}_n)(T)\|_H\!\leq\!\|(\boldsymbol{j}\hat{\boldsymbol{x}}_n)(T)\|_Y\!=\!\|(\boldsymbol{j}\overline{\boldsymbol{x}}_n)(T)\|_Y}$ and $\langle\mathscr{J}_{\tau_n}[\boldsymbol{f}],\overline{\boldsymbol{x}}_n\rangle_{L^p(I,X)}=\langle\boldsymbol{f},\overline{\boldsymbol{x}}_n\rangle_{L^p(I,X)}$ for all $n\in \mathbb{N}$, yields for all $n\in \mathbb{N}$ \begin{align} \langle \mathbfcal{A}\overline{\boldsymbol{x}}_n,\overline{\boldsymbol{x}}_n\rangle_{L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y)}\leq -\frac{1}{2}\|P_H(\boldsymbol{j}\hat{\boldsymbol{x}}_n)(T)\|_H^2+\frac{1}{2}\|(\boldsymbol{j}_c\overline{\boldsymbol{x}})(0)\|_H^2+\langle\boldsymbol{f},\overline{\boldsymbol{x}}_n\rangle_{L^p(I,X)}.\label{eq:5.32} \end{align} Thus, taking the limit superior with respect to $n\in\mathbb{N}$ on both sides of \eqref{eq:5.32} and using \eqref{eq:5.18}, \eqref{eq:rep}, \eqref{eq:5.30} with $\Lambda_t=\mathbb{N}$ in the case $t=T$, the weak lower semi-continuity of $\|\cdot\|_H$, the integration by parts formula in $\mathbfcal{W}_e^{1,p,q}(I,V,H)$ and \eqref{eq:5.24}, we obtain \begin{align}\begin{split} \limsup_{n\rightarrow\infty}{ \langle \mathbfcal{A}\overline{\boldsymbol{x}}_n,\overline{\boldsymbol{x}}_n-\overline{\boldsymbol{x}}\rangle_{L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y)}} &\leq -\frac{1}{2}\|(\boldsymbol{j}_c\overline{\boldsymbol{x}})(T)\|_H^2+\frac{1}{2}\|(\boldsymbol{j}_c\overline{\boldsymbol{x}})(0)\|_H^2\\&\quad+ \int_I{\langle\boldsymbol{f}(t)-\boldsymbol{g}(t),\overline{\boldsymbol{x}}(t)\rangle_V\,dt}\\&=-\int_I{\Big\langle \frac{d_e\overline{\boldsymbol{x}}}{dt}(t)-\boldsymbol{f}(t)+\boldsymbol{g}(t),\overline{\boldsymbol{x}}(t)\Big\rangle_V\,dt}=0.
\end{split}\label{eq:5.33} \end{align} As a result of \eqref{eq:5.18}, \eqref{eq:5.31}, \eqref{eq:5.33} and the quasi non-conforming Bochner pseudo-monotonicity of $\mathbfcal{A}:L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y)\rightarrow ({L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y)})^*$ (cf.~Proposition~\ref{3.9}), there holds \begin{align*} \langle \mathbfcal{A}\overline{\boldsymbol{x}},\overline{\boldsymbol{x}}-\boldsymbol{y}\rangle_{L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y)}&\leq\liminf_{n\to\infty}{\langle \mathbfcal{A}\overline{\boldsymbol{x}}_n,\overline{\boldsymbol{x}}_n-\boldsymbol{y}\rangle_{{L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y)}}}\\&\leq\langle\overline{\boldsymbol{x}}^*,\overline{\boldsymbol{x}}-\boldsymbol{y}\rangle_{{L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y)}}, \end{align*} for any $\boldsymbol{y}\in {L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y)}$, which in turn implies that $\mathbfcal{A}\overline{\boldsymbol{x}}=\overline{\boldsymbol{x}}^*$ in $({L^p(I,X)\cap_{\boldsymbol{j}}L^q(I,Y)})^*$. This completes the proof of Theorem \ref{5.17}.\hfill$\qed$ \end{proof} \section{Application: $p$-Navier-Stokes equations for $p>\frac{3d+2}{d+2}$} \label{sec:7} We follow the procedure of Section \ref{sec:6} in order to prove the well-posedness, stability and weak convergence of the quasi non-conforming Rothe-Galerkin scheme \eqref{eq:p-NSnon} of the evolution equation \eqref{eq:p-NS2} corresponding to the unsteady $p$-Navier-Stokes equations \eqref{eq:p-NS}, by means of discretely divergence-free FEM spaces. \begin{asum}\label{asumex} Let $\Omega\subseteq \mathbb{R}^d$, $d\ge 2$, be a bounded polygonal Lipschitz domain, $I:=\left(0,T\right)$, $T<\infty$, and $p\!>\!\frac{3d+2}{d+2}$. We make the following assumptions: \begin{description}[{(iii)}] \item[(i)] $(V,H,\text{id})$, $(X,Y,\text{id})$ and $(V_n)_{n\in \mathbb{N}}$ are defined as in Proposition \ref{ex:3.5}.
\item[(ii)] $\textbf{u}_0\in H$ and $\textbf{u}_n^0\in V_n$, $n\in \mathbb{N}$, such that $\textbf{u}_n^0\to \textbf{u}_0$ in $Y$ $(n\to \infty)$ and $\sup_{n\in \mathbb{N}}{\|\textbf{u}_n^0\|_Y}\leq \|\textbf{u}_0\|_H$. \item[(iii)] $\boldsymbol{f}\in L^{p'}(I,X^*)$. \item[(iv)] $S(t):X\to X^*$, $t\in I$, and $\hat{B}:X\to X^*$ are defined as in Proposition \ref{4.7}. \end{description} \end{asum} Furthermore, we denote by $e:=(\textup{id}_V)^*R_H:V\to V^*$ the canonical embedding with respect to the evolution triple $(V,H,\textup{id})$. Let us next recall the quasi non-conforming Rothe-Galerkin scheme already presented in the introduction. \begin{alg} Let Assumption \ref{asumex} be satisfied. For given $K,n\in \mathbb{N}$ the sequence of iterates $(\textbf{u}_n^{k})_{k=0,...,K}\subseteq V_n$ is given via the implicit Rothe-Galerkin scheme for $\tau=\frac{T}{K}$ \begin{align} (d_\tau \textbf{u}^k_n,\textbf{v}_n)_Y+\langle [S]^\tau_k\textbf{u}^k_n,\textbf{v}_n\rangle_X+\langle \hat{B}\textbf{u}^k_n,\textbf{v}_n\rangle_X=\langle [\boldsymbol{f}]^\tau_k,\textbf{v}_n\rangle_X\quad\text{ for all }\textbf{v}_n\in V_n.\label{eq:p-NSnon2} \end{align} \end{alg} By means of Proposition \ref{5.1}, Proposition \ref{apriori}, Theorem \ref{5.17} and the observations already made in Proposition \ref{4.7}, we can immediately conclude the following results. \begin{thm}[Well-posedness, stability and weak convergence of \eqref{eq:p-NSnon2}]\label{rem:7.4}\newline Let Assumption \ref{asumex} be satisfied. Then, it holds: \begin{description}[{(III)}] \item[(I)] \textbf{Well-posedness:} For every $K,n\in \mathbb{N}$ there exist iterates $(\textbf{u}_n^k)_{k=0,...,K}\subseteq V_n$, solving \eqref{eq:p-NSnon2}, without any restrictions on the step-size.
\item[(II)] \textbf{Stability:} The corresponding piece-wise constant interpolants $\overline{\boldsymbol{u}}_n^\tau\in \mathbfcal{S}^0(\mathcal{I}_\tau,X)$, $K,n\in\mathbb{N}$ with $\tau=\frac{T}{K}$, are bounded in $L^p(I,X)\cap L^\infty(I,Y)$. \item[(III)] \textbf{Weak convergence:} If $(\overline{\boldsymbol{u}}_n)_{n\in\mathbb{N}}:=(\overline{\boldsymbol{u}}_{m_n}^{\tau_n})_{n\in\mathbb{N}}$, where $\tau_n=\frac{T}{K_n}$ and ${K_n,m_n\to \infty }$~${(n\to\infty)}$, is an arbitrary diagonal sequence of the piece-wise constant interpolants ${\overline{\boldsymbol{u}}_n^\tau\in \mathbfcal{S}^0(\mathcal{I}_\tau,X)}$, ${K,n\in\mathbb{N}}$ with ${\tau=\frac{T}{K}}$, then there exists a not relabelled subsequence and a weak limit ${\overline{\boldsymbol{u}}\in L^p(I,V)\cap L^\infty(I,H)}$ such that \begin{align*} \begin{alignedat}{2} \overline{\boldsymbol{u}}_n&\;\;\rightharpoonup\;\;\overline{\boldsymbol{u}}&&\quad\text{ in }L^p(I,X),\\ \overline{\boldsymbol{u}}_n&\;\;\overset{\ast}{\rightharpoondown}\;\;\overline{\boldsymbol{u}}&&\quad\text{ in }L^\infty(I,Y), \end{alignedat} \begin{aligned} \quad(n\to\infty). \end{aligned} \end{align*} Furthermore, it follows that $\overline{\boldsymbol{u}}\in \mathbfcal{W}_e^{1,p,p}(I,V,H)\cap L^\infty(I,H)$ satisfies $\overline{\boldsymbol{u}}(0)=\textbf{u}_0$ in $H$ and for all ${\boldsymbol{\phi}\in L^p(I,V)}$ \begin{align*} \int_I{\bigg\langle\frac{d_e\overline{\boldsymbol{u}}}{dt}(t),\boldsymbol{\phi}(t)\bigg\rangle_V\,dt}+\int_I{\langle S(t)(\overline{\boldsymbol{u}}(t))+B(\overline{\boldsymbol{u}}(t)),\boldsymbol{\phi}(t)\rangle_X\,dt}=\int_I{\langle\boldsymbol{f}(t),\boldsymbol{\phi}(t)\rangle_X\,dt}. 
\end{align*} \end{description} \end{thm} \begin{proof} \textbf{ad (I)/(II)} The assertions follow immediately from Proposition \ref{5.1} and Proposition \ref{apriori}, since the operator family $A(t):=S(t)+\hat{B}:X\to X^*$, $t\in I$, satisfies (\hyperlink{A.1}{A.1})--(\hyperlink{A.4}{A.4}) with $c_1=0$ due to Proposition \ref{4.7}. \textbf{ad (III)} The assertions follow from Theorem \ref{5.17}. To be more precise, Theorem \ref{5.17} initially yields that $\overline{\boldsymbol{u}}\in \mathbfcal{W}_e^{1,p,q}(I,V,H)$, where $q>1$ is specified in the proof of Proposition \ref{4.7}, satisfies $\overline{\boldsymbol{u}}(0)=\mathbf{u}_0$ in $H$ and for all $\boldsymbol{\phi}\in C^1_0(I,V)$ \begin{align*} \int_I{\bigg\langle\frac{d_e\overline{\boldsymbol{u}}}{dt}(t),\boldsymbol{\phi}(t)\bigg\rangle_V\,dt}+\int_I{\langle S(t)(\overline{\boldsymbol{u}}(t))+\hat{B}(\overline{\boldsymbol{u}}(t)),\boldsymbol{\phi}(t)\rangle_X\,dt}=\int_I{\langle\boldsymbol{f}(t),\boldsymbol{\phi}(t)\rangle_X\,dt}. 
\end{align*} Since $\langle\hat{B}(\overline{\boldsymbol{u}}(t)),\boldsymbol{\phi}(t)\rangle_X=\langle B(\overline{\boldsymbol{u}}(t)),\boldsymbol{\phi}(t)\rangle_X$ for almost every $t\in I$ and all $\boldsymbol{\phi}\in C^1_0(I,V)$ as well as $B(\overline{\boldsymbol{u}}(\cdot))\in L^{p'}(I,V^*)$ (cf.~\cite[Example~5.1]{KR19}), as $\overline{\boldsymbol{u}}\in L^p(I,V)\cap L^\infty(I,H)$, we actually proved that $\overline{\boldsymbol{u}}\in \mathbfcal{W}_e^{1,p,p}(I,V,H)\cap L^\infty(I,H)$, such that for all $\boldsymbol{\phi}\in C^1_0(I,V)$ \begin{align*} \int_I{\bigg\langle\frac{d_e\overline{\boldsymbol{u}}}{dt}(t),\boldsymbol{\phi}(t)\bigg\rangle_V\,dt}+\int_I{\langle S(t)(\overline{\boldsymbol{u}}(t))+B(\overline{\boldsymbol{u}}(t)),\boldsymbol{\phi}(t)\rangle_X\,dt}=\int_I{\langle\boldsymbol{f}(t),\boldsymbol{\phi}(t)\rangle_X\,dt}.\tag*{$\qed$} \end{align*} \end{proof} \begin{rmk}\label{rem:ex} The results in Theorem~\ref{rem:7.4} are, among others, already contained in \cite{tscherpel-phd} (cf.~\cite{sueli-tscherpel}), where a numerical analysis of the unsteady $p$-Navier-Stokes equations is performed using the framework of maximal monotone graphs. The convergence of a conforming implicit fully discrete Rothe-Galerkin scheme for an evolution problem with Bochner pseudo-monotone operators is proved in \cite{BR19}. Convergence results with optimal rates for the unsteady $p$-Navier-Stokes equations for $p\le 2$ and space periodic boundary conditions can be found in \cite{bdr-3-2}. Optimal convergence rates in the case of homogeneous Dirichlet boundary conditions for the unsteady $p$-Stokes equations can be found in \cite{sarah-phd}. First results for optimal convergence rates of a fully space-time discretization for an unsteady problem with $(p,\delta)$-structure depending on the symmetric gradient are contained in \cite{br-parabolic} (cf.~\cite{breit-mensah} for a setting with variable exponents).
\end{rmk} \newcommand{\vspace{-2.5mm}}{\vspace{-2.5mm}} \section{Numerical experiments} \label{sec:8} Finally, we present some numerical experiments with low regularity data, which perfectly suit the framework of this article but are too irregular for the usual optimal convergence results to apply; see, e.g., Remark \ref{rem:ex} for an overview. The choice of appropriate data for the first experiment is motivated by \cite{CHP10}. However, in contrast to \cite{CHP10}, we consider constant exponents and a time-dependent viscosity. All numerical experiments were conducted using the finite element software \textsf{FEniCS} \cite{LW10}. The graphics are generated using the \textsf{Matplotlib} library \cite{Hun07}. \begin{ex} Let $\Omega=\left(0,3\right)\times \left(0,1\right)\subseteq \mathbb{R}^2$, $T=1$, $p=\frac{11}{5}$ and $\boldsymbol{f}=\boldsymbol{0}$. The initial data $\mathbf{u}_0\in H$ is described in Figure \ref{fig1}, in which $\vert \mathbf{u}_0\vert=1$ in the left vortex, i.e., in $\left(0,2\right)\times \left(0,1\right)$, and $\vert \mathbf{u}_0\vert=10$ in the right vortex, i.e., in $\left(2,3\right)\times \left(0,1\right)$. \begin{figure}[hbt!] \centering \includegraphics[width=11.5cm]{ID1.png} \caption{Velocity field of the initial data $\mathbf{u}_0\in H$. The length of the vectors is suitably scaled.\\[3mm]} \label{fig1} \end{figure} The spatial discretization of our domain $\Omega$ is obtained by a uniform finite element mesh consisting of triangles with straight sides, shown in Figure \ref{fig2}.
The mapping $\mathbf{S}:Q_T\times \mathbb{M}^{d\times d}_{\textup{sym}}\to \mathbb{M}^{d\times d}_{\textup{sym}}$ is given via $\mathbf{S}(t,x,\mathbf{A}):=\nu(t,x)(\delta+\vert \mathbf{A}\vert)^{p-2}\mathbf{A}$ in $\mathbb{M}^{d\times d}_{\textup{sym}}$ for almost every $(t,x)^{\top}\in Q_T$ and all $\mathbf{A}\in \mathbb{M}^{d\times d}_{\textup{sym}}$, where $\nu\in L^\infty(Q_T)$ is given via $\nu(t,x)=t^2+\exp(-x_1^2+x_2^2)$ for almost every $(t,x)^{\top}\in Q_T$ and $\delta =1\mathrm{e}{-2}$. We consider both the MINI element (cf.~Figure~\ref{fig3}) and the conforming Crouzeix-Raviart element (cf.~Figure~\ref{fig5}). Moreover, we choose the step-size $\tau=1\mathrm{e}{-2}$, i.e., $K=100$. Then, the iterates $(\textbf{u}_n^k)_{k=0,...,K}\subseteq V_n$, solving~\eqref{eq:p-NSnon2}, are approximated by means of Newton's iteration, where the linear system emerging in each Newton step is solved using the LU solver of \textsf{PETSc} \cite{PETSc19}. With this setup, we obtain the following results: \begin{figure}[hbt!] \centering \includegraphics[width=11.5cm]{Mesh.png} \centering \caption{Uniform mesh consisting of $48\times 16$ rectangles, each divided into a pair of triangles, i.e., $1536$ triangles.} \label{fig2} \end{figure} \begin{figure}[hbt!] \centering \includegraphics[width=14cm]{MINIex1.png} \includegraphics[width=14cm]{MINIex2.png} \caption{Snapshots of the velocity field (left) and the pressure (right) in the case of the MINI element at times $t= 0.01,0.2$.} \label{fig3} \end{figure} \begin{figure}[hbt!] \centering \includegraphics[width=14cm]{CRex1.png} \includegraphics[width=14cm]{CRex2.png} \caption{Snapshots of the velocity field (left) and the pressure (right) in the case of the conforming Crouzeix-Raviart element at times $t= 0.01,0.2$.} \label{fig5} \end{figure} Leaving the discontinuities of the pressure in the conforming Crouzeix-Raviart element aside, we observe that both elements produce similar results.
More precisely, the right (faster) vortex clearly dominates the left (slower) vortex, in the sense that the right vortex quickly accelerates the left vortex while simultaneously decelerating itself. This acceleration takes place along the discontinuity line $\{2\}\times \left(0,1\right)$ of the velocity field and thus causes a large pressure magnitude when both vortices clash together with different orientations around the points $(2,0)^\top$ and $(2,1)^\top$. Still, we observe that this evolution quickly results in an equilibrium state, since already at the relatively short time scale $t=0.2$ both vortices have more or less the same average vector length. \end{ex} In the next experiment we are interested in the convergence speed of the scheme \eqref{eq:p-NSnon2} in the case of low regularity data, which falls just within the scope of the weak convergence result, Theorem \ref{rem:7.4}. The considered data is mainly motivated by \cite{BBDR12}. \begin{ex} Let $\Omega=\left(-1,1\right)^2\subseteq \mathbb{R}^2$, $T=\frac{1}{2}$, $Q_T:=I\times \Omega$, $\Gamma:=I\times\partial\Omega$ and $p=\frac{11}{5}$. We consider solutions with a point singularity at the origin in the velocity. More precisely, for $\alpha=\frac{6}{5}-\frac{2}{p}\approx 0.291$, let \begin{align*} \boldsymbol{u}(t,\mathbf{x}):=\left( \begin{array}{c} t^2\\ t^2\\ \end{array} \right)+\vert\mathbf{x}\vert^{\alpha-1}\left( \begin{array}{c} x_2\\ - x_1\\ \end{array} \right), \qquad \boldsymbol{p}(t,\mathbf{x}):=0. \end{align*} The mapping $\mathbf{S}: \mathbb{M}^{2\times 2}_{\textup{sym}}\to \mathbb{M}^{2\times 2}_{\textup{sym}}$ is given via $\mathbf{S}(\mathbf{A}):=(\delta+\vert \mathbf{A}\vert)^{p-2}\mathbf{A}$ in $\mathbb{M}^{2\times 2}_{\textup{sym}}$ for all $\mathbf{A}\in \mathbb{M}^{2\times 2}_{\textup{sym}}$, where $\delta = 1\mathrm{e}{-4}$.
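For illustration, the constitutive law just defined is easy to evaluate directly; the following minimal numpy sketch (function name and test matrix are ours, not from the paper's implementation) also exhibits the superlinear growth in $\vert\mathbf{A}\vert$ for $p>2$:

```python
import numpy as np

def S(A, p=11/5, delta=1e-4):
    """Extra stress S(A) = (delta + |A|)^(p-2) * A for a symmetric
    matrix A, with |A| the Frobenius norm."""
    return (delta + np.linalg.norm(A)) ** (p - 2) * A

# S(A) is a positive scalar multiple of A, so it preserves symmetry;
# doubling A more than doubles |S(A)| since p - 2 = 1/5 > 0:
A = np.array([[1.0, 0.5], [0.5, -1.0]])
print(np.linalg.norm(S(2 * A)) / np.linalg.norm(S(A)))  # slightly above 2
```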
Then, choosing $\boldsymbol{f}:=\partial_t \boldsymbol{u}-\divo \mathbf{S}(\mathbf{D}\boldsymbol{u})+\divo(\boldsymbol{u}\otimes\boldsymbol{u})\in L^{p'}(I,(W_0^{1,p}(\Omega)^2)^*)$, $\boldsymbol{u}(0):=\mathbf{u}_0\in W^{1,p}_{\textup{div}}(\Omega):=\{\mathbf{v}\in W^{1,p}(\Omega)^2\mid \divo\mathbf{v}=0\}$ and $\mathbf{u}_D:= \boldsymbol{u}|_{\Gamma}$, we trivially have\footnote{The exact solution is not zero on the boundary of the computational domain. However, the error is mainly concentrated around the singularity at the origin, and thus this small inconsistency with the setup of the theory does not influence the results.} \begin{alignat*}{2} \partial_t\boldsymbol{u}-\divo\mathbf{S}(\mathbf{D}\boldsymbol{u})+\divo(\boldsymbol{u}\otimes\boldsymbol{u})+\nabla \boldsymbol{p}&=\boldsymbol{f}&&\quad\text{ in }Q_T,\\ \divo \boldsymbol{u}&=0&&\quad\text{ in }Q_T,\\ \boldsymbol{u}&=\mathbf{u}_D&&\quad\text{ on }\Gamma,\\ \boldsymbol{u}(0)&=\mathbf{u}_0&&\quad\text{ in }\Omega. \end{alignat*} In particular, note that the parameter $\alpha$ is chosen so small that we just have $\boldsymbol{u}(t)\in W^{1,p}_{\textup{div}}(\Omega)$ for every $t\in \overline{I}$, as then $\vert \mathbf{D}\boldsymbol{u}(t)\vert\sim\vert\cdot\vert^{\alpha-1}\in L^p(\Omega)$, but $\boldsymbol{u}(t)\notin W^{2,1}(\Omega)^2$ for every $t\in \overline{I}$. Similarly, this choice guarantees $\mathbf{S}(\mathbf{D}\boldsymbol{u}(t))\in L^{p'}(\Omega)^{2\times 2}$ for every $t\in \overline{I}$, but neither $\mathbf{S}(\mathbf{D}\boldsymbol{u}(t))\in W^{1,p'}(\Omega)^{2\times 2}$ nor $\divo\mathbf{S}(\mathbf{D}\boldsymbol{u}(t))\in L^{p'}(\Omega)^2$ for any $t\in \overline{I}$. Thus, the right-hand side has just enough regularity, namely $\boldsymbol{f}\in L^{p'}(I,(W_0^{1,p}(\Omega)^2)^*)$, to fall into the framework of our weak convergence result Theorem \ref{rem:7.4}.
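The stated regularity properties of this $\alpha$ can be checked by an elementary computation: in two dimensions, $\vert\mathbf{x}\vert^{\beta}$ is $p$-integrable near the origin if and only if $p\beta>-2$. A short sketch of this check (our own, using exact rational arithmetic):

```python
from fractions import Fraction

p = Fraction(11, 5)
alpha = Fraction(6, 5) - 2 / p        # = 16/55, i.e. ~0.291 as stated above
print(float(alpha))

# |D u| ~ |x|^(alpha-1) lies in L^p near the origin in 2D iff the radial
# integral of r^(p*(alpha-1)) * r dr converges, i.e. iff p*(alpha-1) > -2:
exponent = p * (alpha - 1)            # = -39/25 = -1.56
assert exponent > -2                  # D u in L^p(Omega) holds, but not by much
```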
In consequence, we can expect (weak) convergence of the scheme \eqref{eq:p-NSnon2} for this choice of $\alpha$. However, the following experimental results indicate that we presumably cannot expect convergence with rates. \begin{figure}[ht] \centering \hspace*{-5mm}\includegraphics[width=14cm]{exactsol.png} \caption{Snapshots of the exact solution $\boldsymbol{u}(t)$ at times $t=0,0.25,0.5$.} \label{exactsol} \end{figure} The spatial discretization of our domain $\Omega$ is obtained by a sequence of uniform finite element meshes $(\mathcal{T}_{h_n})_{n\in \mathbb{N}}$, consisting of triangles with straight sides and diameter ${h_n:=\frac{h_0}{2^n}}$,~${h_0:=2\sqrt{2}}$, for every $n\in \mathbb{N}$. Beginning with $\mathcal{T}_{h_1}$ (see the first and third pictures of Figure \ref{refine}), for every $n\in \mathbb{N}$ with $n\ge 2$ the mesh $\mathcal{T}_{h_n}$ is a refinement of $\mathcal{T}_{h_{n-1}}$, obtained by dividing each triangle into four, i.e., by an edge-midpoint (regular $1:4$) refinement algorithm. Once more we consider the MINI element (cf.~Table~\ref{tab1}), the Taylor-Hood element (cf.~Table~\ref{tab2}) and the conforming Crouzeix-Raviart element (cf.~Table~\ref{tab3}). Moreover, we choose the step-size $\tau=4\mathrm{e}{-3}$, i.e., $K=250$. Then, the iterates $(\textbf{u}_n^k)_{k=0,...,K}\subseteq V_n$, solving \eqref{eq:p-NSnon2}, are again approximated by means of Newton's iteration. Let the mapping $\mathbf{F}:\mathbb{M}^{2\times 2}_{\textup{sym}}\to \mathbb{M}^{2\times 2}_{\textup{sym}}$ be given via $\mathbf{F}(\mathbf{A}):=(\delta+\vert \mathbf{A}\vert)^{\frac{p-2}{2}}\mathbf{A}$ for every $\mathbf{A}\in \mathbb{M}^{2\times 2}_{\textup{sym}}$.
We are interested in the parabolic errors \begin{align*} e_{\mathbf{F},h_n}&:=\bigg(\sum_{k=0}^K{\tau\|\mathbf{F}(\mathbf{D}\boldsymbol{u}(t_k))-\mathbf{F}(\mathbf{D}\mathbf{u}_n^k)\|^2_{L^2(\Omega)^{d\times d}}}\bigg)^{\frac{1}{2}},\\ e_{L^2,h_n}&:=\max_{0\leq k \leq K}{\|\boldsymbol{u}(t_k)-\mathbf{u}_n^k\|_{L^2(\Omega)^d}},\quad n=1,...,7, \end{align*} which are approximations of the errors $\|\mathbf{F}(\mathbf{D}\boldsymbol{u})-\mathbf{F}(\mathbf{D}\overline{\boldsymbol{u}}_n^\tau)\|_{L^2(Q_T)^{d\times d}}$ and $\|\boldsymbol{u}-\overline{\boldsymbol{u}}_n^\tau\|_{L^\infty(I,L^2(\Omega)^d)}$. In particular, we are interested in the total parabolic error $e_{\mathbf{tot},h_n}:=e_{\mathbf{F},h_n}+e_{L^2,h_n}$, $n=1,...,7$. As an estimate of the convergence rates, we use the experimental order of convergence (EOC): \begin{align*} \textup{EOC}(e_{h_n}):=\frac{\log(e_{h_n}/e_{h_{n-1}})}{\log(h_n/h_{n-1})},\quad n=2,...,7, \end{align*} where $e_{h_n}$, $n=2,...,7$, either denote $e_{\mathbf{F},h_n}$, $e_{L^2,h_n}$ or $e_{\mathbf{tot},h_n}$, $n=2,...,7$, respectively.
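The EOC defined above can be computed directly from tabulated errors; a minimal helper (our own), checked against the first entries of Table~\ref{tab1}:

```python
import math

def eoc(errors, hs):
    """Experimental order of convergence between successive mesh levels:
    EOC_n = log(e_n / e_{n-1}) / log(h_n / h_{n-1})."""
    return [math.log(errors[n] / errors[n - 1]) / math.log(hs[n] / hs[n - 1])
            for n in range(1, len(errors))]

# reproduce the first two EOC entries of Table 1 (MINI element, L^2 error):
h = [2 * 2 ** 0.5 / 2 ** n for n in range(1, 4)]   # h_n = h_0 / 2^n, h_0 = 2*sqrt(2)
e = [4.698e-1, 2.451e-1, 1.578e-1]
print([round(r, 3) for r in eoc(e, h)])            # [0.939, 0.635] as in Table 1
```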
\begin{figure}[ht] \includegraphics[width=14.5cm]{refine.png}\\ \caption{From left to right: snapshots of the meshes $\mathcal{T}_{h_1}$, $\mathcal{T}_{h_1}'$, $\mathcal{T}_{h_2}$ and $\mathcal{T}_{h_2}'$.} \label{refine} \end{figure} In order to obtain higher accuracy in the computation of these errors, especially in view of the singularity of the exact solution around the origin, we interpolate both $\boldsymbol{u}(t_k)$ and $\mathbf{u}_n^k$ into a higher polynomial space with respect to an appropriately refined mesh, namely into $\mathcal{P}_5(\mathcal{T}_{h_n}')$, where $\mathcal{T}_{h_n}'$ is a refinement of $\mathcal{T}_{h_n}$, which is obtained by applying the longest edge bisection method of \textsf{FEniCS} on $\mathcal{T}_{h_{n+1}}$ for all cells $T\in \mathcal{T}_{h_{n+1}}$ with $\textup{dist}(T,\mathbf{0}) < 0.25$ and subsequently on the resulting refined mesh $\widetilde{\mathcal{T}}_{h_{n+1}}$ for all cells $T\in \widetilde{\mathcal{T}}_{h_{n+1}}$ with $\textup{dist}(T,\mathbf{0})< 0.1$ (see the second and fourth pictures of Figure \ref{refine}).
In this manner, we obtain the following results: \begin{table}[ht] \centering \begin{tabular}[t]{ c c c c c c c} \toprule &$h_n=\frac{h_0}{2^n}$& $e_{L^2,h_n}$& $\textup{EOC}(e_{L^2,h_n})$ & $e_{\mathbf{F},h_n}$& $\textup{EOC}(e_{\mathbf{F},h_n})$ & $\textup{EOC}(e_{\mathbf{tot},h_n})$\\ \midrule $n=1$& $\sqrt{2}\approx 1.41$ & $4.698\mathrm{e}{-1}$ & - & $1.256$ & -& -\\ $n=2$& $\frac{\sqrt{2}}{2}\approx 7.07\mathrm{e}{-1}$ & $2.451\mathrm{e}{-1}$ & $0.939$ & $1.062$& $0.243$& $0.402$\\ $n=3$& $\frac{\sqrt{2}}{4}\approx 3.54\mathrm{e}{-1}$ & $1.578\mathrm{e}{-1}$ & $0.635$ & $9.407\mathrm{e}{-1}$& $0.174$& $0.250$\\ $n=4$& $\frac{\sqrt{2}}{8}\approx 1.77\mathrm{e}{-1} $ & $1.037\mathrm{e}{-1}$ & $0.606$ & $8.484\mathrm{e}{-1}$& $0.149$ & $0.206$\\ $n=5$& $\frac{\sqrt{2}}{16}\approx 8.84\mathrm{e}{-2} $ & $6.940\mathrm{e}{-2}$ & $0.579$ & $ 7.602\mathrm{e}{-1}$& $0.158$& $0.199$\\ $n=6$& $\frac{\sqrt{2}}{32}\approx 4.42\mathrm{e}{-2} $ & $5.293\mathrm{e}{-2}$ & $0.391$ & $6.802\mathrm{e}{-1}$ & $0.161$ & $0.178$\\ $n=7$& $\frac{\sqrt{2}}{64}\approx 2.21\mathrm{e}{-2} $ & $4.610\mathrm{e}{-2}$ & $0.199$ & $6.138\mathrm{e}{-1}$ & $0.148$ & $0.152$\\ \bottomrule \end{tabular} \caption{Error analysis for the MINI element.} \label{tab1} \end{table} \begin{table}[ht] \centering \begin{tabular}[t]{ c c c c c c c} \toprule &$h_n=\frac{h_0}{2^n}$& $e_{L^2,h_n}$& $\textup{EOC}(e_{L^2,h_n})$ & $e_{\mathbf{F},h_n}$& $\textup{EOC}(e_{\mathbf{F},h_n})$& $\textup{EOC}(e_{\mathbf{tot},h_n})$\\ \midrule $n=1$& $\sqrt{2}\approx 1.41$ & $1.876\mathrm{e}{-1}$ & - & $9.336\mathrm{e}{-1}$ & - & -\\ $n=2$ & $\frac{\sqrt{2}}{2}\approx 7.07\mathrm{e}{-1}$ & $1.261\mathrm{e}{-1}$ & $0.573$ & $8.289\mathrm{e}{-1}$& $0.172$ & $0.232$\\ $n=3$ & $\frac{\sqrt{2}}{4}\approx 3.54\mathrm{e}{-1}$ & $1.058\mathrm{e}{-1}$ & $0.256$ & $7.527\mathrm{e}{-1}$& $0.139$& $0.153$\\ $n=4$ & $\frac{\sqrt{2}}{8}\approx 1.77\mathrm{e}{-1} $ & $8.191\mathrm{e}{-2}$ & $0.370$ & $6.919\mathrm{e}{-1}$&
$0.122$& $0.150$\\ $n=5$ & $\frac{\sqrt{2}}{16}\approx 8.84\mathrm{e}{-2} $ & $6.206\mathrm{e}{-2}$ & $0.400$ & $6.295\mathrm{e}{-1}$& $0.136$& $0.162$\\ $n=6$ & $\frac{\sqrt{2}}{32}\approx 4.42\mathrm{e}{-2} $ & $5.090\mathrm{e}{-2}$ & $0.286$ & $5.724\mathrm{e}{-1}$ & $0.137$ & $0.150$ \\ $n=7$& $\frac{\sqrt{2}}{64}\approx 2.21\mathrm{e}{-2} $ & $4.552\mathrm{e}{-2}$ & $0.161$ & $5.263\mathrm{e}{-1}$ & $0.121$ & $0.124$\\ \bottomrule \end{tabular} \caption{Error analysis for the Taylor-Hood element.} \label{tab2} \end{table} \begin{table}[ht] \centering \begin{tabular}[t]{ c c c c c c c} \toprule&$h_n=\frac{h_0}{2^n}$& $e_{L^2,h_n}$& $\textup{EOC}(e_{L^2,h_n})$ & $e_{\mathbf{F},h_n}$& $\textup{EOC}(e_{\mathbf{F},h_n})$ & $\textup{EOC}(e_{\mathbf{tot},h_n})$\\ \midrule $n=1$& $\sqrt{2}\approx 1.41$ & $2.085\mathrm{e}{-1}$ & - & $9.627\mathrm{e}{-1}$ & -&-\\ $n=2$& $\frac{\sqrt{2}}{2}\approx 7.07\mathrm{e}{-1}$ & $1.336\mathrm{e}{-1}$ & $0.643$ & $8.431\mathrm{e}{-1}$& $0.191$& $0.262$\\ $n=3$& $\frac{\sqrt{2}}{4}\approx 3.54\mathrm{e}{-1}$ & $1.091\mathrm{e}{-1}$ & $0.292$ & $7.654\mathrm{e}{-1}$& $0.139$& $0.159$\\ $n=4$& $\frac{\sqrt{2}}{8}\approx 1.77\mathrm{e}{-1} $ & $8.400\mathrm{e}{-2}$ & $0.388$ & $7.041\mathrm{e}{-1}$& $0.121$& $0.151$\\ $n=5$& $\frac{\sqrt{2}}{16}\approx 8.84\mathrm{e}{-2} $ & $6.252\mathrm{e}{-2}$ & $0.416$ & $6.401\mathrm{e}{-1}$& $0.137$& $0.164$\\ $n=6$& $\frac{\sqrt{2}}{32}\approx 4.42\mathrm{e}{-2} $ & $ 5.109\mathrm{e}{-2}$ & $0.291$ & $5.812\mathrm{e}{-1}$ & $0.139$ & $0.152$\\ $n=7$& $\frac{\sqrt{2}}{64}\approx 2.21\mathrm{e}{-2} $ & $4.567\mathrm{e}{-2}$ & $0.161$ & $5.334\mathrm{e}{-1}$ & $0.124$ & $0.127$\\ \bottomrule \end{tabular} \caption{Error analysis for the conforming Crouzeix-Raviart element.} \label{tab3} \end{table} \vspace*{5mm} We observe for this example that the MINI element provides the best results, even though its errors are initially larger than those of the Taylor-Hood and Crouzeix-Raviart
element. Still, comparing $\textup{EOC}(e_{\mathbf{tot},h_n})$, $n=2,...,7$, with $\textup{EOC}(e_{\mathbf{F},h_n})$ and $\textup{EOC}(e_{L^2,h_n})$, $n=2,...,7$, we observe that $\textup{EOC}(e_{\mathbf{tot},h_n})$ is clearly dominated by $\textup{EOC}(e_{\mathbf{F},h_n})$. However, the main observation from Tables \ref{tab1}--\ref{tab3} is that neither $\textup{EOC}(e_{L^2,h_n})$, nor $\textup{EOC}(e_{\mathbf{F},h_n})$ or $\textup{EOC}(e_{\mathbf{tot},h_n})$, $n=2,...,7$, stabilize, so that we are not able to extract experimental convergence rates. This circumstance may be traced back to the low regularity of the data provided by the exact solution, controlled by the parameter $\alpha>0$. In fact, for even lower values of $\alpha>0$, e.g., for $\alpha=\frac{11}{10}-\frac{2}{p}\approx 0.19$, the authors were no longer able to observe (strong) convergence of the scheme \eqref{eq:p-NSnon2}. In addition, Figure~\ref{fig6} and Figure~\ref{fig7} illustrate the temporal development of the errors $\|\mathbf{F}(\mathbf{D}\boldsymbol{u}(t_k))-\mathbf{F}(\mathbf{D}\mathbf{u}_n^k)\|_{L^2(\Omega)^{2\times 2}}$ and $\|\boldsymbol{u}(t_k)-\mathbf{u}_n^k\|_{L^2(\Omega)^2}$, $k=0,...,250$. \begin{figure}[ht] \centering \includegraphics[width=15cm]{Fp.png} \caption{Temporal development of the errors $\|\mathbf{F}(\mathbf{D}\boldsymbol{u}(t_k))-\mathbf{F}(\mathbf{D}\mathbf{u}_n^k)\|_{L^2(\Omega)^{2\times 2}}$, $k=0,...,250$.} \label{fig6} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=15cm]{L2.png} \caption{Temporal development of the errors $\|\boldsymbol{u}(t_k)-\mathbf{u}_n^k\|_{L^2(\Omega)^2}$, $k=0,...,250$.} \label{fig7} \end{figure} \end{ex}
\section*{Details of the simulations} We use a tree-level, $O(a^2)$-improved Symanzik gauge action\cite{S_Luscher:1985zq} and work with tree-level, clover-improved Wilson fermions, coupled to links which have undergone six levels of stout link averaging\cite{S_Morningstar:2003gk}. (The precise form of the action is presented in \cite{S_Durr:2008rw}.) Simulation parameters, lattice sizes and trajectory lengths after thermalization are summarized in Table \ref{parameters}. Note that we work on spatial volumes as large as $L^3{\simeq} (4\,\mathrm{fm})^3$ and temporal extents up to $T{\simeq} 8\,\mathrm{fm}$. Besides significantly reducing finite-volume corrections, this choice has a similar effect on the statistical uncertainties of the results as increasing the number of trajectories at fixed volume. For a given pion mass, this increase is proportional to the ratio of volumes. Thus, for $T\propto L$, 1,300 trajectories at $M_\pi L{=}4$ are approximately equivalent to 4,000 trajectories at $M_\pi L$=3. (A factor $L^3$ comes from the summation over the spatial volume required to project the hadron correlation functions onto the zero-momentum sector and an additional $L$ comes from the fact that more timeslices are available for extracting the corresponding hadron mass.) The integrated autocorrelation times of the smeared plaquette and of the number of conjugate gradient iteration steps are less than approximately ten trajectories. Thus every tenth trajectory is used in the analysis. We calculate the spectrum by using up to eight timeslices as sources for the correlation functions. For the precise form of the hadronic operators see e.g. \cite{S_Montvay:1994cy}. We find that Gaussian sources and sinks of radii $\approx 0.32\,\mathrm{fm}$ are less contaminated by excited states than point sources/sinks (see Figure \ref{sources}).
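Excited-state contamination can be monitored through the effective mass $m_{\mathrm{eff}}(t)=\ln[C(t)/C(t+1)]$ of a two-point correlation function $C(t)$, which plateaus at the ground-state mass once excited states have died out. A generic toy illustration (our own sketch, not the paper's analysis code):

```python
import math

def effective_mass(C):
    """m_eff(t) = log(C(t)/C(t+1)); constant in t for a single exponential."""
    return [math.log(C[t] / C[t + 1]) for t in range(len(C) - 1)]

# toy correlator: ground state at m = 0.5 plus an excited state at m' = 1.2,
# mimicking the contamination that well-chosen sources suppress
C = [math.exp(-0.5 * t) + 0.3 * math.exp(-1.2 * t) for t in range(20)]
meff = effective_mass(C)
print(meff[1], meff[15])  # drifts down towards the plateau at 0.5
```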
The integrated autocorrelation times for hadron propagators, computed on every tenth trajectory, are compatible with 0.5 and no further correlations were found through binning adjacent configurations. In order to exclude possible long-range correlations in our simulations, we performed a run with 10,000 and one with 4,500 trajectories. No long-range correlations were observed. Further, we never encountered algorithmic instabilities as illustrated by the time history of the fermionic force in Figure \ref{ferm_force} and discussed in more detail in \cite{S_Durr:2008rw}. Note that the fermionic force, which is the derivative of the fermionic action with respect to the gauge field, is directly related to the locality properties of our action (see Figure \ref{locality}). \section*{Finite volume corrections and resonances} For fixed bare parameters (gauge coupling, light quark mass and strange quark mass), the energies of the different hadronic states depend on the spatial size of the lattice (in a finite volume the energy spectrum is discrete and all states are stable). There are two sources of volume dependence, which we call type I and type II. These were discussed in a series of papers by M. L\"uscher \cite{S_Luscher:1985dn,S_Luscher:1986pf,S_Luscher:1990ux,S_Luscher:1991cf}. Both effects were quantified in a self-consistent manner in our analysis, using only the results of our calculations (i.e. no numerical inputs from experiments were used). Type I effects result from virtual pion exchanges between the different copies of our periodic system. These effects induce corrections in the spectrum which fall off exponentially with $M_\pi L$ for large enough volumes \cite{S_Luscher:1985dn}. For one set of parameters ($M_\pi{\approx}320$~MeV at $a{\approx}0.125$~fm), additional runs have been carried out for several spatial volumes ranging from $M_\pi L{\approx}3.5$ to 7. 
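Such volume scans are described by the exponential ansatz $M_X(L)=M_X+c_X\exp(-M_\pi L)/(M_\pi L)^{3/2}$, which is linear in the parameters $M_X$ and $c_X$ and can therefore be fit by ordinary least squares. A sketch with invented numbers (not the paper's data):

```python
import numpy as np

def fit_finite_volume(mpi, Ls, masses):
    """Fit M_X(L) = M_X + c_X * exp(-mpi*L) / (mpi*L)^(3/2)
    by linear least squares in (M_X, c_X)."""
    x = mpi * np.asarray(Ls)
    basis = np.column_stack([np.ones_like(x), np.exp(-x) / x ** 1.5])
    (MX, cX), *_ = np.linalg.lstsq(basis, np.asarray(masses), rcond=None)
    return MX, cX

# synthetic check with made-up values in lattice units:
mpi, MX_true, cX_true = 0.2, 0.60, 1.2
Ls = np.array([18.0, 24.0, 32.0, 48.0])    # mpi*L ranges from 3.6 to 9.6
data = MX_true + cX_true * np.exp(-mpi * Ls) / (mpi * Ls) ** 1.5
MX, cX = fit_finite_volume(mpi, Ls, data)
print(round(MX, 4), round(cX, 2))          # recovers 0.6 and 1.2
```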
The size dependences of the different hadron masses $M_X$ are successfully described by $M_X(L)=M_X+c_X(M_\pi)\cdot \exp(-M_\pi L)/(M_\pi L)^{3/2}$. Figure~\ref{fin_volume} shows the volume dependence at $M_\pi$=320~MeV for the two statistically most significant channels: the pion and nucleon channels. The fitted $c_X$ coefficients are in good agreement with those suggested by \cite{S_Colangelo:2005gd,S_Colangelo:2005cg}, which predict a behavior of $c_X(M_\pi)\propto M_\pi^2$. Our results for these and other channels confirm the rule of thumb: $M_\pi L\gtrsim 4$ gives the infinite volume masses within statistical accuracy. Nevertheless, we included these finite volume corrections in our analysis. The other source of volume dependence (type II) is relevant only to resonant states, in regions of parameter space where they would decay in infinite volume (five out of the twelve particles of the present work are resonant states). Since in this case the lowest energy state with the quantum numbers of the resonance in infinite volume is a two particle scattering state, we need to take the effects of scattering states into account in our analysis. For illustration we start by considering the hypothetical case where there is no coupling between the resonance (which we will refer to as ``heavy state'' in this paragraph) and the scattering states. In a finite box of size $L$, the spectrum in the center of mass frame consists of two particle states with energy $\sqrt{M_1^2+{\bf k}^2}+\sqrt{M_2^2+{\bf k}^2}$, where ${\bf k}={\bf n}2\pi/L$, ${\bf n}\in \mathbb{Z}^3$ and $M_1$, $M_2$ are the masses of the lighter particles (with corrections of type I discussed in the previous paragraph) and, in addition, of the state of the heavy particle $M_X$ (again with type I corrections). As we increase $L$, the energy of any one of the two particle states decreases and eventually becomes smaller than the energy $M_X$ of $X$.
An analogous phenomenon can occur when we fix $L$ but reduce the quark mass (the energy of the two light particles changes more than $M_X$). In the presence of interactions, this level crossing disappears and, due to the mixing of the heavy state and the scattering state, an avoided level crossing phenomenon is observed. Such mass shifts due to avoided level crossing can distort the chiral extrapolation of hadron masses to the physical pion mass. The literature \cite{S_Luscher:1986pf,S_Luscher:1990ux,S_Luscher:1991cf} provides a conceptually satisfactory basis to study resonances in lattice QCD: each measured energy corresponds to a momentum, $|{\bf k}|$, which is a solution of a complicated non-linear equation. Though the necessary formulae can be found in the literature (cf. equations (2.7, 2.10-2.13, 3.4, A3) of \cite{S_Luscher:1991cf}), for completeness the main ingredients are summarized here. We follow \cite{S_Luscher:1991cf}, where the $\rho$-resonance was taken as an example and where it was pointed out that other resonances can be treated in the same way without additional difficulties. The $\rho$-resonance decays almost exclusively into two pions. The absolute value of the pion momentum is denoted by $k=|{\bf k}|$. The total energy of the scattered particles is $W=2(M_\pi^2+k^2)^{1/2}$ in the center of mass frame. The $\pi\pi$ scattering phase $\delta_{11}(k)$ in the isospin $I=1$, spin $J=1$ channel passes through $\pi/2$ at the resonance energy, which corresponds to a pion momentum $k$ equal to $k_\rho=(M_\rho^2/4-M_\pi^2)^{1/2}$. In the effective range formula $(k^3/W)\cdot\cot \delta_{11}=a+bk^2$, this behavior implies $a=-bk_\rho^2=4k_\rho^5/(M_\rho^2\Gamma_\rho)$, where $\Gamma_\rho$ is the decay width of the resonance (which can be parametrized by an effective coupling between the pions and the $\rho$).
The basic result of \cite{S_Luscher:1990ux} is that the finite-volume energy spectrum is still given by $W=2(M_\pi^2+k^2)^{1/2}$ but with $k$ being a solution of a complicated non-linear equation, which involves the $\pi\pi$ scattering phase $\delta_{11}(k)$ in the isospin $I=1$, spin $J=1$ channel and reads $n\pi-\delta_{11}(k)=\phi(q)$. Here $k$ is in the range $0<k<\sqrt{3}M_\pi$, $n$ is an integer, $q=kL/(2\pi)$ and $\phi(q)$ is a known kinematical function which we evaluate numerically for our analysis ($\phi(q)\propto q^3$ for small $q$ and $\phi(q)\approx \pi q^2$ for $q\ge 0.1$ to a good approximation; more details on $\phi(q)$ are given in Appendix A of \cite{S_Luscher:1991cf}). Solving the above equation leads to energy levels for different volumes and pion masses (for plots of these energy levels, see Figure 2 of \cite{S_Luscher:1991cf}). Thus, the spectrum is determined by the box length $L$, the infinite volume masses of the resonance $M_X$ and the two decay products $M_1$ and $M_2$ and one parameter, $g_X$, which describes the effective coupling of the resonance to the two decay products and is thus directly related to the width of the resonance. In the unstable channels, our volumes and masses result in resonance states $M_X$ which have lower energies than the scattering states (there are two exceptions, see later). In these cases $M_X$ can be accurately reconstructed from $L$, $M_1$, $M_2$ and $g_X$. However, since we do not want to rely on experimental inputs in our calculations of the hadron masses, we choose to use, for each resonance, our set of measurements for various $L$, $M_1$ and $M_2$ to determine both $M_X$ and $g_X$. With our choices of quark masses and volumes we find, despite limited sensitivity to the resonances' widths, that we can accurately determine the resonances' masses. Moreover, the finite volume corrections induced by these effects never exceed a few percent.
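For completeness, the determination of a finite-volume level from the phase shift can be sketched numerically. This is our own illustration with invented parameter values (in lattice units), using the quadratic approximation $\phi(q)\approx\pi q^2$ quoted above together with the effective range parametrization of $\delta_{11}$:

```python
import math

def delta11(k, mpi, mrho, gamma):
    """pi-pi P-wave phase shift from the effective range formula
    (k^3/W)*cot(delta11) = a + b*k^2, a = -b*k_rho^2 = 4*k_rho^5/(mrho^2*gamma);
    atan2 maps the phase to (0, pi), passing through pi/2 at k = k_rho."""
    krho2 = mrho ** 2 / 4 - mpi ** 2
    a = 4 * krho2 ** 2.5 / (mrho ** 2 * gamma)
    b = -a / krho2
    W = 2 * math.sqrt(mpi ** 2 + k ** 2)
    return math.atan2(k ** 3, W * (a + b * k ** 2))

def lowest_level(mpi, mrho, gamma, L, n=1):
    """Solve n*pi - delta11(k) = phi(q), q = k*L/(2*pi), by bisection on
    0 < k < sqrt(3)*mpi, with phi(q) ~ pi*q^2; returns the level energy W."""
    def f(k):
        q = k * L / (2 * math.pi)
        return n * math.pi - delta11(k, mpi, mrho, gamma) - math.pi * q ** 2
    lo, hi = 1e-4, math.sqrt(3) * mpi   # f(lo) > 0 > f(hi) for these parameters
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    k = 0.5 * (lo + hi)
    return 2 * math.sqrt(mpi ** 2 + k ** 2)

# invented values in lattice units: the level comes out close to mrho
print(lowest_level(mpi=0.1, mrho=0.35, gamma=0.04, L=32))
```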
In addition, the widths obtained in the analysis are in agreement with the experimental values, albeit with large errors. (For a precise determination of the width, which is not our goal here, one would preferably need more than one energy level obtained by cross-correlators. Such an analysis is beyond the scope of the present paper.) Out of the 14$\cdot$12=168 mass determinations (14 sets of lattice parameters/volumes--see Table~\ref{parameters}--and 12 hadrons) there are two cases for which $M_X$ is larger than the energy of the lowest scattering state. These exceptions are the $\rho$ and $\Delta$ for the lightest pion mass point at $a$$\approx$0.085~fm. Calculating the energy levels according to \cite{S_Luscher:1990ux,S_Luscher:1991cf} for these two isolated cases, one observes that the energy of the lowest lying state is already dominated by the contribution from the neighboring, two particle state. More precisely, this lowest state depends very weakly on the resonance mass, which therefore cannot be extracted reliably. In fact, an extraction of $M_X$ from the lowest lying state would require precise information on the width of the resonance. Since one does not want to include the experimental width as an input in an ab initio calculation, this point should not be used to determine $M_\rho$ and $M_\Delta$. Thus, for, and only for the $\rho$ and $\Delta$ channels, we left out this point from the analysis. \section*{Approaching the physical mass point and the continuum limit} We consider two different paths, in bare parameter space, to the physical mass point and continuum limit. These correspond to two different ways of normalizing the hadron masses obtained for a fixed set of bare parameters. For both methods we follow two strategies for the extrapolation to the physical mass point and apply three different cuts on the maximum pion mass. We also consider two different parameterizations for the continuum extrapolation. 
All residual extrapolation uncertainties are accounted for in the systematic errors. We carry out this analysis for the $\Xi$ and the $\Omega$ sets separately. We call the two ways of normalizing the hadron masses: 1. ``the ratio method'', 2. ``mass independent scale setting''. 1. The ratio method is motivated by the fact that in QCD one can calculate only dimensionless combinations of observables, e.g. mass ratios. Furthermore, in such ratios cancellations of statistical uncertainties and systematic effects may occur. The method uses the ratios $r_X$=$M_X$/$M_\Xi$ and parametrizes the mass dependence of these ratios in terms of $r_\pi$=$M_\pi$/$M_\Xi$ and $r_K$=$M_K$/$M_\Xi$. The continuum extrapolated two-dimensional surface $r_X$=$r_X$($r_\pi$,$r_K$) is an unambiguous prediction of QCD for a particle of type $X$ (a couple of points of this surface have been determined in \cite{S_Durr:2008rw}). One-dimensional slices ($2r_K^2-r_\pi^2$ was set to 0.27, its physical value) of the two-dimensional surfaces for $N$ and $\Omega$ are shown in Figure 2 of our paper. (Here we write the formulas relevant for the $\Xi$ set; analogous expressions hold for the $\Omega$ set. The final results are also given for the $\Omega$ set). A linear term in $r_K^2$ (or $M_K^2$) is sufficient for the small interpolation needed in the strange quark mass direction. On the other hand, our data is accurate enough that some curvature with respect to $r_\pi^2$ (or $M_\pi^2$) is visible in some channels. In order to perform an extrapolation to the physical pion mass one needs to use an expansion around some pion mass point. This point can be $r_\pi{=}0$ ($M_\pi{=}0$), which corresponds to chiral perturbation theory. Alternatively one can use a non-singular point which is in a range of $r_\pi^2$ (or $M_\pi^2$) which includes the physical and simulated pion masses. We follow both strategies (we call them ``chiral fit'' and ``Taylor fit'', respectively).
In addition to a linear expression in $M_\pi^2$, chiral perturbation theory predicts~\cite{S_Langacker:1974bc} an $M_\pi^3$ next-to-leading order behavior for masses other than those of the pseudo-Goldstone bosons. This provides our first strategy (``chiral fit''). A generic expansion of the ratio $r_X$ around a reference point reads: $r_X=r_X(ref)+\alpha_X[r_\pi^2-r_\pi^2(ref)]+\beta_X[r_K^2-r_K^2(ref)]+hoc$, where $hoc$ denotes higher order contributions. In our chiral fit, $hoc$ is of the form $r_\pi^3$, all coefficients are left free, the reference point is taken to be $r_\pi^2(ref){=}0$, and $r_K^2(ref)$ is the midpoint between our two values of $r_K^2$, which straddle $r_K^2(phys)$. The second strategy is a Taylor expansion in $r_\pi^2$ and $r_K^2$ around a reference point which does not correspond to any sort of singularity (``Taylor fit''). In this case, $r_K^2(ref)$ is again at the center of our fit range and $r_\pi^2(ref)$ is the midpoint of the region defined by the physical value of the pion mass and the largest simulated pion mass considered. This choice guarantees that all our points are well within the radius of convergence of the expansion, since the nearest singularities are at $M_\pi=0$ and/or $M_K=0$. Higher order contributions, $hoc$, of the form $r_\pi^4$ turned out to be sufficient. We extrapolate to the physical pion mass following both strategies (cubic term of the ``chiral fit'' or a quartic contribution of the ``Taylor fit''). The variations in our results which follow from the use of these different procedures are included in our systematic error analysis. The range of applicability of these expansions is not precisely known a priori. In the case of the two vector mesons, the coefficients of the higher order ($r_\pi^3$ or $r_\pi^4$) contributions were consistent with zero even when using our full pion mass range. Nevertheless, they are included in the analysis. For the baryons, however, the higher order contributions are significant.
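Both fit ansätze are linear in their coefficients, so each reduces to an ordinary least-squares problem. The following sketch realizes the ``chiral fit'' on synthetic data (coefficient values, reference points and the spread of points are invented for illustration):

```python
import numpy as np

def chiral_fit(rpi, rK, rX, rpi_ref=0.0, rK_ref=0.30):
    """Fit r_X = r_X(ref) + alpha*(r_pi^2 - rpi_ref^2)
                         + beta*(r_K^2 - rK_ref^2) + gamma*r_pi^3
    by linear least squares; returns (r_X(ref), alpha, beta, gamma)."""
    rpi, rK, rX = map(np.asarray, (rpi, rK, rX))
    A = np.column_stack([np.ones_like(rpi),
                         rpi ** 2 - rpi_ref ** 2,
                         rK ** 2 - rK_ref ** 2,
                         rpi ** 3])
    coeffs, *_ = np.linalg.lstsq(A, rX, rcond=None)
    return coeffs

# synthetic data generated from known (invented) coefficients:
rng = np.random.default_rng(1)
rpi = rng.uniform(0.1, 0.5, size=14)       # 14 simulation points, as in the text
rK = rng.uniform(0.28, 0.32, size=14)
true = (0.70, 0.45, 0.80, -0.25)           # (r_ref, alpha, beta, gamma)
rX = true[0] + true[1] * rpi ** 2 + true[2] * (rK ** 2 - 0.09) + true[3] * rpi ** 3
fit = chiral_fit(rpi, rK, rX)
print(np.round(fit, 3))                    # recovers the input coefficients
```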
The difference between the results obtained with the two approaches gives some indication of the possible contributions of yet higher order terms not included in our fits. To quantify these contributions further, we consider three different ranges of pion mass. In the first one we include all 14 simulation points, in the second one we keep points up to $r_\pi=0.38$ (thus dropping two pion mass points) and in the third one we apply an even stricter cut at $r_\pi=0.31$ (which corresponds to omitting the five heaviest points). The pion masses which correspond to these cuts will be given shortly. The differences between results obtained using these three pion mass ranges are included in the systematic error analysis. To summarize, the ``ratio method'' uses the input data $r_X$, $r_\pi$ and $r_K$ to determine $r_X(ref)$, $\alpha_X$ and $\beta_X$ and, based on them, we obtain $r_X$ at the physical point. The determination of this value is done with the two fit strategies (``chiral'' and ``Taylor'') for all three pion mass ranges. 2. The second, more conventional method (``mass independent scale setting'') consists of first setting the lattice spacing by extrapolating $M_\Xi$ to the physical point, given by the physical ratios of $M_\pi/M_\Xi$ and $M_K/M_\Xi$. Using the resulting lattice spacings obtained for each bare gauge coupling, we then proceed to fit $M_X$ vs. $M_\pi$ and $M_K$ applying both extrapolation strategies (``chiral'' and ``Taylor'') discussed above. We use the same three pion mass ranges as for the ``ratio method'': in the first all simulation points are kept, in the second we cut at $M_\pi{=}560$~MeV and in the third this cut is brought down to $M_\pi{=}450$~MeV.
\bigskip As shown in the $2{+}1$ flavor scaling study of \cite{S_Durr:2008rw}, typical hadron masses, obtained in calculations which are performed with our $O(a)$-improved action, deviate from their continuum values by less than approximately 1\% for lattice spacings up to $a\approx 0.125\,\mathrm{fm}$. Moreover, \cite{S_Durr:2008rw} shows that these cutoff effects are linear in $a^2$ as $a$ is scaled from $a\sim 0.065\,\mathrm{fm}$ to $a\sim 0.125\,\mathrm{fm}$ and even above. Thus, we use the results obtained here, for three values of the lattice spacing down to $a\sim 0.065\,\mathrm{fm}$, to extrapolate away these small cutoff effects, by allowing $r_X(ref)$ (or $M_X(ref)$) to acquire a linear dependence in $a^2$. In addition to the extrapolation in $a^2$, we perform an extrapolation in $a$ and use the difference as an estimate for possible contributions of higher order terms not accounted for in our continuum extrapolation. The physical mass and continuum extrapolations are carried out simultaneously in a combined, correlated analysis. \section*{Statistical and systematic error analysis} Systematic uncertainties are accounted for as described above. In addition, to estimate the possible contributions of excited states to our extraction of hadron masses from the time-dependence of two-point functions, we consider 18 possible time intervals whose initial time varies from low values, where excited states may contribute, to higher values, where the quality of the fits clearly indicates the absence of such contributions. Since the light hadron spectrum is known experimentally it is of extreme importance to carry out a blind data analysis. One should avoid any arbitrariness related e.g. to the choice of some fitting intervals or pre-specified coefficients of the chiral fit. We follow an extended frequentist's method \cite{S_Yao:2006px}. 
To this end we combine several possible sets of fitting procedures (without imposing any additional information for the fits) and weight them according to their fit quality. Thus, we have 2 normalization methods, 2 strategies to extrapolate to the physical pion mass, 3 pion mass ranges, 2 different continuum extrapolations and 18 time intervals for the fits of two point functions, which result in 2$\cdot$2$\cdot$3$\cdot$2$\cdot$18=432 different results for the mass of each hadron. In lattice QCD calculations, electromagnetic interactions are absent and isospin is an exact symmetry. Electromagnetic and isospin breaking effects are small, typically a fraction of 1\% in the masses of light vector mesons and baryons~\cite{S_Gasser:1982ap}. Moreover, electromagnetic effects are a small fraction of the mass difference between the members of the same isospin multiplet~\cite{S_Gasser:1982ap}. We account for these effects by isospin averaging the experimental masses to which we compare our results. This eliminates the leading isospin breaking term, leaving behind effects which are only a small fraction of 1\%. For the pion and kaon masses, we use isospin averaging and Dashen's theorem~\cite{S_Dashen:1969eg}, which determines the leading order electromagnetic contributions to these masses. Higher order corrections, which we neglect in our work, are expected to be below the 3 per mil level (see e.g. \cite{S_Aubin:2004fs}). All of these residual effects are very small, and it is safe to neglect them in comparing our results to experiment. The central value and systematic error bar for each hadron mass is determined from the distribution of the results obtained from our 432 procedures, each weighted by the corresponding fit quality. This distribution for the nucleon is shown in Figure \ref{err_distr}. The central value for each hadron mass is chosen to be the median of the corresponding distribution. The systematic error is obtained from the central 68\% confidence interval. 
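The procedure just described amounts to a weighted median with a central 68\% interval. A minimal sketch, using a randomly generated stand-in for the 432 weighted results (the spread and fit-quality weights are invented for illustration):

```python
import numpy as np

def weighted_median_68(values, weights):
    """Median and symmetrized central 68% interval of a fit-quality-weighted
    distribution of results (one entry per analysis procedure)."""
    order = np.argsort(values)
    v = np.asarray(values, dtype=float)[order]
    w = np.asarray(weights, dtype=float)[order]
    cdf = np.cumsum(w) / np.sum(w)
    # first value whose cumulative weight reaches each quantile
    lo, med, hi = (v[np.searchsorted(cdf, q)] for q in (0.16, 0.50, 0.84))
    return med, 0.5 * (hi - lo)

# stand-in for the 432 nucleon-mass results (MeV) and their fit qualities
rng = np.random.default_rng(1)
masses = rng.normal(936.0, 12.0, size=432)
weights = rng.uniform(0.2, 1.0, size=432)
m_N, syst_err = weighted_median_68(masses, weights)
```

The statistical error is then obtained by repeating this construction on bootstrap samples and taking the central 68\% interval of the bootstrapped medians.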
To calculate statistical errors, we repeat the construction of these distributions for 2000 bootstrap samples. We then build the bootstrap distribution of the medians of these 2000 distributions. The statistical error (SEM) on a hadron mass is given by the central 68\% confidence interval of the corresponding bootstrap distribution. These systematic and statistical errors are added in quadrature, yielding our final error bars. The individual components of the total systematic error are given in Table~\ref{error_budget}. \clearpage \begin{table} \begin{center} \begin{tabular}{|l|l|l|l|l|} \hline $\beta$& $am_{ud}$& $am_s$ & $L^3\cdot T$ & \# traj.\\ \hline\hline \multirow{5}{*}{3.3} & -0.0960 & -0.057 & $16^3\cdot 32$ & 10000 \\ & -0.1100 & -0.057 & $16^3\cdot 32$ & 1450\\ & -0.1200 & -0.057 & $16^3\cdot 64$ & 4500 \\ & -0.1233 & -0.057 & $16^3\cdot 64$ / $24^3\cdot 64$ / $32^3\cdot 64$ & 5000 / 2000 / 1300 \\ & -0.1265 & -0.057 & $24^3\cdot 64$ & 2100 \\ \hline \multirow{4}{*}{3.57} & -0.0318 & 0.0 / -0.01 & $24^3\cdot 64$ & 1650 / 1650\\ & -0.0380 & 0.0 / -0.01 & $24^3\cdot 64$ & 1350 / 1550\\ & -0.0440 & 0.0 / -0.007& $32^3\cdot 64$ & 1000 / 1000\\ & -0.0483 & 0.0 / -0.007& $48^3\cdot 64$ & 500 / 1000\\ \hline \multirow{5}{*}{3.7} & -0.0070 & 0.0 & $32^3\cdot 96$ & 1100\\ & -0.0130 & 0.0 & $32^3\cdot 96$ & 1450\\ & -0.0200 & 0.0 & $32^3\cdot 96$ & 2050\\ & -0.0220 & 0.0 & $32^3\cdot 96$ & 1350\\ & -0.0250 & 0.0 & $40^3\cdot 96$ & 1450\\ \hline \end{tabular} \end{center} \caption{\label{parameters} Bare lagrangian parameters, lattice sizes and statistics. The table summarizes the 14 simulation points at three different lattice spacings ordered by the light quark masses. Note that due to the additive mass renormalization, the bare mass parameters can be negative. At each lattice spacing 4-5 light quark masses are studied. The results of all these simulations are used to perform a combined mass and continuum extrapolation to the physical point. 
In addition, for one set of Lagrangian parameters, different volumes were studied and four of our simulations at $\beta$=3.57 were repeated with different strange quark masses. } \end{table} \begin{table} \begin{center} \begin{tabular}{|l|l|l|l|l|} \hline & continuum extrapolation & chiral fits/normalization & excited states& finite volume \\ \hline\hline $\rho$ & 0.20 & 0.55 & 0.45 & 0.20 \\ $K^*$ & 0.40 & 0.30 & 0.65 & 0.20 \\ $N$ & 0.15 & 0.90 & 0.25 & 0.05 \\ $\Lambda$ & 0.55 & 0.60 & 0.40 & 0.10 \\ $\Sigma$ & 0.15 & 0.85 & 0.25 & 0.05 \\ $\Xi$ & 0.60 & 0.40 & 0.60 & 0.10 \\ $\Delta$ & 0.35 & 0.65 & 0.95 & 0.05 \\ $\Sigma^*$ & 0.20 & 0.65 & 0.75 & 0.10 \\ $\Xi^*$ & 0.35 & 0.75 & 0.75 & 0.30 \\ $\Omega$ & 0.45 & 0.55 & 0.60 & 0.05 \\ \hline \end{tabular} \end{center} \caption{\label{error_budget} Error budget given as fractions of the total systematic error. Results represent averages over the $\Xi$ and $\Omega$ sets. The columns correspond to the uncertainties related to the continuum extrapolation (${\cal O}(a)$ or ${\cal O}(a^2)$ behavior), to the extrapolation to the physical pion mass (obtained from chiral/Taylor extrapolations for each of three possible pion mass intervals using the ratio method or the mass independent scale setting), to possible excited state contamination (obtained from different fit ranges in the mass extractions), and to finite volume corrections (obtained by including or not including the leading exponential correction). If combined in quadrature, the individual fractions do not add up to exactly 1. The small ($\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}} 20\%$) differences are due to correlations, the non-Gaussian nature of the distributions and the fact that the very small finite volume effects are treated like corrections in our analysis, not contributions to the systematic error (the effect of yet higher order corrections is completely negligible). 
The finite volume corrections of the decuplet resonances increase with increasing strange content. This is only due to the fact that these are fractions of decreasing total systematic errors. The absolute finite volume corrections of these resonances are on the same level. } \end{table} \begin{figure} \centerline{ \includegraphics[width=8cm]{plots/gppi.eps} \hspace*{0.1cm} \includegraphics[width=8cm]{plots/gpnucl.eps} } \caption{\label{sources} Effective masses for different source types in the pion (left panel) and nucleon (right panel) channels. Point sources have vanishing extents, whereas Gaussian sources, used on Coulomb gauge fixed configurations, have radii of approximately 0.32~fm. Clearly, the extended sources/sinks result in much smaller excited state contamination. } \end{figure} \begin{figure} \centerline{\includegraphics[width=10cm]{plots/ff.eps}} \caption{\label{ferm_force} Forces in the molecular dynamics time history. We show this history for a typical sample of trajectories after thermalization. Since the algorithm is more stable for large pion masses and spatial sizes, we present --as a worst case scenario-- the fermionic force for our smallest pion mass ($M_\pi{\approx}190$~MeV; $M_\pi L{\approx}4$). The gauge force is the smoothest curve. Then, from bottom to top, there are the pseudofermion 1, pseudofermion 2, strange quark and pseudofermion 3 forces, in order of decreasing mass. No sign of instability is observed. } \end{figure} \begin{figure} \centerline{\includegraphics[width=10cm]{plots/loc.eps}} \caption{\label{locality} Locality properties of the Dirac operator used in our simulations. In the literature, the term locality is used in two different ways (see e.g. \cite{S_Hernandez:1998et,S_Kovacs:2002nz,S_Durr:2005ax}). Our Dirac operator is ultralocal in both senses. 
First of all (type A locality), in the sum $\sum_{x,y} {\bar \psi(x)} D(x,y)\psi(y)$ the non-diagonal elements of our $D(x,y)$ are by definition strictly zero for all $(x,y)$ pairs except for nearest neighbors. The figure shows the second aspect of locality (type B), i.e., how $D(x,y)$ depends on the gauge field $U_\mu$ at some distance $z$: $\|\partial D(x,y)/\partial U_\mu(x+z)\|$. In the analyses we use the Euclidean metric for $| z |$. We take the Frobenius norm of the resulting antihermitian matrix and sum over spin, color and Lorentz indices. An overall normalization is performed to ensure unity at $| z |$=0. The action is by definition ultralocal, thus $\|\partial D(x,y)/\partial U_\mu(x+z)\|$ depends only on gauge field variables residing within a fixed range. Furthermore, within this ultralocality range the decay is, to a very good approximation, exponential with an effective mass of about 2.2$a^{-1}$. This is much larger than any of our masses, even on the coarsest lattices. } \end{figure} \begin{figure} \centerline{ \includegraphics[width=8.15cm]{plots/fvpion.eps} \hspace*{0.1cm} \includegraphics[width=8cm]{plots/fvnucl.eps} } \caption{\label{fin_volume} Volume dependence of the $\pi$ (left panel) and $N$ (right panel) masses for one of our simulation points corresponding to $a\approx0.125\,\mathrm{fm}$ and $M_\pi\approx 320$~MeV. The results of fits to the form $c_1+c_2 \exp(-M_\pi L)/(M_\pi L)^{3/2}$ are shown as the solid curves, with $c_1=aM_X(L=\infty)$ and $c_2=ac_X(M_\pi)$ given in the text ($X=\pi, N$ for pion/nucleon). The dashed curves correspond to fits with the $c_2$ of refs.~\cite{S_Colangelo:2005gd,S_Colangelo:2005cg}.} \end{figure} \begin{figure} \centerline{ \includegraphics[width=12cm]{plots/nucl_histo.eps} } \caption{\label{err_distr} Distribution used to estimate the central value and systematic error on the nucleon mass. The distribution was obtained from 432 different fitting procedures as explained in the text. 
The median is shown by the arrow. The experimental value of the nucleon mass is indicated by the vertical line. } \end{figure} \clearpage \makeatletter \renewcommand\@biblabel[1]{S#1.} \makeatother
\section{Introduction} The literature on dynamical networks has predominantly focused on emergence and control of global internal properties such as synchronization or consensus \cite{pecora,renbeard}. Recently, several studies considered input-to-output or transfer characteristics of dynamical networks. In particular, input-output analyses have been motivated by questions regarding the disturbance responses of networks as they are scaled \cite{studli2017vehicular,besselink2018scalable,mirabilio2021scalable}; these efforts generalize notions of string stability \cite{swaroop1996string} toward general network structures. In a separate track, input-to-output analyses have also been motivated by controller design needs in infrastructure networks with sparse actuation and measurement capabilities \cite{pirani,kooreh,mahia,abad2014graph}. Additionally, questions related to security and estimability of network processes have motivated input-output analyses \cite{vosughi,pasqualetti}. Prior input-to-output analyses of dynamical networks have been concerned with particular metrics arising from application contexts, such as $l_\infty$ gains or transfer function zeros, and have primarily been focused on tying the input-output metrics to global graph properties (e.g. \cite{abad2014graph}). Relative to these efforts, our main contribution in this short article is to demonstrate that a compendium of input-output metrics exhibit a special spatial pattern, for a canonical network synchronization/consensus model. In particular, we study a standard discrete-time model for consensus/synchronization among nodes with scalar states \cite{xiao2004fast}. For this model, we consider several input-to-output metrics between node pairs, including $l_p$ gains, frequency responses, frequency-band energy, and Markov parameters. 
The main outcome of our analyses is to show that these metrics satisfy a {\em decrescence property}, wherein they are nonincreasing in value across cutsets away from an input node. Additionally, two implications of the spatial pattern analysis are developed in brief vignettes. First, the analysis is used to give some insights into signal-to-noise ratios in network measurements, which bear on estimation/detection and sensor placement. Second, the analysis is used to define a notion of input-output or propagation stability for networks, and to verify propagation stability for the canonical model regardless of model scale (size). \section{Model and Goals} A dynamical network with $n$ agents or nodes is considered. Each node $i=1,\hdots, n$ has a scalar state $x_i(k)$ which evolves in discrete time ($k=0,1,2,\hdots$), via linear diffusive interactions with other nodes. Here, our primary interest is in characterizing transfer characteristics in the network, hence we also model an exogenous input $u(k)$ applied at one source node $s$ and an output $y(k)$ taken at a target node $i$. In particular, the following single-input single-output dynamics is considered: \begin{eqnarray} & & X(k+1) = AX(k) + e_{s}u(k) \label{eq:1}\\ & & y(k) = e_{i}^TX(k) \nonumber \end{eqnarray} where $X(k)=\begin{bmatrix} x_1(k) \\ \vdots \\ x_n(k) \end{bmatrix}$, $A = [a_{ij}]$ is assumed to be row stochastic or substochastic (i.e. $a_{ij}\geq 0$ and $\sum_{j} a_{ij} \le 1$), and the notation $e_b$ is used for a $0$--$1$ indicator vector whose $b$th entry is equal to $1$. The zero-state response of the system (\ref{eq:1}) is of interest in this study. The model (\ref{eq:1}) is a standard model for network synchronization or consensus (for stochastic $A$), and also is representative of other diffusive processes in networks. A digraph $\Gamma$ is used to represent pairwise interactions among the nodes in the network. 
Specifically, the graph $\Gamma$ is defined to have $n$ vertices labeled $1,\hdots, n$, which correspond with the $n$ network nodes. A directed edge is drawn from vertex $j$ to vertex $l$ if $a_{lj}>0$, which indicates that the state of vertex $j$ directly influences that of vertex $l$. The vertices $s$ and $i$ are referred to as the source and target vertices, respectively. To simplify the development, we assume throughout that $\Gamma$ is strongly connected, i.e., there is a directed path from each vertex to every other vertex. The strongly-connected case is of primary interest when considering transfer characteristics; although the results can be generalized to non-strongly-connected cases, these details are omitted. Our primary aim in this study is to compare input-output metrics for the network model (\ref{eq:1}) for different target vertex locations $i$ in $\Gamma$, so as to characterize the spatial pattern of the input-output response. The specific metrics that we consider for the system (\ref{eq:1}) for a particular target location $i$ are: \begin{itemize} \item The {\bf $l_p$ gain} $G_p(i,k_f)$ over a time horizon $k_f$, defined as the maximum of the quantity $\left[\sum_{k=0}^{k_f} |y(k)|^p\right]^\frac{1}{p}$ over all inputs $u(0),\hdots, u(k_f-1)$ such that $\left[\sum_{k=0}^{k_f} |u(k)|^p\right]^\frac{1}{p}=1$. \item The {\bf frequency response} $H_{i}(e^{j\Omega})$, where $\Omega$ is the frequency of the (discrete time) input and $H_i()$ is the system's transfer function. Additionally, the responses to arbitrary periodic inputs, and the signal content in a frequency band, are also characterized. \item The {\bf Markov parameters} $M_i(k)={\bf e}_i^T A^k {\bf e}_s$ for $k=0,1,2,\hdots$. \end{itemize} These metrics together form the basis for the external stability analysis of linear systems, and they inform estimation and feedback controller design. 
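To make the metrics concrete, the following sketch evaluates the Markov parameters and frequency-response magnitudes for a small illustrative instance of model (\ref{eq:1}): a lazy random walk on a 6-node path graph driven at one end (the weights and the probe frequency are hypothetical choices, not taken from any application):

```python
import numpy as np

# Row-stochastic A for a 6-node path graph with self-loops (lazy random walk)
n, s = 6, 0                                  # source node s (0-indexed)
A = np.zeros((n, n))
for i in range(n):
    nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n]
    A[i, i] = 0.5
    for j in nbrs:
        A[i, j] = 0.5 / len(nbrs)

# Markov parameters M_i(k) = e_i^T A^k e_s for k = 0..K-1
K = 25
M = np.zeros((K, n))
x = np.eye(n)[s]
for k in range(K):
    M[k] = x                                 # at this point x = A^k e_s
    x = A @ x

peak = M.max(axis=0)                         # peak impulse response per node

# Frequency-response magnitudes |H_i(e^{jw})| at one nonzero frequency
w = np.pi / 4
H = np.linalg.solve(np.exp(1j * w) * np.eye(n) - A, np.eye(n)[s])
mags = np.abs(H)
```

For this example both the peak impulse responses and the response magnitudes at $\Omega=\pi/4$ decay monotonically along the path away from the source, in line with the spatial pattern results established below.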
Conceptually, the diffusive structure of the dynamics (\ref{eq:1}) suggests that inputs should have a localized sphere of influence, and hence the input-output metrics should exhibit a spatial degradation with distance from the source. Our goal is to provide a formal characterization of this spatial falloff, and to develop implications of this spatial analysis. \section{Main Results: Spatial Pattern Analysis} Graph theoretic analyses are developed for the input-output metrics defined for the system (\ref{eq:1}). The main results show that the metrics ($l_p$ gain, frequency response, Markov parameters) for different target locations fall off monotonically along graph cutsets away from the source location. To formalize these notions, it is convenient to define the notion of a {\bf separating cutset} for a graph. To do so, let us consider a network graph $\Gamma$ with a source vertex $s$ and a particular target vertex $i=q^*$. A set of vertices $C=\{ c(1),c(2),...,c(m) \}$ (which does not contain $s$ and $q^*$) is said to be a separating cutset if every directed path from $s$ to $q^*$ passes through at least one vertex in $C$. The concept is illustrated in Figure \ref{fig:1}. We also find it convenient to use the notation $Z$ for the partition of $\Gamma$ containing the target vertex, upon removal of the separating cutset. We refer to $Z$ as the {\bf target partition}. \begin{figure}[!htb] \centering \vspace{-0.2cm} \includegraphics[width=0.45\textwidth]{Figs/SeparatingCutset.jpg} \vspace{-0.2cm} \caption{Illustration of a separating cutset.} \label{fig:1} \vspace{-10 pt} \end{figure} The spatial pattern analyses of the input-output metrics depend on a reformulation of system (\ref{eq:1}), wherein the target node's state dynamics are expressed in terms of the states of nodes corresponding to a separating cutset, rather than directly in terms of the input. 
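The separating-cutset condition can also be checked computationally: $C$ separates $s$ from $q^*$ exactly when $q^*$ is unreachable from $s$ once the vertices of $C$ are deleted. A small sketch with a hypothetical digraph:

```python
from collections import deque

def is_separating_cutset(adj, s, q_star, C):
    """True iff every directed path from s to q_star meets C, i.e. q_star
    is unreachable from s after deleting the vertices in C (BFS check)."""
    blocked = set(C)
    if s in blocked or q_star in blocked:
        raise ValueError("C may not contain the source or the target")
    seen, queue = {s}, deque([s])
    while queue:
        v = queue.popleft()
        if v == q_star:
            return False              # found an s -> q_star path avoiding C
        for w in adj.get(v, []):
            if w not in seen and w not in blocked:
                seen.add(w)
                queue.append(w)
    return True

# hypothetical digraph: adj[j] lists the vertices l with a_{lj} > 0
adj = {0: [1, 2], 1: [3], 2: [3], 3: [4], 4: [0]}
```

For this graph, both $\{3\}$ and $\{1,2\}$ separate vertex $0$ from vertex $4$, whereas $\{1\}$ alone does not (the path $0\to 2\to 3\to 4$ avoids it).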
To develop this reformulation, let us define the {\bf cutset state vector} $X_c(k)$ as containing the time-$k$ states of the network nodes corresponding to the vertices in $C$. Likewise, we define the {\bf target partition state vector} $X_z(k)$ as containing the time-$k$ states corresponding to the vertices in $Z$ (including $q^*$). The dynamical update of the target partition state vector can be expressed in terms of only the current cutset state and target partition state: \begin{equation}\label{eq:2} X_{z}(k+1) = A_zX_z(k) + B X_c(k) \end{equation} where $A_z$ is the principal submatrix of $A$ formed by selecting the rows and columns of $A$ indicated in $Z$; and $B$ is a submatrix of $A$ formed by selecting the rows indicated in $Z$ and the columns indicated in $C$. The main graph-theoretic results build on the following characterization of the reformulated state dynamics (\ref{eq:2}): \begin{lemma} Assume that the network model (\ref{eq:1}) is initially relaxed and an exogenous input $u(k)$ is applied at the source node $s$. Consider any separating cutset $C$ and corresponding target partition $Z$. The target partition state vector over the interval $k=0,1,\hdots, k_f$ can be expressed in terms of the cutset state vector as follows: \begin{equation}\label{eq:3} {\small \begin{bsmallmatrix} X_{z}(0)\\ \vdots \\ X_{z}(k_f) \end{bsmallmatrix}} = Q_{k_f}{\small \begin{bsmallmatrix} X_{c}(k_f)\\ \vdots \\X_{c}(0) \end{bsmallmatrix}}, \end{equation} \noindent where \begin{equation}\label{eq:4} Q_k = {\small \begin{bmatrix} 0&\dots&0&0\\ 0&\dots&0&B\\ 0&\dots&B&A_zB\\ \vdots&\ddots&\vdots & \vdots\\ 0&\dots&A_z^{k-2}B&A_z^{k-1}B \end{bmatrix}}. \end{equation} Further, the matrix $Q_{k_f}$ has nonnegative entries, and the sum of the entries in each row is at most $1$. \end{lemma} \begin{proof} The target partition state vector can be computed in terms of the cutset state vector by solving (\ref{eq:2}). 
This gives: \begin{equation}\label{eq:5} X_{z}(k) = \sum_{j = 0}^{k-1} A_z^{j}BX_c(k-j-1) \end{equation} Assembling the responses for $k=0,\hdots, k_f$ then immediately yields Equations (\ref{eq:3}) and (\ref{eq:4}) in the lemma statement. The entries in $Q_{k_f}$ are nonnegative, as $A_z$ and $B$ are nonnegative. To characterize the row sums for $Q_{k_f}$, let us first consider the last block of rows $\hat{Q}_{k_f} = [0 \quad B \hdots A_z^{k_f-2}B \quad A_z^{k_f-1}B]$. We characterize the row sums of $\hat{Q}_{k_f}$ by considering powers of the following matrix $F$: \begin{equation*} F = {\small \begin{bmatrix} A_z&B\\ \mathbf{0} & I_m\end{bmatrix}} \end{equation*} The $k_f$th power of matrix $F$ is given by: \begin{equation*} F^{k_f} = {\small \begin{bmatrix} A^{k_f}_{z}& \sum_{k=0}^{k_f-1}A^{k}_{z}B\\ \mathbf{0} & I_m\end{bmatrix}} \end{equation*} Since $[A_z \quad B]$ is a submatrix of $A$, it is immediate that $F$ is a stochastic or substochastic matrix. Consequently, the row sums of $F^{k_f}$ are also at most $1$. From the expression for $F^{k_f}$, it follows that the matrix $\sum_{k=0}^{k_f-1} A_z^kB$ has row sums of at most $1$. However, the row sums of $\sum_{k=0}^{k_f-1} A_z^kB$ and the row sums of the matrix $\hat{Q}_{k_f}$ are identical. Hence, the row sums of $\hat{Q}_{k_f}$ are upper bounded by $1$. Further, the sum of each row of the matrix $Q_{k_f}$ is upper bounded by one of the row sums of $\hat{Q}_{k_f}$. The result thus follows. \end{proof} Our first main result is a graph-theoretic characterization of the $l_p$ gains of the input-output model (\ref{eq:1}): \begin{theorem} Consider the $l_p$ gain of the network model (\ref{eq:1}) for a particular target node $i=q^*$, i.e. $G_p(q^*,k_f)$. Also consider the $l_p$ gains when the target node is alternately on a separating cutset $C=\{ c(1),c(2),...,c(m) \}$, i.e. $G_p(c(i),k_f)$. For any time horizon $k_f$, it holds that $G_p(q^*,k_f)\leq G_p(c(i),k_f)$ for some $i=1,2,\hdots,m$. 
\end{theorem} \begin{proof} Theorem 1 is verified for finite $p$ (i.e., $1 \le p < \infty$), and then for $p \to \infty$. \begin{case} $1\leq p<\infty$ \end{case} The state of any node $q$ other than the source $s$ evolves as: \begin{equation}\label{eq:8} x_{q}(k+1) = a_{qq}x_q(k) + \sum_{j \in N(q)} a_{qj}x_{j}(k) \end{equation} \noindent where $N(q)$ contains the upstream neighbors of vertex $q$ in $\Gamma$ (the vertices with directed edges to $q$). Notice that $a_{qj} > 0$ for $j \in N(q)$ and $\sum_{j=1}^{n} a_{qj} \leq 1$. For any $1\leq p< \infty$, the function $f(z) = |z|^p$ is convex over all $z$. Considering $z = x_q(k+1)$, it follows immediately that \begin{equation} \begin{split} |x_{q}(k+1)|^p \leq a_{qq}|x_q(k)|^p + \sum_{j \in N(q)} a_{qj}|x_{j}(k)|^p \end{split} \end{equation} Summing the inequality for $k=0,\hdots, k_f$ yields: \begin{equation}\label{eq:10} \sum_{k = 0}^{k_f}|x_{q}(k+1)|^p \leq a_{qq} \sum_{k = 0}^{k_f} |x_q(k)|^p + \sum_{j \in N(q)} a_{qj}\sum_{k = 0}^{k_f} |x_{j}(k)|^p \end{equation} Rewriting the leftmost sum and using $x_q(0)=0$, we get: \begin{multline}\label{eq:11} |x_{q}(k_f+1)|^p+\sum_{k = 0}^{k_f}|x_{q}(k)|^p \leq a_{qq} \sum_{k = 0}^{k_f} |x_q(k)|^p + \\ \sum_{j \in N(q)} a_{qj}\sum_{k = 0}^{k_f} |x_{j}(k)|^p \end{multline} As $|x_{q}(k_f+1)|^p$ is nonnegative, we then obtain: \begin{equation}\label{eq:12} \sum_{k = 0}^{k_f}|x_{q}(k)|^p \leq \sum_{j \in N(q)} \frac{a_{qj}}{1-a_{qq}}\sum_{k = 0}^{k_f} |x_{j}(k)|^p \end{equation} Let us define $W_j = \frac{a_{qj}}{1-a_{qq}}$. Notice that $W_j > 0$ for $j \in N(q)$ and $\sum_{j \in N(q)} W_j \leq 1$. In this notation, the equation (\ref{eq:12}) can be written as: \begin{equation}\label{eq:13} \sum_{k = 0}^{k_f}|x_{q}(k)|^p \leq \sum_{j \in N(q)} W_{j}\sum_{k = 0}^{k_f} |x_{j}(k)|^p \end{equation} The term $\sum_{k = 0}^{k_f}|x_{j}(k)|^p$ is the $p$th power of the $p$-norm of the signal $x_{j}(k)$ over the interval $[0,k_f]$. 
Since equation (\ref{eq:13}) holds for any input, it follows that the $l_p$ gains satisfy the following for any node $q$ other than the source $s$: \begin{equation}\label{eq:14} (G_p(q,k_f))^p \leq \sum_{j \in N(q)} W_{j} (G_p(j,k_f))^p \end{equation} Now consider the target vertex $q^*$. From equation (\ref{eq:14}), $G_p(q^*,k_f) \leq G_p(j,k_f)$ for some $j\in N(q^*)$, and further the inequality is strict unless $G_p(j,k_f)=G_p(q^*,k_f)$ for all $j \in N(q^*)$. Then choose a neighbor $\hat{j}$ for which $G_p(j,k_f)$ is maximum. Repeating this argument for $\hat{j}$, we see that there is a vertex other than $q^*$ and $\hat{j}$, say $\overline{j}$, such that $G_p(\hat{j},k_f) \leq G_p(\overline{j},k_f)$. Iterating, we finally get that $G_p(q^*,k_f)\leq G_p(c(i),k_f)$ for some $i=1,2,\hdots,m$. \begin{case} $p \rightarrow \infty$ \end{case} The proof follows readily from Lemma 1. Specifically, consider the separating cutset $C$ and the resulting target partition $Z$. Applying Equation (\ref{eq:3}) in Lemma 1 and then taking the infinity norm of both sides of Equation (\ref{eq:3}), we obtain: \begin{equation}\label{eq:15} \|{\small \begin{bmatrix} X_z(0) \\ \vdots \\ X_z(k_f) \end{bmatrix}}\|_\infty \leq \|Q_{k_f}\|_\infty \|{\small \begin{bmatrix} X_c(k_f)\\ \vdots \\ X_c(0) \end{bmatrix}}\|_\infty \end{equation} The maximum row sum of $Q_{k_f}$ is $1$ and hence $ \|Q_{k_f}\|_\infty \leq 1$. As a result, we get the following inequality: \begin{equation}\label{eq:16} \|{\small \begin{bmatrix} X_z(0) \\ \vdots \\ X_z(k_f) \end{bmatrix}}\|_\infty \leq \|{\small \begin{bmatrix} X_c(k_f)\\ \vdots \\ X_c(0) \end{bmatrix}}\|_\infty \end{equation} Equation (\ref{eq:16}) holds for all inputs $u(k)$, including the input that maximizes the $\infty$-norm when the target node is $q^*$. 
Since $q^* \in Z$, it follows immediately from (\ref{eq:16}) that $G_p(q^*,k_f)$ is upper bounded by the maximum magnitude entry in $ \begin{bmatrix} X_c(k_f)\\ \vdots \\ X_c(0) \end{bmatrix}$ for this input. Therefore, $G_p(q^*,k_f)\leq G_p(c(i),k_f)$ for some $i=1,2,\hdots,m$. \end{proof} \begin{remark} The spatial pattern on the $l_p$ gains also holds in the infinite-horizon limit ($G_p(q^*,k_f)$ for $k_f \rightarrow \infty$), provided that the gain is finite. A finite gain is achieved if $A$ is strictly substochastic (i.e., at least one row sum is strictly less than $1$), since the matrix $A$ is irreducible (the graph $\Gamma$ is strongly connected) by assumption. \end{remark} \begin{remark} Per Theorem 1, the spatial degradation of the $l_p$ norms holds for model (\ref{eq:1}) for any input signal. It follows that the spatial result of the $l_p$ gains also holds for the closed-loop system, when a feedback is applied from the target to the source node. \end{remark} \begin{remark} The proof also trivially extends to mixed-norm gains, since the spatial pattern holds for any input. \end{remark} Next, the response of the input-output model (Equation 1) to a periodic input, and hence the frequency response, is also verified to exhibit a spatial degradation: \begin{theorem} Consider the zero-state (i.e. $X(0)=0$) response of the network input-output model (\ref{eq:1}), under the assumption that the matrix $A$ is strictly substochastic (i.e. at least one row sum is strictly less than $1$). Consider a particular target vertex $i=q^*$. Also, consider any separating cutset $C=\{ c(1),c(2),...,c(m) \} $. Then the following statements are true: \begin{enumerate}[label=(\roman*)] \item For a periodic input $u(k)$, the response at the target vertex $q^*$, $x_{q^*}(k)\leq \max \{0, \max_{\widehat{k}=0,1,\hdots,k} x_{c(i)}(\widehat{k})\}$ for some $i=1,\hdots,m$. Also, $x_{q^*}(k)\geq \min \{0, \min_{\widehat{k}=0,1,\hdots,k} x_{c(j)}(\widehat{k})\}$ for some $j=1,\hdots,m$. 
\item At each frequency $\Omega$, $|H_{q^*}(e^{j\Omega})| \leq |H_{c(i)}(e^{j\Omega})|$ for some $i=1,\hdots, m$, where $H_{b}(e^{j\Omega})$ is the frequency response when the output is taken at vertex $b$. \end{enumerate} \end{theorem} \begin{proof} Since $A$ is strictly substochastic and also irreducible ($\Gamma$ is strongly connected) by assumption, the system is asymptotically stable in the sense of Lyapunov. Hence, the response to a periodic input is asymptotically periodic, and bounded for all time. Let us consider the solution of equation (\ref{eq:2}) at time $k$: \begin{equation}\label{eq:17} X_{z}(k) = {\small \begin{bmatrix} 0&B&A_zB& \hdots A_z^{k-1}B \end{bmatrix}} {\small \begin{bmatrix} X_c(k)\\\vdots\\ X_c(0) \end{bmatrix}} \end{equation} Since the target vertex $q^*$ is in the target partition $Z$, we can write the corresponding response as follows: \begin{equation}\label{eq:18} x_{q^*}(k) = {\small \begin{bmatrix} 0&B&A_zB& \hdots A_z^{k-1}B \end{bmatrix}}_{q^*} {\small \begin{bmatrix} X_c(k)\\\vdots\\ X_c(0) \end{bmatrix}} \end{equation} where ${\small \begin{bmatrix} 0&B&A_zB& \hdots A_z^{k-1}B \end{bmatrix}}_{q^*}$ is the row of ${\small \begin{bmatrix} 0&B&A_zB& \hdots A_z^{k-1}B \end{bmatrix}}$ corresponding to the target vertex. Notice that each row sum of ${\small \begin{bmatrix} 0&B&A_zB& \hdots A_z^{k-1}B \end{bmatrix}}$ is less than or equal to unity. Therefore, it is immediate that for a periodic input, $x_{q^*}(k)\leq \max \{0, \max_{\widehat{k}=0,1,\hdots,k} x_{c(i)}(\widehat{k})\}$ for all $k$ and some $i=1,\hdots,m$. Similarly, the lower bound of the target state can be characterized in terms of the cutset state as $x_{q^*}(k)\geq \min \{0, \min_{\widehat{k}=0,1,\hdots,k} x_{c(i)}(\widehat{k})\}$, for some $i=1,\hdots,m$. To prove part (ii), consider the response of system (\ref{eq:1}) for a sinusoidal input $u(k) = \cos{\Omega k}$. 
The response at each node has a sinusoidal steady-state component at the driving frequency $\Omega$, plus a transient component. The sinusoidal component of the response at the target node is given by \begin{equation}\label{eq:19} x_{q^*,SS}(k) = |H_{q^*}(e^{j\Omega})|\cos({\Omega k + \angle{H_{q^*}(e^{j\Omega})}}). \end{equation} Similarly, the sinusoidal component of the response at each separating cutset node is given by: \begin{equation}\label{eq:19a} x_{c(i),SS}(k) = |H_{c(i)}(e^{j\Omega})|\cos({\Omega k + \angle{H_{c(i)}(e^{j\Omega})}}). \end{equation} Now consider finding the response at the target node using Equation (\ref{eq:2}), when the driving signal $X_c(k)$ is set to contain the sinusoidal components of the cutset nodes' state $x_{c(i),SS}(k)$ for the sinusoidal input. The response at the target node computed in this way has a transient and a sinusoidal steady-state component. Importantly, the sinusoidal component is identical to the sinusoidal response of the original network model (\ref{eq:1}) for the input $u(k)=\cos{\Omega k}$, i.e. the response is $x_{q^*,SS}(k)=|H_{q^*}(e^{j\Omega})|\cos({\Omega k + \angle{H_{q^*}(e^{j\Omega})}})$. Finally, using the same argument as for other periodic inputs, one finds that $x_{q^*,SS}(k) \le \max_{\widehat{k}=0,1,\hdots,k} x_{c(i),SS}(\widehat{k})$ for some $i=1,\hdots, m$, for all sufficiently large $k$. However, this is only possible if $|H_{q^*}(e^{j\Omega})| \leq |H_{c(i)}(e^{j\Omega})|$ for some $i=1,\hdots, m$. \end{proof} \begin{remark} The result in Theorem 2(i) is presented in a generalized form, to account for any periodic input. If the maximum of the responses over the cutset vertex set is positive, then the comparison with $0$ can be excluded in the statement for the maximum response. Similarly, if the minimum of the responses over the cutset vertices is negative, the comparison with 0 can be excluded in the statement for the minimum. 
\end{remark} \begin{remark} The zero ($0$) terms in the comparisons in Theorem 2(i) can be excluded if the state matrix $A$ is stochastic and the asymptotic response is considered. That is, the upper and lower bounds of the target response depend only on the responses of the cutset vertices in this case; this can be verified by using the fact that the row sums of ${\small \begin{bmatrix} 0&B&A_zB& \hdots& A_z^{k-1}B \end{bmatrix}}$ approach unity asymptotically. \end{remark} The frequency-response analysis carries through to the case that $A$ is stochastic, i.e. its row sums are unity, with only slight limitations. This is formalized in the following theorem: \begin{theorem} Consider the network model (\ref{eq:1}) in the case that $A$ is an ergodic stochastic matrix. Also, consider a target node $q^*$, and any separating cutset $C=\{ c(1),c(2),...,c(m) \} $. Then, for each frequency $\Omega>0$, $|H_{q^*}(e^{j\Omega})| \leq |H_{c(i)}(e^{j\Omega})|$ for some $i=1,\hdots,m$. \end{theorem} \begin{proof} Since $A$ is assumed ergodic and stochastic, it has a single eigenvalue at unity with the remaining eigenvalues strictly inside the unit circle. It follows that the response of the system (\ref{eq:1}) to a sinusoidal input at a non-zero frequency $\Omega$ is a bounded sinusoid at the same frequency, i.e. the frequency response is finite and well-defined at $\Omega$. With this observation, the remainder of the proof is identical to that of Theorem 2. \end{proof} Theorems 2 and 3 have characterized the spatial response properties of the system (\ref{eq:1}) at a particular frequency. The energy in the response over a frequency band can also be shown to exhibit a spatial falloff, by relying on Parseval's theorem and the spatial analysis of the two-norm response. This result is formalized in the following theorem: \begin{theorem} Consider the system (\ref{eq:1}), under the assumption that $A$ is substochastic.
Also, consider a target node $q^*$ and a separating cutset $C=\{ c(1),c(2),\hdots,c(m) \}$. Then, $\int_{\Omega_1}^{\Omega_2} |H_{q^*}(e^{j\Omega})|^2 d\Omega \leq \int_{\Omega_1}^{\Omega_2} |H_{c(i)}(e^{j\Omega})|^2 d\Omega$ for some $i=1,2,\hdots,m$, and any frequency band $[\Omega_1, \Omega_2]$. \end{theorem} \begin{proof} The result is proved based on the norm inequality (\ref{eq:13}), for the special case where $p=2$. Equation (\ref{eq:13}) in this case is given by: \begin{equation}\label{eq:20} \sum_{k = 0}^{k_f}|x_{q}(k)|^2 \leq \sum_{j \in N(q)} W_{j}\sum_{k = 0}^{k_f} |x_{j}(k)|^2 \end{equation} Now, consider the target vertex $q^*$ and the vertex cutset $C = \{ c(1), c(2), \hdots, c(m) \}$. Using similar arguments to those used in the proof of Theorem 1, we can obtain the following inequality: \begin{equation}\label{eq:21} \sum_{k = 0}^{k_f}|x_{q^*}(k)|^2 \leq \sum_{k = 0}^{k_f} |x_{c(i)}(k)|^2, \end{equation} for some $i=1,\hdots, m$. The inequality holds for any input $u(k)$, and any $k_f$. Considering $k_f \to \infty$ and then applying Parseval's theorem (which holds since the system is asymptotically stable), Equation (\ref{eq:21}) can be written in the frequency domain as follows: \begin{equation}\label{eq:22} \frac{1}{2\pi} \int_{-\pi}^{\pi}|X_q(e^{j\Omega})|^2d\Omega \leq \frac{1}{2\pi} \int_{-\pi}^{\pi}|X_{c(i)}(e^{j\Omega})|^2d\Omega \end{equation} where $X_i(e^{j\Omega})$ denotes the Fourier transform of the signal $x_i(k)$. Using the relationship $|X_i(e^{j\Omega})| = |H_i(e^{j\Omega})||U(e^{j\Omega})|$, Equation (\ref{eq:22}) can be rewritten as: \begin{equation}\label{eq:23} \int_{-\pi}^{\pi}|H_q(e^{j\Omega})|^2|U(e^{j\Omega})|^2 d\Omega \leq \int_{-\pi}^{\pi}|H_{c(i)}(e^{j\Omega})|^2|U(e^{j\Omega})|^2 d\Omega \end{equation} As Equation (\ref{eq:23}) holds for any input, let us choose the input to be $u(k) = \frac{1}{\pi k}\sin(Wk)e^{jW_0k}$.
The Fourier transform of this input, $U(e^{j\Omega})$, is unity for $|\Omega-W_0| <W$ and zero otherwise. Choosing $W_0-W=\Omega_1$ and $W_0+W=\Omega_2$ and substituting into (\ref{eq:23}), we find: \begin{equation}\label{eq:25} \int_{\Omega_1}^{\Omega_2} |H_{q^*}(e^{j\Omega})|^2 d\Omega \leq \int_{\Omega_1}^{\Omega_2} |H_{c(i)}(e^{j\Omega})|^2 d\Omega \end{equation} where $c(i)$ is a vertex in the vertex cutset $C$. \end{proof} \begin{remark} Since Equation (\ref{eq:23}) is valid for any input, the spatial degradation pattern also holds for any frequency-domain signal which is filtered by the network dynamics. Specifically, $\int_{\Omega_1}^{\Omega_2}|H_q(e^{j\Omega})|^2|F(e^{j\Omega})|^2d\Omega \leq \int_{\Omega_1}^{\Omega_2}|H_{c(i)}(e^{j\Omega})|^2|F(e^{j\Omega})|^2d\Omega$ for the target node $q^*$ and some $c(i)$ in the vertex cutset $C$, where $F(e^{j\Omega})$ is an arbitrary function which is filtered by the network. \end{remark} Now, we show that the Markov parameters also exhibit a spatial degradation: \begin{theorem} Consider the Markov parameters for the system (\ref{eq:1}), in the cases that the output is taken at the target vertex $i=q^*$ and at vertices on a separating cutset $C=\{ c(1),c(2),...,c(m) \}$. The Markov parameters satisfy the following inequality: for all $k$, $M_{q^*}(k)\leq \max_{\widehat{k}=0,1,\hdots, k}M_{c(i)}(\widehat{k})$ for some $i=1,2,\hdots,m$. \end{theorem} \begin{proof} The proof follows from the proof of Theorem 2. Notice that equation (\ref{eq:18}) holds for any input. So, for any input it follows that $x_{q^*}(k)\leq \max_{\widehat{k}=0,1,\hdots,k}x_{c(i)}(\widehat{k})$ for all $k$ and some $i=1,2,\hdots,m$ in the vertex cutset $C$. If the system is initially relaxed and driven by an impulse input, the response of the target node $q^*$ can be written as $x_{q^*}(k) = e_{q^*}^T A^{k-1}e_s = M_{q^*}(k-1)$. Similarly, the response at a vertex cutset node $c(i)$ can be expressed as $x_{c(i)}(k) = e_{c(i)}^T A^{k-1}e_s = M_{c(i)}(k-1)$.
From this, it is immediate that $M_{q^*}(k)\leq \max_{\widehat{k}=0,1,\hdots,k}M_{c(i)}(\widehat{k})$ for some $i=1,2,\hdots,m$. \end{proof} We have focused in this short article on the single-input single-output system (\ref{eq:1}), with the goal of understanding pairwise transfer characteristics in network synchronization or diffusion processes. However, it turns out that the spatial pattern of responses in the synchronization model holds even when inputs are applied at multiple network nodes. To formalize this notion, we consider a modified model where inputs are applied at multiple nodes: \begin{eqnarray} & & X(k+1) = AX(k) + BU(k) \label{eq:26}\\ & &y(k) = e_{i}^TX(k) \nonumber \end{eqnarray} where $X(k)$, $A$, and $e_i$ are defined as before in Equation (\ref{eq:1}), $U(k)=\begin{bmatrix} u_1(k) \\ \vdots \\ u_{\hat{m}}(k) \end{bmatrix}$ is a vector of inputs, and $B$ is an $n \times \hat{m}$ matrix whose columns are 0--1 indicator vectors for a set of input nodes in the network (input vertices in the network's graph). As an illustration, we characterize the $l_p$ gain for the multi-input case. This requires a redefinition of the gain concept: \begin{definition} The {\bf $l_p$ gain} $G_p(i,k_f)$ for the system (\ref{eq:26}) over the time horizon $k_f$, with output taken at node $i$, is defined as the maximum of the quantity $\left[\sum_{k=0}^{k_f} |y(k)|^p\right]^\frac{1}{p}$ over all inputs $U(0),\hdots, U(k_f-1)$ such that $\left[\sum_{k=0}^{k_f} \sum_{j=1}^{\hat{m}} |u_j(k)|^p\right]^\frac{1}{p}=1$. \end{definition} The $l_p$ gains of the system in equation (\ref{eq:26}) also show a spatial falloff defined by cutsets in the network graph, as formalized in the following theorem: \begin{theorem} Consider the $l_p$ gain $G_p(q^*,k_f)$ of the system (\ref{eq:26}) for a target node $q^*$ and a time horizon $k_f$.
Also, assume that $C=\{ c(1),c(2),...,c(m) \}$ is a separating cutset of the network, in the sense that all directed paths in $\Gamma$ from each input vertex to the target vertex pass through at least one of these vertices. Then, the $l_p$ gain of a cutset node majorizes the $l_p$ gain of the target, i.e. $G_p(q^*,k_f)\leq G_p(c(i),k_f)$ for some $i=1,2,\hdots,m$. \end{theorem} \begin{proof} The proof is nearly identical to that of Theorem 1, hence we omit the details. \end{proof} \section{Simulations} The spatial fall-off in the $l_p$ gains (Theorem 1) is illustrated in an example. A network of $9$ nodes with the following substochastic state matrix is considered: \begin{equation*} {\small A=\begin{bmatrix} 0&0&0&0&0&0&0.2&0.3&0.25\\ 0&0&0&0&0&0&0.35&0.25&0.2\\ 0&0&0&0&0&0&0.15&0.25&0.45\\ 0.2&0.4&0.35&0&0&0&0&0&0\\ 0.2&0.15&0.25&0&0&0&0&0&0\\ 0.2&0.45&0.15&0&0&0&0&0&0\\ 0&0&0&0.25&0.25&0.15&0&0&0\\ 0&0&0&0.3&0.35&0.25&0&0&0\\ 0&0&0&0.25&0.35&0.1&0&0&0\\ \end{bmatrix}} \end{equation*} The network's digraph, which has 27 directed edges, is shown in Figure 2. The source vertex (Vertex 1) is also indicated. \begin{figure}[hbt!] \centering \includegraphics[width=0.48\textwidth]{Figs/Network} \caption{The digraph of the example network.} \label{fig:2} \end{figure} To illustrate Theorem 1, the $l_1$ gain is computed when each node in the network is selected as the target. These gains are plotted on top of the network's graph in Figure 3. The plot shows the decrescence in the gains along cutsets in the network graph. As one example, the $l_1$ gain when vertex $9$ is the target is upper-bounded by the maximum among the $l_1$ gains when vertices $4$, $5$, and $6$ are chosen as targets. Since these vertices form a separating cutset, the example matches the formal result.
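As an independent check of this example, the cutset bound can be verified numerically. The sketch below (Python with NumPy; the horizon length is an arbitrary choice, and the identity between the $l_1$ gain and the accumulated impulse response of a nonnegative system is an assumption made only for this illustration) computes the $l_1$ gain at every vertex and tests the bound for target vertex 9 against the cutset $\{4,5,6\}$:

```python
import numpy as np

# Nine-node substochastic state matrix from the example; the source input
# enters at vertex 1 (index 0).
A = np.array([
    [0,    0,    0,    0,    0,    0,    0.2,  0.3,  0.25],
    [0,    0,    0,    0,    0,    0,    0.35, 0.25, 0.2 ],
    [0,    0,    0,    0,    0,    0,    0.15, 0.25, 0.45],
    [0.2,  0.4,  0.35, 0,    0,    0,    0,    0,    0   ],
    [0.2,  0.15, 0.25, 0,    0,    0,    0,    0,    0   ],
    [0.2,  0.45, 0.15, 0,    0,    0,    0,    0,    0   ],
    [0,    0,    0,    0.25, 0.25, 0.15, 0,    0,    0   ],
    [0,    0,    0,    0.3,  0.35, 0.25, 0,    0,    0   ],
    [0,    0,    0,    0.25, 0.35, 0.1,  0,    0,    0   ],
])

def l1_gains(A, source, k_f=2000):
    """l_1 gain at every node over horizon k_f. Since A is nonnegative,
    the worst-case unit-l_1-norm input is an impulse, so the gain equals
    the accumulated impulse response sum_k |e_i^T A^k e_s| (an
    assumption used for this sketch)."""
    gains = np.zeros(A.shape[0])
    v = np.zeros(A.shape[0])
    v[source] = 1.0
    for _ in range(k_f):
        gains += np.abs(v)
        v = A @ v
    return gains

g = l1_gains(A, source=0)
# Theorem 1: the gain at target vertex 9 is bounded by the maximum gain
# over the separating cutset {4, 5, 6}.
assert g[8] <= max(g[3], g[4], g[5])
# Maximum gain is non-increasing with distance from the source:
# {1} -> {4, 5, 6} -> {7, 8, 9} -> {2, 3}.
assert g[0] >= max(g[3:6]) >= max(g[6:9]) >= max(g[1:3])
```

The computed gains can be compared against Table I; values of comparable magnitude are expected, with small differences reflecting the chosen horizon.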
\begin{figure}[!htb] \centering \includegraphics[width=0.48\textwidth, height=0.27\textwidth ]{Figs/L_1_gain_vertex.png} \caption{The $l_1$ gains for different target nodes.} \label{fig:4} \end{figure} The decrescence of $l_p$ gains along cutsets also immediately implies that the maximum gain among target vertices at a certain distance from the source is a non-increasing function of the distance. This is true because the set of vertices at each distance forms a separating cutset between the source vertex and more distant vertices. Thus, it is insightful to calculate maximum $l_p$ gains for target vertices at different distances. We have done so for the $l_1$, $l_2$, and $l_\infty$ gains in Table I. Each $l_p$ gain is highest at the source (distance $0$), and decreases with distance from the source. \begin{table}[!htb] \centering \begin{tabular}{|c|c|c|c|} \hline Distance & Max $l_1$ Gain & Max $l_2$ Gain & Max $l_\infty$ Gain\\ \hline 0 & 1.2122 & 1.1676 & 1.2122\\ \hline 1 & 0.4098 & 0.3654 & 0.4098\\ \hline 2 & 0.3328 & 0.2994 & 0.3328\\ \hline 3 & 0.2348 & 0.2113 & 0.2348\\ \hline \end{tabular} \caption{Decrescence of $l_p$ gains with distance from the source.} \end{table} \section{Applications of the Spatial Pattern Analysis} Two brief vignettes are presented, to illustrate potential applications of the spatial analyses developed in this short paper. The first is concerned with signal recovery from remote measurements in a network, while the second is concerned with definitions of (external) propagation stability in networks. \subsection{Signal Recovery and Signal-to-Noise Ratios in Diffusive Networks} The problem of estimating or detecting an external input to a network synchronization or diffusion process from noisy remote measurements arises in several settings \cite{ye2018optimal}. For instance, it may be necessary to locate and characterize a pollution source in a complex environment, based on localized measurements of the diffusing pollutant.
Similarly, it may be of interest to identify a stubborn or manipulative agent in a network opinion dynamics process, based on imperfect measurements of certain agents' opinions. Standard machinery for detection/estimation (e.g., hypothesis testing, Wiener filtering) can be brought to bear for these problems. In turn, the statistical performance of the recovery technique can be characterized, and used to develop algorithms for sensor placement. However, these analyses -- particularly ones for performance analysis and sensor placement -- can become challenging to develop and/or computationally infeasible, particularly when the network is high dimensional or incompletely known. For these reasons, graph-theoretic insights into network signal estimation/detection performance are valuable. Broadly, the performance of estimators and detectors is often closely tied with the signal-to-noise ratio (SNR) in the measured signal, i.e. the ratio between the energy/power of the signal of interest and that of the noise. The characterizations of input-output responses developed in this article imply that signal-to-noise ratios also degrade spatially in a network away from a signal source, and hence suggest that detection/estimation performance degrades monotonically away from the source. To present the concept more explicitly, let us consider the network dynamics (\ref{eq:1}) when the input $u(k)$ at the source node is an unknown signal, and measurements at a remote target node are subject to an additive zero-mean white noise with variance $\sigma^2$. That is, we assume that measurements taken at a node $i$ are of the form $\overline{y}(k)=x_i(k)+v(k)$, where $v(k)$ is a zero-mean white noise signal. In the case where the input $u(k)$ is a finite-energy or transient signal, it is natural to define the SNR at the target node as the ratio between the energy (or other norm) of the response signal $x_i(k)$ and the noise intensity, e.g. $SNR=\frac{||x_i(k)||_p}{\sigma}$.
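As a small numerical illustration of this transient-signal SNR (reusing, as an assumption, the nine-node example matrix from the simulation section, with an arbitrary input and an illustrative noise level), the following sketch computes $SNR_i = ||x_i(k)||_2/\sigma$ at every node and checks the cutset degradation:

```python
import numpy as np

# Nine-node example state matrix reused from the simulation section
# (an assumption for illustration); input enters at vertex 1 (index 0).
A = np.array([
    [0,    0,    0,    0,    0,    0,    0.2,  0.3,  0.25],
    [0,    0,    0,    0,    0,    0,    0.35, 0.25, 0.2 ],
    [0,    0,    0,    0,    0,    0,    0.15, 0.25, 0.45],
    [0.2,  0.4,  0.35, 0,    0,    0,    0,    0,    0   ],
    [0.2,  0.15, 0.25, 0,    0,    0,    0,    0,    0   ],
    [0.2,  0.45, 0.15, 0,    0,    0,    0,    0,    0   ],
    [0,    0,    0,    0.25, 0.25, 0.15, 0,    0,    0   ],
    [0,    0,    0,    0.3,  0.35, 0.25, 0,    0,    0   ],
    [0,    0,    0,    0.25, 0.35, 0.1,  0,    0,    0   ],
])
sigma = 0.1                       # noise standard deviation (illustrative)
u = np.cos(0.3 * np.arange(50))   # an arbitrary transient input (illustrative)

# Simulate x(k+1) = A x(k) + e_1 u(k) from a relaxed initial state.
K, n = len(u), A.shape[0]
X = np.zeros((K + 1, n))
for k in range(K):
    X[k + 1] = A @ X[k]
    X[k + 1, 0] += u[k]

# Transient-signal SNR at each node: energy of the clean response over
# the noise intensity, SNR_i = ||x_i||_2 / sigma.
snr = np.linalg.norm(X, axis=0) / sigma

# Spatial degradation: target vertex 9 cannot exceed the best SNR on the
# separating cutset {4, 5, 6}.
assert snr[8] <= max(snr[3], snr[4], snr[5])
```

Because $\sigma$ is common to all nodes here, the SNR comparison reduces to the response-energy comparison of the two-norm analysis.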
From Theorem 1, it is immediate that the SNR for any input signal (and the best-case SNR among all inputs with a certain energy/norm) exhibits the spatial degradation pattern defined by graph cutsets. That is, as compared to the SNR at a target node, the SNR will be higher for at least one node on a cutset separating the source and target. Thus, estimation or detection performance should be improved on the separating cutset, i.e. at nodes closer to the source. In the case where the input $u(k)$ is a persistent signal, the SNR is typically defined in terms of the power of the response signal $x_i(k)$ and the noise. A typical case may be that the input, and hence the response signal $x_i(k)$, is a stochastic band-limited signal. In this case, the total power in $x_i(k)$ can be determined as the integral of the power spectrum over the band. Thus, the SNR can be found as: $SNR=\frac{\int_{\Omega}|H_i(e^{j\Omega})|^2 S_{UU}(\Omega)\, d\Omega}{\sigma^2}$, where $S_{UU}(\Omega)$ is the power spectrum of the input signal $u(k)$. From Theorem 4 and the remark following it, we immediately see that the SNR in this case also exhibits a spatial degradation along cutsets away from the source. Thus, estimation or detection is again seen to be improved at cutsets near the source node. We stress that the spatial degradation of signal strengths and SNRs holds not only for the $2$-norm but for all other $p$-norms, which are relevant for a number of signal reconstruction and detection problems \cite{pnormsignal}. \subsection{Network Propagation Stability Analysis} The stability analysis of network synchronization models has been primarily focused on internal (Lyapunov) stability notions, which capture the emergence of a synchronous state \cite{renbeard,pecora,li2006global}. However, in many application areas, the responses of network processes to exogenous disturbances are of substantial interest \cite{lu2011stabilization,liu2013input,siami2016fundamental}.
With this motivation, a number of studies have considered external stability of network synchronization models \cite{gchen,wangoutput1}. These studies generally define external stability as a bounded-input bounded-output (BIBO) stability notion, with the assumption that exogenous (disturbance) inputs are present at any node or a set of leader nodes, and the output is the full network's state or projections thereof. For many network processes, a primary concern is whether a disturbance at one node can amplify and propagate across the network, or whether it instead remains localized. For instance, power-system operators recognize that oscillatory disruptions originating from generator controls can couple with the system's natural modes to cause system-wide oscillations. The propagation or amplification of a disturbance across a network is distinct from both the internal stability and external stability concepts. This distinction motivates alternative definitions of stability for networks, concerned with disturbance propagation. In fact, propagation of disturbances in cascaded systems and bidirectional line networks has been extensively studied, under the heading of {\em string stability} \cite{peppard1974string,7879221,swaroop1996string}. Several subtly different definitions of string stability have been proposed. Broadly, the definitions require that a disturbance originating at one location in the string can only be amplified by a specified finite gain at any output location, regardless of the length of the string. A stronger notion, referred to as {\em strict string stability}, requires that the disturbance response is diminished (scaled down) at each subsequent node in the string \cite{ploeg2013lp,rogge2008vehicle}. The string stability notion has been extended to other network topologies in a couple of ways.
First, there is a considerable literature on {\em mesh stability}, which generalizes the string-stability concept to general directional networks \cite{983389,9449895}. Also, an important recent study by Studli and co-workers \cite{studli2017vehicular} has considered generalization of string stability concepts to more general networks. This study provides a parallel definition to string stability for general networks, as well as a parallel to strict string stability for tree-like networks, although without characterizations for particular network models. Other input-to-state stability concepts with a similar flavor, which generalize mesh stability, have also recently been developed for networks with general graph topologies and nonlinear nodal dynamics \cite{8370724}. We argue that our spatial input-output analysis immediately suggests a definition for strict stability with respect to disturbance propagation, for general network processes. Specifically, a network dynamics can be viewed as strictly propagation stable, if responses degrade along cutsets away from the source. This notion is formalized in the following definition: \begin{definition} Consider a dynamical network process where an input ${\bf u}(k)$ is applied at a source node $s$, and the state responses ${\bf y}_{q^*}(k)$ are considered at target nodes $q^*$ in the network. The network is ${\cal L}_{p,t}$ strictly propagation stable if, for every source node $s$, target node $q^*$, separating cutset $C=\{ c(1),c(2),...,c(m) \}$, and finite-$p$-norm input ($||u(k)||_p < \infty$), it holds that $||{\bf y}_{q^*}(k)||_t \leq ||{\bf y}_{c(i)}(k)||_t$ for some $i=1,\hdots,m$. \end{definition} The definition for propagation stability can easily be rephrased in terms of paths in the network's graph. 
In particular, a network is ${\cal L}_{p,t}$ strictly propagation stable if and only if, for every source vertex $s$, target vertex $q^*$, and finite-$p$-norm input, there exists a path from $s$ to $q^*$ in the network graph such that the $t$-norm of the response at the corresponding network nodes is decreasing along the path. From this phrasing, it is evident that the concept reduces to the standard notions of strict stability for strings and tree graphs. The statement also clarifies that the definition is flexible to the complex propagations that may occur in networks, in that decrescence is only required along one path between a source and target node (not all paths). From the proof of Theorem 1, the scalar network synchronization model (\ref{eq:1}) is immediately seen to be ${\cal L}_{p,t}$ strictly propagation stable. Thus, we see that disturbances in this model are localized/diminishing in the sense of the strict propagation stability definition, regardless of the specific network topology. For more complex synchronization models involving multivariate nodal dynamics, one anticipates that the node or subsystem model and the network graph will together determine whether or not strict propagation stability holds. \section{Conclusions} This short note has focused on the input-to-output analysis of a canonical discrete-time network synchronization model. Our key finding is that a family of input-to-output metrics (e.g. $l_p$ gains, frequency responses, frequency-band energies, Markov parameters) shows a spatial degradation across cutsets away from the input location. Two applications of the analysis have been briefly developed: 1) characterization of signal-to-noise ratios in network processes, and 2) definition of propagation stability notions for dynamical networks. Notably, the spatial analyses carry through to a number of time-varying and nonlinear processes. We expect to address these cases in future work. \bibliographystyle{IEEEtran}
\section{Introduction} \begin{figure}[t] \begin{center} \includegraphics[width=0.9\linewidth]{figuresCameraReady/doubleHeadOverviewv3.pdf} \end{center} \vspace{-2mm} \caption{Comparison between single head and double heads, (a) a single fully connected (2-\textit{fc}) head, (b) a single convolution head, (c) Double-Head, which splits classification and localization on a fully connected head and a convolution head respectively, and (d) Double-Head-Ext, which extends Double-Head by introducing supervision from unfocused tasks during training and combining classification scores from both heads during inference.} \label{fig:overview} \vspace{-4mm} \end{figure} Most two-stage object detectors \cite{girshick15fastrcnn, girshick2014rcnn, ren2015faster, Dai_RFCN, Lin_FPN} share a head for both classification and bounding box regression. Two different head structures are widely used. Faster R-CNN \cite{ren2015faster} uses a convolution head (conv5) on a single level feature map (conv4), while FPN \cite{Lin_FPN} uses a fully connected head (2-\textit{fc}) on multiple level feature maps. However, there is a lack of \textbf{understanding} between the two head structures with respect to the two tasks (object classification and localization). In this paper, we perform a thorough comparison between the fully connected head (\textit{fc-head}) and the convolution head (\textit{conv-head}) on the two detection tasks, i.e. object classification and localization. We find that \textit{these two different head structures are complementary}. \textit{fc-head} is more suitable for the classification task as its classification score is more correlated to the intersection over union (IoU) between a proposal and its corresponding ground truth box. Meanwhile, \textit{conv-head} provides more accurate bounding box regression. 
We believe this is because \textit{fc-head} is spatially sensitive, having different parameters for different parts of a proposal, while \textit{conv-head} shares convolution kernels for all parts. To validate this, we examine the output feature maps of both heads and confirm that \textit{fc-head} is more spatially sensitive. As a result, \textit{fc-head} is better to distinguish between a complete object and part of an object (classification) and \textit{conv-head} is more robust to regress the whole object (bounding box regression). In light of above findings, we propose a {Double-Head} method, which includes a fully connected head (\textit{fc-head}) for classification and a convolution head (\textit{conv-head}) for bounding box regression (see Figure \ref{fig:overview}-(c)), to leverage advantages of both heads. This design outperforms both single \textit{fc-head} and single \textit{conv-head} (see Figure \ref{fig:overview}-(a), (b)) by a non-negligible margin. In addition, we extend Double-Head (Figure \ref{fig:overview}-(d)) to further improve the accuracy by leveraging unfocused tasks (i.e. classification in \textit{conv-head}, and bounding box regression in \textit{fc-head}). Our method outperforms FPN baseline by a non-negligible margin on MS COCO 2017 dataset \cite{lin2014microsoft}, gaining 3.5 and 2.8 AP for using ResNet-50 and ResNet-101 backbones, respectively. \section{Related Work} \noindent \textbf{One-stage Object Detectors:} OverFeat \cite{sermanet2013overfeat} detects objects by sliding windows on feature maps. SSD \cite{liu2016ssd,fu2017dssd} and YOLO \cite{redmon2016you,redmon2017yolo9000,redmon2018yolov3} have been tuned for speed by predicting object classes and locations directly. RetinaNet \cite{lin2018focal} alleviates the extreme foreground-background class imbalance problem by introducing focal loss. 
Point-based methods \cite{Law2018cornernet, Law2019cornernetlite, Zhou2019objaspt, Duan2019centernet, Zhou2019extremenet} model an object as keypoints (corner, center, etc), and are built on keypoint estimation networks. \noindent \textbf{Two-stage Object Detectors:} RCNN \cite{girshick2014rich} applies a deep neural network to extract features from proposals generated by selective search \cite{uijlings2013selective}. SPPNet \cite{he2014spatial} speeds up RCNN significantly using spatial pyramid pooling. Fast RCNN \cite{girshick15fastrcnn} improves the speed and performance utilizing a differentiable RoI Pooling. Faster RCNN \cite{ren2015faster} introduces Region Proposal Network (RPN) to generate proposals. R-FCN \cite{Dai_RFCN} employs position sensitive RoI pooling to address the translation-variance problem. FPN \cite{Lin_FPN} builds a top-down architecture with lateral connections to extract features across multiple layers. \noindent \textbf{Backbone Networks:} Fast RCNN \cite{girshick15fastrcnn} and Faster RCNN \cite{ren2015faster} extract features from conv4 of VGG-16 \cite{simonyan2014very}, while FPN \cite{Lin_FPN} utilizes features from multiple layers (conv2 to conv5) of ResNet \cite{he2016deep}. Deformable ConvNets \cite{dai2017deformable,zhu2018deformable} propose deformable convolution and deformable Region of Interests (RoI) pooling to augment spatial sampling locations. Trident Network \cite{li2019scale} generates scale-aware feature maps with multi-branch architecture. MobileNet \cite{howard2017mobilenets,sandler2018mobilenetv2} and ShuffleNet \cite{zhang2018shufflenet,ma2018shufflenet} introduce efficient operators (like depth-wise convolution, group convolution, channel shuffle, etc) to speed up on mobile devices. \noindent \textbf{Detection Heads:} Light-Head RCNN \cite{li2017light} introduces an efficient head network with thin feature maps. 
Cascade RCNN \cite{Cai_2018_CVPR} constructs a sequence of detection heads trained with increasing IoU thresholds. Feature Sharing Cascade RCNN \cite{li2019rethinking} utilizes feature sharing to ensemble multi-stage outputs from Cascade RCNN \cite{Cai_2018_CVPR} to improve the results. Mask RCNN \cite{he2017mask} introduces an extra head for instance segmentation. COCO Detection 18 Challenge winner (Megvii) \cite{megviiCOCO2018} couples bounding box regression and instance segmentation in a convolution head. IoU-Net \cite{jiang2018acquisition} introduces a branch to predict IoUs between detected bounding boxes and their corresponding ground truth boxes. Similar to IoU-Net, Mask Scoring RCNN \cite{huang2019msrcnn} presents an extra head to predict Mask IoU scores for each segmentation mask. He et al. \cite{he2019bounding} learn uncertainties of bounding box prediction with an extra task to improve the localization results. Learning-to-Rank \cite{Tan_2019_ICCV} utilizes an extra head to produce a rank value of a proposal for Non-Maximum Suppression (NMS). Zhang and Wang \cite{Zhang_2019_ICCV} point out that there exist mis-alignments between classification and localization task domains. In contrast to existing methods, which apply a single head to extract Region of Interest (RoI) features for both classification and bounding box regression tasks, we propose to split these two tasks into different heads, based upon our thorough analysis. \section{Analysis: Comparison between \textit{fc-head} and \textit{conv-head}} \label{sec:analysisTwoHeads} In this section, we compare \textit{fc-head} and \textit{conv-head} for both classification and bounding box regression. For each head, we train a model with FPN backbone \cite{Lin_FPN} using ResNet-50 \cite{he2016deep} on MS COCO 2017 dataset \cite{lin2014microsoft}. The \textit{fc-head} includes two fully connected layers. The \textit{conv-head} has five residual blocks.
The evaluation and analysis are conducted on the MS COCO 2017 validation set with 5,000 images. \textit{fc-head} and \textit{conv-head} have 36.8\% and 35.9\% AP, respectively. \subsection{Data Processing for Analysis} \label{sec:slidingwindowclassification} To make a fair comparison, we perform analysis for both heads on predefined proposals rather than proposals generated by RPN \cite{ren2015faster}, as the two detectors have different proposals. The predefined proposals include sliding windows around the ground truth box with different sizes. For each ground truth object, we generate about 14,000 proposals. The IoUs between these proposals and the ground truth box (\textit{denoted as proposal IoUs}) gradually change from zero (background) to one (the ground truth box). For each proposal, both detectors (\textit{fc-head} and \textit{conv-head}) generate classification scores and regressed bounding boxes. This process is applied for all objects in the validation set. We split the IoUs between predefined proposals and their corresponding ground truth into 20 bins uniformly, and group these proposals accordingly. For each group, we calculate the mean and standard deviation of classification scores and IoUs of regressed boxes. Figure \ref{fig:slidingwindowMeanstd} shows the results for small, medium and large objects. \subsection{Comparison on Classification Task} \label{sec:fcConvCls} The first row of Figure \ref{fig:slidingwindowMeanstd} shows the classification scores for both \textit{fc-head} and \textit{conv-head}. Compared to \textit{conv-head}, \textit{fc-head} provides higher scores for proposals with higher IoUs. This indicates that \textit{classification scores of \textit{fc-head} are more correlated to IoUs between proposals and corresponding ground truth than those of \textit{conv-head}}, especially for small objects. To validate this, we compute the Pearson correlation coefficient (PCC) between proposal IoUs and classification scores.
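The PCC used is the standard sample statistic. A minimal sketch (Python; the score curves below are synthetic stand-ins for the two heads' outputs, purely illustrative and not the paper's data):

```python
import numpy as np

def pcc(x, y):
    """Sample Pearson correlation coefficient between two 1-D arrays."""
    return np.corrcoef(x, y)[0, 1]

# Synthetic stand-ins (hypothetical): an fc-head-like score that tracks
# the proposal IoU closely, and a conv-head-like score that saturates early.
ious = np.linspace(0.0, 1.0, 200)
scores_fc = ious + 0.05 * np.sin(7 * ious)
scores_conv = np.tanh(4 * ious)

pcc_fc, pcc_conv = pcc(ious, scores_fc), pcc(ious, scores_conv)
```

With these stand-ins, the score curve that tracks the IoU yields a higher PCC than the saturating one, mirroring the qualitative comparison in the text.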
The results (shown in Figure \ref{fig:slidingwindowPscores} (Left)) demonstrate that the classification scores of \textit{fc-head} are more correlated to the proposal IoUs. We also compute Pearson correlation coefficient for the proposals generated by RPN \cite{ren2015faster} and final detected boxes after NMS. Results are shown in Figure \ref{fig:slidingwindowPscores} (Right). Similar to the predifined proposals, \textit{fc-head} has higher PCC than \textit{conv-head}. Thus, the detected boxes with higher IoUs are ranked higher when calculating AP due to their higher classification scores. \subsection{Comparison on Localization Task} The second row of Figure \ref{fig:slidingwindowMeanstd} shows IoUs between the regressed boxes and their corresponding ground truth for both \textit{fc-head} and \textit{conv-head}. Compared to \textit{fc-head}, the regressed boxes of \textit{conv-head} are more accurate when the proposal IoU is above 0.4. This demonstrates that \textit{\textit{conv-head} has better regression ability than \textit{fc-head}}. \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{figuresCameraReady/slidingwindowResultsSize.pdf} \end{center} \vspace{-2mm} \caption{ Comparison between \textit{fc-head} and \textit{conv-head}. Top row: mean and standard deviation of classification scores. Bottom row: mean and standard deviation of IoUs between regressed boxes and their corresponding ground truth. Classification scores in \textit{fc-head} are more correlated to proposal IoUs than in \textit{conv-head}. \textit{conv-head} has better regression results than \textit{fc-head}. } \vspace{-2mm} \label{fig:slidingwindowMeanstd} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=0.9\linewidth]{figuresCameraReady/pscoresSizeAllv2.pdf} \end{center} \vspace{-2mm} \caption{ Pearson correlation coefficient (PCC) between classification scores and IoUs. Left: PCC of predefined proposals for large, medium and small objects. 
Right: PCC of proposals generated by RPN and detected boxes after NMS. } \label{fig:slidingwindowPscores} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{figuresCameraReady/FeatureMapWeightsv11.pdf} \end{center} \vspace{-2mm} \caption{ Left: Spatial correlation in output feature map of \textit{conv-head}. Middle: Spatial correlation in output feature map of \textit{fc-head}. Right: Spatial correlation in weight parameters of \textit{fc-head}. \textit{conv-head} has significantly more spatial correlation in output feature map than \textit{fc-head}. \textit{fc-head} has a similar spatial correlation pattern in output feature map and weight parameters. } \label{fig:fcspatial} \end{figure} \subsection{Discussion} \textit{Why does \textit{fc-head} show more correlation between the classification scores and proposal IoUs, and perform worse in localization?} We believe it is because \textit{fc-head} is more spatially sensitive than \textit{conv-head}. Intuitively, \textit{fc-head} applies \textit{unshared} transformations (fully connected layer) over different positions of the input feature map. Thus, the spatial information is implicitly embedded. The spatial sensitivity of \textit{fc-head} helps distinguish between a complete object and part of an object, but is not robust in determining the offset of the whole object. In contrast, \textit{conv-head} uses a \textit{shared} transformation (convolutional kernels) on all positions of the input feature map, and uses average pooling to aggregate features. Next, we inspect the spatial sensitivity of \textit{conv-head} and \textit{fc-head}. For \textit{conv-head}, whose output feature map is a $7\times7$ grid, we compute the spatial correlation between any pair of locations using the cosine distance between the corresponding two feature vectors. This results in a $7\times7$ correlation matrix per cell, representing the correlation between the current cell and other cells.
Thus, the spatial correlation of an output feature map can be visualized by tiling the correlation matrices of all cells in a $7\times7$ grid. Figure \ref{fig:fcspatial} (Left) shows the average spatial correlation of \textit{conv-head} over multiple objects. For \textit{fc-head}, whose output is not a feature map, but a feature vector with dimension 1024, we reconstruct its output feature map. This can be done by splitting the weight matrix of the fully connected layer ($256\cdot 7\cdot 7 \times 1024$) by spatial locations. Each cell in the $7\times7$ grid has a transformation matrix with dimension $256\times1024$, which is used to generate output features for that cell. Thus, the $7\times7\times1024$ output feature map for \textit{fc-head} is reconstructed. Then we can compute its spatial correlation in a similar manner to \textit{conv-head}. Figure \ref{fig:fcspatial} (Middle) shows the average spatial correlation in the output feature map of \textit{fc-head} over multiple objects. \textit{fc-head} has significantly less spatial correlation than \textit{conv-head}. This supports our conjecture that \textit{fc-head} is more spatially sensitive than \textit{conv-head}, making it easier to distinguish whether a proposal covers a complete or a partial object. On the other hand, it is not as robust as \textit{conv-head} to regress bounding boxes. We further examine the spatial correlation of weight parameters ($256\cdot 7\cdot 7 \times 1024$) in \textit{fc-head}, by splitting them along spatial locations. As a result, each cell of the $7\times7$ grid has a matrix with dimension $256\times1024$, which is used to compute correlation with other cells. Similar to the correlation analysis on the output feature map, we compute the correlation matrices for all cells. Figure \ref{fig:fcspatial} (Right) shows the spatial correlation in weight parameters of \textit{fc-head}.
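The spatial-correlation computation described above can be sketched as follows. Dimensions are kept small and names are illustrative; the paper's heads use a $7\times7$ grid with 256 input channels and 1024-dimensional features, and we interpret "cosine distance" as cosine similarity between cell features:

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def percell_features(x, W, grid=7, c=256, d=1024):
    """Reconstruct the fc-head 'output feature map': the (grid*grid*c, d) weight
    matrix is split by spatial location into grid*grid transforms of shape (c, d),
    and each cell's c-dim input is mapped to a d-dim output feature."""
    feats = []
    for cell in range(grid * grid):
        rows = W[cell * c:(cell + 1) * c]   # (c, d) transform for this cell
        xc = x[cell * c:(cell + 1) * c]     # c-dim input at this cell
        feats.append([sum(xc[i] * rows[i][j] for i in range(c)) for j in range(d)])
    return feats

def spatial_correlation(feats):
    """(grid^2 x grid^2) matrix of pairwise cosine similarities between cells."""
    n = len(feats)
    return [[cosine(feats[i], feats[j]) for j in range(n)] for i in range(n)]
```

For \textit{conv-head} the per-cell features are read directly from its $7\times7$ output feature map, so only \texttt{spatial\_correlation} is needed there.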
The weight parameters show a similar correlation pattern to the output feature map of \textit{fc-head} (shown in Figure \ref{fig:fcspatial} (Middle)). \section{Our Approach: Double-Head} Based upon the above analysis, we propose a double-head method to leverage the advantages of two head structures. In this section, we first introduce the network structure of Double-Head, which has a fully connected head (\textit{fc-head}) for classification and a convolution head (\textit{conv-head}) for bounding box regression. Then, we extend Double-Head to Double-Head-Ext by leveraging unfocused tasks (i.e. bounding box regression in \textit{fc-head} and classification in \textit{conv-head}). \subsection{Network Structure}\label{sec:our-net} Our Double-Head method (see Figure \ref{fig:overview}-(c)) splits classification and localization into \textit{fc-head} and \textit{conv-head}, respectively. The details of backbone and head networks are described as follows: \noindent \textbf{Backbone:} We use FPN \cite{Lin_FPN} backbone to generate region proposals and extract object features from multiple levels using RoIAlign \cite{he2017mask}. Each proposal has a feature map with size $256\times7\times7$, which is transformed by \textit{fc-head} and \textit{conv-head} into two feature vectors (each with dimension 1024) for classification and bounding box regression, respectively. \noindent\textbf{Fully Connected Head (\textit{fc-head})} has two fully connected layers (see Figure \ref{fig:overview}-(c)), following the design in FPN \cite{Lin_FPN} (Figure \ref{fig:overview}-(a)). The output dimension is 1024. The parameter size is 13.25M.
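As a consistency check, the quoted 13.25M figure matches two bias-free fully connected layers when parameters are counted in units of $2^{20}$; this counting convention is our assumption, inferred from the quoted number:

```python
c, grid, d = 256, 7, 1024
fc1 = c * grid * grid * d   # 12544 x 1024 weight matrix of the first fc layer
fc2 = d * d                 # 1024 x 1024 weight matrix of the second fc layer
total = fc1 + fc2
assert total == 13_893_632
assert total / 2 ** 20 == 13.25   # "13.25M" with 1M = 2^20, biases excluded
```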
\begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{figures/networkstructure_v3.pdf} \end{center} \vspace{-2mm} \caption{Network architectures of three components: (a) residual block to increase the number of channels (from 256 to 1024), (b) residual bottleneck block, and (c) non-local block.} \label{fig:upchannels} \end{figure} \label{sec:convheadstrcture} \noindent\textbf{Convolution Head (\textit{conv-head})} stacks $K$ residual blocks \cite{he2016deep}. The first block increases the number of channels from 256 to 1024 (shown in Figure \ref{fig:upchannels}-(a)), and the others are bottleneck blocks \cite{he2016deep} (shown in Figure \ref{fig:upchannels}-(b)). At the end, average pooling is used to generate the feature vector with dimension 1024. Each residual block has 1.06M parameters. We also introduce a variation for the convolution head by inserting a non-local block \cite{wang2018non} (see Figure \ref{fig:upchannels}-(c)) before each bottleneck block to enhance foreground objects. Each non-local block has 2M parameters. \noindent\textbf{Loss Function:} Both heads (i.e. \textit{fc-head} and \textit{conv-head}) are jointly trained with the region proposal network (RPN) end-to-end. The overall loss is computed as follows: \begin{align} \mathcal{L} &= \omega^{fc}\mathcal{L}^{fc} + \omega^{conv}\mathcal{L}^{conv} + \mathcal{L}^{rpn}, \label{eq:loss-det} \end{align} where $\omega^{fc}$ and $\omega^{conv}$ are weights for \textit{fc-head} and \textit{conv-head}, respectively. $\mathcal{L}^{fc}$, $\mathcal{L}^{conv}$, $\mathcal{L}^{rpn}$ are the losses for \textit{fc-head}, \textit{conv-head} and RPN, respectively. \subsection{Extension: Leveraging Unfocused Tasks} In the vanilla Double-Head, each head focuses on its assigned task (i.e. classification in \textit{fc-head} and bounding box regression in \textit{conv-head}). In addition, we found that unfocused tasks (i.e.
bounding box regression in \textit{fc-head} and classification in \textit{conv-head}) are helpful in two aspects: (a) bounding box regression provides auxiliary supervision for \textit{fc-head}, and (b) classifiers from both heads are complementary. Therefore, we introduce unfocused task supervision in training and propose a complementary fusion method to combine classification scores from both heads during inference (see Figure \ref{fig:overview}-(d)). This extension is referred to as Double-Head-Ext. \noindent \textbf{Unfocused Task Supervision}: Due to the introduction of the unfocused tasks, the loss for \textit{fc-head} ($\mathcal{L}^{fc}$) includes both classification loss and bounding box regression loss as follows: \begin{align} \mathcal{L}^{fc} &= \lambda^{fc}L^{fc}_{cls}+(1-\lambda^{fc})L^{fc}_{reg}, \label{eq:loss-heads} \end{align} where $L^{fc}_{cls}$ and $L^{fc}_{reg}$ are the classification and bounding box regression losses in \textit{fc-head}, respectively. $\lambda^{fc}$ is the weight that controls the balance between the two losses in \textit{fc-head}. In a similar manner, we define the loss for the convolution head ($\mathcal{L}^{conv}$) as follows: \begin{align} \mathcal{L}^{conv} &= (1-\lambda^{conv})L^{conv}_{cls}+\lambda^{conv}L^{conv}_{reg}, \label{eq:loss-heads-2} \end{align} where $L^{conv}_{cls}$ and $L^{conv}_{reg}$ are classification and bounding box regression losses in \textit{conv-head}, respectively. Different from $\lambda^{fc}$, which is multiplied by the classification loss $L^{fc}_{cls}$, the balance weight $\lambda^{conv}$ is multiplied by the regression loss $L^{conv}_{reg}$, as the bounding box regression is the focused task in \textit{conv-head}. Note that the vanilla Double-Head is a special case when $\lambda^{fc}=1$ and $\lambda^{conv}=1$. Similar to FPN \cite{Lin_FPN}, cross entropy loss is applied to classification, and Smooth-$L_1$ loss is used for bounding box regression.
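The loss weighting above can be written down directly; this is a minimal sketch with illustrative function names:

```python
def fc_head_loss(l_cls, l_reg, lam_fc):
    """Loss of fc-head: classification is the focused task."""
    return lam_fc * l_cls + (1.0 - lam_fc) * l_reg

def conv_head_loss(l_cls, l_reg, lam_conv):
    """Loss of conv-head: bounding box regression is the focused task."""
    return (1.0 - lam_conv) * l_cls + lam_conv * l_reg

def total_loss(loss_fc, loss_conv, loss_rpn, w_fc=2.0, w_conv=2.5):
    """Overall loss; the default head weights are those used in the experiments."""
    return w_fc * loss_fc + w_conv * loss_conv + loss_rpn
```

Setting $\lambda^{fc}=\lambda^{conv}=1$ recovers the vanilla Double-Head, where each head is supervised only by its focused task.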
\noindent \textbf{Complementary Fusion of Classifiers}: We believe that the two heads (i.e. \textit{fc-head} and \textit{conv-head}) capture complementary information for object classification due to their different structures. Therefore, we propose to fuse the two classifiers as follows: \begin{align} s &= s^{fc} + s^{conv}(1-s^{fc}) = s^{conv} + s^{fc}(1-s^{conv}), \label{eq:cls-fusion} \end{align} where $s^{fc}$ and $s^{conv}$ are classification scores from \textit{fc-head} and \textit{conv-head}, respectively. The increment from the first score (e.g. $s^{fc}$) is a product of the second score and the complement of the first score (e.g. $s^{conv}(1-s^{fc})$). This is different from \cite{Cai_2018_CVPR}, which combines all classifiers by averaging. Note that this fusion is only applicable when $\lambda^{fc}\neq0$ and $\lambda^{conv}\neq1$. \section{Experimental Results} We evaluate our approach on the MS COCO 2017 dataset \cite{lin2014microsoft} and the Pascal VOC07 dataset \cite{everingham2010pascal}. MS COCO 2017 dataset has 80 object categories. We train on \texttt{train2017} (118K images) and report results on \texttt{val2017} (5K images) and \texttt{test-dev} (41K images). The standard COCO-style Average Precision (AP) with different IoU thresholds from 0.5 to 0.95 is used as the evaluation metric. Pascal VOC07 dataset has 20 object categories. We train on \texttt{trainval} with 5K images and report results on \texttt{test} with 5K images. We perform ablation studies to analyze different components of our approach, and compare our approach to baselines and the state-of-the-art. \subsection{Implementation Details} Our implementation is based on the Mask R-CNN benchmark in PyTorch 1.0 \cite{massa2018mrcnn}. Images are resized such that the shortest side is 800 pixels. We use no data augmentation for testing, and only horizontal flipping augmentation for training.
The implementation details are described as follows: \noindent \textbf{Architecture:} Our approach is evaluated on two FPN \cite{Lin_FPN} backbones (ResNet-50 and ResNet-101 \cite{he2016deep}), which are pretrained on {ImageNet} \cite{deng2009imagenet}. The standard RoI pooling is replaced by RoIAlign \cite{he2017mask}. Both heads and RPN are jointly trained end-to-end. All batch normalization (BN) \cite{ioffe2015batch} layers in the backbone are frozen. Each convolution layer in \textit{conv-head} is followed by a BN layer. The bounding box regression is class-specific. \noindent \textbf{Hyper-parameters:} All models are trained using 4 NVIDIA P100 GPUs with 16GB memory, and a mini-batch size of 2 images per GPU. The weight decay is 1e-4 and momentum is 0.9. \noindent \textbf{Learning Rate Scheduling:} All models are fine-tuned for 180k iterations. The learning rate is initialized to 0.01 and reduced by a factor of 10 at 120K and 160K iterations. \subsection{Ablation Study}\label{sec:ablation:threecases} We perform a number of ablations to analyze Double-Head with the ResNet-50 backbone on COCO \texttt{val2017}. \noindent \textbf{Double-Head Variations}: Four variations of double heads are compared: \begin{itemize} \setlength\itemsep{0em} \item \textbf{Double-FC} splits the classification and bounding box regression into two fully connected heads, which have identical structures. \item \textbf{Double-Conv} splits the classification and bounding box regression into two convolution heads, which have identical structures. \item \textbf{Double-Head} includes a fully connected head (\textit{fc-head}) for classification and a convolution head (\textit{conv-head}) for bounding box regression. \item \textbf{Double-Head-Reverse} switches tasks between the two heads (i.e. \textit{fc-head} for bounding box regression and \textit{conv-head} for classification), compared to Double-Head.
\end{itemize}
\begin{table}[t] \begin{center} \begin{footnotesize}
\begin{tabular}{c|c c c c|c}
\bottomrule
&\multicolumn{2}{|c}{\textit{fc-head}} & \multicolumn{2}{c|}{\textit{conv-head}} & \\
&cls & reg & cls & reg &AP \\ \hline
Single-FC & 1.0 & 1.0 & - & - &36.8 \\
Single-Conv & - & - & 1.0 & 1.0 & 35.9 \\ \hline
Double-FC & 1.0 & 1.0 & - & - &{37.3} \\
Double-Conv & - & - & 1.0 & 1.0 & {33.8} \\
Double-Head-Reverse & - & 1.0 & 1.0 & - & {32.6} \\
Double-Head & 1.0 & - & - & 1.0 & \textbf{38.8} \\ \hline
Double-FC & 2.0 & 2.0 & - & - &{38.1} \\
Double-Conv & - & - & 2.5 & 2.5 & {34.3} \\
Double-Head-Reverse & - & 2.0 & 2.5 & - & {32.0} \\
Double-Head & 2.0 & - & - & 2.5 & \textbf{39.5} \\
\toprule
\end{tabular}
\end{footnotesize} \end{center}
\caption{Evaluations of detectors with different head structures on COCO \texttt{val2017}. The backbone is FPN with ResNet-50. The top group shows performances for single head detectors. The middle group shows performances for detectors with double heads. The weight for each loss (classification and bounding box regression) is set to 1.0. Compared to the middle group, the bottom group uses different loss weights for \textit{fc-head} and \textit{conv-head} ($\omega^{fc}=2.0$, $\omega^{conv}=2.5$). Clearly, Double-Head has the best performance, outperforming others by a non-negligible margin. Double-Head-Reverse has the worst performance. } \label{table:headcombine} \end{table}
The detection performances are shown in Table \ref{table:headcombine}. The top group shows performances for single head detectors. The middle group shows performances for detectors with double heads. The weight for each loss (classification and bounding box regression) is set to 1.0. Compared to the middle group, the bottom group uses different loss weights for \textit{fc-head} and \textit{conv-head} ($\omega^{fc}=2.0$ and $\omega^{conv}=2.5$), which are set empirically.
Double-Head outperforms single head detectors by a non-negligible margin (2.0+ AP). It also outperforms Double-FC and Double-Conv by at least 1.4 AP. Double-Head-Reverse has the worst performance (drops 6.2+ AP compared to Double-Head). This validates our findings that \textit{fc-head} is more suitable for classification, while \textit{conv-head} is more suitable for localization. Single-Conv performs better than Double-Conv. We believe that the regression task helps the classification task when sharing a single convolution head. This is supported by the sliding window analysis (see details in section \ref{sec:slidingwindowclassification}). Figure \ref{fig:doubleConvComparision} shows the comparison between Single-Conv and Double-Conv. Their regression results are comparable. But Single-Conv has higher classification scores than Double-Conv on proposals which have higher IoUs with the ground truth box. Thus, sharing regression and classification on a single convolution head encourages the correlation between classification scores and proposal IoUs. This allows Single-Conv to better determine whether a complete object is covered. \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{figuresCameraReady/doublesingleConv.pdf} \end{center} \vspace{-3mm} \caption{ Comparison between Single-Conv and Double-Conv. Left: mean and standard deviation of classification scores. Right: mean and standard deviation of IoUs between regressed boxes and their corresponding ground truth. Single-Conv has higher classification scores than Double-Conv, while regression results are comparable. } \label{fig:doubleConvComparision} \end{figure} In contrast, sharing two tasks in a single fully connected head (Single-FC) is not as good as separating them in two heads (Double-FC). We believe that adding the regression task in the same head with equal weight introduces conflict. This is supported by the sliding window analysis.
Figure \ref{fig:doubleFCComparision} shows that Double-FC has slightly higher classification scores and higher IoUs between the regressed boxes and their corresponding ground truth than Single-FC. \begin{figure}[!t] \begin{center} \includegraphics[width=0.95\linewidth]{figuresCameraReady/doublesinglefcv5.pdf} \end{center} \caption{ Comparison between Single-FC and Double-FC. (a): mean and standard deviation of classification scores. (b): a zoomed-in view of the box in plot-(a). (c): mean and standard deviation of IoUs between regressed boxes and their corresponding ground truth. (d): a zoomed-in view of the box in plot-(c). Double-FC has slightly higher classification scores and better regression results than Single-FC. } \label{fig:doubleFCComparision} \end{figure} \noindent \textbf{Depth of \textit{conv-head}}: We study the number of blocks for the convolution head. The evaluations are shown in Table \ref{table:convolutionHeadStack}. The first group has $K$ residual blocks (Figure \ref{fig:upchannels}-(a-b)), while the second group has alternating $(K+1)/2$ residual blocks and $(K-1)/2$ non-local blocks (Figure \ref{fig:upchannels}-(c)). When using a single block in \textit{conv-head}, the performance is slightly behind the FPN baseline (a drop of 0.1 AP) as it is too shallow. However, adding another convolution block boosts the performance substantially (a gain of 1.9 AP over the FPN baseline). As the number of blocks increases, the performance improves gradually with decreasing growth rate. Considering the trade-off between accuracy and complexity, we choose \textit{conv-head} with 3 residual blocks and 2 non-local blocks ($K=5$ in the second group) for the rest of the paper, which gains 3.0 AP over the baseline. \noindent \textbf{More training iterations}: When increasing training iterations from 180k ($1 \times$ training) to 360k ($2 \times$ training), Double-Head gains 0.6 AP (from 39.8 to 40.4).
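The per-block parameter counts quoted above and in Table \ref{table:convolutionHeadStack} are consistent with simple arithmetic, under assumptions we infer from the quoted numbers: bias-free convolutions, counts excluding batch-norm parameters, a 512-dimensional embedding in the non-local block, and 1M $= 2^{20}$:

```python
M = 2 ** 20  # the quoted counts appear to use 1M = 2^20

# Residual bottleneck block: 1x1 (1024->256), 3x3 (256->256), 1x1 (256->1024).
bottleneck = 1024 * 256 + 3 * 3 * 256 * 256 + 256 * 1024
assert bottleneck == 1_114_112                 # 1.0625M, quoted as "1.06M"

# Non-local block on 1024 channels: four 1x1 convs through a 512-dim embedding.
non_local = 4 * 1024 * 512
assert non_local == 2 * M                      # quoted as "2M"

# Second group with K = 5: 3 residual blocks + 2 non-local blocks.
assert round((3 * bottleneck + 2 * non_local) / M, 2) == 7.19   # matches the table
```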
\begin{table}[!t] \begin{center} \begin{footnotesize}
\begin{tabular}{ c c|c|c c c }
\bottomrule
NL&$K$ & param &AP & AP$_{0.5}$ & AP$_{0.75}$ \\
\hline
&0 & - & 36.8 & 58.7 & 40.4 \\ \hline
&1 & 1.06M & 36.7 { (-0.1)} & {59.3} & {39.6} \\
&2 & 2.13M & 38.7 (+1.9) & {59.2} & 41.9 \\
&3 & 3.19M & {39.2 (+2.4)} & {59.4} & {42.5}\\
&4 & 4.25M & 39.3 (+2.5)& {59.2} & 42.9 \\
&5 & 5.31M & 39.5 (+2.7)& {59.6} & 43.2 \\
&6 & 6.38M & 39.5 (+2.7)& {59.4} & {43.3} \\
&7 & 7.44M & 39.7 (+2.9)& {59.8} & 43.2 \\ \hline
\checkmark&3 & 4.13M & 38.8 (+2.0) & {59.2} & 42.4\\
\checkmark&5 & 7.19M & 39.8 (+3.0) & {59.6} & 43.6\\
\checkmark&7 & 10.25M & \textbf{40.0 (+3.2)} & \textbf{59.9} & \textbf{43.7}\\
\toprule
\end{tabular}
\end{footnotesize} \end{center}
\caption{The number of blocks (Figure \ref{fig:upchannels}) in the convolution head. The baseline ($K=0$) is equivalent to the original FPN \cite{Lin_FPN} which uses \textit{fc-head} alone. The first group only stacks residual blocks, while the second group alternates $(K+1)/2$ residual blocks and $(K-1)/2$ non-local blocks. } \label{table:convolutionHeadStack} \end{table}
\begin{table}[!t] \begin{center} \begin{footnotesize}
\begin{tabular}{ c | c c c }
\bottomrule
Fusion Method &AP & AP$_{0.5}$ & AP$_{0.75}$ \\
\hline
No fusion & 39.7 & {59.5} & 43.4 \\
Max & {39.9} & {59.7} & {43.7} \\
Average & 40.1 & {59.8} & 44.1 \\
Complementary & \textbf{40.3} & \textbf{60.3} & \textbf{44.2}\\
\toprule
\end{tabular}
\end{footnotesize} \end{center}
\caption{Fusion of classifiers from both heads. Complementary fusion (Eq. \ref{eq:cls-fusion}) outperforms others.
The model is trained using weights $\lambda^{fc}=0.7$, $\lambda^{conv}=0.8$.} \label{table:cls-fusion} \end{table} \begin{figure*}[!t] \begin{center} \includegraphics[width=\linewidth]{figuresCameraReady/heatmapV3_na_22_captionv3.pdf} \end{center} \caption{AP over balance weights $\lambda^{fc}$ and $\lambda^{conv}$. For each ($\lambda^{fc}$, $\lambda^{conv}$) pair, we trained a Double-Head-Ext model. Note that the vanilla Double-Head is a special case with $\lambda^{fc}=1$, $\lambda^{conv}=1$. For each model, we evaluate AP in four ways: (a) using \textit{conv-head} alone, (b) using \textit{fc-head} alone, (c) using classification from \textit{fc-head} and bounding box from \textit{conv-head}, and (d) using classification fusion from both heads and bounding box from \textit{conv-head}. Note that the first row in (a) and (d) is not available, due to the unavailability of classification in \textit{conv-head} when $\lambda^{conv}=1$. The last column in (b) is not available, due to the unavailability of bounding box regression in \textit{fc-head} when $\lambda^{fc}=1$.} \label{fig:parameterSearch} \end{figure*} \begin{table} [!t] \begin{center} \begin{footnotesize} \begin{tabular}{c | c c c } \bottomrule Method & AP & AP$_{0.5}$ & AP$_{0.75}$ \\ \hline FPN baseline \cite{Lin_FPN} & {47.4} & {75.7}& {41.9} \\ Double-Head-Ext (ours) & \textbf{49.2} & \textbf{76.7}& \textbf{45.6} \\ \toprule \end{tabular} \end{footnotesize} \end{center} \caption{ Comparisons with FPN baseline \cite{Lin_FPN} on the VOC07 dataset with the ResNet-50 backbone. Our Double-Head-Ext outperforms the FPN baseline.
} \label{table:comparisonBaselineVOC} \end{table} \begin{table*} [!t] \begin{center} \begin{footnotesize}
\begin{tabular}{c c | c c c | c c c}
\bottomrule
Method & Backbone & AP & AP$_{0.5}$ & AP$_{0.75}$ & AP$_s$ & AP$_m$& AP$_l$ \\ \hline
Faster R-CNN \cite{ren2015faster} & ResNet-50-C4 & 34.8 & {55.8} & 37.0& 19.1& 38.8& 48.2 \\
FPN baseline \cite{Lin_FPN} & ResNet-50 & 36.8 & 58.7 & 40.4 & 21.2 & 40.1 & 48.8 \\
Double-Head (ours) & ResNet-50 & {39.8}& {59.6}& {43.6}& \textbf{22.7}& {42.9}& {53.1} \\
Double-Head-Ext (ours) & ResNet-50 & \textbf{40.3}& \textbf{60.3}& \textbf{44.2}& {22.4}& \textbf{43.3}& \textbf{54.3} \\ \hline
Faster R-CNN \cite{ren2015faster} & ResNet-101-C4 & 38.5 & {59.4} & 41.4& 19.7& 43.1& 53.3 \\
FPN baseline \cite{Lin_FPN} & ResNet-101 & {39.1} &61.0& {42.4} & {22.2}& 42.5& 51.0 \\
Double-Head (ours) & ResNet-101 & {41.5} & {61.7} & {45.6} & {23.8} & \textbf{45.2}& {54.9} \\
Double-Head-Ext (ours) & ResNet-101& \textbf{41.9} & \textbf{62.4} & \textbf{45.9} & \textbf{23.9} & \textbf{45.2}& \textbf{55.8} \\
\toprule
\end{tabular}
\end{footnotesize} \end{center} \caption{Object detection results (bounding box AP) on COCO \texttt{val2017}. Note that FPN baseline only has \textit{fc-head}. Our Double-Head and Double-Head-Ext outperform both Faster R-CNN and FPN baselines on two backbones (ResNet-50 and ResNet-101). } \label{table:comparisonBaseline} \end{table*} \noindent \textbf{Balance Weights $\lambda^{fc}$ and $\lambda^{conv}$}: Figure \ref{fig:parameterSearch} shows APs for different choices of $\lambda^{fc}$ and $\lambda^{conv}$. For each ($\lambda^{fc}$, $\lambda^{conv}$) pair, we train a Double-Head-Ext model. The vanilla Double-Head model corresponds to $\lambda^{fc}=1$ and $\lambda^{conv}=1$, while other models involve supervision from unfocused tasks.
For each model, we evaluate AP for using \textit{conv-head} alone (Figure \ref{fig:parameterSearch}-(a)), using \textit{fc-head} alone (Figure \ref{fig:parameterSearch}-(b)), using classification from \textit{fc-head} and bounding box from \textit{conv-head} (Figure \ref{fig:parameterSearch}-(c)), and using classification fusion from both heads and bounding box from \textit{conv-head} (Figure \ref{fig:parameterSearch}-(d)). $\omega^{fc}$ and $\omega^{conv}$ are set to $2.0$ and $2.5$ in all experiments, respectively. We summarize key observations as follows. Firstly, using two heads (Figure \ref{fig:parameterSearch}-(c)) outperforms using a single head (Figure \ref{fig:parameterSearch}-(a), (b)) for all ($\lambda^{fc}$, $\lambda^{conv}$) pairs by at least 0.9 AP. Secondly, fusion of classifiers introduces an additional improvement of at least 0.4 AP for all ($\lambda^{fc}$, $\lambda^{conv}$) pairs (compare Figure \ref{fig:parameterSearch}-(c) and (d)). Finally, the unfocused tasks are helpful, as the best Double-Head-Ext model (40.3 AP) corresponds to $\lambda^{fc}=0.7$, $\lambda^{conv}=0.8$ (blue box in Figure \ref{fig:parameterSearch}-(d)). It outperforms Double-Head (39.8 AP, green box in Figure \ref{fig:parameterSearch}-(c)) without using unfocused tasks by 0.5 AP. For the rest of the paper, we use $\lambda^{fc}=0.7$ and $\lambda^{conv}=0.8$ for Double-Head-Ext. \noindent \textbf{Fusion of Classifiers}: We study three different ways to fuse the classification scores from both the fully connected head ($s^{fc}$) and the convolution head ($s^{conv}$) during inference: (a) average, (b) maximum, and (c) complementary fusion using Eq. (\ref{eq:cls-fusion}). The evaluations are shown in Table \ref{table:cls-fusion}. The proposed complementary fusion outperforms other fusion methods (max and average) and gains 0.6 AP compared to using the score from \textit{fc-head} alone.
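The three fusion rules compared in Table \ref{table:cls-fusion} can be written down directly; note that the complementary rule is the symmetric noisy-OR of the two scores, $1-(1-s^{fc})(1-s^{conv})$, so it never decreases either score:

```python
def fuse_complementary(s_fc, s_conv):
    # s = s_fc + s_conv * (1 - s_fc) = s_conv + s_fc * (1 - s_conv)
    return s_fc + s_conv * (1.0 - s_fc)

def fuse_average(s_fc, s_conv):
    return 0.5 * (s_fc + s_conv)

def fuse_max(s_fc, s_conv):
    return max(s_fc, s_conv)
```

For scores in $[0,1]$, \texttt{fuse\_complementary} dominates both \texttt{fuse\_max} and \texttt{fuse\_average}, consistent with treating the two classifiers as independent evidence for the same class.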
\begin{table*} [!t] \begin{center} \begin{footnotesize}
\begin{tabular}{c c | c c c | c c c}
\bottomrule
Method & Backbone & AP & AP$_{0.5}$ & AP$_{0.75}$ & AP$_s$ & AP$_m$& AP$_l$ \\ \hline
FPN \cite{Lin_FPN} & ResNet-101 & 36.2 & 59.1 & 39.0 & 18.2 & 39.0 & 48.2 \\
Mask RCNN \cite{he2017mask} & ResNet-101 & 38.2 & 60.3 & 41.7 & 20.1 & 41.1 & 50.2 \\
Deep Regionlets \cite{Xu_2018_ECCV}& ResNet-101 & 39.3 & 59.8 & - & 21.7 & 43.7 & 50.9\\
IOU-Net \cite{jiang2018acquisition} & ResNet-101 & 40.6 & 59.0 & - & - & - &- \\
Soft-NMS \cite{bodla2017soft} & Aligned-Inception-ResNet & 40.9& \textbf{62.8} & - & 23.3 & 43.6 & 53.3\\
LTR \cite{Tan_2019_ICCV} & ResNet-101 & {41.0}& {60.8}& {44.5}& {23.2}& {44.5}& {52.5} \\
Fitness NMS \cite{tychsen2018improving} & DeNet-101 \cite{tychsen2017denet} & 41.8 & 60.9 & 44.9 & 21.5 &\textbf{45.0} & \textbf{57.5} \\
Double-Head-Ext (ours) & ResNet-101 & \textbf{42.3}& \textbf{62.8}& \textbf{46.3}& \textbf{23.9}& {44.9}& {54.3} \\
\toprule
\end{tabular}
\end{footnotesize} \end{center} \caption{ Object detection results (bounding box AP), \textit{vs.} state-of-the-art on COCO \texttt{test-dev}. All methods are in the family of two-stage detectors with a single training stage. Our Double-Head-Ext achieves the best performance. } \label{table:comparisonSOTA} \end{table*} \subsection{Main Results} \noindent \textbf{Comparison with Baselines on VOC07}: We conduct experiments on the Pascal VOC07 dataset and results are shown in Table \ref{table:comparisonBaselineVOC}. Compared with FPN, our method gains 1.8 AP. Specifically, it gains 3.7 AP$_{0.75}$ at the higher IoU threshold (0.75) and gains 1.0 AP$_{0.5}$ at the lower IoU threshold (0.5). \noindent \textbf{Comparison with Baselines on COCO}: Table \ref{table:comparisonBaseline} shows the comparison of our method with the Faster R-CNN \cite{ren2015faster} and FPN \cite{Lin_FPN} baselines on COCO \texttt{val2017}.
Our method outperforms both baselines on \textit{all} evaluation metrics. Compared with FPN, our Double-Head-Ext gains 3.5 and 2.8 AP on ResNet-50 and ResNet-101 backbones, respectively. Specifically, our method gains 3.5+ AP at the higher IoU threshold (0.75) and 1.4+ AP at the lower IoU threshold (0.5) for both backbones. This demonstrates the advantage of our method with double heads. We also observe that Faster R-CNN and FPN have different preferences over object sizes when using the ResNet-101 backbone: i.e. Faster R-CNN has better AP on medium and large objects, while FPN is better on small objects. Even compared with the best performance of FPN and Faster R-CNN across different sizes, our Double-Head-Ext gains 1.7 AP on small objects, 2.1 AP on medium objects and 2.5 AP on large objects. This demonstrates the superiority of our method, which leverages the advantage of \textit{fc-head} on classification and the advantage of \textit{conv-head} on localization. \noindent \textbf{Comparison with State-of-the-art on COCO}: We compare our Double-Head-Ext with the state-of-the-art methods on MS COCO 2017 \texttt{test-dev} in Table \ref{table:comparisonSOTA}. ResNet-101 is used as the backbone. For a fair comparison, the performance of single-model inference is reported for all methods. Here, we only consider the two-stage detectors with a single training stage. Our Double-Head-Ext achieves the best performance with 42.3 AP. This demonstrates the superior performance of our method. Note that Cascade RCNN \cite{Cai_2018_CVPR} is not included as it involves multiple training stages. Even though our method has only one training stage, its performance is only slightly below that of Cascade RCNN (42.8 AP).
\section{Conclusions} In this paper, we perform a thorough analysis and find that two widely used head structures (convolution head and fully connected head) have {opposite} preferences towards classification and localization tasks in object detection. Specifically, \textit{fc-head} is more suitable for the classification task, while \textit{conv-head} is more suitable for the localization task. Furthermore, we examine the output feature maps of both heads and find that \textit{fc-head} has more spatial sensitivity than \textit{conv-head}. Thus, \textit{fc-head} is more capable of distinguishing a complete object from part of an object, but is less robust for regressing the whole object. Based upon these findings, we propose a {Double-Head} method, which has a fully connected head focusing on classification and a convolution head for bounding box regression. Without bells and whistles, our method gains +3.5 and +2.8 AP on the MS COCO dataset over FPN baselines with ResNet-50 and ResNet-101 backbones, respectively. We hope that our findings are helpful for future research in object detection. \clearpage
\section{Introduction}\label{sec1} Studies of quantum field theories in curved space-time were originally developed in the context of gravitational physics, such as probes of black hole geometries and cosmological evolution. However, in recent years, it has been understood that the physics of quantum field theories in curved space-time uncovers far richer structures even if we are ultimately interested in the properties in the flat space-time limit. In particular, the renormalization group with the space-time dependent cut-off (a.k.a.\ local renormalization group) in the curved space-time and its relation to the Weyl anomaly have been playing a significant role in revealing the beautiful nature of the landscape of quantum field theories that are connected by the renormalization group flow \cite{Osborn:1991gm}\cite{JO}. It is hard to imagine that the recent progress in our understanding of monotonicity of the renormalization group flow \cite{Komargodski:2011vj}\cite{Komargodski:2011xv} and the possible equivalence between scale invariance and conformal invariance at the end point of the renormalization group flow \cite{Nakayama:2012nd}\cite{Luty:2012ww}\cite{Fortin:2012hn} were possible without such a formulation that heavily relies on the curved space-time (see e.g. \cite{Nakayama:2013is} for a review).\footnote{To avoid seemingly pathological counterexamples \cite{Dorigoni:2009ra}\cite{ElShowk:2011gz}, we will assume that our theories can be coupled to gravity with no anomaly in the conservation of the well-defined energy-momentum tensor.} Moreover, the applicability of the local renormalization group seems to be a foundation of the holographic interpretation of the quantum field theories.
While it may be natural to introduce the extra radial direction in holography as the one corresponding to the global renormalization group scale transformation, it is a very particular response of the dual quantum field theories to the local renormalization group that guarantees the full diffeomorphism invariance of the holographic bulk description that treats the field theory directions and the renormalization group direction equally \cite{Lee:2012xba}\cite{Lee:2013dln}. For instance, the invariance under the special conformal transformation rather than the mere scaling transformation plays a crucial role in establishing the AdS/CFT correspondence with the full space-time diffeomorphism (rather than foliation preserving diffeomorphism) in the bulk \cite{Nakayama:2012sn}. The aim of this paper is to understand the implications of the local renormalization group and Weyl anomaly in $1+2$ dimensional space-time. It is typically presumed that the Weyl anomaly only exists in even space-time dimensions (see e.g. \cite{Duff:1993wm} for a historical review of the gravitational contribution to the Weyl anomaly), and one might suspect that it is not very useful to consider the local renormalization group in odd space-time dimensions. We show this is not the case. By scrutinising the local renormalization group and its consistency conditions in $d=1+2$ dimension, we derive various hidden structures in the renormalization group. For instance, we show that the beta functions cannot be arbitrary: they must be transverse to various tensors appearing in the Weyl anomaly in $d=1+2$ dimension. We give a classification of the consistency conditions and ambiguities in most generality within the power-counting renormalization scheme. We argue that they provide many non-trivial constraints on the possible forms of beta functions and anomalous dimensions of general $d=1+2$ dimensional quantum field theories.
While our main focus is on $d=1+2$ dimension, we hope our systematic approach to the local renormalization group analysis will give a comprehensive understanding of this subject in other dimensions, too. Indeed, we stress that our systematic classification of consistency conditions and ambiguities in the local renormalization group is applicable in any other dimension with little modification, although the actual expressions may differ in even and odd dimensions. In particular, we hope that our discussion of the relatively less known ambiguities in the renormalization group will clarify some of the confusions encountered in the study of the relation between scale invariance and conformal invariance. The organization of the paper is as follows. In section \ref{sec2}, we begin with the analysis of the local renormalization group in the situation where there are no dimensionful coupling constants. The essential features of the local renormalization group in $d=1+2$ dimension will be explained there. In principle, one can skip section \ref{sec2} and go directly to section \ref{sec3}, in which we analyse the local renormalization group in most generality within the power-counting renormalization scheme, but we hope that section \ref{sec2} is pedagogical enough to convey the logic without introducing too many terms. In section \ref{sec4}, we give some modest checks of our results in conformal perturbation theories, supersymmetric field theories and holographic computations. In section \ref{sec5}, we conclude with some future perspectives. We have two appendices. In appendix \ref{appa}, we discuss a possible generalization of the local renormalization group analysis with the cosmological constant. In appendix \ref{appb} we collect our conventions and some useful formulae. \section{Local renormalization group and consistency conditions without mass parameters} \label{sec2} Let us consider a $(1+2)$ dimensional relativistic quantum field theory originally defined in the flat space-time.
In most of this paper, we are implicit about the Wick rotation and work in the Euclidean signature. The study of the local renormalization group gives non-trivial constraints on the possible renormalization group flow. The starting point of the local renormalization group is to construct the generating functional for correlation functions (i.e. the Schwinger vacuum energy functional \cite{Schwinger:1951xk}) by promoting the coupling constants $g^I$ to space-time dependent background fields $g^{I}(x)$: \begin{align} e^{W[g^I(x)]} = \int \mathcal{D} X e^{-S_0[X] - \int d^3x \, g^I(x) O_I(x) + \mathcal{O}(g^2) } \ , \end{align} where $O_I(x)$ are all the (primary) operators in the theory (we will also discuss various tensorial operators below).\footnote{There is a small caveat here. If $O_I(x)$ (rather than its space-time integral) is not well-defined, the promotion of the coupling constants to background fields may not be possible. A famous example is the Chern-Simons interaction. At the same time, in such situations, there is a topological obstruction so that the renormalization of such coupling constants is very much constrained (e.g. only the one-loop shift in Chern-Simons theory). We can simply treat such coupling constants as external fixed parameters in the following argument. In particular there is no associated Weyl anomaly.} The $\mathcal{O}(g^2)$ higher order terms in the definition of the renormalized Schwinger functional contain some arbitrariness related to contact terms and scheme dependence, which we will dwell on later. However, at this point, we should mention that there are two types of important background fields whose contact-term structure may be constrained by requiring the relevant Ward-Takahashi identities.
The first one is the background metric $\gamma_{\mu\nu}(x) = \eta_{\mu\nu} + h_{\mu\nu}(x) + \cdots$ (here $\eta_{\mu\nu}$ is the flat space-time metric) that naturally couples to the energy-momentum tensor as $h_{\mu\nu} T^{\mu\nu} + O(h^2)$. The arbitrariness of the coupling to the background metric is reduced by requiring that the vacuum energy functional $W[\gamma_{\mu\nu}(x), g^I(x)]$ be diffeomorphism invariant with respect to the background metric $ds^2 = \gamma_{\mu\nu}(x) dx^{\mu} dx^{\nu}$. Still, this does not fix the arbitrariness entirely because there are higher curvature corrections, such as the $\xi R \phi^2$ term in scalar field theories with $R$ being the Ricci scalar, which cannot be fixed without further assumptions (e.g. Weyl invariance or supersymmetry). We could also add local counterterms constructed out of the metric that are diffeomorphism invariant. The second important example is the background vector fields $a_{\mu}(x)$ that couple to not-necessarily-conserved vector operators $J^\mu(x)$. Generically, the vector operators $J^\mu$ are not conserved due to the source terms $g^I(x)O_I(x)$ in the interaction. In order to systematically implement the broken Ward-Takahashi identities for the vector operators $J^\mu$, it is convenient to introduce the compensated gauge transformations for the sources of the violation such as $g^I(x)$ so that the vacuum energy functional $W[\gamma_{\mu\nu}(x), g^I(x), a_{\mu}(x)]$ is invariant under the compensated gauge transformation: \begin{align} \delta a_\mu(x) &= D_{\mu} w(x) \cr \delta g^I(x) &= -(wg)^I(x) \ . \label{compensg} \end{align} Here we assume that the ``free part" of the action $S_0[X]$ has the symmetry $\mathcal{G}$ and the background gauge fields $a_{\mu}(x)$ lie in the corresponding Lie algebra $\mathfrak{g}$. The coupling constants $g^I(x)$ form a certain representation under $\mathcal{G}$.
We will denote the covariant derivative $D_{\mu} = \partial_\mu + a_{\mu}$ and the field strength $f_{\mu\nu} = \partial_\mu a_\nu - \partial_\nu a_{\mu} + [a_{\mu},a_{\nu}]$ as usual in the matrix notation. When the covariant derivative acts on tensors, it must contain the additional space-time connection. This compensated gauge invariance plays a significant role in understanding the importance of operator identities in the local renormalization group analysis \cite{Osborn:1991gm}\cite{JO}. The crucial assumption in the following is that the Schwinger vacuum energy functional is finitely renormalized (the renormalizability assumption). Theoretically this assumption is a great advantage because varying the renormalized Schwinger functional automatically takes into account the renormalization of the composite operators.\footnote{This is a chicken-and-egg problem in the actual computation because we have to renormalize the infinite set of operators with derivatives to construct the renormalized vacuum energy functional after all. However, the general structure of the renormalization group flow is seen more transparently by simply declaring its existence.} The renormalization group equation for this Schwinger functional, whose study is the main goal of this paper, is known as the local renormalization group equation \cite{Osborn:1991gm} because we perform a space-time dependent change of the coupling constants as well as the renormalization scale. This has a huge advantage in discussing conformal invariance (rather than mere scale invariance) because it directly provides the response to a non-constant Weyl transformation. Throughout this section, we concentrate on the so-called massless renormalization group flow in which we have no dimensionful coupling constants.
Without any dimensionful coupling constant at hand, the local renormalization group operator can be expressed as \begin{align} \Delta_{\sigma} &= \int d^3x \sqrt{|\gamma|} \left( 2\sigma \gamma_{\mu\nu} \frac{\delta}{\delta \gamma_{\mu\nu}} + \sigma \beta^I \frac{\delta}{\delta g^I} \right. + \left. \left( \sigma \rho_I D_{\mu} g^I - (\partial_\mu \sigma) v \right)\cdot \frac{\delta}{\delta a_{\mu}} \right) \ . \end{align} In the subsequent sections, we will study further generalizations with dimensionful coupling constants. The assumption of renormalizability is equivalent to the claim that the Schwinger functional is annihilated by $\Delta_{\sigma}$ up to the Weyl anomaly, which is a local functional of the renormalized sources. Each term in $\Delta_{\sigma}$ has a simple interpretation. The first term $2\sigma \gamma_{\mu\nu} \frac{\delta}{\delta \gamma_{\mu\nu}}$ generates nothing but the Weyl rescaling of the metric by the Weyl factor $\sigma(x)$: $\delta_{\sigma}\gamma_{\mu\nu}(x) = 2\sigma(x) \gamma_{\mu\nu}(x)$. The renormalization of the coupling constants introduces additional running of the coupling constants under the change of the local scale transformation: $\beta^I$ is the scalar beta function for the corresponding operator $O_I$, which is necessary to cancel the divergence appearing in the coupling constant renormalization for $g^I$. The less familiar terms $\rho_I$ and $v$ are related to the renormalization group running of the vector background source $a_{\mu}$. We emphasize that once the coupling constants $g^I(x)$ are space-time dependent, we have extra divergences associated with vector operators that must be cancelled by renormalizing the background vector fields $a_{\mu}$. Even in the flat space-time limit, such effects are actually visible as the renormalization of the composite vector operators.
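As a quick sanity check of the compensated transformation \eqref{compensg}, one can verify that the covariant derivative $D_\mu g = \partial_\mu g + a_\mu g$ indeed transforms covariantly, $\delta(D_\mu g) = -w\, (D_\mu g)$. The following is a minimal sympy sketch of this algebra (our own toy realization with $2\times 2$ matrix-valued $a_\mu$, $w$ and a two-component coupling $g$; it is not taken from the paper's computations):

```python
# Check delta(D_mu g) = -w (D_mu g) under the compensated transformation
# delta a_mu = D_mu w = d_mu w + [a_mu, w],  delta g = -w g,
# realized here with 2x2 matrix-valued a_mu, w and a column vector g.
import sympy as sp

x = sp.symbols('x0:3')

def mat(name):
    return sp.Matrix(2, 2, lambda i, j: sp.Function(f'{name}{i}{j}')(*x))

def vec(name):
    return sp.Matrix(2, 1, lambda i, j: sp.Function(f'{name}{i}')(*x))

def d(expr, mu):
    return expr.diff(x[mu])

a = [mat(f'a{mu}_') for mu in range(3)]   # background gauge field a_mu
w = mat('w_')                             # gauge transformation parameter
g = vec('g')                              # coupling in a 2-dim representation

for mu in range(3):
    Dg = d(g, mu) + a[mu] * g                       # D_mu g
    delta_a = d(w, mu) + a[mu] * w - w * a[mu]      # delta a_mu = D_mu w
    delta_g = -w * g                                # delta g = -w g
    delta_Dg = d(delta_g, mu) + delta_a * g + a[mu] * delta_g
    assert sp.expand(delta_Dg + w * Dg) == sp.zeros(2, 1)
print("delta(D g) = -w (D g): covariance verified")
```

The cancellation of the inhomogeneous $(\partial_\mu w)g$ terms between $\delta a_\mu\, g$ and $\partial_\mu(\delta g)$ is what makes the Schwinger functional built from $D_\mu g^I$ compatible with \eqref{compensg}.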
The invariance of the Schwinger functional under the local renormalization group (up to anomaly) corresponds to the trace identity \begin{align} T^{\mu}_{\ \mu} = \beta^I O_I + (\rho_I D_\mu g^I) \cdot J^\mu + D_\mu (v\cdot J^\mu) + A_\mathrm{anomaly} \ \label{traceiden} \end{align} from the definition of the renormalized composite operators: \begin{align} 2\frac{\delta}{\delta \gamma^{\mu\nu}(x)} W &= -\langle T_{\mu\nu}(x) \rangle \cr \frac{\delta}{\delta g^I(x)} W &= - \langle O_I(x) \rangle \cr \frac{\delta}{\delta a_{\mu}(x)}W &= - \langle J^\mu(x) \rangle \ . \end{align} These relations are typically known as the Schwinger (quantum) action principle \cite{Schwinger:1951xk}. In our local renormalization group approach, it simply gives the definition of the renormalized composite operators. The last term $A_\mathrm{anomaly}$ in \eqref{traceiden} is a $c$-number that depends on the space-time dependent coupling constants or background fields, usually known as Weyl anomaly (or trace anomaly) that we will discuss below. As we will discuss in more detail in section \ref{sec2.2}, the Schwinger functional must be invariant under the compensated gauge transformation \eqref{compensg}: \begin{align} \Delta_{w} W[\gamma_{\mu\nu},g^I, a_{\mu}] = \int d^3x \sqrt{|\gamma|} \left(D_\mu w \cdot \frac{\delta}{\delta a_{\mu}} - (wg)^I \frac{\delta}{\delta g^I} \right) W[\gamma_{\mu\nu},g^I, a_{\mu}] = 0 \ \label{gaugetrans} \end{align} for any Lie algebra element $w \in \mathfrak{g}$ that generates the compensated symmetry $\mathcal{G}$, so the local renormalization group operator can be equivalently rewritten as \begin{align} \Delta_{\sigma} &= \int d^3x \sqrt{|\gamma|} \left( 2\sigma \gamma_{\mu\nu} \frac{\delta}{\delta \gamma_{\mu\nu}} + \sigma B^I \frac{\delta}{\delta g^I} \right. + \left. 
\left( \sigma \hat{\rho}_I D_{\mu} g^I \right)\cdot \frac{\delta}{\delta a_{\mu}} \right) \ , \label{lrgrv} \end{align} when we act on the gauge invariant $W[\gamma_{\mu\nu},g^I, a_{\mu}]$, where \begin{align} B^I &= \beta^I - (vg)^I \cr \hat{\rho}_I &= \rho_I + \partial_I v \ . \end{align} In the language of the trace identity, the rewriting here corresponds to the use of the operator identity or the equation of motion\footnote{This equation may seem to assume implicitly that the tree-level equations of motion are the same as the renormalized ones. Depending on the renormalization scheme, this may not be the case and it is possible to have corrections such that $(wg)^I$ is effectively replaced by $(Xwg)^I$, where $X = 1+ O(g^I)$ now contains the higher order corrections. Similar ambiguities appear in section \ref{sec2.2} (Class 2 ambiguity). Such a possibility is unavoidable in $d=1+3$ dimension due to the possible gauge anomaly in the right hand side of \eqref{gaugetrans}. We do not expect the gauge anomaly in $d=1+2$ dimension, but we may have the (fractional) Chern-Simons counterterms we will discuss later. In any case, after rewriting it as in \eqref{lrgrv} with whatever renormalized operator identity we have in the theory, there will be no significant difference in the following.} \begin{align} v \cdot D_\mu J^\mu = -(vg)^I O_I \ \end{align} so that we have the equivalent expression \cite{Osborn:1991gm} \begin{align} T^{\mu}_{\ \mu} = B^I O_I + (\hat{\rho}_I D_\mu g^I) \cdot J^\mu + A_\mathrm{anomaly} \ . \label{trirv} \end{align} Although the physics does not depend on which gauge (for the background fields) we choose, we will mostly stick to the conventional choice \eqref{lrgrv} and \eqref{trirv} in the following. This choice has a great advantage in the flat space-time limit because $B^I = 0$ directly implies conformal invariance (i.e. $T^{\mu}_{\ \mu}|_{\gamma_{\mu\nu} = \eta_{\mu\nu}} = 0$).
If we used the other choice, we would have to keep track of both $\beta^I$ and $v$ to compute $B^I = \beta^I - (vg)^I$ in order to discuss the conformal invariance. For this reason, it is most convenient \cite{JO}\cite{Nakayama:2012nd} to define the renormalization group equation for the running background source fields by \begin{align} \frac{dg^I}{d\sigma} &= B^I \cr \frac{da_{\mu}}{d\sigma} &= \hat{\rho}_I D_{\mu} g^I \ . \end{align} Again, we could evolve the coupling constants in whatever gauge we like (i.e. $\frac{dg^I}{d\sigma} = \beta^I$), and the physics does not change. However, the conformal invariance at the fixed point would be disguised. In the flat space-time limit, the physical meaning of these equations can be summarized as the (massless) Callan-Symanzik equation or Gell-Mann Low equation: \begin{align} \left(\frac{\partial}{\partial \log\mu} + \beta^I \frac{\partial}{\partial g^I} \right) W[\gamma_{\mu\nu}(x) = \eta_{\mu\nu}, g^I(x) = g^I, a_\mu(x) = 0] = 0 \ , \end{align} where $\mu$ is the space-time independent renormalization scale. Here the generator of the constant scaling transformation by the metric is replaced by the change of the renormalization scale $\mu$ from the dimensional counting. Note that (A) the contribution from the source term anomaly $A_{\mathrm{anomaly}}$ is gone, and (B) the total divergence terms $D^\mu J_\mu$ in the trace identity do not contribute (at least except for possible contact terms) so that one can replace $B^I$ with $\beta^I$, which makes it harder to keep track of these terms in the flat space-time renormalization \cite{Nakayama:2012nd}.\footnote{Note that due to the contact terms, we do have to keep track of the wave-function renormalization factor and equation of motion terms if we compute the higher point (integrated) correlation functions. 
These contact terms will be different depending on whether we use the $\beta^I$ functions or the $B^I$ functions.} In $d=1+2$, without any mass parameter, the allowed structure of the Weyl anomaly is limited by the power-counting renormalization scheme that we assume. Up to total derivatives, we have the anomalous Weyl variation \begin{align} A_{\sigma} & = \Delta_\sigma W|_{\mathrm{anomaly}} \cr &= \int d^3 x \sqrt{|\gamma|} \sigma \left( \epsilon^{\mu\nu\rho} C_{IJK} D_{\mu} g^I D_{\nu} g^J D_{\rho} g^K + \epsilon^{\mu\nu\rho} f_{\mu\nu} \cdot C_I \cdot D_{\rho} g^I \right) \ . \label{3danomaly} \end{align} Here $C_{IJK}$ maps $(R_I \otimes R_J \otimes R_K) \to \mathbf{R}$,\footnote{We always choose $C_{IJK}$ to be antisymmetric with respect to permutations of $IJK$: $C_{IJK} = C_{[IJK]}$. See appendix \ref{appb} for our convention for the antisymmetric symbol.} and $C_I$ maps $(\mathrm{adj} \otimes R_I) \to \mathbf{R}$ under the compensated symmetry group $\mathcal{G}$. Equivalently, we have the trace anomaly from the space-time dependent coupling constants: \begin{align} T^{\mu}_{\ \mu} |_{\mathrm{anomaly}} = A_{\mathrm{anomaly}} = -\epsilon^{\mu\nu\rho} C_{IJK} D_{\mu} g^I D_{\nu} g^J D_{\rho} g^K - \epsilon^{\mu\nu\rho} f_{\mu\nu} \cdot C_I \cdot D_{\rho} g^I \ . \label{traceanomalymassless} \end{align} Note that CP must be broken to obtain this non-trivial trace anomaly, due to the appearance of the Levi-Civita tensor $\epsilon^{\mu\nu\rho}$. We also notice that for a constant scale transformation (i.e. $\partial_\mu \sigma = 0$), we have the equivalence relations $C_{IJK} \sim C_{IJK} + \partial_{[I} \Lambda_{JK]}$ and $C_{I} \sim C_{I} + \partial_I \Lambda$ thanks to possible integration by parts. Thus, the constant scale anomaly is weaker than the Weyl anomaly in such a situation (see e.g. \cite{Nakayama:2012sn} for a similar argument in relation to holography).
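The equivalence relation $C_{IJK} \sim C_{IJK} + \partial_{[I} \Lambda_{JK]}$ under a constant scale transformation rests on the corresponding shift of the integrand being a total derivative. The following sympy sketch checks this in our own toy setting (three couplings with arbitrary polynomial space-time profiles and a trivial background gauge field, so $D_\mu \to \partial_\mu$; the contraction with $\epsilon^{\mu\nu\rho}\partial g\,\partial g\,\partial g$ antisymmetrizes $IJK$ automatically, so the unantisymmetrized $\partial_I \Lambda_{JK}$ gives the same scalar):

```python
# Check that the shift C_{IJK} -> C_{IJK} + d_[I Lambda_{JK]} changes the
# constant-sigma anomaly integrand only by a total derivative.
import sympy as sp

x = sp.symbols('x0:3')
G = sp.symbols('G1:4')                   # coupling-space coordinates g^I

# arbitrary polynomial space-time profiles for the three couplings
g = [x[0]*x[1], x[1]*x[2] + x[0]**2, x[2]**3 + x[0]*x[1]*x[2]]
sub = dict(zip(G, g))

# arbitrary antisymmetric two-form Lambda_{JK}(g) on coupling space
Lam = sp.zeros(3, 3)
Lam[0, 1], Lam[0, 2], Lam[1, 2] = G[0]*G[2], G[1]**2, G[0] + G[1]*G[2]
Lam[1, 0], Lam[2, 0], Lam[2, 1] = -Lam[0, 1], -Lam[0, 2], -Lam[1, 2]

eps = sp.LeviCivita
def dg(I, mu):
    return g[I].diff(x[mu])

# shift of the anomaly integrand induced by d_I Lambda_{JK}
shift = sum(eps(m, n, r) * Lam[J, K].diff(G[I]).subs(sub)
            * dg(I, m) * dg(J, n) * dg(K, r)
            for m in range(3) for n in range(3) for r in range(3)
            for I in range(3) for J in range(3) for K in range(3))

# candidate current V^mu = eps^{mu nu rho} Lambda_{JK}(g(x)) d_nu g^J d_rho g^K;
# the second-derivative terms in its divergence cancel by eps antisymmetry
V = [sum(eps(m, n, r) * Lam[J, K].subs(sub) * dg(J, n) * dg(K, r)
         for n in range(3) for r in range(3)
         for J in range(3) for K in range(3))
     for m in range(3)]
div_V = sum(V[m].diff(x[m]) for m in range(3))

assert sp.expand(shift - div_V) == 0
print("the shift of the constant-sigma anomaly is a total derivative")
```

For non-constant $\sigma$, $\partial_\mu \sigma$ no longer passes through the integration by parts, which is why the full Weyl anomaly distinguishes $C_{IJK}$ from $C_{IJK}+\partial_{[I}\Lambda_{JK]}$ while the constant scale anomaly does not.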
\subsection{Consistency condition} \label{sec2.1} So far, we have introduced various beta functions and anomalous Weyl variations for space-time dependent sources. The important observation is that there exist non-trivial consistency conditions they must satisfy. In this subsection, we discuss such consistency conditions in a systematic way. We first propose that there are two distinct classes of consistency conditions from the integrability of the local renormalization group. \begin{itemize} \item Class 1 consistency condition: Integrability conditions for the local renormalization group transformation operator \item Class 2 consistency condition: Integrability conditions for the Weyl anomaly \end{itemize} Both of them are based on the requirement that the local renormalization group (or Weyl transformation) is Abelian: \begin{align} [\Delta_\sigma, \Delta_{\tilde{\sigma}}] = 0 \ . \end{align} This is known as the Wess-Zumino consistency condition \cite{Osborn:1991gm}. Class 1 consistency conditions come from the general property of the local renormalization group operator $\Delta_{\sigma}$, and they do not depend on the specific form of the Weyl anomaly. Therefore, Class 1 consistency conditions are more or less independent of the space-time dimension $d$, although we focus on $d=1+2$ in this paper. Requiring the commutation relation \begin{align} [\Delta_\sigma, \Delta_{\tilde{\sigma}}] W[\gamma_{\mu\nu}, g^I , a_{\mu}] = 0 \end{align} on any (local or non-local) functional $W[\gamma_{\mu\nu}, g^I, a_{\mu}]$, we must demand\footnote{More precisely, the integrability condition need only hold for the functional $W[\gamma_{\mu\nu}, g^I, a_{\mu}]$ that is consistent with the local renormalization group, so at this stage it may not necessarily be true for arbitrary functionals.
As we will discuss, however, we can always add local counterterms to $W[\gamma_{\mu\nu}, g^I, a_{\mu}]$, so the following requirement, which can be obtained from the action on local functionals, is certainly necessary for our purpose.} \begin{align} -\int d^3x\sqrt{|\gamma|} (\sigma \partial_\mu \tilde{\sigma} - \tilde{\sigma} \partial_\mu \sigma) B^I \hat{\rho}_I \cdot \frac{\delta}{\delta a_{\mu}} = 0 \ , \end{align} or \begin{align} B^I \hat{\rho}_I = 0 \ , \label{rhocons} \end{align} which is a transversality condition on the beta functions. Note that this condition is the same as the one found in $d=1+3$ dimension \cite{Osborn:1991gm}, where it played an important role in deriving the perturbative strong $a$-theorem with non-trivial vector operators. On the other hand, Class 2 consistency conditions come from the anomalous terms $A_{\mathrm{anomaly}}$ (or its integrated form $A_{\sigma}$) in the local renormalization group transformation, and therefore the following conditions are unique to $d=1+2$ dimension. The Wess-Zumino consistency condition on the anomalous variation demands \begin{align} \Delta_{\tilde{\sigma}} A_{\sigma} = \Delta_{\sigma} A_{\tilde{\sigma}} \ , \end{align} as follows by recalling the definition of the anomaly $A_{\sigma} = \Delta_\sigma W$. By substituting the available form of the anomaly \eqref{3danomaly}, and using the variational formula \begin{align} \Delta_{\sigma} D_\mu g^I &= \partial_\mu \sigma B^I + \sigma(\partial_J B^I + (\hat{\rho}_J g)^I) D_\mu g^J \cr \Delta_{\sigma} f_{\mu\nu} &= \sigma( (f_{\mu\nu} g)^I \hat{\rho}_I + (\partial_I \hat{\rho}_J - \partial_J \hat{\rho}_I) D_{\mu} g^I D_{\nu} g^J ) \cr & + \partial_\mu \sigma \hat{\rho}_I D_\nu g^I - \partial_\nu \sigma \hat{\rho}_I D_\mu g^I \ , \end{align} we obtain the consistency conditions from the terms proportional to $D_{\mu}g^I D_\nu g^J$ and $f_{\mu\nu}$ as \begin{align} 3B^I C_{IJK} + \hat{\rho}_J C_K - \hat{\rho}_{K} C_J & = 0 \cr B^I C_I & = 0 \ .
\label{consistenccc} \end{align} Note that contracting the first equation with $B^J$ and using the Class 1 consistency condition $B^I \hat{\rho}_I = 0$ reproduces the second equation up to an overall factor of $\hat{\rho}_K$. Again, the consistency conditions require that the beta functions satisfy certain transversality conditions. With the same logic, Osborn \cite{Osborn:1991gm} derived Class 2 consistency conditions for the Weyl anomaly in $d=1+1$ and $d=1+3$ dimension, among which he obtained \begin{align} B^I \partial_I \tilde{A} = - g_{IJ} B^I B^J \label{atheorem} \end{align} with a certain ``metric" $g_{IJ}$ and a potential function $\tilde{A}$ on the coupling constant space. This equation provided a foundation of the perturbative proof \cite{Osborn:1991gm} of the $c$-theorem \cite{Zamolodchikov:1986gt} in $d=1+1$ and the $a$-theorem \cite{Cardy:1988cwa}\cite{Komargodski:2011vj} in $d=1+3$, where $\tilde{A}$ is identified as the interpolating $a$-function along the renormalization group flow. Our results do not directly give analogous monotonicity results in $d=1+2$ dimension, but they still impose non-trivial constraints on the renormalization group. \subsection{Ambiguity} \label{sec2.2} The renormalization group has intrinsic ambiguities typically known as scheme dependence. The use of the local renormalization group leads to a classification of such ambiguities in a systematic manner. The well-known scheme dependence (e.g. various subtraction schemes in dimensional regularization) is understood as a particular subclass (Class 2) of the ambiguities we will discuss in this subsection. Broader classes of ambiguities play a significant role in understanding composite operator renormalization and operator mixing, such as that of the energy-momentum tensor. We have three distinct classes of ambiguities in the local renormalization group.
\begin{itemize} \item Class 1 ambiguity: Gauge (or equations of motion) ambiguity \item Class 2 ambiguity: Scheme ambiguity \item Class 3 ambiguity: Local counterterm ambiguity \end{itemize} We have already mentioned Class 1 ambiguity at the beginning of this section in order to introduce the concept of the gauge invariant flow of coupling constants generated by the $B^I$ functions rather than the ambiguous beta functions $\beta^I$ that depend on the gauge we choose. Here, we recapitulate Class 1 ambiguities in more detail. Due to the invariance under the compensated gauge transformation for the coupling constants, the Schwinger functional $W[\gamma_{\mu\nu}, g^I, a_{\mu}]$ is constructed so that it is invariant under the gauge transformation \begin{align} \Delta_{w} W[\gamma_{\mu\nu}, g^I,a_\mu] = \int d^3x \sqrt{|\gamma|} \left(D_\mu w \cdot \frac{\delta}{\delta a_{\mu}} - (wg)^I \frac{\delta}{\delta g^I} \right) W[\gamma_{\mu\nu}, g^I,a_\mu] = 0 \ \label{gaugeW} \end{align} and correspondingly, the form of the Weyl anomaly is ambiguous up to the terms that vanish by \eqref{gaugeW}. In the trace identity, we have seen that the gauge transformation is related to the use of the operator identity \begin{align} w \cdot D_\mu J^\mu = -(wg)^I O_I \ . \end{align} We call it the gauge ambiguity because it is the gauge transformation of the space-time dependent source terms. In \cite{Nakayama:2012sn}, it was discussed that it corresponds to a certain gauge transformation in the $d+1$ dimensional space-time in holography. As we mentioned before, this gauge freedom causes ambiguities in the definition of beta functions because the choice of gauge affects the evolution of the scalar coupling constants $g^I$. This ambiguity in defining beta functions in flat space-time is cancelled if we use the gauge invariant $B^I$ functions rather than the $\beta^I$ functions in the renormalization group equation \cite{Osborn:1991gm}.
Moreover, the vanishing of the $B^I$ function is directly related to the Weyl invariance of the theory. In this paper, we mainly focus on the gauge in which the flow of coupling constants is generated by the $B^I$ function, although the physics does not depend on the choice of gauge. Class 2 ambiguity is given by the scheme dependence in the renormalization group. Certainly there is an ambiguity in the parameterization of the coupling constant space, given a ``classical action". The parameterization depends on the renormalization scheme we choose. A well-known example is the reparametrization of the scalar coupling constants $g^I \to \tilde{g}^I(g)$. It induces a general coordinate transformation in coupling constant space. Under such a reparametrization, various terms transform in a rather obvious manner. For instance, $B^I$ and $\hat{\rho}_I$ transform as a vector and a one-form respectively, and the anomaly coefficients $C_{IJK}$, $C_{K}$ transform as a three-form and a one-form. The consistency conditions are manifestly covariant under the reparametrization.\footnote{The situation was slightly more non-trivial in $d=1+3$ dimension, in which some anomaly coefficients do not naturally transform as tensors without further modifications of their definitions \cite{Osborn:1991gm}.
We will encounter a similar situation in $d=1+2$ dimension once we introduce the dimensionful coupling constants.} In a more abstract way, we can generate the scheme ambiguity by considering the variation \begin{align} \delta \Delta_{\sigma} &= [\mathcal{D}, \Delta_{\sigma}] \cr \delta A_{\sigma} & = \mathcal{D} A_{\sigma} \end{align} with any local functional differential operator $\mathcal{D}$ \cite{JO}.\footnote{Practically, we restrict ourselves to the situation where $\mathcal{D}$ preserves the power-counting and the manifest symmetry group $\mathcal{G}$.} The above scalar coupling constant reparametrization is generated by choosing \begin{align} \mathcal{D} = \int d^3x \sqrt{|\gamma|} f^I(g) \frac{\delta}{\delta g^I} \ , \label{scalarred} \end{align} where $\tilde{g}^I = g^I + f^I(g)$ infinitesimally. A more non-trivial ambiguity in this class is given by the mixing between $a_{\mu}$ and $D_{\mu} g^I$. Choosing \begin{align} \mathcal{D} = \int d^3x \sqrt{|\gamma|} r_I D_{\mu} g^I \cdot \frac{\delta}{\delta a_{\mu} } \ \label{vectorred} \end{align} introduces, among other things, the shift of the total derivative terms in the trace identity by the amount $\delta v = r_I B^I$. This shift forces us to depart from the original gauge choice (i.e. $v=0$), so after eliminating this extra $v$ again by the Class 1 ambiguity (gauge ambiguity), we can go back to the original gauge with the new parameterization of the local renormalization group: \begin{align} \delta \hat{\rho}_I &= (r_I g)^J \hat{\rho}_J - (\hat{\rho}_Ig)^J r_J + (\partial_I r_J - \partial_J r_I)B^J \cr \delta B^I &= -B^J (r_Jg)^I \ \label{class21} \end{align} for the trace identity and \begin{align} \delta C_{IJK} &= 3C_{L[JK} (r_{I]} g)^L + 2(\partial_{[I}r_{J})C_{K]} \ \cr \delta C_I &= C_I (r_Kg)^K + C_K (r_I g)^K \label{class22} \end{align} for the trace anomaly.
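These shifts are compatible with the consistency conditions of section \ref{sec2.1}. As a minimal illustration, consider a toy in which the compensated symmetry acts trivially, so that all $(rg)^I$-type terms in \eqref{class21} drop out and the shift reduces to $\delta B^I = 0$, $\delta \hat{\rho}_I = (\partial_I r_J - \partial_J r_I)B^J$; then $B^I \delta \hat{\rho}_I$ vanishes identically by antisymmetry, and the Class 1 condition \eqref{rhocons} is preserved. A sympy sketch of this (our own simplified toy, not the general representation-theoretic computation):

```python
# Check that B^I delta rho_I = 0 for the reduced Class 2 shift
# delta rho_I = (d_I r_J - d_J r_I) B^J, with generic B^I(g) and r_I(g).
import sympy as sp

G = sp.symbols('G1:4')                              # coupling-space coordinates g^I
B = [sp.Function(f'B{I}')(*G) for I in range(3)]    # B^I(g), left generic
r = [sp.Function(f'r{I}')(*G) for I in range(3)]    # reparametrization one-form r_I(g)

delta_rho = [sum((r[J].diff(G[I]) - r[I].diff(G[J])) * B[J] for J in range(3))
             for I in range(3)]

# the antisymmetric curl of r_I contracted with the symmetric B^I B^J vanishes,
# so B^I rho_I = 0 is unaffected by this scheme change
assert sp.expand(sum(B[I] * delta_rho[I] for I in range(3))) == 0
print("Class 1 transversality condition preserved under the Class 2 shift")
```

In the general case the $(r_I g)^J$ terms rotate $\hat{\rho}_I$ and $B^I$ in opposite ways under the inner product, and the full check goes through in the same manner.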
A similar but different choice \begin{align} \mathcal{D} = \int d^3x \sqrt{|\gamma|} D_{\mu} w \cdot \frac{\delta}{\delta a_{\mu} } \ \end{align} would just induce the gauge transformation for the background field $a_\mu$, so we could compensate it by transforming the coupling constants $g^I$ using the Class 1 ambiguity or the gauge equivalence \eqref{gaugeW}, which leads to a particular choice of the reparametrization of the coupling constants $g^I$ discussed above. We should note that these ambiguities are all compatible with the consistency conditions proposed in section \ref{sec2.1}. At this point, it is also worth mentioning that the condition for conformal invariance, $B^I = 0$ in the flat space-time limit with constant source terms, is not affected by Class 2 ambiguities. Finally, Class 3 ambiguity concerns the ambiguity in the trace anomaly itself. It is customary that any anomaly is defined only up to local counterterms because we can always add them by hand in the definition of the Schwinger functional. The Schwinger functional is a generating functional for correlation functions of local operators, and the local counterterms do not change the correlation functions except at coincident points in the flat space-time limit, so we can declare that they are arbitrary as long as there are no other constraints from symmetries. Thus we can generate a class of ambiguities in the local renormalization group by adding any local functional of the coupling constants to the Schwinger functional. In our discussion of the Weyl anomaly, Class 3 ambiguity therefore shows that the anomalous Weyl variation is arbitrary up to the terms generated by the local counterterms: \begin{align} \delta A_{\sigma} = \Delta_{\sigma} W_{\mathrm{local}}[\gamma_{\mu\nu}, g^I, a_{\mu}] \ .
\end{align} Without any mass parameters, power-counting demands that the allowed local counterterms are given by \begin{align} W_{\mathrm{local}}[\gamma_{\mu\nu}, g^I, a_{\mu}] = \int d^3 x \sqrt{|\gamma|} \left( \epsilon^{\mu\nu\rho} c_{IJK} D_{\mu} g^I D_{\nu} g^J D_{\rho} g^K + \epsilon^{\mu\nu\rho} f_{\mu\nu} \cdot c_I \cdot D_{\rho} g^I \right) \ . \label{masslessct} \end{align} As before, the totally antisymmetric $c_{IJK}$ maps $(R_I \otimes R_J \otimes R_K) \to \mathbf{R}$, and $c_I$ maps $(\mathrm{adj} \otimes R_I) \to \mathbf{R}$ under the compensated symmetry group $\mathcal{G}$. After some computation, the local counterterms give the ambiguity in the trace anomaly as \begin{align} \delta C_{IJK} &= 4B^L\partial_{[L} c_{IJK]} + 3c_{L[JK}(\hat{\rho}_{I]} g)^L +2(\hat{\rho}_{[I} \partial_Jc_{K]}) \cr \delta C_I &= -3c_{KJI}B^K g^J + B^K(\partial_K c_I -\partial_I c_K)\ . \label{class3trace} \end{align} There is a further possible local counterterm given by the Chern-Simons term for the background field $a_{\mu}$: \begin{align} W_{\mathrm{local}}[\gamma_{\mu\nu}, g^I, a_{\mu}] = \frac{k_{cs}}{4\pi} \int d^3x \sqrt{|\gamma|} \epsilon^{\mu\nu\rho} \mathrm{Tr} \left(\partial_\mu a_\nu a_{\rho} - \frac{2}{3}a_\mu a_\nu a_\rho \right) \ . \end{align} The induced ambiguity in the trace anomaly is \begin{align} \delta C_I = \frac{k_{cs}}{4\pi} \hat{\rho}_I \ . \label{cssmb} \end{align} Furthermore, we could have added the gravitational Chern-Simons term to the Schwinger functional as a local counterterm, but it would not contribute to the trace anomaly we are interested in. The importance of Chern-Simons local counterterms in $1+2$ dimensional quantum field theories has been discussed in the literature \cite{Maldacena:2011nz}\cite{Closset:2012vg}\cite{Closset:2012vp}. Once $k_{cs}$ is quantized by the requirement of invariance under large gauge transformations, the ambiguity we discuss here is also quantized.
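The ambiguity \eqref{cssmb} can be made explicit in the abelian case, where the cubic term in the Chern-Simons density drops out. The following sympy sketch (our own toy: a single coupling, a constant $\hat{\rho}$, and the overall $k_{cs}/4\pi$ normalization stripped) verifies that the variation $\delta a_\mu = \sigma \hat{\rho}\, \partial_\mu g$ of the abelian Chern-Simons density produces the anomaly term $\sigma \hat{\rho}\, \epsilon^{\mu\nu\rho} f_{\mu\nu} \partial_\rho g$ up to a total derivative:

```python
# Check: delta(eps^{mnr} a_m d_n a_r) under delta a_mu = sigma rho d_mu g
# equals sigma rho eps^{mnr} f_{mn} d_r g plus a total derivative.
import sympy as sp

x = sp.symbols('x0:3')
a = [sp.Function(f'a{mu}')(*x) for mu in range(3)]   # abelian background field
sigma = sp.Function('sigma')(*x)                     # Weyl factor
g = sp.Function('g')(*x)                             # single coupling g(x)
rho = sp.Symbol('rho')                               # constant rho-hat (toy)
eps = sp.LeviCivita

# local RG variation of the background field
da = [sigma * rho * g.diff(x[mu]) for mu in range(3)]
f = [[a[n].diff(x[m]) - a[m].diff(x[n]) for n in range(3)] for m in range(3)]

# variation of the abelian CS density eps^{mnr} a_m d_n a_r
var_cs = sum(eps(m, n, r) * (da[m] * a[r].diff(x[n]) + a[m] * da[r].diff(x[n]))
             for m in range(3) for n in range(3) for r in range(3))

# claimed anomaly piece sigma rho eps^{mnr} f_{mn} d_r g ...
anomaly = sum(eps(m, n, r) * sigma * rho * f[m][n] * g.diff(x[r])
              for m in range(3) for n in range(3) for r in range(3))

# ... plus the divergence of the current V^n = eps^{mnr} a_m (delta a_r)
V = [sum(eps(m, n, r) * a[m] * da[r] for m in range(3) for r in range(3))
     for n in range(3)]
div_V = sum(V[n].diff(x[n]) for n in range(3))

assert sp.expand(var_cs - anomaly - div_V) == 0
print("delta(CS) = sigma rho eps f dg + total derivative")
```

Reading off the coefficient of $\sigma\, \epsilon^{\mu\nu\rho} f_{\mu\nu} D_\rho g^I$ and restoring $k_{cs}/4\pi$ reproduces $\delta C_I = \frac{k_{cs}}{4\pi}\hat{\rho}_I$.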
Since Class 3 ambiguities are generated by the variation of a local functional, it is trivial to see that they satisfy the consistency conditions discussed in section \ref{sec2.1}. \section{Local renormalization group and consistency conditions in most general cases} \label{sec3} In this section, we consider the most general forms of the local renormalization group in $d=1+2$ dimension within the power-counting renormalization scheme by adding dimensionful coupling constants to the massless case discussed in section \ref{sec2}.\footnote{Since it does not introduce any interesting new aspects, in this section we will not consider the renormalization of the cosmological constant, which is the source of the identity operator. We present further details on the cosmological constant in appendix A.} Since the lower dimensional operators (with no additional derivatives) do not mix with the higher dimensional operators in the power-counting renormalization scheme, the inclusion of the dimensionful coupling constants does not alter the massless renormalization group flow in the perturbative search for the conformal fixed point. However, the following discussions may be important in understanding the effect of composite operator renormalization, such as that of the energy-momentum tensor and mass operators, even within the massless renormalization group flow, which has some practical applications such as conformal sequestering and conformal technicolor models. We introduce the additional ``mass terms" $m^{\alpha} O^{(m)}_{\alpha}$ with mass dimension 2 (e.g. fermion mass terms or scalar quartic interactions) and $M^i O^{(M)}_i$ with mass dimension 1 (e.g. scalar mass terms). Local renormalization group demands that the sources $m^{\alpha}$ and $M^i$ must be space-time dependent. We suppress the indices $\alpha$ and $i$, which are in certain representations of the compensated symmetry group $\mathcal{G}$, in the following to make the notation lighter. 
The local renormalization group operator is modified with the additional terms \begin{align} \Delta_{\sigma,m} = - \int d^3 x \sqrt{|\gamma|} \sigma (1-\gamma_{(m)}) m \cdot \frac{\delta}{\delta m} \ \end{align} and \begin{align} \Delta_{\sigma,M} &= -\int d^3x \sqrt{|\gamma|}\left(\sigma(2-\gamma_{(M)}) M + \frac{1}{4} \sigma R \eta + \sigma \delta_I (D^2 g^I) + \sigma \epsilon_{IJ} (D^\mu g^I D_\mu g^J) \right. \ \cr & \left. + 2\partial_\mu \sigma (\theta_I D^\mu g^I) + (D^2 \sigma) \tau + \sigma m \cdot \kappa \cdot m \right) \cdot \frac{\delta}{\delta M} \ \label{mderiv} \end{align} from simple power-counting. Hereafter $\cdot$ implies the summation over $\alpha$ and $i$ induced by the inner product of the symmetry group. With these additional contributions, the total local renormalization group operator is now modified as \begin{align} \Delta_{\sigma} =& \int d^3x \sqrt{|\gamma|} \left( 2\sigma \gamma_{\mu\nu} \frac{\delta}{\delta \gamma_{\mu\nu}} + \sigma \beta^I \frac{\delta}{\delta g^I} \right. + \left. \left( \sigma \rho_I D_{\mu} g^I - (\partial_\mu \sigma) v \right)\cdot \frac{\delta}{\delta a_{\mu}} \right) \cr & + \Delta_{\sigma,m} + \Delta_{\sigma,M} \ . \end{align} They correspond to the additional terms in the trace identity \begin{align} T^\mu_{\ \mu}|_{M,m} &= (\gamma_{(m)}-1) m \cdot O^{(m)} + (\gamma_{(M)}-2) M \cdot O^{(M)} - \frac{1}{4} R \eta \cdot O^{(M)} - D^2 (\tau \cdot O^{(M)}) \cr & - \delta_I (D^2 g^I) \cdot O^{(M)} - \epsilon_{IJ} (D^\mu g^I D_\mu g^J ) \cdot O^{(M)} \cr & + 2 D_{\mu} (\theta_I D^{\mu} g^I \cdot O^{(M)}) - m \cdot \kappa \cdot m \cdot O^{(M)} \ \end{align} with the Schwinger action principle: \begin{align} \frac{\delta}{\delta m(x)} W &= - \langle O^{(m)} (x) \rangle \cr \frac{\delta}{\delta M(x)}W &= - \langle O^{(M)}(x) \rangle \ . 
\end{align} At this point, it is instructive to understand the meaning of some coefficients in the trace identity as the operator mixing under the massless renormalization group. From the local renormalization group equation combined with the power-counting, we obtain the operator mixing in the flat space-time limit with constant coupling constants \cite{Osborn:1991gm}: \begin{align} \frac{d}{d\log\mu}\left(\begin{array}{c} T^{\mu}_{\ \mu} \\ O^{(M)} \\ O_I \end{array} \right) = \left( \begin{array}{ccc} 0 & \eta \Box & 0 \\ 0 & -\gamma_{(M)} & 0 \\ 0 & \delta_I \Box & -\gamma_I^{\ J} \\ \end{array} \right) \left( \begin{array}{c} T^{\mu}_{\ \mu} \\ O^{(M)} \\ O_J \end{array} \right) . \label{compr} \end{align} Here $\gamma_{(M)}$ is interpreted as the mass anomalous dimension matrix for operators $O^{(M)}$, and $\gamma_I^{\ J} = \partial_I B^J + (\hat{\rho}_Ig)^J$ as the anomalous dimension matrix for dimension 3 scalar operators $O_I$.\footnote{The gauge rotation by $\hat{\rho}_I$ is necessary from Class 1 ambiguity. The combination is what appears in the modified Lie derivative \eqref{modlie} introduced in \cite{Osborn:1991gm}\cite{JO}, and we will see how this gives the expected result in supersymmetric field theories in section \ref{sec4.2}.} Similarly, $\delta_I$ terms are interpreted as the mixing between $O_I$ and $\Box O^{(M)}$ under renormalization. We will see that the renormalization of the curvature coupling term $\eta$ can be related to the other terms as a consequence of the consistency conditions. Physically, this $\eta$ term is the main source of the renormalization of the energy-momentum tensor as \begin{align} \frac{d}{d\log\mu} T_{\mu\nu} =-\frac{1}{2}(\partial_\mu \partial_\nu - \Box \eta_{\mu\nu}) \eta O^{(M)} \label{renormalem} \end{align} and it may play an important role in cosmology. 
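As a quick sanity check of \eqref{renormalem}, one can verify symbolically that the improvement-type operator on its right-hand side is identically transverse. The following sketch is ours, not part of the original computation: it uses a flat metric (Euclidean signature for simplicity) in three dimensions and a placeholder scalar function standing in for $\eta\, O^{(M)}$.

```python
import sympy as sp

t, x, y = sp.symbols('t x y')
coords = (t, x, y)
O = sp.Function('O')(t, x, y)  # placeholder for eta * O^(M)

# flat Laplacian (Euclidean signature for simplicity)
box = sum(sp.diff(O, c, 2) for c in coords)

# improvement-type term (d_mu d_nu - delta_{mu nu} box) O
T = [[sp.diff(O, a, b) - (1 if a == b else 0) * box for b in coords]
     for a in coords]

# divergence d^mu T_{mu nu} vanishes identically for any O
div = [sp.simplify(sum(sp.diff(T[i][j], coords[i]) for i in range(3)))
       for j in range(3)]
assert div == [0, 0, 0]
```

The cancellation is the elementary identity $\partial^\mu(\partial_\mu\partial_\nu - \Box\eta_{\mu\nu}) = \partial_\nu\Box - \partial_\nu\Box = 0$, independent of the dynamics of $O^{(M)}$.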
Note that the right hand side is automatically conserved irrespective of the nature of $O^{(M)}$, and it is consistent with the conservation of the renormalized energy-momentum tensor at every energy scale. We also note that the global energy and momenta are not renormalized despite the renormalization of the energy-momentum tensor. In the presence of the dimensionful coupling constants, the anomalous Weyl variation of the Schwinger functional acquires the new terms \begin{align} & A_{\sigma;M,m} = \cr & \int d^3x \sqrt{|\gamma|}\left( \sigma( M \cdot \beta \cdot m - \frac{1}{4} R I \cdot m + J_I(D^2 g^I) \cdot m + K_{IJ} (D_\mu g^I D^\mu g^J) \cdot m + Sm^3) \right. \cr & \left. - 2\partial_\mu \sigma (L_I D^\mu g^I \cdot m) + (D^2 \sigma) k\cdot m \right) \ , \label{massanomaou} \end{align} where we assume $K_{IJ} = K_{(IJ)}$ is symmetric, and $Sm^3$ is a shorthand notation for $S_{\alpha\beta\gamma} m^{\alpha}m^{\beta}m^{\gamma}$. They correspond to the additional terms in the trace anomaly \begin{align} A_{\mathrm{anomaly};M,m} & = -M \cdot \beta \cdot m + \frac{1}{4} R I \cdot m - J_I (D^2 g^I) \cdot m - K_{IJ} (D_\mu g^I D^\mu g^J) \cdot m - Sm^3 \cr & -2D_{\mu} (L_I D^\mu g^I \cdot m) - D^2 (k\cdot m) \ . \end{align} Note that unlike the situation in section \ref{sec2}, the trace anomaly may not vanish even in the flat space-time limit with constant sources. This is because power-counting allows the cosmological constant to be renormalized when the mass parameters are present. At the conformal fixed point, some of these terms are computed in \cite{Bzowski:2013sza}. \subsection{Consistency condition}\label{sec3.1} We can repeat the same analysis for the consistency conditions of the local renormalization group with additional mass parameters. 
As discussed in section \ref{sec2.1}, there are two distinct classes of consistency conditions from the integrability condition $[\Delta_{\sigma}, \Delta_{\tilde{\sigma}}] = 0$ of the local renormalization group operator. Class 1 consistency condition (Integrability conditions for the local renormalization group transformation operator) is obtained by requiring $[\Delta_{\sigma}, \Delta_{\tilde{\sigma}}] = 0$ as a differential operator acting on an arbitrary functional $W[\gamma_{\mu\nu}, g^I, a_{\mu}, m, M]$. With the additional dimensionful parameters, in addition to the previous constraint \eqref{rhocons}, we must require (see appendix \ref{appb} for Weyl variations) \begin{align} \eta &= \delta_I B^I - (B^I \partial_I - \gamma_{(M)}) \tau \cr \delta_I + 2(\partial_I B^J + \frac{1}{2}(\hat{\rho}_I g)^J) \delta_J + 2 \epsilon_{IJ} B^J &= 2(\tilde{\mathcal{L}}_{B,\hat{\rho}} - \gamma_{(M)}) \theta_I \ . \label{consis} \end{align} Here the modified Lie derivative \cite{Osborn:1991gm}\cite{JO} \begin{align} \tilde{\mathcal{L}}_{B,\hat{\rho}} \theta_I = B^J\partial_J \theta_I + (\partial_I B^J+(\hat{\rho}_I g)^J) \theta_J = B^J \partial_J \theta_I + \gamma_{I}^{\ J} \theta_J \label{modlie} \end{align} for the 1-form is introduced (we will use a similar definition for the other tensors). Note that the first equation of \eqref{consis} determines $\eta$ from the other parameters in the trace anomaly.\footnote{The $\eta$ term in the trace anomaly is a genuine geometric obstruction for the Weyl transformation in $(1+2)$ dimension, but as we will discuss, we can make it vanish at the conformal fixed point by choosing judicious counterterms.} The necessity of the first equation can also be seen from the consistency of the trace identity \begin{align} T^{\mu}_{\ \mu} = B^I O_I - \tau \cdot \Box O^{(M)} \ \end{align} under the massless renormalization group with the composite operator renormalization \eqref{compr} in the flat space-time limit with constant sources. 
We emphasize again that Class 1 consistency condition is rather universal, and the structure is not very different from the one that appeared in $d=1+3$ \cite{Osborn:1991gm}\cite{JO} with mass parameters. We can also understand the universality from the above argument that the consistency condition is a consequence of the trace identity and the composite operator renormalization. In contrast, Class 2 consistency conditions (Integrability conditions for the Weyl anomaly) deal with the anomalous variation, and the resulting conditions are unique to $d=1+2$ dimension. By demanding \begin{align} \Delta_{\tilde{\sigma}} A_{\sigma} = \Delta_{\sigma} A_{\tilde{\sigma}} \ \end{align} for the new terms in the Weyl anomaly \eqref{massanomaou}, we obtain the new constraints: \begin{align} I + B^I J_I - \tau \beta &= B^I \partial_I k + \gamma_{(m)} k \ \cr \frac{1}{2} J_I + (\partial_I B^J + \frac{1}{2}(\hat{\rho}_I g)^J )J_J + K_{IJ} B^J + \tilde{\mathcal{L}}_{B,\hat{\rho}} L_I + \gamma_{(m)} L_I &= \theta_I \beta \ \label{massconsist} \end{align} in addition to \eqref{consistenccc}. Unlike in $d=1+3$ discussed in \cite{Osborn:1991gm}\cite{JO}, the consistency conditions \eqref{consistenccc} for the beta functions of the dimensionless coupling constants are not modified by the presence of the dimensionful coupling constants. \subsection{Ambiguity} \label{sec3.2} The ambiguities in the massless renormalization group discussed in section \ref{sec2.2} can be extended to the most generic renormalization group with the dimensionful parameters. We have three distinct classes of ambiguities. Class 1 ambiguities (Gauge ambiguity) appear due to the gauge invariance of the Schwinger functional $W[\gamma_{\mu\nu}, g^I, a_{\mu}, m, M]$. 
The gauge invariance must be extended to include the dimensionful operators: \begin{align} \Delta_{w} = \int d^3x \sqrt{|\gamma|} \left(D_\mu w \cdot \frac{\delta}{\delta a_{\mu}} - (wg)^I \frac{\delta}{\delta g^I} - (wM) \cdot \frac{\delta}{\delta M} - (wm) \cdot \frac{\delta}{\delta m} \right) = 0 \ , \end{align} which corresponds to the operator identity \begin{align} w \cdot D_\mu J^\mu = -(w g)^I O_I - (wM) \cdot O^{(M)} - (wm) \cdot O^{(m)} \ . \end{align} By using this ambiguity, we can always remove the total derivative term $D_\mu (v\cdot J^\mu)$ in the trace identity with $\beta^I \to B^I = \beta^I - (vg)^I$ and so on.\footnote{ In principle this equation could contain additional terms $(w \alpha_R) R O^{(M)} + (w \alpha_d) D^2 O^{(M)}$.} In section \ref{sec3.1}, it was assumed that this gauge ambiguity is fixed by requiring that there is no $w\cdot D_{\mu} J^\mu$ term in the trace anomaly. This is the most convenient gauge choice because the vanishing of the $B^I$ function together with the vanishing of the dimensionful parameters (e.g. $M$ and $m$) will imply the Weyl invariance of the theory upon the improvement of the energy-momentum tensor that we will discuss in a moment. Class 2 ambiguities (Scheme ambiguity) are related to the scheme choice of the local renormalization group. The simplest example is the reparametrization $g^I \to \tilde{g}^I(g^J)$ of the dimensionless scalar coupling constants, which is usually associated with the choice of renormalization schemes. Most of the consistency equations are manifestly covariant under such a reparametrization, but some consistency equations (e.g. the second lines of \eqref{consis} and \eqref{massconsist}) are not manifestly covariant because ordinary derivatives with respect to $g^I$ appear rather than covariant derivatives or Lie derivatives. 
However, some coefficients such as $\epsilon_{IJ}$ and $K_{IJ}$ transform non-covariantly, owing to the $D^2 g^I$ terms in \eqref{mderiv} and \eqref{massanomaou}, in precisely such a way that the consistency conditions are actually covariant as they should be. More generally, we can generate the scheme ambiguity by considering the variation \begin{align} \delta \Delta_{\sigma} &= [\mathcal{D}, \Delta_{\sigma}] \cr \delta A_{\sigma} & = \mathcal{D} A_{\sigma} \end{align} with any local functional differential operator $\mathcal{D}$. The above mentioned reparametrization ambiguity is induced by \begin{align} \mathcal{D} = \int d^3 x \sqrt{|\gamma|} \left( f_{g}^I \frac{\delta}{\delta g^I} + f_m m \cdot \frac{\delta}{\delta m} + (f_M M + m f_{Mm} m) \cdot \frac{\delta}{\delta M}\right) \ . \end{align} We have included the additional possible reparametrizations of the mass parameters $\delta m = f_{m} m$ and $\delta M = f_M M + m f_{Mm} m$. In addition, we have other Class 2 ambiguities for the mixing between $D_\mu g^I$ and $a_\mu$ as \begin{align} \mathcal{D} = \int d^3x \sqrt{|\gamma|} r_I D_\mu g^I \frac{\delta}{\delta a_\mu} \ , \end{align} which, in addition to \eqref{class21} we have already obtained in the massless case, induces \begin{align} \delta\delta_I &= (r_Ig)^J \delta_J \cr \delta \theta_I &= (r_I g)^J \theta_J \cr \delta \epsilon_{IJ} &= (r_Ig)^K \epsilon_{KJ} + (r_Jg)^K \epsilon_{IK} + (\partial_{(I} r_{J)} g )^K \delta_K + 2\delta_{K}(r_{(I})^K_{\ J)} \ . \end{align} In the last line, the explicit matrix notation $(r_{I})^{K}_{\ J} = r_{I}^a (T_a)^{K}_{\ J}$ is used. At the same time, the trace anomaly is modified, in addition to \eqref{class22}, as \begin{align} \delta K_{IJ} &= (r_Ig)^K K_{KJ} + (r_Jg)^K K_{IK} +(\partial_{(I}r_{J)}g)^KJ_K + 2J_{K}(r_{(I})^K_{\ J)} \cr \delta L_I &= (r_Ig)^J L_J \cr \delta J_I &= (r_Ig)^J J_J \ . 
\end{align} Furthermore, we have an extra Class 2 ambiguity for the mixing between $R$, $D^2 g^I$ and $D_\mu g^I D^\mu g^J$ with \begin{align} \mathcal{D} = \int d^3x \sqrt{|\gamma|} \left(\frac{1}{4}R h + (D^2 g^I) d_I + (D_\mu g^I D^\mu g^J) e_{IJ} \right) \cdot \frac{\delta}{\delta M} \ , \end{align} where we assume $e_{IJ} = e_{(IJ)}$ is symmetric. Under this scheme change associated with the field redefinition, we obtain \begin{align} \delta \eta &= (B^I \partial_I h - \gamma_{(M)} h) \cr \delta \tau &= -h + d_I B^I \cr \delta \theta_I &= \frac{1}{2}d_I + \left(\partial_IB^J + \frac{1}{2}(\hat{\rho}_Ig)^J \right)d_J + e_{IJ} B^J \cr \delta \delta_I &= (\tilde{\mathcal{L}}_{B,\hat{\rho}} -\gamma_{(M)}) d_I \cr \delta \epsilon_{IJ} &= (\tilde{\mathcal{L}}_{B,\hat{\rho}} - \gamma_{(M)}) e_{IJ} + (\partial_I \partial_J B^K + (\partial_{(I}(\hat{\rho}_{J)})g)^K)d_K + 2d_K(\hat{\rho}_{(I})^K_{J)} \ \end{align} as well as the change in the trace anomaly \begin{align} \delta I &= -4h\beta \cr \delta J_I &= \beta d_I \cr \delta K_{IJ} &= \beta e_{IJ} \ . \end{align} In particular, one may always set $\tau = \theta_I = 0$ by using this ambiguity. We note that the $\tau = 0$ choice is nothing but the improvement of the energy-momentum tensor so that the $\Box O^{(M)}$ term is absent in the trace anomaly in the flat space-time, as we will discuss shortly. Finally, Class 3 ambiguities (Counterterm ambiguity) are induced by the local counterterms in the Schwinger functional. 
With the presence of the dimensionful coupling constants, the new local counterterms we could add in addition to \eqref{masslessct} are \begin{align} \int d^3x \sqrt{|\gamma|} \left( M \cdot \mathcal{B}_{Mm} \cdot m -\frac{1}{4} R \mathcal{I} \cdot m + \mathcal{J}_I (D^2 g^I) \cdot m + \mathcal{K}_{IJ} (D_\mu g^I D^\mu g^J)\cdot m + \mathcal{S} m^3 \right) \ , \label{class3massive} \end{align} where we assume $\mathcal{K}_{IJ} = \mathcal{K}_{(IJ)}$ is symmetric, and $\mathcal{S} m^3$ is a shorthand notation for $\mathcal{S}_{\alpha\beta\gamma} m^{\alpha}m^{\beta}m^{\gamma}$. These counterterms induce the modification of the trace anomaly as \begin{align} \delta \beta &= B^K\partial_K\mathcal{B}_{Mm} + \gamma_{(M)} \mathcal{B}_{Mm} + \mathcal{B}_{Mm} \gamma_{(m)} \cr \delta I &= \eta \mathcal{B}_{Mm} + \mathcal{I} \gamma_{(m)} + B^K \partial_K \mathcal{I} \cr \delta J_I &= -\delta_I \mathcal{B}_{Mm} +\tilde{\mathcal{L}}_{B,\hat{\rho}}\mathcal{J}_I +\mathcal{J}_I \gamma_{(m)} \cr \delta K_{IJ} &= \tilde{\mathcal{L}}_{B,\hat{\rho}} \mathcal{K}_{IJ} + \mathcal{K}_{IJ}\gamma_{(m)}- \epsilon_{IJ} \mathcal{B}_{Mm} +(\partial_I \partial_JB^K +(\partial_{(I}(\hat{\rho}_{J)})g)^K) \mathcal{J}_K +2\mathcal{J}_K(\hat{\rho}_{(I})^K_{J)} \cr \delta S & = -\kappa \mathcal{B}_{Mm} +B^K\partial_K \mathcal{S} + 3\gamma_{(m)} \mathcal{S} \cr \delta L_I &= \theta_I \mathcal{B}_{Mm} -\frac{1}{2}\mathcal{J}_I - (\partial_IB^K) \mathcal{J}_K - \frac{1}{2}(\hat{\rho}_I g)^K \mathcal{J}_K - \mathcal{K}_{IJ} B^J \cr \delta k &= -\tau \mathcal{B}_{Mm} - \mathcal{I} + B^I \mathcal{J}_I \label{3amb} , \end{align} where $3\gamma_{(m)} \mathcal{S}$ really means $(\gamma_{(m)}^{\alpha\alpha'} + \gamma_{(m)}^{\beta\beta'} + \gamma_{(m)}^{\gamma \gamma'}) \mathcal{S}_{\alpha' \beta' \gamma'}$. With these ambiguities, we may set $k= L_I=0$. To conclude this section, let us address some applications of the local renormalization group with mass parameters. 
In particular, we address some properties of the energy-momentum tensor under renormalization. The first application concerns how to construct the renormalization group invariant energy-momentum tensor. For many applications, it is important to understand the renormalization of the energy-momentum tensor and its possible improvements. Generally, the energy-momentum tensor in flat space-time is ambiguous under the improvement \begin{align} T_{\mu\nu} \to T_{\mu\nu} + (\partial_\mu\partial_\nu - \Box \eta_{\mu\nu}) L \end{align} for any scalar operator $L$.\footnote{More generically, spin 2 (or higher) improvement is possible \cite{Polchinski:1987dy} (in particular in non-unitary theories), but it is not relevant for our discussions.} In the local renormalization group with a curved space-time background, we have already argued that by using Class 2 ambiguity induced by $h$, we can always set $\tau = 0$. This convention is known as the Callan-Coleman-Jackiw improved energy-momentum tensor \cite{Callan:1970ze}. One advantage of this choice is that when $B^I = 0$ at the fixed point, the theory is manifestly conformally invariant in the flat space-time, and we keep the same property during the renormalization by adjusting $h$ at each energy scale. Actually, Class 1 consistency condition \eqref{consis} tells us that it is even Weyl invariant in the curved background when $M = m = 0$ with constant coupling constants at the fixed point because the curvature term in the trace anomaly also vanishes: $\eta=0$. This is the energy-momentum tensor implicitly assumed in \cite{Komargodski:2011vj}. However, away from the conformal fixed point, this improved energy-momentum tensor may be renormalized according to \eqref{renormalem} due to the operator mixing. Indeed, Class 1 consistency condition \eqref{consis} tells us that this is unavoidable as long as $\delta_I \neq 0$. 
For this reason, it may sometimes be more useful to define the non-renormalized energy-momentum tensor by demanding $\eta = 0$ rather than $\tau = 0$. This is known as Zamolodchikov's canonically scaling energy-momentum tensor \cite{Zamolodchikov:1986gt}\cite{Polchinski:1987dy} (see also \cite{Yonekura:2012kb}). As argued by Polchinski,\footnote{There is a typo in eq (18) of \cite{Polchinski:1987dy}. We would like to thank Z.~Komargodski for the related discussion.} this is always possible by adjusting $h$ when $\gamma_{(M)}$ does not contain any zero eigenvalues, i.e. when it is invertible. Otherwise, due to a potential obstruction to choosing $\eta=0$, it is logically possible that the theory is scale invariant but the energy-momentum tensor is still logarithmically renormalized. When the theory is conformally invariant (i.e. $B^I = 0$), such a possibility is excluded by the consistency conditions (e.g. \eqref{consis}). In any case, away from the fixed point, it is important to understand that the Callan-Coleman-Jackiw improved energy-momentum tensor and Zamolodchikov's non-renormalized energy-momentum tensor (if any) may differ. Another potentially interesting application of the massive local renormalization group analysis is the renormalization of the Einstein-Hilbert term, which appears as $I$ in the trace anomaly. We have already discussed that one can always set $k=0$ by using Class 3 ambiguity. If we further use the Callan-Coleman-Jackiw improved energy-momentum tensor (i.e. $\tau = 0$), we see that the Einstein-Hilbert term is not renormalized at the conformal fixed point $B^I = 0$. Alternatively, by using non-zero $k$, we may be able to set $I=0$ and try to keep the non-renormalization of the Einstein-Hilbert term away from the fixed point whenever $\gamma_{(m)}$ does not contain any zero eigenvalues. 
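The linear-algebra content of the invertibility argument can be illustrated with a toy two-dimensional example. This is our own sketch, not part of the original analysis: we drop the $B^I\partial_I h$ part of the scheme variation $\delta\eta = B^I\partial_I h - \gamma_{(M)}h$ for simplicity, so that removing $\eta$ amounts to solving a finite-dimensional linear system.

```python
import numpy as np

# toy anomalous-dimension matrix gamma_(M) and curvature coefficient eta;
# the scheme change shifts eta by  delta eta = -gamma_M @ h  (B-term dropped)
gamma_M = np.array([[0.3, 0.1],
                    [0.0, 0.5]])
eta = np.array([1.0, -2.0])

# gamma_M invertible: choose h so that eta + delta eta = 0
h = np.linalg.solve(gamma_M, eta)
assert np.allclose(eta - gamma_M @ h, 0.0)

# a zero eigenvalue of gamma_M leaves directions of eta that no h can cancel
gamma_sing = np.array([[0.0, 0.1],
                       [0.0, 0.5]])
best_h = np.linalg.lstsq(gamma_sing, eta, rcond=None)[0]
residual = eta - gamma_sing @ best_h
assert not np.allclose(residual, 0.0)
```

In the full problem $h$ is a function of the coupling constants and the equation is solved order by order along the flow, but the role of the zero eigenvalues of $\gamma_{(M)}$ as the potential obstruction is the same.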
Needless to say, regardless of the possibility to obtain the non-renormalized Einstein-Hilbert term discussed here, the actual value of the Einstein-Hilbert term can be changed in an arbitrary manner (at a given renormalization scale) by adding the local counterterm. \section{Checks of consistency conditions}\label{sec4} So far, our discussions have been rather formal. In this section, we would like to perform modest checks of our arguments on the local renormalization group in some examples. Of course, our discussions must apply to perturbation theories based on Feynman diagrams in any renormalization scheme, but we would like to demonstrate the generality of our results with other approaches to computing beta functions and the trace anomaly. \subsection{Conformal perturbation theory}\label{sec4.1} To begin with, we would like to compute the beta functions for the vector operators (i.e. $v$ and $\rho_I$) in conformal perturbation theory (see also \cite{Nakayama:2013is}). We note that the conventional perturbation theory based on Feynman diagrams is just an example of conformal perturbation theory around a free (massless ultraviolet) fixed point. Here we start with a general conformal field theory and perturb it by adding marginal scalar interactions $\delta S = \int d^3x g^I(x) O_I(x)$. In order to facilitate the computation of the vector beta functions, we have introduced the space-time dependent coupling constants $g^I(x)$. At the order we are interested in, the curvature of the space-time is not important. We assume that the scalar operator $O_I(x)$ has the canonical normalization \begin{align} \langle O_I(x) O_J(y) \rangle_0 = \frac{\delta_{IJ}}{(x-y)^6} \ \end{align} in the reference conformal field theory at the ultraviolet fixed point. 
For simplicity we have assumed that the operators $O_I(x)$ are conformal primaries with dimension $\Delta_I = 3$ in the reference conformal field theory, but generalizations to a slightly relevant perturbation are possible (within the so-called Zamolodchikov scheme \cite{Zamolodchikov:1987ti}). In order to compute the scalar beta functions as well as the vector beta functions, we assume the operator product expansion \begin{align} O_I(x) O_J(y) = \frac{\mathcal{C}_{IJK}}{(x-y)^3}O_K(y) + \frac{\mathcal{C}^{a}_{IJ}(x-y)_\mu}{(x-y)^{5}} J_{a}^\mu (y) + \cdots , \label{ope} \end{align} where the operator product expansion coefficient $\mathcal{C}_{IJK}$ is totally symmetric and $\mathcal{C}^a_{IJ} = -\mathcal{C}^a_{JI}$ is a certain representation matrix of the flavor symmetry group (denoted by $\mathcal{G}$ before) generated by $J_a^\mu$. In the reference conformal field theory, the current $J_a^{\mu}$ is conserved with conformal dimension $\Delta_a = 2$. The appearance of $\mathcal{C}^{a}_{IJ}$ in the scalar operator product expansion means that the current conservation is violated by the perturbation \cite{Friedan:2012hi}\cite{Nakayama:2013is} as \begin{align} \partial_\mu J^{\mu}_a = g^I \mathcal{C}_{IJ}^a O^J \ . \end{align} At the second order in conformal perturbation theory, we have to evaluate and renormalize the divergent integral appearing in the Schwinger functional \begin{align} \delta W = \left\langle \int d^3x d^3y g^I(x) O_I(x) g^J(y) O_J(y) \right \rangle_{0} \end{align} by using the above operator product expansion. The scalar part of the operator product expansion gives a diverging factor \begin{align} \delta W|_{\mathrm{scalar}} \sim \left\langle 2\pi \int d^3z \log\mu \mathcal{C}_{IJK} g^I(z) g^J(z) O_K(z) \right \rangle_{0} \ , \end{align} which gives the scalar beta function \begin{align} \beta^I = \frac{dg^I}{d\log\mu} = 2\pi \mathcal{C}_{IJK} g^J g^K + \mathcal{O}(g^3) \ . 
\end{align} Similarly, the current part of the operator product expansion gives another divergent contribution \begin{align} \delta W|_{\mathrm{vector}} \sim \left\langle 2\pi \int d^3z \log\mu g^I(z) \partial_\mu g^J(z) \mathcal{C}_{IJ}^a J^\mu_a(z) \right \rangle_0 \ , \end{align} which results in the renormalization of the background gauge fields $a_\mu$ with \begin{align} \rho^a_I &= 2\pi \mathcal{C}_{IJ}^a g^J \cr v&= 0 \ . \end{align} It is possible to change the renormalization prescription so that $v$ is non-zero by using the equations of motion or gauge transformation of the background source fields \cite{Nakayama:2013is}, but it does not affect the following argument because we work with the gauge invariant $B^I$ functions and $\hat{\rho}_I$ functions. At the second order in conformal perturbation theory, we therefore conclude \begin{align} B^I &= 2\pi \mathcal{C}_{IJK} g^J g^K \cr \hat{\rho}_I^a &= 2\pi \mathcal{C}_{IJ}^a g^J \ . \end{align} As a check of our formal argument in section \ref{sec2}, we immediately see that \begin{align} B^I \hat{\rho}_I^a = 0 \ \end{align} due to the symmetry of $\mathcal{C}_{IJK}$ and the anti-symmetry of $\mathcal{C}_{IJ}^a$. Thus, the transversality condition is satisfied. At higher orders, this becomes more non-trivial because the computations of $B^I$ and $\hat{\rho}_I$ are apparently not immediately related to each other, in particular at different orders in perturbation theory (see however the supersymmetric case in section \ref{sec4.2}). We have a couple of technical remarks about the above computation. \begin{itemize} \item In the above evaluation of the divergent integral, we had to keep track of (the absence of) the total derivative terms. We used the Polyakov regularization \cite{Polyakov:1981re} $\lim_{x\to y} \log(x-y)|_{\mathrm{reg}} = \log \sigma(x)$ in order to take into account the position dependent cut-off scale. 
At the second order in conformal perturbation theory, this is the most natural prescription, but at higher orders, it may be more practical to use dimensional regularization because the total derivative terms will not affect the bare energy-momentum tensor in $d = 3-\epsilon$ dimension, and total derivative terms in counterterms can be discarded safely. A systematic way to compute the higher order vector beta functions in dimensional regularization with minimal subtraction was thoroughly developed in \cite{Jack:1990eb}\cite{JO} (see also \cite{Fortin:2012hn}). \item Once we try to evaluate the integral in dimensional regularization, we have to assign the scaling dimensions of the operators $O^I$ (called $k^I$ in \cite{Jack:1990eb} as $\Delta_I = 3-k^I \epsilon$) in $3-\epsilon$ dimension. In conventional Lagrangian field theories, these are naturally determined by the wavefunction renormalization of the kinetic operators in $d=3-\epsilon$ dimension, but it is not obvious how it works in general conformal perturbation theory without an explicit Lagrangian. However, we can check that this ambiguity cancels out in the final computation of the trace of the energy-momentum tensor because the energy-momentum tensor in $d=3-\epsilon$ dimension also contains the additional contributions related to $k^I$ from $\beta^I_{d = 3-\epsilon} = k^I g^I + \beta^I_{d=3}$ and $T^{\mu}_{\ \mu} = \beta^I_{3-\epsilon} O_I + \cdots$, which eventually led to the explicit loop counting factor in the dimensional regularization formula found in \cite{JO} (see also \cite{Fortin:2012hn} for the appearance of $k^I$ in the computation of $v$ there). This cancellation is reassuring because the ``loop counting" is different from the order of conformal perturbation, and the explicit appearance of the former in the computation of the vector beta functions seems mysterious from the conformal perturbation theory viewpoint. 
\end{itemize} Let us briefly discuss the trace anomaly induced by the space-time dependent coupling constants within conformal perturbation theory. In principle, it should be computable by evaluating the vacuum energy in conformal perturbation theory and renormalizing it. In order to compute the contribution to the term \begin{align} \epsilon^{\mu\nu\rho} C_{IJK}(g) \partial_\mu g^I \partial_\nu g^J \partial_\rho g^K \end{align} in the trace anomaly, for instance, we have to break the CP invariance due to the appearance of $\epsilon^{\mu\nu\rho}$. Such breaking is not encoded in a manifest manner in the leading order operator product expansion \eqref{ope} nor in the normalization of the two-point function. Accordingly, we have to evaluate the vacuum energy at least at fourth order in perturbation theory (and probably at fifth order to break CP from the scalar perturbations alone) to obtain non-zero results. Unfortunately, there is no systematic way to evaluate conformal perturbation theory at that order since we need the full spectrum and operator product expansion to compute the correlation functions, so we defer the actual computation to a future problem. \subsection{Supersymmetry}\label{sec4.2} While our discussions so far do not assume supersymmetry, it is possible to check some of our results to all orders in perturbation theory if we assume $\mathcal{N}=2$ supersymmetry in $d=1+2$ dimension (we follow the superspace convention of \cite{Dumitrescu:2011iu}). Let us consider the Wess-Zumino model with dimensionless coupling constants \begin{align} W = Y^{abcd} \Phi_a \Phi_b \Phi_c \Phi_d , \end{align} where $\Phi_a$ $(a=1,\cdots, N)$ are chiral superfields and the flavor symmetry group $\mathcal{G}$ compatible with $\mathcal{N}=2$ supersymmetry is $U(N)$ (in addition to the $U(1)$ R-symmetry). In order to discuss the local renormalization group with manifest supersymmetry, we uplift the coupling constants $Y^{abcd}$ to chiral superfields. 
The usual argument based on holomorphy and R-symmetry tells us that the divergences to all orders in perturbation theory can be removed by the counterterm in the K\"ahler potential \begin{align} \mathcal{L}_{\mathrm{ct}} = \int d^4 \theta K^{a\bar{b}}(Y,\bar{Y}) \Phi_a \bar{\Phi}_{\bar{b}} \ . \end{align} One consequence of the supersymmetric non-renormalization theorem is that the beta function for $Y^{abcd}$ is completely determined from the anomalous dimension matrix \begin{align} \beta_{Y^{abcd}} = \gamma^{a \bar{e}} Y^{ebcd} + \gamma^{b\bar{e}}Y^{aecd} + \gamma^{c\bar{e}} Y^{abed} + \gamma^{d\bar{e}} Y^{abce} \ . \label{susybeta} \end{align} Here, the anomalous dimension matrix $\gamma^{a\bar{b}}(Y,\bar{Y})$ is obtained from the renormalization of the K\"ahler potential counterterm as \begin{align} \gamma^{a\bar{b}} = \frac{d K^{a\bar{b}}}{d \log \mu} \ . \end{align} Unitarity demands that the K\"ahler potential $K^{a\bar{b}}$, and hence $\gamma^{a\bar{b}}$, is Hermitian. On the other hand, the same K\"ahler potential determines the vector beta functions for the $U(N)$ rotations \cite{Nakayama:2012nd}\cite{Fortin:2012hc}\cite{JO}: \begin{align} [\rho_{abcd} dY^{abcd} + \bar{\rho}_{\bar{a}\bar{b}\bar{c}\bar{d}} d\bar{Y}^{\bar{a}\bar{b}\bar{c}\bar{d}}]^{e\bar{f}} = - (\partial_{Y^{abcd}} \gamma^{e\bar{f}}) dY^{abcd} + (\partial_{\bar{Y}^{\bar{a}\bar{b}\bar{c}\bar{d}}} \gamma^{e\bar{f}}) d\bar{Y}^{\bar{a}\bar{b}\bar{c}\bar{d}} \end{align} from the $\bar{\theta} \sigma^\mu \theta$ terms in $K^{a\bar{b}}$. Assuming that the computation is done in dimensional regularization (in order to avoid the complexity due to total derivatives), the counterterm also determines \begin{align} v^{e\bar{f}} = i\frac{\partial \gamma^{e\bar{f}}}{\partial Y^{abcd}}Y^{abcd} - i\frac{\partial \gamma^{e\bar{f}}}{\partial \bar{Y}^{\bar{a}\bar{b}\bar{c}\bar{d}}} \bar{Y}^{\bar{a}\bar{b}\bar{c}\bar{d}} = 0 \end{align} in the holomorphic scheme we use here. 
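As a toy illustration of how the non-renormalization structure \eqref{susybeta} interplays with the transversality condition $B^I\hat{\rho}_I = 0$, consider the single-field truncation. This is our own sketch, not part of the original argument: with one chiral field, invariance under the $U(1)$ phase rotation forces the anomalous dimension to depend only on the invariant $Y\bar{Y}$, and the combination $\partial_Y\gamma\,\beta_Y - \partial_{\bar{Y}}\gamma\,\beta_{\bar{Y}}$ then vanishes identically.

```python
import sympy as sp

Y, Yb = sp.symbols('Y Ybar')
G = sp.Function('G')          # arbitrary real function of the U(1) invariant
gamma = G(Y * Yb)             # anomalous dimension depends only on |Y|^2

# single-field analogue of the non-renormalization result: beta_Y = 4 gamma Y
betaY = 4 * gamma * Y
betaYb = 4 * gamma * Yb

# single-field analogue of B.rho: dgamma/dY * beta_Y - dgamma/dYbar * beta_Ybar
lhs = sp.diff(gamma, Y) * betaY - sp.diff(gamma, Yb) * betaYb
assert sp.simplify(lhs) == 0  # transversality holds for any G
```

With $N$ fields the corresponding cancellation is no longer a one-line identity; it is established by the supergraph orientation argument described in the text.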
The anomalous dimension of the chiral operators that appear in the superpotential must be determined from $\gamma^{a\bar{b}}$. We can confirm that this is the case by using $\gamma_I^{\ J} = \partial_I B^J + (\hat{\rho}_Ig)^J$ with the above formula for the beta functions. Notice that the additional rotation by $\hat{\rho}_I$ is important to cancel various unwanted mixings from $\partial_I B^J$ alone. In section \ref{sec2.1}, we have shown that the Class 1 consistency condition demands that \begin{align} B^I \hat{\rho}_I = 0 \ , \end{align} which is equivalent to \begin{align} \frac{\partial \gamma^{e\bar{f}}}{\partial Y^{abcd}}\beta_{{Y^{abcd}}} - \frac{\partial \gamma^{e\bar{f}}}{\partial \bar{Y}^{\bar{a}\bar{b}\bar{c}\bar{d}}} \beta_{{\bar{Y}^{\bar{a}\bar{b}\bar{c}\bar{d}}}} = 0 \ , \label{nontrivial} \end{align} where $\beta_{{Y^{abcd}}}$ can be expressed via \eqref{susybeta} in terms of the anomalous dimension matrix. We can see that this condition holds at each order in supergraph computations of the anomalous dimensions \cite{Jack:1999aj}\cite{JO}. Operationally, what $ \frac{\partial \gamma^{e\bar{f}}}{\partial Y^{abcd}}\beta_{{Y^{abcd}}} $ does is to add an extra anomalous dimension factor to each $\Phi_a \to \bar{\Phi}_{\bar{a}}$ line in the supergraph computation of the wavefunction renormalization. Since every propagator is oriented as $\Phi_a \to \bar{\Phi}_{\bar{a}}$ in the computation of the wavefunction renormalization (due to R-symmetry), the action of $\frac{\partial \gamma^{e\bar{f}}}{\partial \bar{Y}^{\bar{a}\bar{b}\bar{c}\bar{d}}} \beta_{{\bar{Y}^{\bar{a}\bar{b}\bar{c}\bar{d}}}}$ does exactly the same thing, and \eqref{nontrivial} holds. It would be interesting to see if there is a more direct proof that does not rely on the supergraph. Specializing to $d=1+2$ dimensions, let us discuss the possible $\mathcal{N}=2$ supersymmetric extension of the Weyl anomaly. The Weyl anomaly is replaced by the super Weyl anomaly generated by a chiral superfield $\Sigma$.
We can easily construct the supersymmetric generalization of the Weyl anomaly terms. For instance, if the symmetry group $\mathcal{G}$ is $U(1)$, the supersymmetric generalization of the first term in \eqref{3danomaly} is \begin{align} \int d^4\theta (\Sigma + \bar{\Sigma}) C_{IJK}(Y,\bar{Y}) Y^I (D_{\alpha} Y^J) (\bar{D}^{\alpha} \bar{Y}^K) \ , \label{superano1} \end{align} and that of the second term is \begin{align} \int d^4\theta (\Sigma + \bar{\Sigma}) C_{I}(Y,\bar{Y}) Y^I D_{\alpha} \bar{D}^{\alpha} V \ , \label{superano2} \end{align} where $V$ is a real vector superfield. Although we have not included it for simplicity, the R-anomaly proportional to $i(\Sigma-\bar{\Sigma})$ is also possible. We have discussed that local counterterms introduce an additional contribution to the Weyl anomaly. When the local counterterms are chosen arbitrarily, we argued that they give Class 3 ambiguities. In particular, replacing $C_{IJK} (Y,\bar{Y})$ and $C_{I}(Y,\bar{Y})$ with $c_{IJK}(Y,\bar{Y})$ and $c_I(Y,\bar{Y})$ in \eqref{superano1} and \eqref{superano2} and computing the Weyl variation, we obtain the $\mathcal{N}=2$ supersymmetric version of the Class 3 ambiguities discussed in section \ref{sec2.2}. One more interesting contribution to the Weyl anomaly comes from the supersymmetric Chern-Simons counterterms discussed in \cite{Closset:2012vg}\cite{Closset:2012vp}. Within R-symmetric $\mathcal{N}=2$ supergravity, these works identified three possible supersymmetric Chern-Simons counterterms. Among them, the gravitational Chern-Simons term is Weyl invariant by itself, so it does not lead to any Weyl anomaly, while the Z-Z Chern-Simons term and the flavor-R Chern-Simons term do contribute to the Weyl anomaly.
The bosonic part of the Z-Z Chern-Simons counterterm in components is \begin{align} {W}_{ZZ} = -\frac{k_{ZZ}}{4\pi} \int d^3x \sqrt{|\gamma|} \left(\epsilon^{\mu\nu\rho}(a^R_\mu-\frac{1}{2}v_\mu) \partial_\nu(a^R_\rho-\frac{1}{2}v_\rho) + \frac{1}{2} H R + \cdots \right) \ . \end{align} Here, $a^R_\mu$ is the vector source for the R-current, and $v_{\mu}$ is the vector source for the central charge current. When they are conserved, they do not give any Weyl anomaly as a part of Class 3 ambiguities. On the other hand, $H$ is the source for the dimension $2$ scalar operator (called $J^{(Z)}$ in \cite{Closset:2012vg}\cite{Closset:2012vp}) in the central charge current multiplet, so this is nothing but the $\mathcal{I}$ term in \eqref{class3massive}. The counterterm is not Weyl invariant, and it induces an extra contribution to the Weyl anomaly as in \eqref{3amb}, which may or may not be cancelled by other terms, such as the $k$ term in the Weyl anomaly that had existed before adding the Chern-Simons counterterm. The bosonic part of the flavor-R Chern-Simons counterterm in components is \begin{align} {W}_{fr} = -\frac{k_{fr}}{2\pi} \int d^3x \sqrt{|\gamma|} \left(\epsilon^{\mu\nu\rho}a^f_\mu \partial_\nu(a^R_\rho-\frac{1}{2}v_\rho) + \frac{1}{4} \sigma R - DH + \cdots \right) \ , \end{align} where $a^f_\mu$ is the vector source for the flavor symmetry current, and $D$ and $\sigma$ are scalar sources for the dimension $1$ and $2$ operators in the current supermultiplet \cite{Closset:2012vg}\cite{Closset:2012vp}. When the flavor symmetry is conserved, the first Chern-Simons term does not contribute to the Weyl anomaly, but when it is not, the non-zero vector beta functions will give terms similar to \eqref{cssmb} in the Weyl anomaly. Furthermore, the $R\sigma$ term and the $DH$ term are the $\mathcal{B}_{Mm}$ and $\mathcal{I}$ terms in the Weyl anomaly, which give extra contributions as in \eqref{3amb}.
These terms may or may not be cancelled against the original Weyl anomaly terms, such as the $k$ term, present before adding the Chern-Simons counterterm. \subsection{Holography} \label{sec4.3} Our final example is the holographic computation of the Schwinger vacuum functional. From the AdS/CFT correspondence, we identify the Gubser-Klebanov-Polyakov-Witten free energy of the gravitational system in $d+1$ dimensions with the Schwinger vacuum functional of the dual $d$-dimensional quantum field theory \cite{Gubser:1998bc}\cite{Witten:1998qj}. In the definition of the Gubser-Klebanov-Polyakov-Witten free energy, the space-dependent sources in the field theory directions are naturally encoded as the boundary conditions for the bulk fields at the AdS boundary. In the AdS/CFT correspondence, the extra radial direction is understood as the renormalization group scale. The renormalization of the Schwinger functional is realized by the holographic renormalization of the Gubser-Klebanov-Polyakov-Witten free energy. The holographic renormalization group has been successful in deriving the holographic Weyl anomaly \cite{Henningson:1998gx} and the holographic $c$-theorem \cite{Akhmedov:1998vf}\cite{Alvarez:1998wr}\cite{Girardello:1998pd}\cite{Freedman:1999gp}\cite{Sahakian:1999bd}\cite{Myers:2010tj}, as well as the holographic equivalence between scale invariance and conformal invariance \cite{Nakayama:2009fe}\cite{Nakayama:2010wx}\cite{Nakayama:2010zz}. In this section, we would like to understand how our general framework of local renormalization group analysis and Weyl anomaly in $d=1+2$ dimensional quantum field theories fits with the holographic computation in $d=1+3$ dimensional effective semiclassical gravity. We do not assume a particular string realization of the AdS/CFT correspondence, but we may apply the following argument to known holographic examples in string theory. In order to obtain our new trace anomaly terms, we need to break parity.
The simplest parity violating terms in the $d=1+3$ dimensional bulk are the topological $\theta$-terms for the bulk gauge fields as well as the gravitational $\theta$-term (Pontryagin-Hirzebruch term): \begin{align} S_f = \int d^4x \sqrt{|g|} \epsilon^{ABCD} \theta_f \mathrm{Tr} (F_{AB} F_{CD}) \end{align} \begin{align} S_g = \int d^4x \sqrt{|g|} \epsilon^{ABCD} \theta_g R_{AB}^{\ \ EF} R_{CDEF} \ . \end{align} In this subsection, early Latin indices $A, B, \cdots$ denote $d=1+3$ dimensional tensor indices. These terms are equivalent to boundary Chern-Simons interactions after integration by parts in the radial direction, and they give local contributions to the Gubser-Klebanov-Polyakov-Witten partition function \cite{Witten:2003ya}. Thus we can easily obtain the contribution to the Weyl anomaly from these parity violating terms in the bulk action. First of all, the gravitational $\theta$-term (Pontryagin-Hirzebruch term) does not produce any Weyl anomaly. Its only effect is to introduce a parity violating, conformally invariant contact term in the two-point function of the energy-momentum tensor \cite{Witten:2003ya}\cite{Maldacena:2011nz}. On the other hand, the bulk gauge $\theta$-term does introduce a Weyl anomaly, essentially by the same mechanism discussed in section \ref{sec2.2}. When there exist vector beta functions $\hat{\rho}_I D^\mu g^I$ for the operator dual to $A_{\mu}$ appearing in the Chern-Simons interaction, the contribution to the Weyl anomaly is \begin{align} A_{\mathrm{anomaly}} = \theta_f\epsilon^{\mu\nu\rho} \hat{\rho}_I f_{\mu\nu} D_{\rho} g^I \ . \end{align} Since $\theta_f$ is physical up to $2\pi$ integer shifts, the effect cannot be removed by a Class 3 ambiguity with a local counterterm, which must be an integer shift of the Chern-Simons term; this essentially gives an existence proof of our Weyl anomaly terms in holography whenever $\theta_{f}$ is non-zero modulo $2\pi$ integer shifts.
In the bulk gravity, the vector beta functions near the conformal fixed point are understood as follows. We use the Poincar\'e coordinates near the AdS boundary with metric $ds^2 = g_{AB} dx^A dx^B = \frac{dz^2 + dx_\mu dx^\mu}{z^2}$. A non-trivial vector beta function means that the vector field $A_{\mu}$ is related to the scalar fields $\Phi^I$ dual to the boundary operators $O_I$ as \begin{align} A_\mu(z,x_\mu) = (\log z) \rho_I D_\mu \Phi^I(z,x_\mu) \ , \end{align} where $\Phi^I(z,x_\mu)$ is slowly varying in the radial $z$ direction. It is not obvious that such a relation is compatible with the AdS isometry at the exact conformal fixed point. This is related to the question of whether we can have non-zero vector beta functions at the conformal fixed point, and it is not particular to the AdS/CFT correspondence. One should notice, however, that the bulk vector fields $A_{B}$ must be Higgsed \cite{Nakayama:2012sn}\cite{Nakayama:2013is} in order to obtain non-zero vector beta functions, breaking the conservation of the dual operator $J_\mu$. As discussed in section \ref{sec4.1}, generically the vector beta function is non-zero slightly away from the fixed point, and therefore the induced Weyl anomaly does not vanish. We can also consider parity violating terms which do not immediately give a local contribution to the Gubser-Klebanov-Polyakov-Witten functional. For instance, the bulk axion interaction \begin{align} S = \int d^4x \sqrt{|g|} \epsilon^{ABCD} \Theta_f(\Phi^I) \mathrm{Tr} (F_{AB} F_{CD}) \end{align} will give a non-zero contribution to the parity violating Weyl anomaly at higher orders in holographic computations (possibly with bulk loop factors). It is possible to give holographic interpretations to the various ambiguities discussed in previous sections. Class 1 ambiguity is given by gauge transformations in the bulk. For simplicity, let us consider a $U(1)$ gauge field $A=A_B dx^B$ in the bulk.
Let us also assume we have a charged scalar field $\Phi$ in the bulk, so the gauge symmetry acts as \begin{align} \Phi &\to e^{i\Lambda}\Phi \cr A& \to A + d\Lambda \ . \end{align} As discussed in \cite{Nakayama:2012sn}\cite{Nakayama:2013is}, this gauge transformation gives the holographic realization of Class 1 ambiguity when $\Phi$ has a non-trivial vacuum expectation value. For example, the bulk field configuration \begin{align} \Phi &= \gamma z^{i\alpha} \cr A & = 0 \ , \end{align} which is interpreted as $\beta^g = i\alpha \gamma g$ and $v = 0$ in the dual field theory, is gauge equivalent to \begin{align} \Phi &= \gamma \cr A & = \frac{\alpha dz}{z} \ , \end{align} which is interpreted as $\beta^g = 0$ but $v = \alpha$ in the dual field theory. In both cases, the covariant derivative $zD_z \Phi = i\alpha \Phi$ in the radial direction is interpreted as the gauge invariant $B^I$ function of the dual field theory. Class 2 ambiguity in holography corresponds to a scheme change of the bulk-boundary correspondence. The simplest example is a target space diffeomorphism of the bulk scalar fields $\Phi^I \to \tilde{\Phi}^I (\Phi)$. This is nothing but the scheme change of the scalar coupling constants \eqref{scalarred} discussed in section \ref{sec2.2}. Other more involved field redefinitions in the bulk are possible, such as $A_A \to A_A + r_I D_A \Phi^I$, which must be compatible with \eqref{vectorred}. In some cases, we may use these field redefinitions to make the gravitational action canonical, such as the one in the Einstein frame, where energy conditions can be naturally applied, but the availability of such a choice may not be guaranteed in more complicated situations. Such ambiguities, in particular in relation to unitarity, are important issues that call for further study in holography (see e.g. \cite{Myers:2010tj}). Finally, the holographic realization of Class 3 ambiguity is given by adding boundary counterterms, which can also be understood as bulk total derivative terms.
We have already mentioned the effect of the boundary Chern-Simons terms above. When the coefficient is suitably quantized, they can be removed by counterterms. Another example would be the parity breaking interaction term \begin{align} \int d^4x \sqrt{|\gamma|} \epsilon^{ABCD} c_{IJKL} D_{A} \Phi^I D_{B} \Phi^J D_C\Phi^K D_D \Phi^L \ . \end{align} When the scalar coupling constants have non-zero beta functions \begin{align} zD_z \Phi^I \sim B^I \end{align} near the boundary, it is easy to see that the $z$ integration gives rise to a logarithmic divergence near the boundary, and we have the induced holographic Weyl anomaly \begin{align} \delta A_{\mathrm{anomaly}} = B^{L}c_{IJKL} \epsilon^{\mu\nu\rho} D_{\mu} g^I D_{\nu} g^J D_{\rho} g^K \ , \end{align} which is compatible with the field theory Class 3 ambiguity \eqref{class3trace}. \section{Discussions} \label{sec5} In this paper, we have discussed the consistency conditions and ambiguities of the local renormalization group in the most generic quantum field theories in $1+2$ dimensions within the power-counting renormalization scheme. We have argued that the consistency conditions from the local renormalization group require various non-trivial transversality conditions on the beta functions and the various tensors that appear in the trace anomaly. We have performed modest checks of these conditions in examples including supersymmetric field theories and holography. As is the case with other anomalies in different dimensions, the anomaly we have discussed in this paper must remain the same under duality transformations up to the ambiguities we have thoroughly discussed. In addition, it must satisfy the matching condition under the renormalization group flow. Therefore we may be able to use our new Weyl anomaly in $1+2$ dimensions for novel checks of the dualities proposed in the literature.
For instance, $S$ in \eqref{massanomaou} is nothing but the operator product expansion coefficients of $O^{(m)}$ at the conformal fixed point, and they must agree between duality pairs. With respect to anomaly matching, it would be interesting to construct the Wess-Zumino action as the integrated form of the anomaly, in contrast to the infinitesimal variation we have discussed in this paper. After all, the Wess-Zumino conditions guarantee that the integration is possible. The integrated Weyl anomaly in even dimensions is studied as a dilaton effective action in \cite{Schwimmer:2010za}\cite{Komargodski:2011vj} at the conformal fixed point, in relation to proving the $a$-theorem in $1+3$ dimensions. The complete dilaton effective action off criticality, incorporating the space-time dependent coupling constant contribution, was obtained in \cite{JO}\cite{R} (see \cite{Fortin:2012hn}\cite{Nakayama:2013is} for related computations). It is possible to apply the same technique here in $1+2$ dimensions. We only note, however, that the parity violating contribution to the on-shell dilaton scattering is trivial due to the Bose symmetry (see \cite{Nakayama:2013is} for a related remark in $d=1+3$ dimensions). In this paper, we have not addressed the question of whether the conjectured F-theorem \cite{Myers:2010tj}\cite{Jafferis:2011zi} could be understood from the consistency conditions of the renormalization group (probably together with other assumptions such as unitarity). While our consistency conditions give various constraints on the renormalization group flow, we have not so far obtained an equation analogous to \eqref{atheorem} valid in even space-time dimensions. Probably, we should study the properties of the partition function itself by integrating the Weyl transformation explicitly. \section*{Acknowledgements} The author would like to thank H.~Osborn for discussions and sharing his note.
He would like to thank the CERN theory division and the APCTP for their hospitality while the current research was developed. He particularly thanks the organizers of the wonderful workshops there. This work is supported by the Sherman Fairchild Senior Research Fellowship at the California Institute of Technology and DOE grant DE-FG02-92ER40701.
\section{Introduction} Recently, attention-based deep models have proved effective at handling a variety of AI problems such as machine translation \cite{bahdanau2014neural}, object detection \cite{mnih2014recurrent,ba2014multiple}, visual question answering \cite{xu2015ask,chen2015abc}, and image captioning \cite{xu2015show}. Inspired by human attention mechanisms, these deep models learn dynamic weightings of the input vectors, which allow for more flexibility and expressive power. In this work we focus on attention models for image captioning. The state-of-the-art image captioning models \cite{kiros2014unifying,mao2014deep,karpathy2015deep,donahue2015long,vinyals2015show} adopt Convolutional Neural Networks (CNNs) to extract image features and Recurrent Neural Networks (RNNs) to decode these features into a sentence description. Within this encoder-decoder framework \cite{cho2014learning}, the models proposed by \cite{xu2015show} apply an attention mechanism, i.e., attending to different areas of the image when generating words one by one. Although impressive visualization results of the attention maps for image captioning are shown in \cite{xu2015show}, the authors do not provide \textit{quantitative evaluations} of the attention maps generated by their models. Since deep network attention can be viewed as a form of alignment from language space to image space, we argue that these attention maps in fact carry important information in understanding (and potentially improving) deep networks. Therefore in this paper, we study the following two questions: \begin{itemize} \item How often and to what extent are the attention maps consistent with human perception/annotation? \item Will more human-like attention maps result in better captioning performance?
\end{itemize} \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{figs/fig1a_jh.pdf} \caption{Image captioning models \protect\cite{xu2015show} can attend to different areas of the image when generating the words. However, these generated attention maps may not correspond to the region that the words or phrases describe in the image (e.g. ``shovel''). We evaluate this phenomenon quantitatively by defining attention correctness, and alleviate the inconsistency by introducing explicit supervision. In addition, we show a positive correlation between attention correctness and caption quality.} \label{fig:overview} \end{figure} Towards these goals, we propose a novel quantitative metric to evaluate the ``correctness'' of attention maps. We define ``correctness'' as the consistency between the attention maps generated by the model and the corresponding regions that the words/phrases describe in the image. More specifically, we use the alignment annotations between image regions and noun phrase caption entities provided in the Flickr30k Entities dataset \cite{plummer2015flickr30k} as our ground truth maps. Using this metric, we show that the attention model of \cite{xu2015show} performs better than the uniform attention baseline, but still has room for improvement in terms of attention consistency with human annotations. Based on this observation, we propose a model with explicit supervision of the attention maps. The model can be used not only when detailed ground truth attention maps are given (e.g. the Flickr30k Entities dataset \cite{plummer2015flickr30k}) but also when only the semantic labelings of image regions (a much cheaper type of annotation) are available (e.g. the MS COCO dataset \cite{lin2014microsoft}). Our experiments show that in both scenarios, our models perform consistently and significantly better than their implicit attention counterparts in terms of both attention map accuracy and the quality of the final generated captions.
To the best of our knowledge, this is the first work that quantitatively measures the quality of visual attention in deep models and shows significant improvement by adding supervision to the attention module. \section{Related Work} \textbf{Image Captioning Models} There has been growing interest in the field of image captioning, with lots of work demonstrating impressive results \cite{kiros2014unifying,xu2015show,mao2014deep,vinyals2015show,donahue2015long,fang2015captions,karpathy2015deep,chen2014learning}. However, it is uncertain to what extent the captioning models truly understand and recognize the objects in the image while generating the captions. \cite{xu2015show} proposed an attention model and qualitatively showed that the model can attend to specific regions of the image by visualizing the attention maps of a few images. Our work takes a step further by quantitatively measuring the quality of the attention maps. The role of the attention maps also relates to referring expressions \cite{mao2015generation,hu2015natural}, where the goal is predicting the part of the image that is relevant to the expression. \noindent \textbf{Deep Attention Models} In machine translation, \cite{bahdanau2014neural} introduced an extra softmax layer in the RNN/LSTM structure that generates weights of the individual words of the sentence to be translated. The quality of the attention/alignment was qualitatively visualized in \cite{bahdanau2014neural} and quantitatively evaluated in \cite{luong2015effective} using the alignment error rate. In image captioning, \cite{xu2015show} used convolutional image features with spatial information as input, allowing attention on 2D space. \cite{you2016image} targeted attention on a set of concepts extracted from the image to generate image captions. In visual question answering, \cite{chen2015abc,xu2015ask,shih2016wtl,zhu2015visual7w} proposed several models which attend to image regions or questions when generating an answer. 
But none of these models quantitatively evaluates the quality of the attention maps or imposes supervision on the attention. Concurrently, \cite{das2016human} analyzed the consistency between human and deep network attention in visual question answering. Our goal differs in that we are interested in how attention changes with the progression of the description. \noindent \textbf{Image Description Datasets} For image captioning, Flickr8k \cite{hodosh2013framing}, Flickr30k \cite{young2014image}, and MS COCO \cite{lin2014microsoft} are the most commonly used benchmark datasets. \cite{plummer2015flickr30k} augmented the original caption annotations in Flickr30k by providing region-to-phrase correspondences. Specifically, annotators were first asked to identify the noun phrases in the captions, and then mark the corresponding regions with bounding boxes. In this work we use this dataset as ground truth to evaluate the quality of the generated attention maps, as well as to train our strongly supervised attention model. Our model can also utilize the instance segmentation annotations in MS COCO to train our weakly supervised version. \section{Deep Attention Models for Image Captioning} \label{sec:model} In this section, we first discuss the attention model that learns the attention weights implicitly \cite{xu2015show}, and then introduce our explicitly supervised attention model. \subsection{Implicit Attention Model} \label{sec:implicit_model} The implicit attention model \cite{xu2015show} consists of three parts: the encoder which encodes the visual information (i.e. a visual feature extractor), the decoder which decodes the information into words, and the attention module which performs spatial attention. The visual feature extractor produces $L$ vectors that correspond to different spatial locations of the image: $a = \{\mathbf{a}_1, \hdots, \mathbf{a}_L\}, \ \mathbf{a}_i \in \mathbb{R}^D$.
Given the visual features, the goal of the decoder is to generate a caption $y$ of length $C$: $y = \{y_1, \hdots, y_C\}$. We use $\mathbf{y}_t \in \mathbb{R}^K$ to represent the one-hot encoding of $y_t$, where $K$ is the dictionary size. In \cite{xu2015show}, an LSTM network \cite{hochreiter1997long} is used as the decoder: \begin{align} &\mathbf{i}_t = \sigma(W_i E \mathbf{y}_{t-1} + U_i \mathbf{h}_{t-1} + Z_i \mathbf{z}_t + \mathbf{b}_i) \\ &\mathbf{f}_t = \sigma(W_f E \mathbf{y}_{t-1} + U_f \mathbf{h}_{t-1} + Z_f \mathbf{z}_t + \mathbf{b}_f) \\ &\mathbf{c}_t = \mathbf{f}_t \mathbf{c}_{t-1} + \mathbf{i}_t \text{tanh}(W_c E \mathbf{y}_{t-1} + U_c \mathbf{h}_{t-1} + Z_c \mathbf{z}_t + \mathbf{b}_c) \\ &\mathbf{o}_t = \sigma(W_o E \mathbf{y}_{t-1} + U_o \mathbf{h}_{t-1} + Z_o \mathbf{z}_t + \mathbf{b}_o) \\ &\mathbf{h}_t = \mathbf{o}_t \text{tanh}(\mathbf{c}_t) \end{align} where $\mathbf{i}_t, \mathbf{f}_t, \mathbf{c}_t, \mathbf{o}_t, \mathbf{h}_t$ are input gate, forget gate, memory, output gate, and hidden state of the LSTM respectively. $W, U, Z, \mathbf{b}$ are weight matrices and biases. $E \in \mathbb{R}^{m \times K}$ is an embedding matrix, and $\sigma$ is the sigmoid function. The context vector $\mathbf{z}_t = \sum_{i=1}^L \alpha_{ti} \mathbf{a}_i$ is a dynamic vector that represents the relevant part of image feature at time step $t$, where $\alpha_{ti}$ is a scalar weighting of visual vector $\mathbf{a}_i$ at time step $t$, defined as follows: \begin{equation} \alpha_{ti} = \frac{\exp(e_{ti})}{\sum_{k=1}^L \exp(e_{tk})} \quad \quad e_{ti} = f_{attn}(\mathbf{a_i}, \mathbf{h}_{t-1}) \end{equation} $f_{attn}(\mathbf{a_i}, \mathbf{h}_{t-1})$ is a function that determines the amount of attention allocated to image feature $\mathbf{a_i}$, conditioned on the LSTM hidden state $\mathbf{h}_{t-1}$. In \cite{xu2015show}, this function is implemented as a multilayer perceptron. Note that by construction $\sum_{i=1}^L \alpha_{ti} = 1$. 
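As an illustration, the softmax weighting and the context vector above can be sketched in a few lines of NumPy. This is a minimal sketch with our own function names and array conventions, not the authors' implementation; the scores $e_{ti}$ are assumed to be given by some $f_{attn}$.

```python
import numpy as np

def softmax(e):
    # Numerically stable softmax over the L spatial locations.
    e = e - np.max(e)
    w = np.exp(e)
    return w / np.sum(w)

def attention_context(a, e):
    """a: (L, D) array of visual vectors a_i; e: (L,) scores e_ti
    produced by f_attn. Returns the weights alpha_t and context z_t."""
    alpha = softmax(e)   # alpha_ti, non-negative and summing to 1
    z = alpha @ a        # z_t = sum_i alpha_ti * a_i, shape (D,)
    return alpha, z
```

By construction `alpha` sums to one, matching the normalization $\sum_{i=1}^L \alpha_{ti} = 1$ used throughout the paper.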
The output word probability is determined by the image $\mathbf{z}_t$, the previous word $y_{t-1}$, and the hidden state $\mathbf{h}_t$: \begin{equation} p(y_t| a, y_{t-1}) \propto \exp( G_o (E \mathbf{y}_{t-1} + G_h \mathbf{h}_t + G_z \mathbf{z}_t)) \end{equation} where $G$ are learned parameters. The loss function, ignoring the regularization terms, is the negative log probability of the ground truth words $w = \{w_1, \hdots, w_C\}$: \begin{equation} \label{eqn:loss-cap} L_{t, cap} = -\log p(w_t | a, y_{t-1}) \end{equation} \subsection{Supervised Attention Model} In this work we are interested in the attention map generated by the model $\pmb{\alpha}_t = \{\alpha_{ti}\}_{i = 1, \hdots, L}$. One limitation of the model in \cite{xu2015show} is that even if we have some prior knowledge about the attention map, it will not be able to take advantage of this information to learn a better attention function $f_{attn}(\mathbf{a_i}, \mathbf{h}_{t-1})$. We tackle this problem by introducing explicit supervision. Concretely, we first consider the case when the ground truth attention map $\pmb{\beta}_t = \{\beta_{ti}\}_{i = 1, \hdots, L}$ is provided for the ground truth word $w_t$, with $\sum_{i=1}^L \beta_{ti} = 1$. Since $\sum_{i=1}^L \beta_{ti} = \sum_{i=1}^L \alpha_{ti} = 1$, they can be considered as two probability distributions of attention and it is natural to use the cross entropy loss. For the words that do not have an alignment with an image region (e.g. ``a'', ``is''), we simply set $L_{t, attn}$ to be 0: \begin{equation} L_{t, attn} = \begin{cases} -\sum_{i=1}^L \beta_{ti} \log \alpha_{ti} & \text{if } \pmb{\beta}_t \text{ exists for } w_t \\ 0 & \text{otherwise} \end{cases} \end{equation} The total loss is the weighted sum of the two loss terms: $L = \sum_{t=1}^C L_{t, cap} + \lambda \sum_{t=1}^C L_{t, attn}$. We then discuss two ways of constructing the ground truth attention map $\pmb{\beta}_t$, depending on the types of annotations. 
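For concreteness, the attention term of the loss can be sketched as follows. This is a minimal NumPy sketch under our own naming; the `eps` guard on the logarithm is a numerical convenience and not part of the paper's formulation.

```python
import numpy as np

def attention_loss(alpha, beta, eps=1e-12):
    """Cross-entropy L_{t,attn} between the ground truth attention
    beta_t and the predicted attention alpha_t (both sum to 1)."""
    return float(-np.sum(beta * np.log(alpha + eps)))

def total_loss(caption_losses, attn_losses, lam):
    # L = sum_t L_{t,cap} + lambda * sum_t L_{t,attn}
    return sum(caption_losses) + lam * sum(attn_losses)
```

Words without an aligned ground truth region simply contribute a zero entry to `attn_losses`, as in the case split above.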
\subsubsection{Strong Supervision with Alignment Annotation} \label{sec:model-strong} In the simplest case, we have direct annotation that links the ground truth word $w_t$ to a region $R_t$ (in the form of bounding boxes or segmentation masks) in the image (e.g. Flickr30k Entities). We encourage the model to ``attend to'' $R_t$ by constructing $\pmb{\hat{\beta}}_t = \{\hat{\beta}_{t\hat{i}}\}_{\hat{i}=1, .., \hat{L}}$ where: \begin{equation} \hat{\beta}_{t\hat{i}} = \begin{cases} 1 & \hat{i} \in R_t \\ 0 & \text{otherwise} \end{cases} \end{equation} Note that the resolution of the region $R$ (e.g. $224 \times 224$) and the attention map $\pmb{\alpha}, \pmb{\beta}$ (e.g. $14 \times 14$) may be different, so $\hat{L}$ could be different from $L$. Therefore we need to resize $\pmb{\hat{\beta}}_t$ to the same resolution as $\pmb{\alpha}_t$ and normalize it to get $\pmb{\beta}_t$. \subsubsection{Weak Supervision with Semantic Labeling} \label{sec:model-weak} Ground truth alignment is expensive to collect and annotate. A much more general and cheaper annotation is to use bounding boxes or segmentation masks with object class labels (e.g. MS COCO). In this case, we are provided with a set of regions $R_j$ in the image with associated object classes $c_j$, $j=1, \hdots, M$ where $M$ is the number of object bounding boxes or segmentation masks in the image. Although not ideal, these annotations contain important information to guide the attention of the model. For instance, for the caption ``a boy is playing with a dog'', the model should attend to the region of a person when generating the word ``boy'', and attend to the region of a dog when generating the word ``dog''. This suggests that we can approximate image-to-language (region $\rightarrow$ word) consistency by language-to-language (object class $\rightarrow$ word) similarity. 
Following this intuition, we set the likelihood that a word $w_t$ and a region $R_j$ are aligned to be the similarity of $w_t$ and $c_j$ in the word embedding space: \begin{equation} \label{eqn:sim} \hat{\beta}_{t\hat{i}} = \begin{cases} \text{sim}(\tilde{E}(w_t), \tilde{E}(c_j)) & \hat{i} \in R_j \\ 0 & \text{otherwise} \end{cases} \end{equation} where $\tilde{E}(w_t)$ and $\tilde{E}(c_j)$ denote the embeddings of the words $w_t$ and $c_j$ respectively. $\tilde{E}$ can be the embedding $E$ learned by the model or any off-the-shelf word embedding (e.g. pre-trained word2vec). We then resize and normalize $\pmb{\hat{\beta}}_t$ in the same way as in the strong supervision scenario. \section{Attention Correctness: Evaluation Metric} \label{sec:metric} At each time step in the implicit attention model, the LSTM not only predicts the next word $y_t$ but also generates an attention map $\pmb{\alpha_t} \in \mathbb{R}^L$ across all locations. However, the attention module is merely an intermediate step, while the error is only backpropagated from the word-likelihood loss in Equation~\ref{eqn:loss-cap}. This raises the question of whether this implicitly learned attention module is indeed effective. Therefore in this section we introduce the concept of \textit{attention correctness}, an evaluation metric that quantitatively analyzes the quality of the attention maps generated by the attention-based model.
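Before turning to the metric, the two ground truth attention constructions of Section 3.2 can be summarized in code. This is a minimal NumPy sketch with our own assumptions: box membership is decided by the attention cell's center, and $\text{sim}(\cdot,\cdot)$ is instantiated as cosine similarity; neither choice is prescribed by the text.

```python
import numpy as np

def strong_beta(box, img_size, attn_size):
    """Strong supervision: box = (x1, y1, x2, y2) in image coordinates,
    img_size = (W, H), attn_size = (h, w). A cell is in R_t if its
    center falls inside the box; the map is then normalized."""
    W, H = img_size
    h, w = attn_size
    beta_hat = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            cx, cy = (j + 0.5) * W / w, (i + 0.5) * H / h
            if box[0] <= cx <= box[2] and box[1] <= cy <= box[3]:
                beta_hat[i, j] = 1.0
    return beta_hat / beta_hat.sum()

def cosine_sim(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def weak_beta(word_vec, regions, attn_size):
    """Weak supervision: regions is a list of (mask, class_vec) pairs,
    where mask is a binary (h, w) array for R_j and class_vec embeds
    the class label c_j; each region is weighted by sim(E(w_t), E(c_j))."""
    beta_hat = np.zeros(attn_size)
    for mask, class_vec in regions:
        beta_hat += mask * cosine_sim(word_vec, class_vec)
    return beta_hat / beta_hat.sum()
```

`strong_beta` assumes the box covers at least one cell center, and `weak_beta` assumes positive similarities; a practical implementation would handle the degenerate cases.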
\subsection{Definition} \label{sec:metric-def} \begin{wrapfigure}{r}{0.19\textwidth} \vspace{-12mm} \centering \includegraphics[width=0.19\textwidth]{figs/attn.pdf} \caption{Attention correctness is the sum of the weights within the ground truth region (red bounding box); in this illustration 0.12 + 0.20 + 0.10 + 0.12 = 0.54.} \vspace{-4mm} \label{fig:att-correct} \end{wrapfigure} For a word $y_t$ with generated attention map $\pmb{\alpha}_t$, let $R_t$ be the ground truth attention region. We define the word attention correctness by \begin{equation} AC(y_t) = \sum_{\hat{i} \in R_t} \hat{\alpha}_{t\hat{i}} \end{equation} which is a score between 0 and 1. Intuitively, this value captures the sum of the attention score that falls within the human annotation (see Figure~\ref{fig:att-correct} for an illustration). Here $\pmb{\hat{\alpha}}_t = \{ \hat{\alpha}_{t\hat{i}} \}_{\hat{i} = 1, \hdots, \hat{L}}$ is the resized and normalized $\pmb{\alpha}_t$, which ensures size consistency. In some cases a phrase $\{y_t, \hdots, y_{t+l} \}$ refers to the same entity, and therefore the individual words share the same attention region $R_t$. We define the phrase attention correctness as the maximum of the individual scores\footnote{In the experiments, we found that changing the definition from maximum to average does not affect our main conclusion.}: \begin{equation} AC(\{y_t, \hdots, y_{t+l} \}) = \max (AC(y_t), \hdots, AC(y_{t+l})) \end{equation} The intuition is that the phrase may contain some less interesting words whose attention map is ambiguous, and the attention maps of these words can be ignored by the max operation. For example, when evaluating the phrase ``a group of people'', we are more interested in the attention correctness for ``people'' than for ``of''. We discuss next how to find ground truth attention regions during testing, in order to apply this evaluation metric.
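Both scores are immediate to compute once the resized, normalized map $\pmb{\hat{\alpha}}_t$ is available; a minimal sketch (function names ours):

```python
import numpy as np

def word_ac(alpha, region_mask):
    """AC(y_t): total attention weight inside the ground-truth region.
    alpha is the resized attention map, already normalized to sum to 1;
    region_mask is a boolean array of the same shape."""
    return float(alpha[region_mask].sum())

def phrase_ac(alphas, region_mask):
    """Phrase score: the max over the words sharing the region, so that
    ambiguous words like 'of' in 'a group of people' are ignored."""
    return max(word_ac(a, region_mask) for a in alphas)
```

On the toy map of Figure~\ref{fig:att-correct}, a $2\times2$ ground-truth region holding weights $0.12$, $0.20$, $0.10$, $0.12$ yields $AC = 0.54$.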
\subsection{Ground Truth Attention Region During Testing} \label{sec:metric-match} In order to compute attention correctness, we need the correspondence between regions in the image and phrases in the caption. However, in the testing stage, the generated caption is often different from the ground truth captions. This makes evaluation difficult, because we only have corresponding image regions for the phrases in the ground truth caption, but not for \textit{any} phrase. To this end, we propose two strategies. \noindent \textbf{Ground Truth Caption} One option is to force the model to output the ground truth sentence by resetting the input to the ground truth word at each time step. This procedure to some extent allows us to ``decorrelate'' the attention module from the captioning component, and to diagnose whether the learned attention module is meaningful. Since the generated caption exactly matches the ground truth, we compute attention correctness for all noun phrases in the test set. \noindent \textbf{Generated Caption} Another option is to align the entities in the generated caption to those in the ground truth caption. For each image, we first extract the noun phrases of the generated caption using a POS tagger (e.g. the Stanford Parser \cite{manning2014stanford}), and check whether there exists a word-by-word match in the set of noun phrases of the ground truth captions. For example, if the generated caption is ``A dog jumping over a hurdle'' and one of the ground truth captions is ``A cat jumping over a hurdle'', we match the noun phrase ``a hurdle'' appearing in both sentences. We then calculate the attention correctness for the matched phrases only. \section{Experiments} \label{sec:exp} \subsection{Implementation Details} \begin{figure}[t!]
\centering \captionsetup[subfigure]{labelformat=empty, font=small} \begin{subfigure}{.23\textwidth} \captionsetup{width=0.95\textwidth} \centering \includegraphics[width=0.32\textwidth]{figs/coco/clock0} \includegraphics[width=0.32\textwidth]{figs/coco/clock1} \includegraphics[width=0.32\textwidth]{figs/coco/clock2} \caption{\textit{The huge \underline{clock} on the wall is near a wooden \underline{table}.}} \end{subfigure} \begin{subfigure}{.23\textwidth} \captionsetup{width=0.95\textwidth} \centering \includegraphics[width=0.32\textwidth]{figs/coco/laptop0} \includegraphics[width=0.32\textwidth]{figs/coco/laptop1} \includegraphics[width=0.32\textwidth]{figs/coco/laptop2} \caption{\textit{A \underline{man} is on his \underline{laptop} while people looking on. }} \end{subfigure} \begin{subfigure}{.23\textwidth} \captionsetup{width=0.95\textwidth} \centering \includegraphics[width=0.32\textwidth]{figs/coco/girl0} \includegraphics[width=0.32\textwidth]{figs/coco/girl1} \includegraphics[width=0.32\textwidth]{figs/coco/girl1} \caption{\textit{A young \underline{girl} and a \underline{woman} preparing food in a kitchen.}} \end{subfigure} \begin{subfigure}{.23\textwidth} \captionsetup{width=0.95\textwidth} \centering \includegraphics[width=0.32\textwidth]{figs/coco/kitchen0} \includegraphics[width=0.32\textwidth]{figs/coco/kitchen1} \includegraphics[width=0.32\textwidth]{figs/coco/kitchen2} \caption{\textit{A bicycle parked in a \underline{kitchen} by the stove.}} \end{subfigure} \caption{Ground truth attention maps generated for COCO. The first two examples show successful cases. The third example is a failed case where the proposed method aligns both ``girl'' and ``woman'' to the ``person'' category. The fourth example shows the necessity of using the scene category list. If we do not distinguish between object and scene (middle), the algorithm proposes to align the word ``kitchen'' with objects like ``spoon'' and ``oven''. 
We propose to use uniform attention (right) in these cases.} \label{fig:coco-gt} \end{figure} \textbf{Implicit/Supervised Attention Models} All implementation details strictly follow \cite{xu2015show}. We resize the image such that the shorter side has 256 pixels, and then center crop the $224\times 224$ image, before extracting the conv5\textunderscore4 feature of the 19 layer version of VGG net \cite{simonyan2014very} pretrained on ImageNet \cite{deng2009imagenet}. The model is trained using stochastic gradient descent with the Adam algorithm \cite{kingma2014adam}. Dropout \cite{srivastava2014dropout} is used as regularization. We use the hyperparameters provided in the publicly available code\footnote{https://github.com/kelvinxu/arctic-captions}. We set the number of LSTM units to 1300 for Flickr30k and 1800 for COCO. \noindent \textbf{Ground Truth Attention for Strong Supervision Model} We experiment with our strong supervision model on the Flickr30k dataset \cite{young2014image}. The Flickr30k Entities dataset \cite{plummer2015flickr30k} is used for generating the ground truth attention maps. For each entity (noun phrase) in the caption, the Flickr30k Entities dataset provides the corresponding bounding box of the entity in the image. Therefore ideally, the model should ``attend to'' the marked region when predicting the associated words. We evaluate on noun phrases only, because for other types of words (e.g. determiner, preposition) the attention might be ambiguous and meaningless. \noindent \textbf{Ground Truth Attention for Weak Supervision Model} The MS COCO dataset \cite{lin2014microsoft} contains instance segmentation masks of 80 classes in addition to the captions, which makes it suitable for our model with weak supervision. We only construct $\pmb{\beta}_t$ for the nouns in the captions, which are extracted using the Stanford Parser \cite{manning2014stanford}. 
The similarity function in Equation~\ref{eqn:sim} is chosen to be the cosine similarity between word vectors \cite{mikolov2013distributed} pretrained on GoogleNews\footnote{https://code.google.com/archive/p/word2vec/}, and we set an empirical threshold of 1/3 (i.e. we only keep those with cosine similarity greater than the threshold). The $\pmb{\beta}_t$ generated in this way still contains obvious errors, primarily because word2vec cannot distinguish well between objects and scenes. For example, the similarity between the word ``kitchen'' and the object class ``spoon'' is above the threshold. But when generating a scene word like ``kitchen'', the model should attend to the whole image instead of focusing on a small object like ``spoon''. To address this problem, we refer to the supplement of \cite{lin2014microsoft}, which provides a scene category list containing key words of scenes used when collecting the dataset. Whenever some word in this scene category list appears in the caption, we set $\pmb{\beta}_t$ to be uniform, i.e. equal attention across the image. This greatly improves the quality of $\pmb{\beta}_t$ in some cases (see the illustration in Figure~\ref{fig:coco-gt}). \noindent \textbf{Comparison of Metric Designs} To show the legitimacy of our attention correctness metric, we compute the Spearman correlation between our design and three other metrics: negative L1 distance, negative L2 distance, and KL divergence between $\pmb{\hat{\beta}}_t$ and $\pmb{\hat{\alpha}}_t$. On the Flickr30k test set with implicit attention and ground truth captions, the Spearman correlations between any two of these are all above 0.96 (see the supplementary material), suggesting that all these measurements are similar. Therefore our metric statistically correlates well with the other metrics, while being the most intuitive. \subsection{Evaluation of Attention Correctness} In this subsection, we quantitatively evaluate the attention correctness of both the implicit and the supervised attention models.
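The two refinements above — discarding similarities at or below the empirical threshold of 1/3, and replacing $\pmb{\beta}_t$ by a uniform map whenever the word is on the COCO scene category list — amount to a small post-processing step. The sketch below uses our own encoding; the scene list itself comes from the supplement of \cite{lin2014microsoft}:

```python
import numpy as np

def refine_beta(word, beta, scene_words, threshold=1.0 / 3.0):
    """Post-process a weak-supervision target map.

    Scene words (e.g. 'kitchen') should not lock onto a small object
    such as 'spoon', so they get uniform attention over the image.
    Otherwise, similarities at or below the threshold are dropped.
    scene_words is the scene category list from the COCO supplement.
    """
    if word in scene_words:
        return np.full(beta.shape, 1.0 / beta.size)  # uniform attention
    return np.where(beta > threshold, beta, 0.0)     # keep strong matches
```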
All experiments are conducted on the 1000 test images of Flickr30k. We compare the result with a uniform baseline, which attends equally across the whole image. Therefore the baseline score is simply the size of the bounding box over the size of the whole image. The results are summarized in Table~\ref{tab:attn-improve}. \begin{table}[t] \caption{Attention correctness and baseline on Flickr30k test set. Both the implicit and the (strongly) supervised models outperform the baseline. The supervised model performs better than the implicit model in both settings.} \label{tab:attn-improve} \centering \small{ \begin{tabular}{c c c c c} \bf{Caption} & \bf{Model} & \bf{Baseline} & \bf{Correctness} \\ \hline \\[-8pt] \multirow{2}{*}{Ground Truth} & Implicit & 0.3214 & 0.3836 \\ & Supervised & 0.3214 & \bf{0.4329} \\ \hline \\[-8pt] \multirow{2}{*}{Generated} & Implicit & 0.3995 & 0.5202 \\ & Supervised & 0.3968 & \bf{0.5787} \\ \hline \\[-8pt] \end{tabular} } \end{table} \begin{table}[t] \caption{Attention correctness and baseline on the Flickr30k test set (generated caption, same matches for implicit and supervised) with respect to bounding box size. 
The improvement is greatest for small objects.} \label{tab:size-split} \centering \small{ \begin{tabular}{c c c c c} \bf{BBox Size} & \bf{Model} & \bf{Baseline} & \bf{Correctness}\\ \hline \\[-8pt] \multirow{2}{*}{Small} & Implicit & 0.1196 & 0.2484 \\ & Supervised & 0.1196 & \bf{0.3682} \\ \hline \\[-8pt] \multirow{2}{*}{Medium} & Implicit & 0.3731 & 0.5371 \\ & Supervised & 0.3731 & \bf{0.6117} \\ \hline \\[-8pt] \multirow{2}{*}{Large} & Implicit & 0.7358 & 0.8117 \\ & Supervised & 0.7358 & \bf{0.8255} \\ \hline \\[-8pt] \end{tabular} } \end{table} \begin{figure}[t] \centering \captionsetup[subfigure]{font=small} \begin{subfigure}{.23\textwidth} \centering \includegraphics[width = \textwidth]{figs/gtcap} \caption{Ground truth caption result} \end{subfigure} \begin{subfigure}{.23\textwidth} \centering \includegraphics[width = \textwidth]{figs/gcap} \caption{Generated caption result} \end{subfigure} \caption{Histograms of attention correctness for the implicit model and the supervised model on the Flickr30k test set. The more to the right the better.} \label{fig:attn-improve} \end{figure} \noindent \textbf{Ground Truth Caption Result} In this setting, both the implicit and supervised models are forced to produce exactly the same captions, resulting in 14566 noun phrase matches. We discard those with no attention region or full image attention (as the match score will be 1 regardless of the attention map). For each of the remaining matches, we resize the original attention map from $14\times 14$ to $224\times 224$ and perform normalization before we compute the attention correctness for this noun phrase. Both models are evaluated in Figure~\ref{fig:attn-improve}a. The horizontal axis is the improvement over baseline, therefore a better attention module should result in a distribution further to the right. On average, both models perform better than the baseline. 
Specifically, the average gain over the uniform attention baseline is 6.22\% for the implicit attention model \cite{xu2015show}, and 11.14\% for the supervised version. Visually, the distribution of the supervised model is further to the right. This indicates that although the implicit model has captured some aspects of attention, the model learned with strong supervision has a better attention module. In Figure~\ref{fig:gtcap} we show some examples where the supervised model correctly recovers the spatial location of the underlined entity, while the implicit model attends to the wrong region. \noindent \textbf{Generated Caption Result} In this experiment, the word-by-word match aligns 909 noun phrases for the implicit model and 901 for the supervised version. Since this strategy is rather conservative, these alignments are correct and reliable, as verified by a manual check. Similarly, we discard those with no attention region or full image attention, and perform resizing and normalization before we compute the correctness score. The results are shown in Figure~\ref{fig:attn-improve}b. In general the conclusion is the same: the supervised attention model produces attention maps that are more consistent with human judgment. The average improvement over the uniform baseline is 12.07\% for the implicit model and 18.19\% for the supervised model, which is a 50\% relative gain. In order to diagnose the relationship between object size and attention correctness, we further split the test set equally into subsets with small, medium, and large ground truth bounding boxes, and report the baseline and attention correctness individually. We can see from Table~\ref{tab:size-split} that the improvement of our supervised model over the implicit model is greatest for small objects; pinpointing small objects is stronger evidence of image understanding than pinpointing large ones. In Figure~\ref{fig:gcap} we provide some qualitative results.
These examples show that for the same entity, the supervised model produces more human-like attention than the implicit model. More visualizations are in the supplementary material. \begin{figure}[t!] \centering \captionsetup[subfigure]{labelformat=empty, font=small} \begin{subfigure}{.23\textwidth} \centering \captionsetup{width=0.95\textwidth} \includegraphics[width=0.32\textwidth]{figs/gtcap_orig/416992999_8298_orig} \includegraphics[width=0.32\textwidth]{figs/gtcap_retrain/416992999_8298} \includegraphics[width=0.32\textwidth]{figs/gtcap_re4/416992999_8298} \caption{\textit{\underline{Girl} rock climbing on the rock wall.}} \end{subfigure} \begin{subfigure}{.23\textwidth} \centering \captionsetup{width=0.95\textwidth} \includegraphics[width=0.32\textwidth]{figs/gtcap_orig/179828434_1347_orig} \includegraphics[width=0.32\textwidth]{figs/gtcap_retrain/179828434_1347} \includegraphics[width=0.32\textwidth]{figs/gtcap_re4/179828434_1347} \caption{\textit{A young smiling child hold his toy \underline{alligator} up to the camera.}} \end{subfigure} \begin{subfigure}{.23\textwidth} \centering \captionsetup{width=0.95\textwidth} \includegraphics[width=0.32\textwidth]{figs/gtcap_orig/1351500610_665_orig} \includegraphics[width=0.32\textwidth]{figs/gtcap_retrain/1351500610_665} \includegraphics[width=0.32\textwidth]{figs/gtcap_re4/1351500610_665} \caption{\textit{Two male friends in swimming trunks jump on the \underline{beach} while people in the background lay in the sand.}} \end{subfigure} \begin{subfigure}{.23\textwidth} \centering \captionsetup{width=0.95\textwidth} \includegraphics[width=0.32\textwidth]{figs/gtcap_orig/2066271441_1704_orig} \includegraphics[width=0.32\textwidth]{figs/gtcap_retrain/2066271441_1704} \includegraphics[width=0.32\textwidth]{figs/gtcap_re4/2066271441_1704} \caption{\textit{A black dog swims in water with a colorful ball in his \underline{mouth.}}\\\hspace{\textwidth}} \end{subfigure} \caption{Attention correctness using ground truth
captions. From left to right: original image, implicit attention, supervised attention. The red box marks correct attention region (from Flickr30k Entities). In general the attention maps generated by our supervised model have higher quality.} \label{fig:gtcap} \end{figure} \begin{figure}[t] \captionsetup[subfigure]{labelformat=empty, font=small} \begin{subfigure}{.09\textwidth} \centering \vspace{-0.2cm} \caption{Image} \vspace{-0.2cm} \includegraphics[width=0.95\textwidth]{figs/gcap_orig/5323049335.jpg} \caption{\\\hspace{\textwidth}\\\hspace{\textwidth}} \end{subfigure} \begin{subfigure}{.185\textwidth} \centering \captionsetup{width=0.95\textwidth} \vspace{-0.2cm} \caption{Implicit Attention} \vspace{-0.2cm} \includegraphics[width=0.46\textwidth]{figs/gcap_retrain/5323049335_01} \includegraphics[width=0.46\textwidth]{figs/gcap_retrain/5323049335_02} \caption{\textit{\underline{A man} in a red jacket and blue pants is snowboarding.}} \end{subfigure} \begin{subfigure}{.185\textwidth} \centering \captionsetup{width=0.95\textwidth} \vspace{-0.2cm} \caption{Supervised Attention} \vspace{-0.2cm} \includegraphics[width=0.46\textwidth]{figs/gcap_re4/5323049335_01} \includegraphics[width=0.46\textwidth]{figs/gcap_re4/5323049335_02} \caption{\textit{\underline{A man} in a red jumpsuit and a black hat is snowboarding.}} \end{subfigure} \begin{subfigure}{.09\textwidth} \centering \includegraphics[width=0.95\textwidth]{figs/gcap_orig/4882632874.jpg} \caption{\\\hspace{\textwidth}\\\hspace{\textwidth}} \end{subfigure} \begin{subfigure}{.185\textwidth} \centering \captionsetup{width=0.95\textwidth} \includegraphics[width=0.46\textwidth]{figs/gcap_retrain/4882632874_01} \includegraphics[width=0.46\textwidth]{figs/gcap_retrain/4882632874_02} \caption{\textit{\underline{A man} in a blue shirt and blue pants is sitting on a wall.}} \end{subfigure} \begin{subfigure}{.185\textwidth} \centering \captionsetup{width=0.95\textwidth} 
\includegraphics[width=0.46\textwidth]{figs/gcap_re4/4882632874_01} \includegraphics[width=0.46\textwidth]{figs/gcap_re4/4882632874_02} \caption{\textit{\underline{A man} in a blue shirt and blue pants is skateboarding on a ramp.}} \end{subfigure} \begin{subfigure}{.09\textwidth} \centering \includegraphics[width=0.95\textwidth]{figs/gcap_orig/4460747081.jpg} \caption{\\\hspace{\textwidth}} \end{subfigure} \begin{subfigure}{.185\textwidth} \centering \captionsetup{width=0.95\textwidth} \includegraphics[width=0.46\textwidth]{figs/gcap_retrain/4460747081_09} \includegraphics[width=0.46\textwidth]{figs/gcap_retrain/4460747081_10} \caption{\textit{A man and a woman are walking down \underline{the street}.}} \end{subfigure} \begin{subfigure}{.185\textwidth} \centering \captionsetup{width=0.95\textwidth} \includegraphics[width=0.46\textwidth]{figs/gcap_re4/4460747081_09} \includegraphics[width=0.46\textwidth]{figs/gcap_re4/4460747081_10} \caption{\textit{A man and a woman are walking down \underline{the street}.}} \end{subfigure} \caption{Attention correctness using generated captions. The red box marks correct attention region (from Flickr30k Entities). We show two attention maps for the two words in a phrase. In general the attention maps generated by our supervised model have higher quality.} \label{fig:gcap} \end{figure} \subsection{Evaluation of Captioning Performance} \begin{table}[t] \caption{Comparison of image captioning performance. * indicates our implementation. 
Caption quality consistently increases with supervision, whether it is strong or weak.} \label{tab:cap-improve} \centering \small{ \begin{tabular}{c c c c c} \bf{Dataset} & \bf{Model} & \bf{BLEU-3} & \bf{BLEU-4} & \bf{METEOR} \\ \hline \\[-8pt] \multirow{3}{*}{Flickr30k} & Implicit & 28.8 & 19.1 & 18.49 \\ &Implicit* & 29.2 & 20.1 & 19.10 \\ &Strong Sup & \bf{30.2} & \bf{21.0} & \bf{19.21} \\ \hline \\[-8pt] \multirow{3}{*}{COCO} & Implicit & 34.4 & 24.3 & 23.90 \\ &Implicit* & 36.4 & 26.9 & 24.46 \\ &Weak Sup & \bf{37.2} & \bf{27.6} & \bf{24.78} \\ \hline \\[-8pt] \end{tabular} } \end{table} \begin{table}[t] \caption{Captioning scores on the Flickr30k test set for different attention correctness levels in the generated caption, implicit attention experiment. Higher attention correctness results in better captioning performance.} \label{tab:cap-split} \centering \small{ \begin{tabular}{c c c c} \bf{Correctness} & \bf{BLEU-3} & \bf{BLEU-4} & \bf{METEOR} \\ \hline \\[-8pt] High & 38.0 & 28.1 & 23.01 \\ Middle & 36.5 & 26.1 & 21.94 \\ Low & 35.8 & 25.4 & 21.14 \\ \hline \\[-8pt] \end{tabular} } \end{table} We have shown that supervised attention models achieve higher attention correctness than implicit attention models. Although this is meaningful in tasks such as region grounding, in many tasks attention only serves as an intermediate step. We may be more interested in whether the supervised attention model also achieves better captioning performance, which is the end goal. The intuition is that a meaningful dynamic weighting of the input vectors will allow later components to decode information more easily. In this subsection we give experimental support. We report BLEU \cite{papineni2002bleu} and METEOR \cite{banerjee2005meteor} scores to allow comparison with \cite{xu2015show}. In Table~\ref{tab:cap-improve} we show both the scores reported in \cite{xu2015show} and those of our implementation.
Note that our implementation of \cite{xu2015show} gives slightly improved results over what they reported. We observe that BLEU and METEOR scores consistently increase after we introduce supervised attention, for both Flickr30k and COCO. Specifically, in terms of BLEU-4 we observe a significant increase of 0.9 and 0.7 percent respectively. To show the positive correlation between attention correctness and caption quality, we further split the Flickr30k test set (excluding those with zero alignment) equally into three sets with high, middle, and low attention correctness. The BLEU-4 scores are 28.1, 26.1, 25.4, and the METEOR scores are 23.01, 21.94, 21.14 respectively (see Table~\ref{tab:cap-split}). This indicates that higher attention correctness means better captioning performance. \section{Discussion} In this work we make a first attempt to give a quantitative answer to the question: to what extent are attention maps consistent with human perception? We first define attention correctness in terms of consistency with human annotation at both the word level and the phrase level. In the context of image captioning, we evaluated the state-of-the-art models with implicitly trained attention modules. The quantitative results suggest that although the implicit models outperform the uniform attention baseline, they still have room for improvement. We then show that by introducing supervision of the attention map, we can improve both the image captioning performance and the attention map quality. In fact, we observe a positive correlation between attention correctness and captioning quality. Even when the ground truth attention is unavailable, we are still able to utilize segmentation masks with object categories as weak supervision for the attention maps, and significantly boost captioning performance. We believe closing the gap between machine attention and human perception is necessary, and expect to see similar efforts in related fields.
\section{Acknowledgments} We gratefully acknowledge support from NSF STC award CCF-1231216 and the Army Research Office ARO 62250-CS. FS is partially supported by NSF IIS-1065243, 1451412, 1513966, and CCF-1139148. We also thank Tianze Shi for helpful suggestions in the early stage of this work. \bibliographystyle{aaai}
\section{Introduction} Let $Z$ denote a complex analytic surface in $\mathbb{C}^N$ which has an isolated singularity at the origin. By intersecting $Z$ with a small sphere centered at 0, we get a closed oriented $3$-manifold $M$, which is called the link of the singularity. The distribution of complex tangencies on $M$ is a contact distribution. In general the $3$-manifold $M$ may be the link of many other analytically distinct isolated singularities, but the induced contact structures are known to be contactomorphic \cite{CNP}. If a $3$-manifold is realized as a link of an isolated singularity, then the associated contact structure is called the canonical contact structure on $M$. Properties of these contact structures have been extensively studied in the literature \cite{AkhO,BO,LO}. One remarkable feature of canonical contact structures is that they are all Stein fillable; a deformation of the minimal resolution of a normal surface singularity and the Milnor fiber of a smoothable singularity determine Stein fillings of the canonical contact structure. The purpose of the present paper is to address the existence problem of Stein cobordisms between canonical contact structures on the link manifolds of various classes of singularities. This is also related to the problem of symplectically embedding one Milnor fiber into another. As a first step we work on almost rational (AR) singularities, which were introduced by A.~N\'emethi as an extension of rational singularities. Recall that a complex surface singularity is rational if its geometric genus is zero. M.~Artin showed that this is equivalent to the condition that any positive divisorial 2-cycle in a resolution of the singularity has nonpositive arithmetic genus; moreover, this last condition is independent of the resolution \cite{Art}.
N\'{e}methi investigated the behavior of the arithmetic genus function (which is equal to $1-\chi$ where $\chi$ is as in \eqref{eqn:chi}) on the lattice of homology 2-cycles of a resolution of a normal surface singularity. Through his observation, he deduced that if a normal surface singularity is rational then its link manifold $M$ is an L-space \cite[Theorem~6.3]{N}; i.e. $M$ is a rational homology sphere and its Heegaard Floer homology is isomorphic to that of a lens space. The converse of this statement was also proved recently in \cite{N4}. Now, an AR-singularity is one which admits a good resolution whose dual graph is a negative definite, connected tree and has the following property: by reducing the weight on a vertex we get the dual graph of a rational singularity. This property allows one to compute the Heegaard Floer homology of the link by a combinatorial process similar to Laufer's method of finding Artin's fundamental cycle \cite{N}. Even though the class of AR-singularities is restrictive, it is still large enough to contain the rational singularities and all those singularities whose links are Seifert fibered rational homology spheres with negative definite plumbing graphs. We call an almost rational singularity \emph{proper AR} if it is not rational. After recalling some preliminaries on contact structures in Section~\ref{s:conpre} and on plumbings in Section~\ref{plumbing}, we present our first result in Section~\ref{construction} which claims the existence of a Stein cobordism from an arbitrary contact structure to the canonical contact structure of a proper AR-singularity. \begin{theorem}\label{thm:steinAR} Every closed, contact 3-manifold is Stein cobordant to the canonical contact structure of a proper AR-singularity. \end{theorem} The proof of this theorem is an explicit construction of a Stein cobordism from the given contact manifolds to Brieskorn spheres. 
By contrast there are obstructions to existence of Stein cobordisms going in the reverse direction. These obstructions come from Heegaard-Floer theory, particularly its plus flavor $HF^+$ \cite{OS4, OS5}. Recall that for any connected, closed, and oriented $3$-manifold $M$ the Heegaard-Floer homology group $HF^+(M)$ is a graded $\mathbb{F}[U]$-module, where $\mathbb{F}=\mathbb{Z}/2\mathbb{Z}$. Any oriented cobordism between two such manifolds induces a graded $\mathbb{F}[U]$ module homomorphism between the corresponding Heegaard-Floer homology groups. To any co-oriented contact structure $\xi$ on a $3$-manifold $M$, there is an associated element $c^+(\xi)\in HF^+(-M)$, which is a contactomorphism invariant of the contact structure and is natural under Stein cobordisms \cite{OS6}. In particular if $\xi$ is Stein fillable then $c^+(\xi)$ does not vanish since the tight structure on $S^3$ has non-vanishing $c^+$. Moreover, $U(c^+(\xi))=0$. Utilizing this property, one can generate a numerical invariant of contact structures by letting $$ \sigma(\xi)=-\mathrm{Sup} \left \{d\in \mathbb{N}\cup\{0\}:c^+(\xi)\in U^d \cdot HF^+(-M) \right \}.$$ The first author showed that $\sigma$ is monotone under Stein cobordisms and can take all the values in the set $\{0,-1,-2,\dots,-\infty \}$ \cite{K}. The computation of $\sigma(\xi)$ is in general hard, nevertheless we are able to show that it is zero for the canonical contact structures of proper AR-singularities. The Stein cobordism obstructions are obtained as immediate corollaries of this observation. \begin{theorem} \label{nocob} Let $(M,\xi)$ be the canonical contact link manifold of a proper AR-singularity. Then $\sigma(\xi)=0$. Hence there is no Stein cobordism from $\xi$ to \begin{enumerate} \item any contact structure supported by a planar open book, \item any contact structure on the link of a rational singularity, or \item any contact structure with vanishing Ozsv\'ath-Szab\'o invariant. 
\end{enumerate} \end{theorem} In the course of proving this theorem we give an explicit method to detect the contact invariant $c^+(\xi)$ in the homology of graded roots of A.~N\'emethi, provided that $\xi$ satisfies a certain compatibility condition with a plumbing graph. To detect $c^+(\xi)$ in the homology of graded roots, one should understand the isomorphism \begin{equation} \Phi:HF^+(-M(\Gamma),\mathfrak{t}_{\mathrm{can}})\to \mathbb{H}(R_{\tau}). \label{Fi} \end{equation} Here the left side is the Heegaard Floer homology of the 3-manifold described via an AR-plumbing graph $\Gamma$ in the canonical \spinc structure $\mathfrak{t}_{\mathrm{can}}$. The right side is the homology of a graded root $R_{\tau}$, which is much easier to compute. In Section~\ref{HF} we describe the isomorphism $\Phi$ explicitly along with the required algebraic objects, explain how $c^+$ is detected in graded roots, and finally prove Theorem~\ref{nocob}. In Section~\ref{examples} we present explicit examples. \section{Contact Preliminaries}\label{s:conpre} In this section we recall some background material about contact geometry. Our main purpose is to set up our terminology. For a thorough discussion see \cite{OzbStip,Gei,CE}. Contact structures on 3-manifolds can be studied topologically via open books through the Giroux correspondence. We say that a contact structure $\xi$ on $M$ is compatible with an open book in $M$ if on the oriented binding of the open book a contact form of $\xi$ is a positive volume form and away from the binding $\xi$ can be isotoped through contact structures to be arbitrarily close to the tangent planes of the pages of the open book. The Giroux correspondence states that this compatibility relation is in fact a one-to-one correspondence between contact 3-manifolds up to contact isotopy and open books up to positive stabilizations.
Suppose $(W,J)$ is a compact complex surface with oriented boundary $-M_1\cup M_2$ that admits a strictly plurisubharmonic Morse function $\phi:W\to[t_1,t_2]$ such that $M_i=\phi^{-1}(t_i)$, for $i=1,2$, and $-dJ^*d\phi$ is a symplectic form. Then the set of complex tangencies on $M_i$ constitute a contact structure $\xi_i$, for $i=1,2$. In this case we say that $W$ is a Stein cobordism from $(M_1,\xi_1)$ to $(M_2,\xi_2)$. If $M_1=\emptyset$ we say that $W$ is a Stein filling for $(M_2,\xi_2)$. Establishing the existence of a Stein cobordism between given contact manifolds is a delicate problem. Etnyre and Honda proved that an overtwisted contact 3-manifold is Stein cobordant to any contact $3$-manifold and that for any contact $3$-manifold $M$ there is a Stein fillable one to which $M$ is Stein cobordant \cite{EH}. The obstructions usually utilize Floer type theories. The first author used Heegaard Floer homology to prove non-existence of Stein cobordisms between certain contact manifolds \cite{K}; see also Sections \ref{HF} and \ref{examples} below. Another powerful tool which obstructs more generally exact symplectic cobordisms is Latschev and Wendl's algebraic torsion which was built in the framework of symplectic field theory \cite{LW}. In the appendix of the same paper Hutchings described the same obstruction in embedded contact homology. Translation of the latter in the Heegaard Floer setting was recently given by Kutluhan et al \cite{KMVW}. Starting with a contact $3$-manifold $(M_1,\xi_1)$ we can build a Stein cobordism as follows. First take the product $M_1\times [0,1]$ and equip it with the standard Stein structure. Then attach $1$-handles to the upper boundary. Next attach $2$-handles along Legendrian knots in $M_1\sharp_n S^1\times S^2$ with framing one less than the contact framing. 
By the topological characterization of Stein cobordisms given by Eliashberg \cite{Eli}, see also \cite[Theorem~1.3]{Go}, the standard Stein structure on $M_1\times [0,1]$ extends uniquely over such handles, and all Stein cobordisms can be constructed this way. We are going to need a well-known partial translation of this recipe in terms of open books. \begin{lemma}\label{l:monostei} There exists a Stein cobordism from $(M_1,\xi_1)$ to $(M_2,\xi_2)$ if there exist open books $(S_i,\phi_i)$ compatible with $(M_i,\xi_i)$ for $i=1,2$, such that $S_1$ and $S_2$ are homeomorphic surfaces and $\phi_2$ is obtained by multiplying $\phi_1$ with right handed Dehn twists along non-separating curves. \end{lemma} \begin{proof} By the Legendrian realization principle \cite[Theorem~3.7]{H}, each non-separating curve on a page can be isotoped to a Legendrian curve staying on the same page such that the page framing and the contact framing agree. Adding right handed Dehn twists to the monodromy along these curves then precisely corresponds to attaching Stein handles along the Legendrian curves. \end{proof} \section{Preliminaries on Plumbings} \label{plumbing} Let $\Gamma$ be a weighted connected tree with vertices $\{b_j: j \in \mathcal{J}\}$ for some finite index set $\mathcal{J}$. Let $e_j$ denote the weight of the $j$th vertex $b_j$ for all $j\in \mathcal{J}$. We construct a $4$-manifold $X(\Gamma)$ as follows: for each vertex $b_j$, take a disk bundle over a sphere whose Euler number is $e_j$, and plumb two of these together whenever there is an edge connecting the corresponding vertices. We denote the boundary 3-manifold by $M(\Gamma)$. We shall call such a graph a plumbing graph. These graphs naturally arise in singularity theory as good dual resolution graphs, where vertices correspond to exceptional divisors and edges represent their intersections. See \cite{N3} for a survey on this topic.
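A standard example to keep in mind (well known, and not part of the argument above) is the $E_8$ graph:

```latex
% The E_8 plumbing graph: eight vertices, all of weight -2,
%
%                         b_8
%                          |
%   b_1--b_2--b_3--b_4--b_5--b_6--b_7
%
% Its intersection form is the negative definite E_8 form, and the
% boundary of the associated plumbing is the Poincare homology sphere:
\[
  M(E_8)\;\cong\;\Sigma(2,3,5).
\]
```

This is the exceptional case $(p,q,r)=(2,3,5)$ appearing in the discussion of Brieskorn singularities below.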
\subsection{The lattices $L$ and $L'$} The 4-manifold $X(\Gamma)$ admits a handle decomposition without $1$-handles, so the second homology group $L=H_2(X(\Gamma);\mathbb{Z})$ is freely generated by the fundamental classes of the zero-sections of the disk bundles in the plumbing construction. Hence we have a generator of $L$ for each vertex of $\Gamma$. By abuse of notation we denote the generator in $L$ corresponding to the vertex $b_j$ by the same symbol. The intersection form $(\cdot,\cdot)$ on $L$ is a symmetric bilinear form naturally determined by $\Gamma$. It is known that a plumbing graph can be realized as a good dual resolution graph of a singularity if and only if the intersection form is negative definite. Henceforth, we shall assume that all the plumbing graphs we consider satisfy this property. Therefore $L$ is a lattice and we have the short exact sequence \begin{equation}\label{eq:dia1} \begin{CD} 0 @>>> L @>PD>>L' @>>> H @>>>0 \end{CD} \end{equation} where $L'$ is the dual lattice $\mathrm{Hom}_{\mathbb{Z}}(L,\mathbb{Z})\cong H^2(X(\Gamma);\mathbb{Z})\cong H_2(X(\Gamma),M(\Gamma);\mathbb{Z})$ with $PD(x)=(x,\cdot)$, and $H=H_1(M(\Gamma);\mathbb{Z})$. We say that $k\in L'$ is \emph{characteristic} if for every vertex $b_j$ of $\Gamma$, we have $k(b_j)+e_j\equiv 0 \text{ mod } 2$. Let $\mathrm{Char}(\Gamma)$ denote the set of characteristic elements in $L'$. The lattice $L$ naturally acts on $\mathrm{Char}(\Gamma)$ by the rule $x\ast k =k +2PD(x)$ for every $x\in L$. The characteristic cohomology class $K\in L'$ satisfying $K(b_j)=-e_j-2$ for all $j\in \mathcal{J}$ is called the canonical class. For each $k\in \mathrm{Char}(\Gamma)$, we define the function $\chi_k:L\to\mathbb{Z}$ by \begin{equation} \label{eqn:chi} \chi_k(x)=-(k(x)+(x,x))/2. \end{equation} We simply use the symbol $\chi$ to denote $\chi_K$.
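As a quick sanity check of \eqref{eqn:chi} (an immediate computation from the definitions above):

```latex
% Evaluate chi = chi_K on a generator b_j, using K(b_j) = -e_j - 2
% and (b_j, b_j) = e_j:
\[
  \chi(b_j) \;=\; -\tfrac{1}{2}\bigl(K(b_j)+(b_j,b_j)\bigr)
           \;=\; -\tfrac{1}{2}\bigl((-e_j-2)+e_j\bigr) \;=\; 1 .
\]
```

In particular the rationality condition $\chi(x)\geq 1$ of the next subsection holds automatically on the generators themselves.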
\subsection{\spinc structures}\label{s:spinc} Every element of $\mathrm{Char}(\Gamma)$ uniquely defines a \spinc structure on $X(\Gamma)$. Two such \spinc structures induce the same \spinc structure on $M(\Gamma)$ if and only if the corresponding characteristic cohomology classes are in the same $L$-orbit. For a fixed \spinc structure $\mathfrak{t}$ of $M(\Gamma)$, let $\mathrm{Char}(\Gamma,\mathfrak{t})$ denote the set of all characteristic cohomology classes which restrict to $\mathfrak{t}$ on $M(\Gamma)$. Since $M(\Gamma)$ is a rational homology sphere, it has finitely many \spinc structures; these are in one-to-one correspondence with elements of $H_1(M(\Gamma);\mathbb{Z})$. Thus $\mathrm{Char}(\Gamma)$ is the disjoint union of the finitely many sets $\mathrm{Char}(\Gamma,\mathfrak{t})$. In this paper, the default \spinc structure, denoted by $\mathfrak{t}_{\mathrm{can}}$, is the one induced by the canonical class $K$. \subsection{Almost Rational Graphs} A plumbing graph $\Gamma$ is said to be \emph{rational} if $\chi(x)\geq 1$ for any $x>0$ in $L$. Call a vertex $b_j$ in a plumbing graph a bad vertex if $-e_j$ is smaller than the valency of the vertex. It is known that graphs with no bad vertices are rational. Moreover, rational graphs are closed under decreasing weights and taking subgraphs. A graph is said to be \emph{almost rational} (or AR for short) if by decreasing the weight of a vertex $b_0$, we get a rational graph. Note that such a distinguished vertex $b_0$ need not be unique. Graphs with at most one bad vertex are AR \cite[Section~8]{N}. In particular plumbing graphs of Seifert fibered rational homology spheres are AR. A plumbing graph is said to be proper almost rational if it is almost rational but not rational. We say that a surface singularity is rational (respectively, AR and proper AR) if it admits a good resolution graph which is rational (respectively AR and proper AR).
For example, for pairwise relatively prime integers $p,q$ and $r$, the link of the Brieskorn singularity \begin{equation}\label{eq:Brieskorn} x^p+y^q+z^r=0, \end{equation} is a Seifert fibered integral homology sphere, called the Brieskorn sphere $\Sigma(p,q,r)$. Such a singularity is proper AR unless $(p,q,r)=(2,3,5)$ \cite{R2,E,N}. \section{Construction of Stein Cobordisms} \label{construction} The purpose of this section is to prove Theorem \ref{thm:steinAR}. In fact we will show that the proper AR-singularity in the theorem can be chosen as a Brieskorn singularity. For arbitrary positive integers $g$ and $n$, consider the triple $(p,q,r)$ with $$p=2,\quad q=2g+1,\text{ and } r=(4g+2)n+1.$$ \noindent Note that $p,q,$ and $r$ are pairwise relatively prime for all $g$ and $n$, so the corresponding Brieskorn sphere $\Sigma(p,q,r)$ is an integral homology sphere, and the corresponding Brieskorn singularity is proper AR. We shall describe an abstract open book which supports the canonical contact structure on $\Sigma(p,q,r)$. Let $S_g$ denote a compact orientable surface of genus $g$ with one boundary component. Let $\phi_{g,n}$ denote the diffeomorphism of $S_g$ which is the product of right handed Dehn twists along simple closed curves given by \begin{equation}\label{eq:monodromy} \phi_{g,n}=(t_{a_1}t_{a_2}\dots t_{a_{2g}})^{(4g+2)n+1}, \end{equation} \noindent where the curves $a_1,a_2,\dots,a_{2g}$ form a chain as in Figure \ref{fig:Page_Chain}. \begin{figure}[h] \includegraphics[width=0.60\textwidth]{Page_Chain.eps} \caption{Curves $a_1,\dots, a_{2g}$ which form a chain on $S_g$.} \label{fig:Page_Chain} \end{figure} The chain relation \cite[Proposition 4.12]{FM} states that \begin{equation}\label{eq:chain} (t_{a_1}t_{a_2}\dots t_{a_{2g}})^{(4g+2)}=t_\delta \end{equation} \noindent where $\delta$ is a simple closed curve on $S_g$ which is parallel to the boundary component.
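To fix ideas, the smallest member of this family (a direct specialization of the formulas above):

```latex
% g = 1, n = 1: (p,q,r) = (2, 3, 7), so the Brieskorn sphere is
% Sigma(2,3,7), the page S_1 is a once-punctured torus, and
\[
  \phi_{1,1} \;=\; (t_{a_1}t_{a_2})^{7}
             \;=\; (t_{a_1}t_{a_2})\, t_\delta ,
\]
% using the chain relation (t_{a_1} t_{a_2})^{6} = t_\delta.
```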
Since $\delta$ does not intersect any of $a_1,\dots,a_{2g}$, the Dehn twist $t_\delta$ commutes with all of $t_{a_1},\dots,t_{a_{2g}}$. Hence $\phi_{g,n}$ can also be written as $$\phi_{g,n}=(t_{a_1}t_{a_2}\dots t_{a_{2g}})\,t_\delta ^n. $$ \begin{lemma}\label{lem:sup} The canonical contact structure on $\Sigma (p,q,r)$ is supported by $(S_g,\phi_{g,n})$. \end{lemma} \begin{proof} Consider the Milnor fiber $M(p,q,r)$ of the singularity \eqref{eq:Brieskorn}. By definition we have $\partial M(p,q,r)=\Sigma (p,q,r)$. We shall construct a holomorphic Lefschetz fibration on $M(p,q,r)$ whose fibers are diffeomorphic to $S_g$ and whose monodromy factorizes exactly as in \eqref{eq:monodromy}. Our argument will be based on a work of Loi and Piergallini \cite{LP}, which relates branched covers of Stein 4-manifolds to Lefschetz fibrations. Since $p=2$, the Milnor fiber $M(p,q,r)$ is a 2-fold branched cover of $B^4=\{ (y,z)\in\mathbb{C}^2\,:\, |y|^2+|z|^2\leq \epsilon \}$, branched along the Milnor fiber $M(q,r)$ of the plane curve singularity \begin{equation}\label{eq:curve} y^q+z^r=0. \end{equation} \noindent Let $h:M(p,q,r)\to B^4$ be the covering map. It is well known that the link of the plane curve singularity \eqref{eq:curve} is the $(q,r)$-torus knot $T(q,r)$ in $S^3=\partial B^4$, and the Milnor fiber $M(q,r)$ is diffeomorphic to the minimal genus Seifert surface of $T(q,r)$ in $S^3$. Identify $B^4 \cong B^2\times B^2$ using the complex coordinates $(y,z)$ and consider the projection map to the second factor $\pi_2:B^4\to B^2$. The restriction of this map to $M(q,r)$ is a simple $q$-fold branched covering whose singular points, called the twist points, are in one-to-one correspondence with the crossings of $T(q,r)$ represented as a braid with $q$ strands and $r$ twists, hence with $(q-1)r$ crossings in total. Here it is important that all the crossings of $T(q,r)$ are positive; otherwise the charts describing the simple branched cover around the twist points would not be compatible with the complex orientation on $M(q,r)$.
Let $s_1,s_2,\dots,s_{(q-1)r}$ be the twist points on $M(q,r)$. Without loss of generality we may assume that they are mapped to distinct points $z_1,z_2,\dots,z_{(q-1)r}$ under $\pi_2$. By \cite[Proposition 1]{LP}, the composition $f:=\pi_2\circ h$ is a holomorphic Lefschetz fibration whose set of singular values is $\{z_1,z_2,\dots,z_{(q-1)r}\}$. Next we determine the regular fibers of the Lefschetz fibration $f:M(p,q,r)\to B^2$. Any disk $B^2\times \{ \mathrm{pt} \}$ in $B^4$ avoiding the twist points intersects $M(q,r)$ at $q$ points. Since $q=2g+1$, each such disk lifts under $h$ to a genus $g$ surface $F$ with one boundary component; this forms a regular fiber of $f$. The restriction map $h|_F:F\to B^2\times \{\mathrm{pt}\}$ is modeled by taking the quotient of $F$ by the hyperelliptic involution, whose $2g+1$ fixed points map to $B^2\times \{ \mathrm{pt} \} \cap M(q,r)$. See the marked points on $F$ in Figure \ref{fig:braid_lift}. When a disk $B^2\times \{\mathrm{pt} \}$ intersects $M(q,r)$ at a twist point, its lift under $h$ is a singular fiber of $f$. We can describe the vanishing cycle of each singular fiber using the corresponding crossing of $T(q,r)$. Any arc $\gamma$ on $B^2\times \{ \mathrm{pt}\}$ connecting two strands of the braid $T(q,r)$ lifts to a unique simple closed curve $\alpha$ on the regular fiber $F$. In Figure \ref{fig:braid_lift}, we indicate these arcs and curves using different colors. If a crossing exchanges the two strands connected by $\gamma$, then the corresponding singular fiber has vanishing cycle $\alpha$. Each such vanishing cycle contributes a right handed Dehn twist along $\alpha$, since the corresponding crossing is positive (see \cite[Proposition 1]{LP} or \cite[Lemma 4.2]{BE}). Following the braid direction we see that the monodromy is as described in \eqref{eq:monodromy}.
\begin{figure}[h] \includegraphics[width=0.50\textwidth]{braid_lift.eps} \caption{Arcs $\gamma$ connecting strands of the braid on a disk $B^2\times \{\mathrm{pt}\}$ and their lifts $\alpha$ on the regular fiber $F$; the marked points on $F$ are the fixed points of the hyperelliptic involution.} \label{fig:braid_lift} \end{figure} Having shown that the fibers of $f$ are diffeomorphic to $S_g$ and the total monodromy of $f$ agrees with $\phi_{g,n}$, we conclude that the restriction of $f$ to $\partial M(p,q,r)$ is the open book $(S_g,\phi_{g,n})$. Since the fibers of $f$ are complex submanifolds of $M(p,q,r)$, the open book $(S_g,\phi_{g,n})$ supports the canonical contact structure. \end{proof} \begin{remark} An alternative proof of the above lemma goes as follows: Using handlebody techniques and the fact that $\Sigma(p,q,pqn+1)$ is $-1/n$ surgery on the $(p,q)$-torus knot, one can directly verify that the total space of the open book $(S_g,\phi_{g,n})$ is $\Sigma(2,2g+1,(4g+2)n+1)$. Using the chain relation and a result of W.~D.~Neumann and A.~Pichon \cite[Theorem 2.1]{NP}, one can show that the open book supports the canonical contact structure. \end{remark} \begin{lemma}\label{lem:cob} Given any contact 3-manifold $(M,\xi)$, there exist $g,n_0\in \mathbb{N}$ such that for every $n\geq n_0$ there is a Stein cobordism from $(M,\xi)$ to the canonical contact structure on the Brieskorn sphere $\Sigma(2,2g+1,(4g+2)n+1)$. \end{lemma} \begin{proof} Take an open book supporting $(M,\xi)$. If the pages of this open book have more than one boundary component, we can positively stabilize the open book to reduce the number of boundary components to one at the expense of increasing the page genus. Let $g$ denote the genus of the pages of the resulting open book after the stabilizations. Fix an identification of the pages with $S_g$, and let $\phi$ denote the monodromy.
Write the monodromy as a product of Dehn twists about non-separating curves $c_1,c_2,\dots,c_k$ in $S_g$, \begin{equation}\label{eq:factor} \phi=t_{c_1}^{\epsilon_1}t_{c_2}^{\epsilon_2}\cdots t_{c_k}^{\epsilon_k}, \text{ where } \epsilon_i\in \{-1,+1\} \text{ for all } i=1,\dots,k. \end{equation} \noindent The exponents in the above equation emphasize that the factorization of $\phi$ can contain both right handed and left handed Dehn twists. We shall make the monodromy agree with $\phi_{g,n_0}$ for some large $n_0\in \mathbb{N}$ by multiplying it by right handed Dehn twists only. If $\epsilon_1=-1$ we simply multiply $\phi$ by $t_{c_1}$ from the left to cancel the first term. If $\epsilon_1=+1$, more work is needed. Since $c_1$ is non-separating, there is a self-diffeomorphism $\psi$ of $S_g$ sending $a_{2g}$ to $c_1$ (see Figure~\ref{fig:Page_Chain}). Multiply both sides of the chain relation \eqref{eq:chain} by $\psi^{-1}$ from the right and by $\psi$ from the left, and use the fact that $t_\delta$ is in the center of the mapping class group of $S_g$, to see that $$(t_{\psi(a_1)}t_{\psi(a_2)}\dots t_{\psi(a_{2g})})^{(4g+2)}=t_\delta.$$ Hence using $(4g+2)(2g)-1$ right handed Dehn twists we can trade $t_{c_1}$ for $t_\delta$. The latter can be put at the end of the factorization \eqref{eq:factor}. Applying the same recipe to the remaining $c_i$'s we get the monodromy $t_\delta^{n_0}$, where $n_0$ is the number of positive exponents appearing in the factorization \eqref{eq:factor}. By adding the Dehn twists appearing on the left hand side of the chain relation \eqref{eq:chain} as many times as necessary, we can make the monodromy agree with $\phi_{g,n}$ for any $n\geq n_0$. During the process we made only two kinds of modifications to the open book: positive stabilizations and adding right handed Dehn twists to the monodromy. By Lemma~\ref{l:monostei} the required Stein cobordism exists.
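For instance, specializing the count just made to $g=1$ (purely for illustration):

```latex
% For g = 1 the chain relation (t_{a_1} t_{a_2})^6 = t_delta is a word
% of (4g+2)(2g) = 12 right handed Dehn twists, so trading a single
% positive twist t_{c_1} for t_delta uses the remaining
\[
  (4g+2)(2g)-1 \;=\; 6\cdot 2 - 1 \;=\; 11
\]
% right handed twists.
```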
\end{proof} \begin{proof}[Proof of Theorem \ref{thm:steinAR}] This follows immediately from Lemma \ref{lem:sup} and Lemma \ref{lem:cob}. \end{proof} \section{Heegaard Floer homology, graded roots and Stein cobordism obstructions } \label{HF} To understand the image of $c^+(\xi)$ under $\Phi$ in (\ref{Fi}), we analyze the isomorphism $\Phi$ carefully by considering its factorization as follows: \begin{equation}\label{eq:dia} \begin{CD} HF^+(-M(\Gamma)) @>\Phi_1>> \mathbb{H}^+(\Gamma) @>\widetilde{\Phi}>> (\mathbb{K}^+(\Gamma))^* @>\Phi_2>> \mathbb{H}(R_\tau) \end{CD} \end{equation} In order to introduce the notation and recall the definitions, we briefly review the construction of $HF^+$, $\mathbb{H}^+$, $\mathbb{K}^+$, and the isomorphisms $\Phi_1$ and $\widetilde{\Phi}$ in Sections~\ref{HF+}-\ref{K+}, and graded roots, their homology and the isomorphism $\Phi_2$ in Sections~\ref{rootR}-\ref{laufer}. We describe how to detect root vertices of graded roots in $(\mathbb{K}^+(\Gamma))^*$ in Section~\ref{detecting} and distinguish the contact invariant in $(\mathbb{K}^+(\Gamma))^*$ in Section~\ref{c+}. Finally we prove Theorem~\ref{nocob} in Section~\ref{pf2}. \subsection{Plus flavor of Heegaard Floer homology} \label{HF+} Heegaard Floer theory is a package of prominent invariants for $3$-manifolds \cite{OS4, OS5}; see also \cite{J,M} for recent surveys. To every closed and oriented $3$-manifold $M$, and any \spinc structure $\mathfrak{t}$ on $M$, one associates an $\mathbb{F}:=\mathbb{Z}/2\mathbb{Z}$-vector space $HF^+(M,\mathfrak{t})$ which admits a canonical $\mathbb{Z}/2\mathbb{Z}$-grading. In the case that the first Chern class of the \spinc structure is torsion (in particular when $M$ is a rational homology sphere), the $\mathbb{Z}/2\mathbb{Z}$-grading lifts to an absolute $\mathbb{Q}$-grading. There is also an endomorphism, denoted by $U$, decreasing the degree by $2$, which makes $HF^+(M,\mathfrak{t})$ an $\mathbb{F}[U]$-module.
For example when the $3$-manifold is the 3-sphere and $\mathfrak{t}_0$ is its unique \spinc structure, we have $HF^+(S^3,\mathfrak{t}_0)=\mathcal{T}^+_{0}$, where $\mathcal{T}^+$ stands for the $\mathbb{F}[U]$-module $\mathbb{F}[U,U^{-1}]/U\mathbb{F}[U]$ and $\mathcal{T}^+_{d}$ denotes the one in which the lowest degree element is supported in degree $d$. More generally when $M$ is a rational homology sphere, it is known that the Heegaard Floer homology is given by $HF^+(M,\mathfrak{t})=\mathcal{T}^+_{d}\oplus HF_{\mathrm{red}}(M,\mathfrak{t})$ where $HF_{\mathrm{red}}(M,\mathfrak{t})$ is a finite dimensional $\mathbb{F}$-vector space (and hence a finitely generated $\mathbb{F}[U]$-module) \cite{OS2}. In the sequel, we shall assume that all our $3$-manifolds are rational homology $3$-spheres. Heegaard Floer homology groups behave functorially under cobordisms in the following sense: Suppose $M_1$ and $M_2$ are connected, closed and oriented $3$-manifolds and there is a cobordism $W$ from $M_1$ to $M_2$; i.e.\ the oriented boundary of $W$ is $\partial W=(-M_1)\cup M_2$. For every \spinc structure $\mathfrak{s}$ on $W$, let $\mathfrak{t}_1$ and $\mathfrak{t}_2$ denote the induced \spinc structures on $M_1$ and $M_2$ respectively. Then we have an $\mathbb{F}[U]$-module homomorphism $$ F_{W,\mathfrak{s}}:HF^+(-M_2,\mathfrak{t}_2)\to HF^+(-M_1,\mathfrak{t}_1).$$ \subsection{The graded module $\mathbb{H}^+(\Gamma)$} \label{H+} The differential in the chain complex defining Heegaard Floer homology counts certain types of holomorphic disks. This aspect makes the computation of Heegaard Floer homology groups difficult in general. On the other hand, if a $3$-manifold arises from a plumbing construction, there is a purely algebraic description of its Heegaard Floer homology. Let $\Gamma$ be a negative definite plumbing graph.
For every \spinc structure $\mathfrak{t}$ of $M(\Gamma)$, let $\mathbb{H}^+(\Gamma,\mathfrak{t})$ be the subset of the set of functions $\mathrm{Hom}(\mathrm{Char}_{\mathfrak{t}}(\Gamma),\mathcal{T}^+)$ satisfying the following property. For every $k\in \mathrm{Char}_\mathfrak{t}(\Gamma)$ and every $j\in\mathcal{J}$, let $n$ be the integer defined by \begin{equation}\label{e:kay}n=\chi_{k,j}=\chi_k(b_j)=-(k(b_j)+e_j)/2.\end{equation} Then for every positive integer $m$, we require $$U^{m-\chi_{k,j}}\phi(k+2PD(b_j))=U^m\phi(k) \text{ whenever } \chi_{k,j}\leq0, \text{ and}$$ $$U^{m}\phi(k+2PD(b_j))=U^{m+\chi_{k,j}}\phi(k) \text{ whenever } \chi_{k,j}>0.$$ We define $\mathbb{H}^+(\Gamma)=\bigoplus_{\mathfrak{t}}\mathbb{H}^+(\Gamma,\mathfrak{t})$, which readily has an $\mathbb{F}[U]$-module structure. Moreover the conditions above ensure the existence of a suitable grading on it. A map $\Phi_1: HF^+(-M(\Gamma),\mathfrak{t})\to \mathbb{H}^+(\Gamma,\mathfrak{t} )$ can be described as follows: Remove a $4$-ball from the interior of the $4$-manifold $X(\Gamma)$ and regard the result as a cobordism $\widetilde{X}$ from $-M(\Gamma)$ to $S^3$. For each characteristic cohomology class $k\in \mathrm{Char}_{\mathfrak{t}}(\Gamma)$ we have a map $F_{\widetilde{X},k}:HF^+(-M(\Gamma),\mathfrak{t})\to HF^+(S^3)=\mathcal{T}^+_{0}$. For any Heegaard Floer homology class $c\in HF^+(-M(\Gamma))$ and for any characteristic cohomology class $k$, we define $\Phi_1(c)(k):=F_{\widetilde{X},k}(c)$. Thanks to the adjunction relations, this map is a well-defined homomorphism, which is in fact an isomorphism when $\Gamma$ is AR. This was shown by Ozsv\'ath and Szab\'o for the special case where $\Gamma$ has at most one bad vertex \cite{OS1} and later by N\'emethi in general \cite{N}. \subsection{The dual of $\mathbb{H}^+(\Gamma)$ } \label{K+} There is a simple description of the dual of $\mathbb{H}^+(\Gamma)$. We use the notation $U^m\otimes k$ to denote a typical element of $\mathbb{Z}^{\geq 0}\times\mathrm{Char}(\Gamma, \mathfrak{t})$.
The elements of the form $U^0\otimes k$ are simply denoted by $k$. Define an equivalence relation $\sim$ on $\mathbb{Z}^{\geq 0}\times\mathrm{Char}(\Gamma,\mathfrak{t})$ by the following rule. Let $j\in\mathcal{J}$ and let $n$ be as in (\ref{e:kay}). Then we require $$U^{m-n}\otimes(k+2PD(b_j))\sim U^m\otimes k \text{ if }n\leq 0,$$ \noindent and $$ U^{m}\otimes(k+2PD(b_j))\sim U^{m+n}\otimes k \text{ if }n> 0.$$ Let $\mathbb{K}^+(\Gamma,\mathfrak{t})$ denote the set of equivalence classes. Let $(\mathbb{K}^+(\Gamma,\mathfrak{t}))^*$ denote its dual. For any non-negative integer $l$, let $\mathrm{Ker}\,U^{l+1}$ denote the subgroup of $\mathbb{H}^+(\Gamma,\mathfrak{t})$ which is the kernel of multiplication by $U^{l+1}$. The map $$\widetilde\Phi_l: \mathrm{Ker}\,(U^{l+1})\to\mathrm{Hom} \left (\frac{ \mathbb{K}^+(\Gamma,\mathfrak{t})}{\mathbb{Z}^{\geq l+1}\times{\mathrm{Char}_{\mathfrak{t}}(\Gamma)}},\mathbb{F} \right ),$$ given by the rule $$\widetilde{\Phi}_l(\phi)(U^m\otimes k)=(U^m\phi(k))_0 $$ \noindent is an isomorphism for every $l$ \cite[Lemma~2.3]{OS1}. Here $(\cdot)_0$ denotes the projection to the degree 0 subspace of $\mathcal{T}^+$. Since every element of $\mathbb{H}^+(\Gamma, \mathfrak{t})$ lies in some $ \mathrm{Ker}\,(U^{l+1})$, the maps $\widetilde\Phi_l$ give rise to an isomorphism $\widetilde{\Phi}:\mathbb{H}^+ (\Gamma,\mathfrak{t})\to (\mathbb{K}^+(\Gamma,\mathfrak{t}))^*$. \subsection{Graded roots} \label{rootR} Let $R$ be an infinite tree. Denote its vertex set and edge set by $\mathcal{V}(R)$ and $\mathcal{E}(R)$ respectively. Let $\chi:\mathcal{V}(R)\to \mathbb{Z}$ satisfy the following properties. \begin{enumerate} \item $\chi(u)-\chi(v)=\pm 1$, if $[u,v]\in \mathcal{E}(R)$. \item $\chi(u)>\mathrm{min}\{\chi (v),\chi (w)\}$, if $[u,v]\in \mathcal{E}(R)$, $[u,w]\in \mathcal{E}(R)$, and $v\neq w$. \item $\chi$ is bounded below. \item $\chi^{-1}(n)$ is a finite set for every $n$. \item $|\chi^{-1}(n)|=1$ for $n$ large enough.
\end{enumerate} Such a pair $(R,\chi)$ is called a \emph{graded root}. When the grading function $\chi$ is apparent in the discussion we simply drop it from our notation and use $R$ to denote a graded root. Next we describe a particular graded root produced by a function $\tau:\mathbb{Z}^{\geq 0}\to \mathbb{Z}$ that is non-decreasing after a finite index. For each $i \in \mathbb{Z}^{\geq 0}$ consider the graded tree $R_i$ with vertices $\{v_i^m\}_{m\geq \tau(i)}$ and the edges $\{[v_i^m,v_i^{m+1}]\}_{m\geq \tau(i)}$ with grading $\chi(v_i^m)=m$. Define an equivalence relation on the disjoint union $\coprod_i R_i$ of trees as follows: $v_i^m \asymp v_j^n$ and $[v_i^m,v_i^{m+1}]\asymp [v_j^n,v_j^{n+1}]$ if and only if $m=n$ and $m\geq \tau(l)$ for all $l$ between $i$ and $j$. Then $R_{\tau}=\coprod_i R_i / \asymp$ is a tree with vertices the equivalence classes $\overline{v_i^m}$ and with the induced grading $\chi(\overline{v_i^m}) = m$. To each graded root $(R,\chi)$ as above, with vertex set $\mathcal{V}$ and edge set $\mathcal{E}$, one can associate a graded $\mathbb{F}[U]$-module $\mathbb{H}(R,\chi)$ as follows. As a set, $\mathbb{H}(R,\chi)$ is the set of functions $\phi: \mathcal{V}\to \mathcal{T}^+_{0}$ satisfying \begin{equation} \label{deg:Hr} U\cdot\phi (u)=\phi (v), \end{equation} \noindent whenever $[u,v]\in \mathcal{E}$ with $\chi(u)<\chi (v)$. The $U$-action on $\mathbb{H}(R,\chi)$ is defined by the rule $(U\cdot \phi) (v)=U(\phi(v))$. The grading on $\mathbb{H}(R,\chi)$ is defined in the following way. An element $\phi \in \mathbb{H}(R,\chi)$ is homogeneous of degree $d$ if for every $v\in\mathcal{V}$, $\phi(v)$ is homogeneous of degree $d-2 \chi(v)$. \subsection{Laufer sequences} \label{laufer} To simplify our discussion we will work with the canonical \spinc structure from now on, although some of the discussion below also works for a general \spinc structure.
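Before turning to the construction of $\tau$ from a plumbing graph, here is a toy illustration of the passage from $\tau$ to $\mathbb{H}(R_\tau)$; the function $\tau$ below is made up for illustration and does not come from a graph considered in this paper.

```latex
% Take tau(0)=0, tau(1)=1, tau(2)=0, and tau(i)=i-2 for i >= 3.
% Since tau(1)=1, the vertices of R_0 and R_2 are identified only from
% grading 1 upwards, so R_tau has two root vertices at grading 0
% merging at grading 1. As an F[U]-module (up to a grading shift),
\[
  \mathbb{H}(R_\tau)\;\cong\;\mathcal{T}^+\oplus\mathbb{F},
\]
% where the extra copy of F is generated by a function supported on one
% of the two length-one branches; it is annihilated by U.
```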
Our aim is to describe the isomorphism $$\Phi_2:(\mathbb{K}^+ (\Gamma, \mathfrak{t}_\mathrm{can}))^*\to \mathbb{H}(R_\tau), $$ discovered by N\'emethi, in a slightly different setup which suits our needs; what we do below is nothing but a rephrasing of the findings in \cite{N}. The isomorphism $\Phi_2$ is induced by a bijection between the dual objects: $$\Phi_3:\mathcal{V}(R_\tau) \to \mathbb{K}^+(\Gamma,\mathfrak{t}_\mathrm{can}).$$ Here $R_\tau$ is a graded root associated to a $\tau$ function and $\mathcal{V}(R_\tau)$ is its vertex set. We now describe the construction of the function $\tau$. Recursively form a sequence $(k(i))_{i=0}^\infty$ in $\mathrm{Char}(\Gamma, \mathfrak{t}_{\mathrm{can}})$ as follows: start with the canonical class $k(0)=K$. Suppose $k(i)$ has already been constructed. We find $k(i+1)$ using the algorithm below. \begin{enumerate} \item \label{alg:N1} We construct a computational sequence $z_0,z_1,\dots, z_l$. Let $z_0=k(i)+2\mathrm{PD}(b_0)$. Suppose $z_m$ has been found. If there exists $j\in \mathcal{J}-\{0\}$ such that $$z_m(b_j)=-e_j,$$ then we let $z_{m+1}= z_{m}+2\mathrm{PD}(b_j)$. \item If there is no such $j$, stop. Set $l=m$ and $k(i+1)= z_{l}$. \end{enumerate} The sequence $(k(i))_{i=0}^\infty$ is called the Laufer sequence of the AR graph $\Gamma$ associated with the canonical \spinc structure. This sequence depends on the choice of the distinguished vertex $b_0$ but is independent of our choice of vertices in step \ref{alg:N1} of the above algorithm. Define $\chi_{i,0}=\chi_{k(i)}(b_0)$. Note that since the elements of each computational sequence described above satisfy $z_m\sim z_{m+1}$ for every $m=0,\dots,l-1$, the vectors $k(i)$ satisfy the following relations in $\mathbb{K}^+(\Gamma,\mathfrak{t}_{\mathrm{can}})$: \begin{align*} U^{\chi_{i,0}}\otimes k(i) \sim k(i+1) & \text{ if }\chi_{i,0}\geq 0, \text{ and }\\ k(i)\sim U^{-\chi_{i,0}}\otimes k(i+1) & \text{ if }\chi_{i,0}< 0.
\end{align*} Let $\tau(n)=\sum_{i=0}^{n-1}\chi_{i,0}$; in particular $\tau(0)=0$. It can be shown that there exists an index $N$ such that $\tau(i+1)\geq \tau(i)$ for all $i\geq N$ \cite[Theorem~9.3(a)]{N}. Hence $\tau$ defines a graded root $R_\tau$, and indices beyond $N$ do not give essential information about $R_\tau$. Moreover, for the canonical \spinc structure, $\tau(N)\geq 2$ and $\tau(i)\leq 1$ for all $i\leq N-1$ \cite[Theorem~6.1(d)]{N}. In that case one can stop the computation of the Laufer sequence once $\tau$ reaches $2$. Now, with a little effort, one can observe that the map $\Phi_3:\mathcal{V}(R_\tau)\to \mathbb{K}^+(\Gamma)$ is defined by $\Phi_3(\overline{v_i^m})=U^{m-\tau(i)}\otimes k(i)$ (see the proofs of \cite[Theorem~9.3(b)]{N} and \cite[Proposition~4.7]{N}). One can check that $\Phi_3$ is well defined and injective; moreover $\Phi_3$ is in fact a bijection, so it induces an $\mathbb{F}[U]$-module isomorphism $\Phi_2:(\mathbb{K}^+(\Gamma))^*\to \mathbb{H}(R_{\tau})$, which shifts the grading by $(K^2+|\mathcal{J}|)/4$. \begin{remark} The Laufer sequence $\{x(i)\}$ in \cite{N} resides in $L$ while $\{k(i)\}$ here resides in $\mathrm{Char}(\Gamma, \mathfrak{t}_{\mathrm{can}})$. These two Laufer sequences are related by $$ k(i)=K+2PD(x(i)).$$ \end{remark} \begin{remark} The sequence $(\tau(i))_{i=0}^\infty$ contains many redundant elements. The finite subsequence $(\tau(n_i))$ consisting of the local maximum and local minimum values of $\tau$ is sufficient to construct the graded root. \end{remark} \begin{remark} An algorithm similar to the one we have described above can be utilized to compute the Heegaard Floer homology groups for an arbitrary \spinc structure $\mathfrak{t}$. The only new necessary input is the distinguished representative of the \spinc structure inside $\mathrm{Char}(\Gamma,\mathfrak{t})$. The interested reader can consult \cite[Section 5]{N}.
\end{remark} \subsection{Detecting root vertices} \label{detecting} Ozsv\'ath and Szab\'o used a variation of the above algorithm to determine $\mathrm{Ker}\,U$ in $(\mathbb{K}^+(\Gamma))^*$ \cite[Section 3.1]{OS1}. The elements of $\mathrm{Ker}\,U$ are also visible in the Laufer sequence. In a graded root $R$, we say that a vertex is a \emph{root vertex} if it has valency $1$. The following lemma identifies root vertices of $R_\tau$ with elements of $\mathrm{Ker}\,U$. \begin{lemma}\label{l:ktog} Given $k\in \mathbb{K}^+(\Gamma,\mathfrak{t}_{\mathrm{can}})$ such that $k^* \in \mathrm{Ker}\,U$, there exists an element $k(i_0)$ of the Laufer sequence such that $k\sim k(i_0)$. This element is unique in the following sense: if $k(i_0)\sim k(i_1) \sim k$ and $i_0<i_1$, then $\tau(i)=\tau(i_0)$ for all $i$ satisfying $i_0\leq i\leq i_1$. As a result $\Phi_3^{-1}(k)$ is the root vertex of the branch in the graded root $R_\tau$ corresponding to $\tau(i_0)$. \end{lemma} \begin{proof} Since $\Phi_3$ is surjective, $k\sim U^{n}\otimes k(i_0)$ for some $i_0,n \in \mathbb{Z}^{\geq 0}$. Since $k^*$ is in $\mathrm{Ker}\,U$, $k$ does not admit any representation of the form $U^n\otimes k'$ unless $n=0$. Then the vertex $\overline{v_{i_0}^{\tau(i_0)}}$ in the graded root $R_\tau$ must have valency $1$, since otherwise $v_{i_0}^{\tau(i_0)} \asymp v_{i}^{\tau(i)+m}$ for some $i\in \mathbb{Z}^{\geq 0}$ and $m\geq 1$, implying that $k\sim U^m\otimes k(i)$, a case which we have excluded. The same argument proves that $\tau$ must be constant between $i_0$ and $i_1$ if $k(i_0)\sim k(i_1) \sim k$. \end{proof} \begin{lemma} \label{l:laufdetect} If $k \in \mathrm{Char}(\Gamma,\mathfrak{t}_{\mathrm{can}})$ satisfies $$e_j+2 \leq k(b_j)\leq -e_j-2,\text{ for all } j\in \mathcal{J},$$ \noindent then $k^* \in \mathrm{Ker}\,U$ and there exists a unique $i_0\in \mathbb{Z}^{\geq 0}$ such that $k=k(i_0)$. Consequently $\Phi_3^{-1}(k)$ is the root vertex of the branch in the graded root $R_\tau$ corresponding to $\tau(i_0)$.
Moreover, the index $i_0$ is the coefficient of $b_0$ in the vector $\mathrm{PD}^{-1}\bigl((k(i_0)-K)/2\bigr)$. \end{lemma} \begin{proof} By \cite[Proposition~3.2]{OS1}, $k$ forms a good full path of length $1$, so it does not admit any representation of the form $U^n\otimes k'$ with $n\geq 1$, and it is the only element in its $\sim$-equivalence class. By Lemma~\ref{l:ktog}, we must have $k= k(i_0)$ for some element $k(i_0)$ of the Laufer sequence. The claim about the index is in fact satisfied by every element of the Laufer sequence \cite[Lemma 7.6 (a)]{N}. \end{proof} \subsection{Contact invariant} \label{c+} To any co-oriented contact structure $\xi$ on a $3$-manifold $M$, one associates an element $c^+(\xi)\in HF^+(-M)$. This element is an invariant of the contact structure and it satisfies the following properties: \begin{enumerate} \item $c^+(\xi)$ lies in the summand $HF^+(-M,\mathfrak{t}_\xi)$ where $\mathfrak{t}_\xi$ is the \spinc structure uniquely determined by the homotopy class of $\xi$. \item $c^+(\xi)$ is homogeneous of degree $-d_3(\xi)-1/2$, where $d_3(\xi)$ is Gompf's $3$-dimensional invariant of $\xi$ \cite{Go}. \item When $\xi$ is overtwisted, $c^+(\xi)=0$. \item When $\xi$ is Stein fillable, $c^+(\xi)\neq 0$. \item We have $U(c^+(\xi))=0$. \item $c^+(\xi)$ is natural under Stein cobordisms. \end{enumerate} Our aim is to understand where the contact invariant is sent under the isomorphism described in \eqref{eq:dia}. This could be difficult for a general contact structure; we need a certain type of compatibility of the contact structure with the plumbing. \begin{definition} Let $\Gamma$ be a plumbing graph. Suppose $\xi$ is a contact structure on $M(\Gamma)$. We say that $\xi$ is compatible with $\Gamma$ if the following are satisfied: \begin{enumerate} \item The contact structure $\xi$ admits a Stein filling whose total space, possibly after finitely many blow-ups, is $X(\Gamma)$.
\item The induced \spinc structure agrees with that of the canonical class on $M(\Gamma)$. (This condition is automatically satisfied when $M(\Gamma)$ is an integral homology sphere.) \end{enumerate} \end{definition} Note that canonical contact structures of singularities are compatible with the dual resolution graphs. Furthermore, if $\xi$ is compatible with $\Gamma$ then $X(\Gamma)$ is a strong symplectic filling for $\xi$. We shall denote by $J$ and $c_1(J)$ the corresponding almost complex structure on $X(\Gamma)$ and its first Chern class, respectively. \begin{theorem}\label{cagri}(\cite[Proposition~1.2]{K}) Let $\xi$ be compatible with $\Gamma$. Then we have $\widetilde{\Phi}\circ \Phi_1(c^+(\xi))=(c_1(J))^*$. \end{theorem} \begin{proof} If $X(\Gamma)$ has no $-1$ spheres, the total space of the Stein filling is $X(\Gamma)$ itself. Then the result is an easy consequence of the definitions of the maps $\Phi_1$ and $\widetilde{\Phi}$ and Plamenevskaya's theorem \cite{P}, which says that \begin{equation}\label{eq:pla} F_{\widetilde{X},k}(c^+(\xi))=\left \{ \begin{tabular}{lr} 1& \text{ if } $k \sim c_1(J)$,\\ 0 & \text{ if } $k\not \sim c_1(J)$. \end{tabular}\right . \end{equation} In the case that $X(\Gamma)$ contains $-1$ spheres, we blow them down until we get a Stein filling of $\xi$. Applying Plamenevskaya's theorem there and using blow-up formulas we see that \eqref{eq:pla} still holds. \end{proof} \begin{corollary}\label{cor:can} We have $\widetilde{\Phi}\circ \Phi_1(c^+(\xi_\mathrm{can}))=K^*$ where $K$ is the canonical class. \end{corollary} \subsection{Proof of Theorem~\ref{nocob}} \label{pf2} By Corollary~\ref{cor:can}, $\widetilde{\Phi}\circ \Phi_1(c^+(\xi_\mathrm{can}))=K^*$ where $K$ is the canonical class, which is also the first element $k(0)$ of the Laufer sequence. Under the correspondence described in Lemma~\ref{l:ktog}, this element is associated with the root vertex of the branch of $\tau(0)$.
We will be done once we prove this branch has length one, which implies that $\Phi(c^+(\xi_{\mathrm{can}}))$ is not in the image of $U^n$ for any $n> 0$. Since $\Phi$ is an $\mathbb{F}[U]$ module isomorphism, the same must hold for $c^+(\xi_{\mathrm{can}})$. By definition, $\tau(0)=0$, and a direct computation shows $\tau(1)=1$. Now, by \cite[Theorem~6.1(d)]{N} it follows that $\#\tau^{-1}(m)=1$ whenever $m\geq 1$; equivalently, if $\tau(n)>1$ for some $n$ then $\tau$ is increasing beyond $n$. Then we have two cases: either $\tau$ is always increasing, or there is some $n$ for which $\tau(n)<1$ and $\tau(j)=1$ for $1\leq j \leq n$. In the former case we get $\mathbb{H}(R_\tau)=\mathcal{T}^+_0$. However this is equivalent to having $\Gamma$ rational (\cite[Theorem~6.3]{N}), which we have dismissed by assumption. In the latter case, there is more than one root vertex of the tree $R_{\tau}$ with non-positive degree. In particular the branch corresponding to $\tau(0)$ has length $1$ (as illustrated in Figure~\ref{fig:tree}), and therefore we have $\sigma(\xi)=0$. \begin{figure}[h] \includegraphics[width=0.40\textwidth]{gengra} \caption{If $\Gamma$ is proper AR, this tree embeds into the graded root $R$ where the red vertex is identified with $\tau (0)$.} \label{fig:tree} \end{figure} Finally, there can be no Stein cobordism from $(M(\Gamma),\xi)$ to $(M',\eta)$ if $\sigma(\eta)<0$. This follows from the naturality of the invariant $c^+$ under Stein cobordisms and the fact that the invariant $\sigma$ increases under a Stein cobordism \cite[Theorem~1.5]{K}. To finish the proof we recall that in each of the cases in the theorem, $\sigma(\eta)=-\infty$: this is immediate if $c^+(\eta)=0$; for the case $\eta$ is a planar contact structure, this claim is just \cite[Theorem~1.2]{OSS}; for the case when $M'$ is the link of a rational singularity, this is a consequence of \cite[Theorem~6.3]{N}.
\hfill $\Box$ \section{Examples}\label{examples} The main result of our paper concerns canonical contact structures, but with the techniques developed in this paper we can in fact compute the $\sigma$ invariant of any contact structure which is compatible with an AR plumbing graph. Here we give a few examples. \subsection{Contact structures on $\Sigma(2,3,11)$} We start with a simple but instructive example. The Brieskorn sphere $\Sigma(2,3,11)$ is the boundary of the graph in Figure~\ref{fig:plumbing}, which we denote by $\Gamma$. We index the vertices $\{b_0,b_1,\ldots,b_8\}$ of $\Gamma$ so that $b_0$ is the one with valency $3$ and $b_8$ is the one with weight $-3$. We know that $\Gamma$ is proper AR since $(2,3,11)$ are pairwise relatively prime. \begin{figure}[h] \includegraphics[width=0.40\textwidth]{plumbing.eps} \caption{Plumbing graph of the Brieskorn sphere $\Sigma(2,3,11)$.} \label{fig:plumbing} \end{figure} One may find compatible Stein structures on the $4$-manifold $X(\Gamma)$ by choosing Legendrian attaching circles for the $2$-handles corresponding to the vertices, with the prescribed intersection matrix, such that the smooth framing is one less than the Thurston-Bennequin framing. Note that there is a unique way of doing this for each $(-2)$-framed vertex, but the $(-3)$-framed $2$-handle can be Legendrian realized in two different ways. We fix an orientation on this vertex and distinguish these two cases according to their rotation numbers \cite[Theorem~1.2]{LM}. Hence $X(\Gamma)$ has two natural distinct Stein structures whose Chern classes are $k_{\pm}=[0,\dots,0,\pm 1]$, where we write an element $k\in H^2(X(\Gamma),\mathbb{Z})$ in the form $[k(b_1),\dots,k(b_s)]$ in the dual basis. We denote by $\xi_{\pm}$ the corresponding contact structures on the boundary.
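The detection criterion of Lemma~\ref{l:laufdetect} can be checked for $k_\pm$ with a few lines of Python. This is only a sanity-check sketch: it assumes, as described above, that $\Gamma$ has eight $(-2)$-framed vertices and a single $(-3)$-framed vertex, and the helper name \texttt{satisfies\_laufdetect} is ours, not taken from any library.

```python
# Sketch: verify the bounds e_j + 2 <= k(b_j) <= -e_j - 2 of
# Lemma l:laufdetect for k_± on the plumbing graph of Sigma(2,3,11).
# Assumed weight vector: eight (-2)-framed vertices, one (-3)-framed.
weights = [-2] * 8 + [-3]

def satisfies_laufdetect(k, e):
    """Check e_j + 2 <= k(b_j) <= -e_j - 2 for every vertex j."""
    return all(ej + 2 <= kj <= -ej - 2 for kj, ej in zip(k, e))

k_plus  = [0] * 8 + [1]    # k_+ = [0,...,0, 1]
k_minus = [0] * 8 + [-1]   # k_- = [0,...,0,-1]

assert satisfies_laufdetect(k_plus, weights)
assert satisfies_laufdetect(k_minus, weights)
```

For a $(-2)$-framed vertex the bounds force $k(b_j)=0$, while the $(-3)$-framed vertex allows $k(b_8)\in\{-1,0,1\}$; this is why $k_\pm$ (and in particular the canonical class $K=k_+$) are detected as root vertices by Lemma~\ref{l:laufdetect}.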
\begin{figure}[h] \includegraphics[width=1.0\textwidth]{legplumbing.eps} \caption{Stein handlebody pictures of $\xi_{\pm }$.} \label{fig:legplumbing} \end{figure} The Heegaard Floer homology of $\Sigma(2,3,11)$ is well known; we can readily compute it here by considering the Laufer sequence $k(n)$ and the corresponding values $\tau(n)$. Moreover from Lemma~\ref{l:laufdetect}, we know that $k_\pm$ will appear in the Laufer sequence. The first several $k(n)$ and $\tau(n)$ are as follows: \begin{align*} k_+= k(0)&=(0,0,0,0,0,0,0,0,1), \quad &\tau(0)=0 \\ k(1)&=(2,-2,0, -2, 0, 0, 0, 0, -3),\quad &\tau(1)=1 \\ k(2)&=(2, 0, -2, 0, 0, 0, 0, -2, -1), \quad &\tau(2)=1 \\ k(3)&=(2, -2, 0, 0, 0, 0, -2, 0, -1), \quad &\tau(3)=1 \\ k(4)&=( 2, 0, 0, -2, 0, -2, 0, 0, -1),\quad &\tau(4)=1 \\ k(5)&=(4, -2, -2, 0, -2, 0, 0, 0, -1), \quad &\tau(5)=1 \\ k_-=k(6)&=(0, 0, 0, 0, 0, 0, 0, 0, -1), \quad &\tau(6)=0 \end{align*} Moreover it can be proven that $\tau(n+1)\geq \tau(n)$ for every $n\geq 6$. An indirect way to do this is by observing that $\tau(13)=2$ and $\tau(n)=1$ for $7\leq n <13$ and then employing \cite[Theorem~6.1(d)]{N} to conclude that $\tau$ is increasing for $n\geq 13$. Hence we obtain the graded root $R_{\tau_K}$ as shown in Figure~\ref{fig:graded2311}. By the previous discussion, the contact invariants $c^+(\xi_{\pm})$ correspond to the two roots of the tree. As a result, we have $\sigma(\xi_\pm)=0$. Notice that $k_+=K$, and so $c^+(\xi_+)=c^+(\xi_\mathrm{can})$. \begin{figure}[h] \includegraphics[width=0.60\textwidth]{graded2311.eps} \caption{The graded root of $\Sigma(2,3,11)$ with the canonical \spinc structure. } \label{fig:graded2311} \end{figure} Finally we determine the Heegaard Floer homology by computing the homology of the graded root and shifting the degree by $(K^2+9)/4$. 
We get that $HF^+(-\Sigma(2,3,11))=\mathcal{T}^+_{(-2)}\oplus \mathbb{F}_{(-2)}$ and $c^+(\xi_\pm)$ are two distinct elements of degree $-2$ which project non-trivially to the reduced Floer homology. \subsection{Stein fillable contact structures of arbitrarily large $\sigma$} Consider the infinite family of Brieskorn spheres $M_n=\Sigma(3,3n+1, 9n+2)$, $n\geq 1$, given by the plumbing graph $\Gamma_n$ in Figure~\ref{fig:plumpqpq-1}. For every $m=0,\dots,n-1$ we have a Stein structure $J_m$, obtained by Legendrian handle attachments along Legendrian unknots corresponding to the vertices of $\Gamma_n$, with framing $tb-1$. In order to get the correct framing we have to stabilize the $(-3)$- and $(-n-1)$-framed unknots once and $n-1$ times respectively. To get the Stein structure $J_m$ we do one left stabilization on the $(-3)$-framed unknot, and $m$ right and $n-m-1$ left stabilizations on the $(-n-1)$-framed unknot; see Figure~\ref{fig:legplumbing2}. Orienting each unknot clockwise, we fix a basis $b_0,\dots,b_{9n+5}$ for $L$ using the indices as shown in Figure~\ref{fig:plumpqpq-1}. Then the number $c_1(J_m)(b_j)$ is given by the rotation number of the Legendrian unknot corresponding to the $j$th vertex. Hence we have $c_1(J_m)=[0,1,0,0,n-2m-1,0,\dots,0]$. Clearly for every $m=0,\dots,n-1$, the contact structure $\xi_m$ induced by $J_m$ is compatible with $\Gamma_n$. Hence by Lemma~\ref{l:laufdetect}, each $c_1(J_m)$ appears in the Laufer sequence of $\Gamma_n$. In fact we have $c_1(J_m)=k(i_m)$, where $i_m=3m(9n+2)$. \begin{figure}[h] \includegraphics[width=.50\textwidth]{plumpqpq-1.eps} \caption{Plumbing graph $\Gamma_n$.} \label{fig:plumpqpq-1} \end{figure} \begin{figure}[h] \includegraphics[width=0.70\textwidth]{legplumbing2.eps} \caption{Stein handlebody diagram of $J_m$.} \label{fig:legplumbing2} \end{figure} We need to determine the branch lengths of the corresponding root vertices in the graded root.
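The Chern classes $c_1(J_m)$ computed above can be sanity-checked with a short Python sketch. The helper names below are ours; the sign convention (each left stabilization contributing $+1$ to the rotation number with the chosen orientation) and the positions of the $(-3)$- and $(-n-1)$-framed vertices in the basis are assumptions made for illustration.

```python
# Sketch: recompute c_1(J_m) = [0,1,0,0,n-2m-1,0,...,0] from the
# stabilization counts, and check the bounds of Lemma l:laufdetect.
# Assumption: left/right stabilizations contribute +1/-1 to the
# rotation number, and the (-3)- and (-(n+1))-framed vertices sit in
# positions 1 and 4 of the basis b_0,...,b_{9n+5}.
def rotation(left, right):
    return left - right

def c1(n, m):
    k = [0] * (9 * n + 6)                     # one entry per vertex
    k[1] = rotation(left=1, right=0)          # (-3)-framed unknot
    k[4] = rotation(left=n - m - 1, right=m)  # (-(n+1))-framed unknot
    return k

n = 5
for m in range(n):
    k = c1(n, m)
    assert k[4] == n - 2 * m - 1
    # bounds of Lemma l:laufdetect: e_j + 2 <= k(b_j) <= -e_j - 2
    assert -1 <= k[1] <= 1                    # weight e = -3
    assert -(n - 1) <= k[4] <= n - 1          # weight e = -(n+1)
```

In particular $c_1(J_m)$ vanishes on every $(-2)$-framed vertex, exactly as the bounds of Lemma~\ref{l:laufdetect} require, consistently with each $c_1(J_m)$ appearing in the Laufer sequence.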
Borodzik and N\'emethi worked out the combinatorics of the $\tau$ function. \begin{lemma}\cite[Proposition 4.2]{BN} Suppose $p$ and $q$ are relatively prime positive integers. Let $\mathcal{S}_{p,q}$ be the semigroup of $\mathbb{N}$ generated by $p$ and $q$, including $0$. Let $\delta=(p-1)(q-1)/2$. Consider the function $\tau:\mathbb{Z}^{\geq 0}\to \mathbb{Z}$ associated to the Brieskorn sphere $\Sigma(p,q,pq-1)$. The function $\tau$ attains its local minima at $a_t=t(pq-1)$ for $0\leq t \leq 2\delta -2$, and local maxima at $A_t=tpq+1$ for $0\leq t\leq 2\delta -3$. Moreover for any $0\leq t\leq 2\delta -3$, one has \begin{align*} \tau(A_t)-\tau (a_{t+1})&=\#\{s\not \in \mathcal{S}_{p,q}\, : \, s\geq t+2 \}>0, \\ \tau(A_t)-\tau (a_{t})&=\#\{s\in \mathcal{S}_{p,q}\, : \, s\leq t \}>0. \end{align*} \end{lemma} Applying the above lemma to the case $p=3$ and $q=3n+1$, we see that $\sigma(\xi_m)=-m$. Hence by choosing $n$ and $m$ appropriately we realize any negative integer as the $\sigma$ invariant of a contact structure. Of course $\xi_m$ cannot be isomorphic to the canonical contact structure unless $m=0$. Therefore the following problem is natural. \begin{question} Is it possible to realize any negative integer as the $\sigma$ invariant of the canonical contact structure of a singularity? \end{question} \section{Acknowledgments} The first author is supported by a TUBITAK grant BIDEB 2232 No: 115C005. The second author is grateful to the Institut Camille Jordan, Universit\'{e} Lyon 1, where part of this work was completed.
\section{Introduction}\label{introduction} Let $\Omega\subseteq\R^n$ be an open bounded set with smooth boundary, let $f\in L^2(\Omega)$ be a positive function and let $\beta,\, C_0$ be positive constants. We consider the following energy functional \begin{equation}\label{eq: fun 1}\mathcal{F}(A,v)=\int_A \abs{\nabla v}^2\,d\Ln +\beta\int_{\partial A } v^2\,d\Hn -2\int_\Omega fv\,d\Ln+C_0\Ln(A\setminus\Omega),\end{equation} and the variational problem \begin{equation}\label{problema} \inf \Set{\mathcal{F}(A,v) | \begin{aligned} &A\supseteq \Omega \text{ open, bounded and Lipschitz} \\ &v\in W^{1,2}(A), \: v\ge 0\,\text{in } A \end{aligned}}. \end{equation} This problem is related to the following thermal insulation problem: for a given heat source $f$ distributed in a conductor $\Omega$, find the best possible configuration of insulating material surrounding $\Omega$. A similar problem has been studied in \cite{symbreak} and \cite{robinins} for a thin insulating layer, and in \cite{CK}, \cite{BucLuc12} and \cite{nahon} for a prescribed temperature in $\Omega$. For a fixed open set $A$ with Lipschitz boundary, we have, via the direct methods of the calculus of variations, that there exists $u_A\in W^{1,2}(A)$ such that \[\mathcal{F}(A,u_A)\leq \mathcal{F}(A,v),\] for all $v\in W^{1,2}(A)$, with $v\ge0$ in $A$. Furthermore $u_A$ solves the following stationary problem, with Robin boundary condition on $\partial A$. Precisely, \[\begin{cases}-\Delta u_A= f & \text{in }\Omega,\\[3 pt] \dfrac{\partial u_A^+}{\partial \nu\hphantom{\scriptstyle{+}}}=\dfrac{\partial u_A^-}{\partial \nu\hphantom{\scriptstyle{+}}} & \text{on }\partial\Omega, \\[6 pt] \Delta u_A=0 & \text{in } A\setminus\Omega, \\[3 pt] \dfrac{\partial u_A}{\partial \nu} +\beta u_A=0 & \text{on } \partial A, \end{cases}\] where $u_A^-$ and $u_A^+$ denote the traces of $u_A$ on $\partial\Omega$ in $\Omega$ and in $A\setminus\Omega$ respectively.
That is, \begin{equation}\label{eq: el}\int_A \nabla u_A\cdot \nabla\varphi\,d\Ln+\beta\int_{\partial A} u_A\varphi\,d\Hn=\int_\Omega f\varphi\,d\Ln,\end{equation} for all $\varphi\in W^{1,2}(A)$. The Robin boundary condition represents the case when the heat transfer with the environment is conveyed by convection. If, for any couple $(A,v)$ with $A$ an open bounded set with Lipschitz boundary containing $\Omega$ and $v\in W^{1,2}(A)$, $v\geq0$ in $A$, we identify $v$ with $v\chi_A$, where $\chi_A$ is the characteristic function of $A$, and the set $A$ with the support of $v$, then the energy functional~\eqref{eq: fun 1} becomes \begin{equation}\label{eq: fun 2}\mathcal{F}(v)=\int_{\R^n} \abs{\nabla v}^2\,d\Ln+\beta\int_{J_v}\left( \overline{v}^2+\underline{v}^2\right)\,d\Hn-2\int_\Omega fv\,d\Ln+C_0\Ln(\set{v>0}\setminus\Omega),\end{equation} and the minimization problem~\eqref{problema} becomes \begin{equation} \label{problemar} \inf \Set{\mathcal{F}(v) | v\in \sbv^{\frac{1}{2}}(\R^n)\cap W^{1,2}(\Omega) }, \end{equation} where $\overline{v}$ and $\underline{v}$ are respectively the approximate upper and lower limits of $v$, $J_v$ is the jump set and $\nabla v$ is the absolutely continuous part of the derivative of $v$. See \autoref{notations} for the definitions. We state the main results of this paper in the two following theorems. \begin{teor} \label{teor: mainth1} Let $n\ge 2$, let $\Omega\subset\R^n$ be an open bounded set with $C^{1,1}$ boundary, let $f\in L^2(\Omega)$, with $f>0$ almost everywhere in $\Omega$. Assume in addition that, if $n=2$, \begin{equation}\label{eq:cond n=2}\norma{f}_{2,\Omega}^2<C_0\lambda_\beta(B)\mathcal{L}^2(\Omega),\end{equation} where $B$ is a ball having the same measure as $\Omega$. Then problem~\eqref{problemar} admits a solution. Moreover, if $p>n$ and $f\in L^p(\Omega)$, then there exists a positive constant $C=C(\Omega,f,p,\beta,C_0)$ such that if $u$ is a minimizer to problem~\eqref{problemar} then \[ \norma{u}_\infty\le C.
\] \end{teor} \begin{teor} \label{teor: mainth2} Let $n\ge 2$, let $\Omega\subset\R^n$ be an open bounded set with $C^{1,1}$ boundary, let $p>n$ and let $f\in L^p(\Omega)$, with $f>0$ almost everywhere in $\Omega$. Assume in addition that, if $n=2$, condition~\eqref{eq:cond n=2} holds true. Then there exist positive constants $\delta_0=\delta_0(\Omega,f,p,\beta,C_0)$, $c=c(\Omega,f,p,\beta,C_0)$, $C=C(\Omega,f,p,\beta,C_0)$ such that if $u$ is a minimizer to problem \eqref{problemar} then \[ u\ge\delta_0 \qquad \text{$\Ln$-a.e. in }\set{u>0}, \] and the jump set $J_u$ satisfies the density estimates \[cr^{n-1}\leq\Hn(J_u\cap B_r(x))\leq C r^{n-1},\] with $x\in\overline{J_u}$, and $0<r<d(x,\partial\Omega)$. In particular, we have \[ \Hn(\overline{J_u}\setminus J_u)=0. \] \end{teor} We refer to \autoref{notations} for the definitions of $\lambda_\beta(B)$ in \eqref{eq:cond n=2}, and of the distance $d(x,\partial\Omega)$ in \autoref{teor: mainth2}. \autoref{existence} is devoted to the proof of \autoref{teor: mainth1}, while \autoref{estimates} is devoted to the proof of \autoref{teor: mainth2}. We notice that the assumptions on the function $f$ do not seem to be sharp. Indeed, it is well known (see for instance \cite[Theorem 8.15]{trudinger}) that, in the more regular case, the assumption $f\in L^p(\Omega)$ with $p>n/2$ ensures the boundedness of solutions to equation~\eqref{eq: el}. \section{Notation and tools}\label{notations} In this section we recall some definitions and properties of the space $\sbvv$. We refer to \cite{bv}, \cite{robin-bg}, \cite{evans} for a deep study of the properties of these functions. In the following, given $A\subseteq\R^n$ and $1\le p\le\infty$, we will denote the $L^p(A)$ norm of a function $v\in L^p(A)$ as $\norma{v}_{p,A}$; in particular, when $A=\R^n$ we will simply write $\norma{v}_{p}=\norma{v}_{p,\R^n}$. \begin{defi}[$\bv$] Let $u\in L^1(\R^n)$.
We say that $u$ is a function of \emph{bounded variation} in $\R^n$ and we write $u\in\bv(\R^n)$ if its distributional derivative is a Radon measure, namely \[ \int_{\R^n}u\,\frac{\partial\varphi}{\partial x_i}\,d\Ln=-\int_{\R^n}\varphi\, d D_i u\qquad \forall \varphi\in C^\infty_c(\R^n), \] with $Du$ an $\R^n$-valued measure in $\R^n$. We denote by $\abs{Du}$ the total variation of the measure $Du$. The space $\bv(\R^n)$ is a Banach space equipped with the norm \[ \norma{u}_{\bv(\R^n)}=\norma{u}_{1}+\abs{Du}(\R^n). \] \end{defi} \begin{defi} Let $E\subseteq\R^n$ be a measurable set. We define the \emph{set of points of density 1 for $E$} as \[ E^{(1)}=\Set{x\in\R^n | \lim_{r\to0^+}\dfrac{\Ln(B_r(x)\cap E)}{\Ln(B_r(x))}=1}, \] and the \emph{set of points of density 0 for $E$} as \[ E^{(0)}=\Set{x\in\R^n | \lim_{r\to0^+}\dfrac{\Ln(B_r(x)\cap E)}{\Ln(B_r(x))}=0}. \] Moreover, we define the \emph{essential boundary} of $E$ as \[ \partial^*E=\R^n \setminus(E^{(0)}\cup E^{(1)}). \] \end{defi} \begin{defi}[Approximate upper and lower limits] Let $u\colon\R^n\to\R$ be a measurable function. We define the \emph{approximate upper and lower limits} of $u$, respectively, as \[\overline{u}(x)=\inf\Set{t\in\R|\limsup_{r\to0^+}\dfrac{\Ln(B_r(x)\cap\set{u>t})}{\Ln(B_r(x))}=0},\] and \[\underline{u}(x)=\sup\Set{t\in\R|\limsup_{r\to0^+}\dfrac{\Ln(B_r(x)\cap\set{u<t})}{\Ln(B_r(x))}=0}.\] We define the \emph{jump set} of $u$ as \[J_u=\Set{x\in\R^n|\underline{u}(x)<\overline{u}(x)}.\] We denote by $K_u$ the closure of $J_u$. \end{defi} If $\overline{u}(x)=\underline{u}(x)=l$, we say that $l$ is the approximate limit of $u$ as $y$ tends to $x$, and we have that, for any $\eps>0$, \[\limsup_{r\to0^+}\dfrac{\Ln(B_r(x)\cap\set{\abs{u-l}\geq\eps})}{\Ln(B_r(x))}=0.\] If $u\in\bv(\R^n)$, the jump set $J_u$ is an $(n-1)$-rectifiable set, i.e. ${J_u\subseteq\bigcup_{i\in\mathbb{N}}M_i}$, up to a $\Hn$-negligible set, with $M_i$ a $C^1$-hypersurface in $\R^n$ for every $i$.
We can then define $\Hn$-almost everywhere on $J_u$ a normal $\nu_u$ coinciding with the normal to the hypersurfaces $M_i$. Furthermore, the direction of $\nu_u(x)$ is chosen in such a way that the approximate upper and lower limits of $u$ coincide with the approximate limit of $u$ on the half-spaces \[H^+_{\nu_u}=\set{y\in\R^n|\nu_u(x)\cdot(y-x)\geq0}\] and \[H^-_{\nu_u}=\set{y\in\R^n|\nu_u(x)\cdot(y-x)\leq0}\] respectively. \begin{defi} Let $E,\Omega\subseteq\R^n$ be measurable sets. We define the \emph{relative perimeter} of $E$ inside $\Omega$ as \[ P(E;\Omega)=\sup\Set{\int_E \divv\varphi\,d\Ln | \begin{aligned} \varphi\in &\:C^1_c(\Omega,\R^n) \\ &\abs{\varphi}\le 1 \end{aligned}}. \] If $P(E;\R^n)<+\infty$ we say that $E$ is a \emph{set of finite perimeter}. \end{defi} \begin{teor}[Relative Isoperimetric Inequality] \label{teor: relisop} Let $\Omega$ be an open, bounded, connected set with Lipschitz boundary. Then there exists a positive constant $C=C(\Omega)$ such that \[ \min\Set{\Ln(\Omega\cap E),\Ln(\Omega\setminus E)}^\frac{n-1}{n}\le C P(E;\Omega), \] for every set $E$ of finite perimeter. \end{teor} See for instance \cite{mazya} for the proof of this theorem. \begin{teor} \label{teor: isopint} Let $\Omega$ be an open, bounded, connected set with Lipschitz boundary. Then there exists a constant $C=C(\Omega)>0$ such that \[ \Hn (\partial^*E\cap \partial\Omega )\le C \Hn(\partial^* E\cap \Omega) \] for every set of finite perimeter $E\subset\Omega$ with $0<\Ln(E)\le \Ln(\Omega)/2$. \end{teor} We refer to \cite[Theorem 2.3]{cianchi2016poincare} for the proof of this theorem, observing that if $\Omega$ is a Lipschitz set, then it is an admissible set in the sense defined in \cite{cianchi2016poincare} (see \cite[Remark 5.10.2]{ziemer}). \begin{teor}[Decomposition of $\bv$ functions] Let $u\in\bv(\R^n)$.
Then we have \[ dDu=\nabla u\,d\Ln+\abs{\overline{u}-\underline{u}}\,\nu_u\,d\Hn\lfloor_{J_u}+ dD^c u, \] where $\nabla u$ is the density of $Du$ with respect to the Lebesgue measure, $\nu_u$ is the normal to the jump set $J_u$ and $D^c u$ is the \emph{Cantor part} of the measure $Du$. The measure $D^c u$ is singular with respect to the Lebesgue measure and concentrated out of $J_u$. \end{teor} \begin{defi} Let $v\in\bv(\R^n)$, let $\Gamma\subseteq\R^n$ be a $\Hn$-rectifiable set and let $\nu(x)$ be the generalized normal to $\Gamma$ defined for $\Hn$-a.e. $x\in\Gamma$. For $\Hn$-a.e. $x\in \Gamma$ we define the traces $\gamma_\Gamma^{\pm}(v)(x)$ of $v$ on $\Gamma$ by the following Lebesgue-type limit quotient relation \[ \lim_{r\to 0}\frac{1}{r^n}\int_{B_r^{\pm}(x)}\abs{v(y)-\gamma_\Gamma^{\pm}(v)(x)}\,d\Ln(y)=0, \] where \[ B_{r}^{+}(x)=\set{y\in B_r(x) | \nu(x)\cdot(y-x)>0}, \] \[ B_{r}^{-}(x)=\set{y\in B_r(x) | \nu(x)\cdot(y-x)<0}. \]\end{defi} \begin{oss} Notice that, by~\cite[Remark 3.79]{bv}, for $\Hn$-a.e. $x\in\Gamma$, $(\gamma_\Gamma^{+}(v)(x),\gamma_\Gamma^-(v)(x))$ coincides with either $(\overline{v}(x),\underline{v}(x))$ or $(\underline{v}(x),\overline{v}(x))$, while, for $\Hn$-a.e. $x\in \Gamma\setminus J_v$, we have that $\gamma_\Gamma^+(v)(x)=\gamma_\Gamma^-(v)(x)$ and they coincide with the approximate limit of $v$ in $x$. In particular, if $\Gamma=J_v$, we have \[ \gamma_{J_v}^+(v)(x)=\overline{v}(x) \qquad \gamma_{J_v}^-(v)(x)=\underline{v}(x) \] for $\Hn$-a.e. $x\in J_v$. \end{oss} We now focus our attention on the $\bv$ functions whose Cantor parts vanish. \begin{defi}[$\sbv$] Let $u\in\bv(\R^n)$. We say that $u$ is a \emph{special function of bounded variation} and we write $u\in\sbv(\R^n)$ if $D^c u=0$. \end{defi} For $\sbv$ functions we have the following. \begin{teor}[Chain rule]\label{teor: chain} Let $g\colon\R\to\R$ be a differentiable function.
Then if $u\in\sbv(\R^n)$, we have \[\nabla g(u)=g'(u)\nabla u.\] Furthermore, if $g$ is increasing, \[\overline{g(u)}=g(\overline{u}),\quad \underline{g(u)}=g(\underline{u})\] while, if $g$ is decreasing, \[\overline{g(u)}=g(\underline{u}),\quad \underline{g(u)}=g(\overline{u}).\] \end{teor} We now give the definition of the following class of functions. \begin{defi}[$\sbv^{1/2}$] Let $u\in L^2(\R^n)$ be a non-negative function. We say that $u\in\sbvv$ if $u^2\in \sbv(\R^n)$. In addition, we define \begin{gather*} J_u:=J_{u^2} \qquad\quad \overline{u}:=\sqrt{\,\overline{u^2}\,}\qquad\quad \underline{u}:=\sqrt{\,\underline{u^2}\,} \\[5 pt] \nabla u:=\frac{1}{2u}\nabla(u^2)\chi_{\set{u>0}} \end{gather*} \end{defi} Notice that this definition extends the validity of the Chain Rule to the functions in $\sbvv$. We refer to \cite[Lemma 3.2]{robin-bg} for the coherence of this definition. \begin{teor}[Compactness in $\sbv^{1/2}$] \label{teor: sbv} Let $u_k$ be a sequence in $\sbvv$ and let $C>0$ be such that for every $k\in \N$ \[ \int_{\R^n}\abs{\nabla u_k}^2\, d\Ln+\int_{J_{u_{k}}}\left(\overline u_k^2+\underline u_{k}^2\right)\,d\Hn +\int_{\R^n}u_k^2\, d\Ln < C \] Then there exists $u\in\sbvv$ and a subsequence $u_{k_j}$ such that \begin{itemize} \item \emph{Compactness:} \[ u_{k_j}\xrightarrow{L^2_{\loc}(\R^n)} u \] \item \emph{Lower semicontinuity:} for every open set $A$ we have \[ \int_A \abs{\nabla u}^2\, d\Ln \le \liminf_{j\to+\infty}\int_A \abs{\nabla u_{k_j}}^2\,d\Ln \] and \[ \int_{J_u\cap A}\left( \overline u^2+\underline u^2\right)\, d\Hn \le \liminf_{j\to+\infty}\int_{J_{u_{k_j}}\cap A}\left( \overline u_{k_j}^2+\underline u_{k_j}^2\right)\, d\Hn \] \end{itemize} \end{teor} \begin{defi}[Robin Eigenvalue] \label{def: robineigen} Let $A\subseteq\R^n$ be an open bounded set with Lipschitz boundary, let $\beta>0$. 
We define $\lambda_{\beta}(A)$ as \begin{equation}\label{eq:Robin 2,q}\lambda_{\beta}(A)=\inf\Set{ \dfrac{\displaystyle\int_A \abs{\nabla v}^2\,d\Ln+\beta\int_{\partial A} v^2\,d\Hn}{\displaystyle\int_A v^2\,d\Ln}| v\in W^{1,2}(A)\setminus\set{0}}.\end{equation} \end{defi} \begin{oss} Standard tools of the calculus of variations ensure that the infimum in~\eqref{eq:Robin 2,q} is achieved. \end{oss} \begin{lemma} \label{lemma: stimaautovalori} For every $0<r<R$, the following inequality holds \[ \lambda_{\beta}(B_r)\le \left(\dfrac{ \Ln(B_R)}{\Ln(B_r)}\right)^{\frac{2}{n}}\lambda_{\beta}(B_R), \] where $B_R$ and $B_r$ are balls with radii $R$ and $r$ respectively. \begin{proof} Let $\varphi$ be a minimum of~\eqref{eq:Robin 2,q} for $A=B_R$ and with $\norma{\varphi}_{2,B_R}=1$. We define \[ w(x)=\varphi\left(\frac{R}{r}x\right) \qquad \forall x\in B_r. \] Therefore, \[ \begin{split} \lambda_{\beta}(B_r) &\le \dfrac{\displaystyle \int_{B_r}\abs{\nabla w(x)}^2\,d\Ln(x) + \beta\int_{\partial B_r}w(x)^2\, d\Hn(x)}{\displaystyle \int_{B_r}w(x)^2\, d\Ln(x)}\\[7 pt] &= \dfrac{\displaystyle \left(\frac{r}{R}\right)^{n-2}\int_{B_R}\abs{\nabla \varphi(y)}^2\,d\Ln(y) + \left(\frac{r}{R}\right)^{n-1}\beta\int_{\partial B_R}\varphi(y)^2\, d\Hn(y)}{\left(\dfrac{r}{R}\right)^{n}}. \end{split} \] Since $r/R<1$, by minimality of $\varphi$, we get \[ \lambda_{\beta}(B_r)\le \dfrac{\displaystyle\left(\frac{r}{R}\right)^{n-2}}{\displaystyle\left(\dfrac{r}{R}\right)^{{n}\hphantom{\!2}}}\lambda_{\beta}(B_R)=\left(\dfrac{\Ln(B_r)}{\Ln(B_R)}\right)^{-\frac{2}{n}}\lambda_{\beta}(B_R). \] \end{proof} \end{lemma} Let $\beta,m>0$, and let us denote by \[ \Lambda_{\beta,m}=\inf\Set{\dfrac{\displaystyle \int_{\R^n}\!\abs{\nabla v}^2\, d\Ln+\beta \int_{J_v}\!\!\left(\underline{v}^2+\overline{v}^2\right)\,d\Hn}{\displaystyle \int_{\R^n}v^2\, d\Ln} | \begin{aligned} &v\in\sbvv\setminus{\{0\}} \\ &\Ln\left(\set{v>0}\right)\le m \end{aligned}}.
\] Here we state a theorem, referring to \cite[Theorem 5]{robin-bg} for the proof. \begin{teor} \label{teor: faberkrahn} Let $B\subseteq\R^n$ be a ball of volume $m$. Then \[ \Lambda_{\beta,m}= \lambda_{\beta}(B). \] \end{teor} We will denote by $d(x,\partial\Omega)$ the distance between $x\in\R^n$ and the boundary $\partial\Omega$, and for every $\eps>0$ we define \[ \Omega_\eps=\Set{x\in\Omega | d(x,\partial\Omega)>\varepsilon}. \] We will use the following result. \begin{prop} \label{teor: volumedistanza} Let $\Omega$ be an open bounded set with $C^{1,1}$ boundary. Then there exist a constant $C=C(\Omega)>0$ and an $\eps_0=\eps_0(\Omega)>0$ such that \[ \Ln(\Omega\setminus\Omega_\eps)\le C\varepsilon \qquad \qquad \forall \eps<\eps_0. \] \end{prop} \begin{proof} It is well known (see for instance \cite[Theorem 17.5]{maggi}) that there exist a constant $C=C(\Omega)$ and $\eps_0=\eps_0(\Omega)>0$ such that \[P(\Omega_\eps)= P(\Omega)+C(\Omega)\,\eps+O(\eps^2),\] for every $0<\eps<\eps_0$. Let $r(x)=d(x,\partial\Omega)$ be the distance from the boundary of $\Omega$. By the coarea formula we have \[\Ln(\Omega\setminus\Omega_\eps)=\int_{\set{0<r<\eps}} \,d\Ln=\int_0^\eps P(\Omega_t)\,dt\le C(\Omega)\eps.\] \end{proof} \section{Existence of minimizers}\label{existence} In this section we prove \autoref{teor: mainth1}: in \autoref{teor: existence} we prove the existence of a minimizer to problem \eqref{problemar}; in \autoref{teor: linftybound} we prove the $L^\infty$ estimate for a minimizer. In this section we will assume that $\Omega\subseteq\R^n$ is an open bounded set with $C^{1,1}$ boundary, that $f\in L^2(\Omega)$ is a positive function and that $\beta, C_0$ are positive constants. We consider the energy functional $\mathcal{F}$ defined in \eqref{eq: fun 2}. \begin{lemma} \label{lemma: stimel2} Let $n\ge2$ and assume that, if $n=2$, condition \eqref{eq:cond n=2} holds true.
Then there exist two positive constants $c=c(\Omega,f,\beta,C_0)$ and $C=C(\Omega,f,\beta,C_0)$ such that if $v\in\sbv^{\frac{1}{2}}(\R^n)\cap W^{1,2}(\Omega)$, with $\mathcal{F}(v)\le 0$ and $\Omega\subseteq\set{v>0}$, then \begin{equation}\label{eq:stimasupporto} \Ln(\Set{v>0})\le c,\end{equation} \begin{equation}\label{eq:stimanormal2} \norma{v}_2\le C. \end{equation} \end{lemma} \begin{proof} Let $B'$ be a ball with the same measure as $\Set{v>0}$. By \autoref{teor: faberkrahn} \[ \begin{split} 0\ge\mathcal{F}(v)&\ge\,\lambda_\beta\left(B'\right)\int_{\R^n}v^2\,d\Ln-2\int_{\Omega}fv\, d\Ln\\ &\hphantom{\ge}+C_0\Ln\left(\set{v>0}\setminus\Omega\right). \end{split} \] By \autoref{lemma: stimaautovalori} and Hölder inequality \begin{equation} \label{eq: quadratic} \begin{split} 0\ge& \lambda_\beta(B)\left(\frac{\Ln(\Omega)}{\Ln(\Set{v>0})}\right)^{\frac{2}{n}}\norma{v}_2^2 -2\norma{f}_{2,\Omega}\norma{v}_2\\[3 pt] &+C_0\Ln\left(\set{v>0}\setminus\Omega\right) \end{split} \end{equation} where $B$ is a ball with the same measure as $\Omega$. Obviously~\eqref{eq: quadratic} implies that \[ \norma{f}_{2,\Omega}^2-\lambda_\beta(B)\left(\frac{\Ln(\Omega)}{\Ln(\Set{v>0})}\right)^{\frac{2}{n}}C_0\Ln\left(\set{v>0}\setminus\Omega\right)\ge 0. \] Let $M=\Ln(\set{v>0})$, and notice that, since $\Omega\subseteq\set{v>0}$, \[ \Ln\left(\set{v>0}\setminus\Omega\right)= M-\Ln(\Omega), \] therefore \[ \norma{f}_{2,\Omega}^2\ge C_0 \lambda_\beta(B)\left(\Ln(\Omega)\right)^{\frac{2}{n}}\left(M^{1-\frac{2}{n}}-M^{-\frac{2}{n}}\Ln(\Omega)\right). \] This implies (taking into account~\eqref{eq:cond n=2} if $n=2$) that there exists $c=c(\Omega,f,\beta,C_0)>0$ such that \[ \Ln(\set{v>0})<c. 
\] Finally observe that by~\eqref{eq: quadratic} it follows that \begin{equation} \label{eq: norma2} \norma{v}_2\le C(M), \end{equation} where \[\begin{split}C(M)&=\frac{M^{\frac{2}{n}}\left(\norma{f}_{2,\Omega}+\sqrt{\norma{f}_{2,\Omega}^2- C_0 \lambda_\beta(B)\left(\dfrac{\Ln(\Omega)}{M}\right)^{\frac{2}{n}}\left(M-\Ln(\Omega)\right)}\right)}{\vphantom{\big(}\lambda_\beta(B)\left(\Ln(\Omega)\right)^{\frac{2}{n}}}\\[5 pt] &\le \frac{2 c^\frac{2}{n} \norma{f}_{2,\Omega}}{\lambda_\beta(B)\left(\Ln(\Omega)\right)^{\frac{2}{n}}}\end{split}\] \end{proof} \begin{oss} \label{oss: ominsupp} Let $v\in\sbvv\cap W^{1,2}(\Omega)$; it is always possible to choose a function $v_0$ such that $v_0=v$ in $\R^n\setminus\Omega$, $\mathcal{F}(v_0)\leq\mathcal{F}(v)$, and $\Omega\subseteq\set{v_0>0}$. Indeed the function $v_0\in W^{1,2}(\Omega)$, weak solution to the following boundary value problem \begin{equation}\label{eq: remark dir}\begin{cases} -\Delta v_0= f &\text{in }\Omega,\\ v_0 = v &\text{on }\partial\Omega,\end{cases} \end{equation} satisfies \[\int_\Omega\nabla v_0\cdot\nabla \varphi\,d\Ln=\int_\Omega f\varphi\,d\Ln\] for every $\varphi\in W^{1,2}_0(\Omega)$ and $v_0=v$ on $\partial\Omega$ in the sense of the trace. Then, extending $v_0$ to be equal to $v$ outside of $\Omega$, we have that $\Omega\subset\Set{v_0>0}$, since $f>0$ forces $v_0>0$ in $\Omega$ by the minimum principle, and $\mathcal{F}(v_0)\leq\mathcal{F}(v)$. \end{oss} \begin{prop}[Existence] \label{teor: existence} Let $n\ge2$ and, if $n=2$, assume that condition~\eqref{eq:cond n=2} holds true. Then there exists a solution to problem~\eqref{problemar}. \end{prop} \begin{proof} Let $\{u_k\}$ be a minimizing sequence for problem~\eqref{problemar}. Without loss of generality we may always assume that, for all $k\in\N$, $\mathcal{F}(u_k)\leq\mathcal{F}(0)=0$, and, by \autoref{oss: ominsupp}, $\Omega\subseteq\set{u_k>0}$.
Therefore we have \[\begin{split}0\geq\mathcal{F}(u_k)&\geq \int_{\R^n} \abs{\nabla u_k}^2\,d\Ln+\beta\int_{J_{u_k}}\left( \overline{u_k}^2+\underline{u_k}^2\right)\,d\Hn-2\int_\Omega fu_k\,d\Ln\\ &\geq\int_{\R^n} \abs{\nabla u_k}^2\,d\Ln+\beta\int_{J_{u_k}}\left( \overline{u_k}^2+\underline{u_k}^2\right)\,d\Hn-2\norma{f}_{2,\Omega}\norma{u_k}_{2,\Omega}\,, \end{split}\] and by~\eqref{eq:stimanormal2} \begin{equation*} \int_{\R^n} \abs{\nabla u_k}^2\,d\Ln+\beta\int_{J_{u_k}}\left( \overline{u_k}^2+\underline{u_k}^2\right)\,d\Hn\leq C\norma{f}_{2,\Omega}\,.\end{equation*} Then we have that there exists a positive constant still denoted by $C$, independent of the sequence $\{u_k\}$, such that \begin{equation}\label{eq: variazione u} \int_{\R^n}\abs{\nabla u_k}^2\, d\Ln+\int_{J_{u_{k}}}\left(\overline u_k^2+\underline u_k^2\right)\,d\Hn +\int_{\R^n}u_k^2\, d\Ln < C. \end{equation} The compactness theorem in $\sbvv$ (\autoref{teor: sbv}) ensures that there exist a subsequence $\{u_{k_j}\}$ and a function $u\in\sbvv\cap W^{1,2}(\Omega)$ such that $u_{k_j}$ converges to $u$ strongly in $L^2_{\loc} (\R^n)$, weakly in $W^{1,2}(\Omega)$, almost everywhere in $\R^n$, and \[\begin{split} \int_{\R^n}\abs{\nabla u}^2\, d\Ln \le& \liminf_{j\to+\infty}\int_{\R^n} \abs{\nabla u_{k_j}}^2\,d\Ln,\\[5 pt] \int_{J_u}\left( \overline u^2+\underline u^2\right)\, d\Hn \le& \liminf_{j\to+\infty}\int_{J_{u_{k_j}}} \left(\overline u_{k_j}^2+\underline u_{k_j}^2\right)\, d\Hn,\\[5 pt] \Ln(\set{u>0}\setminus\Omega)\le&\liminf_{j\to+\infty}\Ln(\set{u_{k_j}>0}\setminus\Omega).\end{split}\] Finally we have \[\mathcal{F}(u)\leq\liminf_{j\to+\infty}\mathcal{F}(u_{k_j})=\inf\Set{\mathcal{F}(v)|v\in\sbvv\cap W^{1,2}(\Omega)}.\] Therefore $u$ is a minimizer to problem~\eqref{problemar}.\qedhere \end{proof} \begin{teor}[Euler-Lagrange equation] \label{teor: euler-lagrange} Let $u$ be a minimizer to problem \eqref{problemar}, and let $v\in \sbv^{1/2}(\R^n)$ such that $J_v\subseteq J_u$.
Assume that there exists $t>0$ such that $\set{v>0}\subseteq \set{u>t}$ $\Ln$-a.e., and that \[ \int_{J_u\setminus J_v}v^2\, d\Hn<+\infty. \] Then \begin{equation} \label{eq: wel} \int_{\R^n}\nabla u\cdot\nabla v\, d\Ln+\beta\int_{J_u}\left(\overline{u}\gamma^+(v)+\underline{u}\gamma^-(v)\right)\, d\Hn=\int_\Omega fv\,d\Ln, \end{equation} where $\gamma^{\pm}=\gamma_{J_u}^\pm$. \end{teor} \begin{proof} Notice that since $v\in\sbvv$ with $J_v\subseteq J_u$ we have that $v\in\sbvv\cap W^{1,2}(\Omega)$. Assume first that $v\in \sbv^{1/2}(\R^n)\cap L^\infty(\R^n)$. If $s\in\R$, recalling that $\set{v>0}\subseteq \set{u>t}$ $\Ln$-a.e., \[ u(x)+sv(x)=u(x)\ge 0 \qquad \text{$\Ln$-a.e.}\:\forall x\in \set{u\le t}, \] while, for $\abs{s}$ small enough, \[ u(x)+s v(x)\ge t-\abs{s}\,\norma{v}_\infty>0 \qquad \forall x\in\set{u>t}. \] Therefore we still have \[ u+sv\in\sbv^{\frac{1}{2}}(\R^n,\R^+). \] Moreover, by minimality of $u$ we have, for every $\abs{s}\le s_0$, with $s_0>0$ small enough, \[\begin{split} \mathcal{F}(u)\le&\mathcal{F}(u+sv)\\[3 pt] =&\int_{\R^n}\abs{\nabla u+s\nabla v}^2\,d\Ln+\\[3 pt] &+\int_{J_{u+sv}}\left[\left(\gamma^+(u)+s\gamma^+(v)\right)^2+\left(\gamma^-(u)+s\gamma^-(v)\right)^2\right]\,d\Hn+ \\[3 pt] &-2\int_{\R^n}f(u+sv)\,d\Ln+C_0\Ln(\set{u>0}). \end{split} \] \marcomm{\textbf{Claim:}} The set \[S:=\Set{s\in[-s_0,s_0] | \, \Hn(J_{u}\setminus J_{u+sv})\ne 0}\] is at most countable. \vspace{5 pt} \noindent Let us define \begin{gather*} D_0=\Set{x\in J_u | \gamma^+(u)(x)\ne \gamma^-(u)(x)},\\[5 pt] D_s=\Set{x\in J_u | \gamma^+(u+sv)(x)\ne\gamma^-(u+sv)(x)}, \end{gather*} and notice that \[ \Hn(J_u\setminus D_0)=0, \qquad \Hn(J_{u+sv}\setminus D_s)=0. \] Then we have to prove that \[\Set{s\in[-s_0,s_0] | \, \Hn(D_0\setminus D_s)\ne 0}\] is at most countable. Observe that if $t\ne s$, \[ (D_0\setminus D_t)\cap(D_0\setminus D_s)=\emptyset.
\] Indeed, if $x\in D_0\setminus D_s$, then \begin{gather*} \gamma^+(u)(x)\ne \gamma^-(u)(x),\\[5 pt] \gamma^+(u)(x)+s\gamma^+(v)(x)=\gamma^-(u)(x)+s\gamma^-(v)(x), \end{gather*} hence \[ \gamma^+(v)(x)\ne \gamma^-(v)(x), \] and so \[ s=\frac{\gamma^-(u)(x)- \gamma^+(u)(x)}{\gamma^+(v)(x)- \gamma^-(v)(x)}. \] If $\mathcal{H}^0$ denotes the counting measure in $\R$, we can write \[ \int_{-s_0}^{s_0}\Hn(D_0\setminus D_s)\,d\mathcal{H}^0= \Hn\Bigg(\bigcup_{s\in(-s_0,s_0)}\left(D_0\setminus D_s\right)\Bigg)\le\Hn(J_u)<+\infty, \] and the claim is proved. We can now differentiate the function $\mathcal{F}(u+sv)$ at $s=0$ and, observing that $0\notin S$ and that $s=0$ is a minimum point of $s\mapsto\mathcal{F}(u+sv)$, we get \[ \frac{1}{2}\delta\mathcal{F}(u,v)=\int_{\R^n}\nabla u\cdot\nabla v\, d\Ln+\beta\int_{J_u}\left[\overline{u}\gamma^+(v)+\underline{u}\gamma^-(v)\right]\, d\Hn-\int_\Omega fv\,d\Ln=0. \] If $v\notin L^\infty(\R^n)$, we consider $v_h=\min\set{v,h}$. Then \[ \delta\mathcal{F}(u,v_h)=0 \qquad \forall h>0. \] Observe that, since $\gamma^\pm(v_h)=\min\set{\gamma^\pm(v),h}$, \[ \gamma^\pm(v_h)\to\gamma^\pm(v) \qquad \Hn\text{-a.e. in }J_u. \] Therefore, passing to the limit as $h\to+\infty$, by dominated convergence on the term \[ \int_{\R^n}\nabla u\cdot\nabla v_h\, d\Ln, \] and by monotone convergence on the terms \[ \beta\int_{J_u}\left[\overline{u}\gamma^+(v_h)+\underline{u}\gamma^-(v_h)\right]\, d\Hn, \qquad\int_\Omega fv_h\,d\Ln, \] we get \[ 0=\lim_h\delta \mathcal{F}(u,v_h)=\delta\mathcal{F}(u,v). \] \end{proof} We now want to use the Euler-Lagrange equation~\eqref{eq: wel} to prove that if $f$ belongs to $L^p(\Omega)$ with $p>n$, and if $u$ is a minimizer to problem~\eqref{problemar}, then $u$ belongs to $L^{\infty}(\R^n)$. In order to prove this we need the following lemma. \begin{lemma} \label{lemma: poincare} Let $m$ be a positive real number.
There exists a positive constant $C=C(m,\beta,n)$ such that, for every function $v\in\sbvv$ with $\Ln(\set{v>0})\leq m$, \[\left(\int_{\R^n} v^{2\cdot1^*}\,d\Ln\right)^\frac{1}{1^*}\leq C\left[\int_{\R^n} \abs{\nabla v}^2\,d\Ln+\beta\int_{J_v}\left(\overline{v}^2+\underline{v}^2\right)\,d\Hn\right],\] where $1^*=\dfrac{n}{n-1}$ is the Sobolev conjugate of $1$. \end{lemma} \begin{proof} The classical embedding of $BV(\R^n)$ into $L^{1^*}(\R^n)$ ensures that \[\begin{split}\left(\int_{\R^n} v^{2\cdot1^*}\,d\Ln\right)^\frac{1}{1^*}\leq& C(n) \abs*{D v^2}(\R^n)\\[5 pt] =&C(n)\left[\int_{\R^n}2v\abs{\nabla v}\,d\Ln+\int_{J_v}\left(\overline{v}^2+\underline{v}^2\right)\,d\Hn\right].\end{split}\] For every $\eps>0$, using Young's and Hölder's inequalities, we have \[\begin{split}\left(\int_{\R^n} v^{2\cdot1^*}\,d\Ln\right)^\frac{1}{1^*}\leq& \frac{C(n)}{\eps}\int_{\R^n}v^2\,d\Ln+\\[3 pt] &+C(n)\left[\eps\int_{\R^n}\abs{\nabla v}^2\,d\Ln+\int_{J_v}\left(\overline{v}^2+\underline{v}^2\right)\,d\Hn\right]\\[5 pt] \leq& \frac{C(n)\,m^{\frac{1}{n}}}{\eps}\left(\int_{\R^n} v^{2\cdot1^*}\,d\Ln\right)^\frac{1}{1^*}+\\[3 pt] &+C(n)\left[\eps\int_{\R^n}\abs{\nabla v}^2\,d\Ln+\int_{J_v}\left(\overline{v}^2+\underline{v}^2\right)\,d\Hn\right]. \end{split}\] Setting $\eps=2C(n)m^{\frac{1}{n}}$, we can find two constants $C(m,n),C(m,\beta,n)>0$ such that \[\begin{split}\left(\int_{\R^n} v^{2\cdot1^*}\,d\Ln\right)^\frac{1}{1^*}&\leq C(m,n)\left[\int_{\R^n} \abs{\nabla v}^2\,d\Ln+\int_{J_v}\left(\overline{v}^2+\underline{v}^2\right)\,d\Hn\right]\\[5 pt] &\leq C(m,\beta,n)\left[\int_{\R^n} \abs{\nabla v}^2\,d\Ln+\beta\int_{J_v}\left(\overline{v}^2+\underline{v}^2\right)\,d\Hn\right].\end{split}\] \end{proof} We refer to \cite{stampacchia} for the following lemma.
\begin{lemma}\label{lemma: stampacchia} Let $g: [0,+\infty) \to [0,+\infty)$ be a decreasing function and assume that there exist constants $C,\alpha >0$ and $\theta>1$ such that for every $h>k\ge 0$, \[ g(h)\le C(h-k)^{-\alpha}g(k)^\theta . \] Then there exists a constant $h_0>0$ such that \[ g(h)=0 \qquad \forall h\ge h_0. \] In particular one can take \[ h_0=C^{\frac{1}{\alpha}}g(0)^{\frac{\theta -1}{\alpha}}2^{\frac{\theta}{\theta-1}}. \] \end{lemma} \begin{prop}[$L^\infty$ bound] \label{teor: linftybound} Let $n\ge2$ and assume that, if $n=2$, condition~\eqref{eq:cond n=2} holds true. Let $f\in L^p(\Omega)$, with $p>n$. Then there exists a constant $C=C(\Omega,f,p,\beta,C_0)>0$ such that if $u$ is a minimizer to problem \eqref{problemar}, then \[ \norma{u}_{\infty}\le C. \] \end{prop} \begin{proof} Let $\gamma^{\pm}=\gamma_{J_u}^\pm$. For every $\varphi,\psi\in\sbvv$ satisfying $J_\varphi,J_\psi\subseteq J_u$, define \[ a(\varphi, \psi)=\int_{\R^n}\nabla \varphi\cdot\nabla\psi\, d\Ln+\beta\int_{J_u}\left[\gamma^+(\varphi)\gamma^+(\psi)+\gamma^-(\varphi)\gamma^-(\psi)\right]\, d\Hn. \] For every $v$ satisfying the assumptions of \autoref{teor: euler-lagrange}, it holds that \[ a(u,v)=\int_{\Omega}fv\,d\Ln. \] In particular, let us fix $k\in \R^+$ and define \[ \varphi_k(x)=\begin{cases} u(x)-k &\text{ if }u(x)\ge k,\\ 0 &\text{ if }u(x)<k; \end{cases} \] then \[ \gamma^+(\varphi_k)(x)=\begin{cases} \overline u(x)-k &\text{ if }\overline u(x)\ge k,\\ 0 &\text{ if }\overline u(x)<k, \end{cases} \] and analogously for $\gamma^-(\varphi_k)$. Furthermore, let us define \[ \mu(k)=\Ln(\set{u>k}). \] We want to prove that $\mu(k)=0$ for sufficiently large $k$.
From \autoref{teor: euler-lagrange}, we have \begin{equation} \label{eq: ela} a(u,\varphi_k)=\int_{\Omega}f\varphi_k\,d\Ln, \end{equation} and we can observe that \[\begin{split} a(u,\varphi_k)&=\int_{\set{u>k}}\abs{\nabla u}^2\,d\Ln+\beta\int_{J_u\cap\set{u>k}}\left[\overline{u}(\overline{u}-k)+\underline{u}(\underline{u}-k)\right]\,d\Hn \\ &\geq\int_{\set{u>k}}\abs{\nabla u}^2\,d\Ln+\beta\int_{J_u\cap\set{u>k}}\left[(\overline{u}-k)^2+(\underline{u}-k)^2\right]\,d\Hn\\[5 pt]&=a(\varphi_k,\varphi_k). \end{split}\] Moreover, by minimality, $\mathcal{F}(u)\le \mathcal{F}(0)=0$ and, by \autoref{oss: ominsupp}, $\Omega\subseteq\set{u>0}$. Therefore, \eqref{eq:stimasupporto} holds true and we can apply \autoref{lemma: poincare}, obtaining a constant $C=C(\Omega,f,\beta,C_0)>0$ such that \begin{equation} \label{eq: bounda} \int_{\Omega} f\varphi_k\,d\Ln =a(u,\varphi_k)\geq a(\varphi_k,\varphi_k)\geq C\norma{\varphi_k}_{2\cdot 1^*}^2. \end{equation} On the other hand, \begin{equation}\label{eq: fbound} \begin{split} \int_\Omega f\varphi_k\,d\Ln=\int_{\Omega\cap\set{u>k}}f(u-k)\,d\Ln&\leq\left(\int_{\Omega\cap\set{u>k}}f^{\frac{2n}{n+1}}\,d\Ln\right)^{\frac{n+1}{2n}}\norma{\varphi_k}_{2\cdot 1^*}\\[5 pt] &\leq\norma{f}_{p,\Omega}\,\norma{\varphi_k}_{2\cdot 1^*}\:\mu(k)^\frac{n+1}{2n\sigma'}, \end{split} \end{equation} where \[ \sigma=\frac{p(n+1)}{2n}>1, \] since $p>n$.
Combining~\eqref{eq: bounda} and~\eqref{eq: fbound}, we have \begin{equation}\label{eq: prebound}\norma{\varphi_k}_{2\cdot 1^*}\leq C\norma{f}_{p,\Omega}\:\mu(k)^\frac{n+1}{2n\sigma'}.\end{equation} Let $h>k$; then \[\begin{split}(h-k)^{2\cdot 1^*}\mu(h)=&\int_{\set{u>h}}(h-k)^{2\cdot 1^*}\,d\Ln\\ \leq&\int_{\set{u>h}}(u-k)^{2\cdot 1^*}\,d\Ln\\ \leq&\int_{\set{u>k}}(u-k)^{2\cdot 1^*}\,d\Ln=\norma{\varphi_k}_{2\cdot 1^*}^{2\cdot 1^*}.\end{split}\] Using~\eqref{eq: prebound} and the previous inequality, we have \[\mu(h)\leq C (h-k)^{-2\cdot 1^*}\mu(k)^\frac{n+1}{(n-1)\sigma'}.\] Since $p>n$, we have $\sigma'<(n+1)/(n-1)$. By \autoref{lemma: stampacchia}, it follows that $\mu(h)=0$ for all $h\geq h_0$ with $h_0=h_0(\Omega, f,\beta,C_0)>0$, which implies \[\norma{u}_{\infty}\leq h_0.\] \end{proof} \begin{proof}[Proof of \autoref{teor: mainth1}] The result is obtained by combining \autoref{teor: existence} and \autoref{teor: linftybound}. \end{proof} \section{Density estimates for the jump set}\label{estimates} In this section we prove \autoref{teor: mainth2}: in \autoref{teor: lowerbound} we prove the lower bound for minimizers to problem \eqref{problemar}; in \autoref{teor: density1} and \autoref{teor: density2} we prove the density estimates for the jump set of a minimizer to problem \eqref{problemar}. In this section we will assume that $\Omega\subseteq\R^n$ is an open bounded set with $C^{1,1}$ boundary, that $f\in L^p(\Omega)$, with $p>n$, is a positive function, and that $\beta, C_0$ are positive constants. We consider the energy functional $\mathcal{F}$ defined in \eqref{eq: fun 2}. In order to show that if $u$ is a minimizer to problem~\eqref{problemar} then $u$ is bounded away from 0, we will first prove that there exists a positive constant $\delta$ such that $u>\delta$ almost everywhere in $\Omega$, and then we will show that this implies the existence of a positive constant $\delta_0$ such that $u>\delta_0$ almost everywhere in the set $\set{u>0}$.
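As a side remark (not part of the proofs), the mechanism behind \autoref{lemma: stampacchia} can be reproduced numerically: iterating the extremal case of its hypothesis on the dyadic levels $k_j=h_0(1-2^{-j})$ drives the bound on $g(k_j)$ to zero precisely when $h_0$ is chosen as in the lemma. The constants in the sketch below are arbitrary sample values.

```python
# Numerical illustration of the Stampacchia iteration: in the extremal case
# g(h) = C (h - k)^(-alpha) g(k)^theta, the bounds d_j for g at the dyadic
# levels k_j = h0 (1 - 2^(-j)) collapse to zero once h0 is chosen as
# h0 = C^(1/alpha) g(0)^((theta-1)/alpha) 2^(theta/(theta-1)).
# All constants here are arbitrary sample values, not taken from the paper.
C, alpha, theta, g0 = 1.0, 2.0, 2.0, 1.0
h0 = C ** (1 / alpha) * g0 ** ((theta - 1) / alpha) * 2 ** (theta / (theta - 1))

d = g0
bounds = [d]
for j in range(40):
    gap = h0 * 2.0 ** (-(j + 1))          # k_{j+1} - k_j
    d = C * gap ** (-alpha) * d ** theta  # extremal case of the hypothesis
    bounds.append(d)
# with these sample constants the bounds decay like 4^(-j),
# so g vanishes at and beyond h0
```

Since the sequence of bounds decreases geometrically, $g(h_0)=\lim_j d_j=0$, matching the conclusion of the lemma.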
In the following we will use the notation $U_t:=\Set{u<t}\cap\Omega$. \begin{oss} Let $u$ be a minimizer to~\eqref{problemar}; by \autoref{oss: ominsupp}, $u$ is a solution to \[\begin{cases} -\Delta u=f & \text{in }\Omega,\\ u\ge0 & \text{on }\partial\Omega. \end{cases}\] Let $u_0\in W^{1,2}_0(\Omega)$ be the solution to the following boundary value problem \begin{equation}\begin{cases} \label{eq: elomega} -\Delta u_0=f & \text{in }\Omega,\\ u_0=0 & \text{on }\partial\Omega. \end{cases}\end{equation} Then, by the maximum principle, \[ u\geq u_0\quad\text{in $\Omega\subseteq\set{u>0}$}\quad\text{and}\quad \set{u<t}\cap\Omega=U_t\subseteq\set{u_0<t}\cap\Omega. \] \end{oss} \begin{lemma} \label{lemma: stimasottolivelli} There exist two positive constants $t_0=t_0(\Omega,f)$ and $C=C(\Omega, f)$ such that if $u$ is a minimizer to~\eqref{problemar} then for every $t\in[0,t_0]$ it holds that \begin{equation} \label{eq: stimasottolivelli} \Ln(U_t)\le C\, t. \end{equation} \begin{proof} Let $u_0$ be the solution to \eqref{eq: elomega}, and fix $\varepsilon>0$ such that the set \[ \Omega_\eps=\Set{x\in\Omega | d(x,\partial\Omega)>\varepsilon} \] is not empty. Since $u_0$ is superharmonic and non-negative in $\Omega$, by the maximum principle we have that \[\alpha=\inf_{\Omega_\eps}u_0>0.\] Then $u_0$ satisfies \[ \begin{cases} -\Delta u_0=f &\text{in } \Omega,\\ u_0 = 0 &\text{on } \partial\Omega, \\ u_0 \ge \alpha &\text{on } \partial\Omega_\eps. \end{cases} \] Therefore, if we consider the solution $v$ to the following boundary value problem \[ \begin{cases} -\Delta v=0 &\text{in } \Omega\setminus\overline{\Omega_\eps},\\ v = 0 &\text{on } \partial\Omega, \\ v = \alpha &\text{on } \partial\Omega_\eps, \end{cases} \] extended by $\alpha$ in $\overline{\Omega_\eps}$, we have that $u\ge u_0\ge v$ almost everywhere in $\Omega$ and \[ \Set{u<t}\cap\Omega=U_t \subseteq\Set{u_0<t}\cap\Omega\subseteq\Set{v<t}\cap\Omega. \] The Hopf lemma implies that there exists a constant $\tau=\tau(\Omega)>0$ such that \[ \frac{\partial v}{\partial \nu}<-\tau\qquad \text{on }\partial\Omega.
\] Let $x\in\bar\Omega$, and let $x_0$ be a projection of $x$ onto the boundary $\partial\Omega$, so that \[ \abs{x-x_0}=d(x,\partial\Omega), \qquad \qquad \frac{x-x_0}{\abs{x-x_0}}=-\nu_\Omega(x_0), \] where $\nu_\Omega$ denotes the exterior normal to $\partial\Omega$. We can write \begin{equation}\label{eq: lowboundboundary}\begin{split} v(x)&=\underbrace{v(x_0)}_{=0}+\nabla v(x_0)\cdot(x-x_0)+o\left(\abs{x-x_0}\right)\\[7 pt] &=-\frac{\partial v}{\partial\nu}(x_0)\abs{x-x_0}+o\left(\abs{x-x_0}\right) \\[5 pt] &\ge \tau\abs{x-x_0}+o\left(\abs{x-x_0}\right)\\[5 pt] &> \frac{\tau}{2}\abs{x-x_0}=\frac{\tau}{2} d(x,\partial\Omega) \end{split}\end{equation} for every $x$ such that $d(x,\partial\Omega)<\sigma_0$, for a suitable $\sigma_0=\sigma_0(\Omega,f)>0$. Notice that if $\bar{x}\in\overline{\Omega}$ and $\lim_{x\to \bar{x}}v(x)=0$ then necessarily $\bar{x}\in\partial\Omega$. Therefore, there exists a $t_0=t_0(\Omega,f)>0$ such that $v(x)<t_0$ implies $d(x,\partial\Omega)<\sigma_0$. Consequently, if $t<t_0$, we have that \[ \set{v<t}\subseteq\set{d(x,\partial\Omega)<\sigma_0}, \] and by \eqref{eq: lowboundboundary}, we get \[ \Ln\left(U_t\right)\le \Ln\left(\Set{v<t}\right)\le \Ln\left(\Set{x\in\Omega | d(x,\partial\Omega)\le \frac{2}{\tau }\, t}\right). \] Since $\Omega$ is $C^{1,1}$, by \autoref{teor: volumedistanza}, we conclude the proof. \end{proof} \end{lemma} \begin{lemma}\label{lemma: diffineq} Let $g: [0,t_1] \to [0,+\infty)$ be an increasing, absolutely continuous function such that \begin{equation} \label{eq: diffineq} g(t)\le C t^\alpha \left(g'(t)\right)^\sigma \qquad \forall t\in[0,t_1], \end{equation} with $C>0$ and $\alpha>\sigma>1$. Then there exists $t_0>0$ such that \[ g(t)=0 \qquad \forall t\le t_0. \] Precisely, one can take \[ t_0=\left(\frac{C^{\frac{1}{\sigma}}(\alpha-\sigma)}{\sigma-1}g(t_1)^{\frac{\sigma-1}{\sigma}}+t_1^{\frac{\sigma-\alpha}{\sigma}}\right)^{\frac{\sigma}{\sigma-\alpha}}. \] \end{lemma} \begin{proof} Assume by contradiction that $g(t)>0$ for every $t>0$.
Inequality \eqref{eq: diffineq} implies \[ \frac{g'}{g^{\frac{1}{\sigma}}}\ge \frac{1}{C^{\frac{1}{\sigma}}}\,t^{-\frac{\alpha}{\sigma}}. \] Integrating between $t_0$ and $t_1$, we have \[ \frac{\sigma}{\sigma-1}\left(g(t_1)^{\frac{\sigma-1}{\sigma}}-g(t_0)^{\frac{\sigma-1}{\sigma}}\right)\ge \frac{1}{C^{\frac{1}{\sigma}}}\frac{\sigma}{\sigma-\alpha}\left(t_1^{\frac{\sigma-\alpha}{\sigma}}-t_0^{\frac{\sigma-\alpha}{\sigma}}\right). \] Since $\alpha>\sigma>1$, we have \[ 0\le g(t_0)^{\frac{\sigma-1}{\sigma}}\le \frac{\sigma-1}{C^{\frac{1}{\sigma}}\left(\alpha-\sigma\right)}\left(t_1^{\frac{\sigma-\alpha}{\sigma}}-t_0^{\frac{\sigma-\alpha}{\sigma}}\right)+g(t_1)^{\frac{\sigma-1}{\sigma}}, \] which is a contradiction if \[ t_0\le \left(\frac{C^{\frac{1}{\sigma}}(\alpha-\sigma)}{\sigma-1}g(t_1)^{\frac{\sigma-1}{\sigma}}+t_1^{\frac{\sigma-\alpha}{\sigma}}\right)^{\frac{\sigma}{\sigma-\alpha}}. \] \end{proof} \begin{oss}\label{oss:deltanou} Let $g$ be as in \autoref{lemma: diffineq} and assume that $g(t_1)\le K$; then $g(t)=0$ for all $0<t<\tilde{t}$, where \[\tilde{t}=\left(\frac{C^{\frac{1}{\sigma}}(\alpha-\sigma)}{\sigma-1}K^{\frac{\sigma-1}{\sigma}}+t_1^{\frac{\sigma-\alpha}{\sigma}}\right)^{\frac{\sigma}{\sigma-\alpha}}. \] \end{oss} We now have the tools to prove the lower bound inside $\Omega$. \begin{prop}\label{teor: bound Omega} Let $n\ge2$ and assume that, if $n=2$, condition~\eqref{eq:cond n=2} holds true. Then there exists a positive constant $\delta=\delta(\Omega, f,p,\beta,C_0)>0$ such that if $u$ is a minimizer to problem \eqref{problemar} then \[ u\ge\delta\] almost everywhere in $\Omega$. \end{prop} \begin{proof} Assume first that $\Omega$ is connected and define the function \[ u_t(x)=\begin{cases} \max\set{u,t} &\text{in }\Omega, \\ u &\text{in }\R^n\setminus\Omega. \end{cases} \] Recalling that $U_t=\set{u<t}\cap\Omega$, we have \[ J_{u_t}\setminus\partial^*U_t = J_u\setminus\partial^*U_t, \] and on this set $\underline{u_t}=\underline{u}$ and $\overline{u_t}=\overline{u}$.
Then, by minimality of $u$, and using the fact that $J_{u_t}\cap\partial^* U_t\subseteq\partial\Omega$, we get \[ \begin{split} 0&\ge \mathcal{F}(u)-\mathcal{F}(u_t)\\[7 pt] &=\int_{U_t}\abs{\nabla u}^2\, d\Ln -2\int_{U_t}f(u-t)\,d\Ln+\beta\int_{\partial^*U_t\cap J_u}\left(\underline{u}^2+\overline{u}^2\right)\,d\Hn +\\[5 pt] &\hphantom{=}-\beta\int_{J_{u_t}\cap\partial^* U_t\cap J_u}\left[t^2+(\gamma_{\partial\Omega}^+(u))^2\right]\,d\Hn -\beta\int_{\left(J_{u_t}\cap\partial^*U_t\right)\setminus J_u} \left(t^2+u^2\right)\,d\Hn \\[7 pt] &\ge \int_{U_t}\abs{\nabla u}^2\, d\Ln-2\beta t^2 \Hn\left(\partial^*U_t\cap\partial\Omega\right), \end{split} \] where we ignored all the non-negative terms except the integral of $\abs{\nabla u}^2$, and we used that $u\le t$ on $\partial^* U_t\setminus J_u$. By \autoref{lemma: stimasottolivelli}, we can choose $t$ small enough to have $\Ln(U_t)\le \Ln(\Omega)/2$; then, applying the isoperimetric inequality in \autoref{teor: isopint} to the set $U_t$, we get \begin{equation} \label{eq: stimagrad} \int_{U_t}\abs{\nabla u}^2\, d\Ln\le 2\beta \,C\, t^2 \, P(U_t;\Omega). \end{equation} Let us define \[ p(t)=P(U_t;\Omega), \] and consider the absolutely continuous function \[ g(t)=\int_{U_t}u\,\abs{\nabla u}\, d\Ln=\int_0^t sp(s)\, ds. \] By minimality of $u$ we can apply the a priori estimates \eqref{eq: variazione u} to prove that $g$ is bounded uniformly with respect to the minimizer, i.e.\ there exists $K=K(\Omega,f,\beta,C_0)>0$ such that $g(t)\le K$ for all $t>0$. Using the Hölder inequality and the estimate \eqref{eq: stimagrad}, we have \[ g(t)\le\left(\int_{U_t}u^2\, d\Ln\right)^\frac{1}{2}\left(\int_{U_t}\abs{\nabla u}^2\, d\Ln\right)^\frac{1}{2}\le \sqrt{2\beta C}\, t\,\Ln(U_t)^\frac{1}{2}(t^2p(t))^\frac{1}{2}. \] Fix $\varepsilon\in(0,1)$.
Then we can write $\Ln(U_t)=\Ln(U_t)^{\varepsilon}\,\Ln(U_t)^{1-\varepsilon}$, and by \autoref{lemma: stimasottolivelli} there exists a constant $C=C(\Omega, f,\beta)>0$ such that \[ g(t)\le C\, t^{2+\frac{1-\varepsilon}{2}}\,\Ln(U_t)^{\frac{\varepsilon}{2}}\, p(t)^\frac{1}{2}. \] By the relative isoperimetric inequality in \autoref{teor: relisop}, we can estimate \[ \Ln(U_t)^\frac{\eps}{2}\le C(\Omega, n) p(t)^{\frac{\varepsilon n}{2(n-1)}}, \] and, noticing that $p(t)=g'(t)/t$, we get \[ g(t)\le C t^\alpha (g'(t))^\sigma, \] where \[ \alpha=2-\frac{\varepsilon}{2}\left(1+\frac{n}{n-1}\right), \qquad \qquad \sigma=\frac{1}{2}+\frac{\varepsilon}{2}\frac{n}{n-1}. \] In particular, if we choose \[ \varepsilon\in\left(\frac{n-1}{n},\frac{3n-3}{3n-1}\right), \] we have that $\alpha>\sigma>1$, and then, using \autoref{lemma: diffineq} and \autoref{oss:deltanou}, there exists a $\delta=\delta(\Omega, f,p,\beta,C_0)>0$ such that $g(t)=0$ for every $t<\delta$. Since $g\equiv 0$ on $(0,\delta)$ forces $p(t)=0$ for almost every $t\in(0,\delta)$, the relative isoperimetric inequality gives $\Ln(\set{u<t}\cap\Omega)=0$ for every $t<\delta$, hence \[u\ge\delta\] almost everywhere in $\Omega$. When $\Omega$ is not connected, we write \[ \Omega=\Omega_1\cup\dots\cup\Omega_N, \] with $\Omega_i$ pairwise disjoint connected open sets. Using $u_t$ as the function $u$ truncated inside a single $\Omega_i$, we find constants $\delta_i>0$ such that \[ u(x)\ge \delta_i \] almost everywhere in $\Omega_i$. Therefore, choosing $ \delta=\min\set{\delta_1,\dots,\delta_N}$, we have $u\ge\delta$ almost everywhere in $\Omega$. \end{proof} Finally, following the approach in \cite{CK}, we have \begin{prop}[Lower Bound] \label{teor: lowerbound} Let $n\ge2$ and assume that, if $n=2$, condition~\eqref{eq:cond n=2} holds true. Then there exists a positive constant $\delta_0=\delta_0(\Omega,f,p,\beta,C_0)$ such that if $u$ is a minimizer to problem~\eqref{problemar} then \[u\ge\delta_0\] almost everywhere in $\Set{u>0}$. \end{prop} \begin{proof} Let $\delta$ be the constant in \autoref{teor: bound Omega}.
For every $0<t\le\delta$ let us define the absolutely continuous function \[h(t)=\int_{\set{u\leq t}\setminus J_u}u\abs{\nabla u}\,d\Ln=\int_0^t s P(\set{u>s};\R^n\setminus J_u)\,ds.\] By minimality of $u$ we can apply the a priori estimates \eqref{eq: variazione u} to prove that $h$ is bounded uniformly with respect to the minimizer, i.e.\ there exists $K=K(\Omega,f,\beta,C_0)>0$ such that $h(t)\le K$ for all $t>0$. We will show that $h$ satisfies a differential inequality. For any $0<t<\delta$, let us consider $u^t=u\chi_{\set{u>t}}$, where $\chi_{\set{u>t}}$ is the characteristic function of the set $\set{u>t}$, as a competitor for $u$. We observe that, by \autoref{teor: bound Omega}, $\Omega\subseteq\set{u>t}$, so we have that \[\begin{split} 0 \ge& \mathcal{F}(u)-\mathcal{F}(u^t)\\[5 pt] =&\int_{\set{u\le t}\setminus J_u} \abs{\nabla u}^2\,d\Ln+\beta\int_{J_u\cap\set{u>t}^{(0)}}\left( \underline{u}^2+\overline{u}^2\right)\,d\Hn +\\[5 pt] &+\beta\int_{J_u\cap\partial^*\set{u>t}} \underline{u}^2\,d\Hn -\beta\int_{\partial^*\set{u>t}\setminus J_u}{u}^2\,d\Hn +\\[5 pt] &+C_0\Ln\left(\Set{0<u\le t}\right).\end{split}\] Rearranging the terms, \begin{equation} \label{eq: gest0} \begin{split} &\int_{\set{u\le t}\setminus J_u} \abs{\nabla u}^2\,d\Ln+\beta\int_{J_u\cap\set{u>t}^{(0)}} \left(\underline{u}^2+\overline{u}^2\right)\,d\Hn +\\[5 pt] &+\beta\int_{J_u\cap\partial^*\set{u>t}} \underline{u}^2\,d\Hn +C_0\Ln\left(\Set{0<u\le t}\right)\\[ 5 pt] &\le \beta t^2 P(\set{u>t};\R^n\setminus J_u)=\beta t h'(t). \end{split} \end{equation} On the other hand, using Hölder's inequality, we have \begin{equation} \label{eq: gest1} h(t)\le \left(\int_{\set{u\le t}} \abs{\nabla u}^2\,d\Ln\right)^{\frac{1}{2}}\Ln\left(\Set{0<u\le t}\right)^{\frac{1}{2n}}\left(\int_{\set{u\le t}} u^{2\cdot 1^*}\,d\Ln\right)^{\frac{1}{2\cdot 1^*} }.
\end{equation} The classical embedding of $\bv$ into $L^{1^*}$, applied to $u^2\chi_{\set{u\leq t}}$, ensures that \[\left(\int_{\set{u\le t}} u^{2\cdot 1^*}\,d\Ln\right)^{\frac{1}{1^*} }\le C(n) \abs*{D \big(u^2\chi_{\set{u\le t}}\big)}(\R^n),\] and, using \eqref{eq: gest0}, \begin{equation} \label{eq: gest2} \begin{split}\abs*{D \big(u^2\chi_{\set{u\le t}}\big)}(\R^n) =&2\int_{\set{u\le t}} u\abs{\nabla u}\,d\Ln+\int_{J_u\cap\set{u>t}^{(0)}}\left( \underline{u}^2+\overline{u}^2\right)\,d\Hn +\\[5 pt]& + \int_{J_u\cap\partial^*\set{u>t}} \underline{u}^2\,d\Hn +\int_{\partial^*\set{u>t}\setminus J_u}{u}^2\,d\Hn\\[5 pt] \le& 2t\left(\Ln\left(\Set{0<u\le t}\right)\int_{\set{u\le t}\setminus J_u} \abs{\nabla u}^2\,d\Ln\right)^{\frac{1}{2}}+3t h'(t) \\[5 pt] \le& \left(2\frac{\delta\beta}{\sqrt{C_0}}+3\right)th'(t). \end{split} \end{equation} Therefore, combining \eqref{eq: gest1}, \eqref{eq: gest0}, and \eqref{eq: gest2}, we have \begin{equation}\label{eq: gdiff}h(t)\le C_3\left(t h'(t)\right)^{1+\frac{1}{2n}},\end{equation} where \[C_3=\beta^\frac{1}{2}\left(\frac{\beta}{C_0}\right)^\frac{1}{2n}C(n)^\frac{1}{2}\left(2\frac{\delta\beta}{\sqrt{C_0}}+3\right)^\frac{1}{2}.\] By~\eqref{eq: gdiff} we now want to show that there exists $\delta_0=\delta_0(\Omega,f,p,\beta,C_0)>0$ such that $h(t)=0$ for every $0\le t<\delta_0$. Indeed, assume by contradiction that $h(t)>0$ for every $0<t\le \delta$. We have \[\frac{h'(t)}{h(t)^{\frac{2n}{2n+1}}}\ge \frac{C_3^{-\frac{2n}{2n+1}}}{t}.\] Integrating from $t_0>0$ to $\delta$, we get \[\left(h(\delta)^{\frac{1}{2n+1}}-h(t_0)^{\frac{1}{2n+1}}\right)\ge C_4 \log\left(\frac{\delta}{t_0}\right),\] where \[C_4=\dfrac{C_3^{-\frac{2n}{2n+1}}}{2n+1}.\] Then \[h(t_0)^{\frac{1}{2n+1}}\le h(\delta)^{\frac{1}{2n+1}} + C_4\log\left(\frac{t_0}{\delta}\right).\] Finally, for any \[ 0<t_0\le \tilde{\delta}=\delta\exp\left(-h(\delta)^{\frac{1}{2n+1}}/C_4\right), \] we have $h(t_0)\le 0$, which is a contradiction.
Then, setting $\delta_0=\delta\exp\left(-K^{\frac{1}{2n+1}}/C_4\right)\le\tilde{\delta}$, we conclude that $h(t)=0$ for any $0<t<\delta_0$, from which we have \[u\ge \delta_0\] almost everywhere in $\set{u>0}$. \end{proof} \begin{oss} From \autoref{teor: lowerbound}, if $u$ is a minimizer to problem~\eqref{problemar}, we have that \begin{equation}\label{eq: incljump}\partial^*\set{u>0}\subseteq J_u\subseteq K_u.\end{equation} Indeed, on $\partial^*\set{u>0}$ we have that, by definition, $\underline{u}=0$ and that, since $u\ge\delta_0$ $\Ln$-a.e. in $\set{u>0}$, $\overline{u}\ge\delta_0$. \end{oss} \begin{prop}[Upper Density Estimate] \label{teor: density1} Let $n\ge2$ and assume that, if $n=2$, condition~\eqref{eq:cond n=2} holds true. Then there exist positive constants $C=C(\Omega,f,p,\beta, C_0)$, ${c=c(\Omega,f,p,\beta,C_0)}$ and $\delta_1=\delta_1(\Omega,f,p,\beta,C_0)$ such that if $u$ is a minimizer to problem \eqref{problemar} then for every $B_r(x)$ such that $B_r(x)\cap\Omega=\emptyset$, we have \begin{enumerate}[label= (\alph*)] \item For every $x\in\R^n\setminus\Omega$, \begin{equation} \label{eq: updensity} \Hn(J_u\cap B_r(x))\le Cr^{n-1}; \end{equation} \item For every $x\in K_u$, \begin{equation} \label{eq: lowNdensity} \Ln(B_r(x)\cap\set{u>0})\ge cr^{n}; \end{equation} \item The function $u$ has bounded support, namely \[ \set{u>0}\subseteq B_{1/\delta_1}. 
\] \end{enumerate} \end{prop} \begin{proof} This theorem is a consequence of \autoref{teor: lowerbound}, since we immediately have\marcomm{\phantom{-}\\ \it (a)} \[ \int_{J_u\cap B_r(x)}\left(\underline{u}^2+\overline{u}^2\right)\,d\Hn\ge\delta_0^2\Hn(J_u\cap B_r(x)), \] and by minimality of $u$ we have \[ \begin{split} 0&\ge\mathcal{F}(u)-\mathcal{F}(u\chi_{\R^n\setminus B_r(x)})\\ &\ge\int_{J_u\cap B_r(x)}\left(\underline{u}^2+\overline{u}^2\right)\,d\Hn-\int_{\partial B_r(x)\setminus J_u}{u}^2\,d\Hn\\ &\ge\int_{J_u\cap B_r(x)}\left(\underline{u}^2+\overline{u}^2\right)\,d\Hn-\int_{\partial B_r(x)\cap\set{u>0}^{(1)}}\left(\underline{u}^2+\overline{u}^2\right)\,d\Hn \\ &\ge\int_{J_u\cap B_r(x)}\left(\underline{u}^2+\overline{u}^2\right)\,d\Hn-2\norma{u}_\infty^2\Hn\left(\partial B_r(x)\cap\set{u>0}^{(1)}\right), \end{split} \] where, in the second inequality, we have used~\eqref{eq: incljump}. Thus we have \begin{equation} \label{eq: updensity1} \Hn\left(J_u\cap B_r(x)\right)\le\frac{2\norma{u}_\infty^2}{\delta_0^2}\Hn(\partial B_r(x)\cap\set{u>0}^{(1)})\le C r^{n-1}, \end{equation} where $C=C(\Omega,f,p,\beta,C_0)>0$. We now\marcomm{\it (b)} want to use the estimate \eqref{eq: updensity} together with the relative isoperimetric inequality in order to get a differential inequality for the volume of $B_r(x)\cap\set{u>0}^{(1)}$. Let $x\in K_u$; then for almost every $r$ we have \[ \begin{split} 0<V(r):=\Ln(B_r(x)\cap\set{u>0}^{(1)})&\le k\,P(B_r(x)\cap\set{u>0}^{(1)})^{\frac{n}{n-1}}\\ &\le k\, P(B_r(x);\set{u>0}^{(1)})^{\frac{n}{n-1}}, \end{split} \] where $k=k(\Omega,f,p,\beta,C_0)>0$, and in the last inequality we used that \eqref{eq: incljump} and \eqref{eq: updensity1} imply \[ \begin{split} P\left(B_r(x)\cap\set{u>0}^{(1)}\right)&\le \Hn\left(\partial B_r(x)\cap\set{u>0}^{(1)}\right)+\Hn\left(J_u\cap B_r(x)\right) \\ &\le \left(1+\frac{2\norma{u}_\infty^2}{\delta_0^2}\right) P(B_r(x);\set{u>0}^{(1)}).
\end{split} \] Then we have \[ \frac{V'(r)}{V(r)^{\frac{n-1}{n}}}\ge \frac{1}{k}, \] which implies \[ \Ln(B_r(x)\cap\set{u>0}^{(1)})\ge c\,r^{n}. \] Finally, let\marcomm{\it (c)} $x\in K_u$ be such that $d(x,\partial\Omega)\ge1/\delta_1$. From \eqref{eq: lowNdensity}, noticing that $\mathcal{F}(u)\le\mathcal{F}(0)=0$, we have that \[ c\,\delta_1^{-n}\le\Ln(\set{u>0}\setminus\Omega)\le\frac{2\norma{u}_\infty}{C_0}\int_\Omega f\,d\Ln, \] which is a contradiction if $\delta_1$ is sufficiently small. The claim then follows from \eqref{eq: incljump}. \end{proof} Finally, we have \begin{prop}[Lower Density Estimate] \label{teor: density2} Let $n\ge2$ and assume that, if $n=2$, condition~\eqref{eq:cond n=2} holds true. Then there exists a positive constant $c=c(\Omega,f,p,\beta,C_0)$ such that if $u$ is a minimizer to problem~\eqref{problemar} then \begin{enumerate} \item For any $x\in K_u$ and $B_r(x)\subseteq\R^n\setminus\Omega$, \[ \Hn(J_u\cap B_r(x))\ge c r^{n-1}; \] \item $J_u$ is essentially closed, namely \[ \Hn(K_u\setminus J_u)=0. \] \end{enumerate} \end{prop} The proof of \autoref{teor: density2} relies on classical techniques used in \cite{Giorgi} to prove density estimates for the jump set of almost-quasi minimizers of the Mumford-Shah functional. We refer to \cite[Theorem 5.1]{CK} and \cite[Corollary 5.4]{CK} for the details of the proof. \begin{proof}[Proof of \autoref{teor: mainth2}] The result is obtained by combining \autoref{teor: lowerbound}, \autoref{teor: density1}, and \autoref{teor: density2}. \end{proof} \begin{oss} Let $u$ be a minimizer to~\eqref{problemar} and let $A=\set{\overline{u}>0}\setminus K_u$; then the boundary of $A$ coincides with $K_u$. First, assume by contradiction that there exists $x\in (\partial A)\setminus K_u$; then $u$ is harmonic in a small ball of radius $r$ centered at $x$.
Therefore, since \[ \set{u>0}\cap B_r(x)\ne \emptyset, \] necessarily $u>0$ in the entire ball, and then $x\notin \partial A$, which is a contradiction. In other words, \[ \partial A\subseteq K_u. \] By the same argument we also have that $A$ is open; moreover $J_u\subseteq \partial A$, so that \[ K_u\subseteq \partial A. \] In particular, the pair $(A,u)$ is a minimizer for the functional \[ \mathcal{F}(E,v)=\int_E\abs{\nabla v}^2\,d\Ln-2\int_{\Omega}fv\,d\Ln+\int_{\partial E}\left(\underline{v}^2+\overline{v}^2\right)\,d\Hn+C_0\Ln(E\setminus\Omega) \] over all pairs $(E,v)$ with $E$ an open set of finite perimeter containing $\Omega$ and $v\in W^{1,2}(E)$. \end{oss} \newpage \printbibliography[heading=bibintoc] \newpage \Addresses \end{document}
\section{Introduction}\label{sec:Intro} The imprints of new physics (NP) beyond the standard model (SM) can be examined via both the direct approach (probing for NP signals) and the indirect approach (precisely testing the SM). For the indirect approach to explore NP, semi-leptonic $B$ decays via both the charged and neutral current processes play pivotal roles, especially given that several anomalies in these decays have been observed in recent years, which indicate $(2\text{--}4)\sigma$ deviations of the measurements from the SM predictions and have therefore attracted a lot of interest (for reviews, see \cite{Liyingreview,Graverini:2018riw,Langenbruch:2018vuv}). One of the physical observables involved, for the charged current process $b\to c\tau\nu$, is defined as \begin{eqnarray} R(D^{(*)})=\frac{Br(B\to D^{(*)}\tau\nu)}{Br(B\to D^{(*)}\ell\nu)},\quad \text{ with } \ell=\mu, e.\label{eq:RDst} \end{eqnarray} Unlike the branching fractions of these decay modes, which are largely affected by the uncertainties originating from the Cabibbo-Kobayashi-Maskawa (CKM) matrix and the hadronic transition form factors, the dependence of $R(D)$ and $R(D^{*})$ on the CKM matrix exactly cancels out, and the uncertainties due to the form factors are largely reduced in these ratios. Hence a deviation of their values from the SM results would indicate a signature of NP. The combined results of $R(D)$ and $R(D^*)$ measured by BaBar \cite{Lees:2012xj, Lees:2013uzd}, Belle \cite{Huschle:2015rga,Sato:2016svk,Hirose:2016wfn} and LHCb \cite{Aaij:2015yra,Aaij:2017uff, Aaij:2017deq} are $R(D)=0.407\pm 0.039\pm 0.024$ and $R(D^*)=0.304\pm 0.013\pm 0.007$, which deviate from the SM predictions by $2.3\sigma$ and $3.4\sigma$, respectively \cite{Amhis:2016xyh}.
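As a rough arithmetic cross-check of these quoted tensions (not a substitute for the official combination, which accounts for correlations), one can add the measurement uncertainties in quadrature and compare with typical SM central values; the SM inputs $R(D)_{\rm SM}\simeq 0.300\pm 0.008$ and $R(D^*)_{\rm SM}\simeq 0.252\pm 0.003$ used below are representative numbers assumed for illustration:

```python
# Naive pull estimates for R(D) and R(D*), adding all quoted uncertainties in
# quadrature and ignoring correlations (order-of-magnitude cross-check only).
# The SM central values and errors below are assumed representative inputs.
import math

def pull(meas, errs, sm, sm_err):
    """Deviation of a measurement from an SM prediction, in units of sigma."""
    return abs(meas - sm) / math.sqrt(sum(e * e for e in errs) + sm_err * sm_err)

pull_RD = pull(0.407, (0.039, 0.024), 0.300, 0.008)   # roughly 2.3 sigma
pull_RDs = pull(0.304, (0.013, 0.007), 0.252, 0.003)  # roughly 3.4 sigma
```

The results land close to the $2.3\sigma$ and $3.4\sigma$ figures quoted above.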
Most recently, LHCb reported the ratio of branching fractions \begin{eqnarray} R(J/\psi)=\frac{Br(B_{c}\to J/\psi\tau\nu)}{Br(B_{c}\to J/\psi\mu\nu)}=0.71\pm 0.17\pm 0.18.\label{eq:Rjpsi} \end{eqnarray} This result deviates by $2\sigma$ from the SM predictions, which lie in the range 0.23--0.29 \cite{Watanabe:2017mip,Tran:2018kuv,Bhattacharya:2018kig}. In addition to the ratios of decay rates, the longitudinal $\tau$ polarization $P_\tau(D^*)$ has also been measured by the Belle Collaboration for the $B\to D^*\tau\nu$ transition \cite{Hirose:2016wfn}, with value $P_{\tau}(D^{*})=-0.38\pm 0.51^{+0.21}_{-0.16}$. All these measurements point to deviations from the SM predictions. To address these anomalies and assess the status of NP, various approaches \cite{Watanabe:2017mip,Tanaka:2012nw,Sakaki:2013bfa,Sakaki:2014sea,Iguro:2017ysu,Chauhan:2017uil,Dutta:2017wpq,Alok:2017qsi,Descotes-Genon:2017ptp,He:2017bft,Choudhury:2017ijp,Capdevila:2017iqn,Wei:2018vmk,Tran:2018kuv,Issadykov:2018myx,Ivanov:2017mrj,Ivanov:2016qtw,Yang:2018pyq,Abdullah:2018ets,Azatov:2018knx,Martinez:2018ynq,Fraser:2018aqj,Bhattacharya:2018kig,Rui:2018kqr,Kumar:2018kmr,Crivellin:2018yvo,Sannino:2017utc,Albrecht:2017odf,Bardhan:2016uhr,Li:2016pdv,Li:2018rax,Celis:2012dk,Hu:2018lmk,Wang:2017jow,Cohen:2018dgz,Cohen:2018vhw,Biancofiore:2013ki,Colangelo:2016ymy,Colangelo:2018cnj,Li:2016vvp,Celis:2016azn,Jaiswal:2017rve,Bhattacharya:2016zcw,Dutta:2017xmj,Dutta:2018jxz,Rajeev:2018txm,Rui:2016opu,Altmannshofer:2017poe,Fajfer:2012vx,Fajfer:2012jt,Dorsner:2013tla,Becirevic:2016hea,Becirevic:2018afm} have been considered. In view of the new measurement done by LHCb for $R(J/\psi)$, we scrutinize $R(D), R(D^*), R(J/\psi)$ and $R(\eta_c)$ in a framework independent of specific new physics models, i.e.\ we consider the effects of each single NP operator in the general effective four-fermion Lagrangian.
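A similar back-of-the-envelope estimate reproduces the quoted $\sim 2\sigma$ tension in $R(J/\psi)$, comparing the LHCb value with the midpoint of the SM range 0.23--0.29 and treating the half-width of that range as a rough theory uncertainty (an assumption made here only for illustration):

```python
# Cross-check of the ~2 sigma tension quoted for R(J/psi): compare the LHCb
# measurement 0.71 +/- 0.17 (stat) +/- 0.18 (syst) with the midpoint of the
# SM range 0.23-0.29; the half-width 0.03 is used as a rough theory error.
import math

meas, stat, syst = 0.71, 0.17, 0.18
sm, sm_err = 0.26, 0.03
sigma_tot = math.sqrt(stat ** 2 + syst ** 2 + sm_err ** 2)
tension = (meas - sm) / sigma_tot  # close to 2 sigma
```

The experimental uncertainty dominates the combination, so sharpening the SM prediction alone would not change this significance appreciably.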
To make a reliable analysis, we carefully consider the choice of hadronic form factors for both the $B\to D(D^*)$ and $B_c \to J/\psi(\eta_c)$ transitions. For the former, the heavy quark effective theory (HQET) parametrization of the form factors is used in most existing experimental and theoretical analyses, due to the lack of experimental data to determine them precisely. Although HQET is expected to describe well the non-perturbative effects in these single-heavy-quark systems, deviations of its predictions for the $B \to D^{(*)}$ form factors from those obtained by lattice QCD in regions of small hadronic recoil have been observed \cite{Bigi:2016mdz,Bigi:2017njr}, suggesting that a reconsideration of the HQET form factors is important. In the study of $b\to cl\nu$ transitions \cite{Jung:2018lfu}, the authors included the $\mathcal{O}(\alpha_s,\Lambda_{\mathrm{QCD}}/m_{b,c})$ and (part of) the $\mathcal{O}(\Lambda_{\mathrm{QCD}}^2/m_c^2)$ corrections to the HQET form factors, and performed a global fit of the HQET parametrization to the lattice \cite{Harrison:2017fmw,Aoki:2016frl} and light-cone sum rule (LCSR) \cite{Faller:2008tr} pseudo data points, taken in complementary kinematical regions of hadronic recoil, taking into account the strong unitarity constraints \cite{Bigi:2016mdz,Bigi:2017jbd}. For the $B_c \to J/\psi(\eta_c)$ transitions, the theoretical determination of the form factors relies on other approaches; calculations have been performed in perturbative QCD (PQCD) \cite{Wen-Fei:2013uea}, QCD sum rules (QCDSR) \cite{Kiselev:2002vz}, light-cone QCD sum rules (LCSR) \cite{Fu:2018vap,Zhong:2018exo}, nonrelativistic QCD (NRQCD) \cite{Zhu:2017lqu,Shen:2014msa}, the covariant light-front quark model (CLFQM) \cite{Wang:2008xt}, the nonrelativistic quark model (NRQM) \cite{Hernandez:2006gt}, the relativistic quark model (RQM) \cite{Ebert:2003cn}, the covariant confined quark model (CCQM) \cite{Tran:2018kuv}, etc.
Besides, the HPQCD Collaboration has also released preliminary lattice QCD results on some of these form factors \cite{Lytle:2016ixw,Colquhoun:2016osw}. A detailed comparison of the $B_c \to \eta_c$ and $B_c \to J/\psi$ form factors calculated using different approaches can be found in \cite{Wen-Fei:2013uea,Wang:2008xt,Tran:2018kuv}. Among all these results, the CLFQM form factors computed by one of us and collaborators agree well with the lattice results at all available $q^2$ points. Therefore, in this work we shall use the CLFQM form factors as our numerical inputs. By using the above hadronic form factors, which combine lattice calculations and model studies, the associated uncertainties are expected to be reduced compared with some other analyses (though they may still affect the numerical analysis to a certain extent). We then study the constraints on the Wilson coefficients of the single NP operators from the measurements of $R(D)$, $R(D^*)$ and $R(J/\psi)$ within $1$ and $2\sigma$. We also consider the limit on the branching fraction $Br(B_c\to\tau\nu)$ obtained from the LEP1 data \cite{Akeroyd:2017mhr} as an additional constraint on NP, which is much more stringent than the constraint from the $B_c$ lifetime \cite{Patrignani:2016xqp,Alonso:2016oyd} considered in other works. In addition, we perform a minimum $\chi^2$ fit of the Wilson coefficients to the experimental data on $R(D^{(*)})$ obtained by LHCb \cite{Aaij:2015yra,Aaij:2017uff, Aaij:2017deq}, Belle \cite{Huschle:2015rga,Sato:2016svk,Hirose:2016wfn} and BaBar \cite{Lees:2012xj, Lees:2013uzd}, the $\tau$ longitudinal polarization $P_\tau(D^*)$ obtained by Belle \cite{Hirose:2016wfn}, and $R(J/\psi)$ obtained by LHCb \cite{Aaij:2017tyk}.
Using the obtained favored ranges and fitted results for the Wilson coefficients, we give predictions for physical observables including the ratios of decay rates, the $\tau$ longitudinal polarization, the final state vector meson polarization, and the forward-backward asymmetry, as well as the corresponding $q^2$ distributions. This work is organised as follows: In Section~\ref{sec:EFT}, we introduce the general formalism of the effective field theory for the $b\to cl\nu$ transitions. Then in Sections~\ref{sec:Bff} and \ref{sec:Bcff} we present detailed descriptions of the $B\to D(D^*)$ and $B_c\to \eta_c(J/\psi)$ form factors, respectively. The numerical analysis is performed in Section~\ref{sec:CWC} for the experimental constraints on the Wilson coefficients as well as the minimum $\chi^2$ fit, and in Section~\ref{sec:PRE} for the predictions of the physical observables. Finally, in Section~\ref{sec:SUM} we give the summary and conclusions. \section{Effective Four-Fermion Interactions and Operator Basis}\label{sec:EFT} The semileptonic decays of $B$ mesons via $b\to c\tau\nu$ can be described in the SM by the left-handed four-fermion interaction as an effective theory. In the presence of NP the effective Lagrangian is modified by an extended operator basis that includes all possible four-fermion interactions.
If the neutrinos are assumed to be left-handed and their flavors are not differentiated, the effective Lagrangian can be expressed as \begin{equation} \mathcal{L}_{\rm eff} =- {4G_F \over \sqrt2} V_{cb}\left[ (1 + C_{V_1})\mathcal{O}_{V_1} + C_{V_2}\mathcal{O}_{V_2} + C_{S_1}\mathcal{O}_{S_1} + C_{S_2}\mathcal{O}_{S_2} + C_T\mathcal{O}_T \right] + \text{h.c.} \,, \label{eq:lag} \end{equation} where the four-fermion operator basis can be defined as \begin{eqnarray} &\mathcal{O}_{S_1} = (\overline{c}_L b_R)(\overline{\tau}_R \nu_{L}) \,, \,\,\, \mathcal{O}_{S_2} = (\overline{c}_R b_L)(\overline{\tau}_R \nu_{L}) \,, \nonumber \\ & \mathcal{O}_{V_1} = (\overline{c}_L \gamma^\mu b_L)(\overline{\tau}_L \gamma_\mu \nu_{L}) \,, \,\,\, \mathcal{O}_{V_2} = (\overline{c}_R \gamma^\mu b_R)(\overline{\tau}_L \gamma_\mu \nu_{L}) \,, \nonumber \\ & \mathcal{O}_T = (\overline{c}_R \sigma^{\mu\nu} b_L)(\overline{\tau}_R \sigma_{\mu\nu} \nu_{L}) \,, \label{eq:operators} \end{eqnarray} and $C_X$ ($X=S_1$, $S_2$, $V_1$, $V_2$, and $T$) are the corresponding Wilson coefficients with $C_X=0$ in the SM. 
By using the effective Lagrangian given in Eq.~\eqref{eq:lag} one can compute the following hadronic matrix elements for the decays $B\to D(D^*)\tau\nu$ and $B_{c}\to\eta_{c}( J/\psi)\tau\nu$ \cite{Tanaka:2012nw,Sakaki:2013bfa}: \begin{equation} \begin{split} H^{\lambda_{M}}_{V_{1,2,\lambda}}(q^{2})=&\varepsilon^{\ast}_{\mu}(\lambda)\langle M(\lambda_{M})|\bar c\gamma^{\mu}(1\mp\gamma_{5})b|B\rangle \, ,\\ H^{\lambda_{M}}_{S_{1,2,\lambda}}(q^{2})=&\langle M(\lambda_{M})|\bar c(1\pm\gamma_{5})b|B\rangle \, ,\\ H^{\lambda_{M}}_{T,\lambda\lambda^{\prime}}(q^{2})=-& H^{\lambda_{M}}_{T,\lambda^{\prime}\lambda}(q^{2})= \varepsilon^{\ast}_{\mu}(\lambda)\varepsilon^{\ast}_{\nu}(\lambda^{\prime})\langle M(\lambda_{M})|\bar c\sigma^{\mu\nu}(1-\gamma_{5})b|B\rangle \, , \end{split} \label{eq:Helicity} \end{equation} where $\lambda_{M}$ and $\lambda$ denote the helicities of the final state meson and of the virtual intermediate state, respectively, with $\lambda_{M}=0$ for a pseudoscalar meson, $\lambda_{M}=0,\pm 1$ for a vector meson, and $\lambda=0,\pm 1,t$. The amplitudes given in Eq.~\eqref{eq:Helicity} can be expressed in terms of transition form factors for the above mentioned decays and further used to calculate physical observables such as the unpolarized and polarized decay rates. Formulas for the physical observables in terms of the hadronic matrix elements are given in \cite{Tanaka:2012nw,Sakaki:2013bfa}. In order to compute the observables involved in semileptonic $B$ decays and draw reliable conclusions on possible NP effects, it is essential to control the form factors, which constitute the major theoretical uncertainties. In the next section we elaborate on the $B \to D^{(*)}$ and $B_c \to \eta_c(J/\psi)$ form factors used in our analysis.
\section{$B \to D^{(*)}$ form factors}\label{sec:Bff} For the $B \to D^{(*)}$ form factors, we use as numerical inputs the results fitted in \cite{Jung:2018lfu}, which follow the HQET parametrization of \cite{Bernlochner:2017jka}. The unaccounted systematic uncertainties of these form factors due to the truncation of the HQET and perturbation series are suppressed by $\mathcal{O}(\Lambda_{\rm QCD}^2/m^2_{c,b}\,,\,\alpha_s\Lambda_{\rm QCD}/m_{c,b}\,,\, \alpha_s^2)$. The HQET parameters, such as the sub-leading Isgur-Wise functions, are determined by a global fit to the lattice results \cite{Harrison:2017fmw,Aoki:2016frl} at small hadronic recoil and the LCSR results \cite{Faller:2008tr} at large hadronic recoil, with the strong unitarity constraints \cite{Bigi:2016mdz,Bigi:2017jbd} also being imposed. The HQET form factors for the $B \to D$ transitions are defined through \begin{eqnarray} \label{Eq:vectorD} \langle D(k)| \bar c\gamma^\mu b |\overline B(p)\rangle &=& \sqrt{m_Bm_D} \left [ h_+(w) (v+v')^\mu + h_-(w) (v-v')^\mu \right ] \,, \\ \label{Eq:scalarD} \langle D(k)|\bar c b|\bar B(p)\rangle &=& \sqrt{m_Bm_D} (w+1) h_S(w) \ , \\ \label{Eq:tensorD1} \langle D(k)| \bar c \sigma^{\mu \nu} b |\bar B(p)\rangle &=& -i \sqrt{m_Bm_D} h_T (w) (v^\mu v'^\nu - v^\nu v'^\mu )\,, \label{Eq:vectorDstar} \end{eqnarray} where $v=p/m_B$ and $v'=k/m_D$ are respectively the four-velocities of the $B$ and $D$ mesons, and the dimensionless kinematic variable $w = v\cdot v' = \frac{m_B^2 + m_D^2 - q^2}{2m_B m_D}$ is used instead of the momentum transfer squared $q^2=(p-k)^2$.
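For orientation, $w$ spans only a narrow physical range. A short worked evaluation (using the PDG masses $m_B\simeq 5.280$ GeV, $m_D\simeq 1.865$ GeV and $m_{D^*}\simeq 2.010$ GeV, which are inputs assumed here for illustration) gives

```latex
% w = 1 at zero recoil, i.e. at q^2 = q^2_{max} = (m_B - m_{D^{(*)}})^2,
% while the maximal recoil point q^2 = 0 yields
w_{\rm max}^{B\to D} \;=\; \frac{m_B^2 + m_D^2}{2\, m_B m_D}
\;\simeq\; \frac{27.87 + 3.48}{19.69} \;\simeq\; 1.59\,,
\qquad
w_{\rm max}^{B\to D^*} \;=\; \frac{m_B^2 + m_{D^*}^2}{2\, m_B m_{D^*}} \;\simeq\; 1.50\,.
```

The narrowness of this range, $1\le w\lesssim 1.6$, is what makes low-order expansions of the form factors around the zero-recoil point viable.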
For the $B \to D^*$ transitions the following definitions are used: \begin{eqnarray} \langle D^*(k,\epsilon)| \bar c \gamma^\mu b| \bar B (p) \rangle &=& i \sqrt{m_Bm_{D^*}}\, h_V(w) \varepsilon^{\mu\nu\rho\sigma} \epsilon^*_\nu v'_\rho v_\sigma \,, \\ \label{Eq:axialvectorDstar} \langle D^*(k,\epsilon)| \bar c \gamma^\mu \gamma^5 b| \bar B (p) \rangle &=& \sqrt{m_Bm_{D^*}}\, \big [ h_{A_1}(w) (w+1) \epsilon^{*\mu}\notag \\ &\,& -(\epsilon^* \cdot v) \left(h_{A_2}(w) v^\mu +h_{A_3}(w) v'^\mu \right) \big ],\\ \label{Eq:scalarDstar} \langle D^*(k,\epsilon) | \bar c \gamma^5 b | \bar B (p) \rangle &=& -\sqrt{m_Bm_{D^*}}\, (\epsilon^* \cdot v ) h_P(w) ,\\ \label{Eq:tensorDstar} \langle D^*(k,\epsilon) | \bar c \sigma^{\mu \nu} b |\bar B (p) \rangle &=& -\sqrt{m_Bm_{D^*}}\, \varepsilon^{\mu\nu\rho\sigma} \Big [ h_{T_1}(w) \epsilon^*_\rho (v +v')_\sigma +h_{T_2}(w) \epsilon^*_\rho (v-v')_\sigma \notag \\ &\,&+h_{T_3}(w) (\epsilon^* \cdot v) v_\rho v'_\sigma \Big ] \,, \end{eqnarray} where $\sigma^{\mu\nu} =\frac{i}{2} \left[ \gamma^\mu, \gamma^\nu\right]$, $\varepsilon^{0123}=-1$, $v$, $v'$ and $w$ are defined in the same way as for the $B \to D$ transitions, and $\epsilon^\mu$ denotes the polarization vector of the $D^*$. In the heavy quark limit, only the single leading Isgur-Wise function $\xi(w)$ is needed to express the $B\to D^{(*)}$ form factors.
With inclusion of the $\mathcal{O}(\alpha_s,\Lambda_{\mathrm{QCD}}/m_{b,c})$ contributions, these form factors are written as \cite{Bernlochner:2017jka}\footnote{Here we follow the notations in \cite{Bernlochner:2017jka} but the coefficients of the $\mathcal{O}(\alpha_s)$ contributions should not be mixed up with the Wilson coefficients $C_X$ in Eq.~\eqref{eq:lag}.} {\allowdisplaybreaks \begin{align}\label{eqn:BD1m} h_+ & = \xi\bigg\{1 + \hat{\alpha}_s\Big[C_{V_1} + \frac{w+1}2\, (C_{V_2}+C_{V_3})\Big] + (\varepsilon_c + \varepsilon_b)\, \hat{L}_1\bigg\} \,, \nonumber\\* h_- & = \xi\big[\hat{\alpha}_s\, \frac{w+1}2\, (C_{V_2}-C_{V_3}) + (\varepsilon_c - \varepsilon_b)\, \hat{L}_4\big] \,, \nonumber\\ h_S & = \xi\Bigg[1 + \hat{\alpha}_s\, C_S + (\varepsilon_c + \varepsilon_b) \bigg(\! \hat{L}_1 - \hat{L}_4\, \frac{w-1}{w+1} \bigg)\Bigg]\,, \nonumber\\ h_T & =\xi\Big[ 1 + \hat{\alpha}_s \big(C_{T_1}-C_{T_2}+C_{T_3}\big) + (\varepsilon_c + \varepsilon_b) \big( \hat{L}_1 - \hat{L}_4 \big)\Big] \,,\nonumber\\ h_V & = \xi\Big[1 + \hat{\alpha}_s\, C_{V_1} + \varepsilon_c \big(\hat{L}_2 - \hat{L}_5\big) + \varepsilon_b \big( \hat{L}_1 - \hat{L}_4 \big)\Big] \,,\nonumber\\ h_{A_1} & =\xi\Bigg[ 1 + \hat{\alpha}_s\, C_{A_1} + \varepsilon_c \bigg(\! \hat{L}_2 - \hat{L}_5\, \frac{w-1}{w+1} \bigg) + \varepsilon_b \bigg(\! 
\hat{L}_1 - \hat{L}_4\, \frac{w-1}{w+1} \bigg)\Bigg] \,,\nonumber\\ h_{A_2} & = \xi\Big[\hat{\alpha}_s\, C_{A_2} + \varepsilon_c \big(\hat{L}_3 + \hat{L}_6\big)\Big] \,,\nonumber\\ h_{A_3} & = \xi\Big[ 1 + \hat{\alpha}_s \big(C_{A_1} + C_{A_3}\big) + \varepsilon_c \big(\hat{L}_2 - \hat{L}_3 + \hat{L}_6 - \hat{L}_5 \big) + \varepsilon_b \big(\hat{L}_1 - \hat{L}_4\big)\Big] \,,\nonumber\\ h_P & = \xi\Big\{1 + \hat{\alpha}_s\, C_P + \varepsilon_c \big[\hat{L}_2 + \hat{L}_3 (w-1) + \hat{L}_5 - \hat{L}_6(w+1)\big] + \varepsilon_b \big( \hat{L}_1 - \hat{L}_4 \big)\Big\}\,,\nonumber\\ h_{T_1} & = \xi\bigg\{1 + \hat{\alpha}_s \Big[ C_{T_1} + \frac{w-1}2\, \big(C_{T_2}-C_{T_3}\big) \Big] + \varepsilon_c \hat{L}_2 + \varepsilon_b \hat{L}_1\bigg\} \,,\nonumber\\ h_{T_2} & = \xi\Big[ \hat{\alpha}_s\, \frac{w+1}2\, \big(C_{T_2}+C_{T_3}\big) + \varepsilon_c \hat{L}_5 - \varepsilon_b \hat{L}_4\Big] \,,\nonumber\\ h_{T_3} & = \xi\Big[\hat{\alpha}_s\, C_{T_2} + \varepsilon_c \big(\hat{L}_6 - \hat{L}_3\big)\Big] \,, \end{align} } where $\hat{\alpha}_s=\alpha_s/4\pi$, $\varepsilon_{c,b}=\bar\Lambda/2m_{c,b}^{pole}$ with $\bar\Lambda = \overline m_B - m_b^{pole} + \lambda_1/(2m_b^{1S})$, in which $\overline m_B = (m_B + 3m_{B^*})/4 \simeq 5.313$ GeV, $C_X(\mu)$ are the coefficients of the $\mathcal{O}(\alpha_s)$ terms calculated in perturbation theory, and the $\hat{L}_{1\ldots6}$ functions can be expressed in terms of the sub-leading Isgur-Wise functions through: \begin{align}\label{eq:isgurwiseL} \hat{L}_1 &= - 4(w-1) \hat\chi_2 + 12 \hat\chi_3\,, \qquad \hat{L}_2 = - 4 \hat\chi_3\,, \qquad \hat{L}_3 = 4 \hat\chi_2\,, \nonumber\\* \hat{L}_4 &= 2 \eta - 1 \,, \qquad \hat{L}_5 = -1\,, \qquad \hat{L}_6 = - 2 (1 + \eta)/(w+1)\,.
\end{align} With $\hat{\chi}_3(1) = 0$ implied by Luke's theorem, up to $\mathcal{O}(\varepsilon_{c,b}(w-1))$ the subleading Isgur-Wise functions can be approximated as follows: \begin{equation} \label{eqn:FIWp} \hat{\chi}_2(w) \simeq \hat{\chi}_2(1) + \hat{\chi}'_2(1)(w-1)\,,\qquad \hat{\chi}_3(w) \simeq \hat{\chi}'_3(1)(w-1)\,,\qquad \eta(w) \simeq \eta(1) + \eta'(1)(w-1)\,. \end{equation} The expressions for $C_X(\mu)$, obtained by matching QCD and HQET at $\mu=\sqrt{m_b^{pole} m_c^{pole}}$ (corresponding to $\alpha_s\approx0.26$), can be found in \cite{Bernlochner:2017jka}; they are lengthy and are not reproduced here. To ensure the cancellation of the leading renormalon associated with the pole mass, the $1S$ mass scheme has also been used: in the $\bar\Lambda/2m_{c,b}^{pole}$ terms not multiplied by the sub-leading Isgur-Wise functions, the pole mass is treated as $m_b^{pole}= m_b^{1S}(1 +2\alpha_s^2/9 + \ldots)$, where $m_b^{1S}$ is half of the $\Upsilon(1S)$ mass, while in the other terms $m_b^{pole}(m_b^{1S}) \to m_b^{1S}$ is imposed \cite{Bernlochner:2017jka}. In the global fit performed in \cite{Jung:2018lfu}, the $\mathcal{O}(\varepsilon_c^2)$ contributions to $h_{A_1}$, $h_{T_1}$ and $h_+$, whose $\mathcal{O}(\varepsilon_c)$ contributions vanish at zero hadronic recoil, have also been included, and the leading Isgur-Wise function is parameterized in the $z$ expansion as $\xi(z) = 1 - 8 \rho^2 z + (64 c - 16 \rho^2) z^2$, where $z(w) = (\sqrt{w+1}- \sqrt{2})/(\sqrt{w+1} + \sqrt{2})$. The fitted values of the parameters in the $B \to D^{(*)}$ form factors, along with the other inputs of this work, are listed in Appendix~\ref{app:A}.
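As an illustration of why a quadratic truncation in $z$ is adequate, the expansion parameter stays small over the entire physical region; evaluating it at the kinematic endpoints (taking $w_{\rm max}\simeq 1.59$ for $B\to D$, an assumed PDG-mass estimate) gives

```latex
% Size of the z-expansion parameter at the kinematic endpoints:
z(1) \;=\; \frac{\sqrt{2}-\sqrt{2}}{\sqrt{2}+\sqrt{2}} \;=\; 0\,,
\qquad
z(w_{\rm max}\simeq 1.59) \;=\; \frac{\sqrt{2.59}-\sqrt{2}}{\sqrt{2.59}+\sqrt{2}}
\;\simeq\; 0.065\,,
```

so $|z|\lesssim 0.065$ and the neglected $\mathcal{O}(z^3)$ terms are at the few-per-mille level.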
\section{$B_c \to \eta_c$ and $B_c \to J/\psi$ form factors}\label{sec:Bcff} As mentioned in the introduction, the CLFQM form factors \cite{Wang:2008xt}\footnote{Recent applications of the CLFQM form factors can be found in \cite{Wang:2017mqp,Wang:2017azm}.} for the $B_c \to \eta_c$ and $B_c \to J/\psi$ transitions agree well with the preliminary lattice results \cite{Colquhoun:2016osw,Lytle:2016ixw} obtained by the HPQCD Collaboration; we therefore use them as our numerical inputs in this work. The $B_c$ form factors of the vector and axial-vector operators are defined through the following matrix elements: \begin{align} \label{eq:Fp0_parametrization} \langle \eta_c(k)|\overline{c}\gamma_\mu b|B_c(p)\rangle & = \left[(p+k)_\mu-{m_{B_c}^2-m_{\eta_c}^2 \over q^2}q_\mu\right] F_1(q^2)+q_\mu{m_{B_c}^2-m_{\eta_c}^2 \over q^2}F_0(q^2) \,,\\ \langle J/\psi(k) | \bar c \gamma^\mu b | B_c(p) \rangle & = -{2i V(q^2) \over m_{B_c} + m_{J/\psi} }\, \varepsilon^{\mu\nu\rho\sigma}\, \epsilon_\nu^*\, {p}_\rho\, {k}_\sigma\,, \\ \langle J/\psi(k) | \bar c \gamma^\mu\gamma^5 b | B_c(p) \rangle & = 2m_{J/\psi} A_0(q^2) { \epsilon^* \cdot q \over q^2 } q^\mu + (m_{B_c} + m_{J/\psi}) A_1(q^2) \left[ \epsilon^{*\mu} - { \epsilon^* \cdot q \over q^2 } q^\mu \right] \notag \\ &~~~ - A_2(q^2) { \epsilon^* \cdot q \over m_{B_c} + m_{J/\psi} } \left[ p^{\mu} + k^{\mu} - { m_{B_c}^2 - m_{J/\psi}^2 \over q^2 } q^\mu \right] \,, \end{align} where the form factors are parametrized as $F(q^2)=F(0)~{\rm exp}(c_1\hat s+c_2\hat s^2)$, with $\hat s=q^2/m_{B_c}^2$, over the full kinematical range of $q^2$; the results computed in the covariant light-front quark model \cite{Wang:2008xt} are listed in Table~\ref{tab:WSLffs}. These results are consistent with the preliminary lattice results for $F_0$, $F_1$, $A_1$ and $V$ at all available $q^2$ values obtained by the HPQCD Collaboration \cite{Lytle:2016ixw,Colquhoun:2016osw}, as can be clearly seen in Figure~\ref{fig:ffs}.
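As a quick numerical cross-check of this parametrization, a minimal sketch (using PDG masses $m_{B_c}\simeq 6.2749$ GeV and $m_{\eta_c}\simeq 2.9839$ GeV as assumed inputs, which are not necessarily the exact values of our appendix) shows that evaluating $F_1$ at $q^2_{\rm max}=(m_{B_c}-m_{\eta_c})^2$ with the central values of Table~\ref{tab:WSLffs} reproduces the quoted endpoint value $F_1(q^2_{\rm max})\simeq 1.09$:

```python
import math

# Assumed PDG masses in GeV (illustrative inputs)
m_Bc, m_etac = 6.2749, 2.9839

def ff(q2, F0, c1, c2):
    """CLFQM parametrization F(q^2) = F(0)*exp(c1*s + c2*s^2), s = q^2/m_Bc^2."""
    s = q2 / m_Bc**2
    return F0 * math.exp(c1 * s + c2 * s**2)

q2_max = (m_Bc - m_etac)**2                      # zero-recoil point, ~10.83 GeV^2
F1_max = ff(q2_max, F0=0.61, c1=1.99, c2=0.44)   # central values for F_1
print(round(F1_max, 2))                          # 1.09, matching the F(q^2_max) entry
```

The same check can be repeated for the other rows of the table, and provides a simple guard against transcription errors in the fit parameters.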
\begin{large} \begin{table}[htbp] \renewcommand\arraystretch{1.5} \caption{Form factors for the $B_c\to \eta_c,J/\psi$ transitions calculated in the covariant light-front quark model.} \label{tab:WSLffs} \begin{tabular}{ccccc} \hline\hline $FF$ & $F(0)$ & $F(q^2_{\rm max})$ & $c_1$ & $c_2$ \\ \hline $F_1$ & $0.61^{+0.03+0.01}_{-0.04-0.01}$ & $1.09^{+0.00+0.05}_{-0.02-0.05}$ & $1.99^{+0.22+0.08}_{-0.20-0.08}$ & $0.44^{+0.05+0.02}_{-0.05-0.02}$ \\ \hline $F_0$ & $0.61^{+0.03+0.01}_{-0.04-0.01}$ & $0.86^{+0.02+0.04}_{-0.03-0.04}$ & $1.18^{+0.26+0.09}_{-0.24-0.09}$ & $0.17^{+0.09+0.02}_{-0.09-0.02}$\\ \hline $V$ & $0.74^{+0.01+0.03}_{-0.01-0.03}$ & $1.45^{+0.03+0.09}_{-0.04-0.08}$ & $2.46^{+0.13+0.10}_{-0.13-0.10}$ & $0.56^{+0.02+0.03}_{-0.03-0.03}$ \\ \hline $A_0$ & $0.53^{+0.01+0.02}_{-0.01-0.02}$ & $1.02^{+0.02+0.07}_{-0.02-0.07}$ & $2.39^{+0.13+0.11}_{-0.13-0.11}$ & $0.50^{+0.02+0.02}_{-0.03-0.02}$\\ \hline $A_1$ & $0.50^{+0.01+0.02}_{-0.02-0.02}$ & $0.80^{+0.00+0.05}_{-0.01-0.05}$ & $1.73^{+0.12+0.12}_{-0.12-0.12}$ & $0.33^{+0.01+0.02}_{-0.02-0.02}$ \\ \hline $A_2$ & $0.44^{+0.02+0.02}_{-0.03-0.02}$ & $0.81^{+0.02+0.05}_{-0.03-0.04}$ & $2.22^{+0.11+0.11}_{-0.10-0.11}$ & $0.45^{+0.01+0.02}_{-0.01-0.02}$\\ \hline\hline \end{tabular} \end{table} \end{large} \begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.25]{ffs_1.pdf} \includegraphics[scale=0.25]{ffs_2.pdf} \caption{CLFQM form factors for the $B_c \to \eta_c$ and $B_c \to J/\psi$ hadronic transitions in comparison with the preliminary lattice results.
} \label{fig:ffs} \end{center} \end{figure} The form factors of the tensor operators are defined through \begin{align} \langle \eta_c(k)|\overline{c}\sigma_{\mu\nu} b|B_c(p)\rangle & = -i ( p_\mu k_\nu - k_\mu p_\nu ) { 2F_T(q^2) \over m_{B_c}+m_{\eta_c} } \,,\\ \langle J/\psi(k) |\bar c \sigma^{\mu\nu} q_\nu b| B_c(p) \rangle & = -2T_1(q^2)\, \varepsilon^{\mu\nu\rho\sigma} \epsilon_\nu^*\, {p}_{\rho}\, {k}_{\sigma}\,, \\[0.5em] \langle J/\psi(k) |\bar c \sigma^{\mu\nu}\gamma_5 q_\nu b| B_c(p) \rangle & = - T_2(q^2) \Big[(m_{B_c}^2-m_{J/\psi}^2) \epsilon^{*\mu} - (\epsilon^* \cdot q) (p + k)^\mu\Big] \notag \\ &~~~ - T_3(q^2) (\epsilon^* \cdot q) \left[q^\mu-{q^2 \over m_{B_c}^2-m_{J/\psi}^2} (p + k)^\mu \right] \,, \end{align} where the tensor form factors have not been calculated in the CLFQM but can be related to the vector and axial-vector ones through the quark-level equations of motion \cite{Ivanov:2016qtw,Azatov:2018knx}: \begin{align} F_T(q^2) & = -\frac{ \big(m_b+m_c\big)}{q^2}(m_{B_c}+m_{\eta_c})\big(F_0(q^2)-F_1(q^2)\big) \,, \\ T_1(q^2) & = {m_b+m_c \over m_{B_c} + m_{J/\psi}} V(q^2) \,, \\ T_2(q^2) & = {m_b-m_c \over m_{B_c} - m_{J/\psi}} A_1(q^2) \,, \\ T_3(q^2) & = -{m_b-m_c \over q^2 } \Big[ m_{B_c} \big( A_1(q^2)-A_2(q^2) \big) + m_{J/\psi} \big(A_2(q^2)+A_1(q^2)-2A_0(q^2)\big) \Big] \,, \end{align} where we use the quark masses in the $\overline{\rm MS}$ renormalization scheme at the scale $\mu=\overline m_b$ and, to be more conservative, treat the differences in the numerical results induced by instead using the pole quark masses as a source of systematic uncertainty. \section{Experimental Constraints on the Wilson Coefficients}\label{sec:CWC} In this section, we perform our numerical analysis of the experimental constraints on the Wilson coefficients of single NP operators in the effective Lagrangian given in Eq.~\eqref{eq:lag}.
In order to make a general model-independent analysis, we first perform a minimum $\chi^2$ fit of the Wilson coefficients to the experimental data on observables such as the ratios $R(D^{(*)})$ and $R(J/\psi)$ and the $\tau$ polarization $P_\tau(D^*)$ for each NP scenario. In addition to the fit, we obtain the regions of the Wilson coefficients allowed within $1\sigma$ and $2\sigma$ by the current experimental data on $R(D^{(*)})$ and $R(J/\psi)$, combined with the limit on $Br(B_c\to\tau\nu)$ obtained from LEP1 data. Final conclusions will be based on both the results of the fit and the regions favored by the experimental constraints. In the minimum $\chi^2$ fit, the $\chi^2$ as a function of the Wilson coefficient $C_X$ is defined as \cite{Alok:2017qsi} \begin{align} \chi^2(C_X)=\sum_{m,n=1}^{\text{data}} (O^{th}(C_X)-O^{exp})_m(V^{exp}+V^{th})_{mn}^{-1} (O^{th}(C_X)-O^{exp})_n +\frac{(R_{J/\psi}^{th}(C_X)-R_{J/\psi}^{exp})^2}{\sigma_{R_{J/\psi}}^2}, \end{align} where $O^{th}_{m,n}(C_X)$ are the theoretical predictions for $R(D)$, $R(D^*)$, $P_\tau(D^*)$, etc., and $O^{exp}_{m,n}$ are the corresponding experimental measurements, listed in Table~\ref{tab:exdata}. $V^{exp}$ and $V^{th}$ are respectively the experimental and theoretical covariance matrices, constructed from the correlations listed in Table~\ref{tab:exdata} and Table~\ref{tab:SMcor}.
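To illustrate the mechanics of such a fit, the following toy sketch fits the single real combination $r\equiv|1+C_{V_1}|^2$, which rescales both $R(D)$ and $R(D^*)$ in the $V_1$ scenario, to the world-averaged ratios. The inputs (our SM central values, the HFLAV world averages, and uncorrelated combined errors) are deliberate simplifications of the full fit, which also includes $P_\tau(D^*)$, $R(J/\psi)$ and the correlation matrices:

```python
import numpy as np

# Toy inputs: SM predictions (this work) and world-averaged measurements
R_sm  = np.array([0.279, 0.249])   # R(D), R(D*) in the SM
R_exp = np.array([0.407, 0.304])   # world averages
sigma = np.array([0.024, 0.015])   # combined errors, correlations dropped

# In the V_1 scenario both ratios scale by r = |1 + C_V1|^2, so chi^2(r)
# is quadratic in r and the minimum is analytic (weighted least squares):
w = R_sm / sigma**2
r_fit = np.sum(w * R_exp) / np.sum(w * R_sm)
chi2_min = np.sum(((r_fit * R_sm - R_exp) / sigma) ** 2)
print(round(r_fit, 2))   # ~1.3, to be compared with the full-fit value 1.27(6)
```

Even this two-observable toy lands close to the full-fit result in Table~\ref{tab:wcoef}, since $R(D)$ and $R(D^*)$ dominate the $\chi^2$ in the $V_1$ scenario.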
\begin{table*}[!htbp] \centering \caption{Experimental data used in the fit.} \begin{tabular}{c c c c c c} \hline \hline & $R_D$ & $R_{D^*}$& Correlation &$P_\tau(D^*)$ &$R_{J/\psi}$ \\ \hline BaBar\cite{Lees:2012xj, Lees:2013uzd} &$0.440(58)(42)$ &$0.332(24)(18)$ &$-0.27$ &$-$&$-$\\ Belle\cite{Huschle:2015rga} &$0.375(64)(26)$ &$0.293(38)(15)$ &$-0.49$ &$-$&$-$\\ Belle \cite{Sato:2016svk} &$-$ &$0.302(30)(11)$ &$-$ &$-$&$-$\\ Belle\cite{Hirose:2016wfn} &$-$ &$0.270(35)(_{-0.025}^{+0.028})$ &$0.33$ &$-0.38(51)(_{-0.16}^{+0.21})$&$-$\\ LHCb \cite{Aaij:2015yra} &$-$ &$0.336(27)(30)$ &$-$ &$-$&$-$\\ LHCb \cite{Aaij:2017uff, Aaij:2017deq} &$-$ &$0.291(19)(26)(13)$ &$-$ &$-$&$-$\\ LHCb\cite{Aaij:2017tyk} &$-$ &$-$ &$-$ &$-$&$0.71(17)(18)$\\ \hline \hline \end{tabular} \label{tab:exdata} \end{table*} \begin{table*}[!htbp] \begin{center} \small \caption{Correlations between observables in the SM.} \begin{tabular}{cccccccc} \hline Observable & \multicolumn{7}{c}{Correlation} \\ \hline $R(D)$ & 1.00 & 0.17 & 0.41 & 0.22 & 0.22 & -0.86 & 0.18 \\ $R(D^*)$& & 1.00& -0.26 & -0.65 & -0.48 & -0.03 & -0.36 \\ $P_{\tau}(D)$ & & & 1.00 & 0.75 & 0.68 & -0.77 & 0.56 \\ $P_{\tau}(D^*)$ & & & & 1.00 & 0.96 & -0.52 & 0.88 \\ $P_{D^*}$ & & & & & 1.00 & -0.51 & 0.97 \\ $\mathcal{A}_{FB}(D)$ & & & & & & 1.00 & -0.43 \\ $\mathcal{A}_{FB}(D^*)$ & & & & & & & 1.00\\ \hline \end{tabular} \label{tab:SMcor} \end{center} \end{table*} The fitted Wilson coefficients in each NP scenario are listed in Table~\ref{tab:wcoef}. Using these values, we predict all the observables in the SM and the NP scenarios, which are respectively listed in Table~\ref{tab:obser1} and \ref{tab:obser2} in the next section. 
\begin{table*}[!htbp] \centering \caption{Fitted values of the Wilson coefficients in different NP scenarios.} \begin{tabular}{cccc} \hline \hline NP scenario & value& $\chi^2/dof$ &Correlation\\ \hline $C_{V_1}$ &$(1+Re[C_{V_1}])^2+(Im[C_{V_1}])^2=1.27(6) $ &$7.42/8$ & $-$\\ $C_{V_2}$ &$0.057(50)\pm0.573(73)i $ &$6.19/8$ & $0.750$ \\ $C_{S_1}$ &$0.405 (91)$ &$15.5/8$ & $-$ \\ $C_{S_2}$ &$-1.05(30)\pm1.09(12)i $ &$5.98/8$ & $0.589$\\ $C_T$ &$0.24(11)\pm0.13(8)i $ &$8.39/8$ & $-0.993$\\ \hline \hline \end{tabular} \label{tab:wcoef} \end{table*} Fig.~\ref{fig:constraint1} depicts the correlations between $R(D)$, $R(D^{*})$ and $R(J/\psi)$ in the presence of each single NP operator. The horizontal and vertical bands represent the experimental constraints at the $1\sigma$ confidence level (C.L.). One can see that the $S_2$, $V_2$ and $T$ operators can explain the experimental values of $R(D)$ and $R(D^*)$ within $1\sigma$, but when $R(J/\psi)$ is taken into account, none of the single-operator scenarios can accommodate the $1\sigma$ experimental constraints. \begin{figure}[!htbp] \begin{tabular}{ccc} \includegraphics[scale=0.4]{RDRDst1sigma.pdf}& \includegraphics[scale=0.4]{RDRJpsi1sigma.pdf}& \includegraphics[scale=0.4]{RDstRJpsi1sigma.pdf} \end{tabular} \caption{Correlations between $R(D)$, $R(D^{*})$ and $R(J/\psi)$ in the presence of single NP operators. The vertical and horizontal bands show the experimental constraints and the black dots denote the SM predictions.} \label{fig:constraint1} \end{figure} At the C.L. of $2\sigma$, typical allowed regions for the Wilson coefficients are obtained, as depicted in Fig.~\ref{fig:constraint2}. In addition to the measurements of the ratios $R(D^{(*)})$ and $R(J/\psi)$, there are other experimental constraints on NP effects in $b \to c\tau\nu$ transitions, namely the measurement of the $B_c$ lifetime \cite{Patrignani:2016xqp,Alonso:2016oyd} and the branching fraction of $B_c \to \tau\nu$ \cite{Akeroyd:2017mhr}.
Here we consider the constraint from the latter, which is more restrictive than that from the $B_c$ lifetime. The LEP1 data taken at the $Z$ peak give an upper limit on $Br(B_c \to \tau\nu)$, which we include in our analysis. The explicit expression for this constraint is \cite{Akeroyd:2017mhr} \begin{align} Br(B_c \to \tau\bar\nu) = \tau_{B_c} {1 \over 8\pi} m_{B_c} m_\tau^2 \left( 1 - {m_\tau^2 \over m_{B_c}^2} \right)^2 f_{B_c}^2 G_F^2 |V_{cb}|^2 \left| 1+C_{V_1}-C_{V_2} +{m_{B_c}^2 \over m_\tau(m_b+m_c)} (C_{S_1}-C_{S_2}) \right|^2 <10\%\,, \label{EQ:Bctaunu} \end{align} where $\tau_{B_c}$ and $f_{B_c}$ are respectively the $B_c$ lifetime and decay constant, whose values used in this work are listed in Appendix~\ref{app:A}. In Fig.~\ref{fig:constraint2} one can see that the $S_1$ scenario is excluded already by the constraints from $R(D)$ and $R(D^*)$ alone within $2\sigma$. The $S_2$ scenario, although it can simultaneously accommodate $R(D^{(*)})$ and $R(J/\psi)$ and has the best $\chi^2$ value (as can be seen in Table~\ref{tab:wcoef}), is excluded by the constraint from $Br(B_c \to \tau\nu)$. The remaining NP scenarios, namely the $V_1$, $V_2$ and $T$ scenarios, are able to explain the current experimental data at $2\sigma$. Among them, the $V_1$ and $V_2$ scenarios have distinctive regions allowed by the experimental measurements of $R(D^{(*)})$ and $R(J/\psi)$ as well as small $\chi^2$ values in the fit, whereas the $T$ scenario is severely constrained and has a larger $\chi^2$ value. Therefore, in our analysis, the $V_1$ and $V_2$ scenarios are the single-operator NP scenarios most favored by the current experimental data, with $V_2$ having an even better $\chi^2$ value than $V_1$ in the fit.
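For reference, the SM value of this branching fraction can be sketched numerically. The inputs below ($\tau_{B_c}\simeq 0.507$ ps, $f_{B_c}\simeq 0.434$ GeV from lattice QCD, $|V_{cb}|\simeq 0.0415$, PDG masses) are assumed typical values for illustration, not necessarily the exact inputs of Appendix~\ref{app:A}:

```python
import math

# Assumed inputs (illustrative; natural units, GeV)
GF     = 1.1663787e-5      # Fermi constant, GeV^-2
Vcb    = 0.0415            # |V_cb|
f_Bc   = 0.434             # B_c decay constant, GeV (lattice estimate)
m_Bc   = 6.2749            # B_c mass, GeV
m_tau  = 1.77686           # tau mass, GeV
tau_Bc = 0.507e-12 / 6.582119e-25   # lifetime: seconds -> GeV^-1 (divide by hbar)

# Branching-fraction formula above with all Wilson coefficients set to zero (SM):
helicity_factor = m_tau**2 * (1 - m_tau**2 / m_Bc**2) ** 2
Gamma = GF**2 * Vcb**2 * f_Bc**2 * m_Bc * helicity_factor / (8 * math.pi)
Br = tau_Bc * Gamma
print(f"{Br:.3f}")   # 0.023, i.e. ~2.3%, safely below the 10% LEP1 bound
```

The SM value thus sits a factor of a few below the LEP1 limit, which is why the bound cuts efficiently into scenarios (such as $S_2$) that lift the helicity suppression of the leptonic decay.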
The favored regions of the different NP scenarios obtained in this work can be compared with those in \cite{Watanabe:2017mip}, where the HQET form factors for the $B \to D^{(*)}$ transitions are also used but only part of the $\mathcal{O}(\alpha_s,\Lambda_{\mathrm{QCD}}/m_{b,c})$ contributions are included, following earlier works \cite{Tanaka:2012nw,Sakaki:2013bfa}, and where for the $B_c\to \eta_c(J/\psi)$ transitions the PQCD form factors \cite{Wen-Fei:2013uea} are used instead of the CLFQM form factors adopted here. In \cite{Watanabe:2017mip}, the $V_1$, $V_2$ and $T$ scenarios are favored by the experimental constraints from $R(D^{(*)})$ and $R(J/\psi)$ within $2\sigma$ and by the LEP1 data on $Br(B_c\to\tau\nu)$, and the $V_1$ and $V_2$ scenarios have the same $\chi^2$ value. In contrast, in our analysis the $V_1$ and $V_2$ scenarios are still favored by the experimental constraints but the $T$ scenario is severely restricted, and in the fit the $V_2$ scenario has a better $\chi^2$ value than the $V_1$ scenario. One should also note that the above constraints on the Wilson coefficients are obtained under the assumption of single-operator dominance; the potential of combinations of more than one NP operator, or of specific NP models, to explain the current experimental data requires additional analyses\footnote{Such model-independent or model-specific discussions using different hadronic form factors as inputs can be found in \cite{Tanaka:2012nw,Bhattacharya:2018kig,Alok:2017qsi,Alok:2018uft}.}, which are beyond the scope of this work. Our results suggest that any NP model dominated by a single scalar or tensor operator is challenged, such as some types of charged Higgs models \cite{Tanaka:2012nw,Hou:1992sy,Crivellin:2012ye} that generate the scalar operators $O_{S_1}$ or $O_{S_2}$.
However, combinations of scalar or tensor operators with other operators remain possible solutions to the $b\to c\tau\nu$ anomalies: as found in \cite{Watanabe:2017mip}, two classes of leptoquark models with $C_{S_2}=\pm7.2 C_T$ are still favored within $2\sigma$, although in that study the scalar-operator-dominance hypotheses are also ruled out. \begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.5]{CS12sigmaWSL.pdf} \includegraphics[scale=0.5]{CS22sigmaWSL.pdf} \includegraphics[scale=0.531]{CV12sigmaWSL.pdf} \includegraphics[scale=0.5]{CV12sigmaovWSL.pdf} \includegraphics[scale=0.5]{CV22sigmaWSL.pdf} \includegraphics[scale=0.521]{CT2sigmaWSL.pdf} \caption{ Constraints on the Wilson coefficients from the measurements of $R(D^{(*)})$ and $R(J/\psi)$ at the C.L. of $2\sigma$ and the branching fraction $Br(B_c \to \tau\nu)$ (black dashed curves). In each panel the red stars or dashed curves denote the optimal values obtained by using the fitted Wilson coefficients. } \label{fig:constraint2} \end{center} \end{figure} \section{Predictions for the Observables}\label{sec:PRE} The fitted results for the Wilson coefficients in Table~\ref{tab:wcoef} and their ranges allowed by the experimental constraints in Figure~\ref{fig:constraint2} can be used to probe NP effects via different physical observables in $b\to c\tau\nu$ transitions.
In this work, besides the lepton universality ratios $R$, we also investigate NP effects through the longitudinal polarization asymmetry of the $\tau$ lepton $P_\tau$, the longitudinal polarization of the final state vector meson $P_{\mathcal{M}}$, and the forward-backward asymmetry $\mathcal{A}_{FB}$ of the $\tau$ lepton, which are defined as \begin{eqnarray} P_\tau &=& { \Gamma(\lambda_\tau=1/2) - \Gamma(\lambda_\tau=-1/2) \over \Gamma(\lambda_\tau=1/2) + \Gamma(\lambda_\tau=-1/2) } \,,\\ \label{eq:Ptau} P_\mathcal{M} &=& { \Gamma(\lambda_M=0) \over \Gamma(\lambda_M=0) + \Gamma(\lambda_M=1) + \Gamma(\lambda_M=-1) } \,,\\ \label{eq:PDst} \mathcal{A}_{\rm FB} &=& { \int_0^1 {d\Gamma \over d\cos{\theta}}d\cos{\theta}-\int^0_{-1}{d\Gamma \over d\cos{\theta}}d\cos{\theta} \over \int_{-1}^1 {d\Gamma \over d\cos{\theta}}d\cos{\theta} }\,, \label{eq:AFB} \end{eqnarray} where $\lambda_\tau$ denotes the $\tau$ helicity in the rest frame of the leptonic system, $\lambda_M$ is the helicity of the $D^*$($J/\psi$) in the $B_{(c)}$ rest frame, and $\theta$ is the angle between the momenta of the $\tau$ and the $B_{(c)}$ in the rest frame of the $\tau\nu$ system. Detailed expressions for the physical observables mentioned above are given in \cite{Sakaki:2013bfa}. We now discuss the numerical results for these observables. Using the hadronic form factors given in Sections~\ref{sec:Bff} and \ref{sec:Bcff} we obtain the SM and NP predictions. In Tables~\ref{tab:obser1} and \ref{tab:obser2} we list the predictions obtained by using the fitted Wilson coefficients given in Table~\ref{tab:wcoef}, while in Table~\ref{tab:obsB2sig} and Table~\ref{tab:obsBc2sig} we present the predicted ranges obtained by using the allowed regions of the Wilson coefficients shown in Figure~\ref{fig:constraint2}.
From Table~\ref{tab:obser1}, we observe that the SM predictions for $R(D^{(*)})$ decrease as compared to the values $R(D)=0.305\pm0.012$ and $R(D^{*})=0.252\pm0.004$ obtained in early works \cite{Tanaka:2012nw,Sakaki:2013bfa}, where the HQET form factors used only include part of the $\mathcal{O}(\alpha_s,\Lambda_{\mathrm{QCD}}/m_{b,c})$ contributions. Furthermore, our SM prediction for $R(J/\psi)$ presented in Table~\ref{tab:obser2} shows that its value also decreases as compared to the value $R(J/\psi)=0.283$ obtained in \cite{Watanabe:2017mip}. Therefore, from the analysis of the lepton universality ratios in the SM, one can argue that the choice of transition form factors leads to a decrease in the SM predictions and makes them deviate further from the experimental values. By incorporating the effects from single NP operators, an interesting feature arises in $R(D)$ and $R(D^*)$ obtained using the fitted values of the Wilson coefficients: in the $V_2$ scenario, the obtained values of the ratios (shown in Table~\ref{tab:obser1}) are quite close to the current world averaged values $R(D)=0.407\pm 0.024$ and $R(D^*)=0.304\pm 0.013\pm 0.007$ \cite{Amhis:2016xyh}. In addition to the ratios of the decay rates, we also give predictions for $P_\tau(D^*)$ in Table~\ref{tab:obser1}, which can be compared with the experimental value obtained by Belle (given in Table~\ref{tab:exdata}) and the theoretical predictions in \cite{Watanabe:2017mip}, which are $-0.50$ in the SM, the $V_1$ scenario and the $V_2$ scenario, and 0.14 in the $T$ scenario. Our results for the SM, the $V_1$ scenario and the $V_2$ scenario are very close to the ones obtained in \cite{Watanabe:2017mip}, but for the $T$ scenario our value is about 0.27 times theirs. These results suggest that $O_{V_1}$ and $O_{V_2}$ can better explain the measurement of $P_\tau(D^*)$ than $O_T$ does. \begin{table*}[!htbp] \small \centering \caption{Predictions for observables involved in the $B\to D^{(*)}$ decays.
The first and second uncertainties respectively result from the input parameters and the fitted Wilson coefficients.} \begin{tabular}{cccccccc} \hline \hline Scenario & $R(D)$& $R(D^*)$ &$P_\tau(D)$ & $P_\tau(D^*)$ &$P_{D^*}$ &$\mathcal{A}_{FB}(D)$ &$\mathcal{A}_{FB}(D^*)$\\ \hline SM &$0.279(7)(0)$ & $0.249(4)(0)$ &$0.325(3)(0)$ &$-0.508(4)(0)$ &$0.441(6)(0)$ &$0.3606(6)(0)$ &$-0.084(13)(0)$\\ $V_1$ &$0.354(9)(19)$ & $0.317(5)(17)$ &$0.325(3)(0)$ &$-0.508(4)(0)$ &$0.441(6)(0)$ &$0.3606(6)(0)$ &$-0.084(13)(0)$\\ $V_2$ &$0.403(10)(48)$ & $0.307(5)(15)$ &$0.325(3)(0)$ &$-0.509(4)(1)$ &$0.436(7)(4)$ &$0.3606(6)(0)$ &$0.010(8)(17)$\\ $T$ &$0.371(10)(42)$ & $0.313(26)(15)$ &$0.180(5)(49)$ &$0.038(1)(118)$ &$0.173(11)(60)$ &$0.4311(5)(29)$ &$0.036(14)(44)$\\ \hline \hline \end{tabular} \label{tab:obser1} \end{table*} \begin{table*}[!htbp] \small \centering \caption{Predicted ranges for observables involved in the $B\to D^{(*)}$ decays from the experimental constraints within $2\sigma$ and the limit of $Br(B_c\to\tau\nu)$.} \begin{tabular}{cccccccc} \hline \hline Scenario & $R(D)$& $R(D^*)$ &$P_\tau(D)$ & $P_\tau(D^*)$ &$P_{D^*}$ &$\mathcal{A}_{FB}(D)$ &$\mathcal{A}_{FB}(D^*)$\\ \hline $V_1$ &$[0.315,0.373]$ &$[0.282,0.334]$ & $[0.325,0.325]$ & $[-0.508,-0.508]$ & $[0.441,0.441]$ & $[0.361,0.361]$ & $[-0.084,-0.084]$\\ $V_2$ &$[0.315,0.499]$ & $[0.274,0.334]$ & $[0.325,0.325]$ & $[-0.511,-0.508]$ & $[0.429,0.441]$ & $[0.361,0.361]$ & $[-0.044,0.051]$\\ $T$ &$[0.321,0.323]$& $[0.274,0.334]$ & $[0.248,0.249]$ & $[-0.234,-0.182]$ & $[0.287,0.306]$ & $[0.395,0.398]$ & $[-0.027,0.023]$\\ \hline \hline \end{tabular} \label{tab:obsB2sig} \end{table*} \begin{table*}[!htbp] \small \centering \caption{Predictions for observables involved in the $B_c\to \eta_c(J/\psi)$ decays.
The first, second and third uncertainties respectively result from the input parameters, the fitted Wilson coefficients and the quark mass schemes for the tensor form factors.} \begin{tabular}{cccc} \hline \hline Scenario & $R(\eta_c)$& $R(J/\psi)$ &$P_\tau(J/\psi)$ \\ \hline SM &$0.281(_{-0.030}^{+0.034})(0)$ & $0.248(6)(0)$ &$-0.512(_{-0.016}^{+0.021})(0)$\\ $V_1$ &$0.357(_{-0.038}^{+0.044})(19)$ & $0.315(7)(17)$ &$-0.512(_{-0.016}^{+0.021})(0)$\\ $V_2$ &$0.406(_{-0.044}^{+0.050})(49)$ & $0.304(7)(16)$ &$-0.512(_{-0.016}^{+0.021})(1)$\\ $T$ &$0.337(_{-0.015}^{+0.019})(26)(15)$ & $0.188(_{-0.012}^{+0.017})(10)(21)$ &$-0.028(_{-0.013}^{+0.016})(175)(53)$\\ \hline \hline \end{tabular} \begin{tabular}{cccc} \hline \hline $P_\tau(\eta_c)$ &$P_{J/\psi}$ &$\mathcal{A}_{FB}(\eta_c)$ &$\mathcal{A}_{FB}(J/\psi)$\\ \hline $0.347(81)(0)$ &$0.446(6)(0)$ &$0.364(_{-0.009}^{+0.007})(0)$ &$-0.042(11)(0)$\\ $0.347(81)(0)$ &$0.446(6)(0)$ &$0.364(_{-0.009}^{+0.007})(0)$ &$-0.042(11)(0)$\\ $0.347(81)(0)$ &$0.443(6)(3)$ &$0.364(_{-0.009}^{+0.007})(0)$ &$0.031(7)(13)$\\ $0.24(_{-0.13}^{+0.15})(4)(2)$ &$0.271(14)(64)(41)$ &$0.419(_{-0.031}^{+0.014})(23)(8)$ &$-0.036(_{-0.008}^{+0.010})(49)(21)$\\ \hline \hline \end{tabular} \label{tab:obser2} \end{table*} \begin{table*}[!htbp] \small \centering \caption{Predicted ranges for observables involved in the $B_c\to \eta_c(J/\psi)$ decays from the experimental constraints within $2\sigma$ and the limit of $Br(B_c\to\tau\nu)$.} \begin{tabular}{cccccccc} \hline \hline Scenario & $R(\eta_c)$& $R(J/\psi)$ &$P_\tau(J/\psi)$ & $P_\tau(\eta_c)$ &$P_{J/\psi}$ &$\mathcal{A}_{FB}(\eta_c)$ &$\mathcal{A}_{FB}(J/\psi)$\\ \hline $V_1$ & $[0.318,0.376]$ & $[0.280,0.332]$ & $[-0.512,-0.512]$ & $[0.347,0.347]$ & $[0.446,0.446]$ & $[0.364,0.364]$ & $[-0.042,-0.042]$\\ $V_2$ & $[0.318,0.502]$ & $[0.270,0.331]$ & $[-0.513,-0.512]$ & $[0.347,0.347]$ & $[0.439,0.446]$ & $[0.364,0.364]$ & $[-0.010,0.064]$\\ $T$ & $[0.306,0.307]$ & $[0.219,0.257]$ & 
$[-0.330,-0.273]$ & $[0.294,0.295]$ & $[0.357,0.380]$ & $[0.390,0.391]$ & $[-0.043,-0.005]$\\ \hline \hline \end{tabular} \label{tab:obsBc2sig} \end{table*} To further analyze the NP effects, we also study the $q^2$ distributions of the observables for the $V_1$, $V_2$ and $T$ scenarios, given that the $S_1$ and $S_2$ scenarios are disfavoured by the previous analysis. In Figure~\ref{fig:rBq2}, we plot the differential ratios $R_D(q^2)$ and $R_{D^*}(q^2)$ in the full kinematical range of $q^2$. We present both the SM results and the NP results (corresponding to the fitted Wilson coefficients in Table~\ref{tab:wcoef} and the experimental constraints shown in Figure~\ref{fig:constraint2}). It is shown in Figure~\ref{fig:rBq2} that in the SM the differential ratios $R_{D^{(*)}}(q^2)$ increase monotonically with $q^2$; the inclusion of $O_{V_1}$ and $O_{V_2}$ does not affect this trend but slightly increases the predictions over the full range of $q^2$. In contrast, although the $T$ scenario is strongly restricted by the experimental constraints within $2\sigma$ and the allowed region is small, the $O_T$ operator has notable effects on $R_{D^*}(q^2)$: the resulting curves present inflexions, which is a distinctive feature for identifying NP effects generated by the tensor operator (if any). The differential ratios $R_{\eta_c}(q^2)$ and $R_{J/\psi}(q^2)$ (shown in Figure~\ref{fig:rBcq2}) behave very similarly to $R_D(q^2)$ and $R_{D^*}(q^2)$, with the main difference being that $O_T$ has a weaker effect on $R_{J/\psi}(q^2)$: it only decreases the predictions at high $q^2$. \begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.4]{RDlq2V1.pdf} \includegraphics[scale=0.4]{RDlq2V2.pdf} \includegraphics[scale=0.4]{RDlq2T.pdf} \includegraphics[scale=0.4]{RDstlq2V1.pdf} \includegraphics[scale=0.4]{RDstlq2V2.pdf} \includegraphics[scale=0.4]{RDstlq2T.pdf} \caption{Predictions for the differential ratios $R_{D}(q^2)$ and $R_{D^*}(q^2)$.
The black dashed lines and the red solid lines respectively denote the SM predictions and the NP predictions corresponding to the best fitted Wilson coefficients. The light blue bands include NP effects corresponding to the experimental constraints within $2\sigma$.} \label{fig:rBq2} \end{center} \end{figure} \begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.4]{Retaclq2V1.pdf} \includegraphics[scale=0.4]{Retaclq2V2.pdf} \includegraphics[scale=0.4]{Retaclq2T.pdf} \includegraphics[scale=0.4]{RJpsilq2V1.pdf} \includegraphics[scale=0.4]{RJpsilq2V2.pdf} \includegraphics[scale=0.4]{RJpsilq2T.pdf} \caption{ Predictions for the differential ratios $R_{\eta_c}(q^2)$ and $R_{J/\psi}(q^2)$. The black dashed lines and the red solid lines respectively denote the SM predictions and the NP predictions corresponding to the best fitted Wilson coefficients. The light blue bands include NP effects corresponding to the experimental constraints within $2\sigma$.} \label{fig:rBcq2} \end{center} \end{figure} In Figure~\ref{fig:Ptauq2}, we show the $q^2$ distributions of the $\tau$ longitudinal polarization. We find that $O_{V_1}$ and $O_{V_2}$ have no effect on $P_\tau^D(q^2)$ and $P_\tau^{\eta_c}(q^2)$, so the corresponding figures are not presented. Besides, $O_T$ only slightly decreases the predictions without changing the shape of the curves; therefore it might be difficult to identify the NP effects from these two differential observables. Similarly, $O_{V_1}$ has no effect on $P_\tau^{D^{*}}(q^2)$ and $P_\tau^{J/\psi}(q^2)$, and the effects of the operator $O_{V_2}$ are almost negligible.
On the contrary, $P_\tau^{D^{*}}(q^2)$ and $P_\tau^{J/\psi}(q^2)$ are very sensitive to $O_T$, which is particularly obvious when taking into account the best-fitted Wilson coefficients\footnote{However, one should keep in mind that the $T$ scenario has a very large $\chi^2$ value, as shown in Table~\ref{tab:wcoef}.}: the curves obtained by using the best-fitted Wilson coefficients deviate strongly from the SM curves in Figure~\ref{fig:Ptauq2}. The situations for $P_{D^*}(q^2)$ and $P_{J/\psi}(q^2)$ (shown in Figure~\ref{fig:Pq2}) are similar to those for $P_\tau^{D^{*}}(q^2)$ and $P_\tau^{J/\psi}(q^2)$ in the sense that the effects of the vector operators $O_{V_1}$ and $O_{V_2}$ are respectively nonexistent and almost negligible, while the effects of the tensor operator $O_T$ are shown to be significant. \begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.4]{PtauDq2T.pdf} \includegraphics[scale=0.4]{PtauDstq2V2.pdf} \includegraphics[scale=0.4]{PtauDstq2T.pdf} \includegraphics[scale=0.4]{Ptauetacq2T.pdf} \includegraphics[scale=0.4]{PtauJpsiq2V2.pdf} \includegraphics[scale=0.4]{PtauJpsiq2T.pdf} \caption{ Predictions for the differential polarizations $P_\tau^{D^{(*)}}(q^2)$, $P_\tau^{\eta_c}(q^2)$ and $P_\tau^{J/\psi}(q^2)$. The black dashed lines and the red solid lines respectively denote the SM predictions and the NP predictions corresponding to the best fitted Wilson coefficients. The light blue bands include NP effects corresponding to the experimental constraints within $2\sigma$.} \label{fig:Ptauq2} \end{center} \end{figure} \begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.4]{PDstq2V2.pdf} \includegraphics[scale=0.4]{PDstq2T.pdf}\\ \includegraphics[scale=0.4]{PJpsiq2V2.pdf} \includegraphics[scale=0.4]{PJpsiq2T.pdf} \caption{ Predictions for the differential polarizations $P^{D^*}(q^2)$ and $P^{J/\psi}(q^2)$.
The black dashed lines and the red solid lines respectively denote the SM predictions and the NP predictions corresponding to the best fitted Wilson coefficients. The light blue bands include NP effects corresponding to the experimental constraints within $2\sigma$.} \label{fig:Pq2} \end{center} \end{figure} The $q^2$ distributions of the forward-backward asymmetries are plotted in Figure~\ref{fig:AFB}. We find that the tensor operator $O_T$ has effects on $\mathcal A_{FB}^D(q^2)$ and $\mathcal A_{FB}^{\eta_c}(q^2)$, and that both $O_T$ and $O_{V_2}$ have effects on $\mathcal A_{FB}^{D^*}(q^2)$ and $\mathcal A_{FB}^{J/\psi}(q^2)$. All the NP operators increase the predictions for these differential forward-backward asymmetries, except that $O_T$ decreases $\mathcal A_{FB}^{D^*}(q^2)$ and $\mathcal A_{FB}^{J/\psi}(q^2)$ at high $q^2$. It is worth noting that, apart from the differential ratios, $\mathcal A_{FB}^{D^*}(q^2)$ and $\mathcal A_{FB}^{J/\psi}(q^2)$ are the two observables helpful in discriminating the $V_2$ scenario, which has the smallest $\chi^2$ value in the fit among the allowed scenarios. \begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.4]{AFBDq2T.pdf} \includegraphics[scale=0.4]{AFBDstq2V2.pdf} \includegraphics[scale=0.4]{AFBDstq2T.pdf}\\ \includegraphics[scale=0.4]{AFBetacq2T.pdf} \includegraphics[scale=0.4]{AFBJpsiq2V2.pdf} \includegraphics[scale=0.4]{AFBJpsiq2T.pdf} \caption{Predictions for the differential forward-backward asymmetries. The black dashed lines and the red solid lines respectively denote the SM predictions and the NP predictions corresponding to the best fitted Wilson coefficients.
The light blue bands include NP effects corresponding to the experimental constraints within $2\sigma$.} \label{fig:AFB} \end{center} \end{figure} \section{Summary and conclusions}\label{sec:SUM} In this work, we have studied the new physics effects in the $b\to c\tau\nu$ transitions within the framework of effective field theory, by performing a combined model-independent analysis based on the experimental data of the semi-tauonic $B \to D^{(*)}\tau \nu$ and $B_c\to J/\psi \tau\nu$ decays. We have paid particular attention to the choice of the input hadronic form factors, which are the major source of uncertainties in the analysis of the relevant exclusive $b\to c\tau\nu$ decays. We adopt a set of $B\to D^{(*)}$ form factors recently determined by performing a global fit of the HQET parametrization, including higher order $\alpha_s$ and $\Lambda_{\mathrm{QCD}}/m$ contributions, to the lattice and LCSR results while imposing the strong unitarity constraints. For the $B_c\to J/\psi(\eta_c)$ form factors, we find the CLFQM form factors to be well consistent with the preliminary lattice QCD results and use them as numerical inputs in this work. Assuming such a choice of hadronic form factors to be reliable, we have also obtained the best-fitted values and constraints for the Wilson coefficients in each single-operator NP scenario. It is found that none of the single operators can simultaneously explain the current experimental measurements of the ratios $R(D)$, $R(D^*)$ and $R(J/\psi)$ at the confidence level of $1\sigma$, while allowed regions for the Wilson coefficients of the vector and tensor operators are obtained from the experimental constraints within $2\sigma$ along with the limit $Br(B_c \to \tau\nu)<10\%$ from the LEP1 data.
The obtained regions favor the vector operators $O_{V_1}$ and $O_{V_2}$ among all the NP operators, tightly constrain the tensor operator $O_T$ favored in some other analyses performed using different hadronic form factors, and rule out the scalar operators $O_{S_1}$ and $O_{S_2}$. These results pose challenges to the NP models in which the NP effects are dominated by a single scalar operator $O_{S_1}$, $O_{S_2}$ (such as some types of charged Higgs models) or the tensor operator $O_T$. Meanwhile, in the minimum $\chi^2$ fit of the Wilson coefficients to the experimental measurements of $R(D^{(*)})$, $R(J/\psi)$ and $P_\tau(D^*)$, the $V_2$ scenario gives the smallest $\chi^2$ value among the scenarios allowed by the $2\sigma$ constraints from the ratios of decay rates and the LEP1 data on $Br(B_c\to\tau\nu)$. Furthermore, we have made predictions for various physical observables in the $B \to D^{(*)}\tau \nu$ and $B_c\to \eta_c(J/\psi) \tau\nu$ decays, namely the ratio of decay rates $R$, the $\tau$ polarization $P_\tau$, the final state meson longitudinal polarization $P_M$, and the forward-backward asymmetry $\mathcal A_{FB}$, together with the corresponding $q^2$ differential distributions. For all of these physical observables, we have given both the SM predictions and the NP predictions using the fitted Wilson coefficients, for which the most intriguing observation is probably that the predicted $R(D)$ and $R(D^*)$ in the $V_2$ scenario are in excellent agreement with the current world averaged values. We have also obtained the ranges of the same observables allowed by the experimental constraints on $R(D)$, $R(D^*)$ and $R(J/\psi)$ within $2\sigma$ and the limit on $Br(B_c \to \tau\nu)$.
The SM predictions can be compared with other theoretical predictions using different sets of form factors as inputs, and the NP predictions are expected to be helpful in discriminating different NP scenarios, given the distinctive features predicted for some of the observables. These results can be further tested in the current LHCb and Belle-II experiments, as well as at the proposed high energy colliders. \begin{acknowledgments} The work is partly supported by the National Science Foundation of China (11575151, 11521505, 11621131001), the Natural Science Foundation of Shandong Province (Grant No.~ZR2016JL001) and the China Postdoctoral Science Foundation (2018M631572). Z.R. Huang is grateful to David Straub, Ji-bo He, Ryoutaro Watanabe, Hai-Bing Fu and Wen-Qian Huang for very useful discussions. M.A.P. wants to thank the Centre for Future High Energy Physics, Beijing, China for the hospitality provided during his visit. \end{acknowledgments} \begin{appendix} \section{Input parameters and correlation matrix} \label{app:A} In this appendix, we list the input parameters used in this work.\footnote{The parameters not presented in Table~\ref{tab:para} are the meson masses, for which we take their PDG values \cite{Patrignani:2016xqp}.} \begin{table*}[htbp] \begin{center} \begin{tabular}{cccccc} \hline \hline Parameters & Values & References& Parameters & Values &References \\ \hline $\overline m_b(\overline m_b)$ & $4.180{^{+0.040}_{-0.030}}~{\rm GeV}$ & \cite{Patrignani:2016xqp} & $m{_c^{pole}}$& $m{_b^{pole}}-(3.4\pm0.02)~{\rm GeV}$& \cite{Bernlochner:2017jka,Ligeti:2014kia}\\ $\overline m_c(\overline m_c)$ & $1.275{^{+0.025}_{-0.035}}~{\rm GeV}$ & \cite{Patrignani:2016xqp} & $f_{B_c}$ & $0.434(15)~{\rm GeV}$ & \cite{Akeroyd:2017mhr}\\ $m{_b^{1S}}$ & $4.710(50)$ GeV& \cite{Bernlochner:2017jka,Ligeti:2014kia} & $\tau_{B_c}$& $0.507(9)$ ps& \cite{Akeroyd:2017mhr} \\ $\lambda_1$ & $-0.3~{\rm GeV^2}$ & \cite{Bernlochner:2017jka,Ligeti:2014kia}& $V_{cb}$ &
$0.0414(13)$ & \cite{Patrignani:2016xqp}\\ $\Lambda_{QCD}$ & $0.25~{\rm GeV}$ & \cite{Bernlochner:2017jka} & $G_F$ & $1.166\times10^{-5}~{\rm GeV^{-2}}$& \cite{Patrignani:2016xqp}\\ \hline \hline \end{tabular} \caption{Input parameters adopted in the numerical analysis.} \label{tab:para} \end{center} \end{table*} Parameters and correlation matrix in the $B\to D^{(*)}$ form factors given in \cite{Jung:2018lfu} are \begin{gather} \begin{pmatrix} \chi_2(1) \\ \chi_2'(1) \\ \chi_3'(1) \\ \eta(1) \\ \eta'(1) \\ \rho^2 \\ c \\ \delta_{h_{A_1}} \\ \delta_{h_+} \end{pmatrix} = \begin{pmatrix} -0.058 \pm 0.019 \\ -0.001 \pm 0.020 \\ 0.035 \pm 0.019 \\ 0.358 \pm 0.043 \\ 0.044 \pm 0.125 \\ 1.306 \pm 0.059 \\ 1.220 \pm 0.109 \\ -2.299 \pm 0.394 \\ 0.485 \pm 0.269 \end{pmatrix}, \\ \rho= \begin{pmatrix} 1.00 & 0.01 & 0.02 & -0.00 & 0.02 & -0.27 & -0.21 & -0.03 & 0.02 \\ 0.01 & 1.00 & -0.00 & -0.02 & -0.02 & 0.00 & 0.14 & 0.01 & 0.00 \\ 0.02 & -0.00 & 1.00 & 0.00 & -0.03 & 0.83 & 0.61 & -0.03 & 0.02 \\ -0.00 & -0.02 & 0.00 & 1.00 & 0.03 & 0.01 & 0.04 & 0.15 & 0.21 \\ 0.02 & -0.02 & -0.03 & 0.03 & 1.00 & -0.14 & -0.16 & -0.05 & -0.22 \\ -0.27 & 0.00 & 0.83 & 0.01 & -0.14 & 1.00 & 0.79 & 0.09 & -0.14 \\ -0.21 & 0.14 & 0.61 & 0.04 & -0.16 & 0.79 & 1.00 & 0.06 & -0.08 \\ -0.03 & 0.01 & -0.03 & 0.15 & -0.05 & 0.09 & 0.06 & 1.00 & -0.24 \\ 0.02 & 0.00 & 0.02 & 0.21 & -0.22 & -0.14 & -0.08 & -0.24 & 1.00 \end{pmatrix}. \end{gather} \end{appendix}
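To propagate these correlated uncertainties into the observables, one commonly draws Gaussian samples of the parameters; the sketch below is a minimal illustration of that procedure (the sample size and random seed are arbitrary choices, and the eigenvalue clipping only guards against a possible loss of positive definiteness from the rounding of the quoted matrix).

```python
import numpy as np

# Correlated Gaussian sampling of the HQET fit parameters
# (chi_2(1), chi_2'(1), chi_3'(1), eta(1), eta'(1), rho^2, c,
#  delta_hA1, delta_h+) using the central values, uncertainties and
# correlation matrix quoted above.
mean = np.array([-0.058, -0.001, 0.035, 0.358, 0.044,
                 1.306, 1.220, -2.299, 0.485])
err = np.array([0.019, 0.020, 0.019, 0.043, 0.125,
                0.059, 0.109, 0.394, 0.269])
corr = np.array([
    [ 1.00,  0.01,  0.02, -0.00,  0.02, -0.27, -0.21, -0.03,  0.02],
    [ 0.01,  1.00, -0.00, -0.02, -0.02,  0.00,  0.14,  0.01,  0.00],
    [ 0.02, -0.00,  1.00,  0.00, -0.03,  0.83,  0.61, -0.03,  0.02],
    [-0.00, -0.02,  0.00,  1.00,  0.03,  0.01,  0.04,  0.15,  0.21],
    [ 0.02, -0.02, -0.03,  0.03,  1.00, -0.14, -0.16, -0.05, -0.22],
    [-0.27,  0.00,  0.83,  0.01, -0.14,  1.00,  0.79,  0.09, -0.14],
    [-0.21,  0.14,  0.61,  0.04, -0.16,  0.79,  1.00,  0.06, -0.08],
    [-0.03,  0.01, -0.03,  0.15, -0.05,  0.09,  0.06,  1.00, -0.24],
    [ 0.02,  0.00,  0.02,  0.21, -0.22, -0.14, -0.08, -0.24,  1.00]])

cov = corr * np.outer(err, err)       # covariance from the correlation matrix
vals, vecs = np.linalg.eigh(cov)
vals = np.clip(vals, 0.0, None)       # guard against rounding-induced negatives
L = vecs * np.sqrt(vals)              # factorization with cov = L @ L.T

rng = np.random.default_rng(1)
samples = mean + rng.standard_normal((200000, 9)) @ L.T
```

Each row of `samples` is one parameter set; evaluating the form factors and observables on many such sets yields their propagated uncertainty bands.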
\chapter[Introduction to Hydrodynamics]{Introduction to Hydrodynamics}\label{ra_ch1} \author[S. Jeon and U. Heinz]{Sangyong Jeon$^1$ and Ulrich Heinz$^2$} \address{ $^1$Department of Physics, McGill University, Montreal, QC, Canada\\ $^2$Department of Physics, The Ohio State University, Columbus, Ohio, USA } \begin{abstract} \end{abstract} \body \section{Introduction} The application of hydrodynamics in high-energy physics has a long and illustrious history, starting from L.D.~Landau's seminal paper\cite{Landau:1953gs}. In its history of more than half a century, many papers have been written on a broad spectrum of topics, too numerous to list here. In this review, our emphasis will be more on the basics of the theory of hydrodynamics than on reporting the current phenomenological status, of which several excellent reviews already exist (for instance, see Refs.\cite{Kolb:2003dz,Huovinen:2003fa,Gale:2013da}). Recent ultra-relativistic heavy ion collision experiments at the Super Proton Synchrotron (SPS), the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC) have demonstrated beyond any doubt that the Quark-Gluon Plasma (QGP) is being created in these collisions. Unfortunately, direct access to QGP properties such as the temperature, equation of state, transport coefficients, etc.~is not feasible. The only experimentally accessible information is contained in the spectra of the final state particles. To connect them to QGP properties such as those above, one must use theoretical models. It would be wonderful to have an analytic or a numerical method that can calculate the evolution of heavy ion collisions from first principles. But this microscopic, non-equilibrium, many-body QCD problem is currently intractable. What is tractable is the coarse-grained collective motion of the system as a fluid after local thermal equilibrium is established.
Since the properties of the QGP we are after are mostly (local) equilibrium properties, it is natural that the dynamics of collective motion -- hydrodynamics -- is an integral part of the theoretical modelling. What has been exciting and interesting in QGP research is the close discourse between experiment and theory. In elementary particle experiments, perturbative QCD is being tested with amazing success. More and more precise perturbative QCD calculations prove to describe experimental data more and more accurately. In contrast, QGP research is much more dynamic. For instance, before the discovery of the QGP, the theoretical expectation was that the QGP would be a weakly coupled plasma of quarks and gluons, based partially on the fact that the QGP properties seem to approach about 80\,\% of the Stefan-Boltzmann limit rather quickly on the lattice, around $2T_c$ \cite{Karsch:2001vs}. But almost from day one of RHIC operation, strong elliptic flow quickly proved this initial expectation untenable. The QGP around the transition temperature turned out to be {\em the} most strongly coupled many-body system ever observed. Soon after, the authors of Ref.\cite{Kovtun:2004de} used string theory techniques to calculate the infinite coupling limit of the shear viscosity and came up with the surprising result that this limit is small, but has a non-zero lower bound. Subsequent hydrodynamic calculations then demonstrated the importance of a small but finite shear viscosity in understanding the RHIC flow data \cite{Song:2007fn,Song:2008hj,Schenke:2010rr}. Comparing the ensuing LHC predictions with the LHC data then confirmed the expectation that as the temperature increases, the shear viscosity of the QGP should also increase~\cite{Schenke:2011tv,Song:2011qa,Petersen:2011sb,Shen:2011eg}. None of these connections between exciting theoretical developments and experiments could have been made without hydrodynamics.
More recently, the systems created in the highest multiplicity proton-proton and proton-nucleus collisions were also seen to exhibit strong collective behavior~\cite{Aad:2012gla,CMS:2012qk,Abelev:2012ola,Adare:2013piz}. This is deeply puzzling, as the size of the system ought to be too small to behave collectively. It is hoped that a more thorough investigation of the possible origin of the collectivity in such small systems can greatly illuminate the inner workings of QGP formation~\cite{Kozlov:2014hya}. As mentioned in the beginning, the aim of this review is an introduction to the theory of hydrodynamics in ultra-relativistic heavy ion collisions. This actually entails a large number of disciplines in addition to relativistic fluid dynamics. Our plan for this review is as follows. Section 2 contains the basic concepts of hydrodynamics and their definitions. In section 3, second order viscous hydrodynamics is derived from a very general linear response theory of conserved currents. Section 4 discusses how coarse-graining of kinetic theory can result in a more general form of viscous hydrodynamics. In section 5, various numerical techniques needed to implement relativistic viscous hydrodynamics in ultra-relativistic heavy ion collisions are discussed. We conclude in section 6. \section{Hydrodynamic Form of the Stress Energy Tensor and the Net Baryon Number Current} Hydrodynamics is all about the flow of conserved quantities. In this review, we strictly deal with relativistic hydrodynamics. Therefore, unlike the non-relativistic case, mass is a part of the energy budget. In Minkowski coordinates, the conservation laws in their local form are \begin{eqnarray} \partial_\mu T^{\mu\nu} &=& 0 \nonumber\\ \partial_\mu J^\mu_i &=& 0 \label{eq:1} \end{eqnarray} where $T^{\mu\nu}$ is the stress-energy tensor and the roman letter $i$ on the current $J_i^\mu$ labels any other conserved charges such as the net baryon number, net electric charge, etc.
For the bulk evolution in relativistic heavy ion collisions, usually only the net baryon number current, $J_B^\mu$, is considered. If needed, additional electric and strangeness currents can be easily accommodated. Using the divergence theorem, the integral form of the conservation laws reads \begin{eqnarray} {d\over dt} \int_V d^3x\, T^{0\nu} &=& -\int_{\partial V} dS_i T^{i\nu} \nonumber\\ {d\over dt} \int_V d^3x\, J_B^{0} &=& -\int_{\partial V} dS_i J_B^{i} \label{eq:conservation1} \end{eqnarray} where on the right hand side the integration is over the boundary of the volume $V$, assuming that the size and the shape of the volume are independent of time. This form admits a very physical interpretation: the rate of change of the conserved quantity in a fixed volume equals the net current entering the volume. Hence, the dynamics of conserved quantities is governed by the dynamics that governs the currents. In essence, hydrodynamics is all about the dynamics of the currents. Hydrodynamics is useful because it is a coarse-grained theory. When a system contains too many particles, it becomes difficult to follow the microscopic details of the system. When the system contains sufficiently many particles, however, it again starts to admit analytic studies because thermodynamic concepts start to apply, in particular that of static equilibrium. A system in static equilibrium is characterized by only a few quantities such as the temperature, collective velocity and chemical potential. These control the energy density, momentum density and charge density, respectively. The price to pay for this simplification is that questions about short time scale or short length scale phenomena can no longer be answered. The systems we would like to study, ultra-relativistic heavy ion collisions, do contain a large number of particles, but they certainly are not static. In fact, they will never actually reach the state of static equilibrium.
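The integral form of the conservation laws above, i.e.~that the charge in a fixed volume changes only through the boundary fluxes, can be illustrated with a minimal one-dimensional finite-volume sketch (the advection current $j = v\rho$ and all numerical values are illustrative assumptions):

```python
import numpy as np

# 1-D finite-volume illustration of a conservation law d_t rho + d_x j = 0
# with an assumed advection current j = v*rho (v constant): the total charge
# in the box changes only through the net flux at the two boundaries.
nx, dx, dt, v = 200, 0.01, 0.004, 0.5          # grid and speed are illustrative
x = (np.arange(nx) + 0.5) * dx
rho = np.exp(-((x - 1.0) / 0.2) ** 2)          # initial charge profile

total_before = rho.sum() * dx
net_inflow = 0.0
for _ in range(100):
    flux = v * np.concatenate(([0.0], rho))    # upwind interface fluxes, zero inflow
    rho = rho - (dt / dx) * (flux[1:] - flux[:-1])
    net_inflow += dt * (flux[0] - flux[-1])    # flux in at x=0 minus flux out at x=L
total_after = rho.sum() * dx

# (total_after - total_before) equals the accumulated net boundary flux
```

The agreement is exact by construction: summing the update over all cells telescopes the interior fluxes, leaving only the boundary terms, which is precisely the content of the integral conservation law.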
Nevertheless, if one is interested only in the coarse-grained collective motion of the system, the concept of {\em local equilibrium} may still apply, provided that the expansion rate is much slower than the microscopic interaction rate. If one considers a macroscopically small but microscopically large fluid cell around a position ${\bf x}$ at a given time $t$, then within a macroscopically short but microscopically long time scale, the time averages should approach the static equilibrium values according to the ergodicity hypothesis of statistical mechanics. More details on the length and time scale analysis can be found in Section \ref{sec:kinetic}. When local equilibrium is reached, it becomes meaningful to describe the system with the local temperature $T(t,{\bf x})$, the collective velocity $u^\mu(t,{\bf x})$ of the fluid cell and the local chemical potential $\mu_B(t,{\bf x})$. One can then study the dynamics of only those few thermodynamic quantities. Since $T, u^\mu, \mu_B$ are basically the Lagrange multipliers that fix the average energy, momentum and net charge, it is natural that we turn to the conservation laws for their dynamics. The goal of hydrodynamics is then to study the collective motion of a system using the conservation laws with only {\em statistical} inputs from the underlying microscopic theory. In a dynamically evolving fluid, the concept of the local rest frame is essential in order to apply the concept of local equilibrium. However, defining the collective velocity of a fluid cell turns out to be quite intricate. This is not simple even for the simplest system, composed of a single kind of non-interacting particles in a non-relativistic setting. One kind of collective velocity comes from the average momentum \begin{eqnarray} {\bf u}_p = {\sum_i m{\bf v}_i\over \sum_i m} \end{eqnarray} or from the mass current. Here the sum is over all particles in the given fluid cell.
Another comes from the energy-weighted average \begin{eqnarray} {\bf u}_E = {\sum_i (m{\bf v}_i^2/2) {\bf v}_i\over \sum_i (m{\bf v}_i^2/2)} \end{eqnarray} or from the energy current. If the particles in the system carry an additional conserved charge, $B$, and the net charge is non-zero, then one can define yet another collective velocity by performing the charge-weighted average \begin{eqnarray} {\bf u}_B = {\sum_{i} B_i {\bf v}_i\over \sum_{i} B_i} \end{eqnarray} Here $B_i$ is the conserved charge of the $i$-th particle. For a state in static equilibrium, all three collective velocities above coincide because they must all vanish. However, they do not necessarily coincide for an observer moving with a uniform velocity $-{\bf u}_O$. The mass-weighted average velocity and the charge-weighted average velocity are both ${\bf u}_O$. But the energy-weighted average velocity \begin{eqnarray} {\bf u}'_E = {\sum_i (m({\bf v}_i + {\bf u}_O)^2/2) ({\bf v}_i + {\bf u}_O) \over \sum_i m({\bf v}_i + {\bf u}_O)^2/2} \ne {\bf u}_O \end{eqnarray} clearly does not coincide with ${\bf u}_O$. There is thus no unambiguous choice of the flow velocity even in this simple case. One must choose among these options what will be regarded as the flow velocity. In the relativistic setting, mass is a part of the energy. Hence, there are only two options for choosing the flow velocity: the energy current or the net baryon current. One must choose one of these options in order to decompose $T^{\mu\nu}$ and $J_B^\mu$ into a useful hydrodynamic form. The net baryon number is relatively small in ultra-relativistic heavy ion collisions. Furthermore, the flow observables we are interested in are mostly patterns in the energy and momentum distributions. Therefore, there is no real benefit in choosing the net baryon number collective velocity as far as heavy ion analysis is concerned. Choosing to follow the energy current,\footnote{This choice of frame is often referred to as the Landau-Lifshitz frame.
If one chooses to follow the charge current, it is referred to as the Eckart frame.} the flow velocity for the energy current is defined by the eigenvalue problem \begin{eqnarray} \label{EVeq} T^{\mu}_{\ \nu} u^\nu = \varepsilon u^\mu \end{eqnarray} where $T^\mu_{\ \nu} = T^{\mu\alpha}g_{\alpha\nu}$ and the flow vector is normalized to $u^\mu u_\mu = g_{\mu\nu}u^\mu u^\nu = 1$. We use $g_{\mu\nu} = {\rm diag}(1,-1,-1,-1)$ throughout this review. Note that while $T^{\mu\nu}$ is a symmetric matrix, $T^{\mu}_{\ \nu}$ is no longer symmetric. Therefore, there is no guarantee that the eigenvalues are real. But any physically consistent system should admit a positive eigenvalue $\varepsilon$ and an associated time-like real eigenvector $u^\mu$. Decomposing $T^{\mu\nu}$ using $\varepsilon$ and $u^\mu$, one gets \begin{eqnarray} T^{\mu\nu} = \varepsilon u^\mu u^\nu + {1\over 3} (T^\alpha_{\ \alpha} - \varepsilon) \Delta^{\mu\nu} + \pi^{\mu\nu} \label{eq:Tmunu1} \end{eqnarray} where the local 3-metric $\Delta^{\mu\nu}$ is given by \begin{eqnarray} \label{eq:Deltamunu} \Delta^{\mu\nu} = g^{\mu\nu} - u^\mu u^\nu \end{eqnarray} The residual shear tensor $\pi^{\mu\nu}$ is symmetric, $\pi^{\mu\nu}=\pi^{\nu\mu}$, transverse, $ \pi^{\mu\nu}u_\nu = 0 $, and traceless, $ \pi^{\mu\nu}g_{\mu\nu} = 0 $. Hence altogether the expression (\ref{eq:Tmunu1}) has the required 10 independent degrees of freedom: the local energy density $\varepsilon$, the local fluid velocity ${\bf u}$, the trace $T^{\alpha}_{\ \alpha}$ and the residual shear tensor $\pi^{\mu\nu}$. The net-baryon current is \begin{eqnarray} J_B^\mu = J^\mu_{B,\rm id} + V_B^\mu \label{eq:JB1} \end{eqnarray} where $J^\mu_{B,\rm id} = \rho_B u^\mu$ is the ideal fluid part of the current and $V_B^\mu$ is a space-like vector satisfying the transversality condition $u_\mu V^\mu_{B} = 0$. It has the required 4 independent degrees of freedom: the local net baryon density $\rho_B$ and the residual vector ${\bf V}_B$.
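The Landau matching condition of Eq.~(\ref{EVeq}) can be checked numerically: the sketch below builds an ideal-fluid $T^{\mu\nu}$ boosted along $x$ (with illustrative numbers) and recovers $\varepsilon$ and $u^\mu$ as the timelike eigenpair of $T^\mu_{\ \nu}$, the remaining eigenvalues being $-P$.

```python
import numpy as np

# Numerical check of the Landau matching condition T^mu_nu u^nu = eps u^mu:
# build an ideal-fluid stress-energy tensor boosted along x (illustrative
# numbers) and recover (eps, u^mu) as its timelike eigenpair.
g = np.diag([1.0, -1.0, -1.0, -1.0])           # metric, signature (+,-,-,-)
eps, P, vx = 3.0, 1.0, 0.6
gamma = 1.0 / np.sqrt(1.0 - vx ** 2)
u = np.array([gamma, gamma * vx, 0.0, 0.0])    # u^mu, normalized to u.u = 1

Delta = g - np.outer(u, u)                     # projector Delta^{mu nu}
T = eps * np.outer(u, u) - P * Delta           # ideal T^{mu nu}
T_mixed = T @ g                                # T^mu_nu = T^{mu alpha} g_{alpha nu}

w, V = np.linalg.eig(T_mixed)                  # eigenvalues are {eps, -P, -P, -P}
i = int(np.argmax(w.real))                     # the flow eigenvalue, since eps > -P
eps_found = w[i].real
u_found = V[:, i].real
u_found = u_found / np.sqrt(u_found @ g @ u_found)  # enforce u.u = 1
u_found = u_found * np.sign(u_found[0])             # pick the future-pointing root
```

For these numbers ($\varepsilon = 3P$) the trace of $T^\mu_{\ \nu}$ also vanishes, as it must for a conformal system.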
So far we did not use any thermodynamic information. We will do so now to re-write the trace $T^\alpha_{\ \alpha}$ in a more physical form. A medium in static equilibrium has $T^{\mu\nu}_{\rm eq} = {\rm diag}(\varepsilon, P, P, P)$ where $P$ is the pressure. Therefore, the trace should contain the equilibrium piece $g_{\mu\nu}T^{\mu\nu}_{\rm eq} = \varepsilon - 3P$. Furthermore, the thermodynamic identities \begin{eqnarray} dP &=& sdT + \rho_B d\mu_B \\ ds &=& {1\over T}d\varepsilon - {\mu_B\over T} d\rho_B \end{eqnarray} where $s$ is the entropy density, indicate that the pressure $P$ is a function of the temperature $T$ and the baryon chemical potential $\mu_B$, which are in turn functions of the energy density and the net baryon density. Hence we must be able to find $P$ as a function of $\varepsilon$ and $\rho_B$: \begin{eqnarray} P = P(\varepsilon, \rho_B) \end{eqnarray} This relationship is known as the equation of state. Writing $T^{\alpha}_{\ \alpha} = \varepsilon - 3(P + \Pi)$, the stress-energy tensor then becomes \begin{eqnarray} T^{\mu\nu} = T_{\rm id}^{\mu\nu} - \Pi \Delta^{\mu\nu} + \pi^{\mu\nu} \label{eq:Tmunu2} \end{eqnarray} where $T_{\rm id}^{\mu\nu} = \varepsilon u^\mu u^\nu - P(\varepsilon, \rho_B) \Delta^{\mu\nu}$ is the ideal fluid stress-energy tensor. From the arguments presented above, it should be clear that the residual scalar term $\Pi$, the tensor term $\pi^{\mu\nu}$, and the vector $V_B^\mu$ in the baryon current must all vanish in the static equilibrium limit. As these quantities represent deviations from equilibrium, their size will depend on how fast local equilibrium is achieved. If local equilibration is instantaneous on the macroscopic time scale, then fluid cells will always be in strict local equilibrium and hence $\Pi = \pi^{\mu\nu} = V_B^\mu = 0$.
Microscopically, this would happen if the scattering cross-section is so large that the mean free path is much shorter than any macroscopic length scale (c.f.~section \ref{sec:kinetic}). If this is the case, the number of unknowns $(\varepsilon, {\bf u}, \rho_B)$ matches the 5 conservation laws, and one has ideal hydrodynamics, whose dynamics is completely specified by \begin{eqnarray} \partial_\mu \left(\varepsilon u^\mu u^\nu - P(\varepsilon, \rho_B)\Delta^{\mu\nu}\right) = 0 \end{eqnarray} and \begin{eqnarray} \partial_\mu (\rho_B u^\mu) = 0 \end{eqnarray} The information on the underlying system enters only through the equation of state $P(\varepsilon, \rho_B)$. \section{Hydrodynamics from Linear Response Theory} \label{sec:linear_response} \subsection{Linear Response Theory} In realistic systems, the approach to local equilibrium is never infinitely fast. Therefore, $\Pi$, $\pi^{\mu\nu}$ and $V_B^\mu$ cannot simply be set to vanish, although if the system is conformal one strictly has $\Pi = 0$. This is because the trace must vanish, $T^{\alpha}_{\ \alpha} = 0$, in a conformal system. When any of these quantities are non-zero, the system is out of equilibrium. Therefore, local entropy must increase. The evolution equations of these quantities are then necessarily of the dissipative type. In this and the following sections, we use linear response theory to obtain such equations for dissipative hydrodynamics. In section \ref{sec:kinetic_hydro}, kinetic theory approaches that can go beyond the near-equilibrium restriction of linear response theory are discussed. To gain more insight into the behavior of the dissipative quantities, we start with a system very slightly out of equilibrium at $t = 0$. We then consider how the system approaches equilibrium. This is the realm of linear response theory.
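Before turning to dissipation, the ideal-fluid limit above can be illustrated numerically: linearizing the ideal equations about a static background, a small energy-density disturbance obeys an undamped wave equation, so a single Fourier mode oscillates with frequency $v_s k$ forever. A minimal leapfrog sketch (illustrative values; a constant speed of sound is assumed):

```python
import numpy as np

# Linearized ideal hydrodynamics: d_t^2 (delta eps) = vs^2 d_x^2 (delta eps),
# so a Fourier mode satisfies x'' = -vs^2 k^2 x. Kick-drift-kick leapfrog.
vs2, k = 1.0 / 3.0, 1.0        # illustrative conformal speed of sound squared
dt, nsteps = 1e-4, 20000

x, v = 1.0, 0.0                # mode amplitude and its time derivative
for _ in range(nsteps):
    v += -vs2 * k**2 * x * dt / 2
    x += v * dt
    v += -vs2 * k**2 * x * dt / 2

t = dt * nsteps
print(x, np.cos(np.sqrt(vs2) * k * t))   # agree: pure oscillation, no damping
```

With dissipation switched on (the subject of the rest of this section), the same mode acquires an imaginary part in its frequency and decays.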
A full analysis of quantum linear response theory can be found in any number of standard textbooks (for example, see Ref.\cite{Kapusta:2006pm}). Here we will only go over the main ideas. Suppose that in the remote past, $t = -T$, the density operator had the equilibrium form $\hat\rho_0 = e^{-\beta {\hat{H}}_0}/Z_0$ where ${\hat{H}}_0$ is the system Hamiltonian and $Z_0 = {\rm Tr} e^{-\beta {\hat{H}}_0}$ is the partition function. Then a force term $f(t,{\bf x}) = \theta(-t)f({\bf x})e^{\epsilon t}$ is turned on adiabatically at an infinitesimally slow rate $\epsilon$. At $t=0$, the force term is turned off, and at this point the system is out of equilibrium. The full time-dependent Hamiltonian for this process is \begin{eqnarray} \hat{H}(t) = {\hat{H}}_0 - \int d^3x\, {\hat{A}}({\bf x})f(t,{\bf x}) \label{eq:fullH} \end{eqnarray} Here $\hat{A}$ represents a set of Hermitian operators for the conserved quantities we are interested in, namely $T^{0\mu}$ and $\rho_B$. Hence, the expressions presented below are in general matrix expressions.
Treating $\delta{\hat{H}} = -\int d^3x\, f(t,{\bf x}){\hat{A}}({\bf x})$ as a perturbation, the formal first order solution of the quantum Liouville equation $i\partial_t{\hat{\rho}}(t) = [{\hat{H}}(t), {\hat{\rho}}(t)]$ is given by \begin{eqnarray} \delta\hat\rho_H(t) = -i\int_{-T}^t dt'\, [\delta\hat{H}_H(t'), \hat\rho_0] \end{eqnarray} where $\delta{\hat{\rho}}(t) = {\hat{\rho}}(t) - {\hat{\rho}}_0$ and the subscript $H$ denotes Heisenberg picture operators \begin{eqnarray} {\hat{O}}_H(t) = e^{i{\hat{H}}_0 t}{\hat{O}} e^{-i{\hat{H}}_0 t} \end{eqnarray} Using $\delta{\hat{H}}$ from Eq.(\ref{eq:fullH}), the deviation of an observable $A$ from its equilibrium value is found to be \begin{eqnarray} \delta\langle \hat{A}(t,{\bf x}) \rangle & = & \int d^4x'\, G_R^{AA}(t-t',{\bf x}-{\bf x}') \theta(-t')e^{\epsilon t'}f({\bf x}') \label{eq:Aexpr} \end{eqnarray} where $t > 0$ and \begin{eqnarray} G_R^{AA}(t-t',{\bf x}-{\bf x}') = i\theta(t-t') {\rm Tr}\left( \hat\rho_0 [ \hat{A}_H(t,{\bf x}), \hat{A}_H(t',{\bf x}') ] \right) \end{eqnarray} is the retarded response function. We also took the $-T\to -\infty$ limit. Suppose that one can find an operator ${\hat{D}}_A$ for which $G_R^{AA}$ is the generalized retarded Green function, \begin{eqnarray} {\hat{D}}_A G_R^{AA}(t-t',{\bf x}-{\bf x}') = {\hat{d}}_A\delta(t-t')\delta({\bf x}-{\bf x}') \end{eqnarray} where ${\hat{d}}_A$ can contain a finite number of derivatives. For $t>0$, $t$ and $t'$ can never be the same in Eq.(\ref{eq:Aexpr}) since the source has support only at $t'\le 0$. Hence $\delta\ave{{\hat{A}}}$ satisfies the evolution equation \begin{eqnarray} {\hat{D}}_A \delta\ave{\hat{A}(t,{\bf x})} = 0 \end{eqnarray} for $t> 0$. Therefore finding the pole structure of the response function is equivalent to finding the evolution equation\cite{forster1975,Kadanoff1963}. We will use this to find hydrodynamic equations in the following sections.
For further analysis, it is useful to define the spectral density \begin{eqnarray} \rho^{AA}(\omega,{\bf k}) & = & \int d^4x\, e^{i\omega t - i\dotpr{k}{x}}\, \ave{[{\hat{A}}_H(t,{\bf x}), {\hat{A}}_H^\dagger(0)]} \end{eqnarray} Using the thermal average $\ave{(\cdots)} = {\rm Tr}{\hat{\rho}}_0(\cdots)$, it is not hard to show (for instance, see Ref.\cite{Kapusta:2006pm}) \begin{eqnarray} \rho^{AA}(\omega,{\bf k}) & = & {1\over Z_0}\sum_{m,n}\left( e^{-\beta E_n} - e^{-\beta E_m}\right) (2\pi)^4\delta(k - p_m + p_n) \left| \bra{n}{\hat{A}}\ket{m} \right|^2 \end{eqnarray} where $k = (\omega, {\bf k})$ and $\ket{m}$ is the simultaneous eigenstate of the system Hamiltonian ${\hat{H}}_0$ and the total momentum $\hat{\bf P}$ of the system with the eigenvalue $p_{m} = (E_{m}, {\bf p}_{m})$. From this expression, we can derive \begin{eqnarray} \rho^{AA}(-\omega,-{\bf k}) = -\rho^{AA}(\omega,{\bf k}) \label{eq:odd_rhoAA} \end{eqnarray} by exchanging $m$ and $n$. When the underlying equilibrium system is isotropic, then $\rho^{AA}(\omega,{\bf k})$ must be a function of $|{\bf k}|$ only. Hence the spectral density is an odd function of $\omega$, $ \rho^{AA}(-\omega,{\bf k}) = -\rho^{AA}(\omega,{\bf k})$. We will use this property often in later sections to parametrize the analytic structure of the response functions. 
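The antisymmetry of the spectral density can be checked directly from the Lehmann representation on a small random system. In the following sketch, the "Hamiltonian" and the observable are random Hermitian matrices (illustrative, ${\bf k}=0$ sector only), and the delta functions are smeared into Lorentzians of a small width:

```python
import numpy as np

# Lehmann-representation check that rho(-omega) = -rho(omega) for a random
# finite-dimensional system. Delta functions -> Lorentzians of width eta.
rng = np.random.default_rng(1)
N, beta, eta = 6, 1.0, 0.05

H = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
H = (H + H.conj().T) / 2                       # Hermitian "Hamiltonian"
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
A = (A + A.conj().T) / 2                       # Hermitian observable

E, V = np.linalg.eigh(H)
A = V.conj().T @ A @ V                         # matrix elements <n|A|m>
w = np.exp(-beta * E)
Z = w.sum()

def rho(omega):
    out = 0.0
    for mm in range(N):
        for nn in range(N):
            weight = (w[nn] - w[mm]) / Z * abs(A[nn, mm])**2
            out += weight * eta / np.pi / ((omega - (E[mm] - E[nn]))**2 + eta**2)
    return out

print(rho(0.7), rho(-0.7))   # equal in magnitude, opposite in sign
```

The antisymmetry is exact for any Hermitian $A$ because exchanging $m$ and $n$ flips the sign of the Boltzmann weight difference while $|A_{nm}| = |A_{mn}|$.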
In terms of $\rho^{AA}$, the retarded correlator is \begin{eqnarray} G_R^{AA}(\omega, {\bf k}) & = & \int {d\omega'\over 2\pi}\, {\rho^{AA}(\omega',{\bf k})\over \omega' - \omega - i\epsilon } \label{eq:GRAA} \end{eqnarray} The imaginary part of the retarded correlator directly gives the spectral density \begin{eqnarray} {\rm Im}\,G_R^{AA}(\omega,{\bf k}) = {1\over 2}\rho^{AA}(\omega,{\bf k}) \end{eqnarray} It is also useful to know that the Euclidean correlator is given by (for derivations, see, for instance, Ref.\cite{Kapusta:2006pm}) \begin{eqnarray} G_E^{AA}(\omega_n, {\bf k}) & = & \int {d\omega'\over 2\pi}\, {\rho^{AA}(\omega',{\bf k})\over \omega' - i\omega_n } \label{eq:GEAA} \end{eqnarray} where $\omega_n = 2\pi nT$ is the Matsubara frequency. One important fact we will often use in the following sections is that the $\omega\to 0$ limit \begin{eqnarray} G_R^{AA}(0,{\bf k}) &=& G_E^{AA}(0,{\bf k}) \nonumber\\ & = & \int {d\omega'\over 2\pi}{\rho^{AA}(\omega',{\bf k}) \over \omega'} \nonumber\\ & = & {1\over Z_0} \sum_{m,n}\left( e^{-\beta E_n} - e^{-\beta E_m}\right) {1\over E_m - E_n} (2\pi)^3\delta({\bf k} - {\bf p}_m + {\bf p}_n) \left| \bra{n}{\hat{A}}\ket{m} \right|^2 \nonumber\\ \label{eq:GR0} \end{eqnarray} is real and positive, and a function only of the magnitude of ${\bf k}$.\footnote{To see this, just exchange $m$ and $n$.} \subsection{Baryon Density Diffusion} From the previous section, it is clear that the analytic structure of the response function determines the evolution of small disturbances. In this and the following sections, we show that this fact, combined with the conservation laws, is powerful enough to produce dissipative hydrodynamic equations. The analysis in this section and those up to section \ref{sec:sound_bulk} closely follows an unpublished note by Laurence G.~Yaffe (private communication, see also Refs.\cite{Herzog:2009xv,Kovtun:2012rj}) in a simplified form.
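The equality of the two zero-frequency limits can be illustrated with a model odd spectral density for which the integral of $\rho(\omega')/\omega'$ is known in closed form. The Lorentzian-like form below is purely an illustrative assumption, not taken from the text:

```python
import numpy as np

# Model odd spectral density rho(w) = 2*Gamma*w/(w^2 + Gamma^2). Both G_R(0)
# and G_E(0) reduce to the integral of rho(w)/w over dw/(2*pi), which is
# exactly 1 for this model: (1/2pi) * 2*Gamma * (pi/Gamma) = 1.
Gamma = 0.5
omega, d_omega = np.linspace(-2000, 2000, 4_000_001, retstep=True)
integrand = 2 * Gamma / (omega**2 + Gamma**2)    # rho(omega)/omega, finite at 0

G0 = integrand.sum() * d_omega / (2 * np.pi)
print(G0)    # close to the analytic value 1
```

The result is real and positive, as the text requires of $G_R^{AA}(0,{\bf k})$.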
Additional discussion on the 2nd-order formulation of dissipative hydrodynamics is given in section \ref{sec:2nd_order_visc}. We start with net baryon number conservation, which is the simplest to examine. Suppose we set up a system where only the net baryon density $\ave{\rho_B(0,{\bf x})}$ is non-uniform at $t = 0$. The perturbing Hamiltonian is \begin{eqnarray} \delta\hat{H}(t) = -\int d^3x\,{\hat{\rho}}_B({\bf x})\, e^{\epsilon t}\, \mu_B({\bf x}) \end{eqnarray} which results in the following linear response in the mixed space of $t$ and the wavevector ${\bf k}$, \begin{eqnarray} \delta\langle {\hat{\rho}}_B(t,{\bf k}) \rangle & = & \mu_B({\bf k}) \int_{-\infty}^{\infty} dt'\, \theta(-t') e^{\epsilon t'} G_R^{00}(t-t',{\bf k}) \end{eqnarray} for $t > 0$. Applying the current conservation, $\partial_t \rho_B = -\nabla{\cdot}{\bf J}_B$, to the retarded correlation functions $ G_R^{\mu\nu}(t,{\bf x}) = i\theta(t)\ave{[{\hat{J}}_B^\mu (t,{\bf x}), {\hat{J}}_B^\nu(0)]} $ results in the following relationships between them in the frequency-wavevector space \begin{eqnarray} \omega G_R^{00}(\omega,{\bf k}) = k_i G^{0i}_R(\omega, {\bf k}) \label{eq:GR001} \\ \omega G_R^{0j}(\omega,{\bf k}) = k_i G^{ij}_R(\omega, {\bf k}) \label{eq:GR002} \end{eqnarray} provided that $[J^0_B(0,{\bf x}), J^\nu_B(0,{\bf x}')] = 0$. The underlying isotropic equilibrium permits a decomposition into transverse and longitudinal parts \begin{eqnarray} G^{ij}_R(\omega,{\bf k}) = {\hat{k}}^i {\hat{k}}^j G_L(\omega,{\bf k}) + \hat\delta^{ij} G_T(\omega,{\bf k}) \end{eqnarray} where $\hat{\bf k} = {\bf k}/|{\bf k}|$ is the unit vector and $\hat\delta^{ij}=\delta^{ij} - {\hat{k}}^i {\hat{k}}^j$ is the transverse projector. Combining Eqs.(\ref{eq:GR001}) and (\ref{eq:GR002}) gives \begin{eqnarray} \omega^2 G_R^{00}(\omega,{\bf k}) & = & {\bf k}^2 G_L(\omega,{\bf k}) \label{eq:rhob_cons} \end{eqnarray} What we are interested in is the behaviour of $G_L(\omega,{\bf k})$.
Consider the small $\omega$ limit of $G_L$ first. From Eq.(\ref{eq:GR0}), we know that \begin{eqnarray} g_{00}({\bf k}) = G_R^{00}(0,{\bf k}) = G_E^{00}(0,{\bf k}) \end{eqnarray} is real and positive. The small $\omega$ limit of $G_L$ can then be expressed as \begin{eqnarray} G_L(\omega,{\bf k}) \approx {\omega^2 g_{00}({\bf k})\over {\bf k}^2} \end{eqnarray} Now consider taking the ${\bf k}\to 0$ limit with a fixed $\omega\ne 0$. The retarded density-density correlation function in this limit is \begin{eqnarray} G_R^{00}(\omega,0) & = & i\int_0^\infty dt\, e^{i\omega t}\, \int d^3x\, \ave{[\hat\rho_B(t,{\bf x}), \hat\rho_B(0)]} \nonumber\\ & = & i\int_0^\infty dt\, e^{i\omega t}\, \ave{[{\hat{Q}}_B, \hat\rho_B(0)]} = 0 \end{eqnarray} where we used the fact that ${\hat{Q}}_B = \int d^3x\, \hat\rho_B(t,{\bf x})$ is the net baryon number operator. Since the net baryon number is conserved, ${\hat{Q}}_B$ is independent of time. In particular, it can be evaluated at $t=0$. Therefore, the commutator vanishes. This indicates that $G_L(\omega, {\bf k})$ is well-behaved in the zero $|{\bf k}|$ limit with $\omega \ne 0$. Consequently, the $\omega\to 0$ limit and the ${\bf k} \to 0$ limit do not commute, indicating the presence of a massless pole. We also know that the imaginary part of $G_R^{00}$ (the spectral density $\rho^{00}$) has to be an odd function of $\omega$ since isotropy in space demands that it be a function only of ${\bf k}^2$ (c.f.~Eq.(\ref{eq:odd_rhoAA})).
The most general form of $G_L$ consistent with the above conditions is \begin{eqnarray} G_L(\omega,{\bf k}) = {\omega^2 (g_{00}({\bf k}) + i\omega A(\omega,{\bf k})) \over {\bf k}^2 - i\omega/D(\omega,{\bf k}) -\omega^2 B(\omega, {\bf k}) } \label{eq:GLgen} \end{eqnarray} The functions $D(\omega,{\bf k})$, $A(\omega,{\bf k})$ and $B(\omega,{\bf k})$ are all of the form \begin{eqnarray} D(\omega, {\bf k}) = D_R(\omega, {\bf k}) - i\omega D_I(\omega, {\bf k}) \end{eqnarray} where $D_R(\omega,{\bf k})$ and $D_I(\omega,{\bf k})$ are real-valued even functions of $\omega$ and ${\bf k}$. The real parts $D_R$ and $B_R$ must have a non-zero limit as $\omega\to 0$ and ${\bf k} \to 0$. All other parts of $A,B$ and $D$ must have finite limits as $\omega\to 0$ and ${\bf k} \to 0$. In the small $\omega$ and ${\bf k}$ limit, the response function becomes \begin{eqnarray} G_R^{00}(\omega,{\bf k}) \approx {D {\bf k}^2 g_{00}(0) \over -i\omega + D\,{\bf k}^2} \end{eqnarray} where we defined the diffusion constant $D = D_R(0,0)$. The pole structure of $G_R^{00}$ dictates that in the small $\omega$ and $|{\bf k}|$ limit, $\delta\rho_B(t,{\bf x}) = \delta\langle {\hat{\rho}}_B(t,{\bf x}) \rangle$ obeys the diffusion equation \begin{eqnarray} \partial_t\delta\rho_B = D \nabla^2 \delta\rho_B \end{eqnarray} This is our first example of a dissipative hydrodynamic equation. The conservation law, current algebra, thermodynamic stability, and the general analytic structure of the correlation functions are all the ingredients one needs to get this diffusion equation for baryon density. Hence diffusion is a very general phenomenon whenever there is a conserved current. Microscopic dynamics only enters through the value of the diffusion constant.
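The physical content of the diffusion pole is easy to check numerically: a single Fourier mode of the density decays exponentially at the rate $D{\bf k}^2$ set by the pole location. A minimal forward-Euler sketch with illustrative values:

```python
import numpy as np

# The pole -i*omega + D*k^2 = 0 of G_R^{00} implies that a Fourier mode of the
# baryon density obeys d_t rho_k = -D k^2 rho_k, i.e. decays as exp(-D k^2 t).
D, k = 0.8, 2.0          # illustrative diffusion constant and wavenumber
dt, nsteps = 1e-4, 5000

rho_k = 1.0
for _ in range(nsteps):
    rho_k += dt * (-D * k**2) * rho_k

t = dt * nsteps
exact = np.exp(-D * k**2 * t)
print(rho_k, exact)      # Euler evolution vs exact pole prediction
```

Short wavelengths (large $k$) relax fastest, which is the hallmark of diffusive smoothing.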
If we now go back to the conservation equation \begin{eqnarray} \partial_t \delta\rho_B = -\partial_i {\delta J}_B^i \end{eqnarray} we can see that the diffusion equation above is equivalent to the constitutive relationship \begin{eqnarray} {\delta J}^i = V_B^i = D\partial^i \delta\rho_B \end{eqnarray} valid in the fluid cell rest frame. In the more general frame boosted by $u^\mu$, it becomes \begin{eqnarray} V_B^\mu = D \Delta^{\mu\nu} \partial_\nu \rho_B \end{eqnarray} where again $\Delta^{\mu\nu} = g^{\mu\nu} - u^\mu u^\nu$ is the local 3-metric. The diffusion constant can be calculated by taking the appropriate limits of $G_L$ \begin{eqnarray} \lim_{\omega\to 0}\lim_{{\bf k} \to 0} {1\over \omega} {\rm Im}\, G_L(\omega,{\bf k}) & = & Dg_{00}(0) \label{eq:Kubo_B} \end{eqnarray} which is our first example of a Kubo formula, which relates an {\em equilibrium} correlation function to a dissipative coefficient. As the response function is singular in the small $\omega$ and small ${\bf k}$ limit, the order of limits in the Kubo formula is important. The ${\bf k}\to 0$ limit must be taken first. Since the transport coefficients are defined as Lorentz scalars, Eq.(\ref{eq:Kubo_B}) is to be evaluated using the underlying microscopic theory, such as thermal QCD, in the rest frame of the equilibrium system. \subsection{Stress-Energy Tensor Correlation Functions} To carry out the linear response analysis of the energy-momentum currents and their hydrodynamic and dissipative behavior, one needs to know the Ward identities among the correlation functions. Defining the retarded correlation functions of ${\hat{T}}^{\mu\nu}$ turns out not to be straightforward due to the fact that the conserved quantities ${\hat{P}}^\mu = \int d^3x\, {\hat{T}}^{0\mu}$ are also the generators of the space-time evolution. Hence, in general the equal-time commutators of ${\hat{T}}^{0\mu}$ are non-zero, unlike in the net baryon current case.
To begin the analysis, consider the static partition function given by \begin{eqnarray} Z_E[g_E] = \int{\cal D}\phi\, e^{-S_E[\phi,g_E]} \end{eqnarray} where \begin{eqnarray} S_E[\phi,g_E] = \int d^3x\int_0^\beta d\tau\, \sqrt{g_E}\, {\cal L}(\phi,g_E) \end{eqnarray} is the Euclidean action and $\tau$ is the imaginary time. Here, $\phi$ denotes the collection of field variables and $g_E$ is the Euclidean metric. Using the Hilbert definition of the stress-energy tensor density, we have \begin{eqnarray} \ave{T^{\mu\nu}(x)}_E = -2{\delta\over \delta g^E_{\mu\nu}(x)}\ln Z_E[g_E] \end{eqnarray} The two point functions are given by \begin{eqnarray} {\bar{G}}^{\alpha\beta,\mu\nu}_E(x_E,y_E) &=& \ave{{\cal T}_\tau T^{\mu\nu}(x_E)T^{\alpha\beta}(y_E)}_E \nonumber\\ &=& 4{ \delta^2\over \delta g^E_{\alpha\beta}(y_E) \delta g^E_{\mu\nu}(x_E) }\ln Z_E[g_E] \end{eqnarray} where $x_E = (\tau,{\bf x})$ and ${\cal T}_\tau$ is the time ordering operator in $\tau$. For the tensor density $T^{\mu\nu}$, the covariant conservation law is \begin{eqnarray} \partial_\mu \ave{T^{\mu\nu}}_E + \Gamma^\nu_{\sigma\rho}\ave{T^{\sigma\rho}}_E =0 \label{eq:EuclCons} \end{eqnarray} where $\Gamma^\nu_{\sigma\rho} = {1\over 2}g_E^{\mu\nu} (g^E_{\sigma\mu,\rho} + g^E_{\rho\mu,\sigma} - g^E_{\sigma\rho,\mu})$ is the Christoffel symbol. By differentiating Eq.(\ref{eq:EuclCons}) once more with respect to $g^E_{\alpha\beta}(y_E)$, we obtain the Ward identity among the Euclidean correlation functions in flat space \cite{Herzog:2009xv,Kovtun:2012rj} where $g^{\mu\nu}_E = \delta^{\mu\nu}$ \begin{eqnarray} 0 = k_\alpha^E \left({\bar{G}}_E^{\alpha\beta,\mu\nu}(k_E) + \delta^{\beta\mu}\ave{T^{\alpha\nu}} + \delta^{\beta\nu}\ave{T^{\alpha\mu}} - \delta^{\alpha\beta}\ave{T^{\mu\nu}} \right) \label{eq:kGEabmn} \end{eqnarray} Here $k_E^\alpha = (\omega_n, {\bf k})$ and $\omega_n = 2\pi n T$ is the Matsubara frequency. To obtain the real time correlation functions, we perform the analytic continuation.
From Eqs.(\ref{eq:GRAA}) and (\ref{eq:GEAA}) one can see that the analytic continuation \begin{eqnarray} \omega_n &\to& -ik^0 + \epsilon \end{eqnarray} changes the Euclidean correlation function into the real-time retarded correlation function. This also means that each time index of $T^{\mu\nu}$ gets a factor of $(-i)$. Since our Minkowski metric is mostly negative, going from the Euclidean metric to the mostly negative Minkowski metric means $\delta_{\mu\nu} \to -g_{\mu\nu}$. The real-time version of Eq.(\ref{eq:kGEabmn}) is then \begin{eqnarray} 0 = k_\alpha \left({\bar{G}}_R^{\alpha\beta,\mu\nu}(k) -g^{\beta\mu}\ave{T^{\alpha\nu}} -g^{\beta\nu}\ave{T^{\alpha\mu}} +g^{\alpha\beta}\ave{T^{\mu\nu}} \right) \label{eq:kGRabmn} \end{eqnarray} where $k^\mu = (\omega, {\bf k})$. The presence of the single stress-energy tensor average terms in Eq.(\ref{eq:kGRabmn}) implies that the correlation function ${\bar{G}}_R^{\alpha\beta,\mu\nu}(x,x')$ is not the same as \begin{eqnarray} G_R^{\mu\nu,\alpha\beta}(x,x') = i\theta(t-t')\ave{[{\hat{T}}^{\mu\nu} (t,{\bf x}), {\hat{T}}^{\alpha\beta}(t',{\bf x}')]}_{\rm eq} \end{eqnarray} but differs by terms containing $\delta(x-x')$ (as well as the contact terms containing spatial derivatives of $\delta(x-x')$ \cite{Deser:1967zzf}). As a response function in linear response theory, these delta-function terms do not matter since $t$ and $t' < t$ can never be the same.
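The continuation rule can be illustrated on a single-mode spectral density, for which both correlators follow from the spectral representations in closed form. The free-boson-like mode below, with its particular values of $E$, $\omega$, and $\epsilon$, is an illustrative assumption only:

```python
import numpy as np

# For rho(w) = (pi/E)*(delta(w - E) - delta(w + E)), the spectral integrals give
#   G_E(omega_n) = 1/(omega_n^2 + E^2),  G_R(omega) = 1/(E^2 - (omega + i*eps)^2),
# and the substitution omega_n -> -i*omega + eps maps one onto the other.
E, omega, eps = 1.3, 0.4, 1e-6

def G_E(omega_n):
    return 1.0 / (omega_n**2 + E**2)

G_R = 1.0 / (E**2 - (omega + 1j * eps)**2)
G_cont = G_E(-1j * omega + eps)            # analytic continuation of G_E
print(G_R, G_cont)                         # agree up to O(eps^2)
```

At $\omega = 0$ the two expressions coincide exactly, which is the equality of zero-frequency limits used repeatedly in this section.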
Defining the real-time correlation function by the analytic continuation of the Euclidean correlation function gives the following important relationship between the two \begin{eqnarray} {\bar{G}}_E^{\mu\nu,\alpha\beta}(0,{\bf k}) = {\bar{G}}_R^{\mu\nu,\alpha\beta}(0,{\bf k}) \end{eqnarray} which has a well-defined limit ($\mu$ and $\nu$ here are not summed) \begin{eqnarray} \lim_{{\bf k}\to 0}{\bar{G}}_R^{\mu\nu,\mu\nu}(0,{\bf k}) = \lim_{{\bf k}\to 0}{\bar{G}}_E^{\mu\nu,\mu\nu}(0,{\bf k}) > 0 \end{eqnarray} \subsection{Momentum Diffusion and Shear Viscosity} In this and the following section on the bulk viscosity, we will not consider finite net baryon density, for simplicity. For analysis with finite $\mu_B$, see Ref.\cite{Kovtun:2012rj}. Suppose that we set up a system where the flow velocity at $t =0$ has a single non-zero component in the $x$-direction, $u_x(y)$, which depends only on $y$. In this situation, two layers of the fluid at $y$ and at $y+\Delta y$ have different fluid velocities in the orthogonal $x$-direction. In ideal hydrodynamics, this difference is maintained because there is no dissipation. In a normal fluid, however, particle diffusion between the two layers will eventually make them move with the same equilibrated speed. How fast the two layers equilibrate depends on the size of the scattering mean free path, which in turn determines the diffusion constant, or the shear viscosity. To set up a shear flow, let the perturbing Hamiltonian be \begin{eqnarray} \delta\hat{H}(t) = -\int d^3x\, e^{\epsilon t}\, {\hat{T}}^{x0}(t,{\bf x})\beta_x(y) \end{eqnarray} The corresponding linear response is \begin{eqnarray} \delta\langle T^{x0}(t,k_y) \rangle & = & \beta_x(k_y) \int_{-\infty}^{\infty} dt'\, \theta(-t') e^{\epsilon t'} {\bar{G}}_R^{x0,x0}(t-t',k_y) \label{eq:LinResTx0} \end{eqnarray} for $t > 0$.
In Eq.(\ref{eq:kGRabmn}), ${\bar{G}}_R^{x0,x0}$ appears in the following sequence when ${\bf k} = (0,k_y,0)$ \begin{eqnarray} \omega ({\bar{G}}_R^{x0,x0}(\omega,k_y) + \varepsilon) &=& k_y {\bar{G}}^{x0,xy}_R(\omega, k_y) \label{eq:omegaGRx0x0} \\ \omega {\bar{G}}_R^{x0,xy}(\omega,k_y) &=& k_y ({\bar{G}}^{xy,xy}_R(\omega, k_y) + P) \label{eq:omegaGRx0xy} \end{eqnarray} Combined, these become \begin{eqnarray} {\bar{G}}^{xy,xy}_R(\omega,k_y)+P & = & {\omega^2\over k_y^2} \left( {\bar{G}}_R^{x0,x0}(\omega,k_y) + \varepsilon \right) \end{eqnarray} Except for the extra $P$ and $\varepsilon$ terms, the structure of these equations is exactly the same as in the baryon current case. The following analysis is therefore a repeat of that case. In the $\omega\to 0$ limit, ${\bar{G}}_R^{x0,x0}(\omega,k_y)$ must approach a well-defined value since it is a thermodynamic quantity. Furthermore, the imaginary part of ${\bar{G}}^{xy,xy}_R(\omega,k_y)$ must be an odd function of $\omega$. As in the baryon current case, the $k_y\to 0$ limit of the correlation functions must be well-behaved. Thus, we can parametrize ${\bar{G}}_R^{xy,xy}$ as \begin{eqnarray} {\bar{G}}_R^{xy,xy}(\omega,k_y) = {\omega^2 (\varepsilon + g_T(k_y) + i\omega A_T(\omega, k_y)) \over k_y^2 - i\omega/D_T(\omega,k_y) - \omega^2 B_T(\omega,k_y) } - P \label{eq:GRxyxy} \end{eqnarray} and \begin{eqnarray} {\bar{G}}^{x0,x0}_R(\omega, k_y) = {k_y^2 (\varepsilon + g_T(k_y) + i\omega A_T(\omega, k_y)) \over k_y^2 - i\omega/D_T(\omega,k_y) - \omega^2 B_T(\omega,k_y) } - \varepsilon \nonumber\\ \end{eqnarray} where $g_T(k_y) = {\bar{G}}_R^{x0,x0}(0,k_y)$. Here the functions $A_T$, $B_T$ and $D_T$ all have the form \begin{eqnarray} D_T(\omega,k_y) = D_T^R(\omega, k_y) - i\omega D_T^I(\omega,k_y) \end{eqnarray} where $D_T^R(\omega,k_y)$ and $D_T^I(\omega,k_y)$ are real-valued even functions of $\omega$ and $k_y$. The real parts $D_T^R$ and $B_T^R$ must have a non-zero limit as $\omega\to 0$ and $k_y \to 0$.
All other parts of $A_T, B_T$ and $D_T$ must have finite limits as $\omega\to 0$ and $k_y \to 0$. In configuration space, the constant $-\varepsilon$ term becomes $-\varepsilon \delta^4(x-x')$. In Eq.(\ref{eq:LinResTx0}), this $\delta$-function term does not contribute. Hence, in the small $\omega$ and $k_y$ limit, the evolution of $T^{x0}$ is determined by $i\omega = D_T k_y^2$ or \begin{eqnarray} \left( \partial_t - D_T\partial_y^2\right) T^{x0}(t,y) = 0 \label{eq:Tx0eq} \end{eqnarray} where we defined the momentum diffusion constant $D_T = D_T^R(0,0)$. This is our second dissipative hydrodynamic equation. The diffusion equation combined with the conservation law implies the constitutive relationship \begin{eqnarray} T^{xy}(t,y) = D_T\partial^y T^{x0}(t,y) = \eta \partial^y u^x \label{eq:Txy} \end{eqnarray} valid in the local rest frame. Here $\eta = D_T (\varepsilon + P)$ is the shear viscosity. It is clear from Eq.(\ref{eq:Tx0eq}) that $D_T$ has the physical interpretation of the diffusion constant for momentum. Recognizing Eq.(\ref{eq:Txy}) to be part of the spin-2 component of a rank-2 tensor, we can generalize this result to \begin{eqnarray} \pi_{\rm NS}^{ij}(t,{\bf x}) & = & \eta \left(\partial^i u^{j} + \partial^j u^{i} - {2g^{ij}\over 3}\partial_l u^{l}\right) \end{eqnarray} again in the rest frame of the fluid cell.
In the moving frame, this becomes \begin{eqnarray} \label{eq:shear} \pi_{\rm NS}^{\mu\nu}(t,{\bf x}) & = & 2\eta \Delta^{\mu\nu}_{\alpha\beta} \partial^\alpha u^\beta \equiv 2\eta\sigma^{\mu\nu} \end{eqnarray} where $\sigma^{\mu\nu}$ is the velocity shear tensor and $\Delta^{\mu\nu}_{\alpha\beta}$ is the spin-2 projector defined by \begin{eqnarray} \Delta^{\mu\nu}_{\alpha\beta} = {1\over 2} \left( \Delta_{\alpha}^{\mu}\Delta_{\beta}^{\nu} + \Delta_{\alpha}^{\nu}\Delta_{\beta}^{\mu} - {2\over 3}\Delta^{\mu\nu}\Delta_{\alpha\beta} \right) \label{eq:spin2proj} \end{eqnarray} Here the label NS indicates that this is the Navier-Stokes form of the shear tensor. The Kubo formula for the shear viscosity is \begin{eqnarray} \lim_{\omega\to 0}\lim_{k_y\to 0} {1\over \omega} {\rm Im}\, {\bar{G}}_R^{xy,xy}(\omega, k_y) = D_T (\varepsilon + P) = \eta \label{eq:KuboShear} \end{eqnarray} where we used the fact that $g_T(0) =P$, which can be determined from Eq.(\ref{eq:kGRabmn}). The Kubo formula for the shear viscosity $\eta$ can also be expressed in terms of the full shear-tensor correlation function\cite{forster1975,Kadanoff1963} \begin{eqnarray} \eta & = & \lim_{\omega\to 0}\lim_{{\bf k}\to 0} {1\over 10\omega} {\rm Im}{\bar{G}}_R^{\pi_{ij},\pi_{ij}}(\omega,{\bf k}) \label{eq:KuboShearFull} \end{eqnarray} where the shear-tensor is given by \begin{eqnarray} \pi_{ij} = T_{ij} - (\delta_{ij}/3) T^k_k \end{eqnarray} \subsection{Sound Propagation and Bulk Viscosity} \label{sec:sound_bulk} So far, only the diffusion type of hydrodynamic flow has been discussed, and it is not the main bulk excitation. To get the main excitation, which must also include the ideal hydrodynamics part, one needs to look at disturbances in the energy density. This bulk excitation, of course, is the sound wave.
Suppose we perturb the energy density with \begin{eqnarray} \delta\hat{H}(t) = -\int d^3x\, e^{\epsilon t}\, {\hat{T}}^{00}(t,{\bf x})\beta_0({\bf x}) \end{eqnarray} The linear response is then \begin{eqnarray} \delta\langle T^{00}(t,{\bf k}) \rangle & = & \beta_0({\bf k}) \int_{-\infty}^{\infty} dt'\, \theta(-t') e^{\epsilon t'} {\bar{G}}_R^{00,00}(t-t',{\bf k}) \label{eq:LinResT00} \end{eqnarray} Applying the conservation law to each index of ${\bar{G}}^{\mu\nu,\alpha\beta}_R$ in Eq.(\ref{eq:kGRabmn}), we get \begin{eqnarray} \omega^4 {\bar{G}}^{00,00}_R(\omega,{\bf k}) &=& \omega^4 \varepsilon - \omega^2 {\bf k}^2 (\varepsilon + P) + {\bf k}^4 {\bar{G}}_L(\omega,{\bf k}) \label{eq:EEcorr} \end{eqnarray} where \begin{eqnarray} {\bf k}^4 {\bar{G}}_L(\omega,{\bf k}) = k_i k_j k_l k_m \left({\bar{G}}^{ij,lm}_R(\omega,{\bf k}) + P(\delta^{il}\delta^{jm} + \delta^{im}\delta^{jl} - \delta^{ij}\delta^{lm}) \right) \end{eqnarray} In the small $\omega$ limit, Eq.(\ref{eq:EEcorr}) gives \begin{eqnarray} {\bar{G}}_L(\omega,{\bf k}) \approx {\omega^2\over {\bf k}^2}(\varepsilon + P) + {\omega^4\over {\bf k}^4}\left( {\bar{G}}_R^{00,00}(0,{\bf k}) - \varepsilon\right) \label{eq:GLomegalim} \end{eqnarray} We also know that the imaginary part of ${\bar{G}}_L$ must be an odd function of $\omega$ and the correlation functions are well-behaved in the ${\bf k}\to 0$ limit. The most general form consistent with these conditions is \begin{eqnarray} {\bar{G}}_L(\omega,{\bf k}) & = & {\omega^2 \left( \varepsilon + P + i\omega^3 Q(\omega, {\bf k}) \right) \over {\bf k}^2 - \omega^2/Z(\omega,{\bf k}) + i\omega^3 R(\omega,{\bf k}) } \label{eq:GLbulk} \end{eqnarray} Here $Z(\omega,{\bf k})$, $Q(\omega, {\bf k})$ and $R(\omega,{\bf k})$ all have the form \begin{eqnarray} Z(\omega,{\bf k}) = Z_R(\omega, {\bf k}) - i\omega Z_I(\omega,{\bf k}) \end{eqnarray} where $Z_R(\omega,{\bf k})$ and $Z_I(\omega,{\bf k})$ are real-valued even functions of $\omega$ and ${\bf k}$. 
The real parts $Z_R$ and $R_R$ must have non-zero limits as $\omega\to 0$ and ${\bf k} \to 0$. All other parts of $Z, Q$ and $R$ must have finite limits as $\omega\to 0$ and ${\bf k} \to 0$. Matching the small $\omega$ limit (\ref{eq:GLomegalim}) demands that \begin{eqnarray} Z_R(0,{\bf k}) = { \varepsilon + P\over {\bar{G}}^{00,00}_R(0,{\bf k}) - \varepsilon } \end{eqnarray} Up to the quadratic terms in $\omega$ and ${\bf k}$, the poles of ${\bar{G}}_L(\omega,{\bf k})$ for small $\omega$ and $|{\bf k}|$ are determined by \begin{eqnarray} \omega^2 - Z_R(0,0){\bf k}^2 + i\omega Z_I(0,0){\bf k}^2 = 0 \label{eq:sound_dispersion} \end{eqnarray} This has the structure of the dispersion relationship of a damped sound wave. Hence, in the small $\omega$ and the small $|{\bf k}|$ limit, $ Z_R(0,0) = v_s^2 $ is the speed of sound squared and $Z_I(0,0)$ is the sound damping coefficient. To relate $Z_I(0,0)$ to the shear and the bulk viscosities, let us consider the constitutive relationships once again. From the shear part, we already have the spin-2 part of the stress tensor in the fluid cell rest frame \begin{eqnarray} \pi_{\rm NS}^{ij}(t,{\bf x}) & = & D_T \left(\partial^i T^{j0} + \partial^j T^{i0} - {2g^{ij}\over 3}\partial_l T^{l0}\right) \label{eq:piij_consti} \end{eqnarray} To this we add a spin-0 part $-\gamma g^{ij} \partial_t\varepsilon = \gamma g^{ij}\partial_l T^{l0}$ to get \begin{eqnarray} \delta T^{ij}(t,{\bf x}) & = & D_T \left(\partial^i T^{j0} + \partial^j T^{i0} - {2g^{ij}\over 3}\partial_l T^{l0}\right) + \gamma g^{ij} \partial_l T^{l0} \label{eq:FirstConstRel} \end{eqnarray} The energy conservation law in the local rest frame\footnote{ In the local rest frame, ${\bf u}(t,{\bf x}) = 0$, but $\partial_i u_j \ne 0$. 
} now becomes, using Eq.(\ref{eq:Tmunu2}) with the dissipative part given by Eq.(\ref{eq:FirstConstRel}), \begin{eqnarray} 0 & = & \partial_\mu \partial_\nu T^{\mu\nu} \nonumber\\ & = & \partial_t^2 \varepsilon - \nabla^2 P -D_T {4\over 3}\nabla^2 \partial_t\varepsilon -\gamma \nabla^2 \partial_t\varepsilon \nonumber\\ & \to & \left(-\omega^2 + v_s^2 {\bf k}^2 - i(4D_T/3 + \gamma){\bf k}^2\omega\right)\delta\varepsilon \label{eq:sound_eq} \end{eqnarray} where we used $g^{ij}\partial_i \partial_j = -\nabla^2$ and $ \partial_t\varepsilon = -\partial_l T^{l0} $. Comparing with Eq.(\ref{eq:sound_dispersion}), one can identify $v_s^2 = \partial P/\partial\varepsilon = Z_R(0,0)$, and \begin{eqnarray} Z_I(0,0) = \Gamma = 4D_T/3 + \gamma \label{eq:BulkGamma} \end{eqnarray} as the sound attenuation constant. Since we have already identified $D_T = \eta/(\varepsilon + P)$, this allows us to identify $\gamma = \zeta/(\varepsilon + P)$ where $\zeta$ is the bulk viscosity. In the fluid cell rest frame, the added term corresponds to the constitutive relationship $\Pi_{\rm NS} = -\zeta \partial_i u^i$. In the general frame, this becomes \begin{eqnarray} \Pi_{\rm NS} = -\zeta \partial_\mu u^\mu \label{eq:Pi_consti} \end{eqnarray} Again, the label NS indicates that this is the Navier-Stokes form of the bulk pressure. The minus sign in Eq.(\ref{eq:Pi_consti}) makes sense since the effective pressure $P + \Pi$ should be less than the equilibrium pressure when the fluid is expanding (positive $\partial_i u^i$). We have so far identified $Z_R(0,0)$ and $Z_I(0,0)$ as the speed of sound squared and the sound attenuation coefficient. The role of $R(\omega,{\bf k})$ remains to be identified.
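The damped sound dispersion relation can be solved numerically for a small $|{\bf k}|$ to exhibit the propagating and damping parts explicitly. A minimal sketch with illustrative values of $v_s^2$ and $\Gamma$:

```python
import numpy as np

# Roots of omega^2 - vs^2 k^2 + i*Gamma*k^2*omega = 0 (the damped sound
# dispersion relation). vs2, Gamma, and k are illustrative.
vs2, Gamma, k = 1.0 / 3.0, 0.2, 0.05

roots = np.roots([1.0, 1j * Gamma * k**2, -vs2 * k**2])
for om in sorted(roots, key=lambda z: z.real):
    print(om)   # Re(omega) ~ +/- vs*k (propagation), Im(omega) = -Gamma*k^2/2 (damping)
```

The real parts give two counter-propagating sound waves while the common negative imaginary part, $-\Gamma{\bf k}^2/2$, damps them, so attenuation is quadratic in the wavenumber.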
In the Kubo formula for the attenuation coefficient, $R_R(0,0)$ appears as \begin{eqnarray} \lim_{\omega\to 0}\lim_{{\bf k}\to 0}{\rm Im}\, {{\bar{G}}_L(\omega,{\bf k})\over \omega} = (\varepsilon + P)\left(Z_I(0,0) - Z_R(0,0)^2\, R_R(0,0)\right) \label{eq:KuboBulk} \end{eqnarray} which does not allow one to identify the right hand side with $\Gamma$ if $R_R(0,0)\ne 0$. Actually, the left hand side of Eq.(\ref{eq:KuboBulk}) does yield $\Gamma$. It is just that we have not been consistent in power counting, since the wave equation Eq.(\ref{eq:sound_eq}) does not contain an $O(\omega^3)$ term while the pole of ${\bar{G}}_L$ does. One may consider this discrepancy as the first sign of trouble with the first-order constitutive relationship Eq.(\ref{eq:FirstConstRel}). \subsection{Second Order Viscous Hydrodynamics} \label{sec:2nd_order_visc} Let us consider more closely the consequences of the first-order constitutive relationship. The diffusion equation with a source $S$ \begin{eqnarray} \left(\partial_t - D\nabla^2\right){\delta n}(t,{\bf x}) = S(t,{\bf x}) \end{eqnarray} has the solution \begin{eqnarray} {\delta n}(x) = \int d^4x'\, G_R(x-x')\, S(x') \end{eqnarray} Here the retarded Green function is \begin{eqnarray} G_R(x-x') = \theta(t-t') {e^{-{|{\bf x}-{\bf x}'|^2\over 4 D (t-t')}}\over 8 (\pi D(t-t'))^{3/2}} \label{eq:G_diff} \end{eqnarray} If one has a point source, $S(x') = N_0\delta(x')$, then ${\delta n}(t,{\bf x}) = N_0G_R(t,{\bf x})$. At $t=0$, the space is empty except at the origin. But at any time after that, there is non-zero ${\delta n}$ everywhere. This is clearly acausal.
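The acausal character of Eq.(\ref{eq:G_diff}) is easy to verify numerically; a minimal sketch (units arbitrary):

```python
import math

def diffusion_kernel(r, t, D):
    """Retarded Green function of the 3D diffusion equation:
    exp(-r^2/(4 D t)) / (8 (pi D t)^(3/2)) for t > 0, zero otherwise."""
    if t <= 0:
        return 0.0
    return math.exp(-r**2/(4*D*t)) / (8*(math.pi*D*t)**1.5)

D, t = 1.0, 0.01   # an instant after the point source acts
for r in (0.1, 1.0, 5.0):
    # non-zero arbitrarily far from the origin: the signal has
    # propagated infinitely fast, i.e. the equation is acausal
    assert diffusion_kernel(r, t, D) > 0.0
```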
On the other hand, the solution of the sound equation \begin{eqnarray} -(\partial_t^2 - v_s^2 \nabla^2)\delta\varepsilon(x) = S(x) \end{eqnarray} for a point source $S(x) = \Lambda_0 \delta(x)$ is \begin{eqnarray} \delta\varepsilon(t,{\bf x}) = \Lambda_0\theta(t){1\over 4\pi} {\delta(|{\bf x}|-v_st)\over |{\bf x}|} \end{eqnarray} This is causal since the disturbance only moves with the speed of sound. The origin of acausality in diffusion is the mismatch between the number of time derivatives and the number of spatial derivatives in the diffusion equation. The diffusive dispersion relationship $ \omega = -iD{\bf k}^2 $ gives the group velocity \begin{eqnarray} {\partial\omega\over \partial {\bf k}} = -2iD{\bf k} \end{eqnarray} whose magnitude becomes arbitrarily large in the large ${\bf k}$ limit. This problem can be remedied by replacing the constitutive equation $J^i = D\partial^i n$ with a relaxation-type equation \begin{eqnarray} \partial_t J^i = -{1\over \tau_R} (J^i - D\partial^i n) \label{eq:JB_relax} \end{eqnarray} The conservation law then becomes \begin{eqnarray} \partial_t^2 n = -\partial_i \partial_t J^i = -{1\over \tau_R}\partial_t n + {D\over \tau_R}\nabla^2 n \end{eqnarray} For large $k$ where we previously had a problem, we now have \begin{eqnarray} \omega^2 \approx v_R^2 k^2 \end{eqnarray} with the propagation speed $v_R = \sqrt{D/\tau_R}$.
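The resulting dispersion relation $\tau_R\omega^2 + i\omega - Dk^2 = 0$ can be checked numerically: at small $k$ one recovers the diffusive root $\omega \approx -iDk^2$, while at large $k$ the propagation speed saturates at $v_R$ (illustrative parameter values):

```python
import cmath

def modes(k, D, tau_R):
    """Roots in omega of tau_R*w^2 + i*w - D*k^2 = 0 (relaxed diffusion)."""
    disc = cmath.sqrt(-1 + 4*tau_R*D*k**2)
    return [(-1j + s*disc)/(2*tau_R) for s in (+1, -1)]

D, tau_R = 0.5, 2.0               # arbitrary units
v_R = (D/tau_R)**0.5

k = 1e-3                          # small k: purely damped, w ~ -i D k^2
w_slow = min(modes(k, D, tau_R), key=abs)
assert abs(w_slow + 1j*D*k**2) < 1e-9

k = 1e3                           # large k: propagating at finite speed v_R
w_fast = max(modes(k, D, tau_R), key=lambda w: w.real)
assert abs(w_fast.real/k - v_R) < 1e-3
```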
This type of relaxation equation was actually anticipated already: Up to the second order in $\omega$, the poles of the density-density correlator in Eq.(\ref{eq:GLgen}) are determined by \begin{eqnarray} D{\bf k}^2 - i\omega - \omega^2 D B = 0 \end{eqnarray} Comparing, we see that \begin{eqnarray} B =\tau_R/D \end{eqnarray} For the viscous stress-energy tensor components, the following relaxation equations apply in the local rest frame \begin{eqnarray} \left(\partial_t + {1\over\tau_\pi}\right) \pi^{ij} &=& {1\over \tau_\pi} \pi_{\rm NS}^{ij} \label{eq:shear_relax} \\ \left(\partial_t + {1\over\tau_\Pi}\right) \Pi &=& {1\over\tau_\Pi}\Pi_{\rm NS} \label{eq:bulk_relax} \end{eqnarray} where \begin{eqnarray} \pi^{lm} = T^{lm} - {g^{lm}\over 3}T^k_k \end{eqnarray} is the traceless part of the stress tensor and \begin{eqnarray} \Pi = -\left( {1\over 3}T^k_k + P\right) \end{eqnarray} is the bulk pressure. They are not to be identified with the Navier-Stokes forms (\ref{eq:piij_consti}) and (\ref{eq:Pi_consti}). Rather, they will relax to the Navier-Stokes forms. To see if the acausality in the momentum diffusion is cured, we start with the momentum conservation \begin{eqnarray} \partial_t T^{k0} = -\partial_l T^{kl} \end{eqnarray} Applying the curl gives \begin{eqnarray} \partial_t \pi^i_T & = & -\epsilon_{ijk} \partial_j \partial_l \pi^{kl} \end{eqnarray} where we defined $\pi^i_T = \epsilon_{ijk}\partial_j T^{k0}$. Applying $(\partial_t + 1/\tau_\pi)$ and using Eqs.(\ref{eq:shear_relax}) and (\ref{eq:piij_consti}) yields \begin{eqnarray} 0 = \left(\tau_\pi \partial_t^2 + \partial_t - D_T\nabla^2\right)\pi^i_T \end{eqnarray} As long as $D_T/\tau_\pi < 1$, this is now causal.
For the sound modes, we start with the conservation law in the local rest frame \begin{eqnarray} \partial_t^2\varepsilon = \nabla^2 P + \partial_l\partial_m \pi^{lm} - \nabla^2 \Pi \label{eq:varepsilon_eq} \end{eqnarray} Applying $(\tau_\pi\partial_t + 1)(\tau_\Pi\partial_t + 1)$ to Eq.(\ref{eq:varepsilon_eq}) and using Eqs.(\ref{eq:shear_relax}) and (\ref{eq:bulk_relax}), one obtains the following dispersion relation for the bulk mode propagation (in this case $\delta\varepsilon$) \begin{eqnarray} \lefteqn{ 0 = \tau_\pi\tau_\Pi \omega^4 - \tau_\pi\tau_\Pi v_s^2 \omega^2{\bf k}^2 - \tau_\Pi{4D_T\over 3}{\bf k}^2\omega^2 - \tau_\pi\gamma {\bf k}^2\omega^2 }&& \nonumber\\ && - \omega^2 + v_s^2{\bf k}^2 - i\left({4 D_T\over 3} + \gamma + v_s^2 \left(\tau_\pi + \tau_\Pi\right) \right) {\bf k}^2\omega + i\omega^3\left(\tau_\pi + \tau_\Pi\right) \label{eq:bulk_dispersion} \nonumber\\ \end{eqnarray} Comparing with the small $\omega$ and small $|{\bf k}|$ expansion of the denominator in Eq.(\ref{eq:GLbulk}), we can identify $Z_R(0,0) = v_s^2$, \begin{eqnarray} Z_I(0,0) = {4 D_T\over 3} + \gamma + v_s^2 \left(\tau_\pi + \tau_\Pi\right) \end{eqnarray} and \begin{eqnarray} R_R(0,0) = (\tau_\pi + \tau_\Pi)/v_s^2 \end{eqnarray} The Kubo formula for the damping constant now makes more sense \begin{eqnarray} \lim_{\omega\to 0}\lim_{{\bf k}\to 0}{\rm Im}\, {{\bar{G}}_L(\omega,{\bf k})\over \omega} &=& (\varepsilon + P)(Z_I(0,0) - Z_R(0,0)^2\, R_R(0,0)) \nonumber\\ & = & (\varepsilon + P)\left({4D_T\over 3} + \gamma \right) \nonumber\\ & = & {4\eta\over 3} + \zeta \end{eqnarray} and for the bulk viscosity only, \begin{eqnarray} \zeta = \lim_{\omega\to 0}\lim_{{\bf k}\to 0}\, {1\over\omega} \left( {\rm Im}\, {{\bar{G}}_L(\omega,{\bf k})} -{4\over 3} {\rm Im}\, {{\bar{G}}_R^{xy,xy}(\omega, k_y)} \right) \end{eqnarray} The Kubo formula for the bulk viscosity $\zeta$ is also available in terms of the pressure-pressure correlation function \cite{forster1975,Kadanoff1963} 
\begin{eqnarray} \zeta = \lim_{\omega\to 0}\lim_{{\bf k}\to 0} {1\over \omega} {\rm Im}\, G_R^{PP}(\omega, {\bf k}) \end{eqnarray} Using the fact that the correlation functions of $T^{00}$ vanish in the ${\bf k}\to 0$ limit, one can use in place of $P$ the trace $T^{\mu}_\mu/3$ or the combination $P - v_s^2\varepsilon$ to make it more explicit that the bulk viscosity is non-zero only if the conformal symmetry is broken. The Kubo formulas for the relaxation times $\tau_\pi$ and $\tau_\Pi$ are not simple to determine in this analysis. A simple Kubo formula for $\tau_\pi$ has been worked out in Refs.\cite{Baier:2007ix,Moore:2010bu} as \begin{eqnarray} \eta\tau_\pi = -\lim_{\omega\to 0}\lim_{{\bf k}\to 0} {1\over 2} {\rm Re}\,\partial_\omega^2 G^{xy,xy}_R(\omega, {\bf k}) \end{eqnarray} although a simple Kubo formula for $\tau_\Pi$ is still to be found. In the dispersion relation Eq.(\ref{eq:bulk_dispersion}), the 4-th order terms of $O(\omega^4)$ and $O(\omega^2{\bf k}^2)$ are present but $O({\bf k}^4)$ terms are not. This may seem unsatisfactory since there is no reason why this term should be small compared to the other two 4-th order terms when $\omega \sim v_s |{\bf k}|$. However, one should recall that hydrodynamics is valid only in the long wavelength and small frequency limits. From this point of view, the 4-th order terms are not so important. They can become significant, however, when the equations are solved numerically, since numerical solutions can contain short wavelength excitations. Fortunately, this Israel-Stewart form of second order hydrodynamics\cite{Israel:1979wp} (comprising Eqs.(\ref{eq:JB_relax}), (\ref{eq:shear_relax}) and (\ref{eq:bulk_relax})) has been shown to be stable in Refs.\cite{Denicol:2008ha,Pu:2009fj}. If one wants to include $O({\bf k}^4)$ terms, then one can modify the relaxation equations (\ref{eq:shear_relax}) and (\ref{eq:bulk_relax}) to include second derivatives of $T^{ij}$.
However, doing so not only generates $O({\bf k}^4)$ terms, but it also generates (incomplete) terms involving 5 and 6 factors of $\omega$ and ${\bf k}$. Since higher order terms begin to matter at large $\omega$ and $|{\bf k}|$, including higher and higher orders of frequency and momentum (or of derivatives in configuration space) does not guarantee that the numerical solution in this limit becomes more faithful to the real spectrum. One just needs to be careful not to interpret high frequency and momentum modes as physical. As for the calculation of the viscosities, full leading order perturbative QCD results for both the shear viscosity and the bulk viscosity have been obtained in Refs.\cite{Arnold:2001ba,Arnold:2003zc,Arnold:2006fz} using the Kubo formulas illustrated above. QCD is an asymptotically free theory \cite{Gross:1973ju,Politzer:1974fr,Bethke:2012zza}. In principle, it admits a perturbative expansion only when the energy scale exceeds at least a few GeV. Since the typical QGP energy scale is less than $1\,\hbox{GeV}$, the strong coupling is not small. Phenomenologically, we must have $\alpha_S \approx 0.3$ \cite{Schenke:2009gb,Tribedy:2011aa}. This value may look small, but the gauge coupling itself, $g = \sqrt{4\pi \alpha_S} \approx 2$, is not. In perturbative many-body QCD, $g$ (or $g/2\pi$) is the expansion parameter, not $\alpha_S$~\cite{Gross:1980br,Andersen:2010ct,Andersen:2011sf,Haque:2012my}. Therefore, although the analyses performed in Refs.\cite{Arnold:2001ba,Arnold:2003zc,Arnold:2006fz} are nothing short of a tour de force, having $g \approx 2$ makes the numerical values obtained in perturbation theory not fully reliable. At this point, reliable first principle calculations at large $g$ can only be performed on a numerical lattice in Euclidean space. Lattice QCD can straightforwardly compute static properties such as the equation of state.
However, calculations of dynamic properties such as the viscosities become more complicated as they involve estimating real continuous functions from a finite set of discrete Euclidean data. Nonetheless, a great deal has been accomplished in computing the properties of QGP through lattice QCD calculations\cite{AliKhan:2001ek,Cheng:2009zi, Borsanyi:2013bia,Meyer:2007dy,Meyer:2007ic} as well as through effective models such as the hadron resonance gas model (HRG) \cite{NoronhaHostler:2008ju,FernandezFraile:2009mi} and the AdS/CFT correspondence \cite{Kovtun:2004de,Buchel:2007mf,Gubser:2008sz,Springer:2010mw}. The purpose of this section has been to show that hydrodynamics is very general. No matter what the system is, there usually is a regime where hydrodynamics is in some way applicable as long as there exist ``macroscopically small but microscopically large'' length and time scales. For a more detailed analysis of the length and time scales, and to demonstrate the more general structure of the hydrodynamic equations, we now turn to kinetic theory. \section{Hydrodynamics from kinetic theory} \label{sec:kinetic_hydro} \subsection{Length scales and validity of hydrodynamic approximations} \label{sec:kinetic} Kinetic theory describes a medium microscopically, by following the evolution of the phase-space distribution function $f(x,p)$, a Lorentz scalar that describes the probability of finding a particle with four-momentum $p^\mu$ at space-time position $x^\nu$. Classical kinetic theory assumes that the particle momenta are on-shell, $p^2=m^2$, which requires the system to be sufficiently dilute and the mean free paths sufficiently long to ignore collisional broadening effects on the spectral function $\rho(p)=2\pi\delta(p^2{-}m^2)$ that defines the particles'\ propagator.
The defining equation of classical kinetic theory is the Boltzmann equation, \begin{eqnarray} \label{k:eq1} p^\mu\partial_\mu f(x,p) = C(x,p), \end{eqnarray} where $C(x,p)$ is the collision term in which the strength of the interaction enters through the scattering cross sections. Especially for massless degrees of freedom, its detailed form can be quite complicated \cite{Arnold:2002zm}. A popular simplification of the collision term is the relaxation time approximation (RTA),\footnote{The Boltzmann equation with this RTA-approximated collision term is known as the Anderson-Witting equation.} \begin{equation} \label{k:eq1a} C(x,p) = \frac{p^\mu u_\mu(x)}{\tau_\mathrm{rel}(x,p)}\,\Bigl[f_\mathrm{eq}(x,p){-}f(x,p)\Bigr], \end{equation} where the relaxation time $\tau_\mathrm{rel}$ in general depends on position through the local density and can also depend on the local rest frame energy of the particles (indicated by the $p$-dependence). Classical kinetic theory is valid if this relaxation time $\tau_\mathrm{rel}$, and the associated mean free path $\lambda_\mathrm{mfp}=\langle (p/E) \tau_\mathrm{rel}\rangle$, are sufficiently large. In other words, interactions among the constituents must be weak. Hydrodynamics is valid if the system is close enough to thermal equilibrium that its local momentum distribution (and therefore its macroscopic fields, such as particle and energy density and pressure, which can all be expressed as moments of the local momentum distribution) can be characterized by a small number of thermodynamic and transport parameters, such as temperature, chemical potential, shear and bulk viscosity, etc. This requires efficient interactions among the constituents of the medium because otherwise any kind of macroscopic dynamics involving local expansion or compression or shear of the fluid will drive its local momentum distribution away from its near-equilibrium form. Hydrodynamics works best for systems made of strongly interacting constituents.
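For a spatially uniform system at rest, the RTA collision term (\ref{k:eq1a}) with constant $\tau_\mathrm{rel}$ reduces the Boltzmann equation to $\partial_t f = (f_\mathrm{eq}-f)/\tau_\mathrm{rel}$, whose solution relaxes exponentially to equilibrium; a minimal sketch with illustrative numbers:

```python
import math

def rta_relax(f0, feq, tau_rel, t):
    """Exact solution of df/dt = (feq - f)/tau_rel for constant feq and
    tau_rel: exponential relaxation toward the equilibrium value feq."""
    return feq + (f0 - feq)*math.exp(-t/tau_rel)

f0, feq, tau = 2.0, 1.0, 0.5     # illustrative values, arbitrary units
assert rta_relax(f0, feq, tau, 0.0) == f0
# after ~10 relaxation times the memory of the initial state is gone
assert abs(rta_relax(f0, feq, tau, 10*tau) - feq) < 1e-4
```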
Does this mean that the validity of kinetic theory and hydrodynamics are mutually exclusive? Not necessarily. To gain clarity consider a relativistic system of (almost) massless degrees of freedom. It can be characterized by three length scales, two microscopic and one macroscopic: \begin{itemize} \item the thermal wavelength $\lambda_\mathrm{th}\sim 1/T$ \item the mean free path $\lambda_\mathrm{mfp}\sim 1/(\langle \sigma v\rangle n)$ where $\langle \sigma v\rangle$ is the momentum-averaged transport cross section times the relative speed ($\approx 1$ in units of $c$) of the colliding objects, and $n$ is the density of scatterers \item the length scale $L_\mathrm{hydro}$ over which macroscopic fluid dynamical variables vary; it can be defined in many ways that give quantitatively different but similar order-of-magnitude results: $L^{-1}_\mathrm{hydro}\sim \partial_\mu u^\mu \sim |\partial_\mu \varepsilon|/\varepsilon$ etc. \end{itemize} The ratio between the two microscopic scales characterizes the magnitude of the transport coefficients $\eta$ (shear viscosity), $\zeta$ (bulk viscosity), and $\kappa$ (heat conductivity): \begin{eqnarray} \label{k:eq1b} \frac{\lambda_\mathrm{mfp}}{\lambda_\mathrm{th}} \sim \frac{1}{\langle \sigma \rangle n}\,\frac{1}{\lambda_\mathrm{th}} \sim \frac{1}{\langle\sigma\rangle \lambda_\mathrm{th}}\frac{1}{s} \sim \frac{\eta}{s},\ \frac{\zeta}{s},\ \frac{T\kappa}{s}, \end{eqnarray} where we used $\eta,\ \zeta,\ T\kappa \sim 1/(\langle\sigma\rangle\lambda_\mathrm{th}) \sim \lambda_\mathrm{mfp}T^4$ and the entropy density $s\simeq 4n\sim T^3$ for a near-thermalized system of massless degrees of freedom with particle number density $n$.
In terms of the two microscopic length scales we can define three regimes of microscopic dynamics: \begin{enumerate} \item {\bf Dilute gas regime}: \begin{eqnarray} \label{k:eq1c} \frac{\lambda_\mathrm{mfp}}{\lambda_\mathrm{th}} \sim \frac{\eta}{s}\gg 1 \quad \Longleftrightarrow \quad \langle\sigma\rangle \ll \lambda_\mathrm{th}^2\sim\frac{1}{T^2} \end{eqnarray} This is the {\em weak-coupling regime} where the microscopic system dynamics can be described in terms of on-shell quasi-particles and many-body correlations are suppressed. In this regime the Boltzmann equation applies. \item {\bf Dense gas regime:} \begin{eqnarray} \label{k:eq1d} \frac{\lambda_\mathrm{mfp}}{\lambda_\mathrm{th}} \sim \frac{\eta}{s}\sim 1 \quad \Longleftrightarrow \quad \langle\sigma\rangle \sim \lambda_\mathrm{th}^2 \end{eqnarray} In this case interactions happen on the scale $\lambda_\mathrm{th}$. We call this the {\em moderate coupling regime} where the microscopic system dynamics must be described by off-shell quasiparticles (whose spectral functions have a finite collisional width) and many-body correlation effects are non-negligible. Here the Boltzmann equation must be replaced by a quantum kinetic approach based on Wigner distributions, and the BBGKY hierarchy of coupled equations for the $N$-body distribution functions can no longer be efficiently truncated. \item {\bf Liquid regime:} \begin{eqnarray} \label{k:eq1e} \frac{\lambda_\mathrm{mfp}}{\lambda_\mathrm{th}} \sim \frac{\eta}{s}\ll 1 \quad \Longleftrightarrow \quad \langle\sigma\rangle \gg \lambda_\mathrm{th}^2 \end{eqnarray} This is the {\em strong-coupling regime} where the system has no well-defined quasiparticles and no valid kinetic theory description. \end{enumerate} \noindent To judge the validity of a macroscopic hydrodynamic approach we compare the microscopic to the macroscopic length scales. 
To simplify the discussion, let us agree on using the inverse of the scalar expansion rate $\theta=\partial_\mu u^\mu$ to represent the macroscopic length scale $L_\mathrm{hydro}$.\footnote{The shorter $L_\mathrm{hydro}$, the faster the system is driven away from local equilibrium. The scalar expansion rate directly drives the bulk viscous pressure $\Pi$. It is parametrically of the same order as the shear tensor $\sigma^{\mu\nu}=\Delta^{\mu\nu}_{\alpha\beta}\partial^\alpha u^\beta\equiv \nabla^{\langle\mu} u^{\nu\rangle}$ defined in Eq.~(\ref{eq:shear}) that drives the shear viscous pressure $\pi^{\mu\nu}$ and as the diffusion force $I^\mu{\,=\,}\nabla^\mu(\mu/T)$ associated with space-time gradients of conserved charge densities that drives the heat flow $V^\mu$.} The figure of merit controlling the validity of a fluid dynamic picture is the {\em Knudsen number}: \begin{eqnarray} \label{k:eq1f} \mathrm{Kn} = \lambda_\mathrm{mfp}\cdot\theta \sim \frac{\eta}{s} \lambda_\mathrm{th}\cdot\theta \sim \frac{\eta}{sT}\cdot\theta \sim \theta\tau_\mathrm{rel}. \end{eqnarray} The Knudsen number is the small parameter that controls the convergence of the expansion in gradients of thermodynamic quantities that underlies the derivation of hydrodynamics as an effective theory for the long-distance dynamics of a general quantum field theory \cite{Dubovsky:2011sj}.
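As an order-of-magnitude illustration of Eq.(\ref{k:eq1f}) (the numbers below are assumed, typical early-time heavy-ion values, not results quoted in the text):

```python
import math

HBARC = 0.19733   # GeV fm, converts a temperature into an inverse length

def knudsen(eta_over_s, T_GeV, theta_per_fm):
    """Kn ~ (eta/s) * theta / T, with T expressed in fm^-1."""
    return eta_over_s * theta_per_fm / (T_GeV/HBARC)

# Assumed values: T = 0.3 GeV, theta = 1/tau with tau = 1 fm/c,
# and eta/s at the lower bound 1/(4 pi)
kn = knudsen(1/(4*math.pi), 0.3, 1.0)
assert 0.04 < kn < 0.06   # Kn << 1: viscous hydrodynamics applicable
```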
Again, we can use it to define three regimes: \begin{enumerate} \item{\bf Ideal fluid dynamics:} \begin{eqnarray} \label{k:eq1g} \mathrm{Kn}\approx 0 \quad \Longleftrightarrow \quad \frac{\eta}{s}\approx0\ \text{or}\ \theta\approx 0 \ \text{such that}\ \theta\tau_\mathrm{rel}\approx 0 \end{eqnarray} \item{\bf Viscous fluid dynamics:} \begin{eqnarray} \label{k:eq1h} \mathrm{Kn}\lesssim1 \quad \Longleftrightarrow \quad \frac{\eta}{s}\ \text{or}\ \theta\ \text{small} \ \text{such that}\ \theta\tau_\mathrm{rel}\lesssim 1 \end{eqnarray} \item{\bf Hydrodynamics breaks down:} \begin{eqnarray} \label{k:eq1i} \mathrm{Kn}\gg1 \quad \Longleftrightarrow \quad \frac{\eta}{s}\ \text{or}\ \theta\ \text{large} \ \text{such that}\ \theta\tau_\mathrm{rel}\gg 1 \end{eqnarray} \end{enumerate} In high-energy heavy-ion collisions, the initial energy deposition occurs in an approximately boost-invariant fashion along the beam direction, leading to an expansion rate $\theta$ that diverges like $1/\tau$ for small time $\tau$ after impact.\footnote{Here, $\tau = \sqrt{t^2 - z^2}$ is the longitudinal proper time and the boost-invariance refers to independence of the space-time rapidity $\eta = \tanh^{-1}(z/t)$. For more details, see section \ref{sec:taueta}.} On the other hand we now know that the quantum mechanical uncertainty relation places a lower bound $\eta/s \gtrsim 1/(4\pi)$ on any system, even at infinitely strong coupling \cite{Danielewicz:1984ww,Policastro:2001yc,Kovtun:2004de}. Therefore, hydrodynamics is inapplicable during the earliest stage of a heavy-ion collision. At the end of a heavy-ion collision, the mean free paths of hadrons become large compared to the Hubble radius $\sim 1/\theta$ of the expanding fireball, and hydrodynamics breaks down again. This process is called {\em kinetic decoupling}. Between the early pre-equilibrium and the final decoupling stage stretches an extended period of applicability of viscous fluid dynamics.
The most important factor ensuring this is the strong collective coupling of the quark-gluon plasma (QGP) phase which is characterized by a small specific shear viscosity $\eta/s\sim (2{-}3)/(4\pi)$ \cite{Romatschke:2007mq,Song:2010mg,Gale:2012rq,Heinz:2013th}. Note that the validity of hydrodynamics does not rely directly on $\eta/s$ being small, only on $(\eta/s)\cdot(\theta/T)$ being small. So, strictly speaking, strong coupling is not required for hydrodynamics to be valid. Only in extreme situations, such as heavy-ion collisions which are characterized by extreme expansion rates, does hydrodynamics require very strong coupling. In this case, hydrodynamics is applicable even though classical kinetic theory is not, because very strongly coupled quantum field theories do not allow a description in terms of on-shell quasi-particles. It is generally believed that the very earliest stage of a heavy-ion collision has no well-defined quasiparticles at all and is better described by a theory of classical or quantum fields than by a (quantum) kinetic approach. On the other hand, weakly coupled systems with very large values of $\eta/s$, for which the applicability of (even classical) kinetic theory is ensured, can still be described macroscopically through fluid dynamics if they are sufficiently homogeneous and expand slowly. In this case the smallness of $\theta$ can compensate for the largeness of $\eta/s$, resulting in a small Knudsen number. Systems with a large $\eta/s$ but a small product $(\eta/s)\cdot(\theta/T)$ admit simultaneous microscopic classical kinetic and macroscopic hydrodynamic descriptions. In the following subsections we will study such systems to derive the macroscopic hydrodynamic equations from the microscopic kinetic theory.
Hydrodynamics being an effective long-distance theory, the form of the resulting equations does not rely on the validity of the underlying kinetic theory (although the values for the transport coefficients do); they can therefore also be applied to a strongly coupled liquid such as the QGP. \subsection{Ideal fluid dynamics} We define $p$-moments of the distribution function weighted with some momentum observable $O(p)$ by \begin{equation} \label{eq2} \langle O(p)\rangle \equiv \int_p O(p)\, f(x,p) \equiv g \int \frac{d^3p}{(2\pi)^3p^0} O(p)\, f(x,p) \end{equation} where $g$ is a degeneracy factor and $p^0 = E_p = \sqrt{m^2 + {\bf p}^2}$. The particle number current and energy momentum tensor are then written as \begin{eqnarray} \label{eq3} j^\mu {\,=\,} \langle p^\mu\rangle,\quad T^{\mu\nu}{\,=\,}\langle p^\mu p^\nu\rangle. \end{eqnarray} Usually there is more than one particle species in the system, and the conserved baryon charge current $J^\mu_B$ and energy-momentum tensor $T^{\mu\nu}$ are given in terms of linear combinations of $\langle p^\mu\rangle_i$ and $\langle p^\mu p^\nu\rangle_i$ where the subscript $i$ labels the particle species whose distribution function is $f_i(x,p)$: \begin{equation} \label{eq3a} J^\mu_B = \sum_i b_i j^\mu_i = \sum_i b_i \langle p^\mu\rangle_i,\quad T^{\mu\nu} = \sum_i T^{\mu\nu}_i = \sum_i \langle p^\mu p^\nu\rangle_i; \end{equation} here $b_i$ is the baryon charge carried by each particle of species $i$. For simplicity, we restrict the following discussion to a single particle species.
The particle number current and energy-momentum tensor take their ideal fluid dynamical form \begin{eqnarray} \label{eq3b} j^\mu_\mathrm{id}{\,=\,}{n}u^\mu, \qquad T_\mathrm{id}^{\mu\nu}{\,=\,}{\varepsilon} u^\mu u^\nu - P\Delta^{\mu\nu}, \end{eqnarray} where the spatial projector in the local rest frame (LRF) $\Delta^{\mu\nu}$ is given in Eq.~(\ref{eq:Deltamunu}), if we assume that the system is locally momentum isotropic: \begin{equation} \label{eq4} f(x,p)=f_\mathrm{iso}(x,p)\equiv f_\mathrm{iso}\left(\frac{p^\mu u_\mu(x)-\mu(x)}{T(x)}\right). \end{equation} The local equilibrium distribution \begin{eqnarray} \label{eq4a} f_\mathrm{eq}(\zeta)=\frac{1}{e^\zeta+a}, \end{eqnarray} where $\zeta\equiv(p^\mu u_\mu(x){-}\mu(x))/T(x)$ and $a=1,-1,0$ for Fermi-Dirac, Bose-Einstein, and classical Boltzmann statistics, respectively, is a special form of $f_\mathrm{iso}(x,p)$. It is defined as the distribution for which the collision term $C(x,p)$ in the Boltzmann equation (\ref{k:eq1}) vanishes. Note that the ideal fluid decomposition (\ref{eq3b}) does not require chemical equilibrium, i.e.~it holds for arbitrary values of the chemical potential $\mu(x)$, nor does it require complete thermal equilibrium, i.e.~$f_\mathrm{iso}$ is not required to depend on its argument exponentially as is the case for the equilibrium distribution (\ref{eq4a}). If the dependence is non-exponential, the collision term in the Boltzmann equation is non-zero, but its $p^\mu$-moment still vanishes, $\int_p p^\mu C\EQ0$, due to energy-momentum conservation. 
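The moments (\ref{eq2}) can be evaluated numerically. The sketch below uses a classical Boltzmann distribution $f = e^{-E/T}$ with $g=1$ (an assumption chosen for illustration) and checks that the massless gas is conformal, $\varepsilon = 3P$, while a finite mass breaks this relation:

```python
import math

def moments(m, T, n_steps=200000):
    """n = <E>, eps = <E^2>, P = <|p|^2/3> for f = exp(-E/T), g = 1,
    using the Lorentz-invariant measure d^3p/((2 pi)^3 E) of Eq. (eq2);
    midpoint-rule quadrature over p in [0, 50 T]."""
    pmax = 50.0*T
    dp = pmax/n_steps
    n = eps = P = 0.0
    for i in range(n_steps):
        p = (i + 0.5)*dp
        E = math.hypot(p, m)
        w = p*p/(2*math.pi**2)*math.exp(-E/T)*dp  # angular integral done
        n += w                  # <E>: one factor E cancels the 1/E measure
        eps += w*E              # <E^2>: LRF energy density
        P += w*p*p/(3*E)        # <|p|^2/3>: LRF pressure
    return n, eps, P

T = 0.15                        # illustrative temperature (assumed, GeV)
n0, eps0, P0 = moments(0.0, T)
assert abs(eps0 - 3*P0) < 1e-9*eps0           # massless: eps = 3P
assert abs(n0 - T**3/math.pi**2) < 1e-5*n0    # analytic check: n = T^3/pi^2
n1, eps1, P1 = moments(0.14, T)               # pion-like mass (assumed)
assert eps1 > 3*P1                            # conformality broken by m > 0
```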
The ideal hydrodynamic equations follow by inserting the ideal fluid decomposition (\ref{eq3b}) into the conservation laws Eq.~(\ref{eq:1}): \begin{eqnarray} \label{eq:ideal} \dot{n} =-n\theta,\qquad \dot{\varepsilon} = -(\varepsilon{+}P)\theta,\qquad \dot{u}^\mu = \frac{\nabla^\mu P}{\varepsilon{+}P} = \frac{c_s^2}{1+c_s^2}\,\frac{\nabla^\mu \varepsilon}{\varepsilon}, \end{eqnarray} where the very last expression assumes an EOS of type $P=c_s^2 \varepsilon$. $\dot F$ denotes the LRF time derivative of a function $F$, $\dot F{\,\equiv\,}D_\tau F{\,\equiv\,}u^\mu \partial_\mu F$, and $\nabla^\mu=\Delta^{\mu\nu}\partial_\nu$ the spatial gradient in the LRF. Thus, $\partial^\mu = u^\mu D_\tau +\nabla^\mu$.\footnote{Note that in curvilinear coordinates or curved space-times, the partial derivative $\partial_\mu$ must be replaced by the covariant derivative $d_\mu$.} Equations~(\ref{eq:ideal}) can be solved numerically for the local particle density $n(x)$, energy density $\varepsilon(x)$, and flow velocity $u^\mu(x)$, with the temperature $T(x)$, chemical potential $\mu(x)$ and pressure $P(x)$ following from the equation of state (EOS) of the fluid. Local deviations from chemical equilibrium result in a non-equilibrium value of the local chemical potential $\mu(x)$ and a non-zero right hand side in the current conservation equation for $j^\mu$. Deviations from thermal equilibrium (while preserving local isotropy) must be accounted for by a non-equilibrium pressure in the EOS\, $P(\varepsilon,n)$. In both cases, the conservation laws, Eqs.~(\ref{eq:1}), lead to a non-vanishing entropy production rate $\partial_\mu S^\mu{\,\sim\,}1/\tau_\mathrm{rel}{\,\ne\,}0$.
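As a simple numerical illustration of the energy equation in (\ref{eq:ideal}): for an EOS $P = c_s^2\varepsilon$ and a boost-invariant expansion rate $\theta = 1/\tau$ (see section \ref{sec:taueta}), the exact solution is the power law $\varepsilon \propto \tau^{-(1+c_s^2)}$, which a standard RK4 integration reproduces (initial values below are illustrative):

```python
def ideal_bjorken(eps0, tau0, tau1, cs2, n_steps=20000):
    """RK4 integration of d(eps)/d(tau) = -(1 + cs2)*eps/tau,
    i.e. eps-dot = -(eps + P)*theta with P = cs2*eps and theta = 1/tau."""
    f = lambda tau, eps: -(1 + cs2)*eps/tau
    h = (tau1 - tau0)/n_steps
    eps, tau = eps0, tau0
    for _ in range(n_steps):
        k1 = f(tau, eps)
        k2 = f(tau + h/2, eps + h*k1/2)
        k3 = f(tau + h/2, eps + h*k2/2)
        k4 = f(tau + h, eps + h*k3)
        eps += h*(k1 + 2*k2 + 2*k3 + k4)/6
        tau += h
    return eps

cs2, eps0, tau0, tau1 = 1/3, 10.0, 0.5, 5.0   # illustrative values
eps_num = ideal_bjorken(eps0, tau0, tau1, cs2)
eps_exact = eps0*(tau0/tau1)**(1 + cs2)        # eps ~ tau^-(1+cs^2)
assert abs(eps_num - eps_exact) < 1e-8*eps_exact
```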
\subsection{Viscous fluid dynamics} \subsubsection{Navier-Stokes (NS), Israel-Stewart (IS) and Denicol-Niemi-Molnar-Rischke (DNMR) theory ({\sc vHydro})} Israel-Stewart (IS) and Denicol-Niemi-Molnar-Rischke (DNMR) second-order viscous fluid dynamics \cite{Israel:1979wp} are obtained by using for $f(x,p)$ in (\ref{eq3}) the ansatz \begin{equation} \label{eq6} f(x,p)= f_\mathrm{iso}\left(\frac{p^\mu u_\mu(x)-\mu(x)}{T(x)}\right) +\delta f(x,p). \end{equation} The correction $\delta f$ describes the deviation of the solution $f(x,p)$ of the Boltzmann equation from local momentum isotropy. It is supposed to be ``small'', in a sense that will become clearer below, and will thus be treated perturbatively. Most authors set $f_\mathrm{iso}=f_\mathrm{eq}$, i.e. they expand around a local equilibrium state. To obtain the correct form of the hydrodynamic equations this is not necessary; only the form of the equation of state $P(\varepsilon,n)$ and the values of the transport coefficients depend on this choice. We, too, will make this choice for simplicity, but emphasize that under certain conditions the perturbative treatment of $\delta f$ may be better justified if the leading-order distribution $f_\mathrm{iso}$ is not assumed to be thermal. For later convenience we decompose $p^\mu$ into its temporal and spatial components in the LRF: \begin{eqnarray} \label{eq6a} p^\mu{\,=\,}(u^\mu u^\nu{+}\Delta^{\mu\nu})p_\nu={{\bar{E}}_p}u^\mu {+} p^{\langle\mu\rangle} \end{eqnarray} where ${\bar{E}}_p{\,\equiv\,}u_\mu p^\mu$ and $p^{\langle\mu\rangle}{\,\equiv\,}\Delta^{\mu\nu}p_\nu$ are the energy and spatial momentum components in the LRF. Then \begin{eqnarray} \label{eq6b} n = \langle {\bar{E}}_p\rangle,\qquad \varepsilon = \langle {\bar{E}}_p^2\rangle.
\end{eqnarray} The decomposition (\ref{eq6}) is made unique by Landau matching: First, define the LRF by solving the eigenvalue equation (\ref{EVeq}) with the constraint $u^\mu u_\mu\EQ1$ which selects among the four eigenvectors of $T^{\mu\nu}$ the timelike one. Eq.~(\ref{EVeq}) fixes the flow vector $u^\mu(x)$ and the LRF energy density. Next, we fix $T(x)$ and $\mu(x)$ by demanding that $\delta f$ gives no contribution to the local energy and baryon density: \begin{eqnarray} \label{eq6c} \langle {\bar{E}}_p\rangle_\delta = \langle {\bar{E}}_p^2\rangle_\delta=0. \end{eqnarray} Inserting (\ref{eq6}) into (\ref{eq3}) we find the general decomposition \begin{equation} \label{eq7} j^\mu = j^\mu_\mathrm{id} + V^\mu, \quad T^{\mu\nu}=T^{\mu\nu}_\mathrm{id} - \Pi\Delta^{\mu\nu} + \pi^{\mu\nu}, \end{equation} with a non-zero number flow in the LRF, \begin{eqnarray} \label{eq7a} V^\mu{\,=\,}\bigl\langle p^{\langle\mu\rangle}\bigr\rangle_\delta, \end{eqnarray} a bulk viscous pressure \begin{eqnarray} \label{eq7b} \Pi{\,=\,}-\frac{1}{3}\bigl\langle p^{\langle\alpha\rangle} p_{\langle\alpha\rangle}\bigr\rangle_\delta, \end{eqnarray} and a shear stress \begin{eqnarray} \label{eq7c} \pi^{\mu\nu}{\,=\,}\bigl\langle p^{\langle\mu} p^{\nu\rangle} \bigr\rangle_\delta, \end{eqnarray} where $\langle\dots\rangle_\delta$ indicates moments taken with the deviation $\delta f$ from $f_\mathrm{iso}$. In the last equation we introduced the notation \begin{eqnarray} \label{eq7d} A^{\langle\mu\nu\rangle}\equiv \Delta^{\mu\nu}_{\alpha\beta} A^{\alpha\beta}, \end{eqnarray} where $\Delta^{\mu\nu}_{\alpha\beta}$ is the spin-2 projector introduced in Eq.(\ref{eq:spin2proj}) denoting the traceless and transverse (to $u^\mu$) part of a tensor $A^{\mu\nu}$. The shear stress tensor $\pi^{\mu\nu}{\,=\,}{T}^{\langle\mu\nu\rangle}$ thus has 5 independent components while $V^\mu$, which is also orthogonal to $u^\mu$ by construction, has 3 independent components. 
Using the viscous hydrodynamic decomposition (\ref{eq7}) in the conservation laws $\partial_\mu T^{\mu\nu} = 0$ and $\partial_\mu j^\mu = 0$, we obtain the {\sc vHydro} viscous hydrodynamic evolution equations \begin{align} \label{eq:vhydro} &\dot n = -n\theta -\nabla_\mu V^\mu, \nonumber\\ &\dot{\varepsilon}=-(\varepsilon{+}P{+}\Pi)\theta + \pi_{\mu\nu}\sigma^{\mu\nu}, \nonumber\\ &(\varepsilon{+}P{+}\Pi)\dot{u}^\mu=\nabla^\mu(P{+}\Pi) - \Delta^{\mu\nu} \nabla^\sigma\pi_{\nu\sigma} + \pi^{\mu\nu}\dot{u}_\nu, \end{align} where $\sigma^{\mu\nu} = \nabla^{\langle\mu}u^{\nu\rangle}$ is the velocity shear tensor introduced in Eq.(\ref{eq:shear}). They differ from the ideal fluid dynamical equations (\ref{eq:ideal}) by additional source terms arising from the dissipative flows. Altogether, the deviation $\delta f$ has introduced (3+1+5)=9 additional dissipative flow degrees of freedom for which additional evolution equations are needed. These cannot be obtained from the macroscopic conservation laws but require input from the microscopic dynamics. In a system that is initially in local equilibrium, the deviation $\delta f$ is caused by the dynamical response of the system to gradients in the thermodynamic and flow variables. The forces that drive this deviation can be classified by their Lorentz structure into scalar, vector, and tensor forces: \begin{align} \label{eq7e} \text{scalar force:}\quad \theta &= \partial_\mu u^\mu\ \text{(scalar expansion rate)}; \nonumber\\ \text{vector force:}\quad I^\mu &= \nabla^\mu\left(\frac{\mu}{T}\right)\ \text{(fugacity gradient)}; \nonumber\\ \text{symmetric tensor force:}\quad \sigma^{\mu\nu} &= \nabla^{\langle\mu}u^{\nu\rangle}\ \text{(velocity shear tensor)}; \nonumber\\ \text{antisymmetric tensor force:}\quad \omega^{\mu\nu} &= \frac{1}{2}\left(\nabla^\mu u^\nu{-}\nabla^\nu u^\mu\right)\ \text{(vorticity tensor)}.
\end{align} These forces generate the dissipative flows: the scalar bulk viscous pressure $\Pi$, the diffusion current $V^\mu$, and the shear stress tensor $\pi^{\mu\nu}$.\footnote{The energy-momentum tensor is symmetric, so the dissipative flows have no antisymmetric tensor contribution; the antisymmetric vorticity tensor does, however, couple to the other dissipative forces and flows at second order in the Knudsen and inverse Reynolds numbers.} The strength of the forces (\ref{eq7e}) driving the system away from local equilibrium is characterized by the Knudsen number. The system response can be characterized by inverse Reynolds numbers associated with the dissipative flows \cite{Denicol:2012cn}: \begin{eqnarray} \label{eq7f} \mathrm{R}_\Pi^{-1} = \frac{|\Pi|}{P},\quad \mathrm{R}_V^{-1} = \frac{\sqrt{-V_\mu V^\mu}}{n},\quad \mathrm{R}_\pi^{-1} = \frac{\sqrt{\pi_{\mu\nu}\pi^{\mu\nu}}}{P}. \end{eqnarray} Due to the time delay $\tau_\mathrm{rel}$ between the action of the force and the system response, built into the collision term of the Boltzmann equation, the inverse Reynolds numbers are not necessarily of the same order as the Knudsen number: For example, an initially small bulk viscous pressure can remain small, due to critical slowing down, as the system passes through a phase transition, even though the bulk viscosity becomes large during the transition \cite{Song:2009rh}. Conversely, strong deviations from local equilibrium during a rapidly expanding pre-equilibrium stage in heavy-ion collisions can lead to large initial values for the dissipative flows, and a slow equilibration rate may cause them to stay large for a while even though the expansion rate decreases with longitudinal proper time as $1/\tau$. Deviations from equilibrium, and the accuracy of their description by viscous fluid dynamics, are therefore controlled by a combination of Knudsen and inverse Reynolds numbers \cite{Denicol:2012cn}.
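As a concrete numerical illustration (our own sketch, not part of the formalism; the function name and all values are illustrative), the inverse Reynolds numbers (\ref{eq7f}) can be evaluated from the dissipative flows expressed in the local rest frame, where $u^\mu=(1,0,0,0)$ so that $-V_\mu V^\mu=|\vec{V}|^2$ and $\pi_{\mu\nu}\pi^{\mu\nu}=\sum_{ij}\pi_{ij}^2$; following Denicol {\it et al.}, the diffusion current is normalized by the equilibrium density $n$:

```python
import numpy as np

def inverse_reynolds(Pi, V, pi, P, n):
    """Inverse Reynolds numbers in the local rest frame (u^mu = (1,0,0,0)),
    where -V_mu V^mu = |V|^2 and pi_{mu nu} pi^{mu nu} = sum_ij pi_ij^2."""
    R_Pi = abs(Pi) / P                    # bulk viscous pressure vs. pressure
    R_V  = np.linalg.norm(V) / n          # diffusion current vs. density
    R_pi = np.sqrt(np.sum(pi * pi)) / P   # shear stress vs. pressure
    return R_Pi, R_V, R_pi

# Illustration: Navier-Stokes shear stress of Bjorken flow (theta = 1/tau),
# pi^{ij} = diag(2/3, 2/3, -4/3) * eta/tau, traceless by construction.
eta_over_tau = 0.1                        # illustrative value of eta/tau (units of P)
pi_NS = np.diag([2.0/3.0, 2.0/3.0, -4.0/3.0]) * eta_over_tau
R_Pi, R_V, R_pi = inverse_reynolds(0.0, np.zeros(3), pi_NS, P=1.0, n=1.0)
# R_pi = sqrt(8/3) * (eta/tau) / P
```

For this Navier-Stokes example the only non-vanishing inverse Reynolds number is the shear one, $\mathrm{R}_\pi^{-1}=\sqrt{8/3}\,(\eta/\tau)/P$, which grows as the expansion rate $1/\tau$ increases.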
The 9 equations of motion describing the relaxation of the 9 dissipative flow components are controlled by microscopic physics, encoded in the collision term on the right hand side of the Boltzmann equation, and can be derived from approximate solutions of that equation. This was first done more than 35 years ago by Israel and Stewart in Ref.\cite{Israel:1979wp}, but when the problem was recently revisited it was found \cite{Baier:2007ix, Betz:2009zz, Denicol:2012cn, Denicol:2014vaa} that the relaxation equations take a much more general form than originally derived. Specifically, Denicol {\it et al.} \cite{Denicol:2012cn} found the following general structure \begin{eqnarray} \label{eq:DNMR} &\tau_\Pi \dot{\Pi} + \Pi = -\zeta \theta + \mathcal{J} + \mathcal{K} + \mathcal{R}, \nonumber\\ &\tau_V \Delta^{\mu\nu}\dot{V}_\nu + V^\mu = \kappa I^\mu + \mathcal{J}^\mu + \mathcal{K}^\mu + \mathcal{R}^\mu, \nonumber\\ &\tau_\pi \Delta^{\mu\nu}_{\alpha\beta}\dot{\pi}^{\alpha\beta} + \pi^{\mu\nu} = 2\eta \sigma^{\mu\nu} + \mathcal{J}^{\mu\nu} + \mathcal{K}^{\mu\nu} + \mathcal{R}^{\mu\nu}. \end{eqnarray} Here all calligraphic terms are of second order in combined powers of the Knudsen and inverse Reynolds numbers. $\mathcal{J}$ terms contain products of factors that are each of first order in the Knudsen and inverse Reynolds numbers; $\mathcal{K}$ terms are second order in the Knudsen number, and $\mathcal{R}$ terms are second order in the inverse Reynolds numbers. In the relaxation time approximation, with an energy-independent relaxation time $\tau_\mathrm{rel}$, the relaxation times for the dissipative flows all agree with each other: $\tau_\Pi{\,=\,}\tau_V{\,=\,}\tau_\pi{\,=\,}\tau_\mathrm{rel}$ \cite{Denicol:2014vaa}. The same does not hold for more general forms of the collision term.
If in Eqs.~(\ref{eq:DNMR}) we set the relaxation times and all other second-order terms to zero, we obtain the equations of {\bf relativistic Navier-Stokes theory}: \begin{eqnarray} \label{eq:NS} \Pi= -\zeta\theta,\quad V^\mu = \kappa I^\mu,\quad \pi^{\mu\nu}=2\eta\sigma^{\mu\nu}. \end{eqnarray} The relaxation equations (\ref{eq:DNMR}) have solutions that, for sufficiently small expansion rates (see below), approach asymptotically (at times $\tau\gg\tau_{\Pi,V,\pi}$) the Navier-Stokes values (\ref{eq:NS}). However, plugging the Navier-Stokes solutions (\ref{eq:NS}) directly back into the decompositions (\ref{eq7}) and using them in the conservation laws (\ref{eq:1}) leads to viscous hydrodynamic equations of motion that are acausal and numerically unstable \cite{Hiscock:1983zz,Hiscock:1985zz}. The physical reason for this is the instantaneous response of the dissipative flows to the dissipative forces encoded in Eqs.~(\ref{eq:NS}), which violates causality. A causal and numerically stable implementation of viscous fluid dynamics must account for the time delay between cause and effect of dissipative phenomena and must therefore necessarily be of second order in the Knudsen and inverse Reynolds numbers. The first relativistic causal second-order theory of viscous fluid dynamics was Israel-Stewart (IS) theory \cite{Israel:1979wp}. It amounts to dropping the $\mathcal{K}$ and $\mathcal{R}$ terms in (\ref{eq:DNMR}) and replacing (for massless particles) $\mathcal{J}\to{-}\frac{4}{3}\tau_\Pi\theta\Pi$, $\mathcal{J}^\mu\to{-}\tau_V\theta V^\mu$, and $\mathcal{J}^{\mu\nu}\to{-}\frac{4}{3}\tau_\pi\theta\pi^{\mu\nu}$.
The importance of keeping specifically these second-order $\mathcal{J}$-terms for the preservation of conformal invariance in a system of massless degrees of freedom was stressed by Baier {\it et al.}\cite{Baier:2007ix} For conformal systems the resulting Israel-Stewart relaxation equations can be written in the form \cite{Song:2008si} \begin{align} \label{eq:IS} \dot{\Pi} &= -\frac{1}{\tau'_\Pi}\Bigl(\Pi{+}\zeta'\theta\Bigr), \nonumber\\ \Delta^{\mu\nu}\dot{V}_\nu &= -\frac{1}{\tau'_V}\Bigl(V^\mu{-}\kappa' I^\mu\Bigr), \nonumber\\ \Delta^{\mu\nu}_{\alpha\beta}\dot{\pi}^{\alpha\beta} &= -\frac{1}{\tau'_\pi}\Bigl(\pi^{\mu\nu}{-}2\eta' \sigma^{\mu\nu}\Bigr), \end{align} with effective transport coefficients and relaxation times that are modified by the scalar expansion rate as follows: \begin{eqnarray} \label{eq7h} \zeta' = \frac{\zeta}{1{+}\gamma_\Pi},\quad \kappa' = \frac{\kappa}{1{+}\gamma_V},\quad \eta' = \frac{\eta}{1{+}\gamma_\pi},\quad \tau'_i = \frac{\tau_i}{1{+}\gamma_i} \ (i=\Pi,\,V,\,\pi), \end{eqnarray} where $\gamma_i=\frac{4}{3}\theta\tau_i$ for $i=\Pi,\pi$ and $\gamma_i=\theta\tau_i$ for $i=V$. These relaxation equations describe an asymptotic approach of the dissipative flows to effective Navier-Stokes values that, for a positive scalar expansion rate $\theta$, are reduced relative to their first-order values (\ref{eq:NS}) by a factor $1{+}\gamma_i$, while the effective rate of approach to this effective Navier-Stokes limit is sped up by the same factor. $\gamma_i$ involves the product of the scalar expansion rate and the respective relaxation time. So, compared to a more slowly expanding system, a rapidly expanding system with the same microscopic scattering cross sections is characterized by lower effective viscosities and shorter effective relaxation times.\cite{Heinz:2005bw,Song:2008si} When all the second-order terms are kept, the DNMR equations become quite complicated.
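The role of the factors $1{+}\gamma_i$ can be made explicit with a simple numerical experiment (our own sketch; all parameter values are arbitrary illustrations): integrating the bulk relaxation equation from (\ref{eq:IS}) at constant scalar expansion rate $\theta$, the bulk pressure relaxes on the timescale $\tau'_\Pi$ to the reduced effective Navier-Stokes value $-\zeta'\theta=-\zeta\theta/(1{+}\gamma_\Pi)$ rather than to the first-order value $-\zeta\theta$:

```python
# Sketch (our own check; all numbers illustrative): relax the bulk pressure with
# the conformal Israel-Stewart equation  dPi/dt = -(Pi + zeta' theta)/tau'_Pi
# at constant expansion rate theta, and compare the late-time value with the
# effective Navier-Stokes limit -zeta*theta/(1 + gamma_Pi).
zeta, tau_Pi, theta = 1.0, 0.5, 0.6
gamma = 4.0 / 3.0 * theta * tau_Pi           # gamma_Pi = (4/3) theta tau_Pi
zeta_eff, tau_eff = zeta / (1 + gamma), tau_Pi / (1 + gamma)

Pi, dt = 0.0, 1e-4
for _ in range(int(20 * tau_eff / dt)):      # evolve for ~20 effective relaxation times
    Pi += -dt * (Pi + zeta_eff * theta) / tau_eff

Pi_NS_eff = -zeta * theta / (1 + gamma)      # reduced effective Navier-Stokes value
# |Pi| ends up smaller in magnitude than the first-order value zeta*theta
```

The same exercise with $\gamma_\Pi{\,=\,}0$ recovers the first-order value, confirming that both the target value and the approach rate are rescaled by $1{+}\gamma_\Pi$.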
The $\mathcal{J}$, $\mathcal{K}$ and $\mathcal{R}$ terms listed by DNMR \cite{Denicol:2012cn} add up to 16 terms for $\Pi$, 18 terms for $V^\mu$, and 19 terms for $\pi^{\mu\nu}$, each with its own transport coefficient. While many of these transport coefficients have been computed for massless theories at both weak and strong coupling (a subject too rich to be fully reviewed here, see papers by Denicol {\it et al.}\cite{Denicol:2012cn,Molnar:2013lta,Denicol:2014vaa}, Moore {\it et al.}\cite{York:2008rr,Moore:2012tc} and Noronha {\it et al.}\cite{Finazzo:2014cna} for lists of references to relevant work), quite a few are still unknown. Furthermore, the QGP is neither weakly nor strongly enough coupled, nor is it sufficiently conformally symmetric, for any of these calculations to be quantitatively reliable for heavy-ion collisions. For these reasons, many practical applications of viscous fluid dynamics employ phenomenological values for the transport coefficients, and work studying which terms need to be kept and which might be of lesser importance is still ongoing \cite{Molnar:2013lta}. One important non-linear coupling mechanism that enters at second order is the bulk-shear coupling, where shear stress drives a bulk viscous pressure and vice versa \cite{Denicol:2012cn,Molnar:2013lta,Denicol:2014vaa,Denicol:2014mca}. Heavy-ion collisions are characterized by initially very large differences between the longitudinal and transverse expansion rates that cause large shear stress. The latter, in turn, creates a bulk viscous pressure via bulk-shear coupling that can dominate over the one generated \`a la Navier-Stokes by the scalar expansion rate \cite{Denicol:2014mca} and may actually be able to flip its sign. This should be taken into account in phenomenological applications of viscous hydrodynamics to heavy-ion collisions.
\subsubsection{Anisotropic hydrodynamics ({\sc aHydro})} The dissipative flows are given by moments of the deviation $\delta f(x,p)$ of the distribution function from local equilibrium, and their relaxation equations are derived from the Boltzmann equation using approximations that, in one way or another, assume that $\delta f$ is small. However, systems featuring strongly anisotropic expansion, such as the early evolution stage of the fireballs created in ultra-relativistic heavy-ion collisions, generate strong local momentum anisotropies: the width of the LRF momentum distribution along a certain direction is inversely proportional to the local expansion rate in that direction, and this momentum-space distortion grows with the magnitude of the shear viscosity. In viscous hydrodynamics, where we expand $f$ around a locally isotropic LO distribution (see Eq.~(\ref{eq6})), this local momentum anisotropy must be absorbed entirely by $\delta f$, making $\delta f$ large and rendering the approximations used for calculating the evolution of the dissipative flows generated by $\delta f$ unreliable. Indeed, even for moderate specific shear viscosities $\eta/s\sim 5{-}10$ the (negative) longitudinal component of the viscous shear pressure can become so large in Israel-Stewart theory that it overwhelms the thermal pressure, resulting in a negative total pressure along the beam direction -- which, according to the kinetic definition $P_L=\langle p_z^2\rangle$, should never happen.
Anisotropic hydrodynamics \cite{Martinez:2010sc,Florkowski:2010cf} is based on the idea of accounting for the local momentum anisotropy generated by anisotropic expansion already in the LO distribution, by parametrizing \cite{Romatschke:2003ms} \begin{equation} \label{eq8} f(x,p) = f_\mathrm{RS}(x,p)\equiv f_\mathrm{iso}\left(\frac{\sqrt{p_\mu\Xi^{\mu\nu}(x) p_\nu}-\tilde\mu(x)}{\Lambda(x)}\right), \end{equation} where $\Xi^{\mu\nu}(x) = u^\mu(x) u^\nu(x) +\xi(x) z^\mu(x) z^\nu(x)$, $z^\mu(x)$ being a unit vector in the longitudinal $z$ direction in the LRF. The subscript {\it RS} refers to Romatschke and Strickland, the authors of Ref.\cite{Romatschke:2003ms}. This distribution is characterized by three flow parameters $u^\mu(x)$ and three ``thermodynamic'' parameters: the ``transverse temperature'' $\Lambda(x)$, the effective chemical potential $\tilde{\mu}(x)$, and the momentum-anisotropy parameter $\xi(x)$. Inserting (\ref{eq8}) into (\ref{eq3}) yields the {\sc aHydro} decomposition \begin{eqnarray} \label{eq9} \!\!\!\!\!\!\!\!\!\!\!\!\!\! && j^\mu_\mathrm{RS} = n_\mathrm{RS} u^\mu, \quad T^{\mu\nu}_\mathrm{RS} = \varepsilon_\mathrm{RS} u^\mu u^\nu - P_T \Delta^{\mu\nu} + (P_L-P_T)z^\mu z^\nu, \\\label{eq10} \!\!\!\!\!\!\!\!\!\!\!\!\!\! &&n_\mathrm{RS} = \langle E\rangle_\mathrm{RS} = {\cal R}_0(\xi)\, n_\mathrm{iso}(\Lambda,\tilde\mu), \quad \varepsilon_\mathrm{RS} = \langle E^2\rangle_\mathrm{RS} = {\cal R}(\xi)\, \varepsilon_\mathrm{iso}(\Lambda,\tilde\mu), \\\nonumber &&P_{T,L} = \langle p^2_{T,L}\rangle_\mathrm{RS} = {\cal R}_{T,L}(\xi) \, P_\mathrm{iso}(\Lambda,\tilde\mu).
\end{eqnarray} For massless systems, the local momentum anisotropy effects factor out via the $\mathcal{R}(\xi)$-functions:\cite{Martinez:2010sc} \begin{align} \label{eq:R} &{\cal R}_0(\xi)=\frac{1}{\sqrt{1+\xi}}, &{\cal R}(\xi) = \frac{1}{2}\left(\frac{1}{1+\xi} +\frac{\arctan\sqrt{\xi}}{\sqrt{\xi}} \right) \, , \nonumber\\ &{\cal R}_T(\xi) = \frac{3}{2 \xi} \left( \frac{1+(\xi^2{-}1){\cal R}(\xi)}{\xi + 1}\right) \, , &{\cal R}_L(\xi) = \frac{3}{\xi} \left( \frac{(\xi{+}1){\cal R}(\xi)-1}{\xi+1}\right) \, . \end{align} The isotropic pressure is obtained from a locally isotropic equation of state $P_\mathrm{iso}(\Lambda,\tilde\mu){\,=\,} P_\mathrm{iso}(\varepsilon_\mathrm{iso}(\Lambda,\tilde\mu), n_\mathrm{iso}(\Lambda,\tilde\mu))$. For massless noninteracting partons, $P_\mathrm{iso}(\Lambda,\tilde\mu)= \frac{1}{3}\varepsilon_\mathrm{iso}(\Lambda,\tilde\mu)$ independent of chemical composition. To compare with ideal and IS viscous hydrodynamics, we need to assign the locally anisotropic system an appropriate temperature $T(x){\,=\,}{T}\bigl(\xi(x),\Lambda(x),\tilde{\mu}(x)\bigr)$ and chemical potential $\mu(x){\,=\,}\mu\bigl(\xi(x),\Lambda(x),\tilde\mu(x)\bigr)$, thinking of $f_\mathrm{RS}(\xi,\Lambda)$ as an expansion around the locally isotropic distribution $f_\mathrm{iso}(T)$. For this we impose the generalized Landau matching conditions $\varepsilon_\mathrm{RS}(\xi,\Lambda,\tilde\mu) {\,=\,}{\varepsilon}_\mathrm{iso}(T,\mu)$ and $n_\mathrm{RS}(\xi,\Lambda,\tilde\mu) {\,=\,}{n}_\mathrm{iso}(T,\mu)$. For example, using an exponential (Boltzmann) function for $f_\mathrm{iso}$ with $\mu=\tilde\mu=0$, one finds $T{\,=\,}\Lambda {\cal R}^{1/4}(\xi)$. With this matching we can write \begin{eqnarray} \label{eq11} \!\!\!\!\!\!\!\!\!\!\!\!\!\! &&T^{\mu\nu}_\mathrm{RS} = T^{\mu\nu}_\mathrm{id} - (\Delta P+\Pi_\mathrm{RS})\Delta^{\mu\nu} + \pi^{\mu\nu}_\mathrm{RS}, \\\label{eq12} \!\!\!\!\!\!\!\!\!\!\!\!\!\!
&&\Delta P + \Pi_\mathrm{RS} = -\frac{1}{3}\int_p p_\alpha \Delta^{\alpha\beta} p_\beta (f_\mathrm{RS}-f_\mathrm{iso}) \qquad (= 0\ \mathrm{for}\ m=0), \\\label{eq13} \!\!\!\!\!\!\!\!\!\!\!\!\!\! &&\pi^{\mu\nu}_\mathrm{RS} = \int_p p^{\langle\mu} p^{\nu\rangle} (f_\mathrm{RS}{-}f_\mathrm{iso}) = (P_T{-}P_L) \,\frac{x^\mu x^\nu+y^\mu y^\nu-2 z^\mu z^\nu}{3}. \end{eqnarray} We see that $\pi^{\mu\nu}_\mathrm{RS}$ has only one independent component, $P_T{-}P_L$, so {\sc aHydro} leaves 4 of the 5 components of $\pi^{\mu\nu}$ unaccounted for. For massless particles we have $(P_T{-}P_L)/P_\mathrm{iso}(\varepsilon){\,=\,} {\cal R}_T(\xi){-}{\cal R}_L(\xi)$, so the equation of motion for $\pi^{\mu\nu}_\mathrm{RS}$ can be replaced by one for $\xi$. For (2+1)-dimensional expansion with longitudinal boost-invariance these equations were derived and solved numerically by Martinez {\it et al.} \cite{Martinez:2012tu}. For $m\ne0$ we need an additional ``anisotropic EOS'' for $(\Delta P/P_\mathrm{iso}){\,\equiv\,} (2P_T{+}P_L)/(3P_\mathrm{iso}) - 1$, in order to separate $\Delta P$ from the viscous bulk pressure $\Pi$. \subsubsection{Viscous anisotropic hydrodynamics ({\sc vaHydro})} As explained above, {\sc aHydro} \cite{Martinez:2010sc,Florkowski:2010cf} accounts for only one (albeit the largest) of the five independent components of the shear stress tensor $\pi^{\mu\nu}$. It therefore cannot be used to compute the viscous suppression of elliptic flow, which is sensitive to, e.g., $\pi^{xx}{-}\pi^{yy}$. On the other hand, since the four remaining components of the shear stress tensor never become as large as the longitudinal/transverse pressure difference (with smooth initial density profiles they start out as zero, and with fluctuating initial conditions they are initially small), they can be treated ``perturbatively'' \`a la Israel and Stewart, without running into problems even at early times.
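The $\mathcal{R}$-functions (\ref{eq:R}) are straightforward to implement numerically; a useful consistency check (our own sketch) is the exact identity $2{\cal R}_T+{\cal R}_L=3{\cal R}$, which expresses the tracelessness condition $\varepsilon_\mathrm{RS}=2P_T+P_L$ of the massless energy-momentum tensor, together with the isotropic limit ${\cal R}_i(\xi{\to}0)\to1$:

```python
import numpy as np

# Sketch (our own implementation) of the massless R-functions, valid for xi > 0.
def R0(xi):  return 1.0 / np.sqrt(1.0 + xi)
def R(xi):   return 0.5 * (1.0/(1.0 + xi) + np.arctan(np.sqrt(xi))/np.sqrt(xi))
def R_T(xi): return 1.5/xi * (1.0 + (xi**2 - 1.0)*R(xi)) / (xi + 1.0)
def R_L(xi): return 3.0/xi * ((xi + 1.0)*R(xi) - 1.0) / (xi + 1.0)

xi = np.linspace(0.01, 50.0, 500)
# Tracelessness of the massless energy-momentum tensor: eps = 2 P_T + P_L
identity = 2*R_T(xi) + R_L(xi) - 3*R(xi)        # vanishes identically in xi
# Isotropic limit: all R-functions approach 1 as xi -> 0
limits = [R0(1e-6), R(1e-6), R_T(1e-6), R_L(1e-6)]
```

At large $\xi$ one can also verify numerically that ${\cal R}_L/{\cal R}_T\to0$, i.e. the longitudinal pressure is strongly suppressed relative to the transverse one, as expected for rapid longitudinal expansion.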
Combining the non-perturbative dynamics of $P_L{-}P_T$ via {\sc aHydro} with a perturbative treatment of the remaining viscous stress terms $\tilde{\pi}^{\mu\nu}$ \`a la Israel-Stewart defines the {\sc vaHydro} scheme \cite{Bazow:2013ifa}. {\sc vaHydro} is expected to perform better than both IS theory and {\sc aHydro} during all evolution stages. The {\sc vaHydro} equations are obtained by generalizing the ansatz (\ref{eq8}) to include arbitrary (but small) corrections to the spheroidally deformed $f_\mathrm{RS}(x,p)$: \begin{equation} \label{eq14} f(x,p) = f_\mathrm{RS}(x,p) + \delta\tilde f(x,p) = f_\mathrm{iso}\left(\frac{\sqrt{p_\mu\Xi^{\mu\nu}(x) p_\nu}-\tilde\mu(x)}{\Lambda(x)}\right) + \delta\tilde f(x,p) . \end{equation} The parameters $\Lambda$ and $\tilde{\mu}$ in (\ref{eq14}) are Landau-matched as before, {\it i.e.} by requiring $\langle E\rangle_{\tilde\delta}{\,=\,}\langle E^2\rangle_{\tilde\delta}\EQ0$. To fix the value of the deformation parameter $\xi$ one demands that $\delta\tilde{f}$ does not contribute to the pressure anisotropy $P_T{-}P_L$; this requires $(x_\mu x_\nu{+} y_\mu y_\nu{-}2 z_\mu z_\nu)\langle p^{\langle\mu} p^{\nu\rangle}\rangle_{\tilde\delta}\EQ0$. 
Then, upon inserting (\ref{eq14}) into (\ref{eq3}), we obtain the {\sc vaHydro} decomposition \begin{eqnarray} \label{eq15} j^\mu = j^\mu_\mathrm{RS} +\tilde{V}^\mu,\quad T^{\mu\nu}=T^{\mu\nu}_\mathrm{RS} - \tilde\Pi \Delta^{\mu\nu} + \tilde\pi^{\mu\nu}, \end{eqnarray} with \begin{eqnarray} \label{eq15a} \tilde{V}^\mu = \bigl\langle p^{\langle\mu\rangle}\bigr\rangle_{\tilde\delta},\quad \tilde\Pi = -{\textstyle\frac{1}{3}} \bigl\langle p^{\langle\alpha\rangle} p_{\langle\alpha\rangle}\bigr\rangle_{\tilde\delta},\quad \tilde{\pi}^{\mu\nu} = \bigl\langle p^{\langle\mu} p^{\nu\rangle}\bigr\rangle_{\tilde\delta}, \end{eqnarray} subject to the constraints \begin{eqnarray} \label{eq15b} u_\mu \tilde\pi^{\mu\nu}{\,=\,}\tilde\pi^{\mu\nu} u_\nu{\,=\,}(x_\mu x_\nu{+}y_\mu y_\nu{-}2 z_\mu z_\nu)\tilde\pi^{\mu\nu}{\,=\,}\tilde\pi^\mu_\mu\EQ0 \end{eqnarray} Clearly, the additional shear stress $\tilde\pi^{\mu\nu}$ arising from $\delta\tilde{f}$ has only 4 degrees of freedom. The strategy in {\sc vaHydro} is now to solve hydrodynamic equations for {\sc aHydro} \cite{Martinez:2012tu} (which treat $P_T{-}P_L$ nonperturbatively) with added source terms describing the residual viscous flows arising from $\delta\tilde f$, together with IS-like ``perturbative'' equations of motion for $\tilde\Pi,\,\tilde V^\mu$, and $\tilde\pi^{\mu\nu}$. The hydrodynamic equations are obtained by using the decomposition (\ref{eq15}) in the conservation laws (\ref{eq:1}). The evolution equations for the dissipative flows $\tilde \Pi,\, \tilde V^\mu$, and $\tilde\pi^{\mu\nu}$ are derived by generalizing the DNMR\cite{Denicol:2012cn} procedure to an expansion of the distribution function around the spheroidally deformed $f_\mathrm{RS}$ in (\ref{eq8}), using the 14-moment approximation. These equations are lengthy; for massless systems undergoing (2+1)-dimensional expansion with longitudinal boost invariance they were derived by Bazow\,{\it et al.} \cite{Bazow:2013ifa}. 
Generalizations to massive systems and full (3+1)-dimensional expansion are in progress. We give their simplified form for (0+1)-d expansion in the next subsection. Especially at early times $\delta\tilde f$ is much smaller than $\delta f$, since the largest part of $\delta f$ is already accounted for by the momentum deformation in (\ref{eq8}). The inverse Reynolds number $\tilde{\mathrm{R}}_\pi^{-1}=\sqrt{\tilde\pi^{\mu\nu}\tilde\pi_{\mu\nu}}/P_\mathrm{iso}$ associated with the residual shear stress $\tilde\pi^{\mu\nu}$ is therefore strongly reduced compared to the one associated with $\pi^{\mu\nu}$, significantly improving the range of applicability of {\sc vaHydro} relative to standard second-order viscous hydrodynamics. \subsection{Testing different hydrodynamic approximations} For (0+1)-d longitudinally boost-invariant expansion of a transversally homogeneous system, the Boltzmann equation can be solved exactly in the relaxation time approximation (RTA), both for massless \cite{Baym:1984np,Florkowski:2013lza,Florkowski:2013lya} and massive particles\cite{Florkowski:2014sfa,Florkowski:2014bba}. More recently, an exact solution of this equation was also found for massless systems undergoing (1+1)-dimensional expansion,\cite{Denicol:2014xca,Denicol:2014tha} with a boost-invariant longitudinal and azimuthally symmetric transverse velocity profile discovered by Gubser (``Gubser flow'') \cite{Gubser:2010ze,Gubser:2010ui} as the result of imposing a particular conformal symmetry (``Gubser symmetry'') on the flow. These exact solutions of the kinetic theory can be used to test various hydrodynamic approximation schemes, by imposing the symmetry of the exact solution also on the hydrodynamic solution, solving both with identical initial conditions, and comparing the predictions of both approaches for the evolution of macroscopic observables \cite{Florkowski:2013lza,Florkowski:2013lya,Bazow:2013ifa,Denicol:2014xca,Denicol:2014tha,Nopoush:2014qba}.
We will here use the (0+1)-d case to test the {\sc vaHydro} approach \cite{Bazow:2013ifa}. This illustrates the procedure and the kind of conclusions one can draw from such a comparison. For homogeneous initial conditions in $r$ and space-time rapidity $\eta_s$ and vanishing transverse flow, $\tilde\pi^{\mu\nu}$ reduces to a single non-vanishing component $\tilde\pi$: $\tilde\pi^{\mu\nu}=\mathrm{diag}(0,-\tilde\pi/2,-\tilde\pi/2,\tilde\pi)$ at $z=0$. The factorizations $n_\mathrm{RS}(\xi,\Lambda){\,=\,}{\cal R}_0(\xi)\,n_\mathrm{iso}(\Lambda)$, etc., are used to obtain equations of motion for $\dot\xi,\, \dot\Lambda,\,\dot{\tilde\pi}$ \cite{Bazow:2013ifa}: \begin{align} \label{eq:vahydro0+1} &\frac{\dot\xi}{1{+}\xi}-6\frac{\dot\Lambda}{\Lambda}= \frac{2}{\tau}+\frac{2}{\tau_\mathrm{rel}}\left(1-\sqrt{1{+}\xi}\,{\cal R}^{3/4}(\xi)\right)\;, \nonumber\\ &{\cal R}'(\xi)\, \dot\xi + 4 {\cal R}(\xi) \frac{\dot\Lambda}{\Lambda} = - \Bigl({\cal R}(\xi) + {\textstyle\frac{1}{3}} {\cal R}_L(\xi)\Bigr) \frac{1}{\tau} +\frac{\tilde\pi}{\varepsilon_\mathrm{iso}(\Lambda)\tau}, \nonumber\\ &\dot{\tilde\pi}= -\frac{1}{\tau_\mathrm{rel}}\Bigl[\bigl({\cal R}(\xi){\,-\,}{\cal R}_{\rm L}(\xi)\bigr)P_\mathrm{iso}(\Lambda)+\tilde\pi\Bigr] -\lambda(\xi)\frac{\tilde\pi}{\tau} \\\nonumber &\qquad +12\biggl[ \frac{\dot{\Lambda}}{3\Lambda}\Bigl({\cal R}_{\rm L}(\xi){\,-\,}{\cal R}(\xi)\Bigr) +\Bigl(\frac{1{+}\xi}{\tau}-\frac{\dot{\xi}}{2}\Bigr) \Bigl({\cal R}^{zzzz}_{-1}(\xi){\,-\,}\frac{1}{3}{\cal R}^{zz}_{1}(\xi)\Bigr) \biggr] P_\mathrm{iso}(\Lambda). \end{align} $\tau_\mathrm{rel}$ and the ratio of shear viscosity $\eta$ to entropy density $s$, $\eta/s$, are related by $\tau_\mathrm{rel}\EQ5\eta/(sT)\EQ5\eta/(\mathcal{R}^{1/4}(\xi)s\Lambda)$.
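Since the coefficient functions $\lambda(\xi)$, ${\cal R}^{zzzz}_{-1}(\xi)$ and ${\cal R}^{zz}_{1}(\xi)$ are not reproduced here, the sketch below (our own) integrates only the first two equations of (\ref{eq:vahydro0+1}) with $\tilde\pi{\,=\,}0$, i.e. the (0+1)-d {\sc aHydro} limit, and checks them against the exact free-streaming solution $\xi(\tau)=(1{+}\xi_0)(\tau/\tau_0)^2-1$ with constant $\Lambda$, which they must reproduce in the collisionless limit $\tau_\mathrm{rel}\to\infty$:

```python
import numpy as np

def R(xi):   return 0.5*(1.0/(1.0+xi) + np.arctan(np.sqrt(xi))/np.sqrt(xi))
def R_L(xi): return 3.0/xi*((xi+1.0)*R(xi) - 1.0)/(xi+1.0)
def dR(xi):  # analytic derivative R'(xi)
    T = np.arctan(np.sqrt(xi))/np.sqrt(xi)
    return 0.5*(-1.0/(1.0+xi)**2 + (1.0/(1.0+xi) - T)/(2.0*xi))

def rhs(tau, y, tau_rel):
    """First two equations with pi~ = 0, solved for (dxi/dtau, dlnLambda/dtau)."""
    xi = y[0]                 # y = [xi, ln Lambda]; Lambda itself drops out here
    M = np.array([[1.0/(1.0+xi), -6.0],
                  [dR(xi),        4.0*R(xi)]])
    b = np.array([2.0/tau + 2.0/tau_rel*(1.0 - np.sqrt(1.0+xi)*R(xi)**0.75),
                  -(R(xi) + R_L(xi)/3.0)/tau])
    return np.linalg.solve(M, b)

tau0, h, nsteps, tau_rel = 0.25, 1e-4, 7500, 1e12    # tau_rel -> infinity
y = np.array([0.1, 0.0])                             # [xi0, ln Lambda0]
for i in range(nsteps):                              # classical RK4 up to tau = 1.0
    tau = tau0 + i*h
    k1 = rhs(tau, y, tau_rel)
    k2 = rhs(tau + 0.5*h, y + 0.5*h*k1, tau_rel)
    k3 = rhs(tau + 0.5*h, y + 0.5*h*k2, tau_rel)
    k4 = rhs(tau + h, y + h*k3, tau_rel)
    y = y + h/6.0*(k1 + 2.0*k2 + 2.0*k3 + k4)

xi_fs = 1.1*(1.0/0.25)**2 - 1.0                      # free-streaming value at tau = 1
```

The check works because $2(1{+}\xi){\cal R}'(\xi)=-\bigl({\cal R}(\xi)+{\textstyle\frac{1}{3}}{\cal R}_L(\xi)\bigr)$ holds exactly, so constant $\Lambda$ together with $\dot\xi=2(1{+}\xi)/\tau$ solves both equations when the collision terms are switched off.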
The numerical solution of these equations \cite{Bazow:2013ifa} can be compared with the exact solution of the Boltzmann equation \cite{Florkowski:2013lza}, and also with the other hydrodynamic approximation schemes discussed above, plus a 3rd-order viscous hydrodynamic approximation \cite{Jaiswal:2013vta}. \begin{figure}[hbt!] \begin{center} \includegraphics[width=0.8\linewidth]{hydrocomp} \end{center} \caption{(Color online) The particle production measure $(\tau_f n(\tau_f))/(\tau_0 n(\tau_0)) -1$ as a function of $4\pi\eta/s$. The black points, red dashed line, blue dashed-dotted line, green dashed line, and purple dotted line correspond to the exact solution of the Boltzmann equation, {\sc vaHydro}, {\sc aHydro}, third-order viscous hydrodynamics \cite{Jaiswal:2013vta}, and DNMR second-order viscous hydrodynamics \cite{Denicol:2012cn}, respectively. The initial conditions are $T_0\EQ600$\,MeV, $\xi_0\EQ0$, and $\tilde\pi_0\EQ0$ at $\tau_0\EQ0.25$\,fm/$c$. The freeze-out temperature was taken to be $T_f\EQ150$\,MeV. \label{F1} } \end{figure} As an example, we show in Fig.~\ref{F1} the entropy production (measured by the increase in particle number $\tau n(\tau)$) between start and end of the dynamical evolution from an initial temperature of 600\,MeV to a final one of 150\,MeV. For this extreme (0+1)-d scenario, where the difference between longitudinal and transverse expansion rates is maximal, {\sc vaHydro} is seen to reproduce the exact solution almost perfectly, dramatically outperforming all other hydrodynamic approximations. In particular, it should be noted that only the {\sc aHydro} and {\sc vaHydro} approximations are able to correctly reproduce entropy production (or rather the lack thereof) in both the extreme strong coupling (ideal fluid dynamics, $\tau_\mathrm{rel}{\,=\,}\eta/s\EQ0$) and the extreme weak coupling (free-streaming particles, no collisions, $\tau_\mathrm{rel},\,\eta/s\to\infty$) limits of the microscopic dynamics.
{\sc vHydro} schemes based on an expansion around a locally isotropic equilibrium distribution cannot reproduce the constraint that entropy production should vanish as collisions cease; these schemes break down for large $\eta/s$ values. Similar comparisons have been done for the massive (0+1)-dimensional case \cite{Florkowski:2014sfa,Florkowski:2014bba,BHM} and for the (1+1)-dimensional Gubser flow. In all cases one finds the following hierarchy of hydrodynamic approximations, when listed in order of improving accuracy in their descriptions of the moments of the exactly known microscopic dynamics: first-order viscous hydrodynamics (Navier-Stokes theory), second-order Israel-Stewart theory, second-order DNMR theory, third-order viscous hydrodynamics \`a la Jaiswal, {\sc aHydro}, and {\sc vaHydro}. In view of the increasing sophistication of these approximation schemes, as discussed in the preceding subsections, this ordering is not surprising, and some variant of {\sc vaHydro} is likely to become the standard hydrodynamic modelling framework in the future. At the moment, however, only {\sc vHydro} and {\sc aHydro} have been implemented numerically for (2+1)-d and (3+1)-d expansion in codes that do not rely on simplifying assumptions such as longitudinal boost-invariance and azimuthal symmetry. The fireballs created in heavy-ion collisions are not azimuthally symmetric, and experiments tell us that they feature characteristic anisotropic flow patterns that could never arise from an azimuthally symmetric initial condition. Longitudinal boost-invariance is not a good approximation either for particles emitted at large forward and backward rapidities, and it becomes worse when going to lower energies. Therefore, much effort is presently being expended on developing (2+1)-d and (3+1)-d implementations of the {\sc aHydro} and {\sc vaHydro} schemes.
\section{Numerical Implementation of Hydrodynamics} \subsection{Need for $\tau$ and $\eta$} \label{sec:taueta} Hydrodynamic simulations of ultra-relativistic heavy-ion collisions are best implemented in the hyperbolic coordinate system\cite{Bjorken:1982qr} (also known as the Milne coordinate system) where, instead of $t$ (the laboratory time) and $z$ (the beam direction), one uses the longitudinal proper time \begin{eqnarray} \tau = \sqrt{t^2 - z^2} \end{eqnarray} and the space-time rapidity \begin{eqnarray} \eta = \tanh^{-1}(z/t). \end{eqnarray} Equivalently, $t = \tau\cosh\eta$ and $z = \tau\sinh\eta$. \begin{figure}[th] \centerline{\includegraphics[width=0.4\textwidth]{why_tau}} \caption{A schematic diagram showing the evolution of fireballs with differing $v_z$ after the collision of two heavy ions at $t = 0$ and $z = 0$.} \label{fig:why_tau} \end{figure} One reason this is useful is that, as shown in Fig.~\ref{fig:why_tau}, the evolution of an ultra-relativistic heavy-ion collision can occur only in the forward light cone, whose boundary corresponds to $\tau = 0$. More physically, suppose two identical systems were created at $t = 0$ and $z = 0$ by the initial collision of the two nuclei. Further suppose that one of the two has the collective velocity $v$ in the $z$ direction, but the other one is at rest. In this case, due to time dilation, the same stage of the evolution will be reached at lab time $t_d$ for the system at rest and at $t_d/\sqrt{1 - v^2}$ for the moving system. In relativistic systems, these two Minkowski times can be very different even though the two systems are at the same stage of their respective evolution. However, in terms of the proper time both are at the same $\tau = t_d$, since $\tau$ is nothing but the local rest frame time. Hence, it is very natural to use the $\tau-\eta$ coordinate system when there is a strong longitudinal flow of matter.
In the ultra-relativistic heavy ion collisions, this is the case due to the original beam momenta of the projectile and the target. For numerical implementation of hydrodynamics, one first needs to formulate the conservation laws in this coordinate system. For this, we need to know the transformation law between $\tau-\eta$ and $t-z$. We can start with the derivatives \begin{eqnarray} \partial_\tau & = & {\partial t\over \partial\tau}\partial_t + {\partial z\over \partial\tau}\partial_z \nonumber\\ & = & \cosh\eta \partial_t + \sinh\eta \partial_z \end{eqnarray} and \begin{eqnarray} \partial_{\eta} & = & {\partial t\over \partial\eta}\partial_t + {\partial z\over \partial\eta}\partial_z \nonumber\\ & = & \tau\sinh\eta \partial_t + \tau\cosh\eta \partial_z \end{eqnarray} which can be summarized as Lorentz transformations \begin{eqnarray} \tilde{\partial}_a = \Lambda_a^\mu \partial_\mu \ \ \hbox{and}\ \ \partial_\mu = \Lambda^a_\mu \tilde{\partial}_a \label{eq:L_transform} \end{eqnarray} where $\tilde{\partial}_a = (\partial_\tau, \nabla_\perp, (1/\tau)\partial_\eta)$ and \begin{eqnarray} \Lambda_\mu^a = \bmat{cccc} \cosh\eta & 0 & 0 & -\sinh\eta\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ -\sinh\eta & 0 & 0 & \cosh\eta \end{array}\right) \label{eq:Lambda_mua} \end{eqnarray} The inverse transform $\Lambda^\mu_a$ is obtained by substituting $\eta$ with $-\eta$. From now on, we will use the first letters of the roman alphabet ($a, b, \cdots$) to represent the components in the Milne space as above. 
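A quick numerical sanity check of the matrix (\ref{eq:Lambda_mua}) (our own sketch; all values are illustrative) verifies that it is a boost with rapidity $\eta$, that its inverse is indeed obtained by $\eta\to-\eta$, and that it maps the Bjorken flow profile $u^\mu=(\cosh\eta,0,0,\sinh\eta)$ onto the Milne-frame rest vector $(1,0,0,0)$; the coordinate relations $t=\tau\cosh\eta$, $z=\tau\sinh\eta$ are also round-tripped:

```python
import numpy as np

def Lam(eta):
    """Boost matrix Lambda^a_mu relating Minkowski and Milne components."""
    ch, sh = np.cosh(eta), np.sinh(eta)
    return np.array([[ ch, 0.0, 0.0, -sh],
                     [0.0, 1.0, 0.0, 0.0],
                     [0.0, 0.0, 1.0, 0.0],
                     [-sh, 0.0, 0.0,  ch]])

eta = 0.7                                                # arbitrary space-time rapidity
inv_ok = np.allclose(Lam(eta) @ Lam(-eta), np.eye(4))    # inverse = (eta -> -eta)

u = np.array([np.cosh(eta), 0.0, 0.0, np.sinh(eta)])     # Bjorken flow u^mu
u_tilde = Lam(eta) @ u                                   # = (1, 0, 0, 0) in Milne frame

# coordinate round trip: t = tau cosh(eta), z = tau sinh(eta)
tau = 2.0
t, z = tau*np.cosh(eta), tau*np.sinh(eta)
tau_back, eta_back = np.sqrt(t*t - z*z), np.arctanh(z/t)
```

That the Bjorken flow becomes the rest vector in Milne components is precisely why these coordinates are natural for strong longitudinal flow.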
To figure out the transformation of the conservation law, we apply Eq.(\ref{eq:L_transform}) to \begin{eqnarray} \partial_t j^0 + \partial_z j^3 & = & \cosh\eta \partial_\tau j^0 - {\sinh\eta\over \tau}\partial_{\eta} j^0 + {\cosh\eta\over\tau} \partial_{\eta} j^3 - {\sinh\eta}\partial_{\tau} j^3 \nonumber\\ & = & {1\over \tau}\partial_\tau(\tau \cosh\eta j^0 - \tau \sinh\eta j^3) - \partial_\eta( {1\over \tau}\sinh\eta j^0 - {1\over \tau}\cosh\eta j^3) \nonumber \end{eqnarray} Defining \begin{eqnarray} q^\tau &=& \cosh\eta\, q^0 - \sinh\eta\, q^3 \nonumber\\ {\tilde{q}}^\eta &=& \cosh\eta\, q^3 - \sinh\eta\, q^0 \end{eqnarray} and \begin{eqnarray} {\tilde{q}}^a = (q^\tau, {\bf q}_\perp, {\tilde{q}}^\eta), \end{eqnarray} for any 4-vector $q^\mu$, the conservation law becomes \begin{eqnarray} 0 & = & \partial_\tau (\tau j^\tau) + \nabla_\perp{\cdot}(\tau{\bf j}_\perp) + \partial_\eta {\tilde{j}}^\eta = \tilde\partial_a (\tau {\tilde{j}}^a) \label{eq:consJ} \end{eqnarray} where ${\bf j}_\perp$ only has the $x,y$ components. One can also show \begin{eqnarray} D_\tau & = & u^\mu \partial_\mu = {\tilde{u}}^a \tilde\partial_a \end{eqnarray} Note that our definition of the $\eta$ component is different from the curvilinear definition of the $\eta$ component by a factor of $\tau$. We do this to keep the dimension of the $\eta$ component the same as that of the other components. If one uses the curvilinear definition \begin{eqnarray} j^\eta = {1\over \tau}{\tilde{j}}^\eta \end{eqnarray} then the conservation law becomes \begin{eqnarray} 0 & = & \partial_\tau (\tau j^\tau) + \nabla_\perp{\cdot}(\tau{\bf j}_\perp) + \partial_\eta (\tau j^\eta) = \partial_a (\tau j^a) \end{eqnarray} where $\partial_a = (\partial_\tau, \nabla_\perp, \partial_\eta)$. For energy-momentum conservation, one needs to apply the transformation law three times, once for the derivative and once for each index of $T^{\mu\nu}$ in $\partial_\mu T^{\mu\nu} = 0$. The algebra is a bit tedious but straightforward\cite{Kolb:2000sd,Schenke:2010nt}.
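The chain-rule algebra above is easy to verify symbolically (our own sketch, using the computer-algebra package {\tt sympy}): for arbitrary functions $j^0(\tau,\eta)$ and $j^3(\tau,\eta)$, the combination $\partial_t j^0+\partial_z j^3$ equals $\frac{1}{\tau}\bigl[\partial_\tau(\tau j^\tau)+\partial_\eta\tilde{j}^\eta\bigr]$ identically:

```python
import sympy as sp

tau, eta = sp.symbols('tau eta', positive=True)
j0 = sp.Function('j0')(tau, eta)   # j^0 viewed as a function of Milne coordinates
j3 = sp.Function('j3')(tau, eta)

# chain rule: d_t = cosh(eta) d_tau - (sinh(eta)/tau) d_eta,
#             d_z = -sinh(eta) d_tau + (cosh(eta)/tau) d_eta
def d_t(f): return sp.cosh(eta)*sp.diff(f, tau) - sp.sinh(eta)/tau*sp.diff(f, eta)
def d_z(f): return -sp.sinh(eta)*sp.diff(f, tau) + sp.cosh(eta)/tau*sp.diff(f, eta)

jtau = sp.cosh(eta)*j0 - sp.sinh(eta)*j3    # j^tau
jeta = sp.cosh(eta)*j3 - sp.sinh(eta)*j0    # jtilde^eta

lhs = d_t(j0) + d_z(j3)
rhs = (sp.diff(tau*jtau, tau) + sp.diff(jeta, eta))/tau
residual = sp.simplify(lhs - rhs)           # vanishes identically
```

The extra $j^\tau/\tau$ term from $\partial_\tau(\tau j^\tau)$ cancels exactly against the $\eta$-derivatives of the hyperbolic prefactors, which is the content of the second line of the derivation above.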
The result is \begin{eqnarray} \partial_\tau {\cal T}^{\tau\tau} + {1\over \tau}\partial_\eta {\cal T}^{\eta\tau} + \partial_v {\cal T}^{v\tau} = - {1\over \tau} {\cal T}^{\eta\eta} \label{eq:calTtautau} \end{eqnarray} where we defined \begin{eqnarray} {\cal T}^{ab} = \tau \Lambda^a_\mu \Lambda^b_\nu\, T^{\mu\nu} \end{eqnarray} and the index $v = x, y$. For the $\tau\eta$ component, \begin{eqnarray} \partial_\tau {\cal T}^{\tau\eta} + {1\over \tau} \partial_\eta {\cal T}^{\eta\eta} + \partial_v {\cal T}^{v\eta} = - {1\over \tau} {\cal T}^{\eta\tau} \label{eq:calTtaueta} \end{eqnarray} Both ${\cal T}^{\tau\tau}$ and ${\cal T}^{\tau\eta}$ conservation laws contain the geometrical source term. Transverse momentum conservation is simpler: \begin{eqnarray} \partial_\tau {\cal T}^{\tau v} + {1\over \tau}\partial_\eta {\cal T}^{\eta v} + \partial_w {\cal T}^{w v} & = & 0 \end{eqnarray} where $v$ and $w$ are transverse coordinate indices. When testing one's code for the energy-momentum conservation, the following form may be more convenient \begin{eqnarray} {d\over d\tau} \int dx dy d\eta\, {\tilde{T}}^{\tau\mu} = 0 \end{eqnarray} The half-transformed ${\tilde{T}}^{a\mu} = \tau \Lambda^a_\nu T^{\nu\mu}$ satisfies the conservation law $ \partial_\tau {\tilde{T}}^{\tau \mu} + \tilde\partial_\eta {\tilde{T}}^{\eta \mu} + \partial_v {\tilde{T}}^{v \mu} = 0$ without the geometrical source terms exactly like Eq.(\ref{eq:consJ}). For the shear and the bulk evolution equations in Eq.(\ref{eq:DNMR}) or (\ref{eq:IS}), transformation from the Minkowski space to the $\tau-\eta$ coordinate space is straightforward, but it is a lot more involved algebraically. For more details see Refs.\cite{Muronga:2003ta,Heinz:2005bw,Schenke:2010rr}. 
\subsection{Numerical solution of conservation equations} \label{sec:numerical} In this section, we first discuss the conservation laws in Minkowski coordinates, where they take the form \begin{eqnarray} \partial_\mu T^{\mu\nu} = 0,\ \ \partial_\mu J^\mu_B = 0 \label{eq:vhydro_eq} \end{eqnarray} In the Milne coordinate system, the energy-momentum conservation takes a slightly different form \begin{eqnarray} \tilde\partial_a {\cal T}^{ab} = {\cal S}^b, \end{eqnarray} where ${\cal S}^b$ is the geometric source term in Eqs.(\ref{eq:calTtautau}) and (\ref{eq:calTtaueta}). However, as will be shown shortly, the methods illustrated below can be easily adapted to this case. For {\sc vHydro}, the energy-momentum tensor is given in a general reference frame by the decomposition \begin{eqnarray} \label{eq:T} T^{\mu\nu} = T_{\rm id}^{\mu\nu} + \pi^{\mu\nu} - \Pi \Delta^{\mu\nu} \end{eqnarray} where \begin{eqnarray} T_{\rm id}^{\mu\nu} = \varepsilon u^\mu u^\nu - P(\varepsilon, \rho_B) \Delta^{\mu\nu} \label{eq:Tmunu_id} \end{eqnarray} is the ideal fluid part of the tensor. The net baryon current has the form $J^\mu_B = \rho_B u^\mu + V_B^\mu$. The equations that need to be solved are given in Eq.(\ref{eq:vhydro_eq}), together with the relaxation equations for the dissipative flows, for example the Israel-Stewart equations (\ref{eq:IS}). The first step in solving the hydrodynamic equations is their initialization. Let us assume that some microscopic pre-equilibrium dynamical theory provides us with a baryon current $J^\mu_B(x)$ and energy-momentum tensor $T^{\mu\nu}(x)$ for points $x^\mu$ on some Cauchy surface on which we want to initialize the hydrodynamic evolution stage. The following projection steps, to be taken at each point $x$ on that surface, yield the required initial value fields:% \footnote{% The following paragraph refers to the {\sc vHydro} decomposition (\ref{eq:JB1},\ref{eq:T}).
A slightly modified projection method applies for the {\sc aHydro} and {\sc vaHydro} decompositions (\ref{eq9}), (\ref{eq10}) and (\ref{eq15}) \cite{Bazow:2013ifa}. } First we define the local fluid rest frame by solving the eigenvalue equation $T^{\mu}_{\ \nu} u^\nu = \varepsilon u^\mu$ for its normalized timelike eigenvector $u^\mu$. The associated eigenvalue gives us the LRF energy density $\varepsilon$. The LRF baryon density is obtained from $J^\mu_B$ by projecting onto $u_\mu$: $\rho_B{\,=\,}{u}_\mu J_B^\mu$. The initial heat flow $V_B^\mu$ is the component of $J_B^\mu$ perpendicular to $u^\mu$: $V_B^\mu{\,=\,}\Delta^{\mu\nu}J_{B,\nu}$. Now that we know the LRF energy and baryon densities we can compute the LRF pressure $P$ from the equation of state of the fluid, $P(\varepsilon,\rho_B)$. Next, the bulk viscous pressure is obtained from $\Pi{\,=\,}-\frac{1}{3}\Delta_{\mu\nu}T^{\mu\nu}-P$. Finally, the shear stress is obtained as $\pi^{\mu\nu}{\,=\,}{T}^{\mu\nu}-\varepsilon u^\mu u^\nu + (P{+}\Pi)\Delta^{\mu\nu}$ or, equivalently, as $\pi^{\mu\nu}{\,=\,}{T}^{\langle\mu\nu\rangle}{\,\equiv\,}\Delta^{\mu\nu}_{\alpha\beta} T^{\alpha\beta}$. For a simple illustration of numerical methods that can solve the hydrodynamic equations, let us first consider a single conservation law in 1-D. There is no difficult conceptual obstacle in extending this case to the multi-component, multi-dimensional cases such as the Israel-Stewart equations. The conservation equation is \begin{eqnarray} \partial_t u = -\partial_x j \end{eqnarray} We need to supplement this equation with a relationship between the density $u$ and the current $j$. The simplest example is $j = vu$ with a constant speed of propagation $v$. But in general $j$ is a non-linear function(al) of $u$. For instance, the ideal part, Eq.(\ref{eq:Tmunu_id}), is certainly not in this simple form due to the normalization condition $u^0 = \sqrt{1 + {\bf u}^2}$ and also to the presence of the pressure term.
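The projection (Landau matching) steps described earlier in this subsection are easy to prototype. The sketch below is ours and purely illustrative; it assumes a baryon-free conformal equation of state $P=\varepsilon/3$ and the metric convention $g={\rm diag}(1,-1,-1,-1)$, extracts $\varepsilon$, $u^\mu$, $\Pi$ and $\pi^{\mu\nu}$ from a given $T^{\mu\nu}$, and is tested on an ideal-fluid tensor:

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])

def landau_match(T, P_eos):
    """Landau matching: extract (eps, u, Pi, pi) from T^{mu nu} as in the text.
    P_eos(eps) is the equation of state (baryon-free case, for brevity)."""
    evals, evecs = np.linalg.eig(T @ g)          # T^mu_nu = T^{mu lam} g_{lam nu}
    for lam, vec in zip(evals, evecs.T):
        vec = np.real(vec)
        norm2 = vec @ g @ vec
        if norm2 > 0.0:                          # timelike eigenvector -> fluid velocity
            u = vec/np.sqrt(norm2)
            if u[0] < 0.0:
                u = -u
            eps = float(np.real(lam))            # LRF energy density
            break
    Delta = g - np.outer(u, u)                   # projector orthogonal to u^mu
    P = P_eos(eps)
    Pi = -np.trace(Delta @ g @ T @ g)/3.0 - P    # bulk viscous pressure
    pi = T - eps*np.outer(u, u) + (P + Pi)*Delta # shear stress tensor
    return eps, u, Pi, pi

# test on an ideal-fluid tensor with known energy density and flow velocity
eps0, vx = 10.0, 0.6
gam = 1.0/np.sqrt(1.0 - vx*vx)
u0 = np.array([gam, gam*vx, 0.0, 0.0])
T_ideal = eps0*np.outer(u0, u0) - (eps0/3.0)*(g - np.outer(u0, u0))
eps, u, Pi, pi = landau_match(T_ideal, lambda e: e/3.0)
```

For the ideal input, the matching recovers $\varepsilon$ and $u^\mu$ and returns vanishing $\Pi$ and $\pi^{\mu\nu}$, as it must.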
In the dissipative cases, the relaxation equation \begin{eqnarray} (\partial_t + 1/\tau_R)j = -(D/\tau_R)\partial_x u \end{eqnarray} determines the relationship between $j$ and $u$. In such cases, the numerical methods discussed in this section need to be applied in two steps. In the first step, the conservation laws are used to advance the time component of the currents using the methods that will be discussed here. In the second step, the spatial part of the currents needs to be reconstructed from the time components. The relaxation equations also need to be solved separately, although the techniques discussed here can be easily adapted to handle the relaxation equation as well. For the simple $j=vu$ case with a constant $v$, the equation becomes \begin{eqnarray} \partial_t u = -v\partial_x u \end{eqnarray} This is an advection equation and has a simple solution \begin{eqnarray} u(t,x) = f(x-vt) \label{eq:sol_advec} \end{eqnarray} That is, at any given time, the solution is just the translation of the initial profile by $vt$. Analytically, this is trivial. However, it is remarkable how difficult it can be to maintain the initial profile in numerical solutions. We will often use this as the simplest test case for our algorithms. To solve the conservation equation numerically, one first needs to discretize time and space. We define \begin{eqnarray} u^{n}_{i} = u(t_n, x_i) \end{eqnarray} where $t_n = t_0 + n h$ and $x_i = x_0 + ia$ for any function $u(t,x)$ of time and space. Here $h$ is the time step size and $a$ is the spatial cell size. Physically, it is important for the discretization method to have the following properties. First, the method should conserve total $u$ explicitly, that is, $\sum_i u^n_i = \sum_i u^{n+1}_i$ modulo the boundary terms.
For this, one requires the discretized form of the divergence to take the form \begin{eqnarray} (\partial_x j)^n_i \to {j_i(u^n) - j_{i-1}(u^n)\over a} \label{eq:discrete_j} \end{eqnarray} where $j_i(u^n)$ is the discretized representation of the current $j$ at $x_i$ and $t_n$. One can easily see that in the sum $\sum_{i=0}^N (\partial_x j)_i^n$ only the boundary terms would survive. The details of the boundary terms depend on the method. However, as long as the boundaries are far away from the physical region, the boundary terms should be vanishingly small. If the boundary of the space is not too far away from the physical region, then some suitable discrete boundary conditions should be imposed. The second requirement is simple, yet quite demanding: If $u^0_{i} \ge 0$ for all $i$, then we would like this property to be maintained for any future time. For instance, if $u$ represents the energy density, then it should never become negative. To illustrate some of these issues, consider again the advection equation $\partial_t u = -v\partial_x u$ with $v > 0$. Let the initial condition be a rectangle: $u^0_i = u_c$ for $b \le i \le f$ and $u_i^0 = 0$ otherwise. This is a prototype of many situations where two smoothly varying regions are joined by a stiff gradient. The simplest discretization method of $\partial_t u = -v\partial_x u$ is the forward-time centered-space (FTCS) method \begin{eqnarray} u_i^{n+1} = u_i^{n} -{vh\over 2a}(u^{n}_{i+1} - u^{n}_{i-1}) \label{eq:ftcs1} \end{eqnarray} which is correct up to $O(h^2)$ and $O(a^2)$ errors. Since the second term on the right hand side is supposed to be a correction, we require $ |vh/a| < 1$. This is certainly in the form of Eq.(\ref{eq:discrete_j}) and hence conserves the total $u$. According to the analytic solution Eq.(\ref{eq:sol_advec}), the space behind the back edge ($x_b$) of the rectangle should always have $u^n = 0$.
However, according to Eq.(\ref{eq:ftcs1}) the cell right behind the back edge becomes non-zero and negative after the first time step \begin{eqnarray} u_{b-1}^1 = -{vh\over 2a}u_c \end{eqnarray} At the same time, the front edge starts to get distorted by the same amount \begin{eqnarray} u_{f}^{1} = u_c - u_{b-1}^1 > u_c \end{eqnarray} At the next time step, $u_{b-1}^2$ becomes even more negative \begin{eqnarray} u_{b-1}^{2} = -{vh\over a}u_c + \left({vh\over 2a}\right)^2 u_c \end{eqnarray} while $u_f$ deviates even more from $u_c$ \begin{eqnarray} u_f^{2} = u_c - u_{b-1}^2 \end{eqnarray} Furthermore at $t_2$, the next cell in the empty region becomes non-zero $ u_{b-2}^2 = \left({vh\over 2a}\right)^2 u_c $. Clearly, both the profile preservation and the positivity of the solution are grossly violated by this method even though the total $u$ is conserved. In addition, one can easily see that the growth at $x_{b-1}$ will continue, indicating that this method is unconditionally unstable. In order to cure the negativity problem, one may note that the trouble above mainly comes from the centered nature of the $O(a^2)$ numerical derivative in Eq.(\ref{eq:ftcs1}). Instead, one may try the first order approximation\footnote{ This is in fact trivially exact when $a = vh$ since in that case $u_i^{n+1} = u_{i-1}^n$. That is, the whole profile moves by one spatial cell at each time step. However, this wouldn't work for more general currents.} \begin{eqnarray} u_i^{n+1} = u_i^{n} -{vh\over a}(u^{n}_{i} - u^{n}_{i-1}) \label{eq:upwind} \end{eqnarray} In this ``up-wind scheme'', $u_{b-1}^1$ trivially vanishes at $t_1$. In fact, all $u_{b-i}$ for $i \ge 1$ will remain zero for all times. Similarly, one can show that the numerical solution is positive and bounded everywhere as long as $|vh/a| < 1$. If $v<0$, mirror-image conclusions can be reached if one uses $(\partial_x u)_i \approx (u_{i+1}-u_i)/a$. Positivity is thus maintained in this up-wind scheme.
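The failure of the FTCS scheme and its cure by the up-wind scheme can be demonstrated in a few lines. In the sketch below (ours; periodic boundaries are used purely for convenience), a single FTCS step on the rectangle produces exactly $u^1_{b-1}=-(vh/2a)u_c$, while the up-wind step keeps the solution non-negative:

```python
import numpy as np

N, a, v, h = 100, 1.0, 1.0, 0.4          # grid size, cell size, speed, time step (vh/a < 1)
u0 = np.zeros(N)
b, f = 40, 60                            # back and front edges of the rectangle
u0[b:f+1] = 1.0                          # u_c = 1

def ftcs_step(u):
    # forward-time centered-space method, Eq. (ftcs1); periodic boundaries
    return u - (v*h/(2*a))*(np.roll(u, -1) - np.roll(u, 1))

def upwind_step(u):
    # first-order up-wind scheme, Eq. (upwind), valid for v > 0
    return u - (v*h/a)*(u - np.roll(u, 1))

u_ftcs = ftcs_step(u0)                   # develops a negative cell behind the back edge
u_up   = upwind_step(u0)                 # stays non-negative everywhere
```

One step suffices to expose the difference; iterating the FTCS map makes the oscillations grow without bound, as argued in the text.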
However, as the system evolves in time, the shape of the solution gets more and more distorted. This is because the first order difference \begin{eqnarray} (u_i^n - u^n_{i-1})/a = \partial_x u(t_n, x_i) - {a\over 2}\partial_x^2 u(t_n,x_i) + O(a^2) \label{eq:approx_ux} \end{eqnarray} is too crude an approximation of the first order derivative. The second derivative term in Eq.(\ref{eq:approx_ux}) in fact introduces too much artificial (numerical) damping to preserve the shape for long. In effect, the profile at $t$ is the convolution of the initial profile and the Gaussian Green function of the diffusion equation (Eq.(\ref{eq:G_diff})) with the diffusion constant given by $D = {va\over 2}\left(1 - {vh\over a}\right)$. As one can see in Eq.(\ref{eq:G_diff}), the variance of the Gaussian grows linearly with time. Hence, the initial profile will be smeared out quickly. \begin{figure}[th] \centerline{ \includegraphics[width=0.7\textwidth]{histo} } \caption{ Spatial grid used in deriving finite volume methods. The solid line is the lowest order histogram function approximation of $u(t,x)$ in the $x_i$-centered grid. The dot-dash line is the projection of the $x_i$-centered grid onto the $x_{i+1/2}$-centered grid. } \label{fig:histo} \end{figure} Better discretization methods must keep the positivity preserving nature of the up-wind method and at the same time must have a better approximation for the spatial derivative than the simple first order difference for shape preservation. To devise better discretization methods more systematically, consider first dividing the space into $N+1$ cells of size $a$ labelled by integers $0$ through $N$. The $i$-th cell starts at $x_{i-1/2} = x_i-a/2$ and ends at $x_{i+1/2} = x_i+a/2$. (See Fig.~\ref{fig:histo}.)
Averaging over one spatial cell and integrating over one time step, the conservation law $\partial_t u = -\partial_x j$ becomes \begin{eqnarray} {\bar{u}}_{i}^{n+1} &=& {\bar{u}}_{i}^n -{1\over a}\int_{t_n}^{t_{n+1}}dt\, \Big( j(t,x_{i+1/2}) - j(t,x_{i-1/2}) \Big) \label{eq:exact1} \end{eqnarray} where ${\bar{u}}_{i}^n = {\bar{u}}_i(t_n)$ is the cell average. This is an exact expression which can be used as the basis for further approximation. Methods of this type are known as finite volume methods. Approximating the time integral using the midpoint rule, we get \begin{eqnarray} {\bar{u}}_{i}^{n+1} &=& {\bar{u}}_{i}^n -{h\over a} \Big( j(t_{n+1/2},x_{i+1/2}) - j(t_{n+1/2},x_{i-1/2}) \Big) + O(h^3) \label{eq:mid1} \end{eqnarray} where $t_{n+1/2} = t_n+h/2$. There are a few things that should be mentioned here. First, the basic quantities to calculate are the cell-averaged values ${\bar{u}}_i^n$. Second, we need to approximate the function $u(t,x)$ itself from ${\bar{u}}_i^n$ because the right hand side contains $u(t,x)$ evaluated at $t_{n+1/2}$ and $x_{i\pm 1/2}$. An approximate form of $u(t_n,x)$ is also needed for obtaining ${\bar{u}}_i^{n+1}$ for the next time step (see below). Therefore, how we approximate $u(t,x)$ and evaluate $j^{n+1}_{i\pm 1/2}$ determines the particular numerical scheme and its accuracy. In many schemes, the values at spatial half points are not unique because the interpolating function approximating $u(t,x)$ is usually only piece-wise continuous. For instance, see Fig.~\ref{fig:histo} and Fig.~\ref{fig:KTScheme}. One way to deal with the ambiguity in evaluating $j(t_{n+1/2}, x_{i\pm 1/2})$ is to just avoid evaluating it at the boundaries. The staggered nature of the space and time indices in Eq.(\ref{eq:mid1}) suggests an obvious way to do so. Suppose that the initial data is given for ${\bar{u}}_{i}^n$ where the $i$-th grid is centered at $x_i$ and has the spatial interval $[x_{i-1/2}, x_{i+1/2}]$.
For the next time step, instead of updating the values of ${\bar{u}}_i$ within the intervals $[x_{i-1/2}, x_{i+1/2}]$, we update the values of ${\bar{u}}_{i+1/2}$ within the shifted intervals $[x_i, x_{i+1}]$. Using the lowest order approximation for $u(t,x)$ (the histogram functions in Fig.~\ref{fig:histo}), this yields \begin{eqnarray} {\bar{u}}_{i+1/2}^{n+1} = {{\bar{u}}_{i+1}^n + {\bar{u}}_{i}^n\over 2} - {h\over a}\left( j(u^{n+1/2}_{i+1}) - j(u^{n+1/2}_{i}) \right) \label{eq:NT1} \end{eqnarray} where ${({\bar{u}}_{i}^n+{\bar{u}}_{i+1}^n)/2}$ is the average value of $u(t_n,x)$ in the shifted interval $[x_{i}, x_{i+1}]$ using the histogram function in Fig.~\ref{fig:histo}. For the values of $u_{i+1}^{n+1/2}$ at $t_{n+1/2}$, the forward Euler method \begin{eqnarray} u^{n+1/2}_{i} = {\bar{u}}^{n}_{i} - {h\over 2} (\partial_x j)^{n}_i + O(h^2) \label{eq:half_step} \end{eqnarray} is enough since the overall error in the midpoint rule is $O(h^3)$. In this way, we avoid evaluating the current at the cell boundaries. For the next time step, we shift back to the original grid. This staggered method is a slightly generalized form of the Lax-Friedrichs scheme, which is second order accurate in space and third order accurate in time. The original Lax-Friedrichs scheme is only second order in time because the currents are evaluated at $t_n$ instead of at $t_{n+1/2}$. Eq.(\ref{eq:half_step}) involves evaluating the numerical derivative of the current $j$ at $x_i$. This cannot be exactly determined when all one has is data on the average values ${\bar{u}}_i^n$. For instance, $(j_x)^n_i$ could be the first order approximations $(j^n_{i+1} - j^n_{i})/a$ or $(j^n_{i} - j^n_{i-1})/a$, or the second order approximation $(j^n_{i+1} - j^n_{i-1})/2a$, or any other approximate form. In normal situations, choosing a higher order formula should be better than the first order ones. But this is not always the case when the gradient is stiff.
We have already shown that using the central difference $(j^n_{i+1} - j^n_{i-1})/2a$ in the forward Euler method (the forward-time-centered-space method in Eq.(\ref{eq:ftcs1})) can be disastrous when the gradient is stiff, although it can be safely (and preferably) used in smooth regions. When the gradient is stiff, one should instead use the up-wind method Eq.(\ref{eq:upwind}) to maintain the positivity. Therefore, an intelligent scheme would choose the derivative according to some approximate measure of the true gradient. One way to do this is to choose the gradient according to the following scheme \begin{eqnarray} (\partial_x u)_{i}^n = \left\{ \begin{array}{l} 0 \hbox{ if ${\bar{u}}^n_i < {\bar{u}}^n_{i\pm 1}$ or ${\bar{u}}^n_i > {\bar{u}}^n_{i\pm 1}$} \\ \hbox{else } {\rm sign}({\bar{u}}^n_{i+1} - {\bar{u}}^n_i)\, {\rm min}(\theta {\left| {\bar{u}}^n_{i+1}-{\bar{u}}^n_i\right|\over a}, {\left| {\bar{u}}^n_{i+1}-{\bar{u}}^n_{i-1} \right|\over 2a}, \theta {\left| {\bar{u}}^n_i - {\bar{u}}^n_{i-1} \right| \over a}) \end{array} \right. \end{eqnarray} The first line indicates that the function has either a maximum or a minimum within the interval. Therefore the slope at $x_i$ can be best approximated by 0. The second line applies when the function is changing monotonically near $x_i$. The parameter $1 \le \theta < 2$ is there to be slightly more general. This choice of the derivative is called the ``generalized minmod flux limiter''. The Lax-Friedrichs scheme represented by Eq.(\ref{eq:NT1}) is only $O(a^2)$ accurate because we have used the histogram function as an approximation for $u(t_n, x)$. Needless to say, this is the lowest order approximation. As a result, the Lax-Friedrichs scheme contains too much numerical diffusion to be practically useful.
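The generalized minmod prescription above admits a direct transcription (a sketch of ours; the function name and the default $\theta=1.5$ are our choices):

```python
def minmod_slope(um, u0, up, a=1.0, theta=1.5):
    """Generalized minmod flux limiter for the slope at x_i, given the
    neighbor cell averages um = u_{i-1}, u0 = u_i, up = u_{i+1}."""
    dl = (u0 - um)/a                     # left one-sided difference
    dr = (up - u0)/a                     # right one-sided difference
    dc = (up - um)/(2*a)                 # central difference
    if dl*dr <= 0.0:                     # local extremum (or flat side): slope -> 0
        return 0.0
    s = 1.0 if dr > 0.0 else -1.0
    return s*min(theta*abs(dl), abs(dc), theta*abs(dr))
```

At a local extremum the slope is zeroed, at the foot of a jump it is limited to the smaller one-sided difference, and in smooth monotone regions the central difference is recovered.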
Both of these facts can be easily seen from the Taylor expansion \begin{eqnarray} {{\bar{u}}_{i}^n+{\bar{u}}_{i+1}^n\over 2} = {\bar{u}}_{i+1/2}^n + {a^2\over 8} \partial_x^2 u_{i+1/2}^n + O(a^4) \label{eq:Taylor} \end{eqnarray} where the second derivative term is the $O(a^2)$ error term that also causes strong diffusion. In time, this diffusion distorts the solution too much just as in the first order up-wind method. \begin{figure}[th] \begin{center} \includegraphics[width=0.8\textwidth]{kt} \end{center} \caption{A schematic view of the cell division used in the Kurganov-Tadmor scheme.} \label{fig:KTScheme} \end{figure} To obtain a better approximation, one needs to evaluate $u(t,x)$ more accurately using the average values from the nearest neighbor cells. If one uses the linear interpolant (see Fig.~\ref{fig:KTScheme}) \begin{eqnarray} {\hat{u}}^n_i(x) = {\bar{u}}^n_i + (\partial_x u)^n_i (x-x_i) \end{eqnarray} for each interval $[x_{i-1/2}, x_{i+1/2}]$, then the scheme should become at least $O(a^3)$. Using this to improve the estimate of ${\bar{u}}_{i+1/2}^n$ adds a correction term to the lowest order result Eq.(\ref{eq:NT1}) \begin{eqnarray} {\bar{u}}_{i+1/2}^{n+1} & = & {{\bar{u}}_{i}^n+{\bar{u}}_{i+1}^n\over 2} -{h\over a} \Big( j(t_{n+1/2},x_{i+1}) - j(t_{n+1/2},x_{i}) \Big) \nonumber\\ & & {} - {a\over 8}\left((\partial_x u)^n_{i+1} - (\partial_x u)^n_{i}\right) \label{eq:NT2} \end{eqnarray} This staggered approach is the second order NT (Nessyahu-Tadmor) scheme and works reasonably well \cite{NT90}. The last term in the right hand side of Eq.(\ref{eq:NT2}) cancels the second derivative term in Eq.(\ref{eq:Taylor}) so that together they represent ${\bar{u}}_{i+1/2}^n$ with the $O(a^4)$ error. In other words, the last term in Eq.(\ref{eq:NT2}) is the anti-diffusion term that is correcting the large second order diffusion introduced by the symmetric combination $ ({\bar{u}}_{i}^n+{\bar{u}}_{i+1}^n)/2 $. 
Therefore, this scheme is $O(a^4)$ in the smooth region. Why don't we then just replace these terms with ${\bar{u}}_{i+1/2}^{n}$? If one does that, it just becomes the forward-time-centered-space (FTCS) scheme in Eq.(\ref{eq:ftcs1}). The difference, the $O(a^4 \partial_x^4 u(x))$ term, provides just enough numerical diffusion so that spurious oscillations do not propagate from the stiff gradient. The second order NT algorithm represented by Eq.(\ref{eq:NT2}) is related to another often-used finite difference method called the SHASTA algorithm. We start with Eq.(\ref{eq:NT2}) but replace \begin{eqnarray} {{\bar{u}}_i^n + {\bar{u}}_{i+1}^n\over 2} & \approx & {\bar{u}}_{i+1/2}^n + {1\over 8}({\bar{u}}^n_{i+1} - 2{\bar{u}}^n_{i+1/2} + {\bar{u}}^n_{i}) + O(a^4) \end{eqnarray} so that \begin{eqnarray} {\bar{u}}_{i+1/2}^{n+1} & = & {\bar{u}}_{i+1/2}^n -{h\over a} \Big( j(t_{n+1/2},x_{i+1}) - j(t_{n+1/2},x_{i}) \Big) \nonumber\\ & & {} + {1\over 8}({\bar{u}}^n_{i+1} - 2{\bar{u}}^n_{i+1/2} + {\bar{u}}^n_{i}) - {a\over 8}\left((\partial_x u)^n_{i+1} - (\partial_x u)^n_{i}\right) \label{eq:SHASTA1} \end{eqnarray} At this point, this is no longer a finite volume method even though we will keep using the notation ${\bar{u}}_i^n$ to represent the value of $u$ at $t_n$ and $x_i$. It is also not a staggered method any more as the right hand side contains ${\bar{u}}_{i+1/2}^n$. Re-labelling the space indices $i+1/2 \to i$ and $i\to i-1$ so that the new grid size is $a' = a/2$, we get \begin{eqnarray} {\bar{u}}_{i}^{n+1} & = & {\bar{u}}_{i}^n -{h\over 2a} \Big( j(t_{n+1/2},x_{i+1}) - j(t_{n+1/2},x_{i-1}) \Big) \nonumber\\ & & {} + {1\over 8}({\bar{u}}^n_{i+1} - 2{\bar{u}}^n_{i} + {\bar{u}}^n_{i-1}) - {a\over 16}\left((\partial_x u)^n_{i+1} - (\partial_x u)^n_{i-1}\right) \label{eq:SHASTA2} \end{eqnarray} with the appropriate scaling of the derivatives in the last term and after renaming $a' \to a$.
With $u(t_{n+1/2}, x_{i\pm 1})$ given by Eq.(\ref{eq:half_step}), Eq.(\ref{eq:SHASTA2}) represents the basic SHASTA algorithm. In practice, Eq.(\ref{eq:SHASTA2}) is broken up into two stages to ensure positivity. Specifying $j = vu$, the first transport step is\footnote{ In the original SHASTA algorithm, $({\bar{u}}^n_{i+1} - 2{\bar{u}}^n_{i} + {\bar{u}}^n_{i-1})$ is used in place of $\left((\partial_x u)^n_{i+1} - (\partial_x u)^n_{i-1}\right)/2$ in this stage. In practice, as long as $|vh/a|$ is small, this difference does not matter much. But it is crucial that a flux limiter is used in the second anti-diffusion step. } \begin{eqnarray} w_{i}^{n+1} & = & {\bar{u}}_{i}^n + {1\over 8}({\bar{u}}^n_{i+1} - 2{\bar{u}}^n_{i} + {\bar{u}}^n_{i-1}) \nonumber\\ & & {} -{vh\over 2a} \Big( {\bar{u}}^{n}_{i+1} - {vh\over 2}(\partial_x u)^{n}_{i+1} - {\bar{u}}^{n}_{i-1} + {vh\over 2}(\partial_x u)^{n}_{i-1} \Big) \end{eqnarray} again using Eq.(\ref{eq:half_step}). If ${\bar{u}}_l^{n} \ge 0$ for all $l$, then as long as $|vh/a| < 1/4$, $w_i^{n+1}$ is positive. The second stage is the anti-diffusion step \begin{eqnarray} {\bar{u}}_{i}^{n+1} = w_{i}^{n+1} - {a^2\over 8} (\partial_x^2 w)^{n+1}_{i} \end{eqnarray} where $ (\partial_x^2 w)^{n+1}_{i} $ represents a numerical estimate of the second derivative at $x_i$ that preserves the positivity. The numerical approximation suggested by the last term in Eq.(\ref{eq:SHASTA2}) turned out not to preserve the positivity.
The original formulation of the SHASTA algorithm by Boris and Book uses a conservative form \begin{eqnarray} (\partial_x^2 w)^{n+1}_{i} = {1\over a} \left( (w_x)^{n+1}_{i+1} - (w_x)^{n+1}_{i} \right) \end{eqnarray} where \begin{eqnarray} (w_x)^{n+1}_{i+1} = \left\{ \begin{array}{l} 0 \hbox{ if $\Delta^{n+1}_{i}$, $\Delta^{n+1}_{i+1}$ and $\Delta^{n+1}_{i+2}$ do not all have the same sign.} \\ \hbox{else } {\rm sign}(\Delta^{n+1}_{i+1})\, {\rm min}( {8|\Delta^{n+1}_{i}|}, {|\Delta^{n+1}_{i+1}|}, {8|\Delta^{n+1}_{i+2}|}) \end{array} \right. \end{eqnarray} with \begin{eqnarray} \Delta^{n+1}_{i+1} = (w_{i+1}^{n+1} - w_i^{n+1})/a \end{eqnarray} This is similar to the minmod flux limiter and maintains the positivity. The SHASTA algorithm is used in many hydrodynamics simulation programs for ultra-relativistic heavy ion collisions\cite{Schneider:1993gd} including pioneering works in Refs.\cite{Sollfrank:1996hd,Kolb:2000sd} and also later works in Refs.\cite{Heinz:2005bw,Petersen:2010cw,Karpenko:2010te,Holopainen:2010gz, Pang:2012he,Roy:2011pk}. The second order NT scheme and the SHASTA scheme in practice work fairly well\cite{Akamatsu:2013wyk}. However, in these schemes the $h\to 0$ limit cannot be taken since the numerical viscosity behaves like $\sim 1/h$. It would be convenient to be able to take this limit because one can then formulate the discretized problem as a system of coupled ordinary differential equations in time. Many techniques for the ordinary differential equations such as the Runge-Kutta methods then become available to control the accuracy of the time evolution. So far, no time integration technique other than the midpoint rule in Eq.(\ref{eq:mid1}) has been available. One way to achieve this is to subdivide the cells as shown in Fig.~\ref{fig:KTScheme} with the piece-wise linear approximation for $u(t,x)$.
The size of the cell containing the discontinuity at the half integer point $x_{i+1/2}$ is controlled by the local propagation speed $c_{i+1/2}$. That is, the subcell surrounding $x_{i+1/2}$ is between the left boundary $x_{i+1/2}^l = x_{i+1/2} - c_{i+1/2}h$ and the right boundary $x_{i+1/2}^r = x_{i+1/2} + c_{i+1/2}h$. Then the cells containing the boundaries and the cells not containing the boundaries (between $x_{i-1/2}^r$ and $x_{i+1/2}^l$) are independently evolved. For the subcell containing $x_{i+1/2}$ \begin{eqnarray} w_{i+1/2}^{n+1} = {{\bar{u}}_{i+1/2}^l + {\bar{u}}_{i+1/2}^r\over 2} - {1\over 2c_{i+1/2}} \left(j({\bar{u}}_{i+1/2}^r)-j({\bar{u}}_{i+1/2}^l)\right) + O(h) \end{eqnarray} which is basically Eq.(\ref{eq:NT1}). Here the superscripts $r$ and $l$ mean the values of the approximate $u(t_n,x)$ at the boundary points $x_{i+1/2}^{r}$ and $x_{i+1/2}^l$, respectively. Within the smooth region between $x_{i-1/2}^r$ and $x_{i+1/2}^l$, we get using Eq.(\ref{eq:mid1}) \begin{eqnarray} w_{i}^{n+1} = {{\bar{u}}^l_{i+1/2} + {\bar{u}}^r_{i-1/2}\over 2} - {h\over a} \left(j({\bar{u}}_{i+1/2}^l)-j({\bar{u}}_{i-1/2}^r)\right) + O(h^2) \end{eqnarray} where the first term in the right hand side is obtained by applying the trapezoid rule.
The divided cells are then projected onto the original grid using the size of the cells as the weight to get \begin{eqnarray} {\bar{u}}_{i}^{n+1} & = & {c_{i-1/2}h\over a} w_{i-1/2}^{n+1} + {c_{i+1/2}h\over a} w_{i+1/2}^{n+1} + \left( 1 - {(c_{i-1/2}+c_{i+1/2})h\over a} \right) w_i^{n+1} \nonumber\\ \end{eqnarray} When the $h\to 0$ limit is taken, this procedure yields \begin{eqnarray} {d{\bar{u}}_i\over dt} & = & -{H_{i+1/2} - H_{i-1/2}\over a} \label{eq:KT_generic} \end{eqnarray} where \begin{eqnarray} H_{i+1/2} = {j({\bar{u}}_{i+1/2}^+)+j({\bar{u}}_{i+1/2}^-) \over 2} - c_{i+1/2}{{\bar{u}}_{i+1/2}^+-{\bar{u}}_{i+1/2}^-\over 2} \label{eq:KT_H_generic} \end{eqnarray} where now ${\bar{u}}^{+}_{i+1/2}$ and ${\bar{u}}^{-}_{i+1/2}$ are the values of the piece-wise linear $u(t, x)$ when approaching $x_{i+1/2}$ from the right and from the left, respectively. They are given by \begin{eqnarray} {\bar{u}}_{i+1/2}^+ &=& {\bar{u}}_{i+1} - (a/2) (\partial_x u)_{i+1} \\ {\bar{u}}_{i+1/2}^- &=& {\bar{u}}_{i} + (a/2) (\partial_x u)_{i} \end{eqnarray} again using the minmod flux limiter for the derivatives. This is known as the second order Kurganov-Tadmor (KT) scheme \cite{KT00} and it is implemented in the 3+1D event-by-event viscous hydrodynamics simulation program {\sc Music} \cite{Schenke:2010nt,Schenke:2010rr}. The numerical viscosity in the smooth regions is known to be $O(a^3\partial_x^4 u)$. The structure of the KT algorithm, Eqs.(\ref{eq:KT_generic}) and (\ref{eq:KT_H_generic}), is the same for the lowest order, second order and the third order algorithms. All one needs to do to improve accuracy is to get a better estimate of ${\bar{u}}^{\pm}_{i+1/2}$. Actually, it is instructive to consider the lowest order result which uses the histogram function as the approximation of $u(x)$. One then has ${\bar{u}}^{+}_{i+1/2} = {\bar{u}}_{i+1}$ and ${\bar{u}}^{-}_{i+1/2} = {\bar{u}}_{i}$. 
It is easy to see in this case that if $j = vu$ and hence $c_{i+1/2} = |v|$, Eq.(\ref{eq:KT_generic}) automatically becomes the up-wind method. We now have a set of ODE's. What is a good choice of the ODE solver? One of the physical requirements is again the positivity. For instance, suppose $u$ represents particle density. One knows that $u$ can never be negative. One of the schemes that preserves the positivity is Heun's method, which is one of the second order Runge-Kutta schemes. The equation $du/dt = f$ is numerically solved by following these steps \begin{eqnarray} {\bar{u}}_j^* &=& {\bar{u}}_j^n + h f(t_n, {\bar{u}}^n) \\ {\bar{u}}_j^{**} &=& {\bar{u}}_j^* + h f(t_{n+1}, {\bar{u}}^*) \\ {\bar{u}}_j^{n+1} &=& {1\over 2}({\bar{u}}_j^n + {\bar{u}}_j^{**}) \end{eqnarray} Positivity is maintained at each stage with a suitable choice of the flux limiter such as the minmod flux limiter. For 3-D (and similarly for 2-D), a simple extension \begin{eqnarray} {d\over dt}{\bar{u}}_{ijk} = -{H^x_{i+1/2,j,k} - H^x_{i-1/2,j,k}\over a_x} -{H^y_{i,j+1/2,k} - H^y_{i,j-1/2,k}\over a_y} -{H^z_{i,j,k+1/2} - H^z_{i,j,k-1/2}\over a_z} \nonumber\\ \end{eqnarray} works well. In curvilinear coordinate systems such as the Milne coordinates, one may not have the conservation law in the form $\partial_\mu J^\mu = 0$. Instead, it may take the form \begin{eqnarray} \partial_a J^a = S \end{eqnarray} where $S$ is the geometrical source term that does not involve derivatives. The KT algorithm can be easily adapted to this case by simply adding the source term on the right hand side. Namely, \begin{eqnarray} {d{\bar{u}}_i\over dt} & = & -{H_{i+1/2} - H_{i-1/2}\over a} + S({\bar{u}}_i) \end{eqnarray} In implementing the KT scheme, one needs the maximum speed of propagation at $x_{i\pm 1/2}$.
If there is only one variable $u$ and one current $j$, then the speed of propagation in the $i$-th direction is \begin{eqnarray} c_i = \left| {\partial j_i\over \partial u} \right| \end{eqnarray} If there is more than one current, then we first define the Jacobian matrix \begin{eqnarray} J_i^{ab} = {\partial j_i^a \over \partial u_b} \end{eqnarray} where $i = 1, 2, 3$ is the space index and $a = 1, \cdots, M$ labels the conserved quantities. Therefore we have 3 $M\times M$ matrices. The maximum propagation speed in the $i$-th direction is \begin{eqnarray} c_i = {\rm max}(|\lambda_1|,\cdots,|\lambda_M|) \end{eqnarray} where $\lambda$'s are the eigenvalues of the Jacobian $J_i^{ab}$. When discretizing in time, the original authors of the KT algorithm recommended the time step $h$ to be small enough so that $|{\rm max}(c_i) h/a| < 1/8$. For the explicit form of the $c_i$ for the 3+1d ideal hydrodynamics including the net baryon current, as currently used in {\sc Music}, see Ref.\cite{Schenke:2010nt}. For implementation of the event-by-event 3+1d viscous hydrodynamics in {\sc Music}, see Refs.\cite{Schenke:2010rr,Gale:2012rq}. Additional valuable information on the algorithms used in solving the {\sc vHydro} equations can be found in Ref.\cite{Shen:2014vra}. For some comparisons of various schemes discussed in this section, see Ref.\cite{Akamatsu:2013wyk}. There are many other numerical schemes that are currently in use but that we unfortunately did not have space to discuss. These include, but are not limited to, a PPM (Piece-wise Parabolic Method) scheme\cite{Hirano:2012yy}, a Lagrangian scheme where the grid points follow the movement of fluid cells\cite{Nonaka:1999et}, Riemann solvers \cite{DelZanna:2013eua,Akamatsu:2013wyk}, and a SPH (smoothed particle hydrodynamics) method \cite{Aguiar:2000hw}. The simple FTCS scheme has also been employed for the viscous hydrodynamics \cite{Luzum:2008cw,Bozek:2011ua} for smooth initial conditions.
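To close the discussion, the semi-discrete KT scheme of Eqs.(\ref{eq:KT_generic}) and (\ref{eq:KT_H_generic}), combined with minmod slopes and Heun time stepping, fits in a short script. The sketch below is ours and not taken from {\sc Music}; periodic boundaries and the advection current $j=vu$ are used purely as a test, with the Courant factor kept below the recommended $1/8$:

```python
import numpy as np

def minmod3(dl, dc, dr, theta=1.5):
    # generalized minmod: zero at extrema, limited slope otherwise
    sl = np.sign(dl)
    m = sl*np.minimum(np.minimum(theta*np.abs(dl), np.abs(dc)), theta*np.abs(dr))
    return np.where(dl*dr > 0, m, 0.0)

def kt_rhs(u, a, flux, speed):
    """Semi-discrete Kurganov-Tadmor right-hand side, Eqs.(KT_generic)-(KT_H_generic)."""
    up1, um1 = np.roll(u, -1), np.roll(u, 1)
    s = minmod3((u - um1)/a, (up1 - um1)/(2*a), (up1 - u)/a)
    uL = u + 0.5*a*s                        # u^-_{i+1/2}: left state at x_{i+1/2}
    uR = np.roll(u - 0.5*a*s, -1)           # u^+_{i+1/2}: right state at x_{i+1/2}
    c = np.maximum(speed(uL), speed(uR))    # local propagation speed c_{i+1/2}
    H = 0.5*(flux(uL) + flux(uR)) - 0.5*c*(uR - uL)
    return -(H - np.roll(H, 1))/a

def heun_step(u, h, rhs):
    # Heun's method (second order Runge-Kutta), as in the text
    u1 = u + h*rhs(u)
    return 0.5*(u + (u1 + h*rhs(u1)))

# advection test: j = v u with v = 1; a square pulse should translate, stay >= 0
N, a, v, h = 200, 1.0, 1.0, 0.1
u = np.zeros(N)
u[80:121] = 1.0
total0 = u.sum()
rhs = lambda w: kt_rhs(w, a, lambda q: v*q, lambda q: np.full_like(q, abs(v)))
for _ in range(100):
    u = heun_step(u, h, rhs)
```

After 100 steps ($t=10$) the pulse has moved by 10 cells, the total $u$ is conserved to machine precision, and no negative values appear.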
\subsection{Freeze-out Hypersurface and Cooper-Frye Formula for Particle Production} \begin{figure}[t] \includegraphics[width=6cm,height=5cm]{gubserT} \includegraphics[width=6cm,height=5cm]{gubserT_r} \caption{Plot of $T(\tau,r)/T_0$ for the Gubser solution. Here $qr_0 = (3+\sqrt{5})/2$. The left panel shows $T(\tau,r)/T_0$ at fixed $r$'s and the right panel shows $T(\tau,r)$ at fixed $\tau$'s.} \label{fig:gubserT} \end{figure} \begin{figure}[t] \centerline{\includegraphics[width=7cm,height=5cm]{fo_curve}} \caption{Plot of the hypersurface with $T(\tau,r)/T_0 = 0.17$ for the Gubser solution.} \label{fig:fo_curve} \end{figure} When a hydrodynamic system is expanding, there comes a time when the system is too dilute to be treated with hydrodynamics (c.f.~section \ref{sec:kinetic}). From this point on, the system is basically a collection of non-interacting particles. In realistic systems, this time is not the same for all fluid cells. As the system starts to expand at $\tau_0$, there are cells at the periphery of the system that are dilute enough to ``freeze out'' in a very short time. As time elapses, expansion of the system can cause hot and dense matter to flow into the location of those frozen cells. These will eventually freeze out, too. Therefore, the freeze-out hypersurface cannot be a simple constant-time 3-dimensional volume. It is a complicated 3-d hypersurface in the 4-d space-time. To illustrate this point, consider the following Gubser solution of boost-invariant and azimuthally symmetric ideal hydrodynamics\cite{Gubser:2010ze}.\footnote{ Analytic solutions of ideal hydrodynamics do exist for special cases, but not for general cases.} \begin{eqnarray} T(\tau,r) = {2 T_0 \over \big(2q\tau \left[1 + 2q^2(\tau^2 + r^2) + q^4(\tau^2 - r^2)^2\right] \big)^{1/3} } \end{eqnarray} where $r^2 = x^2 + y^2$ and $\tau = \sqrt{t^2 - z^2}$.
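A direct numerical check of this profile is straightforward; the sketch below (in units where $q = T_0 = 1$) tabulates $T(\tau, r)$ and counts how many times it crosses a putative freeze-out value $T_{\rm FO}/T_0 = 0.17$ by bracketing sign changes on a $\tau$ grid:

```python
import numpy as np

def gubser_T(tau, r, T0=1.0, q=1.0):
    # Gubser solution for boost-invariant, azimuthally symmetric
    # ideal hydrodynamics.
    inner = 1.0 + 2.0 * q**2 * (tau**2 + r**2) + q**4 * (tau**2 - r**2)**2
    return 2.0 * T0 / (2.0 * q * tau * inner) ** (1.0 / 3.0)

def n_crossings(r, T_fo, q=1.0, T0=1.0):
    # Count sign changes of T(tau, r) - T_fo on a fine tau grid.
    taus = np.linspace(0.05, 20.0, 4000)
    f = gubser_T(taus, r, T0, q) - T_fo
    return int(np.count_nonzero(np.sign(f[1:]) != np.sign(f[:-1])))

inner_crossings = n_crossings(1.0, 0.17)   # near the centre
outer_crossings = n_crossings(5.0, 0.17)   # in the periphery
```

Near the centre the isotherm is crossed once, while at $qr = 5$ the temperature dips below and then rises back above $T_{\rm FO}$, giving three crossings.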
It is a simple matter of taking the $\tau$ derivative of $T(\tau, r)$ to see that for $r > r_0$, where $r_0= {3 + \sqrt{5}\over 2 q}$, there are two values of $\tau$ where $\partial_\tau T$ vanishes. Since $\partial_\tau T$ is negative for small $\tau$, this means that $T(\tau, r)$ at fixed $r > r_0$ will have a minimum and then a maximum. This is illustrated in the left panel in Fig.~\ref{fig:gubserT}. The solid line is for $qr = 1$, which is near the center of the system. The temperature at that position decreases monotonically. At $r = r_0$, there is an inflection point but the behavior is still monotonic. In the $qr = 5$ case, one can clearly see that the temperature decreases at first, but at some point it starts to rise again as the pressure pushes hot matter from the central region towards the periphery of the system. Assuming that the freeze-out temperature is between the minimum and the maximum of $T/T_0$, the position $qr = 5$ will contribute to the final particle spectrum (freeze-out) three times. The plot of the isothermal curve with $T(\tau,r)/T_0 = 0.17$ is shown in Fig.~\ref{fig:fo_curve}. One may think of this as representing the ``freeze-out'' hypersurface in the Gubser solution. The long tail that can be seen above $qr \,\, \vcenter{\hbox{$\buildrel{\displaystyle >}\over\sim$}} \,\, 5$ is unrealistic. In more realistic simulations, the evolution starts at times above the long tail. Nevertheless, the Gubser solution contains many of the features that more realistic numerical solutions exhibit \cite{Heinz:2002sq,Song:2010aq,Schenke:2010rr,Holopainen:2012id}. At the freeze-out hypersurface, the fluid is to be converted to particles according to the local equilibrium condition. In kinetic theory, the particle number current for the $i$-th particle species is given by \begin{eqnarray} j_i^\mu(x) = g_i \int {d^3 p\over (2\pi)^3 p^0}\, f_i(x,p)\, p^\mu \end{eqnarray} where $f_i(x,p)$ is the phase space density and $g_i$ is the degeneracy.
The number of $i$-th particles in a 3-d hypersurface is then given by \begin{eqnarray} \int d\sigma_\mu j_i^\mu(x) = g_i \int {d^3 p\over (2\pi)^3 p^0}\, \int d\sigma_\mu\, p^\mu\, f_i(x,p) \end{eqnarray} where $d\sigma_\mu$ is the hypersurface element. Hence, the momentum spectrum is \begin{eqnarray} E_p {dN_i\over d^3 p} = {g_i\over (2\pi)^3} \int d\sigma_\mu\, p^\mu\, f_i(x,p) \label{eq:CFmom} \end{eqnarray} This is the celebrated Cooper-Frye formula \cite{Cooper:1974mv}. Hydrodynamics deals with the energy density and its flow. Experiments measure the momentum distribution of identified particles. The Cooper-Frye formula provides the link between them. In ideal hydrodynamics, local equilibrium is strictly maintained. Hence, once we find the freeze-out hypersurface (equivalently the energy density via the equation of state), all one has to do is to integrate Eq.(\ref{eq:CFmom}) over the freeze-out hypersurface with $f_i(x,p) = 1/(e^{(p^\mu u_\mu - \mu_B)/T_{\rm FO}} + a_i)$ where $T_{\rm FO}$ is the freeze-out temperature, $a_i = -1$ for bosonic statistics, $a_i = 1$ for fermionic statistics, and $a_i = 0$ for classical (Boltzmann) statistics. The freeze-out surface we need to integrate over has the shape shown in Fig.~\ref{fig:fo_curve}, which requires a closer inspection. Suppose all parts of the system reach the freeze-out temperature at the same Minkowski time $t_{\rm FO}$ and only once. In that case, the freeze-out hypersurface is just the Minkowski spatial volume. The integration element is then just $d\sigma_0 = dx dy dz$. All other components vanish. This is simple, but not a realistic scenario in relativistic heavy ion collisions, as explained above.
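For this simple constant-time scenario, the Cooper-Frye integral collapses to the thermal spectrum of a static fireball, $E\, dN/d^3p = g V E f(E)/(2\pi)^3$; the sketch below uses illustrative values (natural units, with $V$ in GeV$^{-3}$):

```python
import numpy as np

def f_eq(E, T, mu=0.0, a=0.0):
    # Local-equilibrium distribution: a = -1 (bosons), +1 (fermions),
    # 0 (classical Boltzmann statistics).
    return 1.0 / (np.exp((E - mu) / T) + a)

def spectrum_static(p, m, T, V, g=1.0, a=0.0):
    # Cooper-Frye on a constant-time surface (d sigma_0 = dx dy dz)
    # for a fluid at rest: E dN/d^3p = g/(2 pi)^3 * V * E * f(E).
    E = np.sqrt(m**2 + p**2)
    return g / (2.0 * np.pi) ** 3 * V * E * f_eq(E, T, a=a)

# Bose-Einstein versus Boltzmann statistics at high momentum.
bose = spectrum_static(3.0, 0.14, 0.15, 1.0, a=-1.0)
boltz = spectrum_static(3.0, 0.14, 0.15, 1.0, a=0.0)
```

For $E \gg T$ the Bose-Einstein and Boltzmann choices of $a_i$ give nearly identical spectra, as expected.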
In the Milne coordinate system, the volume elements in each direction are given by \begin{eqnarray} d\sigma_a = \left( \tau dx dy d\eta, -\tau d\tau dy d\eta, -\tau d\tau dx d\eta, -\tau d\tau dx dy \right) \label{eq:dSigma_a} \end{eqnarray} When the freeze-out hypersurface is specified by the freeze-out proper time $\tau_f(x,y,\eta)$, the surface element on this surface is obtained by replacing $d\tau$ in Eq.(\ref{eq:dSigma_a}) with $(\partial\tau_f/\partial x_i)dx_i$ \begin{eqnarray} d\sigma^f_a = \left(1, -{\partial\tau_f\over \partial x}, -{\partial\tau_f\over \partial y}, -{\partial\tau_f\over \partial \eta} \right) \tau_f dx dy d\eta \label{eq:hypersurface_dV} \end{eqnarray} If $d\sigma_a^f$ is time-like, then it is guaranteed that $p^a d\sigma^f_a > 0$. Therefore, Eq.(\ref{eq:CFmom}) has a well-defined interpretation that the particles are flowing out of the hypersurface. If $d\sigma_a^f$ is space-like, then depending on the direction of $p^a$, $p^a d\sigma_a^f$ could be either positive or negative. Equivalently, particles can be flowing into or out of the hypersurface. Before we discuss the physical situation in which this can happen, we first define the rapidity $y = \tanh^{-1}(p_z/E)$ so that \begin{eqnarray} E & = & m_T \cosh y \\ p^z & = & m_T\sinh y \end{eqnarray} with $m_T = \sqrt{m^2 + {\bf p}_T^2}$. The Milne-space momentum is \begin{eqnarray} p^a = \left( m_T \cosh(y - \eta), {\bf p}_T, m_T\sinh(y-\eta)/\tau \right) \end{eqnarray} which is equivalent to using $\Lambda^a_\mu p^\mu$ (c.f.~Eq.(\ref{eq:Lambda_mua})) but with an extra factor of $1/\tau$ for the $\eta$ component to conform with the geometrical definition of the hypersurface element Eq.(\ref{eq:hypersurface_dV}). To illustrate what happens when $d\sigma_a^f$ is space-like, let us consider the Gubser solution again. In this case, $\tau_f(r)$ is a function of the transverse radius $r = \sqrt{x^2 + y^2}$ only.
Since the system is boost-invariant, we can also set $\eta = 0$ without loss of generality. The inner product of the momentum and the volume element is \begin{eqnarray} p^a d\sigma^f_a = \left( m_T\cosh y - ({\bf p}_T{\cdot}\hat{{\bf r}})\partial_r \tau_f \right)\tau_f dx dy d\eta \end{eqnarray} where $\hat{{\bf r}}$ is the unit vector in the transverse direction. This can become negative if $\partial\tau_f/\partial r > (m_T/p_T) \cosh y > 1$. {}From Figs.~\ref{fig:gubserT} and \ref{fig:fo_curve}, one can see that a large positive gradient $\partial\tau_f/\partial r$ exists in the region where the temperature reaches $T_{\rm FO}$ for the second time. At this time, the temperature is going {\em up} from the minimum in the left panel of Fig.~\ref{fig:gubserT}. Therefore, this part of $\tau_f$ is not about the fluid freezing out. It is rather colder matter being heated up to become part of the fluid again. Before one can use the Cooper-Frye formula, one needs to know the freeze-out hypersurface. This is not a trivial problem because hydrodynamic simulations only provide the freeze-out {\em space-time points.} The freeze-out hypersurface needs to be reconstructed from these points. In 2+1d, this is not so complicated, but in 3+1d, it can become very involved. Discussion of this important topic is, however, beyond the scope of this review. An interested reader should consult Ref.\cite{Huovinen:2012is}. How do we use the Cooper-Frye formula Eq.(\ref{eq:CFmom})? If hydrodynamics is not coupled to a hadronic cascade after-burner, then usually the hypersurface integral in Eq.(\ref{eq:CFmom}) is performed as it is after the hypersurface is reconstructed. The rationale behind this is that when a cell crosses the freeze-out boundary three times, the contributions from the first two crossings should largely cancel each other.
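The sign of $p^a d\sigma^f_a$ on such a surface is easy to check numerically; the function below evaluates the inner product per unit $dx\,dy\,d\eta$ for the Gubser case at $\eta = 0$, with illustrative pion-like momenta (GeV units):

```python
import numpy as np

def p_dot_dsigma(mT, pT_r, y, dtauf_dr, tau_f):
    # p^a dsigma^f_a / (dx dy deta) at eta = 0 on a surface tau = tau_f(r):
    # (m_T cosh y - (p_T . rhat) dtau_f/dr) * tau_f
    return (mT * np.cosh(y) - pT_r * dtauf_dr) * tau_f

mT, pT_r, y = 0.5, 0.48, 0.0                    # outward particle, mid-rapidity
outflow = p_dot_dsigma(mT, pT_r, y, 0.5, 3.0)   # gentle slope: positive
inflow = p_dot_dsigma(mT, pT_r, y, 2.0, 3.0)    # steep space-like part: negative
```

The contribution flips sign once $\partial_r\tau_f$ exceeds $(m_T/p_T)\cosh y$, which is exactly what happens on the steep part of the surface where matter is being reheated; as noted above, in practice the near-cancellation between the first two crossings is what justifies integrating over the full surface.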
This is physically sound because when the fluid cell crosses the freeze-out surface the second time, the particles that are being heated up again are the remnants of the first crossing. When coupling to the hadronic after-burner, an additional condition such as $p^a d\sigma^f_a > 0$ is usually employed \cite{Grassi:2004dz,Sinyukov:2009ce,Huovinen:2012is,Oliinychenko:2014tqa}. The presence of a non-zero shear tensor $\pi^{\mu\nu}$ and bulk pressure $\Pi$ signals non-equilibrium. In this case, the Cooper-Frye formula needs to be modified to take into account the non-equilibrium phase space density. Let \begin{eqnarray} f(x,p) = f_{\rm eq}(x,p) + \delta f(x,p) \end{eqnarray} Then the consistency between the hydrodynamic $T^{\mu\nu}$ and the kinetic theory $T_{\rm Kin}^{\mu\nu} = \int {d^3p\over (2\pi)^3p^0} p^\mu p^\nu f$ requires \begin{eqnarray} \pi^{\mu\nu} = \int {d^3 p\over (2\pi)^3 p^0}\, p^{\langle\mu} p^{\nu\rangle}\, {\delta f}(x,p) \label{eq:deltaf_pimunu} \end{eqnarray} and \begin{eqnarray} \Pi = -{m^2\over 3}\int {d^3p\over (2\pi)^3 p^0}\,{\delta f}(x,p) \label{eq:deltaf_Pi} \end{eqnarray} with \begin{eqnarray} p^{\langle\mu}p^{\nu\rangle} & = & \Delta^{\mu\nu}_{\alpha\beta} p^\alpha p^\beta \end{eqnarray} These conditions can be satisfied by \begin{eqnarray} {\delta f}(x,p) = \left( A({\bar{E}}_p)\Pi(x) + B({\bar{E}}_p) p^{\langle\mu}p^{\nu\rangle} \pi_{\mu\nu}(x) + \cdots \right) f_{\rm eq}(x,p)(1+ a_i f_{\rm eq}(x,p)) \nonumber\\ \label{eq:deltaf_exp} \end{eqnarray} where again ${\bar{E}}_p = p^\mu u_\mu$ is the energy in the local rest frame. Here $A({\bar{E}}_p)$ and $B({\bar{E}}_p)$ must be consistent with Eqs.(\ref{eq:deltaf_pimunu}) and (\ref{eq:deltaf_Pi}), but are otherwise arbitrary at this point. In the presence of the net-baryon current, Eq.(\ref{eq:deltaf_exp}) also includes a term $C({\bar{E}}_p)p^{\langle\mu\rangle}q_\mu$ with the constraint on $C({\bar{E}}_p)$ given by Eq.(\ref{eq7a}).
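As an illustration, the sketch below implements the constant-coefficient (quadratic) shear part of Eq.(\ref{eq:deltaf_exp}), with the commonly used normalisation $B = 1/[2(e+P)T^2]$; this normalisation, and the numerical values of $\pi_{\mu\nu}$, $T$, and $e+P$, are illustrative choices rather than the result of enforcing Eqs.(\ref{eq:deltaf_pimunu}) and (\ref{eq:deltaf_Pi}) for a specific microscopic theory:

```python
import numpy as np

def delta_f_shear(p, pi_lower, f0, T, e_plus_P, a=0.0):
    # Constant-coefficient (quadratic) shear correction in the local rest
    # frame: delta f = f0 (1 + a f0) p^mu p^nu pi_{mu nu} / (2 (e+P) T^2).
    pp_pi = np.einsum('m,n,mn->', p, p, pi_lower)
    return f0 * (1.0 + a * f0) * pp_pi / (2.0 * e_plus_P * T**2)

# Traceless, transverse pi_{mu nu} in the local rest frame (illustrative).
pi = np.diag([0.0, 0.05, 0.05, -0.10])
E = 1.0
df_long = delta_f_shear(np.array([E, 0.0, 0.0, 0.8]), pi, 0.1, 0.15, 1.0)
df_tran = delta_f_shear(np.array([E, 0.8, 0.0, 0.0]), pi, 0.1, 0.15, 1.0)
```

For this $\pi_{\mu\nu}$ the correction depletes particles moving along the compressed (longitudinal) direction and enhances the transverse ones.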
Exact forms of $A({\bar{E}}_p)$ and $B({\bar{E}}_p)$ (and $C({\bar{E}}_p)$) depend on the form of the scattering cross sections in the underlying microscopic system. In Ref.\cite{Dusling:2009df}, it is argued that for most theories, the ${\bar{E}}_p$ dependence of $A$ and $B$ should be between $1$ and $1/{\bar{E}}_p$. See also Refs.\cite{Denicol:2012yr,Noronha-Hostler:2013gga}. In most simulations, the constant ansatz is used. \section{Summary} In this review, we have tried to deliver a general and pedagogic view of relativistic hydrodynamics as currently used in the study of ultra-relativistic heavy ion collisions. One message we tried to convey to the reader was that hydrodynamics is a very general framework and yet it can describe a vast set of complex phenomena. Especially in QGP studies, hydrodynamics is an indispensable tool that connects the QGP properties to the actual observables. Another message we tried to convey was that the theory of hydrodynamics is a fascinating subject by itself. As discussed in section \ref{sec:linear_response} and section \ref{sec:kinetic_hydro}, there are still many unsolved problems, such as finding Kubo formulas for the second-order transport coefficients and finding the right anisotropic equation of state. In view of the apparent collectivity in the high-multiplicity proton-proton and proton-nucleus collisions at the LHC, the theory of collective motion in small systems is also in urgent need of development. In these systems, thermal fluctuations during the hydrodynamic evolution may not be completely ignorable\cite{Kapusta:2011gt,Murase:2013tma,Hirano:2014dpa,Young:2014xda,Young:2014pka}. We hope that this review has provided interested readers with enough starting points to pursue these and other interesting topics on their own. In any short review, omission of some important subjects inevitably occurs due to the lack of space. One notable omission in this review is the discussion of the initial state models.
As briefly discussed in section \ref{sec:numerical}, the initial condition of the hydrodynamic evolution must be given outside of the theory of hydrodynamics. For a meaningful comparison with the experimental data, having the right initial condition including the right energy-momentum fluctuation spectra is crucial. Unfortunately, it is outside the scope of this review and must be left as the subject of a future review. \section{Acknowledgments} S.J.~thanks C.~Gale and G.~Denicol for many discussions on finer points of hydrodynamics and L.G.~Yaffe for generous permission to use some materials from his unpublished note. S.J.~is supported in part by the Natural Sciences and Engineering Research Council of Canada. U.H.~acknowledges support by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics under Awards No. \rm{DE-SC0004286} and (within the framework of the JET Collaboration) \rm{DE-SC0004104}. \bibliographystyle{ws-rv-van}
\section{Conclusion} The extracted $\sigma_{L}$ and $\sigma_{T}$ as a function of $-u$ at $Q^2=1.6$ and 2.45~GeV$^2$ are shown in Fig.~\ref{fig:sigt}. The two sets of TDA predictions for $\sigma_{T}$ each assume a different nucleon DA as input. The predictions were calculated at the specific $\overline{Q^2}$, $\overline{W}$ values of each $u$ bin. The predictions at the three $u$ bins are joined by straight lines. At $Q^2 = 2.45$ GeV$^2$, the TDA predictions are within the same order of magnitude as the data, whereas at $Q^2 = 1.6$ GeV$^2$, the TDA model overpredicts the data by a factor of $\sim$10. This is very similar to the recent backward-angle $\pi^+$ data from CLAS \cite{park18}, where the TDA prediction is within 50\% of the data at $Q^2$=2.5 GeV$^2$, but far higher than the unseparated data at $Q^2=$1.7 GeV$^2$. Together, these data sets suggest that the boundary above which the TDA factorization applies may begin around $Q^2 = 2.5$ GeV$^2$. The behavior of $\sigma_{L}$ differs greatly at the two $Q^2$ settings. At $Q^2=$1.6~GeV$^2$, $\sigma_{L}$ falls almost exponentially as a function of $-u$; at $Q^2=$2.45~GeV$^2$, $\sigma_{L}$ is consistent with zero (within one standard deviation), in agreement with the leading-twist TDA prediction: $\sigma_{L}\approx 0$. \begin{figure}[ht] \centering \includegraphics[scale=0.44]{combined_t_slope.eps} \caption{Exclusive $\omega$ electroproduction cross section as a function of $-t$ at $Q^2=1.75$ (left panel) and $Q^2=2.35$~GeV$^2$ (right panel). The CLAS data are the black dots in the near-forward kinematics region ($-t<2.5$~GeV$^2$), and the F$_\pi$-2 data are the red crosses in the backward region ($-t>5$~GeV$^2$), scaled to the kinematics of the CLAS data, as described in the text. The blue and magenta dashed thick lines are the Regge-trajectory-based JML04 and JML18 predictions, respectively.
The short curves above the F$_\pi$-2 data are TDA predictions based on the COZ~\cite{chernyak89} (blue solid) and KS~\cite{king87} (red solid) DAs.} \label{omega_t_slope} \end{figure} The combined data from CLAS~\cite{morand05} and F$_{\pi}$-2 cover both forward- and backward-angle kinematics, and jointly form a complete $-t$ evolution picture for the $e p \rightarrow e^{\prime} p \omega$ reaction. The CLAS data, at $W\sim2.48$~GeV, $Q^2=1.75$ and 2.35 GeV$^2$, are shown in the left and right panels of Fig.~\ref{omega_t_slope}, respectively. Because of the similarities in the kinematics, the F$_{\pi}$-2 data (this work) are scaled to those of the CLAS data. The $W$ dependence of the backward-angle cross section is unknown; therefore a scaling procedure based on forward-angle phenomenology studies, $\sigma \propto (W^2-m_p^2)^{-2}$, is applied~\cite{brauel79}. The $Q^2$ scaling is based on the empirical fit used to extract the separated cross sections of this work. This empirical model is documented in Ref.~\cite{wenliang17}. In addition to the scaling, the extracted $-u$-dependent cross section from F$_\pi$-2 is translated to the $-t$ space of the CLAS data. Fig.~\ref{omega_t_slope} indicates strong evidence for the existence of a backward-angle peak at $-t > 5$ GeV$^2$ for both $Q^2$ settings, with a strength $\sim 1/10$ of the forward-angle cross section. Previously, the ``forward-backward'' peak phenomenon was only observed in $\pi^+$ photoproduction data~\cite{vgl96,guidal97,anderson69,anderson69b,boyarski68}. This was successfully interpreted using the Regge-trajectory-based VGL model \cite{vgl96,guidal97}. The results presented in this paper have demonstrated that the missing-mass reconstruction technique, in combination with the high precision spectrometers in coincidence mode at Hall C of Jefferson Lab, is able to reliably perform a full L/T separation of the backward-angle exclusive reaction $ep \to e'p \omega$.
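The $W$ rescaling applied to the F$_\pi$-2 points is a one-line operation; in the sketch below the $W$ values and the input cross section are placeholders, and only the empirical rule $\sigma \propto (W^2 - m_p^2)^{-2}$ is taken from the text:

```python
def w_scale_factor(W_from, W_to, mp=0.938):
    # Multiplicative factor translating a cross section measured at W_from
    # to the kinematics W_to, assuming sigma ~ (W^2 - m_p^2)^(-2).
    return ((W_from**2 - mp**2) / (W_to**2 - mp**2)) ** 2

sigma_measured = 0.10                                    # placeholder value
sigma_scaled = sigma_measured * w_scale_factor(2.21, 2.48)
```

Scaling to a higher $W$ reduces the cross section, as the factor is smaller than one.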
Since the missing-mass reconstruction method does not require the detection of the produced meson, this allows the possibility of extending the experimental kinematic coverage to regions considered inaccessible through the standard direct-detection method. If used in combination with a large-acceptance detector, such as CLAS-12, one could systematically study the complete $t$ evolution of a given interaction, thus unveiling new aspects of nucleon structure. The separated cross sections show indications of a regime where $\sigma_T \gg \sigma_L$ for $ep \to e'p \omega$, qualitatively consistent with the TDA factorization approach in backward-angle kinematics. However, the approach relying on the QCD partonic picture, applicable at large enough $Q^2$, involves different mechanisms for the forward and backward peaks and cannot provide a unified description over the whole range in $-t$. An alternative description of the $\omega$-meson electroproduction cross section is given by the Regge-based JML model. It describes the JLab $\pi$ electroproduction cross sections over a wide kinematic range without destroying the good agreement at $Q^2=0$~\cite{laget10, laget11}. Two JML model predictions are plotted in Fig.~\ref{omega_t_slope}: JML04~\cite{laget04} (prior to the F$_\pi$-2 data) and JML18. JML04 includes the near-forward Regge contribution at $-t < 1$ GeV$^2$ and $N$-exchange in the $u$-channel with a $t$-dependent cutoff mass. It significantly underpredicts the backward-angle cross section. In JML18~\cite{laget18}, the principle of the $u$-channel treatment is the same as in $t$-channel neutral-pion electroproduction~\cite{laget11}. It includes, in addition, an estimate of the contribution of the $\rho$-$N$ and $\rho$-$\Delta$ unitarity rescattering (Regge) cuts, allowing an excellent description of the combined data within a unique framework. In particular, the $-u$ dependence and the strength of the backward-angle peak are described well at both $Q^2$ settings.
The inelastic exchange diagrams are the main sources of the observed backward-angle peak, with one third of the contribution coming from the $\rho^0$-$\omega$ transition, and the rest coming from the $\rho^+$-$N$ and $\Delta$ resonance contributions. However, JML18 does not predict the $Q^2$ dependence of the $\sigma_{L}/\sigma_{T}$ ratio. In conclusion, the presented experimental data hint at an early onset of the QCD-based factorized description of electroproduction of $\omega$ in the backward kinematics regime for $Q^2$ in the few-GeV$^2$ range. This opens a way to the experimental access of nucleon-to-meson TDAs and provides a new window on the quark-gluon structure of nucleons. These data also supply an interesting new testing ground for Regge-based hadronic models. \begin{acknowledgments} We acknowledge the excellent efforts provided by the staff of the Accelerator and the Physics Divisions at Jefferson Lab. This work is supported by NSERC (Canada) FRN: SPAPIN-2016-00031, DOE and NSF (USA), FOM (Netherlands), NATO, and NRF (Rep. of Korea). Additional support from Jefferson Science Associates and the University of Regina is gratefully acknowledged. This material is based upon work supported by the U.S. Department of Energy under contracts DE-AC05-06OR23177 and DE-AC02-06CH11357. L. S. is supported by the grant 2017/26/M/ST2/01074 of the National Science Center in Poland. He thanks the French LABEX P2IO and the French GDR QCD for support. \end{acknowledgments}
\section{Introduction} \noindent Extensions of the Standard Model (SM) involving vectorlike quarks (VLQs) are frequently invoked to explain various experimental observations or discrepancies (like the current anomalies). Generally, these extensions are, from a top-down point of view, well-motivated and can successfully address some of the shortcomings of the SM~\cite{Kaplan:1983fs,Kaplan:1991dc,Agashe:2004rs,Ferretti:2013kya,Gherghetta:2000qt,Contino:2003ve,Arkani-Hamed:2002iiv,Arkani-Hamed:2002ikv}. Depending on their quantum numbers, VLQs can decay to the SM particles by mixing with the SM quarks. The mixing is generally with the third-generation quarks as those are comparatively less constrained by the flavour-changing neutral-current data. Looking for VLQs exclusively decaying to a third-generation quark and a heavy boson ($W/Z/H$) is a major beyond-the-SM (BSM) search program at the LHC. However, in many BSM theories, an extended quark sector with vectorlike quarks naturally appears along with an extended scalar sector containing heavy scalar/pseudoscalar particles. In some cases, a VLQ can also decay to a new scalar or pseudoscalar, often significantly (see, e.g., Ref.~\cite{Bhardwaj:2022nko}). The null results in the current LHC searches motivate us to seriously investigate such exotic decay modes of the VLQs~\cite{Gopalakrishna:2011ef,Gopalakrishna:2013hua,Barcelo:2014kha,Gopalakrishna:2015wwa,Serra:2015xfa,Anandakrishnan:2015yfa,Banerjee:2016wls,Kraml:2016eti,Dobrescu:2016pda,Aguilar-Saavedra:2017giu,Chala:2017xgc,Moretti:2017qby,Bizot:2018tds,Colucci:2018vxz,Han:2018hcu,Dermisek:2019vkc,Kim:2019oyh,Xie:2019gya,Benbrik:2019zdp,Cacciapaglia:2019zmj,Dermisek:2020gbr,Wang:2020ips,Choudhury:2021nib,Dermisek:2021zjd,Corcella:2021mdl,Dasgupta:2021fzw,Verma:2022nyd,Bhardwaj:2022nko,Bhardwaj:2022wfz,Cline:2021iff}. 
In Ref.~\cite{Bhardwaj:2022nko}, we presented a roadmap to search for vectorlike top/bottom partners in the presence of a lighter weak-singlet scalar or pseudoscalar ($\Phi$) at the LHC. Even though the $\Phi$ has no direct couplings with the SM quarks, such couplings are generated through the mixing of VLQs ($Q$) with the corresponding third-generation quarks ($q$) after electroweak-symmetry breaking (EWSB). As a result, a new VLQ-decay mode opens up: $Q\to q\Phi$. A singlet $\Phi$ can decay to digluon ($gg$), diphoton ($\gm\gm$), and diboson ($VV^\prime$) states through the VLQ loops. The $q\leftrightarrow Q$ mixing also allows the $\Phi$ to have a $q\bar{q}$ decay at the tree level. As charted out in Ref.~\cite{Bhardwaj:2022nko}, the presence of an additional decay mode for the VLQs and the subsequent decay of the $\Phi$ opens a host of new and interesting possibilities in a large region of parameter space without any fine-tuning. In a follow-up paper~\cite{Bhardwaj:2022wfz}, we studied the prospects of the pair production of the $T$ quark decaying exclusively to the new boson, $pp\to T\bar T\to t \Phi \bar t \Phi$, with each $\Phi$ decaying into two gluon jets at the high-luminosity LHC (HL-LHC). In this paper, we look at the case of the $B$ quark. There are a few major differences between these two cases. For example, in the case of $B\to b\Phi$, the $\Phi$ dominantly decays to two jets, either two gluons or two $b$ quarks, in the entire parameter region of interest (see Ref.~\cite{Bhardwaj:2022nko} for the other possible modes). As a result, when the new decay mode of $B$ dominates, the resultant final states are purely hadronic, i.e., $B\bar{B}\to \left(b g g\right) \left(\bar{b} g g\right) / \left(b b \bar{b}\right) \left(\bar{b} b \bar{b}\right)$. This is unlike the case of $T$, where the final states have at least one top quark---one can use various analysis strategies (like tagging a boosted top quark or looking for a leptonically decaying top)
depending on the top quark decay modes to identify such a topology. The absence of a top quark in the fully-hadronic final states makes the searches for the $pp\to B\bar B\to b\Phi\bar b\Phi$ mode challenging. \begin{figure}[t!] \centering \includegraphics[scale=0.9]{B2_bPhi_inclusiveLimits} \caption{LHC exclusion limits on $B$ in the singlet $B$ and doublet models as a function of the branching ratio in the new decay mode.} \label{fig:b2_LHClimits} \end{figure} Even though the LHC is yet to search for a $B$ decaying to a singlet scalar, it is possible to draw bounds on the possible scenarios from the LHC data. In Ref.~\cite{Bhardwaj:2022nko}, we recast the current VLQ searches~\cite{CMS:2020ttz, CMS:2019eqb, ATLAS:2021ibc} to put mass exclusion bounds on $B$ in the presence of a new decay mode by adapting the branching ratio (BR) condition as \begin{equation} \label{eq:brEq} \beta_{bH} + \beta_{bZ} + \beta_{tW} = 1\to \left(1 - \beta_{b\Phi}\right) \end{equation} where $\beta_{X}=$ BR$(B \to X)$. When the mass of the $B$ is more than a TeV, $\beta_{t W} \approx 2\beta_{b Z} \approx 2\beta_{b H}$ if the $B$ is a weak singlet (the singlet $B$ model in Ref.~\cite{Bhardwaj:2022nko}), and $ \beta_{b Z} \approx \beta_{b H},\ \beta_{t W}\approx 0$ if it is a component of a $(T\ B)$ doublet (the doublet model). We show the recast limits for the two possible $B$ representations in Fig.~\ref{fig:b2_LHClimits}. [In addition to the rescaled bounds on VLQs, in the same paper we also draw bounds on the square of the $\Phi\to gg$ coupling times BR$(\Phi\to\gm\gm)$ from the ATLAS heavy-resonance search data in the diphoton mode~\cite{ATLAS:2021uiz}.] Constraints from the measurement of the $Z$ boson coupling to the left-handed $b$ quark~\cite{Gopalakrishna:2013hua, Agashe:2006at} and flavour-changing neutral-current couplings~\cite{Nir:1990yq} prevent the $b \leftrightarrow B$ mixing from being arbitrarily large.
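The BR bookkeeping implied by Eq.(\ref{eq:brEq}) for a heavy singlet $B$ can be sketched in a few lines; the value of $\beta_{b\Phi}$ below is illustrative:

```python
def singlet_B_branchings(beta_bPhi):
    # High-mass singlet B: beta_tW = 2 beta_bZ = 2 beta_bH, with the
    # SM modes rescaled so that all BRs sum to unity.
    rest = 1.0 - beta_bPhi
    return {"tW": rest / 2.0, "bZ": rest / 4.0, "bH": rest / 4.0,
            "bPhi": beta_bPhi}

brs = singlet_B_branchings(0.33)   # roughly the minimal beta_bPhi for 1.2 TeV
```

For $\beta_{b\Phi} = 0.33$ this gives $\beta_{tW} \approx 0.335$, so roughly a third of the decays still proceed through the $tW$ mode used in the monolepton analysis.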
From the rescaled $B$ limits in Fig.~\ref{fig:b2_LHClimits}, we see another difference from the $T$ case. The limits are not as restrictive as they are for the $T$---the mass limits relax significantly, especially for the singlet $B$. This can also be read as a relaxing condition on the BR in the extra mode. For example, if a $1.2$ TeV $T$ exists, then BR$(T\to t\Phi)$ must be more than $70\%$, whereas for a $1.2$ TeV $B$, $\bt_{b\Phi}$ can be just above $33\%$. A smaller BR in the new mode implies larger BRs in the SM modes. Hence, in addition to the challenging fully-hadronic final states, one can also make use of the semileptonic final states to search for the $B$ quark, like $pp\to B\bar B\to b\Phi\, \bar tW^+ + \bar b\Phi\, tW^-$ where the $t$ or the $W$ decays leptonically, especially since identifying a lepton, which can be used as a trigger, is much easier at the LHC. From the above mixed-decay modes of $B\bar{B}$, we see that two kinds of signal topologies are possible: (a) one with exactly one lepton and (b) one with two leptons. The monolepton signature is exclusive to the singlet $B$ model as the $tW$ mode is negligible in the doublet model~\cite{Bhardwaj:2022nko}. The dilepton signature is possible when a $B$ decays to a leptonically-decaying $Z$ boson. Hence, it is common to both the singlet and doublet models. However, to suppress the huge Drell-Yan dilepton background, a $Z$-veto cut is necessary. This cut not only suppresses the background but also kills the signal. \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{FeynD} \caption{Signal topology} \label{fig:FeynDiag} \end{figure} In this paper, we investigate the prospects of the mixed-decay mode with monolepton final states (see Fig.~\ref{fig:FeynDiag}) at the HL-LHC since it is an exclusive possibility for a close-to-TeV $B$. For the singlet $B$ model, this relatively simpler channel might have good prospects compared to the purely hadronic ones.
Among the lepton channels, the reasons for selecting the monolepton mode are mainly two: (a) $\bt_{tW} > \bt_{bZ/H}$ in the singlet $B$ model and (b) the leptonic branching fraction of the $W$ is larger than that of the $Z$, leading to more signal events in the monolepton channel than in the dilepton channel. The rest of the paper is organised as follows. We describe the singlet $B$ model in Sec.~\ref{sec:model}; discuss the collider phenomenology of the model and define the event selection criteria in Sec.~\ref{sec:colliderpheno}; describe a neural network (NN) that we finally use to isolate the signal from the background in Sec.~\ref{sec:dnn}; and present the final results in Sec.~\ref{sec:results}. Finally, we conclude in Sec.~\ref{sec:conclu}. \begin{figure*}[htp!] \captionsetup[subfigure]{labelformat=empty} \subfloat[\quad\quad(a)]{\includegraphics[width=0.75\columnwidth]{BR_Bprime_bMark1}\label{fig:br_B_bMark1}}\hspace{0.5cm} \subfloat[\quad\quad(b)]{\includegraphics[width=0.75\columnwidth]{BR_Phi_BSing_bMark1}\label{fig:br_Phi_bMark1}}\\ \subfloat[\quad\quad(c)]{\includegraphics[width=0.75\columnwidth]{BR_Bprime_bMark2}\label{fig:br_B_bMark2}}\hspace{0.5cm} \subfloat[\quad\quad(d)]{\includegraphics[width=0.75\columnwidth]{BR_Phi_BSing_bMark2}\label{fig:br_Phi_bMark2}}\\ \subfloat[\quad\quad(e)]{\includegraphics[width=0.75\columnwidth]{BR_Bprime_bMark3}\label{fig:br_B_bMark3}}\hspace{0.5cm} \subfloat[\quad\quad(f)]{\includegraphics[width=0.75\columnwidth]{BR_Phi_BSing_bMark3}\label{fig:br_Phi_bMark3}}\\ \subfloat[\quad\quad(g)]{\includegraphics[width=0.75\columnwidth]{BR_Bprime_bMark4}\label{fig:br_B_bMark4}}\hspace{0.5cm} \subfloat[\quad\quad(h)]{\includegraphics[width=0.75\columnwidth]{BR_Phi_BSing_bMark4}\label{fig:br_Phi_bMark4}}\\ \caption{Decays of $B$ and $\phi$ in the singlet $B$ model for benchmark choices of parameters. The $B\to b\Phi$ decay dominates over all the other decay modes in (a)--(d), whereas BR$(B\to tW)\approx$ BR$(B\to b\phi)$ in (e)--(h).
In the right panels, we show that either $\phi\to gg$ or $\phi\to b\bar{b}$ can dominate in both scenarios.} \label{fig:br} \end{figure*} \section{Singlet $B$ Model}\label{sec:model} \noindent We consider the singlet $B$ model from Ref.~\cite{Bhardwaj:2022nko}, which is a simple extension of the SM containing a TeV-scale weak-singlet $B\equiv\left(\mathbf{3},\mathbf{1}, -1/3\right)$ and a sub-TeV weak-singlet scalar or pseudoscalar $\Phi\equiv\left(\mathbf{1},\mathbf{1}, 0\right)$. In a similar notation, the mass terms relevant to the bottom sector can be written as \begin{align} \mathcal{L} \supset -\Big\{&\lambda_{b}\left(\bar{Q}_LH_B\right)b_R +\omega_{B}\left(\bar{Q}_LH_B\right)B_R\ \nonumber\\ & + M_{B}\bar{B}_LB_R+ h.c.\Big\}, \end{align} where $Q_L$ is the third-generation quark doublet and $H_B^T=1/\sqrt{2} \begin{pmatrix} 0 & v \end{pmatrix}$, with $v$ being the vacuum expectation value of the Higgs field. The Yukawa coupling of the bottom quark is denoted by $\lambda_b$, $\omega_{B}$ denotes the mixing between the SM and the vectorlike $B$, and $M_{B}$ is the mass term of $B$. After EWSB, we get the mass matrix $\mathcal{M}$ as \begin{equation} \mathcal{L}_{mass} = \begin{pmatrix} \bar{b}_L & \bar{B}_L \end{pmatrix} \begin{pmatrix} \lambda_b \frac{v}{\sqrt2} & \omega_{B}\,\frac{v}{\sqrt2} \\ 0 & M_B \end{pmatrix} \begin{pmatrix} b_R \\ B_R \end{pmatrix} + h.c. \end{equation} The matrix is diagonalised by a bi-orthogonal rotation with two mixing angles $\theta_L$ and $\theta_R$, \begin{equation} \begin{pmatrix} b_{L/R} \\ B_{L/R}\end{pmatrix} = \begin{pmatrix} c_{L/R} & s_{L/R} \\ -s_{L/R} & c_{L/R} \end{pmatrix} \begin{pmatrix} b_{1_{L/R}} \\ b_{2_{L/R}} \end{pmatrix}, \end{equation} where $s_{P}=\sin\theta_{P}$ and $c_{P}=\cos\theta_{P}$ for the two chirality projections, and $b_1$ and $b_2$ are the mass eigenstates. We identify $b_1$ as the physical bottom quark; $b_2$ is essentially the $B$ quark for small $\om_B$.
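Numerically, the bi-orthogonal rotation above is just a singular value decomposition of the mass matrix; the parameter values below are illustrative (chosen so that the light eigenvalue lands near the bottom-quark mass), not a fit:

```python
import numpy as np

v, lam_b, omega_B, M_B = 246.0, 0.024, 0.3, 1200.0   # GeV, illustrative
M = np.array([[lam_b * v / np.sqrt(2.0), omega_B * v / np.sqrt(2.0)],
              [0.0, M_B]])

# M = O_L diag(m_B, m_b) O_R^T: the singular values are the mass
# eigenvalues and the columns of O_L, O_R encode theta_L, theta_R.
O_L, masses, O_R_T = np.linalg.svd(M)    # singular values in descending order
m_B_phys, m_b_phys = masses
sin_thetaL = abs(O_L[0, 0])              # small b_L - B_L admixture
```

The light singular value comes out near $4.2$ GeV, the heavy one near $M_B$, and the left-handed $b$-$B$ admixture is small, consistent with $b_2$ being essentially the $B$ quark for small $\omega_B$.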
Hence, we use the notations $B$ and $b_2$ interchangeably. The Lagrangian for $\Phi$ interactions is given by \begin{equation} \mathcal{L} = -\lambda^{a}_{\Phi}\Phi \bar{B}_L \Gamma B_R - \lambda_{\Phi}^{b}\Phi \bar{B}_L\Gamma b_R + h.c., \end{equation} where $\Gamma=\left\{1, i\gamma_{5}\right\}$ for $\Phi=\left\{ \mbox{scalar~}\phi, \mbox{pseudoscalar~}\eta \right\}$. Expanding the above Lagrangian in terms of $b_1$ and $b_2$ gives \begin{align} \mathcal{L} = &-\lambda_{\Phi}^{a}\Phi\left(c_L \bar{b}_{2L} - s_L \bar{b}_L\right)\Gamma\left(c_R b_{2R} - s_R b_R\right) \nonumber \\ &-\lambda_{\Phi}^{b}\Phi\left(c_L \bar{b}_{2L} - s_L \bar{b}_L\right)\Gamma\left(c_R b_{R} + s_R b_{2R}\right) + h.c. \end{align} Figure 4 of Ref.~\cite{Bhardwaj:2022nko} shows that in the singlet $B$ model, the dominant decay mode of $\Phi$ is not determined by the choice of the $\lm$ couplings shown in the above equations. It rather depends on the value of the off-diagonal mass term $\mu_{B1}=\om_B/\sqrt2$; the $\Phi\to gg$ mode dominates for small $\m_{B1}$, and the $\Phi\to b\bar b$ mode starts dominating with increasing $\m_{B1}$. In Fig.~\ref{fig:br}, we show some benchmark choices of parameters and the corresponding decays of $B$ and the scalar $\phi$. The $B\to b\phi$ decay dominates over all the other decay modes in Figs.~\ref{fig:br_B_bMark1}--\ref{fig:br_Phi_bMark2} whereas BR$(B\to tW)\approx$ BR$(B\to b\phi)$ in Figs.~\ref{fig:br_B_bMark3}--\ref{fig:br_Phi_bMark4}. The same also holds true for a pseudoscalar $\eta$. To cover both possibilities, our analysis relies only on the two-pronged nature of $\Phi$ (i.e., no $b$-tagging) for a comprehensive study of the prospects of the singlet $B$ model in the monolepton channel. \section{Collider phenomenology}\label{sec:colliderpheno} \noindent We use \textsc{FeynRulesv2.3}~\cite{Alloul:2013bka} to build the singlet $B+\Phi$ model and obtain the {\sc Universal FeynRules Output}~\cite{Degrande:2011ua} model files.
We use \textsc{MadGraph5v3.3}~\cite{Alwall:2014hca} to simulate the hard-scatterings at the leading order, \textsc{Pythia8}~\cite{Sjostrand:2014zea} for showering and hadronisation, and \textsc{Delphes3}~\cite{deFavereau:2013fsa} for simulating a generic LHC detector environment. The events are generated at $\sqrt{s} = 14$ TeV. To account for the boosts of the final state objects, we have modified the default \textsc{Delphes} card. For the electron, we update the \texttt{DeltaRMax} parameter (the maximum radius of an electron cone centred around the identified track) from $0.5$ to $0.2$, as per Ref.~\cite{ATLAS:2019qmc}. The $b$-tagging efficiency and the mistag rate for the lighter quarks were updated to reflect the medium working point of the DeepCSV tagger from Ref.~\cite{CMS:2017wtu}. We consider leptons with $p_T > 10$~GeV and $|\eta| < 2.5$. Our analysis relies on two types of jets, both clustered using the anti-$k_T$ algorithm~\cite{Cacciari:2008gp}, one with $R=0.4$ (AK-4) and the other with $R=1.2$ (fatjet). The AK-4 objects are required to pass a minimum-$p_T$ cut, $p_T > 20$ GeV and have $|\eta|<5$. We use the next-to-next-to-leading order (NNLO) signal cross sections (Fig.~\ref{fig:ppXS}) and for the background processes, we use the highest-order cross sections available in the literature (Table~\ref{tab:bg-crossx}). \begin{figure}[t!] \centering \includegraphics[scale=0.9]{ppXS_Bp} \caption{The signal cross section at the $14$ TeV LHC calculated from the leading order cross section as $\sg$(LO)$\times K_{\rm NNLO}$. We estimate the NNLO QCD $K$-factor from Ref.~\cite{CMS:2019eqb} as $1.3$.} \label{fig:ppXS} \end{figure} \begin{table}[t!] 
\centering \caption{Higher-order cross sections of the SM backgrounds considered in our analysis.\label{tab:bg-crossx}} {\renewcommand\baselinestretch{1.25}\selectfont \begin{tabular*}{\columnwidth}{l @{\extracolsep{\fill}} lrc} \hline Background & & $\sigma$ & QCD \\ Processes & & (pb) & order \\ \hline \hline \multirow{2}{*}{$V$ + jets~\cite{Balossini:2009sa, PhysRevLett.103.082001}} & $W$ + jets & $1.95 \times 10^5$ & NLO \\ & $Z$ + jets & $6.33 \times 10^4$ & NNLO \\ \hline $tt$~\cite{Muselli:2015kba} & $tt$ + jets & $988.57$ & N$^3$LO \\ \hline \multirow{3}{*}{Single $t$~\cite{Kidonakis:2015nna}} & $tW$ & $83.10$ & N$^{2}$LO \\ & $tb$ & $248.00$ & N$^{2}$LO \\ & $t$ + jets & $12.35$ & N$^{2}$LO \\ \hline \multirow{2}{*}{$VV$ + jets~\cite{Campbell:2011bn}} & $WW$ + jets & $124.31$ & NLO \\ & $WZ$ + jets & $51.82$ & NLO \\ \hline \multirow{3}{*}{$ttV$~\cite{Kulesza:2018tqz,LHCHiggsCrossSectionWorkingGroup:2016ypw}} & $ttZ$ & $1.05$ & NLO + NNLL \\ & $ttW$ & $0.65$ & NLO + NNLL \\ & $ttH$ & $0.6113$ & NLO \\ \hline \end{tabular*}} \end{table} \subsection{Signal topology} \noindent The signal process we look for is $pp \to B\bar{B} \to \left(b\Phi\right)\:\left(\bar{t} W^{+}\right)$, where one of the $B$'s decays to a $b$ quark and a $\Phi$, and the other to a $t$ quark and a $W$ boson. The $\Phi$ then decays to a pair of $b$ quarks or gluons [the parity of $\Phi$ does not affect our results, so we refer to the new boson as $\Phi$ even though we have used a scalar ($\phi$) to generate the signal events] and the $tW$ pair decays semileptonically, i.e., exactly one lepton comes out of the decay of either the top quark or the $W$ boson. Thus, the essential topology of our signal can be summarised in terms of high-$p_T$ objects as \begin{align} \mbox{exactly } 1 \mbox{ lepton } + \mbox{ at least } 1~ b \mbox{ jet } + \mbox{ jets}.
\end{align} To generate events for each signal benchmark, we pick model parameters such that BR$(B \to b\Phi) \approx 0.4$ while ensuring that the narrow width approximation remains valid for both $B$ and $\Phi$. Thus, for the $M_{B}$--$M_{\Phi}$ parameter space that we scan over, the results can be easily scaled for any BR in the new mode. We choose our analysis to be independent of the $\Phi$ branching in the $b\bar{b}$ or $gg$ mode by not explicitly ($b$-)tagging the decay products of $\Phi$. (However, we evaluate the analysis strategy on the events generated with $\Phi \to b\bar{b}$ and remark about the $gg$ mode analysis when it is relevant.) \begin{table*} \begin{centering} \caption{Cut flow for the three benchmark choices of the signal and the relevant background processes. We use the MLM jet-parton shower matching technique~\cite{Mangano:2006rw} to generate the background samples, as indicated by the additional jets in brackets. The events are estimated for a luminosity of $\mathcal{L}=3$ ab$^{-1}$.\label{tab:cut-flow}} {\renewcommand\baselinestretch{1.25}\selectfont \begin{tabular*}{\textwidth}{l @{\extracolsep{\fill}} rrrrrr} \hline & \multicolumn{6}{c}{Selection Criteria}\\\cline{2-7} & $\mathfrak{C}_1$ & $\mathfrak{C}_2$ & $\mathfrak{C}_3$ & $\mathfrak{C}_4$ & $\mathfrak{C}_5$ & $\mathfrak{C}_6$ \tabularnewline \hline\hline \multicolumn{7}{c}{Signal benchmarks} \\ \hline $M_{B}=1200, M_{\Phi}=400$ & $2619$ & $1681$ & $1677$ & $1628$ & $1176$ & $1029$ \tabularnewline $M_{B}=1500, M_{\Phi}=400$ & $478$ & $333$ & $332$ & $321$ & $258$ & $223$ \tabularnewline $M_{B}=1800, M_{\Phi}=700$ & $102$ & $75$ & $75$ & $72$ & $60$ & $53$ \tabularnewline \hline \multicolumn{7}{c}{Background processes} \\ \hline $t_{\ell}t_{h}~(+ 2j)$ & $2122983$ & $1864463$ & $1738473$ & $1400910$ & $483907$ & $277537$ \tabularnewline $W_{\ell}~(+ 2j)$ & $1451429$ & $1359057$ & $782998$ & $157115$ & $75886$ & $40337$ \tabularnewline $t_{\ell}t_{\ell}~(+ 2j)$ & $31460$ & $28244$ & $23377$ & $18490$
& $8833$ & $5519$ \tabularnewline $t_{h/\ell}W_{\ell/h} ~(+ 1j)$ & $63778$ & $53199$ & $43691$ & $36235$ & $9693$ & $4708$ \tabularnewline $Z_{\ell} ~(+ 2j)$ & $33771$ & $31786$ & $22847$ & $5324$ & $3247$ & $1962$ \tabularnewline $W_{\ell} Z_h ~(+ 1j)$ & $17206$ & $16209$ & $10070$ & $7350$ & $3290$ & $1670$ \tabularnewline $t_{\ell} t_{h}Z_h ~(+ 1j)$ & $6830$ & $6084$ & $5871$ & $4908$ & $2017$ & $1348$ \tabularnewline $ W_{\ell}W_{h}~(+ 1j)$ & $44568$ & $38566$ & $25330$ & $8709$ & $2427$ & $1325$ \tabularnewline $t_{\ell} t_{h}H ~(+ 1j)$ & $3360$ & $2976$ & $2907$ & $2740$ & $1219$ & $853$ \tabularnewline $t_{\ell} + b/j$ & $10830$ & $9324$ & $4357$ & $3311$ & $962$ & $608$ \tabularnewline $t_{h}t_{\ell/h}W_{h/\ell}~(+ 1j)$ & $556$ & $509$ & $501$ & $445$ & $113$ & $79$ \tabularnewline \hline \multicolumn{2}{c}{ } & \multicolumn{4}{r}{Total number of background events:} & $335946$\\ \hline\hline \end{tabular*}} \end{centering} \end{table*} \subsection{Topology-motivated selection criteria} \noindent As the mass difference between $B$ and $\Phi$ increases, the $\Phi$ can be tagged as a boosted fatjet. Looking at the signal topology, we design a set of selection criteria to select events to pass on to a Deep Neural Network (DNN). We demand that each selected event have the following characteristics: \begin{itemize} \item[$\mathfrak{C}_1$:] \emph{Exactly 1 lepton} ($\ell \in \left\{e,\mu\right\}$). \\ The lepton is required to have $p_T > 100$~GeV and $|\eta| < 2.5$, and must obey the updated isolation criteria mentioned earlier. \item[$\mathfrak{C}_2$:] \emph{$H_T > 900$~GeV}, where $H_T$ is the scalar sum of the transverse momenta of all hadronic objects in an event. \item[$\mathfrak{C}_3$:] \emph{At least 3 AK-4 jets with $p_T > 60$~GeV.} \\ The leading jet must have $p_T > 120$~GeV. \item[$\mathfrak{C}_4$:] \emph{At least 1 $b$-tagged jet with $p_T > 60$~GeV.} \\ At least one of the AK-4 jets must be identified as a $b$ jet.
\item[$\mathfrak{C}_5$:] \emph{At least 1 fatjet ($J$) with $R=1.2$ and $p_T > 500$ GeV.} \\ The fatjet is clustered using the anti-$k_T$ algorithm and the parameters have been optimised to tag a $\Phi$ fatjet. We also demand the invariant mass of the fatjet, $M_{J}$, to be more than $250$ GeV. \item[$\mathfrak{C}_6$:] \emph{$\Delta R_{b J} > 1.2$.}\\We demand that at least one of the identified $b$ jets is well separated from the leading fatjet passing $\mathfrak{C}_5$, i.e., the $b$ jet lies outside the fatjet cone. \end{itemize} The invariant mass cut on the $\Phi$ fatjet limits contamination from the top or $W$ fatjets, and the $b$ isolation criterion in $\mathfrak{C}_6$ reduces the background contribution from the semileptonic $t\bar t$ process significantly but does not affect the signal greatly (since the hardest $b$ comes from a $B$ quark in the signal). We set the mass range of $\Phi$ as $M_{\Phi} \in \left[ 300,700 \right]$~GeV in accordance with the benchmark masses of $\Phi$ considered in Ref.~\cite{Bhardwaj:2022nko}. We see that for a given mass of $B$, the final efficiency after these cuts scales positively with $M_{\Phi}$---the fatjet criterion is more efficient at detecting a $\Phi$ of higher mass. \subsection{Background processes} \noindent We identify the SM background processes that can mimic the signal and project their contributions at the HL-LHC. \begin{itemize} \item[\ding{111}] \emph{$V+$ jets}, where $V \in \left\{W^{\pm}, Z\right\}$ : A priori, $W^\pm (+ 2j)$ is the most dominant monolepton background of all the SM processes. While generating the events, we ensure that the $W$ boson decays leptonically and up to two jets are matched with parton showers. This background is severely cut at the analysis stage due to the requirement of a $b$ jet as well as a high-invariant-mass fatjet. Furthermore, the requirement of high-$p_T$ jets also cuts this background.
We also consider leptonically decaying $Z (+ 2j)$, but this background falls considerably after the lepton, $b$-jet, and fatjet cuts. \item[\ding{111}] Semileptonic $t\bar{t}+$ jets: We simulate the process by matching up to two extra jets. Its topology closely matches our signal as a fatjet can come from the hadronically decaying top. However, the additional requirement of a reasonably well-separated $b$ jet from the fatjet significantly cuts down this process's contribution. This is due to the fact that the separated $b$ jet has to come from the leptonically decaying $t$ quark. \item[\ding{111}] Fully leptonic $t\bar{t}$ + jets: We also consider the fully leptonic case after matching up to two extra jets for completeness' sake. This background is mitigated by the requirements on the fatjet. \item[\ding{111}] Diboson backgrounds ($VV +$ jets): Of the diboson backgrounds, we consider the ones that match the selection criteria: $W_{\ell}W_h$ and $W_{\ell}Z_h$. We simulate these processes by matching one extra jet. The $b$ jet and heavy fatjet requirements reduce these backgrounds. \item[\ding{111}] $t\bar{t}V + $ jets, where $V \in \left\{W^{\pm},Z,H\right\}$: This is similar to the semileptonic $t\bar{t}$ process with an additional SM boson. These processes mimic the signal topology well---more than the $t\bar{t}$ background. When $V = W^\pm$, the lepton can come from either of the top quarks or the $W$ boson. The leptonic mode of $Z$ has a dileptonic signature and is not considered. The reconstructed fatjet is generally the hadronic top quark, but $V$ may also be reconstructed. The requirement of $\Delta R_{bJ} > 1.2$ cuts out the $ttW^\pm$ background but, due to the significant branching of $Z/H$ decays into the $b\bar{b}$ mode, it has a lesser effect on the processes containing them. \item[\ding{111}] $tW^{\pm} + $ jets: Here, either the $t$ quark or the $W$ decays leptonically and the hadronically decaying object can form a fatjet.
However, the $\Delta R_{b J} > 1.2$ requirement cuts the number of top fatjets. \item[\ding{111}] $t+b$/jet: We consider the leptonically decaying single-top process. Its contribution falls significantly after the kinematic cuts are enforced (e.g., the scalar $H_T$ cut and the $p_T$ cuts on the jets and the lepton). \end{itemize} \label{sec:gen-info}In order to save computation time, we generate these background processes with some generation-level cuts, which are reinforced by the analysis-level selection criteria. We summarise the effects of the cuts $\mathfrak{C}_1$--$\mathfrak{C}_6$ on the signal and background events in Table~\ref{tab:cut-flow}. \subsection{Kinematic features of the signal} \noindent Since the selection cuts in $\mathfrak{C}_1$--$\mathfrak{C}_6$ are quite strong, the surviving signal and background events look topologically very similar. Hence, we use a DNN on the events passing $\mathfrak{C}_1$--$\mathfrak{C}_6$ to isolate the signal. The network is trained on different kinematic distributions of the signal and background events. Each selected event has some well-defined objects---the high-$p_T$ lepton, the three AK-4 jets, the $b$-tagged jet, the fatjet, and the missing transverse energy $E^{\text{miss}}_T$ (since the lepton in the signal comes from the decay of a $W$ boson).
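For concreteness, the preselection $\mathfrak{C}_1$--$\mathfrak{C}_6$ that defines these selected events can be prototyped on simplified event records. The thresholds below follow the criteria listed above; the dictionary-based event format is a simplifying assumption, and $H_T$ is approximated by the AK-4 jet sum rather than all hadronic objects:

```python
import math

def delta_r(o1, o2):
    """Separation in the (eta, phi) plane."""
    dphi = abs(o1["phi"] - o2["phi"]) % (2 * math.pi)
    if dphi > math.pi:
        dphi = 2 * math.pi - dphi
    return math.hypot(o1["eta"] - o2["eta"], dphi)

def passes_selection(ev):
    # C1: exactly one lepton with pT > 100 GeV and |eta| < 2.5
    leps = [l for l in ev["leptons"] if l["pt"] > 100 and abs(l["eta"]) < 2.5]
    if len(leps) != 1:
        return False
    # C2: scalar H_T > 900 GeV (approximated here by the AK-4 jet sum)
    if sum(j["pt"] for j in ev["ak4"]) <= 900:
        return False
    # C3: at least 3 AK-4 jets with pT > 60 GeV; leading jet above 120 GeV
    hard = sorted((j for j in ev["ak4"] if j["pt"] > 60),
                  key=lambda j: j["pt"], reverse=True)
    if len(hard) < 3 or hard[0]["pt"] <= 120:
        return False
    # C4: at least one b-tagged AK-4 jet with pT > 60 GeV
    bjets = [j for j in hard if j.get("btag")]
    if not bjets:
        return False
    # C5: at least one fatjet with pT > 500 GeV and invariant mass > 250 GeV
    fats = [J for J in ev["fatjets"] if J["pt"] > 500 and J["mass"] > 250]
    if not fats:
        return False
    # C6: some b jet lies outside the leading fatjet cone (Delta R > 1.2)
    lead_J = max(fats, key=lambda J: J["pt"])
    return any(delta_r(b, lead_J) > 1.2 for b in bjets)

# A signal-like toy event (made-up kinematics).
event = {
    "leptons": [{"pt": 180.0, "eta": 0.4, "phi": 1.0}],
    "ak4": [{"pt": 450.0, "eta": -0.8, "phi": 2.9, "btag": True},
            {"pt": 300.0, "eta": 0.1, "phi": -0.5},
            {"pt": 250.0, "eta": 1.2, "phi": 0.3}],
    "fatjets": [{"pt": 700.0, "eta": 0.9, "phi": -0.2, "mass": 380.0}],
}
selected = passes_selection(event)
```

In a real analysis these cuts are applied to \textsc{Delphes} output; the sketch only makes the logical structure of the cut flow explicit.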
We feed the network the following kinematic properties of these objects: \begin{figure*} \centering \captionsetup[subfigure]{labelformat=empty} \subfloat[\quad\quad(a)]{\includegraphics[width=0.27\textwidth]{scalarHT}\label{fig:featDistA}}\quad\quad \subfloat[\quad\quad(b)]{\includegraphics[width=0.27\textwidth]{MET}\label{fig:featDistB}}\quad\quad \subfloat[\quad\quad(c)]{\includegraphics[width=0.27\textwidth]{pt_subsubleading_jet}\label{fig:featDistC}}\\ \subfloat[\quad\quad(d)]{\includegraphics[width=0.27\textwidth]{m_leading_bjet_selected_fjet}\label{fig:featDistD}}\quad\quad \subfloat[\quad\quad(e)]{\includegraphics[width=0.27\textwidth]{m_leading_jet_subleading_jet}\label{fig:featDistE}}\quad\quad \subfloat[\quad\quad(f)]{\includegraphics[width=0.27\textwidth]{girth_subsubleading_jet}\label{fig:featDistF}}\\ \subfloat[\quad\quad(g)]{\includegraphics[width=0.27\textwidth]{r_leading_bjet_selected_fjet}\label{fig:featDistG}}\quad\quad \subfloat[\quad\quad(h)]{\includegraphics[width=0.27\textwidth]{nSubFatJet_2_21}\label{fig:featDistH}}\quad\quad \subfloat[\quad\quad(i)]{\includegraphics[width=0.27\textwidth]{r_lepton_met}\label{fig:featDistI}}\\ \caption{Density plots of select features for signal and background at various signal benchmarks.}\label{fig:featDist} \end{figure*} \begin{enumerate} \item \emph{Basic variables:} For each identified object, we consider the transverse momentum $(p_T)$. The scalar $H_T$ of the event and the missing energy are also considered. The set of kinematic variables chosen is $\left\{H_T, |E^{\text{miss}}_T|, p_{T_\ell}, p_{T_{j1}}, p_{T_{j2}}, p_{T_{j3}}, p_{T_b}, p_{T_J}\right\}$. \item \emph{Jet-substructure variables:} For the fatjet, we calculate the $n$-subjettiness ratios $(n=1,2,3)$ for multiple $\beta$ values $(\beta = 1.0, 2.0)$ to take the prongness of $J$ into account. The set used is $\left\{ \tau_{21}^{\beta=1},\tau_{21}^{\beta=2}, \tau_{32}^{\beta=1}, \tau_{32}^{\beta=2}\right\}$.
\item \emph{Distance in the $\eta-\phi$ plane:} We calculate the separation between two objects as $\Delta R_{ij} = \sqrt{\Delta \phi_{ij}^2 + \Delta \eta_{ij}^2}$. We choose all possible pairs from the reconstructed objects and calculate the distance between them. \item \emph{Invariant masses of objects and their combinations:} We consider the masses of hadronic objects and the invariant masses of combinations of $2$ or $3$ objects. The set of (invariant) mass variables is $\left\{m_{i'}, m_{ij}, m_{ijk} \right\}$, where $i'$ denotes a reconstructed hadronic object and each of $i$, $j$, and $k$ represents any reconstructed object. \item \emph{Girth/width of hadronic objects:} The girth (width) of a hadronic object is the $p_T$-weighted average distance of the constituents of the jet from the jet axis~\cite{Behr:2015oqq}. We also consider the higher-order central moments---variance, skewness, and kurtosis---of the distribution. The set of girth (width) related variables considered is $\left\{ g_{i'}, \text{Skew}[i'], \text{Kurt}[i'], \text{Var}[i']\right\}$, where $i'$ denotes a reconstructed hadronic object and Skew, Kurt, Var stand for the skewness, kurtosis, and the variance of the $p_T$-weighted distribution of the constituents of the hadronic object. \end{enumerate} We note that some of these variables can be correlated; for example, from the selection criteria, we expect that one of the leading three AK-4 jets will be identical to the $b$ jet in some events since the $b$ jet is also an AK-4 jet. In Fig.~\ref{fig:featDist}, we show the distributions for a subset of variables where the separation between the signal and the background distributions is clear. The distributions are shown for three benchmark parameter choices: (a) $M_{B} = 1.4$ TeV, $M_{\Phi} = 0.2$ TeV, (b) $M_{B} = 1.5$ TeV, $M_{\Phi} = 0.5$ TeV, and (c) $M_{B} = 1.8$ TeV, $M_{\Phi} = 0.9$ TeV. We choose the benchmarks across the signal mass range to understand the key trends.
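The girth and its higher $p_T$-weighted central moments (item 5 above) can be computed directly from jet constituents. A minimal sketch, with made-up constituents of a collimated jet whose axis sits at $(\eta,\phi)=(0,0)$:

```python
import math

def weighted_shape_moments(constituents, axis_eta=0.0, axis_phi=0.0):
    """Girth plus variance, skewness, and kurtosis of the pT-weighted
    Delta-R distribution of constituents about the jet axis.
    (No guard for the degenerate case where all Delta R coincide.)"""
    w = [c["pt"] for c in constituents]
    r = [math.hypot(c["eta"] - axis_eta, c["phi"] - axis_phi)
         for c in constituents]
    W = sum(w)
    girth = sum(wi * ri for wi, ri in zip(w, r)) / W       # first moment
    var = sum(wi * (ri - girth) ** 2 for wi, ri in zip(w, r)) / W
    sd = math.sqrt(var)
    skew = sum(wi * ((ri - girth) / sd) ** 3 for wi, ri in zip(w, r)) / W
    kurt = sum(wi * ((ri - girth) / sd) ** 4 for wi, ri in zip(w, r)) / W
    return girth, var, skew, kurt

# Made-up constituents (pT in GeV): most of the momentum close to the axis.
constituents = [{"pt": 50.0, "eta": 0.05, "phi": 0.0},
                {"pt": 30.0, "eta": 0.10, "phi": 0.0},
                {"pt": 20.0, "eta": 0.30, "phi": 0.0}]
g, var, skew, kurt = weighted_shape_moments(constituents)
```

A more boosted, more collimated jet shifts the weight towards small $\Delta R$ and thus lowers the girth, which is the trend discussed below for the signal.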
The total background distribution is generated by combining events from different background processes in proportion to their cross sections. The boosts of the identified objects in the signal are significantly higher than those in the background. This trend is evident from the distribution of the transverse momentum of the sub-subleading jet, $j_3$ [Fig.~\ref{fig:featDistC}], where the signal peaks appear at higher values than the SM processes. The scalar $H_T$ [Fig.~\ref{fig:featDistA}] and missing transverse energy [Fig.~\ref{fig:featDistB}] distributions also indicate that the signal is well separated in the variables defined on the transverse plane. Figs.~\ref{fig:featDistD} and~\ref{fig:featDistE} show two invariant masses constructed from the identified objects---the invariant mass of the $b$ jet-fatjet pair $(m_{bJ})$ and the invariant mass of the leading two AK-4 jets $(m_{j_1 j_2})$, respectively. We expect $m_{bJ}$ to reconstruct the $B$ mass most of the time; we see that the peaks of the signal distributions appear close to the benchmark $B$ masses. Fig.~\ref{fig:featDistF} shows the girth of the sub-subleading jet. The distributions for the signal benchmarks are almost identical and separated from the background distribution. As the boost of a jet increases, it becomes more collimated. Since the jets from the signal process are significantly more boosted than those from the background, the girth of the background peaks at a higher value than the signal points. This trend is broadly true for the girth distributions of all hadronic objects. The separations between the fatjet and the $b$ jet, shown in Fig.~\ref{fig:featDistG}, indicate that the fatjet and the selected $b$ jet are mostly back to back. In the signal, the $b$ jet from the $B\to b\Phi$ decay has higher $p_T$ than the one coming from the $B\to tW$ branch, but the $b$-tagging efficiency falls considerably as the jet $p_T$ increases.
Hence, the $b$ jet closest to the fatjet is less likely to be $b$ tagged. In the case of the background, the fatjet is more often the hadronic top quark from the semileptonic $t\bar t$ process, where the $b$ jet comes from the leptonic top. The $n$-subjettiness ratio $(\tau_{21}^{\beta=2})$ in Fig.~\ref{fig:featDistH} shows that the signal distributions mostly have a two-pronged fatjet whereas the background fatjet is more likely to be a top quark. The separation between the missing energy and the lepton $(\Delta R_{E_{T}^{\text{miss}}\ell})$ shown in Fig.~\ref{fig:featDistI} indicates how boosted the leptonic $W$ is---the signal distributions peak at a lower value compared to the background distribution since, generally, the $W$'s in the signal have more boost. As the mass of the $B$ quark increases, the peak shifts to slightly lower values, indicating a more collimated $W$. \section{Analysis: Deep Neural Network}\label{sec:dnn} \noindent NNs consist of a series of perceptron blocks with a non-linear activation function. A perceptron block is simply a linear transformation of the input vector. NNs are motivated by the human brain, where a biological neuron is modelled by a perceptron block and the neuron interactions are modelled with non-linear activation functions. A `neuron interaction' can be mathematically expressed as \begin{align} f(\mathbf{x}) = \sigma(\mathbf{W} \mathbf{x} + \mathbf{b}) \end{align} \begin{figure}[t!] \centering \captionsetup[subfigure]{labelformat=empty} \subfloat[\quad\quad(a)]{\includegraphics[width=0.47\columnwidth]{ns_full}} \quad \subfloat[\quad\quad(b)]{\includegraphics[width=0.47\columnwidth]{nb_full}} \caption{The number of (a) signal and (b) background events surviving the various network-response thresholds for the signal benchmark $M_B = 1.5$ TeV and $M_\Phi = 0.4$ TeV.} \label{fig:DNN_NsNb} \end{figure} where $\sigma$ is a non-linear activation function such as the sigmoid function.
Deep NNs can effectively approximate any real continuous function~\cite{HornikEtAl89} provided they have sufficiently large hidden layers. As indicated earlier, we use a DNN to obtain the significance of the $B$ signal at the HL-LHC. \begin{figure*}[htp] \centering \includegraphics[width=0.6\textwidth]{B-Phi_SigGrid} \caption{Statistical significances predicted by the DNN model for various combinations of $M_{B}$ and $M_{\Phi}$. BR$(B\to b\Phi)$ for each parameter point with which the signal events are generated is shown on the top. Significance estimates for $M_{B} <1.2$ TeV are presented separately---from the LHC limits in Fig.~\ref{fig:b2_LHClimits}, we see that these masses are allowed only for a higher branching in the new decay mode. } \label{fig:sigGrid} \end{figure*} \begin{figure*}[htp!] \captionsetup[subfigure]{labelformat=empty} \subfloat[\quad\quad(a)]{\includegraphics[width=0.9\columnwidth]{BRContour_Phi300}}\hspace{0.5cm} \subfloat[\quad\quad(b)]{\includegraphics[width=0.9\columnwidth]{BRContour_Phi700}}\\ \caption{HL-LHC reach plots for (a) $M_{\Phi}=300$ GeV and (b) $M_{\Phi} = 700$ GeV. The green areas are discoverable ($5\sigma$) and the red area can be excluded ($2\sigma$). The grey areas are ruled out by the LHC limits in Fig.~\ref{fig:b2_LHClimits}.} \label{fig:brContours} \end{figure*} \subsection{Network architecture and optimisation} \noindent A standard DNN architecture is used in our analysis. The selected network is composed of two linear layers (of dimension $128$) with Mish activation~\cite{2019arXiv190808681M} and batch normalisation~\cite{2015arXiv150203167I}. Additionally, Dropout~\cite{JMLR:v15:srivastava14a} with a dropout probability of $0.2$ and an L2 weight decay of $10^{-4}$ are used to regularise the training. The NN architecture is obtained by performing a grid search over the hyper-parameters. The network is trained using the \texttt{AdamW} optimizer~\cite{2017arXiv171105101L}.
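A minimal forward-pass sketch of the architecture just described (two hidden layers of width $128$ with batch normalisation and Mish) is given below in plain \textsc{NumPy}. Dropout and weight decay act only during training and are therefore omitted; the input dimension and the random weights are placeholders, not the trained model:

```python
import numpy as np

def mish(x):
    # Mish(x) = x * tanh(softplus(x)); logaddexp keeps softplus stable.
    return x * np.tanh(np.logaddexp(0.0, x))

def batchnorm(x, eps=1e-5):
    # Plain normalisation over the batch (learned scale/shift omitted here).
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

rng = np.random.default_rng(0)
n_features = 30                               # hypothetical input dimension
W1 = rng.normal(0.0, 0.1, (n_features, 128)); b1 = np.zeros(128)
W2 = rng.normal(0.0, 0.1, (128, 128));        b2 = np.zeros(128)
W3 = rng.normal(0.0, 0.1, (128, 1));          b3 = np.zeros(1)

def forward(x):
    h = mish(batchnorm(x @ W1 + b1))
    h = mish(batchnorm(h @ W2 + b2))
    return 1.0 / (1.0 + np.exp(-(h @ W3 + b3)))   # network response in (0, 1)

scores = forward(rng.normal(size=(8, n_features)))
```

In the actual analysis the equivalent model is built and trained in \textsc{Pytorch}; the sketch only fixes the data flow through the layers.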
Classification tasks generally minimise a cross-entropy loss between classes to get the best performance. However, a simple cross-entropy loss cannot account for the differences in cross sections of the various processes because each event is weighted equally for training the network. Since we are considering multiple processes with different cross sections, the higher the cross section of a process, the higher should be the penalty for misclassifying events from that process. In other words, the weight of each event should scale positively with the cross section of the generating process. Similarly, if we feed the network a large number of events for a single process, the weight of an individual event of that process should decrease with that number---there are more examples for the classifier to learn from. Therefore, we use a weighted form of cross-entropy loss to train the network, where the weight $\omega$ of an event of a particular process $p_i$ is given by: \begin{equation} \omega_{p_i} = \sqrt{\frac{\sigma_{p_i}\ \mathcal{L}}{\mathcal{N}_{p_i}}}. \end{equation} Here, $\sigma_{p_i}$ denotes the cross section of the process $p_i$, $\mathcal{N}_{p_i}$ is the number of events fed into the network, and $\mc L=3000$ fb$^{-1}$ is the experimental luminosity. The square root in the weight is determined empirically: the weights perform better with the square root than without. Disregarding the effect of discretisation when calculating the signal significance, we find a good correlation between the loss and the significance. The models are implemented using \textsc{Pytorch}~\cite{NEURIPS2019_bdbca288} and trained on an \texttt{Nvidia GTX 1080Ti} GPU (though we find that they train fairly quickly on a CPU too). While estimating the significance of the signal over the background, we perform a scan across the network response to achieve the best performance.
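The per-process event weights and the significance-based scan over the network response can be sketched as follows. The significance formula is the one used throughout our analysis; the response/yield pairs in the scan are illustrative, while the final line evaluates the significance for the raw $\mathfrak{C}_6$ yields of the $M_B=1.5$ TeV, $M_\Phi=400$ GeV benchmark from Table~\ref{tab:cut-flow} (i.e., before any DNN selection):

```python
import math

def event_weight(sigma_fb, n_generated, lumi_fb=3000.0):
    """Per-event loss weight for a process: sqrt(sigma * L / N_generated).
    Here sigma is taken in fb and L in fb^-1, so sigma * L is an event count."""
    return math.sqrt(sigma_fb * lumi_fb / n_generated)

def z_score(ns, nb):
    """Signal significance used in the analysis (likelihood-ratio form)."""
    return math.sqrt(2.0 * (ns + nb) * math.log((ns + nb) / nb) - 2.0 * ns)

def best_threshold(sig, bkg, n_steps=100):
    """Scan the network response in (0, 1); sig and bkg are lists of
    (response, expected_yield) pairs. Returns the best (threshold, Z)."""
    best_t, best_z = 0.0, 0.0
    for i in range(n_steps):
        t = i / n_steps
        ns = sum(w for r, w in sig if r > t)
        nb = sum(w for r, w in bkg if r > t)
        if ns > 0 and nb > 0:
            z = z_score(ns, nb)
            if z > best_z:
                best_t, best_z = t, z
    return best_t, best_z

# Pre-DNN significance for the 1.5 TeV benchmark cut-flow yields (Table 2).
z_c6 = z_score(223.0, 335946.0)
```

The scan mirrors the procedure shown in Fig.~\ref{fig:DNN_NsNb}: tightening the threshold removes background much faster than signal, so the significance typically peaks at an intermediate response value.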
To tune the network, we pick the signal benchmark point $(M_{B},M_{\Phi}) = (1.5, 0.4)$ TeV, which is near the centre of the parameter space considered. We use \textsc{Weights and Biases}~\cite{wandb}, which provides tools and neat visualisations for the metrics required to pick the best model. It also provides a grid-search feature, which we use to scan over a multi-dimensional grid of hyperparameters. We search over the following set of hyperparameters: (a) the number of hidden layers of the network, (b) the number of nodes in each hidden layer, (c) the learning rate, (d) the L2 regularisation coefficient, and (e) the dropout rate. Regularisation and dropout are ways to prevent the network from overfitting to the training data. L2 regularisation shrinks the weights of the layers and prevents the network from learning complex functions (which are usually highly sensitive to noise and prone to overfitting), whereas dropout reduces the network's dependence on a particular set of variables by randomly `dropping' (making them unavailable) certain variables while training the network. We pick the best-performing model with the least complexity, i.e., the fewest hidden layers and nodes in each layer, as the final candidate for our analysis. The performance metric is the signal significance ($\mc Z$ score), \begin{equation} \label{eq:Zscore} \mathcal{Z} = \sqrt{2\left(N_S + N_B\right)\ln\left(\frac{N_S+N_B}{N_B}\right) - 2N_S} \end{equation} where $N_S$ and $N_B$ are the numbers of signal and background events allowed by the network at $3$ ab$^{-1}$. We scan over the network response, a value between $0$ (ideal background) and $1$ (ideal signal), on the validation dataset against the predicted significance and pick the network response with the maximum value as the threshold for classifying an event as signal. Fig.~\ref{fig:DNN_NsNb} shows that as we increase the network response, i.e.,
demand a more stringent classification, $N_S$ decreases smoothly whereas $N_B$ falls drastically. \section{Results}\label{sec:results} \noindent We present the statistical significance of the signal [Eq.~\eqref{eq:Zscore}] for $\bt_{b\Phi}=0.4$ as predicted by the DNN model over the $M_{B}-M_{\Phi}$ plane in Fig.~\ref{fig:sigGrid}. We show the significance predictions for $M_{B}<1.2$ TeV in a separate panel---such lighter $B$ quarks are ruled out by the LHC for low BRs in the new mode (Fig.~\ref{fig:b2_LHClimits} shows that for the $M_{B}=1.0$ and $1.1$ TeV points to remain viable, BR$(B\to b\Phi)$ has to be greater than $0.71$ and $0.55$, respectively). In Fig.~\ref{fig:brContours}, we present the discovery $(5\sigma)$ and exclusion $(2\sigma)$ regions in the $\beta_{b\Phi}-M_{B}$ plane for two benchmark masses of $\Phi$, $M_{\Phi}=300$ and $700$ GeV. The trends in Fig.~\ref{fig:brContours} can be understood by looking at the dependence of the signal yield on the BR in the new mode. From Eq.~\eqref{eq:brEq}, we see that the signal yield scales as $\beta_{b\Phi} \left(1 - \beta_{b\Phi}\right)$ and is maximised at $\beta_{b\Phi}=0.5$. For every mass point, we search over $\beta_{b\Phi}\in [0.1,0.9]$ to find the maximum and minimum values for $\mc Z=5$ and $2$. The grey regions are excluded by the rescaled LHC limits (shown in Fig.~\ref{fig:b2_LHClimits}) for the singlet $B$ model. \section{Conclusions}\label{sec:conclu} \noindent As a follow-up to Refs.~\cite{Bhardwaj:2022nko, Bhardwaj:2022wfz}, in this paper, we studied the discovery/exclusion prospects of the vectorlike weak-singlet $B$ quark in the presence of a lighter weak-singlet spinless state $\Phi$ at the HL-LHC. The singlet $\Phi$, which otherwise shares no direct couplings with the SM quarks, couples with the third-generation quarks when they mix with VLQs (which directly couple with $\Phi$) after EWSB. Hence, after EWSB, $\Phi$ can have tree-level decays to third-generation quark pairs.
It can also decay into a pair of gluons (or, in fewer cases, into pairs of other SM bosons) through quark loops. In Ref.~\cite{Bhardwaj:2022nko}, we mapped the possibilities to explore such setups at the LHC. We showed that for a VLQ to decay into $\Phi$ and the $\Phi$ to decay to a pair of gluons or third-generation quark(s), no fine-tuning of the parameters is needed. For this study, in particular, we focused on the $B$ quark pair production channel where one $B$ quark decays to a $b$ quark and a $\Phi$, and the other decays to a top quark and a $W$ boson. This is a unique signature of the singlet $B$ model~\cite{Bhardwaj:2022nko}, which, when the $W$ decays leptonically, offers a leptonic handle at the LHC. Otherwise, the fully hadronic final states from the $B$ quark pair production are considerably more challenging to probe compared to the semileptonic case or the case of the $T$ quark in the presence of $\Phi$. We postpone the analysis of the fully hadronic channel to a future publication. Since the $\Phi$ dominantly decays to a pair of $b$ quarks or gluons in the singlet $B$ model~\cite{Bhardwaj:2022nko}, we considered a set of signal selection criteria that is agnostic of the $\Phi$ decay modes, which helped us explore a large part of the parameter space of the singlet $B$ model. We further used a DNN model (trained on a large set of kinematic variables) for various benchmark masses of $B$ and $\Phi$. We presented the statistical significances obtained by the DNN model along with the discovery and exclusion regions as functions of the BR in the new mode. \acknowledgements \noindent C. N. acknowledges the Department of Science and Technology (DST)-Inspire for his fellowship.
\newcommand{\ssparagraph}[1]{\subsection*{#1}} \newcommand{\sssparagraph}[1]{\subsubsection*{#1}} \date{} \newcommand{\paperabstract}{%
We describe a framework for constructing an efficient non-interactive key exchange (NIKE) protocol for $n$ parties for any $n \geq 2$. Our approach is based on the problem of computing isogenies between isogenous elliptic curves, which is believed to be difficult. We do not obtain a working protocol because of a missing step that is currently an open mathematical problem. What we need to complete our protocol is an efficient algorithm that takes as input an abelian variety presented as a product of isogenous elliptic curves, and outputs an isomorphism invariant of the abelian variety. Our framework builds a {\em cryptographic invariant map}, which is a new primitive closely related to a cryptographic multilinear map, but whose range does not necessarily have a group structure. Nevertheless, we show that a cryptographic invariant map can be used to build several cryptographic primitives, including NIKE, that were previously constructed from multilinear maps and indistinguishability obfuscation.} \newcommand{\paperkeywords}{%
Multilinear maps, Non-Interactive Key Exchange, Isogenies, Witness Encryption, Abelian Varieties} \begin{document} \begin{abstract}\paperabstract\end{abstract} \keywords{\paperkeywords} \subjclass[2010]{Primary 14K02; Secondary 14Q20, 11Y16, 94A60} \maketitle \section{Introduction} Let $\mathbb{F}_q$ be a finite field, let $E$ be an ordinary elliptic curve over $\mathbb{F}_q$, and let $X$ be the set of elliptic curves over $\mathbb{F}_q$ that are isogenous to $E$. The set $X$ is almost always large (containing on the order of $\sqrt{q}$ elements). Moreover, under suitable conditions on $E$, the set $X$ is endowed with a free and transitive action $\ast$ by a certain abelian group $G$, which is the ideal class group of the endomorphism ring of $E$. The action $\ast$ maps a given $g \in G$ and $E \in X$ to a curve $g \ast E \in X$.
This action, originally defined by Deuring~\cite{Deuring}, has a number of properties that make it useful in cryptography. First, for a fixed curve $E \in X$, the map $G \to X$ defined by $g \mapsto g \ast E$ is believed to be a one-way function. In other words, given a random curve $E' \in X$ it is difficult to find an element $g \in G$ such that $E' = g \ast E$. This suggests a Diffie--Hellman two-party key exchange protocol, proposed by Couveignes~\cite{EPRINT:Couveignes06} and Rostovtsev and Stolbunov~\cite{EPRINT:RosSto06}: Alice chooses a random $a \in G$ and publishes $E_a \mathrel{\mathop:}= a \ast E$; Bob chooses a random $b \in G$ and publishes $E_b \mathrel{\mathop:}= b \ast E$. Their shared key is the curve $$E_{ab} \mathrel{\mathop:}= (ab) \ast E = a \ast E_b = b \ast E_a,$$ which they can both compute. To obtain identical key material, both parties take as their shared key the $j$-invariant of the curve $E_{ab}$. More recently, De Feo, Jao, and Pl{\^u}t~\cite{FJP14}, Galbraith~\cite{Galbraith}, Castryck {et al.}~\cite{CSIDH}, and De Feo, Kieffer, and Smith~\cite{cryptoeprint:2018:485} proposed variants of this protocol with better security and efficiency. Moreover, a supersingular version of the isogeny problem was introduced and proposed as the basis for a collision-resistant hash function~\cite{JC:ChaLauGor09}. Security of this one-way function was further studied in~\cite{EC:EHLMP18}. Second, as alluded to above, the star operator satisfies the following useful property: for all $g_1,\ldots,g_n \in G$ the abelian varieties \[ A_1 \mathrel{\mathop:}= (g_1 \ast E) \times \cdots \times (g_n \ast E) \quad\text{ and }\quad A_2 \mathrel{\mathop:}= (g_1\cdots g_n) \ast E \times E^{n-1} \] are isomorphic (see Appendix~\ref{sec:lowbrow}). As we will see in the next section, this suggests an $n$-party non-interactive key exchange protocol, as well as many other cryptographic constructions.
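The invariance identity and the $n$-party key agreement it enables can be exercised in a toy model. The sketch below is an illustrative assumption on our part, not the paper's instantiation: it takes $G = X = \mathbb{Z}_N$ written additively, with action $g \ast x = g + x \bmod N$ and $e_n(x_1,\dots,x_n) = \sum_i x_i \bmod N$. This action is trivially invertible and therefore cryptographically worthless, but it satisfies the invariance and non-degeneracy properties exactly, so it checks the bookkeeping of the protocol.

```python
import secrets

# Toy (cryptographically INSECURE) model: G = X = Z_N written additively,
# with action g * x = g + x (mod N) and invariant map
# e_n(x_1, ..., x_n) = x_1 + ... + x_n (mod N).
# The modulus N and all names here are illustrative assumptions.
N = 2**61 - 1

def act(g, x):
    """The group action g * x; free and transitive on Z_N."""
    return (g + x) % N

def e(points):
    """Toy e_n: satisfies e_n(g_1*x, ..., g_n*x) = e_n((g_1...g_n)*x, x, ..., x)."""
    return sum(points) % N

# --- n-party non-interactive key exchange in this model ---
n = 4
x = secrets.randbelow(N)                      # public base point
g = [secrets.randbelow(N) for _ in range(n)]  # party i's secret g_i
pub = [act(gi, x) for gi in g]                # published values g_i * x

def party_key(i, j):
    """Party i applies g_i to party j's public value and feeds the
    n-1 public values (its own value omitted) into e_{n-1}."""
    inputs = [act(g[i], pub[l]) if l == j else pub[l]
              for l in range(n) if l != i]
    return e(inputs)

keys = [party_key(i, (i + 1) % n) for i in range(n)]
assert len(set(keys)) == 1  # all parties derive the same key

# Invariance property of e_n itself ("product" is a sum in additive notation):
prod = sum(g) % N
assert e([act(gi, x) for gi in g]) == e([act(prod, x)] + [x] * (n - 1))
```

In the actual proposal, $X$ is an isogeny class, $G$ its ideal class group, and $e_n$ the sought-after isomorphism invariant of products of curves; only this last ingredient is missing.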
This property leads to a more general cryptographic primitive that we call a \defn{cryptographic invariant map}, defined in the next section. This primitive has properties that are similar to those of cryptographic multilinear maps~\cite{boneh2003applications,EC:GarGenHal13}, which have found numerous applications in cryptography (e.g.,~\cite{FOCS:GGHRSW13,STOC:GGSW13,AC:BonWat13,EC:BLRSZZ15}). We discuss applications of cryptographic invariant maps in Section~\ref{sec:applications}. In Remark~\ref{rem:supersingrmk} we explain why we use ordinary and not supersingular elliptic curves. Section~\ref{Sec:IsogMaps} describes our approach to constructing cryptographic invariant maps from isogenies. This work leads to the following question in algebraic geometry. \ssparagraph{An open problem} To make the cryptographic applications discussed above viable, we must first overcome an important technical challenge. While the varieties $A_1$ and $A_2$ defined above are isomorphic, they are presented differently. Our applications require an efficient way to compute an invariant that is the same for $A_1$ and $A_2$. In addition, the invariant must distinguish non-isomorphic varieties. We do not know any such computable isomorphism invariant, and we present this as an open problem. In Section~\ref{sec:failure-theta-null} we explain why some natural proposals for isomorphism invariants do not seem to work. In Remarks~\ref{rem:DDH} and~\ref{rem:isogDDH} we show that a solution to this open problem, even for $n=2$, would solve the isogeny decision Diffie--Hellman problem. Further, we give evidence that computing a particular isomorphism invariant might be equivalent to solving the elliptic curve isogeny problem, which is believed (or hoped) to be a quantum-resistant hard problem. Thus, Section~\ref{sec:failure-theta-null} might be useful from the point of view of cryptanalysis of isogeny-based cryptography.
\section{Cryptographic invariant maps} \label{sec:defs} \begin{definition}\label{Def:EFTAction} Let $X$ be a finite set and let $G$ be a finite abelian group. We say that \defn{$G$ acts efficiently on $X$ freely and transitively} if there is an efficiently computable map $\ast: G \times X \to X$ such that: \begin{itemize} \item the map is a group action: $g \ast (h \ast x) = (gh) \ast x$, and there is an identity element $\id \in G$ such that $\id \ast x = x$, for all $x \in X$ and all $g,h \in G$; \item the action is transitive: for every $(x,y) \in X\times X$ there is a $g \in G$ such that $g \ast x = y$; and \item the action is free: if $x \in X$ and $g,h \in G$ satisfy $g \ast x = h \ast x$, then $g=h$. \end{itemize} \end{definition} \begin{definition}\label{Def:CrInvMap} By a \defn{cryptographic invariant map} we mean a randomized algorithm ${\normalfont\textsf{MapGen}}$ that inputs a security parameter $\lambda$, outputs public parameters ${\normalfont\textsf{pp}} = (X,S,G,e)$, and runs in time polynomial in $\lambda$, where: \begin{itemize} \item $X$ and $S$ are sets, and $X$ is finite, \item $G$ is a finite abelian group that acts efficiently on $X$ freely and transitively, \item $e$ is a deterministic algorithm that runs in time polynomial in $\lambda$ and $n$, such that for each $n>0$, algorithm $e$ takes $\lambda$ as input and computes a map $e_n:X^n \to S$ that satisfies: \begin{itemize} \item \defn{Invariance property} of $e_n$: for all $x \in X$ and $g_1,\ldots,g_n \in G$, \[ e_n(g_1 \ast x, \ldots, g_n \ast x) = e_n\big((g_1 \cdots g_n) \ast x, x, \ldots, x\big); \] \item \defn{Non-degeneracy} of $e_n$: for all $i$ with $1 \le i \le n$ and $x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n \in X$, the map $X \to S$ defined by $y \mapsto e_n(x_1,\ldots,x_{i-1},y,x_{i+1},\ldots,x_n)$ is injective. 
\end{itemize} \end{itemize} \end{definition} In our candidate instantiation for cryptographic invariant maps the set $X$ is a set of isogenous elliptic curves and the group $G$ acting on $X$ is a class group. The elements of $S$ are isomorphism invariants of products of elliptic curves. Definition \ref{Def:CrInvMap} is quite ambitious in that it asks that $e_n$ be defined for all $n>0$ and run in polynomial time in $n$ (and $\lambda$). A cryptographic invariant map that is defined even for a single $n>2$, and satisfies the security assumptions in the next subsection, would still be quite interesting. We require a construction that works for all $n$ because our framework using elliptic curve isogenies seems to support it. Similarly, we note that a construction that works for all $n>0$, but runs in time exponential in~$n$ is still useful. It would limit our ability to evaluate $e_n$ to relatively small $n$, but that is still of great interest. In the first three proposals in Section~\ref{sec:failure-theta-null} we study candidates for $e_n$ that run in time exponential in $n$, satisfy the non-degeneracy property, but do not satisfy the invariance property. It is an open problem to find a map that also satisfies the invariance property. \ssparagraph{Security assumptions} Next, we define some security assumptions on cryptographic invariant maps. The notation $x \mathrel{\mathpalette\rgetscmd\relax} X$ will denote an independent uniform random variable $x$ over the set $X$. Similarly, we use $x' \mathrel{\mathpalette\rgetscmd\relax} A(y)$ to define a random variable $x'$ that is the output of a randomized algorithm $A$ on input $y$. The $n$-way computational Diffie--Hellman assumption states that, given only the public parameters and $(g_1 \ast x, \ldots, g_n \ast x) \in X^n$, it is difficult to compute $e_{n-1}\big((g_1 \cdots g_n) \ast x, x, \ldots, x\big)$. 
A precise definition is the following: \begin{definition} We say that ${\normalfont\textsf{MapGen}}$ satisfies the $n$-way \defn{computational Diffie--Hellman assumption ($n$-CDH)} if for every polynomial time algorithm $\mathcal{A}$, \[ \Pr\Big[ \mathcal{A}({\normalfont\textsf{pp}},\ g_1 \ast x, \ldots, g_n \ast x) = e_{n-1}\big((g_1 \cdots g_n) \ast x, x, \ldots, x\big) \Big] \] is a negligible function of $\lambda$, when ${\normalfont\textsf{pp}} \mathrel{\mathpalette\rgetscmd\relax} {\normalfont\textsf{MapGen}}(\lambda)$, $g_1,\ldots,g_n \mathrel{\mathpalette\rgetscmd\relax} G$, and $x \mathrel{\mathpalette\rgetscmd\relax} X$. \end{definition} \begin{remark} \label{rem:DDH} The natural $n$-way decision Diffie--Hellman assumption on $X$ does not hold when invariant maps exist. That is, for all $n>0$ it is easy to distinguish $(g_1 \cdots g_n) \ast x \in X$ from a random element of $X$, given only $x,g_1 \ast x, \ldots, g_n \ast x$. Given a challenge $y \in X$, simply check if $$e_n(y,x,\ldots,x) = e_n(g_1 \ast x, \ldots, g_n \ast x).$$ Equality holds if and only if $y = (g_1 \cdots g_n) \ast x$. However, in Definition \ref{nDDHdefn} we define an $n$-way decision Diffie--Hellman assumption for $e_{n-1}$. It states that it is hard to distinguish $e_{n-1}\big((g_1 \cdots g_n) \ast x, x, \ldots, x\big)$ from a random element in the image of $e_{n-1}$, given only the public parameters, $x$, and $(g_1 \ast x, \ldots, g_n \ast x) \in X^n$. 
\end{remark} \begin{definition} \label{nDDHdefn} We say that ${\normalfont\textsf{MapGen}}$ satisfies the $n$-way \defn{decision Diffie--Hellman assumption ($n$-DDH)} if the following two distributions, $\mathcal{P}_0$ and $\mathcal{P}_1$, are polynomially indistinguishable, when ${\normalfont\textsf{pp}} \mathrel{\mathpalette\rgetscmd\relax} {\normalfont\textsf{MapGen}}(\lambda)$, $g_1,\ldots,g_n \mathrel{\mathpalette\rgetscmd\relax} G$, and $x \mathrel{\mathpalette\rgetscmd\relax} X$: \begin{itemize} \item $\mathcal{P}_0$ is $({\normalfont\textsf{pp}},\ g_1 \ast x, \ldots, g_n \ast x,\ s_0)$ where $s_0 = e_{n-1}\big((g_1 \cdots g_n) \ast x, x, \ldots, x\big)$, \item $\mathcal{P}_1$ is $({\normalfont\textsf{pp}},\ g_1 \ast x, \ldots, g_n \ast x,\ s_1)$ where $s_1$ is random in $\text{Im}(e_{n-1}) \subseteq S$. \end{itemize} \end{definition} \section{Applications} \label{sec:applications} We show that suitable cryptographic invariant maps can be used to solve a number of important problems in cryptography. \ssparagraph{$n$-way Non-Interactive Key Exchange (NIKE)} We show how to use a cryptographic invariant map to construct a Non-Interactive Key Exchange (NIKE) protocol in which $n$ parties create a shared secret key that only they can efficiently calculate, without any interaction among the $n$ parties. Currently, secure $n$-party NIKE for $n>3$ is only known from general purpose indistinguishability obfuscation (e.g.,~\cite{boneh2017multiparty}). Our NIKE construction is similar to the one in~\cite{JC:Joux04,boneh2003applications,EC:GarGenHal13} and satisfies a ``static'' notion of security. \begin{itemize} \item $\text{Setup}(\lambda)$: run $(X,S,G,e) \mathrel{\mathpalette\rgetscmd\relax} {\normalfont\textsf{MapGen}}(\lambda)$ and choose $x \mathrel{\mathpalette\rgetscmd\relax} X$. Output ${\normalfont\textsf{pp}} \mathrel{\mathop:}= (X,S,G,e,x)$. 
\item For $i=1,\ldots,n$, party~$i$ chooses a random $g_i \mathrel{\mathpalette\rgetscmd\relax} G$, computes $x_i \mathrel{\mathop:}= g_i \ast x \in X$, and publishes $x_i$ on a public bulletin board. \item The shared key between the $n$-parties is $$k \mathrel{\mathop:}= e_{n-1}\big((g_1 \cdots g_n) \ast x, x, \ldots, x\big) \in S.$$ Party $i\in\{ 1,\ldots,n\}$ computes $k$ by obtaining $x_1,\ldots,x_n$ from the bulletin board, then choosing some $j \in \{1,\ldots,n\}$ where $j \neq i$, and computing $$k = e_{n-1}(x_1,\ldots,x_{j-1},\ g_i \ast x_j,\ x_{j+1},\ldots,x_n) \in S,$$ where $x_i$ is omitted from the input to $e_{n-1}$. \end{itemize} All $n$ parties obtain the same key $k$ by the invariance property of $e_{n-1}$. Static security follows from the $n$-way decision Diffie--Hellman assumption, as in~\cite{boneh2003applications}. Alternatively, we can rely on the weaker $n$-way computational Diffie--Hellman assumption by applying a hash function $H:S \to K$ to the key $k$. We model $H$ as a random oracle in the security analysis. We leave the question of an adaptively-secure NIKE, in the sense of~\cite{PKC:FHKP13,EPRINT:Rao14}, from an invariant map for future work. \ssparagraph{Unique signatures and verifiable random functions (VRF)} A digital signature scheme is made up of three algorithms: a key generation algorithm that outputs a public key and a secret key, a signing algorithm that signs a given message using the secret key, and a verification algorithm that verifies a signature on a given message using the public key. A signature scheme is a \defn{unique signature scheme} if for every public key and every message, there is at most one signature that will be accepted as a valid signature for that message under the public key. While a number of unique signature schemes are known in the random oracle model (e.g.,~\cite{EC:BelRog96,AC:BonLynSha01}), it is quite hard to construct unique signatures without random oracles~\cite{C:Lysyanskaya02,PKC:DodYam05}. 
Unique signatures are closely related to a simpler object called a verifiable random function, or VRF~\cite{FOCS:MicRabVad99}. Previous results show how to construct unique signatures and VRFs from multilinear maps without random oracles~\cite{boneh2003applications}. The same constructions work with a cryptographic invariant map. The unique signature scheme works as follows: The secret key is a random $(g_{1,0},g_{1,1},\ldots,g_{n,0},g_{n,1}) \mathrel{\mathpalette\rgetscmd\relax} G^{2n}$. The public key is $(x,y_{1,0},\ldots,y_{n,1}) \in X^{2n+1}$ where $x \mathrel{\mathpalette\rgetscmd\relax} X$ and $y_{i,b} \mathrel{\mathop:}= g_{i,b} \ast x$ for $i=1,\ldots,n$ and $b=0,1$. The signature on an $n$-bit message $m \in \{0,1\}^n$ is $\sigma \mathrel{\mathop:}= (\prod_{i=1}^n g_{i,m_i}) \ast x \in X$. To verify a signature~$\sigma$, check that $e_n(\sigma, x,\ldots,x) = e_n\big(y_{1,m_1}, \ldots, y_{n,m_n}\big)$. The security analysis of this construction is the same as in~\cite{boneh2003applications}. \ssparagraph{Constrained PRFs and broadcast encryption} We next describe how to construct \emph{constrained pseudorandom functions}~\cite{AC:BonWat13,CCS:KPTZ13,PKC:BoyGolIva14} for \emph{bit-fixing constraints} from a cryptographic invariant map. Such constrained PRFs in turn can be used to build broadcast encryption with short ciphertexts~\cite{AC:BonWat13}. A pseudorandom function (PRF) is a function $F:\mathcal{K}\times\mathcal{A}\rightarrow\mathcal{B}$ that is computable in polynomial time. Here, $\mathcal{K}$ is the key space, $\mathcal{A}$ is the domain, and $\mathcal{B}$ is the codomain. Intuitively, PRF security requires that, for a random key $k \in \mathcal{K}$, an adversary who obtains pairs $\big(a,\ F(k,a)\big)$, for $a \in \mathcal{A}$ of its choice, cannot distinguish these pairs from pairs $\big(a, f(a)\big)$ where $f$ is a random function $\mathcal{A} \to \mathcal{B}$. 
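Before turning to the bit-fixing construction, the verification identity of the unique signature scheme above, $e_n(\sigma, x,\ldots,x) = e_n\big(y_{1,m_1}, \ldots, y_{n,m_n}\big)$, can be sanity-checked in a toy additive model. This is an illustrative assumption on our part: $G = X = \mathbb{Z}_N$ with $g \ast x = g + x \bmod N$ and $e_n$ the sum mod $N$; the sketch verifies only correctness of the bookkeeping, not security.

```python
import secrets

# Toy (INSECURE) check of the unique-signature verification identity,
# with G = X = Z_N, action g * x = g + x (mod N), and e_n = sum mod N.
# All parameters here are illustrative assumptions.
N = 2**61 - 1

def act(g, x):
    return (g + x) % N

def e(points):
    return sum(points) % N

n = 8                                    # message length in bits
x = secrets.randbelow(N)
sk = [[secrets.randbelow(N) for _b in (0, 1)] for _i in range(n)]  # g_{i,b}
pk = [[act(sk[i][b], x) for b in (0, 1)] for i in range(n)]        # y_{i,b}

def sign(m):
    """sigma = (prod_i g_{i,m_i}) * x, written additively."""
    return act(sum(sk[i][m[i]] for i in range(n)) % N, x)

def verify(m, sigma):
    """Accept iff e_n(sigma, x, ..., x) == e_n(y_{1,m_1}, ..., y_{n,m_n})."""
    return e([sigma] + [x] * (n - 1)) == e([pk[i][m[i]] for i in range(n)])

m = [secrets.randbelow(2) for _ in range(n)]
assert verify(m, sign(m))
assert not verify(m, (sign(m) + 1) % N)  # a perturbed signature is rejected
```

As in the multilinear-map construction, non-degeneracy of $e_n$ is what makes the accepted signature unique: the map $y \mapsto e_n(y,x,\ldots,x)$ is injective, so at most one $\sigma$ can verify for each message.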
A \emph{bit-fixing constrained PRF} is a PRF where a key $k \in \mathcal{K}$ can be constrained to only evaluate the PRF on a subset of the domain $\mathcal{A}$, where $\mathcal{A} = \{0,1\}^n$. Specifically, for $V \subseteq [n] = \{1,\ldots,n\}$ and a function $v: V \to \{0,1\}$, let $\mathcal{A}_v = \{a\in\mathcal{A}:\forall i \in V, a_i=v(i) \}$. A constrained key $k_v$ enables one to evaluate $F(k,a)$ for all $a \in \mathcal{A}_v$, but reveals nothing about $F(k,a)$ for $a \notin \mathcal{A}_v$. We refer to~\cite{AC:BonWat13} for the complete definition of this concept, and its many applications. We now explain how to construct bit-fixing constrained PRFs from cryptographic invariant maps. The construction and security proof are essentially the same as in Boneh and Waters~\cite{AC:BonWat13}, but translated to our setting. One complication is that the construction of Boneh and Waters requires a way to operate on invariants in $S$. We get around this by delaying the evaluation of the invariant to the very last step. We thus obtain the following bit-fixing constrained PRF: \begin{itemize} \item ${\normalfont\textsf{Setup}}(\lambda)$: run $(X,S,G,e)\mathrel{\mathpalette\rgetscmd\relax}{\normalfont\textsf{MapGen}}(\lambda)$ and choose $x\mathrel{\mathpalette\rgetscmd\relax} X$. \\ Next choose $\alpha\mathrel{\mathpalette\rgetscmd\relax} G$ and $d_{i,b}\mathrel{\mathpalette\rgetscmd\relax} G$ for $i\in[n]$ and $b\in\{0,1\}$. \\ Output the key $k=(X,S,G,e,\alpha,\{d_{i,b}\}_{i,b})$. \item The PRF is defined as: $F(k,a)=e_n\big(\;(\alpha\times\prod_{i=1}^n d_{i,a_i})\ast x,\ x,\ \dots,\ x\big)$. \\ Here, $a \in \{0,1\}^n$ specifies a subset product of the set of $d_{i,b}$'s. \item ${\normalfont\textsf{Constrain}}(k,v)$: Let $V \subseteq [n]$ be the support of the function $v$, and assume $V$ is not empty. The constrained key $k_v$ is constructed as follows. Set $D_{i,b}=d_{i,b}\ast x$ for $i\notin V$. Let $i_0$ be the smallest element of $V$. 
Choose $|V|-1$ random $g_i \in G$ for $i\in V\setminus \{i_0\}$, and set $g_{i_0}=\alpha\times\prod_{i\in V} d_{i,v(i)}\times(\prod_{i\in V\setminus \{i_0\}} g_i)^{-1} \in G$. Let $h_i = g_i\ast x$ for $i\in V$. \\ The constrained key is $k_v=\big(\{D_{i,b}\}_{i\notin V,b\in\{0,1\}},\ \{h_i\}_{i\in V}\big)$. \item ${\normalfont\textsf{Eval}}(k_v,a)$: To evaluate $F(k,a)$ using the constrained key $k_v$, do the following. If $a\notin \mathcal{A}_v$, output $\diamond$. Otherwise, for $i=1,\ldots,n,$ let $C_i= D_{i,a_i}$ if $i\notin V$, and let $C_i= h_i$ otherwise. Output $e_n(C_1,\dots,C_n)$. Then, by construction, \[ e_n(C_1,\dots,C_n) = e_n\left(\left(\prod_{i\notin V} d_{i,a_i}\prod_{i\in V}g_i\right)\ast x,x,\dots,x\right) = F(k,a), \] as required. \end{itemize} The security proof for this construction is as in~\cite{AC:BonWat13}. This construction can be further extended to a verifiable random function (VRF) by adapting Fuchsbauer~\cite{SCN:Fuchsbauer14} in a similar fashion. \ssparagraph{Witness encryption} Witness encryption, due to Garg et al.~\cite{STOC:GGSW13}, can be used to construct Identity-Based Encryption, Attribute-Based Encryption, broadcast encryption~\cite{TCC:Zhandry16}, and secret sharing for ${\normalfont\textsf{NP}}$ statements. Witness encryption is a form of encryption where a public key is simply an ${\normalfont\textsf{NP}}$ statement, and a secret key is a witness for that statement. More precisely, a witness encryption scheme is a pair of algorithms: \begin{itemize} \item ${\normalfont\textsf{Enc}}(x,m)$ is a randomized polynomial-time algorithm that takes as input an ${\normalfont\textsf{NP}}$ statement $x$ and a message $m$, and outputs a ciphertext $c$; \item ${\normalfont\textsf{Dec}}(x,w,c)$ is a deterministic polynomial-time algorithm that takes as input a statement $x$, supposed witness $w$, and ciphertext $c$, and attempts to produce the message $m$.
\end{itemize} We require that if $w$ is a valid witness for $x$, then for any message $m$, if $c\mathrel{\mathpalette\rgetscmd\relax}{\normalfont\textsf{Enc}}(x,m)$, then ${\normalfont\textsf{Dec}}(x,w,c)$ outputs $m$ with probability 1. The basic notion of security for witness encryption is \emph{soundness security}, which requires that if $x$ is false, then ${\normalfont\textsf{Enc}}(x,m)$ hides all information about $m$. A stronger notion called \emph{extractable security}, due to Goldwasser et al.~\cite{C:GKPVZ13}, requires, informally, that if one can learn any information about $m$ from ${\normalfont\textsf{Enc}}(x,m)$, then it must be the case that one ``knows'' a witness for $x$. We briefly describe how to construct witness encryption from invariant maps. It suffices to give a construction from any ${\normalfont\textsf{NP}}$-complete problem. There are at least two natural constructions from multilinear maps that we can use. One approach is to adapt the original witness encryption scheme of Garg et al.~\cite{STOC:GGSW13} based on the Exact Cover problem. This approach unfortunately also requires the same graded structure as needed by Boneh and Waters~\cite{AC:BonWat13}. However, we can apply the same ideas as in our constrained PRF construction to get their scheme to work with invariant maps. Another is the scheme of Zhandry~\cite{TCC:Zhandry16} based on Subset Sum.\footnote{The basic scheme shown by Zhandry requires an ``asymmetric'' multilinear map, where the inputs to the map come from different sets. However, he also explains how to instantiate the scheme using symmetric multilinear maps. The symmetric scheme easily translates to use invariant maps.} As with the constructions of Garg et al.{} and Zhandry, the security of these constructions can be justified in an idealized attack model for the cryptographic invariant map, allowing only the operations explicitly allowed by the map---namely the group action and the map operation. 
Justification in idealized models is not a proof, but provides heuristic evidence for security. \section{Cryptographic invariant maps from isogenies} \label{Sec:IsogMaps} We begin by recalling some facts that are presented in more detail in Appendix~\ref{sec:isog}. Let $E$ be an ordinary elliptic curve over a finite field $\mathbb{F}_q$ such that the ring $\mathbb{Z}[\pi]$ generated by its Frobenius endomorphism $\pi$ is integrally closed. This implies in particular that $\mathbb{Z}[\pi]$ is the full endomorphism ring $\mathscr{O}$ of $E$. Let $\Cl(\mathscr{O})$ denote the ideal class group of this ring, and let $\Ell(\mathscr{O})$ denote the isogeny class of $E$. There exists a free and transitive action $\ast$ of $\Cl(\mathscr{O})$ on $\Ell(\mathscr{O})$, and there is a way to represent elements of $\Cl(\mathscr{O})$ (namely, as products of prime ideals of small norm) that makes this action efficiently computable. Moreover, one can efficiently sample close to uniform elements in $\Cl(\mathscr{O})$ under that representation. In addition, the ``star operator'' $\ast$ satisfies the following property: for any choice of ideal classes $\mathfrak{a}_1,\dots,\mathfrak{a}_n,\mathfrak{a}'_1,\dots,\mathfrak{a}'_n$ in $\Cl(\mathscr{O})$, the abelian varieties \begin{equation} \label{eq:prodabvarisomcond} (\mathfrak{a}_1\ast E)\times\cdots\times(\mathfrak{a}_n\ast E) \quad\textrm{and}\quad (\mathfrak{a}'_1\ast E)\times\cdots\times(\mathfrak{a}'_n\ast E) \end{equation} are isomorphic over $\mathbb{F}_q$ if and only if $\mathfrak{a}_1\cdots\mathfrak{a}_n = \mathfrak{a}'_1\cdots\mathfrak{a}'_n$ in $\Cl(\mathscr{O})$. In particular: \begin{equation} \label{eq:prodabvar} (\mathfrak{a}_1\ast E)\times\cdots\times(\mathfrak{a}_n\ast E) \cong (\mathfrak{a}_1\cdots\mathfrak{a}_n)\ast E\times E^{n-1}. 
\end{equation} Denote by $\Ab(E)$ the set of abelian varieties over $\mathbb{F}_q$ that are a product of the form~\eqref{eq:prodabvar}, and assume that we can efficiently compute an isomorphism invariant for abelian varieties in $\Ab(E)$. In other words, assume that we have an efficiently computable map $\isom\colon \Ab(E)\to S$ to some set $S$ that to any tuple $E_1,\dots,E_n$ of elliptic curves isogenous to $E$ associates an element $\isom(E_1\times\cdots\times E_n)$ of $S$ such that $\isom(E_1\times\cdots\times E_n) = \isom(E'_1\times\cdots\times E'_n)$ if and only if the products $E_1\times\cdots\times E_n$ and $E'_1\times\cdots\times E'_n$ are isomorphic as abelian varieties. The curves $E_i$ are given for example by their $j$-invariants, and in particular, the ideal classes $\mathfrak{a}_i$ such that $E_i \cong \mathfrak{a}_i \ast E$ are not supposed to be known. Based on such an isomorphism invariant $\isom$, we construct a cryptographic invariant map as follows. The algorithm ${\normalfont\textsf{MapGen}}(\lambda)$ computes a sufficiently large base field $\mathbb{F}_q$, and an elliptic curve $E$ over $\mathbb{F}_q$ such that the ring $\mathbb{Z}[\pi]$ generated by its Frobenius endomorphism is integrally closed (this can be done efficiently: see again Appendix~\ref{sec:isog}). The algorithm then outputs the public parameters ${\normalfont\textsf{pp}} = (X,S,G,e)$ where: \begin{itemize} \item $X = \Ell(\mathscr{O})$ is the isogeny class of $E$ over $\mathbb{F}_q$; \item $S$ is the codomain of the isomorphism invariant $\isom$; \item $G = \Cl(\mathscr{O})$ is the ideal class group of $\mathscr{O}$; and \item the map $e_n\colon X^n\to S$ is given by $e_n(E_1,\dots,E_n) = \isom(E_1\times\cdots\times E_n)$. \end{itemize} The facts recalled at the beginning of this section show that $G$ acts efficiently on $X$ freely and transitively in the sense of Definition \ref{Def:EFTAction}, and that the properties of Definition \ref{Def:CrInvMap} are satisfied. 
In particular, the invariance property follows from~\eqref{eq:prodabvar}, and the non-degeneracy from the fact that the abelian varieties in~\eqref{eq:prodabvarisomcond} are isomorphic \emph{only if} the corresponding products of ideal classes coincide. Thus, this approach does provide a cryptographic invariant map assuming $\isom$ exists. \begin{remark} \label{rem:nike2rmk} In the $2$-party case, the NIKE protocol obtained from this construction coincides with the isogeny key exchange protocols over ordinary curves described by Couveignes~\cite{EPRINT:Couveignes06} and Rostovtsev--Stolbunov~\cite{EPRINT:RosSto06}. \end{remark} \begin{remark} \label{rem:isogDDH} The existence of $\isom$ breaks the isogeny decision Diffie--Hellman problem. Indeed, given three elliptic curves $(\mathfrak{a} \ast E,\mathfrak{b} \ast E, \mathfrak{c} \ast E)$ isogenous to $E$, one can check whether $\mathfrak{c}=\mathfrak{a}\mathfrak{b}$ in $\Cl(\mathscr{O})$ by testing whether the surfaces $$(\mathfrak{a} \ast E) \times (\mathfrak{b} \ast E) \quad {\text{ and}} \quad (\mathfrak{c} \ast E) \times E$$ are isomorphic. This does not prevent the construction of secure NIKE protocols (as those can be based on the computational isogeny Diffie--Hellman problem by applying a hash function: see Section~\ref{sec:applications}), but currently, no efficient algorithm is known for this isogeny decision Diffie--Hellman problem. \end{remark} \begin{remark} \label{rem:nikerrmk} For certain applications, it would be interesting to be able to hash to the set $X = \Ell(\mathscr{O})$, i.e., construct a random-looking curve $E'$ in the isogeny class of $E$ without knowing an isogeny walk from $E$ to $E'$. An equivalent problem is to construct a random-looking elliptic curve with exactly $\#E(\mathbb{F}_q)$ points over $\mathbb{F}_q$. This seems difficult, however; the normal way of doing so involves the CM method, which is not efficient when the discriminant is large. 
\end{remark} \begin{remark} \label{rem:supersingrmk} One can ask whether this construction extends to the supersingular case. Over $\mathbb{F}_{p^2}$ with $p$ prime, the answer is clearly no, as the isogeny class of a supersingular elliptic curve is not endowed with a natural free and transitive group action by an abelian group. More importantly, isomorphism classes of products of isogenous supersingular elliptic curves over $\mathbb{F}_q$ are essentially trivial at least in a geometric sense. Indeed, according to a result of Deligne (see \cite[Theorem 3.5]{Shioda78}), if $E_1,\dots,E_n,E'_1,\dots,E'_n$ are all isogenous to a supersingular elliptic curve $E$, then \[ E_1\times\cdots\times E_n \cong E'_1\times\cdots\times E'_n \quad \text{over $\overline{\mathbb{F}_q}$} \] as soon as $n\geq 2$. In fact, the result holds over any extension of the base field over which all the endomorphisms of $E$ are defined, so already over $\mathbb{F}_{p^2}$. However, for a supersingular elliptic curve $E$ over a prime field $\mathbb{F}_p$, the number of $\mathbb{F}_p$-iso\-mor\-phism classes of products $E_1\times\cdots\times E_n$ with all $E_i$ isogenous to $E$ can be large. For example, this is shown when $n=2$ in \cite[Section 5]{XYY16}. Therefore, one could conceivably obtain a ``commutative supersingular'' version of the construction above, which would generalize the recent $2$-party key exchange protocol CSIDH~\cite{CSIDH}, assuming that $\mathbb{F}_p$-isomorphism invariants can be computed in that setting. Since those invariants must be arithmetic rather than geometric in nature, however, this seems even more difficult to achieve than in the ordinary case. \end{remark} \section{Some natural candidate cryptographic invariant maps} \label{sec:failure-theta-null} In order to instantiate a cryptosystem based on the ideas in this paper, it remains to find an efficiently computable map $\isom\colon \Ab(E)\to S$ for some set $S$, as in the previous section. 
Below we give evidence that several natural candidates fail, either because efficiently computing them would break the cryptographic security, or because they are not in fact isomorphism invariants. Our primary roadblock is that while $E_1 \times \cdots \times E_n$ and $E_1' \times \cdots \times E_n'$ can be isomorphic as unpolarized abelian varieties, they are not necessarily isomorphic as polarized abelian varieties with their product polarizations. The first three proposals below for invariants are invariants of the isomorphism class as polarized abelian varieties, but are not invariants of the isomorphism class as unpolarized abelian varieties. We do not know a way for the different parties to choose polarizations on their product varieties in a compatible way, to produce the same invariant, without solving the elliptic curve isogeny problem. At present, we do not know an invariant of abelian varieties in dimension $\geq 2$ that does not require choosing a polarization, with the exception of what we call the ``Deligne invariant'', described below. \ssparagraph{The theta null invariant} One natural candidate is given by Mumford's \emph{theta nulls}, presented in detail in Appendix~\ref{sec:algebr-theta-funct}. Unfortunately, in order to compute even a single theta null, one must first choose a principal polarization, and the resulting invariant does depend on this choice of polarization in a crucial way. In Proposition~\ref{prop:failure-theta-nulls} below we show that, as a result, the theta nulls do not in fact provide an isomorphism invariant as unpolarized abelian varieties. \ssparagraph{Igusa invariants} Suppose $n = 2$ and $\End E \otimes \mathbb{Q} \cong \mathbb{Q}(\sqrt{-d})$ with $d \in \mathbb{N}$ square-free. If $d \neq 1,3,7,15$, then for $E_1$ and $E_2$ in the isogeny class of $E$, the product $E_1 \times E_2$ is the Jacobian of a genus $2$ curve $C$ (see~\cite{HayashidaNishi}). 
It is possible to compute such a genus $2$ curve $C$, given a suitable principal polarization on $E_1 \times E_2$. For each such $C$, one could then compute the Igusa invariants \cite{Igusa60} of $C$. The number of genus $2$ curves $C$ such that $E_1 \times E_2$ is isomorphic to the Jacobian variety of $C$ is large (\cite{Hayashida} and \cite[Theorem 5.1]{Lange}), and unfortunately the Igusa invariants are different for different choices of $C$. There are many principal polarizations on each element of $\Ab(E)$, and no compatible way for the different parties to choose the same one. \ssparagraph{Invariants of Kummer surfaces} When $n = 2$, another approach is to consider the Kummer surface of $A = E_1 \times E_2$, which is the quotient $K = A/\{\pm1\}$. The surface $K$ itself does not depend on a polarization. But extracting an invariant from $K$, for example as in \cite[Chapter 3]{Prolegomena}, \emph{does} depend on having a projective embedding of $K$. \ssparagraph{Deligne invariant} A natural candidate is an isomorphism invariant studied by Deligne \cite{Deligne69}. Suppose $A$ is an ordinary abelian variety over $k=\mathbb{F}_q$. The Serre--Tate canonical lift of $A$ to characteristic 0 produces an abelian variety over the ring of Witt vectors $W(\bar{k})$. Fixing an embedding $\alpha$ of $W(\bar{k})$ into $\mathbb{C}$, we can view this lift as a complex abelian variety ${A}^{(\alpha)}$. Let $T_\alpha(A)$ denote the first integral homology group of ${A}^{(\alpha)}$. The Frobenius endomorphism $F$ of $A$ also lifts to characteristic 0 and defines an action of $F$ on $T_\alpha(A)$. The theorem in~\cite[\S7]{Deligne69} shows that ordinary abelian varieties $A$ and $B$ over $\mathbb{F}_q$ are isomorphic if and only if there is an isomorphism $T_\alpha(A) \to T_\alpha(B)$ that respects the action of $F$.
A natural candidate for a cryptographic invariant map is the map that sends $(E_1,\ldots,E_n)$ to the isomorphism invariant $$T_\alpha(E_1 \times\cdots \times E_n)=T_\alpha(E_1)\oplus \cdots\oplus T_\alpha(E_n).$$ Specifying the isomorphism class of $T_\alpha(E_1 \times \cdots \times E_n)$ as a $\mathbb{Z}[F]$-module is equivalent to specifying the action of $F$ as a $2n \times 2n$ integer matrix, unique up to conjugacy over $\mathbb{Z}$. However, we show in Theorem \ref{Delinvthm} below that being able to compute $T_\alpha(E)$ for an elliptic curve $E$ in polynomial time would yield a polynomial-time algorithm to solve the elliptic curve isogeny problem of recovering $\mathfrak{a}$ given $E$ and $\mathfrak{a}\ast E$, and conversely. \begin{theorem} \label{Delinvthm} An efficient algorithm to compute Deligne invariants $T_\alpha(E)$ on an isogeny class of ordinary elliptic curves over a finite field $k$ gives an efficient algorithm to solve the elliptic curve isogeny problem in that isogeny class. Conversely, an efficient algorithm to solve the elliptic curve isogeny problem on an isogeny class of ordinary elliptic curves over $k$ yields an efficient algorithm to compute, for some embedding $\alpha:W(\bar{k}) \hookrightarrow \mathbb{C}$, the Deligne invariants $T_\alpha(E)$ on the isogeny class. \end{theorem} \begin{proof} Suppose that $E_1$ and $E_2$ are in the isogeny class, and suppose that for $i=1,2$ we have a $\mathbb{Z}$-basis $\{ u_i, v_i\}$ for $T_\alpha(E_i)$ and a $2 \times 2$ integer matrix giving the action of $F$ with respect to this basis. We will efficiently compute a fractional ideal $\mathfrak{a}$ such that $\mathfrak{a} \ast E_1 \cong E_2$. Let $f(t)$ be the characteristic polynomial of Frobenius acting on $E_1$ or $E_2$; these are the same since $E_1$ and $E_2$ are isogenous. Let $R = \mathbb{Z}[t]/(f)$ and $R_\mathbb{Q} = R \otimes_\mathbb{Z} \mathbb{Q}$. Then $T_\alpha(E_i)$ is a rank one $R$-module, with $t$ acting as $F$. 
Compute $a_i, b_i\in\mathbb{Z}$ such that $F(u_i) = a_iu_i + b_iv_i$. Let $\mathfrak{a}_i$ be the fractional $R$-ideal generated by $1$ and $(t - a_i)/b_i$. Compute and output $\mathfrak{a} = \mathfrak{a}_1 \mathfrak{a}_2^{-1}$. We claim that $\mathfrak{a} \ast E_1 \cong E_2$. Define $\lambda_i: T_\alpha(E_i) \hookrightarrow R_\mathbb{Q}$ by sending $w\in T_\alpha(E_i)$ to the unique $\lambda_i(w)\in R_\mathbb{Q}$ such that $\lambda_i(w) \cdot u_i = w$. Then $\lambda_i(u_i)=1$ and $\lambda_i(v_i) = (t - a_i)/b_i$, so the fractional ideal $\mathfrak{a}_i$ is the image of the map $\lambda_i$. Suppose $M$ is a positive integer such that $M\mathfrak{a}$ is an integral ideal of $R$, and let $h=\lambda_2^{-1} \circ M\lambda_1$. Then $h(T_\alpha(E_1))$ is an $R$-submodule of $T_\alpha(E_2)$. By \cite[\S7]{Deligne69}, the map $E \mapsto T_\alpha(E)$ is a fully faithful functor, i.e., it induces a bijection \[ \Hom_k(E_1,E_2) \to \Hom_R(T_\alpha(E_1), T_\alpha(E_2)). \] Thus $h$ arises from a unique isogeny $\phi:E_1 \to E_2$. By~\cite[\S4]{Deligne69}, the kernel of $\phi$ is isomorphic as an $R$-module to $T_\alpha(E_2)/h(T_\alpha(E_1))$. The latter $R$-module is isomorphic to $R/M\mathfrak{a}$, and hence is exactly annihilated by $M\mathfrak{a}$. Thus $\ker(\phi) \cong E_1[M\mathfrak{a}]$, so $E_2 \cong E_1/E_1[M\mathfrak{a}] \cong (M\mathfrak{a}) \ast E_1.$ Since $M\mathfrak{a}$ and $\mathfrak{a}$ are in the same ideal class, we have $E_2 \cong \mathfrak{a} \ast E_1$, as desired. Fractional ideals can be inverted in polynomial time by \cite[Algorithm 5.3]{belabas} or \cite[\S 4.8.4]{Cohen} (see \cite[p. 21]{belabas} for the complexity). Conversely, suppose we have an algorithm that efficiently solves the isogeny problem in the isogeny class of an ordinary elliptic curve $E_0$. Take $R$ as above. We show below that there exists an embedding $\alpha:W(\bar{k}) \hookrightarrow \mathbb{C}$ such that $T_\alpha(E_0) \cong R$. 
Given $E$ isogenous to $E_0$, use the isogeny problem algorithm to compute $\mathfrak{a}$ such that $E_0 \cong \mathfrak{a} \ast E$. Output $T_\alpha(E) = \mathfrak{a}$. It remains to show that an embedding $\alpha:W(\bar{k}) \hookrightarrow \mathbb{C}$ exists such that $T_\alpha(E_0) \cong R$ and $T_\alpha(E) = \mathfrak{a}$. We follow an argument in the proof of~\cite[Theorem 2.1]{Duke-Toth}. There exists an elliptic curve $E'$ over $\mathbb{C}$ with CM by $R$ for which $H_1(E',\mathbb{Z}) \cong R$ as $R$-modules. Take any embedding $\beta: W(\bar{k}) \hookrightarrow \mathbb{C}$. Then the complex elliptic curve $E_0^{(\beta)}$ has CM by $R$, and by the theory of complex multiplication there exists $\sigma \in \mathrm{Gal}(\mathbb{C}/\mathbb{Q})$ such that $E' = \sigma(E_0^{(\beta)}) = E_0^{(\sigma\circ \beta)}$. Let $\alpha = \sigma\circ \beta$. By construction, $T_\alpha(E_0) = H_1(E', \mathbb{Z}) \cong R$. Further, by~\cite[Prop. II.1.2]{SilvermanAT}, $T_\alpha(E) \cong \mathfrak{a} \otimes_R T_\alpha(E_0) \cong \mathfrak{a}$, as claimed. \end{proof} \section*{Acknowledgments} We thank the American Institute of Mathematics (AIM) for supporting a workshop on multilinear maps where the initial seeds for this work were developed, and the Banff International Research Station (BIRS) where our collaboration continued. We also thank Michiel Kosters and Yuri Zarhin. Boneh was partially supported by NSF, DARPA, and ONR. Silverberg was partially supported by a grant from the Alfred P.\ Sloan Foundation and NSF grant CNS-1703321.
\section{Introduction} Highly oscillatory functions $\psi \in L^2({\mathbb{R}^d})$, $d\geq 1$, play a prominent role in many areas of science, including quantum molecular dynamics, wave mechanics, and signal processing. The semiclassical analysis and algorithmic simulation of such systems often require a representation of $\psi$ on the classical phase space $T^*{\mathbb{R}^d} \cong{\mathbb{R}^{2d}}$. In this paper we construct novel phase space representations that are well-suited for numerical sampling purposes. As usual, we assume that $\psi$ is $L^2$-normalized and oscillates with frequencies of size $O({\varepsilon}^{-1})$, where $0<{\varepsilon} \ll 1$ is a small parameter. Then, representing $\psi$ via its Wigner transform \begin{equation}\label{eq:wigner} {\mathcal W}_\psi(q,p) = (2\pi{\varepsilon})^{-d} \int_{\mathbb{R}^d} {\rm e}^{{\rm i} p y/{\varepsilon}} \psi(q-\tfrac{y}2)\overline\psi(q+\tfrac{y}2){\rm{d}} y, \quad (q,p)\in {\mathbb{R}^{2d}}, \end{equation} makes it possible to express expectation values of Weyl quantized operators ${\rm op}(a)$ exactly via the weighted phase space integral \begin{equation}\label{eq:wigner_exp_exact} \left\langle \psi ,{\rm op}(a)\psi\right\rangle = \int_{\mathbb{R}^{2d}} a(z) {\mathcal W}_\psi(z) {\rm{d}} z; \end{equation} see, e.g., \cite[\S9 and \S10.1]{dG11}. Despite its favorable properties, using Wigner functions has a major drawback for applications: in chemical physics quantum expectation values are often computed via a Monte Carlo discretization of~\eqref{eq:wigner_exp_exact}; see~\eqref{eq:wigner_method_exp} and \cite{TW04,KL14}. However, Wigner functions generically attain negative values and, hence, are not probability densities. Consequently, they often cannot be sampled directly, and discretizing~\eqref{eq:wigner_exp_exact} becomes difficult or even infeasible. Convolving ${\mathcal W}_\psi$ with another Wigner function results in a so-called spectrogram, which is a nonnegative function.
For a Gaussian wave packet $g_0$ centered in the origin, the spectrogram $S^{g_0}_\psi := {\mathcal W}_\psi * {\mathcal W}_{g_0}$ is a smooth probability density known as the Husimi function of $\psi$. Since one can sample from $S^{g_0}_\psi$, it is a natural candidate for replacing the Wigner function in~\eqref{eq:wigner_exp_exact}. However, this substantially degrades the accuracy, introducing errors of order $O({\varepsilon})$, \begin{equation}\label{eq:husimi_expec} \left\langle \psi ,{\rm op}(a)\psi\right\rangle = \int_{\mathbb{R}^{2d}} a(z) S^{g_0}_\psi(z) {\rm{d}} z + O({\varepsilon}), \end{equation} see~\cite{KL13}. This is often far from being satisfactory. In~\cite{KLO15} we recently introduced a novel phase space density $\mu^2_\psi$, given as a linear combination of the Husimi function $S^{g_0}_\psi$ and spectrograms associated with first order Hermite functions. Using $\mu^2_\psi$ instead of the Husimi function improves the errors in \eqref{eq:husimi_expec} to order $O({\varepsilon}^2)$. It turns out that --- as conjectured in~\cite[\S10.5]{K15} --- the results from~\cite[Theorem 3.2]{KLO15} can be generalized in a systematic way. We provide a procedure to construct spectrogram approximations with errors of arbitrary order $O({\varepsilon}^N)$, $N\in \mathbb N$. Our main results are summarized in Theorem~\ref{thm:spec_exp}. We introduce novel phase space densities $\mu^N_\psi$ by suitably combining Hermite spectrograms of $\psi$ of order less than $N$. Then, using these densities gives the approximation \begin{equation}\label{eq:Nth_order_approx} \left\langle \psi ,{\rm op}(a)\psi\right\rangle = \int_{\mathbb{R}^{2d}} a(z) \mu^N_\psi(z) {\rm{d}} z + O({\varepsilon}^N), \quad N\in \mathbb N, \end{equation} where the error term vanishes as soon as $a$ is a polynomial of degree less than $2N$.
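For a first impression of these error orders, the following Python sketch (with illustrative parameters) compares the Husimi and second-order values of the expectation $\left\langle \psi ,{\rm op}(a)\psi\right\rangle$ for a one-dimensional Gaussian wave packet $\psi = g_w$ and the symbol $a(q,p)=q^2$. It uses the closed-form Hermite spectrograms of $g_w$ and the second-order combination with coefficients $3/2$ and $-1/2$ from~\cite{KLO15}, both of which are recalled later in the paper.

```python
import numpy as np
from math import factorial

# Numerical sketch (d = 1, illustrative parameters): for psi = g_w the Hermite
# spectrograms are known in closed form,
#   S^{phi_j}_{g_w}(z) = (2*pi*eps)^{-1} |z-w|^{2j} / ((2*eps)^j j!) * exp(-|z-w|^2/(2*eps)),
# and the Wigner expectation of a(q, p) = q^2 equals w_q^2 + eps/2. The Husimi
# value carries an O(eps) error, while the second-order density
# mu^2 = (3/2) S^{phi_0} - (1/2) S^{phi_1} reproduces the exact value.

eps, wq, wp = 0.05, 0.7, -0.3
x = np.linspace(-4.0, 4.0, 801)
Q, P = np.meshgrid(x, x, indexing="ij")
r2 = (Q - wq) ** 2 + (P - wp) ** 2
dz = (x[1] - x[0]) ** 2

def spec(j):
    # Hermite spectrogram S^{phi_j}_{g_w} evaluated on the phase space grid
    return np.exp(-r2 / (2 * eps)) * r2**j / ((2 * np.pi * eps) * (2 * eps) ** j * factorial(j))

a = Q**2
exact = wq**2 + eps / 2                                  # Wigner expectation value
husimi = np.sum(a * spec(0)) * dz                        # equals w_q^2 + eps
mu2 = np.sum(a * (1.5 * spec(0) - 0.5 * spec(1))) * dz   # second-order density

assert abs(husimi - (wq**2 + eps)) < 1e-6   # Husimi error is eps/2 = O(eps)
assert abs(mu2 - exact) < 1e-6              # exact for polynomials of degree < 4
```
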
This approximation is well-suited for computing quantum expectations with high accuracy: One only needs to sample from the densities $\mu^N_\psi$, which are linear combinations of smooth probability densities. We provide a Markov chain Monte Carlo method for the sampling that merely requires quadratures of inner products of $\psi$ with shifted Hermite functions. Our approximation indicates a way to circumvent the sampling problem for Wigner functions and, hence, might be useful in various applications. Moreover, the spectrogram expansion provides insight into the structure of Wigner functions that can be employed for developing new characterizations and approximations of functions in phase space. An important application of our result lies in quantum molecular dynamics: one can approximate the quantum evolution of expectation values by sampling from the density $\mu_\psi^N$ associated with the initial state and combine it with suitable semiclassical approximations for the dynamics; see~\S\ref{sec:quant_dyn} and~\cite{GL14,BR02}. \subsection{Outline} After recalling Wigner functions and spectrograms in \S\ref{sec:phase_repr}, in \S\ref{sec:spec_expansion} we present our main results. The proof is prepared and completed in \S\ref{sec:laplace_laguerre} and \S\ref{sec:proof_main_res}, respectively, and~\S\ref{sec:example_densities} contains illustrative examples. In \S\ref{sec:quant_exp} and~\S\ref{sec:MCMC} we explore the application of our new density for the computation of quantum expectations, and present a Metropolis sampling method. In~\S\ref{sec:quant_dyn} we briefly discuss applications in quantum dynamics. Finally, in~\S\ref{sec:accuracy} and~\S\ref{sec:hat}, we present numerical experiments that illustrate the validity and applicability of our results and methods. \subsection{Related Research} Spectrograms and combinations of spectrograms have been extensively studied in the context of time-frequency analysis, e.g. 
for signal reassignment~\cite{AF13}, filtering~\cite{Fl15} and cross-entropy minimization~\cite{LPH94}. However, to the best of our knowledge, apart from our preceding work~\cite{KLO15}, there are no results on the combination of spectrograms for approximating Wigner functions and expectation values. Husimi functions have been widely used in the context of quantum optics and quantum dynamics, see, e.g.,~\cite{AMP09,Schl11}~and~\cite[\S2.7]{F89}. In~\cite{KL13} one can find second order approximations for the quantum evolution of expectation values with Husimi functions and corrected operator symbols. \section{Phase space representations via spectrograms}\label{sec:phase_repr_spec} \subsection{High frequency functions in phase space}\label{sec:phase_repr} We start by reviewing several representations of functions $\psi \in L^2({\mathbb{R}^d})$ by real-valued distributions on phase space; see also \cite{KLO15} and \cite{dG11} for more details. The most prominent phase space representation of $\psi$ is given by its Wigner function ${\mathcal W}_\psi$ defined in~\eqref{eq:wigner}. It has the property that expectation values of Weyl quantized operators \begin{equation}\label{eq:weyl_quant} ({\rm op}(a)\psi)(q) = (2\pi {\varepsilon})^{-d} \int_{\mathbb{R}^{2d}} a(\tfrac12(y+q),p) {\rm e}^{{\rm i} (q-y)p/{\varepsilon}} \psi(y) {\rm{d}} y\, {\rm{d}} p \end{equation} with sufficiently regular symbol $a:{\mathbb{R}^{2d}} \to \mathbb C$ can be exactly expressed via the weighted phase space integral~\eqref{eq:wigner_exp_exact}. Whenever $ {\mathcal W}_\psi$ is a probability density,~\eqref{eq:wigner_exp_exact} suggests approximating expectation values by means of a Monte Carlo type quadrature; see~\S\ref{sec:quant_exp}. However, as soon as $\psi$ is not a Gaussian, ${\mathcal W}_\psi$ attains negative values (see~\cite{SC83,J97}) and hence is not a probability density. This imposes severe difficulties for computations, since $ {\mathcal W}_\psi$ cannot be sampled directly.
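For simple states the Wigner integral~\eqref{eq:wigner} can be evaluated by direct quadrature. A minimal Python sketch (grid size and cutoff are illustrative choices) checks the Gaussian case against the closed form ${\mathcal W}_{g_0}(q,p)=(\pi{\varepsilon})^{-1}{\rm e}^{-(q^2+p^2)/{\varepsilon}}$, which is~\eqref{eq:Wigner-Laguerre} with $k=0$:

```python
import numpy as np

# Sketch (d = 1): evaluate the Wigner integral by direct quadrature for the
# Gaussian wave packet g_0 and compare with the closed form
# W_{g_0}(q, p) = (pi*eps)^{-1} * exp(-(q^2 + p^2)/eps).
# Grid size and cutoff are illustrative, not taken from the paper.

eps = 0.1

def g0(x):
    return (np.pi * eps) ** (-0.25) * np.exp(-x**2 / (2 * eps))

def wigner(psi, q, p, L=6.0, n=4001):
    y = np.linspace(-L, L, n)
    integrand = np.exp(1j * p * y / eps) * psi(q - y / 2) * np.conj(psi(q + y / 2))
    # rectangle-rule quadrature; spectrally accurate for this decaying integrand
    return float(np.real(integrand.sum() * (y[1] - y[0]))) / (2 * np.pi * eps)

for (q, p) in [(0.0, 0.0), (0.3, -0.2), (0.5, 0.5)]:
    exact = np.exp(-(q**2 + p**2) / eps) / (np.pi * eps)
    assert abs(wigner(g0, q, p) - exact) < 1e-8
```
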
One can turn ${\mathcal W}_\psi$ into a nonnegative function by convolving it with another Wigner function. For $\psi \in L^2({\mathbb{R}^d})$ and a Schwartz class window $\phi \in \mathcal{S}({\mathbb{R}^d})$, $\|\psi\|_{L^2} = \|\phi\|_{L^2} = 1$, the convolution \[ S^\phi_\psi := {\mathcal W}_\psi * {\mathcal W}_\phi: {\mathbb{R}^{2d}} \to \mathbb R \] is a smooth probability density, as can be deduced from \cite[Proposition 1.42]{F89}. In time-frequency analysis $S^\phi_\psi$ is called a \emph{spectrogram} of $\psi$; see, e.g., the introduction in~\cite{F13}. Spectrograms belong to Cohen's class of phase space distributions; see \cite[\S3.2.1]{F99}. A popular window function is provided by the Gaussian wave packet \begin{equation}\label{eq:gaussian_wp} g_{(q,p)}(x) = (\pi{\varepsilon})^{-d/4} \exp\(- \tfrac{1}{2{\varepsilon}}|x-q|^2 + \tfrac{\rm i}{\varepsilon} p\cdot (x-\tfrac12q)\), \quad (q,p)\in {\mathbb{R}^{2d}}, \end{equation} centered in the origin $q=p=0$. The corresponding spectrogram \begin{equation}\label{eq:husimi_def} S_\psi^{g_0}(z) = \int_{\mathbb{R}^{2d}} {\mathcal W}_\psi(w) (\pi{\varepsilon})^{-d} {\rm e}^{-|z-w|^2/{\varepsilon}}~{\rm{d}} w \end{equation} is known as the \emph{Husimi function} of $\psi$, first introduced in~\cite{H40}. By~\eqref{eq:wigner_exp_exact} one has \begin{equation}\label{eq:husimi_exp} \int_{\mathbb{R}^{2d}} a(z) S_\psi^{g_0}(z) {\rm{d}} z =\int_{\mathbb{R}^{2d}} ({\mathcal W}_{g_0}*a)(z) {\mathcal W}_\psi(z) {\rm{d}} z = \left\langle \psi, {\rm op}_{\rm aw} (a) \psi\right\rangle, \end{equation} where ${\rm op}_{\rm aw}(a) = {\rm op}({\mathcal W}_{g_0}*a)$ is the so-called anti-Wick quantized operator associated with $a$; see \cite[\S2.7]{F89}. As a more general class of windows, we consider the eigenfunctions $\{\ph_k\}_{k\in\mathbb N^d}\subset L^2({\mathbb{R}^d})$ of the harmonic oscillator \[ - \tfrac{{\varepsilon}^2}2\Delta_q + \tfrac12|q|^2, \quad q \in {\mathbb{R}^d}.
\] It is well-known that $\ph_k$ is a rescaled multivariate Hermite function and, in particular, $\ph_0 = g_0$. The corresponding Wigner functions take the form \begin{equation}\label{eq:Wigner-Laguerre} {\mathcal W}_{\ph_k}(z) = (\pi {\varepsilon})^{-d} {\rm e}^{-|z|^2/{\varepsilon}} (-1)^{|k|} \prod_{j=1}^d L_{k_j}\(\tfrac2{\varepsilon} |z_j|^2\) \end{equation} where $z=(q,p)\in{\mathbb{R}^{2d}}$, $z_j = (q_j,p_j)\in\mathbb R^2$, and $L_n$ denotes the $n$th Laguerre polynomial \begin{equation}\label{eq:laguerre_pol} L_n(x) = \sum_{j=0}^n {n \choose n-j} \frac{(-x)^j}{j!} , \quad n\in \mathbb N, \quad x \in \mathbb R; \end{equation} see, e.g., \cite[\S1.9]{F89} and~\cite[\S1.3]{Th93}. The Laguerre connection~\eqref{eq:Wigner-Laguerre} will play a crucial role in our proof of the spectrogram expansion. \subsection{The spectrogram expansion}\label{sec:spec_expansion} In this section we present the core result of our paper, which is the asymptotic expansion of Wigner functions in terms of Hermite spectrograms. We start by taking a closer look at the connection between Weyl and anti-Wick operators. \begin{lem}\label{lem:weyl--aw} Let ${\varepsilon}>0$, $a:{\mathbb{R}^{2d}} \to \mathbb R$ be a Schwartz function and $N\in \mathbb N$. Then, there is a family of Schwartz functions $r_N^{\varepsilon}:{\mathbb{R}^{2d}} \to \mathbb R$ and a constant $C>0$ independent of $a$ and ${\varepsilon}$ with \[ \sup_{{\varepsilon}>0}\|{\rm op}(r_N^{\varepsilon})\|_{L^2\to L^2}<C \sup_{|\alpha|,|\beta|\leq \lceil \tfrac d2 \rceil +1} \| \partial_q^\alpha \partial_p^\beta a^{(2N)}\|_\infty \] such that \[ {\rm op}(a) = {\rm op}_{\rm aw}\(\sum_{k=0}^{N-1} \frac{(-{\varepsilon})^k}{4^k k!}\Delta^k a\) + {\varepsilon}^{N} {\rm op}(r_N^{\varepsilon}), \] where anti-Wick quantization has been defined in~\eqref{eq:husimi_exp}. \end{lem} \begin{proof}[Sketch of proof] The assertion has been shown in~\cite[Lemma 1 and 2]{KL13}, see also~\cite[Proposition 2.4.3]{L10}.
The proof builds on a Taylor expansion of $a$ around the point $z$ in the convolution integral \[ ({\mathcal W}_{g_0} * a)(z) = (\pi {\varepsilon})^{-d}\int_{\mathbb{R}^{2d}} a(\zeta) {\rm e}^{-|z-\zeta|^2/{\varepsilon}} {\rm{d}} \zeta \] that defines the Weyl symbol of ${\rm op}_{\rm aw}(a)$. \end{proof} We can combine Lemma~\ref{lem:weyl--aw} and~\eqref{eq:husimi_exp} in order to approximate quantum expectation values by an integral with respect to the Husimi function, \[ \left\langle \psi,{\rm op}(a)\psi\right\rangle = \int_{\mathbb{R}^{2d}} a(z) {\mathcal W}_\psi(z) {\rm{d}} z = \int_{\mathbb{R}^{2d}} \sum_{k=0}^{N-1} \frac{(-{\varepsilon})^k}{4^k k!}\Delta^k a(z) S^{g_0}_\psi(z) {\rm{d}} z + O({\varepsilon}^N). \] Performing integration by parts on the above integral directly leads to the definition of a new family of smooth phase space densities. \begin{mdef}\label{def:density} Let ${\varepsilon}>0$. For any $\psi\in L^2({\mathbb{R}^d})$ and $N\in \mathbb N$ we define \[ \mu_\psi^{N}:{\mathbb{R}^{2d}} \to \mathbb R , \quad \mu_\psi^{N}(z) := \sum_{k=0}^{N-1} \frac{(-{\varepsilon})^k}{4^k k!}\Delta^k S^{g_0}_\psi(z). \] \end{mdef} Our following main Theorem shows that $ \mu_\psi^{N}$ can be used to replace the Wigner function ${\mathcal W}_\psi$ for approximating expectation values of Weyl quantized operators with $O({\varepsilon}^N)$ accuracy. Moreover, $ \mu_\psi^{N}$ can be written as a linear combination of Hermite spectrograms. \begin{thm}[Spectrogram expansion]\label{thm:spec_exp} Let $\psi \in L^2({\mathbb{R}^d})$, $N\in \mathbb N$, and ${\varepsilon}>0$. Then, the density $\mu_\psi^{N}$ can be expressed in terms of Hermite spectrograms, \begin{equation}\label{eq:def_mudens} \mu_\psi^{N}(z) = \sum_{j=0}^{N-1} (-1)^j C_{N-1,j} \sum_{\substack{k\in \mathbb N^d \\ |k|=j}} S_\psi^{\ph_k}(z), \quad C_{k,j} = \sum_{m=j}^k 2^{-m} {d-1+m \choose d-1 + j}; \end{equation} see also Definition~\ref{def:density}.
Furthermore, if $a:{\mathbb{R}^{2d}} \to \mathbb C$ is a Schwartz function, there is a constant $C\geq 0$ such that \begin{equation}\label{eq:spec_approx} \bigg| \int a(z) {\mathcal W}_\psi(z){\rm{d}} z- \int_{\mathbb{R}^{2d}} a(z) \mu_\psi^{N}(z) {\rm{d}} z \bigg| \leq C {\varepsilon}^{N} \|\psi\|^2_{L^2} , \end{equation} where $C$ only depends on bounds on derivatives of $a$ of degree $2N$ and higher. In particular, if $a$ is a polynomial of maximal degree $\deg(a)<2N$, one can take $C=0$ and the error in~\eqref{eq:spec_approx} vanishes. \end{thm} We postpone the proof of Theorem~\ref{thm:spec_exp} to~\S\ref{sec:proof_main_res}. Firstly, in~\S\ref{sec:laplace_laguerre}, we derive an expansion for iterated Laplacians of~${\mathcal W}_{g_0}$. This is the main ingredient for identifying $\mu_\psi^N$ with a linear combination of Hermite spectrograms. The second order version of Theorem~\ref{thm:spec_exp} has already been shown in~\cite[Theorem 3.2 and Proposition 3.4]{KLO15}. There, we proved that one has \[ \mu_\psi^{2}(z) = (1 + \tfrac{d}2) S_\psi^{g_0} - \tfrac12 \sum_{j=1}^d S_\psi^{\ph_{e_j}} \] as well as \begin{equation}\label{eq:spec_approx2} \bigg| \int a(z) {\mathcal W}_\psi(z){\rm{d}} z- \int_{\mathbb{R}^{2d}} a(z) \mu_\psi^{2}(z) {\rm{d}} z \bigg| \leq C {\varepsilon}^{2} \|\psi\|^2_{L^2} \end{equation} for a constant $C>0$ depending on third and higher derivatives of $a$. \begin{rem} Theorem~\ref{thm:spec_exp} remains true for more general operators ${\rm op}(a)$ as long as $a$ is sufficiently regular; see also~\cite[\S4.4]{Z12}. If ${\rm op}(a)$ is unbounded, one has to choose $\psi$ from a suitable subset of $L^2({\mathbb{R}^d})$. \end{rem} \begin{rem}\label{eq:weak_approx} The approximation~\eqref{eq:spec_approx} of expectation values can also be seen as a weak approximation of Wigner functions. In other words, we have \[ {\mathcal W}_\psi = \mu_\psi^{N} + O({\varepsilon}^N), \quad N\in \mathbb N, \] in the distributional sense.
This observation is particularly interesting since ${\mathcal W}_\psi$ is only continuous in general, whereas $\mu_\psi^{N}$ is always real analytic. \end{rem} \subsection{Iterated Laplacians of phase space Gaussians}\label{sec:laplace_laguerre} There are many classical relations between derivatives of Gaussians and Hermite and Laguerre polynomials; see, e.g.,~\cite{Th93} and~\cite[\S V]{Sz75}. We present an expansion of iterated Laplacians of the phase space Gaussian ${\mathcal W}_{g_0}$ based on Laguerre polynomials. To the best of our knowledge, this formula has not appeared in the literature before. We aim to express the polynomial factors arising in iterated Laplacians of ${\mathcal W}_{g_0}$ as linear combinations of the product polynomials \begin{equation}\label{eq:sum_lagu} \mathcal L_k(\r(z)) := \prod_{j=1}^d L_{k_j}(\r_j(z)),\quad z \in {\mathbb{R}^{2d}}, \quad k\in \mathbb N^d, \end{equation} where we use the variables \begin{equation}\label{eq:rho_variables} \r_j(q,p) = \tfrac{2}{\varepsilon} (q_j^2 + p_j^2), \quad j=1,\hdots, d, \end{equation} for readability. As known from~\eqref{eq:Wigner-Laguerre}, these polynomials also appear in the Wigner functions of Hermite functions. We split our proof into two parts and treat the one-dimensional case first. \begin{prop}\label{lem:1d_gauss_lagu} Let $d=1$ and ${\varepsilon}>0$. Then, for all $N\in \mathbb N$ we have \begin{equation*} \( -\tfrac{{\varepsilon}}2 \Delta \)^N {\mathcal W}_{g_0}(z)= N! \; {\mathcal W}_{g_0}(z) \sum_{n=0}^N {N \choose n} L_n(\r(z)), \quad z\in\mathbb R^2, \end{equation*} where $L_n$ is the $n$th Laguerre polynomial, and $\r$ has been defined in~\eqref{eq:rho_variables}. \end{prop} An induction proof of Proposition~\ref{lem:1d_gauss_lagu} can be found in~\ref{app:proof_lem}. In higher dimensions one has to sum over the Laguerre products $\mathcal L_k(\r)$ instead of the polynomials $L_n(\r)$.
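Before turning to the multi-dimensional case, the one-dimensional identity can be checked symbolically for small $N$. The following sympy sketch (an illustration, not a substitute for the induction proof in the appendix) verifies Proposition~\ref{lem:1d_gauss_lagu} for $N\leq 3$:

```python
import sympy as sp

# Symbolic sanity check of the one-dimensional formula (d = 1): apply the scaled
# Laplacian N times to the phase space Gaussian W_{g_0} and compare with the
# Laguerre expansion N! * W_{g_0} * sum_n binom(N, n) * L_n(rho).

q, p = sp.symbols('q p', real=True)
eps = sp.Symbol('varepsilon', positive=True)

W = sp.exp(-(q**2 + p**2) / eps) / (sp.pi * eps)   # Wigner function of g_0, d = 1
rho = 2 * (q**2 + p**2) / eps

def lap(f):
    return sp.diff(f, q, 2) + sp.diff(f, p, 2)

for N in range(1, 4):
    lhs = W
    for _ in range(N):
        lhs = -eps / 2 * lap(lhs)
    rhs = sp.factorial(N) * W * sum(sp.binomial(N, n) * sp.laguerre(n, rho)
                                    for n in range(N + 1))
    # dividing by W cancels the exponential, leaving a polynomial identity
    assert sp.expand((lhs - rhs) / W) == 0
```
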
However, by applying Proposition~\ref{lem:1d_gauss_lagu}, the proof for the multi-dimensional formula reduces to a bookkeeping exercise. In the proof of the following Theorem we repeatedly use the binomial identity \begin{equation}\label{eq:sum_binom_product} \sum_{j=0}^{N-m}{N-j \choose m} {k+j \choose j} = {N+k+1 \choose N-m}, \quad k,N,m \in \mathbb N, \quad m\leq N. \end{equation} For the reader's convenience we include a short proof of~\eqref{eq:sum_binom_product} in~\ref{sec:proof_binom}. \begin{thm}\label{thm:gauss-laguerre} Let ${\varepsilon}>0$, $d\in \mathbb N$ and $N\in \mathbb N$. Then, \begin{equation}\label{eq:multidi_laplace_gauss_formula} \( -\tfrac{{\varepsilon}}2 \Delta \)^N {\mathcal W}_{g_0}(z)= N! \, {\mathcal W}_{g_0}(z) \sum_{n=0}^N {N +d -1 \choose n+d-1} \sum_{k\in \mathbb N^d, |k|=n} \mathcal L_k(\r(z)), \end{equation} where $z\in{\mathbb{R}^{2d}}$ and the polynomials $\mathcal L_k\circ \r$ have been defined in~\eqref{eq:sum_lagu}. \end{thm} \begin{proof} Since ${\mathcal W}_{g_0}$ is a tensor product of $d$ bivariate Gaussians of the form \[ G(x,\xi) = (\pi {\varepsilon})^{-1}{\rm e}^{-(x^2+\xi^2)/{\varepsilon}}, \quad (x,\xi)\in\mathbb R^2, \] the multinomial theorem implies \begin{align} \( -\tfrac{{\varepsilon}}2 \Delta \)^{N} {\mathcal W}_{g_0}(z)&\nonumber= \( -\tfrac{{\varepsilon}}2 (\Delta_{z_1} + \hdots + \Delta_{z_d}) \)^{N} \prod_{j=1}^d G(z_j)\\ &\nonumber= \sum_{k\in \mathbb N^d, |k|=N} {N \choose k_1,\hdots,k_d} (-\tfrac{{\varepsilon}}2 \Delta)^k {\mathcal W}_{g_0}(z) \end{align} where $\Delta_{z_j} = \partial_{q_j}^2 + \partial_{p_j}^2$ and $\Delta ^k = \Delta_{z_{1}}^{k_1}\cdots \Delta_{z_d}^{k_d}$. Consequently, after applying Proposition~\ref{lem:1d_gauss_lagu} and reordering the sum, we arrive at \begin{align} \( -\tfrac{{\varepsilon}}2 \Delta \)^{N} {\mathcal W}_{g_0}(z)&\nonumber= \sum_{k\in \mathbb N^d, |k|=N} {N \choose k_1,\hdots,k_d} k! 
\prod_{j=1}^d \sum_{m=0}^{k_j} {k_j \choose m} G(z_j)L_{m}(\r_j(z))\\ &\label{eq:mehrdi_vor_reorder}= N!{\mathcal W}_{g_0}(z) \sum_{k \in \mathbb N^d, |k|=N} \prod_{j=1}^d \sum_{m=0}^{k_j} {k_j \choose m} L_{m}(\r_j(z)). \end{align} Now, we collect all binomial coefficients belonging to one polynomial $\mathcal L_\ell(\r)$ with~$0\leq |\ell| \leq N$. We treat the simple cases $|\ell|\leq 1$ separately in order to illustrate our counting procedure. \begin{description} \item[$\ell=0:$] In the sum~\eqref{eq:mehrdi_vor_reorder}, the polynomial $\mathcal L_0 \circ \r$ appears \[ | \{k \in \mathbb N^d:|k |=N\} |= {N+d-1\choose d-1} \] times. For all $k\in \mathbb N^d$ and $1\leq j \leq d$ we get the prefactor ${k_j \choose 0} =1$. \item[$|\ell|=1:$] For $\ell=e_i$, $i\in\{1,\hdots,d\}$, the coefficient of $\mathcal L_{e_i}\circ \r$ can be computed as follows. If $k_i= N$ in~\eqref{eq:mehrdi_vor_reorder}, the binomial prefactor is $ {N \choose 1}$. If $k_i = N-1$, there are ${d-1 \choose 1}$ ways to distribute the excessive index point, and this choice does not influence the prefactor $ {N-1\choose 1}$. For $k_i = N-2$ there are ${d \choose 2}$ ways to distribute the two excessive index points, and the prefactor is ${N-2\choose 1}$. Continuing in the same way, and computing the sum via~\eqref{eq:sum_binom_product}, we obtain \[ \sum_{j=0}^{N-1} {N-j \choose 1}{d-2+j \choose j} = {N+d-1 \choose d}, \] which is the coefficient of the $n=1$ term in~\eqref{eq:multidi_laplace_gauss_formula}. \item[$|\ell|=n\leq N:$] Without loss of generality, assume that $\ell$ has $1\leq r \leq d$ nonzero entries $\ell_1,\hdots,\ell_r >0$ and $\ell_{r+1},\hdots,\ell_d = 0$. Otherwise rename the coordinates. For every $k\in \mathbb N^d$ and $s \leq d $ we define the partial sums $|k|_s = k_1 + \hdots +k_s$ such that $|\ell|_r = n$. 
Then, if $|k|_r = N$ in~\eqref{eq:mehrdi_vor_reorder}, one has to sum all prefactors of the form \[ \prod_{j=1}^r {k_j \choose \ell_j}, \quad k_j \geq \ell_j, \quad \sum_{j=1}^r k_j=N. \] If $|k|_r = N-1$, one additionally has ${d-r \choose 1}$ ways to distribute the excessive index point et cetera. In total, all prefactors of $\mathcal L_\ell(\r(z))$ are given by \begin{align*} \sum_{m_1=0}^{N-n}&\sum_{m_2=\ell_{2}}^{M+|\ell|_2 - |m|_1} \sum_{m_3=\ell_{3}}^{M+|\ell|_3-|m|_2} \cdots \sum_{m_r=\ell_{r}}^{M+|\ell|_r - |m|_{r-1}} {N-|m|_r \choose \ell_{1}} \\ & \times {m_2 \choose \ell_{2}} \cdots {m_r \choose \ell_{r}} {d-r -1+m_1 \choose m_1} \end{align*} where $m=(m_1,\hdots,m_r)\in \mathbb N^r$ and $M=N-n-\ell_{1}$. The summation over $m_1$ captures all index points of $k$ in the components $r+1,\hdots,d$. For the innermost sum we compute \begin{align*} &\sum_{m_r=\ell_r}^{M+|\ell|_r - |m|_{r-1}} {N-|m|_r \choose \ell_1} {m_r \choose \ell_r} \\ &= \sum_{m_r=0 }^{M+|\ell|_{r-1} - |m|_{r-1}} {N-|m|_{r-1}-m_r -\ell_{r} \choose \ell_1 } {m_r + \ell_r \choose m_r} \\ &={ N-|m|_{r-1} + 1 \choose 1+\ell_r +\ell_1 } \end{align*} by invoking~\eqref{eq:sum_binom_product}. Repeating this computation in a similar way for the sums over $m_{r-1} ,\hdots, m_2$ one is left with the last sum over $m_1$, which gives \begin{align*} \sum_{m_1=0}^{N-n} {N- m_1 + r -1 \choose n + r -1} {d-r -1+m_1 \choose m_1} &= {N+d -1 \choose n+d -1}, \end{align*} again by using~\eqref{eq:sum_binom_product}. \end{description} Rewriting~\eqref{eq:mehrdi_vor_reorder} by incorporating the above calculations completes the proof. \end{proof} \subsection{Proof of the main result}\label{sec:proof_main_res} We can now prove our main result Theorem~\ref{thm:spec_exp}. The central idea is to employ the Laplace-Laguerre formula from Theorem~\ref{thm:gauss-laguerre}, and to identify the Laguerre polynomials with the prefactors appearing in the Wigner functions~\eqref{eq:Wigner-Laguerre}. 
These Wigner functions in turn are the convolution kernels of Hermite spectrograms. \begin{proof}[Proof of Theorem~\ref{thm:spec_exp}] Let $a:{\mathbb{R}^{2d}} \to \mathbb R$ be an ${\varepsilon}$-independent Schwartz function. Then, by invoking \eqref{eq:husimi_exp} and Lemma~\ref{lem:weyl--aw}, we have \begin{align*} \left\langle\psi,{\rm op}(a) \psi \right\rangle &= \int_{\mathbb{R}^{2d}} \sum_{m=0}^{N-1}\frac{(-{\varepsilon}\Delta)^m}{4^m m!} a(z) ( {\mathcal W}_{g_0}* {\mathcal W}_\psi)(z) {\rm{d}} z + {\varepsilon}^{N} \left\langle \psi,{\rm op}(r_{N}^{\varepsilon}) \psi \right\rangle \end{align*} where $\{r_{N}^{\varepsilon}\}_{{\varepsilon}>0}$ is a family of Schwartz functions with uniformly bounded operator norm. Note that $r_N^{\varepsilon}$ only depends on $2N$th and higher order derivatives of $a$; see also~\cite[\S2.3]{KLO15}. Repeated integration by parts yields \begin{align*} \left\langle\psi,{\rm op}(a) \psi \right\rangle &= \int_{\mathbb{R}^{2d}} a(z) \sum_{m=0}^{N-1}\frac{(-{\varepsilon}\Delta)^m}{4^m m!} ( {\mathcal W}_{g_0}* {\mathcal W}_\psi)(z) {\rm{d}} z + {\varepsilon}^{N} \left\langle \psi,{\rm op}(r_{N}^{\varepsilon}) \psi \right\rangle, \end{align*} and we recognize the phase space density $\mu^{N}_\psi$ from Definition~\ref{def:density}.
Now, by Theorem~\ref{thm:gauss-laguerre} we have \begin{align*} \frac{(-{\varepsilon}\Delta)^m}{4^m m!} & ( {\mathcal W}_{g_0}* {\mathcal W}_\psi)(z)= \frac{1}{m!} \((-\tfrac{{\varepsilon}}4\Delta)^m {\mathcal W}_{g_0}* {\mathcal W}_\psi\)(z) \\ &= \(2^{-m} {\mathcal W}_{g_0} \sum_{j=0}^m {m +d -1 \choose j+d-1} \sum_{k\in \mathbb N^d, |k|=j} \mathcal L_k(\varrho) *{\mathcal W}_\psi \) (z), \end{align*} and using formula~\eqref{eq:Wigner-Laguerre} leads us to \begin{align*} \frac{(-{\varepsilon}\Delta)^m}{4^m m!} & \({\mathcal W}_{g_0}*{\mathcal W}_\psi\)(z)= 2^{-m} \sum_{j=0}^m (-1)^{j} {m +d -1 \choose j+d-1} \sum_{k\in \mathbb N^d, |k|=j} S_\psi^{\varphi_k} (z). \end{align*} Finally, summing over all $m=0,\hdots,N-1$ and reordering the sum gives \begin{align*} \sum_{m=0}^{N-1} 2^{-m} \sum_{j=0}^m (-1)^{j} {m +d -1 \choose j+d-1} \sum_{k\in \mathbb N^d, |k|=j} S_\psi^{\varphi_k} (z) &= \sum_{j=0}^{N-1} (-1)^{j} C_{N-1,j} \sum_{k\in \mathbb N^d, |k|=j} S_\psi^{\varphi_k} (z) \end{align*} with \[ C_{k,j} = \sum_{m=j}^{k} 2^{-m} {m +d -1 \choose j+d-1}, \quad j=0,\hdots,k, \] and the assertion follows. \end{proof} \subsection{Examples}\label{sec:example_densities} From~\cite[Proposition 5]{LT14} we know that the Husimi functions of the Hermite functions $\{\ph_k\}_{k\in\mathbb N^d}$ are given by the formula \begin{equation*}\label{eq:spec_herm_hus} S^{g_0}_{\ph_k}(z) = S^{\ph_k}_{g_0}(z) = (2\pi {\varepsilon})^{-d} \frac{{\rm e}^{-|z|^2/2{\varepsilon}}}{(2 {\varepsilon})^{|k|} k!} |z|^{2k}. \end{equation*} Therefore, by using the covariance property of Wigner functions with respect to Heisenberg-Weyl operators $T_z$, \begin{equation}\label{eq:heisenberg_weyl} T_{(q,p)}\psi = {\rm e}^{{\rm i} p(\bullet - q/2)/{\varepsilon}} \psi(\bullet -q), \quad \psi \in L^2({\mathbb{R}^d}), \end{equation} see~\cite[Proposition 174]{dG11}, one can easily compute the new phase space densities $\mu_\psi^N$ for a one-dimensional Gaussian wave packet $\psi = g_w$, $w\in \mathbb R^2$.
Namely, we find the weak approximations \begin{align} {\mathcal W}_{g_w}(z) &= \sum_{j=0}^N (-1)^j \sum_{m=j}^N 2^{-m} {m \choose j} S^{\varphi_j}_{g_w}(z) + O({\varepsilon}^{N+1})\\ &\nonumber= \sum_{j=0}^N (-1)^j \frac{(2\pi{\varepsilon})^{-1}}{j!} \big| \tfrac{z-w}{\sqrt{2{\varepsilon}}} \big|^{2j} {\rm e}^{-|z-w|^2/2{\varepsilon}} \sum_{m=j}^N 2^{-m} {m \choose j} + O({\varepsilon}^{N+1}) \end{align} for $z\in \mathbb R^2$, such that the first three nontrivial approximations read \begin{align} {\mathcal W}_{g_w}(z)&\nonumber= (2\pi{\varepsilon})^{-1}{\rm e}^{-|z-w|^2/2{\varepsilon}} \left( \tfrac32 - \tfrac12 \tfrac{|z-w|^2}{2{\varepsilon}}\right)+O({\varepsilon}^2), \\ {\mathcal W}_{g_w}(z)&\label{eq:mu_gauss1d}= (2\pi{\varepsilon})^{-1}{\rm e}^{-|z-w|^2/2{\varepsilon}} \left( \tfrac74 - \tfrac{|z-w|^2}{2{\varepsilon}} + \tfrac14 \tfrac{|z-w|^4}{ 2! 4{\varepsilon}^2} \right)+O({\varepsilon}^3), \\ {\mathcal W}_{g_w}(z)&\nonumber=(2\pi{\varepsilon})^{-1} {\rm e}^{-|z-w|^2/2{\varepsilon}}\left( \tfrac{15}8 - \tfrac{11}{8} \tfrac{|z-w|^2}{2{\varepsilon}} + \tfrac{5}{8} \tfrac{|z-w|^4}{2! 4{\varepsilon}^2} - \tfrac{1}8\tfrac{|z-w|^6}{3!8{\varepsilon}^3} \right)+O({\varepsilon}^4). \end{align} It is striking that the sequence of densities $\mu_{g_w}^N$ not only approximates ${\mathcal W}_{g_w}$ weakly as ${\varepsilon} \to 0$, but even seems to yield a strong approximation as $N\to \infty$, see Figure~\ref{fig:gauss_densities}. \begin{figure}[h!] \includegraphics[width=\textwidth]{decay} \caption{\label{fig:gauss_densities} Decay of the Wigner and Husimi function as well as of the densities $\mu_{g_w}^{N}$, $N\leq 4$, for a Gaussian wave packet $g_w$, as a function of the distance $|z-w|$ from the phase space center.} \end{figure} In higher dimensions one has to incorporate different prefactors and sum over all Hermite spectrograms of the same total degree, but the structure of the approximations~\eqref{eq:mu_gauss1d} remains the same.
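The prefactors in the three expansions above can be checked numerically. The following sketch (our own illustration, not part of the original computation) evaluates, for $d=1$, the signed prefactor $(-1)^j\sum_{m=j}^N 2^{-m}\binom{m}{j}$ of the $j$th term and reproduces the values $\tfrac32,-\tfrac12$, then $\tfrac74,-1,\tfrac14$, then $\tfrac{15}8,-\tfrac{11}8,\tfrac58,-\tfrac18$; it also confirms that the prefactors of each order sum to one, as they must for a signed density of total mass one.

```python
from fractions import Fraction
from math import comb

def prefactor(N, j):
    # Signed prefactor of the term (|z - w|^2 / 2 eps)^j / j! in the order-N
    # expansion of the Wigner function of a 1D Gaussian wave packet (d = 1).
    return (-1) ** j * sum(Fraction(comb(m, j), 2 ** m) for m in range(j, N + 1))

for N in (1, 2, 3):
    coeffs = [prefactor(N, j) for j in range(N + 1)]
    print(N, coeffs, "sum =", sum(coeffs))
```

The sum-to-one property follows from $\sum_{j=0}^m(-1)^j\binom{m}{j}=0$ for $m\geq 1$, so only the $m=0$ term survives.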
For Gaussian superpositions $\psi = g_{z_1} + g_{z_2}$ with phase space centers $z_1,z_2 \in {\mathbb{R}^{2d}}$, and for Hermite functions, $\psi = \ph_k$, the second order density $\mu^2_\psi$ has been computed in~\cite[\S5]{KLO15} by using ladder operators. The same technique can in principle also be applied to compute the higher order densities $\mu^N_\psi$, but leads to tedious calculations. The structure of the densities for a Gaussian superposition, however, is always of the form \[ \mu^N_{g_{z_1} + g_{z_2}} = \mu^N_{g_{z_1}} + \mu^N_{g_{z_2}} + {\rm e}^{-|z_1-z_2|^2/8{\varepsilon}} C^N_{z_1,z_2}, \] where $C^N_{z_1,z_2}$ is an oscillatory cross term, see also~\cite[\S5]{KLO15}. The damping factor ${\rm e}^{-|z_1-z_2|^2/8{\varepsilon}}$ is exponentially small in ${\varepsilon}$, such that one can safely ignore the cross term in computations as soon as the centers $z_1$ and $z_2$ are sufficiently far apart. In contrast, the cross term in the Wigner function of a Gaussian superposition does not contain a damping factor. Hence, the interferences are large and cannot be neglected. In this paper we do not further investigate explicit formulas for spectrogram densities. Instead, we discuss a Markov chain method for sampling from spectrogram densities that is tailored to practical applications. In particular, the method can be applied to a wide range of states and circumvents the difficulties of explicitly computing Wigner or Husimi functions, see~\S\ref{sec:MCMC}.
\section{Applications}\label{sec:applications} \subsection{Quantum Expectations}\label{sec:quant_exp} In chemistry, the expectation values of Weyl quantized observables are often computed via the Monte Carlo quadrature \begin{equation}\label{eq:wigner_method_exp} \left\langle \psi, {\rm op}(a) \psi \right\rangle =\int_{\mathbb{R}^{2d}} a(z){\mathcal W}_\psi(z) {\rm{d}} z \approx \frac1n \sum_{m=1}^n a(z_m) \end{equation} where $z_1,\hdots,z_n\sim {\mathcal W}_\psi$ are distributed with respect to the Wigner function, see~\cite{TW04,KL14}. Generically, however, ${\mathcal W}_\psi$ is not a probability density and direct sampling techniques cannot be applied. Instead of using methods like importance sampling we propose to replace ${\mathcal W}_\psi$ by a spectrogram density $\mu^N_\psi$, which is a linear combination of smooth probability densities. That is, we approximate \begin{align} \left\langle \psi, {\rm op}(a) \psi \right\rangle &\nonumber=\int_{\mathbb{R}^{2d}} a(z)\mu^N_\psi(z) {\rm{d}} z +O({\varepsilon}^N) \\ &\label{eq:approx_exp_mu}\approx \sum_{j=0}^{N-1} (-1)^jC_{N-1,j}{j+d-1\choose d-1} \frac1n \sum_{m=1}^n a(z^j_m), \end{align} where the phase space points are sampled from the probability densities given by the averaged Hermite spectrograms of a given order, \begin{equation}\label{eq:smapling_densities} z^j_1,\hdots,z^j_n\sim {j+d-1\choose d-1}^{-1} \sum_{k\in \mathbb N^d,|k|=j} S_\psi^{\ph_k}, \quad j=0,\hdots,N-1. \end{equation} Obviously, method~\eqref{eq:approx_exp_mu} is typically only practicable if the dimension $d$ is not too large and one does not need to go to a very high order $N$. However, for the majority of applications in physical chemistry this is the case. \begin{rem} Instead of considering the probability densities~\eqref{eq:smapling_densities} it could often be more practicable to sample from each spectrogram $S_\psi^{\ph_k}$, $|k|<N$, separately.
Alternatively, sometimes it might be possible to combine all spectrograms that appear with positive or negative prefactors. In that case, one would only need to sample from two probability densities. \end{rem} \subsection{Sampling via Metropolis-Hastings}\label{sec:MCMC} Evaluating the highly oscillatory integral~\eqref{eq:wigner} defining the Wigner function in several dimensions is numerically extremely challenging or --- for the majority of systems --- simply infeasible. Together with the sampling problem arising from the fact that Wigner functions may attain negative values, this is a major bottleneck for the applicability of~\eqref{eq:wigner_method_exp}. Moreover, often one cannot explicitly compute the spectrogram densities~\eqref{eq:smapling_densities} either. Instead, we propose a Markov chain sampling scheme for spectrograms based on the inner product representation with Hermite functions \begin{equation}\label{eq:hermite_spec_quad} S^{\ph_k}_\psi(z) = (2 \pi {\varepsilon})^{-d} |\left\langle \psi, T_z \ph_k \right\rangle|^2, \quad z \in {\mathbb{R}^{2d}}, \end{equation} where the Heisenberg-Weyl operator $T_z$ has been defined in~\eqref{eq:heisenberg_weyl}; see also~\cite{KLO15}. This method does not require determining $S^{\ph_k}_\psi$ globally as a function, but only involves pointwise evaluations. For approximating the inner products~\eqref{eq:hermite_spec_quad} one can use different methods. Natural choices certainly include Gauss-Hermite, Monte Carlo or Quasi-Monte Carlo quadrature rules. All these schemes exploit the Gaussian factor appearing in the Hermite functions. Monte Carlo quadrature is especially useful in higher dimensions, where one would need to employ sparse grids when applying Gauss-Hermite quadrature, see, e.g., \cite[\S III.1]{Lu08}. We propose to generate a Markov chain with stationary distribution $S^{\ph_k}_\psi$ via the Metropolis-Hastings algorithm.
We implement the following iteration that starts from a seed $z_0\in {\mathbb{R}^{2d}}$ with $S^{\ph_k}_\psi(z_0)>0$. \begin{enumerate} \item Proposal: set $z = z_n + \sqrt{{\varepsilon}} \zeta$ with a random vector $\zeta \sim N(0,{\rm Id}_{2d})$. \item Quadrature: approximately evaluate $S^{\ph_k}_\psi(z)$ via~\eqref{eq:hermite_spec_quad}. \item Acceptance: generate a uniform random number $\rho \sim U([0,1])$. Accept the trial point if $\rho < S^{\ph_k}_\psi(z)/S^{\ph_k}_\psi(z_{n})$, and set $z_{n+1}=z$. Otherwise, reject the proposal and keep the old point $z_{n+1} = z_{n}$. \end{enumerate} We use a normal density of variance ${\varepsilon}$ as proposal distribution, since --- as the Husimi function of a Gaussian wave packet --- it is a prototype spectrogram for functions with $O({\varepsilon}^{-1})$ frequencies. If one knows in advance that the spectrogram $S^{\ph_k}_\psi$ has a disconnected effective support in phase space, one may additionally incorporate a jump step in the spirit of~\cite[\S5.1]{KLW09}. If the Markov chain $\{z_n\}_n$ is uniformly ergodic, the central limit theorem implies weak convergence of averages, see~\cite{T94}. More precisely, for any function $a:{\mathbb{R}^{2d}} \to \mathbb R$ that is square-integrable with respect to $S^{\ph_k}_\psi$ there is a constant $c_a$ such that \[ \lim_{n\to \infty} \mathbb{P}\left(\bigg| \tfrac1n \sum_{j=1}^n a(z_j) - \int_{\mathbb{R}^{2d}} a(z) S^{\ph_k}_\psi(z) {\rm{d}} z \bigg| \leq \frac{r c_a}{\sqrt{n} }\right) = \frac{1}{\sqrt{2\pi}} \int_{-r}^r {\rm e}^{-t^2/2}{\rm{d}} t \] for any $r>0$. In particular, this implies convergence of the method~\eqref{eq:approx_exp_mu} for the computation of quantum expectation values. We stress that the convergence rate of $n^{-1/2}$ does not depend on the dimension $d$ of the configuration space.
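For illustration, a minimal Python sketch of this iteration (our own, not from the paper): the target here is the known Husimi function of a Gaussian wave packet, i.e. a normal density $N(w,{\varepsilon}\,{\rm Id})$ on phase space, so that step 2 reduces to an exact evaluation; in an actual application the density would instead be evaluated approximately via quadrature of~\eqref{eq:hermite_spec_quad}.

```python
import math
import random

def metropolis_hastings(log_density, z0, eps, n_steps, rng):
    """Steps 1-3 above: Gaussian proposal of variance eps per coordinate,
    then accept with probability min(1, density ratio)."""
    z, logp = list(z0), log_density(z0)
    chain, accepted = [], 0
    for _ in range(n_steps):
        trial = [zi + math.sqrt(eps) * rng.gauss(0.0, 1.0) for zi in z]
        logp_trial = log_density(trial)
        if rng.random() < math.exp(min(0.0, logp_trial - logp)):
            z, logp = trial, logp_trial
            accepted += 1
        chain.append(list(z))
    return chain, accepted / n_steps

# Toy target: Husimi function of a Gaussian wave packet centered at w,
# a normal density N(w, eps * Id); the normalization constant is irrelevant
# for Metropolis-Hastings, so we only supply the log of the Gaussian factor.
eps, w = 0.1, (0.5, -1.0)
log_husimi = lambda z: -sum((zi - wi) ** 2 for zi, wi in zip(z, w)) / (2 * eps)

chain, acc_rate = metropolis_hastings(log_husimi, w, eps, 20000, random.Random(0))
mean = [sum(c[i] for c in chain) / len(chain) for i in range(2)]
```

With this step size the acceptance rate settles at a moderate value and the chain mean recovers the phase space center $w$ up to Monte Carlo error.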
\subsection{Quantum dynamics}\label{sec:quant_dyn} In physical chemistry, the computation of stationary quantum expectation values itself is not of central interest. Instead, one would like to compute the evolution of expectation values \[ t \mapsto \left\langle \psi_t,{\rm op}(a) \psi_t\right\rangle, \] where the wave function $\psi_t$ represents the state of the molecule's nuclei at time $t$ in the Born-Oppenheimer approximation. The wave function $\psi_t$ evolves according to a semiclassical Schr\"odinger equation \[ {\rm i} {\varepsilon} \partial_t \psi_t = -\tfrac{{\varepsilon}^2}2\Delta \psi_t + V \psi_t \] on the electronic potential energy surface $V$; see~\cite{ST01}. By combining Egorov's theorem (see~\cite{BR02,KL14}) with~\eqref{eq:spec_approx2} one obtains the second order approximation \begin{align}\label{eq:egorov_wig} \left\langle \psi_t,{\rm op}(a) \psi_t\right\rangle &= \int_{\mathbb{R}^{2d}} {\mathcal W}_{\psi_0}(z)(a\circ \Phi^t)(z){\rm{d}} z + O({\varepsilon}^2) \\ &\label{eq:egorov_mu2}= \int_{\mathbb{R}^{2d}} \mu^2_{\psi_0}(z)(a\circ \Phi^t)(z){\rm{d}} z + O({\varepsilon}^2), \end{align} where $\Phi^t$ is the flow of the underlying classical Hamiltonian system $\dot q = p$, $\dot p = -\nabla V(q)$. This approximation and its discretization have been studied in~\cite{KLO15}. The spectrogram method~\eqref{eq:egorov_mu2} improves on the Wigner function method~\eqref{eq:egorov_wig} that has been widely used in chemical physics for decades under the name linearized semiclassical initial value representation (LSC-IVR) or Wigner phase space method; see, e.g., \cite{TW04,KL14}. One can construct higher order versions of~\eqref{eq:egorov_mu2} that only require sampling from probability densities and solving ordinary differential equations. For this purpose one combines the densities $\mu^N_{\psi_0}$ from Theorem~\ref{thm:spec_exp} with higher order corrections of Egorov's theorem for the quantum dynamics, see~\cite{BR02,GL14}.
We leave the details to future investigations. \section{Numerical Experiments} \subsection{Accuracy}\label{sec:accuracy} In a first set of experiments we investigate whether the asymptotic error of our approximation from Theorem~\ref{thm:spec_exp} is observed in practice. For this purpose we consider a one-dimensional Gaussian wave packet $\psi = g_{z_0}$ centered at $z_0=(\tfrac12,-1)$ and varying values of ${\varepsilon}$. We compare the expectation values of the following observables with their approximations via the spectrogram densities $\mu_\psi^N$ for the orders $N=1,\hdots,4$: \begin{enumerate} \item $a(q,p) = q^4+1$ \item $b(q,p) = \tfrac14(p^2 - q )^3$ \item $c(q,p) = \cos(q)$ \item $d(q,p) = \exp(\sin(q))$. \end{enumerate} We used the formulas for the spectrogram densities $\mu_\psi^N$ from~\eqref{eq:mu_gauss1d}. For the observables $a$, $b$ and $c$ all computations can be done explicitly. For the observable $d$ we used a highly accurate quadrature scheme. The results depicted in Figure~\ref{fig:gauss_error} show that the errors are indeed of order $O({\varepsilon}^{N})$. Moreover, as expected, for $N=3$ the observable $a$, and for $N=4$ the observables $a$ and $b$, are reproduced without error. \begin{figure}[h!] \includegraphics[width=\textwidth]{Gauss_error} \caption{\label{fig:gauss_error} Errors of the expectation values of the observables $a$ (dashed), $b$ (solid), $c$ (dashed dotted) and $d$ (dotted) computed by the spectrogram approximations of order $N\in\{1,\hdots,4\}$. The state is a Gaussian wave packet centered at $(\tfrac12,-1)$. } \end{figure} We highlight that the error constants do not seem to grow with the order, although $\mu^N_\psi$ only weakly approximates ${\mathcal W}_\psi$. This indicates that stronger types of convergence might hold for particular states and observables.
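As an illustration (our own sketch, not the quadrature scheme used in the paper), the exact reproduction of $a(q,p)=q^4+1$ at order $N=3$ can be checked by Monte Carlo: samples from the Gaussian factor of $\mu^3_\psi$ are reweighted with the signed polynomial prefactor from~\eqref{eq:mu_gauss1d}, and the result is compared with the exact expectation obtained from the Gaussian moments of the Wigner function.

```python
import math
import random

eps, (q0, p0) = 0.05, (0.5, -1.0)

def a(q, p):
    return q ** 4 + 1.0

def poly3(r):
    # Signed polynomial prefactor of mu^3 in the variable r = |z - w|^2 / (2 eps)
    return 15 / 8 - (11 / 8) * r + (5 / 8) * r ** 2 / 2 - (1 / 8) * r ** 3 / 6

# Monte Carlo over z = w + sqrt(eps) * u with u a standard 2D normal,
# reweighted by poly3; this integrates a against the signed density mu^3.
rng = random.Random(1)
n = 200_000
total = 0.0
for _ in range(n):
    uq, up = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
    r = (uq ** 2 + up ** 2) / 2
    total += a(q0 + math.sqrt(eps) * uq, p0 + math.sqrt(eps) * up) * poly3(r)
approx = total / n

# Exact value: the Wigner function of g_w is the normal density N(w, (eps/2) Id),
# so <op(a)> = E[q^4] + 1 with q ~ N(q0, eps/2).
s2 = eps / 2
exact = q0 ** 4 + 6 * q0 ** 2 * s2 + 3 * s2 ** 2 + 1
```

The two values agree up to the Monte Carlo sampling error, consistent with the vanishing of the $O({\varepsilon}^3)$ remainder for a polynomial observable of degree four.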
\subsection{Sampling a hat function}\label{sec:hat} In a second set of experiments we consider the normalized semiclassical hat function \begin{equation}\label{eq:hat} \psi(x) = \sqrt{\tfrac3{2\sqrt{{\varepsilon}}}} \left(1-\frac{|x-q|}{\sqrt{\varepsilon}}\right) 1_{|x-q|<\sqrt{{\varepsilon}}}, \quad x\in\mathbb R, \end{equation} that is localized around $q \in \mathbb R$. Computing the Wigner function and the spectrograms of $\psi$ explicitly is difficult. Therefore, we sample from the densities $\mu_\psi^N$ by means of the Markov chain Monte Carlo algorithm introduced in \S\ref{sec:MCMC}, and discretize the inner product~\eqref{eq:hermite_spec_quad} by Quasi-Monte Carlo quadrature with $10^3$ Sobol points. In Figure~\ref{fig:hat} one can see that the numerically computed Wigner function and its approximate reconstruction via the weighted histogram \begin{equation}\label{eq:weighted_hist} z \mapsto \frac1n \sum_{j=0}^2 (-1)^jC_{2,j}\#\{k: z^j_k \approx z\}, \quad z^j_1,\hdots, z^j_n \sim S_\psi^{\ph_j} \end{equation} of the signed density $\mu^3_\psi$ look very similar. In fact, the weighted histogram attains negative values in the same regions where the Wigner function also becomes negative. \begin{figure}[h!] \includegraphics[width=0.48\textwidth]{wigner_hat} \includegraphics[width=0.48\textwidth, height=0.4\textwidth]{mu3_hat} \caption{\label{fig:hat} The Wigner function of the hat function~\eqref{eq:hat} and a weighted histogram reconstruction~\eqref{eq:weighted_hist} of the spectrogram density~$\mu^3_\psi$ with $10^6$ samples, where ${\varepsilon} = 5\cdot 10^{-2}$. } \end{figure} In order to investigate the applicability of the Markov chain sampling algorithm from \S\ref{sec:MCMC} we now explore the errors for observables as a function of the chosen number of Monte Carlo points. We consider the expectation of the position observable $a(q,p)=q$, that is, the center of the sampled distribution, as well as that of the more complicated observable $d$ from the previous section.
We consider samplings of both a single Husimi function and the second order spectrogram density $\mu_\psi^{2}$ for the fixed parameter ${\varepsilon} = 10^{-2}$. \begin{figure}[h!] \includegraphics[width=\textwidth]{hat_error} \caption{\label{fig:hat_errors} Errors for the sampling method from~\S\ref{sec:MCMC} as a function of the number of Monte Carlo points, for the position in the case $N=0$, the position in the case $N=2$, and the observable $d$ in the case $N=2$. The results are averaged over ten independent runs. } \end{figure} Figure~\ref{fig:hat_errors} illustrates that, as expected, the asymptotic sampling error of order $O(n^{-1/2})$ for Markov chain Monte Carlo methods is also observed for our algorithm, although the probability densities are only approximately evaluated via quadrature. We note that it is necessary to use a sufficiently accurate quadrature in order to observe decent convergence results. Our experiments confirm that the Markov chain method from \S\ref{sec:MCMC} is applicable for the approximate sampling of Wigner functions in the semiclassical regime. The method could prove particularly useful in higher dimensions, where Wigner functions typically cannot be computed. \section*{Acknowledgements} It is a pleasure to thank Caroline Lasser for many valuable remarks and suggestions that helped to improve our manuscript. The author gratefully acknowledges support by the German Research Foundation (DFG), Collaborative Research Center SFB-TRR 109. \begin{appendix} \section{Proof of Proposition~\ref{lem:1d_gauss_lagu}}\label{app:proof_lem} \begin{proof}[Proof of Proposition~\ref{lem:1d_gauss_lagu}] We prove the assertion by induction. Since $L_0 \equiv1$, the base case $N=0$ is clear and we assume that the assertion is true for some $N\in \mathbb N$.
We compute \begin{align*} \nabla {\mathcal W}_{g_0}(z) &= -\tfrac2{\varepsilon} z {\mathcal W}_{g_0}(z) & \nabla L_n(\r(z)) &= \tfrac4{\varepsilon} z L_n'(\r(z)) \\ \Delta {\mathcal W}_{g_0}(z) &= \tfrac4{\varepsilon} (\tfrac{\r(z)}2 -1) {\mathcal W}_{g_0}(z) & \Delta L_n(\r(z)) &= \tfrac4{\varepsilon} \left( 2L_n'(\r(z)) + 2\r(z) L''_n(\r(z)) \right), \end{align*} and from now on write $ \r(z)=\r$ for simplicity. One has \begin{align} \tfrac{{\varepsilon}}2 &\nonumber \Delta ({\mathcal W}_{g_0}(z)L_n(\r)) = {\mathcal W}_{g_0}(z) \left[ L_n(\r)(\r-2) + 4L_n'(\r) + 4\r L_n''(\r) - 4 \r L_n'(\r)\right]\\ &\label{eq:laguerre_gauss_laps}= - {\mathcal W}_{g_0}(z) \left[ L_n(\r)(2-\r) -4 L_n'(\r) - 4\r L_n''(\r) + 4\r L_n'(\r)\right] \end{align} and, hence, the polynomial factor in~\eqref{eq:laguerre_gauss_laps} can be rewritten as \begin{align} L_n(\r)(2-\r) &\nonumber - 4\r L_n''(\r) + 4(\r-1) L_n'(\r)= L_n(\r)(2-\r+4n)\\ &\label{eq:Laguerre_reformu}=L_n(\r)(1+2n) + (n+1)L_{n+1}(\r) + nL_{n-1}(\r). \end{align} For verifying~\eqref{eq:Laguerre_reformu} one combines Laguerre's differential equation \[ x L_n''(x) = (x-1)L_n'(x) - nL_n(x), \] and the three-term recurrence relation \[ (n+1)L_{n+1}(x) =(2n +1 -x)L_n(x) - nL_{n-1}(x) ,\quad n \in \mathbb N, \] where $L_0\equiv 1$ and $L_{-1}\equiv 0$. Consequently, by the induction hypothesis and~\eqref{eq:Laguerre_reformu}, \begin{align} &\nonumber\left( -\tfrac{{\varepsilon}}2 \Delta \right)^{N+1} {\mathcal W}_{g_0}(z) = N! \sum_{n=0}^N {N \choose n} \left( -\tfrac{{\varepsilon}}2 \Delta \right)( {\mathcal W}_{g_0}(z) L_n(\r) ) \\ &\label{eq:sum_laplace_induct}= N! {\mathcal W}_{g_0}(z) \sum_{n=0}^N {N \choose n} \left[ L_n(\r)(1+2n) + (n+1)L_{n+1}(\r) + nL_{n-1}(\r) \right] \end{align} and we have to count the prefactors of the polynomials $L_n(\r)$ for $n=0,\hdots,N+1$ in the sum.
For $L_{N+1}(\r)$ we have the prefactor \[ (N+1) L_{N+1}(\r) = (N+1) {N+1 \choose N+1} L_{N+1}(\r) \] from the $N$th summand in~\eqref{eq:sum_laplace_induct}. For $L_N(\r)$ we get contributions from the $N$th and the $(N-1)$th summand, and observe \[ \left( (1+2N) + N {N \choose N-1} \right) L_N(\r) = (N+1) {N+1 \choose N} L_N(\r). \] For all $1\leq n \leq N-1$ we get contributions from the $n$th, the $(n+1)$th, and the $(n-1)$th summand. Combining them yields \begin{align*} &\left({N \choose n-1}n +{N \choose n}(1+2n) + {N \choose n+1}(n+1)\right) L_n(\r) \\ &= \left( n{N+1 \choose n} + (n+1) {N+1 \choose n+1} \right) L_n(\r) = (N+1) {N+1 \choose n} L_n(\r), \end{align*} and for $L_0$ we again have the prefactor $(N+1)$. Finally, rewriting~\eqref{eq:sum_laplace_induct} as a sum over $L_n(\r)$ with $n=0,\hdots,N+1$ completes the proof. \end{proof} \section{Binomial identities}\label{sec:proof_binom} We summarize some binomial identities that we repeatedly employ in our proofs. By applying Pascal's identity multiple times one directly obtains the formula \begin{align} \sum_{j=0}^N {k+j \choose j} &= {k+N+1 \choose N}. \end{align} Furthermore, for all $N,m,k\in \mathbb N$ one has \begin{equation}\label{eq:binom_sum_prod1} \sum_{j=0}^{N}{N+m-j \choose m} {k+j \choose j} = {N+m+k+1 \choose N}. \end{equation} For the proof of~\eqref{eq:binom_sum_prod1} we use generating functions, and set \[ a_N = {N+m \choose N}, \quad b_N = {N+k \choose N}, \quad c_N:= \sum_{j=0}^{N}{N+m-j \choose m} {k+j \choose j} \] such that \begin{equation}\label{eq:power_series_binom} \sum_{j\geq 0}a_j x^j = (1-x)^{-(m+1)}, \quad \sum_{j\geq 0}b_j x^j = (1-x)^{-(k+1)} \end{equation} for all $|x|<1$. Then, $c_N$ is the $N$th term in the Cauchy product of $(a_j)_{j\geq0}$ and $(b_j)_{j\geq 0}$, and hence \[ \sum_{j\geq 0}c_j x^j = (1-x)^{-(m+k+2)}. \] Comparing the coefficients with the power series~\eqref{eq:power_series_binom} implies the assertion.
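Both identities are easily confirmed numerically for small parameters; the following quick sketch (our own check) verifies them exhaustively for $N,m,k<8$:

```python
from math import comb

# Pascal-type sum:          sum_{j=0}^N C(k+j, j) = C(k+N+1, N)
# Cauchy-product identity:  sum_{j=0}^N C(N+m-j, m) C(k+j, j) = C(N+m+k+1, N)
for N in range(8):
    for k in range(8):
        assert sum(comb(k + j, j) for j in range(N + 1)) == comb(k + N + 1, N)
        for m in range(8):
            lhs = sum(comb(N + m - j, m) * comb(k + j, j) for j in range(N + 1))
            assert lhs == comb(N + m + k + 1, N)
print("both binomial identities hold for all N, m, k < 8")
```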
\end{appendix} \bibliographystyle{elsarticle-num} \providecommand{\noopsort}[1]{}\providecommand{\singleletter}[1]{#1}%
\section{Appendix} In this appendix, we provide further detail on the arguments of the main text: the stability of fMPO-symmetry, the identification of the ground state space, and the transformation of Hamiltonians under gauge transformations. \subsection{Stability of fMPO-symmetry} One can easily verify that fMPO-symmetry is stable under concatenation for a branching structure admitting a global flow by considering all different concatenation cases. To this end we first make a distinction between an ``open'' concatenation, where the MPO tensor concatenation depicted in Fig.~\ref{concat}a is relevant, and a partially closed concatenation (Fig.~\ref{concat}b,c), relevant when an outer vertex of a triangle tensor becomes an inner vertex during the concatenation process. Before we discuss the different cases, we introduce a short-hand notation to symbolize the concatenation of MPO-symmetric tensors. Instead of drawing the full symmetry MPO, we just indicate the positions of $Y$-tensors before and after the concatenation (Fig.~\ref{abb}) by circles at the respective boundary vertices. In the case of ``open'' concatenation it is sufficient to consider concatenation along an edge pointing towards the global flow direction and then categorize the different cases according to the shared angles with the neighboring edges at the origin vertex. Since the placement of $Y$-tensors only depends on whether the angles are smaller or larger than $\pi$ and on the edge orientations, the exact angles are irrelevant and we only have to distinguish four cases, represented by $0,\pi/2,\pi,3\pi/2$. Taking into account geometric constraints imposed by the global flow criterion and making use of the mirror symmetry along the global flow direction, there are 6 distinct cases left to consider. All of them can be shown to yield the correct MPO-symmetry after concatenation, as shown in Fig.~\ref{casesopen}.
The second case is relevant when two MPO-symmetric tensors are contracted along a common boundary of length two or more. Performing the contraction sequentially along the common boundary amounts to a step-by-step reduction of the size of the boundary and thus to a reduction of the size of the symmetry MPO. Note that in this case there are two different concatenation rules depending on the orientation of the inner edge relative to the position of the already contracted transversal indices (Fig.~\ref{concat}b and c). If the edges to be contracted share a vertex at their origin, the contraction does not yield additional $Y$-tensors, and independently of the rest of the tensor the contracted tensor will have the expected MPO-symmetry (Fig.~\ref{closingcases}, upper panel). In the other case, one can consider all different edge configurations in the immediate vicinity of the contracted edge. Using symmetry arguments, there are three distinct cases which can be checked explicitly (Fig.~\ref{closingcases}, panels 2 to 4). \begin{figure} \centering \includegraphics[width=0.45\textwidth]{abreviate0b.pdf} \caption{The concatenation of two MPO-symmetric tensors written in explicit notation (upper panel) and in abbreviated notation only depicting the positions of $Y$-tensors.
} \label{abb} \end{figure} \begin{figure} \centering \includegraphics[width=0.4\textwidth]{opencasessmall.pdf} \caption{All six distinct cases for concatenating two MPO-symmetric tensors along an open edge.} \label{casesopen} \end{figure} \begin{figure} \centering \includegraphics[width=0.4\textwidth]{closingcases.pdf} \caption{Contracting two edges of the same MPO-symmetric tensor.} \label{closingcases} \end{figure} \subsection{Ground state space} The coefficients $\lambda(\alpha;g,h)$ defining the state $M(g,h)$ in Eq.~(\ref{Mgh}) are given by \begin{align} \lambda(\alpha;g,h)=&\frac{\omega(h,g,\alpha) \omega(h, g\alpha, ^\alpha g^{-1} )}{\omega(g,h,\alpha) \omega(g,h\alpha ,^\alpha h^{-1} )} \label{lambda} \\ \times&(-1)^{s(gh \alpha,^\alpha h^{-1} ) s(g\alpha,^\alpha g^{-1} )+s(hg\alpha,^\alpha g^{-1}) s(h \alpha,^\alpha h^{-1} )} \nonumber \\ \times&(-1)^{(s(h,\alpha)+s(h\alpha,^\alpha h^{-1} ))(s(g,\alpha)+s(g\alpha,^\alpha g^{-1} ))+s(hg,\alpha)}. \nonumber \end{align} The MPO-symmetric boundary state is given by \begin{align} M'(g,h) =& \frac{1}{|G|} \sum_k V(k) M(g,h) \label{Mprimeghappendix}\\ =& \frac{1}{|G|} \sum_k \eta_g(h,k) M(g^k,h^k) \;, \nonumber \end{align} where $g^k=kgk^{-1}$ denotes conjugation and \begin{align} \eta_g(h,k) =& \frac{\omega(g,k^{-1},h^k) \omega(k^{-1},h^k,g^k) \omega(h,g,k^{-1})}{\omega(h,k^{-1},g^k) \omega(k^{-1},g^k,h^k) \omega(g,h,k^{-1})} \nonumber \\ &(-1)^{[s(k^{-1},kh)+s(kh,k^{-1})][s(k^{-1},kg)+s(kg,k^{-1})]} \nonumber \\ &(-1)^{s(k^{-1},kgh)+s(kgh,k^{-1})} \label{eta} \;. \end{align} The fact that $M'(g,h)$ and $M'(j,l)$ are linearly dependent if $(j,l)=(g^t,h^t)$ for some $t$, i.e. if they are in the same pair conjugacy class, follows from the identity \begin{equation} \eta_{g^t}(x^t,yt^{-1}) = \frac{\eta_g(x,y)}{\eta_g(x,t)} \label{lemma10} \end{equation} that holds formally as in the bosonic setting, despite the fact that $\eta$ has additional sign factors and is given by a product of super 3-cocycles $\omega$.
Eq.~(\ref{lemma10}) is also used in order to derive that only $c$-regular pair conjugacy classes contribute to the ground state dimension. To this end, first note that if \begin{equation} \sum_{s\in \mathcal Z(g,h)} \eta_g(h,s)=0 \;, \label{centralizer_coeff} \end{equation} then also \begin{equation} \sum_{s | g^s=g^{t_i}, h^s=h^{t_i}} \eta_g(h,s) =0 , \quad \forall i \;. \label{coeff} \end{equation} Writing out the state $M'(g,h)$ in the basis of the elements of the pair conjugacy class $\mathcal C(g,h)$, we make use of the fact above to conclude that \begin{equation} M'(g,h)=0 \Leftrightarrow \sum_{s\in\mathcal Z(g,h)} \eta_g(h,s)=0 \;.\label{coeff2} \end{equation} To single out the pair conjugacy classes for which Eq.~(\ref{coeff2}) is fulfilled, we note that for all elements $k$ in the centralizer of $(g,h)$ with $[g,h]=0$ we have \begin{equation} \eta_g(h,k) = \frac{c_g(k^{-1},h)}{c_g(h,k^{-1})} \;, \label{etac} \end{equation} where $c_g(h,k)$ is defined in Eq.~(\ref{comega}). For $g,h,k \in \mathcal Z(g,h)$ and $[g,h]=0$, i.e. for mutually commuting $g,h,k$, the function $c_g(h,k)$ is a 2-cocycle. This insight is used to apply the same arguments as in Ref.~\cite{TwistedQMDouble} and show that \begin{equation} \sum_{s\in\mathcal Z(g,h)} \eta_g(h,s)=0 \Leftrightarrow \exists s\in\mathcal Z(g,h): c_g(h,s) \neq c_g(s,h) \; . \label{coeff3} \end{equation} In other words, only pair conjugacy classes $\mathcal C(g,h)$ with $[g,h]=0$, $s(g,h)=s(h,g)$ and $c_g(h,k)=c_g(k,h)$ contribute to the total ground state space dimension. \subsection{Hamiltonian} As addressed in Ref.~\cite{LinLevin}, string-net models can be defined using different gauges. Under such gauge transformations the Hamiltonian changes according to local physical unitary operations that do not alter the topological phase. A typical gauge degree of freedom is the choice of the loop weight, also referred to as quantum dimension $d_i$. For all bosonic twisted quantum double models an MPO-injective PEPS description can be found.
Here, a particular gauge degree of freedom is the choice of the representative $\omega$ of a certain cohomology class which defines the model. A natural gauge is to choose a normalized 3-cocycle, which yields a wave function invariant under adding a closed loop, i.e. $d_i=1$. Thus, the tensor network gauge suggests a particular Hamiltonian gauge of the corresponding string-net model. The same applies in the fermionic setting. The Hamiltonian in Ref.~\cite{GuWen} is given in a specified gauge ($\beta=1$). To obtain the gauge compatible with a tensor network description we apply the transformation \begin{equation} c^\dag \mapsto \frac{1}{\sqrt{\beta}} c^\dag , \qquad c \mapsto \sqrt{\beta} c \; \label{gauge} \end{equation} to ungauge the Hamiltonian, and then choose $\beta=-\mathrm{i}$, which yields the gauge with trivial closed-loop factors. The matrix elements, in the tensor-network gauge, of the plaquette operator written as \begin{align} Q_p =& \sum_{g_1,\ldots,g_6} p(g_1,\ldots,g_6) \mathcal{F}_p(g_1,\ldots,g_6) \\ & \times \ket{g_1\oplus 1,\ldots, g_6 \oplus1}\bra{g_1,\ldots,g_6} \;, \nonumber \end{align} where $\oplus$ denotes addition modulo two, are given in Table~\ref{table_hamiltonian}.
\onecolumngrid \begin{table*}[t] \centering \begin{tabular}{ |c|c|c|c|} \hline $ i,j,k,l,m,n $ & $p(i,j,k,l,m,n) \mathcal{F}_p(i,j,k,l,m,n)$ & $ i,j,k,l,m,n $ & $p(i,j,k,l,m,n) \mathcal{F}_p(i,j,k,l,m,n)$ \\ \hline 000000 & $ - \alpha/\beta c_3^\dagger c_6^\dagger$ & & \\ \hline 100000 & $c_3^\dagger c_1$ & 010000 & $1/\beta^2 c_1^\dagger c_2^\dagger c_3^\dagger c_6^\dagger $ \\ 001000 & $c_6^\dagger c_2$ & 000100 & $c_6^\dagger c_4$ \\ 000010 & $-1/\beta^2 c_3^\dagger c_4^\dagger c_5^\dagger c_6^\dagger $ & 000001 & $ c_3^\dagger c_5$ \\ \hline 110000 & $-1/\beta c_2^\dagger c_3^\dagger$ & 011000 & $-1/\beta c_1^\dagger c_6^\dagger $ \\ 001100 & $-\beta c_6^\dagger c_4 c_3 c_2$ & 000110 & $1/\beta c_5^\dagger c_6^\dagger$ \\ 000011 & $-1/\beta c_3^\dagger c_4^\dagger $ & 100001 & $\beta c_3^\dagger c_6 c_5 c_1$ \\ \hline 101000 & $-\alpha \beta c_2 c_1$ & 010100 & $ \alpha/\beta c_1^\dagger c_2^\dagger c_6^\dagger c_4 $ \\ 001010 & $ -\alpha/\beta c_4^\dagger c_5^\dagger c_6^\dagger c_2$ & 000101 & $ \alpha\beta c_5 c_4$ \\ 100010 & $ -\alpha/\beta c_3^\dagger c_4^\dagger c_5^\dagger c_1 $ & 010001 & $ \alpha/\beta c_1^\dagger c_2^\dagger c_3^\dagger c_5$ \\ \hline 100100 & $ -\alpha \beta c_4 c_1$ & 010010 & $ \alpha/\beta^2 c_1^\dagger c_2^\dagger c_3^\dagger c_4^\dagger c_5^\dagger c_6^\dagger $ \\ 001001 & $\alpha\beta c_5 c_2$ & &\\ \hline 000111 & $1$ & 001110 & $- c_5^\dagger c_6^\dagger c_3 c_2 $ \\ 011100 & $c_1^\dagger c_6^\dagger c_4 c_3 $ & &\\ \hline 101100 & $ \alpha \beta^2 c_4 c_3 c_2 c_1$ & 010110 & $ \alpha / \beta^2 c_1^\dagger c_2^\dagger c_5^\dagger c_6^\dagger $ \\ 001011 & $ -\alpha c_4^\dagger c_2$ & 100101 & $ -\alpha \beta^2 c_6 c_5 c_4 c_1$ \\ 110010 & $ \alpha/ \beta^2 c_2^\dagger c_3^\dagger c_4^\dagger c_5^\dagger $ & 011001 & $ -\alpha c_1^\dagger c_5$ \\ \hline 010101 & $ - c_4^\dagger c_5^\dagger c_2 c_1$ & & \\ \hline \end{tabular} \caption{The first $32$ matrix element of the plaquette operator $Q_p$ defined on a hexagon with 
physical spins $i,j,k,l,m,n$ after ungauging the Hamiltonian given in Ref.~\cite{FermionicToricCode}. The remaining 32 matrix elements follow by Hermitian conjugation.} \label{table_hamiltonian} \end{table*} \end{document}
\section{Conclusion} \label{section7} In this paper, we propose the task of multi-turn sticker response selection, which recommends an appropriate sticker based on the multi-turn dialog context history without relying on external knowledge. To tackle this task, we proposed the \emph{sticker response selector} (SRS). Specifically, SRS first learns the representation of each utterance using a self-attention mechanism, and learns the sticker representation by a CNN. Next, a deep interaction network is employed to fully model the dependency between the sticker and the utterances. The deep interaction network consists of a co-attention matrix that calculates the attention between each word in an utterance and each unit in a sticker representation. Then, a bi-directional attention is used to obtain the utterance-aware sticker representation and the sticker-aware utterance representations. Finally, a fusion network models the short-term and long-term relationships between the interaction results, and a fully-connected layer is applied to obtain the final selection result. Our model outperforms state-of-the-art methods on all metrics, and the experimental results also demonstrate the robustness of our model on datasets with different degrees of similarity between candidate stickers. In the near future, we aim to propose a personalized sticker response selection system. \section{Dataset} \label{sec:dataset} In this section, we introduce our multi-turn dialog dataset with stickers as responses in detail. \subsection{Data Collection} We collect a large-scale multi-turn dialog dataset with stickers from one of the most popular messaging apps. In this app, a large number of sticker sets are published, and anyone can use these stickers when chatting with a friend or in a chat group. Specifically, we select 20 public chat groups consisting of active members; these are all open groups that anyone can join without any authorization. The chat history of these groups is collected along with the complete sticker sets.
These sticker sets include stickers with a similar style. All stickers are resized to a uniform size of $128 \times 128$ pixels. We use 20 utterances before the sticker response as the dialog context, and then we filter out irrelevant utterance sentences, such as URL links and attached files. Due to privacy concerns, we also filter out user information and anonymize user IDs. To construct negative samples, 9 stickers other than the ground truth sticker are randomly sampled from the sticker set. After pre-processing, there are 320,168 context-sticker pairs in the training dataset, 10,000 pairs in the validation dataset, and 10,000 pairs in the test dataset. We make sure that there is no overlap between these three datasets. Two examples are shown in Figure~\ref{fig:dataset-case}. We publish this dataset to the community to facilitate further research on the dialog response selection task. \subsection{Statistics and Analysis} \begin{table}[t] \centering \caption{Statistics of Response Selection Dataset.} \label{tab:stat-dataset} \begin{tabular}{llll} \toprule & Train & Valid & Test \\ \midrule \# context-sticker pairs & 320,168 & 10,000 & 10,000 \\ Avg. words of context utterance & 7.54 & 7.50 & 7.42 \\ Avg. users participate & 5.81 & 5.81 & 5.79 \\ \bottomrule \end{tabular} \end{table} In total, there are 3,516 sticker sets containing 174,695 stickers. The average number of stickers in a sticker set is 49.64. Each context includes 15.5 utterances on average. The average number of users who participate in the dialog context over each dataset is shown in the third row of Table~\ref{tab:stat-dataset}. \subsection{Sticker Similarity} \begin{figure}[t] \centering \includegraphics[scale=0.40]{figs/sticker-simi-count.pdf} \caption{ Similarity distribution among all stickers in the test dataset.
} \label{fig:similarity-distribution} \end{figure} \begin{figure*}[t] \centering \includegraphics[scale=0.58]{figs/sticker-dataset-case.pdf} \caption{ Example cases in the dataset with different similarity scores. } \label{fig:dataset-case} \end{figure*} Stickers in the same set always share the same style or contain the same cartoon characters. Intuitively, the more similar the candidate stickers are, the more difficult it is to choose the correct sticker from the candidates. In other words, the similarity between candidate stickers determines the difficulty of the sticker selection task. To investigate the difficulty of this task, we calculate the average similarity among the candidate stickers of each sample using the Structural Similarity Index (SSIM) metric~\cite{wang2004image,avanaki2008exact}. We first calculate the similarity between the ground truth sticker and each negative sample, and then average the similarity scores. The similarity distribution over the test data is shown in Figure~\ref{fig:similarity-distribution}, where the average similarity is 0.258. The examples in Figure~\ref{fig:dataset-case} also illustrate the similarity of stickers more intuitively, where the left one has a relatively low similarity score, and the right one has a high similarity score. \section{Experimental Setup} \label{section5} \subsection{Research Questions} We list six research questions that guide the experiments: \noindent $\bullet$ \textbf{RQ1} (See \S~\ref{subsec:Overall}): What is the overall performance of SRS compared with all baselines? \noindent $\bullet$ \textbf{RQ2} (See \S~\ref{subsec:ablation}): What is the effect of each module in SRS? \noindent $\bullet$ \textbf{RQ3} (See \S~\ref{subsec:number}): How does the performance change when the number of utterances changes?
\noindent $\bullet$ \textbf{RQ4} (See \S~\ref{subsec:attention}): Can the co-attention mechanism successfully capture the salient parts of the sticker image and the important words in the dialog context? \noindent $\bullet$ \textbf{RQ5} (See \S~\ref{subsec:features}): What is the influence of the similarity between candidate stickers? \noindent $\bullet$ \textbf{RQ6} (See \S~\ref{subsec:hidden}): What is the influence of the parameter settings? \subsection{Comparison Methods} We first conduct an ablation study to prove the effectiveness of each component in SRS, as shown in Table~\ref{tab:ablations}. Specifically, we remove each key part of SRS to create ablation models and then evaluate the performance of these models. \begin{table}[t] \centering \caption{Ablation models for comparison.} \label{tab:ablations} \begin{tabular}{ll} \toprule Acronym & Gloss \\ \midrule SRS w/o pretrain & \multicolumn{1}{p{5cm}}{\small SRS w/o pre-trained Inception-v3 model}\\ SRS w/o Classify & \multicolumn{1}{p{5cm}}{\small SRS w/o emoji classification task}\\ SRS w/o DIN & \multicolumn{1}{p{5cm}}{\small SRS w/o \textbf{D}eep \textbf{I}nteraction \textbf{N}etwork}\\ SRS w/o FR & \multicolumn{1}{p{5cm}}{\small SRS w/o \textbf{F}usion \textbf{R}NN}\\ \bottomrule \end{tabular} \end{table} Next, to evaluate the performance of our model, we compare it with the following baselines. Note that, since no existing model can be directly applied to our task, we adapt VQA and multi-turn response selection models to the sticker response selection task. \noindent (1) \textbf{Synergistic}: \cite{guo2019image} devises a novel synergistic network for the VQA task. First, candidate answers are coarsely scored according to their relevance to the image-question pair. Afterward, answers with high probabilities of being correct are re-ranked by synergizing with the image and question. This model achieves state-of-the-art performance on the Visual Dialog v1.0 dataset~\cite{das2017visual}.
\noindent (2) \textbf{PSAC}: \cite{li2019beyond} proposes the positional self-attention with co-attention architecture for the VQA task, which does not require RNNs for video question answering. We replace the output probability over the vocabulary with the probability over the candidate sticker set. \noindent (3) \textbf{SMN}: \cite{Wu2017SequentialMN} proposes a sequential matching network to address response selection for the multi-turn conversation problem. SMN first matches a response with each utterance in the context. Then the matching vectors are accumulated in chronological order through an RNN, and the final matching score is calculated from the hidden states of the RNN. \noindent (4) \textbf{DAM}: \cite{zhou2018multi} extends the Transformer model~\cite{vaswani2017attention} to the multi-turn response selection task, where representations of text segments are constructed using stacked self-attention. Then, truly matched segment pairs are extracted across the context and response. \noindent (5) \textbf{MRFN}: \cite{tao2019multi} proposes a multi-representation fusion network, where the representations can be fused into matching at an early stage, at the intermediate stage, or at the last stage. This is the state-of-the-art model on the multi-turn response selection task. For the three multi-turn response selection baselines above, we replace the candidate embedding RNN network with the image encoding CNN network Inception-v3, as used in our model. This network is initialized using a pre-trained model\footnote{\url{https://github.com/tensorflow/models/tree/master/research/slim}} for all baselines and SRS. \subsection{Evaluation Metrics} Following~\cite{tao2019multi,zhou2018multi}, we employ recall at position $k$ in $n$ candidates, $R_n@k$, as an evaluation metric, which measures whether the positive response is ranked in the top $k$ positions of $n$ candidates. Following~\cite{zhou2018multi}, we also employ mean average precision (MAP)~\cite{baeza2011modern} as an evaluation metric.
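To make these metrics concrete, the following Python sketch (our own minimal reference implementation for illustration, not the released evaluation code) computes $R_n@k$ and MAP for the setting used here, where each example has exactly one positive candidate, so average precision reduces to the reciprocal rank of the positive:

```python
import numpy as np

def recall_at_k(scores, positive_idx, k):
    """R_n@k: 1 if the positive candidate is ranked in the top k of n candidates."""
    ranking = np.argsort(-np.asarray(scores))  # candidate indices, best score first
    return 1.0 if positive_idx in ranking[:k] else 0.0

def mean_average_precision(all_scores, all_positives):
    """MAP over a list of examples; with a single positive per example,
    average precision equals 1 / rank of the positive candidate."""
    aps = []
    for scores, pos in zip(all_scores, all_positives):
        ranking = np.argsort(-np.asarray(scores)).tolist()
        rank = ranking.index(pos) + 1  # 1-based rank of the ground truth
        aps.append(1.0 / rank)
    return float(np.mean(aps))
```

Note that ties between candidate scores are broken by the stable order of `argsort`; a production evaluator would need an explicit tie-breaking policy.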
The statistical significance of differences observed between the performance of two runs is tested using a two-tailed paired t-test and is denoted using $^{\blacktriangle}$\ (or $^{\blacktriangledown}$) for strong significance at $\alpha=0.01$. \subsection{Implementation Details} We implement our experiments using TensorFlow~\cite{abadi2016tensorflow} on an NVIDIA P100 GPU. If an utterance contains fewer than 30 words, we pad it with zeros; otherwise, only the first 30 words are kept. The word embedding dimension is set to 100 and the number of hidden units is 100. The batch size is set to 32. Nine negative samples are randomly sampled from the sticker set containing the ground truth sticker, so the model selects from 10 candidate stickers in total. We use the Adam optimizer~\cite{Kingma2015AdamAM} as our optimization algorithm, and the learning rate is $1 \times 10^{-4}$. \section{Experimental result} \label{section6} \subsection{Overall Performance} \label{subsec:Overall} \begin{table}[t] \centering \caption{RQ1: Automatic evaluation comparison.
Significant differences are with respect to MRFN.} \begin{tabular}{@{}l cc cc @{}} \toprule & MAP & $R_{10}@1$ & $R_{10}@2$ & $R_{10}@5$ \\ \midrule \multicolumn{5}{@{}l}{\emph{Visual Q\&A methods}}\\ Synergistic & 0.593 \phantom{0} & 0.438\phantom{0} & 0.569\phantom{0} & 0.798\phantom{0} \\ PSAC & 0.662\phantom{0} & 0.533\phantom{0} & 0.641\phantom{0} & 0.836\phantom{0} \\ \midrule \multicolumn{5}{@{}l}{\emph{Multi-turn response selection methods}}\\ SMN & 0.524\phantom{0} & 0.357\phantom{0} & 0.488\phantom{0} & 0.737\phantom{0} \\ DAM & 0.620\phantom{0} & 0.474\phantom{0} & 0.601\phantom{0} & 0.813\phantom{0} \\ MRFN & 0.684\phantom{0} & 0.557\phantom{0} & 0.672\phantom{0} & 0.853\phantom{0}\\ \midrule SRS & \textbf{0.709} & \textbf{0.590}$^{\blacktriangle}$ & \textbf{0.703}$^{\blacktriangle}$ & \textbf{0.872} \\ \bottomrule \end{tabular} \label{tab:comp_auto_baselines} \end{table} \begin{figure*} \centering \subfigure[$MAP$ score]{ \label{figs:MAP.png} \includegraphics[scale=0.26]{figs/MAP.pdf} } \subfigure[$R_{10}@1$ score]{ \label{figs:r1.png} \includegraphics[scale=0.26]{figs/r1.pdf} } \subfigure[$R_{10}@2$ score]{ \label{figs:r2.png} \includegraphics[scale=0.26]{figs/r2.pdf} } \subfigure[$R_{10}@5$ score]{ \label{figs:r5.png} \includegraphics[scale=0.26]{figs/r5.pdf} } \caption{ Performance of SRS on all metrics when reading different number of utterances. } \label{fig:turns} \end{figure*} For research question \textbf{RQ1}, we examine the performance of our model and baselines in terms of each evaluation metric, as shown in Table~\ref{tab:comp_auto_baselines}. First, the performance of the multi-turn response selection models is generally consistent with their performances on text response selection datasets. SMN~\cite{Wu2017SequentialMN}, an earlier work on multi-turn response selection task with a simple structure, obtains the worst performance on both sticker response and text response selection. 
DAM~\cite{zhou2018multi} improves on the SMN model and achieves the second best performance. MRFN~\cite{tao2019multi} is the state-of-the-art text response selection model and achieves the best performance among the baselines on our task as well. Second, the VQA models generally perform worse than the multi-turn response selection models, since the interaction between the multi-turn utterances and the sticker is important, which is not taken into account by the VQA models. Finally, SRS achieves the best performance, with 3.36\%, 5.92\% and 3.72\% improvements in MAP, $R_{10}@1$ and $R_{10}@2$ respectively over the state-of-the-art multi-turn selection model, \emph{i.e.,} MRFN, and with 6.80\%, 10.69\% and 8.74\% significant increases over the state-of-the-art visual dialog model, PSAC. This demonstrates the superiority of our model. \subsection{Ablation Study} \label{subsec:ablation} \begin{table}[t] \centering \caption{RQ2: Evaluation of different ablation models.} \begin{tabular}{@{}lcc cc@{}} \toprule & MAP & $R_{10}@1$ & $R_{10}@2$ & $R_{10}@5$ \\ \midrule SRS w/o pretrain & 0.650 & 0.510 & 0.641 & 0.833\\ SRS w/o Classify & 0.707 & 0.588 & 0.700 & 0.871 \\ SRS w/o DIN & 0.680 & 0.552 & 0.669 & 0.854 \\ SRS w/o FR & 0.677 & 0.551 & 0.663 & 0.863 \\ SRS & \textbf{0.709} & \textbf{0.590} & \textbf{0.703} & \textbf{0.872} \\ \bottomrule \end{tabular} \label{tab:comp_rouge_ablation} \end{table} For research question \textbf{RQ2}, we conduct ablation tests on the use of the pre-trained Inception-v3 model, the sticker classification loss, the deep interaction network and the fusion RNN, respectively. The evaluation results are shown in Table~\ref{tab:comp_rouge_ablation}. The performance of all ablation models is worse than that of SRS under all metrics, which demonstrates the necessity of each component in SRS. We also find that the sticker classification makes the least contribution to the overall performance.
However, this auxiliary task speeds up the training process and helps our model converge quickly: training SRS to convergence takes 19 hours, compared with 30 hours for training SRS w/o Classify. The fusion RNN brings a significant contribution, improving the MAP and $R_{10}@1$ scores by 4.43\% and 7.08\%, respectively. Besides, the deep interaction network also plays an important part. Without this module, the interaction between the sticker and the utterances is hindered, leading to a 6.88\% drop in $R_{10}@1$. \subsection{Analysis of Number of Utterances} \label{subsec:number} For research question \textbf{RQ3}, in addition to comparing with various baselines, we also evaluate our model when reading different numbers of utterances to study how the performance relates to the number of context turns. Figure~\ref{fig:turns} shows how the performance of SRS changes with respect to different numbers of utterance turns. We observe a similar trend for SRS on the first three evaluation metrics $MAP$, $R_{10}@1$ and $R_{10}@2$: they first increase until the utterance number reaches 15, and then fluctuate as the utterance number continues to increase. There are two possible reasons for this phenomenon. The first reason might be that, when the information in the utterances is limited, the model can capture the features well, and thus when the amount of information increases, the performance gets better. However, the capacity of the model is limited, and when the amount of information exceeds this upper bound, the model gets confused by the overwhelming information. The second reason might be the usefulness of the utterance context. Utterances that occur too early before the sticker response may be irrelevant to the sticker and bring unnecessary noise. As for the last metric, the above observations do not hold. The $R_{10}@5$ scores fluctuate when the utterance number is below 15, and drop when the utterance number increases further.
The reason might be that $R_{10}@5$ is not a strict metric: it is relatively easy for the correct sticker to be ranked within the top half of the candidates. Thus, giving SRS more information does not help it perform better, while the noise this information brings harms the performance. On the other hand, though the number of utterances changes from 3 to 20, the overall performance of SRS generally remains at a high level, which proves the robustness of our model. \begin{figure*}[h] \centering \includegraphics[scale=0.58]{figs/sticker-predict-case.pdf} \caption{ Examples of sticker selection results produced by SRS. We show the selected sticker and three randomly selected candidate stickers with the attention heat map. The lighter the area on the image, the higher the attention weight it receives. } \label{fig:predict-case} \end{figure*} \begin{figure}[h] \centering \includegraphics[scale=0.55]{figs/sticker-text-attention-case.pdf} \caption{ Examples of the attention weights over the dialog utterance. We translate Chinese to English word by word. The darker the area, the higher the weight the word receives. } \label{fig:text-attention-case} \end{figure} \subsection{Analysis of Attention Distribution in Interaction Process} \label{subsec:attention} Next, we turn to address \textbf{RQ4}. We show three cases with their dialog contexts in Figure~\ref{fig:predict-case}. There are four stickers under each dialog context: one is the sticker selected by our model and the other three are randomly selected candidate stickers. As a main component of SRS, the deep interaction network comprises a bi-directional attention mechanism between the utterance and the sticker, where each word in the utterance and each unit in the sticker representation have a similarity score in the co-attention matrix.
To visualize the sticker selection process and to demonstrate the interpretability of SRS, we visualize the sticker-wise attention $\tau^s$ (Equation~\ref{equ:ua-sticker}) on the original sticker image and show some examples in Figure~\ref{fig:predict-case}. The lighter the area, the higher the attention it receives. Facial expressions are an important part of sticker images. Hence, we select several stickers with vivid facial expressions in Figure~\ref{fig:predict-case}. Take the fourth sticker in Case 1 as an example, where the character has a winking eye and a smiling mouth. The highlights are accurately placed on the character's eye, indicating that the representation of this sticker is highly dependent on this part. Another example is the last sticker in Case 3: there are two question marks in the top right corner of the sticker image, indicating that the girl is very suspicious. In addition to facial expressions, the characters' gestures can also represent emotions. Take the third sticker in Case 2 as an example: the character in this sticker gives a thumbs up representing support, and the attention lies on his hand, indicating that the model learns the key point of his body language. Furthermore, we randomly select three utterances from the test dataset and visualize the attention distribution over the words in each utterance, as shown in Figure~\ref{fig:text-attention-case}. We use the weight $\tau_j^u$ for the $j$-th word (calculated in Equation~\ref{equ:sa-utterance}) as the attention weight. We find that the attention module consistently assigns higher attention weights to the salient words, such as ``easy method'', ``make a lot of money'' and ``use China Mobile''. \subsection{Influence of Similarity between Candidates} \label{subsec:features} \begin{figure}[h] \centering \includegraphics[scale=0.40]{figs/similarity-recall.pdf} \caption{ Performance of SRS on groups of different candidate similarity.
} \label{fig:similarity-recall} \end{figure} In this section, we turn to \textbf{RQ5} to investigate the influence of the similarity between candidates. The candidate stickers are sampled from the same set, and stickers in a set usually have a similar style. Thus, it is natural to ask: Can our model identify the correct sticker from a set of similar candidates? What is the influence of the similarity between candidate stickers? Hence, we use the Structural Similarity Index (SSIM) metric~\cite{wang2004image,avanaki2008exact} to calculate the average similarity among all candidates in a test sample and then aggregate all test samples into five groups according to their average similarities. We calculate the $R_{10}@1$ of each group of samples, as shown in Figure~\ref{fig:similarity-recall}. The x-axis is the average similarity between candidate stickers and the y-axis is the $R_{10}@1$ score. Not surprisingly, SRS achieves the best performance when the average similarity of the candidate group is low, and its performance drops as the similarity increases. However, we can also see that, though the similarity varies from minimum to maximum, the performance stays at a high level overall. The $R_{10}@1$ scores of all five groups are above 0.42, and the highest score reaches 0.59. That is, our model is highly robust and can keep giving reasonable sticker responses. \subsection{Robustness of Parameter Setting}\label{subsec:hidden} \begin{figure}[h] \centering \includegraphics[scale=0.45]{figs/hidden.pdf} \caption{ Performance of SRS with different parameter settings. } \label{fig:hidden} \end{figure} Finally, we turn to address \textbf{RQ6} to investigate the robustness of the parameter settings. We train our model with different parameter settings, as shown in Figure~\ref{fig:hidden}. The hidden size of the RNN, CNN and the dense layer in our model is tuned from 50 to 250, and we use MAP and $R_n@k$ to evaluate each model.
As the hidden size grows from 50 to 100, the performance rises as well; the increase in hidden size improves the MAP and $R_{10}@1$ scores by 0.4\% and 1.0\%. As the hidden size grows further from 100 to 250, the performance declines slightly, with a 2.2\% and 3.9\% drop in terms of MAP and $R_{10}@1$ respectively. Nonetheless, each metric remains within a stable interval, which demonstrates that SRS is robust with respect to the parameter size. \section{Introduction} \label{sec:intro} In addition to text, images are another important means of expressing feelings and emotions in communication. In mobile messaging apps, these images can generally be classified into emojis and stickers. Emojis are usually used to help reinforce simple emotions in a text message due to their small size, and their variety is limited. Stickers, on the other hand, can be regarded as an alternative to text messages; they usually feature cartoon characters and are of high definition. They can express much more complex and vivid emotions than emojis. Most messaging apps, such as WeChat, Telegram, WhatsApp, and Slack, provide convenient ways for users to download stickers for free, or even share self-designed ones. We show a chat window including stickers in Figure~\ref{fig:example}. \begin{figure} \centering \includegraphics[scale=0.13]{figs/case.png} \caption{ An example of stickers in a multi-turn dialog. The sticker response selector automatically selects the proper sticker based on the multi-turn dialog history. } \label{fig:example} \end{figure} Stickers are becoming more and more popular in online chat. First, sending a sticker with a single click is much more convenient than typing text on the 26-letter keyboard of a small mobile phone screen.
Second, there are many implicit or strong emotions that cannot be accurately expressed by words but can be captured by stickers with vivid facial expressions and body language. However, the large-scale use of stickers means that it is not always straightforward to think of the sticker that best expresses one's feelings in the current chatting context. Users need to recall all the stickers they have collected and select the appropriate one, which is both difficult and time-consuming. Consequently, much research has focused on recommending appropriate emojis to users according to the chatting context. Existing works, such as~\cite{xie2016neural}, mostly focus on emoji recommendation, predicting the probable emoji given the contextual information from multi-turn dialog systems. In contrast, other works~\cite{barbieri2017emojis,barbieri2018multimodal} recommend emojis based on the text and images posted by a user. However, the use of emojis is restricted due to their limited variety and small size, while stickers are more expressive and of a greater variety. As for sticker recommendation, existing works such as~\cite{laddha2019understanding} and apps like Hike or QQ directly match the text typed by the user to the short text tag assigned to each sticker. However, since there are countless ways of expressing the same emotion, it is impossible to capture all variants of an utterance as tags. In this paper, we address the task of sticker response selection in multi-turn dialog, where an appropriate sticker is recommended based on the dialog history. There are two main challenges in this task: (1) To the best of our knowledge, no existing image recognition method is designed to model sticker images, so capturing the semantic meaning of a sticker is challenging. (2) Understanding the multi-turn dialog history is crucial for sticker recommendation, and jointly modeling the candidate sticker with the multi-turn dialog is challenging.
Herein, we propose a novel sticker recommendation model, namely the \emph{sticker response selector} (SRS), for sticker response selection in multi-turn dialog. Specifically, SRS first learns representations of the dialog context history using a self-attention mechanism and learns the sticker representation with a convolutional network. Next, SRS conducts deep matching between the sticker and each utterance and produces an interaction result for every utterance. Finally, SRS employs a fusion network, which consists of a fusion RNN sub-network and a fusion Transformer, to learn the short-term and long-term dependencies of the utterance interaction results. The final matching score is calculated by an interaction function. To evaluate the performance of our model, we construct a large-scale multi-turn dialog dataset associated with stickers from one of the most popular messaging apps. Extensive experiments conducted on this dataset show that SRS significantly outperforms the state-of-the-art baseline methods on commonly-used metrics. \noindent Our contributions can be summarized as follows: $\bullet$ We employ a deep interaction network to conduct matching between the candidate sticker and each utterance in the dialog context. $\bullet$ We propose a fusion network that captures the short-term and long-term dependencies of the interaction results of each utterance simultaneously. $\bullet$ Experiments conducted on a large-scale real-world dataset\footnote{https://github.com/gsh199449/stickerchat} show that our model outperforms all baselines, including state-of-the-art models. \section{Problem formulation} \label{sec:formulation} Before presenting our approach for sticker response selection in multi-turn dialog, we first introduce our notations and key concepts.
Similar to multi-turn dialog response selection~\cite{Wu2017SequentialMN,zhou2018multi}, we assume that there is a multi-turn dialog context $s=\{u_{1},\dots,u_{T_u}\}$ and a candidate sticker set $C=\{c_{1},\dots,c_{T_c}\}$, where $u_{i}$ represents the $i$-th utterance in the multi-turn dialog. In the $i$-th utterance $u_i=\{x^i_1,\dots,x^{i}_{T_x^i}\}$, $x^i_j$ represents the $j$-th word in $u_i$, and $T_x^i$ represents the total number of words in utterance $u_i$. Each $c_{i}$ represents a candidate sticker image with a binary label $y_i$, indicating whether $c_i$ is an appropriate response for the dialog context $s$. $T_u$ is the number of utterances in the dialog context and $T_c$ is the number of candidate stickers. For each candidate set, there is only one ground truth sticker, and the remaining ones are negative samples. Our goal is to learn a ranking model that can produce the correct ranking for each candidate sticker $c_i$; that is, that can select the correct sticker among all the other candidates. For the rest of the paper, we take the $i$-th candidate sticker $c_{i}$ as an example to illustrate the details of our model and omit the candidate index $i$ for brevity. \section{SRS model} \label{section4} \begin{figure*} \centering \includegraphics[scale=0.8]{figs/sticker-model.pdf} \caption{ Overview of SRS. We divide our model into four ingredients: (1) \textit{Sticker encoder} learns the sticker representation; (2) \textit{Utterance encoder} learns the representation of each utterance; (3) \textit{Deep interaction network} conducts deep matching interaction between the sticker representation and the utterance representations at different levels of granularity. (4) \textit{Fusion network} combines the long-term and short-term dependency features between the interaction results produced by (3). } \label{fig:model} \end{figure*} \subsection{Overview} In this section, we present our \emph{sticker response selector}, abbreviated as SRS.
An overview of SRS is shown in Figure~\ref{fig:model}, which can be split into four main parts: $\bullet$ \textit{Sticker encoder} is a convolutional neural network (CNN) based image encoding module that learns a sticker representation. $\bullet$ \textit{Utterance encoder} is a self-attention based module encoding each utterance $u_{i}$ in the multi-turn dialog context $s$. $\bullet$ \textit{Deep interaction network} conducts deep matching between the sticker representation and each utterance, and outputs the interaction results. $\bullet$ \textit{Fusion network} learns the short-term dependency by the fusion RNN and the long-term dependency by the fusion Transformer, and finally outputs the matching score by combining these features using an interaction function. \subsection{Sticker Encoder} \label{subsec:sticker_encoder} Much research has been conducted to alleviate gradient vanishing~\cite{he2016deep} and reduce computational costs~\cite{he2015delving} in image modeling tasks. We utilize one of these models, \emph{i.e.,} the Inception-v3 model~\cite{szegedy2016rethinking}, rather than a plain CNN to encode the sticker image: \begin{align} O, O_{\text{flat}} &= \text{Inception-v3}(c) , \label{eq:inceptionv3} \end{align} where $c$ is the sticker image. The sticker representation is $O \in \mathbb{R}^{p \times p \times d}$, which preserves the two-dimensional information of the sticker and will be used when associating stickers and utterances in \S\ref{deep_int}. We use the original image representation output of Inception-v3, $O_{\text{flat}} \in \mathbb{R}^{d}$, as another sticker representation. However, existing pre-trained CNN networks, including Inception-v3, are mostly built on real-world photos; thus, directly applying a pre-trained network to stickers does not speed up the training process. In our dataset, sticker authors give each sticker $c$ an emoji tag that denotes the general emotion of the sticker.
Hence, we propose an auxiliary sticker classification task to help the model converge quickly, which uses $O_{\text{flat}}$ to predict which emoji is attached to the corresponding sticker. More specifically, we feed $O_{\text{flat}}$ into a linear classification layer and then use the cross-entropy loss $\mathcal{L}_s$ as the loss function of this classification task. \subsection{Utterance Encoder} To model the semantic meaning of the dialog context, we learn the representation of each utterance $u_i$. First, we use an embedding matrix $e$ to map the one-hot representation of each word in each utterance $u_i$ to a high-dimensional vector space. We denote $e(x^i_j)$ as the embedding representation of word $x^i_j$. From these embedding representations, we use the attentive module from the Transformer~\cite{vaswani2017attention} to model the temporal interactions between the words in an utterance. Attention mechanisms have become an integral part of compelling sequence modeling in various tasks~\cite{bahdanau2014neural,fan2018hierarchical,Gao2019Abstractive,li2019beyond}. In our sticker selection task, we also need to let words fully interact with each other to model their dependencies without regard to their locations in the input sentence. The attentive module in the Transformer has three inputs: the query $Q$, the key $K$ and the value $V$. We use three fully-connected layers with different parameters to project the embedding of the dialog context $e(x^i_j)$ into three spaces: \begin{align} Q^i_j=FC(e(x^i_j)), K^i_j=FC(e(x^i_j)) , V^i_j=FC(e(x^i_j)). \end{align} The attentive module then takes each $Q^i_j$ to attend to $K^i_\cdot$, and uses the resulting attention distribution $\alpha^{i}_{j, \cdot} \in \mathbb{R}^{T_x^i}$ as weights to obtain the weighted sum of $V^i_{\cdot}$, as shown in Equation~\ref{equ:transformer-sum}.
Next, we add the original word representation to $\beta^i_{j}$ as a residual connection, shown in Equation~\ref{equ:drop-add}: \begin{align} \alpha^i_{j,k} &= \frac{\exp\left( Q^i_j \cdot K^i_k \right)}{\sum_{n=1}^{T_x^i} \exp\left(Q^i_j \cdot K^i_n\right)}, \label{equ:attention}\\ \beta^i_{j} &= \textstyle \sum_{k=1}^{T_x^i} \alpha^i_{j,k} \cdot V^i_{k}, \label{equ:transformer-sum}\\ \hat{h}^i_j &= \text{Dropout} \left( e(x^i_j) + \beta^i_j \right), \label{equ:drop-add} \end{align} where $\alpha^i_{j,k}$ denotes the attention weight between the $j$-th word and the $k$-th word in the $i$-th utterance. To prevent vanishing or exploding gradients, a layer normalization operation~\cite{lei2016layer} is also applied on the output of the feed-forward layer, as shown in Equation~\ref{equ:ffn}: \begin{align} h^i_j &= \text{norm}\left(\max(0, \hat{h}^i_j \cdot W_1 + b_1) \cdot W_2 + b_2 \right), \label{equ:ffn} \end{align} where $W_1, W_2, b_1, b_2$ are all trainable parameters. $h^{i}_j$ denotes the hidden state of the $j$-th word in the Transformer for the $i$-th utterance. \subsection{Deep Interaction Network} \label{deep_int} \begin{figure*} \centering \includegraphics[scale=0.75]{figs/sticker-interaction.pdf} \caption{Framework of deep interaction network.} \label{fig:interaction} \end{figure*} Now that we have the representations of the sticker and each utterance, we can conduct deep matching between these components. On one hand, there are emotional words in the dialog context history that match the expression of the sticker, such as ``happy'' or ``sad''. On the other hand, specific parts of the sticker can also match these corresponding words, such as dancing limbs or streaming eyes. Hence, we employ a bi-directional attention mechanism between a sticker and each utterance, that is, from utterance to sticker and from sticker to utterance, to analyze the cross-dependency between the two components. The interaction is illustrated in Figure~\ref{fig:interaction}.
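The bi-directional attention just described can be sketched in NumPy as follows. This is an illustrative reference implementation under our own naming, not the released code: the relation score concatenates the two vectors with their element-wise product, and the max-pooled scores are used directly as (unnormalized) weights, matching the formulation of this subsection:

```python
import numpy as np

def bidirectional_attention(O, h, w):
    """Minimal sketch of the deep interaction between a sticker and an utterance.
    O: (p*p, d) flattened sticker feature map; h: (T, d) utterance hidden states;
    w: (3*d,) parameters of the trainable relation function sigma."""
    P, _ = O.shape
    T = h.shape[0]
    M = np.empty((P, T))  # relation matrix between sticker units and words
    for k in range(P):
        for j in range(T):
            x, y = O[k], h[j]
            M[k, j] = w @ np.concatenate([x, y, x * y])  # sigma(x, y)
    tau_u = M.max(axis=0)  # utterance-wise attention over words (max over units)
    tau_s = M.max(axis=1)  # sticker-wise attention over units (max over words)
    l = tau_u @ h          # sticker-aware utterance representation
    r = tau_s @ O          # utterance-aware sticker representation
    return l, r
```

Note that, unlike softmax attention, the raw max-pooled scores are used as weights here, so the attended representations are not convex combinations of the inputs.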
We take the $i$-th utterance as an example and omit the index $i$ for brevity. The two directed attentions are derived from a shared relation matrix, $M \in \mathbb{R}^{(p^2) \times T_{u}}$, calculated from the sticker representation $O \in \mathbb{R}^{p \times p \times d}$ and the utterance representation $h \in \mathbb{R}^{T_{u} \times d}$. The score $M_{kj} \in \mathbb{R}$ in the relation matrix $M$ indicates the relation between the $k$-th sticker representation unit $O_k$, $k \in [1,p^2]$, and the $j$-th word $h_j$, $j \in [1, T_{u}]$, and is computed as: \begin{equation} M_{kj} = \sigma(O_k, h_j), \quad \sigma(x, y) = w^\intercal [x \oplus y \oplus (x \otimes y)] , \label{eq:alpha} \end{equation} where $\sigma$ is a trainable scalar function that encodes the relation between two input vectors. $\oplus$ denotes a concatenation operation and $\otimes$ is the element-wise multiplication. Next, a max pooling operation is conducted on $M$, \emph{i.e.,} we let $\tau_j^u = \max(M_{:j}) \in \mathbb{R}$ represent the attention weight on the $j$-th utterance word by the sticker representation, corresponding to the ``utterance-wise attention''. This attention learns to assign high weights to the important words that are closely related to the sticker. We then obtain the weighted sum of hidden states as the ``\textbf{sticker-aware utterance representation}'' $l$: \begin{equation}\label{equ:sa-utterance} l = \textstyle \sum^{T_{u}}_j {\tau_j^u h_j} . \end{equation} Similarly, the sticker-wise attention learns which part of the sticker is most relevant to the utterance. Let $\tau_k^s=\max(M_{k:}) \in \mathbb{R}$ represent the attention weight on the $k$-th unit of the sticker representation. We use this to obtain the weighted sum of $O_{k}$, \emph{i.e.,} the ``\textbf{utterance-aware sticker representation}'' $r$: \begin{equation}\label{equ:ua-sticker} r = \textstyle \sum^{p^2}_k {\tau_k^s O_{k}} .
\end{equation} After obtaining the two outputs of the co-attention module, we combine the sticker and utterance representations to finally obtain the ranking result. We first integrate the utterance-aware sticker representation $r$ with the original sticker representation $O_{\text{flat}}$ using an \textbf{integrate function}, named $IF$: \begin{equation} Q_1 = IF(O_{\text{flat}}, r) , \quad IF(x, y) = FC(x \oplus y \oplus ( x \otimes y) \oplus (x+y)) . \label{triliner} \end{equation} We then combine the sticker-aware utterance representation $l$ with $Q_1$ and apply a fully-connected layer: \begin{align}\label{equ:q2} Q_2 &= FC(Q_1 \oplus l) . \end{align} \subsection{Fusion Network} \begin{figure} \centering \includegraphics[scale=0.5]{figs/sticker-fusion.pdf} \caption{Framework of fusion network.} \label{fig:fusion} \end{figure} So far, we have obtained the interaction result between each utterance and the candidate sticker. Here we again include the utterance index $i$, so that $Q_2$ becomes $Q_2^i$. Since the utterances in a multi-turn dialog context are in chronological order, we employ a \textbf{Fusion RNN} and a \textbf{Fusion Transformer} to model the short-term and long-term interactions between the utterance representations $\{Q_2^1, \dots, Q_2^{T_u}\}$. \subsubsection{Fusion RNN} The fusion RNN first reads the interaction results for each utterance $\{Q_2^1, \dots, Q_2^{T_u}\}$ and then transforms them into a sequence of hidden states. In this paper, we employ the gated recurrent unit (GRU)~\cite{Chung2014EmpiricalEO} as the cell of the fusion RNN, which is popular in sequential modeling~\cite{Gao2019How, Wu2017SequentialMN}: \begin{align} g_i &= \text{RNN}(Q_2^i, g_{i-1}) , \label{equ:fusion-rnn} \end{align} where $g_i$ is the hidden state of the fusion RNN. Finally, we obtain the sequence of hidden states $\{g_1, \dots, g_{T_u}\}$. One can replace the GRU with similar cells such as the LSTM~\cite{hochreiter1997long}; we leave this study for future work.
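As an illustration, the bi-directional attention of Section~\ref{deep_int} can be sketched as follows. This is our own sketch, not the authors' implementation; the double loop mirrors Equation~\ref{eq:alpha} for clarity rather than speed, and the variable names are ours.

```python
import numpy as np

def deep_interaction(O_units, h, w):
    """Compute the sticker-aware utterance representation l and the
    utterance-aware sticker representation r from a shared relation
    matrix M, via max pooling in both directions.
    O_units: (p*p, d) flattened sticker units; h: (T_u, d) word states;
    w: (3*d,) parameters of the relation function sigma."""
    P, T = O_units.shape[0], h.shape[0]
    M = np.empty((P, T))
    for k in range(P):
        for j in range(T):
            x, y = O_units[k], h[j]
            M[k, j] = w @ np.concatenate([x, y, x * y])  # relation score
    tau_u = M.max(axis=0)   # utterance-wise attention (max over units)
    tau_s = M.max(axis=1)   # sticker-wise attention (max over words)
    l = tau_u @ h           # weighted sum of word states
    r = tau_s @ O_units     # weighted sum of sticker units
    return l, r
```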
\subsubsection{Fusion Transformer} To model the long-term dependencies and capture the salient utterances in the context, we employ the self-attention mechanism introduced in Equations~\ref{equ:attention}--\ref{equ:ffn}. Concretely, given $\{Q_2^1, \dots, Q_2^{T_u}\}$, we first employ three linear projection layers with different parameters to project the input sequence into three different spaces: \begin{equation} \mathcal{Q}^i = FC( Q_2^i ), \quad \mathcal{K}^i = FC(Q_2^i ), \quad \mathcal{V}^i = FC( Q_2^i). \end{equation} Then we feed these three matrices into the self-attention algorithm of Equations~\ref{equ:attention}--\ref{equ:ffn}. Finally, we obtain the long-term interaction result $\{\hat{g}_1, \dots, \hat{g}_{T_u}\}$. \subsubsection{Prediction Layer} To combine the interaction representations generated by the fusion RNN and the fusion Transformer, we employ the SUMULTI function proposed by \cite{Wang2016ACM}, which has been proven effective in various tasks: \begin{align} \overline{g}_i = \text{ReLU}(\mathcal{W}^s \begin{bmatrix} (\hat{g}_i - g_i) \odot (\hat{g}_i - g_i) \\ \hat{g}_i \odot g_i \end{bmatrix} + \mathbf{b}^s). \end{align} The new interaction sequence $\{\overline{g}_1, \dots, \overline{g}_{T_u}\}$ is then boiled down to a matching vector $\Tilde{g}_{T_u}$ by another GRU-based RNN: \begin{align} \Tilde{g}_i = \text{RNN}(\Tilde{g}_{i-1}, \overline{g}_i) . \end{align} We use the final hidden state $\Tilde{g}_{T_u}$ as the representation of the overall interaction between the whole utterance context and the candidate sticker. Finally, we apply a fully-connected layer to produce the matching score $\hat{y}$ of the candidate sticker: \begin{equation} \hat{y} = FC(\Tilde{g}_{T_u}) , \end{equation} where $\hat{y} \in (0,1)$ is the matching score of the candidate sticker.
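A minimal sketch of the prediction layer follows. It is ours, for illustration only: the second GRU pass is replaced by simply taking the last combined state, and a sigmoid is assumed to keep the score in $(0,1)$; parameter names are ours.

```python
import numpy as np

def predict_score(g, g_hat, W_s, b_s, w_out, b_out):
    """Combine fusion-RNN states g and fusion-Transformer states g_hat
    (element-wise difference-squared and product), project through a
    ReLU layer, then score the last combined state.
    g, g_hat: (T_u, d); W_s: (2*d, d); w_out: (d,)."""
    feats = np.concatenate([(g_hat - g) ** 2, g_hat * g], axis=-1)
    g_bar = np.maximum(0.0, feats @ W_s + b_s)      # ReLU projection
    score = g_bar[-1] @ w_out + b_out               # last state -> scalar
    return 1.0 / (1.0 + np.exp(-score))             # sigmoid, in (0, 1)
```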
\subsection{Learning} Recall that we have a candidate sticker set $C=\{c_{1},\dots,c_{T_c}\}$, which contains multiple negative samples and one ground-truth sticker. We use the hinge loss as our objective function: \begin{align} \mathcal{L}_{r} &= \textstyle \sum^{N} \max \left( 0 , \hat{y}_{\text{negative}}- \hat{y}_{\text{positive}} +\text{margin} \right), \label{eq:loss-generator} \end{align} where $\hat{y}_{\text{negative}}$ and $\hat{y}_{\text{positive}}$ correspond to the predicted scores of a negative sample and the ground-truth sticker, respectively, and margin is the rescaling margin of the hinge loss. Gradient descent is employed to update all the parameters in our model so as to minimize this loss function. \section{Related Work} \label{sec:related} We outline related work on sticker recommendation, visual question answering, visual dialog, and multi-turn response selection. \textbf{Sticker recommendation.} Most previous works emphasize the use of emojis instead of stickers. For example, \cite{barbieri2017emojis, barbieri2018multimodal} use a multimodal approach to recommend emojis based on the text and images in an Instagram post. However, emojis are typically used in conjunction with text, while stickers are independent information carriers. Moreover, emojis are limited in variety, while there exists an abundance of different stickers. The work most similar to ours is \cite{laddha2019understanding}, which generates sticker recommendations by first predicting the next message the user is likely to send in the chat, and then substituting it with an appropriate sticker. However, more often than not the implication of a sticker cannot be fully conveyed by text and, in this paper, we focus on directly generating sticker recommendations from the dialog history.
\textbf{Visual question answering.} Sticker recommendation involves the representation of, and interaction between, images and text, which relates it to the Visual Question Answering (VQA) task~\cite{Goyal2018Think,Gao2019Multi,Chao2018Cross,Wang2017Explicit,Noh2019Transfer}. Specifically, VQA takes an image and a corresponding natural language question as input and outputs the answer. It is a classification problem in which candidate answers are restricted to the most common answers appearing in the dataset, and it requires deep analysis and understanding of images and questions, such as image recognition and object localization~\cite{malinowski2015ask,xiong2016dynamic,wu2016ask,goyal2017making}. Current models can be classified into three main categories: early fusion models, late fusion models, and external knowledge-based models. One state-of-the-art VQA model is \cite{li2019beyond}, which proposes an architecture, positional self-attention with co-attention, that does not require a recurrent neural network (RNN) for video question answering. \cite{guo2019image} proposes an image-question-answer synergistic network, where candidate answers are first coarsely scored according to their relevance to the image and question pair. Then, answers with a high probability of being correct are re-ranked by synergizing with the images and questions. The difference between sticker selection and VQA is that the sticker selection task focuses more on the multi-turn multimodal interaction between stickers and utterances. \textbf{Visual dialog.} Visual dialog extends the single-turn dialog task~\cite{Tao2018Get,Gao2019Product} in VQA to a multi-turn one, where later questions may be related to former question-answer pairs. To solve this task, \cite{lu2017best} transfers knowledge from a pre-trained discriminative network to a generative network with an RNN encoder, using a perceptual loss.
\cite{wu2018you} combines reinforcement learning and generative adversarial networks (GANs) to generate more human-like responses to questions, where the GAN helps overcome the relative paucity of training data and the tendency of the typical maximum-likelihood-estimation-based approach to generate overly terse answers. \cite{jain2018two} demonstrates a simple symmetric discriminative baseline that can be applied to both predicting an answer and predicting a question in visual dialog. Unlike the VQA and visual dialog tasks, in a sticker recommendation system the candidates are stickers rather than text. \textbf{Multi-turn response selection.} Multi-turn response selection~\cite{Tao2019One,Feng2019Learning,Yan2018Coupled,Yan2017Joint,Yan2016LearningTR} takes a message and the utterances in its previous turns as input and selects a response that is natural and relevant to the whole context. In our task, we also need to take the previous multi-turn dialog into consideration. Previous works include \cite{zhou2016multi}, which uses an RNN to represent the context and response and measure their relevance. More recently, \cite{Wu2017SequentialMN} matches a response with each utterance in the context on multiple levels of granularity, and the resulting vectors are then combined through an RNN; the final matching score is calculated from the hidden states of this RNN. \cite{zhou2018multi} extends this work by incorporating dependency information into the matching. \cite{tao2019multi} proposes a multi-representation fusion network where the representations can be fused into the matching at an early stage, an intermediate stage, or the last stage. Traditional multi-turn response selection deals with pure natural language processing, while in our task we also need to obtain a deep understanding of images. \section*{Acknowledgments} We would like to thank the anonymous reviewers for their constructive comments.
We would also like to thank Anna Hennig at the Inception Institute of Artificial Intelligence for her help with this paper. This work was supported by the National Key Research and Development Program of China (No. 2017YFC0804001) and the National Science Foundation of China (NSFC No. 61876196 and NSFC No. 61672058). Rui Yan is partially supported as a Young Fellow of the Beijing Institute of Artificial Intelligence (BAAI). \clearpage \bibliographystyle{ACM-Reference-Format}
\section*{\uppercase{Appendix}} \section{Circuit Privacy} Suppose $P_1$ and $P_2$ are both input and computation parties and want to perform a 2-PC on their inputs. Without interacting with a third party, they could use Yao's garbled circuit protocol or generate correlated randomness using oblivious transfer. Both approaches come with a higher computational complexity than utilizing an auxiliary party $P_3$ to perform a semi-honest three-party protocol. For instance, they might use $P_3$ as a trusted dealer for a linear secret sharing scheme. As a trusted dealer, $P_3$ does not learn the function that $P_1$ and $P_2$ want to compute. Our scheme can also achieve circuit privacy against $P_3$. Moreover, our scheme should be preferred over the trusted dealer model if $P_1$ and $P_2$ share a high-latency link but can find an auxiliary $P_3$ with a low-latency link to one of the two players. In the trusted dealer model, circuit privacy against $P_3$ can only be achieved if $P_1$ and $P_2$ perform the online phase. With our scheme, $P_2$ and $P_3$ perform the online phase, thus overcoming the high-latency issue between $P_1$ and $P_2$. Since $P_3$ can be any semi-honest party, finding a computation node with a low-latency link to $P_2$ could be as simple as involving a public cloud service provider with a node physically close to $P_2$. The main idea of our approach to achieving circuit privacy is that $P_3$ uses an identical protocol for addition as for multiplication, but, by adjusting the messages $P_1$ and $P_2$ send to $P_3$, they ensure that $P_3$ obtains a sharing of the sum instead of the product of two values. We show that $P_3$'s view when performing a multiplication is indistinguishable from its view when performing an addition. To achieve this, addition requires the same communication complexity as multiplication in our protocol if circuit privacy against $P_3$ is to be achieved.
Evaluating $NOT$ gates does not involve $P_3$ and is therefore already private. \subsection{Private $XOR$ (addition) gates} Again, we start by describing the protocol for boolean circuits. Let $a_1$,$a_2$ be a secret sharing of $a$, and let $b_1$,$b_2$ be a secret sharing of $b$. In order to compute a secret sharing of $c = a \oplus b$, $P_1$ and $P_2$ have two options, one favoring the preprocessing phase and one favoring the online phase. The messaging protocol of both approaches is identical to a multiplication: $P_1$ sends $o_1$ and $o_2$ to $P_3$ in the preprocessing phase, and $P_2$ and $P_3$ exchange $m_2$ and $m_3$ in parallel, but this time they receive a share of $c = a \oplus b$. \subsubsection{Injecting} We start with the simpler approach, which favors the preprocessing phase. We refer to this approach as \textit{Injecting}, since $P_2$ cancels out $P_3$'s terms of the multiplication protocol and inserts $a_2 \oplus b_2$ into the equation without $P_3$ realizing it. To achieve this, $P_2$ performs a different computation than in the multiplication protocol; as a consequence, the value of $m_2$ changes. $P_1$ also changes the values of its messages. $P_3$'s protocol remains identical to performing a multiplication. The main idea of this approach is that $P_2$ and $P_3$ first calculate an identical $d_2$ and $d_3$; clearly, $d_2 \oplus d_3 = 0$. $P_2$ then inserts $a_2 \oplus b_2$ into its message such that $P_3$ later obtains a valid sharing of $c = a \oplus b$. The injected values are masked by $r_x$ to prevent $P_3$ from detecting this modification. $P_1$ prepares the necessary messages and adjustments to its share to ensure correctness.
$P_1$ calculates in the preprocessing phase: \begin{equation} \begin{split} o_1 &= r_r \\ o_2 &= r_l \\ c_1 &= r_{a \oplus b} = r_a \oplus r_b \oplus r_x \oplus r_y \\ \end{split} \end{equation} $P_2$ calculates in the online phase: \begin{equation} \begin{split} d_2 &= (a_2 \oplus r_r) (b_2 \oplus r_l) \\ i_2 &= d_2 \oplus a_2 \oplus b_2 \\ m_2 &= i_2 \oplus r_x \\ \end{split} \end{equation} $P_3$ calculates in the online phase: \begin{equation} \begin{split} d_3 &= (a_2 \oplus o_1) (b_2 \oplus o_2) \\ m_3 &= d_3 \oplus r_y \\ \end{split} \end{equation} Correctness is ensured, since: \begin{equation} \begin{split} m_2 &= i_2 \oplus r_x = d_2 \oplus a_2 \oplus b_2 \oplus r_x \\ &= (a_2 \oplus r_r) (b_2 \oplus r_l) \oplus a_2 \oplus b_2 \oplus r_x \\ m_3 &= d_3 \oplus r_y = (a_2 \oplus o_1) (b_2 \oplus o_2) \oplus r_y \\ &= (a_2 \oplus r_r) (b_2 \oplus r_l) \oplus r_y \\ c_2 &= m_2 \oplus m_3 \\ &= a_2 \oplus b_2 \oplus r_x \oplus r_y \\ &= a \oplus r_a \oplus b \oplus r_b \oplus r_x \oplus r_y \\ &= a \oplus b \oplus r_{a \oplus b} \\ c_1 &= r_{a \oplus b} \end{split} \end{equation} Note that $c_1$ only relies on values that are available to $P_1$ in the preprocessing phase. Note also that all terms of $d_3$ are canceled out by $d_2$. By inserting $a_2 \oplus b_2$ into $i_2$, $P_2$ ensures that $P_3$ performs the correct computation. The messages sent to $P_3$ all contain a random mask not known to $P_3$ in advance: $o_1$ contains $r_r$, $o_2$ contains $r_l$, and $m_2$ contains $r_x$. Thus, the messages look as random to $P_3$ as the messages for a multiplication would. Injecting requires $P_1$ to perform 4 basic operations in the preprocessing phase, $P_3$ to perform 4 basic operations in the online phase, and $P_2$ to perform 6 basic operations in the online phase. \subsubsection{Neutralizing} We call the second approach \textit{Neutralizing}.
In contrast to Injecting, $P_2$ does not insert $a_2 \oplus b_2$ into $m_2$; instead, $P_1$ prepares its messages $o_1, o_2$ in such a way that some terms in $d_3$ cancel out and the remaining ones create a valid sharing of $a \oplus b$. Neutralizing should be the preferred approach due to its more efficient online phase. $P_1$ calculates in the preprocessing phase: \begin{equation} \begin{split} o_1 &= \neg r_r \\ o_2 &= \neg r_l \\ c_1 &= r_{a \oplus b} = r_a \oplus r_b \oplus r_x \oplus r_y \oplus \neg r_l \oplus r_r \\ \end{split} \end{equation} $P_2$ calculates in the online phase: \begin{equation} \begin{split} d_2 &= (a_2 \oplus r_r) (b_2 \oplus r_l) \\ m_2 &= d_2 \oplus r_x \\ \end{split} \end{equation} $P_3$ calculates in the online phase: \begin{equation} \begin{split} d_3 &= (a_2 \oplus o_1) (b_2 \oplus o_2) \\ m_3 &= d_3 \oplus r_y \\ \end{split} \end{equation} Correctness is ensured, since: \begin{equation} \begin{split} m_2 &= d_2 \oplus r_x = (a_2 \oplus r_r) (b_2 \oplus r_l) \oplus r_x \\ m_3 &= d_3 \oplus r_y = (a_2 \oplus o_1) (b_2 \oplus o_2) \oplus r_y \\ &= (a_2 \oplus \neg r_r) (b_2 \oplus \neg r_l) \oplus r_y \\ c_2 &= m_2 \oplus m_3 \\ &= (a_2 \oplus r_r) (b_2 \oplus r_l) \\ &\oplus (a_2 \oplus \neg r_r) (b_2 \oplus \neg r_l) \oplus r_x \oplus r_y \\ &= a_2 b_2 \oplus a_2 r_l \oplus b_2 r_r \oplus r_l r_r \\ &\oplus a_2 b_2 \oplus a_2 \neg r_l \oplus b_2 \neg r_r \oplus \neg r_l \neg r_r \oplus r_x \oplus r_y \\ &= a_2 \oplus b_2 \oplus \neg r_l \neg r_r \oplus r_l r_r \oplus r_x \oplus r_y \\ &= a \oplus r_a \oplus b \oplus r_b \oplus \neg r_l \oplus r_r \oplus r_x \oplus r_y \\ &= a \oplus b \oplus r_{a \oplus b} \\ c_1 &= r_{a \oplus b} \end{split} \end{equation} Note that $c_1$ only relies on values that are available to $P_1$ in the preprocessing phase. The messages sent to $P_3$ all contain a random mask not known to $P_3$ in advance: $o_1$ contains $\neg r_r$, $o_2$ contains $\neg r_l$, and $m_2$ contains $r_x$.
Thus, the messages look as random to $P_3$ as the messages for a multiplication would. Neutralizing requires $P_1$ to perform 7 basic operations in the preprocessing phase, and $P_2$ and $P_3$ to perform 4 basic operations each in the online phase. \subsection{Arithmetic Circuits} Enabling circuit privacy against $P_3$ for arithmetic circuits again requires only small modifications. As usual, $XOR$ is replaced by addition and $AND$ is replaced by multiplication. Input parties share a value $v$ with $v_1 = r, v_2 = v - r$. Reconstructing a value $v$ can be done by calculating $v_1 + v_2$. The adjustments (after replacing $XOR$ with addition) needed to correctly calculate an $XOR$ gate with the Injecting method are as follows: \begin{equation} \begin{split} i_2 &= d_2 - a_2 - b_2 \\ c_1 &= r_{a+b} = r_a + r_b + r_x - r_y\\ c_2 &= m_3 - m_2. \end{split} \end{equation} Correctness is ensured, since: \begin{equation} \begin{split} m_2 &= i_2 + r_x = d_2 - a_2 - b_2 + r_x \\ &= (a_2 + r_r) (b_2 + r_l) - a_2 - b_2 + r_x \\ m_3 &= d_3 + r_y = (a_2 + o_1) (b_2 + o_2) + r_y \\ &= (a_2 + r_r) (b_2 + r_l) + r_y \\ c_2 &= m_3 - m_2 \\ &= a_2 + b_2 - r_x + r_y \\ &= a - r_a + b - r_b - r_x + r_y \\ &= a + b - r_{a + b} \\ c_1 &= r_{a+b} \end{split} \end{equation} For the Neutralizing method, the arithmetic analog of the boolean negation $\neg r$ is $r + 1$. The adjustments (after replacing $XOR$ with addition) needed to correctly calculate an $XOR$ gate are then as follows: \begin{equation} \begin{split} o_1 &= r_r + 1 \\ o_2 &= r_l + 1 \\ c_1 &= r_{a+b} = r_a + r_b + r_x - r_y - r_l - r_r - 1\\ c_2 &= m_3 - m_2. \end{split} \end{equation} Correctness is ensured, since: \begin{equation} \begin{split} m_2 &= d_2 + r_x = (a_2 + r_r) (b_2 + r_l) + r_x \\ m_3 &= d_3 + r_y = (a_2 + o_1) (b_2 + o_2) + r_y \\ &= (a_2 + r_r + 1) (b_2 + r_l + 1) + r_y \\ c_2 &= m_3 - m_2 \\ &= (a_2 + r_r + 1) (b_2 + r_l + 1) \\ &- (a_2 + r_r) (b_2 + r_l) + r_y - r_x \\ &= a_2 + b_2 + r_l + r_r + 1 + r_y - r_x \\ &= a - r_a + b - r_b + r_l + r_r + 1 + r_y - r_x \\ &= a + b - r_{a + b} \\ c_1 &= r_{a + b} \end{split} \end{equation} \section{\uppercase{Conclusion}} In this work, we proposed a novel honest-majority semi-honest 3-PC protocol. Compared to other 3-PC protocols, our protocol offers the most efficient online phase in terms of both communication and computational complexity. Only $P_2$ and $P_3$ are involved in the online phase. Thus, $P_1$ can interact with the other parties over links of arbitrary latency without hurting our scheme's performance. Our protocol can achieve circuit privacy against $P_3$ if $P_1$ and $P_2$ also send bits for every addition gate to $P_3$. This can prevent an auxiliary computation node $P_3$ from learning the function being computed. \section{Evaluation} We are currently working on an implementation of our scheme and will add a comparison with other schemes' implementations to this section. \begin{table*}[ht] \centering \begin{tabular}{lllll} \hline Scheme & Com. Off & Com. On & SRNG Off. & Mult.
On \\ \hline Sharemind (4) & 0 & 3 & 15 & 9 \\ High-Throughput (5) & 0 & 3 & 6 & 6 \\ Ours (6) & 2 & 2 & 8 & 3 \\ \hline \end{tabular} \caption{Comparison of SRNG approaches} \label{tab:SRNG schemes compared} \end{table*} \subsection{Comparison with related SRNG-based schemes} For our comparison, we assume that all schemes shift the maximum possible amount of required computation and communication to the offline phase. We first compare our scheme to the other SRNG-based schemes regarding communication complexity. In the online phase, our protocol requires $P_2$ and $P_3$ to each send a single bit to the other per $AND$ gate. Schemes 4 and 5 require all three parties to send one bit to another party. Thus, our protocol requires 33\% less global communication in the online phase than the two existing schemes. In the offline phase, our protocol requires $P_1$ to send two bits per $AND$ gate to $P_3$. Schemes 4 and 5 allow all parties to generate the required randomness non-interactively in the preprocessing phase. Thus, our protocol requires 33\% more global communication in both phases combined. We now compare our protocol to the other SRNG-based protocols in terms of computational complexity. In the online phase of our protocol, $P_2$ performs 2 multiplications and 3 additions, and $P_3$ performs 1 multiplication and 4 additions per multiplication gate. $P_1$ does not need to perform any computation in the online phase. Scheme 4 requires each party to perform 4 additions and 3 multiplications per multiplication gate in the online phase. Scheme 5 requires each party to perform 3 additions and 2 multiplications per multiplication gate in the online phase. Thus, globally our protocol requires 3 multiplications and 7 additions, scheme 4 requires 9 multiplications and 12 additions, and scheme 5 requires 6 multiplications and 9 additions.
In the offline phase of our protocol, $P_1$ needs to sample 4 random bits per $AND$ gate, perform one addition per $XOR$ gate, and perform 7 basic operations (4 additions, 3 multiplications) per $AND$ gate on its shares. $P_2$ needs to sample 3 random bits per $AND$ gate. $P_3$ needs to sample one random bit per $AND$ gate. Scheme 4 requires each party to sample 5 random bits and perform one addition per $AND$ gate in the offline phase. Scheme 5 requires each party to sample 2 random values and perform one addition per $AND$ gate. Thus, globally our protocol requires sampling 8 random bits, scheme 4 requires sampling 15 random bits, and scheme 5 requires sampling 6 random bits. Table \ref{tab:SRNG schemes compared} sums up the comparison of the three schemes. The numbers refer to the global communication or computation required by all parties combined per multiplication gate. We conclude that scheme 5 is overall a better choice than scheme 4, independent of the scenario. Our scheme provides the fastest online phase in both communication and computational complexity. In the offline phase, scheme 5 is at least 25\% faster in computational complexity and does not require interaction. Since their benchmarks show, however, that preprocessing only accounts for 9.54\% of the total computation in their scheme, we conclude that the efficiency of the online phase should be prioritized in most settings. \label{section:benchmarks} \section{Input Privacy} In the future, we will provide formal proofs in this section. For now, we briefly argue, without formal proof, why our scheme provides input privacy. Since $P_1$ never receives any messages from an input party or a player, simulating its view is trivial. We will now show that each message that one of the other two parties receives is hidden by a random mask not held by that party. First, observe that the following masks are present: $r_a, r_b, r_l, r_r, r_x, r_y$.
Notice that all values are generated completely at random and do not share any relation to each other. Further, masks are never reused across multiple gates or messages. $P_1$ holds seeds to generate each of these masks. Of these masks, $P_2$ holds a seed to sample $r_l, r_r, r_x$. $P_2$ only receives $m_3$ from $P_3$, which is masked by $r_y$. Since $P_2$ does not hold $r_y$, it cannot infer anything from the received message. At the start of the protocol, the respective input parties distribute $a_2$ and $b_2$ to $P_2$. As $P_2$ does not hold $r_a = a_1, r_b = b_1$, it cannot infer $a$ or $b$. $P_3$ holds a seed to sample $r_y$. It receives the messages $o_1$ masked by $r_r$, $o_2$ masked by $r_l$, and $m_2$ masked by $r_x$. Since $P_3$ does not hold any of these random masks, it cannot infer anything from the received messages. At the start of the protocol, the respective input parties distribute $a_2$ and $b_2$ to $P_3$. As $P_3$ does not hold $r_a = a_1, r_b = b_1$, it cannot infer $a$ or $b$. It is clear that a malicious adversary cannot learn any inputs even if it adjusts its messages. However, by adjusting a message, any malicious player can break the correctness of the computation's result. In future work, we may investigate how to achieve correctness against malicious adversaries. \section{\uppercase{Introduction}} Secure Multiparty Computation enables parties to execute functions on obliviously shared inputs without revealing them \cite{lindell2020secure}. \section{\uppercase{Our Approach}} \label{section:approach} In this section, we describe our novel protocol for secure three-party computation. Our protocol works for arithmetic and boolean circuits consisting of any binary or unary gate. We first cover the evaluation of boolean circuits. When circuit privacy is not relevant, $XOR$ gates can be evaluated by each party locally by $XOR$-ing its shares.
In contrast to previously proposed secret-sharing protocols, each party in our approach performs a different computation for an $AND$ gate. Our scheme does not make use of correlated randomness but instead of what we refer to as \textit{shared randomness}. By shared randomness, we denote that two players share a seed to sample a bit with the same value from a PRG. A seed in our scheme is always shared between $P_1$ and exactly one input or computation node. \subsection{Secret Sharing} In order to share a bit $v$ in our scheme, input party $I_i$ samples a random bit $r_v$ from a SRNG using a seed $s_{I_i}$ received from $P_1$. It then distributes the shares in the following way: \begin{itemize} \item $P_1$'s share is $v_1 = r_v$. Since it shares $s_{I_i}$ with $I_i$, this requires no interaction. \item $P_2$'s share is $v_2 = v \oplus r_v$. \item $P_3$'s share is also $v_2 = v \oplus r_v$. \end{itemize} It is clear that no single party's share reveals anything about $v$. In addition, holding the distinct shares $v_1$ and $v_2$ suffices to obtain $v$. $P_2$ and $P_3$ hold equal shares but different shared randomness for each multiplication gate. \subsection{$XOR$ (addition) gates} Let $a_1$,$a_2$ be a secret sharing of $a$, and let $b_1$,$b_2$ be a secret sharing of $b$. In order to compute a secret sharing of $c = a \oplus b$, each party holding shares $a_i, b_i$ locally computes $c_i = a_i \oplus b_i$. Correctness is ensured, since: \begin{equation} \begin{split} c_1 &= a_1 \oplus b_1 = r_a \oplus r_b \\ c_2 &= a_2 \oplus b_2 = (a \oplus r_a) \oplus (b \oplus r_b) \\ &= (a \oplus b) \oplus (r_a \oplus r_b) \end{split} \end{equation} It is clear that $c_1 \oplus c_2 = c$ after this local computation. Notice that $c_1$ only depends on random values generated by $P_1$ and can thus be computed in the preprocessing phase. \subsection{$NOT$ gates} Let $v_1$,$v_2$ be a secret sharing of $v$. To obtain a sharing of $c = \neg v$, $P_1$ computes $c_1 = \neg v_1$ in the preprocessing phase.
The other parties do nothing ($c_2 = v_2$). Correctness is ensured, since $c = c_1 \oplus c_2 = \neg r_v \oplus v \oplus r_v = \neg v$. \subsection{$AND$ (multiplication) gates} Let $a_1$,$a_2$ be a secret sharing of $a$, and let $b_1$,$b_2$ be a secret sharing of $b$. In order to compute a secret sharing of $c = a \wedge b$ (from now on $ab$), each $P_i$ performs a different computation on its shares. Assume $P_1$ and $P_2$ both have access to $seed_2$ to sample three random bits $r_l, r_r, r_x$ from a SRNG in the preprocessing phase. Assume $P_1$ and $P_3$ both have access to $seed_3$ to sample a random bit $r_y$ from a SRNG in the preprocessing phase. Assume further that $P_3$ received $o_1 = r_a \oplus r_r$ and $o_2= r_b \oplus r_l$ from $P_1$ in the preprocessing phase. $P_1$ sends $seed_2, seed_3$ and $o_1, o_2$ for all gates in the circuit to the respective parties in a single communication round. $P_1$ calculates in the preprocessing phase: \begin{equation} \begin{split} o_1 &= r_a \oplus r_r \\ o_2 &= r_b \oplus r_l \\ c_1 &= r_{ab} = r_a r_l \oplus r_b r_r \oplus r_l r_r \oplus r_x \oplus r_y. \end{split} \end{equation} $P_2$ calculates in the online phase: \begin{equation} \begin{split} d_2 &= a_2 r_l \oplus b_2 r_r \\ m_2 &= d_2 \oplus r_x \\ \end{split} \end{equation} $P_3$ calculates in the online phase: \begin{equation} \begin{split} d_3 &= (a_2 \oplus o_1) (b_2 \oplus o_2) \\ m_3 &= d_3 \oplus r_y \\ \end{split} \end{equation} $P_2$ and $P_3$ exchange $m_2$ and $m_3$ in parallel. Afterwards, they compute $c_2 = m_2 \oplus m_3$.
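Before verifying correctness algebraically, the sharing and the $AND$-gate protocol above can be sanity-checked in a few lines of Python (a sketch; the separation between parties is indicated only by comments):

```python
import random

def and_gate(a, b):
    """Simulate one AND gate on single bits. P_1 acts only in the
    preprocessing phase; P_2 and P_3 exchange m2 and m3 online."""
    # Input sharing: v1 = r_v (held by P_1), v2 = v XOR r_v (P_2 and P_3).
    r_a, r_b = random.getrandbits(1), random.getrandbits(1)
    a2, b2 = a ^ r_a, b ^ r_b
    # Preprocessing: masks sampled from shared seeds, o1/o2 sent to P_3.
    r_l, r_r, r_x = (random.getrandbits(1) for _ in range(3))  # P_1-P_2
    r_y = random.getrandbits(1)                                # P_1-P_3
    o1, o2 = r_a ^ r_r, r_b ^ r_l
    c1 = (r_a & r_l) ^ (r_b & r_r) ^ (r_l & r_r) ^ r_x ^ r_y   # P_1's share
    # Online phase: P_2 and P_3 compute and exchange m2, m3.
    m2 = (a2 & r_l) ^ (b2 & r_r) ^ r_x                         # P_2
    m3 = ((a2 ^ o1) & (b2 ^ o2)) ^ r_y                         # P_3
    c2 = m2 ^ m3
    return c1, c2
```

For every input pair, reconstructing $c_1 \oplus c_2$ yields $ab$, regardless of the sampled masks.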
Correctness is ensured, since: \begin{equation} \begin{split} m_2 &= d_2 \oplus r_x = a_2 r_l \oplus b_2 r_r \oplus r_x \\ &= (a \oplus r_a) r_l \oplus (b \oplus r_b) r_r \oplus r_x \\ &= a r_l \oplus r_a r_l \oplus b r_r \oplus r_b r_r \oplus r_x \\ m_3 &= d_3 \oplus r_y = (a_2 \oplus o_1) (b_2 \oplus o_2) \oplus r_y \\ &= (a \oplus r_a \oplus r_a \oplus r_r) (b \oplus r_b \oplus r_b \oplus r_l) \oplus r_y \\ &= (a \oplus r_r) (b \oplus r_l) \oplus r_y \\ &= ab \oplus a r_l \oplus b r_r \oplus r_l r_r \oplus r_y \\ c_2 &= m_2 \oplus m_3 \\ &= a r_l \oplus r_a r_l \oplus b r_r \oplus r_b r_r \oplus r_x \\ &\oplus ab \oplus a r_l \oplus b r_r \oplus r_l r_r \oplus r_y \\ &= ab \oplus r_a r_l \oplus r_b r_r \oplus r_l r_r \oplus r_x \oplus r_y \\ &= ab \oplus r_{ab} \\ c_1 &= r_{ab} \end{split} \end{equation} Note that $c = c_1 \oplus c_2$. Note also that $c_1$ only relies on values that are available to $P_1$ in the preprocessing phase. \subsection{Arithmetic circuits} Enabling arithmetic circuits for arbitrary rings and fields requires only small modifications. $XOR$ is replaced by addition, and $AND$ is replaced by multiplication. Input parties share a value $v$ with $v_1 = r_v, v_2 = v - r_v$. Reconstructing a value $v$ can be done by calculating $v_1 + v_2$. The formula (after replacing $XOR$ with addition) to correctly calculate a multiplication has to be adjusted as follows: \begin{equation} \begin{split} c_1 &= r_{ab} = r_x - r_a r_l - r_b r_r - r_l r_r - r_y \\ c_2 &= m_3 - m_2.
\end{split} \end{equation} Correctness is ensured, since: \begin{equation} \begin{split} m_2 &= d_2 + r_x = a_2 r_l + b_2 r_r + r_x \\ &= (a - r_a) r_l + (b - r_b) r_r + r_x \\ &= a r_l - r_a r_l + b r_r - r_b r_r + r_x \\ m_3 &= d_3 + r_y = (a_2 + o_1) (b_2 + o_2) + r_y \\ &= (a - r_a + r_a + r_r) (b - r_b + r_b + r_l) + r_y \\ &= (a + r_r) (b + r_l) + r_y \\ &= ab + a r_l + b r_r + r_l r_r + r_y \\ c_2 &= m_3 - m_2 \\ &= ab + a r_l + b r_r + r_l r_r + r_y \\ &- (a r_l - r_a r_l + b r_r - r_b r_r + r_x) \\ &= ab + r_l r_r + r_y + r_a r_l + r_b r_r - r_x \\ &= ab - r_{ab} \\ c_1 &= r_{ab} \end{split} \end{equation} \section{\uppercase{Related Work}} \label{section:relatedWork} The following approaches enable semi-honest three-party computation (3-PC) with an honest majority: \begin{table*}[ht] \centering \begin{tabular}{lllll} \hline Scheme & Com. Off & Com. On & Bottleneck & Circuit Privacy against $P_3$ \\ \hline Replicated (1) & 0 & 12 & $P_1$-*, $P_2$-*, $P_3$-* & no \\ Additive (2) & 4 & 4 & $P_1$-$P_2$ & yes \\ BGW (3) & 0 & 6/12 & $P_1$-*, $P_2$-*, $P_3$-* & no \\ Sharemind (4) & 0 & 3 & $P_1$-*, $P_2$-*, $P_3$-* & no \\ High-Throughput (5) & 0 & 3 & $P_1$-*, $P_2$-*, $P_3$-* & no \\ Ours (6) & 2 & 2 & $P_2$-$P_3$ & yes (requires additional com.) \\ \hline \end{tabular} \caption{Comparison of approaches} \label{tab:Schemes compared} \end{table*} \begin{enumerate} \item \textit{Replicated Secret Sharing (RSS)}. $P_1$, $P_2$, and $P_3$ all receive two secret shares of each input. The efficiency of the online phase depends on the weakest link between any two parties. $P_3$ always learns the circuit to be computed. This approach does not require a preprocessing phase. Multiplication requires resharing, where each party sends 4 bits per multiplication. Thus, for each $AND$ gate, the online phase requires 12 bits of communication. \item \textit{Additive Secret Sharing with a trusted dealer}.
$P_1$ and $P_2$ both receive one secret share of each input. $P_3$ generates correlated randomness in a preprocessing phase and sends two bits per multiplication gate to each party in a single communication round. Multiplication requires opening triples where $P_1$ and $P_2$ each send 2 bits to each other. Thus, for each $AND$ gate, the online phase requires 4 bits of communication, and both phases combined require 8 bits of communication. The efficiency of the online phase depends only on the link between $P_1$ and $P_2$. $P_3$ does not learn the circuit to be computed. \item \textit{BGW (Shamir) \cite{bgw}}. $P_1$, $P_2$, $P_3$ each receive $k \in \{1,2\}$ secret shares of each input. This approach does not require a preprocessing phase. For $k=1$, multiplication requires each party to send 1 bit to each other party. Thus, for each $AND$ gate, the online phase requires 6 bits of communication. The communication complexity doubles to 12 bits for boolean circuits \cite{high}. The efficiency of the online phase depends on the weakest link between any two parties. $P_3$ always learns the circuit to be computed. \item \textit{Sharemind RSS \cite{sharemindSRNG}}. Sharemind proposes an RSS scheme where each pair of parties shares a unique key to sample identical bits from a pseudo-random number generator. They refer to this approach as SRNG. In contrast to (1), the parties can utilize their SRNGs to reduce total communication to 3 bits per $AND$ gate. Each party needs to sample 5 bits per $AND$ gate using different SRNGs. \item \textit{High-throughput RSS \cite{high}}. $P_1$, $P_2$, and $P_3$ all receive at least one secret share of each input. All parties engage in a non-interactive preprocessing phase to generate correlated randomness from SRNGs. Multiplication requires each party to send 1 bit to one other party. Thus, for each $AND$ gate, the online phase requires 3 bits of communication, and both phases combined also require 3 bits of communication.
This solution should be preferred over (4) as it requires only 2 SRNG calls per party for each $AND$ gate. The efficiency of the online phase depends on the weakest link between any two parties. $P_3$ always learns the circuit to be computed. \item \textit{Our approach}. $P_1$, $P_2$, and $P_3$ each hold one secret share of each input. $P_1$ engages in a preprocessing phase and sends two bits per multiplication gate to $P_3$ in a single communication round. Multiplication requires $P_2$ and $P_3$ to send 1 bit to each other. The parties sample the remaining required randomness with SRNGs. Thus, for each $AND$ gate, the online phase requires 2 bits of communication, and both phases combined require 4 bits of communication. $P_1$ samples 4 bits, $P_2$ samples 3 bits, and $P_3$ samples 1 bit per $AND$ gate from SRNGs. The efficiency of the online phase depends only on the link between $P_2$ and $P_3$. If circuit privacy against $P_3$ is desired, our scheme also requires (in total) 4 bits of communication and 8 SRNG calls per $XOR$ gate. \end{enumerate} Table \ref{tab:Schemes compared} compares the number of bits and SRNG calls required per $AND$ gate in the different phases of the schemes. Note that in the case of arithmetic circuits, the number of bits exchanged/sampled translates to the number of field elements or ring elements exchanged/sampled. From the comparison, it should be noted that our scheme requires the least amount of communication in the online phase. In both phases combined, only the two SRNG-based schemes (4) and (5) beat our solution by 1 bit of communication exchanged. However, in our scheme, only the latency between $P_2$ and $P_3$ can be the bottleneck of the protocol, while in (4) and (5) all links can become the latency bottleneck. We conclude that our scheme should be the preferred choice over previously proposed schemes if the link properties between computation nodes differ.
This is most likely the case unless all computation nodes are located in the same data center. Also, in cases where the online phase should be prioritized, our scheme is the most efficient option by requiring less communication and computation in the online phase than the other schemes. This comes at the cost of a more complex preprocessing phase than (5), both in terms of communication and computational complexity. As is common, our preprocessing phase can be performed $n$ times in advance to prepare $n$ circuits and is independent of any player inputs. All described protocols support the client-server model. The client-server model enables any number of input nodes $n$ to distribute their shares to a fixed number $k$ of computation nodes and reveal the output to any number $m$ of result nodes. Hence, any number of input and result parties can be supported by a 3-party SMC scheme. The computation nodes learn nothing about the inputs unless a party is both an input and a computation node. They also do not learn the result of the computation unless a party is both a result and a computation node. It should be mentioned that the approaches of schemes (1)-(4) are not restricted to 3 computation nodes. Scheme (5) and our protocol only work in the 3-PC setting. For 2-PC with an auxiliary $P_3$, it could be useful that our approach only involves $P_2$ and $P_3$ in the online phase. In case $P_1$ and $P_2$ share a high latency link with each other, $P_3$ can be chosen as a computation node physically close to $P_2$, e.g., the nearest cloud service provider. If $P_1$ and $P_2$ do not want auxiliary $P_3$ to learn the function they compute, we can also achieve circuit privacy against $P_3$ if the parties exchange bits for both addition and multiplication gates.
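The multiplication rule of the arithmetic variant admits the same kind of exhaustive sanity check over a small ring; a sketch (the modulus $p=3$ is an illustrative choice, and the additive offsets $o_1 = r_a + r_r$, $o_2 = r_b + r_l$ are the natural arithmetic analogs of the boolean ones):

```python
from itertools import product

P = 3  # illustrative small prime modulus

def mul_gate(a, b, r_a, r_b, r_l, r_r, r_x, r_y, p=P):
    """One multiplication gate of the arithmetic variant over Z_p.

    Shares: a_1 = r_a, a_2 = (a - r_a) mod p (likewise for b).
    Returns output shares (c_1, c_2) with c_1 + c_2 = a*b mod p.
    """
    a2, b2 = (a - r_a) % p, (b - r_b) % p
    # Preprocessing (P_1):
    o1, o2 = (r_a + r_r) % p, (r_b + r_l) % p
    c1 = (r_x - r_a * r_l - r_b * r_r - r_l * r_r - r_y) % p
    # Online phase: P_2 computes m_2, P_3 computes m_3, then they exchange.
    m2 = (a2 * r_l + b2 * r_r + r_x) % p
    m3 = ((a2 + o1) * (b2 + o2) + r_y) % p
    c2 = (m3 - m2) % p
    return c1, c2

# Exhaustive check over all inputs and randomness mod P.
for vals in product(range(P), repeat=8):
    a, b = vals[0], vals[1]
    c1, c2 = mul_gate(*vals)
    assert (c1 + c2) % P == (a * b) % P
```

As in the boolean case, the check confirms $c_1 + c_2 = ab$ for every choice of randomness.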
\section{Introduction} The subject of eigenvalues of random matrices is very rich. The eigenvalue spacings of a complex unitary matrix chosen from Haar measure relate to the spacings between the zeros of the Riemann zeta function (\cite{Odlyzko}, \cite{RS1}, \cite{RS2}). For further recent work on random complex unitary matrices, see \cite{DS}, \cite{Rains}, \cite{Wieand}. The references \cite{Dyson} and \cite{Mehta} contain much of interest concerning the eigenvalues of a random matrix chosen from Dyson's orthogonal, unitary, and symplectic circular ensembles, for instance connections with the statistics of nuclear energy levels. Little work seems to have been done on the eigenvalue spacings of matrices chosen from finite groups. One recent step is Chapter 5 of Wieand's thesis \cite{Wieand}. She studies the eigenvalue spacings of a random element of the symmetric group in its permutation representation on the set $\{1,\cdots,n\}$. This note gives two natural $q$-analogs of Wieand's work. For the first $q$-analog, let $\alpha \in GL(n,q)$ be a random unipotent matrix. Letting $V$ be the vector space on which $\alpha$ acts, we consider the eigenvalues of $\alpha$ in the permutation representation of $GL(n,q)$ on the lines of $V$. Let $X^{\theta}(\alpha)$ be the number of eigenvalues of $\alpha$ lying in a fixed arc $(1,e^{i2\pi \theta}], 0 < \theta < 1$ of the unit circle. Bounds are obtained for the mean of $X^{\theta}$ (we believe that as $n \rightarrow \infty$ with $q$ fixed, a normal limit theorem holds). A second $q$-analog which we analyze is the case when $\alpha$ is a randomly chosen unipotent upper triangular matrix over a finite field. A third interesting $q$-analog would be taking $\alpha$ uniformly chosen in $GL(n,q)$; however this seems intractable. The main method of this paper is to interpret identities of symmetric function theory in a probabilistic setting. Section \ref{ident} gives background and results in this direction. 
This interaction appears fruitful, and it is shown for instance that a probabilistic algorithm of Borodin describing the Jordan form of a random unipotent upper triangular matrix \cite{B} follows from the combinatorics of symmetric functions. This ties in with work of the author on analogous algorithms for the finite classical groups \cite{F}. The applications to the eigenvalue problems described above appear in Section \ref{applications}. We remark that symmetric function theory plays the central role in work of Diaconis and Shahshahani \cite{DS} on the eigenvalues of random complex classical matrices. \section{Symmetric functions} \label{ident} To begin we describe some notation, as on pages 2-5 of \cite{Mac}. Let $\lambda$ be a partition of a non-negative integer $n = \sum_i \lambda_i$ into non-negative integral parts $\lambda_1 \geq \lambda_2 \geq \cdots \geq 0$. The notation $|\lambda|=n$ will mean that $\lambda$ is a partition of $n$. Let $m_i(\lambda)$ be the number of parts of $\lambda$ of size $i$, and let $\lambda'$ be the partition dual to $\lambda$ in the sense that $\lambda_i' = m_i(\lambda) + m_{i+1}(\lambda) + \cdots$. Let $n(\lambda)$ be the quantity $\sum_{i \geq 1} (i-1) \lambda_i$. It is also useful to define the diagram associated to $\lambda$ as the set of points $(i,j) \in Z^2$ such that $1 \leq j \leq \lambda_i$. We use the convention that the row index $i$ increases as one goes downward and the column index $j$ increases as one goes across. So the diagram of the partition $(5441)$ is: \[ \begin{array}{c c c c c} . & . & . & . & . \\ . & . & . & . & \\ . & . & . & . & \\ . & & & & \end{array} \] Let $G_{\lambda}$ be an abelian $p$-group isomorphic to $\bigoplus_i Cyc(p^{\lambda_i})$. We write $G=\lambda$ if $G$ is an abelian $p$-group isomorphic to $G_{\lambda}$. Finally, let $(\frac{1}{p})_r = (1-\frac{1}{p}) \cdots (1-\frac{1}{p^r})$. The rest of the paper will treat the case $GL(n,p)$ with $p$ prime as opposed to $GL(n,q)$.
This reduction is made only to make the paper more accessible at places, allowing us to use the language of abelian $p$-groups rather than modules over power series rings. From Chapter 2 of Macdonald \cite{Mac} it is clear that everything works for prime powers. \subsection{Unipotent elements of $GL(n,p)$} \label{subunip} It is well known that the unipotent conjugacy classes of $GL(n,p)$ are parametrized by partitions $\lambda$ of $n$. A representative of the class $\lambda$ is given by \[ \left( \begin{array}{c c c c} M_{\lambda_1} & 0 & 0 &0 \\ 0 & M_{\lambda_2} & 0 & 0\\ 0 & 0 & M_{\lambda_3} & \cdots \\ 0 & 0 & 0 & \cdots \end{array} \right), \] where $M_i$ is the $i \times i$ matrix of the form \[ \left( \begin{array}{c c c c c c} 1 & 1 & 0 & \cdots & \cdots & 0\\ 0 & 1 & 1 & 0 & \cdots & 0\\ 0 & 0 & 1 & 1 & \cdots & 0\\ \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\ \cdots & \cdots & \cdots & 0 & 1 & 1\\ 0 & 0 & 0 & \cdots & 0 & 1 \end{array} \right). \] Lemmas \ref{classsize}-\ref{prod} recall elementary facts about unipotent elements in $GL(n,p)$. \begin{lemma} \label{classsize} (\cite{Mac} page 181,\cite{SS}) The number of unipotent elements in $GL(n,p)$ with conjugacy class type $\lambda$ is \[ \frac{|GL(n,p)|}{p^{\sum (\lambda_i')^2} \prod_i (\frac{1}{p})_{m_i(\lambda)}} .\] \end{lemma} Chapter 3 of \cite{Mac} defines Hall-Littlewood symmetric functions $P_{\lambda}(x_1,x_2,\cdots;t)$ which will be used extensively. There is an explicit formula for the Hall-Littlewood polynomials. Let the permutation $w$ act on the $x$-variables by sending $x_i$ to $x_{w(i)}$. There is also a coordinate-wise action of $w$ on $\lambda=(\lambda_1, \cdots,\lambda_n)$ and $S^{\lambda}_n$ is defined as the subgroup of $S_n$ stabilizing $\lambda$ in this action.
For a partition $\lambda=(\lambda_1,\cdots,\lambda_n)$ of length $\leq n$, two formulas for the Hall-Littlewood polynomial restricted to $n$ variables are: \begin{eqnarray*} P_{\lambda}(x_1,\cdots,x_n;t) & = & [\frac{1}{\prod_{i \geq 0} \prod_{r=1}^{m_i(\lambda)} \frac{1-t^r}{1-t}}] \sum_{w \in S_n} w(x_1^{\lambda_1} \cdots x_n^{\lambda_n} \prod_{i<j} \frac{x_i-tx_j} {x_i-x_j})\\ & = & \sum_{w \in S_n/S_n^{\lambda}} w(x_1^{\lambda_1} \cdots x_n^{\lambda_n} \prod_{\lambda_i > \lambda_j} \frac{x_i-tx_j}{x_i-x_j}) \end{eqnarray*} \begin{lemma} \label{formula} The probability that a unipotent element of $GL(n,p)$ has conjugacy class of type $\lambda$ is equal to either of \begin{enumerate} \item $\frac{p^n (\frac{1}{p})_n}{p^{\sum (\lambda_i')^2} \prod_i (\frac{1}{p})_{m_i(\lambda)}}$ \item $\frac{p^n (\frac{1}{p})_n P_{\lambda}(\frac{1}{p},\frac{1}{p^2},\frac{1}{p^3},\cdots;\frac{1}{p})}{p^{n(\lambda)}}$ \end{enumerate} \end{lemma} \begin{proof} The first statement follows from Lemma \ref{classsize} and Steinberg's theorem that $GL(n,p)$ has $p^{n(n-1)}$ unipotent elements. The second statement follows from the first and from elementary manipulations applied to Macdonald's principal specialization formula (page 337 of \cite{Mac}). Full details appear in \cite{F2}. \end{proof} One consequence of Lemma \ref{formula} is that in the $p \rightarrow \infty$ limit, all mass is placed on the partition $\lambda=(n)$. Thus the asymptotics in this paper will focus on the more interesting case of the fixed $p$, $n \rightarrow \infty$ limit. \begin{lemma} \label{prod} \[ \sum_{\lambda \vdash n} \frac{1}{p^{\sum (\lambda_i')^2} \prod_i (\frac{1}{p})_{m_i(\lambda)}} = \frac{1}{p^n(\frac{1}{p})_n}\] \end{lemma} \begin{proof} Immediate from Lemma \ref{formula}. \end{proof} Lemmas \ref{duality} and \ref{likeMac} relate to the theory of Hall polynomials and Hall-Littlewood symmetric functions \cite{Mac}. Lemma \ref{duality}, for instance, is the duality property of Hall polynomials. 
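Lemma \ref{prod} lends itself to a direct numerical check: enumerate partitions of small $n$, compute the class weights with exact rational arithmetic, and compare the sum with the closed form. A Python sketch (the helper names are ours):

```python
from fractions import Fraction

def partitions(n, max_part=None):
    """All partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def conjugate(lam):
    """Dual partition: lambda'_j = m_j + m_{j+1} + ... = #{i : lambda_i >= j}."""
    return tuple(sum(1 for part in lam if part >= j)
                 for j in range(1, lam[0] + 1)) if lam else ()

def poch(p, r):
    """(1/p)_r = (1 - 1/p) ... (1 - 1/p^r)."""
    out = Fraction(1)
    for i in range(1, r + 1):
        out *= 1 - Fraction(1, p ** i)
    return out

def class_weight(lam, p):
    """1 / (p^{sum (lambda'_i)^2} prod_i (1/p)_{m_i(lambda)})."""
    denom = Fraction(p) ** sum(c * c for c in conjugate(lam))
    mult = {}
    for part in lam:
        mult[part] = mult.get(part, 0) + 1
    for m in mult.values():
        denom *= poch(p, m)
    return 1 / denom

# Lemma `prod': the weights sum to 1 / (p^n (1/p)_n).
for p in (2, 3):
    for n in range(1, 7):
        lhs = sum(class_weight(lam, p) for lam in partitions(n))
        assert lhs == 1 / (Fraction(p) ** n * poch(p, n))
```

Exact `Fraction` arithmetic avoids any floating-point ambiguity in the comparison.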
\begin{lemma} \label{duality} (Page 181 of \cite{Mac}) For all partitions $\lambda,\mu,\nu$, \[|\{G_1 \subseteq G_{\lambda}:G_{\lambda}/G_1=\mu,G_1=\nu\}| = |\{G_1 \subseteq G_{\lambda}:G_{\lambda}/G_1=\nu,G_1=\mu\}| .\] \end{lemma} \begin{lemma} \label{likeMac} Let $G_{\lambda}$ denote an abelian $p$-group of type $\lambda$, and $G_1$ a subgroup. Then for all types $\mu$, \[ \sum_{\lambda \vdash n} \frac{\{|G_1 \subseteq G_{\lambda}:G_1=\mu|\}}{p^{\sum (\lambda_i')^2} \prod_i (\frac{1}{p})_{m_i(\lambda)}} = \frac{1}{p^{\sum (\mu_i')^2} \prod_i (\frac{1}{p})_{m_i(\mu)}} \frac{1}{p^{n-|\mu|} (\frac{1}{p})_{n-|\mu|}}.\] \end{lemma} \begin{proof} Macdonald (page 220 of \cite{Mac}), using Hall-Littlewood symmetric functions, establishes for any partitions $\mu,\nu$, the equation: \[ \sum_{\lambda: |\lambda|=|\mu|+|\nu|} \frac{|\{G_1 \subseteq G_{\lambda}:G_{\lambda}/G_1=\mu,G_1=\nu\}|}{p^{\sum (\lambda_i')^2} \prod_i (\frac{1}{p})_{m_i(\lambda)}} = \frac{1} {p^{\sum (\mu_i')^2} \prod_i (\frac{1}{p})_{m_i(\mu)}} \frac{1}{p^{\sum (\nu_i')^2} \prod_i (\frac{1}{p})_{m_i(\nu)}}. \] Fixing $\mu$, summing the left hand side over all $\nu$ of size $n-|\mu|$, and applying Lemma \ref{duality} yields \begin{eqnarray*} \sum_{\lambda} \sum_{\nu} \frac{|\{G_1 \subseteq G_{\lambda}:G_{\lambda}/G_1=\mu,G_1=\nu\}|}{p^{\sum (\lambda_i')^2} \prod_i (\frac{1}{p})_{m_i(\lambda)}} & = & \sum_{\lambda} \sum_{\nu} \frac{|\{G_1 \subseteq G_{\lambda}:G_{\lambda}/G_1=\nu,G_1=\mu\}|}{p^{\sum (\lambda_i')^2} \prod_i (\frac{1}{p})_{m_i(\lambda)}}\\ & = & \sum_{\lambda} \frac{|\{G_1 \subseteq G_{\lambda}:G_1=\mu\}|}{p^{\sum (\lambda_i')^2} \prod_i (\frac{1}{p})_{m_i(\lambda)}}. 
\end{eqnarray*} Fixing $\mu$, summing the right hand side over all $\nu$ of size $n-|\mu|$, and applying Lemma \ref{prod} gives that \[ \frac{1} {p^{\sum (\mu_i')^2} \prod_i (\frac{1}{p})_{m_i(\mu)}} \sum_{\nu \vdash n-|\mu|} \frac{1}{p^{\sum (\nu_i')^2} \prod_i (\frac{1}{p})_{m_i(\nu)}} = \frac{1}{p^{\sum (\mu_i')^2} \prod_i (\frac{1}{p})_{m_i(\mu)}} \frac{1}{p^{n-|\mu|} (\frac{1}{p})_{n-|\mu|}},\] proving the lemma. \end{proof} \subsection{Upper triangular matrices over a finite field} Let $T(n,p)$ denote the set of upper triangular elements of $GL(n,p)$ with $1$'s along the main diagonal. From the theory of wild quivers there is a provable sense in which the conjugacy classes of $T(n,p)$ cannot be classified. Nevertheless, as emerges from work of Kirillov \cite{K1,K2} and Borodin \cite{B}, it is interesting to study the Jordan form of elements of $T(n,p)$. As with the unipotent conjugacy classes of $GL(n,p)$, the possible Jordan forms correspond to partitions $\lambda$ of $n$. Theorem \ref{express} gives five expressions for the probability that an element of $T(n,p)$ has Jordan form of type $\lambda$. As is evident from the proof, most of the hard work at the heart of these formulas has been carried out by others. Nevertheless, at least one of these expressions is useful, and to the best of our knowledge none of these formulas has appeared elsewhere. $P_{\lambda}$ will denote the Hall-Littlewood polynomial of the previous subsection. By a standard Young tableau $S$ of size $|S|=n$ is meant an assignment of $\{1,\cdots,n\}$ to the dots of the partition such that each of $\{1,\cdots,n\}$ appears exactly once, and the entries increase along the rows and columns. For instance, \[ \begin{array}{c c c c c} 1 & 3 & 5 & 6 & \\ 2 & 4 & 7 & & \\ 8 & 9 & & & \end{array} \] is a standard Young tableau. 
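For tiny $n$ and $p$ the distribution studied in this subsection can be computed by brute force: enumerate $T(n,p)$, read the Jordan type of $\alpha$ off the ranks of powers of $\alpha - Id$, and compare with the exact law produced by Borodin's division algorithm, which is recovered as a corollary later in this section. A Python sketch (the organization, not the mathematics, is ours):

```python
from fractions import Fraction
from itertools import product

def rank_mod_p(M, p):
    """Rank of a matrix over the field of size p (Gaussian elimination)."""
    M = [row[:] for row in M]
    rank, rows = 0, len(M)
    ncols = len(M[0]) if rows else 0
    for c in range(ncols):
        piv = next((r for r in range(rank, rows) if M[r][c] % p), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][c], p - 2, p)
        M[rank] = [x * inv % p for x in M[rank]]
        for r in range(rows):
            if r != rank and M[r][c] % p:
                f = M[r][c]
                M[r] = [(x - f * y) % p for x, y in zip(M[r], M[rank])]
        rank += 1
    return rank

def jordan_type(N, p):
    """Jordan type of Id + N (N nilpotent): lambda'_k = rank(N^{k-1}) - rank(N^k)."""
    n = len(N)
    ranks, Nk = [n], [row[:] for row in N]
    while ranks[-1] > 0:
        ranks.append(rank_mod_p(Nk, p))
        Nk = [[sum(Nk[i][k] * N[k][j] for k in range(n)) % p for j in range(n)]
              for i in range(n)]
    cols = [ranks[k - 1] - ranks[k] for k in range(1, len(ranks))]
    return tuple(sum(1 for c in cols if c >= i) for i in range(1, max(cols) + 1))

def brute_force(n, p):
    """Exact Jordan-type distribution over T(n,p) by enumeration."""
    idx = [(i, j) for i in range(n) for j in range(i + 1, n)]
    dist = {}
    for entries in product(range(p), repeat=len(idx)):
        N = [[0] * n for _ in range(n)]
        for (i, j), e in zip(idx, entries):
            N[i][j] = e
        lam = jordan_type(N, p)
        dist[lam] = dist.get(lam, 0) + Fraction(1, p ** len(idx))
    return dist

def division_algorithm(n, p):
    """Exact law after n steps of the division algorithm (states are lambda')."""
    dist = {(): Fraction(1)}
    for _ in range(n):
        new = {}
        for cols, pr in dist.items():
            for j in range(len(cols) + 1):
                q = Fraction(1, p ** (cols[j] if j < len(cols) else 0))
                if j > 0:
                    q -= Fraction(1, p ** cols[j - 1])
                if q == 0:
                    continue
                grown = list(cols) + ([0] if j == len(cols) else [])
                grown[j] += 1
                new[tuple(grown)] = new.get(tuple(grown), 0) + pr * q
        dist = new
    # Convert column lengths lambda' back to lambda for comparison.
    out = {}
    for cols, pr in dist.items():
        lam = tuple(sum(1 for c in cols if c >= i) for i in range(1, max(cols) + 1))
        out[lam] = out.get(lam, 0) + pr
    return out

assert brute_force(3, 2) == division_algorithm(3, 2)
```

For $n=3$, $p=2$ both computations assign probability $5/8$ to the type $(2,1)$, $1/4$ to $(3)$, and $1/8$ to $(1,1,1)$.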
\begin{theorem} \label{express} The probability that a uniformly chosen element of $T(n,p)$ has Jordan form of type $\lambda$ is equal to each of the following: \begin{enumerate} \item $\frac{(p-1)^n P_{\lambda}(\frac{1}{p},\frac{1}{p^2},\frac{1}{p^3},\cdots;\frac{1}{p}) fix_{\lambda}(p)}{p^{n(\lambda)}}$, where $fix_{\lambda}(p)$ is the number of complete flags of an $n$-dimensional vector space over a field of size $p$ which are fixed by a unipotent element $u$ of type $\lambda$. \item $\frac{(p-1)^n P_{\lambda}(\frac{1}{p}, \frac{1}{p^2},\frac{1}{p^3},\cdots;\frac{1}{p}) Q_{(1)^n}^{\lambda}(p)}{p^{n(\lambda)}}$, where $Q_{(1)^n}^{\lambda}(p)$ is a Green's polynomial as defined on page 247 of \cite{Mac}. \item $(p-1)^n P_{\lambda}(\frac{1}{p} ,\frac{1}{p^2}, \frac{1}{p^3},\cdots;\frac{1}{p}) \sum_{\mu} dim(\chi^{\mu}) K_{\mu,\lambda}(\frac{1}{p})$, where $\mu$ is a partition of $n$, $dim(\chi^{\mu})$ is the dimension of the irreducible representation of $S_n$ of type $\mu$, and $K_{\mu,\lambda}$ is the Kostka-Foulkes polynomial. \item $\frac{(p-1)^n P_{\lambda}(\frac{1}{p},\frac{1}{p^2},\frac{1}{p^3},\cdots;\frac{1}{p}) chain_{\lambda}(p)}{p^{n(\lambda)}}$, where $chain_{\lambda}(p)$ is the number of maximal length chains of subgroups in an abelian $p$-group of type $\lambda$. \item $P_{\lambda}(1,\frac{1}{p},\frac{1}{p^2},\frac{1}{p^3},\cdots;\frac{1}{p}) \sum_{S} \prod_{j=1}^n (1-\frac{1}{p^{m^*(\Lambda_j)}})$, where the sum is over all standard Young tableaux of shape $\lambda$, and $m^*(\Lambda_j)$ is the number of parts in the subtableau formed by $\{1,\cdots,j\}$ which are equal to the column number of $j$. \end{enumerate} \end{theorem} \begin{proof} For the first assertion, observe that complete flags correspond to cosets $GL(n,p)/B(n,p)$ where $B(n,p)$ is the subgroup of all invertible upper triangular matrices. Note that $u \in GL(n,p)$ fixes the flag $gB(n,p)$ exactly when $g^{-1}ug \in B(n,p)$. The unipotent elements of $B(n,p)$ are precisely $T(n,p)$. 
Thus the number of complete flags fixed by $u$ is $\frac{1}{(p-1)^n |T(n,p)|} |\{g: g^{-1}ug \in T(n,p)\}|$. It follows that the sought probability is equal to $(p-1)^n fix_{\lambda}(p)$ multiplied by the probability that an element of $GL(n,p)$ is unipotent of type $\lambda$. The first assertion then follows from Lemma \ref{classsize}. The second part follows from the first part since by page 187 of \cite{Mac}, $Q_{(1)^n}^{\lambda}(p)$ is the number of complete flags of an $n$-dimensional vector space over a field of size $p$ which are fixed by a unipotent element of type $\lambda$. The third part follows from the second part and a formula for $Q_{(1)^n}^{\lambda}(p)$ on page 247 of \cite{Mac}. The fourth part follows from the third part and a formula for $\sum_{\mu} dim(\chi^{\mu}) K_{\mu,\lambda}(\frac{1}{p})$ in \cite{Kir}. For the fifth assertion, a result on page 197 of \cite{Mac} gives that the number of maximal length chains of subgroups in an abelian $p$-group of type $\lambda$ is equal to $\frac{p^{n(\lambda)}} {(1-\frac{1}{p})^n} \sum_{S} \prod_{j=1}^n (1-\frac{1}{p^{m^*(\Lambda_j)}})$. Observing that for a partition $\lambda$ of $n$, $P_{\lambda}(1,\frac{1}{p}, \frac{1}{p^2},\frac{1}{p^3},\cdots;\frac{1}{p})=p^n P_{\lambda}(\frac{1}{p}, \frac{1}{p^2},\frac{1}{p^3}, \cdots;\frac{1}{p})$, the result follows. \end{proof} As a corollary of Theorem \ref{express}, we recover the ``Division Algorithm'' of Borodin \cite{B}, which gives a probabilistic way of growing partitions a dot at a time such that the chance of getting $\lambda$ after $n$ steps is equal to the chance that a uniformly chosen element of $T(n,p)$ has Jordan type $\lambda$. We include our proof as it uses symmetric functions, which aren't mentioned in the literature on probability in the upper triangular matrices. 
We remark that a wonderful application of the division algorithm was found by Borodin \cite{B}, who proved asymptotic normality theorems for the lengths of the longest parts of the partition corresponding to a random element of $T(n,p)$, and even found the covariance matrix. We give another application in Section \ref{applictriang}. \begin{cor} (\cite{B}) Starting with the empty partition, at each step transition from a partition $\lambda$ to a partition $\Lambda$ by adding a dot to column $i$ chosen according to the rules \begin{itemize} \item $i=1$ with probability $\frac{1}{p^{\lambda_1'}}$ \item $i=j>1$ with probability $\frac{1}{p^{\lambda_j'}}-\frac{1}{p^{\lambda_{j-1}'}}$ \end{itemize} \end{cor} \begin{proof} For a standard Young tableau $S$, let $\Lambda_j(S)$ be the partition formed by the entries $\{1,\cdots,j\}$ of $S$. It suffices to prove that at step $j$ the division algorithm goes from $\Lambda_{j-1}$ to $\Lambda_j$ with probability $\frac{P_{\Lambda_j}(1,\frac{1}{p},\frac{1}{p^2}, \frac{1}{p^3},\cdots;\frac{1}{p})}{P_{\Lambda_{j-1}}(1,\frac{1}{p},\frac{1}{p^2}, \frac{1}{p^3} ,\cdots;\frac{1}{p})} (1-\frac{1}{p^{m^*(\Lambda_j)}})$, because then the probability that Borodin's algorithm gives $\lambda$ at step $n=|\lambda|$ is \[ \sum_{S : shape(S)=\lambda} \prod_{j=1}^n \frac{P_{\Lambda_j}(1,\frac{1}{p},\frac{1}{p^2}, \frac{1}{p^3},\cdots;\frac{1}{p})}{P_{\Lambda_{j-1}}(1,\frac{1}{p},\frac{1}{p^2}, \frac{1}{p^3} ,\cdots;\frac{1}{p})} (1-\frac{1}{p^{m^*(\Lambda_j)}}) = P_{\lambda}(1,\frac{1}{p},\frac{1}{p^2},\frac{1}{p^3},\cdots;\frac{1}{p}) \sum_{S} \prod_{j=1}^n (1-\frac{1}{p^{m^*(\Lambda_j)}}),\] as desired from part 5 of Theorem \ref{express}.
The fact that the division algorithm goes from $\Lambda_{j-1}$ to $\Lambda_j$ with probability $\frac{P_{\Lambda_j}(1,\frac{1}{p},\frac{1}{p^2}, \frac{1}{p^3},\cdots;\frac{1}{p})}{P_{\Lambda_{j-1}}(1,\frac{1}{p},\frac{1}{p^2}, \frac{1}{p^3} ,\cdots;\frac{1}{p})} (1-\frac{1}{p^{m^*(\Lambda_j)}})$ follows, after algebraic manipulations, from Macdonald's principal specialization formula (page 337 of \cite{Mac}) \[ P_{\lambda}(1,\frac{1}{p},\frac{1}{p^2}, \frac{1}{p^3},\cdots;\frac{1}{p}) = p^{n+n(\lambda)} \prod_i \frac{1}{p^{\lambda_i'^2} (\frac{1}{p})_{m_i(\lambda)}}.\] \end{proof} As a remark, we observe that the division algorithm ties in with an algorithm of the author for growing random partitions distributed according to the $n \rightarrow \infty$ law of the partition corresponding to the polynomial $z-1$ in the Jordan form of a random element of $GL(n,p)$. One version of that algorithm \cite{F} is \begin{description} \item [Step 0] Start with $\lambda$ the empty partition and $N=1$. Also start with a collection of coins indexed by the natural numbers such that coin $i$ has probability $\frac{1}{p^i}$ of heads and probability $1-\frac{1}{p^i}$ of tails. \item [Step 1] Flip coin $N$. \item [Step 2a] If coin $N$ comes up tails, leave $\lambda$ unchanged, set $N=N+1$ and go to Step 1. \item [Step 2b] If coin $N$ comes up heads, let $j$ be the number of the last column of $\lambda$ whose size was increased during a toss of coin $N$ (on the first toss of coin $N$ which comes up heads, set $j=0$). Pick an integer $S>j$ according to the rule that $S=j+1$ with probability $\frac{1}{p^{\lambda_{j+1}'}}$ and $S=s>j+1$ with probability $\frac{1}{p^{\lambda_s'}} - \frac{1}{p^{\lambda_{s-1}'}}$ otherwise. Then increase the size of column $S$ of $\lambda$ by 1 and go to Step 1.
\end{description} {\bf Remarks:} \begin{enumerate} \item Probabilistic algorithms similar to the one just described for $GL(n,p)$ were used profitably in \cite{F} to prove group theoretic results of Lusztig, Rudvalis/Shinoda, and Steinberg typically proved by techniques such as character theory or Moebius inversion. \item Observe that if one conditions on the (probability zero) event that each coin comes up heads exactly once, the transition rules are the same as for the division algorithm. \end{enumerate} \section{Applications} \label{applications} In this section we return to the problem which motivated this paper: studying the eigenvalues of unipotent matrices in the permutation representation on lines. Lemma \ref{translate} describes the cycle structure of the permutation action of a unipotent element $\alpha$ of $GL(n,p)$ on lines in $V$ in terms of the partition parametrizing the conjugacy class of $\alpha$. \begin{lemma} \label{translate} Let $\alpha$ be a unipotent element of $GL(n,p)$ with conjugacy class of type $\lambda$. Every orbit of the action of $\alpha$ on the set of lines in $V$ has size $p^{r}$ for some $r \geq 0$. The number of lines lying in orbits of size $p^{r}$ is \[\begin{array}{ll} \frac{p^{\lambda_1'+\cdots+\lambda_{p^r}'}- p^{\lambda_1'+\cdots+\lambda_{p^{r-1}}'}}{p-1}& \mbox{if $r \geq 1$}\\ \frac{p^{\lambda_1'}-1}{p-1} & \mbox{if $r=0$}. \end{array} \] \end{lemma} \begin{proof}As discussed at the beginning of Section \ref{ident}, the matrix $\alpha$ may be assumed to be \[ \left( \begin{array}{c c c c} M_{\lambda_1} & 0 & 0 &0 \\ 0 & M_{\lambda_2} & 0 & 0\\ 0 & 0 & M_{\lambda_3} & \cdots \\ 0 & 0 & 0 & \cdots \end{array} \right), \] where $M_i$ is the $i \times i$ matrix of the form \[ \left( \begin{array}{c c c c c c} 1 & 1 & 0 & \cdots & \cdots & 0\\ 0 & 1 & 1 & 0 & \cdots & 0\\ 0 & 0 & 1 & 1 & \cdots & 0\\ \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\ \cdots & \cdots & \cdots & 0 & 1 & 1\\ 0 & 0 & 0 & \cdots & 0 & 1 \end{array} \right).
\] Let $E_i=M_i-Id$, where $Id$ is the identity matrix. From this explicit form all eigenvalues of $\alpha^m, m \geq 0$ are $1$. Thus if $\alpha^m$ fixes a line, it fixes it pointwise. Hence the number of lines fixed by $\alpha^m$ is one less than the number of points it fixes, all divided by $p-1$, and we are reduced to studying the action of $\alpha$ on non-zero vectors. It is easily proved that $M_i$ has order $p^a$, where $p^{a-1} < i \leq p^a$. Hence if $\alpha^m(x_1,\cdots,x_n)=(x_1,\cdots,x_n)$ with some $x_i$ non-zero, and $m$ is the smallest positive integer with this property, then $m$ is a power of $p$. Thus all orbits of $\alpha$ on the lines of $V$ have size $p^r$ for $r \geq 0$. We next claim that $\alpha^{p^a}, a \geq 0$ fixes a vector \[ (x_1,\cdots,x_{\lambda_1}, x_{\lambda_1+1}, \cdots, x_{\lambda_1+\lambda_2},x_{\lambda_1+\lambda_2+1},\cdots,x_{\lambda_1+\lambda_2+\lambda_3},\cdots,x_n) \] if and only if \[ x_{\lambda_1+\cdots+\lambda_{i-1}+p^a+1} = x_{\lambda_1+\cdots+\lambda_{i-1}+p^a+2} = \cdots = x_{\lambda_1+\cdots+\lambda_i}=0 \ for \ i: \lambda_i > p^a.\] It suffices to prove this claim when $\lambda$ has one part $\lambda_1$ of size $n$. Observe that the $i$th coordinate of $\alpha^{p^a} (x_1,\cdots,x_n)$ is $\sum_{j=i}^n {p^a \choose j-i} x_j$. If $n \leq p^a$, then $\alpha^{p^a}$ fixes all $(x_1,\cdots,x_n)$ because ${p^a \choose r} = 0 \ mod \ p$ for $0<r<p^a$. If $n>p^a$, then $\alpha^{p^a}$ fixes all $(x_1,\cdots,x_n)$ such that $x_{p^a+1}=\cdots=x_n=0$, for the same reason. Finally, if $n>p^a$ and $x_j \neq 0$ for some $j>p^a$, let $j$ be the largest such subscript. Then the $j-p^a$th coordinate of $\alpha^{p^a} (x_1,\cdots,x_n)$ is equal to $x_{j-p^a} + x_j$ mod $p$, showing that $\alpha^{p^a}$ does not fix such $(x_1,\cdots,x_n)$.
This explicit description of fixed vectors (hence of fixed lines) of $\alpha^{p^a}$ yields the formula of the lemma for $r \geq 1$, because the number of lines in an orbit of size $p^r$ is the difference between the number of lines fixed by $\alpha^{p^r}$ and the number of lines fixed by $\alpha^{p^{r-1}}$. The formula for the number of lines in an orbit of size 1 follows because there are a total of $\frac{p^n-1}{p-1}$ lines.\end{proof} \subsection{Unipotent elements of $GL(n,p)$} \label{applicunip} Let $\alpha$ be a uniformly chosen unipotent element of $GL(n,p)$. Each element of $GL(n,p)$ permutes the lines in $V$ and thus defines a permutation matrix, which has complex eigenvalues. Each size $p^r$ orbit of $\alpha$ on lines gives $p^r$ eigenvalues, with one at each of the $p^r$th roots of unity. For $\theta \in (0,1)$, define a random variable $X^{\theta}$ by letting $X^{\theta}(\alpha)$ be the number of eigenvalues of $\alpha$ in the interval $(1,e^{2 \pi i \theta}]$ on the unit circle. For $r \geq 1$, define random variables $X_r$ on the unipotent elements of $GL(n,p)$ by \[ X_r(\alpha) = \frac{p^{\lambda_1'(\alpha)+\cdots+\lambda_r'(\alpha)}-p^{\lambda_1'(\alpha)+\cdots+\lambda_{r-1}'(\alpha)}}{p-1}. \] Clearly $X_r(\alpha)=0$ if $r>n$. Let $\lfloor y \rfloor$ denote the greatest integer less than $y$. Lemma \ref{translate} implies that \[ X^{\theta} = X_1 \lfloor \theta \rfloor + \sum_{r \geq 1} \frac{X_{p^{r-1}+1} + \cdots + X_{p^r}}{p^r} \lfloor p^r \theta \rfloor.\] This relationship (analogous to one used in \cite{Wieand}) will reduce the computation of the mean of $X^{\theta}$ to similar computations for the random variables $X_r$, which will now be carried out. Let $E_n$ denote the expected value with respect to the uniform distribution on the unipotent elements of $GL(n,p)$. 
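Before computing means, Lemma \ref{translate} can be confirmed by brute force for small parameters: build the block unipotent matrix of type $\lambda$, act on canonical representatives of the lines of $V$ over a field of size $p$, and tally how many lines fall in orbits of each size. A sketch (the canonical-representative convention is an implementation choice of ours):

```python
from itertools import product

def unipotent(blocks, p):
    """Block-diagonal unipotent matrix with Jordan blocks of the given sizes."""
    n = sum(blocks)
    M = [[0] * n for _ in range(n)]
    pos = 0
    for b in blocks:
        for i in range(b):
            M[pos + i][pos + i] = 1
            if i + 1 < b:
                M[pos + i][pos + i + 1] = 1
        pos += b
    return M

def line_orbit_sizes(blocks, p):
    """Map each line (canonical representative) to the size of its alpha-orbit."""
    n = sum(blocks)
    M = unipotent(blocks, p)

    def apply(v):
        return tuple(sum(M[i][j] * v[j] for j in range(n)) % p for i in range(n))

    def canon(v):
        # Scale so the first nonzero coordinate is 1 (p prime).
        inv = pow(next(x for x in v if x), p - 2, p)
        return tuple(x * inv % p for x in v)

    lines = {canon(v) for v in product(range(p), repeat=n) if any(v)}
    sizes = {}
    for ell in lines:
        orbit, w = 0, ell
        while True:
            orbit, w = orbit + 1, canon(apply(w))
            if w == ell:
                break
        sizes[ell] = orbit
    return sizes

# lambda = (2,1), p = 2, so lambda' = (2,1): the lemma predicts
# (2^2 - 1)/(2 - 1) = 3 fixed lines and (2^3 - 2^2)/(2 - 1) = 4 lines
# lying in orbits of size 2.
tally = {}
for s in line_orbit_sizes((2, 1), 2).values():
    tally[s] = tally.get(s, 0) + 1
assert tally == {1: 3, 2: 4}
```

For $\lambda=(2,1)$, $p=2$ this gives $3$ fixed lines and $4$ lines lying in orbits of size $2$ (i.e., two orbits), matching the lemma.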
\begin{theorem} \label{mean} For $1 \leq r \leq n$, \[ E_n(X_r) = \frac{p^r (1-\frac{1}{p^{n-r+1}})\cdots(1-\frac{1}{p^n})}{p-1} .\] \end{theorem} \begin{proof} By Lemma \ref{formula}, \[ E_n(X_r) = \sum_{\lambda \vdash n} \frac{p^n (\frac{1}{p})_n}{p^{\sum (\lambda_i')^2} \prod_i (\frac{1}{p})_{m_i(\lambda)}} \frac{p^{\lambda_1'(\alpha)+\cdots+\lambda_r'(\alpha)} -p^{\lambda_1'(\alpha)+\cdots+\lambda_{r-1}'(\alpha)}}{p-1}.\] Observe that $\frac{p^{\lambda_1'+\cdots+\lambda_r'}- p^{\lambda_1'+\cdots+\lambda_{r-1}'}}{p^r-p^{r-1}}$ is the number of subgroups of $G_{\lambda}$ of type $\nu=(r)$. This is because the total number of elements of order $p^r$ in $G_{\lambda}$ is $p^{\lambda_1'+\cdots+\lambda_r'}-p^{\lambda_1'+\cdots+\lambda_{r-1}'}$, and every subgroup of type $\nu=(r)$ has $p^r-p^{r-1}$ generators. Therefore, using Lemma \ref{likeMac}, \begin{eqnarray*} E_n(X_r) &=& p^n (\frac{1}{p})_n \frac{p^r-p^{r-1}}{p-1} \sum_{\lambda \vdash n} \frac{|\{G_1 \subseteq G_{\lambda} : G_1=(r)\}|}{p^{\sum (\lambda_i')^2} \prod_i (\frac{1}{p})_{m_i(\lambda)}}\\ & = & \left(p^n (\frac{1}{p})_n \frac{p^r-p^{r-1}}{p-1} \right) \left(\frac{1}{p^r(1-\frac{1}{p})} \frac{1}{p^{n-r} (\frac{1}{p})_{n-r}} \right)\\ & = & \frac{p^r (1-\frac{1}{p^{n-r+1}})\cdots(1-\frac{1}{p^n})}{p-1}. \end{eqnarray*} \end{proof} Corollary \ref{mean2} uses Theorem \ref{mean} to bound the mean of $X^{\theta}$. \begin{cor} \label{mean2} $E_n(X^{\theta}) = \theta \frac{p^n-1}{p-1} - O(\frac{p^n}{n})$. \end{cor} \begin{proof} Let $\{y\}=y-\lfloor y \rfloor$ denote the fractional part of a positive number $y$. 
Theorem \ref{mean} and the writing of $X^{\theta}$ in terms of the $X_r$'s imply that \begin{eqnarray*} E_n(X^{\theta}) & = & \theta E_n(\sum_{i \geq 1} X_i) - \sum_{r \geq 1} \{p^r \theta\} E_n (\frac{X_{p^{r-1}+1}+\cdots+X_{p^r}}{p^r})\\ & = & \theta \frac{p^n-1}{p-1} - \sum_{r \geq 1} \{p^r \theta\} E_n (\frac{X_{p^{r-1}+1}+\cdots+X_{p^r}}{p^r})\\ & \geq & \theta \frac{p^n-1}{p-1} - \sum_{r \geq 1} E_n (\frac{X_{p^{r-1}+1}+\cdots+X_{p^r}}{p^r})\\ & \geq & \theta \frac{p^n-1}{p-1} - (\sum_{r=1}^{\lfloor \log_p(n) \rfloor} \frac{p^{p^{r-1}+1} + \cdots + p^{p^r}}{(p-1)p^r}) - (\frac{p^{p^{\lfloor \log_p(n) \rfloor}+1} + \cdots + p^n}{(p-1)p^{\lfloor \log_p(n) \rfloor +1}}). \end{eqnarray*} We suppose for simplicity that $n \neq p^{p^r}+1$ for any $r$ (the case $n = p^{p^r}+1$ is similar). Continuing, \begin{eqnarray*} E_n(X^{\theta}) & \geq &\theta \frac{p^n-1}{p-1} - (\sum_{r=1}^{\lfloor \log_p(n) \rfloor} \frac{p^{p^r+1}}{(p-1)^2 \frac{n}{p^{\lfloor \log_p(n) \rfloor -r +1}}}) - \frac{p^{n+1}}{(p-1)^2 n} = \theta \frac{p^n-1}{p-1} - O(\frac{p^n}{n}). \end{eqnarray*} \end{proof} The approach here appears to extend to the computation of higher moments, but the computations are formidable. For example, one can show that if $1 \leq r \leq s \leq n$, then \[ E_n(X_rX_s) = \frac{p^{r+s-1}}{p-1} [\frac{p}{p-1}(1-\frac{1}{p^{n-s-r+1}}) \cdots (1-\frac{1}{p^n}) + \sum_{a=0}^{r-1} (1-\frac{1}{p^{n-a-s+1}}) \cdots (1-\frac{1}{p^n})] . \] \subsection{Upper triangular matrices over a finite field} \label{applictriang} Let $\alpha$ be a uniformly chosen element of $T(n,p)$. Recall that $\alpha$ is unipotent by the definition of $T(n,p)$. Each element of $T(n,p)$ permutes the lines in $V$ and thus defines a permutation matrix, which has complex eigenvalues. Each size $p^r$ orbit of $\alpha$ on lines gives $p^r$ eigenvalues, with one at each of the $p^r$th roots of unity.
For $\theta \in (0,1)$, define a random variable $X^{\theta}$ by letting $X^{\theta}(\alpha)$ be the number of eigenvalues of $\alpha$ in the interval $(1,e^{2 \pi i \theta}]$ on the unit circle. For $r \geq 1$, define random variables $X_r$ on the unipotent elements of $T(n,p)$ by \[ X_r(\alpha) = \frac{p^{\lambda_1'(\alpha)+\cdots+\lambda_r'(\alpha)}-p^{\lambda_1'(\alpha)+\cdots+\lambda_{r-1}'(\alpha)}}{p-1}. \] Let $\lfloor y \rfloor$ denote the greatest integer less than $y$. Lemma \ref{translate} implies that \[ X^{\theta} = X_1 \lfloor \theta \rfloor + \sum_{r \geq 1} \frac{X_{p^{r-1}+1} + \cdots + X_{p^r}}{p^r} \lfloor p^r \theta \rfloor.\] As for the case of $GL(n,p)$, this relationship reduces the computation of the mean of $X^{\theta}$ to similar computations for the random variables $X_r$. Let $E_n$ denote the expected value with respect to the uniform distribution on the unipotent elements of $T(n,p)$. Theorem \ref{mean3} shows that the expected value of $X_r$ is surprisingly simple. As one sees from the case $p=2$, the result is quite different from that of Theorem \ref{mean}. \begin{theorem} \label{mean3} For $1 \leq r \leq n$, \[ E_n(X_r) = (p-1)^{r-1} {n \choose r} .\] \end{theorem} \begin{proof} We proceed by joint induction on $n$ and $r$, the base case $n=r=1$ being clear. Let $Prob(S)$ denote the probability that Borodin's growth algorithm yields the standard Young tableau $S$ after $|S|$ steps. Let $col(n)$ be the column number of $n$ in $S$.
With all sums being over standard Young tableaux, observe that \begin{eqnarray*} E_n(p^{\lambda_1'+\cdots+\lambda_r'}) & = & \sum_{S: |S|=n} p^{\lambda_1'(S)+\cdots+\lambda_r'(S)} Prob(S)\\ & = & \sum_{S: |S|=n,col(n)=1} p^{\lambda_1'(S)+\cdots+\lambda_r'(S)} Prob(S)\\&& + \sum_{S:|S|=n,1<col(n)=j\leq r} p^{\lambda_1'(S)+\cdots+\lambda_r'(S)} Prob(S)\\ &&+ \sum_{S:|S|=n,col(n)>r} p^{\lambda_1'(S)+\cdots+\lambda_r'(S)} Prob(S)\\ & = & \sum_{S':|S'|=n-1} p^{\lambda_1'(S')+\cdots+\lambda_r'(S')+1} Prob(S') \frac{1}{p^{\lambda_1'(S')}}\\&& + \sum_{j=2}^r \sum_{S':|S'|=n-1} p^{\lambda_1'(S')+\cdots+\lambda_r'(S')+1} Prob(S') (\frac{1}{p^{\lambda_j'(S')}}-\frac{1}{p^{\lambda_{j-1}'(S')}})\\&&+ \sum_{j>r} \sum_{S':|S'|=n-1} p^{\lambda_1'(S')+\cdots+\lambda_r'(S')} Prob(S') (\frac{1}{p^{\lambda_j'(S')}}-\frac{1}{p^{\lambda_{j-1}'(S')}})\\ & = & p E_{n-1}(p^{\lambda_2'+\cdots+\lambda_r'}) + p E_{n-1}(p^{\lambda_1'+\cdots+\lambda_{r-1}'}-p^{\lambda_2'+\cdots+\lambda_r'})\\&& + E_{n-1}(p^{\lambda_1'+\cdots+\lambda_{r}'}-p^{\lambda_1'+\cdots+\lambda_{r-1}'})\\ & = & (p-1) E_{n-1}(p^{\lambda_1'+\cdots+\lambda_{r-1}'}) + E_{n-1}(p^{\lambda_1'+\cdots+\lambda_{r}'})\\ & = & (p-1)^{r-1} {n-1 \choose r-1} + (p-1)^{r-1} {n-1 \choose r}\\ & = & (p-1)^{r-1} {n \choose r}. \end{eqnarray*} \end{proof} Corollary \ref{mean4} follows by using Theorem \ref{mean3} and arguing along the lines of Corollary \ref{mean2}. \begin{cor} \label{mean4} $\theta \frac{p^n-1}{p-1}-p \sum_{r=1}^n \frac{(p-1)^{r-1} {n \choose r}}{r} \leq E_n(X^{\theta}) \leq \theta \frac{p^n-1}{p-1}$. \end{cor} \section{Acknowledgments} The author thanks Persi Diaconis for helpful references. This research was supported by an NSF Postdoctoral Fellowship.
\section{Introduction} Ring-polymer molecular dynamics (RPMD) rate-theory is a powerful method for calculating approximate thermal quantum reaction rates. The method has been applied to a variety of reactions, in both the gas and condensed phases,\cite{rpmd1,rpmd2,rpmd3,rpmd4,rpmd5,rpmd6,rpmd7,rpmd8,rpmd9,rpmd10,rpmd11,rpmd12} where it has been found to give a good approximation to the exact quantum result (where this is available) across a wide temperature range, from the classical to the deep-tunnelling regime. The success of RPMD rate-theory was initially a mystery, as the method was proposed on a heuristic basis,\cite {rpmd1,rpmd2} and it was not clear how a method that involves classical molecular dynamics in an extended ring-polymer space could reproduce deep-tunnelling rates. A subsequent analysis\cite{jeremy} at low temperatures showed that the $t\to 0_+$ limit of the RPMD flux-side time-correlation function, i.e.\ the RPMD transition-state-theory rate (RPMD-TST), contains a quantum-Boltzmann ensemble of Feynman paths which fluctuate around the instanton\cite{inst1} (periodic orbit); this holds even for highly asymmetric reaction barriers, for which the earlier centroid-density approximation\cite{gillan,vcm} (which is the special case of RPMD-TST with a centroid dividing surface) breaks down.\cite{jeremy,vothas} More recently, it was found that the RPMD-TST rate also emerges naturally as a quantum transition-state-theory (QTST), corresponding to the $t\to 0_+$ limit of a new type of quantum flux-side time-correlation function.\cite{tim1,tim2,tim3} By placing the flux and side dividing surface in the same place in path-integral space, this function gives a non-zero $t\to 0_+$ limit, and by making these surfaces invariant to imaginary-time translation, it gives the correct quantum Boltzmann statistics (thereby avoiding the problem of negative rates, encountered in the related classical Wigner expression\cite{wiggy}). 
It was further shown\cite{tim2} that this $t\to 0_+$ limit (i.e.\ RPMD-TST) gives the exact quantum rate in the absence of recrossing of the dividing surface (and of surfaces orthogonal to it in path-integral space), and gives an approximate upper bound to the exact quantum rate, which becomes an exact upper bound in the high-temperature limit (where classical TST is recovered as a special limiting case). A recent paper by Jang and Voth\cite{jv} appears to contradict these findings; these authors derive the $t\to 0_+$ limit of the same quantum time-correlation function as in ref.~\onlinecite{tim1}, but claim to find that it gives the centroid-density approximation. Here we show that there is no such contradiction, because the $t\to 0_+$ limit obtained by Jang and Voth is in fact RPMD-TST. The article is structured as follows: Sec.~II summarises the key equations of RPMD rate theory and gives the quantum time-correlation function of ref.~\onlinecite{tim1}; Sec.~III presents an analysis of the $t\to 0_+$ limit derived by Jang and Voth; Sec.~IV concludes the article. \section{Summary of previous results} Here we summarise previous results from RPMD rate-theory and give the quantum time-correlation function introduced in ref.~\onlinecite{tim1}, of which the RPMD-TST rate is the $t\to 0_+$ limit. 
We will confine the analysis to a one-dimensional system with classical Hamiltonian \begin{align}\label{1d} H={p^2\over 2m}+V(q) \end{align} It is straightforward to generalize these approaches to multi-dimensional systems.\cite{rpmd1,rpmd2,rpmd3,tim1,tim2,tim3} \subsection{RPMD-TST} For the system of \eqn{1d}, the RPMD Hamiltonian is \begin{align} H_N=\sum_{i=1}^N{p_i^2\over 2 m}+U_N({\bf q}) \end{align} in which ${\bf q}=\{q_1,\dots,q_N\}$ are a set of $N$ replicas of the system coordinate $q$, ${\bf p}=\{p_1,\dots,p_N\}$ are the conjugate momenta, and $U_N({\bf q})$ is the ring-polymer potential \begin{align} U_N({\bf q})=\sum_{i=1}^N{m(q_{i+1}-q_i)^2\over 2(\beta_N\hbar)^2}+V(q_i) \end{align} with $q_{i\pm N}=q_i$. Clearly $U_N({\bf q})$ is the exponent in the standard path-integral expression\cite{chandler,parrinello,ceperley} for the quantum Boltzmann operator $\exp({-\beta \hat H})$. The dynamics generated by $H_N$ is fictitious, but satisfies two important criteria: it is exact in the limit $t\to 0$, and it preserves the quantum Boltzmann distribution. These properties allow one to apply (standard) classical rate theory in the extended phase space $({\bf p},{\bf q})$, to compute a rate coefficient which gives a lower-bound estimate of the $t\to 0_+$ flux through some {\em dividing surface} $f({\bf q})$; this initial flux is the RPMD-TST approximation to the quantum rate coefficient: \begin{align}\label{rpmd1} k^\ddag_{\rm RP}(T)Q(T)=\lim_{N\to\infty}{1\over (2\pi\hbar)^N}\int\!d{\bf p}\int\!d{\bf q}\,e^{-\beta_NH_N}\delta[f({\bf q})]{\dot f}({\bf q})h[{\dot f}({\bf q})] \end{align} where $Q(T)$ is the reactant partition function, and \begin{align} {\dot f}({\bf q})=\sum_{i=1}^N{\partial f({\bf q})\over \partial q_i}{p_i\over m} \end{align} is the $t\to 0_+$ flux through $f({\bf q})$ (and $h(x)$ denotes the Heaviside step-function, and we use the notation $\int\!d{\bf q}=\int_{-\infty}^\infty dq_1\dots\int_{-\infty}^\infty dq_N$ throughout). 
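As an illustration of the Boltzmann weight $e^{-\beta_N U_N}$, the following sketch (not from the paper; a standard textbook check, assuming a harmonic potential $V(q)=\tfrac{1}{2}m\omega^2q^2$ with $\hbar=m=\omega=1$) evaluates the ring-polymer configuration integral mode-by-mode over the normal modes $4\sin^2(\pi k/N)$ of the cyclic spring matrix, and confirms that it converges to the exact quantum partition function $[2\sinh(\beta\hbar\omega/2)]^{-1}$ as $N$ grows:

```python
import math

def rp_partition_function(beta, omega=1.0, m=1.0, hbar=1.0, N=64):
    # Gaussian integral of exp(-beta_N * U_N) for V(q) = m*omega^2*q^2/2,
    # including the momentum prefactor; mode k of the cyclic spring matrix
    # contributes 4*sin^2(pi*k/N), giving one factor per normal mode.
    beta_N = beta / N
    z = 1.0
    for k in range(N):
        z /= math.sqrt(4.0 * math.sin(math.pi * k / N) ** 2
                       + (beta_N * hbar * omega) ** 2)
    return z

beta = 1.0
exact = 1.0 / (2.0 * math.sinh(beta / 2.0))  # exact quantum harmonic oscillator
assert abs(rp_partition_function(beta, N=64) - exact) < 1e-3
```

The finite-$N$ error decays like $N^{-2}$, consistent with the primitive discretization of the path integral.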
An important property of $f({\bf q})$ is that, in order to maximise the free energy, it must be invariant under cyclic permutation of the replicas, i.e. \begin{align}\label{perm} {\cal P}_{i\rightarrow i+k}\,f({\bf q})=f({\bf q}) \end{align} where ${\cal P}_{i\rightarrow i+k}$ indicates that each $q_i$ is moved to the position previously occupied by $q_{i+k}$. A common choice of $f({\bf q})$ satisfying this condition is $f({\bf q})={Q}_0-q^\ddag$, where ${Q}_0=\sum_i^Nq_i/N$ is the ring-polymer {\em centroid} (centre of mass). This important special case of RPMD-TST is often referred to as the centroid-density approximation.\cite{gillan,vcm} As mentioned in the Introduction, the centroid dividing-surface works well above the cross-over temperature to deep tunnelling, but more general forms of $f({\bf q})$ need to be used at lower temperatures if the barrier is asymmetric (in which case the optimal dividing surface involves ring-polymer stretch modes).\cite{jeremy} In the limit $N\to\infty$, \eqn{perm} is equivalent to making $f({\bf q})$ invariant to imaginary-time translation, provided $f({\bf q})$ is also a smooth function of imaginary time (see the Appendix), which we will assume in what follows. Equation (\ref{rpmd1}) can be obtained in more compact form by integrating out the momenta ${\bf p}$, to give \begin{align}\label{comp} k^\ddag_{\rm RP}(T)Q(T)=\lim_{N\to\infty}{1\over 2\pi\hbar\beta_N}\left(m\over 2\pi\beta_N\hbar^2\right)^{(N-1)/2}\int\!d{\bf q}\,e^{-\beta_NU_N}\sqrt{B_N({\bf q})}\delta[f({\bf q})] \end{align} where \begin{align}\label{bn} B_N({\bf q})=\sum_{i=1}^N\left[{\partial f({\bf q})\over \partial q_i}\right]^2 \end{align} normalises the flux. This expression will turn out to be useful in Sec.~III. \subsection{Quantum $t\to 0_+$ TST} In ref.~\onlinecite{tim1}, we found a quantum flux-side time-correlation function whose $t\to 0_+$ limit gives $k^\ddag_{\rm RP}(T)$. 
The standard forms of flux-side time-correlation function (obtained from linear response\cite{yama} or scattering theory\cite{mst}) give zero as $t\to 0_+$. This property was shown in ref.~\onlinecite{tim1} to result from putting the flux and side dividing surfaces in different locations in path-integral space, such that the flux and side are initially decorrelated and the correlation function is therefore zero. When the flux and side dividing surfaces are in the same place, and when they are taken to be a smooth permutationally invariant function $f({\bf q})$ as defined above, then the resulting quantum flux-side time-correlation function $C_{\rm fs}(T,t)$ satisfies\cite{tim1} \begin{align} k^\ddag_{\rm RP}(T)Q(T)=\lim_{t\to 0_+}C_{\rm fs}(T,t) \end{align} The simplest way to write out $C_{\rm fs}(T,t)$ is as the derivative of the corresponding side-side function \begin{align}\label{qfs} C_{\rm fs}(T,t)=-{d C_{\rm ss}(T,t)\over d t} \end{align} where \begin{align}\label{quant} C_{\rm ss}(T,t)=\lim_{N\to\infty}&\int\!d{\bf q}\int\!d{\bf \Delta}\int\!d{\bf z}\,h[f({\bf q})]h[f({\bf z})]\rho_N({\bf q},{\bf \Delta})\nonumber\\ &\times\bra{q_{i}-\Delta_{i}/2}e^{i{\hat H}t/\hbar}\ket{z_i}\bra{z_i}e^{-i{\hat H}t/\hbar}\ket{q_{i}+\Delta_{i}/2} \end{align} with \begin{align} \rho_N({\bf q},{\bf \Delta})=\prod_{i=1}^N\bra{q_{i-1}-\Delta_{i-1}/2}e^{-\beta_N{\hat H}}\ket{q_{i}+\Delta_{i}/2} \end{align} and \begin{align} {\hat H}={\hat p^2\over 2m}+V({\hat q}) \end{align} The $t\to\infty$ limit of $C_{\rm fs}(T,t)$ [of \eqn{qfs}] does not give the exact quantum rate, since one must also account for recrossing of dividing surfaces orthogonal to $f({\bf q})$ in path-integral space. However, it was shown in ref.~\onlinecite{tim2} that the flux through these orthogonal dividing surfaces is zero in the limit $t\to 0_+$, and thus that $k^\ddag_{\rm RP}(T)$ gives the instantaneous thermal quantum flux from reactants to products.
\section{The alternative derivation} In ref.~\onlinecite{jv}, Jang and Voth rederived the $t\to 0_+$ limit of $C_{\rm fs}(T,t)$ and found that it gives\cite{suppress} \begin{align} k^\ddag_{\rm JV}(T)Q(T)=\lim_{t\to 0_+}C_{\rm fs}(T,t) \end{align} where \begin{align}\label{jv} k^\ddag_{\rm JV}(T)Q(T)=\lim_{N\to\infty}\,&{1\over 2\pi\hbar\beta_N}\int\!d{\bf q}\int\!d \eta\,{\tilde \rho}_N({\bf q},\eta)\delta[f({\bf q})]\nonumber\\ &\times\sum_{k=1}^N{\partial f({\bf q})\over \partial q_k}{T_{k-1}+2T_k+T_{k+1}\over 4} \end{align} with \begin{align} {\tilde \rho}_N({\bf q},\eta)=\prod_{i=1}^N\bra{q_{i-1}-T_{i-1}\eta/2}e^{-\beta_N{\hat H}}\ket{q_{i}+T_{i}\eta/2} \end{align} and \begin{align}\label{tt} T_i({\bf q})={1\over \sqrt{B_N({\bf q})}}{\partial f({\bf q})\over \partial q_i} \end{align} After analysing \eqn{jv}, Jang and Voth concluded that it gave the centroid-density rate instead of RPMD-TST. We now show that \eqn{jv} does in fact give RPMD-TST, i.e.\ that \begin{align}\label{eq} k^\ddag_{\rm JV}(T)\equiv k^\ddag_{\rm RP}(T) \end{align} We first note that Jang and Voth's analysis\cite{jv} considered only the special case of a centroid dividing surface. We therefore need to generalize the analysis to a smooth, permutationally invariant, $f({\bf q})$, which satisfies \eqn{perm} (which includes the centroid dividing surface as a special case). Exploiting first the smoothness of $f({\bf q})$, we note that the last term in \eqn{jv} can be replaced by $T_k({\bf q})$ (see the Appendix), such that \eqn{jv} simplifies to \begin{align}\label{re} k^\ddag_{\rm JV}(T)Q(T)=\lim_{N\to\infty}\,&{1\over 2\pi\hbar\beta_N}\int\!d{\bf q}\int\!d \eta\,{\tilde \rho}_N({\bf q},\eta)\delta[f({\bf q})]\sqrt{B_N({\bf q})} \end{align} [where we have used \eqnn{bn}{tt} to replace the sums over $T_k$ by $\sqrt{B_N({\bf q})}$]. 
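The unit-vector normalization of the $T_i$ in \eqn{tt}, and the resulting replacement of the sum $\sum_k (\partial f({\bf q})/\partial q_k)\,T_k$ by $\sqrt{B_N({\bf q})}$, can be checked numerically for any smooth dividing surface. In the minimal sketch below, the cyclically invariant quadratic surface $f$ is an assumed example, not from the paper:

```python
import random

N = 8
q = [random.uniform(-1.0, 1.0) for _ in range(N)]

def f(q):
    # an assumed smooth, cyclically invariant dividing surface:
    # sum of squared bead differences minus a threshold
    return sum((q[(i + 1) % len(q)] - q[i]) ** 2 for i in range(len(q))) - 0.5

def grad_f(q, h=1e-6):
    # central finite differences (exact for a quadratic f, up to roundoff)
    g = []
    for i in range(len(q)):
        qp = list(q); qp[i] += h
        qm = list(q); qm[i] -= h
        g.append((f(qp) - f(qm)) / (2.0 * h))
    return g

g = grad_f(q)
B = sum(x * x for x in g)                      # B_N(q) of Eq. (bn)
T = [x / B ** 0.5 for x in g]                  # T_i(q) of Eq. (tt)
assert abs(sum(x * x for x in T) - 1.0) < 1e-9           # T is a unit vector
assert abs(sum(gi * Ti for gi, Ti in zip(g, T)) - B ** 0.5) < 1e-6
```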
A similar procedure allows us to evaluate ${\tilde \rho}_N({\bf q},\eta)$ explicitly in terms of matrix elements over the position coordinates, replacing instances of $T_{k+1}+T_k$ by $2T_k$, to give \begin{align} {\tilde \rho}_N({\bf q},\eta)=&\left(m\over 2\pi\beta_N\hbar^2\right)^{N/2}\exp\left[-\sum_{i=1}^Nm(q_{i+1}-q_i)^2/2\beta_N\hbar^2\right]\nonumber\\ &\times e^{-\beta\Phi_N({\bf q})}e^{-\eta^2m/2\beta_N\hbar^2}e^{-g_N({\bf q})\eta/\hbar}+{\cal O}(N^{-1}) \end{align} with \begin{align} \Phi_N({\bf q})=&\frac{1}{2N}\sum_{i=1}^N\left[V(q_i+T_i\eta/2)+V(q_i-T_i\eta/2)\right]\nonumber\\ =&\left[{1\over N}\sum_{i=1}^NV(q_i)\right]+{\cal O}(\eta^2N^{-1})\label{vv} \end{align} (where the last line uses the property $T_i\sim N^{-1/2}$) and \begin{align} g_N({\bf q})=&{m\over 2\beta_N\hbar}\sum_{i=1}^N(q_{i+1}-q_{i})T_i({\bf q}) \end{align} Equation (\ref{vv}) ensures that $V$ depends only on ${\bf q}$ in the limit $N\to\infty$, allowing us to integrate over $\eta$. Because of the cross term $g_N({\bf q})$, this integral will, in the case of a completely general (i.e.\ non-permutationally invariant) dividing surface, give a complicated expression involving repulsive `springs' between all pairs of `beads' $q_i$. However, for the smooth, permutationally invariant, $f({\bf q})$, it is sufficient to note that \begin{align} \left({\cal P}_{i\to i+1}-1\right)f({\bf q})=\sum_{i=1}^N(q_{i+1}-q_{i}){\partial f({\bf q})\over \partial q_i}+{\cal O}[(q_{i+1}-q_{i})^3] \end{align} and that $q_{i+1}-q_{i}\sim N^{-1/2}$, from which it follows that the cross-term $g_N({\bf q})$ disappears in the limit $N\to\infty$.
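For the special case of the centroid dividing surface, the vanishing of the cross-term is exact even at finite $N$: with $T_i=1/\sqrt{N}$ for all $i$, the cyclic sum in $g_N({\bf q})$ telescopes to zero. A one-line numerical check (a hypothetical sketch, not from the paper):

```python
import random

N = 16
q = [random.uniform(-1.0, 1.0) for _ in range(N)]  # arbitrary bead positions
T = [N ** -0.5] * N    # centroid surface: df/dq_i = 1/N, so T_i = 1/sqrt(N)
# cyclic sum of (q_{i+1} - q_i) * T_i telescopes exactly for constant T_i
g = sum((q[(i + 1) % N] - q[i]) * T[i] for i in range(N))
assert abs(g) < 1e-12
```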
The integral over $\eta$ in \eqn{re} then closes up the matrix elements in ${\tilde \rho}_N({\bf q},\eta)$ into an ensemble of intact ring-polymers, giving \begin{align} \int\!d \eta\,{\tilde \rho}_N({\bf q},\eta)=\left(m\over 2\pi\beta_N\hbar^2\right)^{(N-1)/2}e^{-\beta_NU_N({\bf q})}+{\cal O}(N^{-1}) \end{align} Substituting this expression back into \eqn{re} gives the righthand side of \eqn{comp}, thus proving \eqn{eq}.\cite{error} \section{Summary} We have shown that ref.~\onlinecite{jv} gives an alternative derivation of RPMD-TST. There is thus no contradiction between the $t\to 0_+$ limits derived in refs.~\onlinecite{tim1} and \onlinecite{jv}, and we can be clear that RPMD-TST is a quantum transition-state theory (QTST), obtained as the $t\to 0_+$ limit of a quantum time-correlation function describing the flux through a dividing surface that is invariant to imaginary-time translation. As discussed in ref.~\onlinecite{tim2}, this does not imply that RPMD-TST is a good approximation to all quantum reaction rates: RPMD-TST works if the reaction is direct, and if the temperature is not too far below the instanton cross-over temperature. There are of course many reactions for which these conditions apply, and the range of applications of RPMD rate-theory is constantly growing.\cite{rpmd1,rpmd2,rpmd3,rpmd4,rpmd5,rpmd6,rpmd7,rpmd8,rpmd9,rpmd10,rpmd11,rpmd12} \begin{acknowledgments} We acknowledge funding from the UK Science and Engineering Research Council. TJHH also acknowledges a Research Fellowship from Jesus College, Cambridge. \end{acknowledgments}
\section{Introduction} We consider the $r$-entropy ({R\'enyi entropy} of exponent $r$, where $r>0$ and $r\ne 1$) of an $n$-dimensional zero-mean random vector $X\in\mathbb{R}^n$ having density $f\in L^r(\mathbb{R}^n)$:\\[-1.5ex] \begin{equation} h_r(X)=\frac{1}{1-r}\log \int_{\mathbb{R}^n} f^r(x) \d x = -r' \log\|f\|_r \label{hpdef2} \end{equation} where $\|f\|_r$ denotes the $L^r$ norm of $f$, and $r'=\frac{r}{r-1}$ is the \emph{conjugate exponent} of $r$, such that $\frac{1}{r}+\frac{1}{r'}=1$. Notice that either $r>1$ and $r'>1$, or $0<r<1$ and $r'<0$. The limit as $r\to 1$ is the classical $h_1(X)=h(X)= -\int_{\mathbb{R}^n} f(x) \log f(x) \d x$. Letting $N(X)=\exp\bigl(2h(X)/n\bigr)$ be the corresponding entropy power~\cite{Shannon48}, the famous entropy power inequality (EPI)~\cite{Shannon48,Rioul11} reads $N\Bigl( \sum_{i=1}^m X_i\Bigr) \geq \sum_{i=1}^m N(X_i)$ for any independent random vectors $X_1,X_2,\ldots,X_m\in\mathbb{R}^n$. The link with the Rényi entropy $h_r(X)$ was first made in~\cite{DemboCoverThomas91} in connection with a strengthened Young's convolutional inequality, where the EPI is obtained by letting exponents tend to $1$~\cite[Thm~17.8.3]{CoverThomas06}. Recently, there has been increasing interest in {R\'enyi} entropy-power inequalities~\cite{MadimanMelbourneXu17}. The Rényi entropy-power $N_r(X)$ is defined~\cite{BobkovChistyakov15} as the average power of a white Gaussian vector having the same Rényi entropy as~$X$. If $X^*\sim \mathcal{N}(0,\sigma^2\mathbf{I})$ is white Gaussian, an easy calculation yields \begin{equation}\label{renyiGauss} h_r(X^*) = \tfrac{n}{2} \log (2\pi\sigma^2) + \tfrac{n}{2} r' \tfrac{\log r}{r}. \end{equation} Since equating $h_r(X^*)=h_r(X)$ gives $\sigma^2 = \frac{e^{2h_r(X)/n}}{2\pi r^{r'/r}}$, and since constant factors are immaterial in the inequalities considered below, we define $N_r(X)= e^{2h_r(X)/n}$ as the $r$-entropy power.
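Equation~\eqref{renyiGauss} for $n=1$ is easy to confirm numerically. The sketch below (hypothetical helper names, not from the paper) integrates $f^r$ for a one-dimensional Gaussian density by Simpson's rule and compares with the closed form:

```python
import math

def renyi_entropy_gauss_numeric(r, sigma=1.3, half_width=20.0, steps=20001):
    # Simpson integration of f^r for the density f of N(0, sigma^2),
    # then h_r = log(integral)/(1 - r)
    a = -half_width * sigma
    h = 2.0 * half_width * sigma / (steps - 1)
    total = 0.0
    for i in range(steps):
        x = a + i * h
        f = math.exp(-x * x / (2.0 * sigma ** 2)) \
            / math.sqrt(2.0 * math.pi * sigma ** 2)
        w = 1.0 if i in (0, steps - 1) else (4.0 if i % 2 else 2.0)
        total += w * f ** r
    return math.log(total * h / 3.0) / (1.0 - r)

def renyi_entropy_gauss_formula(r, sigma=1.3):
    rp = r / (r - 1.0)            # conjugate exponent r'
    return 0.5 * math.log(2.0 * math.pi * sigma ** 2) \
        + 0.5 * rp * math.log(r) / r

for r in (0.5, 2.0, 3.0):
    assert abs(renyi_entropy_gauss_numeric(r)
               - renyi_entropy_gauss_formula(r)) < 1e-6
```

The check covers both regimes $r>1$ (where $r'>1$) and $0<r<1$ (where $r'<0$).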
Bobkov and Chistyakov~\cite{BobkovChistyakov15} extended the classical EPI to the $r$-entropy by incorporating a $r$-dependent constant $c>0$: \begin{equation}\label{repic} N_r\Bigl( \sum\nolimits_{i=1}^m X_i\Bigr) \geq c \sum\nolimits_{i=1}^m N_r(X_i). \end{equation} Ram and Sason~\cite{RamSason16} improved (increased) the value of $c$ by making it depend also on the number $m$ of independent vectors $X_1,X_2,\ldots,X_m$. Bobkov and Marsiglietti~\cite{BobkovMarsiglietti17} proved another modification of the EPI for the Rényi entropy: \begin{equation}\label{repialpha} {N_r^{\vphantom{2}}}^{\!\alpha}\Bigl( \sum\nolimits_{i=1}^m X_i\Bigr) \geq \sum\nolimits_{i=1}^m {N_r^{\vphantom{2}}}^{\!\alpha}(X_i) \end{equation} with~a power exponent parameter $\alpha>0$. Due to the non-increasing property of the $\alpha$-norm, if~\eqref{repialpha} holds for $\alpha$ it also holds for any $\alpha'>\alpha$. The value of $\alpha$ was further improved (decreased) by Li~\cite{Li18}. All the above EPIs were found for Rényi entropies of orders $r>$1. Recently, the $\alpha$-modification of the Rényi EPI~\eqref{repialpha} was extended to orders~$<$1 for two independent variables having log-concave densities by Marsiglietti and Melbourne~\cite{MarsigliettiMelbourne18}. The starting point of all the above works was Young's strengthened convolutional~inequality. In this paper, we build on the results of~\cite{Rioul18} to provide simple proofs for Rényi EPIs of the general form \begin{equation}\label{repig} {N_r^{\vphantom{2}}}^{\!\alpha}\Bigl( \sum\nolimits_{i=1}^m X_i\Bigr) \geq c\sum\nolimits_{i=1}^m {N_r^{\vphantom{2}}}^{\!\alpha}(X_i) \end{equation} with constant $c>0$ and exponent $\alpha>0$. The present framework uses only basic properties of Rényi entropies and is based on a transportation argument from normal densities and a change of variable by rotation, which was previously used to give a simple proof of Shannon's original EPI~\cite{Rioul17}. 
\section{Linearization} \noindent The first step toward proving~\eqref{repig} is the following linearization lemma which generalizes~\cite[Lemma~2.1]{Li18}. \begin{Lemma}\label{charact} For independent $X_1,X_2,\ldots,X_m$, the Rényi EPI in the general form~\eqref{repig} is equivalent to the following inequality \begin{equation}\label{repi1bis} h_r\bigl( \textstyle\sum\limits_{i=1}^m\sqrt{\lambda_i}X_i\bigr) - \sum\limits_{i=1}^m \lambda_i h_{r}(X_i) \geq \frac{n}{2} \bigl( \frac{\log c}{\alpha} + \bigl(\frac{1}{\alpha}-1\bigr) H(\lambda) \bigr) \end{equation} for any distribution $\lambda=(\lambda_1,\ldots,\lambda_m)$ of entropy $H(\lambda)$. \end{Lemma} \begin{proof} Note the scaling property $h_r(aX)=h_r(X) + n\log|a|$ for any $a\ne 0$, established by a change of variable. It follows that $N_r(aX)=a^2 N_r(X)$. Now first suppose~\eqref{repig} holds. Then \begin{align} h_r\bigl( &\textstyle\sum_{i=1}^m\sqrt{\lambda_i}X_i\bigr) =\tfrac{n}{2\alpha}\log {N_r^{\vphantom{2}}}^\alpha\bigl( \textstyle\sum_{i=1}^m\sqrt{\lambda_i}X_i\bigr)\\ &\geq \tfrac{n}{2\alpha}\log\textstyle\sum_{i=1}^m {N_r^{\vphantom{2}}}^\alpha(\sqrt{\lambda_i}X_i) + \tfrac{n}{2\alpha}\log c\notag\\ &=\tfrac{n}{2\alpha}\log\textstyle\sum_{i=1}^m \lambda_i^\alpha {N_r^{\vphantom{2}}}^\alpha(X_i) + \tfrac{n}{2\alpha}\log c \label{a}\\ &\geq \tfrac{n}{2\alpha}\textstyle\sum_{i=1}^m \lambda_i \log\bigl( \lambda_i^{\alpha-1} {N_r^{\vphantom{2}}}^\alpha(X_i)\bigr) + \tfrac{n}{2\alpha}\log c \label{b}\\ &= \textstyle\sum_{i=1}^m \lambda_i h_{r}(X_i) +\tfrac{n(\alpha-1)}{2\alpha}\textstyle\sum_{i=1}^m \lambda_i\log {\lambda_i} + \tfrac{n}{2\alpha}\log c \notag \end{align} which proves~\eqref{repi1bis}. The scaling property is used in~\eqref{a} and the concavity of the logarithm is used in~\eqref{b}. Conversely, suppose that~\eqref{repi1bis} is satisfied for all $\lambda_i>0$ such that $\sum_{i=1}^m \lambda_i=1$. 
Set~$\lambda_i~=~{N_r^{\vphantom{2}}}^\alpha(X_i)/ \sum_{i=1}^m {N_r^{\vphantom{2}}}^\alpha(X_i)$. Then \begin{align*} {N_r^{\vphantom{2}}}^\alpha\bigl( &\textstyle\sum_{i=1}^m X_i\bigr) = \exp \tfrac{2\alpha}{n} h_r\bigl( \textstyle\sum_{i=1}^m\sqrt{\lambda_i}\frac{X_i}{\sqrt{\lambda_i}}\bigr)\\ &\geq \exp \tfrac{2\alpha}{n} \textstyle\sum_{i=1}^m \lambda_i h_{r}\Bigl(\frac{X_i}{\sqrt{\lambda_i}}\Bigr) \times c \!\cdot\! e^{(1-\alpha) \textstyle\sum_{i=1}^m \lambda_i\log \frac{1}{\lambda_i}}\\ &= c \textstyle\prod\limits_{i=1}^m \Bigl({N_r^{\vphantom{2}}}^\alpha\Bigl(\frac{X_i}{\sqrt{\lambda_i}}\Bigr) \lambda_i^{\alpha-1} \Bigr)^{\lambda_i} = c \textstyle\prod\limits_{i=1}^m \Bigl({N_r^{\vphantom{2}}}^\alpha(X_i) \lambda_i^{-1} \Bigr)^{\lambda_i} \\& =c\bigl(\textstyle\sum_{i=1}^m {N_r^{\vphantom{2}}}^\alpha(X_i)\bigr)^{\textstyle\sum_{i=1}^m \lambda_i} =c\textstyle\sum_{i=1}^m {N_r^{\vphantom{2}}}^\alpha(X_i). \end{align*} which proves~\eqref{repig}. \end{proof} \section{The REPI of Dembo-Cover-Thomas} As a second ingredient we have the following result, which was essentially established by Dembo, Cover and Thomas~\cite{DemboCoverThomas91}. It is this Rényi version of the EPI which led them to prove Shannon's original EPI by letting Rényi exponents $\to 1$. \begin{Theorem}\label{repi1m} Let $r_1,\ldots,r_m,r$ be exponents whose conjugates $r'_1,\ldots,r'_m,r'$ are of the same sign and satisfy $ \sum_{i=1}^m\frac{1}{r'_i}=\frac{1}{r'} $ and let $\lambda_1,\ldots,\lambda_m$ be the discrete probability distribution $\lambda_i = \frac{r'}{r'_i}$.
Then, for independent zero-mean $X_1,X_2,\ldots,X_m$, \begin{equation} \begin{split}\label{repi1mineqGauss} h_r\Bigl( \textstyle\sum\limits_{i=1}^m\sqrt{\lambda_i}&X_i\Bigr) - \textstyle\sum\limits_{i=1}^m \lambda_i h_{r_i}(X_i) \\& \geq h_r\Bigl(\textstyle\sum\limits_{i=1}^m\sqrt{\lambda_i}X^*_i\Bigr) -\textstyle\sum\limits_{i=1}^m \lambda_i h_{r_i}(X^*_i) \end{split} \end{equation} where $X^*_1,X^*_2,\ldots,X^*_m$ are i.i.d.\@ standard Gaussian $\mathcal{N}(0,\mathbf{I})$. Equality holds if and only if the $X_i$ are i.i.d.\@ {Gaussian}. \end{Theorem} It is easily seen from the expression~\eqref{renyiGauss} of the Rényi entropy of a Gaussian that \eqref{repi1mineqGauss} is equivalent to \begin{equation}\label{repi1mineq} h_r\Bigl( \textstyle\sum\limits_{i=1}^m\sqrt{\lambda_i}X_i\Bigr) - \textstyle\sum\limits_{i=1}^m \lambda_i h_{r_i}(X_i) \geq \tfrac{n}{2} r' \Bigl( \frac{\log r}{r}-\textstyle\sum\limits_{i=1}^m\frac{\log r_i}{r_i}\Bigr) \end{equation} Note that the l.h.s. is very similar to that of~\eqref{repi1bis} except that different Rényi exponents are present. This will be the crucial step toward proving~\eqref{repig}. Theorem~\ref{repi1m} (for $m=2$) was derived in~\cite{DemboCoverThomas91} as a rewriting of Young's strengthened convolutional~inequality with optimal constants. Section~\ref{repi1sec} provides a simple transportation proof, which uses only basic properties of Rényi entropies. \section{REPIs for Orders >1}\label{repi2sec} If $r>1$, then $r'>0$ and all $r'_i$ are positive and greater than~$r'$. Therefore, all $r_i$ are less than~$r$. Using the well-known fact that $h_r(X)$ is non increasing in $r$ (see also~\eqref{identitydiff} below), \begin{equation}\label{monori} h_{r_i}(X_i)\geq h_r(X_i) \qquad (i=1,2,\ldots,m). 
\end{equation} Plugging this into~\eqref{repi1mineq}, one obtains \begin{equation}\label{repi2mineq} h_r\bigl( \textstyle\sum\limits_{i=1}^m\sqrt{\lambda_i}X_i\bigr) -\textstyle\sum\limits_{i=1}^m \lambda_i h_{r}(X_i) \geq \frac{n}{2} r' \bigl( \frac{\log r}{r}-\sum_{i=1}^m\frac{\log r_i}{r_i}\bigr) \end{equation} where $\lambda_i=r'/r'_i$. From Lemma~\ref{charact} it suffices to establish that the r.h.s. of this inequality exceeds that of~\eqref{repi1bis} to prove~\eqref{repig} for appropriate constants $c$ and $\alpha$. For future reference define \begin{align} A(\lambda)&=|r'| \bigl( \tfrac{\log r}{r}-\textstyle\sum\limits_{i=1}^m\tfrac{\log r_i}{r_i}\bigr) \label{Alambda}\\[-1.5ex] &=|r'| \Bigl[\sum_{i=1}^m (1-\tfrac{\lambda_i}{r'})\log(1\!-\!\tfrac{\lambda_i}{r'}) \!-\!(1\!-\!\tfrac{1}{r'})\log(1\!-\!\tfrac{1}{r'})\Bigr] .\notag \end{align} (The absolute value $|r'|$ is needed in the next section where $r'$ is negative.) This function is strictly convex in $\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_m)$ because $x\mapsto (1-x/r')\log(1-x/r')$ is strictly convex. Note that $A(\lambda)$ vanishes in the limiting cases where $\lambda$ tends to one of the standard unit vectors $(1,0,\ldots,0)$, \ldots, $(0,0,\ldots,0,1)$ and since every $\lambda$ is a convex combination of these vectors and $A(\lambda)$ is strictly convex, one has $A(\lambda)<0$. Using the properties of $A(\lambda)$, one immediately recovers known Rényi EPIs: \begin{Proposition}[Ram and Sason~\cite{RamSason16}]\label{thmrepic} The Rényi EPI~\eqref{repic} holds for $r>1$ and $c=r^{r'/r}\bigl(1-\frac{1}{mr'}\bigr)^{mr'-1}$. \end{Proposition} \begin{proof} By Lemma~\ref{charact} for $\alpha=1$ we only need to check that the r.h.s. of~\eqref{repi2mineq} is greater than $\frac{n}{2}\log c$ for any choice of the $\lambda_i$'s, that is, for any choice of exponents $r_i$ such that $\sum_{i=1}^m \frac{1}{r'_i}~=~\frac{1}{r'}$. Thus, \eqref{repic} will hold for $\log c = \min_{\lambda} A(\lambda)$.
Now, by the log-sum inequality~\cite[Thm~2.7.1]{CoverThomas06}, \begin{align} \sum\limits_{i=1}^m\frac{1}{r_i}\log \frac{1}{r_i} &\!\geq\! \bigl(\sum\limits_{i=1}^m\frac{1}{r_i}\bigr) \log \frac{\textstyle\sum_{i=1}^m\frac{1}{r_i}}{m} = (m-\tfrac{1}{r'})\log \frac{m-\frac{1}{r'}}{m} \end{align} with equality if and only if all $r_i$ are equal, that is, the $\lambda_i$ are equal to $1/m$. Thus, $\min_\lambda A(\lambda)= r' \bigl[ \frac{\log r}{r}+ (m-1/r')\log \frac{m-1/r'}{m} \bigr]=\log c$. \end{proof} Note that $\log c= r' \frac{\log r}{r} +(mr'-1) \log \bigl(1-\frac{1}{mr'}\bigr)<0$ decreases (and tends to $r' \frac{\log r}{r} -1$) as $m$ increases; in fact $\frac{\partial \log c}{\partial m} = r' \log \bigl(1-\frac{1}{mr'}\bigr) +\frac{mr'}{r'm^2} < r'(-\frac{1}{mr'})+\frac{1}{m}=0$. Thus, a universal constant independent of $m$ is obtained by taking \begin{align} c&= \inf_m \;r^{r'/r}\bigl(1-\frac{1}{mr'}\bigr)^{mr'-1} =\frac{r^{r'/r}}{e}, \end{align} as was established by Bobkov and Chistyakov~\cite{BobkovChistyakov15}. \begin{Proposition}[Li~\cite{Li18}]\label{thmrepialpha} The Rényi EPI~\eqref{repialpha} holds for $r>1$ and $\alpha=\bigl[1+{r' \frac{\log_2 r}{r} +(2r'-1) \log_2 \bigl(1-\frac{1}{2r'}\bigr)} \bigr]^{-1}$. \end{Proposition} Li~\cite{Li18} remarked that this value of $\alpha$ is strictly smaller (better) than the value $\alpha=\frac{r+1}{2}$ obtained previously by Bobkov and Marsiglietti~\cite{BobkovMarsiglietti17}. In~\cite{Rioul18} it is shown that it cannot be further improved in our framework by making it depend on~$m$. \begin{proof} Since the announced $\alpha$ does not depend on $m$, we can always assume that $m=2$. By Lemma~\ref{charact} for $c=1$, we only need to check that the r.h.s. of~\eqref{repi2mineq} is greater than $\frac{n}{2}(1/\alpha-1)H(\lambda)$ for any choice of $\lambda_i$s, that is, for any choice of exponents $r_i$ such that $\sum_{i=1}^2 \frac{1}{r'_i} = \frac{1}{r'}$.
Thus,~\eqref{repialpha} will hold for $\frac{1}{\alpha}-1 = \min_\lambda \frac{A(\lambda)}{H(\lambda)}$. Li~\cite{Li18} showed---this is also easily proved using~\cite[Lemma~8]{MarsigliettiMelbourne18}---that the~minimum is obtained when $\lambda=(1/2,1/2)$. The corresponding value of $A(\lambda)/H(\lambda)$ is $\bigl[r' \frac{\log r}{r} +(2r'-1) \log \bigl(1-\frac{1}{2r'}\bigr)\bigr]/\log 2=1/\alpha -1$. \end{proof} The above value of $\alpha$ is $>1$. However, using the same method, it is easy to obtain Rényi EPIs with exponent values $\alpha<1$. In this way we obtain a new Rényi EPI: \begin{Proposition}\label{thmrepig} The Rényi EPI~\eqref{repig} holds for $r>1$, $0<\alpha<1$ with $c=\bigl[m\;r^{r'/r}\bigl(1-\frac{1}{mr'}\bigr)^{mr'-1}\bigr]^\alpha / m$. \end{Proposition} \begin{proof} By Lemma~\ref{charact} we only need to check that the r.h.s. of~\eqref{repi2mineq} is greater than $\frac{n}{2}\bigl((\log c)/\alpha+(1/\alpha-1)H(\lambda)\bigr)$, that is, $A(\lambda)\geq (\log c)/\alpha+(1/\alpha-1)H(\lambda)$ for any choice of the $\lambda_i$'s, that is, for any choice of exponents $r_i$ such that $\sum_{i=1}^m \frac{1}{r'_i} = \frac{1}{r'}$. Thus, for a given $0<\alpha<1$, \eqref{repig} will hold for $\log c = \min_\lambda \bigl(\alpha A(\lambda) - (1-\alpha) H(\lambda)\bigr)$. From the preceding proofs (since both $A(\lambda)$ and $-H(\lambda)$ are convex functions of $\lambda$), the minimum is attained when all $\lambda_i$'s are equal. This gives $\log c =\alpha \Bigl(r' \frac{\log r}{r} +(mr'-1) \log \bigl(1-\frac{1}{mr'}\bigr)\Bigr)-(1-\alpha) \log m$. \end{proof} \section{REPIs for Orders $<1$ and Log-Concave Densities} If $r<1$, then $r'<0$ and all $r'_i$ are negative and $<r'$. Therefore, all $r_i$ are $>r$. Now the opposite inequality of~\eqref{monori} holds and the method of the preceding section fails. For~log-concave densities, however,~\eqref{monori} can be replaced by a similar inequality in the right direction.
A density $f$ is \emph{log-concave} if $\log f$ is concave in its support, i.e., for all $0<\mu<1$, \begin{equation}\label{logconcave} f(x)^\mu f(y)^{1-\mu} \leq f(\mu x+ (1-\mu) y). \end{equation} \begin{Theorem}[Fradelizi, Madiman and Wang~\cite{FradeliziMadimanWang16}]\label{logconcaveentropyconcave} If $X$ has a log-concave density, then $h_r(rX)-rh_r(X)= (1-r) h_r(X)+n\log r $ is concave in~$r$. \end{Theorem} This concavity property is used in~\cite{FradeliziMadimanWang16} to derive a sharp ``varentropy bound''. Section~\ref{transportationvarentropy} provides an alternate transportation proof along the same lines as in Section~\ref{repi1sec}. By Theorem~\ref{logconcaveentropyconcave}, since $n\log r + (1-r) h_r(X)$ is concave and vanishes for $r\!=\!1$, the slope $\frac{n\log r + (1-r) h_r(X)-0}{r-1}$ is nonincreasing in $r$. In other words, $h_r(X) + n \frac{\log r }{1-r}$ is nondecreasing. Now since all $r_i$ are~$>r$, \begin{equation} h_{r_i}(X)+n \tfrac{\log r_i}{1-r_i} \geq h_r(X)+n \tfrac{\log r}{1-r} \qquad (i=1,\ldots,m). \end{equation} Plugging this into~\eqref{repi1mineq}, one obtains \begin{align} h_r&\Bigl( \textstyle\sum\limits_{i=1}^m\sqrt{\lambda_i}X_i\Bigr) - \textstyle\sum\limits_{i=1}^m \lambda_i h_{r}(X_i) \notag\\& \geq n \bigl(\tfrac{\log r}{1-r}- \textstyle\sum\limits_{i=1}^m \lambda_i\frac{\log r_i}{1-r_i}\bigr) + \frac{n}{2} r' \bigl( \frac{\log r}{r}-\textstyle\sum\limits_{i=1}^m\frac{\log r_i}{r_i}\bigr)\notag\\ &=\frac{n}{2} r' \bigl( \textstyle\sum\limits_{i=1}^m\frac{\log r_i}{r_i}-\frac{\log r}{r}\bigr) \label{repi2mineqlogc} \end{align} where we have used that $\lambda_i=r'/r'_i$ for $i=1,2,\ldots,m$. Notice that, quite surprisingly, the r.h.s. of~\eqref{repi2mineqlogc} for $r<1$ ($r'<0$) is the opposite of that of~\eqref{repi2mineq} for $r>1$ ($r'>0$). However, since $r'$ is now negative, the r.h.s. is exactly equal to $\frac{n}{2}A(\lambda)$ which is still convex and negative. 
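As a purely numerical illustration (not part of the argument), the function $A(\lambda)$ defined above can be evaluated directly: with the factor $|r'|$ it is negative inside the simplex, vanishes at the vertices, and is minimized at the uniform $\lambda$, for orders on both sides of $r=1$. A minimal Python sketch (function name and sample orders are our own choices):

```python
import numpy as np

def A(lam, r):
    # A(lambda) = |r'| * [ sum_i (1 - lam_i/r') log(1 - lam_i/r')
    #                      - (1 - 1/r') log(1 - 1/r') ],  r' = r/(r-1)
    rp = r / (r - 1.0)
    lam = np.asarray(lam, dtype=float)
    s = np.sum((1 - lam / rp) * np.log(1 - lam / rp))
    return abs(rp) * (s - (1 - 1 / rp) * np.log(1 - 1 / rp))

print(A([1.0, 0.0], 2.0))   # vanishes at a vertex of the simplex
print(A([0.5, 0.5], 2.0))   # negative; equals log c for m = 2, r = 2
print(A([0.5, 0.5], 0.5))   # negative as well for r < 1 (r' < 0)
```

For $m=2$, $r=2$ the minimum value $A(1/2,1/2)$ indeed coincides with $\log c = r'\frac{\log r}{r}+(mr'-1)\log(1-\frac{1}{mr'})$ of the proposition above.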
For this reason, the proofs of the following propositions for $r<1$ are mere repeats of those obtained previously for $r>1$. \begin{Proposition} The Rényi EPI~\eqref{repic} for log-concave densities holds for $c=r^{-r'/r}\bigl(1-\frac{1}{mr'}\bigr)^{1-mr'}$ and $r<1$. \end{Proposition} \begin{proof} Identical to that of Proposition~\ref{thmrepic} except for the change $|r'|=-r'$ in the expression of $A(\lambda)$. \end{proof} \begin{Proposition}[Marsiglietti and Melbourne~\cite{MarsigliettiMelbourne18}] The Rényi EPI~\eqref{repialpha} for log-concave densities holds for $\alpha=\bigl[1+{|r'| \frac{\log_2 r}{r} +(2|r'|+1) \log_2 \bigl(1+\frac{1}{2|r'|}\bigr)} \bigr]^{-1}$ and $r<1$. \end{Proposition} \begin{proof} Identical to that of Proposition~\ref{thmrepialpha} except for the change $|r'|=-r'$ in the expression of $A(\lambda)$. \end{proof} \begin{Proposition} The REPI~\eqref{repig} for log-concave densities holds for $c\!=\!\bigl[mr^{-r'/r}\bigl(1-\frac{1}{mr'}\bigr)^{1-mr'}\bigr]^\alpha \!/ m$ where $r<1$, $0\!<\!\alpha\!<\!1$. \end{Proposition} \begin{proof} It is identical to that of Proposition~\ref{thmrepig} except for the change $|r'|=-r'$ in the expression of $A(\lambda)$. \end{proof} \section{Relative and Conditional Rényi Entropies}\label{iisec} Before turning to transportation proofs of Theorems~\ref{repi1m} and~\ref{logconcaveentropyconcave}, it is convenient to review some definitions and properties. The following notions were previously used for discrete variables, but can be easily adapted to variables with densities. \begin{Definition}[Escort Variable~\cite{Bercher09}]\label{escort} If $f\in L^r(\mathbb{R}^n)$, its \emph{escort density} of exponent $r$ is defined by \begin{equation} f_r(x) = {f^r(x)}\Big/{\int_{\mathbb{R}^n} f^r(x) \d x}. \end{equation} Let $X_r\sim f_r$ denote the corresponding \emph{escort random variable}. \end{Definition} \begin{Proposition} Let $r\ne 1$ and assume that $X\sim f\in L^s(\mathbb{R}^n)$ for all $s$ in a neighborhood of $r$.
Then \begin{align} \frac{\partial}{\partial r} \bigl( (1-r) h_r(X) \bigr) &= \mathbb{E} \log f(X_r) = - h(X_r\|X)\\ \frac{\partial}{\partial r} h_r(X) &= - \frac{1}{(1-r)^2} D(X_r\|X)\leq 0\label{identitydiff}\\ \frac{\partial^2}{\partial r^2} \bigl( (1-r) h_r(X) \bigr) &= \mathrm{Var} \log f(X_r),\label{identityvar} \end{align} where, for $X\sim f$ and $Y\sim g$, $h(X\|Y)=\int f \log (1/g)$ denotes the cross-entropy and $D(X\|Y)=\int f \log (f/g)$ the Kullback-Leibler divergence. \end{Proposition} \begin{proof} By the hypothesis, one can differentiate under the integral sign. It is easily seen that $\frac{\partial}{\partial r} \bigl( (1-r) h_r(X) \bigr)\!=\!\frac{\partial}{\partial r} \log \int f^r$ $=\! \int f_r \log f$. Taking another derivative yields $\frac{\partial}{\partial r} \frac{\int f^r \log f}{\int f^r} = \int f_r(\log f)^2 - (\int f_r \log f)^2$. Since $\frac{\partial}{\partial r} \bigl( (1-r) h_r(X) \bigr)=(1-r)\frac{\partial}{\partial r} h_r(X) - h_r(X)$ we have $(1-r)^2 \frac{\partial}{\partial r} h_r(X)=\int f_r \log (f/f^r)+\log\int f^r =\int f_r \log (f/f_r)$. \end{proof} Eq.~\eqref{identitydiff} gives a new proof that $h_r(X)$ is nonincreasing in~$r$. It is strictly decreasing if $X_r$ is not distributed as $X$, that is, if $X$ is not uniformly distributed. Equation~\eqref{identityvar} shows that $(1-r)h_r(X)$ is convex in $r$, that is, $\int f^r$ is log-convex in $r$ (which is essentially equivalent to H\"older's inequality). \begin{Definition}[Relative Rényi Entropy~\cite{LapidothPfister16}] Given $X\sim f$ and $Y\sim g$, their \emph{relative Rényi entropy} of exponent $r$ (relative $r$-entropy) is given by $$ \Delta_r(X\|Y)=D_{\frac{1}{r}}(X_r\|Y_r) $$ where $D_r(X\|Y)=\frac{1}{r-1}\log\int f^r g^{1-r}$ is the $r$-divergence~{\upshape\cite{vanErvenHarremoes14}}. \end{Definition} When $r\to 1$ both the relative $r$-entropy and the $r$-divergence tend to the Kullback-Leibler divergence $D(X\|Y)=\Delta(X\|Y)$ (also known as the relative entropy).
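The limit just mentioned is easy to observe numerically. A short Python sketch (grid and Gaussian parameters are our own choices; for two unit-variance Gaussians with means $0$ and $1/2$ the Kullback-Leibler divergence equals $(1/2)^2/2=0.125$):

```python
import numpy as np

x = np.linspace(-14.0, 14.0, 400001)
dx = x[1] - x[0]

def gauss(mu):
    return np.exp(-(x - mu) ** 2 / 2) / np.sqrt(2 * np.pi)

f, g = gauss(0.0), gauss(0.5)        # X ~ N(0,1), Y ~ N(1/2,1)

def escort(p, r):                    # escort density of exponent r
    pr = p ** r
    return pr / (pr.sum() * dx)

def renyi_div(p, q, s):              # r-divergence D_s(p||q)
    return np.log(np.sum(p ** s * q ** (1 - s)) * dx) / (s - 1)

kl = np.sum(f * np.log(f / g)) * dx  # D(X||Y) = 0.125 here

r = 1.001                            # order close to 1
d_r = renyi_div(f, g, r)                                  # r-divergence
delta_r = renyi_div(escort(f, r), escort(g, r), 1 / r)    # relative r-entropy
```

Both `d_r` and `delta_r` agree with `kl` up to terms of order $|r-1|$ and discretization error.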
For $r\ne 1$ the two notions do not co\"incide. It is easily checked from the definitions that \begin{equation} \Delta_r(X\|Y)\!=\! -r'\log\!\!\int\!\! f_r^{1/r} g_r^{1/r'} \!=\!-r'\!\log\mathbb{E}\bigl( g_r^{1/r'}\!(X)\bigr) - h_r\!(X)\label{rentropydiff} \end{equation} \begin{equation}\label{rentropy} h_r(X)= -r'\log\mathbb{E}\bigl( f_r^{1/r'}(X)\bigr). \end{equation} Thus, just like for the case $r=1$, the relative $r$-entropy~\eqref{rentropydiff} is the difference between the expression of the $r$-entropy~\eqref{rentropy} in which $f$ is replaced by $g$, and the $r$-entropy itself. Since the Rényi divergence $D_r(X\|Y)=\frac{1}{r-1}\log\int f^r g^{1-r}$ is nonnegative and vanishes if and only if the two distributions $f$ and $g$ co\"incide, the relative $r$-entropy $\Delta_r(X\|Y)$ enjoys the same property. From~\eqref{rentropydiff} we have the following \begin{Proposition}[Rényi-Gibbs' inequality]\label{RGI}% If $X\sim f$, \begin{equation} h_r(X)\leq -r'\log\mathbb{E}\bigl( g_r^{1/r'}(X)\bigr) \end{equation} for any density $g$, with equality if and only if $f=g$ a.e. \end{Proposition} \noindent Letting $r\to 1$ one recovers the usual Gibbs' inequality. \begin{Definition}[Arimoto's Conditional R\'enyi Entropy~\cite{FehrBerens14}] $$ h_r(X|Z)=-r'\!\log\mathbb{E} \| f(\cdot|Z)\|_r = -r' \!\log \mathbb{E} f_r^{1/r'}\!(X|Z) $$ \end{Definition} Proposition~\ref{RGI} applied to $f(x|z)$ and $g(x|z)$ gives the inequality $h_r(X|{Z=z})\leq -r'\log\mathbb{E}\bigl( g_r^{1/r'}(X|Z=z)\bigr)$ which, averaged over $Z$, yields the following conditional Rényi-Gibbs' inequality \begin{equation}\label{condrgibbsineq} h_r(X|Z)\leq -r'\log\mathbb{E}\bigl( g_r^{1/r'}\!(X|Z)\bigr). \end{equation} If in particular we put $g(x|z)=f(x)$ independent of $z$, the r.h.s. becomes equal to~\eqref{rentropy}.
We have thus obtained a simple proof of the following \begin{Proposition}[Conditioning reduces $r$-entropy~\cite{FehrBerens14}] \begin{equation} h_r(X|Z)\leq h_r(X) \end{equation} with equality if and only if $X$ and $Z$ are independent. \end{Proposition} Another important property is the data processing inequality~\cite{vanErvenHarremoes14} which implies $D_r(T(X)\|T(Y)) \leq D_r(X\|Y)$ for any transformation $T$. The same holds for relative $r$-entropy when the transformation is applied to escort variables: \begin{Proposition}[Data processing inequality for relative $r$-entropy] If $X^*, Y^*, X, Y$ are random vectors such that \begin{equation}\label{transportescort} X_r = T(X^*_r) \quad \text{and}\quad Y_r = T(Y^*_r), \end{equation} then $\Delta_r(X\|Y)\leq \Delta_r(X^*\|Y^*)$. \end{Proposition} \begin{proof} $\Delta_r(X\|Y)=D_{\frac{1}{r}}(X_r\|Y_r)=D_{\frac{1}{r}}(T(X^*_r)\|T(Y^*_r))\leq D_{\frac{1}{r}}(X^*_r\|Y^*_r) =\Delta_r(X^*\|Y^*)$. \end{proof} When $T$ is invertible, inequalities in both directions hold: \begin{Proposition}[Relative $r$-entropy preserves transport]\label{transportpreserv} For an invertible transport $T$ satisfying~\eqref{transportescort}, $\Delta_r(X\|Y)= \Delta_r(X^*\|Y^*)$. \end{Proposition} From~\eqref{rentropydiff} the equality $\Delta_r(X\|Y)= \Delta_r(X^*\|Y^*)$ can be rewritten as the following identity: \begin{equation}\label{transportpreservationidentity} -r'\log\mathbb{E}\bigl( g_r^{\frac{1}{r'}}\!(X)\bigr) - h_r(X) \!=\!-r'\log\mathbb{E}\bigl( {g^*}_r^{\frac{1}{r'}}\!(X^*)\bigr) - h_r(X^*). \end{equation} Assuming $T$ is a diffeomorphism, the density $g^*_r$ of $Y^*_r$ is given by the change of variable formula $g^*_r(u)=g_r(T(u)) |T'(u)|$ where the Jacobian $|T'(u)|$ is the absolute value of the determinant of the Jacobian matrix $T'(u)$.
In this case~\eqref{transportpreservationidentity} can be rewritten as \begin{equation}\label{transportpreservationidentity2} \begin{split} -r'&\log\mathbb{E}\bigl( g_r^{\frac{1}{r'}}\!(X)\bigr) - h_r(X) \\& =-r'\log\mathbb{E}\bigl( {g}_r^{\frac{1}{r'}}\!(T(X^*))|T'(X^*)|^{\frac{1}{r'}}\bigr) - h_r(X^*). \end{split} \end{equation} \section{A Transportation Proof of Theorem~\ref{repi1m}}\label{repi1sec} We proceed to prove~\eqref{repi1mineqGauss}. It is easily seen, using finite induction on $m$, that it suffices to prove the corresponding inequality for $m=2$ arguments: \begin{equation}\label{repi1equiv} \begin{split} &h_r(\sqrt{\lambda}X\!+\! \sqrt{1\!-\!\lambda} Y) \!-\! \lambda h_p(X) \!-\! (1\!-\!\lambda) h_q(Y) \\&\;\geq h_r(\sqrt{\lambda}X^*\!+\! \sqrt{1\!-\!\lambda} Y^*) \!-\! \lambda h_p(X^*) \!-\! (1\!-\!\lambda) h_q(Y^*) \end{split} \end{equation} with equality if and only if $X,Y$ are i.i.d.\@ Gaussian. Here $X^*$ and $Y^*$ are i.i.d.\@ standard Gaussian $\mathcal{N}(0,\mathbf{I})$ and the triple $(p,q,r)$ and its associated $\lambda\in(0,1)$ satisfy the following conditions: $p,q,r$ have conjugates $p',q',r'$ of the same sign which satisfy $ \frac{1}{p'}+\dfrac{1}{q'}=\frac{1}{r'} $ (that is, $\frac{1}{p}+\frac{1}{q}=1+\frac{1}{r}$) and $ \lambda= \frac{r'}{p'} = 1- \frac{r'}{q'}. $ \begin{Lemma}[{Normal Transport}]\label{gt} Let $f$ be given and $X^*\!\sim\! \mathcal{N}(0,\sigma^2\mathbf{I})$. There exists a diffeomorphism $T:\mathbb{R}^n\to\mathbb{R}^n$ with log-concave Jacobian $|T'|$ such that $X=T(X^*) \sim f$. \end{Lemma} \noindent Thus $T$ transports normal $X^*$ to $X$. The log-concavity property is that for any such transports $T,U$ and $\lambda\in(0,1)$, we have \begin{equation}\label{kyfan} |T'(X^*)|^\lambda |U'(Y^*)|^{1-\lambda} \leq |\lambda T'(X^*)+(1-\lambda)U'(Y^*)|. 
\end{equation} The proof of Lemma~\ref{gt} is very simple for one-dimensional variables~\cite{Rioul17a}, where $T$ is just an increasing function with continuous derivative $T'>0$ and where~\eqref{kyfan} is the classical arithmetic-geometric inequality. For dimensions $n>1$, Lemma~\ref{gt} comes in two flavors:\\ \emph{(i) Kn\"othe maps:} $T$ can be chosen such that its Jacobian matrix $T'$ is (lower) triangular with positive diagonal elements (Kn\"othe--Rosenblatt map~\cite{Rosenblatt52,Knothe57}). Two different elementary proofs are given in~\cite{Rioul17}. Inequality~\eqref{kyfan} results from the concavity of the logarithm applied to the Jacobian matrices' diagonal elements. \\\emph{(ii) Brenier maps:} $T$ can be chosen such that its Jacobian matrix $T'$ is symmetric positive definite (Brenier map~\cite{Brenier91,McCann95}). In this case~\eqref{kyfan} is Ky Fan's inequality~\cite[\S~17.9]{CoverThomas06}. \medskip The key argument is now the following. Considering escort variables, by transport (Lemma~\ref{gt}), one can write $X_p = T(X^*_p)$ and $Y_q = U(Y^*_q)$ for two diffeomorphisms $T$ and $U$ satisfying~\eqref{kyfan}. Then by transport preservation (Proposition~\ref{transportpreserv}), we have $\lambda\Delta_p(X\|\bar{U})+(1-\lambda)\Delta_q(Y\|\bar{V})=\lambda\Delta_p(X^*\|\bar{U}^*)+(1-\lambda)\Delta_q(Y^*\|\bar{V}^*)$ for any $\bar{U}\sim \phi$ and $\bar{V}\sim\psi$, which from~\eqref{transportpreservationidentity2} can be easily rewritten in the form \begin{multline}\label{someidentity} -r'\log\mathbb{E}\bigl( \chi^\frac{1}{r'}(X,Y)\bigr) - \lambda h_p(X)-(1-\lambda)h_q(Y) \\=-r'\log\mathbb{E}\Bigl(\!\bigl( \chi(T(X^*),U(Y^*)) |T'(X^*)|^\lambda |U'(Y^*)|^{1-\lambda}\bigr)^\frac{1}{r'}\!\Bigr)\\ - \lambda h_p(X^*)-(1-\lambda) h_q(Y^*) \end{multline} where we have denoted $\chi(x,y)=\phi_p^{\lambda}(x)\psi_q^{1-\lambda}(y)$. Such an identity holds, by the change of variable $x=T(x^*),y=U(y^*)$, for any function $\chi(x,y)$ of $x$ and $y$.
Now from~\eqref{rentropy} we have \begin{equation*} h_r(\sqrt{\lambda}X\!+\! \sqrt{1\!-\!\lambda} Y) = -r'\log\mathbb{E}\bigl( \theta_r^{1/r'}\!\!(\sqrt{\lambda}X\!+\! \sqrt{1\!-\!\lambda} Y)\bigr), \end{equation*} where $\theta$ is the density of $\sqrt{\lambda}X\!+\! \sqrt{1\!-\!\lambda} Y$. Therefore, the l.h.s. of~\eqref{repi1equiv} can be written as \begin{align} &h_r(\sqrt{\lambda}X\!+\! \sqrt{1\!-\!\lambda} Y) \!-\! \lambda h_p(X) \!-\! (1\!-\!\lambda) h_q(Y) \\ &=\!-\!r'\log\mathbb{E}\bigl( \theta_r^{\frac{1}{r'}}\!\!(\sqrt{\lambda}X\!+\! \sqrt{1\!-\!\lambda} Y)\bigr) \!-\! \lambda h_p(X) \!-\! (1\!-\!\lambda) h_q(Y) \notag \end{align} Applying~\eqref{someidentity} to $\chi(x,y)=\theta_r(\sqrt{\lambda}x\!+\! \sqrt{1\!-\!\lambda} y)$ and using the inequality~\eqref{kyfan} gives \begin{align}\label{ineqqq} h_r&(\sqrt{\lambda}X\!+\! \sqrt{1\!-\!\lambda} Y) \!-\! \lambda h_p(X) \!-\! (1\!-\!\lambda) h_q(Y)\\ &\geq -r'\log\mathbb{E}\bigl( \phi^\frac{1}{r'}(X^*,Y^*) \bigr) \!-\! \lambda h_p(X^*) \!-\! (1\!-\!\lambda) h_q(Y^*)\notag \end{align} where $\phi(x^*,y^*)=\theta_r(\sqrt{\lambda}T(x^*)\!+\! \sqrt{1\!-\!\lambda} U(y^*))\cdot |\lambda T'(x^*)\!+\!(1\!-\!\lambda)U'(y^*)|$. To conclude we need the following \begin{Lemma}[{Normal} Rotation~\cite{Rioul17}]\label{gr} If $X^*,Y^*$ are i.i.d. Gaussian, then~for any $0<\lambda<1$, the rotation\pagebreak[1] \begin{equation} \tilde{X}\!=\!\sqrt{\lambda}\;X^* \!+\! \sqrt{1-\lambda}\;Y^*, \quad \tilde{Y}\!=\!-\sqrt{1-\lambda}\;X^* \!+\! \sqrt{\lambda}\;Y^* \end{equation} yields i.i.d.\@ Gaussian variables $\tilde{X},\tilde{Y}$. \end{Lemma} Lemma~\ref{gr} is easily proved by considering covariance matrices. A deeper result (Bernstein's lemma, not used here) states that this property of remaining i.i.d.\@ by rotation characterizes the Gaussian distribution (see~\cite[Lemma~4]{Rioul17a} and~\cite[Chap.~5]{Bryc95}).
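Lemma~\ref{gr} can also be observed empirically: the rotated pair has unit variances and zero correlation, and being jointly Gaussian this implies independence. A small Monte Carlo sketch (sample size, seed and $\lambda$ are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 0.3
n = 10 ** 6

Xs = rng.standard_normal(n)          # X* i.i.d. standard Gaussian
Ys = rng.standard_normal(n)          # Y* i.i.d. standard Gaussian

Xt = np.sqrt(lam) * Xs + np.sqrt(1 - lam) * Ys
Yt = -np.sqrt(1 - lam) * Xs + np.sqrt(lam) * Ys

# The rotation matrix is orthogonal, so (Xt, Yt) is jointly Gaussian
# with unit variances and zero correlation -- hence again i.i.d. N(0,1).
```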
Since the starred variables can be expressed in terms of the tilde variables by the inverse~rotation $X^*\!=\!\sqrt{\lambda}\;\tilde{X} - \sqrt{1-\lambda}\;\tilde{Y}$, $Y^*\!=\!\sqrt{1-\lambda}\;\tilde{X} + \sqrt{\lambda}\;\tilde{Y}$, inequality~\eqref{ineqqq} can be written as \begin{align}\label{afterrotation} &h_r(\sqrt{\lambda}X+\sqrt{1-\lambda}Y) - \lambda h_p(X) -(1-\lambda)h_q(Y)\\ &\;\geq -r' \log \mathbb{E}\bigl( \psi^{1/r'}(\tilde{X}|\tilde{Y}) \bigr)- \lambda h_p(X^*) -(1-\lambda)h_q(Y^*),\notag \end{align} where $ \psi(\tilde{x}|\tilde{y}) = \theta_r(\sqrt{\lambda}T(\sqrt{\lambda}\tilde{x} \!-\! \sqrt{1\!-\!\lambda}\tilde{y})\!+\! \sqrt{1\!\!-\!\!\lambda} U(\sqrt{1\!-\!\lambda}\tilde{x} + \sqrt{\lambda}\tilde{y}))\cdot |\lambda T'(\sqrt{\lambda}\tilde{x} \!-\! \sqrt{1\!-\!\lambda}\tilde{y})\!+\!(1\!-\!\lambda)U'(\sqrt{1\!-\!\lambda}\tilde{x} + \sqrt{\lambda}\tilde{y})|$. Making the change of variable $z=\sqrt{\lambda}T(\sqrt{\lambda}\tilde{x}-\sqrt{1-\lambda}\tilde{y})+\sqrt{1-\lambda}U(\sqrt{1-\lambda}\tilde{x}+\sqrt{\lambda}\tilde{y})$, we check that $\int\!\psi(\tilde{x}|\tilde{y}) \d \tilde{x} = \int\!\theta_r(z)\d z =1$ since $\theta_r$ is a density. Hence, $\psi(\tilde{x}|\tilde{y})$ is a conditional density, and by~\eqref{condrgibbsineq}, \begin{equation}\label{condpsi} -r' \log \mathbb{E}\bigl( \psi^{1/r'}(\tilde{X}|\tilde{Y}) \bigr) \geq h_r(\tilde{X}|\tilde{Y}) \end{equation} where $h_r(\tilde{X}|\tilde{Y}) = h_r(\tilde{X})=h_r(\sqrt{\lambda}\;X^* + \sqrt{1-\lambda}\;Y^*)$ since $\tilde{X}$ and $\tilde{Y}$ are independent. Combining with~\eqref{afterrotation} yields the announced inequality~\eqref{repi1equiv}. It remains to settle the equality case in~\eqref{repi1equiv}. From the above proof, equality holds in~\eqref{repi1equiv} if and only if both~\eqref{kyfan} and~\eqref{condpsi} are equalities. 
The rest of the argument depends on whether Kn\"othe or Brenier maps are used:\\ \emph{(i) Kn\"othe maps:} In the case of Kn\"othe maps, Jacobian matrices are triangular and equality in~\eqref{kyfan} holds if and only if for all $i=1,2,\ldots,n$, $\frac{\partial T_i}{\partial x_i}(X^*) = \frac{\partial U_i}{\partial y_i}(Y^*) \text{ a.s.}$ Since $X^*$ and $Y^*$ are independent Gaussian variables, this implies that $\frac{\partial T_i}{\partial x_i}$ and $\frac{\partial U_i}{\partial y_i}$ are constant and equal. In particular the Jacobian $|\lambda T'(\sqrt{\lambda}\tilde{x}-\sqrt{1-\lambda}\tilde{y}) + (1-\lambda) U'(\sqrt{1-\lambda}\tilde{x}+\sqrt{\lambda}\tilde{y})| $ is~constant. Now since $h_r(\tilde{X}|\tilde{Y}) = h_r(\tilde{X})$, equality in~\eqref{condpsi} holds only if $ \psi(\tilde{x}|\tilde{y})$ does not depend on $\tilde{y}$, which implies that $\sqrt{\lambda}T(\sqrt{\lambda}\tilde{x}-\sqrt{1-\lambda}\tilde{y})+\sqrt{1-\lambda}U(\sqrt{1-\lambda}\tilde{x}+\sqrt{\lambda}\tilde{y})$ does not depend on the value of $\tilde{y}$. Taking~derivatives with respect to $y_j$ for all $j=1,2,\ldots,n$, we have $ -\sqrt{\lambda}\sqrt{1-\lambda} \frac{\partial T_i}{\partial x_j}(\sqrt{\lambda}\tilde{X}-\sqrt{1-\lambda}\tilde{Y}) + \sqrt{\lambda}\sqrt{1-\lambda} \frac{\partial U_i}{\partial y_j}(\sqrt{1-\lambda}\tilde{X}+\sqrt{\lambda}\tilde{Y})=0 $ which implies $\frac{\partial T_i}{\partial x_j}(X^*) = \frac{\partial U_i}{\partial y_j}(Y^*)$ a.s. for all $i,j=1,2,\ldots,n$. In other words, $T'(X^*)=U'(Y^*)$ a.s. \\ \emph{(ii) Brenier maps:} In the case of Brenier maps the argument is simpler. Jacobian matrices are symmetric positive definite and by strict concavity, Ky Fan's inequality~\eqref{kyfan} is an equality only if $T'(X^*)=U'(Y^*)$ a.s. \medskip In both cases, since $X^*$ and $Y^*$ are independent, this implies that $T'(X^*)=U'(Y^*)$ is constant.
Therefore, $T$ and $U$ are linear transformations, equal up to an additive constant ($=0$ since the random vectors are assumed of zero mean). It follows that $X_p = T(X^*_p)$ and $Y_q = U(Y^*_q)$ are Gaussian with respective distributions $X_p\sim \mathcal{N}(0,\mathbf{K}/p)$ and $Y_q\sim \mathcal{N}(0,\mathbf{K}/q)$. Hence, $X$ and $Y$ are i.i.d.\@ Gaussian $\mathcal{N}(0,\mathbf{K})$. This ends the proof of Theorem~\ref{repi1m}.\hfill\qedsymbol We note that this section has provided an information-theoretic proof of the strengthened Young's convolution inequality (with optimal constants), since~\eqref{repi1equiv} is a rewriting of this convolution inequality~\cite{DemboCoverThomas91}. \section{A Transportation Proof of Theorem~\ref{logconcaveentropyconcave}} \label{transportationvarentropy} Define $r=\lambda p + (1-\lambda) q$ where $0<\lambda<1$. It is required to show that $(1-r) h_r(X)+n\log r \geq \lambda\bigl((1-p) h_p(X)+n\log p\bigr)+(1-\lambda)\bigl((1-q) h_q(X)+n\log q\bigr)$. By Lemma~\ref{gt} there exist two diffeomorphisms $T,U$ such that one can write $pX_p=T(X^*)$ and $qX_q=U(X^*)$. Then, by these changes of variables $X^*$ has density \begin{equation} \tfrac{1}{p^n} f_p\bigl(\tfrac{T(x^*)}{p}\bigr) |T'(x^*)| = \tfrac{1}{q^n} f_q\bigl(\tfrac{U(x^*)}{q}\bigr) |U'(x^*)| \end{equation} which can be written $$ \frac{ f^p\bigl(\tfrac{T(x^*)}{p}\bigr) |T'(x^*)|}{\exp\bigl((1-p) h_p(X)+n\log p\bigr)} = \frac{f^q\bigl(\tfrac{U(x^*)}{q}\bigr) |U'(x^*)|}{\exp\bigl((1-q) h_q(X)+n\log q\bigr)} $$ Taking the geometric mean, integrating over $x^*$ and taking the logarithm gives the representation \begin{multline*} \lambda\bigl((1-p) h_p(X)+n\log p\bigr)+(1-\lambda)\bigl((1-q) h_q(X)+n\log q\bigr) \\=\! \log \!\int\! f^{\lambda p}\bigl(\tfrac{T(x^*)}{p}\bigr) f^{(1\!-\!\lambda)q}\bigl(\tfrac{U(x^*)}{q}\bigr) |T'(x^*)|^\lambda|U'(x^*)|^{1\!-\!\lambda}\d x^*\!.
\end{multline*} Now, by log-concavity~\eqref{logconcave} (with $\mu=\lambda p/r$) and~\eqref{kyfan}, \begin{align*} &\lambda\bigl((1\!-\!p) h_p(X)+n\log p\bigr)+(1\!-\!\lambda)\bigl((1\!-\!q) h_q(X)+n\log q\bigr)\notag\\ &\leq \log \smash{\int} f^r\bigl(\tfrac{\lambda T(x^*)\!+\!(1\!-\!\lambda)U(x^*)}{r}\bigr ) |\lambda T'(x^*)\!+\!(1\!-\!\lambda)U'(x^*)|\d x^*\\%\notag\\ &= \log \bigl(r^n\!\int\!\! f^r\bigr) = (1-r) h_r(X)+n\log r . \end{align*} This ends the proof of Theorem~\ref{logconcaveentropyconcave}.\hfill\qedsymbol This theorem asserts that the second derivative $\frac{\partial^2}{\partial r^2} \bigl( (1-r) h_r(X)+n\log r \bigr)\leq 0$. From~\eqref{identityvar} this gives $\mathrm{Var} \log f(X_r) \leq n/r^2$, that is, $\mathrm{Var} \log f_r(X_r) \leq n$. Setting $r=1$, this is the varentropy bound $\mathrm{Var} \log f(X) \leq n$ of~\cite{FradeliziMadimanWang16}.
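For illustration, the varentropy bound just obtained can be probed numerically in dimension $n=1$ on two log-concave examples; for the one-sided exponential law, $\log f(X)=-X$ on the support, so the bound $\mathrm{Var}\log f(X)\le n$ is attained exactly. A Monte Carlo sketch (distributions, seed and sample size chosen by us):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10 ** 6

# Standard Gaussian (log-concave): log f(X) = -X^2/2 - log sqrt(2 pi),
# so Var log f(X) = Var(X^2)/4 = 1/2 <= n = 1.
g = rng.standard_normal(N)
var_gauss = np.var(-g ** 2 / 2)

# One-sided exponential (log-concave): log f(X) = -X on X >= 0,
# so Var log f(X) = 1 = n: the bound is attained.
e = rng.exponential(1.0, N)
var_exp = np.var(-e)
```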
\section{Introduction} The possibility of engineering devices at the nano and micro scales has created the conditions for testing fundamental aspects of quantum theory \cite{Brandes}, otherwise difficult to probe in natural atomic size systems. In particular, quantum dots (QD) have largely been considered as a physical realization of quantum billiards \cite{Berggren,Bruus,Beenakker} and mesoscopic structures have played an important role in the experimental study of quantum chaos \cite{Stockmann}, mainly through the investigation of the transport properties of quantum dot \cite{Marcus} and quantum well \cite{Fromhold} structures in the presence of a magnetic field. However, some extraneous effects can prevent the full observation of the quantum chaotic behavior. For instance, impurities and soft confining potentials may mask the chaotic dynamics predicted for some semiconductor quantum billiards ({\it e.g.}, the stadium) \cite{Berggren} and the incoherent influence of the bulk on the electronic dynamics hinders the observation of the so-called eigenstate scars \cite{Heller} in quantum corrals \cite{Crommie}. Furthermore, Random Matrix Theory (RMT) predictions for the Coulomb blockade peaks in quantum dots may fail as a result of the coupling with the environment \cite{Madger}. Alternatively, suspended nanostructures are ideal candidates for implementing and investigating coherent phenomena in semiconductor devices because, at low temperatures, they provide excellent isolation of the quantum system from the bulk of the sample \cite{nano,Blencowe}. Nanoelectromechanical systems (NEMS), in particular, are especially suited to study the effects of a phonon bath on the electronic states, possibly leading to chaotic behavior. Such a point is of practical relevance since it bears on the question of the stability of quantum devices \cite{Chuang,Cleland}, whose actual implementation could be prevented by the emergence of chaos \cite{Georgeot}.
In a recent paper \cite{rego}, we have shown that in fact suspended nanostructures can display quantum chaotic behavior. In this article we extend such studies and perform a detailed analysis of the coupling between the phonons of a suspended nanoscopic dielectric plate and the electrons of a two-dimensional electron gas (2DEG). The phonons are associated with the vibrational modes of a suspended rectangular plate ({\it i.e.}, the phonon cavity) and the 2DEG (in the free electron approximation) is confined to a large quantum dot (billiard) built on the plate's surface. Two different scenarios are considered for the shape of the quantum dot: circular and rectangular geometries (Fig. \ref{system}), which yield distinct chaotic features. As for the coupling mechanisms, we take into account the deformation and piezoelectric potentials. By performing energy-level statistics we show that, for sufficiently strong electron-phonon coupling, such electromechanical nanostructures can exhibit quantum chaos for a large range of material and geometry parameters. The resulting spectral correlation functions, which depend on the geometry and location of the center of the QD on the surface of the plate as well as on the plate's boundary conditions (free or clamped), are those expected from the Gaussian Orthogonal Ensemble (GOE) or Gaussian Unitary Ensemble (GUE) of the Random Matrix Theory (RMT) \cite{Mehta}. We present a detailed explanation for the occurrence of such different statistics distributions. Noteworthy are the results for the circular QD, since in this case the GUE statistics can be obtained in spite of the fact that the system is time-reversal invariant. By investigating the influence of material and geometrical parameters on the unfolding of chaos, we indicate the conditions for its experimental observation. 
\begin{figure}[h] \includegraphics[width=8cm]{figure1.eps} \caption{Schematics of the suspended nanoelectromechanical structures depicting the cases of a circular and a rectangular quantum dot on the surface of a suspended dielectric plate.} \label{system} \end{figure} \section{The system Hamiltonian} The full Hamiltonian of the problem is composed of three parts: the phonons, the electrons and the electron-phonon interactions, which are formulated in sequence below. \subsection{Phonons} At temperatures below 1 Kelvin the acoustic phonon mean free path in SiN (silicon nitride), for instance, can be as large as 10 $\mu$m \cite{phonongas}. This implies that a plane wave acoustic phonon propagating through a suspended mesoscopic system whose dimensions are much smaller than this mean free path hits the boundaries many times during its expected lifetime, giving rise to standing waves. Thus, the phonons in such systems can be described in terms of the plate's normal modes of deflection, instead of the plane wave phonon description that is more appropriate for bulk systems. Therefore, in the following we associate the phonons with the vibrational modes of the suspended mesoscopic system. In addition, at low temperatures the semiconductor can be treated as a continuum elastic material due to the large wavelength of the phonons. To obtain the long wavelength vibrational modes of the plate we use the Classical Plate Theory (CPT) approximation \cite{Graff}. The CPT adequately describes the vibrations of a plate whose thickness is much smaller than its lateral dimensions, which is characteristic of our NEMS.
The deflections of a plate lying in the $(x,y)$ plane are thus described by a vector field $\left[U({\bf r}) \, \hat{\imath} + V({\bf r}) \, \hat{\jmath} + W({\bf r}) \, \hat{k}\right] \exp(- i \omega t)$ of components \begin{eqnarray} & & U(x,y,z) = - z \frac{\partial W}{\partial x}, \ \ V(x,y,z) = - z \frac{\partial W}{\partial y}\ , \\ \label{W} & & W(x,y) = \sum_{m,n} A_{mn} X_m(x)Y_n(y). \label{w(x,y)} \end{eqnarray} In Eq. (\ref{w(x,y)}), $W(x,y)$ is written in terms of the one-dimensional transverse modes $X_m$ and $Y_n$, which are the solutions of the Bernoulli-Euler equation \cite{Graff,Leissa} under the appropriate boundary conditions. Considering that each of the four sides of the plate can be either clamped (C) or free (F) (corresponding to the Dirichlet or Neumann boundary conditions, respectively), we have \begin{eqnarray} X_m(x) &=& \sin[k_mx] \pm \sinh[k_mx] \nonumber \\ & & + \zeta \left\{\cos[k_mx] \pm \cosh[k_mx]\right\} \ , \label{b-e} \end{eqnarray} where \begin{equation} \zeta = \frac{\cos[k_m L_x] - \cosh[k_m L_x]} {\sin[k_m L_x] + \sinh[k_m L_x]}. \end{equation} Likewise for $Y_n(y)$. The signs in Eq. (\ref{b-e}) are positive (negative) for the FF (CC and CF) boundary conditions \cite{orthonormal}. The $k_m$'s are solutions of the frequency equation $\cos[k L_x] \cosh[k L_x] = 1$ for the CC and FF cases (and of $\cos[k L_x] \cosh[k L_x] = -1$ for CF). Under given boundary conditions, the Rayleigh-Ritz method is used to obtain the coefficients $A_{mn}^\alpha$ of Eq. (\ref{w(x,y)}) and the eigenfrequencies $\omega_\alpha$ corresponding to the eigenmode ${\bf u}_\alpha({\bf r})$ of the plate.
It is done by imposing $\partial\mathcal{U}/\partial A_{mn} = 0$ on the energy functional ${\cal{U}} = \int dx dy \, [{\cal{K}}(x,y) - {\cal{V}}(x,y)]$ \cite{Leissa}, where the kinetic and strain energies are written as \begin{eqnarray} {\cal{K}}(x,y) &=& \rho_{_{2D}} \frac{\omega^2}{2} W^2(x,y), \\ \label{T_xy} {\cal{V}}(x,y) &=& \frac{D}{2} \left(\frac{\partial^2 W}{\partial x^2} + \frac{\partial^2W} {\partial y^2} \right)^2 - (1-\nu) D \nonumber \\ & & \times \left( \frac{\partial^2 W}{\partial x^2} \frac{\partial^2 W}{\partial y^2} - \left(\frac{\partial^2 W}{\partial x \partial y}\right)^2 \right). \label{V_xy} \end{eqnarray} Here $\rho_{_{2D}}$ denotes the two-dimensional density, $\nu$ is the Poisson constant and $D$ is the rigidity constant. In the CPT approximation, the most important motion is that in the $z$ direction, given by $W(x,y)$, whereas the displacements along the $x$ and $y$ directions are described approximately by the first term of an orthogonal basis expansion. Therefore, an arbitrary transverse motion can be expanded in the basis of the orthonormal vibrational modes $W_\alpha(x,y)$, for which \begin{eqnarray} & & \langle W_{\alpha} (x,y) | W_{\beta} (x,y) \rangle = \delta_{\alpha \beta}, \nonumber \\ & & \sum_{\alpha} W_{\alpha} (x,y) W_{\alpha} (x',y') = \delta(x-x') \delta (y-y'). \label{ortho} \end{eqnarray} The $U_\alpha$ and $V_\alpha$ components are only approximately orthonormal. As an illustration, we show in Fig. \ref{modes} the vector field components $(U_\alpha,V_\alpha, W_{\alpha})$ of the SA2 eigenmode (i.e., the second symmetric/antisymmetric eigenmode) for the \{FFFF\} boundary conditions. Hereafter, we refer to the set of boundary conditions of the plate, either C or F, as \{$X_1 X_2 Y_1 Y_2$\}, in accordance with Fig.~\ref{system}.
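For concreteness, the wavenumbers $k_m$ entering Eq. (\ref{b-e}) for the CC and FF cases can be obtained by a simple root search of the frequency equation. A short Python sketch (bracketing intervals are our own, unit side $L_x=1$) recovers the well-known first roots $k_m L_x \approx 4.730,\ 7.853,\ 10.996$:

```python
import numpy as np

def freq_eq(k):
    # frequency equation cos(k L_x) cosh(k L_x) = 1 with L_x = 1
    return np.cos(k) * np.cosh(k) - 1.0

def bisect(func, a, b, tol=1e-12):
    # plain bisection on a sign-changing bracket [a, b]
    fa = func(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * func(m) <= 0:
            b = m
        else:
            a, fa = m, func(m)
    return 0.5 * (a + b)

# first three nontrivial roots k_m L_x; they approach (2m+1) pi / 2
roots = [bisect(freq_eq, *ab) for ab in [(4.0, 5.0), (7.0, 8.0), (10.0, 11.5)]]
```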
\begin{figure}[h] \includegraphics[width=7cm]{modes.eps} \caption{Contour plots for the cavity deflection mode SA2 (the second symmetric/antisymmetric eigenmode) for the \{FFFF\} boundary conditions, calculated at the surface of the plate: (a) $W(x,y)$, (b) $U(x,y)$, and (c) $V(x,y)$. The amplitudes and lateral dimensions are given in arbitrary units.} \label{modes} \end{figure} An arbitrary vibration field of the cavity is written in terms of its deflection modes $\alpha$ as \begin{eqnarray} {\bf u}({\bf r},t) &=& \sum_{\alpha} \left[Q_{\alpha}(t) + Q^*_{\alpha}(t)\right] \nonumber \\ & & \times \left[ U_{\alpha}({\bf r}) \, \hat{\imath} + V_{\alpha}({\bf r}) \, \hat{\jmath} + W_{\alpha}({\bf r}) \, \hat{k} \right], \label{modo_geral} \end{eqnarray} together with the normal coordinates $Q_{\alpha}(t) = Q_{\alpha} \exp[-i\omega_\alpha t]$. In writing Eq. (\ref{modo_geral}), we have taken into account that the modes $\alpha$ are real. To provide the same level of description for the elastic and electronic degrees of freedom of the electromechanical nanostructure, we perform the canonical quantization of the vibration field given by Eq. (\ref{modo_geral}). As a result, we associate the classical field ${\bf u}({\bf r},t)$ with the quantum operator $\hat{{\bf u}}({\bf r},t)$, which must satisfy the equal-time commutation relation $[{\bf \hat{u}}_j({\bf r},t), {\bf \hat{\pi}}_j({\bf r'},t)] = i \hbar \delta({\bf r - r'})$, with the conjugate momentum operator ${\bf \hat{\pi}} ({\bf r},t) = m \partial_t{\bf\hat{u}}$. In particular, for the $\hat{k}$ component of the field we have \begin{eqnarray} [W(x,y),\pi_z(x',y')] = i \hbar \delta (x - x') \delta (y - y').
\label{comut_w} \end{eqnarray} But if we write ($\mathcal{X}_\alpha(t) \equiv \left[ \hat{Q}_\alpha(t) + \hat{Q}_\alpha^\dagger(t) \right]$) \begin{eqnarray} W(x,y,t) &=& \sum_\alpha \mathcal{X}_\alpha(t) W_\alpha, \nonumber \\ \pi_z(x,y,t) &=& \rho V \sum_\alpha (-i \omega_\alpha) \mathcal{X}_\alpha(t) W_\alpha = \sum_\alpha \mathcal{P}_\alpha(t) W_\alpha, \nonumber \\ \end{eqnarray} the commutation relation (\ref{comut_w}) yields \begin{equation} [W(x,y),\pi_z(x',y')] = \sum_{\alpha,\beta} [\mathcal{X}_\alpha, \mathcal{P}_\beta] W_\alpha(x,y) W_\beta(x',y'). \end{equation} Then, by requiring that $[\mathcal{X}_\alpha,\mathcal{P}_\beta] = i \hbar \delta_{\alpha, \beta}$ one can use Eq. (\ref{ortho}) to show that Eq. (\ref{comut_w}) is satisfied. Thus, $\mathcal{X}_\alpha$ and $\mathcal{P}_\alpha$ are canonically conjugate operators, satisfying $[\mathcal{X}_\alpha,\mathcal{X}_\beta] = [\mathcal{P}_\alpha,\mathcal{P}_\beta] = 0$ as well. The normal coordinates are now the quantum mechanical operators $\hat{Q}_\alpha(t)$ and $\hat{Q}_\alpha^\dagger(t)$, which are used to define the dimensionless ladder operators \begin{equation} a^\dagger_\alpha = \sqrt{\frac{2V \rho \omega_\alpha}{\hbar}} \hat{Q}^\dagger_\alpha, \qquad a_\alpha = \sqrt{\frac{2V \rho \omega_\alpha}{\hbar}} \hat{Q}_\alpha. \end{equation} From the previous commutation relations it can be shown that $[a_\alpha (t),a_\beta^\dagger (t)] = \delta_{\alpha, \beta}$ and $[a_\alpha,a_{\alpha'}]=[a^\dagger_\alpha,a^\dagger_{\alpha'}]=0$. Therefore, $a^\dagger_\alpha$ and $a_\alpha$ are the creation and annihilation operators of the phonon deflection modes $\alpha$, and the vibration field operator ${\bf \hat{u}}({\bf r},t)$ is \begin{equation} \hat{{\bf u}} = \sum_{\alpha} \frac{[a_{\alpha}(t) + a^\dag_{\alpha}(t)]} {\sqrt{2\ V\rho\ \omega_{\alpha}/\hbar}} \left[U_{\alpha}({\bf r})\, \hat{\imath} + V_{\alpha}({\bf r})\, \hat{\jmath} + W_{\alpha}({\bf r})\, \hat{k} \right].
\label{modo_quantico} \end{equation} \subsection{Electrons} \label{el-gas} We consider the free electron approximation and assume the electrons to be completely confined to a narrow quantum dot, forming a quasi-2DEG of thickness $d$. The normalized electronic eigenstates are written as $\phi_{\kappa,\gamma}({\bf r}) = \varphi_{\kappa}(x,y) \sqrt{2/d} \sin[\gamma \pi z/d]$. Due to the quasi-2D assumption, the electrons always occupy the lowest state in the $z$ direction, so in our calculations we set the quantum number $\gamma=1$. For the rectangular QD of sides $L_x$ and $L_y$, we have \begin{equation} \varphi_{\kappa}(x,y) = \frac{2}{\sqrt{L_x L_y}} \sin\left[\frac{p \pi x}{L_x}\right] \sin\left[\frac{q \pi y}{L_y}\right], \end{equation} with $\kappa \equiv (p, q)$ and $p, q$ assuming positive integer values. The corresponding eigenenergies are \begin{equation} E_{\kappa,\gamma=1} = \frac{\pi^2 \hbar^2}{2 m_e} \left( \frac{p^2}{L_x^2} + \frac{q^2}{L_y^2} + \frac{1}{d^2} \right), \label{E_square} \end{equation} where $m_e$ is the effective electron mass in the QD. For the circular QD of radius $R$, we have \begin{equation} \varphi_{\kappa}(r,\theta) = \frac{J_{|l|} \left(\alpha_{l\nu} \frac{r}{R}\right) \exp[i \, l \, \theta]} {\sqrt{\pi}R\,|J_{|l|+1}(\alpha_{l\nu})|}, \label{pureelectron} \end{equation} with $\kappa \equiv (l, \nu)$, $l = 0, \pm 1, \pm 2,\ldots$ and $\alpha_{l \nu}$ the $\nu$-th root of the Bessel function of order $|l|$. Here, the eigenenergies are \begin{equation} E_{\kappa,\gamma=1} = \frac{\hbar^2}{2 m_e} \left(\frac{\alpha_{l \nu}^2}{R^2} + \frac{\pi^2}{d^2}\right). \label{E_circle} \end{equation} \subsection{Electron-phonon interactions} The electrons interact with the lattice vibrations through different mechanisms, depending on the characteristics of the solid and the temperature.
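Evaluating the circular-dot energies of Eq. (\ref{E_circle}) requires the Bessel zeros $\alpha_{l\nu}$. A self-contained way to obtain them (an illustrative sketch, not the production code) is to evaluate $J_l$ from its integral representation and bisect; the geometry quoted in the Results section ($R = 450$ nm, $d = \delta/5 = 8$ nm) is used for the scaled ground-state energy:

```python
import math

def bessel_J(l, x, steps=2000):
    """J_l(x) via Simpson's rule on the integral representation
    J_l(x) = (1/pi) int_0^pi cos(l*t - x*sin(t)) dt."""
    h = math.pi / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        w = 1.0 if i in (0, steps) else (4.0 if i % 2 else 2.0)
        total += w * math.cos(l * t - x * math.sin(t))
    return total * h / (3.0 * math.pi)

def bessel_zero(l, nu):
    """nu-th positive zero alpha_{l,nu} of J_l (coarse scan + bisection)."""
    x, found = 0.05, 0
    while True:
        if bessel_J(l, x) * bessel_J(l, x + 0.1) < 0.0:
            found += 1
            if found == nu:
                a, b = x, x + 0.1
                for _ in range(50):
                    m = 0.5 * (a + b)
                    if bessel_J(l, a) * bessel_J(l, m) <= 0.0:
                        b = m
                    else:
                        a = m
                return 0.5 * (a + b)
        x += 0.1

alpha_01 = bessel_zero(0, 1)   # ~ 2.40483
# ground-state energy in units of hbar^2/(2 m_e R^2), for R = 450 nm, d = 8 nm:
E0_scaled = alpha_01 ** 2 + math.pi ** 2 * (450.0 / 8.0) ** 2
```

The $\pi^2 (R/d)^2$ term dominates the scaled energy, reflecting the strong $z$ confinement that justifies freezing $\gamma = 1$.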
In addition, from the theoretical point of view, several approaches can be used to describe the coupling between electrons and phonons. Next we formulate the electron-phonon interaction terms that are most relevant to our problem. \subsubsection{Deformation potential - DP} At low temperatures only the long wavelength acoustic modes are populated and the semiconductor can be described by the continuum approximation. As a result of the cavity deflections, local volume changes take place, thus modifying the lattice constant and the electronic energy bands. To first order, such volume changes are due to longitudinal (compressional) acoustic modes, and the scattering potential acting on the electrons is proportional to $\hat{\Delta}({\bf r}) = \nabla \cdot \hat{{\bf u}}({\bf r})$. Therefore, the Hamiltonian for the DP interaction is \begin{eqnarray} \hat{H}_{DP} &=& C_{DP} \int_{\mathcal{D}} d{\bf r} \ \Psi^\dagger({\bf r}) \nabla \cdot \hat{{\bf u}}({\bf r}) \Psi({\bf r}) \nonumber \\ &=& C_{DP} \sqrt{\frac{\hbar}{2 V \rho}} \sum_{\alpha \, \kappa'' \kappa'} \frac{V_{\alpha \, \kappa'' \, \kappa'}^{DP}}{\sqrt{\omega_{\alpha}}} \, b^\dag_{\kappa''} \left[a_{\alpha}^\dag + a_{\alpha}\right] b_{\kappa'}, \nonumber \\ \label{DP} \end{eqnarray} with $C_{DP}$ denoting the deformation potential constant for the material. $\Psi ({\bf r}) = \sum_{\kappa} b_{\kappa} \phi_{\kappa}({\bf r})$ and $\Psi^{\dagger} ({\bf r}) = \sum_{\kappa} b^{\dagger}_{\kappa} \phi_{\kappa}^{*}({\bf r})$ are the electron field operators, and $b_\kappa$ ($b^\dagger_\kappa$) is the fermionic annihilation (creation) operator satisfying the usual anti-commutation relations. The integral is performed over the volume $\mathcal{D}$ comprising the 2DEG.
In the above equation, $V_{\alpha \, \kappa'' \, \kappa'}^{DP}$ is given by \begin{equation} V_{\alpha \, \kappa'' \, \kappa'}^{DP} = \int_{\mathcal{D}} d{\bf r} \ \phi^*_{\kappa''} \nabla \cdot \left(U_{\alpha} \hat{\imath} + V_{\alpha} \hat{\jmath} + W_{\alpha} \hat{k} \right) \phi_{\kappa'}. \end{equation} Since \begin{equation} \nabla \cdot \left(U_{\alpha} \hat{\imath} + V_{\alpha} \hat{\jmath} + W_{\alpha} \hat{k} \right) = -z \left(\frac{\partial^2 W_\alpha}{\partial x^2} + \frac{\partial^2 W_\alpha}{\partial y^2}\right), \end{equation} we have from Eq. (\ref{w(x,y)}) that \begin{equation} V_{\alpha \, \kappa'' \, \kappa'}^{DP} = - \sum_{mn} A^{\alpha}_{mn} \int_{\mathcal{D}} d{\bf r} \ z \, \phi^*_{\kappa''} \left( X''_m Y_n + X_m Y''_n \right) \phi_{\kappa'}. \label{F} \end{equation} \subsubsection{Piezoelectric potential - PZ} In piezoelectric materials, the acoustic lattice vibrations produce polarization fields that act back on the vibrational modes. The result is a set of coupled equations for the acoustic and polarization fields. However, taking into account the difference between the sound and light velocities, such equations can be decoupled, yielding the following electric field in the semiconductor \cite{Auld} \begin{equation} {\bf E} = -2 \frac{\varrho_{14}}{\epsilon} (\varepsilon_{yz}, \varepsilon_{xz}, \varepsilon_{xy}). \label{piezo_field} \end{equation} Here $\varrho_{14}$ and $\epsilon$ are, respectively, elements of the piezoelectric and dielectric tensors. Expression (\ref{piezo_field}) is obtained by taking into account the cubic symmetry of the lattice. Furthermore, from the CPT approximation, the strain tensor elements $\varepsilon_{xz}=\varepsilon_{yz}=0$ and \begin{eqnarray} \varepsilon_{xy} = \frac{1}{2} \left(\frac{\partial U}{\partial y} + \frac{\partial V}{\partial x} \right) = -z \frac{\partial^2 W}{\partial x \partial y}.
\end{eqnarray} Therefore, for a given transverse mode $\alpha$, the resulting electric field is perpendicular to the plane of the cavity, \begin{eqnarray} {\bf E}_{pz} = 2 \Lambda(z) \ \frac{\varrho_{14}}{\epsilon} \ \frac{\partial^2 W_{\alpha}}{\partial x \partial y} \hat{k}, \label{field_pz} \end{eqnarray} with $\Lambda(z) = d (2 d - z)/2$. The potential energy of the electrons can be written as $-e \int {\bf E}_{pz} \cdot d{\bf l}$, leading to \begin{eqnarray} 2\frac{e \varrho_{14}}{\epsilon} \Lambda(z) \sum_{\alpha} \sqrt{\frac{\hbar}{2V\rho\omega_{\alpha}}} \, [a_{\alpha} + a^\dag_{\alpha}] \frac{\partial^2 W_{\alpha}}{\partial x \partial y}. \end{eqnarray} Finally, we write down the PZ electron-phonon Hamiltonian as ($C_{PZ} = 2 e \varrho_{14}/\epsilon$) \begin{equation} \hat{H}_{PZ} = C_{PZ} \sqrt{\frac{\hbar}{2 V \rho}} \sum_{\alpha \, \kappa'' \, \kappa'} \frac{V_{\alpha \, \kappa'' \, \kappa'}^{PZ}} {\sqrt{\omega_{\alpha}}} \, b^\dag_{\kappa''} \left[a_{\alpha}^\dag + a_{\alpha}\right] b_{\kappa'}, \label{G} \end{equation} with \begin{equation} V_{\alpha \, \kappa'' \, \kappa'}^{PZ} = \sum_{mn} A^{\alpha}_{mn} \int_{\mathcal{D}} d{\bf r} \, \Lambda(z) \, \phi^*_{\kappa''} X'_m Y'_n \, \phi_{\kappa'}. \label{GV} \end{equation} \subsection{The full Hamiltonian} The total Hamiltonian of the system, when both the DP and PZ interactions are included, is \begin{eqnarray} \hat{H} &=& \hat{H}_{el} + \hat{H}_{ph} + \hat{H}_{el-ph} \nonumber \\ &=& \sum_\kappa E_\kappa b^\dagger_\kappa b_\kappa + \sum_\alpha \left( \hat{n}_\alpha +\frac12 \right) \hbar \omega_\alpha + \hat{H}_{DP} + \hat{H}_{PZ}. \nonumber \\ \end{eqnarray} The basis in which $\hat{H}$ is represented is constructed as the product of the one-electron state $|\phi_\kappa \rangle$ with the multi-phonon state $|n_1,n_2,n_3,\ldots,n_N \rangle$. Here, $n_\alpha = 0,1, \ldots, n$ denotes the number of phonon quanta in mode $\alpha$, with maximum population set by $n$.
A total of $N$ distinct phonon modes are considered. The values of $n$ and $N$ are chosen to be compatible with the thermodynamics of the system. At low temperatures (below 1 K) the average phonon occupation number, and hence $n$, is of the order of a few tens. On the other hand, $N$ ranges from $\sim 10$, at the lowest temperatures, up to $\sim 30$ at the highest ones. Hence, in the numerical calculations we set $n = 20$ and $N=21$. It has been verified, however, that varying $n$ and $N$ over a considerably wide range does not alter our main results. A typical basis vector is written, for a given ${\bf n} \equiv (n_1,n_2,\ldots,n_N)$, as \begin{equation} \left|\kappa; {\bf n} \right> = |\phi_{\kappa} \rangle \otimes \prod_{\alpha=1}^N \frac{1}{\sqrt{n_{\alpha}!}} \, (a_{\alpha}^\dag)^{n_{\alpha}} \, |0\rangle. \label{base} \end{equation} For the diagonalization procedure, we energy-sort the basis set up to a maximum energy value and then diagonalize $\hat{H}$ in this truncated set of vectors. The energy of each basis state is given by the sum of the electron and phonon energies, $E_{\kappa \, {\bf n}} = E_\kappa + \sum_{\alpha} (n_{\alpha} + 1/2)\hbar \omega_\alpha$, where $E_\kappa$ comes from either Eq. (\ref{E_square}) or Eq. (\ref{E_circle}), depending on the specific geometry of the 2DEG. For the formation of the original basis set $10^5$ levels were taken into account; however, the diagonalization of $\hat{H}$ is performed in a truncated basis that varied from $3 \times 10^3$ to $15 \times 10^3$ basis states. It is important to notice that the ratio of distinct phonon states $|{\bf n}\rangle$ to electron states $|\phi_{\kappa} \rangle$ in the truncated basis ranges from several tens to about a hundred, depending on the details of the NEMS. That is, the number of phonon states taking part in the calculations is much larger.
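The energy-sort-and-truncate construction of the product basis can be sketched as follows (a toy version with illustrative electron and phonon energies in arbitrary units, not the actual QD/plate spectra):

```python
import itertools

E_el = [1.0, 2.3, 3.1, 4.0]   # a few electron energies E_kappa
hw = [0.7, 1.1, 1.6]          # N = 3 phonon mode energies hbar*omega_alpha
n_max = 4                     # maximum occupation per mode

# Enumerate product states |kappa; n> with their total energies
# E = E_kappa + sum_alpha (n_alpha + 1/2) hbar*omega_alpha.
states = []
for kappa, E in enumerate(E_el):
    for occ in itertools.product(range(n_max + 1), repeat=len(hw)):
        E_tot = E + sum((n + 0.5) * w for n, w in zip(occ, hw))
        states.append((E_tot, kappa, occ))

states.sort()                 # energy-sort the product basis ...
basis = states[:50]           # ... and keep only the lowest states
```

In the full calculation the same idea is applied with $N = 21$ modes and $n = 20$, except that the $\sim 10^5$-state pool is generated without exhaustive enumeration and then truncated to $3\times 10^3$ to $15\times 10^3$ states.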
\section{Energy level statistics} To determine whether the electron-phonon interaction generates chaos in the NEMS, we consider the standard approach of looking into the statistical properties of the system eigenenergies \cite{Stockmann, Gutzwiller}. For completeness, here we give a brief summary of the main ideas. A general technical overview can be found in Ref. [\onlinecite{Guhr}], whereas a very instructive discussion is presented for a particular case in Ref. [\onlinecite{Miltenburg}]. Consider the ordered sequence $\{E_1, E_2, \ldots\}$ of eigenenergies of an arbitrary quantum mechanical problem. The cumulative spectral function, counting the number of levels with energy up to $E$, is written as \begin{equation} \eta(E) = \sum_n \Theta(E-E_n). \end{equation} In principle, we can always separate $\eta(E)$ into smooth (average) and oscillatory (fluctuating) parts, so that \begin{equation} \eta(E) = \eta_{\mbox{\scriptsize smooth}}(E) + \eta_{\mbox{\scriptsize osc}}(E). \end{equation} The smooth part is given by the cumulative mean level density \cite{Guhr}. To make the analysis independent of the particular scales of the spectrum, one can use the so-called ``unfolding'' procedure \cite{Gutzwiller}. It allows the comparison of the results obtained from any specific system with the predictions of the RMT \cite{Mehta}. The unfolding is done basically by mapping the sequence $\{E_1,E_2,\ldots\}$ onto the numbers $\{s_1,s_2, \ldots\}$, where \begin{equation} s_n \equiv \eta_{\mbox{\scriptsize smooth}}(E_n). \end{equation} In the new variables, the cumulative spectral function simply reads $\tilde{\eta}(s) = s + \tilde{\eta}_{\mbox{\scriptsize osc}}(s)$, so that the smooth part of $\tilde{\eta}$ has unit derivative. Hence, for our statistical studies we consider the resulting sets $\{s_1,s_2, \ldots\}$.
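The unfolding map can be demonstrated on a synthetic spectrum whose smooth counting function is known in closed form (an illustrative assumption; for the NEMS spectra the smooth part must be extracted numerically): planting levels with unit mean spacing in a variable $s$ and folding them to $E = s^2$ gives $\eta_{\rm smooth}(E) = \sqrt{E}$, so unfolding is simply $s_n = \sqrt{E_n}$.

```python
import math, random

random.seed(1)
s, levels = 0.0, []
for _ in range(4000):
    s += random.expovariate(1.0)   # Poissonian positions with unit mean spacing
    levels.append(s * s)           # fold: E = s^2, so eta_smooth(E) = sqrt(E)

s_unfolded = [math.sqrt(E) for E in levels]
spacings = [b - a for a, b in zip(s_unfolded, s_unfolded[1:])]
mean_spacing = sum(spacings) / len(spacings)   # ~ 1 after unfolding
```

The raw spectrum has a strongly energy-dependent mean spacing ($\sim 2\sqrt{E}$), while the unfolded one has unit mean spacing, which is exactly the property needed to compare with the universal RMT predictions.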
In this work we calculate two of the most widely used spectral distributions \cite{Vessen-Xavier}: the nearest-neighbor spacing distribution, $P(s)$, and the spectral rigidity $\overline{\Delta}_3(l)$. The $P(s)$ distribution probes the short scale fluctuations of the spectrum. It corresponds to the probability density of two neighboring unfolded levels $s_n$ and $s_{n+1}$ being a distance $s$ apart. $\overline{\Delta}_3(l)$ is an example of a distribution that quantifies the long scale correlations of the energy spectrum. It measures the deviation of the cumulative number of states (within an unfolded energy interval $l$) from a straight line. Formally, \begin{equation} \overline{\Delta}_3(l) = \frac{1}{l} \left \langle \mbox{min}_{\{A,B\}} \int_{s_0}^{s_0 + l} ds \left[\tilde{\eta}(s) - A s -B\right]^2 \right \rangle \ , \end{equation} where $\langle \cdot \rangle$ denotes the averaging over different possible positions $s_0$ along the $s$ axis. The parameters $A$ and $B$ are chosen to minimize the integral of $\left[\tilde{\eta}(s) - A s -B\right]^2$ in each corresponding interval. The RMT predicts three different classes of Gaussian ensembles \cite{Mehta,Guhr}, having distinct $P(s)$ and $\overline{\Delta}_3(l)$: the {\it Gaussian Orthogonal Ensemble} (GOE), the {\it Gaussian Unitary Ensemble} (GUE), and the {\it Gaussian Symplectic Ensemble} (GSE), constituted by matrices whose elements are random numbers obeying certain Gaussian distributions \cite{Stockmann,Mehta}. Furthermore, these ensembles are invariant under orthogonal, unitary and symplectic transformations, respectively. Bohigas et al. \cite{Bohigas} conjectured that the spectral fluctuations of any quantum chaotic system should have the same features as one of these three cases. This proposal has been firmly established by theoretical and experimental examinations \cite{Stockmann,Gutzwiller,Guhr}.
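Both statistics admit compact numerical sanity checks (illustrative sketches, not the analysis code of this work). The spectral rigidity below is evaluated on a picket-fence spectrum, for which $\overline{\Delta}_3$ saturates at $1/12$; and since the GOE Wigner surmise $P(s) = (\pi/2)\, s\, e^{-\pi s^2/4}$ is exact for $2\times 2$ real symmetric Gaussian matrices, GOE-type spacings can be sampled directly:

```python
import math, random

def delta3(s_levels, l, s0, grid=400):
    """Mean squared deviation of the staircase eta(s) = #{s_n <= s}
    from its least-squares line on the window [s0, s0 + l]."""
    xs = [s0 + (i + 0.5) * l / grid for i in range(grid)]
    ys = [float(sum(1 for sn in s_levels if sn <= x)) for x in xs]
    n, sx, sy = float(grid), sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    A = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # least-squares slope
    B = (sy - A * sx) / n                           # least-squares intercept
    return sum((y - A * x - B) ** 2 for x, y in zip(xs, ys)) / n

# Picket-fence (perfectly rigid) spectrum: Delta_3 saturates at 1/12.
fence = [i + 0.5 for i in range(200)]
d3 = sum(delta3(fence, 20.0, s0) for s0 in (30.0, 60.0, 90.0)) / 3.0

# 2x2 GOE spacings: s = sqrt((a - d)^2 + 4 b^2) with Gaussian entries.
random.seed(7)
samples = []
for _ in range(20000):
    a, d = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    b = random.gauss(0.0, 1.0 / math.sqrt(2.0))     # off-diagonal variance 1/2
    samples.append(math.sqrt((a - d) ** 2 + 4.0 * b * b))
mean_s = sum(samples) / len(samples)
s_norm = [x / mean_s for x in samples]              # unit mean spacing
frac_small = sum(1 for x in s_norm if x < 0.1) / len(s_norm)  # level repulsion
```

The picket-fence value $\overline{\Delta}_3 \approx 1/12$ and the strong suppression of small spacings (level repulsion, $P(s) \to 0$ as $s \to 0$) are the two signatures against which the NEMS spectra are benchmarked.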
When spin is not involved, it is expected that the spectral statistics of a chaotic system are similar to those obtained from the GOE (GUE) if it is (is not) time reversal invariant (TRI). However, there are exceptions to this rule, consisting of a special class of TRI systems with point group irreducible representations, which does exhibit the GUE statistics \cite{Leyvraz,Keating}. Until recently \cite{rego}, the only family of systems known to show this anomalous behavior was formed by billiards having threefold symmetry, implemented experimentally in classical microwave cavities \cite{Dembowski1,Dembowski2,Schafer}. For regular (integrable) systems the resulting statistics follow the Poisson $P(s) = \exp[-s]$ and linear $\overline{\Delta}_3(l) = l/15$ distributions \cite{Gutzwiller}. For GOE and GUE, $P(s)$ is described with high accuracy by the Wigner distributions \cite{Guhr} \begin{eqnarray} P(s) &=& \frac{\pi}{2} s \exp[-\frac{\pi}{4} s^2] \ \ \ \qquad (\mbox{GOE}), \nonumber \\ P(s) &=& \frac{32}{\pi^2} s^2 \exp[-\frac{4}{\pi} s^2] \qquad \, (\mbox{GUE}). \end{eqnarray} Finally, $\overline{\Delta}_3(l)$ can be approximated by the expressions \begin{eqnarray} \overline{\Delta}_3(l) &=& \frac{1}{\pi^2} \left(\ln[2 \pi l] + \gamma -\frac{5}{4} -\frac{\pi^2}{8} \right) \ \ \ (\mbox{GOE}), \nonumber \\ \overline{\Delta}_3(l) &=& \frac{1}{2\pi^2} \left( \ln[2\pi l] + \gamma -\frac{5}{4} \right) \qquad \ \ \ (\mbox{GUE}). \end{eqnarray} Here, $\gamma = 0.5772\ldots$ is the Euler constant. To characterize our nanostructures, we compare the numerically calculated distributions $P(s)$ and $\overline{\Delta}_3(l)$ with the corresponding analytical expressions for the regular and chaotic cases. Very good statistics are obtained using from 2000 to 2500 energy levels. \section{Results} \label{result} \begin{figure}[h] \includegraphics[width=6.5cm,height=6.5cm]{abcde.eps} \caption{The distinct positions, A, B, C, D, and E (on the plate), used as centers for the quantum dot.
The dashed lines represent the possible symmetry axes for the plate phonon modes $\alpha$, depending on the boundary conditions \{$X_1 X_2 Y_1 Y_2$\}.} \label{abcde} \end{figure} We have applied the previous analysis to the eigenenergies of our suspended NEMS, considering a wide range of material and geometrical parameters, and found that chaos emerges in the system for a sufficiently strong electron-phonon (el-ph) coupling. Although the phenomenon proved to be quite robust with respect to variations of physical dimensions, boundary conditions and basis size, the chaotic features were observed to depend on some material parameters, such as the electronic effective mass and the el-ph coupling constants. The materials used to model the NEMS comprise an AlAs dielectric phonon cavity and an Al$_{0.5}$Ga$_{0.5}$As quantum dot, where the 2DEG is formed. This choice takes advantage of the very small lattice parameter mismatch at the interface as well as the large electronic effective mass of the $X$ valley in AlGaAs \cite{Adachi}. In our investigation we varied the DP and PZ interaction strengths (by means of the multiplicative factors $\beta_{DP}$ and $\beta_{PZ}$), the stiffness tensor elements $c_{11}, c_{12}$ and $c_{44}$, the mass density of the cavity and the in-plane electron effective mass. As for the geometrical parameters, we also varied the size and aspect ratio of the dielectric plate, the area of the QD, and the thicknesses of the plate ($\delta$) and of the 2DEG ($d$). More interestingly, however, we considered different positions for the center of the QD (shown in Fig. \ref{abcde}), which produce distinct chaotic behaviors. The most representative results will be presented throughout this section. A detailed analysis is found in Section \ref{interpreting}. \subsection{Circular 2DEG} \label{circular} Here we present a detailed analysis of the spectral statistics of the NEMS containing a circular quantum dot.
Unless mentioned otherwise, the system comprises a QD of radius $R = 450$ nm and thickness $d = \delta/5$ on the surface of a square phonon cavity of sides $L = 1\ \mu$m and thickness $\delta = 40$ nm. Provided that a sufficiently strong electron-phonon coupling is assured, regular or chaotic spectral features will emerge depending on the interplay between the symmetries of the cavity phonon modes and the electronic wavefunctions. In this respect, the boundary conditions of the phonon cavity ({\it i.e.}, the dielectric plate) and the localization of the circular 2DEG play a crucial role. In order to systematically investigate this effect we make use of the scheme presented in Fig. \ref{abcde}. Slight displacements of the QD out of the center of the plate suffice to generate different spectral features. The relative coordinates ($-0.5 < x, y < 0.5$) used in the calculations are: A = (0,0), B = (0.05,0.05), C = (0.05,0.025), D = (0.05,0), and E = (0,-0.05). Figure \ref{statistics} shows $P(s)$ and $\overline{\Delta}_3(l)$ for cases A, B, C, and D in the \{FFFF\} phonon cavity, taking into account only the DP interaction, with $\beta_{DP}=10$. For A, the spectral statistics indicates regular dynamics, but for B the occurrence of quantum chaos is clear and the level distributions are well described by the predictions of GOE random matrices. The same occurs for D. The most interesting case, however, is C, for which the statistics belongs to the GUE class, although the system is time-reversal invariant. The same behavior is obtained for a phonon cavity with \{CCCC\} boundary conditions \cite{rego}. The reasons for obtaining GUE statistics in this time-reversal invariant system will be discussed in Section \ref{interpreting}. \begin{figure}[h] \includegraphics[width=8.5cm]{figure4.eps} \caption{ Energy-level statistics for the nanostructure with only the DP interaction, for $\beta_{DP}=10$ and four different positions (A, B, C, and D) for the center of the QD (refer to Fig. \ref{abcde}).
The cavity boundary conditions are \{FFFF\}. The symbols $+$ represent the numerically calculated results. The curves indicate the expected behavior for regular (solid), chaotic GOE-type (dashed), and chaotic GUE-type (dot-dashed) systems.} \label{statistics} \end{figure} \begin{figure}[] \includegraphics[width=8.5cm]{figure5.eps} \caption{ $\overline{\Delta}_3(l) \times l$ for different positions of the center of the circular QD. The cavity boundary conditions are \{CCCC\} and $\beta_{DP}=10$.} \label{figDP} \end{figure} It is also instructive to look at the evolution of the spectral statistics as a function of the position of the QD. The effect is illustrated by the $\overline{\Delta}_3(l)$ statistics in Fig. \ref{figDP}, for the \{CCCC\} plate and the parameters of Fig. \ref{statistics}. Moving away from A along the S$_x$ axis, the statistics evolves from regular to GOE at (0.02,0), passing through a mixed behavior at the locus (0.005,0). Proceeding perpendicularly to the S$_x$ axis, the statistics evolves from GOE towards GUE, (0.02,0) $\rightarrow$ (0.02,0.005) $\rightarrow$ (0.02,0.01) $\equiv$ C, going back to GOE when the S$_{x-y}$ axis is reached at (0.02,0.02). \begin{table*} \label{tablecirc} \caption{Symmetry axes and spectral statistics for points A, B, C, D, and E, as defined in the text, for a circular 2DEG.
Here, 2 GOE means that the statistics can be described by the uncorrelated superposition of two GOE distributions.} \begin{tabular}{c c c c c c c} \hline \hline \hspace{0.4 cm} Boundary Conditions \hspace{0.4 cm} & \hspace{0.4 cm} Symmetry axes \hspace{0.4 cm} & \hspace{0.4 cm} A \hspace{0.4 cm} & \hspace{0.4 cm} B \hspace{0.4 cm} & \hspace{0.4cm} C \hspace{0.4 cm} & \hspace{0.4 cm} D \hspace{0.4 cm} & \hspace{0.4 cm} E \hspace{0.4cm} \\ \hspace{0.4 cm} & \hspace{0.4 cm} for the phonon modes \hspace{0.4 cm} & \hspace{0.4 cm} & \hspace{0.4 cm} & \hspace{0.4 cm} & \hspace{0.4 cm} & \hspace{0.4cm} \\ \hline \{FFFF\} & S$_x$, S$_y$, S$_{x-y}$, S$_{x+y}$ & Regular & GOE & GUE & GOE & GOE \\ \{CCCC\} & S$_x$, S$_y$, S$_{x-y}$, S$_{x+y}$ & Regular & GOE & GUE & GOE & GOE \\ \{CCFF\} & S$_x$, S$_y$ & 2 GOE & GUE & GUE & GOE & GOE \\ \{CFCF\} & S$_{x-y}$ & GOE & GOE & GUE & GUE & GUE \\ \{CCCF\} & S$_y$ & GOE & GUE & GUE & GUE & GOE \\ \{CFFF\} & S$_x$ & GOE & GUE & GUE & GOE & GUE \\ \hline \hline \end{tabular} \end{table*} By examining several different scenarios we were able to classify the general behavior of our system. Table I summarizes the results obtained for the center of the circular 2DEG located at points A, B, C, D, and E with either the DP or the PZ interaction taken into account. Furthermore, we have considered a comprehensive set of boundary conditions, which are representative of all possible combinations of the Dirichlet and Neumann conditions for the phonon cavity, thus producing distinct symmetry axes for the phonon modes: \{CCCC\}, \{FFFF\}, \{CCFF\}, \{CFCF\}, \{CCCF\}, and \{CFFF\}. From Table I, we see that the chaotic behavior is determined by the overall (or global) symmetries of the NEMS, that is, the symmetries that result from the joint combination of the boundary conditions of the phonon cavity and the position of the QD.
For instance, if the boundary conditions are \{CCCC\} and the QD is located at D, then S$_{y}$, S$_{x-y}$ and S$_{x+y}$ are not symmetry axes for the coupled electromechanical system. According to Table I, if the present NEMS has: (i) four symmetry axes (position A for the \{CCCC\} and \{FFFF\} plates), the statistics indicates a regular (integrable) problem; (ii) two symmetry axes (position A for \{CCFF\}), then the statistics corresponds to the uncorrelated superposition of two distributions of the GOE type \cite{twowigner}; (iii) one symmetry axis, the results are those of GOE; and finally (iv) no symmetry axes at all, the spectrum exhibits the GUE statistics. The boundary conditions determine not only the position of the center of the 2DEG at which regular, GOE or GUE statistics are obtained, but also the intensity of the quantum chaos. This happens because the electron-phonon coupling depends on the phonon energies, which vary according to the boundary conditions. The higher the phonon energy, the stronger the electron-phonon interaction, thus leading to spectral fluctuations that are more faithful to the typical chaotic features. The energies of the phonon modes decrease according to the following sequence: \{FFFF\}, \{CCFF\}, \{CCCC\}, \{CFFF\}, \{CCCF\}, and \{CFCF\}. The energies for the first five boundary conditions are similar, and quantum chaos can be observed for essentially the same values of the interaction strength $\beta$. For the \{CFCF\} case, however, the phonon energies can be one order of magnitude smaller than those for the other cases, requiring larger values for the parameters $\beta_{DP}$ and $\beta_{PZ}$ (approximately 3 times larger). \begin{figure}[h] \includegraphics[width=6cm]{figure6.eps} \caption{The calculated spectral rigidity for various DP coupling constants: $\beta_{DP}$=1 (open circle), 3 (filled circle), 5 ($\times$) and 10 ($+$). The system corresponds to the \{CCCC\} plate with the QD located at position C.
The curves represent the regular (solid), GOE (dashed) and GUE (dot-dashed) cases.} \label{beta} \end{figure} It is, however, important to notice that, regardless of the geometry adopted, the nanostructure shows a regular spectrum for the {\it bare} parameters of the reference materials. We present in Fig. \ref{beta} the dependence of the spectral rigidity $\overline{\Delta}_3(l)$ on the electron-phonon coupling strength. For the locus C of the \{CCCC\} phonon cavity, we take into account only the DP interaction, with $\beta_{DP}$ = 1, 3, 5, and 10. As $\beta$ increases, the calculated statistics gradually converges to the GUE prediction. Note that the numerical calculations are never well fitted by the GOE distribution. The inclusion of more basis states does not alter the observed results. At this point we observe that the strong el-ph coupling regime ($\beta>3$) can be achieved by using different materials. For instance, aluminum nitride (AlN) is a strong piezoelectric semiconductor, with $\varrho_{33}$=1.5\ C/m$^2$, that is currently being used to produce nanomechanical resonators \cite{AlN}. For comparison, the piezoelectric constant of GaAs is $\varrho_{14}$=0.16\ C/m$^2$. Next, we summarize the effects of the geometrical and material parameters on the chaotic behavior. Irrespective of the boundary conditions, when the QD radius $R$ decreases to less than one third of the plate side $L$, the system starts to become regular and, for about $L/R \approx 5$, the nanostructure presents no clear signs of chaos in its spectrum. This is illustrated in the top panel of Fig. \ref{parameters}, by the $\overline{\Delta}_3(l)$ statistics calculated for case A in the \{CFFF\} plate. On the other hand, an important physical parameter for the occurrence of chaos is the in-plane electron effective mass $m^*$. It is shown in the bottom panel of Fig. \ref{parameters} that chaos arises as $m^*/m_e$ is increased.
Indeed, for $m^* \gtrsim 0.6 \, m_e$ the system is clearly chaotic, becoming regular for $m^* \lesssim 0.2 \, m_e$. As before, similar results hold for other boundary conditions. A weak dependence on both the mass density and the values of the stiffness tensor elements $c_{12}$ and $c_{44}$ is also observed. In essence, lighter and softer materials favor the appearance of chaos. As for the size of the dielectric plate, chaos is favored by small and thin plates. The former should be expected because the phonon energies increase as the area of the plate decreases. However, structures much smaller than the one considered here do not present a significantly higher tendency to chaotic behavior. \begin{figure}[] \includegraphics[width=6cm]{figure7.eps} \caption{Top panel: $\overline{\Delta}_3(l)$ statistics at point A of the \{CFFF\} plate for various radii of the QD: $R$ = 200 nm (filled dot), 300 nm (star) and 400 nm (open dot); Bottom panel: same statistics for various $m^*/m_e$ ratios: 0.2 (filled dot), 0.4 (star), 0.6 (open dot), and 0.8 (cross). $\beta_{DP}$=10.} \label{parameters} \end{figure} So far, we have considered only the DP or PZ interactions acting individually. When acting together, the spectral statistics of loci B and D changes from GOE to GUE. Fig. \ref{DP-PZ} demonstrates this effect by showing the $P(s)$ distribution for the circular QD at locus D in the \{CCCC\} plate, with both the DP and PZ interactions included and $\beta_{DP} = \beta_{PZ} = 10$. The agreement with the GUE statistics is excellent, in contrast to case D of Fig. \ref{statistics} (we recall that the \{CCCC\} and \{FFFF\} cases give similar results). Because the AlGaAs alloy is a weak piezoelectric material, the DP coupling shows a stronger effect in promoting the chaos, whereas the main action of the PZ interaction (in the presence of DP) is to break the system's overall symmetry. The explanation for such a change in the spectrum statistics is left to Section \ref{interpreting}.
\begin{figure} \includegraphics[width=6cm]{figure8.eps} \caption{The nearest neighbor spacing distribution for the \{CCCC\} plate with the circular QD located at D. Both the DP and PZ interactions are included with $\beta_{DP}=\beta_{PZ}=10$. The curves correspond to GOE (dashed) and GUE (dot-dashed).} \label{DP-PZ} \end{figure} \subsection{Rectangular 2DEG} \label{rectangular} \begin{table*} \label{tablesquare} \caption{The same as Table I, but for a rectangular quantum dot. } \begin{tabular}{c c c c c c c} \hline \hline \hspace{0.4 cm} Boundary Conditions \hspace{0.4 cm} & \hspace{0.4 cm} Symmetry axes \hspace{0.4 cm} & \hspace{0.4 cm} A \hspace{0.4 cm} & \hspace{0.4 cm} B \hspace{0.4 cm} & \hspace{0.4cm} C \hspace{0.4 cm} & \hspace{0.4 cm} D \hspace{0.4 cm} & \hspace{0.4 cm} E \hspace{0.4cm} \\ \hspace{0.4 cm} & \hspace{0.4 cm} for the phonon modes \hspace{0.4 cm} & \hspace{0.4cm} & \hspace{0.4 cm} & \hspace{0.4 cm} & \hspace{0.4cm} & \hspace{0.4 cm} \\ \hline \{FFFF\} & S$_x$, S$_y$, S$_{x-y}$, S$_{x+y}$ & Regular & 2 GOE & GOE & 2 GOE & 2 GOE \\ \{CCCC\} & S$_x$, S$_y$, S$_{x-y}$, S$_{x+y}$ & Regular & 2 GOE & GOE & 2 GOE & 2 GOE \\ \{CCFF\} & S$_x$, S$_y$ & Regular & GOE & GOE & 2 GOE & 2 GOE \\ \{CFCF\} & S$_{x-y}$ & 2 GOE & 2 GOE & GOE & GOE & GOE \\ \{CCCF\} & S$_y$ & 2 GOE & GOE & GOE & GOE & 2 GOE \\ \{CFFF\} & S$_x$ & 2 GOE & GOE & GOE & 2 GOE & GOE \\ \hline \hline \end{tabular} \end{table*} \begin{figure}[h] \includegraphics[width=4.5cm]{figure9.eps} \caption{$P(s)$ for the \{CCFF\} boundary conditions and the rectangular QD centered at positions A, B, C, D, and E. Only the DP interaction is considered ($\beta_{DP} = 10$). 
The dashed lines represent the uncorrelated superposition of two GOEs and the continuous lines the regular and GOE cases.} \label{square} \end{figure}
\begin{figure}[h] \includegraphics[width=7cm]{figure10.eps} \caption{$P(s)$ for the \{CFFF\} boundary conditions and both interaction potentials acting together ($\beta_{DP}=\beta_{PZ} = 10$). The rectangular QD is located at positions A (right panel) and D (left panel).} \label{DPZ} \end{figure}
Chaos is also observed in the calculations for a rectangular 2DEG interacting with the suspended phonon cavity. In this section we investigate such nanostructures following the procedures previously described. The obtained statistics are summarized in Table II for the interactions, either DP or PZ, taken into account individually. The calculations were made for the same phonon cavity considered throughout Section \ref{circular}, but now supporting a square QD with sides of 400 nm and the same thickness as in the circular case. Representative results of the $P(s)$ distribution are shown in Fig. \ref{square}, which illustrates the chaotic behavior of the \{CCFF\} phonon cavity through cases A to E. Here too, the global symmetries of the system depend on the combination of the symmetry axes of the plate with the position of the square QD. From extensive simulations we verified that (see Table II) whenever the full problem has only one global symmetry axis, either S$_x$, S$_y$, S$_{x-y}$ or S$_{x+y}$, the resulting spectral statistics corresponds to the superposition of two uncorrelated GOEs, contrasting with the case of the circular QD-NEMS (see Table I). If there are no overall symmetry axes, the statistics is of the GOE type. Finally, the spectrum is regular if there are at least two global symmetry axes, namely, position A for the \{CCFF\}, \{CCCC\} or \{FFFF\} boundary conditions.
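The 2-GOE entries of Table II refer to the spacing distribution of two uncorrelated GOE spectra superposed; since levels from different families do not repel each other, $P(0)$ is finite ($P(0)=1/2$ for two equal-weight GOEs). A quick numerical check of this signature, using generic GOE matrices rather than the actual cavity spectra:

```python
import numpy as np

rng = np.random.default_rng(1)

def goe_bulk(n):
    """Central half of the spectrum of a random real symmetric matrix."""
    a = rng.normal(size=(n, n))
    e = np.sort(np.linalg.eigvalsh((a + a.T) / 2))
    return e[n // 4 : 3 * n // 4]

def norm_spacings(levels):
    s = np.diff(np.sort(levels))
    return s / s.mean()

single, mixed = [], []
for _ in range(50):
    e1, e2 = goe_bulk(200), goe_bulk(200)
    single.append(norm_spacings(e1))
    mixed.append(norm_spacings(np.concatenate([e1, e2])))  # two uncorrelated GOEs
single, mixed = np.concatenate(single), np.concatenate(mixed)

# A single GOE shows level repulsion, P(0) = 0; the superposition of two
# uncorrelated GOE spectra does not, so small spacings are much more frequent.
print(np.mean(single < 0.2), np.mean(mixed < 0.2))
```

The merged spectrum shows a clear excess of small spacings relative to a single GOE, which is the fingerprint used to identify the 2-GOE cases in the figures.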
Despite the fact that circular and rectangular QD-NEMSs display chaotic features, it is important to emphasize that the GUE statistics never occurs for the rectangular 2DEG coupled to a rectangular phonon cavity. This is, therefore, an effect that results from the interplay between the cylindrical and rectangular symmetries in the circular QD-NEMS. We discuss this phenomenon in detail in the next Section. The dependence of the spectrum statistics on the geometrical and material parameters is, nonetheless, similar to that observed for the circular QD. Specifically, heavier in-plane electron effective masses, lighter and softer materials, and larger and thinner quantum dots favor the appearance of chaos. The \{CFCF\} plate requires stronger interaction strengths than the other boundary conditions to give rise to a chaotic spectrum. Finally, when the system has one global symmetry axis and both the DP and PZ interactions act simultaneously, the spectrum statistics changes from two uncorrelated GOEs to a single GOE. This effect is illustrated in Fig. \ref{DPZ} for the rectangular 2DEG centered at points A (right panel) and D (left panel) of the \{CFFF\} cavity. A comparison with Table II confirms the aforementioned transformation. On the other hand, when the nanostructure displays either regular or chaotic (GOE) distributions for one of the interactions, the inclusion of the other does not alter the original statistics, regardless of the strength of the interactions.
\section{Discussion} \label{interpreting}
It has been shown that distinct geometrical configurations of the QD-NEMS produce different energy-level statistics, in most cases typical of chaotic dynamics. To understand this effect we investigate the structure of the Hamiltonian matrix of our systems and explain the previous results in terms of the underlying symmetries of the problem. At the end, we explain the anomalous GUE statistics in the light of a more general analysis \cite{Leyvraz,Keating}.
\subsection{The Hamiltonian block structure due to the phonons} \label{va}
\begin{figure}[h] \includegraphics[width=8cm]{blockstrucutre.eps} \caption{ (a) A block of the interaction potential matrix representing fixed electron quantum numbers and all possible combinations for the phonon states. The small blocks $d$, shown in (b), are diagonal matrices whose diagonal elements $d_{i i}$ are all equal. The $t$'s forming the main diagonal in (a), depicted in (c), are also block tridiagonal matrices. Noticeably, $d$ and $t$ in (a) are different from the corresponding blocks in (c).} \label{blockstructure} \end{figure}
From Eqs. (\ref{DP}) and (\ref{G}), for the deformation and piezoelectric potentials, one verifies that the interaction mechanism is mediated by one-phonon processes. This becomes clear by writing the matrix elements in the basis (\ref{base}) [$\xi \equiv (\kappa,{\bf n})$]
\begin{eqnarray} h(\xi'';\xi') &=& C \sqrt{\frac{\hbar}{2 \rho V}} \sum_{\alpha} \frac{{\cal I}_{\alpha \, \kappa'' \kappa'}} {\sqrt{\omega_{\alpha}}} \, \nonumber \\ & & \times \left[\delta_{n_1'' \, n_1'} \, \ldots \, \delta_{n_{\alpha}'' \, n_{\alpha}'-1} \, \ldots \delta_{n_N'' \, n_N'} \right. \nonumber \\ & & + \left. \delta_{n_1'' \, n_1'} \, \ldots \, \delta_{n_{\alpha}'' \, n_{\alpha}'+1} \, \ldots \delta_{n_N'' \, n_N'} \right], \label{phononmatrix} \end{eqnarray}
where $C$ and ${\cal I}_{\alpha \, \kappa'' \kappa'}$ denote, respectively, the appropriate coupling constant and the overlap of the phonon mode $\alpha$ with the electronic eigenfunctions $\kappa''$ and $\kappa'$. Notice that the Kronecker $\delta$'s allow only single-phonon transitions. In this representation we have a very particular form for the interaction matrix. Consider the block ${\bf n} \times {\bf n}$, schematically depicted in Fig. \ref{blockstructure}(a).
It corresponds to fixed values of the electron quantum numbers $\kappa''$ and $\kappa'$, but encompasses all possible configurations of the phonon states. The structure of the block ${\bf n} \times {\bf n}$ is such that the outermost block spans all possible states for the phonon quantum number $n_1 = 0, 1, \ldots,n$. Then, the next internal block spans the quantum number $n_2$, followed by the inner blocks $n_3,n_4 \, \ldots \, n_N$. Due to the action of the Kronecker $\delta$'s in (\ref{phononmatrix}), the ${\bf n} \times {\bf n}$ matrix is block tridiagonal. Consequently, the small blocks $d$ (refer to Fig. \ref{blockstructure}(b)) must be diagonal, since the interactions are mediated by one phonon only. On the other hand, the small blocks $t$ are also tridiagonal (Fig. \ref{blockstructure}(c)). This self-similar arrangement repeats at all block levels $n_1 \, n_2 \, \ldots \, n_N$.
\subsection{Phonon mode parities}
The reflection symmetries of the phonon cavity lead to phonon modes of well defined parity. These properties are examined in this section; for guidance refer to Fig. \ref{abcde}. For instance, when the boundary conditions at $Y_1$ and $Y_2$ are equal, {\it i.e.}, both C or both F, the modes $\alpha$ have either a symmetric ($+$) or an anti-symmetric ($-$) parity with respect to S$_x$. The same holds for S$_y$ regarding the edges $X_1$ and $X_2$. If the boundary conditions are opposite at $X_1$ and $X_2$ and also at $Y_1$ and $Y_2$, then one of the main diagonals of the plate, S$_{x-y}$ or S$_{x+y}$, is the only symmetry axis. Therefore, the phonon modes $\alpha$ have well defined $+$ and $-$ parities about it. Finally, the cases \{CCCC\} and \{FFFF\} have definite parities about all the symmetry axes: S$_x$, S$_{y}$, S$_{x-y}$ and S$_{x+y}$.
\subsection{Circular quantum dot}
In the following we analyze the element ${\cal I}_{\alpha \, \kappa''\kappa'}$ appearing in Eq. (\ref{phononmatrix}). For the circular quantum dot case we have, generalizing Eqs.
(\ref{F}) and (\ref{GV}),
\begin{eqnarray} {\cal I}_{\alpha\,\kappa''\kappa'} &=& f(d) \int_{\mathcal{D}_{x y}} dx \, dy \ \ g_{\kappa''\kappa'}(r) \ \ F_\alpha(x,y) \nonumber \\ &\times& \big\{\cos[(l'' - l') \, \theta] + i \sin[(l'' - l') \, \theta] \big\}. \label{integralcircle} \end{eqnarray}
Here, $g_{\kappa''\kappa'}(r)$ denotes the product of Bessel functions coming from Eq. (\ref{pureelectron}), $r=\sqrt{(x-x_0)^2 + (y-y_0)^2}$ with $(x_0,y_0)$ as the center of the QD, $\theta$ is measured from the S$_x$ axis, and $f(d)$ results from a simple integration along the $z$ axis. For the deformation potential $F_\alpha^{DP}(x,y) = \nabla^2 \sum_{m,n} A_{mn}^{\alpha} \, X_m(x) \, Y_n(y)$, whereas for the piezoelectric interaction $F_\alpha^{PZ}(x,y) = \frac{\partial^2}{\partial x \,\partial y} \sum_{m,n} A_{mn}^{\alpha} \, X_m(x) \, Y_n(y)$. Notice that the Laplacian is a second-order operator, so the function $F_\alpha^{DP}$ has the same parity as the phonon mode $\alpha$. On the other hand, $F_\alpha^{PZ}$ results from first-order derivatives, so it has the opposite parity to $\alpha$.
\begin{figure}[h] \includegraphics[width=8.6cm]{localization2.eps} \caption{Circular quantum dot positioned at: (a) locus A for the \{FCCC\} (or \{CFFF\}) plate; and (b) locus D for the \{$X_1X_2$CC\} (or \{$X_1X_2$FF\}) plate. Notice that both cases have overall symmetry only about the $x$ axis, {\it i.e.}, S$_x$.} \label{localization} \end{figure}
A first examination of Eq. (\ref{integralcircle}) reveals that if $l'' = l'$ the sine function vanishes and the matrix element is real. For $l'' \neq l'$, it will be complex, real or purely imaginary, depending on the system's characteristics. For clarity, we briefly analyze a representative case. Let us assume the same boundary conditions for $Y_1$ and $Y_2$, and then consider two scenarios: the circular QD at locus A and different boundary conditions for $X_1$ and $X_2$ (Fig.
\ref{localization}(a)), or locus D regardless of $X_1$ and $X_2$ (Fig. \ref{localization}(b)). In both cases only the phonon parity about the S$_x$ axis will be relevant for the evaluation of (\ref{integralcircle}). In fact, the $+$ ($-$) parity of $F_\alpha$ about S$_x$ leads to a matrix element that is real (purely imaginary) for the DP interaction and purely imaginary (real) for the PZ interaction. That is a consequence of the parity of the sine and cosine functions about $\theta=0$ together with the parity of $F_\alpha$ with respect to the same axis. On the basis of the above analysis and the discussion of Section \ref{va}, it follows that if there are one or more global symmetry axes in the system, then each matrix block ${\bf n} \times {\bf n}$ of Fig. \ref{blockstructure}(a) can be written as $\mathbb{A} + i \mathbb{B}$, with $\mathbb{A}$ and $\mathbb{B}$ originating from the cosine and sine parts of the integral in Eq. (\ref{integralcircle}), respectively. Moreover, those matrices are real symmetric and mutually disjoint, that is, for $\mathbb{A}_{r s} \neq 0$ ($\mathbb{B}_{r s} \neq 0$) necessarily $\mathbb{B}_{r s} = 0$ ($\mathbb{A}_{r s} = 0$). Therefore, when a single interaction mechanism is acting, we have the following scenarios:
\begin{itemize} \item If the geometrical configuration of the nanostructure is such that there is only one global symmetry axis (e.g., locus A or D for plate \{CFFF\}), the ensuing partial symmetry breaking is enough to generate chaos. Moreover, the matrix representation of the Hamiltonian $\hat{H} = \hat{H}_0 + \hat{H}_{DP(PZ)}$ can be written in blocks of fixed $l$'s as $\mathbb{H}_{l''l'} = \mathbb{A}_{l''l'} + (i)^{\mbox{\scriptsize{sign}}(l'' - l')} \, \mathbb{B}_{l''l'}$, where $\mathbb{A}$ and $\mathbb{B}$ are disjoint, real and symmetric. Thus, $\mathbb{H}$ is completely characterized by orthogonal matrices, and so it belongs to the GOE universality class.
It is straightforward to verify that the present reasoning encompasses all the cases of a single GOE statistics listed in Table I. \item For locus A and \{CCFF\} boundary conditions, the above structure for $\mathbb{H}$ is still valid. However, now the system has two symmetry axes, leading to new restrictions for the matrix elements. In fact, denote by $\sigma_x \sigma_y$ (with $\sigma = \pm$) the $\alpha$ mode parities with respect to S$_x$ and S$_y$. One finds that the integral over the cosine (sine) in Eq. (\ref{integralcircle}) is different from zero only if $|l'' -l'|$ is even and the $\alpha$ mode is $++$ ($--$), or $|l'' -l'|$ is odd and the $\alpha$ mode is $+-$ ($-+$). Such selection rules produce two different families of eigenvalues for the problem. For \{CCFF\} (and \{FFCC\}) each distinct family is chaotic, explaining the occurrence of two superposed uncorrelated Wigner distributions in the $P(s)$ statistics (for an explicit example, see the simpler case of a rectangular QD in Sec. V-D). \item For locus A and boundary conditions \{CCCC\} (or \{FFFF\}) there exists one further global symmetry, namely, the equivalence of the $x$ and $y$ directions. The extra symmetry prevents the emergence of chaos. \item Finally, in the absence of a global symmetry axis ({\it e.g.}, the quantum dot at C for any boundary condition, or loci B, C or E for \{CFFF\}) the Hamiltonian matrix does not separate into real and purely imaginary disjoint parts. Hence, it is a full complex Hermitian matrix and the chaotic behavior takes place with the system belonging to the GUE universality class. \end{itemize}
The last case to be considered is the inclusion of both interactions in the Hamiltonian. From the previous discussion we know that for a given parity of the mode $\alpha$ the DP and PZ potentials lead to exactly opposite types of matrix elements. Indeed, if the DP matrix element is real (pure imaginary), necessarily that corresponding to PZ is pure imaginary (real).
Therefore, the Hamiltonian $\hat{H} = \hat{H}_0 + \hat{H}_{DP} + \hat{H}_{PZ}$ has a complex matrix representation that results in a GUE statistics for the energy levels. \subsection{Rectangular quantum dot} \begin{figure}[h] \includegraphics[width=4cm]{squareQD.eps} \caption{ Top: schematics of the $32 \times 32$ interaction matrix for locus D in the \{CCFF\} nanostructure, ordered as $p \, q \, \alpha_1 \, \alpha_2 \, \alpha_3$ with $p,q = 1 \ \mbox{or} \ 2$ and $\alpha_j = 0 \ \mbox{or} \ 1$. Bottom: the transformed matrix in a block form. The filled dots indicate the nonzero elements. } \label{matrixsquare} \end{figure} For the rectangular quantum dot the element ${\cal I}_{\alpha \, \kappa''\kappa'}$ can be written as \begin{eqnarray} {\cal I}_{\alpha \, \kappa''\kappa'} &=& f(d) \int_{\mathcal{D}_{x y}} dx \, dy \, F_\alpha(x,y) \nonumber \\ & & \sin[p''\pi \frac{(x-\overline{x})}{L_x}] \, \sin[q''\pi \frac{(y-\overline{y})}{L_y}] \nonumber \\ & & \times \sin[p'\pi \frac{(x-\overline{x})}{L_x}] \, \sin[q'\pi \frac{(y-\overline{y})}{L_y}], \label{integralsquare} \end{eqnarray} where $\overline{x} = x_0 - L_x/2$, $\overline{y} = y_0 - L_y/2$ and $(x_0,y_0)$ are the coordinates of the center of the QD (for guidance refer to Section \ref{el-gas}). From Eq. (\ref{integralsquare}) it is evident that the matrix elements are always real numbers. Consequently, any chaotic behavior must belong to the GOE class and the occurrence of the GUE statistics is ruled out for this nanostructure. With respect to the quantum numbers, the conditions for which the above integral is different from zero are again entirely dependent on the global symmetries of the system. For instance, if the whole nanostructure has S$_x$ as a symmetry axis, then ${\cal I}_{\alpha \, \kappa''\kappa'}$ is nonzero for the following combinations: $p''+ p'\equiv$ even and $\alpha$ mode parity $\sigma_x \equiv +$, or $p''+ p' \equiv$ odd and $\sigma_x \equiv -$. 
Similar relations hold for $q''+ q'$ regarding S$_y$. The behavior of the spectral statistics generated by the rectangular QD-NEMS can be summarized by the following representative situations: for loci B, C or E in the \{CFFF\} plate there are no global symmetry axes and we obtain GOE distributions. For loci A and D in the \{CFFF\} plate, there is one overall symmetry axis (S$_x$) and the resultant statistics is the superposition of two uncorrelated GOE distributions. Finally, for locus A in the \{CCFF\} and \{FFFF\} (or \{CCCC\}) plates, which contain more than one global symmetry axis, no chaotic features are observed. One can verify that all cases in Table II follow the same trends. In order to visualize the occurrence of the 2-GOE statistics, consider case D in the \{CCFF\} plate. Despite the fact that the phonon modes $\alpha$ have two symmetry axes, S$_x$ and S$_y$, only the parity about S$_x$ is a global symmetry, due to the position of the QD. Assume, then, three phonon modes, such that a basis state is written as $|p,q;n_1,n_2,n_3\rangle$, with $p,q$=1 or 2 and $n_{\alpha}$=0 or 1. In addition, the three phonon mode parities with respect to S$_x$ are taken to be \{$+,-,+$\}. This results in the $32 \times 32$ matrix schematically represented at the top of Fig. \ref{matrixsquare}, where the filled dots indicate the nonzero elements. It is possible to transform the original matrix into the one shown at the bottom of Fig. \ref{matrixsquare} simply by rearranging its rows and columns. By labelling the original rows (from left to right) and columns (from top to bottom) as $1,2,\ldots,32$, we obtain the first nonzero block of the transformed matrix by performing the operation $1 \, 2 \, 3 \, 4 \, 5 \, 6 \, 7 \, 8 \, 9 \, 10 \, 11 \, 12 \, 13 \, 14 \, 15 \, 16 \rightarrow 1 \, 2 \, 5 \, 27 \, 13 \, 19 \, 10 \, 9 \, 32 \, 31 \, 6 \, 20 \, 14 \, 28 \, 23 \, 24$.
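The reordering just described can also be generated algorithmically: the two families correspond to the connected components of the sparsity graph of the matrix, and listing each component's indices consecutively exposes the block-diagonal form. A minimal sketch with a hypothetical $6\times 6$ pattern (not the actual $32\times 32$ matrix of Fig. \ref{matrixsquare}):

```python
import numpy as np

def block_permutation(M):
    """Group the row/column indices of a symmetric sparsity pattern M into
    the connected components of its graph; relabeling rows and columns by
    the concatenated components puts M into block-diagonal form."""
    n = len(M)
    seen, order, blocks = set(), [], []
    for start in range(n):
        if start in seen:
            continue
        comp, stack = [], [start]
        seen.add(start)
        while stack:                      # depth-first search
            i = stack.pop()
            comp.append(i)
            for j in range(n):
                if M[i, j] != 0 and j not in seen:
                    seen.add(j)
                    stack.append(j)
        order += sorted(comp)
        blocks.append(len(comp))
    return np.array(order), blocks

# hypothetical 6x6 pattern with two hidden 3x3 blocks, interleaved
M = np.zeros((6, 6), int)
for i, j in [(0, 2), (2, 4), (1, 3), (3, 5)]:
    M[i, j] = M[j, i] = 1
for i in range(6):
    M[i, i] = 1
perm, blocks = block_permutation(M)
P = M[np.ix_(perm, perm)]
print(blocks)
# off-diagonal blocks vanish after the relabeling
assert not P[:3, 3:].any() and not P[3:, :3].any()
```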
A similar procedure, {\it i.e.}, operating over the remaining $17 \ldots 32$ positions, leads to the other nonzero block. Here, the el-ph interaction generates chaos in each family of eigenvalues, originating from the two independent blocks. Consequently, the spectrum of the full matrix gives rise to the superposition of two uncorrelated GOE distributions. The above analysis is valid for any matrix size. Finally, if both interactions act together in a system with a single global symmetry axis, say S$_x$, their effect is to break the selection rules previously described. This happens because $F_{\alpha}(x,y)$ has opposite parities for the DP and PZ interactions. As a result, the Hamiltonian matrix does not have a block form and pure GOE statistics emerges from the 2-GOE case, as seen in Fig. \ref{DPZ}.
\subsection{Symmetry operator analysis of the anomalous GUE statistics}
So far we have examined the structure of the Hamiltonian matrix to explain the chaotic features exhibited by our NEMS. Here, we make a link between our results and a more general analysis \cite{Leyvraz} to clarify the appearance of the anomalous GUE statistics in our time-reversal invariant (TRI) system. As already mentioned, the spectral fluctuations of TRI chaotic systems typically correspond to the GOE distribution. However, Leyvraz et al. \cite{Leyvraz} have shown that there are exceptions to this rule, which can be interpreted even semiclassically \cite{Keating}. Suppose a TRI chaotic system has a discrete point symmetry represented by the operator ${\cal S}$; then $[H,{\cal S}] = [H,{\cal T}] = 0$, where $H$ is the Hamiltonian and ${\cal T}$ the time reversal operator. More importantly for the effect, assume also that ${\cal S}$ has two invariant subspaces whose representations are complex conjugates of each other. We call them $\{\Psi^{(+)}\}$ and $\{\Psi^{(-)}\}$, which are solutions of $H \Psi_n^{(\pm)} = E_n^{(\pm)} \Psi_n^{(\pm)}$.
Since ${\cal T} \Psi_n^{(\pm)} = [{\Psi_n^{(\pm)}}]^* = \Psi_n^{(\mp)}$, it may seem that the problem is not TRI because each subspace changes into the other under ${\cal T}$, thereby giving rise to GUE statistics (notice that the Hamiltonian matrix is complex Hermitian in this basis). However, this is just an artefact of the particular structure of the subspaces. Actually, the full Hilbert space is TRI, as can be verified after the simple basis transformation $\Phi^{(\pm)} = i^{(\pm 1 - 1)/2} [\Psi^{(+)} \pm \Psi^{(-)}]/\sqrt{2}$, for which ${\cal T} \Phi_n^{(\pm)} = {\Phi_n^{(\pm)}}$. Note also that the Kramers theorem \cite{Sakurai,Dembowski2} imposes $E_n^{(+)} = E_n^{(-)}$. Finally, as pointed out in Ref. [\onlinecite{Leyvraz}], the present phenomenon is rare because often there exists an extra operator ${\cal P}$ (e.g., the parity symmetry operator) for which $[H,{\cal P}] = [{\cal T},{\cal P}] =0$. This operator is responsible for combining the two complex conjugate representations of ${\cal S}$ into an irreducible representation that is self-conjugate \cite{Schafer}, thereby producing GOE statistics. Prior to our earlier paper \cite{rego}, the only systems known to show such behavior were billiards with three-fold but no mirror (parity) symmetries, which have been realized experimentally in microwave cavities \cite{Schafer,Dembowski1,Dembowski2}. They are chaotic by construction (due to their particular geometry) and have their eigenstates composed of complex degenerate doublets (of GUE statistics) and real singlets (of GOE statistics). It is possible, however, to establish a parallel between our circular QD-NEMS and these billiards. In our case the electron states, Eq. (\ref{pureelectron}), naturally provide the necessary complex representation through the angular momentum quantum number $l$. They are divided into singlets, for $l=0$, and degenerate doublets, for $l=\pm 1, \pm 2, \ldots$.
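For completeness, the time-reversal invariance of the recombined basis introduced above can be checked explicitly. Using the antiunitarity of ${\cal T}$ (so that ${\cal T} c \, \Psi = c^* \, {\cal T} \Psi$) together with ${\cal T} \Psi^{(\pm)} = \Psi^{(\mp)}$, and noting that the phase in the definition is $i^{(-1-1)/2} = -i$ for $\Phi^{(-)}$,
\begin{eqnarray}
{\cal T} \Phi^{(+)} &=& \tfrac{1}{\sqrt{2}} \left[ \Psi^{(-)} + \Psi^{(+)} \right] = \Phi^{(+)} , \nonumber \\
{\cal T} \Phi^{(-)} &=& {\cal T} \left( -\tfrac{i}{\sqrt{2}} \left[ \Psi^{(+)} - \Psi^{(-)} \right] \right) = \tfrac{i}{\sqrt{2}} \left[ \Psi^{(-)} - \Psi^{(+)} \right] = \Phi^{(-)} , \nonumber
\end{eqnarray}
so both recombined states are indeed ${\cal T}$-invariant.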
Of course, the original electron states as well as the phonon states are regular, but the el-ph coupling generates chaos. If, nevertheless, the boundary conditions and the location of the QD are such as to give rise to a global symmetry axis, the energy-level statistics is of the GOE type due to the ensuing definite parity. On the other hand, in the absence of an overall symmetry axis (e.g., location C for any plate), no ${\cal P}$ operator exists and GUE statistics arises. As a last comment, we recall that in our system the original electron degeneracies are destroyed by the interaction with the phonons. Nonetheless, the phonons behave as a perturbation on the electronic spectrum, because the electron energies are much higher than those of an individual phonon. It is important to mention, however, that the occurrence of the doublets is not necessary for the manifestation of the GUE statistics. Actually, even when an additional small perturbation breaks that degeneracy, the GUE statistics still arises for each split family of eigenstates. This has been confirmed experimentally by the study of imperfect three-fold microwave triangular billiards \cite{Dembowski1}.
\section{Conclusion}
We have presented a theoretical study of the electron-phonon coupling in nanoelectromechanical systems (NEMS) comprised of a suspended dielectric plate and a quantum dot on its surface. It is shown that quantum chaotic behavior develops as a result of the el-ph interaction, for a wide range of geometrical and material parameters of the QD-NEMS. A method is developed to treat this novel class of systems. It associates the phonons with the vibrational modes of a suspended rectangular plate, for clamped and free boundary conditions. The electrons are confined to a large QD, of either circular or rectangular symmetry, and described by the free electron gas approximation.
The deformation potential and piezoelectric interactions are included non-perturbatively in the model, by calculating the eigenenergies of the NEMS on the basis of the el-ph states. By performing standard energy-level statistics we demonstrate that the resulting spectral fluctuations are very well described by those of the Gaussian Orthogonal Ensemble (GOE) or the Gaussian Unitary Ensemble (GUE). We show that the combination of the phonon mode parities with the position of the QD determines the overall symmetries of the system, which are ultimately responsible for the distinct chaotic features observed. Although quantum chaos is commonly obtained in the system, the GUE statistics occurs only in the case of a circular QD-NEMS. It represents an anomalous phenomenon, since the problem is time-reversal invariant. The fundamental reason for this effect lies in the structure of the electronic spectrum, which is formed by doublets with $l=\pm 1, \pm 2, \ldots$. In the absence of any overall geometrical symmetry, the complex conjugate doublets transform into each other under the action of the time reversal operator, thus simulating the behavior of a non-TRI system. Finally, calculations are under way to include the effects of the electron-electron interaction in the model. We conjecture that the same chaotic behavior can also arise in this case, because the el-el interaction preserves the total angular momentum of the electronic system, justifying the previous analysis.
\section*{Acknowledgments}
We thank CNPq/Edital Universal, Funda\c c\~ao Arauc\'aria, Finep/CT-Infra1, CNPq/CT-Energ and CNPq (MGEL and AG) for research grants.
\section{Introduction}
\label{sec:Introduction}

The low energy limit of the bound states of the D1-D5 brane system is described by a two-dimensional supersymmetric conformal field theory (SCFT) holographically dual to ${\mathrm{AdS}}_3 \times {\mathbb S}^3 \times {\mathbb T}^4$ \cite{Maldacena:1997re}.%
\footnote{The torus ${\mathbb T}^4$ can be replaced by K3, but we will only consider the former.}
In the supergravity description, with the D1-branes wrapped around a large ${\mathbb S}^1$ and the D5-branes wrapped around ${\mathbb S}^1 \times {\mathbb T}^4$, the system assumes the form of an asymptotically flat black ring at spatial infinity, with six large dimensions, whose geometry becomes ${\mathrm{AdS}}_3 \times {\mathbb S}^3 \times {\mathbb T}^4$ in the near-horizon scaling limit,%
\footnote{See \cite{David:2002wn} for a review.}
and whose Bekenstein-Hawking entropy was derived microscopically by Strominger and Vafa \cite{Strominger:1996sh}.

At a certain point in its moduli space, the D1-D5 SCFT becomes a free ${\mathcal N}=(4,4)$ supersymmetric sigma model on the orbifold $({\mathbb T}^4)^N/S_N$, where $S_N$ is the symmetric group of $N$ elements \cite{Larsen:1999uk,Seiberg:1999xz}. But this is not the same point in moduli space as the supergravity black hole description, so to make contact between the two one should turn on marginal deformations on the SCFT side. Supergravity properties, such as the entropy of the Strominger-Vafa black hole, can be obtained from protected objects in the SCFT which are unaffected by these deformations.
With the development of the fuzzball program \cite{Lunin:2001jy,Mathur:2005zp,Kanitscheider:2007wq,Kanitscheider:2006zf,Skenderis:2008qn,Mathur:2018tib}, a dictionary between states of the free-orbifold SCFT and some `microstate geometries' of supergravity was developed \cite{Lunin:2001fv,Lunin:2004uu,Giusto:2004id,Giusto:2004ip,Giusto:2004kj,Giusto:2011fy,Giusto:2012yz,Giusto:2013bda,Giusto:2013rxa,Mathur:2005zp,Skenderis:2008qn,Mathur:2012tj,Taylor:2007hs}, and significant progress has been achieved in the construction of geometries supporting the horizon-scale structure necessary for addressing the information loss problem \cite{Bena:2010gg,Bena:2013dka,Bena:2015bea,Bena:2016ypk,Bena:2018mpb,Warner:2019jll}. This success on the supergravity side is one important motivation for an even deeper understanding of both the free orbifold SCFT and its deformation.

Even at the free-orbifold point, the D1-D5 SCFT is quite non-trivial. The twisted boundary conditions of the symmetric orbifold lead to correlation functions with complicated monodromies, and one must resort to specific techniques to compute them \cite{Dixon:1986qv,Arutyunov:1997gt,Lunin:2000yv,Lunin:2001pw,Pakman:2009zz}. The explicit construction of these functions remains an active area of research \cite{Burrington:2012yn,Burrington:2012yq,Burrington:2015mfa,Burrington:2018upk,deBeer:2019ioe,Galliani:2016cai,Galliani:2017jlg,Roumpedakis:2018tdb,Tormo:2018fnt,Giusto:2020mup,Dei:2019iym}. In fact, the study of the $({\mathbb T}^4)^N/S_N$ orbifold SCFT has been associated with the development of an explicit realization of AdS$_3$/CFT$_2$ by mapping the D1-D5 system to the F1-NS5 system by S-duality, and working on the ${\mathrm{AdS}}_3 \times {\mathbb S}^3 \times {\mathbb T}^4$ background with only (one unit of) NS fluxes \cite{Dabholkar:2007ey,Gaberdiel:2007vu,Dei:2019iym,Dei:2019osr,Eberhardt:2018ouy,Eberhardt:2019qcl,Eberhardt:2019ywk,Gaberdiel:2020ycd}.
Describing the deformation of this non-trivial theory is not an easy task, but progress can be made by working perturbatively with respect to the dimensionless coupling $\lambda$ of the deformation operator $O^{(\rm{int})}_{[2]}$ \cite{Avery:2010er,Avery:2010hs,Pakman:2009mi,Carson:2014ena,Carson:2015ohj,Burrington:2017jhh,Carson:2016uwf,Guo:2019pzk,Hampton:2018ygz,Guo:2019ady,Guo:2020gxm}. In doing perturbation theory, it is necessary to compute correlation functions at the free orbifold point, involving the deformation operator among other fields. For the analysis of some important fields (as in the present paper), it is necessary to go to second order in perturbation theory, and hence to compute four-point functions, which are dynamical and not fixed by the symmetries. One complicating factor is that $O^{(\rm{int})}_{[2]}$ has twist two, so it can join and split the ``twisted strings'' of the effective string description of the orbifold, and correlation functions have the associated non-trivial monodromies. Meanwhile, based both on results from bulk supergravity in the D1-D5 system and from AdS$_3$/CFT$_2$ in the F1-NS5 system, non-renormalization theorems are believed to exist, with explicit proofs available in some cases, e.g.~for three-point functions \cite{deBoer:2008ss,Baggio:2012rr} and for the extremal fields in the NS chiral ring \cite{Pakman:2009mi}. This latter proof is given at order $\lambda^2$ by computing explicitly the necessary integral of a four-point function giving the one-loop correction to the propagator. The result confirms the expectation that BPS operators are protected.

The states most relevant for fuzzballs and black hole microstates are in the Ramond sector of the SCFT, in particular the Ramond ground states.
Since this sector is more complicated than the NS sector precisely because of the presence of `spin fields' and a non-trivial set of ground states, it is very convenient, for many purposes, to work in the NS sector and then perform a spectral flow \cite{Schwimmer:1986mf} of the ${\mathcal N}=4$ superconformal algebra. Spectral flow maps the NS chiral ring to the set of degenerate Ramond ground states, a fact which has been extensively used to simplify computations, classify states, etc. The purpose of this paper is to study the effects of the deformation on generic Ramond ground states of the full orbifold theory, that is, the Ramond fields with $(h,\tilde h) = (\frac14 N , \frac14 N)$, by working directly in the Ramond sector, without resorting to spectral flow. The generic composite Ramond states, which have the form
\begin{equation} \label{CompleteRamondIntro} \Big[ \prod_i (R^{\zeta_i}_{[n_i]})^{q_i} \Big] , \qquad \sum_{i} n_i q_i = N , \end{equation}
are in fact highly degenerate: they are made of all the allowed compositions of $n_i$-twisted ground states whose cycles $(n_i)$ form a conjugacy class of $S_N$. To obtain $S_N$-invariance, one must sum over the orbits of the cycles. The labels $\zeta_i$ indicate the charges of the single-cycle components, which are doublets of the R-symmetry SU(2) groups and of the SU(2) groups that form the automorphism of the ${\mathcal N} = (4,4)$ algebra. Our goal is to compute the four-point functions of (\ref{CompleteRamondIntro}) with their conjugates and two deformation operators:
\begin{equation} \label{GuIntro} \Big\langle \Big[ \prod_i (R^{\zeta_i}_{[n_i]})^{q_i} \Big]^\dagger (z_1,\bar z_1) \; O^{(\rm{int})}_{[2]}(z_2, \bar z_2) \; O^{(\rm{int})}_{[2]} (z_3 , \bar z_3) \; \Big[ \prod_i (R^{\zeta_i}_{[n_i]})^{q_i} \Big] (z_4,\bar z_4) \Big\rangle .
\end{equation} From this four-point function one can derive several dynamical data following the same steps as in Refs.\cite{Lima:2020boh,Lima:2020kek,Lima:2020nnx,Lima:2020urq}. Twisted correlators are associated with ramified coverings of the sphere, and their large-$N$ expansion is associated with an expansion in covering surfaces with higher genera \cite{Lunin:2000yv,Pakman:2009zz}. We compute (\ref{GuIntro}) at the leading order, i.e.~for genus-zero covering surfaces. Our first result is to show that (\ref{GuIntro}) factorizes into a sum of \emph{connected} correlation functions \begin{equation} \label{GuIntro2} \Big\langle \Big[ R^{\zeta_1}_{[n_1]} R^{\zeta_2}_{[n_2]} \Big]^\dagger (z_1,\bar z_1) \; O^{(\rm{int})}_{[2]}(z_2, \bar z_2) \; O^{(\rm{int})}_{[2]} (z_3 , \bar z_3) \; \Big[ R^{\zeta_1}_{[n_1]} R^{\zeta_2}_{[n_2]} \Big] (z_4,\bar z_4) \Big\rangle , \end{equation} with only \emph{double}-cycle Ramond fields. This reduces the problem considerably --- in fact, the connected function above has already been considered recently in Ref.\cite{Lima:2020nnx} for Ramond fields with R-charged single-cycle components, for which (\ref{GuIntro2}) was computed with the stress-tensor method. Here we compute (\ref{GuIntro2}) for all the possible combinations of R-charged and R-neutral single-cycle constituents, by using both the stress-tensor method \cite{Dixon:1985jw,Arutyunov:1997gt,Pakman:2009ab,Pakman:2009zz,Pakman:2009mi} and the Lunin-Mathur covering surface technique \cite{Lunin:2000yv,Lunin:2001pw}. This is our second main result. Knowledge of these four-point functions gives us important dynamical information about the D1-D5 CFT: one can take coincidence limits such as $z_3 \to z_4$, to find the fusion rules of the operator product expansions (OPEs) \begin{equation} \label{OPEointRRIntro} O^{(\rm{int})}_{[2]} \times \big[ R^{\zeta_1}_{[n_1]} R^{\zeta_2}_{[n_2]} \big] = \sum_{\frak a} Y^{\zeta_1\zeta_2}_{\frak a, [n_1+n_2]} . 
\end{equation} We are able to find the conformal dimensions of the twisted operators $Y^{\zeta_1\zeta_2}_{\frak a, [n_1+n_2]}$ in the two channels ${\frak a}={\frak1},{\frak2}$ of these OPEs, and to compute a collection of structure constants. Furthermore, using conformal perturbation theory, the function (\ref{GuIntro}) can be used to derive the $\lambda^2$-correction% % % \footnote{% The first-order correction vanishes automatically by $S_N$ selection rules obeyed by the relevant three-point functions.} % % to the conformal dimension of the Ramond fields (\ref{CompleteRamondIntro}). For that, one needs to integrate over the positions of the two deformation operators. The factorization means that this integral must be computed for all the functions (\ref{GuIntro2}). These integrals are divergent, but can be regularized with the same framework developed in Refs.\cite{Lima:2020boh,Lima:2020kek,Lima:2020nnx,Lima:2020urq}. We show that the regularized integrals of every function (\ref{GuIntro2}) vanish --- a direct verification of the non-renormalization of the generic Ramond ground states. \bigskip The structure of the paper is the following. In Sect.\ref{SectSCFT} we describe the relevant properties of the free orbifold SCFT and recall some aspects of conformal perturbation theory which we use throughout the paper. In Sect.\ref{SectFactorization} we discuss the factorizations of (\ref{GuIntro}), and show that it factorizes into a sum of functions (\ref{GuIntro2}) at genus-zero order. In Sect.\ref{SectDoublCyclFunc} we compute the function (\ref{GuIntro2}) for every combination of single-cycle Ramond field components. In Sect.\ref{SectOPEs} we use these functions to extract conformal data from the fusion rules of the interaction operator with the Ramond fields. In Sect.\ref{SectNonRenor} we compute the one-loop integrals which give the second-order correction to the conformal dimensions of the Ramond fields, and show that they vanish.
We conclude in Sect.\ref{SectConclusion} by putting our results in perspective. Several auxiliary computations and examples, as well as lists of the structure constants, are presented in the appendices. \section{The free orbifold theory and its deformation} \label{SectSCFT} In this section, we describe the twisted Ramond fields which will be the main subject of our work, and the necessary tools for their description in the deformed theory. \subsection{Twisted Ramond fields} At the free orbifold point, the D1-D5 CFT is made of $N$ copies of the `seed' $\mathcal N = (4,4)$ superconformal field theory of four free bosons $X_I^{\dot A A} (z,\bar z)$, four free holomorphic fermions $\psi_I^{\alpha \dot A} (z)$, and four anti-holomorphic fermions $\tilde \psi_I^{\dot \alpha \dot A}(\bar z)$, with $S_N$ acting on the `copy index' $I = 1,\cdots,N$. The total central charge is $c_{orb} = (6N, 6N)$. Each copy SCFT has central charge $c = (6,6)$, R-symmetry group $\mathrm{SU}(2)_L \times \mathrm{SU}(2)_R$ and an ``internal'' automorphism group $\mathrm{SO}(4)_I = \mathrm{SU}(2)_1 \times \mathrm{SU}(2)_2$. Indices $\alpha = + , -$ and $\dot \alpha = \dot +, \dot -$ transform as doublets of SU(2)$_L$ and SU(2)$_R$, and $A=1,2$ and $\dot A=\dot1,\dot2$ transform as doublets of SU(2)$_1$ and SU(2)$_2$, respectively. The fields satisfy the reality conditions \begin{equation} \label{RealiCondXpsi} ( X_I^{\dot A A} )^\dagger = - \epsilon_{\dot A \dot B} \epsilon_{A B} X_I^{\dot B B} \ , \quad ( \psi_I^{\alpha \dot A} )^\dagger = - \epsilon_{\alpha \beta} \epsilon_{\dot A \dot B} \psi_I^{\beta\dot B} .
\end{equation} The non-vanishing bosonic two-point functions are \begin{align} \Big\langle \partial X_I^{\dot 1 1}(z) ( \partial X_I^{\dot 1 1} )^\dagger (z') \Big\rangle = \frac{2 }{(z - z')^2} = - \Big\langle \partial X_I^{\dot 1 2}(z) ( \partial X_I^{\dot 1 2} )^\dagger (z') \Big\rangle , \label{twopntboconj} \end{align} and the fermionic two-point functions are $\langle \psi_I^{\alpha \dot A} (z) \psi_I^{\beta \dot B} (z') \rangle = - \epsilon^{\alpha\beta} \epsilon^{\dot A \dot B} / (z - z') $. Similar formulae hold in the anti-holomorphic sector. It is convenient to work with bosonized fermions% % % \footnote{% For a detailed account of cocycles in the bosonization see Refs.\cite{Burrington:2012yq,Burrington:2015mfa}. Our notation follows closely what is used in these references; the most relevant changes are that we call $\phi_1, \phi_2$ the fields they call $\phi^5,\phi^6$, and we call $R^\alpha$, $R^{\dot A}$, etc.~the spin fields which they call ${\mathcal S}^\alpha$, ${\mathcal S}^{\dot A}$, etc. There is also a change of sign in the SU(2)$_2$ current, see Footnote \ref{FootfrakJ} below. } % % \begin{subequations} \label{FermionsBoson} \begin{align} \psi_I^{+ \dot 1}(z) &= e^{-i \phi_{I,2}(z)} , & \psi_I^{+ \dot 2}(z) &= e^{i \phi_{I,1}(z)} , \label{Fermholoconline1} \\ \psi_I^{- \dot 1}(z) &= e^{- i \phi_{I,1} (z)} , & \psi_I^{- \dot 2}(z) &= - e^{i\phi_{I,2} (z)} . \label{Fermholoconline} \end{align}\end{subequations} The Ramond ground states are created from the NS vacuum by `spin fields'.
In the bosonized language, the holomorphic spin fields are given explicitly by \begin{subequations}\label{spinfields}\begin{align} R_I^+(z) &= e^{+ \frac{i}{2} [ \phi_{I,1}(z) - \phi_{I,2} (z) ]} , & R_I^-(z) &= e^{- \frac{i}{2} [ \phi_{I,1}(z) - \phi_{I,2} (z) ]} , \label{spinfildpm} \\ R_I^{\dot 1}(z) &= e^{- \frac{i}{2} [ \phi_{I,1} (z) + \phi_{I,2} (z) ]} , & R_I^{\dot 2}(z) &= e^{+ \frac{i}{2} [ \phi_{I,1} (z) + \phi_{I,2} (z) ]} , \label{Spinfield12} \end{align}\end{subequations} all with conformal weights $h = \frac14$. The R-charges and the internal SU(2)$_2$ charges of each of these fields are given in Table \ref{TabQuNaRam}. The orbifold SCFT has `twisted sectors' created by `twist operators' $\sigma_g$, $g \in S_N$, with conformal weight \cite{Dixon:1986qv,Lunin:2000yv} \begin{equation}\label{twistdim} h_n^\sigma = \frac{1}{4} \Big( n - \frac{1}{n} \Big) = \tilde h^\sigma_n , \end{equation} which introduce the boundary conditions $ \mathscr O_I(e^{2\pi i} z ) \sigma_g(0) = \mathscr O_{g(I)}(z) \sigma_g(0) $ for an operator $\mathscr O_I$ in the copy $I$. Let us denote by $(n) = (I_1, \cdots, I_{n})$ a generic cyclic permutation of length $n$. Any $g \in S_N$ can be expressed as a product of disjoint cycles, \begin{equation} \label{gPartition} {\textstyle g = \prod_{i =1} (n_i)^{q_i} , \qquad \sum_{i=1} n_i q_i = N ,} \end{equation} with the partition of $N$ on the right defining the conjugacy class $[g]$. As a consequence, single-cycle permutations can be seen as ``fundamental'' permutations, out of which the elements of $S_N$ are built. The twist field $\sigma_{(n)}$ connects the $n$ copies of the seed CFT entering the cycle $(n)$. The fields of these $n$ copies then combine into a single set of free fields on a circle of radius $nR$, which can be interpreted as a string winding $n$ times around the ${\mathbb S}^1$ of radius $R$ parallel to the D1 branes. This is often called an `$n$-wound component string'.
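As a purely illustrative aside (not part of the CFT computation), the cycle decomposition (\ref{gPartition}) is easy to check numerically: the sketch below, with function names of our own choosing, extracts the cycle type $\{n_i \colon q_i\}$ of a permutation and verifies the partition condition $\sum_i n_i q_i = N$.

```python
from collections import Counter

def cycle_type(perm):
    """Cycle type {n_i: q_i} of a permutation of {0, ..., N-1},
    given as a list with perm[I] = g(I)."""
    seen, lengths = set(), []
    for start in range(len(perm)):
        if start in seen:
            continue
        length, i = 0, start
        while i not in seen:        # follow the orbit of `start`
            seen.add(i)
            i = perm[i]
            length += 1
        lengths.append(length)
    return Counter(lengths)

# g = (0 1 2)(3 4)(5) in S_6: cycle type {3: 1, 2: 1, 1: 1}
g = [1, 2, 0, 4, 3, 5]
q = cycle_type(g)
assert sum(n * q_n for n, q_n in q.items()) == len(g)   # sum_i n_i q_i = N
```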
On the $n$-wound string, the Ramond ground states are created by the $n$-twisted Ramond fields \begin{subequations} \label{RamondFields} \begin{align} R^{\pm}_{(n)}(z) &\equiv \exp \left( \pm \frac{i}{2n} \sum_{I = 1}^n \big[ \phi_{1,I}(z) - \phi_{2,I}(z) \big] \right) \sigma_{(1 \cdots n)}(z) , \label{Rampmnnoninv} \\ R^{\dot1}_{(n)}(z) &\equiv \exp \left( - \frac{i}{2n} \sum_{I = 1}^n \big[ \phi_{1,I}(z) + \phi_{2,I}(z) \big] \right) \sigma_{(1 \cdots n)}(z) , \label{Ramdot1noninv} \\ R^{\dot2}_{(n)}(z) &\equiv \exp \left( + \frac{i}{2n} \sum_{I = 1}^n \big[ \phi_{1,I}(z) + \phi_{2,I}(z) \big] \right) \sigma_{(1 \cdots n)}(z) , \label{Ramdot2noninv} \end{align}\end{subequations} where we have chosen the representative $n$-cycle to be $(n) = (1, \cdots ,n)$. The spin fields (\ref{spinfields}) are the untwisted Ramond fields, i.e.~the Ramond fields with $n=1$. To obtain $S_N$-invariant operators from the single-cycle fields, we sum over the orbits of $(n)$ to obtain its conjugacy class $[n]$, and divide by the appropriate combinatorial factor $\mathscr S_n(N)$ so that the field remains normalized \cite{Lunin:2000yv,Pakman:2009zz}. Thus, for example, \begin{equation} \label{RnpmInv} R^{\pm}_{[n]}(z) \equiv \frac{1}{\mathscr S_n(N)} \sum_{h \in S_N} \exp \left( \pm \frac{i}{2n} \sum_{I = 1}^n \big[ \phi_{1,h(I)}(z) - \phi_{2,h(I)}(z) \big] \right) \sigma_{h^{-1}(1 \cdots n) h}(z) . \end{equation} The Ramond fields $R^{\alpha}_{(n)}$ and $R^{\dot A}_{(n)}$ form two SU(2) doublets, according to the spinorial indices $\alpha = \pm$ and $\dot A = \dot1,\dot2$ in the fermions $\psi^{\alpha\dot A}$. The $R^\alpha_{(n)}$ are charged under the holomorphic R-symmetry group SU(2)$_L$, and the $R^{\dot A}_{(n)}$ are charged under the internal symmetry SU(2)$_2$.
The charges are respectively the eigenvalues $j^3$ and $\frak j^3$ of the currents \begin{align} J^3(z) &= \frac{i}{2} \sum_{I = 1}^N \big[ \partial \phi_{1,I} (z) - \partial \phi_{2,I} (z) \big] , \\ {\frak J}^3(z) &= \frac{i}{2} \sum_{I=1}^N \big[ \partial \phi_{1,I} (z) + \partial \phi_{2,I}(z) \big] . \end{align} The right-moving fields $\tilde R^{\dot \alpha}_{(n)}(\bar z)$ and $\tilde R^{\dot A}_{(n)}(\bar z)$ are charged under the anti-holomorphic currents $\tilde J^3$ and $\tilde{\frak J}^3$ with charges $\tilde \jmath^3$ and $\tilde{\frak j}^3$.% % % \footnote{\label{FootfrakJ}% The total SU(2)$_2$ current is the \emph{sum} ${\frak J}^3(z) + \tilde{\frak J}^3(\bar z)$, see \cite{Burrington:2015mfa}. Note that the SU(2)$_2$ current ${\mathcal J}^3(z)$ defined in \cite{Burrington:2015mfa} has its sign opposite from ours, i.e.~${\mathcal J}^3(z) = - {\frak J}^3(z)$. } % % We will use the same notation for the left-moving fields and for the left-right moving operators $R^{\alpha}_{(n)}(z,\bar z) = R^\alpha_{(n)}(z) \tilde R^{\dot\alpha}_{(n)}(\bar z)$ and $R^{\dot A}_{(n)}(z,\bar z) = R^{\dot A}_{(n)}(z) \tilde R^{\dot A}_{(n)}(\bar z)$. The values of the charges for each Ramond field are given in Table \ref{TabQuNaRam}. All $n$-twisted Ramond fields have conformal weights \begin{equation} h^\mathrm{R}_n = \frac{nc}{24} = \frac{n}{4} = \tilde h^\mathrm{R}_n , \end{equation} which is appropriate for the Ramond ground states of the $n$-wound string. One can deduce this value by adding the weights of the exponentials in (\ref{RamondFields}) with the weight (\ref{twistdim}) of the twist fields. Of course, the $S_N$-invariant fields $R^\alpha_{[n]}$, $R^{\dot A}_{[n]}$ have the same charges and weights as their non-$S_N$-invariant components.
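To make this weight count explicit (a quick check, using the standard weight $\frac12 \sum_I a_I^2$ of a bosonic exponential $e^{i \sum_I a_I \phi_I}$): the exponential in (\ref{RamondFields}) involves $2n$ bosons, each entering with charge $\frac{1}{2n}$, hence
\begin{equation*}
h^{\mathrm{R}}_n = \frac{1}{2} \, (2n) \Big( \frac{1}{2n} \Big)^{2} + h^\sigma_n = \frac{1}{4n} + \frac{n}{4} - \frac{1}{4n} = \frac{n}{4} \, ,
\end{equation*}
in agreement with the central-charge formula $h^{\mathrm{R}}_n = nc/24$.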
\begin{table} \begin{center} \begin{tabular}{r|| r r r r} & $R^+_{(n)}$ & $R^-_{(n)}$ & $R^{\dot 1}_{(n)}$ & $R^{\dot 2}_{(n)}$ \\ \hline \hline \text{\footnotesize (R-charge)} \ $j^3$ & $+ \tfrac{1}{2} \ $ & $-\tfrac{1}{2} \ $ & $0 \ $ & $0 \ $ \\ \text{\footnotesize (R-charge)} \ $\tilde \jmath^3$ & $+ \tfrac{1}{2} \ $ & $- \tfrac{1}{2} \ $ & $0 \ $ & $0 \ $ \\ \text{\footnotesize (internal)} \ $\frak j^3$ & $0 \ $ & $0 \ $ & $-\tfrac{1}{2} \ $ & $+\tfrac{1}{2} \ $ \\ \text{\footnotesize (internal)} \ $\tilde{\frak j}^3$ & $0 \ $ & $0 \ $ & $-\tfrac{1}{2} \ $ & $+\tfrac{1}{2} \ $ \end{tabular} \caption{SU(2) charges of Ramond fields} \label{TabQuNaRam} \end{center} \end{table} The Ramond ground states of the orbifold with $c = 6N$ are compositions of single-cycle fields with \emph{disjoint} twists defining a conjugacy class of $S_N$, \begin{equation} \label{CompleteRamond} {\textstyle \prod_i (R^{\zeta_i}_{(n_i)})^{q_i} , \qquad \sum_{i=1} n_i q_i = N } , \end{equation} where $\zeta_i = \pm, \dot1, \dot2$. Since each component of the product is made from different copies (i.e.~the cycles $(n_i)$ are disjoint), when applying the stress-tensor or the SU(2) currents, we find that the dimension and the charges of the composite operator are the sums of the respective quantum numbers of the component strings, hence \begin{equation} h^\mathrm{R} = \frac{\sum_i q_i n_i}{4} = \frac{N}{4} = \tilde h^\mathrm{R} . \end{equation} We define composite operators with sum-over-orbits as in (\ref{RnpmInv}) by \begin{equation} \prod_i (R^{\zeta_i}_{[n_i]})^{q_i} = \sum_{h \in S_N} \left[ \prod_i \frac{1}{{\mathscr S}_{n_i}^{q_i}} (R^{\zeta_i}_{h^{-1} (n_i) h} )^{q_i} \right] , \end{equation} i.e.~we permute all cycles with the same $h \in S_N$, ensuring that only disjoint cycles enter the products of twists. This is similar to defining a ``normal ordering'', see \cite{Roumpedakis:2018tdb,Lima:2020nnx}, which we indicate by writing the composite operator inside square brackets. 
As we will see later, a prominent r\^ole will be played by the double-cycle operator with only two twisted strings of lengths $n_1$ and $n_2$, each in its Ramond ground state; it can be expressed explicitly as \begin{align} \label{Rn1n2ppInv} \begin{split} &\big[ R_{[n_1]}^{\dot1} R_{[n_2]}^{+} \big] (z) = \\ & \frac{1}{{\mathscr S}_{n_1} {\mathscr S}_{n_2}} \sum_{h\in S_N} \exp \Bigg[ - \frac{i}{2n_1}\sum_{I=1}^{n_1} \big( \phi_{1,h(I)} + \phi_{2,h(I)} \big) - \frac{i}{2n_2} \sum_{I=n_1+1}^{n_1+n_2} \big( \phi_{1,h(I)} -\phi_{2,h(I)} \big) \Bigg] \\ &\qquad\qquad\qquad \times \sigma_{h^{-1}(1\cdots n_1)h} \sigma_{h^{-1}(n_1+1\cdots n_1+n_2)h} . \end{split} \end{align} Cf.~Ref.\cite{Lima:2020nnx}. Note how the definition ensures that the cycles $(n_1)$ and $(n_2)$ appearing in the product are always disjoint. In the copies with trivial cycles there are, implicitly, identity operators. As mentioned, the conformal weight and charges are the sums of those of the components, hence for (\ref{Rn1n2ppInv}) \begin{equation} h^\mathrm{R}_{n_1,n_2} = \frac{n_1 + n_2}{4} , \quad j^3 = \tfrac12 , \quad \frak j^3 = - \tfrac12 . \end{equation} We can build a set of R-neutral operators not only from the R-neutral fields $R^{\dot A}_{[n]}$, but also by combining R-charged single-cycle fields as $[ R_{[n_1]}^{\pm} R_{[n_2]}^{\mp} ]$, which has $h^\mathrm{R}_{n_1,n_2} = \frac{n_1 + n_2}{4}$, $j^3 = 0$, and $\frak j^3 = 0$. A complete list of the SU(2) charges of the double-cycle Ramond fields is given in Table \ref{TabQuNaRamComp}.
\begin{table} \begin{center} \begin{tabular}{r|| c c c c c c c} & {\footnotesize $ R^\pm_{[n_1]} R^\pm_{[n_2]} $ } & {\footnotesize $ R^\pm_{[n_1]} R^\mp_{[n_2]} $ } & {\footnotesize $ R^{\dot2}_{[n_1]} R^{\dot2}_{[n_2]} $ } & {\footnotesize $ R^{\dot1}_{[n_1]} R^{\dot1}_{[n_2]} $ } & {\footnotesize $ R^{\dot2}_{[n_1]} R^{\dot1}_{[n_2]} $ } & {\footnotesize $ R^{\dot1}_{[n_1]} R^{\pm}_{[n_2]} $ } & {\footnotesize $ R^{\dot2}_{[n_1]} R^{\pm}_{[n_2]} $ } \\ \hline \hline $j^3$ & $\pm 1$ & $0$ & $0$ & $0$ & $0$ & $\pm\tfrac12$ & $\pm\tfrac12$ \\ $\frak j^3$ & $0$ & $0$ & $+1$ & $-1$ & $0$ & $-\tfrac12$ & $+\tfrac12$ \end{tabular} \caption{SU(2) charges of composite Ramond fields} \label{TabQuNaRamComp} \end{center} \end{table} \subsection{The interacting CFT and renormalization} The interacting CFT is a deformation of the free orbifold by the scalar modulus \begin{align} \label{DeformwithMo} \begin{split} \lambda O^{(\rm{int})}_{[2]}(z_* , \bar z_* ) &= \lambda \epsilon_{A B} G^{- A}_{-\frac{1}{2}} \tilde G^{ \dot - B}_{-\frac{1}{2}} O^{(0,0)}_{[2]}(z_*, \bar z_*) \\ &= \lambda \epsilon_{A B} \oint \frac{d z}{2\pi i} \oint \frac{d\bar z}{2\pi i} G^{- A}(z) \tilde G^{ \dot - B}(\bar z) O^{(0,0)}_{[2]}(z_*, \bar z_*) \end{split} \end{align} where $\lambda$ is the coupling parameter. This is one of the SUGRA moduli \cite{Avery:2010er}, an $S_N$-invariant singlet of the SU(2) symmetries, and it is marginal, with dimension $\Delta = h + \tilde h = 2$. The deformation is a descendant of the NS chiral operator $O^{(0,0)}_{[n]}$ with twist $n = 2$ (for generic $n$ these operators have $j^3 = \frac{n-1}{2} = h$), obtained by the action of the $-\frac12$-modes of the supercharges. When the deformation is turned on, generic fields are renormalized, while some others remain protected by algebraic conditions such as BPS bounds.
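As a quick consistency check on marginality (using only the quantum numbers quoted above): the $n=2$ chiral primary has $h = \tilde h = \frac{n-1}{2} = \frac12$, and each $G_{-\frac12}$, $\tilde G_{-\frac12}$ mode raises the corresponding weight by $\frac12$, so
\begin{equation*}
\big( h , \tilde h \big) \big[ O^{(\rm{int})}_{[2]} \big] = \big( \tfrac12 + \tfrac12 \, , \ \tfrac12 + \tfrac12 \big) = (1,1) \, , \qquad \Delta = h + \tilde h = 2 \, .
\end{equation*}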
Since the deformation is exactly marginal, a renormalized propagator (two-point function) is still fixed by conformal symmetry, but with a corrected conformal dimension. This correction can be obtained by computing the functional integral with the perturbed action $ S_{\mathrm{int}} = S_{\mathrm{free}} + \lambda \int d^2z \, O^{(\rm{int})}_{[2]}(z, \bar z) , $ and looking at the coefficient of logarithmic divergences. At first order in $\lambda$, the change in the conformal dimension $\Delta_{\mathscr R} = h^{\mathscr R} + \tilde h^{\mathscr R}$ of an operator $\mathscr R(z,\bar z)$ is proportional to \begin{equation} \label{3ptStrcR} \big\langle \mathscr R^\dagger(\infty) O^{(\rm{int})}_{[2]}(1,\bar 1) \mathscr R(0) \big\rangle, \end{equation} a structure constant in the free theory \cite{KADANOFF197939}.% % % \footnote{% Unless explicitly indicated otherwise, all correlators $\langle \, \cdots \rangle$ in this paper should be understood as evaluated in the free orbifold theory.} % % When this structure constant vanishes, the second-order correction to the conformal dimension is given by the ``one-loop'' integral over the positions of the interaction operators in the four-point function \begin{equation} \Big\langle \mathscr R(z_1, \bar z_1) O^{(\rm{int})}_{[2]}(z_3, \bar z_3) O^{(\rm{int})}_{[2]} (z_4 , \bar z_4) \mathscr R (z_2 , \bar z_2) \Big\rangle = \frac{G_{\mathscr R} (u, \bar u) }{ |z_{13}|^2 |z_{32}|^2|z_{12}|^{2 (\Delta_{\mathscr R} - 1)} } , \label{4ptgenG} \end{equation} where $z_{ij} \equiv z_i - z_j$ and $u$ is the anharmonic ratio $u \equiv {z_{12}z_{34}} / { z_{13}z_{24} }$. The integral over the position $z_3$ gives rise to the logarithmic divergence $\log \Lambda$, with a cutoff $\Lambda \ll 1$, and the integral over $z_4$, which can be exchanged for an integral over $u$, \begin{equation} \label{JscrR} J_{\mathscr R} \equiv \int \!
d^2u \, G_{\mathscr R}(u,\bar u) , \end{equation} gives the corrected dimension \begin{subequations} \label{Renormalization} \begin{equation} \Delta_{\mathscr R}^{(ren)} = \Delta_{\mathscr R} - \tfrac{\pi}{2} \lambda^2 J_{\mathscr R} + \mathrm{O}(\lambda^3) \end{equation} for the renormalized field \begin{equation} {\mathscr R}^{(ren)} (z,\bar z) = \Lambda^{\frac12 \pi \lambda^2 J_{\mathscr R}} {\mathscr R}(z,\bar z) . \end{equation}\end{subequations} The integral (\ref{JscrR}) also gives the first-order expression for the structure constant (\ref{3ptStrcR}) in the interacting theory. See \cite{Lima:2020kek} for details, and also \cite{Keller:2019yrr,Burrington:2012yq} for interesting expositions on conformal perturbation theory. \section{Factorizations of the four-point function with Ramond ground states} \label{SectFactorization} Consider the Ramond ground state (\ref{CompleteRamond}). The four-point function \begin{equation} \label{Gu} G(u,\bar u) = \Big\langle \Big[ \prod_i (R^{\zeta_i\dagger}_{[n_i]})^{q_i} \Big] (\infty,\bar \infty) \; O^{(\rm{int})}_{[2]}(1, \bar 1) \; O^{(\rm{int})}_{[2]} (u , \bar u) \; \Big[ \prod_i (R^{\zeta_i}_{[n_i]})^{q_i} \Big] (0,\bar 0) \Big\rangle \end{equation} is a sum of terms with specific twist cycles, which we denote by \begin{equation} \label{Fourpttofact} \Big\langle \Big[ \prod_i (R^{\zeta_i\dagger}_{(n_i)_\infty})^{q_i} \Big] (\infty,\bar \infty) \; O^{(\rm{int})}_{(2)_1}(1, \bar 1) \; O^{(\rm{int})}_{(2)_u} (u , \bar u) \; \Big[ \prod_i (R^{\zeta_i}_{(n_i)_0})^{q_i} \Big] (0,\bar 0) \Big\rangle . \end{equation} That is, we label a cycle $(m)_z$ by the position $z$ where it is inserted. Let us recall some properties of twisted correlation functions \cite{Lunin:2000yv}. 
In a generic $Q$-point function the monodromy conditions impose that \begin{flalign} && &\left\langle \mathscr O^1_{(n_1)} (z_1 , \bar z_1) \mathscr O^2_{(n_2)}(z_2,\bar z_2) \cdots \mathscr O^Q_{(n_Q)} (z_Q , \bar z_Q) \right\rangle \neq 0 && \label{QptscrO} \\ \text{only if} && &(n_1) \cdots (n_Q) = \mathds 1. && \label{compto1} \end{flalign} The correlator is connected if no collection of cycles is disjoint from all the others; let us assume that this is the case in (\ref{QptscrO}). The monodromies define a ramified surface $\Sigma$ with coordinates $(t,\bar t)$ covering the base Riemann sphere $S^2_{\mathrm{base}}$, with its coordinates $(z,\bar z)$. The covering surface $\Sigma$ has one ramification point of order $n_r-1$ for each cycle $(n_r)$ entering the correlator, irrespective of the position where the operators are inserted. (That is, we can take $z_2 \to z_1$ in (\ref{QptscrO}), and the number of ramification points is the same.) The genus $\bf g$ of $\Sigma$ is determined by $\bf s$, the number of \emph{distinct} copies entering the permutations, via the Riemann-Hurwitz formula \begin{equation} \label{RHur} {\bf g} = \frac{1}{2} \sum_{r=1}^Q (n_r - 1) - {\bf s} + 1 . \end{equation} If we sum over the orbits of the cycles $(n_1), \cdots , (n_Q)$, the resulting $S_N$-invariant function depends on $N$ via normalization factors and combinatorial properties of the different solutions of Eq.(\ref{compto1}), and has the large-$N$ scaling \cite{Lunin:2000yv,Pakman:2009zz} \begin{equation} \label{Ndep} \begin{split} \left\langle \mathscr O^1_{[n_1]} (z_1 , \bar z_1) \mathscr O^2_{[n_2]}(z_2,\bar z_2) \cdots \mathscr O^Q_{[n_Q]} (z_Q , \bar z_Q) \right\rangle &\sim N^{{\bf s} - \frac12 \sum_{r=1}^Q n_r} (1 + N^{-1} + \cdots) \\ &= N^{1 - {\bf g} - \frac12 Q} (1 + N^{-1} + \cdots) . \end{split} \end{equation} It is easy to see that a generic term like (\ref{Fourpttofact}) factorizes because, by construction, the cycles in the Ramond fields $R^{\zeta_i}_{(n_i)}$
are all disjoint. Hence the overlapping of copy indices is governed by the twist-2 cycles of the interaction operators, which can ``sew together'' a very limited number of cycles. \subsection{Factorization into double-cycle Ramond fields} \label{SectFactTypes} Let us examine in detail the function (\ref{Fourpttofact}). The cycles of each composite Ramond field are all disjoint; to make this more explicit, we can relabel the components by writing each cycle separately, i.e.~% $[ \prod_i (R^{\zeta_i}_{(n_i)})^{q_i} ] = [ \prod_r R^{\zeta_r}_{(n_r)} ] $, where now $\sum_i q_i n_i \equiv \sum_{r} n_{r} = N $. In this notation, there are (in general) cycles of the same length $n_{r} = n_{r'}$, but they are always disjoint: $(n_{r}) \neq (n_{r'})$. Now, every one of the $N$ copies is present (only once) in the cycles of $\prod_{r} R^{\zeta_r}_{(n_{r})}$, hence \emph{each of the cycles $(2)_1$ and $(2)_u$ necessarily overlaps with one or, at most, two of the cycles $(n_r)$.} As a result, the possible factorizations of (\ref{Fourpttofact}) are the following. \begin{itemize}[label={}, leftmargin=*] \item \underline{$\cdot$ Three-point functions:} In principle, the $S_N$ rules allow for a factorization into three-point functions, \begin{flalign} \label{3ptFact} \begin{split} &\Big\langle R^{\zeta_M\dagger}_{(M)_\infty} (\infty,\bar \infty) \; O^{(\rm{int})}_{(2)_1}(1, \bar 1) R^{\zeta_1}_{(n_1)_0} R^{\zeta_2}_{(n_2)_0} (0,\bar 0) \Big\rangle \\ &\quad \times \Big\langle R^{\zeta_1\dagger}_{(n_1)_\infty} R^{\zeta_2\dagger}_{(n_2)_\infty} (\infty,\bar \infty) O^{(\rm{int})}_{(2)_u}(u, \bar u) R^{\zeta_M}_{(M)_0} (0,\bar 0) \Big\rangle \, \prod_k \Big\langle R^{\zeta_k\dagger}_{(n_k)_\infty} \Big| R^{\zeta_k}_{(n_k)_0} \Big\rangle \end{split} \end{flalign} where $M = n_1 + n_2$. But charge conservation implies that these three-point functions vanish, since no double-cycle operator has the same charges as a single-cycle operator; see Tables \ref{TabQuNaRam} and \ref{TabQuNaRamComp}.
\item \underline{$\cdot$ Single-Cycle:} \begin{equation} \label{SingleCycle4pt} \Big\langle R^{\zeta_1\dagger}_{(n)_\infty} (\infty,\bar \infty) \; O^{(\rm{int})}_{(2)_1}(1, \bar 1) \; O^{(\rm{int})}_{(2)_u} (u , \bar u) \; R^{\zeta_1}_{(n)_0} (0,\bar 0) \Big\rangle \prod_s \Big\langle R^{\zeta_s\dagger}_{(n_s)_\infty} \Big| R^{\zeta_s}_{(n_s)_0} \Big\rangle \end{equation} For this factorization to be possible, there must be ${\bf s} = n$ distinct copies entering the remaining, connected four-point function. The associated covering surface has $Q = 4$ ramification points, and the Riemann-Hurwitz formula gives ${\bf g} = 1$. The covering surface associated with these functions is therefore a torus. Note that, in this case, we must necessarily have $n \geq 2$, because, by hypothesis, \emph{both} copy indices in $(2)_1$ have to overlap with indices of the cycle $(n)_\infty$. The fields with $n = 1$ all factorize into the double-cycle case described below. \item \underline{$\cdot$ Double-Cycle:} \begin{equation} \label{DoubleCycle4pt} \begin{split} \Big\langle R^{\zeta_1\dagger}_{(n_1)_\infty} R^{\zeta_2\dagger}_{(n_2)_\infty} (\infty,\bar \infty) \; O^{(\rm{int})}_{(2)_1}(1, \bar 1) \; O^{(\rm{int})}_{(2)_u} (u , \bar u) \; R^{\zeta_1}_{(n_1)_0} R^{\zeta_2}_{(n_2)_0} (0,\bar 0) \Big\rangle \qquad \\ \times \prod_s \Big\langle R^{\zeta_s\dagger}_{(n_s)_\infty} \Big| R^{\zeta_s}_{(n_s)_0} \Big\rangle \end{split} \end{equation} We can normalize the twisted Ramond fields such that the factorized bra-kets are $ \braket{R^{\zeta_s\dagger}_{(n_s)_\infty} | R^{\zeta_s}_{(n_s)_0} } = 1 $. (In these factors we have always $(n_s)_\infty = (n_s)_0^{-1}$, in accordance with Eq.(\ref{compto1}).) Because of Eq.(\ref{compto1}), the twists of the remaining four-point function in Eq.(\ref{DoubleCycle4pt}) must involve ${\bf s} = n_1 + n_2$ distinct copies. 
There are $Q = 6$ ramification points, corresponding to the six different cycles, and the Riemann-Hurwitz equation now gives ${\bf g} = 0$. So the covering surface associated with (\ref{DoubleCycle4pt}) is a sphere. There is a subtlety here when one or both of the cycles has length one. Say $n_2 = 1$ and $n_1 = n \geq 2$: the covering surface loses two ramification points, corresponding to $(n_2)_0$ and $(n_2)_\infty$, and we are left with four ramification points, as in (\ref{SingleCycle4pt}); but now the number of distinct copies is ${\bf s} = n + 1$, hence ${\bf g} = 0$. If both $n_1 = n_2 = 1$, there are only two ramification points, corresponding to the cycles $(2)_1$ and $(2)_u = (2)_1^{-1}$; hence ${\bf s} = 2$ and Eq.(\ref{RHur}) gives again ${\bf g} = 0$. \end{itemize} In summary: \emph{All genus-zero contributions to the four-point function (\ref{Gu}) are given by the double-cycle factorization (\ref{DoubleCycle4pt}).} \subsection{Sums over orbits} \label{SectSumoveOrb1} We have analyzed the factorization of individual terms in the sum over orbits (\ref{Gu}). Now we must explain how these factorizations are organized when we perform the summation. It is very instructive to look at specific examples of configurations of $[ \prod_i ( R^{\zeta_i}_{[n_i]})^{q_i} ]$; we give detailed examples in Appendix \ref{AppSumoveOrb1}. Here we develop the general argument. For now we omit the positions of the fields for brevity; they will always follow the order $\infty, 1, u, 0$, and can be read in the labels of the cycles. Start with fixing the cycles of the interaction operators, while summing over the orbits of the Ramond fields: \begin{equation} \label{dobsumorbGen} \Big\langle \Big[ \sum_{h \in S_N} \prod_i (R^{\zeta_i\dagger}_{h (n_i)_\infty h^{-1}})^{q_i} \Big] \; O^{(\rm{int})}_{(2)_1} \; O^{(\rm{int})}_{(2)_u} \; \Big[ \sum_{g \in S_N} \prod_i (R^{\zeta_i}_{g (n_i)_0 g^{-1}})^{q_i} \Big] \Big\rangle .
\end{equation} In each term of the (double) sum (\ref{dobsumorbGen}), the copies entering the cycles $(2)_1$ and $(2)_u$ will select a pair of single-cycle Ramond fields from the square brackets on the left, and another pair from the square brackets on the right. The remaining single-cycle Ramond fields, which do not share any copies with the interaction operators, will factorize out of the four-point function. Thus (\ref{dobsumorbGen}) becomes a sum of terms like (\ref{DoubleCycle4pt}). Since in the sum over orbits every Ramond field component will have its copies shuffled into every possible configuration, the fixed copies of $(2)_1$ and $(2)_u$ will select every possible pairing of fields at least once. In fact, the same pairs will be selected many times: there are many rearrangements of the copies inside their cycles whose overlap with $(2)_1$ and $(2)_u$ satisfies Eq.(\ref{compto1}). More precisely, the factorized terms will organize not as individual terms like (\ref{DoubleCycle4pt}), but rather as sums over orbits themselves, such as \begin{equation} \label{DoubleCycle4ptSm} \begin{split} \Big\langle \Big[ \sum_{h \in S_N} R^{\zeta_1\dagger}_{h (n_1)_\infty h^{-1}} R^{\zeta_2\dagger}_{h (n_2)_\infty h^{-1}} \Big] O^{(\rm{int})}_{(2)_1} O^{(\rm{int})}_{(2)_u} \Big[ \sum_{g \in S_N} R^{\zeta_1}_{g (n_1)_0 g^{-1}} R^{\zeta_2}_{g (n_2)_0 g^{-1}} \Big] \Big\rangle \\ \times \prod_s \sum_{h,g \in S_N} \Big\langle R^{\zeta_s\dagger}_{h (n_s)_\infty h^{-1}} \Big| R^{\zeta_s}_{g (n_s)_0 g^{-1}} \Big\rangle . \end{split} \end{equation} We now perform a sum over orbits for the cycles of the interactions, and divide by the factors $\mathscr C_{n_1 n_2}$, $\mathscr S_2$, etc.% % % \footnote{\label{FootSr}% See Eqs.(\ref{RnpmInv}) and (\ref{Rn1n2ppInv}); note that the factors are ``extensive'': ${\mathscr C}_{n_1n_2} = {\mathscr S}_{n_1} {\mathscr S}_{n_2}$, etc., so the normalization of the full Ramond field is obtained with $\prod {\mathscr S}_{n_i}$.
In (\ref{DoubleCycle4ptSmCl}), most of the ${\mathscr S}_{n_i}$ have gone into normalizing the two-point functions in the second line of (\ref{DoubleCycle4ptSm}), which are \emph{not} themselves normalized. } Eq.(\ref{dobsumorbGen}) becomes (\ref{Gu}), \begin{equation} \label{GuShor} G = \Big\langle \big[ \prod_i (R^{\zeta_i}_{[n_i]_\infty})^{q_i} \big]^\dagger O^{(\rm{int})}_{[2]_1} O^{(\rm{int})}_{[2]_u} \big[ \prod_i (R^{\zeta_i}_{[n_i]_0})^{q_i} \big] \Big\rangle , \end{equation} while (\ref{DoubleCycle4ptSm}) becomes \begin{equation} \label{DoubleCycle4ptSmCl} \Big\langle \big[ R^{\zeta_1}_{[n_1]_\infty} R^{\zeta_2}_{[n_2]_\infty} \big]^\dagger O^{(\rm{int})}_{[2]_1} O^{(\rm{int})}_{[2]_u} \big[ R^{\zeta_1}_{[n_1]_0} R^{\zeta_2}_{[n_2]_0} \big] \Big\rangle \equiv G^{n_1,n_2}_{\zeta_1,\zeta_2} , \end{equation} where we have used the fact that $\langle R^{\zeta_s\dagger}_{[n_s]_\infty} | R^{\zeta_s}_{[n_s]_0} \rangle = 1$.% % \footnote{See Footnote \ref{FootSr}.} % % In short, we have found that \begin{equation} \label{GuFact} G = \sum_{\text{pairings}} G^{n_j,n_k}_{\zeta_j,\zeta_k} , \end{equation} where the sum is over all possible pairings of single-cycle components of the multi-cycle Ramond field $[\prod_i (R^{\zeta_i}_{[n_i]})^{q_i}]$. For a generic configuration of the Ramond ground state, there will be many terms in the r.h.s.~of Eq.(\ref{GuFact}) which are the same function. For example, if the Ramond ground state is $[ (R^{+}_{[n]})^{N/n}]$, every pairing will give the same function $G^{n,n}_{+,+}$, and the r.h.s.~of Eq.(\ref{GuFact}) will result in ${\mathscr P}^2(N/n) G^{n,n}_{+,+}$, where ${\mathscr P}(q)$ is the number of ways of pairing $q$ objects. This number can be written as \begin{equation} \label{scrPq} {\mathscr P}(q) \equiv \frac{(2 \lfloor \frac12 q \rfloor)! }{ ( \lfloor \frac12 q \rfloor )! \ 2^{\lfloor \frac12 q \rfloor} } + 2 (\tfrac12 q - \lfloor \tfrac12 q \rfloor) \lfloor \tfrac12 q \rfloor , \end{equation} see Appendix \ref{AppSumoveOrb1}.
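For even $q$, the count of perfect pairings can be cross-checked by brute force against the closed form $(2m)!/(m!\,2^m)$ with $m = \frac12 q$ (cf.~the even-$q$ part of Eq.(\ref{scrPq})). The sketch below, with names of our own choosing, is purely illustrative.

```python
from math import factorial

def matchings(q):
    """Number of perfect pairings of q labeled objects (q even):
    pair the first object with any of the other q - 1, then recurse."""
    return 1 if q == 0 else (q - 1) * matchings(q - 2)

def pairings_closed_form(q):
    """Closed form (2m)!/(m! 2^m) with m = q // 2."""
    m = q // 2
    return factorial(2 * m) // (factorial(m) * 2 ** m)

for q in (0, 2, 4, 6, 8):
    assert matchings(q) == pairings_closed_form(q)
# e.g. q = 4: the 3 pairings are {12|34}, {13|24}, {14|23}
```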
Note that two functions $G^{n_1,n_2}_{\zeta_1,\zeta_2}$ and $G^{n_1',n_2'}_{\zeta_1',\zeta_2'}$ are equal iff $(n_1,\zeta_1) = (n_1',\zeta_1')$ and $(n_2,\zeta_2) = (n_2',\zeta_2')$. Let us write the most general Ramond field as \begin{subequations} \label{GenRam} \begin{flalign} && & \Big[ \prod_i ( R^+_{[n_i]})^{q^+_i} ( R^-_{[n_i]})^{q^-_i} ( R^{\dot1}_{[n_i]})^{q^{\dot1}_i} ( R^{\dot2}_{[n_i]})^{q^{\dot2}_i} \Big] = \Big[ \prod_i \prod_{\zeta = +,-,\dot1,\dot2} ( R^\zeta_{[n_i]})^{q^\zeta_i} \Big] , && \\ \text{where} && & \sum_i (q^+_i + q^-_i + q^{\dot1}_i + q^{\dot2}_i) n_i = \sum_i \sum_\zeta q^\zeta_i n_i = N . && \end{flalign} \end{subequations} We have thus shown that its four-point function with the interaction operator factorizes as \begin{align} \label{Gufact} \begin{split} & G(u,\bar u) = \Big\langle \Big[ \prod_{i,\zeta} ( R^\zeta_{[n_i]})^{q^\zeta_i} \Big]^\dagger \!\! (\infty,\bar \infty) \, O^{(\rm{int})}_{[2]}(1,\bar 1) \, O^{(\rm{int})}_{[2]} (u,\bar u) \Big[ \prod_{i,\zeta} ( R^\zeta_{[n_i]})^{q^\zeta_i} \Big] (0,\bar 0) \Big\rangle \\ & = \sum_{i>j} \sum_{\zeta, \zeta' \neq \zeta} (q_i^\zeta q_j^{\zeta'})^2 \Big\langle \big[ R^{\zeta}_{[n_i]} R^{\zeta'}_{[n_j]} \big]^\dagger (\infty,\bar \infty) \, O^{(\rm{int})}_{[2]}(1,\bar 1) \, O^{(\rm{int})}_{[2]} (u,\bar u) \big[ R^{\zeta}_{[n_i]} R^{\zeta'}_{[n_j]} \big] (0,\bar 0) \Big\rangle \\ & + \sum_{i} \sum_{\zeta, \zeta' \neq \zeta} (q_i^\zeta q_i^{\zeta'})^2 \Big\langle \big[ R^{\zeta}_{[n_i]} R^{\zeta'}_{[n_i]} \big]^\dagger (\infty,\bar \infty) \, O^{(\rm{int})}_{[2]}(1,\bar 1) \, O^{(\rm{int})}_{[2]} (u,\bar u) \big[ R^{\zeta}_{[n_i]} R^{\zeta'}_{[n_i]} \big] (0,\bar 0) \Big\rangle \\ & + \sum_{i,\zeta} {\mathscr P}^2(q_i^\zeta) \Big\langle \big[ R^{\zeta}_{[n_i]} R^{\zeta}_{[n_i]} \big]^\dagger (\infty,\bar \infty) \, O^{(\rm{int})}_{[2]}(1,\bar 1) \, O^{(\rm{int})}_{[2]} (u,\bar u) \big[ R^{\zeta}_{[n_i]} R^{\zeta}_{[n_i]} \big] (0,\bar 0) \Big\rangle \end{split} \end{align} where 
${\mathscr P}(q)$ is given by Eq.(\ref{scrPq}), and the factors $q^\zeta q^{\zeta'}$ are the number of ways of pairing a set of $q^\zeta$ objects with a set of $q^{\zeta'}$ different objects. The coefficients are squared because, if we have, e.g.~${\mathscr P}(q)$ pairings on the left, we also have ${\mathscr P}(q)$ different pairings on the right, hence ${\mathscr P}(q) \times {\mathscr P}(q)$ equivalent factorizations. Again, we refer to Appendix \ref{AppSumoveOrb1} for more details. \subsection{$N$-scaling} \label{SectNscaling} When restricted to correlators with ${\bf g} = 0$, every function appearing in the r.h.s.~of Eq.(\ref{Gufact}) is connected. Their scaling in the large-$N$ limit is given by Eq.(\ref{Ndep}), along with the analysis for double-cycle factorizations below Eq.(\ref{DoubleCycle4pt}). When $n_i > 1$ for all $q_i^\zeta \neq 0$, i.e.~when every component of the Ramond field (\ref{GenRam}) is twisted, then every correlator in the r.h.s.~of Eq.(\ref{Gufact}) is a function with $Q = 6$ ramification points, and from Eq.(\ref{Ndep}) we have that \begin{equation} G(u,\bar u) |_{{\bf g} = 0} \sim N^{-2} , \qquad \text{if} \quad n_i > 1 \quad \forall \quad q_i^\zeta \neq 0 . \end{equation} When exactly one of the cycles has length one, i.e.~when $n_k = 1$ with $q_k^{\zeta'} = 1$ and $n_i > 1$ for all $q_i^\zeta \neq 0$ and $i \neq k$, $\zeta\neq\zeta'$, then there will be terms like \begin{align*} \Big\langle \big[ R^{\zeta'}_{[1]_\infty} R^{\zeta}_{[n_i]_\infty} \big]^\dagger O^{(\rm{int})}_{[2]_1} O^{(\rm{int})}_{[2]_u} \big[ R^{\zeta'}_{[1]_0} R^{\zeta}_{[n_i]_0} \big] \Big\rangle_{{\bf g} = 0} &\sim N^{1 - {\bf g} - 2} = N^{-1} \end{align*} because these correlators have only $Q = 4$ ramification points. Since the other functions scale as $N^{-2}$, the components with the untwisted field dominate at large $N$.
When there are \emph{at least two} untwisted fields in the operator (\ref{GenRam}), there will be correlators pairing these two untwisted components; the covering surface associated with these correlators has only $Q = 2$ ramification points, the ones given by the cycles $(2)_1$ and $(2)_u$ of the interaction. Then Eq.(\ref{Ndep}) gives \begin{align*} \Big\langle \big[ R^{\zeta'}_{[1]_\infty} R^{\zeta}_{[1]_\infty} \big]^\dagger O^{(\rm{int})}_{[2]_1} O^{(\rm{int})}_{[2]_u} \big[ R^{\zeta'}_{[1]_0} R^{\zeta}_{[1]_0} \big] \Big\rangle_{{\bf g} = 0} &\sim N^{1 - {\bf g} - 1} = \mathrm{const.} \end{align*} So these functions do not scale with $N$ in the large-$N$ limit. (This is actually expected: these are simply two-point functions of $O^{(\rm{int})}_{[2]}$ in an untwisted Ramond state, and the normalization factor $\mathscr S_n(N)$ is such that two-point functions are $N$-independent.) In summary, we can sketch the $N$-scaling of a general four-point function as \begin{equation} G(u,\bar u) |_{{\bf g}=0} \sim \underbrace{ N^0 }_{\substack{ \text{Terms with} \\ \text{two untwisted} \\ \text{components}}} + \underbrace{ N^{-1} }_{\substack{ \text{Terms with} \\ \text{one twisted and} \\ \text{one untwisted} \\ \text{component}}} + \underbrace{ N^{-2} }_{\substack{ \text{Terms with} \\ \text{two twisted} \\ \text{components}}} \end{equation} Hence we can see that, because of the factorizations, for the four-point function (\ref{Gufact}) the genus expansion is not exactly the same thing as the large-$N$ expansion. In what follows, we will work at the genus-zero order, and evaluate all possible contributions to $G(u,\bar u)$. That is, we will compute every four-point function containing two interaction operators and two \emph{double-cycle} Ramond ground states.
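The summary above fits a single pattern: a genus-${\bf g}$ connected correlator with $Q$ ramification points scales as $N^{1-{\bf g}-Q/2}$. This pattern is an inference from the three genus-zero cases just listed (Eq.(\ref{Ndep}) itself is not reproduced here); a minimal bookkeeping sketch:

```python
def n_scaling_exponent(g, Q):
    """Power of N for a genus-g connected correlator with Q ramification
    points, assuming the pattern N^(1 - g - Q/2) inferred from the text."""
    return 1 - g - Q // 2

# Genus zero: two twisted components (Q = 6), one twisted and one untwisted
# component (Q = 4), two untwisted components (Q = 2).
assert [n_scaling_exponent(0, Q) for Q in (6, 4, 2)] == [-2, -1, 0]
```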
\section{Double-cycle connected four-point functions} \label{SectDoublCyclFunc} We have shown that the generic multi-cycle Ramond ground state four-point function (\ref{Gu}) factorizes into functions with only double-cycle composite Ramond fields as in Eq.(\ref{GuFact}). We are now going to compute the $S_N$-invariant functions \begin{equation} \label{Gcfull} G_{\zeta_1, \zeta_2}(u,\bar u) = \Big\langle \big[ R^{\zeta_1}_{[n_1]} R^{\zeta_2}_{[n_2]} \big]^\dagger (\infty,\bar \infty) \; O^{(\rm{int})}_{[2]}(1, \bar 1) \; O^{(\rm{int})}_{[2]} (u , \bar u) \; \big[ R^{\zeta_1}_{[n_1]} R^{\zeta_2}_{[n_2]} \big] (0,\bar 0) \Big\rangle \end{equation} for all possible combinations of $\zeta_i = \pm, \dot1, \dot2$, with ${\bf g} = 0$. (From now on, for the sake of simplicity, we drop the labels $n_1,n_2$ used in Eq.(\ref{DoubleCycle4ptSmCl}).) We are going to derive $G_{\zeta_1 \zeta_2}(u,\bar u)$ using two different methods: the so-called `stress tensor method' \cite{Dixon:1986qv} and the path-integral Lunin-Mathur (LM) construction \cite{Lunin:2000yv,Lunin:2001pw}. Both have been widely used in the context of the $\mathcal N = (4,4)$ $S_N$ orbifold; see e.g.~\cite{Arutyunov:1997gi,Arutyunov:1997gt,Pakman:2009zz,Pakman:2009ab,Pakman:2009mi,Lima:2020boh,Lima:2020kek,Lima:2020nnx,Lima:2020urq} for the former and \cite{Burrington:2012yq,Burrington:2012yn,Burrington:2015mfa,Burrington:2017jhh,Tormo:2018fnt,Burrington:2018upk,deBeer:2019ioe} for the latter. They provide complementary points of view of the same problem, and each has its advantages. The LM technique computes the four-point function directly, but it requires a complicated regularization of the path integral around the ramification points. The stress-tensor method makes use of conformal symmetry only, but it requires solving a first-order differential equation to obtain the four-point function.
\subsection{Covering map for connected functions} \label{SectCovMaps} The non-trivial monodromies of a twisted correlation function defined on the `base' Riemann sphere ${\mathbb S}^2_{\mathrm{base}}$ can be implemented \cite{Lunin:2000yv} by lifting ${\mathbb S}^2_{\mathrm{base}}$ to a ramified covering surface $\Sigma_c$, where there is only one copy of the $\mathcal N = (4,4)$ SCFT --- a single field on multiple sheets of $\Sigma_c$ being mapped to different-copy fields on ${\mathbb S}^2_{\mathrm{base}}$. We are interested in genus-zero covering surfaces, hence $\Sigma_c = {\mathbb S}^2_{\mathrm{cover}}$. This surface is defined by a covering map $\{z \in {\mathbb S}^2_{\mathrm{base}} \}\mapsfrom \{t \in {\mathbb S}^2_{\mathrm{cover}}\}$, with one ramification point for each twist on the base; for the connected four-point function, we have six points: $t_* = \{0, t_0, t_1, x\}$, the point at infinity, and the finite image $t_\infty$ of $z=\infty$, where \begin{subequations} \label{localzt} \begin{align} z(t) &= z_* + b_{t_*} (t-t_*)^{n_*} + \cdots , && t \to t_* \\ z(t) &= b_\infty t^{n_\infty} + \cdots, && t \to \infty \\ z(t) &= b_{t_\infty} (t - t_\infty)^{- n_{t_\infty}} + \cdots , && t \to t_\infty \end{align} \end{subequations} The coefficients $b_*$ enter the lifting of fermions, see Eqs.(\ref{rim}) below. For example, the lifting of $O^{(\rm{int})}_{[2]}$ is%
\footnote{%
Note that the contour integrals in (\ref{DeformwithMo}) disappear on the covering, because they simply pick up a residue. The power of $b_{t_*}$ also gets a contribution from the Jacobian of the transformation from $z$ to $t$. Finally, one must take into account the cocycles (on the covering surface) to obtain correct signs; see e.g.~\cite{Burrington:2012yq} for a detailed derivation. }%
\begin{equation} \begin{split} O^{(\rm{int})}(t_*,\bar t_*) = & 2 |b_{t_*}|^{-\frac54} \Big[ \, \colon \!
\partial X^{\dot 1 1} \, e^{+\frac{i}{2} (\phi_1 + \phi_2)} \left( \bar \partial X^{\dot 1 2} e^{+\frac{i}{2} (\tilde \phi_1 + \tilde \phi_2)} - (\bar \partial X^{\dot 1 1})^\dagger e^{- \frac{i}{2} (\tilde \phi_1 + \tilde \phi_2)} \right) \! \colon \, \\ - & \, \colon \! \partial X^{\dot 1 2} \, e^{+\frac{i}{2} (\phi_1 + \phi_2)} \left( (\bar \partial X^{\dot 1 2})^\dagger e^{- \frac{i}{2} (\tilde \phi_1 + \tilde \phi_2)} + \bar \partial X^{\dot 1 1} e^{+ \frac{i}{2} (\tilde \phi_1 + \tilde \phi_2)} \right) \! \colon \, \\ + & \, \colon \! (\partial X^{\dot 1 1})^\dagger e^{- \frac{i}{2} (\phi_1 + \phi_2)} \left( (\bar \partial X^{\dot 1 2})^\dagger e^{-\frac{i}{2} (\tilde \phi_1 + \tilde \phi_2)} + \bar \partial X^{\dot 1 1} e^{+\frac{i}{2} (\tilde \phi_1 + \tilde \phi_2)} \right) \! \colon \, \\ + & \, \colon \! (\partial X^{\dot 1 2})^\dagger \, e^{-\frac{i}{2} (\phi_1 + \phi_2)} \left( \bar \partial X^{\dot 1 2} e^{+ \frac{i}{2} (\tilde \phi_1 + \tilde \phi_2)} - (\bar \partial X^{\dot 1 1})^\dagger e^{- \frac{i}{2} (\tilde \phi_1 + \tilde \phi_2)} \right) \! \colon \, \Big] \end{split} \label{InteraOpera} \end{equation} where $(t_*,\bar t_*)$ are coordinates on the covering with $n_* =2$. We drop the twist index in the lifted operator $O^{(\rm{int})}$. For the fully-connected four-point function (\ref{Gcfull}), the covering surface ${\mathbb S}^2_{\mathrm{cover}}$ is given by the covering map \cite{Lima:2020nnx} \begin{equation} \label{coverm} z(t) = \left( \frac{t}{t_1}\right)^{n_1} \left( \frac{t-t_0}{t - t_\infty} \right)^{n_2} \left( \frac{t_1-t_\infty }{t_1-t_0} \right)^{n_2}. \end{equation} One can see that the two independent twist cycles $(n_1)_0$ and $(n_2)_0$ inserted at the same point $z = 0$ on the base are reproduced on the covering by two separate ramification points $t = 0$ and $t = t_0$. The same goes for the twists at $z = \infty$, lifted to $t = \infty$ and $t = t_\infty$. 
The twists $(2)_1$ and $(2)_u$ impose the condition of a vanishing derivative, $z'(t) \sim (t - t_1)(t-x)$, where we have defined the point $x \in {\mathbb S}^2_{\mathrm{cover}}$ by $u \equiv z(x)$. Thus $t_1$ and $x$ must be the solutions of the quadratic equation $z'(t) = 0$, which imposes two relations between the four parameters $t_0$, $t_\infty$, $t_1$ and $x$. We are free to choose one final relation between them, and then three of the parameters become functions of the remaining one. We choose the following \cite{Lima:2020nnx} parameterization in terms of the ``free-roving'' coordinate $x$, \begin{align} \label{tim} \begin{split} t_0 &= x-1 , \\ t_1 &= \frac{(x-1) (n_1+ n_2 x- n_2)}{n_1 + n_2 x} , \\ t_\infty &= x- \frac{n_2 x }{n_1+n_2 x} \end{split} \end{align} As in the case of the map chosen in \cite{Arutyunov:1997gt}, this parameterization is such that the function $u(x) \equiv z(x)$ is rational, \begin{equation} u(x) = \Bigg( \frac{x+ \frac{n_1}{n_2}}{x-1} \Bigg)^{n_1+n_2} \Bigg( \frac{x}{x - 1 + \frac{n_1}{n_2} } \Bigg)^{n_1-n_2} . \label{uxm} \end{equation} The obvious asymmetry between $n_1$ and $n_2$ in our parameterization of the covering surface was introduced in Eq.(\ref{coverm}), when we chose to lift $\sigma_{(n_1)}$ to the origin of ${\mathbb S}^2_{\mathrm{cover}}$. We could just as well have chosen the opposite, lifting $\sigma_{(n_2)}$ to the origin, and obtaining a different map, with $n_1$ and $n_2$ exchanged. These two maps are isomorphic \cite{Lima:2020nnx}, and, in any case, when we map the four-point functions back to the base sphere the results are symmetric in $n_1$ and $n_2$. The multiple inverses of (\ref{uxm}) cannot be found in closed form in general, but we can find the asymptotic functions $x^{u_*}_{\frak a}(u)$ near the points $u = u_*$ where there are coincidence limits in the four-point function (\ref{Gcfull}). As discussed above, we can assume that \begin{equation} n_1 > n_2 \end{equation} without loss of generality.
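Because the map (\ref{coverm}) and the parameterization (\ref{tim}) are rational, the properties claimed above can be cross-checked with exact rational arithmetic. The following sketch (with arbitrarily chosen sample values of $n_1$, $n_2$, $x$, not singled out by the text) verifies that $t_1$ and $x$ are the two zeros of $z'(t)$, that $z(t_1)=1$, and that $u = z(x)$ reproduces Eq.(\ref{uxm}):

```python
from fractions import Fraction as F

# Arbitrary sample values for the check (not singled out by the text)
n1, n2 = 3, 2
x = F(2)

# Parameterization of Eq. (tim)
t0 = x - 1
t1 = (x - 1)*(n1 + n2*x - n2)/(n1 + n2*x)
tinf = x - n2*x/(n1 + n2*x)

def z(t):
    """Covering map of Eq. (coverm)."""
    return (t/t1)**n1 * ((t - t0)/(t - tinf))**n2 * ((t1 - tinf)/(t1 - t0))**n2

def dlogz(t):
    """Logarithmic derivative z'(t)/z(t)."""
    return n1/t + n2/(t - t0) - n2/(t - tinf)

# t1 and x are the two zeros of z'(t), as the twists (2)_1 and (2)_u require
assert dlogz(t1) == 0 and dlogz(x) == 0

# z(t1) = 1 puts one twist-2 insertion at the base point z = 1
assert z(t1) == 1

# u = z(x) agrees with the rational function of Eq. (uxm)
nu = F(n1, n2)
u = ((x + nu)/(x - 1))**(n1 + n2) * (x/(x - 1 + nu))**(n1 - n2)
assert z(x) == u
```

Since all quantities are exact `Fraction`s, the equalities hold identically rather than to floating-point tolerance.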
Then for $u \to u_* = 0$, we get the two functions \begin{subequations} \label{xfraka} \begin{equation} (u \to 0) \quad \begin{sqcases} x \to 0, & x^0_{\frak1}(u) \approx \left( 1 - \tfrac{n_1}{n_2} \right) \left( \tfrac{n_1}{n_2} \right)^{- \frac{n_1+n_2}{n_1-n_2}} u^{\frac{1}{n_1-n_2}} + \cdots \\ x \to - \tfrac{n_1}{n_2} , &x^0_{\frak2}(u) \approx - \tfrac{n_1}{n_2} + \left(1 + \tfrac{n_1}{n_2} \right) \left(\tfrac{n_1}{n_2} \right)^{- \frac{n_1-n_2}{n_1+n_2}} u^{\frac{1}{n_1+n_2}} \end{sqcases} \label{xto0minnuxComp} \end{equation} When $u \to u_* =1$, we have $x = \infty$ and $x = -\tfrac{n_1-n_2 }{2n_2}$; the behavior of $u(x)$ near $x = \infty$ can be found with the conformal transformation $x = 1 / \varepsilon$, \begin{align} (u \to 1) \quad \begin{sqcases} x \to \infty , %
& x^1_{\frak1}(u) \approx - 4n_1 (1-u)^{-1} \\ \\ x \to - \tfrac{n_1-n_2 }{2n_2} , & x^1_{\frak2}(u) \approx - \tfrac{n_1-n_2 }{2n_2} \\ &\quad\quad\quad + 3^{\frac13} 2^{-2} (n_1-n_2)^{\frac23} (n_1 + n_2)^{\frac23} n_1^{-\frac13} n_2^{-\frac43} (1 - u)^{\frac{1}{3}} \end{sqcases} \label{xaxp4nsComp} \end{align} \end{subequations} \subsection{Four-point functions} \label{SectStressTensor} In the LM approach \cite{Lunin:2000yv,Lunin:2001pw}, we compute the connected functions $G_{\zeta_1\zeta_2}$ as a path integral. Choosing fiducial metrics $ds^2_{\mathrm{base}} = dz d\bar z$ and $ds^2_{\mathrm{cover}} = dt d \bar t$ on the two spheres, the covering map relates the base and the covering surfaces by a Weyl transformation \begin{equation} ds^2_{\mathrm{base}} = e^\Phi ds^2_{\mathrm{cover}} , \end{equation} so the path integrals on each surface are related, \begin{flalign} && &G_{\zeta_1\zeta_2} \big(x,\bar x) |_{(\mathrm{base})} = e^{S_L} \; G_{\zeta_1\zeta_2}^{(\mathrm{cover})} (x, \bar x), \label{CoverBaseG} && \\ \text{where} && & S_L = \frac{6}{96 \pi} \int \!
d^2t \sqrt{g_{\mathrm{cover}}} \Big[ g_{\mathrm{cover}}^{ab} \partial_a \Phi \partial_b \Phi + 2 \Phi R(g_{\mathrm{cover}}) \Big] && \end{flalign} is the Liouville action \cite{Friedan:1982is}, and $G_{\zeta_1,\zeta_2}^{(\mathrm{cover})}(x,\bar x)$ is the untwisted correlator of the fields lifted to the covering surface (where we drop twist indices of operators), \begin{equation} \label{GcoverxMain} \begin{split} &G_{\zeta_1,\zeta_2}^{\mathrm{(cover)}}(x,\bar x) \\ & = \Big\langle R^{\zeta_1 \dagger}(\infty,\bar \infty) R^{\zeta_2 \dagger} (t_\infty, \bar t_\infty) O^{(\rm{int})} (t_1,\bar t_1) O^{(\rm{int})} (x, \bar x) R^{\zeta_1} (t_0,\bar t_0) R^{\zeta_2}(0,\bar 0) \Big\rangle , \end{split} \end{equation} Note that the connected four-point functions on the base lift to six-point functions on the covering, since the composite operator $[ R^{\zeta_1}_{[n_1]} R^{\zeta_2}_{[n_2]}]$ is lifted to two operators at different points by the covering map. The functions at the r.h.s.~of Eq.(\ref{CoverBaseG}) are naturally parameterized by $x$, hence the base-sphere function at the l.h.s.~appears parameterized by $x$ instead of $u$. The inverse maps $x_{\frak a}(u)$ encode the restoration of the twisted boundary conditions of the base-sphere function, \begin{align} G_{\zeta_1\zeta_2}(u,\bar u) |_{\bf g = 0} &= \frac{\varpi(n_1 n_2)}{N^2} \sum_{\frak a = \frak1}^{2 \max(n_1,n_2)} G_{\zeta_1\zeta_2} ( x_{\frak a}(u),\bar x_{\frak a}(\bar u)) . \label{Guufromx} \end{align} Here $2 \max(n_1, n_2)$ is the number of inverses of $u(x)$, i.e.~the number of solutions $x_{\frak a}(u)$ of $u(x) = u_*$ for a general $u_*$; $\varpi$ is a combinatoric factor; see Appendix B of Ref.\cite{Lima:2020nnx} for a detailed discussion. Let us now compute the r.h.s.~of Eq.(\ref{CoverBaseG}). The Liouville factor depends only on the twists. 
The Liouville field $\Phi = \log | dz/dt |^2$ must be carefully regularized around the ramification points (\ref{localzt}), and the Liouville action is fixed by the local behavior% % \footnote{% See Eq.(D.63) of Ref.\cite{Avery:2010qw} for a formula like (\ref{SL}), with twists inserted at infinity. } % % \cite{Lunin:2000yv} \begin{align} \label{SL} \begin{split} S_L = - \frac{1}{2} &\Bigg[ \sum_* \frac{n_*-1}{n_*} \log | b_*| + \frac{n_{t_\infty} +1}{n_{t_\infty}} \log |b_{t_\infty}| - \frac{n_\infty - 1}{n_\infty} \log | b_{\infty}| \\ &\quad + \sum_* (n_* -1) \log n_* - (n_{t_\infty} +1) \log n_{t_\infty} - (n_\infty + 3) \log n_\infty \\ &\quad + \text{Regularization factors} \Bigg] . \end{split} \end{align} The coefficients in (\ref{localzt}) have to be expressed as functions of $x$ (which requires knowledge of the complete function $z(t)$). This is easily found by expanding (\ref{coverm}), \begin{subequations} \label{bofx} \begin{align} b_0 &= x^{-n_2} (x-1)^{-n_1} (x + \tfrac{n_1}{n_2})^{n_1 + n_2} (x + \tfrac{n_1}{n_2} - 1)^{-n_1} \\ b_{t_0} &= (- \tfrac{n_1}{n_2} )^{-n_2} (x-1)^{-n_2} (x + \tfrac{n_1}{n_2})^{n_1 + n_2} (x + \tfrac{n_1}{n_2} - 1)^{n_2-n_1} \\ b_{t_1} &= - n_1 (x-1)^{-2} (x + \tfrac{n_1}{n_2})^{2} (x + \tfrac{n_1}{n_2} - 1)^{-2} (x + \tfrac{n_1 - n_2}{2n_2}) \\ \begin{split} b_{x} &= n_1 x^{n_1-n_2-2} (x-1)^{-(n_1+n_2)} (x + \tfrac{n_1}{n_2})^{n_1+n_2} \\ &\qquad\qquad\qquad\qquad\quad \times (x + \tfrac{n_1}{n_2} - 1)^{n_2-n_1} (x + \tfrac{n_1 - n_2}{2n_2}) \end{split} \\ b_{t_\infty} &= (\tfrac{n_1}{n_2} )^{n_2} x^{n_1} (x-1)^{-(n_1+n_2)} (x + \tfrac{n_1}{n_2})^{-n_2} (x + \tfrac{n_1}{n_2} - 1)^{n_2} \\ b_{\infty} &= (-1)^{n_2} (x-1)^{-(n_1+n_2)} (x + \tfrac{n_1}{n_2})^{n_1} (x + \tfrac{n_1}{n_2} - 1)^{n_2-n_1} \end{align} \end{subequations} We can now find $e^{S_L}$ inserting (\ref{bofx}) into the Liouville action (\ref{SL}), \begin{align} \label{SLofx} \begin{split} S_L(x) &= - \frac{ 2 n_2^2 + (2+3 n_2) (n_1 - n_2) n_1 }{ 4 n_1 n_2} 
\log |x| \\ &\quad + \frac{ 2 n_2^2 + (2+3 n_2) (n_1 + n_2) n_1 }{ 4 n_1 n_2} \log | x-1 | \\ &\quad + \frac{ 2 n_2^2 + (2-3 n_2) (n_1 + n_2) n_1 }{ 4 n_1 n_2} \log | x + \tfrac{n_1}{n_2} | \\ &\quad - \frac{ 2 n_2^2 + (2-3 n_2) (n_1 - n_2) n_1 }{ 4 n_1 n_2} \log | x + \tfrac{n_1}{n_2} -1 | \\ &\quad - \frac{1}{2} \log | x + \tfrac{n_1-n_2}{2n_2} | + \frac{3-n_2}{4} \log n_1 + 2 \log n_2 \end{split} \end{align} The regularization factors of (\ref{SL}), omitted in (\ref{SLofx}), are universal and can be absorbed into a definition of the twist fields via the path integral. The last two $x$-independent terms can also be changed by different normalizations of the twist fields, so we will replace them by an ($x$-independent) constant in what follows. After these rescalings, the non-trivial terms in the Liouville factor give the correlator of bare twists: \begin{equation} \label{gsig} \begin{split} \Big\langle [ \sigma_{[n_1]} \sigma_{[n_2]} ] (\infty, \bar \infty) \, \sigma_{[2]}(1, \bar 1) \, \sigma_{[2]}(u,\bar u) \, [ \sigma_{[n_1]} \sigma_{[n_2]} ] (0,\bar 0) \Big\rangle \qquad\qquad \\ = \frac{\varpi(n_1 n_2)}{N^2} \sum_{\frak a = \frak1}^{2 \max(n_1,n_2)} \exp S_L ( x_{\frak a}(u),\bar x_{\frak a}(\bar u)) . \end{split} \end{equation} The function (\ref{GcoverxMain}) does, of course, depend on the specific Ramond fields. For clarity, we now proceed by working with the composite operator with one R-neutral and one R-charged ground state, namely $[ R^{\dot1}_{[n_1]} R^{+}_{[n_2]} ]$, and then we state the final result for the other double-cycle fields. A general computation is given in Appendix \ref{General4pt}.
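The entries of Eq.(\ref{bofx}) at the two simple critical points can likewise be checked exactly: at a zero of $z'(t)$ one has $b_{t_*} = \tfrac12 z(t_*)\,(\log z)''(t_*)$, which uses only the map (\ref{coverm}) and the parameterization (\ref{tim}). A sketch with arbitrary sample values (not fixed by the text):

```python
from fractions import Fraction as F

# Arbitrary sample values for the check
n1, n2 = 3, 2
x = F(2)
nu = F(n1, n2)

# Parameterization of Eq. (tim)
t0 = x - 1
t1 = (x - 1)*(n1 + n2*x - n2)/(n1 + n2*x)
tinf = x - n2*x/(n1 + n2*x)

def z(t):
    """Covering map of Eq. (coverm)."""
    return (t/t1)**n1 * ((t - t0)/(t - tinf))**n2 * ((t1 - tinf)/(t1 - t0))**n2

def d2logz(t):
    """(log z)''(t), from z'/z = n1/t + n2/(t - t0) - n2/(t - tinf)."""
    return -n1/t**2 - n2/(t - t0)**2 + n2/(t - tinf)**2

# At a simple critical point t_*, z = z(t_*) + b_{t_*}(t - t_*)^2 + ...;
# since (log z)'(t_*) = 0 there, b_{t_*} = z(t_*) (log z)''(t_*)/2.
b_t1 = z(t1)*d2logz(t1)/2
b_x = z(x)*d2logz(x)/2

# Closed forms quoted in Eq. (bofx)
b_t1_paper = -n1*(x - 1)**-2*(x + nu)**2*(x + nu - 1)**-2*(x + (nu - 1)/2)
b_x_paper = (n1*x**(n1 - n2 - 2)*(x - 1)**-(n1 + n2)*(x + nu)**(n1 + n2)
             *(x + nu - 1)**(n2 - n1)*(x + (nu - 1)/2))

assert b_t1 == b_t1_paper and b_x == b_x_paper
```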
Fermionic exponentials inserted at the ramification points, when lifted to the covering surface, carry a factor of $b$, see \cite{Lunin:2001pw}; thus \begin{align} R^{\dot1\dagger}(\infty) R^{+\dagger} (t_\infty) &= b_\infty^{\frac{1}{4n_1}} b_{t_\infty}^{\frac{1}{4n_2}} e^{ \frac{i}{2} [ \phi_{1} (\infty) + \phi_{2} (\infty) ] } \ e^{ - \frac{i}{2} [ \phi_{1}(t_\infty) - \phi_{2}(t_\infty) ] } , \\ R^{\dot1}(0) R^{+} (t_0) &= b_0^{-\frac{1}{4n_1}} b_{t_0}^{-\frac{1}{4n_2}} e^{ - \frac{i}{2} [ \phi_{1} (0) + \phi_{2} (0) ] } \ e^{ \frac{i}{2} [ \phi_{1}(t_0) - \phi_{2}(t_0) ] } . \end{align} The lifted interaction operators (\ref{InteraOpera}) also carry factors $b_{t_1}$ and $b_x$. Bosonic currents $\partial X^{A\dot A}_I(z)$, $\bar \partial X^{A\dot A}_I(\bar z)$ simply lift to $\partial X^{A\dot A}(t)$, $\bar \partial X^{A\dot A}(\bar t)$. The product of two interaction operators has several terms, but many of them do not contribute to the four-point function: the bosonic currents factorize as two-point functions, all of which vanish except for the ones in (\ref{twopntboconj}), hence \begin{equation} \label{Gbosn} \begin{split} G_{\dot1 +}^{\mathrm{(cover)}} &= 4 \left| b_\infty^{\frac{1}{4n_1}} b_{t_\infty}^{\frac{1}{4n_2}} b_{t_1}^{- \frac{5}{8}} b_{x}^{- \frac{5}{8}} b_0^{-\frac{1}{4n_1}} b_{t_0}^{-\frac{1}{4n_2} } \right|^2 \left| \frac{2}{ (t_1 - x)^{2} } \right|^2 \times G^F_{\dot1+} . \end{split} \end{equation} The remaining factor, $G^F_{\dot1+}$, involves only the fermionic exponentials in the terms that did not vanish due to the bosonic factors.
They group into two different contributions: \begin{equation} \label{GFIIImain} \begin{split} G^F_{\dot1+} = \Big\langle & e^{ \frac{i}{2} [ \phi_{1} + \phi_{2} ] } (\infty) e^{ \frac{i}{2} [ \tilde\phi_{1} + \tilde\phi_{2} ] } (\bar\infty) e^{- \frac{i}{2} [ \phi_{1} - \phi_{2} ]} (t_\infty) e^{- \frac{i}{2} [ \tilde\phi_{1} - \tilde\phi_{2} ]} (\bar t_\infty) \\ &\quad \times \Big(I + II \Big) e^{- \frac{i}{2} [ \phi_{1} + \phi_{2} ] } (0) e^{- \frac{i}{2} [ \tilde\phi_{1} + \tilde\phi_{2} ] } (\bar0) e^{\frac{i}{2}[ \phi_{1} - \phi_{2}]} (t_0) e^{\frac{i}{2}[ \tilde\phi_{1} - \tilde\phi_{2}]} (\bar t_0) \Big\rangle \end{split} \end{equation} where \begin{align} \begin{split} \label{ints} I &= 2 e^{- \frac{i}{2}(\phi_1 + \phi_2)}(t_1) \, e^{ \frac{i}{2}(\phi_1 + \phi_2 )} (x) \\ &\quad \times \bigg[ e^{ \frac{i}{2}(\tilde\phi_1 + \tilde\phi_2)} (\bar t_1) \, e^{- \frac{i}{2}(\tilde\phi_1 + \tilde\phi_2)} (\bar x) + e^{- \frac{i}{2}(\tilde\phi_1 + \tilde\phi_2)} (\bar t_1) \, e^{ \frac{i}{2}(\tilde\phi_1 + \tilde\phi_2)} (\bar x) \bigg] \\ II &= 2 e^{ \frac{i}{2}(\phi_1 + \phi_2)} ( t_1) \, e^{- \frac{i}{2}(\phi_1 + \phi_2)} (x) \\ & \quad \times \bigg[ e^{- \frac{i}{2}(\tilde\phi_1 + \tilde\phi_2)} (\bar t_1) \, e^{ \frac{i}{2}(\tilde\phi_1 + \tilde\phi_2)} (\bar x) + e^{ \frac{i}{2}(\tilde\phi_1 + \tilde\phi_2)} (\bar t_1) e^{- \frac{i}{2}(\tilde\phi_1 + \tilde\phi_2)} (\bar x) \bigg] \end{split} \end{align} Explicit computation of the correlators gives \begin{align} \label{GFfinalMain} \begin{split} &G^F_{\dot1+} = 2 \Bigg| ( t_\infty - t_0 )^{ - \frac12 } ( t_1 - x)^{- \frac{1}{2} } \Big[ %
\Big( \frac{x}{t_1} \Big)^{ \frac{1}{2} } + %
\Big( \frac{x}{t_1} \Big)^{ - \frac{1}{2} } \Big] \Bigg|^2 \end{split} \end{align} Combining Eqs.(\ref{GFfinalMain}), (\ref{Gbosn}), and (\ref{SLofx}), and using Eqs.(\ref{tim}) and (\ref{bofx}), we get \begin{align} \begin{split} G_{\dot1+}(x,\bar x) |_{\mathrm{base}} = \Bigg| C \ \frac{ x^{1-n_1+n_2} (x-1)^{1+n_1+n_2} ( x+
\frac{n_1}{n_2} )^{ 1 - n_1 - n_2 } (x -1 + \frac{n_1}{n_2} )^{ 1 + n_1 - n_2 } }{ ( x + \frac{n_1-n_2}{2n_2} )^4 } & \\ \times \Big[ (x-1)(x - 1 + \tfrac{n_1}{n_2}) + x ( x + \tfrac{n_1}{n_2} ) \Big] \Bigg|^2 . & \end{split} \end{align} The computations for the other operators are very similar. From now on, we drop the indication $|_{\mathrm{base}}$ of the correlation functions, which for all double-cycle fields can be expressed as \begin{equation} G_{\zeta_1\zeta_2}(x,\bar x) = \big| G_{\zeta_1\zeta_2}(x) \big|^2 , \end{equation} where \begin{align} \label{Gm21p} \begin{split} &G_{\dot1 +}(x) = C \ \frac{ x^{1-n_1+n_2} (x-1)^{1+n_1+n_2} ( x+ \frac{n_1}{n_2} )^{ 1 - n_1 - n_2 } (x -1 + \frac{n_1}{n_2} )^{ 1 + n_1 - n_2 } }{ ( x + \frac{n_1-n_2}{2n_2} )^4 } \\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad \ \; \times \Big[ (x-1)(x - 1 + \tfrac{n_1}{n_2}) + x ( x + \tfrac{n_1}{n_2} ) \Big] \\ &G_{\dot1-} (x) = G_{\dot1+}(x) \end{split} \end{align} \begin{align} \label{G2112} \begin{split} G_{\dot1\dot2}(x) = C \ \frac{ x^{1-n_1+n_2} (x-1)^{1+n_1+n_2} ( x+ \frac{n_1}{n_2} )^{ 1 - n_1 - n_2 } (x -1 + \frac{n_1}{n_2} )^{ 1 + n_1 - n_2 } }{ ( x + \frac{n_1-n_2}{2n_2} )^4 } & \\ \times \Big[ x^2 + \left( x -1 + \tfrac{n_1}{n_2} \right)^2 \Big] & \end{split} \end{align} \begin{align} \label{G2211} \begin{split} G_{\dot1\dot1}(x) = C \frac{ x^{1-n_1+n_2} (x-1)^{1+n_1+n_2} ( x+ \frac{n_1}{n_2} )^{ 1 - n_1 - n_2 } (x -1 + \frac{n_1}{n_2} )^{ 1 + n_1 - n_2 } }{ ( x + \frac{n_1-n_2}{2n_2} )^4 } & \\ \times \Big[ (x - 1)^2 + \left( x + \tfrac{n_1}{n_2} \right)^2 \Big] & \end{split} \end{align} \begin{align} \label{Gmpmp} \begin{split} G_{+-}(x) = 2 C \frac{ x^{2-n_1+n_2} (x-1)^{1+n_1+n_2} ( x+ \frac{n_1}{n_2} )^{ 1 - n_1 - n_2 } (x -1 + \frac{n_1}{n_2} )^{ 2 + n_1 - n_2 } }{ ( x + \frac{n_1-n_2}{2n_2} )^4 } \end{split} \end{align} \begin{align} \label{Gmmpp} \begin{split} G_{++}(x) = C \frac{ x^{1-n_1+n_2} (x-1)^{2+n_1+n_2} ( x+ \frac{n_1}{n_2} )^{ 2 - n_1 - n_2 } (x -1 + 
\frac{n_1}{n_2} )^{ 1 + n_1 - n_2 } }{ ( x + \frac{n_1-n_2}{2n_2} )^4 } \end{split} \end{align} We can find all these expressions from formula (\ref{Gcofx}) by using the parameters in Table \ref{ZNSR}. These functions exhaust all possible combinations of two twisted Ramond fields; recall that $G_{\dot2+}(x,\bar x) = [G_{\dot1-}(x,\bar x)]^* = G_{\dot1-}(x,\bar x)$, $G_{\dot2\dot2}(x,\bar x) = G_{\dot1\dot1}(x,\bar x)$, etc. Formulae (\ref{Gmpmp}) and (\ref{Gmmpp}) were derived in \cite{Lima:2020nnx} using the stress-tensor method \cite{Dixon:1986qv,Arutyunov:1997gi,Arutyunov:1997gt,Pakman:2009ab,Pakman:2009zz} instead of the Lunin-Mathur technique. These two functions are simpler than the others (the sum of terms in square brackets is absent) because of a convenient cancellation of terms, so their derivation cannot be immediately generalized. It is instructive to have a general computation using the stress-tensor method, and this is given in the Appendix, in \S\ref{AppStressTensor}. Here let us briefly outline the procedure. In the stress-tensor method \cite{Dixon:1986qv}, there is no reference to the path integral: we use only the conformal Ward identity to derive a first-order differential equation, \begin{equation} \label{methodbaseMain} \begin{split} & \partial_u \log G_{\zeta_1\zeta_2}(u) \\ & = \underset{z = u}{ \mathrm{Res} } \, \frac{ \big\langle T(z) [ R^{(\zeta_1)\dagger}_{[n_1]} R^{(\zeta_2)\dagger}_{[n_2]} ] (\infty,\bar \infty) O^{(\rm{int})}_{[2]} (1,\bar 1) O^{(\rm{int})}_{[2]} (u,\bar u) [ R^{(\zeta_1)}_{[n_1]} R^{(\zeta_2)}_{[n_2]} ] (0,\bar 0) \big\rangle }{ \big\langle [ R^{(\zeta_1)\dagger}_{[n_1]} R^{(\zeta_2)\dagger}_{[n_2]} ] (\infty,\bar \infty) O^{(\rm{int})}_{[2]} (1,\bar 1) O^{(\rm{int})}_{[2]} (u,\bar u) [ R^{(\zeta_1)}_{[n_1]} R^{(\zeta_2)}_{[n_2]} ] (0,\bar 0) \big\rangle } .
\end{split} \end{equation} Again, the covering map can be used as a facilitator for dealing with the monodromies \cite{Arutyunov:1997gi,Arutyunov:1997gt,Pakman:2009zz,Pakman:2009ab,Pakman:2009mi}. To find the r.h.s.~of Eq.(\ref{methodbaseMain}) we compute the equivalent function on the covering, namely \begin{equation}\label{methodcovChCompMain} \begin{split} & F^{\zeta_1\zeta_2}_{\mathrm{cover}} (t) = \frac{ \big\langle T(t) R^{(\zeta_1)\dagger}(\infty) R^{(\zeta_2)\dagger} (t_\infty) O^{(\rm{int})} (t_1,\bar t_1) O^{(\rm{int})} (x,\bar x) R^{(\zeta_1)}(0) R^{(\zeta_2)} (t_0) \big\rangle }{ \big\langle R^{(\zeta_1)\dagger}(\infty) R^{(\zeta_2)\dagger} (t_\infty) O^{(\rm{int})} (t_1,\bar t_1) O^{(\rm{int})} (x,\bar x) R^{(\zeta_1)}(0) R^{(\zeta_2)} (t_0) \big\rangle } . \end{split} \end{equation} The function in the denominator is simply $G_{\zeta_1\zeta_2}^{\mathrm{(cover)}}(x,\bar x)$ in Eq.(\ref{GcoverxMain}). A crucial difference from the LM method is that here the factors $b_*$ are irrelevant, as they cancel in the fraction. Moreover, $G_{\zeta_1\zeta_2}^{\mathrm{(cover)}}(x,\bar x)$ itself often cancels in the fraction. One must compute the contraction of the operators in the numerator with $T(z)$, and manipulate the result conveniently; see Refs.\cite{Lima:2020boh,Lima:2020kek,Lima:2020nnx,Lima:2020urq}. For example, for the field $[ R^{\dot1}_{[n_1]} R^{+}_{[n_2]} ]$ the final result is \begin{align} \begin{split} F^{\dot1+}_{\mathrm{cover}} (t) &= \frac{(t_1 - x)^2}{(t-t_1)^2 (t - x)^2} + \frac{1}{4} \Bigg[ \left( \frac{1}{t} \right)^2 + \left( \frac{1}{t - t_0} - \frac{1}{t-t_\infty} \right)^2 \\ &\qquad\qquad\qquad + \left( \frac{1}{t - t_1} - \frac{1}{t-x} \right)^2 + \frac{2(t_1 - x)^2}{t ( t - t_1) (t - x) (t_1 + x)} \Bigg] . \end{split} \end{align} Once we have $F^{\zeta_1\zeta_2}_{\mathrm{cover}} (t)$, we go back to the base surface by mapping $t \mapsto z$.
The stress-tensor transforms with the Schwarzian derivative $\{t, z\}$, and one must sum over the different copies/sheets of the covering. Near the point $z = u$, where there is a twist-two operator, there are two copies, so \begin{equation} \begin{split} \partial_u \log G_{\zeta_1\zeta_2}(u) & = 2 \underset{z = u}{ \mathrm{Res} } \Bigg[ \{t,z\} + \left( \frac{dt}{dz} \right)^2 F^{\zeta_1\zeta_2}_{\mathrm{cover}}(t(z)) \Bigg] . \end{split} \end{equation} The asymptotic form of the map $t(z)$ can be found in Ref.\cite{Lima:2020nnx}. The r.h.s.~is expressed as a function of $x$, which appears as a parameter in $t_1,t_0,t_\infty$, etc. Hence, rather than solving for $G_{\zeta_1\zeta_2}(u,\bar u)$ directly, we can only solve the differential equation after a change of coordinates, \begin{equation} \begin{split} \partial_x \log G_{\zeta_1\zeta_2}(x) & = 2 \left( \frac{du}{dx} \right) \underset{z = u}{ \mathrm{Res} } \Bigg[ \{t,z\} + \left( \frac{dt}{dz} \right)^2 F^{\zeta_1\zeta_2}_{\mathrm{cover}}(t(z)) \Bigg] \end{split} \end{equation} where $u(x)$ is given in Eq.(\ref{uxm}). Integration immediately gives the functions (\ref{Gm21p})-(\ref{Gmmpp}), with $C$ an integration constant. Repeating the procedure with $\tilde T(\bar z)$, we obtain the anti-holomorphic part $\bar G(\bar x)$, whence $G(x,\bar x) = G(x) \bar G(\bar x)$. \section{Dynamics and OPE limits} \label{SectOPEs} We can obtain the fusion rules and the structure constants $C_{12k}$ from the OPEs of fields $\mathscr O_k(u)$ with dimensions $\Delta_k$, \begin{equation} \mathscr O_1 (u,\bar u) \mathscr O_2(0,\bar 0) = \sum_k C_{12k} \, |u|^{\Delta_k - \Delta_1 - \Delta_2} \mathscr O_k(0,\bar 0) + \text{descendants} . \label{OPE} \end{equation} Charges are conserved in the OPE: $j^3_k = j^3_1+ j^3_2$.
If we normalize the operators such that \begin{equation} \big\langle \mathscr O_\ell^\dagger (u,\bar u) \mathscr O_k(0,\bar 0) \big\rangle = \delta_{k\ell} |u|^{-2 \Delta_k} , \quad \text{i.e.} \quad \big\langle \mathscr O_\ell^\dagger (1,\bar 1) \mathscr O_k(0,\bar 0) \big\rangle = \delta_{k\ell}, \end{equation} then taking the bracket of the OPE (\ref{OPE}) with $\mathscr O_k^\dagger$ we find that the coefficient of the leading term in the expansion, i.e.~the structure constant, can be written as \begin{equation} C_{12k} = \big\langle \mathscr O_k^\dagger (\infty,\bar \infty) \mathscr O_1(1,\bar 1) \mathscr O_2(0,\bar 0)\big\rangle \equiv \big\langle \mathscr O_k^\dagger \mathscr O_1\mathscr O_2 \big\rangle . \end{equation} We can obtain this conformal data from the connected four-point functions computed in Sect.\ref{SectDoublCyclFunc}. \subsection{Different twists $n_1 \neq n_2$} \begin{table}[t] \begin{center} \begin{tabular}{r|| c c c c } {$G_{\zeta_1\zeta_2}(x)$} &{ $G_{\dot1\pm}$} &{ $G_{\dot1\dot2}$} &{ $G_{\dot1\dot1}$} &{ $G_{+-}$} \\ \hline \hline { $A_{\zeta_1\zeta_2}$ } & $\frac{n_2 - n_1}{2n_2}$ & $ \frac{(n_1 - n_2)^2}{2n_2^2}$ & $ \frac{n_1^2 + n_2^2}{2n_2^2}$ & 0 \\ Eq. & (\ref{Gm21p}) & (\ref{G2112}) & (\ref{G2211}) & (\ref{Gmpmp}) \end{tabular} \caption{Values of $A_{\zeta_1\zeta_2}$ in the different four-point functions with neutral fields.} \label{AGEqs} \end{center} \end{table} We are interested in four-point functions \begin{equation} \label{Gcfull2} G_{\zeta_1, \zeta_2}(u,\bar u) = \Big\langle \big[ R^{\zeta_1}_{[n_1]} R^{\zeta_2}_{[n_2]} \big]^\dagger (\infty,\bar \infty) \; O^{(\rm{int})}_{[2]}(1, \bar 1) \; O^{(\rm{int})}_{[2]} (u , \bar u) \; \big[ R^{\zeta_1}_{[n_1]} R^{\zeta_2}_{[n_2]} \big] (0,\bar 0) \Big\rangle \end{equation} from which the OPEs between Ramond fields and interaction operators can be found by taking coincidence limits $u \to 0,1,\infty$, where the inverses of the map $u(x)$ are given by Eqs.(\ref{xfraka}). 
The functions (\ref{Gm21p})-(\ref{Gmpmp}) can all be written in the following way: \begin{subequations} \label{GcompNeuGen} \begin{align} & G_{\zeta_1\zeta_2}(x) = \Big[ A_{\zeta_1\zeta_2} + x(x - 1 + \tfrac{n_1}{n_2}) \Big] \mathcal F(x) \\ & \mathcal F(x) = C \frac{ x^{1-n_1+n_2} (x-1)^{1+n_1+n_2} ( x+ \frac{n_1}{n_2} )^{ 1 - n_1 - n_2 } (x -1 + \frac{n_1}{n_2} )^{ 1 + n_1 - n_2 } }{ ( x + \frac{n_1-n_2}{2n_2} )^4 } \end{align} \end{subequations} distinguished only by the constants $A_{\zeta_1\zeta_2}$, listed in Table \ref{AGEqs}. The dynamics of the double-cycle fields involving at least one R-neutral field therefore have the same qualitative features. For $u \to 0$, we obtain the conformal data of the fusion rules \begin{equation} \label{fusionOr} O^{(\rm{int})}_{[2]} \times [ R^{\dot1}_{[n_1]} R^{+}_{[n_2]} ] , \quad O^{(\rm{int})}_{[2]} \times [ R^{\dot1}_{[n_1]} R^{\dot2}_{[n_2]} ] , \quad O^{(\rm{int})}_{[2]} \times [ R^{\dot1}_{[n_1]} R^{\dot1}_{[n_2]} ] , \end{equation} so the universality of (\ref{GcompNeuGen}) already shows that these OPEs all give rise to operators with the \emph{same} conformal dimensions, although they of course differ in their SU(2) charges, which are conserved. The function (\ref{Gmpmp}) for the neutral field $[ R^{+}_{[n_1]} R^{-}_{[n_2]} ]$ must be treated separately because, when $A_{+-} = 0$, the expansion around $x = 0$ (in the limit $u \to 0$) changes. The function (\ref{Gmmpp}) with the composite field $[ R^{+}_{[n_1]} R^{+}_{[n_2]} ]$ does not follow the structure (\ref{GcompNeuGen}), although it can also be related to $\mathcal F$. These two latter cases were discussed in Ref.\cite{Lima:2020nnx}. From now on, unless otherwise specified, $G_{\zeta_1\zeta_2}$ should be understood to have the structure (\ref{GcompNeuGen}).
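Since the exponents in $\mathcal F$ add up to zero, $\mathcal F(x) \to C$, and hence $G_{\zeta_1\zeta_2}(x) \sim C x^2$, as $x \to \infty$. A quick symbolic check at the sample twists $n_1 = 3$, $n_2 = 2$ (our choice, for illustration only, with $A$ kept generic):

```python
import sympy as sp

x, C = sp.symbols('x C')
A = sp.symbols('A', positive=True)
n1, n2 = 3, 2  # sample twists with n1 > n2 > 1

F = C * x**(1 - n1 + n2) * (x - 1)**(1 + n1 + n2) \
      * (x + sp.Rational(n1, n2))**(1 - n1 - n2) \
      * (x - 1 + sp.Rational(n1, n2))**(1 + n1 - n2) \
      / (x + sp.Rational(n1 - n2, 2*n2))**4
G = (A + x*(x - 1 + sp.Rational(n1, n2))) * F

lim_F = sp.limit(F/C, x, sp.oo)          # F -> C
lim_G = sp.limit(G/(C*x**2), x, sp.oo)   # G ~ C x^2
print(lim_F, lim_G)  # 1 1
```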
In the limit $u \to 1$ we obtain the fusion rule $[O^{(\rm{int})}_{[2]}] \times [O^{(\rm{int})}_{[2]}]$, which must agree with the results found elsewhere \cite{Lima:2020boh,Lima:2020kek,Lima:2020nnx,Lima:2020urq} in other correlation functions involving two deformation operators. Indeed, for $n_1 > n_2 > 1$, we can immediately see that $G_{\zeta_1\zeta_2}(x)$ has singular points at $x = \{0,- \frac{n_1}{n_2}\}$ and $x = \{ - \frac{n_1-n_2}{2n_2}, \infty\}$, the coincidence limits where $u = 0$ and $u = 1$, respectively, as shown in (\ref{xto0minnuxComp})-(\ref{xaxp4nsComp}), and, asymptotically, \begin{subequations} \label{Ggenchannu0} \begin{align} G(x^1_{\frak1}(u) ) &= \frac{16 n_1^2 C}{(1-u)^2} + \text{non-sing.} \label{Gx11u} \\ \begin{split} \label{GgenAs3chan} G(x^1_{\frak2}(u)) &= - 2^{-2} 3^{-\frac23} (n_1^2-n_2^2)^{-\frac23} \big[ (n_1-n_2)^2 - 4 A n_2^2 \big] n_1^{-\frac23} n_2^{-\frac23} \ \frac{1}{(1-u)^{4/3}} \\ &\quad + \frac{\mathrm{const.}}{(1-u)^{2/3}} + \frac{\mathrm{const.}}{(1-u)^{1/3}} + \text{non-sing.} \end{split} \\ \begin{split} G(x^0_{\frak1}(u)) &= A (n_1 - n_2)^{-2} n_1^{- \frac{2n_1}{n_1-n_2}} n_2^{ \frac{2n_1}{n_1-n_2}} \ u^{-1 + \frac{1}{n_1-n_2} } \left[ 1 + \mathrm{O}( u^{\frac{1}{n_1-n_2}} ) \right] \label{Ggenchannu01} \end{split} \\ \begin{split} G(x^0_{\frak2}(u)) &= (A + \tfrac{n_1}{n_2}) (1 + \tfrac{n_1}{n_2})^{-2} n_1^{- \frac{2n_1}{n_1+n_2}} n_2^{-\frac{2n_1}{n_1+n_2}} \ u^{-1 + \frac{1}{n_1+n_2} } \left[ 1 + \mathrm{O}( u^{\frac{1}{n_1+n_2}} ) \right] \end{split} \label{Ggenchannu02} \end{align} \end{subequations} In Eqs.(\ref{GgenAs3chan})-(\ref{Ggenchannu02}) we have taken \begin{equation} C = \frac{1}{16n_1^2}, \end{equation} so that the coefficient of the singularity in Eq.(\ref{Gx11u}) is unity. Note that $\mathcal F (x) \to C$ when $x \to \infty$, so only the universal term in Eq.(\ref{GcompNeuGen}) survives. 
This ensures the normalization of the interaction operators, as extensively discussed in \cite{Lima:2020boh,Lima:2020kek,Lima:2020nnx,Lima:2020urq}. In fact, the two channels (\ref{Gx11u}) and (\ref{GgenAs3chan}) correspond to the OPE \begin{equation} \label{OPEoints} O^{(\rm{int})}_{[2]} \times O^{(\rm{int})}_{[2]} = \mathds1 + \sigma_3 , \end{equation} which can be deduced from the powers of the leading singularities, giving an operator of dimension zero in (\ref{Gx11u}) and an operator of dimension \begin{equation} \Delta^\sigma_3 = h^\sigma_3 + \tilde h^\sigma_3 = \tfrac23 + \tfrac23. \end{equation} There is no operator of dimension zero in any of the other channels. The leading coefficients in the expansions (\ref{Ggenchannu0}) give products of structure constants of the operators involved in the OPEs. For example, in the $\sigma_{(3)}$ channel the OPE is \begin{equation} O^{(\rm{int})}_{(2)}(u,\bar u) O^{(\rm{int})}_{(2)}(1,\bar 1) = \frac{\big\langle O^{(\rm{int})}_{(2)} \sigma_{(3)} O^{(\rm{int})}_{(2)} \big\rangle }{ |1-u|^{-2 ( \frac23 - 2)} } \sigma_{(3)}(1,\bar1) + \cdots \end{equation} Inserting this back into the four-point function (\ref{Gcfull2}) and comparing with (\ref{GgenAs3chan}), we find that% % % \footnote{% Recall that $G_{\zeta_1\zeta_2}(u,\bar u) = G_{\zeta_1\zeta_2}(x(u)) \bar G_{\zeta_1\zeta_2}(\bar x(\bar u))$.
The expression in the r.h.s.~is the absolute value of the leading coefficient in Eq.(\ref{GgenAs3chan}).} % % \begin{align} \label{OsORsR} \begin{split} & \big\langle O^{(\rm{int})}_{(2)} \sigma_{(3)} O^{(\rm{int})}_{(2)} \big\rangle \big\langle [ R^{\zeta_1}_{(n_1)} R^{\zeta_2}_{(n_2)} ]^\dagger \sigma_3 [ R^{\zeta_1}_{(n_1)} R^{\zeta_2}_{(n_2)} ] \big\rangle \\ &\qquad\qquad\qquad = \Big| 2^{-2} 3^{-\frac23} (n_1^2-n_2^2)^{-\frac23} \big[ (n_1-n_2)^2 - 4 A_{\zeta_1\zeta_2} n_2^2 \big] n_1^{-\frac23} n_2^{-\frac23} \Big|^2 \end{split} \end{align} Note that the operators above are not sums over orbits --- only some of the products $O^{(\rm{int})}_{(2)} \times O^{(\rm{int})}_{(2)}$ result in a $\sigma_{(3)}$; other cycles result in $\mathds 1$. The function $G_{\zeta_1\zeta_2}(x^1_{\frak2}(u))$ gives us the behavior of one representative of the class of permutations $(2)(2) = (3)$, and all other representatives behave the same way. % The analysis can be repeated for $G_{+-}(x)$ and $G_{++}(x)$. The structure constant $\langle O^{(\rm{int})}_{(2)} \sigma_{(3)} O^{(\rm{int})}_{(2)} \rangle$ has been computed in \cite{Lima:2020kek}\footnote{See Eq.(C.5) ibid.} \begin{equation} \label{stcconsOsO} \langle O^{(\rm{int})}_{(2)} \sigma_{(3)} O^{(\rm{int})}_{(2)} \big\rangle = 2^{\frac{13}{3}} 3^{4} , \end{equation} yielding the structure constants $ \langle [ R^{\zeta_1}_{(n_1)} R^{\zeta_2}_{(n_2)} ]^\dagger \sigma_{(3)} [ R^{\zeta_1}_{(n_1)} R^{\zeta_2}_{(n_2)} ] \rangle $, listed in App.\ref{AppLists}, Eq.(\ref{strucconss3}). \bigskip In the limit $u \to 0$, we find the fusion rules (\ref{fusionOr}). 
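As an illustration (a numerical sketch of ours, with dictionary keys chosen by us), one can tabulate the products (\ref{OsORsR}) for the $A_{\zeta_1\zeta_2}$ values of Table \ref{AGEqs} and divide by (\ref{stcconsOsO}) to obtain the structure constants $\langle [RR]^\dagger \sigma_{(3)} [RR]\rangle$ at sample twists. Note that the $\dot1\dot2$ and $+-$ entries coincide for any twists, since $(n_1-n_2)^2 - 4A_{\dot1\dot2}\,n_2^2 = -(n_1-n_2)^2$:

```python
n1, n2 = 3, 2  # sample twists, n1 > n2

# A_{zeta1 zeta2} from Table AGEqs (keys are our own labels)
A_vals = {
    'd1pm': (n2 - n1)/(2*n2),
    'd1d2': (n1 - n2)**2/(2*n2**2),
    'd1d1': (n1**2 + n2**2)/(2*n2**2),
    '+-'  : 0.0,
}

def coef_sq(n1, n2, A):
    """Modulus squared of the leading (1-u)^{-4/3} coefficient, Eq. (OsORsR)."""
    c = 2.0**(-2) * 3.0**(-2/3) * (n1**2 - n2**2)**(-2/3) \
        * ((n1 - n2)**2 - 4*A*n2**2) * (n1*n2)**(-2/3)
    return c*c

O_sigma_O = 2**(13/3) * 3**4   # <O sigma3 O>, Eq. (stcconsOsO)

sc3 = {key: coef_sq(n1, n2, A)/O_sigma_O for key, A in A_vals.items()}
print(sc3)
```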
The two channels $x^0_{\frak1}$ and $x^0_{\frak2}$ give us two operators, $ Y^{\zeta_1\zeta_2}_{\frak1, \ [n_1+n_2]}$ and $Y^{\zeta_1\zeta_2}_{\frak2, \ [n_1+n_2]}$ respectively, both with twist $n_1+n_2$, \begin{equation} \label{OPEointRR} O^{(\rm{int})}_{[2]} \times [ R^{\zeta_1}_{[n_1]} R^{\zeta_2}_{[n_2]} ] = Y^{\zeta_1\zeta_2}_{\frak1, \ [n_1+n_2]} + Y^{\zeta_1\zeta_2}_{\frak2, \ [n_1+n_2]} \end{equation} Now the OPEs read as \begin{equation} \label{OPEORO} O^{(\rm{int})}_{(2)}(u,\bar u) [ R^{\zeta_1}_{(n_1)} R^{\zeta_2}_{(n_2)} ](0,\bar 0) = \frac{ \langle Y^{\zeta_1\zeta_2 \dagger}_{\frak a, (n_1+n_2)} O^{(\rm{int})}_{(2)} [ R^{\zeta_1}_{(n_1)} R^{\zeta_2}_{(n_2)} ] \rangle }{ |u|^{- \Delta^{\zeta_1\zeta_2}_{\frak a} + 2 + \frac{n_1 + n_2}{2} } } Y^{\zeta_1\zeta_2}_{\frak a, (n_1+n_2)} (0,\bar 0) +\cdots \end{equation} with $\frak a =1,2$. The dimensions $\Delta^{\zeta_1\zeta_2}_{\frak 1}$ of $Y^{\zeta_1\zeta_2}_{\frak 1, [n_1+n_2]}$ follow from the powers of $u$ in the channel (\ref{Ggenchannu01}), \begin{subequations} \label{DeltO1} \begin{align} \Delta^{\dot1+}_{\frak1} = \Delta^{\dot1\dot2}_{\frak1} = \Delta^{\dot1\dot1}_{\frak1} & = \frac{2}{n_1-n_2} + \frac{n_1 + n_2}{2}, \label{DeltO1a} \\ \Delta^{+-}_{\frak1} &= \frac{4}{n_1+n_2} + \frac{n_1 + n_2}{2} \\ \Delta^{++}_{\frak1} &= \frac{2}{n_1-n_2} + \frac{n_1 + n_2}{2} \end{align}\end{subequations} while the dimensions of $Y^{\zeta_1\zeta_2}_{\frak 2, [n_1+n_2]}$ follow from the channel (\ref{Ggenchannu02}), \begin{subequations} \label{DeltO2} \begin{align} \Delta^{\dot1+}_{\frak2} \label{DeltO2a} = \Delta^{\dot1\dot2}_{\frak2} = \Delta^{\dot1\dot1}_{\frak2} &= \frac{2}{n_1+n_2} + \frac{n_1 + n_2}{2} \\ \Delta^{+-}_{\frak2} &= \frac{2}{n_1-n_2} + \frac{n_1 + n_2}{2} \label{DeltO2b} \\ \Delta^{++}_{\frak2} &= \frac{4}{n_1+n_2} + \frac{n_1 + n_2}{2} \label{DeltO2c} \end{align}\end{subequations} (We recall that $\Delta^{+\pm}_{\frak a}$ must be computed separately.) 
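The extraction of these dimensions is simple bookkeeping: a channel behaving as $G \sim |u|^{2p}$, matched against the OPE (\ref{OPEORO}), gives $\Delta = 2p + 2 + \frac{n_1+n_2}{2}$. A sketch of this matching in exact rational arithmetic (the sample twists are our own):

```python
from fractions import Fraction as Fr

def dim_from_channel(n1, n2, p):
    """Channel with G ~ |u|**(2p); matching against Eq. (OPEORO) gives
    Delta = 2p + 2 + (n1 + n2)/2."""
    return 2*p + 2 + Fr(n1 + n2, 2)

n1, n2 = 5, 2  # sample twists

# holomorphic exponents of the two channels, Eqs. (Ggenchannu01)-(Ggenchannu02)
D1 = dim_from_channel(n1, n2, -1 + Fr(1, n1 - n2))  # channel frak1
D2 = dim_from_channel(n1, n2, -1 + Fr(1, n1 + n2))  # channel frak2
print(D1, D2)  # 25/6 53/14
```

These reproduce Eqs.(\ref{DeltO1a}) and (\ref{DeltO2a}), $\Delta = \frac{2}{n_1 \mp n_2} + \frac{n_1+n_2}{2}$.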
As already mentioned, all operators with at least one R-neutral single-cycle Ramond ground state have the same dimension. Inserting back into (\ref{Gcfull2}) and comparing with (\ref{Ggenchannu01})-(\ref{Ggenchannu02}) we can determine the structure constants. Explicitly, inserting (\ref{OPEORO}) back into the four-point function we get \begin{equation} \begin{split} & \lim_{u\to0} \Big\langle [ R^{\zeta_1}_{(n_1)} R^{\zeta_2}_{(n_2)} ]^\dagger (\infty, \bar \infty) O^{(\rm{int})}_{(2)}(1,\bar 1) O^{(\rm{int})}_{(2)}(u,\bar u) [ R^{\zeta_1}_{(n_1)} R^{\zeta_2}_{(n_2)} ] (0,\bar 0) \Big\rangle \\ &\qquad = \frac{ \big\langle Y^{\zeta_1\zeta_2 \dagger}_{\frak a, (n_1+n_2)} O^{(\rm{int})}_{(2)} [ R^{\zeta_1}_{(n_1)} R^{\zeta_2}_{(n_2)} ] \big\rangle }{ |u|^{- \Delta^{\zeta_1\zeta_2}_{\frak a} + 2 - \frac{n_1 + n_2}{2} } } \big\langle [ R^{\zeta_1}_{(n_1)} R^{\zeta_2}_{(n_2)} ]^\dagger O^{(\rm{int})}_{(2)} Y^{\zeta_1\zeta_2}_{\frak a, (n_1+n_2)} \big\rangle +\cdots \end{split} \end{equation} Hence the leading coefficients in the r.h.s.~of Eqs.(\ref{Ggenchannu01})-(\ref{Ggenchannu02}) give \begin{subequations} \label{ProdRO} \begin{align} \begin{split} \label{ProdRO1} & \Big| \big\langle O^{(\rm{int})}_{(2)} [ R^{\zeta_1}_{(n_1)} R^{\zeta_2}_{(n_2)} ]^\dagger Y^{\zeta_1\zeta_2}_{\frak 1, (n_1+n_2)} \big\rangle \Big|^2 = \Big| A_{\zeta_1\zeta_2} (n_1 - n_2)^{-2} n_1^{- \frac{2n_1}{n_1-n_2}} n_2^{ \frac{2n_1}{n_1-n_2}} \Big|^2 \end{split} \\ \begin{split} \label{ProdRO2} & \Big| \big\langle O^{(\rm{int})}_{(2)} [ R^{\zeta_1}_{(n_1)} R^{\zeta_2}_{(n_2)} ]^\dagger Y^{\zeta_1\zeta_2}_{\frak 2, (n_1+n_2)} \big\rangle \Big|^2 = \Big| (A_{\zeta_1\zeta_2} + \tfrac{n_1}{n_2}) (1 + \tfrac{n_1}{n_2})^{-2} n_1^{- \frac{2n_1}{n_1+n_2}} n_2^{-\frac{2n_1}{n_1+n_2}} \Big|^2 \end{split} \end{align}\end{subequations} and similarly for $G_{+-}$ and $G_{++}$. We thus have two lists of products of structure constants, given in Appendix \ref{AppLists}. 
\subsection{Equal twists $n_1 = n_2$} \label{SectEqualtwsOPE} When $n_1 = n_2 = n$, the results above must be revisited. The analytic structure of the map (\ref{uxm}) changes drastically: \begin{equation} u(x) = \left( \frac{x+ 1}{x-1} \right)^{2n} . \label{uxnn} \end{equation} The coincidence limits (\ref{xfraka}) change as well. The limit $u \to 1$ now has the solutions $x = \infty$ and $x = 0$, where we find the two inverses $x^1_{\frak a}(u)$ to be \begin{align} (u \to 1) \quad \begin{sqcases} x \to \infty , &\quad x^1_{\frak1}(u) \approx - 4n (1-u)^{-1} \\ x \to 0, &\quad x^1_{\frak2}(u) \approx \tfrac{1}{4n} (1 - u) \end{sqcases} \label{xaxp4nsCompM} \end{align} Meanwhile, for $u \to 0$ we now only have \emph{one} solution: \begin{equation} (u \to 0) \quad \begin{sqcases} x \to - 1 ,\quad & x^0_{\frak2}(u) \approx - 1 + 2 \ u^{\frac{1}{2n}} + \cdots \end{sqcases} \label{xto0minnuxCompM} \end{equation} We denote this unique inverse by $x^0_{\frak2}(u)$ because it corresponds to the second channel in (\ref{xaxp4nsComp}) when $n_1 = n_2 = n$; the channel called $x^0_{\frak1}(u)$ \emph{disappears}. This change in the coincidence limits reflects a change in the structure of the four-point functions (\ref{Gm21p})-(\ref{Gmmpp}), and in the fusion rules and OPEs. We now have only three different functions: \begin{align} \begin{split} \label{g2n} g_1(x) &= C (x-1)^{1 + 2n} ( x + 1 )^{ 1 - 2 n } \\ &= G_{+-}(x) = G_{\dot1\pm}(x) = G_{\dot1\dot2}(x) \end{split} \\ \begin{split} \label{g3n} g_2(x) &= C x^{-2} (1 + x^2) (x-1)^{1+2n} ( x + 1 )^{1 - 2 n } \\ &= G_{\dot1\dot1}(x) \end{split} \\ \begin{split} \label{g1n} g_3(x) &= C x^{-2} (x-1)^{2+2n} ( x + 1 )^{2 - 2n } \\ &= G_{++}(x) \end{split} \end{align} In the channel $x^1_{\frak 1}(u)$, all behave the same way: $g_1(x) \approx g_2(x) \approx g_3(x) \approx C x^2$ when $x \to \infty$. Therefore, as expected, we find again the usual identity channel with $C = 1/16n^2$.
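The inverse branches quoted above are easy to check numerically against the explicit map (\ref{uxnn}); a small sketch (with $n = 3$ as our sample value):

```python
def u_map(x, n):
    """Equal-twist covering map u(x) = ((x+1)/(x-1))**(2n), Eq. (uxnn)."""
    return ((x + 1)/(x - 1))**(2*n)

n = 3

# u -> 1, x -> infinity branch: x^1_frak1(u) ~ -4n/(1-u)
u1 = 0.999
r1 = u_map(-4*n/(1 - u1), n)

# u -> 0, x -> -1 branch: x^0_frak2(u) ~ -1 + 2 u**(1/(2n))
u0 = 1e-18
r0 = u_map(-1 + 2*u0**(1/(2*n)), n)

print(abs(r1 - u1), abs(r0/u0 - 1))  # both small
```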
Using (\ref{xaxp4nsCompM}), in the channel $u\to 1$ with $x \to 0$ we have \begin{align} \begin{split} \label{g2x0} g_1(x^1_{\frak2}(u)) &= - \frac{1}{16n^2 } + \frac{x^1_{\frak2}}{4n} + \cdots = - \frac{1}{16n^2 } + (1-u) + \cdots \end{split} \\ \begin{split} \label{g3x0} g_2(x^1_{\frak2}(u)) &= \frac{-1}{(4n x^1_{\frak2})^2} + \frac{1}{4n x^1_{\frak2}} + \text{non-sing.} = \frac{-1}{(1-u)^{2}} + \frac{1}{1-u} + \text{non-sing.} \end{split} \\ \begin{split} \label{g1x0} g_3(x^1_{\frak2}(u)) &= \frac{1}{(4n x^1_{\frak2})^2} - \frac{1}{4n x^1_{\frak2}} + \text{non-sing.} = \frac{1}{(1-u)^{2}} - \frac{1}{1-u} + \text{non-sing.} \end{split} \end{align} In the expansions (\ref{g1x0}) and (\ref{g3x0}) we can recognize again the identity channel. The expansion (\ref{g2x0}) is \emph{not singular}, hence it does not correspond to an OPE channel. In short, in the limit $u \to 1$ we only find the identity channel of the OPE $O^{(\rm{int})}_{[2]} \times O^{(\rm{int})}_{[2]} = \mathds1$. The $\sigma_3$ channel of the fusion rule (\ref{OPEoints}) has disappeared. The disappearance could have been predicted by looking at the structure constants $ \langle [ R^{\zeta_1}_{(n_1)} R^{\zeta_2}_{(n_2)} ]^\dagger \sigma_{(3)} [ R^{\zeta_1}_{(n_1)} R^{\zeta_2}_{(n_2)} ] \rangle $ in Eq.(\ref{strucconss3}) --- they all vanish when $n_1 = n_2$. This reflects the fact that Eq.(\ref{compto1}), which here assumes the form \begin{equation} (n_1) (n_2) (3) (n_1)'(n_2)' = {\mathds1}, \end{equation} has no solutions satisfying the conditions: $(n_1)$ and $(n_2)$ are disjoint; $(n_1)'$ and $(n_2)'$ are disjoint; and $(3)$ has one copy in $(n_1)$ and two copies in $(n_2)$ or vice-versa. \bigskip In the limit $u\to0$ we have seen that the channel $x^0_{\frak1}$, extant for $n_1 \neq n_2$, has also disappeared.
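The singular parts of the expansions (\ref{g3x0}) and (\ref{g1x0}) above follow from a straightforward series around $x = 0$. A symbolic check at the sample value $n = 2$ that $g_2$ and $g_3$ have the pure double-pole structure $\mp C/x^2 \pm 4nC/x$, while $g_1$ is regular:

```python
import sympy as sp

x = sp.symbols('x')
n = 2                         # sample twist
C = sp.Rational(1, 16*n**2)

g1 = C * (x - 1)**(1 + 2*n) * (x + 1)**(1 - 2*n)
g2 = C / x**2 * (1 + x**2) * (x - 1)**(1 + 2*n) * (x + 1)**(1 - 2*n)
g3 = C / x**2 * (x - 1)**(2 + 2*n) * (x + 1)**(2 - 2*n)

def sing(g):
    """Keep only the pole terms of g at x = 0."""
    return sp.series(g, x, 0, 0).removeO()

s1, s2, s3 = sing(g1), sing(g2), sing(g3)
print(s1, s2, s3)
```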
In the remaining channel (\ref{xto0minnuxCompM}) we have the expansions \begin{align} g_1(x^0_{\frak2}(u)) \label{g2xm1} &= - \frac{1}{4n^2} \ u^{-1 + \frac{1}{2n}} \left[ 1 - (1+2n) \, u^{\frac{1}{2n}} + \cdots \right] \\ g_2(x^0_{\frak2}(u)) \label{g3xm1} &= - \frac{1}{2n^2} \ u^{-1 + \frac{1}{2n}} \left[ 1 + (1-2n) \, u^{\frac{1}{2n}} + \cdots \right] \\ g_3(x^0_{\frak2}(u)) \label{g1xm1} &= \frac{1}{n^2} \ u^{-1 + \frac{1}{n}} \left[ 1 + 2(1-n) \, u^{\frac{1}{2n}} + \cdots \right] \end{align} and the fusion rules \begin{align} \label{OPEointRRM} O^{(\rm{int})}_{[2]} \times [ R^{\zeta_1}_{[n]} R^{\zeta_2}_{[n]} ] = Y^{\zeta_1\zeta_2}_{\frak2, \ [2n]} \end{align} which replace (\ref{OPEointRR}). The dimensions of the operators $Y^{\zeta_1\zeta_2}_{\frak2, \ [2n]}$, which have twist $2n$, are read from Eqs.(\ref{g2xm1})-(\ref{g1xm1}) to be \begin{align} \Delta^{+-}_{Y , \frak2} = \Delta^{\dot1\pm}_{Y , \frak2} = \Delta^{\dot1\dot2}_{Y , \frak2} = \Delta^{\dot1\dot1}_{Y , \frak2} &= \frac1n + n \\ \Delta^{++}_{Y , \frak2} &= \frac{2}{n} + n \label{Depp2n} \end{align} and should be compared with the dimensions in Eqs.(\ref{DeltO2}) and (\ref{DeltO1}).
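The leading powers above can be confirmed numerically by evaluating $g_1$ and $g_3$ on the inverse branch $x^0_{\frak2}(u) \approx -1 + 2u^{1/2n}$ at small $u$ (a sketch; $n = 2$ is our sample value, and only the leading behavior is tested):

```python
n = 2
C = 1/(16*n**2)

def g1(x):
    return C * (x - 1)**(1 + 2*n) * (x + 1)**(1 - 2*n)

def g3(x):
    return C * (x - 1)**(2 + 2*n) * (x + 1)**(2 - 2*n) / x**2

u = 1e-16
x0 = -1 + 2*u**(1/(2*n))   # inverse branch, Eq. (xto0minnuxCompM)

r1 = g1(x0) / (-(1/(4*n**2)) * u**(-1 + 1/(2*n)))   # -> 1
r3 = g3(x0) / ( (1/n**2)     * u**(-1 + 1/n))       # -> 1
print(r1, r3)
```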
Inserting the OPEs back into the four-point functions, we find the structure constants \begin{align} \begin{split} \Big| \big\langle O^{(\rm{int})}_{(2)} [ R^{+}_{(n)} R^{-}_{(n)} ]^\dagger Y^{+-}_{{\frak2}(2n)} \big\rangle \Big|^2 &= \Big| \big\langle O^{(\rm{int})}_{(2)} [ R^{\dot1}_{(n)} R^{\pm}_{(n)} ]^\dagger Y^{\dot1\pm}_{{\frak2}(2n)} \big\rangle \Big|^2 \\ &= \Big| \big\langle O^{(\rm{int})}_{(2)} [ R^{\dot1}_{(n)} R^{\dot2}_{(n)} ]^\dagger Y^{\dot1\dot2}_{{\frak2}(2n)} \big\rangle \Big|^2 \\ &= 2^{-4}n^{-4} \end{split} \\ \Big| \big\langle O^{(\rm{int})}_{(2)} [ R^{\dot1}_{(n)} R^{\dot1}_{(n)} ]^\dagger Y^{\dot1\dot1}_{{\frak2}(2n)} \big\rangle \Big|^2 &= 2^{-2} n^{-4} \\ \Big| \big\langle O^{(\rm{int})}_{(2)} [ R^{+}_{(n)} R^{+}_{(n)} ]^\dagger Y^{++}_{{\frak2}(2n)} \big\rangle \Big|^2 &= n^{-4} \label{Stucppnn} \end{align} One can compare these values with those obtained by taking $n_1 = n_2 = n$ in the list (\ref{List2}), and see that they agree. \subsection{OPEs with full Ramond ground states} The OPEs described above involve the product of $O^{(\rm{int})}_{[2]}$ with a double-cycle Ramond field. For a generic Ramond ground state, several $Y$ operators appear in the OPE, according to the factorization of the Ramond field into pairs as in Eq.(\ref{Gufact}). The interaction operator connects two components of the Ramond ground state to form the $Y$ operators, and leaves the remaining components untouched. Hence we have the OPE% % % \footnote{% As usual, this is a schematic equation, without coefficients in front of operators.} % % \begin{align} \label{OPEgen} \begin{split} O^{(\rm{int})}_{[2]} \times \Big[ \prod_{i,\zeta_i} ( R^{\zeta_i}_{[n_i]})^{q^{\zeta_i}_i} \Big] & = \sum_{i>j} \sum_{\zeta, \zeta' \neq \zeta} \sum_{\frak a = \frak1,\frak2} \Big[ Y^{\zeta \zeta'}_{\frak a, [n_i+n_j]} \!\! \prod_{\substack{k \neq i,j \\ \zeta_k \neq \zeta,\zeta'}} \!\!
( R^{\zeta_k}_{[n_k]})^{q^{\zeta_k}_k} \Big] \\ & + \sum_{i} \sum_{\zeta, \zeta' \neq \zeta} \Big[ Y^{\zeta \zeta'}_{\frak2, [2 n_i]} \!\! \prod_{\substack{k \neq i \\ \zeta_k \neq \zeta,\zeta'}} \!\! ( R^{\zeta_k}_{[n_k]})^{q^{\zeta_k}_k} \Big] \\ & + \sum_{i,\zeta} \Big[ Y^{\zeta \zeta}_{\frak2, [2 n_i]} \!\! \prod_{\substack{k \neq i \\ \zeta_k \neq \zeta}} \!\! ( R^{\zeta_k}_{[n_k]})^{q^{\zeta_k}_k} \Big] \end{split} \end{align} where the product of operators inside brackets, which now includes the $Y$, has ``normal-ordered cycles'' in the same sense as $ [ \prod ( R^{\zeta_i}_{[n_i]})^{q^{\zeta_i}_i}] $. Note that, in the last two lines of Eq.(\ref{OPEgen}), there is only the operator $Y^{\zeta \zeta}_{\frak2, [2 n_i]}$ in the ${\frak a} = \frak2$ channel, because this is the only one that exists when the twists of the double-cycle Ramond field are equal. \section{Non-renormalization of the Ramond ground states} \label{SectNonRenor} We are now going to show explicitly that the Ramond ground states \begin{equation} \Big[ \prod_i (R^{\zeta_i}_{[n_i]})^{q_i} \Big] , \quad \sum_i n_i q_i = N, \end{equation} are protected, by computing the integral (\ref{JscrR}) using our formulae for the four-point function (\ref{Gu}), and showing that it vanishes. We have shown in Sect.\ref{SectFactorization} that the four-point function (\ref{Gu}) factorizes into a sum of functions $G_{\zeta_1\zeta_2}(u,\bar u)$. Hence (\ref{JscrR}) reduces to a sum of integrals \begin{equation} \label{Jzeze} \begin{split} J_{\zeta_1\zeta_2} &= \int\! d^2u \, G_{\zeta_1\zeta_2}(u,\bar u) \\ &= \int\! d^2x \, \big| u'(x) \, G_{\zeta_1\zeta_2}(x) \big|^2 . \end{split} \end{equation} These can be computed analytically, and we are going to show that \begin{equation} J_{\zeta_1\zeta_2} = 0 \end{equation} for all $\zeta_i$. Therefore we will show explicitly that the composite Ramond fields are protected in the deformed theory, at order $\lambda^2$, and at order ${\bf g} = 0$ in the genus expansion.
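The vanishing will follow from limits of Euler $\Gamma$-functions in the analytic continuation below; both limits that appear can be checked symbolically (a sympy sketch):

```python
import sympy as sp

a = sp.symbols('a')

# continuation of int d^2y |y-1|^{-3}  (b = -3/2 in the Dotsenko-Fateev formula)
expr = 4*sp.pi*sp.gamma(1 + a)*sp.gamma(sp.Rational(1, 2) - a) \
       / (sp.gamma(-a)*sp.gamma(sp.Rational(1, 2) + a))

lim_a0 = sp.limit(expr, a, 0)   # int d^2y |y-1|^{-3}        -> 0
lim_a1 = sp.limit(expr, a, 1)   # int d^2y |y|^2 |1-y|^{-3}  -> 0
print(lim_a0, lim_a1)  # 0 0
```

Both limits vanish because $1/\Gamma(-a) \to 0$ at the non-positive integers $-a = 0, -1$, which is the mechanism behind the protection.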
\bigskip \noindent {\bfseries Different twists $n_1 \neq n_2$} \noindent Using the form in Eq.(\ref{GcompNeuGen}), and making the change of variables \begin{equation} y = - (\tfrac{2n_2}{n_1+n_2})^{2} (x-1)(x+ \tfrac{n_1}{n_2} ) , \end{equation} we have \begin{align} \label{Jzzy} \begin{split} J_{\zeta_1\zeta_2} &= \frac{1}{2^{10} n_1^4} \left( A_{\zeta_1\zeta_2} - \tfrac{2n_1}{n_2} \right) A_{\zeta_1\zeta_2} \int \! d^2y\, | 1 - y |^{-3} \\ & + \frac{1}{2^{10} n_1^4} \left( \frac{n_1+n_2}{2n_2} \right)^{4} \int \! d^2y\, | 1 - y |^{-3} |y - w|^{2} \\ & + \frac{1}{2^{10} n_1^4} \left( \frac{n_1+n_2}{2n_2} \right)^2 A_{\zeta_1\zeta_2} \int\! d^2y\, | 1 - y |^{-3} \big( y +\bar y \big) \end{split} \end{align} The non-holomorphic integral in the last line vanishes: $\int \! d^2y\, | 1 - y |^{-3} \mathrm{Im} (y)$ cancels with $\int \! d^2y\, | 1 - y |^{-3} \mathrm{Im}(\bar y)$, and the integrand of $\int \! d^2y\, | 1 - y |^{-3} 2 \mathrm{Re}(y)$ is odd, so the integral over the real line vanishes. At first sight, the remaining two integrals are not defined at $y = 1$, where the integrand diverges, but they can be defined via analytic continuation with the same method of deforming contours described in \cite{Lima:2020kek}. The analytic structure of the integrals in (\ref{Jzzy}) is, however, much simpler; their (unique) analytic continuation is given in detail in \cite{dotsenko1988lectures}, where one finds \begin{align} \begin{split} \label{TheDFintn1} \int \! d^2y \, & |y|^{2a} | y-1 |^{2b} = \sin( \pi b) \frac{\Gamma(1+a) \Gamma^2 (1+b) \Gamma (-1-a-b) }{ \Gamma(-a) \Gamma(2+a+b) } . \end{split} \end{align} The r.h.s.~is an analytic function of the parameters $a,b$. Thus we have \begin{align} \begin{split} \int \! d^2y \, | y-1 |^{-3} &= \lim_{a \to 0} \frac{4 \pi \Gamma(1+a) \Gamma (\tfrac12 -a) }{ \Gamma(-a) \Gamma(\tfrac12+a) } \\ &= \lim_{a \to 0} \frac{4 \pi }{ \Gamma(-a) } \\ &= 0 .
\end{split} \end{align} Finally, the last remaining integral can be solved by Eq.(\ref{TheDFintn1}) with a further change of variables: $u = (y-w)/(1+w)$, \begin{align} \label{DF32} \begin{split} \int \! d^2y\, | 1 - y |^{-3} |y - w|^{2} &= (1+w) \int \! d^2u \, |u|^2 |1-u|^{-3} \\ &= (1+w) \lim_{a \to 1} \frac{4 \pi \Gamma(1+a) \Gamma (\tfrac12 -a) }{ \Gamma(-a) \Gamma(\tfrac12+a) } \\ &= (1+w) \lim_{a \to 1} \frac{-16\pi}{ \Gamma(-a)} \\ &= 0 . \end{split} \end{align} Thus all integrals in the r.h.s.~of Eq.(\ref{Jzzy}) vanish. As noted in Sect.\ref{SectOPEs}, Eq.(\ref{GcompNeuGen}) is not valid for the function $G_{++}(x)$ in (\ref{Gmmpp}), so the discussion above does not apply immediately; nevertheless, the integral $J_{++}$ was computed in Ref.\cite{Lima:2020nnx} --- it vanishes, and in fact it turns out to have the same form as in (\ref{DF32}). \bigskip \noindent {\bfseries Equal twists $n_1 = n_2 = n$} \noindent When $n_1 = n_2 = n$, we have to integrate the functions (\ref{g2n})-(\ref{g1n}). There are thus three integrals: \begin{align} \begin{split} J_1 &= \int\! d^2x \big| u'(x) g_1(x) \big|^2 = \frac{1}{16n^2} \int\! d^2x \end{split} \\ \begin{split} J_2 &= \int\! d^2x \big| u'(x) g_2(x) \big|^2 = \frac{1}{16n^2} \int\! d^2x |x^{-2} (x^2+1) |^2 \end{split} \\ \begin{split} J_3 &= \int\! d^2x \big| u'(x) g_3(x) \big|^2 = \frac{1}{16n^2} \int\! d^2x |x^{-2} (x^2-1) |^2 \end{split} \end{align} Again, they are divergent/undefined, but can be given a unique analytic continuation by the same method as before, after being put in the form (\ref{TheDFintn1}). Making the change of variables $y = x^2$ in $J_1$ and $J_3$, and $y = - x^2$ in $J_2$, we get \begin{align} \begin{split} J_1 &= \frac{1}{2^6 n^2} \int\! d^2y \ | y |^{-1} = \frac{1}{2^6 n^2} \times (-4 \sin 0 ) = 0 \end{split} \\ \begin{split} J_2 &= \frac{1}{2^6 n^2} \int\! d^2y \, |y|^{-3} |1-y|^2 = \frac{1}{2^6 n^2} \times (16 \sin \pi) = 0 \end{split} \\ \begin{split} J_3 &= \frac{1}{2^6 n^2} \int\! 
d^2y \, |y|^{-3} |1-y|^2 = \frac{1}{2^6 n^2} \times (16 \sin \pi) = 0 \end{split} \end{align} This concludes the demonstration that the integrals (\ref{Jzeze}) all vanish, for any values of $n_1,n_2$, and any double-cycle Ramond fields. \section{Discussion and conclusions} \label{SectConclusion} The main result of the present work is the computation of the four-point function (\ref{GuIntro}). Its analysis illuminates how the deformation operator interacts with the Ramond ground states (\ref{CompleteRamondIntro}) as we move the CFT away from the free orbifold point. It does so by yielding two important pieces of information: fusion rules of the Ramond ground states with the interaction operator, and the non-renormalization of the conformal dimension of these states. To conclude, we now put these results in perspective by discussing them in the context of the previous literature. \bigskip \noindent {\bfseries Protection of Ramond ground states} \noindent We have shown explicitly that the dimensions of the Ramond ground states in the D1-D5 CFT are protected at second order in the $\lambda$-expansion and at the level of genus-zero covering surfaces --- which is related to the large-$N$ expansion of the correlation functions but not exactly the same, as shown in \S\ref{SectNscaling}. This was to be expected from algebraic considerations: the Ramond ground states are related to BPS operators, the NS chiral ring, by spectral flow of the ${\mathcal N} = (4,4)$ super-conformal algebra with central charge $c = 6N$. Recent work \cite{Lima:2020boh,Lima:2020kek,Lima:2020nnx,Lima:2020urq} has indicated that there are some subtleties with the use of spectral flow in the deformed theory.
The $n$-twisted sector of the free orbifold theory can be described as a ${\mathcal N}=(4,4)$ SCFT with central charge $c = 6n$, whose Ramond ground states have conformal weight $h= \frac14 n$, and are mapped by the spectral flow of the $c = 6n$ algebra to the $n$-twisted extremal NS chiral fields. Thus non-renormalization of the $n$-twisted NS chirals, proven in Ref.\cite{Pakman:2009mi}, could perhaps suggest that the $n$-twisted Ramond ground states would also be protected --- but they are not. In the interacting theory, the twist-two $S_N$-invariant deformation operator $O^{(\rm{int})}_{[2]}$ deforms the currents of the operator algebra, and the twisted sectors become mixed with one another. In other words, for $n < N$ the relation between the $n$-twisted Ramond ground states with $h= \frac14 n$ and the NS chiral ring is lost. In light of these results from Refs.\cite{Lima:2020boh,Lima:2020kek,Lima:2020nnx,Lima:2020urq}, we see that the protection of the ``total'' Ramond ground states with $h = \frac14 N$ demonstrated here is not extended to the $n$-twisted components individually. From another point of view, since the scalar modulus $O^{(\rm{int})}_{[2]}$ preserves supersymmetry, the super-conformal algebra with central charge $c = 6N$ is preserved, even though the individual $n$-twisted algebras with $c = 6n < 6N$ are not. Hence protection of the states with $h = \frac14 N$ follows from the protection of BPS NS chirals obtained by spectral flow of the algebra with $c = 6N$, but the same does not apply to each twisted sector individually. \bigskip \noindent {\bfseries Symmetric states with same twists} \noindent Ramond ground states made by products of components with equal twists are dual to special, symmetric solutions of supergravity.
In particular, when all spins are aligned, $[ ( R^{+}_{[n]})^{N/n} ]$ (assuming $N/n$ is an integer), as well as descendants of this state, are dual to a family of axially symmetric SUGRA solutions \cite{Lunin:2001jy,Giusto:2004id,Giusto:2004ip,Giusto:2012yz}. The resulting four-point functions are so simple that they are worth writing explicitly. The four-point function (\ref{Gufact}) is \begin{align} \label{GufactNncon} \begin{split} G(u,\bar u) &= \Big\langle \Big[ ( R^{+}_{[n]})^{N/n} \Big]^\dagger (\infty,\bar \infty) \, O^{(\rm{int})}_{[2]}(1,\bar 1) \, O^{(\rm{int})}_{[2]} (u,\bar u) \Big[ ( R^{+}_{[n]})^{N/n} \Big] (0,\bar 0) \Big\rangle \\ &= {\mathscr P}^2(N/n) \Big\langle \big[ R^{+}_{[n]} R^{+}_{[n]} \big]^\dagger (\infty,\bar \infty) \, O^{(\rm{int})}_{[2]}(1,\bar 1) \, O^{(\rm{int})}_{[2]} (u,\bar u) \big[ R^{+}_{[n]} R^{+}_{[n]} \big] (0,\bar 0) \Big\rangle \\ &= \left( \frac{{\mathscr P}(\frac{N}{n})}{16n^2 {\mathscr S}_n(N) {\mathscr S}_2(N)} \right)^2 \sum_{\frak a = \frak1}^{2 n} \Bigg| \frac{ \left[ x_{\frak a}(u) -1\right]^{2+2n} \left[ x_{\frak a}(u) + 1 \right]^{2 - 2n } }{ x_{\frak a}^{2}(u)} \Bigg|^2 \end{split} \end{align} where we have used Eq.(\ref{g1n}), along with the sum over the inverses of the map (\ref{uxnn}). The OPE (\ref{OPEointRRM}), which has the very simple structure constant (\ref{Stucppnn}), results in \begin{equation} O^{(\rm{int})}_{[2]} \times \big[ ( R^{+}_{[n]})^{N/n} \big] = \big[ Y^{++}_{\frak2, \ [2n]} ( R^{+}_{[n]})^{\frac{N-2n}{n}} \big] \end{equation} where the $2n$-twisted operator $Y^{++}_{\frak2, \ [2n]}$ has dimension $\Delta^{++}_{Y , \frak2} = n + 2/n$, cf.~Eq.(\ref{Depp2n}), R-charge $(j^3 , \tilde \jmath^3) = (\frac12, \frac12)$ and SU(2)$_2$ charges $({\frak j}^3, \tilde{\frak j}^3) = (0, 0)$. 
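The sum over the $2n$ inverses in the last line of (\ref{GufactNncon}) can be evaluated numerically. A small sketch of ours (with $n = 2$ as sample value), which also exhibits the identity-channel divergence $\sim |1-u|^{-4}$ as $u \to 1$:

```python
import cmath

n = 2  # sample twist

def inverse_sum(u):
    """Sum over the 2n inverses of u(x) = ((x+1)/(x-1))**(2n) of the
    summand |(x-1)^(2+2n) (x+1)^(2-2n) / x^2|^2 in Eq. (GufactNncon)."""
    total = 0.0
    for k in range(2*n):
        zeta = u**(1.0/(2*n)) * cmath.exp(1j*cmath.pi*k/n)  # zeta**(2n) = u
        xk = (zeta + 1)/(zeta - 1)                          # so u(xk) = u
        total += abs((xk - 1)**(2 + 2*n) * (xk + 1)**(2 - 2*n) / xk**2)**2
    return total

# the x -> infinity and x -> 0 branches both diverge as |4n/(1-u)|^4 near u = 1
ratio = inverse_sum(0.99) / inverse_sum(0.9)
print(ratio)
```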
An operator with this dimension and the correct charges can easily be obtained by applying fractional modes of the R-current to a Ramond ground state: $J^+_{\frac{2}{2n}} \tilde J^+_{\frac{2}{2n}} R^{-}_{(2n)} (z,\bar z)$. Recall that the R-current fractional mode $J^+_{k/M}$, for integer $k < M$, is well-defined in the $M$-twisted sector, raises the R-charge by one unit and the holomorphic dimension by $k/M$. \bigskip \noindent {\bfseries Two OPE channels for different twists} \noindent The dynamical information obtained from the fusion rules (\ref{OPEointRRIntro}) reveals the existence of non-BPS operators $ Y^{\zeta_1\zeta_2}_{\frak a, [n_1+n_2]}$ defining the conformal blocks in the OPE algebra of the Ramond ground states and the deformation operator. The two operators $Y^{\zeta_1\zeta_2}_{\frak a, [n_1+n_2]}$, distinguished by the label ${\frak a} = \frak1,\frak2$, correspond to the two channels in the OPE (\ref{OPEointRR}). The dimensions of the operators in these two channels have a curious degenerate structure shown in Eqs.(\ref{DeltO1})-(\ref{DeltO2}). The operators with at least one R-neutral index are all degenerate, with dimensions (\ref{DeltO1a}) in channel $\frak1$ and (\ref{DeltO2a}) in channel $\frak2$. The latter have dimensions \begin{equation} \label{DimCon2} \Delta^{\dot1+}_{\frak2} = \Delta^{\dot1\dot2}_{\frak2} = \Delta^{\dot1\dot1}_{\frak2} = \frac{2}{n_1+n_2} + \frac{n_1 + n_2}{2} \end{equation} and it is not hard to find possible realizations of these operators in terms of descendants of known fields. 
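The bookkeeping behind this identification can be made explicit in exact arithmetic (sample $n = 3$, ours): starting from $h = \tilde h = \frac{2n}{4}$ and $j^3 = \tilde\jmath^3 = -\frac12$ for $R^-_{(2n)}$, each mode $J^+_{2/2n}$, $\tilde J^+_{2/2n}$ adds $\frac{2}{2n} = \frac1n$ to the corresponding weight and one unit to the corresponding charge:

```python
from fractions import Fraction as Fr

def apply_Jplus(h, j3, k, M):
    """J^+_{k/M} in the M-twisted sector: h -> h + k/M, j3 -> j3 + 1
    (mode conventions as stated in the text)."""
    return h + Fr(k, M), j3 + 1

n = 3
M = 2*n

h,  j3  = Fr(M, 4), Fr(-1, 2)   # holomorphic data of R^-_(2n)
ht, j3t = Fr(M, 4), Fr(-1, 2)   # anti-holomorphic data

h,  j3  = apply_Jplus(h,  j3,  2, M)   # J^+_{2/2n}
ht, j3t = apply_Jplus(ht, j3t, 2, M)   # tilde J^+_{2/2n}

Delta = h + ht
print(Delta, j3, j3t)  # 11/3 1/2 1/2
```

This reproduces $\Delta^{++}_{Y,\frak2} = n + 2/n$ and $(j^3, \tilde\jmath^3) = (\frac12, \frac12)$.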
For example, for some constants $A, B$, we can make the linear combination ansatz \begin{equation} \label{ansfrac1} Y^{\dot1+}_{\frak2, (n_1+n_2)} (z,\bar z) = A\, \psi^+_{\frac{1}{n_1+n_2}} \tilde \psi^+_{\frac{1}{n_1+n_2}} R^{\dot1}_{(n_1 + n_2)} (z,\bar z) + B\, {\psi}^{\dot1}_{\frac{1}{n_1+n_2}} \tilde {\psi}^{\dot1}_{\frac{1}{n_1+n_2}} R^{+}_{(n_1 + n_2)} (z,\bar z) \end{equation} for $Y^{\dot1+}_{\frak2, (n_1+n_2)}$, which has the same charges as $[ R_{[n_1]}^{\dot1} R_{[n_2]}^{+} ]$, namely $j^3 = \frac12 = \tilde \jmath^3$ and ${\frak j}^3 = - \frac12 = \tilde{\frak j}^3$. The r.h.s.~has the correct dimensions and charges, since the fermion fractional modes increase the conformal weight by $1/(n_1+n_2)$ and the SU(2) charges by $\frac12$. The dimensions of the operators in channel $\frak1$, \begin{equation} \label{DimCon1} \Delta^{\dot1+}_{\frak1} = \Delta^{\dot1\dot2}_{\frak1} = \Delta^{\dot1\dot1}_{\frak1} = \frac{2}{n_1-n_2} + \frac{n_1 + n_2}{2} , \end{equation} are more curious. It is not possible to obtain the first term in the r.h.s.~by simply applying current fractional modes as in (\ref{ansfrac1}), because although the denominator is $n_1-n_2$, the operator is again in the $n_1 + n_2$ twisted sector. Hence the operators in this sector do not seem to be simple descendants of primary fields. The dimensions (\ref{DimCon1}) are singular when $n_1 = n_2$, but, as we have shown in \S\ref{SectEqualtwsOPE}, this channel is not present when the twists are equal; only channel $\frak2$, with dimensions $\Delta^{\dot1+}_{\frak2} = \Delta^{\dot1\dot2}_{\frak2} = \Delta^{\dot1\dot1}_{\frak2} = n + 1/n$ remains. A similar thing happens for $\Delta^{+\pm}_{Y,\frak a}$. The distinction between different OPE channels above is a very interesting feature of the orbifold theory. 
In this paper, the channels appear as a consequence of the existence of different solutions $x^{u_*}_{\frak a}(u)$ of the polynomial% % \footnote{More precisely, $u(x) = u_*$ can be reduced to a polynomial equation since $u(x)$ in the form (\ref{uxm}) is a rational function.} % equation $u(x) = u_*$ which has, in general, ${\bf H} = 2 \max(n_1,n_2)$ solutions. On the other hand, these solutions are related to the different classes of permutations solving Eq.(\ref{compto1}); $\bf H$ is a Hurwitz number, see \cite{Pakman:2009zz}. In Appendix B of Ref.\cite{Lima:2020nnx}, we have shown how the $\bf H$ solutions of $u(x) = u_*$ correspond to ${\bf H}$ solutions of Eq.(\ref{compto1}). The discussion there makes it clear how there is an important qualitative change in the counting of different classes of permutations when $n_1 = n_2$, in synchronicity with the drastic changes in the structure of the equation $u(x) = u_*$. It would be interesting to examine this phenomenon more carefully from the point of view of spin chains/diagrams introduced in Refs.\cite{Pakman:2009mi,Pakman:2009zz}. \bigskip \noindent {\bfseries Four-point functions of composite operators} \noindent The results of Sect.\ref{SectFactorization} for the factorization of the four-point function can be applied rather directly to similar functions where the Ramond ground states are replaced by other composite fields with the structure $\big[ \prod_i ( {\mathscr O}^i_{[n_i]})^{q_i} \big]$, $\sum_{i} n_i q_i = N$. For example, we may consider powers of twisted NS chiral fields, etc. We can also replace the interaction operators by other twist-two operators. One ends up with connected functions containing double-cycle operators, which can be computed with the covering map of \S\ref{SectCovMaps}. There could be some additional complications. For example, depending on the operators involved, it may be that the three-point function factorization analogous to (\ref{3ptFact}) does not vanish. 
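For equal twists, where the map (\ref{uxnn}) is explicit, the counting ${\bf H} = 2\max(n_1,n_2) = 2n$ can be checked directly: $u(x) = u_*$ is equivalent to the degree-$2n$ polynomial equation $(x+1)^{2n} - u_*(x-1)^{2n} = 0$. A numerical sketch (sample values ours):

```python
import math
import numpy as np

n, u_star = 3, 0.37   # sample values, 0 < u_star < 1
N = 2*n

# (x+1)^N - u_* (x-1)^N = 0, coefficients listed from highest degree down
coeffs = [math.comb(N, j)*(1 - u_star*(-1)**j) for j in range(N, -1, -1)]
roots = np.roots(coeffs)

def u_map(x):
    return ((x + 1)/(x - 1))**N

residual = max(abs(u_map(x) - u_star) for x in roots)
print(len(roots), residual)  # 2n solutions, tiny residual
```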
Note that these three-point functions fall under the category analyzed in Ref.\cite{Tormo:2018fnt}; the authors computed correlators of NS chiral fields, but their covering map can be used for other fields as well. These different correlators --- along with the function (\ref{GuIntro}) computed here --- are examples of so-called `heavy-heavy-light-light' (HHLL) four-point functions: they contain two `heavy' operators, whose conformal weight is of the order of the central charge $c = 6N$, and two `light' operators, whose conformal weights remain finite in the limit of large $N$. This kind of four-point function is quite interesting for AdS$_3$/CFT$_2$ holography, as discussed in \cite{Galliani:2016cai,Galliani:2017jlg}. There, the authors consider HHLL correlators with Ramond ground states as heavy operators, while the light operators are untwisted NS fields with $h = \frac12$. It would be interesting to use the methods developed in the present paper to compute similar correlators with \emph{twisted} light states, e.g.~the lowest-weight NS chiral $O^{(0,0)}_{[2]}$; we leave this for future work. \vspace{10mm} \noindent {\bf Acknowledgements} \noindent The work of M.S.~is partially supported by the Bulgarian NSF grant KP-06-H28/5 and that of M.S.~and G.S.~by the Bulgarian NSF grant KP-06-H38/11. M.S.~is grateful for the kind hospitality of the Federal University of Esp\'irito Santo, Vit\'oria, Brazil, where part of his work was done.
\section{Introduction}\label{sc-intr} \setcounter{equation}{0} \setcounter{figure}{0} \setcounter{table}{0} Radiative transport equations \cite{agoshkov2012boundary,Davison-1957,Pomraning-1973,Lewis-Miller-1984,Mihalis-Mihalis-1999,graziani2006computational,Dautray-Lions-2000} describe the flows of particles, such as photons, neutrons, and electrons, as they pass through and interact with a background medium. These equations are used in various applications, including astrophysics and nuclear reactor analysis. In this paper, we consider the scaled, steady-state, linear transport equation \begin{subequations}\label{eq-main} \begin{alignat}{3} \Omega \cdot \nabla \Psi(\Omega,x) + \left(\frac{\sig{s}(x)}{\varepsilon}+\varepsilon \sig{a}(x)\right) \Psi(\Omega,x) &= \frac{\sig{s}(x)}{\varepsilon}\overline{\Psi}(x) + \varepsilon q(x), & \quad &(\Omega,x)\in S \times D,\\ \Psi(\Omega,x) &= \alpha(\Omega,x), & &(\Omega,x)\in \Gamma^{-}. \end{alignat} \end{subequations} Here $D\subset \mathbb{R}^d$ $(d=1,2,3)$ is an open, bounded, and Lipschitz domain; $S$ is the projection of the unit sphere in $\mathbb{R}^3$ into $\mathbb{R}^d$ (the interval $[-1,1]$ for $d = 1$ and the unit disk for $d = 2$); and $\Gamma^{-} = \{ (\Omega,x) \in S\times\partial D\mid\Omega\cdot n(x)<0\}$, where $n(x)$ is the outward unit normal vector at any point $x \in \partial D$ where the boundary is $C^1$. The \textit{angular flux} $\Psi$ is the flux of particles at the location $x$ moving with unit speed in the direction $\Omega$, and the \textit{scalar flux} $\overline{\Psi} = \frac{1}{|S|}\int_{S}\Psi d\Omega$ is the average of $\Psi$ over $S$.% \footnote{Often the quantity $\Phi=4\pi\overline{\Psi}$ is referred to as the scalar flux. The difference is simply a normalization factor from integration over the sphere.
Here, we borrow the convention used in \cite{Lewis-Miller-1984}.} The functions $\sig{s}$ and $\sig{a}$ are (known) non-dimensionalized scattering and absorption cross-sections, respectively, and $q$ is a (known) non-dimensionalized source. The function $\alpha(\Omega,x)$ is the (known) incoming flux at $x\in \partial D$ moving in the direction $\Omega$. The constant $\varepsilon>0$ is a scaling parameter which characterizes the relative strength of scattering. Designing effective numerical methods for \eqref{eq-main} is a serious challenge, and the intent of this paper is to address two of the main issues. Firstly, for a three-dimensional problem, the unknown intensity $\Psi$ is a function of three spatial and two angular variables; the discretization of this five-dimensional phase space usually requires significant computational resources. Secondly, when the parameter $\varepsilon$ is small, $\Psi$ is nearly independent of $\Omega$ and can be approximated by the solution of a diffusion equation in the variable $x$ only \cite{Habetler-Matkowsky-1975,bardos1984diffusion,bensoussan1979boundary}. That is, away from the boundary, $\Psi(\Omega,x) = \Psi^{(0)} (x) + O(\varepsilon)$ as $\varepsilon \to 0$, where $\Psi^{(0)} $ satisfies \begin{equation}\label{eq-difflim} -\nabla\cdot\left(\frac{1}{3\sig{s}}\nabla \Psi^{(0)} (x)\right) + \sig{a} \Psi^{(0)}(x) = q(x), \quad x \in D, \end{equation} along with appropriate boundary conditions. A numerical method for \eqref{eq-main} should preserve this asymptotic limit without having to resolve the length scales associated with $\varepsilon$ \cite{jin1999efficient}. In other words, in the limit $\varepsilon \to 0$, a discretization of the transport equation \eqref{eq-main} should become a consistent and stable discretization of the diffusion equation \eqref{eq-difflim}. 
Otherwise a highly refined mesh is needed to approximate the solution accurately \cite{larsen1987asymptotic}.% \footnote{{This issue is also known as ``locking'' in the elliptic literature \cite{babuvska1992locking}.}} Classical approaches for discretizing \eqref{eq-main} often involve separate treatment of the angular and spatial variables, and a variety of options are available. Among them, the $S_N$-DG method \cite{HHE2010, larsen1989asymptotic,adams2001discontinuous} has received significant attention due to its robustness, computational efficiency, and convenient implementation. The $S_N$ method (see \cite{larsen2010advances} for a substantial review and additional references) is a collocation method in which the angular variable $\Omega$ is discretized into a finite number of directions and a quadrature rule is used to evaluate $\overline{\Psi}$. The $S_N$ discretization preserves non-negativity of $\Psi$ and can incorporate the boundary conditions from \eqref{eq-main} in a straightforward way. It also preserves the characteristic structure of the advection operator in \eqref{eq-main}, which allows for the use of fast sweeping techniques for inverting the discrete form of the operator on the left-hand side of \eqref{eq-main}. Discontinuous Galerkin (DG) methods are a class of finite element methods that construct numerical solutions using piecewise polynomial spaces. The DG approach was introduced in \cite{reed1973triangular} for the express purpose of solving equations like \eqref{eq-main}, followed shortly thereafter by a rigorous analysis in \cite{lesaint1974finite}. Since then, DG methods have been applied to nonlinear hyperbolic conservation laws and convection-dominated problems \cite{cockburn2001runge}, elliptic problems \cite{arnold2002unified}, and equations with higher-order derivatives \cite{yan2002local,xu2010local}. When used with upwind fluxes, DG methods preserve the characteristic structure of \eqref{eq-main} that enables sweeps.
Moreover, if the approximation space can support globally continuous linear polynomials, then DG methods with upwind fluxes will yield accurate numerical solutions for $\Psi$ without the need to resolve $\varepsilon$ with the spatial mesh \cite{larsen1989asymptotic, adams2001discontinuous, guermond2010asymptotic}. However, this condition on the approximation space means that at least $P^1$ elements must be used for a triangular mesh and $Q^1$ elements for a rectangular mesh.% \footnote{{This condition can be circumvented for non-upwind methods. In \cite{ragusa2012robust}, the authors made the piecewise constant DG method asymptotic preserving by introducing parameters that adjust the numerical fluxes in different regimes. Similar techniques were introduced in finite volume contexts \cite{jin1996numerical} and were recently used in \cite{guermond2019positive} to develop a positive, asymptotic preserving method.} } In order to reduce memory costs in the upwind $S_N$-DG method, while still preserving the asymptotic diffusion limit and maintaining the characteristic structure needed for sweeps, we propose in this paper to couple the finite element spaces between different collocation angles in the discrete ordinate approximation. Since the solution becomes isotropic in the diffusion limit ($\varepsilon \to 0$), we hypothesize that only a $P^1$ (for triangles) or $Q^1$ (for rectangles) approximation of the angular average is necessary. Thus, instead of using a tensor product finite element space for the $S_N$-DG system, we seek the solution in a proper subspace, in which all the elements have isotropic slopes. This choice of finite element space yields a significant reduction in memory per spatial cell, as illustrated in \cref{tab-sndg-costper}.
\begin{table}[!h] \centering \begin{tabular}{c|c|c} \hline Unknowns per cell & Triangles ($P^1$) & Rectangles ($Q^1$) \\ \hline Standard $S_N$-DG & $(d+1)n_\Omega$ & $2^{d}n_\Omega$ \\ \hline Low-memory $S_N$-DG & ${n_\Omega + d}$ & ${(n_\Omega -1)+ 2^{d}}$ \\ \hline Memory cost ratio as $n_\Omega \gg 1$ & $d+1$ & $2^{d}$\\ \hline \end{tabular} \caption{Memory costs of standard $S_N$-DG and the low-memory variation, both for triangles and rectangles, for spatial dimension $d$. The first two rows give the number of unknowns per spatial cell for each approach. The last row is the asymptotic ratio of the two methods' memory costs as $n_\Omega$ becomes large. } \label{tab-sndg-costper} \end{table} In the diffusion limit, the low-memory approach typically displays second-order accuracy. However, because the finite element representation of each ordinate is coupled to all the other ordinates, the overall accuracy of the low-memory approach for fixed $\varepsilon$ is only first-order. To address this drawback, we propose a modification of the low-memory scheme that uses local reconstruction to improve accuracy. As long as the reconstruction uses upwind information, the resulting transport operator can still be inverted with sweeps. While rigorous theoretical properties of this modified scheme are still under investigation, we observe numerically that it recovers second-order accuracy for arbitrary fixed $\varepsilon$ and captures the asymptotic diffusion limit. However, the method does generate some small numerical artifacts at the discontinuity of the cross section, which we point out in the numerical results of \Cref{sc-num}. The rest of the paper is organized as follows. In \Cref{sc-background}, we introduce the background and revisit the $S_N$-DG method. Low-memory methods, including the original first-order approach and the second-order reconstructed scheme, are detailed in \Cref{sc-lmdg}.
Numerical tests are provided in \Cref{sc-num} to illustrate the behavior of both approaches. Finally, conclusions and future work are discussed in \Cref{sc-conclude}. \section{The $S_N$-DG method}\label{sc-background} \setcounter{equation}{0} \setcounter{figure}{0} \setcounter{table}{0} In this section, we review the $S_N$-DG scheme and discuss its asymptotic properties and implementation. Throughout the paper, we consider the case $\inf_{x\in D} \sig{s}(x) = \delta_s > 0$ and $\inf_{x\in D}\sig{a}(x)=\delta_{\rm{a}} >0$, unless otherwise stated. In general, the well-posedness of \eqref{eq-main} also holds for $\sig{a} \geq 0$ \cite{wu2015geometric}. In some places, we will also assume that the cross-section is piecewise constant, either to simplify the exposition or to make connections between first- and second-order forms of the diffusion limit. In the numerics, we often consider nonzero boundary conditions. However, in proofs we often assume that $\alpha = 0$. When $\alpha$ is nonzero but isotropic, many of the results still hold. However, when $\alpha$ is anisotropic, the diffusion equation requires a boundary layer correction in order to be uniformly accurate \cite{Habetler-Matkowsky-1975}. At the discrete level, this situation requires more sophisticated analysis \cite{larsen1989asymptotic,adams2001discontinuous,guermond2010asymptotic} than is presented here. \subsection{Formulation} Consider a quadrature rule with points $\{\Omega_j\}_{ j =1}^{n_\Omega}$ and positive weights $\{w_j\}_{ j =1}^{n_\Omega}$ such that \begin{equation} \frac{1}{|S|}\int_S f(\Omega) d\Omega \approx \sum_{j=1}^{n_\Omega} w_j f(\Omega_j), \quad \forall f \in C(S). \end{equation} We assume the quadrature is exact for polynomials in $\Omega$ up to degree two% \footnote{Level symmetric quadratures of moderate size will satisfy these properties. 
See, e.g., \cite{Lewis-Miller-1984} and references therein.}% ; that is, \begin{eqnarray} \label{eq-polyquad} (i)~\sum_{j=1}^{n_\Omega} w_j = 1,\quad (ii)~\sum_{j=1}^{n_\Omega} w_j \Omega_j = 0, \quad \text{and} \quad (iii)~\sum_{j=1}^{n_\Omega} w_j \Omega_j \otimes \Omega_j = \frac{1}{3} \operatorname{Id}. \end{eqnarray} The $S_N$ method approximates the angular flux $\Psi$ at the quadrature points $\{\Omega_j\}_{ j = 1}^{n_\Omega}$ by a vector-valued function $\psi(x) = (\psi_1(x),\psi_2(x),\dots,\psi_{n_\Omega}(x))$ whose components satisfy a coupled system with $n_\Omega$ equations \begin{equation} \label{eq:sn} \Omega_j \cdot \nabla \psi_j(x) + \left(\frac{\sig{s}}{\varepsilon}+\varepsilon \sig{a}\right)\psi_j(x) = \frac{\sig{s}}{\varepsilon} \overline{\psi}(x) + \varepsilon q(x), \qquad \overline{\psi}(x) = \sum_{j=1}^{n_\Omega} w_j \psi_j(x). \end{equation} To formulate the upwind DG discretization of the $S_N$ system \eqref{eq:sn}, let $\mathcal{T}_h = \{K\}$ be a quasi-uniform partition of the domain $D$. We assume $D = \cup_{K\in \mathcal{T}_h} \mathrm{cl}(K)$ to avoid unnecessary technicalities. Let $\mathcal{F}_h = \cup_{K\in \mathcal{T}_h}\partial K$ be the collection of cell interfaces and let $\mathcal{F}_h^\partial$ be the collection of boundary faces. Given a cell $K$, we denote by $\nu_K$ the outward normal on $\partial K$ and for any $x \in \partial K$, let $v^{\rm{int}}(x) = \lim_{\delta \to 0^+}v(x - \delta \nu_K)$ and $v^{\rm{ext}}(x) = \lim_{\delta \to 0^+} v(x + \delta \nu_K)$. Given a face $F$, we denote by $\nu_F$ a prescribed normal (chosen by convention) and, for any $x \in F$, let $v^{\pm} = \lim_{\delta \to 0^+} v(x \pm \delta \nu_F)$. For convenience, we assume trace values are identically zero when evaluated outside of $D$.
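As a quick sanity check, the exactness conditions \eqref{eq-polyquad} are easy to verify numerically for a concrete rule. The sketch below (in Python) does so for the slab case $d = 1$, $S = [-1,1]$, using a four-point Gauss--Legendre rule as a stand-in for a level symmetric set; the choice of rule is ours, purely for illustration:

```python
import numpy as np

# Gauss-Legendre nodes/weights on [-1, 1]; dividing the weights by |S| = 2
# makes the rule approximate the *average* over S, so the weights sum to one.
nodes, gl_weights = np.polynomial.legendre.leggauss(4)
w = gl_weights / 2.0

# Exactness conditions (i)-(iii) for polynomials up to degree two:
zeroth = w.sum()                 # (i)   should equal 1
first = (w * nodes).sum()        # (ii)  should equal 0
second = (w * nodes**2).sum()    # (iii) should equal 1/3

print(zeroth, first, second)
```

The symmetry of the nodes enforces (ii) exactly, while (i) and (iii) hold because the rule integrates low-degree polynomials exactly.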
The standard $S_N$-DG method uses the tensor-product finite element space \begin{equation}\label{eq-cV} \mathcal{V}_h= \prod_{j=1}^{n_\Omega} V_h,\qquad V_h = \{v_j: v_j|_K \in Z_1(K)\}, \end{equation} where for triangular or tetrahedral meshes, $Z_1(K)$ is the space $P^1(K)$ of linear polynomials on $K$ and for Cartesian meshes $Z_1(K)$ is the space $Q^1(K)$ of multilinear polynomials on $K$. The space $\mathcal{V}_h$ can be equipped with an inner product $(\cdot,\cdot)$ and associated norm $\|\cdot\|$ given by \begin{equation} (u,v) = \sum_{K\in \mathcal{T}_h} \sum_{j=1}^{n_\Omega} w_j \int_K u_j v_j dx \qquad\text{and}\qquad \|v\| = \sqrt{(v,v)}. \end{equation} The semi-norm induced by jumps at the cell interfaces is given by \begin{equation}\label{eq-jump} \llbracket v\rrbracket = \left({\sum_{F\in \mathcal{F}_h}\sum_{j=1}^{n_\Omega} w_j \int_F |\Omega_j\cdot \nu_F| (v_j^{-}-v_j^{+})^2 dx}\right)^{1/2}. \end{equation} To construct the $S_N$-DG method, define the local operators \begin{subequations} \begin{align} \label{eq-Ljk} L_{j,K}(u,v) =& -\int_K u_{j} \Omega_j \cdot \nabla v_j dx + \int_{\partial K} \widehat{u}_{j} \Omega_j\cdot \nu_K v^{\rm{int}}_j dx \\ &+ \int_K \left(\frac{\sig{s}}{\varepsilon}+\varepsilon \sig{a}\right) u_jv_j dx,\nonumber \\ \label{eq-Sjk} S_{j,K}(u,v) =& \int_K\frac{\sig{s}}{\varepsilon}\overline{u} v_j dx, \quad \text{with}~\overline{u} = \sum_{i=1}^{n_\Omega} w_i u_i, \\ Q_{j,K,\alpha}(v) =& \int_K \varepsilon q v_j dx - \int_{\partial K \cap \mathcal{F}_h^{\partial}} \alpha \Omega_j\cdot \nu_K v_j^{\mathrm{int}} dx, \end{align} \end{subequations} where $\widehat{u}_j (x)= \lim_{\delta \to 0^-} u_j(x + \delta \Omega_j) $ is the upwind trace at $x \in \partial K$, and is defined as zero when the limit is taken from outside of $D$.
Then set \begin{equation}\label{eq-Bdef} B(u,v) = L(u,v) - S(u,v), \end{equation} where \begin{equation} \label{eq-LandS} L(u,v) = \sum_{K\in \mathcal{T}_h}\sum_{j=1}^{n_\Omega} w_jL_{j,K}(u,v) \quad \text{and} \quad S(u,v) = \sum_{K\in \mathcal{T}_h}\sum_{j=1}^{n_\Omega} w_j S_{j,K}(u,v), \end{equation} and let \begin{equation}\label{eq-lalpha} Q_{\alpha}(v) = \sum_{K\in \mathcal{T}_h}\sum_{j=1}^{n_\Omega} w_j Q_{j,K,\alpha}(v). \end{equation} The $S_N$-DG method is then: \textit{find $\psi_{h}=(\psi_{h,1},\dots,\psi_{h,n_\Omega}) \in \mathcal{V}_h $ such that} \begin{equation}\label{eq-sndg-scheme} B(\psi_h,v) = Q_{\alpha}(v), \qquad \forall v \in \mathcal{V}_h. \end{equation} \subsubsection{Implementation}\label{sc-basis} Recall that $n_\Omega$ is the number of discrete ordinates in the $S_N$ discretization. Let $n_x = |\mathcal{T}_h|$ be the number of mesh cells in $\mathcal{T}_h$ and let $n_P$ be the dimension of $Z_1(K)$. Then the dimension of $\mathcal{V}_h$ is $n_\Omega\cdot n_x \cdot n_P$. Let $\{b^{p,r}:p = 1,\ldots,n_x,r=0,\ldots,n_P-1\}$ be a set of basis functions for $V_h$, with $b^{p,r}$ locally supported on $K_p \in \mathcal{T}_h$. Then the set $\mathbb{B} = \{\xi^{l,p,r}:l =1,\dots,n_\Omega,p=1,\dots,n_x,r=0,\dots,n_P-1\}$, where $\xi_j^{l,p,r}(x) = \delta_{l j} b^{p,r}(x)$ ($j = 1,\ldots n_\Omega$) and $\delta$ is the Kronecker delta, gives a complete set of basis functions for $\mathcal{V}_h$. With this choice of basis functions, the variational formulation in \eqref{eq-sndg-scheme}, written as \begin{equation}\label{eq-vari-LSQ} L(\psi_h,v) = S(\psi_h,v) + Q_{\alpha}(v),\qquad \forall v\in \mathcal{V}_h, \end{equation} can be assembled into a linear system (detailed in \Cref{ap-sndg-matrix}) \begin{equation}\label{eq-sndg-mat} \mathbf{L} \mathbf{\Psi} = \mathbf{M} \mathbf{P} \mathbf{\Psi} + \mathbf{Q}. 
\end{equation} In the above equation, $\mathbf{L}$ is an $(n_\Omega\cdot n_x\cdot n_P)\times (n_\Omega\cdot n_x\cdot n_P)$ block diagonal matrix, where the $j$-th block ($j = 1,\ldots,n_\Omega$) corresponds to the discretization of the operator $\psi_j \to \Omega_j \cdot \nabla \psi_j + \left(\frac{\sig{s}}{\varepsilon}+\varepsilon \sig{a}\right)\psi_j$; $\mathbf{M}$ is an injective $(n_\Omega\cdot n_x\cdot n_P)\times (n_x\cdot n_P)$ matrix; $\mathbf{P}$ is an $(n_x\cdot n_P) \times (n_\Omega\cdot n_x\cdot n_P)$ matrix; $\mathbf{Q}$ is an $(n_\Omega\cdot n_x\cdot n_P)$ vector assembled from the source $q$ and the inflow boundary $\alpha$; and $\mathbf{\Psi} = (\psi^{l,p,r})$ is an $(n_\Omega\cdot n_x\cdot n_P)$ vector such that $\psi_h = \sum_{l,p,r}\psi^{l,p,r}\xi^{l,p,r}$. If upwind values are used to evaluate the numerical trace $\widehat{u}_j$, each block of $\mathbf{L}$ can be inverted efficiently with a sweep algorithm. The system in \eqref{eq-sndg-mat} can be solved numerically with a Krylov method by first solving the reduced system \begin{equation}\label{eq-sndg-phi} \mathbf{\Phi} - \mathbf{P} \mathbf{L}^{-1}\mathbf{M} \mathbf{\Phi} = \mathbf{P} \mathbf{L}^{-1} \mathbf{Q} \end{equation} for the ${n_x\cdot n_P}$ vector $\mathbf{\Phi}:=\mathbf{P} \mathbf{\Psi}$. This equation is derived by applying $\mathbf{L}^{-1}$ and then $\mathbf{P}$ to \eqref{eq-sndg-mat}. In a second step, $\mathbf{\Psi}$ is recovered from the relation \begin{equation}\label{eq-sndg-final-sweep} \mathbf{\Psi} = \mathbf{L}^{-1}\mathbf{M} \mathbf{\Phi} + \mathbf{L}^{-1} \mathbf{Q}. \end{equation} The following theorem is proven in \Cref{ap-sndg-mat-psi}. \begin{THM} \label{thm-sndg-mat-psi} The matrix $\mathbf{I}_{n_x\cdot n_P}-\mathbf{P}\mathbf{L}^{-1}\mathbf{M}$ is invertible.
\end{THM} \begin{REM}[Sherman--Morrison formula] According to the Sherman--Morrison formula (see, for example, \cite[Section 2.1.3]{golub2012matrix}): given a matrix $\mathbf{B} = \mathbf{A} + \mathbf{U} \mathbf{V}$ with $\mathbf{A}$ and $\mathbf{I} + \mathbf{V} \mathbf{A}^{-1}\mathbf{U}$ invertible, \begin{equation}\label{eq-sndg-mat-sm} \mathbf{B}^{-1} = \mathbf{A}^{-1} - \mathbf{A}^{-1}\mathbf{U} (\mathbf{I}+\mathbf{V}\mathbf{A}^{-1}\mathbf{U})^{-1}\mathbf{V} \mathbf{A}^{-1}. \end{equation} Direct application of \eqref{eq-sndg-mat-sm} with $\mathbf{A} = \mathbf{L}$, $\mathbf{U} = -\mathbf{M}$, and $\mathbf{V} = \mathbf{P}$ yields the formula in \eqref{eq-sndg-final-sweep} with $\mathbf{\Phi}$ given by \eqref{eq-sndg-phi}. \end{REM} \subsubsection{Asymptotic scheme} As $\varepsilon \to 0$, the $S_N$-DG scheme gives a consistent approximation to the asymptotic diffusion problem. For simplicity, we focus here on the zero inflow boundary condition $\alpha = 0$. The analysis of more general boundary conditions can be found in \cite{adams2001discontinuous, guermond2010asymptotic, guermond2014discontinuous, larsen1989asymptotic}. We use an overline to represent isotropic subspaces. For example, \begin{equation} \overline{\mathcal{V}}_h = \{v = (v_1,\ldots,v_{n_\Omega}) \in \mathcal{V}_h: v_i = \overline{v}, \forall i\}. \end{equation} We further define $\mathcal{C}_{h,\mathrm{zero}}$ to be the space of continuous functions in $\overline{\mathcal{V}}_h$ that vanish on $\partial D$. The tensor product space $\overline{\mathcal{V}}_h^d = \{(\vphi_1,\dots,\vphi_d): \vphi_i \in \overline{\mathcal{V}}_h\}$ carries an induced norm still denoted by $\|\cdot\|$. In particular, since $\overline{\mathcal{V}}_h$ and $V_h$ are isomorphic, we often identify $\overline{\mathcal{V}}_h$ with $V_h$.
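Before turning to the asymptotic analysis, we note that the algebraic equivalence between the reduced solve \eqref{eq-sndg-phi}--\eqref{eq-sndg-final-sweep} and a direct solve of \eqref{eq-sndg-mat} is easy to check on a small example. The sketch below (in Python) uses small random stand-ins for $\mathbf{L}$, $\mathbf{M}$, $\mathbf{P}$, and $\mathbf{Q}$ rather than the actual assembled matrices, and is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N, m = 12, 4                      # stand-ins for n_Omega*n_x*n_P and n_x*n_P

# Random stand-ins: L diagonally dominant (hence invertible); M, P small
# enough that I - P L^{-1} M stays well conditioned.
L = 2.0 * np.eye(N) + 0.1 * rng.standard_normal((N, N))
M = 0.1 * rng.standard_normal((N, m))
P = 0.1 * rng.standard_normal((m, N))
Q = rng.standard_normal(N)

# Direct solve of (L - M P) Psi = Q.
Psi_direct = np.linalg.solve(L - M @ P, Q)

# Reduced solve: (I - P L^{-1} M) Phi = P L^{-1} Q, then recover Psi.
LinvM = np.linalg.solve(L, M)
LinvQ = np.linalg.solve(L, Q)
Phi = np.linalg.solve(np.eye(m) - P @ LinvM, P @ LinvQ)
Psi = LinvM @ Phi + LinvQ

print(np.allclose(Psi, Psi_direct))   # True
```

In the actual scheme, the applications of $\mathbf{L}^{-1}$ would of course be performed by sweeps rather than dense solves.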
To facilitate the discussion, we also define \begin{equation} \label{eq-Jh} J_h = \frac{1}{\varepsilon}\sum_{j=1}^{n_\Omega} w_j\Omega_j \psi_{h,j} = \sum_{j=1}^{n_\Omega} w_j\Omega_j \frac{\psi_{h,j}-\overline{\psi}_h}{\varepsilon}, \end{equation} which is a vector field in $\mathbb{R}^d$. The following result is proved in \cite{guermond2010asymptotic}\footnote{The result in \cite{guermond2010asymptotic} is actually stated more generally; in particular, it allows $\alpha$ to be nonzero and possibly anisotropic.}; see also \cite{adams2001discontinuous} and \Cref{thm-lmdg-asympscheme} in this paper. \begin{THM}[Asymptotic scheme]\label{thm-sndg-asympscheme} Suppose $\alpha = 0$. Then as $\varepsilon \to 0$, $(\psi_h)_{\varepsilon>0}$ and $(J_h)_{\varepsilon>0}$ converge to ${\psi}_h^{(0)} = \overline{\psi}_h^{(0)} \in \mathcal{C}_{h,\mathrm{zero}}$ and $J_h^{(0)}\in \overline{\mathcal{V}}_{h}^d$, respectively, that are the unique solution to the mixed problem: \begin{subequations}\label{eq-sndg-mix} \begin{gather} \sum_{K\in \mathcal{T}_h} \int_K \left(-J_h^{(0)}\cdot \nabla \vphi + \sig{a}{\psi}_h^{(0)} \vphi\right) dx= \int_D q\vphi dx,\label{eq-sndg-lim1}\\ \sum_{K\in \mathcal{T}_h}\int_K\left(\frac{1}{3}\nabla {\psi}_h^{(0)} + \sig{s} J_h^{(0)}\right)\cdot \zeta dx = 0\label{eq-sndg-lim2}, \end{gather} \end{subequations} $\forall \vphi \in \mathcal{C}_{h,\mathrm{zero}}$ and $\forall \zeta \in \overline{\mathcal{V}}_h^{d}$. \end{THM} \section{Low-memory strategies}\label{sc-lmdg} \setcounter{equation}{0} \setcounter{figure}{0} \setcounter{table}{0} In this section, we generalize the statement of \Cref{thm-sndg-asympscheme} slightly to allow for proper subspaces of $\mathcal{V}_h$ in the finite element formulation. Based on the analysis, a first-order low-memory scheme is constructed. We then apply a reconstruction technique to lift the accuracy of the method to second order.
\subsection{Asymptotic schemes with subspaces of $\mathcal{V}_h$}\label{subsec-asymptotic-subspaces} The results of \cref{thm-sndg-asympscheme} suggest that, rather than $\psi_{h}$, it is the approximation of the integrated quantities $\overline{\psi}_h$ and $J_h$ that plays an important role in the diffusion limit. In particular, the continuity requirement on $\overline{\psi}_h^{(0)}$ plays a crucial role. Indeed, as is well known \cite{adams2001discontinuous}, if the space $V_h$ is constructed from piecewise constants, then \eqref{eq-sndg-mix} implies that ${\psi}_h^{(0)}$ is a global constant and $J_h^{(0)} = 0$. This solution is clearly inconsistent with the diffusion limit. However, it is possible to construct a DG method: \textit{find $\psi_{h}=(\psi_{h,1},\dots,\psi_{h,n_\Omega}) \in \mathcal{W}_h $ such that} \begin{equation}\label{eq-sndglm-scheme} B(\psi_h,v) = Q_{\alpha}(v), \qquad \forall v \in \mathcal{W}_h \end{equation} based on a proper subspace $\mathcal{W}_h \subset \mathcal{V}_h$ that maintains the diffusion limit, but requires fewer unknowns for a given mesh $\mathcal{T}_h$. \begin{THM}\label{thm-gene-uni-solvency} For each $\varepsilon>0$ and linear subspace $\mathcal{W}_h \subset \mathcal{V}_h$, \eqref{eq-sndglm-scheme} has a unique solution. In particular, if $\alpha = 0$, the solution satisfies the energy estimate \begin{equation}\label{eq-stab} \frac{1}{\varepsilon}\|\sig{s}^{\frac{1}{2}}(\psi_h - \overline{\psi}_h)\|^2 + \frac{\varepsilon}{2}\|\sig{a}^\frac{1}{2}\psi_h\|^2 +\frac{1}{2}\llbracket \psi_h\rrbracket^2 \leq \frac{\varepsilon}{2\delta_{\rm{a}}}\|q\|^2. \end{equation} \end{THM} The proof is based on coercivity of $B(\cdot,\cdot)$; we refer to \cite{HHE2010} and \cite{guermond2010asymptotic} for details. Here, $\alpha = 0$ is assumed for simplicity. Energy estimates with general inflow boundary conditions can be found in \cite[Lemma 4.2]{guermond2010asymptotic}.
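For the reader's convenience, here is a brief sketch of the standard argument when $\alpha = 0$. A direct computation with \eqref{eq-Bdef} gives the coercivity bound
\begin{equation*}
B(v,v) \geq \frac{1}{\varepsilon}\|\sig{s}^{\frac{1}{2}}(v - \overline{v})\|^2 + \varepsilon\|\sig{a}^{\frac{1}{2}} v\|^2 + \frac{1}{2}\llbracket v \rrbracket^2, \qquad \forall v \in \mathcal{V}_h,
\end{equation*}
while testing \eqref{eq-sndglm-scheme} with $v = \psi_h$ and applying Young's inequality yields
\begin{equation*}
B(\psi_h,\psi_h) = \varepsilon \int_D q\, \overline{\psi}_h\, dx \leq \frac{\varepsilon}{2\delta_{\rm{a}}}\|q\|^2 + \frac{\varepsilon \delta_{\rm{a}}}{2}\|\psi_h\|^2 \leq \frac{\varepsilon}{2\delta_{\rm{a}}}\|q\|^2 + \frac{\varepsilon}{2}\|\sig{a}^{\frac{1}{2}}\psi_h\|^2,
\end{equation*}
where we have used $\|\overline{\psi}_h\|_{L^2(D)} \leq \|\psi_h\|$ and $\delta_{\rm{a}} \leq \sig{a}$. Absorbing the last term into the coercivity bound gives \eqref{eq-stab}.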
In \cite{HHE2010}, the case $\varepsilon = 1$ is studied and error estimates are derived using the coercivity with respect to a modified norm. We next characterize sufficient conditions for $\mathcal{W}_h $. Define the spaces \begin{equation} \overline{\Omega \mathcal{W}_h}:= \{\sum_{j=1}^{n_{\Omega}} w_j \Omega_j v_j: v\in \mathcal{W}_h\} \subset \overline{\mathcal{V}}_h^d \quad \text{and} \quad \Omega\cdot \overline{\Omega \mathcal{W}_h} := \{\Omega\cdot \zeta:\zeta \in \overline{\Omega \mathcal{W}_h}\} \subset \mathcal{V}_h, \end{equation} where $\Omega\cdot \zeta:=(\Omega_1 \cdot \zeta, \ldots, \Omega_{n_\Omega} \cdot \zeta)$. According to \eqref {eq-Jh}, $J_h\in \overline{\Omega \mathcal{W}_h}$. \Cref{thm-sndg-asympscheme} can now be generalized to the space $\mathcal{W}_h $. \begin{THM}\label{thm-lmdg-asympscheme} Suppose $\alpha = 0$. Suppose $\mathcal{W}_h \subset \mathcal{V}_h$ is a linear space such that $\Omega\cdot \overline{\Omega \mathcal{W}_h} \subset \mathcal{W}_h$. Then as $\varepsilon \to 0$, $({\psi_h})_{\varepsilon>0}$ and $({J}_h)_{\varepsilon>0}$ converge to ${\psi}_h^{(0)} = \overline{\psi}_h^{(0)}\in \mathcal{C}_{h,\mathrm{zero}}\cap\mathcal{W}_h$ and ${J}_h^{(0)}\in\overline{\Omega \mathcal{W}_h}$, respectively, that are the unique solution to the mixed problem \eqref{eq-sndg-mix}, $\forall \vphi \in \mathcal{C}_{h,\mathrm{zero}}\cap \mathcal{W}_h$ and $\forall \zeta \in \overline{\Omega \mathcal{W}_h}$. \end{THM} \begin{proof} Because the proof follows the arguments in \cite[Section 4]{guermond2010asymptotic} closely, we provide only a brief outline, emphasizing where the condition on the space $\mathcal{W}_h$ plays a role. 1. 
The stability estimate in \eqref{eq-stab} provides the following three bounds: \begin{equation}\label{eq-bound} (i)~\|\psi_h \|^2\leq \frac{1}{\delta_{\rm{a}}^2}\|q\|^2 ,\quad (ii)~\|\psi_h -\overline{\psi}_h \|^2 \leq \frac{\varepsilon^2}{\delta_{\rm{a}} \delta_{\rm{s}}}\|q\|^2 ,\quad \text{and} \quad (iii)~ \llbracket\psi_h\rrbracket^2 \leq \frac{\varepsilon}{\delta_{\rm{a}}}\|q\| ^2. \end{equation} Bounds (i) and (ii) imply that $\psi_h $ converges (via a subsequence) to a function $ {\psi}_h^{(0)} \in \overline{\mathcal{V}}_h$. Bound (iii) implies that $\psi_h^{(0)} \in \mathcal{C}_{h,\mathrm{zero}} \cap \mathcal{W}_h = \mathcal{C}_{h,\mathrm{zero}} \cap \overline{\mathcal{W}}_h$. 2. Since, from the definition in \eqref{eq-Jh}, \begin{equation} \|J_h \| \leq \sum_{j=1}^{n_\Omega} w_j \frac{\|\psi_h-\overline{\psi}_h\|}{\varepsilon}, \end{equation} where $\|J_h\|$ is the tensor product norm of $J_h$ in $\overline{\mathcal{V}}^d_h$, the bound (ii) implies further that $(J_h )_{\varepsilon>0} \subset \overline{\Omega \mathcal{W}_h}$ is uniformly bounded and hence converges subsequentially to a limit $J_h^{(0)}\in \overline{\Omega \mathcal{W}_h}$. 3. The equation in \eqref{eq-sndg-lim1} is derived by testing \eqref{eq-sndglm-scheme} with $v =\vphi \in \mathcal{C}_{h,\mathrm{zero}}\cap \mathcal{W}_h$ and using the fact that $\vphi$ is independent of $\Omega$ and continuous in $x$. 4. It is the derivation of \eqref{eq-sndg-lim2} which uses the condition $\Omega\cdot \overline{\Omega \mathcal{W}_h} \subset \mathcal{W}_h$. Specifically, if $v = \Omega\cdot \zeta$ with $\zeta \in \overline{\Omega\mathcal{W}_{h}}$, then this condition implies that $v \in \mathcal{W}_h$. 
Therefore, we can test \eqref{eq-sndglm-scheme} with this choice of $v$ to find that \begin{equation} \begin{aligned} L(\psi_h,\Omega \cdot \zeta) - S(\psi_h,\Omega \cdot \zeta) =&- \sum_{j=1}^{n_\Omega} w_j \sum_{K \in \mathcal{T}_h} \int_K \psi_{h,j} \Omega_j \cdot \nabla (\Omega_j \cdot \zeta) dx \\ &+ \sum_{j=1}^{n_\Omega} w_j \sum_{K \in \mathcal{T}_h} \int_{\partial K} \widehat\psi_{h,j} (\Omega_j \cdot \nu_K) (\Omega_j \cdot \zeta^{\rm{int}}) dx \\ &+ \sum_{j=1}^{n_\Omega} w_j \sum_{K \in \mathcal{T}_h} \int_K \left( \left( \frac{\sig{s}}{\varepsilon} + \varepsilon \sig{a} \right) \psi_{h,j} - \frac{\sig{s}}{\varepsilon} \overline{\psi}_h \right) (\Omega_j \cdot \zeta) dx \\ =:&\ I + II + III. \end{aligned} \end{equation} We combine $I$ and $II$, using the fact that $ \overline{\psi}^{(0)}_{h} \in \mathcal{C}_{h,\mathrm{zero}}$ and invoking \eqref{eq-polyquad}. This gives \begin{equation} \label{eq-fick1} \begin{aligned} \lim_{\varepsilon\to 0} (I + II) &=\sum_{j=1}^{n_\Omega} w_j (\Omega_j \otimes \Omega_j) : \sum_{K \in \mathcal{T}_h} \left(- \int_K \overline{\psi}^{(0)}_{h} \nabla \zeta dx + \int_{\partial K} \overline{\psi}^{(0)}_{h} \nu_K \otimes \zeta^{\rm{int}} dx \right) \\ & = \frac{1}{3} \operatorname{Id} : \sum_{K \in \mathcal{T}_h} \int_K \nabla \overline{\psi}^{(0)}_{h} \otimes \zeta dx = \sum_{K \in \mathcal{T}_h} \int_K \frac{1}{3} \nabla \overline{\psi}^{(0)}_{h} \cdot \zeta dx. \end{aligned} \end{equation} Since $\sum_{j=1}^{n_\Omega} w_j \overline{\psi}_{h} \Omega_j =0 $, \begin{align} \label{eq-fick2} \lim_{\varepsilon\to 0} III = \lim_{\varepsilon\to 0} \sum_{K \in \mathcal{T}_h} \int_K \left( \frac{\sig{s}}{\varepsilon} + \varepsilon \sig{a} \right) \sum_{j=1}^{n_\Omega}(w_j \psi_{h,j} \Omega_j )\cdot \zeta dx = \sum_{K \in \mathcal{T}_h} \int_K \sig{s} J^{(0)}_h \cdot \zeta dx .
\end{align} Finally, the right-hand side of \eqref{eq-sndglm-scheme} is (for $\alpha = 0$) \begin{equation} \label{eq-L0-test} Q_0(v) = \sum_{j=1}^{n_\Omega} w_j \sum_{K \in \mathcal{T}_h} \int_K \Omega_j \cdot \zeta q dx= 0. \end{equation} Combining \eqref{eq-fick1}, \eqref{eq-fick2}, and \eqref{eq-L0-test} recovers \eqref{eq-sndg-lim2}. 5. Uniqueness of the subsequential limits $\psi ^{(0)}_h$ and $J^{(0)}_h$ follows from the uni-solvency of \eqref{eq-sndg-mix}. Indeed if $(\widetilde \psi_h,\widetilde J_h)$ is the difference between any two solutions of \eqref{eq-sndg-mix}, then \begin{equation} 3\sig{s} \|\widetilde J_h\|^2 + \sig{a}\|\widetilde \psi_h \|^2 = 0. \end{equation} Since $\sig{s}$ and $\sig{a}$ are assumed positive, it follows that $\widetilde \psi_h$ and $\widetilde J_h$ are identically zero. \end{proof} We then discuss the choice of $\mathcal{W}_h$ and the corresponding space pair, $\mathcal{S}_h := \mathcal{C}_{h,\mathrm{zero}}\cap \mathcal{W}_h$ and $\mathcal{J}_h := \overline{\Omega\mathcal{W}_h}$, in the diffusion limit. Let $Z_0(K)$ be the space spanned by constants on $K$. Then we define the piecewise constant space $ \mathcal{V}_{h,0} = \{v\in \mathcal{V}_h:v_j|_K \in Z_0(K), \forall K \in \mathcal{T}_h\} $ and its orthogonal complement $\mathcal{V}_{h,1} = \{v \in \mathcal{V}_h: \int_K v_j dx = 0, \forall K \in \mathcal{T}_h\}. $ The isotropic subspace of $\mathcal{V}_{h,r}$ is denoted by $\overline{\mathcal{V}}_{h,r}$ and the subsequent product space is denoted by $\overline{\mathcal{V}}_{h,r}^d$, $r = 0,1$. 1. When $\mathcal{W}_h = \mathcal{V}_{h,0}$ or $\mathcal{W}_h =\{v\in \mathcal{V}_h:v_j|_K \in P_1(K),\forall K \in \mathcal{T}_h\}$, we have $\mathcal{S}_h = \{0\}$, which implies $\psi_h^{(0)} = 0$ and $J_h^{(0)} = 0$. 2. 
When $\mathcal{W}_h = \mathcal{V}_{h,0} + \overline{\mathcal{V}}_{h,1} + \Omega \cdot \overline{\mathcal{V}}_{h,1}^d$, it can be shown that $\mathcal{S}_h = \mathcal{C}_{h,\mathrm{zero}}$, $\mathcal{J}_h =\overline{\mathcal{V}}_h^d$\footnote{Indeed, $\overline{\mathcal{V}}_h^d\supset \mathcal{J}_h = \overline{\Omega \mathcal{W}_h} \supset \overline{\Omega \left(\Omega\cdot \overline{\mathcal{V}}_h^d\right)} = \overline{\mathcal{V}}_h^d$, which forces $\mathcal{J}_h = \overline{\mathcal{V}}_h^d$. Here we have used (iii) in \eqref{eq-polyquad} for the last equality.} and $\Omega\cdot \mathcal{J}_h \subset \mathcal{W}_h$. The asymptotic scheme is the same as that of the original $S_N$-DG method. If $\sig{s}$ and $\sig{a}$ are both piecewise constant, then the asymptotic scheme has the primal form: \textit{find $\psi_h^{(0)} \in \mathcal{C}_{h,\mathrm{zero}}$, such that \begin{equation}\label{eq-lmdg-cg} \sum_{K\in \mathcal{T}_h} \int_K \left(\frac{1}{3 \sig{s}}\nabla {\psi}_h^{(0)} \cdot \nabla \vphi+\sig{a}{\psi}_h^{(0)}\vphi\right) dx= \int_D q\vphi dx, \end{equation} $\forall \vphi \in \mathcal{C}_{h,\mathrm{zero}}$.} This is the classical continuous Galerkin approximation, which is stable and second-order accurate. 3. When $\mathcal{W}_h =\mathcal{V}_{h,0} + \overline{\mathcal{V}}_{h,1}$, we have $\mathcal{S}_h = \mathcal{C}_{h,\mathrm{zero}}$, $\mathcal{J}_h =\overline{\mathcal{V}}_{h,0}^d$ and $\Omega\cdot \mathcal{J}_h \subset \mathcal{W}_h$. With $P^1$ elements and triangular meshes, the asymptotic scheme is essentially the $P_N$ scheme suggested by Egger and Schlottbom in \cite{egger2012mixed} with $N = 1$. If $Q^1$ elements and Cartesian meshes are used, the scheme yields the same variational form as that in \cite{egger2012mixed}, while the space pair no longer satisfies the condition $\nabla \mathcal{S}_h \subset \mathcal{J}_h$.
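In all of the nontrivial cases above, the limiting problem is a small symmetric elliptic system. As a concrete sanity check of the primal form \eqref{eq-lmdg-cg}, the following minimal one-dimensional sketch (with $\sig{s} = \sig{a} = 1$ and the manufactured solution $\psi = \sin(\pi x)$, both chosen here purely for illustration) assembles the $P^1$ continuous Galerkin system for $-\frac{1}{3\sig{s}}\psi'' + \sig{a}\psi = q$ on $[0,1]$ with homogeneous Dirichlet data and solves it with the Thomas algorithm:

```python
import math

def cg_diffusion_limit(n, sigs=1.0, siga=1.0):
    # P1 continuous Galerkin for -(1/(3*sigs)) psi'' + siga*psi = q on [0,1],
    # psi(0) = psi(1) = 0, with q manufactured so that psi_exact = sin(pi*x).
    h = 1.0 / n
    x = [i * h for i in range(n + 1)]
    q = lambda s: (math.pi ** 2 / (3.0 * sigs) + siga) * math.sin(math.pi * s)
    # Tridiagonal system for interior nodes 1..n-1:
    # stiffness (1/(3*sigs)) * [-1, 2, -1] / h plus P1 mass siga*h*[1/6, 2/3, 1/6].
    diag = [2.0 / (3.0 * sigs * h) + siga * 2.0 * h / 3.0] * (n - 1)
    off = -1.0 / (3.0 * sigs * h) + siga * h / 6.0
    rhs = [q(x[k]) * h for k in range(1, n)]  # lumped load, still O(h^2)
    # Thomas algorithm: forward elimination, then back substitution.
    for k in range(1, n - 1):
        m = off / diag[k - 1]
        diag[k] -= m * off
        rhs[k] -= m * rhs[k - 1]
    psi = [0.0] * (n + 1)
    psi[n - 1] = rhs[n - 2] / diag[n - 2]
    for k in range(n - 3, -1, -1):
        psi[k + 1] = (rhs[k] - off * psi[k + 2]) / diag[k]
    # Maximum nodal error against the manufactured solution.
    return max(abs(psi[i] - math.sin(math.pi * x[i])) for i in range(n + 1))
```

Halving $h$ reduces the maximum nodal error by roughly a factor of four, consistent with the second-order accuracy of \eqref{eq-lmdg-cg}.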
From another point of view, if $\sig{s}$ and $\sig{a}$ are piecewise constant, then the primal form is: \emph{find $\psi_h^{(0)} \in \mathcal{C}_{h,\mathrm{zero}}$, such that \begin{equation}\label{eq-difflimPF} \sum_{K\in \mathcal{T}_h} \int_K\left( \frac{1}{3\sig{s}} \Pi_0(\nabla \psi_h^{(0)}) \cdot \Pi_0(\nabla \vphi) + \sig{a}\psi_h^{(0)}\vphi\right)dx = \sum_{K\in \mathcal{T}_h} \int_K q \vphi dx, \end{equation} $\forall \vphi \in \mathcal{C}_{h,\mathrm{zero}}$.} For $P^1$ elements on triangular meshes, \eqref{eq-difflimPF} is identical to \eqref{eq-lmdg-cg}. For $Q^1$ elements on Cartesian meshes, one can show that \eqref{eq-difflimPF} is unisolvent. Furthermore, $\|\Pi_0(\nabla \psi_h^{(0)})\|^2 + \|\psi_h^{(0)}\|^2 \leq \max(\frac{3\sig{s}}{2\sig{a}},\sig{a}^{-2}) \|q\|^2$, if $\sig{a}\geq \delta_{\mathrm{a}} >0$. The accuracy, however, is hard to analyze within the finite element framework. Assume a uniform square mesh with cell length $h$. Let $\sig{s}$ and $\sig{a}$ be globally constant. Then \eqref{eq-difflimPF} can be rewritten as a finite difference scheme in terms of the Lagrange basis functions. \begin{align} -\frac{\psi_{i-1,j-1}+\psi_{i-1,j+1}-4\psi_{i,j}+\psi_{i+1,j-1}+\psi_{i+1,j+1}}{3\sig{s}\cdot 2h^2} + \sig{a}A[\psi_{i,j}] = & A[q_{i,j}],\\ A[\psi_{i,j}] := \frac{1}{36} \left(\psi_{i-1,j-1}+\psi_{i-1,j+1}+\psi_{i+1,j-1}+\psi_{i+1,j+1}\right) &\\ + \frac{1}{9}\left(\psi_{i-1,j}+\psi_{i,j-1}+\psi_{i,j+1}+\psi_{i+1,j}\right)\ + \frac{4}{9}\psi_{i,j}.& \nonumber \end{align} The truncation error of the method is $\mathcal{O}(h^2)$. \ At first glance, $\mathcal{W}_h = \mathcal{V}_{h,0} + \overline{\mathcal{V}}_{h,1} + \Omega \cdot \overline{\mathcal{V}}_{h,1}^d$ seems to be the natural choice for constructing the low-memory scheme that preserves the correct diffusion limit. However, coupling between angles requires special treatment for reducing the system dimension.
The extra moments $\Omega \cdot \overline{\mathcal{V}}_{h,1}^{d}$ will make the resulting system even larger than that of the original $S_N$-DG method. Although it may be worthwhile to include extra moments for problems with anisotropic scattering, for which a large system has to be solved anyway, we avoid this option for solving \eqref{eq-main}. We therefore explore the other choice $\mathcal{W}_h = \mathcal{V}_{h,0} + \overline{\mathcal{V}}_{h,1}$ in the rest of the paper. \subsection{Low-memory scheme} Based on the analysis and discussion of \Cref{subsec-asymptotic-subspaces}, we propose a scheme that uses the finite element space \begin{equation} \mathcal{V}_h^{\mathrm{lm}} = \mathcal{V}_{h,0}+\overline{\mathcal{V}}_{h,1}. \end{equation} The low-memory $S_N$-DG scheme is written as follows: \textit{find $\psi_h \in \mathcal{V}_h^{\mathrm{lm}}$, such that \begin{equation}\label{eq-gene-scheme} B(\psi_h,v) = Q_{\alpha}(v), \qquad \forall v \in \mathcal{V}_h^\mathrm{lm}, \end{equation}} where $B$ and $Q_\alpha$ are defined in \eqref{eq-Bdef} and \eqref{eq-lalpha}, respectively. We now show that this scheme can be implemented using sweeps; i.e., a strategy analogous to \eqref{eq-sndg-phi} and \eqref{eq-sndg-final-sweep}, which relies heavily on the fast inversion of the operator $\mathbf{L}$. For simplicity, we only consider the case in which $\sig{s}$ is piecewise constant. The implementation is based on the block matrix formulation \eqref{eq-sndg-mat} of the $S_N$-DG method: \begin{equation} \mathbf{L} = \left[\begin{matrix} \mathbf{L}_{00}&\mathbf{L}_{01}\\ \mathbf{L}_{10}& \mathbf{L}_{11}\end{matrix}\right], \quad \mathbf{S} = \left[\begin{matrix}\mathbf{M}_0\mathbf{P}_0&\\&\mathbf{M}_1\mathbf{P}_1\end{matrix}\right],\quad \text{and} \quad \mathbf{Q} = \left[\begin{matrix}\mathbf{Q}_0\\\mathbf{Q}_1\end{matrix}\right]. \end{equation} Here $\mathbf{L}_{rr'}$ are matrix blocks associated to $L(u,v)$ with $u\in \mathcal{V}_{h,r'}, v\in \mathcal{V}_{h,r}$.
The sizes of $\mathbf{L}_{00}$, $\mathbf{L}_{01}$, $\mathbf{L}_{10}$ and $\mathbf{L}_{11}$ are $(n_\Omega\cdot n_x)\times (n_\Omega\cdot n_x)$, $(n_\Omega\cdot n_x)\times (n_\Omega\cdot n_x\cdot (n_P-1))$, $(n_\Omega\cdot n_x\cdot (n_P-1))\times (n_\Omega\cdot n_x)$ and $(n_\Omega\cdot n_x\cdot (n_P-1))\times (n_\Omega\cdot n_x\cdot (n_P-1))$, respectively. The block $\mathbf{S}_{rr} = \mathbf{M}_{r} \mathbf{P}_{r}$ is associated to $S(u,v)$ with $u, v\in \mathcal{V}_{h,r}$; it has the same size as $\mathbf{L}_{rr}$. The matrices $\mathbf{M}_0$ and $\mathbf{P}_0$ have dimensions $(n_\Omega\cdot n_x)\times n_x$ and $n_x \times(n_\Omega\cdot n_x)$, respectively; the matrices $\mathbf{M}_1$ and $\mathbf{P}_1$ have dimensions $(n_\Omega\cdot n_x\cdot(n_P-1))\times (n_x\cdot(n_P-1))$ and $(n_x\cdot (n_P-1)) \times(n_\Omega\cdot n_x\cdot (n_P-1))$, respectively. The vector block $\mathbf{Q}_r$ is associated to $Q_{\alpha}(v)$ for $v\in \mathcal{V}_{h,r}$, with $\mathbf{Q}_0$ an $n_\Omega\cdot n_x$ vector and $\mathbf{Q}_1$ an $n_\Omega\cdot n_x\cdot (n_P-1)$ vector. Recall from \Cref{sc-basis} that for each $p$, $\{b^{p,r}\}_{r=0}^{n_P-1}$ forms a basis for $Z_1(K_p)$, and $\xi^{l,p,r}_j = \delta_{lj}b^{p,r}$. We further assume $\{b^{p,r}\}_{r=0}^{n_P-1}$ is an orthogonal set and $\{b^{p,0}\}_{p=1}^{n_x}$ is a set of constant functions on $K_p$. Then $\mathbb{B}_0 = \{\xi^{l,p,0}:l = 1,\dots,n_\Omega, p = 1,\dots,n_x\}$ and $\mathbb{B}_1 = \{\xi^{l,p,r}:l = 1,\dots,n_\Omega, p = 1,\dots,n_x,r = 1,\dots,n_P-1\}$ are sets of basis functions for $\mathcal{V}_{h,0}$ and $\mathcal{V}_{h,1}$, respectively. Let $\mathbb{B}_1^\mathrm{lm} = \{\eta^{p,r}: \eta^{p,r}_j = b^{p,r},j =1,\dots,n_\Omega, p = 1,\dots,n_x, r =1,\dots,n_P-1\}$. Then $\mathbb{B}_1^\mathrm{lm}$ is a basis for $\overline{\mathcal{V}}_{h,1}$. Hence $\mathcal{V}_h^\mathrm{lm} = \mathrm{span} \{\mathbb{B}_0,\mathbb{B}_1^\mathrm{lm}\}$.
The dimension of $\mathcal{V}_h^\mathrm{lm}$ is then $n_\Omega\cdot n_x + n_x\cdot (n_P-1)$. Because $\eta^{p,r}_j = b^{p,r} = \sum_{l=1}^{n_\Omega}\delta_{lj}b^{p,r}= \sum_{l=1}^{n_\Omega} \xi_j^{l,p,r}$, there exists a mapping from $\mathbb{B}_1$ to $\mathbb{B}_1^\mathrm{lm}$ \begin{equation} \eta^{p,r} = \sum_{l=1}^{n_\Omega}\xi^{l,p,r} = \sum_{l',p',r'}\Sigma^{(p,r),(l',p',r')}\xi^{l',p',r'}, \end{equation} where $\mathbf{\Sigma} = (\Sigma^{(p,r),(l',p',r')})$ is an $(n_x\cdot (n_P-1))\times (n_\Omega\cdot n_x \cdot (n_P-1))$ matrix with components $\Sigma^{(p,r),(l',p',r')} = \delta_{pp'}\delta_{rr'}$. The matrix $\mathbf{\Sigma}$ corresponds to a summation operator that maps an angular flux to a scalar flux, while $\mathbf{\Sigma}^T$ copies the scalar flux to each angular direction. Let the solution of the low-memory method be represented by $\mathbf{\Psi} = \left[\mathbf{\Psi}_0, \mathbf{\Sigma}^T\mathbf{\Phi}_1\right]^T$. Using the fact $\mathbf{P}_1\mathbf{\Sigma}^T = \mathbf{I}_{n_x\cdot (n_P-1)}$, one can show $\mathbf{\Psi}$ satisfies the equations \begin{subequations}\label{eq-lmdg-block} \begin{gather} \mathbf{L}_{00}\mathbf{\Psi}_0+\mathbf{L}_{01}\mathbf{\Sigma}^T\mathbf{\Phi}_1 = \mathbf{M}_0\mathbf{P}_0\mathbf{\Psi}_0 + \mathbf{Q}_0,\label{eq-lmdg-block0}\noeqref{eq-lmdg-block0}\\ \mathbf{\Sigma}\mathbf{L}_{10}\mathbf{\Psi}_0+\mathbf{\Sigma}\mathbf{L}_{11}\mathbf{\Sigma}^T\mathbf{\Phi}_1 = \mathbf{\Sigma}\mathbf{M}_1\mathbf{\Phi}_1 + \mathbf{\Sigma}\mathbf{Q}_1.\label{eq-lmdg-block1}\noeqref{eq-lmdg-block1} \end{gather} \end{subequations} As in the original $S_N$-DG method, the system dimension of \eqref{eq-lmdg-block} can be reduced with the following procedure. \ 1.
Solve for $\mathbf{\Phi}_1$ in terms of $\mathbf{\Psi}_0$ through \eqref{eq-lmdg-block1}: \begin{equation}\label{eq-lmdg-psi1} \mathbf{\Phi}_1 = \mathbf{B}_{11}^{-1}\mathbf{\Sigma} \left(-\mathbf{L}_{10} \mathbf{\Psi}_0 + \mathbf{Q}_1\right), \qquad \mathbf{B}_{11} = \mathbf{\Sigma} \mathbf{L}_{11}\mathbf{\Sigma}^T - \mathbf{\Sigma}\mathbf{M}_1. \end{equation} 2. Substitute $\mathbf{\Phi}_1$ from \eqref{eq-lmdg-psi1} into \eqref{eq-lmdg-block0} to obtain a closed equation for $\mathbf{\Psi}_0$: \begin{equation} \begin{aligned}\label{eq-lmdg-psi0} \mathbf{\Psi}_0 - \mathbf{L}_{00}^{-1} \mathbf{M}_0 (\mathbf{P}_0 \mathbf{\Psi}_0) &-\mathbf{L}_{00}^{-1} \mathbf{L}_{01}\mathbf{\Sigma}^T(\mathbf{B}_{11}^{-1} \mathbf{\Sigma}\mathbf{L}_{10} \mathbf{\Psi}_0) = \mathbf{L}_{00}^{-1} (\mathbf{Q}_0- \mathbf{L}_{01}\mathbf{\Sigma}^T\mathbf{B}_{11}^{-1}\mathbf{\Sigma}\mathbf{Q}_1). \end{aligned} \end{equation} 3. Apply $\mathbf{P}_0$ and $\mathbf{\Sigma}\mathbf{L}_{10}$ to \eqref{eq-lmdg-psi0} to obtain a closed system for $\mathbf{X}_0 = \mathbf{P}_0\mathbf{\Psi}_0$ and {$\mathbf{X}_1 = \mathbf{B}_{11}^{-1}\mathbf{\Sigma}\mathbf{L}_{10}\mathbf{\Psi}_0$}: \begin{equation}\label{eq-lmdg-reducemat} \begin{aligned} \mathbf{K} \left[\begin{matrix}\mathbf{X}_0\\\mathbf{X}_{1} \end{matrix}\right]=\left[\begin{matrix} \mathbf{P}_0\\ \mathbf{\Sigma}\mathbf{L}_{10} \end{matrix}\right]\mathbf{L}_{00}^{-1} (\mathbf{Q}_0- \mathbf{L}_{01}\mathbf{\Sigma}^T\mathbf{B}_{11}^{-1}\mathbf{\Sigma}\mathbf{Q}_1), \end{aligned} \end{equation} where \begin{equation} \mathbf{K} = \left[\begin{matrix} \mathbf{I}_{n_x}- \mathbf{P}_0\mathbf{L}_{00}^{-1} \mathbf{M}_0 & -\mathbf{P}_0\mathbf{L}_{00}^{-1}\mathbf{L}_{01}\mathbf{\Sigma}^T\\ - \mathbf{\Sigma}\mathbf{L}_{10}\mathbf{L}_{00}^{-1} \mathbf{M}_0 &\mathbf{B}_{11} -\mathbf{\Sigma}\mathbf{L}_{10}\mathbf{L}_{00}^{-1}\mathbf{L}_{01}\mathbf{\Sigma}^T \\ \end{matrix}\right]. \end{equation} 4. 
Solve for $\mathbf{X}_0$ and $\mathbf{X}_1$ in \eqref{eq-lmdg-reducemat}. Then use \eqref{eq-lmdg-psi0} and \eqref{eq-lmdg-psi1} to obtain $\mathbf{\Psi}$: \begin{subequations} \label{eq-lm-step4} \begin{align} \label{eq-lm-step4-phi} \mathbf{\Psi}_0 &= \mathbf{L}_{00}^{-1} \mathbf{M}_0 \mathbf{X}_0 + \mathbf{L}_{00}^{-1}\mathbf{L}_{01}\mathbf{\Sigma}^T\mathbf{X}_1 + \mathbf{L}_{00}^{-1} (\mathbf{Q}_0- \mathbf{L}_{01}\mathbf{\Sigma}^T\mathbf{B}_{11}^{-1}\mathbf{\Sigma}\mathbf{Q}_1),\\ \label{eq-lm-step4-psi} \mathbf{\Phi}_1 &= \mathbf{B}_{11}^{-1} \mathbf{\Sigma}\left(-\mathbf{L}_{10} \mathbf{\Psi}_0 + \mathbf{Q}_1\right). \end{align} \end{subequations} \ Only Step 4 above is needed to implement the algorithm. If one solves for $\mathbf{\Psi}_0$ directly from \eqref{eq-lmdg-psi0}, then an $(n_\Omega\cdot n_x)\times(n_\Omega\cdot n_x)$ matrix must be inverted. With \eqref{eq-lmdg-reducemat}, by contrast, the matrix dimension is reduced to $(n_x\cdot n_P)\times(n_x\cdot n_P)$. Typically $n_P$ is much smaller than $n_\Omega$. We state the following theorems on the invertibility of $\mathbf{B}_{11}$ and $\mathbf{K}$, whose proofs can be found in \Cref{ap-B11} and \Cref{ap-K}, respectively. \begin{THM}\label{lem-B11} $\mathbf{B}_{11}$ is invertible. Furthermore, if the quadrature rule is central symmetric, then $\mathbf{B}_{11}$ is symmetric positive definite. Here, central symmetry means that $\Omega_j$ and $-\Omega_j$ are both selected in the quadrature rule and their weights are equal, i.e., $w_j = w_{-j}$. \end{THM} \begin{THM}\label{thm-K} $\mathbf{K}$ is invertible. \end{THM} \begin{REM} Typically, the linear system in this context is solved using a Krylov method, in which one needs to evaluate the product of $\mathbf{K}$ with a vector in each iteration. We can use the following formula to avoid repeated evaluations when applying $\mathbf{K}$.
\begin{equation}\label{eq-nonrec-block} \mathbf{K} \left[\begin{matrix}\mathbf{X}_0\\\mathbf{X}_{1} \end{matrix}\right]= \left[\begin{matrix} \mathbf{I}_{n_x} & \\ &\mathbf{B}_{11}\\ \end{matrix}\right] \left[\begin{matrix} \mathbf{X}_0 \\ \mathbf{X}_1 \end{matrix}\right] - \left[\begin{matrix} \mathbf{P}_0 \\ \mathbf{\Sigma}\mathbf{L}_{10}\\ \end{matrix}\right]\mathbf{L}_{00}^{-1} \left(\mathbf{M}_0\mathbf{X}_0+\mathbf{L}_{01}\mathbf{\Sigma}^T\mathbf{X}_1\right). \end{equation} \end{REM} \begin{REM} As demonstrated in \cite{heningburg2019hybrid}, the inversion of the block $\mathbf{L}_{00}$ in \eqref{eq-lmdg-reducemat}, rather than the full matrix $\mathbf{L}$ in \eqref{eq-sndg-phi}, results in significant savings in terms of floating-point operations (and hence time-to-solution). These savings will be partially offset by the need to invert the matrix $\mathbf{B}_{11}$ in \eqref{eq-lmdg-psi1}. However, since the overall effect on time-to-solution depends heavily on the details of implementation, we do not investigate this aspect of the low-memory method in the numerical results, but instead leave such an investigation to future work. \end{REM} \subsection{Reconstructed low-memory scheme}\label{sc-rlmdg} Because the low-memory scheme couples the angular components of $\mathcal{V}_{h,1}$, it is only first-order accurate for fixed $\varepsilon > 0$. To recover second-order accuracy (formally), we introduce a spatial reconstruction procedure to approximate the anisotropic parts of $\mathcal{V}_{h,1}$. \subsubsection{Numerical scheme}\label{sc-rec} We denote by $\Pi_i$ the orthogonal projection from $\mathcal{V}_{h}$ to $\mathcal{V}_{h,i}$, $i = 0,1$. The only information the low-memory space $\mathcal{V}_h^{\mathrm{lm}}$ retains from $v \in \mathcal{V}_{h,1}$ is $\overline{\Pi_1(v)}$; the information contained in $\Pi_1 (v) - \overline{\Pi_1 (v)}$ is missing.
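To make this splitting concrete, the following toy sketch (the per-cell ``average plus slope'' data layout is our own, chosen only for illustration) separates a discrete angular flux into the part retained by $\mathcal{V}_h^{\mathrm{lm}}$, namely the per-angle cell averages together with the quadrature-averaged slopes, and the angle-dependent slope residual that is discarded:

```python
def lowmem_split(v, w):
    # v[j][p] = (avg, slope): coefficients of angle j on cell p (toy layout,
    # assumed for illustration). The low-memory space keeps every per-angle
    # cell average (the V_{h,0} part) and only the angle-averaged slope
    # (the isotropic part of V_{h,1}); the remaining angle-dependent slope
    # residual is the information the low-memory space discards.
    n_ang, n_cell = len(v), len(v[0])
    wsum = sum(w)
    iso_slope = [sum(w[j] * v[j][p][1] for j in range(n_ang)) / wsum
                 for p in range(n_cell)]
    kept = {
        "cell_avgs": [[v[j][p][0] for p in range(n_cell)] for j in range(n_ang)],
        "iso_slopes": iso_slope,
    }
    lost = [[v[j][p][1] - iso_slope[p] for p in range(n_cell)]
            for j in range(n_ang)]
    return kept, lost
```

By construction, the discarded residual has zero weighted angular mean on every cell; this is precisely the information that the reconstruction operator introduced next tries to rebuild.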
We therefore introduce an operator $R_\alpha^*v = R_\alpha \Pi_0(v) - \overline{R_\alpha \Pi_0(v)}$ to rebuild the difference, where $R_\alpha\Pi_0$ reconstructs slopes from the piecewise constant part of $v$ using the boundary condition $\alpha$. Then the reconstructed scheme is written as: \textit{find $\psi_h \in \mathcal{V}_h^\mathrm{lm}$ such that \begin{equation}\label{eq-rlmdg-vari} B(\psi_h + R_\alpha^*\psi_h, v) = Q_{\alpha}(v),\qquad \forall v\in \mathcal{V}_h^\mathrm{lm}. \end{equation}} The reconstruction $\psi_h + R_\alpha^*\psi_h$ then gives a more accurate approximation to $\Psi$. Equivalently, by assembling all boundary terms into the right-hand side, the reconstructed scheme can also be formulated as a Petrov--Galerkin method with trial function space \begin{equation}\mathcal{V}_h^{\mathrm{rlm}} = \{v + R_0^*v: v\in \mathcal{V}_{h}^{\mathrm{lm}}\}.\end{equation} Since $R_0^* 0 = 0$, $\mathcal{V}_h^{\mathrm{rlm}}$ is in fact a linear space. With this formulation, the reconstructed method solves the following problem: \textit{find $\psi_{h,R_0} \in \mathcal{V}_{h}^{\mathrm{rlm}}$, such that \begin{equation}\label{eq-rlmdg-scheme} B(\psi_{h,R_0},v) = \widetilde{Q}_{\alpha}(v), \qquad \forall v \in \mathcal{V}_h^\mathrm{lm}. \end{equation}} The use of different trial and test function spaces makes the analysis of this scheme less transparent. Currently, we have no theoretical guarantee of unisolvency or the numerical diffusion limit. We observe, however, that the method recovers second-order convergence for several different test problems across a wide range of $\varepsilon$. \ In this paper, we apply the reconstruction suggested in \cite{heningburg2019hybrid} to recover slopes for simplicity, although in general other upwind approaches can also be used\footnote{For example, one can apply upwind reconstruction with wider stencils to improve the accuracy at an increased computational cost.
Furthermore, the reconstruction can also be different at different spatial cells along different collocation angles, which may lead to an adaptive version of the reconstructed method. We postpone the discussion on numerical efficiency with different reconstruction methods to future work.}. For illustration, we consider a uniform Cartesian mesh on $[0,1]\times [0,1]\times [0,1]$. The grid points in each direction are labeled from $\frac{1}{2}$ to $n+\frac{1}{2}$. We denote by $u^0_{i,j,k}$ the cell average of $u$ on the cell $K_{i,j,k}$ centered at $(x_i, y_j, z_k)$. Along each direction $\Omega = (\Omega_x,\Omega_y,\Omega_z)$, \begin{equation}(R_\alpha\Pi_0 (u))|_{K_{i,j,k}} = (\delta_x^{s_x}u^0_{i,j,k}) (x - x_i) + (\delta_y^{s_y}u^0_{i,j,k}) (y - y_j) + (\delta_z^{s_z}u^0_{i,j,k}) (z - z_k),\end{equation} where $s_x = - \text{sign} (\Omega_x)$, \begin{align} \delta^-_x u^0_{i,j,k} &= \begin{cases} \dfrac{u^0_{i,j,k} - u^{0}_{i-1,j,k}}{h}, & 2\leq i\leq n, \\ \dfrac{u^{0}_{1,j,k}-\alpha(\Omega,(0,y_j,z_k))}{h/2}, & i = 1, \end{cases} \\ \delta^+_x u^0_{i,j,k} &= \begin{cases} \dfrac{u^{0}_{i+1,j,k} - u^0_{i,j,k}}{h}, & 1\leq i\leq n-1, \\ \dfrac{\alpha(\Omega,(1,y_j,z_k))-u^{0}_{n,j,k}}{h/2}, & i = n. \end{cases} \end{align} $\delta_y^\pm$ and $\delta_z^\pm$ are defined similarly. For numerical results in the next section, we only reconstruct the $P^1$ slopes to recover second-order accuracy; $Q^1$ type reconstruction gives similar results in terms of the convergence rate. \subsubsection{Implementation} Let $\mathbb{B}_0^\mathrm{rlm}= \{\xi^{l,p,0}+R_0^*\xi^{l,p,0}:l=1,\dots,n_\Omega, p =1,\dots,n_x\}$; then $\mathcal{V}_h^{\mathrm{rlm}} = \mathrm{span} \{\mathbb{B}_0^\mathrm{rlm},\mathbb{B}_1^\mathrm{lm}\}$. As in the first-order method, the total number of degrees of freedom is $n_\Omega\cdot n_x+ n_x\cdot(n_P-1)$. The boundary terms are assembled into a vector $\mathbf{r}_\alpha$.
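In one spatial dimension the same recipe reads as follows. This is a sketch with hypothetical function and argument names: $s = -\operatorname{sign}(\Omega_x)$ selects the upwind neighbor, so backward differences are used when the flow moves to the right, and boundary cells difference against the inflow value, which sits half a cell from the cell center:

```python
def reconstruct_slopes(avgs, h, mu, inflow_left, inflow_right):
    # 1D analogue of R_alpha Pi_0: rebuild slopes from cell averages `avgs`
    # using the upwind neighbour; boundary cells difference against the
    # inflow value located a half cell away from the cell centre.
    # (Function and argument names are ours, for illustration only.)
    n = len(avgs)
    slopes = [0.0] * n
    if mu > 0:  # flow to the right: upwind neighbour is on the left
        slopes[0] = (avgs[0] - inflow_left) / (h / 2.0)
        for i in range(1, n):
            slopes[i] = (avgs[i] - avgs[i - 1]) / h
    else:       # flow to the left: upwind neighbour is on the right
        slopes[n - 1] = (inflow_right - avgs[n - 1]) / (h / 2.0)
        for i in range(n - 1):
            slopes[i] = (avgs[i + 1] - avgs[i]) / h
    return slopes
```

On data sampled from a linear profile the reconstruction is exact, which is what makes the reconstructed scheme formally second order.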
Here we use $\mathbf{\Psi} = \left[\mathbf{\Psi}_0, \mathbf{\Psi}_1 \right]^T$ to represent the solution of the reconstructed method, where $\mathbf{\Psi}_1 = \mathbf{\Sigma}^T\mathbf{\Phi}_1+(\mathbf{I}_{n_\Omega\cdot n_x\cdot(n_P-1)}-\mathbf{\Sigma}^T\mathbf{P}_1)(\mathbf{R} \mathbf{\Psi}_0 +\mathbf{r}_\alpha)$. Note $\mathbf{P}_1\mathbf{\Sigma}^T = \mathbf{I}_{n_x\cdot (n_P-1)}$, which implies $\mathbf{P}_1\mathbf{\Psi}_1 = \mathbf{\Phi}_1$. The block matrix form can then be written as follows. \begin{subequations}\label{eq-rlmdg-block} \begin{eqnarray} \mathbf{L}_{00}\mathbf{\Psi}_0+\mathbf{L}_{01}\mathbf{\Psi}_1&=& \mathbf{M}_0\mathbf{P}_0\mathbf{\Psi}_0 + \mathbf{Q}_0,\\ \mathbf{\Sigma}\mathbf{L}_{10}\mathbf{\Psi}_0+\mathbf{\Sigma}\mathbf{L}_{11} \mathbf{\Psi}_1 &=& \mathbf{\Sigma}\mathbf{M}_1\mathbf{\Phi}_1 + \mathbf{\Sigma}\mathbf{Q}_1. \end{eqnarray} \end{subequations} With \begin{eqnarray} \widetilde{\mathbf{L}}_{00} &=& \mathbf{L}_{00} + \mathbf{L}_{01} \mathbf{R},\\ \widetilde{\mathbf{L}}_{10} &=& \mathbf{L}_{10} + \mathbf{L}_{11}(\mathbf{I}_{n_\Omega\cdot n_x\cdot(n_P-1)}-\mathbf{\Sigma}^T\mathbf{P}_1)\mathbf{R},\\ \widetilde{\mathbf{Q}}_0 &=& \mathbf{Q}_0 - \mathbf{L}_{01}(\mathbf{I}_{n_\Omega\cdot n_x\cdot(n_P-1)}-\mathbf{\Sigma}^T\mathbf{P}_1)\mathbf{r}_\alpha,\\ \widetilde{\mathbf{Q}}_1 &=& \mathbf{Q}_1 - \mathbf{L}_{11}(\mathbf{I}_{n_\Omega\cdot n_x\cdot(n_P-1)}-\mathbf{\Sigma}^T\mathbf{P}_1)\mathbf{r}_\alpha, \end{eqnarray} one can rewrite \eqref{eq-rlmdg-block} as \begin{subequations}\label{eq-rlmdg-block-tilde} \begin{eqnarray} \widetilde{\mathbf{L}}_{00}\mathbf{\Psi}_0+\mathbf{L}_{01}\mathbf{\Sigma}^T\left(\mathbf{\Phi}_1 -\mathbf{P}_1\mathbf{R} \mathbf{\Psi}_0 \right)&=& \mathbf{M}_0\mathbf{P}_0\mathbf{\Psi}_0 + \widetilde{\mathbf{Q}}_0,\label{eq-rlmdg-block0}\\ \mathbf{\Sigma}\widetilde{\mathbf{L}}_{10}\mathbf{\Psi}_0+\mathbf{\Sigma}\mathbf{L}_{11}\mathbf{\Sigma}^T\mathbf{\Phi}_1 &=& \mathbf{\Sigma}\mathbf{M}_1\mathbf{\Phi}_1 +
\mathbf{\Sigma}\widetilde{\mathbf{Q}}_1.\label{eq-rlmdg-block1} \noeqref{eq-rlmdg-block0,eq-rlmdg-block1} \end{eqnarray} \end{subequations} We follow the procedure as before to reduce the system dimension. \ 1. Solve for $\mathbf{\Phi}_1$ in terms of $\mathbf{\Psi}_0$ through \eqref{eq-rlmdg-block1}: \begin{equation}\label{eq-rlmdg-psi1} \mathbf{\Phi}_1 = \mathbf{B}_{11}^{-1} \mathbf{\Sigma}\left(-\widetilde{\mathbf{L}}_{10} \mathbf{\Psi}_0 + \widetilde{\mathbf{Q}}_1\right), \qquad \mathbf{B}_{11} = \mathbf{\Sigma} {\mathbf{L}}_{11}\mathbf{\Sigma}^T - \mathbf{\Sigma}\mathbf{M}_1. \end{equation} 2. Substitute $\mathbf{\Phi}_1$ from \eqref{eq-rlmdg-psi1} into \eqref{eq-rlmdg-block0} to obtain a closed equation for $\mathbf{\Psi}_0$: \begin{equation}\label{eq-rlmdg-psi0} \begin{aligned} \mathbf{\Psi}_0 - \widetilde{\mathbf{L}}_{00}^{-1} \mathbf{M}_0 (\mathbf{P}_0 \mathbf{\Psi}_0) &-\widetilde{\mathbf{L}}_{00}^{-1} {\mathbf{L}}_{01}\mathbf{\Sigma}^T(\mathbf{B}_{11}^{-1} \mathbf{\Sigma}\widetilde{\mathbf{L}}_{10} +\mathbf{P}_1 \mathbf{R})\mathbf{\Psi}_0\\ &= \widetilde{\mathbf{L}}_{00}^{-1} (\widetilde{\mathbf{Q}}_0- {\mathbf{L}}_{01}\mathbf{\Sigma}^T\mathbf{B}_{11}^{-1}\mathbf{\Sigma}\widetilde{\mathbf{Q}}_1). \end{aligned} \end{equation} 3. 
Apply $\mathbf{P}_0$ and $\mathbf{\Sigma}\widetilde{\mathbf{L}}_{10}$ to \eqref{eq-rlmdg-psi0} to obtain a closed system for $\mathbf{X}_0 = \mathbf{P}_0\mathbf{\Psi}_0$ and $\mathbf{X}_1 = (\mathbf{B}_{11}^{-1}\mathbf{\Sigma}\widetilde{\mathbf{L}}_{10}+\mathbf{P}_1\mathbf{R})\mathbf{\Psi}_0$: \begin{equation}\label{eq-rlmdg-reducemat} \begin{aligned} \widetilde{\mathbf{K}} \left[\begin{matrix}\mathbf{X}_0\\\mathbf{X}_{1} \end{matrix}\right] &=\left[\begin{matrix} \mathbf{P}_0\\ \mathbf{\Sigma}\mathbf{L}_{10} \end{matrix}\right]\widetilde{\mathbf{L}}_{00}^{-1} (\widetilde{\mathbf{Q}}_0- {\mathbf{L}}_{01}\mathbf{\Sigma}^T\mathbf{B}_{11}^{-1}\mathbf{\Sigma}\widetilde{\mathbf{Q}}_1), \end{aligned} \end{equation} where \begin{equation} \widetilde{\mathbf{K}} = \left[\begin{matrix} \mathbf{I}_{n_x}- \mathbf{P}_0\widetilde{\mathbf{L}}_{00}^{-1} \mathbf{M}_0 & -\mathbf{P}_0\widetilde{\mathbf{L}}_{00}^{-1}{\mathbf{L}}_{01}\mathbf{\Sigma}^T\\ - \mathbf{\Sigma}\widetilde{\mathbf{L}}_{10}\widetilde{\mathbf{L}}_{00}^{-1} \mathbf{M}_0 &\mathbf{B}_{11} -\mathbf{\Sigma}\widetilde{\mathbf{L}}_{10}\widetilde{\mathbf{L}}_{00}^{-1}{\mathbf{L}}_{01}\mathbf{\Sigma}^T \\ \end{matrix}\right]. \end{equation} 4. Solve for $\mathbf{X}_0$ and $\mathbf{X}_1$ in \eqref{eq-rlmdg-reducemat}. Then use \eqref{eq-rlmdg-psi0} and \eqref{eq-rlmdg-psi1} to recover $\mathbf{\Psi}$: \begin{eqnarray} \mathbf{\Psi}_0 &=& \widetilde{\mathbf{L}}_{00}^{-1} \mathbf{M}_0 \mathbf{X}_0 + \widetilde{\mathbf{L}}_{00}^{-1}\mathbf{L}_{01}\mathbf{\Sigma}^T\mathbf{X}_1 + \widetilde{\mathbf{L}}_{00}^{-1} (\widetilde{\mathbf{Q}}_0- {\mathbf{L}}_{01}\mathbf{\Sigma}^T\mathbf{B}_{11}^{-1}\mathbf{\Sigma}\widetilde{\mathbf{Q}}_1),\\ \mathbf{\Phi}_1 &=& \mathbf{B}_{11}^{-1}\mathbf{\Sigma} \left(-\widetilde{\mathbf{L}}_{10} \mathbf{\Psi}_0 + \widetilde{\mathbf{Q}}_1\right). \end{eqnarray} \ As in the first-order method, only Step 4 is used in the implementation.
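The repeated applications of $\widetilde{\mathbf{L}}_{00}^{-1}$ in Step 4 are realized by transport sweeps. The following minimal one-dimensional sketch (a first-order upwind discretization of $\mu\,\partial_x\psi + \sigma\psi = f$, our simplification of the actual block) shows why a single pass suffices: with upwind fluxes, the discrete operator is triangular in the flow direction.

```python
def sweep_1d(mu, sigma, f, h, inflow):
    # Upwind solve of mu * dpsi/dx + sigma * psi = f on a uniform mesh.
    # For mu > 0 the upwind flux comes from the left, so the discrete
    # operator is lower triangular and forward substitution suffices;
    # for mu < 0 the sweep runs right to left. (Sketch only.)
    n = len(f)
    psi = [0.0] * n
    up = inflow
    order = range(n) if mu > 0 else range(n - 1, -1, -1)
    for i in order:
        psi[i] = (f[i] + abs(mu) * up / h) / (sigma + abs(mu) / h)
        up = psi[i]
    return psi
```

For each ordinate $\Omega_j$, one such pass replaces a direct factorization of the corresponding diagonal block.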
Since only upwind information is used, $\widetilde{\mathbf{L}}_{00}$ is invertible and can be inverted with sweeps along each angular direction. Note $\mathbf{B}_{11}$ is invertible, as pointed out in \Cref{ap-B11}. One can follow the argument in \Cref{ap-K} to show that $\widetilde{\mathbf{K}}$ is invertible if the scheme \eqref{eq-rlmdg-vari} is unisolvent. \section{Numerical tests}\label{sc-num} \setcounter{equation}{0} \setcounter{figure}{0} \setcounter{table}{0} In this section, we present numerical tests to examine the performance of the methods. \subsection{One dimensional tests (slab geometry)} In slab geometries, the radiative transport equation takes the form (see, e.g., \cite[Page 28]{Lewis-Miller-1984}) \begin{align}\label{eq-1d} \mu \partial_x\psi (\mu,x) + \left(\frac{\sig{s}}{\varepsilon} +\varepsilon \sig{a}\right) \psi(\mu,x) &= \frac{\sig{s}}{2\varepsilon}\int_{-1}^1\psi(\mu',x)d\mu' + \varepsilon q(x),\\ \psi(\mu,x_a) = \psi_l(\mu), \text{ if }\mu\geq0, &\quad \text{and} \quad \psi(\mu,x_b) = \psi_r(\mu), \text{ if }\mu<0, \end{align} where $x\in [x_a,x_b]$ and $\mu \in [-1,1]$. We will compare the $S_N$-$P^0$-DG scheme, the $S_N$-$P^1$-DG scheme, the low-memory scheme (LMDG), and the reconstructed scheme (RLMDG). Numerical errors are evaluated in the $L^1$ norm. \begin{example}\label{examp-1dfab} We first examine convergence rates of the methods using fabricated solutions. Let $\varepsilon = 1$, $\sig{s} = 1$, $\sig{a} = 1$ and $D = [0,1]$. Given an exact solution $\psi$, we compute the source term $q$ and the inflow boundary conditions $\psi_l$ and $\psi_r$ accordingly. With this approach, it may happen that $q$ depends on $\mu$. We use the $32$-point Gauss quadrature on $[-1,1]$ for the $S_N$ discretization. We consider the cases $\psi = \cos x$ and $\psi = \cos(x+\mu)$. The results are documented in \Cref{tab-1d-aniso}. When $\psi$ is isotropic, the low-memory scheme exhibits second-order convergence.
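The ``order'' columns in the tables are observed rates, computed from errors on successive meshes as $\log_2(e_h/e_{h/2})$. A quick sketch, applied to the isotropic-case LMDG errors quoted in \Cref{tab-1d-aniso}:

```python
import math

def observed_orders(errors):
    # Observed convergence rates log2(e_h / e_{h/2}) between successive
    # refinements, where each refinement halves the mesh size h.
    return [math.log2(errors[k] / errors[k + 1]) for k in range(len(errors) - 1)]

# LMDG errors for psi = cos(x) at h = 1/20, 1/40, 1/80, 1/160 (from the table):
rates = observed_orders([2.24e-5, 5.62e-6, 1.41e-6, 3.52e-7])
```

Each entry comes out close to $2$, matching the second-order column of the table.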
For the anisotropic case, the LMDG scheme degenerates to first-order accuracy, while the RLMDG scheme remains second-order accurate. \begin{table}[h!] \footnotesize \centering \begin{tabular}{c|c|c|c|c|c|c|c|c|c} \hline &&\multicolumn{2}{c|}{$P^0$-DG}&\multicolumn{2}{c|}{$P^1$-DG}&\multicolumn{2}{c|}{LMDG }&\multicolumn{2}{c}{RLMDG}\\ \hline $\psi$&$h$&error&order&error&order&error&order&error&order\\ \hline \multirow{2}{*}{$\cos x$} &$1/20$ &3.79e-3& - & 2.48e-5& -& 2.24e-5& -& 9.14e-5& - \\ &$1/40$ &1.91e-3& 0.99 & 6.27e-6& 1.98& 5.62e-6& 2.00& 2.31e-5& 1.99\\ &$1/80$ &9.55e-4& 1.00 & 1.58e-6& 1.99& 1.41e-6& 2.00& 5.80e-6& 1.99\\ &$1/160$&4.78e-4& 1.00 & 3.96e-7& 2.00& 3.52e-7& 2.00& 1.45e-6& 2.00\\ \hline \multirow{2}{*}{$\cos (x+\mu)$} &$1/20$& 4.70e-3& -& 2.11e-5& -& 3.20e-3& -& 7.74e-5& - \\ &$1/40$& 2.36e-3& 0.99& 5.36e-6&1.98& 1.60e-3& 1.00& 1.95e-5& 1.99\\ &$1/80$& 1.18e-3& 1.00& 1.35e-6&1.99& 8.01e-4& 1.00& 4.89e-6& 1.99\\ &$1/160$& 5.91e-4& 1.00& 3.39e-7&1.99& 4.01e-4& 1.00& 1.23e-6& 2.00\\ \hline \end{tabular} \caption{Accuracy test for \Cref{examp-1dfab}.}\label{tab-1d-aniso} \end{table} \begin{figure} \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{1derr-nx-iso-eps-converted-to.pdf} \caption{Number of mesh cells.} \end{subfigure} ~ \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{1derr-mc-iso-eps-converted-to.pdf} \caption{Degrees of freedom.} \end{subfigure} ~ \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{1derr-sd-iso-eps-converted-to.pdf} \caption{System dimension.} \end{subfigure} \\ \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{1derr-nx-aniso-eps-converted-to.pdf} \caption{Number of mesh cells.} \end{subfigure} ~ \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{1derr-mc-aniso-eps-converted-to.pdf} \caption{Degrees of freedom.} \end{subfigure} ~ \begin{subfigure}[b]{0.3\textwidth} 
\includegraphics[width=\textwidth]{1derr-sd-aniso-eps-converted-to.pdf} \caption{System dimension.} \end{subfigure} \caption{Numerical efficiency in \Cref{examp-1dfab}. The first row is for the isotropic test $\psi = \cos(x)$ and the second row is for the anisotropic test $\psi = \cos(x+\mu)$.}\label{fig-efficiency} \end{figure} To better understand numerical efficiency, we analyze the results in \Cref{tab-1d-aniso} by plotting the $L^1$ error versus the number of mesh cells, the total number of degrees of freedom of the solution (memory cost), and the number of equations in the reduced linear system (either \eqref{eq-sndg-phi}, \eqref{eq-lmdg-reducemat}, or \eqref{eq-rlmdg-reducemat}). When the solution is isotropic, the LMDG method uses a similar number of mesh cells as the $P^1$-DG method to reach the same accuracy. As a result, a reduced linear system of similar size is solved, but the number of degrees of freedom is smaller. For the anisotropic case, the LMDG method is first-order accurate. Compared with the $P^0$-DG method, it is able to reach similar accuracy on a coarser mesh. The reduced linear system is larger, but the number of degrees of freedom is indeed smaller. The RLMDG method seems to be less accurate than the $P^1$-DG method, and a finer mesh has to be used to achieve the same accuracy. As a result, the number of solution degrees of freedom is similar to that of the $P^1$-DG method, but the reduced system is even larger. However, we point out that a more accurate reconstruction may remedy this. For example, instead of using two cells, one can recover slopes in the interior region with a three-cell upwind reconstruction (which we call RLMDG$^*$). This new method is still second-order accurate, but its error is comparable to that of the $P^1$-DG method and significantly smaller than that of the current RLMDG method. Efficiency results for RLMDG$^*$ are depicted by green lines in \Cref{fig-efficiency} (they overlap with the red lines in (d) and (f)).
These results show that RLMDG$^*$ yields reduced systems of similar size to those of the $P^1$-DG method, but it uses less overall memory. \end{example} \begin{example}\label{examp-1ddifflim} In the second numerical test, we examine the convergence rate and asymptotic preserving property of the methods. Let $\sig{s} = \sig{a} = 1$ in \eqref{eq-1d}. The computational domain is set as $D =[0,{\pi}]$. We take $\psi_l = \psi_r = 0$ and $q = \frac{4}{3}\sin(x)$. The $32$-point Gauss quadrature is used for the $S_N$ discretization. Numerical errors for $\varepsilon = 10^{-5}$ and $\varepsilon = 1$ are listed in \Cref{tab-1d-eps10-5}. The reference solutions are computed with the $P^1$-DG scheme on a mesh with $1280$ cells. One can see from \Cref{tab-1d-eps10-5} that the LMDG scheme exhibits a second-order convergence rate at $\varepsilon = 10^{-5}$, when the solution is almost isotropic, while it converges at a first-order rate when $\varepsilon = 1$. The RLMDG method is second-order accurate in both cases. Solution profiles of the different schemes on a coarse uniform mesh with $h = \pi/8$ are shown in \Cref{fig-1ddifflim}. When $\varepsilon = 10^{-5}$, both the LMDG and RLMDG methods preserve the correct diffusion limit, unlike the $P^0$-DG method. When $\varepsilon = 1$, all schemes give valid approximations. \begin{table}[h!]
\footnotesize \centering \begin{tabular}{c|c|c|c|c|c|c|c|c|c} \hline &&\multicolumn{2}{c|}{$P^0$-DG}&\multicolumn{2}{c|}{$P^1$-DG}&\multicolumn{2}{c|}{LMDG} &\multicolumn{2}{c}{RLMDG}\\ \hline $\varepsilon$&$h$&error&order&error&order&error&order&error&order\\ \hline \multirow{2}{*}{$10^{-5}$} &$1/20$& 2.00e-0& - &1.89e-3&-& 1.89e-3&-&7.52e-3&-\\ &$1/40$& 2.00e-0&0.00&4.70e-4&2.01&4.71e-4&2.00&1.88e-3&2.00\\ &$1/80$& 2.00e-0&0.00&1.17e-4&2.01&1.16e-4&2.03&4.70e-4&2.00\\ &$1/160$& 1.99e-0&0.00&2.91e-5&2.00&3.06e-5&1.92&1.17e-4&2.00\\ \hline \multirow{2}{*}{$1$} &$1/20$& 1.06e-1&-& 2.91e-3&- &3.08e-2&-& 9.55e-3& -\\ &$1/40$& 5.38e-2&0.98& 7.72e-4&1.92&1.59e-2&0.95&2.60e-3&1.88\\ &$1/80$& 2.71e-2&0.99& 1.99e-4&1.95&8.09e-3&0.98&6.90e-4&1.91\\ &$1/160$& 1.35e-2&1.00& 5.03e-5&1.99&4.08e-3&0.99&1.80e-4&1.94\\ \hline \end{tabular} \caption{Accuracy test for \Cref{examp-1ddifflim}.}\label{tab-1d-eps10-5} \end{table} \begin{figure}[h!] \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{1deps1e-5-eps-converted-to.pdf} \caption{$\varepsilon = 10^{-5}$.} \end{subfigure} ~ \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{1deps1-eps-converted-to.pdf} \caption{$\varepsilon = 1$.} \end{subfigure} \caption{Profiles of numerical scalar fluxes in \Cref{examp-1ddifflim}.}\label{fig-1ddifflim} \end{figure} \end{example} \begin{example}\label{examp-guermond} We then consider a test from \cite{ragusa2012robust} with discontinuous cross-sections. The problem is defined on $[0,1]$ and is purely scattering, i.e., $\sig{a}\equiv 0$. The cross-section is $\sig{s} = \sig{s,1}=100$ on the left part of the domain $[0,0.5]$, and $\sig{s} = \sig{s,2} = 100$, $1000$, or $10000$ on the right part $[0.5,1]$. The source term is constant, $q = 0.01$. In the numerical test, we set the mesh size to $h =0.1$ and $h = 0.02$; solutions are depicted in \Cref{fig-1dguermond-1} and \Cref{fig-1dguermond-2}, respectively.
As one can see, unlike the $P^0$-DG scheme, both the LMDG and RLMDG schemes provide correct solution profiles. Since the problem is diffusive, the LMDG scheme gives accurate approximations that are almost indistinguishable from the $P^1$-DG solutions. The reconstructed scheme has difficulty resolving the kink at $x = 0.5$, likely because the reconstruction is no longer accurate at this point. This artifact is indeed alleviated as the mesh is refined, as can be seen by comparing \Cref{fig-1dguermond-1} and \Cref{fig-1dguermond-2}. \begin{figure}[h!] \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{disccross3-0d1-100-eps-converted-to.pdf} \caption{$\sig{s,2} = 100$.} \end{subfigure} ~ \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{disccross3-0d1-1000-eps-converted-to.pdf} \caption{$\sig{s,2} = 1000$.} \end{subfigure} ~ \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{disccross3-0d1-10000-eps-converted-to.pdf} \caption{$\sig{s,2} = 10000$.} \end{subfigure} \caption{Profiles of numerical scalar fluxes in \Cref{examp-guermond}, $h = 0.1$.}\label{fig-1dguermond-1} \end{figure} \begin{figure}[h!] \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{disccross3-0d02-100-eps-converted-to.pdf} \caption{$\sig{s,2} = 100$.} \end{subfigure} ~ \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{disccross3-0d02-1000-eps-converted-to.pdf} \caption{$\sig{s,2} = 1000$.} \end{subfigure} ~ \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{disccross3-0d02-10000-eps-converted-to.pdf} \caption{$\sig{s,2} = 10000$.} \end{subfigure} \caption{Profiles of numerical scalar fluxes in \Cref{examp-guermond}, $h = 0.02$.}\label{fig-1dguermond-2} \end{figure} \end{example} \begin{example}\label{examp-disccross1} In this numerical test, we solve a test problem from \cite{larsen1989asymptotic} with discontinuous cross-sections.
We take $q = 0$ with the left inflow $\psi_l = 1$ at $x_a = 0$ and $\psi_r = 0$ at $x_b = 11$. Let $\frac{\sig{s}}{\varepsilon} = \left\{\begin{matrix} 0,&0<x<1 \\100,& 1<x<11\end{matrix}\right.$ and $\varepsilon{\sig{a}} = \left\{\begin{matrix} 2,&0<x<1 \\0,& 1<x<11\end{matrix}\right.$. The $16$-point Gauss quadrature rule is used for angular discretization. The spatial mesh is set as $h = \left\{\begin{matrix}0.1,&0<x<1\\1,&1<x<11\\\end{matrix}\right.$. Profiles of the scalar flux obtained with various schemes are depicted in \Cref{fig-disccross1-1}. The reference solutions are obtained with the $P^1$-DG scheme on a refined mesh. The solution of the LMDG scheme is satisfactory. As before, the RLMDG scheme gives an accurate approximation to the scalar flux, except for kinks near the discontinuity. However, this numerical artifact can also be alleviated by suppressing the reconstruction across the discontinuity; see \Cref{fig-disccross1-2}. \begin{figure}[h!] \centering \begin{subfigure}[h]{0.45\textwidth} \includegraphics[width=\textwidth]{disccross1-eps-converted-to.pdf} \caption{Numerical scalar fluxes.}\label{fig-disccross1-1} \end{subfigure} \begin{subfigure}[h]{0.45\textwidth} \includegraphics[width=\textwidth]{disccross1_suppress-eps-converted-to.pdf} \caption{With suppressed reconstruction.}\label{fig-disccross1-2} \end{subfigure} \caption{Profiles of numerical scalar fluxes in \Cref{examp-disccross1}.} \end{figure} \end{example} \begin{example}\label{examp-disccross2} This test is also from \cite{larsen1989asymptotic}, with $D = [0,20]$ and $\psi_l = \psi_r =0$. The cross-sections are $\frac{\sig{s}}{\varepsilon} = \left\{\begin{matrix} 90,&0<x<10 \\100,& 10<x<20\end{matrix}\right.$ and $\varepsilon{\sig{a}} = \left\{\begin{matrix} 10,&0<x<10 \\0,& 10<x<20\end{matrix}\right.$. We solve the problem using the ${16}$-point Gauss quadrature rule and the spatial mesh is uniform with $h=1$.
For this numerical test, the solution varies less strongly among different directions. Both the LMDG and RLMDG schemes give accurate approximations. Solution profiles are given in \Cref{fig-disccross2}. \begin{figure}[h!] \centering \includegraphics[width=0.45\textwidth]{disccross2-eps-converted-to.pdf} \caption{Profiles of numerical scalar fluxes in \Cref{examp-disccross2}.}\label{fig-disccross2} \end{figure} \end{example} \subsection{Two dimensional tests} We consider two-dimensional problems on Cartesian meshes in this section. \begin{example}\label{examp-2daccu} We set $\varepsilon = 1$ and $\sig{s} = \sig{a} =1$ and test the accuracy with the exact solutions $\psi = \sin(x+y)$ and $\psi = (\Omega_x-2\Omega_y)^2\sin(2x+y)$. As can be seen from \Cref{tab-2d-aniso}, for $\psi = \sin(x+y)$, both the LMDG and RLMDG schemes are second-order accurate. For the anisotropic problem with $\psi = (\Omega_x-2\Omega_y)^2\sin(2x+y)$, the RLMDG scheme remains second-order accurate, while the LMDG scheme is only first-order accurate. \begin{table}[h!]
\centering \footnotesize \hskip-1.0cm \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c} \hline \multicolumn{11}{c}{$\psi = \sin(x+y)$}\\ \hline &\multicolumn{2}{c|}{$P^0$-DG}&\multicolumn{2}{c|}{$P^1$-DG}&\multicolumn{2}{c|}{$Q^1$-DG}&\multicolumn{2}{c|}{LMDG}&\multicolumn{2}{c}{RLMDG}\\ \hline $h/\sqrt{2}$&error&order&error&order&error&order&error&order&error&order\\ \hline $1/20$& 2.04e-2&- &1.45e-4&- &1.40e-4&- &1.24e-4&- &4.59e-4&-\\ $1/40$& 1.10e-2&0.89&3.42e-5&2.08&3.53e-5&1.98&3.12e-5&1.99&1.18e-4&1.96\\ $1/80$& 5.77e-3&0.94&8.28e-6&2.04&8.88e-6&1.99&7.82e-6&2.00&2.98e-5&1.98\\ $1/160$&2.96e-3&0.96&2.04e-6&2.02&2.26e-6&2.00&1.96e-6&2.00&7.51e-6&1.99\\ \hline \multicolumn{11}{c}{$\psi = (\Omega_x - 2\Omega_y)^2 \sin(2x + y)$}\\ \hline &\multicolumn{2}{c|}{$P^0$-DG}&\multicolumn{2}{c|}{$P^1$-DG}&\multicolumn{2}{c|}{$Q^1$-DG} &\multicolumn{2}{c|}{LMDG}&\multicolumn{2}{c}{RLMDG}\\ \hline $h/\sqrt{2}$&error&order&error&order&error&order&error&order&error&order\\ \hline $1/20$& 7.84e-2 & & 1.64e-3& -& 1.39e-3& -& 5.04e-2& -& 4.81e-3& -\\ $1/40$& 4.18e-2 &0.91 & 4.12e-4& 2.00& 3.53e-4& 1.98& 2.57e-2& 0.97& 1.21e-3&1.99\\ $1/80$& 2.12e-2 &0.98 & 1.01e-4& 2.03& 8.87e-5& 1.99& 1.30e-2& 0.99& 3.05e-4&1.99\\ $1/160$& 1.07e-2 &0.97 & 2.52e-5& 2.00& 2.22e-5& 2.00& 6.51e-3& 0.99& 7.63e-5&2.00\\ \hline \end{tabular} \caption{2D accuracy test with fabricated solutions.}\label{tab-2d-aniso} \end{table} \end{example} \begin{example}\label{examp-2ddifflim} To examine the asymptotic preserving property, we consider the problem defined on $[-1,1]\times[-1,1]$ with zero inflow boundary conditions. Let $\sig{s} = \sig{a} = 1$. We assume $q = (\frac{\pi^2}{6}+1)\cos(\frac{\pi}{2}x)\cos(\frac{\pi}{2}y)$. The asymptotic solution is $\psi^{(0)}= \cos(\frac{\pi}{2}x)\cos(\frac{\pi}{2}y)$. We test with $\varepsilon= 1,2^{-6},2^{-10},2^{-14}$; the numerical results are given in \Cref{fig-2d}. 
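The ``order'' columns in the accuracy tables above are the usual observed orders on successively halved meshes, computed as $\log_2(e_{2h}/e_{h})$. As a quick sketch (illustrative only), applying this to the RLMDG errors for $\psi=\sin(x+y)$ reported in \Cref{tab-2d-aniso}:

```python
import math

def observed_orders(errors):
    """Observed convergence orders log2(e_{2h}/e_h) for errors
    reported on a sequence of successively halved meshes."""
    return [math.log2(errors[i - 1] / errors[i]) for i in range(1, len(errors))]

# RLMDG errors for psi = sin(x+y) at h/sqrt(2) = 1/20, 1/40, 1/80, 1/160:
errs = [4.59e-4, 1.18e-4, 2.98e-5, 7.51e-6]
print([round(p, 2) for p in observed_orders(errs)])  # -> [1.96, 1.99, 1.99]
```

The computed values reproduce the ``order'' entries of the corresponding table column.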
For the $P^0$-DG and $P^1$-DG schemes, solutions become zero near the diffusion limit, while for the $Q^1$-DG scheme, LMDG scheme and RLMDG scheme, the correct asymptotic profile is maintained. \begin{figure}[h!] \centering \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{P0-2e-0-eps-converted-to.pdf} \caption{$P^0$, $\varepsilon = 1$.} \end{subfigure} ~ \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{P0-2e-6-eps-converted-to.pdf} \caption{$P^0$, $\varepsilon = 2^{-6}$.} \end{subfigure} ~ \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{P0-2e-10-eps-converted-to.pdf} \caption{$P^0$, $\varepsilon = 2^{-10}$.} \end{subfigure} ~ \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{P0-2e-14-eps-converted-to.pdf} \caption{$P^0$, $\varepsilon = 2^{-14}$.} \end{subfigure}\\ \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{P1-2e-0-eps-converted-to.pdf} \caption{$P^1$, $\varepsilon = 1$.} \end{subfigure} ~ \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{P1-2e-6-eps-converted-to.pdf} \caption{$P^1$, $\varepsilon = 2^{-6}$.} \end{subfigure} ~ \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{P1-2e-10-eps-converted-to.pdf} \caption{$P^1$, $\varepsilon = 2^{-10}$.} \end{subfigure} ~ \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{P1-2e-14-eps-converted-to.pdf} \caption{$P^1$, $\varepsilon = 2^{-14}$.} \end{subfigure}\\ \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{Q1-2e-0-eps-converted-to.pdf} \caption{$Q^1$, $\varepsilon = 1$.} \end{subfigure} ~ \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{Q1-2e-6-eps-converted-to.pdf} \caption{$Q^1$, $\varepsilon = 2^{-6}$.} \end{subfigure} ~ \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{Q1-2e-10-eps-converted-to.pdf} \caption{$Q^1$, $\varepsilon = 2^{-10}$.} \end{subfigure} ~ 
\begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{Q1-2e-14-eps-converted-to.pdf} \caption{$Q^1$, $\varepsilon = 2^{-14}$.} \end{subfigure}\\ \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{lmdg-2e-0-eps-converted-to.pdf} \caption{LM, $\varepsilon = 1$.} \end{subfigure} ~ \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{lmdg-2e-6-eps-converted-to.pdf} \caption{LM, $\varepsilon = 2^{-6}$.} \end{subfigure} ~ \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{lmdg-2e-10-eps-converted-to.pdf} \caption{LM, $\varepsilon = 2^{-10}$.} \end{subfigure} ~ \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{lmdg-2e-14-eps-converted-to.pdf} \caption{LM, $\varepsilon = 2^{-14}$.} \end{subfigure}\\ \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{rlmdg-2e-0-eps-converted-to.pdf} \caption{RLM, $\varepsilon = 1$.} \end{subfigure} ~ \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{rlmdg-2e-6-eps-converted-to.pdf} \caption{RLM, $\varepsilon = 2^{-6}$.} \end{subfigure} ~ \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{rlmdg-2e-10-eps-converted-to.pdf} \caption{RLM, $\varepsilon = 2^{-10}$.} \end{subfigure} ~ \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{rlmdg-2e-14-eps-converted-to.pdf} \caption{RLM, $\varepsilon = 2^{-14}$.} \end{subfigure} \caption{Profiles of numerical scalar fluxes in \Cref{examp-2ddifflim}.}\label{fig-2d} \end{figure} \end{example} \section{Conclusions and future work}\label{sc-conclude} \setcounter{equation}{0} \setcounter{figure}{0} \setcounter{table}{0} In this paper, we study a class of low-memory $S_N$-DG methods for the radiative transport equation. In our first method, we use the variational form of the original $S_N$-DG scheme with a smaller finite element space, in which functions have isotropic slopes. 
This method preserves the asymptotic diffusion limit and can still be solved with sweeps. It is first-order accurate in general, but exhibits a second-order convergence rate near the diffusion limit. The second method is a correction of the first method with reconstructed slopes, which also preserves the diffusion limit and is (numerically) second-order accurate in general settings. A summary of the different methods and their properties can be found in \Cref{tab-compare}. Future work will focus on improving the efficiency of the low-memory methods. Possible directions include: (i) further reducing degrees of freedom by enriching the piecewise constant space only with continuous linear elements; (ii) developing preconditioners for the linear systems; (iii) comparing the numerical efficiency of the methods with different reconstruction approaches, including adaptivity. \begin{table}[h!] \footnotesize \centering \begin{tabular}{c|c|c|c|c|c|c} \hline \multicolumn{2}{c|}{}&$P^0$-DG&$P^1$-DG&$Q^1$-DG&LMDG&RLMDG\\ \hline \multicolumn{2}{c|}{Unisolvency when $\sig{a}\geq \delta_{\mathrm{a}} >0$} &\multicolumn{4}{c|}{Yes}&\multirow{3}{*}{\thead{Unknown.\\ Numeri-\\cally:\\ Yes}}\\ \cline{1-6} \multirow{2}{*}{\thead{Preserves \\interior \\diffusion limit}}&1D&\multirow{2}{*}{No}&\multicolumn{3}{c|}{Yes}&\\\cline{2-2}\cline{4-6} &2D& &\thead{Triangular: Yes \\Rectangular: No}&\multicolumn{2}{c|}{Yes}&\\ \hline \multirow{2}{*}{\thead{Order of \\accuracy }}&isotropic&\multirow{2}{*}{1}&\multicolumn{2}{c|}{\multirow{2}{*}{2}}&2&\multirow{2}{*}{2}\\\cline{2-2}\cline{6-6} &anisotropic&&\multicolumn{2}{c|}{} &1&\\ \hline \multirow{3}{*}{\thead{System \\dimension }}&1D&\multirow{3}{*}{$n_x$}&\multicolumn{4}{c}{$2n_x$}\\\cline{2-2}\cline{4-7} &2D& &{$3n_x$}&\multicolumn{3}{c}{$4n_x$}\\\cline{2-2}\cline{4-7} &3D& &{$4n_x$}&\multicolumn{3}{c}{$8n_x$}\\\cline{2-2}\cline{4-7} \hline \multirow{3}{*}{\thead{Solution \\dimension }}&1D&\multirow{3}{*}{$n_\Omega \cdot n_x$}&\multicolumn{2}{c|}{$2n_\Omega\cdot n_x$}&\multicolumn{2}{c}{$n_\Omega\cdot n_x+n_x$}\\\cline{2-2}\cline{4-7} &2D& &{$3n_\Omega\cdot n_x$}&$4n_\Omega\cdot n_x$&\multicolumn{2}{c}{$n_\Omega\cdot n_x+3n_x$}\\\cline{2-2}\cline{4-7} &3D&&{$4n_\Omega\cdot n_x$}&$8n_\Omega\cdot n_x$&\multicolumn{2}{c}{$n_\Omega\cdot n_x+7n_x$}\\ \hline \end{tabular} \caption{Comparison of different methods.}\label{tab-compare} \end{table} \vspace{-0.5cm} \section*{Acknowledgment} ZS would like to thank Oak Ridge National Laboratory for hosting his NSF internship and to thank the staff, post-docs, interns and other visitors at ORNL for their warm hospitality.
\section{Introduction} In a pair of major, skillful papers, using concepts of random matrix theory, \.Zyczkowski and Sommers were able to obtain exact formulas for the {\it total} volumes--both in terms of the Hilbert-Schmidt (HS) metric \cite{szHS} and the Bures (minimal monotone) metric \cite{szBures}--of the $(N^2-1)$-dimensional convex set of $N \times N$ complex density matrices and the $((N^2+N-2)/2)$-dimensional convex set of $N \times N$ real density matrices, representing $N$-level quantum systems (cf. \cite{andai} \cite[secs. 14.3, 14.4]{ingemarkarol}). In two recent studies, we have been interested in the question of how to modify/truncate, in some natural manner (by multiplying certain integrands by relevant functions), these formulas of \.Zyczkowski and Sommers, so that they will yield not {\it total} volumes, but only the (lesser, strictly included) volumes occupied by the {\it separable/nonentangled} states \cite{slaterJGP2,slaterJMP2008} (cf. \cite{ZHSL}). Below, we report some interesting progress in this regard, in relation to the two-qubit ($N=4$) states. To begin, we present two parallel formulas from \cite{szHS} and \cite{szBures} for certain generalized normalization constants ($C_{N}^{(\alpha,\beta)}$) used in the total HS and Bures volume computations. (Some notation and formatting have been altered.) For the HS case, we have \cite[eq. (4.1)]{szHS} (cf. \cite[eq. (14.35)]{ingemarkarol}), \begin{equation} \frac{1}{C_{N(HS)}^{(\alpha,\beta)}}= \int_0^{\infty} \prod_{i=1}^{N} {\rm d}\lambda_{i} \delta(\sum_{i=1}^N \lambda_i -1) \prod_{i=1}^N \lambda_i^{\alpha-1} \prod_{i<j} |\lambda_i-\lambda_j|^{\beta}, \label{constab} \end{equation} and for the Bures case \cite[eq. (3.19)]{szBures} (cf. \cite[eq. (14.46)]{ingemarkarol}), \begin{equation} \frac{1}{C_{N(Bures)}^{(\alpha,\beta)}}=\int_0^{\infty} \prod_{i=1}^{N} \frac{{\rm d}\lambda_{i}}{\lambda_{i}^{1/2}} \delta(\sum_{i=1}^N \lambda_i -1) \left[\prod_{i<j}^{1...N} \frac{(\lambda_{i} -\lambda_{j})^2}{\lambda_{i}+\lambda_{j}} \right]^{\beta/2}\prod_{i =1}^N \lambda_{i}^{\alpha-1} \label{CHall}\ . \end{equation} The $\lambda$'s are the $N$ (nonnegative) eigenvalues--constrained to sum to 1--of the corresponding $N \times N$ density matrices, while the parameter $\beta$ is a ``Dyson index'', with $\beta=1$ corresponding to the real case, and $\beta=2$, the complex case (and $\beta=4$, the quaternionic case, not explicitly discussed in \cite{szHS,szBures}). The parameter $\alpha$ will be equal to 1 for the case--of immediate interest to us here--of generically {\it nondegenerate} density matrices. \subsection{Objective} Our goal, in overall terms, is to find metric-{\it independent} (separability) functions, \begin{equation} \label{goal} S_N^{(\alpha,\beta)}(\lambda_1\ldots\lambda_N), \end{equation} which, if inserted into formulas (\ref{constab}) and (\ref{CHall}) under the integral signs, as simple multiplicative factors, will yield separable--rather than total--volumes when the resulting modified $C_{N}^{(\alpha,\beta)}$'s are employed in exactly the same auxiliary computations (involving flag manifolds) of \cite{szHS} and \cite{szBures} in which the original $C_{N}^{(\alpha,\beta)}$'s given by (\ref{constab}) and (\ref{CHall}) were used. More specifically here, our numerical analyses will be restricted to the $N=4$ and $\beta=2$ (complex) and $\beta=1$ (real) cases. Our metric-independent goal is plausible for the following reason. Precisely the same preliminary integrations--respecting the separability constraints--over the non-eigenvalue parameters [possibly, Euler angles \cite{tbs} \cite[App.
I]{slaterJMP2008}] must be performed for both metrics before arriving at the stage at which we must concern ourselves with the remaining integration over the eigenvalues and the {\it differences} that are now clearly apparent between metrics in their corresponding measures over the simplex of eigenvalues. Although we are not able to explicitly/symbolically determine what the results of these preliminary integrations might be (the computational challenges are certainly considerable), they must--whatever form they may take--obviously be the same for both metrics in question. Our goal here is to understand--with the assistance of numerical methods--what functional forms these preliminary (12-dimensional in the complex case, and 9-dimensional in the real case) integrations yield. \subsection{Maximal concurrence and absolute separability} The further narrower specific focus of this study will be to explore the possibility that there exists a functional relationship of the form, \begin{equation}\label{ansatz} S_4^{(1,\beta)}(\lambda_1\ldots\lambda_4) = \sigma^{(\beta)} (C(\lambda_1\ldots\lambda_4)), \end{equation} where $\sigma^{(\beta)}(x)$ are some unknown {\it univariate} ({\it one}-dimensional) functions and \begin{equation} \label{maxcon} C(\lambda_1\ldots\lambda_4)= \max \{0,\lambda_1 -\lambda_3 -2 \sqrt{\lambda_2 \lambda_4}\}, \hspace{.2in} \lambda_1 \geq \lambda_2 \geq \lambda_3 \geq \lambda_4, \end{equation} is the {\it maximal concurrence} over spectral orbits of two-qubit density matrices \cite[sec. VII]{roland2} \cite{ishi,ver}. For two-qubit states, $C \in [0,1]$, with $C=0$ corresponding to the {\it absolutely} separable states. That is, {\it no} density matrix with $C=0$ can be nonseparable/entangled \cite{wkw}. 
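For concreteness, formula (\ref{maxcon}) is elementary to evaluate numerically; a minimal sketch (illustrative only, not part of the original computations):

```python
import math

def max_concurrence(eigs):
    """Maximal concurrence over the spectral orbit of a two-qubit
    density matrix: C = max{0, l1 - l3 - 2*sqrt(l2*l4)}, where
    l1 >= l2 >= l3 >= l4 are the unit-trace eigenvalues."""
    l1, l2, l3, l4 = sorted(eigs, reverse=True)
    return max(0.0, l1 - l3 - 2.0 * math.sqrt(l2 * l4))

# A pure state attains the maximal value C = 1 ...
print(max_concurrence([1.0, 0.0, 0.0, 0.0]))      # -> 1.0
# ... while the maximally mixed state is absolutely separable (C = 0).
print(max_concurrence([0.25, 0.25, 0.25, 0.25]))  # -> 0.0
```

Note that the function sorts its input, so the ordering convention of (\ref{maxcon}) need not be imposed by the caller.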
(In a recent study, we were able to obtain {\it exact} expressions--involving the tetrahedral dihedral angle $\cos ^{-1}\left(\frac{1}{3}\right)$--for the contributions to the Hilbert-Schmidt real and complex two-qubit volumes for those states with $C=0$, and to numerically estimate the Bures counterparts \cite[secs. III.B, III.C]{slaterJMP2008}. In numerical terms, the HS {\it absolute} separability probability of generic complex two-qubit states is 0.00365826, and the Bures counterpart, 0.000161792. The HS real analogue is 0.0348338.) The {\it concurrence} itself is a widely used entanglement measure of bipartite mixed states \cite[eq. (15.26)]{ingemarkarol}. \subsection{Motivation} Certainly part of our motivation for advancing the ansatz (\ref{ansatz}) was that an analogous modeling of a {\it trivariate} function in terms of a {\it univariate} function was found to hold--making use of the {\it Bloore} (correlation coefficient) parameterization of density matrices \cite{bloore}--for {\it diagonal-entry}-parameterized separability functions \cite[eq. (6)]{slaterPRA2} \cite{slater833}. This led to substantial insights--and {\it exact} conjectures ($\frac{8}{33}$ and $\frac{8}{17}$)--with regard to Hilbert-Schmidt (complex and real) two-qubit separability probabilities. (The Dyson indices $\beta$ played a central analytical role there, in relating real and complex [and quaternionic] results, but not apparently--as far as we can perceive--in the analyses to be presented below.) \section{Numerics} \subsection{Methodology} We do find encouragement in advancing the ansatz (\ref{ansatz}) by the extensive numerical results we generate, in that our estimates of $\sigma^{(1)}(C)$ and $\sigma^{(2)}(C)$ shown in Fig.~\ref{fig:Joint} rather closely reproduce--as we will indicate below (sec.~\ref{evaluation})--other (independent) numerical results and accompanying conjectures we have previously obtained. 
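Each density matrix arising in the computations described below must be tested for separability; for two-qubit ($N=4$) states this can be done exactly via the Peres-Horodecki positive-partial-transpose (PPT) criterion, which is necessary and sufficient in this dimension. A minimal, self-contained sketch (illustrative only; the Euler-angle construction of the states is omitted):

```python
import numpy as np

def is_separable_2qubit(rho, tol=1e-12):
    """Peres-Horodecki test: a two-qubit state is separable iff its
    partial transpose (here taken on the second qubit) is positive
    semidefinite."""
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return np.min(np.linalg.eigvalsh(pt)) >= -tol

# Werner state p |Phi+><Phi+| + (1-p) I/4, separable iff p <= 1/3.
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)
werner = lambda p: p * np.outer(phi, phi) + (1 - p) * np.eye(4) / 4
print(is_separable_2qubit(werner(0.2)))  # True
print(is_separable_2qubit(werner(0.5)))  # False
```

The Werner-state check above recovers the well-known separability threshold $p=1/3$.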
\begin{figure} \includegraphics{JointRealComplex.pdf} \caption{\label{fig:Joint} Joint plot of estimated (real [blue, $\beta=1$] and complex [red, $\beta=2$]) functions of the maximal concurrence over spectral orbits, $S_4^{(1,\beta)}(\lambda_1\ldots\lambda_4) = \sigma^{(\beta)} (C(\lambda_1\ldots\lambda_4))$. Note evident jumps in both functions when the maximal concurrence equals 0.5. The graphs cross at $C=0.181245$. For $C = 0$, $\sigma(C)=1$, so all associated density matrices are separable.} \end{figure} The complex ($\beta=2$) curve shown in Fig.~\ref{fig:Joint} is based on the use, for quasi-Monte Carlo numerical integration, of 26,300,000 12-dimensional [Tezuka-Faure (TF) \cite{tezuka}] {\it low-discrepancy} points, and the $\beta=1$ case, on 33,000,000 6-dimensional such points. (The TF procedure--programmed in Mathematica by Giray \"Okten \cite{giray1}--is not conducive to the placing of error bars on the results, though later routines developed by him are.) These points comprise {\it sample} values, respectively, of the 12 Euler angles used to parameterize $SU(4)$ and the 6 Euler angles used for $SO(4)$. For {\it each} TF-point, 499 auxiliary computations were carried out--in addition to that of the corresponding Haar measure associated with the Euler angles--for sets of eigenvalues with values of maximal concurrence running at equally-spaced intervals from $\frac{1}{500}$ to $\frac{499}{500}$. Each density matrix generated--corresponding to a specific set of eigenvalues with fixed $C$ and Euler angles \cite{tbs} \cite[App. I]{slaterJMP2008}--was tested for separability. Prior to the quasi-Monte Carlo runs, we established a database--using the Mathematica command ``FindInstance''--of 100 sets of four eigenvalues for each of the equally-spaced 499 values of $C$. One of the 100 sets was randomly selected [and then randomly permuted] for each of the TF-points and each of the 499 iterations.
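The FindInstance-based database just described is specific to Mathematica; as a rough, hypothetical stand-in, eigenvalue quadruples with (nearly) fixed $C$ can also be drawn by simple rejection sampling from the uniform measure on the unit simplex:

```python
import numpy as np

def sample_eigs_fixed_C(target_C, tol=0.01, rng=None, max_tries=200_000):
    """Draw eigenvalue 4-tuples uniformly from the unit simplex and keep
    the first one whose maximal concurrence C lies within `tol` of
    `target_C` (simple rejection; NOT the FindInstance procedure)."""
    rng = rng or np.random.default_rng(0)
    for _ in range(max_tries):
        lam = np.sort(rng.dirichlet(np.ones(4)))[::-1]  # l1 >= ... >= l4
        C = max(0.0, lam[0] - lam[2] - 2.0 * np.sqrt(lam[1] * lam[3]))
        if abs(C - target_C) <= tol:
            return lam
    raise RuntimeError("no sample found; loosen tol or raise max_tries")

lam = sample_eigs_fixed_C(0.3)  # a simplex point with C within 0.01 of 0.3
```

This produces eigenvalues distributed according to the uniform simplex measure conditioned on the $C$-bin, which is only a crude surrogate for the sampling used in the actual runs.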
This ``random generation'' of sets of eigenvalues with {\it fixed} values of $C$ is clearly less than an ideal procedure, but it was what we found to be practical under the particular circumstances. (In sec.~\ref{supplementary}, we manage to improve upon this approach.) Several weeks of MacMini computer time were used for each of the two sets--real and complex--of calculations. (Along with the computations concerning the maximal concurrence (\ref{maxcon}), we also carried out a fully parallel set of computations using the related variable, $\frac{2 \sqrt{\lambda_2 \lambda_4}}{\lambda_1-\lambda_3}$. Those results, however, seemed comparatively disappointing in their predictive power, so we do not detail them here.) \subsection{Evaluation of numerical results} \label{evaluation} Let us now appraise our estimated functions (Fig.~\ref{fig:Joint}) by seeing how well they are able to reproduce previous related results, themselves based on very extensive analyses (mostly involving quasi-Monte Carlo integration also). \subsubsection{Complex case} Use of the complex ($\beta=2$) function in Fig.~\ref{fig:Joint} impressively explains $98.7253\%$ of the variance of the estimated trivariate eigenvalue-parameterized separability function for $C>0$ presented in \cite[sec. III.B]{slaterJGP2}. It also yields an estimate of 0.254756 for the Hilbert-Schmidt separability probability, while our exact conjecture from \cite{slater833} is $\frac{8}{33} \approx 0.242424$. Further, the {\it Bures} separability probability estimate yielded is 0.0692753, while our conjectured value is $\frac{1680 \left(-1+\sqrt{2}\right)}{\pi ^8} \approx 0.0733389$ \cite{slaterJGP}. \subsubsection{Real case} Passing to the real ($\beta=1$) case, we had previously formulated the conjecture that the HS separability probability is $\frac{8}{17} \approx 0.470588$ \cite[sec. 9.1]{slater833}. Our estimate based on the (blue) function shown in Fig.~\ref{fig:Joint} is 0.480302. 
(The corresponding estimate--for which we have no prior conjecture--for the Bures {\it real} two-qubit separability probability is 0.212152.) Further, we are able to reproduce $97.7502\%$ of the variation in the corresponding trivariate function for $C>0$. (This last function had been estimated using a recent Euler-angle parameterization of $SO(4)$, obtained by S. Cacciatori \cite[App. I]{slaterJMP2008}. It was derived by Cacciatori after the submission of \cite{slaterJGP2}, and thus not reported nor used there, although its complex counterpart--based on 3,600,000 Tezuka-Faure points--had been \cite[sec. III.B]{slaterJGP2}, while the real case was based on a considerably lesser number of TF-points, 700,000.) \subsection{Jumps near $C=\frac{1}{2}$} For the real ($\beta=1$) case, the jump near $C(\lambda_1\ldots\lambda_4)=\frac{1}{2}$ is from approximately 0.118696 to 0.180357, and in the complex ($\beta=2$) case, from 0.0439255 to 0.651586. The magnitudes of the two jumps are then quite comparable, being respectively, $51.964\%$ and $51.488\%$. In Fig.~\ref{fig:midpoint}, we replot the curves shown in Fig.~\ref{fig:Joint} in the immediate vicinity of $C=\frac{1}{2}$, but amplify the complex (red) curve by a factor of 2.8. We perceive a very close similarity in shape. \begin{figure} \includegraphics{Midpoint.pdf} \caption{\label{fig:midpoint}Same plot as Fig.~\ref{fig:Joint}, restricted to the vicinity of $C=\frac{1}{2}$, and the complex (red) curve being amplified by a factor of 2.8} \end{figure} \subsection{Additional discontinuities} In Fig.~\ref{fig:Diff} we show the {\it derivatives} with respect to $C$ of the estimates of $\sigma^{(\beta)}(C)$. (Fig.~\ref{fig:Diff2} is a plot of the same two curves, except that we have now added 10 to the derivative in the real case and subtracted 10 in the complex case, so that the discontinuities can be more readily distinguished and compared.) 
\begin{figure} \includegraphics{JointDiff.pdf} \caption{\label{fig:Diff} Joint plot of {\it derivatives} with respect to $C$ of the estimated (real [blue, $\beta=1$] and complex [red, $\beta=2$]) functions of the maximal concurrence over spectral orbits, $\sigma^{(\beta)}(C(\lambda_1\ldots\lambda_4))$. Spikes are observable--{\it both} for the real and complex cases--at: $\frac{1}{2}=0.5$; $\frac{147}{500} =0.294$; $\frac{51}{250}=0.204$; and $\frac{17}{50} =\frac{51}{150} =0.34$.} \end{figure} \begin{figure} \includegraphics{JointDiff2.pdf} \caption{\label{fig:Diff2}Same as Fig.~\ref{fig:Diff}, except that the real (blue) curve has been translated upwards by 10 units and the complex (red) curve downwards by 10 units, so that the individual discontinuities in the two derivatives can be more readily seen and compared.} \end{figure} In addition to the already-discussed behavior at $C=\frac{1}{2}$, we see--{\it both} in the real and complex cases--a secondary spike at $\frac{147}{500} =0.294$, and lesser spikes at $\frac{51}{250}=0.204$ and $\frac{17}{50} =\frac{51}{150} =0.34$. So, all the observed spikes, signaling what we presume are discontinuities in the $\sigma^{(\beta)}$'s, and concomitant nontrivial {\it piecewise} behavior--indicative of different separability constraints becoming active/binding or not--are for $C \leq \frac{1}{2}$. The point $C=\frac{51}{500}=0.102$ may also be a discontinuity, at least in the complex case. We could detect no apparent spikes/discontinuities in the upper half-range, $C \in [\frac{1}{2},1]$. (In a somewhat analogous study of two-qubit {\it three}-parameter HS separability {\it probabilities}, intricate {\it piecewise} continuous behavior [involving the {\it golden ratio}] was observed \cite[eq. (37) and Fig. 11]{pbsCanosa}.) In Fig.~\ref{fig:segment} we show the segments of the estimated functions $\sigma^{(\beta)}(C)$ between the two discontinuities, $\frac{51}{250}=0.204$ and $\frac{17}{50} =0.34$.
The behavior seems very close to {\it linear} for both curves, except for the intermediate discontinuity at $\frac{147}{500} =0.294$. \begin{figure} \includegraphics{Segment.pdf} \caption{\label{fig:segment} Joint plot of estimated (real [blue, $\beta=1$] and complex [red, $\beta=2$]) functions of the maximal concurrence over spectral orbits, $S_4^{(1,\beta)}(\lambda_1\ldots\lambda_4) = \sigma^{(\beta)} (C(\lambda_1\ldots\lambda_4))$. The graphs are obviously close to linear between the discontinuities $\frac{51}{250}=0.204$ and $\frac{17}{50} =0.34$, except for the intermediate discontinuity at $\frac{147}{500} =0.294$. To high accuracy, the real (blue) curve can be fitted by the line $1.07614 -1.99472C$ and the complex (red) curve, by $1.19822 - 2.69548 C$.} \end{figure} \subsection{Supplementary analyses} \label{supplementary} Since the completion of the extensive numerical analyses described above, we have undertaken supplementary, parallel analyses in which 5,000 (rather than 500) subintervals of $C \in [0,1]$ are employed, and an improved method is used for sampling random eigenvalues with fixed values of $C$ (using the Mathematica FindInstance command, now with a random seed). These results so far seem largely consistent with those already described. However, the new analogues of the plots of derivatives, Figs.~\ref{fig:Diff} and \ref{fig:Diff2}, are still much too rough in character to detect the presence of any secondary (non-jump) discontinuities. But, even at this stage (having tested 30,400 times the separability of 4,999 complex density matrices, and 28,100 times the separability of 4,999 real density matrices), we can produce the interesting counterpart (Fig.~\ref{fig:midpoint2}) to Fig.~\ref{fig:midpoint}, in which the two jumps near $C=\frac{1}{2}$ of roughly $50\%$ magnitude are unmistakable.
\begin{figure} \includegraphics{Midpoint2.pdf} \caption{\label{fig:midpoint2}The same plot as Fig.~\ref{fig:midpoint}, but based on our ongoing supplementary analysis with finer resolution in $C$ and enhanced eigenvalue sampling. The twin jumps in the estimated eigenvalue-parameterized real and complex separability functions near $C=\frac{1}{2}$ are now certainly indisputably clear.} \end{figure} Further, the highly linear behavior displayed in Fig.~\ref{fig:segment} also reappears (Fig.~\ref{fig:segment2}). \begin{figure} \includegraphics{Segment2.pdf} \caption{\label{fig:segment2}Same plot as Fig.~\ref{fig:segment}, but based on our ongoing supplementary analysis with finer resolution in $C$ and enhanced eigenvalue sampling} \end{figure} \section{Concluding Remarks} For the real and complex two-qubit systems, we have investigated the possibility that the associated ({\it three}-dimensional) {\it eigenvalue}-parameterized separability functions--conceptually important for computing separability {\it probabilities}--are expressible as {\it one}-dimensional functions ($\sigma(C)$) of the maximal concurrence over spectral orbits ($C \in [0,1]$). Our numerical estimates, in this regard, have been encouraging, in that they closely reproduce {\it independently}-generated numerical results and exact conjectures concerning separability probabilities based on the Hilbert-Schmidt and Bures (minimal monotone) metrics over the two-qubit systems, and based on the use of {\it diagonal-entry}-parameterized separability functions. Plots of the real and complex versions of $\sigma(C)$ {\it both} exhibit {\it jumps} of approximately $50\%$ magnitude near the midpoint, $C=\frac{1}{2}$, and {\it both} also indicate the presence of, at least, three further (non-jump) discontinuities ($C \approx 0.204, 0.294, 0.34$), apparently indicative of points at which certain distinct separability constraints become either active/binding or not. 
Over the interval $C \in [0.204,0.34]$, the real and complex fitted functions $\sigma(C)$ {\it both} appear to be simply linear (except at $C \approx 0.294$). We have principally studied above the possibility (4) that the ostensibly {\it trivariate} two-qubit eigenvalue-separability functions can be equivalently expressed as {\it univariate} functions of only a single variable, that is, the maximal concurrence $C$ over spectral orbits \cite{roland2}. Since we have unfortunately not been able to fully formally resolve this issue--although our supporting evidence for this proposition is intriguing--we also cannot presently fully eliminate the possibility that one or even two (yet unspecified) variables supplemental to $C$ are, in fact, needed, and that the corresponding separability function is not in fact strictly univariate in nature (as we know is definitely the case with the two-qubit diagonal-entry-parameterized separability functions \cite{slater833}). It presently appears somewhat problematical to extend the line of analysis above to the qubit-{\it qutrit} ($N=6$) case. In addition to the greatly increased computational burden that would be involved, there does not seem to be a maximal concurrence formula comparable to the two-qubit one (\ref{maxcon}) with the requisite properties we have utilized \cite[p. 102108-16]{roland2}. We have examined the relationship between separability and entanglement in a specific analytical setting--involving {\it eigenvalue}-parameterized separability functions and the use of the \.Zyczkowski-Sommers formulas \cite{szHS,szBures} for the total Hilbert-Schmidt and Bures volumes of the $N \times N$ density matrices. A number of studies of Batle, Casas, A. Plastino and A. R.
Plastino (for example, \cite{batlecasas1}) have also focused on the relationship between separability and entanglement, but in somewhat different analytical frameworks (typically involving the ZHSL measure \cite{ZHSL}, which is {\it uniform} over the eigenvalue simplex). The closest we can come, it seems, to a direct comparison with their analyses is to note that the dot-dashed curve in Fig. 2 of \cite{batlecasas1} is based on the Hilbert-Schmidt metric, and their $x$-axis is the Bures distance, while we have employed the maximal concurrence $C$ on the $x$-axis in the somewhat comparable Fig. 1 above (cf. \cite{batleplastino1}). Both their plot and ours are, in general, downward-decreasing, but theirs gives no indication of any discontinuities. Also, their plot is of the separability {\it probability}, while ours is of the (presumed univariate) eigenvalue-parameterized separability {\it function}, to be used in the computation of the probability. \begin{acknowledgments} I would like to express appreciation to the Kavli Institute for Theoretical Physics (KITP) for computational support in this research. \end{acknowledgments}
\section{INTRODUCTION} \label{sec1} This paper is a continuation of a series of papers (Papers I -- VI and VIII -- X) of radial-velocity studies of close binary stars \citep{ddo1,ddo2,ddo3,ddo4,ddo5,ddo6,ddo8,ddo9,ddo10} and presents data for the tenth group of ten close binary stars observed at the David Dunlap Observatory. For technical details and conventions, and for preliminary estimates of uncertainties, see the interim summary paper \citet[Paper VII]{ddo7}. In this paper, we make use of broadening functions extracted not only from the region of the Mg~I triplet at 5184~\AA, as in previous papers, but also from two regions containing telluric features centered at 6290~\AA\ and 6400~\AA. These experimental setups were used because of concerns about flexure effects in our spectrograph. While this experiment provided a good check on the stability of our radial-velocity system and -- to a large extent -- alleviated our concerns, we found that the stellar lines in these two regions were generally too weak to replace the 5184~\AA\ feature on a routine basis. The broadening functions (from now on called BF's) based on the 6290~\AA\ and 6400~\AA\ observations were rather poorly defined, especially for earlier spectral types; this was mostly due to the low efficiency of our diffraction grating in the red region. As a result, the BF's for AG~Vir and DU~Boo were poor, with the secondary component almost undetectable. Thus, in the end, we have returned to the 5184~\AA\ region for the subsequent observations. The flexure tests based on our telluric-lines template (Regulus = HD87901) have shown that the standard wavelength calibrations provide a reasonable stability of our radial-velocity system, with the largest deviations within $\pm$3 km~s$^{-1}$.
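The BF's referred to throughout are obtained by linear deconvolution of each program-star spectrum against a sharp-lined template. A minimal sketch of the idea follows; the truncated-SVD solution, array sizes and truncation level below are illustrative assumptions, not the exact DDO reduction pipeline:

```python
import numpy as np

def extract_bf(template, program, half_width=20, n_svd=None):
    """Recover the broadening function b in  program ~ T @ b,  where the
    columns of T are pixel-shifted copies of the sharp-lined template
    spectrum.  Solved by (optionally truncated) SVD; n_svd=None keeps
    all singular values.  Illustrative sketch only."""
    shifts = range(-half_width, half_width + 1)
    # design matrix: each column is the template shifted by k pixels
    T = np.column_stack([np.roll(template, k) for k in shifts])
    U, s, Vt = np.linalg.svd(T, full_matrices=False)
    keep = s.size if n_svd is None else n_svd
    s_inv = np.zeros_like(s)
    s_inv[:keep] = 1.0 / s[:keep]  # discard noisy small singular values
    return Vt.T @ (s_inv * (U.T @ program))
```

In real reductions the truncation level controls the trade-off between velocity resolution and noise amplification.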
The broadening functions obtained in the red region were used in the present study only to augment the data for the quadruple systems, ET~Boo, VW~LMi and TV~UMi, and only for observations at critical orbital phases of long-period systems when any spectrum was of use in defining the orbit. In August 2005, a new grating with 2160 l/mm was acquired to replace the previously most frequently used grating with 1800 l/mm, which after many years of use had lost its original efficiency. Thus, unfortunately, due to the changes described above, in combination with the necessarily very extended time coverage for triple and quadruple systems, the present dataset is the least homogeneous since the start of this series of studies. The BF's used here were extracted from spectra obtained with four different CCD detectors and two different diffraction gratings. This lack of homogeneity does not seem to affect the final data, which have uncertainties similar to those previously reported in this series of investigations. Selection of the targets in our program remains quasi-random: At a given time, we observe a few dozen close binary systems with periods usually shorter than one day, brighter than 10 -- 11 magnitude and with declinations $>-20^\circ$; we publish the results in groups of ten systems as soon as reasonable orbital elements are obtained from measurements evenly distributed in orbital phases. Whenever possible, we estimate spectral types of the program stars using our classification spectra obtained with a grating of 600 l/mm over a range of 635 \AA\ or 890 \AA\ (depending on the CCD detector) centered at 4200 \AA. Our classifications are based on comparison with several spectral standards of the MK system observed on the same nights. They are compared with the mean $(B-V)$ color indices, usually taken from the Tycho-2 catalog \citep{Tycho2}, and with the photometric estimates of the spectral types using the relations published by \citet{Bessell1979}.
The radial velocity (hereafter RV) observations reported in this paper have been collected between June 1997 and September 2005. The ranges of dates for individual systems can be found in Table~\ref{tab1}. The present group contains 3 quadruple systems, ET~Boo, VW~LMi and TV~UMi, whose complex nature had been noticed several years ago, but whose full orbital solutions required extended monitoring. This paper is structured in a way similar to that of previous papers, in that most of the data for the observed binaries are in two tables consisting of the RV measurements in Table~\ref{tab1} and their preliminary sine-curve solutions in Table~\ref{tab2}. RVs and corresponding spectroscopic orbits for all ten systems are shown in Figures~\ref{fig1} to \ref{fig3}. In this paper we changed the way RVs are determined from the broadening functions: Instead of Gaussians, we now use single or double rotational profiles to fit the peaks in the broadening functions. This approach, which is described in Section~\ref{rot}, gives much better results with smaller random errors. We stress that this is still not a full modeling of the broadening function shape (as attempted in \citet{ruc92,ahvir,wuma}), which would be an optimal approach, but a convenient and better working (than Gaussians) tool. The measured RVs are listed in Table~\ref{tab1} together with weights, determined as $1/\sigma^2$, based on individual determinations of the central component velocity. This weighting scheme, which accounts for differences in relative quality of observations, improves the quality of the orbital solutions. \setcounter{footnote}{3} The data in Table~\ref{tab2} are organized in the same manner as in previous papers.
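The rotational profile used in Section~\ref{rot} is, in essence, the classical limb-darkened rotational broadening kernel. A minimal single-peak fitting sketch follows; the limb-darkening coefficient, velocity grid and starting values are illustrative assumptions, not the values used in our reductions:

```python
import numpy as np
from scipy.optimize import curve_fit

def rot_profile(v, amp, v0, vsini):
    """Classical limb-darkened rotational broadening profile (a linear
    limb-darkening coefficient eps = 0.5 is assumed for illustration);
    identically zero outside |v - v0| > vsini."""
    eps = 0.5
    x = (v - v0) / vsini
    g = np.zeros_like(v, dtype=float)
    m = np.abs(x) < 1.0
    g[m] = (2.0 * (1.0 - eps) * np.sqrt(1.0 - x[m] ** 2)
            + 0.5 * np.pi * eps * (1.0 - x[m] ** 2)) / (np.pi * (1.0 - eps / 3.0))
    return amp * g

def fit_bf_peak(v, bf, p0):
    """Fit a single rotational profile to one BF peak; the measured RV
    is the fitted center v0.  p0 = (amp, v0, vsini) starting guess."""
    popt, _ = curve_fit(rot_profile, v, bf, p0=p0)
    return popt
```

A double peak (the two binary components) would be fitted by the sum of two such profiles.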
In addition to the parameters of spectroscopic orbits, the table provides information about the relation between the spectroscopically observed epoch of the primary-eclipse T$_0$ and the recent photometric determinations in the form of the $O-C$ deviations for the number of elapsed periods $E$. For HL~Dra the reference ephemeris is taken from the Hipparcos Catalogue; for DU~Boo from \citet{prib2005}; for the rest of the systems, ephemerides given in the on-line version of ``An Atlas of O-C Diagrams of Eclipsing Binary Stars''\footnote{http://www.as.wsp.krakow.pl/ephem/} \citep{Kreiner2004} were adopted. Because the on-line ephemerides are frequently updated, we give those used for the computation of the $O-C$ residuals below Table~\ref{tab2} (status as in February 2006). The values of $T_0$ refer to the deeper eclipse, which for W-type systems corresponds to the lower conjunction of the more massive component; in such cases the epoch is a non-integer number. In the cases of ET~Boo and VW~LMi, where observations covered several years and photometric data have been rather scanty, we optimized not only $T_0$, but also the orbital period. Table~\ref{tab2} contains our new spectral classifications of the program objects. Independent classification was done for all systems except TX~Cnc. Section~\ref{sec2} of the paper contains brief summaries of previous studies for individual systems and comments on the new data. The novel technique of fitting the rotational profiles to peaks in the BF's is described in Section~\ref{rot}. Examples of BF's of individual systems extracted from spectra observed close to quadratures are shown in Fig.~\ref{fig4}. As in our previous papers dealing with multiple systems, RVs for the eclipsing pair were obtained from BF's with the additional peaks removed. This task was performed by first fitting multiple Gaussian profiles to the combined BF's and then removing the signatures of the third (and sometimes fourth) component.
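The $O-C$ entries described above follow from a simple cycle count against the reference ephemeris; schematically (for W-type systems, where the epoch can be a half-integer, $E$ would instead be rounded to the nearest half-integer):

```python
def o_minus_c(t0_obs, t0_ref, period):
    """O-C of a spectroscopic epoch against a photometric ephemeris
    T(E) = t0_ref + E*period, with E the nearest integer cycle count.
    Times in days (e.g. HJD); illustrative sketch only."""
    E = round((t0_obs - t0_ref) / period)
    return E, t0_obs - (t0_ref + E * period)
```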
While the final RVs of the close pair were determined by rotational profile fitting to such ``cleaned'' profiles, the velocities of the well separated and slowly rotating additional components were determined by the Gaussian fits (Table~\ref{tab3}). Because the BF technique actually produces Gaussians for intrinsically sharp signatures with $\sigma \simeq 15$ km~s$^{-1}$, this approach is internally consistent. \section{RESULTS FOR INDIVIDUAL SYSTEMS} \label{sec2} \subsection{DU~Boo} \label{duboo} Photometric variability of DU~Boo was discovered by the Hipparcos satellite \citep{hip}, where the star was classified as an ellipsoidal variable of the A2 spectral type. Later, \citet{gomgar1997} observed the system photometrically and found that it is an eclipsing binary with a large O'Connell effect amounting to the difference $Max.II - Max.I = 0.10$ mag in the $V$ passband. It is interesting to note that the light-curve asymmetry and the associated surface inhomogeneities have been very stable since the time of the Hipparcos discovery; this indicates that the solar-type dark-spot paradigm does not apply in this case. Recently, \citet{prib2004} analyzed the $UBV$ photometry and found that DU~Boo is a relatively long-period (1.0559 day) contact binary showing total eclipses; the derived photometric mass ratio was found to be $q = 0.194(2)$. Our spectroscopic mass ratio $q = 0.234(35)$ is consistent with the photometric determination, which documents the reliability of photometric mass ratios derived from timing of the inner eclipse contacts for contact binaries showing total eclipses \citep{MD72a,MD72b}. The large O'Connell effect is reflected in the extracted broadening functions of DU~Boo. While the primary component shows undisturbed broadening functions around quadratures, close in shape to the theoretical rotational profiles (Section~\ref{rot}), the BF profile for the secondary is always very deformed.
This causes distortions of the observed RV curve and adversely affects the solution for the mass-center velocity. The peaks of the BF's are not fully separated, supporting the photometric solution of DU~Boo as a contact binary. The system is clearly of the A-type, with the more massive component eclipsed at the deeper minimum. By combining our spectroscopic results with the inclination angle $i = 81.5\degr$ \citep{prib2004}, we obtain the total mass of the system $M_1+M_2 = 2.56 \pm 0.07 M_\odot$. Our new spectral type estimate of A7V is definitely later than the spectral type given in the Hipparcos catalogue (A2). The mean Tycho-2 color $(B-V) = 0.31$ is in better accord with our determination of the spectral type and requires only a small interstellar extinction. The orbital period of the system (1.0559 days) is rather long for a contact binary of A7V spectral type, which indicates that the components of DU~Boo may be evolved. The Hipparcos parallax is relatively small, $\pi = 2.58 \pm 1.03$ mas, and not precise enough for determination of the system luminosity. \subsection{ET~Boo} \label{etboo} The photometric variability of the star was discovered by the Hipparcos satellite \citep{hip}, where it is cataloged as a $\beta$~Lyrae eclipsing binary of the F8 spectral type. ET~Boo is a known member of the visual pair COU~1760 (ADS~14593+4649), with an orbital period of about 113 years and a magnitude difference of $\Delta V$ = 0.86 (see the Sixth Catalog of Orbits of Visual Binary Stars, currently available only in the electronic form\footnote {Washington Double Star Catalogue (WDS), \citet{WDS}, http://ad.usno.navy.mil/wds/orb6.html}). The close, eclipsing binary producing the light variations (from now on: stars 1 and 2) is the brighter component of the visual pair.
The observed separation of the visual components was $0.1 - 0.2$ arcsec during the available astrometric observations \citep{WDS} which is much less than the typical seeing at the DDO of 1 -- 3 arcsec; therefore, the spectra of both components were observed simultaneously. We discovered that the broadening functions (Figures~\ref{fig4} and \ref{fig5}) show an occasional splitting of the third-component peak, indicating that it is in fact a close, non-eclipsing binary (from now on: stars 3 and 4) with a very strongly elliptical orbit. Thus, the system is a hierarchical quadruple with both components of the visual 113 year period system being double-line (SB2) close binaries. Our estimates of the combined brightness of the third and fourth components are $L_{34}/(L_1 + L_2) = 0.35 \pm 0.02$ for the spectral region around 5184~\AA\ and $L_{34}/(L_1+L_2) = 0.33 \pm 0.02$ at 6400~\AA\ during the maximum light of the closer pair. The RV data for the close binary were handled in the standard way, by first removing the peaks (by preliminary fitting of Gaussian profiles) of the second binary and then measuring the positions of the RV peaks for the close pair. The novelty here is that instead of Gaussians, as in previous papers, we used the double rotational profiles for the close pair (Section~\ref{rot}). The results indicate that the brighter component of ET~Boo is a semi-detached or more likely a detached binary with a relatively large mass ratio, $q = 0.884$. Only a small fraction of the available BF's show splitting of the visual companion peaks (components 3 and 4); this property gave us a hint of a highly eccentric orbit, but also crucially helped in finding the orbital period. Because these stars have very similar brightness, the period ambiguity was resolved by consideration of the RV differences between the components. 
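The period search on the RV differences of the components can be illustrated by a simple least-squares periodogram: at each trial period a single sinusoid (the lowest-order trigonometric polynomial) is fitted, and the fraction of variance it explains is recorded. The sampling, amplitude and grid below are illustrative assumptions, not our actual data:

```python
import numpy as np

def sine_power(t, y, periods):
    """Fraction of variance explained by a single fitted sinusoid at
    each trial period -- a lowest-order trigonometric-polynomial fit
    used as a simple periodogram for sparse RV differences."""
    power = np.empty(len(periods))
    for k, p in enumerate(periods):
        ph = 2.0 * np.pi * t / p
        A = np.column_stack([np.sin(ph), np.cos(ph), np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        power[k] = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    return power
```

The best trial period is then refined by the full spectroscopic orbit solution.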
A preliminary orbital period found by trigonometric polynomial fitting to the data was later refined by the spectroscopic orbit solution (Table~\ref{tab4}, Fig.~\ref{fig6}) to give 31.521 days. The systemic velocity of the second binary, $V_0 = -24.15 \pm 0.44$ km~s$^{-1}$, is close to the systemic velocity of the close pair, $V_0 = -23.52 \pm 0.52$ km~s$^{-1}$, confirming the physical association of the two binaries. Since the orbital period of the wide pair is 113 years, no orbital motion can be expected to be detected in the 5 years of our observations. \subsection{TX~Cnc} \label{txcnc} The photometric variability of TX~Cnc, an apparent member of the Praesepe open cluster, was first announced by \citet{haff1937}. \citet{whel1973} were the first to obtain a good simultaneous fit to the photometric and spectroscopic data of the system on the assumption of the Roche model. A preliminary analysis of the DDO observations was published in a PhD thesis \citep{blake2002}. The RVs used there have been later re-determined using the rotational profile fitting. Surprisingly, the broadening functions show a shape of a practically detached binary (see Fig.~\ref{fig4}), although the orbital velocities are quite typical for a contact, W-type system. The spectroscopic elements of TX~Cnc determined by \citet{whel1973} were $V_0$ = 26.6 $\pm$ 3 km~s$^{-1}$, $K_1$ = 117.3$\pm$3 km~s$^{-1}$, $K_2$ = 189.8 $\pm$ 4 km~s$^{-1}$, giving $q$ = 0.62. A later RV solution published by \citet{lean1983}, $V_0 = 29 \pm 6$ km~s$^{-1}$, $K_1 = 96 \pm 8$ km~s$^{-1}$ and $K_2 = 181 \pm 11$ km~s$^{-1}$, was based on only 8 photographic spectra. Our spectroscopic elements suggest a smaller mass ratio, $q = 0.455 \pm 0.011$, than derived before and a larger total mass, $(M_1 + M_2) \sin^3 i = 1.330 \pm 0.012 M_\odot$; both changes are rather characteristic of an improved quality of the spectroscopic observations.
TX~Cnc is of particular interest for our understanding of the evolution of contact binaries because it appears to belong to Praesepe, which is one of the youngest (900 Myr) open clusters containing such binaries \citep{ruc98}. All indications point to an advanced age of contact binaries, so that confirmation of the membership of TX~Cnc in Praesepe may provide a much needed lower limit on the time needed to form such binaries. Unfortunately, the system's parallax was not measured during the Hipparcos mission, so that the membership must be judged by less direct means. Our radial velocity data, giving $V_0 = 34.0 \pm 0.5$ km~s$^{-1}$, are fully consistent with the mean velocity of the Praesepe cluster, $V = 34.53 \pm 0.12$ km~s$^{-1}$, and the velocity dispersion of its spectroscopic binaries, $\sigma = 0.40$ km~s$^{-1}$ \citep{merm1999}. In the PPM Catalogue, \citet{roba1988} assign TX Cnc a parallax of $5.21 \pm 0.79$ mas. Hipparcos astrometry of Praesepe analyzed by \citet{lee1999} gives a cluster parallax of $5.32 \pm 0.37$ mas, i.e. a distance modulus of $m-M = 6.37 \pm 0.15$. With the maximum brightness of TX~Cnc of $V_{max} = 10.0$, we get $M_V = 3.63 \pm 0.15$, which is in perfect agreement with the absolute magnitude estimated from the \citet{rd1997} calibration giving $M_V = 3.60$, for assumed $(B-V)_0$ = 0.54 corresponding to spectral type F8V. This excellent agreement in radial velocity, parallax and luminosity distance is supported by proper motion data. The careful photographic study by \citet{js91} showed that TX Cnc has a proper motion of $(\mu_{x}=-4.4 \pm 3.2, \mu_{y}= 0.5 \pm 1.4)$ mas yr$^{-1}$ relative to the centre of motion of Praesepe, leading \citet{js91} to assign a 99\% probability that TX Cnc belongs to Praesepe. The velocity dispersion contributes less than 1 mas yr$^{-1}$.
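The distance-modulus arithmetic above can be verified directly from the quoted cluster parallax:

```python
import math

def distance_modulus(parallax_mas):
    """m - M = 5 log10(d / 10 pc), with d = 1000 / parallax[mas] in pc."""
    d_pc = 1000.0 / parallax_mas
    return 5.0 * math.log10(d_pc / 10.0)

mu = distance_modulus(5.32)   # Praesepe cluster parallax from the text (mas)
M_V = 10.0 - mu               # V_max of TX Cnc minus the distance modulus
```

This reproduces $m-M \approx 6.37$ and $M_V \approx 3.63$.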
The Tycho-2 work of \citet{Tycho2} yields $(\mu_{\alpha} \cos \delta = -36.2 \pm 1.2, \mu_{\delta}= -11.2 \pm 1.3)$ mas yr$^{-1}$ for the absolute proper motion of TX Cnc, compared with the mean centre of mass motion of $(-35.7 \pm 0.4, -12.7 \pm 0.3)$ mas yr$^{-1}$ found by \citet{lee1999}. We can therefore be very confident that TX Cnc is a member of Praesepe, and hence it is an important system for testing theories of contact binary formation and evolution. \subsection{V1073~Cyg} \label{v1073cyg} The bright A-type contact binary V1073~Cyg has been the subject of several photometric studies \citep{sezer1996,morris2000}. There also exist two spectroscopic investigations: \citet{fitz1964} published a spectroscopic orbit for the primary component with a marginal detection of the secondary, with $V_0 = -8\pm3$ km~s$^{-1}$, $K_1 = 66 \pm 4$ km~s$^{-1}$ and the mass ratio of $q$ = 0.34. The author found a small eccentricity of the orbit, $e = 0.115 \pm 0.053$, which cannot be significant when one applies the arguments of \citet{Lucy71,Lucy73} on statistics of eccentricity determinations. \citet{ahn1992} obtained an apparently more reliable, circular orbit solution for both components, with the resulting spectroscopic elements, $V_0 = -0.8 \pm 1.1$ km~s$^{-1}$, $K_1 = 66.7 \pm 1.3$ km~s$^{-1}$ and $K_2 = 210.2 \pm 1.2$ km~s$^{-1}$. Our results, $V_0 = -6.85 \pm 0.50$ km~s$^{-1}$, $K_1 = 65.53 \pm 0.64$ km~s$^{-1}$ and $K_2 = 218.9 \pm 1.5$ km~s$^{-1}$, are superior to the previous ones thanks to the BF extraction technique and the rotational profile fitting. The relatively large formal $rms$ errors are mainly caused by the simple sine curve solution used by us. When combined with a high-precision light curve, the BF modeling can provide high-quality absolute parameters of the system. Our spectral type estimate, F0V, is much later than the original classification of \citet{fitz1964}, who estimated the spectral type as A3Vm; it confirms the classification of \citet{hill1975}.
The Tycho-2 color $B-V$ = 0.375 and our spectral type indicate a non-negligible amount of reddening. The Hipparcos parallax, $\pi = 5.44 \pm 0.95$ mas, is not sufficiently precise to draw conclusions on the physical properties of the system. \subsection{HL~Dra} \label{hldra} Variability of HL~Dra was detected during the Hipparcos mission. The system was classified as a $\beta$~Lyrae eclipsing binary with an orbital period of 0.944276 days. The primary component is of the A5 spectral type. No ground-based photometric study of the system has been published yet, and no recent minima observed after the Hipparcos mission are available. Our time of the spectroscopic conjunction shows only a small shift (+0.0047 days) with respect to the time predicted by the original Hipparcos ephemeris, so that the orbital period of the system appears to be very stable. We have not been able to detect spectral signatures of the secondary component in our data. The broadening functions close to quadratures show only small humps on both sides of the primary peak which cannot be identified with the secondary component because they do not show any orbital motion. The system is clearly a detached or semi-detached pair with a low luminosity secondary component. HL~Dra was observed during two seasons in two different wavelength regions, in 2004 at 5184~\AA\ and in 2005 at 6290~\AA. The latter dataset is of relatively poor quality due to the low number and weakness of spectral lines in the red spectral region of an early type star. The orbital single-line solutions resulting from the two datasets are in good accord except for the center of mass velocity: $V_0=-29.3 \pm 0.4$ km~s$^{-1}$ for the 2004 data and $V_0=-36.5 \pm 0.6$ km~s$^{-1}$ for the 2005 data. The shift is well outside the formal errors and may be caused by a motion of the eclipsing pair around the barycenter with a third body.
The 2004 data are of much better quality, so the orbital parameters listed in Table~\ref{tab2} correspond to this dataset. Our new spectral type determination, A6V, is slightly later than previously published. The Tycho-2 color index $(B-V) = 0.222$ corresponds to the A8V spectral type, so that the reddening appears to be small. \subsection{AK~Her} \label{AK Her} AK~Her is a W~UMa-type contact binary discovered by Metcalf (see \citet{pick1917}). It is the brighter component in the visual binary ADS~10408. The companion, located at a separation of 4.2 arcsec, is 3.5 mag fainter than AK~Her at its maximum light. The position angle of 323\degr~ is almost perpendicular to our fixed, E--W spectrograph slit, so that this component was not detectable in the broadening functions. The system is known to show a cyclic variation in the moments of eclipses which is probably caused by the light-time effect induced by an undetected companion on an orbit of about 57 years \citep{awad2004}. The perturbing star cannot be identified with the known visual companion and must be much closer to the binary. The complex multiplicity of the system is supported by the Hipparcos astrometry as follows: (i)~the system shows a stochastic astrometric solution (the X flag in the catalog field H59); (ii)~it is suspected not to be single (the S flag in H61); (iii)~the trigonometric parallax of $10.47 \pm 2.77$ mas has a much too large error for the brightness of the system. Our individual spectroscopic data do not show any contribution from this putative third (or rather fourth) component. It is possible that such a companion will be seen in a detailed analysis of averaged spectra \citep{dangelo2006}, but this approach is outside the scope of the present paper. Our RV solution is the first to treat the star as a double-lined binary system (SB2). \citet{sanf1934} observed the RV curve of the primary component and determined $f(m)$ = 0.0208 M$_\odot$.
His RV solution ($V_0 = -13$ km~s$^{-1}$, $K_1 = 79$ km~s$^{-1}$) is fairly consistent with our solution. Our spectral classification gives an earlier spectral type for the system, F4V, than previously discussed, F8V. It is not fully consistent with the Tycho-2 color index $(B-V) = 0.490$ and implies some interstellar reddening. \subsection{VW~LMi} \label{vwlmi} The photometric variability of VW~LMi was found by the Hipparcos mission. It was classified in the Hipparcos catalogue \citep{hip} as a W~UMa type eclipsing binary with an orbital period of 0.477547 days. The first photometric observations of the system were published by \citet{dumi2000}. Later, the light curve of the system was analyzed by \citet{dumi2003}, who found the mass ratio $q_{ph}$ = 0.395 and inclination $i$ = 72.4\degr; we show later that these values are incorrect as they do not take into account the presence of the relatively bright binary companion. VW~LMi has been observed spectroscopically at DDO since 1998. It was realized from the beginning that the system is a quadruple one, consisting of two spectroscopic binaries. While the eclipsing pair can be identified with the short-period contact binary, the second spectroscopic binary is a detached one with a period of about 7.9 days (see below how we arrived at the more exact value). The light contribution of the second spectroscopic binary at the maximum brightness of the contact pair is $L_{34}/(L_1 + L_2) = 0.42$. The study of the quadruple system VW~LMi is complicated by the mutual orbital motion of both binaries, so that changes in the respective $V_0$ values cannot be neglected. Another complication is the similar brightness of the components of the second, non-eclipsing binary, making derivation of its orbital period very difficult. We worked first with RV differences of the components of the second pair to find its orbital period.
Trigonometric polynomial fits to the data led to the orbital period $P_{34}$ = 7.9305 days, which explained all data very well. An attempt to find the orbit of the contact pair (after removing the contribution of the second binary) resulted in a spectroscopic orbit of poor quality. In fact, the residuals from both preliminary orbits showed a clear anti-correlation between the velocities of the contact pair (components 1 and 2) and of the non-eclipsing binary (components 3 and 4), indicating the mutual orbital motion of the two binaries. The period analysis revealed only one feasible period of about 355 days, hence close to one year. Since the data span 7 years and the observing season for VW~LMi is from late November to mid May, the phase coverage of the mutual orbit is partial and has gaps. A further improvement of all three orbits was achieved by simultaneous fits to all four datasets of the RVs of the form: \[V_{i} = V_0 + (-1)^i K_i [e_{j} \cos \omega_{j} + \cos (\omega_{j}+\nu_{j})] + (-1)^j K_{2j-1,2j} [e_3 \cos \omega_3 + \cos (\omega_3 +\nu_3)]\] where $V_0$ is the mass-center velocity of the whole quadruple system, $K_i$ are the respective velocity semi-amplitudes of the individual components, $e_j$ is the orbital eccentricity, $\omega_j$ the longitude of periastron and $\nu_j$ the true anomaly. The index $i$ corresponds to the component number ($i = 1 - 4$), while the index $j$ takes the value of 1 for the contact binary, 2 for the detached binary and 3 for the mutual orbit of the two systems. Thus, for the components of the contact binary, $j=1$ and $i=1,2$, while for the components of the detached binary, $j=2$ and $i=3,4$. $K_{2j-1,2j}$ for $j$=1 should be read as $K_{12}$ while for $j=2$ as $K_{34}$, where $K_{12}$ and $K_{34}$ are the semi-amplitudes of the RV changes of the mass centers of the contact and the detached binary, respectively.
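The model above can be written compactly as a single function of the component index $i$ and pair index $j$; a minimal sketch (the numerical values in the usage example are purely illustrative, not our fitted elements):

```python
import math

def rv_component(i, j, V0, K, K_cm, e, omega, nu, e3, omega3, nu3):
    """RV of component i (1..4) belonging to pair j (1 = contact,
    2 = detached), following the simultaneous-fit model in the text.
    K, e, omega, nu are dicts keyed by component/pair index; K_cm[j]
    stands for K12 or K34, the semi-amplitude of that pair's mass-center
    motion on the outer orbit; nu values are true anomalies."""
    inner = K[i] * (e[j] * math.cos(omega[j]) + math.cos(omega[j] + nu[j]))
    outer = K_cm[j] * (e3 * math.cos(omega3) + math.cos(omega3 + nu3))
    return V0 + (-1) ** i * inner + (-1) ** j * outer
```

The alternating signs $(-1)^i$ and $(-1)^j$ reproduce the anti-phased motion of the two components within each pair, and of the two pairs on the outer orbit.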
All results of the simultaneous fits are presented in Table~\ref{tab5}, while the orbital elements of the contact pair are also given with the remaining binaries of this study in Table~\ref{tab2}. For simplicity, all measurements were assigned the same weight, although the velocities for the detached pair were determined by the Gaussian fits while those of the contact pair by the rotational profile fits. The sine curve fits to the data for the contact binary, corrected for the motion on the outer orbit, are shown in Fig.~\ref{fig2}. While the secondary component is usually not blended with the peaks of the second spectroscopic binary, the primary of the contact pair is always visible projected against the ``background'' of the profiles of the 3rd and 4th components. This circumstance caused an enhanced scatter of the velocities of the primary component. The corrected RVs of the second binary with the corresponding fits are plotted in Fig.~\ref{fig7}. The final orbital period for this binary is 7.93044 days and the orbit is nearly circular. The velocities of all four components corrected for the corresponding orbital motions in the inner orbits and their best fits are plotted in Fig.~\ref{fig8}. These residual RVs represent the orbital motion of the mass centers of both binaries. Because the outer orbit has a relatively short period of 355 days, it is of interest to inquire into the mutual orientation of the three orbits. This can be estimated from the projected masses of the components in a sort of ``bootstrap'' process started with the derived inclination of the contact, eclipsing system. A preliminary solution of unpublished photoelectric data obtained at the Star\'a Lesn\'a Observatory of the Astronomical Institute of the Slovak Academy of Sciences was used to estimate the inclination angle of the eclipsing pair.
Fixing the third light at $L_{34}/(L_1 + L_2) = 0.42$ (see above) and using the spectroscopic mass ratio of $q=0.42$ led to the inclination angle $i_{12} = 80.1^\circ \pm 0.2^\circ$. This is, as expected, a much larger inclination than the one obtained by \citet{dumi2003} ($i = 72.4^\circ$) without an assumption of the third light. Using our estimate of the inclination angle and the projected total mass of the contact pair, $(M_1+M_2) \sin^3 i_{12} = 2.28\, M_\odot$, we obtain $(M_1 + M_2) = 2.39\, M_\odot$. The outer, 355 day orbit defines the mass ratio for the two pairs, $(M_1+M_2)/(M_3+M_4) = 1.09$. Therefore, the true (not the projected) mass of the second spectroscopic binary is $(M_3+M_4) = 2.19\, M_\odot$. Using the projected mass $(M_3+M_4) \sin^3 i_{34} = 1.79 \pm 0.03\, M_\odot$, we estimate the inclination of the orbit of the second pair to be about $69^\circ$. The outer, 355 day orbit is even less inclined to the sky: with $(M_1+M_2+M_3+M_4) = 4.58\, M_\odot$ and a projected total mass of only $2.67\, M_\odot$, we obtain $i_{12-34} = 57 \degr$. Obviously, we could not determine whether these inclinations are all in the same sense or are complements to $180^\circ$; a determination of the sense of the revolution could only come from interferometric observations. The non-eclipsing, detached binary, with $(M_3+M_4) = 2.19\, M_\odot$, is composed of two almost identical ($q_{34} = M_3/M_4 = K_4/K_3 = 0.980 \pm 0.017$), most probably main sequence stars. Their masses correspond to about F9V -- G0V and their similarity is also reflected in the luminosity ratio, $l_3/l_4 \approx$ 1.04. The evolutionary status of the components can be guessed from a comparison of their rotational and orbital velocities. If we assume synchronous rotation and take the rotational velocities of the components estimated from the Gaussian profile fits to be about 12 km~s$^{-1}$ and the semi-amplitudes of the RVs to be about 60 km~s$^{-1}$, we see that the fractional radii of the components are $r/a<0.2$.
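The inclination estimates in this bootstrap can be checked by inverting the spectroscopic projected-mass relation (assuming, as usual for spectroscopic masses, the $\sin^3 i$ form):

```python
import math

def inclination_deg(projected_mass, true_mass):
    """Invert  M_proj = M * sin^3(i)  for the inclination i in degrees."""
    return math.degrees(math.asin((projected_mass / true_mass) ** (1.0 / 3.0)))

i_34 = inclination_deg(1.79, 2.19)   # detached pair (solar masses)
i_out = inclination_deg(2.67, 4.58)  # outer 355 d orbit
```

This reproduces $i_{34} \approx 69^\circ$ and $i_{12-34} \approx 57^\circ$.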
With the semi-major axis of the absolute orbit of about 10 R$_\odot$, their radii are $<2 R_\odot$. The similar spectral types of all components of the quadruple system VW~LMi have resulted in a very good quality of the extracted BF's, as can be judged from Fig.~\ref{fig4}. It is interesting to note that the multiplicity of VW~LMi was not identified astrometrically during the Hipparcos mission in spite of the relative proximity of the system at $\pi = 8.04 \pm 0.90$ mas. This is probably caused by the orbital period of the two pairs around each other being close to one year, thus mimicking the parallactic motion. Because of the small size of the mutual (355 day) orbit of only 0.62 AU, the chances of resolving the two astrometric components are rather small even with advanced techniques, because the expected maximum angular separation will be only 10 mas. The situation is a little more optimistic with the expected light-time effect in the eclipse timing of the contact binary, as the expected full amplitude is about 0.0074 days, which should be relatively easy to detect with the observed photometric amplitude of 0.42 mag. \subsection{V566~Oph} \label{v566oph} The W~UMa-type binary V566~Oph was discovered by \citet{hoff1935}. The system is bright ($V_{max}$ = 7.46) and has therefore been a subject of numerous previous photometric (for references, see \citet{twig1979}) and spectroscopic observations. An interval of constant light observed during the secondary eclipse indicates the A-type. \citet{MD72b} published the first light curve analysis of V566~Oph based on the Roche model. The total eclipses permitted a reliable determination of the geometric elements: the fill-out $F=1.25 \pm 0.05$ ($f$ = 0.25), the mass ratio $q = 0.238 \pm 0.005$ and an inclination of $i = 80\degr \pm 2\degr$. There exist three previous spectroscopic studies of the system. Two of these are based on photographic observations \citep{hear1965,mclean83} and a more recent one is based on the Reticon data \citep{hill1989}.
The latter study used direct fitting of synthetic profiles to the CCF functions. The spectroscopic elements obtained in that study, $V_0$ = 38.5$\pm$1.1 km~s$^{-1}$, $K_1$ = 72.6$\pm$1.5 km~s$^{-1}$ and $K_2$ = 272.9$\pm$1.3 km~s$^{-1}$, agree practically within the errors with our results, $V_0$ = 37.3$\pm$0.5 km~s$^{-1}$, $K_1$ = 71.1$\pm$0.7 km~s$^{-1}$ and $K_2$ = 270.1$\pm$1.1 km~s$^{-1}$. The current improvement of the orbit is mainly due to the use of the BF extraction technique and of the rotational-profile fitting. The orbit can still be improved by taking the proximity effects into account. Our new determination of the mass ratio, $q = 0.263 \pm 0.012$, is in moderately good agreement with the photometric mass ratio of \citet{MD72b}, confirming the utility of the photometric approach for systems with total eclipses. The orbital period of the system is rather unstable. In spite of the possible light-time effect orbit found by \citet{priruc2006}, we did not find any traces of the third component in the extracted BFs. V566~Oph is a relatively nearby system with a good Hipparcos parallax, $\pi = 13.98 \pm 1.11$ mas. The absolute magnitude determined using the calibration of \citet{rd1997}, $M_V = 3.07$, with $(B-V)=0.406$ from the Tycho-2 catalogue \citep{Tycho2}, is in good agreement with the visual absolute magnitude determined from the Hipparcos parallax, $M_V = 3.19 \pm 0.17$. Our new spectral classification, F4V, indicates a slightly later spectral type than the F2V found by \citet{hill1989}. \subsection{TV~UMi} \label{tvumi} TV~UMi is another discovery of the Hipparcos mission. The system was classified as a $\beta$~Lyrae eclipsing binary with an orbital period of 0.415546 days, although the classification was obviously complicated by the low amplitude of the light variations of only about 0.08 mag. The eclipses are very wide and of almost the same depth. For that reason we suspected that the variability is caused by a contact binary of the W~UMa-type.
Prolonged spectroscopic observations of TV~UMi at the DDO showed that the system is a quadruple one and consists of two spectroscopic binaries. The second close binary in TV~UMi is almost as bright as the contact pair, but its components are difficult to analyze because of the strong eccentricity of the orbit and the very short duration of the periastron passages when the spectral signatures could potentially be resolved. In fact, the components of the second pair could not be separated in most of our spectra; such a separation took place on only three occasions. The largest observed separation of the components of the second binary, of more than 100 km~s$^{-1}$ on June 8, 2001, must have occurred close to a periastron passage. One of the BF's from that night is included in Fig.~\ref{fig3}. During this event, the stronger component had a more negative RV. A period analysis of the line separations indicates a 31.2-day orbital periodicity of the second pair. A reliable determination of the orbital parameters and a confirmation of the preliminary orbital period would require more observations, preferably with a higher spectral resolution to separate the components even outside the periastron passages. The TV~UMi system resembles ET~Boo in that its companion binary also has an eccentric orbit with a similar orbital period. Our RV observations cover a shorter time base and are less numerous than in the case of VW~LMi, so that we have not been able to detect the mutual orbital motion of the binaries. During intervals when we observed the narrow blend of the peaks of the third and fourth components, the combined RV was about $-15$ km~s$^{-1}$, slightly less than the systemic velocity of the contact binary ($-9.7$ km~s$^{-1}$). This indicates a possible slow orbital motion of both pairs. The light contribution of the second binary is large, $L_{34}/(L_1+L_2) = 0.90$.
As observed during the periastron passage on June 8, 2001, its components have slightly unequal brightness: $L_3/(L_1+L_2) = 0.58$ and $L_4/(L_1+L_2) = 0.32$. Using the measured RVs of the third and fourth components during the periastron passage, $RV_3 = -67.94 \pm 1.13$ km~s$^{-1}$ and $RV_4 = 37.21 \pm 1.03$ km~s$^{-1}$, and the mean RV of the blend of the two components giving the approximate systemic velocity of the second pair, $-15$ km~s$^{-1}$, we see that the mass ratio of the second binary is close to unity. The observed photometric amplitude of the contact system, $\Delta m = 0.083$, when corrected for the light contribution of the companion binary, remains small at $\Delta m$ = 0.163. It is interesting to note that the system was not detected, nor even suspected to be multiple, from the Hipparcos astrometric data. The chances of resolving both binaries by direct imaging are higher than in VW~LMi because no rapid orbital motion of the contact binary was observed, indicating a longer orbital period and thus a larger separation of the two binaries. According to the Hipparcos astrometry, TV~UMi is relatively nearby, with $\pi = 7.64 \pm 0.78$ mas. Clearly, the current data for the second close binary system in TV~UMi are inadequate for a determination of the full orbital parameters of the whole system. Such a determination would require a long-term monitoring program with one or two spectra obtained per night over a period of a few months. \subsection{AG~Vir} \label{agvir} The variability of AG~Vir was discovered by \citet{guthprag1929}. Since then it has been the subject of several photometric investigations (for references, see \citet{bell1990}). The system is very similar to DU~Boo (Section~\ref{duboo}) in that the first light maximum is always the brighter of the two by about 0.08 mag. A simultaneous photometric and spectroscopic analysis by \citet{bell1990} showed that a reliable determination of the geometrical elements was complicated by the strong asymmetry of the light curve.
The system is of the A-type with an undeterminable degree of contact (it could be marginal or deep). The spectral type is A7V (our new determination suggests A5V), so that the observed photospheric brightness inhomogeneities may be quite different from the solar-type dark spots. The period analysis of \citet{blancat1970} indicated the presence of a light-time effect in the eclipse timing caused by a third component on a 40-year orbit. However, later observations did not confirm any cyclic behavior, although the observed times of minima do show a large scatter. This can be interpreted either by light-curve variations caused by the presence of surface spots or by a light-time effect caused by a body on a short-period orbit. In fact, our broadening functions do show a well defined signature of a third component with $L_3/(L_1 + L_2) \approx 0.05$ (see Fig.~\ref{fig9}). However, this star has a rather different RV ($+2.9 \pm 2.5$ km~s$^{-1}$) from the systemic velocity of the contact binary ($-10.99 \pm 0.82$ km~s$^{-1}$). Unfortunately, the uncertainties of the RVs are quite large and only our better-quality 2005 spectra (as used in this paper) show the presence of this component. A close inspection of the published cross-correlation functions of \citet{bell1990} reveals a marginally defined presence of a third component close to the systemic velocity of the binary, but -- as expected -- the definition of the CCF's was inferior to that of the BF's. The Hipparcos parallax ($1.33 \pm 1.18$ mas) is too small and inconsistent with the estimated absolute magnitude for an A5V star, $M_V = +2.1$, and the observed maximum visual magnitude, $V_{max} = 8.50$. It is possible that this inconsistency is caused by a transverse motion of the eclipsing pair around the third component.
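The parallax inconsistency noted above can be quantified with a short distance-modulus calculation; the sketch below uses only the numbers quoted in the text:

```python
import math

M_V, V_max = 2.1, 8.50                 # assumed absolute magnitude and observed maximum
d_pc = 10 ** ((V_max - M_V + 5) / 5)   # distance modulus inverted to a distance [pc]
plx_mas = 1000.0 / d_pc                # expected trigonometric parallax [mas]

print(f"d = {d_pc:.0f} pc, expected parallax = {plx_mas:.1f} mas")
# The expected ~5 mas exceeds the Hipparcos value of 1.33 +/- 1.18 mas
# by more than 3 sigma.
```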
\section{Measurements of radial velocities using rotational profiles} \label{rot} While the previous papers of this series (Papers I--VI and VIII--X) reported RV measurements obtained by Gaussian fits to the extracted broadening functions, in this paper we use a novel technique of extracting RVs by rotational-profile fits. While neither Gaussians nor rotational profiles can replace the full modeling of broadening functions, which will hopefully take place one day, the rotational profiles are as simple to implement as Gaussians, but offer an improvement in the quality of the RV measurements, with a much better convergence to the final result and noticeably smaller random errors. The broadening function of a rigidly rotating star \citep{gray1976} is described by four parameters: the overall strength or amplitude of the BF, $a_0$, the central velocity, $a_1$ (identified with the light centroid velocity of the star), and the half-width $a_2 = V \sin i$; an additional parameter is a vertical background displacement, $a_3$, which can usually be traced to differing continuum and pseudo-continuum levels for the standard and program stars during the spectral normalization step. For a double-peaked profile, the first three parameters ($a_0$ to $a_2$) appear twice, so that the fit involves 7 unknowns. By using the auxiliary quantities $c_1 = 1-u$ and $c_2=\pi u/4$, the profile can be written as: \begin{equation} I=a_3+a_0 \left[ c_1 \sqrt{1-(x-a_1)^2/a_2^2} + c_2 \left(1-(x-a_1)^2/a_2^2\right)\right] /(c_1+c_2). \end{equation} The profile depends only slightly on the limb darkening coefficient, $u$. We assumed $u=0.7$ in our measurements. As opposed to the Gaussian profile, the rotational profile (see an example in Fig.~\ref{fig10}) has rather steep sides and -- by definition -- is exactly zero for velocities larger than the projected equatorial velocity $V_{rot} \sin i$.
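The rotational profile is straightforward to evaluate numerically. The sketch below (an illustrative transcription, with parameter names matching the text: amplitude $a_0$, centroid $a_1$, half-width $a_2$, background $a_3$) clips the bracketed term to zero outside $|x-a_1| > a_2$:

```python
import numpy as np

def rot_profile(x, a0, a1, a2, a3, u=0.7):
    """Rotational broadening profile (Gray 1976): amplitude a0, centroid a1,
    half-width a2 = V sin i, vertical background displacement a3."""
    c1, c2 = 1.0 - u, np.pi * u / 4.0
    y = 1.0 - ((x - a1) / a2) ** 2
    y = np.where(y > 0.0, y, 0.0)      # exactly zero beyond the half-width a2
    return a3 + a0 * (c1 * np.sqrt(y) + c2 * y) / (c1 + c2)

# Example: a profile centred at +40 km/s with V sin i = 150 km/s.
x = np.linspace(-300.0, 300.0, 601)    # velocity grid [km/s]
bf = rot_profile(x, a0=1.0, a1=40.0, a2=150.0, a3=0.0)
```

A double-peaked BF is then represented by two such profiles (their sum, or their upper envelope during partial eclipses), i.e.\ seven free parameters in total.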
These properties are crucial for the improvement in the determination of the parameter $a_1$, the light centroid of each component. Although the rotational profiles represent the BF profiles of double-lined binaries much better than Gaussians, a practical application may encounter the same complications: \begin{itemize} \item If the secondary shows a very faint peak in the BF, its width must usually be fixed at a reasonable value to improve the stability of the solution and to give consistent results; \item The BF profiles of contact binaries are rather different from those of single, rigidly rotating stars, particularly in the ``neck'' region between the stars where additional light is present. Obviously, these asymmetric deviations cannot be fitted by rotational profiles nor by Gaussians. Normally, this leads to underestimates of the velocity semi-amplitudes. \end{itemize} In the case of close binary stars, the combined rotational profiles can strictly be applied to broadening functions only outside the eclipses, since a proper representation of the data should involve inclusion of the eclipse and proximity effects \citep{ruc92}. However, a simple upper envelope of the two individual profiles works well even during partial eclipse phases. We found that the double rotational profiles converge faster to the final results and describe the data much better than the Gaussians. This is well illustrated in the case of TX~Cnc, where a preliminary orbit defined by RVs obtained through Gaussian fits had almost twice as large standard errors of the spectroscopic elements. While the $rms$ errors of the velocities derived from the Gaussian fits were $\sigma_1$ = 8.4 km~s$^{-1}$ and $\sigma_2$ = 8.3 km~s$^{-1}$, the errors of the RV's derived by rotational-profile fitting are $\sigma_1$ = 4.1 km~s$^{-1}$ and $\sigma_2$ = 5.7 km~s$^{-1}$.
\section{SUMMARY} With the new ten short-period binaries, this paper brings the number of systems studied at the David Dunlap Observatory to a round number of one hundred. The systems presented in this paper include three quadruple systems for which we have been collecting data for several years in the hope of being able to study the variability of RVs on time scales ranging from a fraction of a day to several years. This has been achieved for ET~Boo and VW~LMi, where we can say much about all components of these hierarchical binaries. ET~Boo is a known visual binary with a period of 113 years, with each component being a close binary. VW~LMi is a particularly interesting system with a period of mutual revolution of the two binaries of only 355 days. Starting with a preliminary photometric solution of the light curve of VW~LMi, which gave the orbital inclination of the close binary, we were able to determine the orbital inclinations of all involved binaries in this system and thus to derive the masses of all components. We have been less successful for the quadruple system TV~UMi, where analysis of the 31.2 day orbit of the second pair requires very prolonged monitoring. We have found that AG~Vir appears to be a triple system, although there is an inconsistency in the velocity of its companion. For AK~Her, we were able to obtain data free of contamination from the known third component; there are indications that this binary has another faint companion causing the light-time effect in the eclipse timing. The systems DU~Boo, TX~Cnc, V1073~Cyg and V566~Oph are relatively mundane double-lined contact binaries, while HL~Dra is a single-lined binary of an unknown variability type. All RVs for the close binaries analyzed in this paper have been determined using a novel technique of rotational-profile fitting to the broadening functions.
This technique, while still not perfect in reproducing the asymmetries and intricacies of the real BF's, is more advantageous and accurate than the Gaussian fitting previously used in our studies. \acknowledgements We express our thanks to Christopher Capobianco, Kosmas Gazeas, Panos Niarchos, Matt Rock, Piotr Rogoziecki, and Greg Stachowski for their contribution to collecting the observations. Support from the Natural Sciences and Engineering Research Council of Canada to SMR and SWM and from the Polish Science Committee (KBN grants PO3D~006~22 and P03D~003~24) to WO and RP is acknowledged with gratitude. The travel of TP to Canada was supported by an IAU Commission 46 travel grant and a Slovak Academy of Sciences VEGA grant 4014. TP appreciates the hospitality and support of the local staff during his stay at the DDO. M. Blake acknowledges support through an NSERC grant to C. T. Bolton. This research made use of the SIMBAD database, operated at the CDS, Strasbourg, France, and accessible through the Canadian Astronomy Data Centre, which is operated by the Herzberg Institute of Astrophysics, National Research Council of Canada. This research also made use of the Washington Double Star (WDS) Catalog maintained at the U.S. Naval Observatory.
\section{Introduction} This review article is mainly concerned with some useful matrix integrals over the unitary group. To compute these integrals over the unitary group $\unitary{d}$, we frequently use Schur-Weyl duality, a technique from the representation theory of groups. Before proceeding to the specific details of Schur-Weyl duality, we briefly introduce it together with its applications in quantum information theory. In classical information theory, the method of types can be used to carry out tasks such as estimating probability distributions, randomness concentration and data compression. It has been shown that the Schur basis can be used to generalize the classical method of types, thus allowing us to perform quantum counterparts of the previously mentioned tasks. In fact, the Schur basis is a natural choice if we want to study systems with permutation symmetry. The Schur transformation can be used to carry out several tasks such as estimation of the spectrum of an unknown mixed state, universal distortion-free entanglement concentration using only local operations, and encoding into decoherence-free subsystems. Other applications of the Schur transformation include communication without a shared reference frame and universal compression of quantum data. More explicit results related to Schur-Weyl duality can be mentioned. For example, Keyl and Werner used Schur-Weyl duality to estimate the spectrum of an unknown ($d$-level) mixed state $\rho$ from its $k$-fold product state \cite{Keyl}. Harrow gave efficient quantum circuits for the Schur and Clebsch-Gordan transforms from a computational point of view \cite{Harrow,Bacon}. Christandl employed Schur-Weyl duality in \cite{Christ2006,Christ2012} to investigate the structure of multipartite quantum states, and obtained a group-theoretic proof of some entropy inequalities concerning the von Neumann entropy, such as (strong) subadditivity of the von Neumann entropy.
Gour used this duality to classify the multipartite entanglement of quantum states in the finite-dimensional setting \cite{Gour}. A generalization of Schur-Weyl duality with applications in quantum estimation can be found in \cite{Mashhad,Spekkens}. \section{Schur-Weyl duality} In this section, we give the details of Schur-Weyl duality. A generalization of this duality has been obtained within the framework of infinite-dimensional $C^*$-algebras \cite{Neeb}. In order to arrive at Schur-Weyl duality, we need the following ancillary results, which are well-known facts in representation theory. We assume familiarity with the representation theory of compact Lie groups and finite groups \cite{Sepanski,Wallach}. \begin{prop} Let $V$ and $W$ be finite dimensional complex vector spaces. If $\cM \subseteq \mathrm{End}(V)$ and $\cN \subseteq \mathrm{End}(W)$ are von Neumann algebras, then $(\cM\otimes\cN)'=\cM'\otimes\cN'$. \end{prop} \begin{proof} Apparently, $\cM' \otimes \cN'\subseteq(\cM\otimes\cN)'$. It suffices to show that $(\cM\otimes\cN)' \subseteq \cM' \otimes \cN'$. For arbitrary $T \in (\cM\otimes\cN)'$, by the operator-Schmidt decomposition, \begin{eqnarray} T = \sum_j \lambda_j A_j\otimes B_j, \end{eqnarray} where $\lambda_j\geqslant0$, and $\set{A_j}$ and $\set{B_j}$ are orthonormal bases of $\mathrm{End}(V)$ and $\mathrm{End}(W)$, respectively. Now for arbitrary $M\in\cM$ and $N\in\cN$, we have $M\otimes\mathbb{1}_W, \mathbb{1}_V\otimes N\in\cM\otimes\cN$, and it follows that \begin{eqnarray} [T, M\otimes\mathbb{1}_W] = 0 = [T,\mathbb{1}_V\otimes N]. \end{eqnarray} That is, \begin{eqnarray} \sum_j \lambda_j [A_j,M]\otimes B_j = 0~~\text{and}~~ \sum_j \lambda_j A_j\otimes [B_j, N] = 0. \end{eqnarray} We drop those terms for which $\lambda_j$ is zero, so that $\lambda_j$ is positive for all $j$ in the above two equations. Since $\set{A_j}$ and $\set{B_j}$ are each linearly independent, it follows that $$ [A_j,M] = 0~~\text{and}~~[B_j, N] = 0.
$$ This implies that $A_j\in\cM'$ and $B_j\in\cN'$. Therefore $T\in\cM'\otimes\cN'$. \end{proof} \begin{prop}[The dual theorem]\label{double-commutants} Let $V$ be a representation of a finite group $G$ with decomposition $V \cong \bigoplus_{\alpha\in \widehat G} n_\alpha V_\alpha\cong \bigoplus_{\alpha\in \widehat G} V_\alpha\otimes \mathbb{C}^{n_\alpha}$. Let $\mathcal{A}$ be the algebra generated by $V$ and $\cB=\mathcal{A}'$ its commutant. Then \begin{eqnarray} &&\mathcal{A} \cong \bigoplus_{\alpha\in \widehat G}\mathrm{End}(V_\alpha) \otimes\mathbb{1}_{\mathbb{C}^{n_\alpha}},\\ &&\cB \cong \bigoplus_{\alpha\in \widehat G}\mathbb{1}_{V_\alpha} \otimes\mathrm{End}(\mathbb{C}^{n_\alpha}). \end{eqnarray} Furthermore we have $\cB'=\mathcal{A}$, where $\cB'$ is the commutant of $\cB$. That is, $\mathcal{A} = \mathcal{A}''$ and $\cB = \cB''$. Thus both $\mathcal{A}$ and $\cB$ are von Neumann algebras. \end{prop} \begin{proof} It is easy to see that $$ \frac{d_\alpha}{\abs{G}}\sum_{g\in G}\overline{V_{\alpha,ij}(g)}V(g)\in \mathcal{A}.
$$ By the orthogonality of the functions $V_{\alpha,ij}$ and the decomposition of $V$ into irreducible components, we get \begin{eqnarray*} \frac{d_\alpha}{\abs{G}}\sum_{g\in G}\overline{V_{\alpha,ij}(g)}V(g) &=& \frac{d_\alpha}{\abs{G}}\sum_{g\in G}\overline{V_{\alpha,ij}(g)}\Pa{\bigoplus_{\beta\in \widehat G}V_{\beta}(g)\otimes \mathbb{1}_{\mathbb{C}^{n_\beta}}}\\ &=& \bigoplus_{\beta\in \widehat G}\Pa{\frac{d_\alpha}{\abs{G}}\sum_{g\in G}\overline{V_{\alpha,ij}(g)}V_{\beta}(g)}\otimes \mathbb{1}_{\mathbb{C}^{n_\beta}}\\ &=& \bigoplus_{\beta\in \widehat G}\Pa{\frac{d_\alpha}{\abs{G}}\sum_{g\in G}\overline{V_{\alpha,ij}(g)}\sum_{k,l}V_{\beta,kl}(g)E_{\beta,kl}}\otimes \mathbb{1}_{\mathbb{C}^{n_\beta}}\\ &=& \bigoplus_{\beta\in \widehat G}\sum_{k,l}\Pa{\frac{d_\alpha}{\abs{G}}\sum_{g\in G}\overline{V_{\alpha,ij}(g)}V_{\beta,kl}(g)}E_{\beta,kl}\otimes \mathbb{1}_{\mathbb{C}^{n_\beta}} \\ &=& E_{\alpha, ij}\otimes\mathbb{1}_{\mathbb{C}^{n_\alpha}}, \end{eqnarray*} implying $E_{\alpha, ij}\otimes\mathbb{1}_{\mathbb{C}^{n_\alpha}}\in \mathcal{A}$, hence $\mathrm{End}(V_{\alpha})\otimes\mathbb{1}_{\mathbb{C}^{n_\alpha}}\subseteq \mathcal{A}$. Now $$ \mathcal{A} = \op{span}\set{V(g): g\in G} \cong \bigoplus_{\alpha\in \widehat G}\op{span}\set{V_{\alpha}(g)\otimes \mathbb{1}_{\mathbb{C}^{n_\alpha}}} = \bigoplus_{\alpha\in \widehat G}\mathrm{End}(V_{\alpha})\otimes\mathbb{1}_{\mathbb{C}^{n_\alpha}}. $$ Clearly $\bigoplus_{\alpha\in \widehat G}\mathbb{1}_{V_{\alpha}}\otimes\mathrm{End}(\mathbb{C}^{n_\alpha})\subseteq \mathcal{A}' = \cB$. To see that every element in $\cB$ is of this form, i.e.
$\cB\subseteq\bigoplus_{\alpha\in \widehat G}\mathbb{1}_{V_{\alpha}}\otimes\mathrm{End}(\mathbb{C}^{n_\alpha})$. Consider the projection $c_\alpha$ onto $V_\alpha\otimes\mathbb{C}^{n_\alpha}$. The projectors $c_\alpha$ form a resolution of the identity and $c_\alpha\in \mathcal{A}$. Since $\mathcal{A}'=\cB$, it follows that any $B\in\cB$ must commute with $c_\alpha$: $c_\alpha B = Bc_\alpha$. This leads to $$ B = \Pa{\sum_\alpha c_\alpha}B = \sum_\alpha c_\alpha B c_\alpha = \sum_\alpha B_\alpha. $$ Moreover, $B_\alpha\in (\mathrm{End}(V_{\alpha})\otimes\mathbb{1}_{\mathbb{C}^{n_\alpha}})' = \mathbb{1}_{V_{\alpha}}\otimes\mathrm{End}(\mathbb{C}^{n_\alpha})$, thus it must be of the form $B_\alpha = \mathbb{1}_{V_\alpha}\otimes b_{\alpha}$. \end{proof} \begin{remark} Let $\widehat G$ be a complete set of inequivalent irreps of $G$. Then for any reducible representation $V$, there is a basis under which the action of $V(g)$ can be expressed as \begin{eqnarray}\label{eq-decom-of-rep} V(g)\cong \bigoplus_{\alpha\in \widehat G} \bigoplus^{n_\alpha}_{j=1} V_\alpha(g) = \bigoplus_{\alpha\in \widehat G} V_\alpha(g) \otimes\mathbb{1}_{n_\alpha}, \end{eqnarray} where $\alpha\in\widehat G$ labels an irrep $V_\alpha$ and $n_\alpha$ is the multiplicity of the irrep $V_\alpha$ in the representation $V$. Here we use $\cong$ to indicate that there exists a unitary change of basis relating the left-hand side to the right-hand side. Under this change of basis we obtain a similar decomposition of the representation space $V$ (known as the \emph{isotypic decomposition}): \begin{eqnarray}\label{eq-iso-decom} V \cong \bigoplus_{\alpha\in\widehat G} V_\alpha\otimes \mathrm{Hom}_G(V_\alpha,V). \end{eqnarray} Since $G$ acts trivially on $\mathrm{Hom}_G(V_\alpha,V)$, Eq.~\eqref{eq-decom-of-rep} remains the same.
The value of Eq.~\eqref{eq-iso-decom} is that the unitary mapping from the right-hand side (RHS) to the left-hand side (LHS) has a simple explicit expression: it corresponds to the canonical map $\varphi: \cX\otimes \mathrm{Hom}(\cX,\cY)\to \cY$ given by $\varphi(x\otimes f) = f(x)$. Of course, this does not tell us how to describe $\mathrm{Hom}_G(V_\alpha,V)$, or how to specify an orthonormal basis for the space, but we will later find this form of the decomposition useful. \end{remark} Consider a system of $k$ qudits, each with a standard local computational basis $\set{\ket{i},i=1,\ldots,d}$. The Schur-Weyl duality relates transforms on the system performed by local $d$-dimensional unitary operations to those performed by permutations of the qudits. Recall that the symmetric group $S_k$ is the group of all permutations of $k$ objects. This group is naturally represented in our system by \begin{eqnarray} \mathbf{P}(\pi)\ket{i_1\cdots i_k} := \ket{i_{\pi^{-1}(1)}\cdots i_{\pi^{-1}(k)}}, \end{eqnarray} where $\pi\in S_k$ is a permutation and $\ket{i_1\cdots i_k}$ is shorthand for $\ket{i_1}\otimes\cdots\otimes\ket{i_k}$. Let $\unitary{d}$ denote the group of $d\times d$ unitary operators. This group is naturally represented in our system by \begin{eqnarray} \bQ(U)\ket{i_1\cdots i_k} := U\ket{i_1}\otimes\cdots\otimes U\ket{i_k}, \end{eqnarray} where $U\in\unitary{d}$. Thus we have the following famous result: \begin{thrm}[Schur]\label{th-schur} Let $\mathcal{A} = \op{span}\Set{\mathbf{P}(\pi): \pi\in S_k}$ and $\cB = \op{span}\Set{\bQ(U): U\in\unitary{d}}$.
Then: \begin{eqnarray} \mathcal{A}' = \cB\quad\text{and}\quad \mathcal{A} = \cB'. \end{eqnarray} \end{thrm} \begin{proof}[First proof] The proof is separated into two steps: 1) $\mathcal{A}'= \op{span}\set{A^{\otimes k}: A\in \mathrm{End}(\mathbb{C}^d)}$; 2) $\op{span}\set{A^{\otimes k}: A\in \mathrm{End}(\mathbb{C}^d)}=\cB$. In order to show that 1) holds, note that $\mathrm{End}((\mathbb{C}^d)^{\otimes k}) = \mathrm{End}(\mathbb{C}^d)^{\otimes k}$. Firstly we show that \begin{eqnarray} \mathcal{A}'= \mathrm{End}(\mathbb{C}^d)^{\otimes k}\cap\mathbf{P}(S_k)'=\mathrm{End}((\mathbb{C}^d)^{\otimes k})\cap\mathbf{P}(S_k)' = \op{span}\set{\bQ(A): A\in \mathrm{End}(\mathbb{C}^d)}. \end{eqnarray} We need only show that the LHS is contained in the RHS, since the reverse inclusion is trivial. For arbitrary $\Gamma\in \mathcal{A}'=\mathrm{End}(\mathbb{C}^d)^{\otimes k}\cap\mathbf{P}(S_k)'$, we have $\Gamma = \cT_{S_k}(\Gamma)$ and $\Gamma\in \mathrm{End}(\mathbb{C}^d)^{\otimes k}$, where $\cT_{S_k}=\frac1{k!}\sum_{\pi\in S_k}\mathrm{Ad}_{\mathbf{P}(\pi)}$. It suffices to show that $$ \Gamma = \cT_{S_k}(A_1\otimes\cdots\otimes A_k)\in \op{span}\set{\bQ(A): A\in \mathrm{End}(\mathbb{C}^d)}, $$ where $A_j\in \mathrm{End}(\mathbb{C}^d)$.
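Before completing the algebraic argument, the equality $\mathcal{A}'=\cB$ can be spot-checked numerically for small dimensions. The sketch below (illustrative only, for $d=k=2$) compares the dimension of the commutant of the swap operator with the dimension of the span of $U\otimes U$; both should equal $3^2+1^2=10$, in accordance with the isotypic decomposition $\mathbb{C}^2\otimes\mathbb{C}^2 \cong \mathrm{Sym}^2\oplus\Lambda^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2

# Swap operator S on C^2 (x) C^2: S|ij> = |ji>.
S = np.zeros((4, 4))
for i in range(d):
    for j in range(d):
        S[j * d + i, i * d + j] = 1.0

# Dimension of the commutant {X : SX = XS}: nullity of X -> SX - XS,
# represented via vec(SX - XS) = (I (x) S - S^T (x) I) vec(X).
L = np.kron(np.eye(4), S) - np.kron(S.T, np.eye(4))
dim_commutant = 16 - np.linalg.matrix_rank(L)

# Dimension of span{U (x) U : U unitary}: rank of a stack of vectorized samples.
rows = []
for _ in range(30):
    Z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    U, _ = np.linalg.qr(Z)             # random unitary from a QR decomposition
    rows.append(np.kron(U, U).reshape(-1))
dim_span = np.linalg.matrix_rank(np.array(rows))

print(dim_commutant, dim_span)         # both 10
```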
In what follows, we show that each such $\Gamma$ can be written in terms of tensor products $A^{\otimes k}$. Since \begin{eqnarray} \frac{d}{dt}(M+tN)^{\otimes k} =\sum^{k-1}_{j=0}(M+tN)^{\otimes j}\otimes N\otimes(M+tN)^{\otimes (k-j-1)}, \end{eqnarray} it follows that \begin{eqnarray} \left.\frac{d}{dt}\right|_{t=0}(M+tN)^{\otimes k} =\sum^{k-1}_{j=0}M^{\otimes j}\otimes N\otimes M^{\otimes (k-j-1)}. \end{eqnarray} Consider the following partial derivative \begin{eqnarray}\label{eq:partial} \left.\frac{\partial^{k-1}}{\partial t_2\ldots\partial t_k}\right|_{t_2=\cdots=t_k=0}\Pa{A_1+\sum^k_{j=2}t_jA_j}^{\otimes k}, \end{eqnarray} which can be realized by subsequently applying \begin{eqnarray} \left.\frac{\partial}{\partial t_j}\right|_{t_j=0}\Pa{A + t_jA_j}^{\otimes k} = \lim_{t_j\to 0} \frac{(A + t_jA_j)^{\otimes k} - A^{\otimes k}}{t_j}, \end{eqnarray} iteratively going from $j=k$ all the way down to $j=2$ (here $A$ stands for the sum remaining at the given stage). Expression~\eqref{eq:partial} thus takes the form of a limit of sums of tensor powers. Since $\op{span}\set{\bQ(A): A\in \mathrm{End}(\mathbb{C}^d)}$ is a finite dimensional vector space, this limit is contained in $\op{span}\set{\bQ(A): A\in \mathrm{End}(\mathbb{C}^d)}$. On the other hand, a direct calculation shows that \begin{eqnarray} k!\Gamma = \left.\frac{\partial^{k-1}}{\partial t_2\ldots\partial t_k}\right|_{t_2=\cdots=t_k=0}\Pa{A_1+\sum^k_{j=2}t_jA_j}^{\otimes k}, \end{eqnarray} and hence all operators $\Gamma$ are contained in $\op{span}\set{\bQ(A): A\in \mathrm{End}(\mathbb{C}^d)}$. Now we turn to proving that 2) holds. Firstly we show that \begin{eqnarray} \op{span}\set{U^{\otimes k}: U\in\unitary{d}} = \op{span}\set{T^{\otimes k}: T\in \rG\rL(d,\mathbb{C})}. \end{eqnarray} For any $T\in \rG\rL(d,\mathbb{C})$, there exists $M\in \mathrm{End}(\mathbb{C}^d)$ such that $$ T = e^M. $$ (The elementary proof of this fact is deferred to the following remark.)
Then \begin{eqnarray}\label{eq:exp} T^{\otimes k} = (e^M)^{\otimes k} = \exp\Pa{\sum^k_{j=1}\mathbb{1}^{\otimes (j-1)}\otimes M\otimes\mathbb{1}^{\otimes (k-j)}} = \exp(\bQ_*(M)), \end{eqnarray} where $$ \bQ_*(M) := \left.\frac{d}{dt}\right|_{t=0} \bQ(e^{tM}). $$ Clearly $\bQ(e^{tM}) = e^{t\bQ_*(M)}$ for any real $t\in\mathbb{R}$. In fact, $\bQ$ is a Lie group representation of $\unitary{d}$ or $\rG\rL(d,\mathbb{C})$, and $\bQ_*$ is the Lie algebra representation induced by $\bQ$. If we can show that $\bQ_*(M)\in \op{span}\set{U^{\otimes k}: U\in\unitary{d}}$, then by \eqref{eq:exp} it follows that $T^{\otimes k}\in \op{span}\set{U^{\otimes k}: U\in\unitary{d}}$. Next, we show that $\bQ_*(M)\in \op{span}\set{U^{\otimes k}: U\in\unitary{d}}$. For any skew-Hermitian operator $X$, $e^{tX}$ is unitary, thus $\bQ(e^{tX})\in \op{span}\set{U^{\otimes k}: U\in\unitary{d}}$. By the connection between $\bQ$ and $\bQ_*$, we have $\bQ(e^{tX}) = e^{t\bQ_*(X)}$, implying (upon differentiation at $t=0$, which stays within this finite-dimensional, hence closed, subspace) that $\bQ_*(X)\in \op{span}\set{U^{\otimes k}: U\in\unitary{d}}$, where $X\in\mathfrak{u}(d)$, the Lie algebra of $\unitary{d}$. Let $M = X+\sqrt{-1}Y$ for $X,Y\in\mathfrak{u}(d)$. Thus by the complex-linearity of $\bQ_*$, it follows that $$ \bQ_*(M) = \bQ_*(X) + \sqrt{-1}\bQ_*(Y). $$ Since $\op{span}\set{U^{\otimes k}: U\in\unitary{d}}$ is a complex-linear space, it follows that \begin{eqnarray} \bQ_*(X) + \sqrt{-1}\bQ_*(Y) \in \op{span}\set{U^{\otimes k}: U\in\unitary{d}} \end{eqnarray} whenever $\bQ_*(X),\bQ_*(Y)\in\op{span}\set{U^{\otimes k}: U\in\unitary{d}}$. Therefore $\bQ_*(M)\in \op{span}\set{U^{\otimes k}: U\in\unitary{d}}$. Up to now, we have established the fact that \begin{eqnarray} \op{span}\set{U^{\otimes k}: U\in\unitary{d}} = \op{span}\set{T^{\otimes k}: T\in \rG\rL(d,\mathbb{C})}. \end{eqnarray} Secondly, we show that \begin{eqnarray}\label{eq:exp-inv} \op{span}\set{T^{\otimes k}: T\in \rG\rL(d,\mathbb{C})} = \op{span}\set{A^{\otimes k}: A\in \mathrm{End}(\mathbb{C}^d)}.
\end{eqnarray} We use the fact that $\rG\rL(d,\mathbb{C})$ is dense in $\mathrm{End}(\mathbb{C}^d)$. Indeed, for any $A\in \mathrm{End}(\mathbb{C}^d)$, the singular value decomposition gives $$ A = UDV^\dagger, $$ where $U,V\in \unitary{d}$ and $D$ is a diagonal matrix with nonnegative diagonal entries. Define $$ T_\varepsilon = U\Pa{D+\frac{\varepsilon}{1+\varepsilon}\mathbb{1}}V^\dagger $$ for small positive real $\varepsilon$. Clearly $T_\varepsilon\in \rG\rL(d,\mathbb{C})$ and $\norm{A-T_\varepsilon}<\varepsilon$. This shows that $\rG\rL(d,\mathbb{C})$ is dense in $\mathrm{End}(\mathbb{C}^d)$ in the norm topology. For any fixed $A\in\mathrm{End}(\mathbb{C}^d)$, take $T\in\rG\rL(d,\mathbb{C})$ with $\norm{A-T}$ small. Since $$ \norm{\bQ(A) - \bQ(T)}\leqslant k \Delta^{k-1}\norm{A-T}, $$ where $\Delta:= \max\set{\norm{A},\norm{T}}$, it follows from the closedness of $\op{span}\set{T^{\otimes k}: T\in\rG\rL(d,\mathbb{C})}$ (a subspace of a finite-dimensional space) that \eqref{eq:exp-inv} holds. Therefore the proof is complete. \end{proof} \begin{remark} In this remark, we show that for every $T\in \rG\rL(d,\mathbb{C})$ there exists $M\in\mathrm{End}(\mathbb{C}^d)$ such that $T=e^M$. This is a well-known result in Lie theory, whose general proof is rather involved. To avoid advanced tools from Lie theory, we give an elementary proof using only matrix techniques. It is easy to show that the conclusion holds when $T$ is diagonalizable. For the general case, we proceed in two steps: \emph{Case 1}. There is a sequence of diagonalizable matrices $T_k$ satisfying \begin{enumerate}[(i)] \item $\lim_k T_k = T$; \item writing $T_k = e^{M_k}$, there is a constant $c>0$ such that $\norm{M_k}\leqslant c$ for every $k$. \end{enumerate} We now show the existence of such a sequence $T_k$. Consider the Jordan canonical form $T = PJP^{-1}$.
Let $t_j$ be the diagonal entries of $J$. Note that $T$ is invertible, so $t_j\neq0$ for every $1\leqslant j\leqslant d$. Let $$ T_k := P(J + \Lambda_k)P^{-1}, $$ where $\Lambda_k := \mathrm{diag}(\lambda^k_1,\lambda^k_2,\ldots,\lambda^k_d)$. Then $T_k$ meets conditions (i) and (ii) in \emph{Case 1} if \begin{enumerate}[(a)] \item $\lim_k \lambda^k_j = 0$ for $j=1,\ldots,d$; \item the values $t_j +\lambda^k_j$, $j=1,\ldots,d$, are pairwise distinct for every given $k$; then $T_k$ has $d$ distinct eigenvalues $t_j+\lambda^k_j$ and is therefore diagonalizable; \item there is a constant $c$ such that $\abs{\ln(t_j+\lambda^k_j)}\leqslant c$ for every $k$ and $j$. Note that if (b) holds, then $\norm{M_k} = \max_j \abs{\ln(t_j+\lambda^k_j)}$ (with the norm taken in a basis diagonalizing $T_k$). \end{enumerate} The construction of $\lambda^k_j$ satisfying (a)--(c) is as follows: for any given $k$, let $\lambda^k_1 = \tfrac{t_1}{k}$, and let $\lambda^k_j$ be one of $\tfrac{t_j}{k},\tfrac{t_j}{k+1},\ldots,\tfrac{t_j}{k+j}$ such that $t_i + \lambda^k_i\neq t_j + \lambda^k_j$ whenever $i<j$. Clearly (a) and (b) are satisfied. To check (c), we have $$ \abs{\ln(t_j+\lambda^k_j)} = \abs{\ln(t_j)+\ln(1+\lambda^k_j/t_j)}\leqslant \abs{\ln(t_j)} + \abs{\ln(1+\lambda^k_j/t_j)}, $$ so taking $c=\max_j \abs{\ln(t_j)}+\ln2$ suffices; that is, $\norm{M_k} = \max_j \abs{\ln(t_j+\lambda^k_j)}\leqslant c$ for all $k$. \emph{Case 2}. Granting \emph{Case 1}, since the exponential map is continuous, the image $\exp\Pa{B(0,c)}$ of the \emph{compact} closed ball $B(0,c)$ of radius $c$ is compact, hence closed; thus the limit $T$ of the sequence $e^{M_k}$ also lies in $\exp\Pa{B(0,c)}$. This means that there exists $M\in B(0,c)$ such that $T=e^M$. The proof is finished. \end{remark} The first proof of Schur-Weyl duality above follows the PhD thesis of Christandl \cite{Christ2006}.
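The content of the remark can be illustrated numerically. The following sketch (using numpy; the names are ours) computes a logarithm of a generic invertible matrix via diagonalization, which is the almost-sure case the argument reduces to:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
# A random complex matrix is almost surely invertible with distinct
# eigenvalues, hence diagonalizable: T = P D P^{-1}.
T = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
evals, P = np.linalg.eig(T)
assert np.min(np.abs(evals)) > 1e-9  # T is invertible

# M = P log(D) P^{-1}, using the principal branch of log on the eigenvalues.
M = P @ np.diag(np.log(evals)) @ np.linalg.inv(P)

# Verify e^M = T by exponentiating M through its own eigendecomposition.
mu, Q = np.linalg.eig(M)
expM = Q @ np.diag(np.exp(mu)) @ np.linalg.inv(Q)
assert np.allclose(expM, T, atol=1e-6)
```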
The following second proof is taken from the book of Goodman and Wallach \cite{Wallach}. \begin{proof}[Second proof] Let $\set{\ket{1},\ldots,\ket{d}}$ be the standard basis for $\mathbb{C}^d$. For an ordered $k$-tuple $I=(i_1,\ldots,i_k)$ with $i_1,\ldots,i_k\in [d]$, where $[d]:=\set{1,\ldots,d}$, define $\abs{I}=k$ and $\ket{I}:=\ket{i_1\cdots i_k}$. The tensors $\set{\ket{I}:I\in [d]^k}$, with $I$ ranging over all such $k$-tuples, give a basis for $(\mathbb{C}^d)^{\otimes k}$. The group $S_k$ permutes this basis by the action $\mathbf{P}(\pi)\ket{I} = \ket{\pi\cdot I}$, where for $I=(i_1,\ldots,i_k)$ and $\pi\in S_k$, we define $$ \pi\cdot(i_1,\ldots,i_k) := (i_{\pi^{-1}(1)},\ldots,i_{\pi^{-1}(k)}). $$ Note that $\pi$ changes the \emph{positions} (1 to $k$) of the indices, not their values (1 to $d$), and we have $(\sigma\pi)\cdot I = \sigma\cdot(\pi\cdot I)$ for $\sigma,\pi\in S_k$. Suppose $B\in\mathrm{End}((\mathbb{C}^d)^{\otimes k})$ has matrix $[b_{I,J}]$ relative to the basis $\set{\ket{I}: I\in[d]^k}$: $\Innerm{I}{B}{J} = b_{I,J}$ and $$ B\ket{J} = \sum_{I\in[d]^k} b_{I,J}\ket{I}. $$ We have \begin{eqnarray} B\mathbf{P}(\pi)\ket{J} = B\ket{\pi\cdot J} = \sum_I b_{I,\pi\cdot J}\ket{I} \end{eqnarray} for $\pi\in S_k$, whereas \begin{eqnarray} \mathbf{P}(\pi)B\ket{J} = \sum_I b_{I,J}\ket{\pi\cdot I} = \sum_I b_{\pi^{-1}\cdot I, J}\ket{I}. \end{eqnarray} Thus $B\in\mathcal{A}'$ if and only if $b_{I,\pi\cdot J} = b_{\pi^{-1}\cdot I, J}$ for all multi-indices $I,J$ and all $\pi\in S_k$. Replacing $I$ by $\pi\cdot I$, we can write this condition as \begin{eqnarray}\label{eq:orbit} b_{\pi\cdot I,\pi\cdot J} = b_{I, J},\quad \forall I,J;\ \pi\in S_k.
\end{eqnarray} Consider the non-degenerate bilinear form $\Inner{X}{Y}: =\Tr{XY}$ on $\mathrm{End}((\mathbb{C}^d)^{\otimes k})$. We claim that the restriction of this form to $\mathcal{A}'$ is non-degenerate. Indeed, we have a projection $X\mapsto X^{\#}$ of $\mathrm{End}((\mathbb{C}^d)^{\otimes k})$ onto $\mathcal{A}'$ given by averaging over $S_k$: \begin{eqnarray} X^{\#} = \frac1{k!}\sum_{\pi\in S_k} \mathbf{P}(\pi)X \mathbf{P}(\pi)^{-1}. \end{eqnarray} If $B\in\mathcal{A}'$, then $$ \Inner{X^{\#}}{B} = \frac1{k!}\sum_{\pi\in S_k}\Tr{\mathbf{P}(\pi)X \mathbf{P}(\pi)^{-1}B} = \Inner{X}{B}, $$ since $\mathbf{P}(\pi)B = B\mathbf{P}(\pi)$. Thus $\Inner{\mathcal{A}'}{B}=0$ implies that $\Inner{X}{B}=0$ for all $X\in\mathrm{End}((\mathbb{C}^d)^{\otimes k})$, and so $B=0$. Hence the trace form on $\mathcal{A}'$ is non-degenerate. To show that $\mathcal{A}'=\cB$, it thus suffices to show that if $B\in\mathcal{A}'$ is orthogonal to $\cB$, then $B=0$.
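The averaging projection $X\mapsto X^{\#}$ is easy to check numerically. The sketch below (using numpy; all names are ours) builds the permutation operators $\mathbf{P}(\pi)$ on $(\mathbb{C}^2)^{\otimes 3}$, averages a random $X$ over $S_3$, and verifies that the result commutes with every $\mathbf{P}(\pi)$, and also that each $\mathbf{P}(\pi)$ commutes with $U^{\otimes 3}$ for a unitary $U$:

```python
import itertools
import numpy as np

d, k = 2, 3
rng = np.random.default_rng(2)

def perm_op(p, d):
    """Matrix of P(pi): P(pi)|i_1...i_k> = |i_{pi^{-1}(1)} ... i_{pi^{-1}(k)}>,
    with pi given as the tuple p of images, pi(m) = p[m] (0-indexed)."""
    k = len(p)
    P = np.zeros((d ** k, d ** k))
    for I in itertools.product(range(d), repeat=k):
        J = tuple(I[p.index(m)] for m in range(k))  # J_m = I_{pi^{-1}(m)}
        P[np.ravel_multi_index(J, (d,) * k),
          np.ravel_multi_index(I, (d,) * k)] = 1.0
    return P

perms = [perm_op(p, d) for p in itertools.permutations(range(k))]

# Average a random X over S_k; the result lies in the commutant A'.
X = rng.standard_normal((d ** k, d ** k))
Xs = sum(P @ X @ P.T for P in perms) / len(perms)  # P(pi)^{-1} = P(pi)^T
assert all(np.allclose(P @ Xs, Xs @ P) for P in perms)

# Each P(pi) also commutes with U^{(x)k} for any unitary U.
U, _ = np.linalg.qr(rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)))
Uk = U
for _ in range(k - 1):
    Uk = np.kron(Uk, U)
assert all(np.allclose(P @ Uk, Uk @ P) for P in perms)
```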
Now if $g=[g_{ij}]\in\rG\rL(d,\mathbb{C})$, then $\bQ(g)$ has matrix $g_{I,J}=g_{i_1j_1}\cdots g_{i_kj_k}$ relative to the basis $\set{\ket{I}:I\in[d]^k}$. Thus we assume that \begin{eqnarray} \Inner{B}{\bQ(g)}=\sum_{I,J}b_{I,J}g_{j_1i_1}\cdots g_{j_ki_k}=0 \end{eqnarray} for all $g\in\rG\rL(d,\mathbb{C})$, where $[b_{I,J}]$ is the matrix of $B$. Define a polynomial function $p_B$ on $M(\mathbb{C}^d)$ by $$ p_B(X) = \sum_{I,J}b_{I,J}x_{j_1i_1}\cdots x_{j_ki_k} $$ for $X=[x_{ij}]\in M(\mathbb{C}^d)$. The polynomial $p_B$ vanishes on $\rG\rL(d,\mathbb{C})$, a dense subset of $M(\mathbb{C}^d)$, and $p_B$ is continuous on $M(\mathbb{C}^d)$; therefore $p_B\equiv 0$, so for all $[x_{ij}]\in M(\mathbb{C}^d)$ we have \begin{eqnarray}\label{eq:global} \sum_{I,J}b_{I,J}x_{j_1i_1}\cdots x_{j_ki_k}=0. \end{eqnarray} In what follows, we show that $b_{I,J}=0$ for all $I,J$. We begin by grouping the terms in the above equation according to the distinct monomials in the matrix entries $\set{x_{ij}}$. Introduce the notation $x_{I,J}=x_{i_1j_1}\cdots x_{i_kj_k}$, and view these monomials as polynomial functions on $M(\mathbb{C}^d)$. Let $\Theta$ be the set of all ordered pairs $(I,J)$ of multi-indices with $\abs{I}=\abs{J}=k$. The group $S_k$ acts on $\Theta$ by $$ \pi\cdot(I,J) = (\pi\cdot I,\pi\cdot J). $$ From Eq.~\eqref{eq:orbit}, we see that $B$ commutes with $S_k$ if and only if the function $(I,J)\mapsto b_{I,J}$ is constant on the orbits of $S_k$ in $\Theta$. The action of $S_k$ on $\Theta$ defines an equivalence relation on $\Theta$, where $(I,J)\sim(I',J')$ if $(I',J')=(\pi\cdot I,\pi\cdot J)$ for some $\pi\in S_k$. This gives a decomposition of $\Theta$ into disjoint equivalence classes. Choose a set $\Gamma$ of representatives for the equivalence classes. Then every monomial $x_{I,J}$ with $\abs{I}=\abs{J}=k$ can be written as $x_\gamma$ for some $\gamma\in \Gamma$.
Indeed, since the variables $x_{ij}$ mutually commute, we have $$ x_\gamma = x_{\pi\cdot\gamma},\quad\forall \pi\in S_k;\ \gamma\in\Gamma. $$ Conversely, suppose $x_{I,J}=x_{I',J'}$. Then there must be an integer $p$ such that $x_{i'_1j'_1}=x_{i_p j_p}$; call $p=1'$. Similarly, there must be an integer $q\neq p$ such that $x_{i'_2j'_2}=x_{i_q j_q}$; call $q=2'$. Continuing in this way, we obtain a permutation $$ \pi: (1,2,\ldots,k)\to (1',2',\ldots,k') $$ such that $I=\pi\cdot I'$ and $J=\pi\cdot J'$. This proves that $\gamma$ is uniquely determined by $x_\gamma$. For $\gamma\in\Gamma$, let $n_\gamma=\abs{S_k\cdot\gamma}$ be the cardinality of the corresponding orbit. Assume that the coefficients $b_{I,J}$ satisfy Eqs.~\eqref{eq:orbit} and \eqref{eq:global}. Since $b_{I,J}=b_\gamma$ for all $(I,J)\in S_k\cdot \gamma$, it follows from Eq.~\eqref{eq:global} that $$ \sum_{\gamma \in\Gamma} n_\gamma b_\gamma x_\gamma = 0. $$ Since the set of monomials $\set{x_\gamma: \gamma\in\Gamma}$ is linearly independent, this implies that $b_{I,J}=0$ for all $(I,J)\in\Theta$. This proves that $B=0$. Hence $\cB=\mathcal{A}'$. \end{proof} The following result concerns a remarkable decomposition of the representations of $\unitary{d}$ and $S_k$ on the $k$-fold tensor space $(\mathbb{C}^d)^{\otimes k}$ into their corresponding irreps. The proof is taken from \cite{Christ2006}.
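For small $d$ and $k$, the equality $\cB=\mathcal{A}'$ can be checked numerically by comparing dimensions: $\dim\op{span}\set{A^{\otimes k}}$ equals $\binom{d^2+k-1}{k}$ (the dimension of the symmetric power of $\mathrm{End}(\mathbb{C}^d)$), while $\dim\mathcal{A}'$ is the number of $S_k$-orbits on pairs $(I,J)$, countable via Burnside's lemma. A sketch (using numpy; it assumes that random sampling generically attains the full span):

```python
import itertools
import math
import numpy as np

d, k = 2, 2
rng = np.random.default_rng(3)

# dim span{A^{(x)k}} via the rank of many random samples A^{(x)k}.
rows = []
for _ in range(40):
    A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    Ak = A
    for _ in range(k - 1):
        Ak = np.kron(Ak, A)
    rows.append(Ak.ravel())
span_dim = np.linalg.matrix_rank(np.array(rows))

# dim A' = number of S_k-orbits on Theta = {(I, J)}, by Burnside's lemma.
tuples = list(itertools.product(range(d), repeat=k))
pairs = list(itertools.product(tuples, tuples))
fixed_total = 0
for p in itertools.permutations(range(k)):
    pinv = [p.index(m) for m in range(k)]  # positions of pi^{-1}
    fixed_total += sum(1 for (I, J) in pairs
                       if all(I[pinv[m]] == I[m] and J[pinv[m]] == J[m]
                              for m in range(k)))
orbits = fixed_total // math.factorial(k)

# Both counts agree with the binomial formula (all equal 10 for d = k = 2).
assert span_dim == orbits == math.comb(d * d + k - 1, k)
```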
\begin{thrm}[Schur-Weyl duality]\label{th:S-W-Duality} There exists a basis, known as the Schur basis, in which the representation $\Pa{\bQ\mathbf{P},(\mathbb{C}^d)^{\otimes k}}$ of $\unitary{d}\times S_k$ decomposes into irreducible representations $\bQ_\lambda$ and $\mathbf{P}_\lambda$ of $\unitary{d}$ and $S_k$, respectively: \begin{enumerate}[(i)] \item $(\mathbb{C}^d)^{\otimes k}\cong \bigoplus_{\lambda\vdash(k,d)} \bQ_\lambda\otimes \mathbf{P}_\lambda$; \item $\mathbf{P}(\pi)\cong \bigoplus_{\lambda\vdash(k,d)} \mathbb{1}_{\bQ_\lambda}\otimes \mathbf{P}_\lambda(\pi)$; \item $\bQ(U)\cong \bigoplus_{\lambda\vdash(k,d)} \bQ_\lambda(U)\otimes \mathbb{1}_{\mathbf{P}_\lambda}$. \end{enumerate} Since $\bQ$ and $\mathbf{P}$ commute, we can define the representation $\Pa{\bQ\mathbf{P},(\mathbb{C}^d)^{\otimes k}}$ of $\unitary{d}\times S_k$ as \begin{eqnarray} \bQ\mathbf{P}(U,\pi) = \bQ(U)\mathbf{P}(\pi) = \mathbf{P}(\pi)\bQ(U)\quad \forall (U,\pi)\in \unitary{d}\times S_k.
\end{eqnarray} Then \begin{eqnarray}\label{schur-duality} \bQ\mathbf{P}(U,\pi) = U^{\otimes k}\mathbf{P}(\pi) = \mathbf{P}(\pi) U^{\otimes k}\cong \bigoplus_{\lambda\vdash(k,d)} \bQ_\lambda(U)\otimes \mathbf{P}_\lambda(\pi). \end{eqnarray} \end{thrm} In order to prove the above theorem, we first observe that the algebras generated by $\mathbf{P}$ and $\bQ$ centralize each other. We can then apply the \emph{double commutant theorem} to obtain the expression in Eq.~\eqref{schur-duality}, with the range of $\lambda$ as yet unspecified. To specify the range, one finds a correspondence between irreducible representations of $S_k$ and $\unitary{d}$ and partitions $\lambda\vdash(k, d)$. We call the unitary transformation performing the basis change from the standard basis to the Schur basis the \emph{Schur transform}, denoted by $U_{\mathrm{sch}}$. It has been shown that the Schur transform can be implemented efficiently on a quantum computer. \begin{proof} Applying the duality theorem (Theorem~\ref{double-commutants}) to $G=S_k$ (and to its dual partner $\unitary{d}$, Theorem~\ref{th-schur}) yields the three equations above, where the $\mathbf{P}_\lambda$ are irreducible representations of $S_k$. The representation of $\unitary{d}$ paired with $\mathbf{P}_\lambda$ is denoted by $\bQ_\lambda$. In the following, we show that the $\bQ_\lambda$'s are irreducible.
A brief but elegant argument is as follows: $\bQ_\lambda$ is irreducible if and only if its extension to $\rG\rL(d,\mathbb{C})$ is irreducible; that is, $\bQ_\lambda(\unitary{d})$ is irreducible if and only if $\bQ_\lambda(\rG\rL(d,\mathbb{C}))$ is irreducible. So it suffices to show that $\bQ_\lambda$ is irreducible under $\rG\rL(d,\mathbb{C})$. By Schur's Lemma this is equivalent to showing that $\mathrm{End}_{\rG\rL(d,\mathbb{C})}(\bQ_\lambda)\cong \mathbb{C}$, i.e., that the maps in $\mathrm{End}(\bQ_\lambda)$ commuting with the action of $\rG\rL(d,\mathbb{C})$ are proportional to the identity. In what follows, we show that $\mathrm{End}_{\rG\rL(d,\mathbb{C})}(\bQ_\lambda)\cong \mathbb{C}$. From Schur's Lemma, we have $$ \mathrm{End}_{S_k}\Pa{(\mathbb{C}^d)^{\otimes k}} \cong\bigoplus_{\lambda} \mathrm{End}(\bQ_\lambda)\otimes\mathbb{1}_{\mathbf{P}_\lambda}\cong \bigoplus_{\lambda} \mathrm{End}(\bQ_\lambda). $$ Thus $$ \mathrm{End}_{\rG\rL(d,\mathbb{C})\times S_k}\Pa{(\mathbb{C}^d)^{\otimes k}} \cong \bigoplus_{\lambda} \mathrm{End}_{\rG\rL(d,\mathbb{C})}(\bQ_\lambda). $$ By the duality theorem, $\rG\rL(d,\mathbb{C})$ and $S_k$ are double commutants: $$ \mathrm{End}_{S_k}\Pa{(\mathbb{C}^d)^{\otimes k}} = \op{span}\Set{T^{\otimes k}: T\in\rG\rL(d,\mathbb{C})}, $$ and thus $\mathrm{End}_{\rG\rL(d,\mathbb{C})\times S_k}\Pa{(\mathbb{C}^d)^{\otimes k}}$ is clearly contained in the center of $\mathrm{End}_{S_k}\Pa{(\mathbb{C}^d)^{\otimes k}}$. Therefore $\mathrm{End}_{\rG\rL(d,\mathbb{C})}(\bQ_\lambda)$ is contained in the center of $\mathrm{End}(\bQ_\lambda)$, which is isomorphic to $\mathbb{C}$. Finally $$ \mathrm{End}_{\rG\rL(d,\mathbb{C})}(\bQ_\lambda)\cong \mathbb{C}. $$ The proof that $\lambda$ ranges exactly over $\lambda\vdash(k,d)$ is more involved, since it requires the highest-weight classification of irreducible representations of compact Lie groups; we omit it here. We are done.
\end{proof} \begin{remark} By the duality theorem (Theorem~\ref{double-commutants}) and Theorem~\ref{th:S-W-Duality}, it follows that $\bQ(X)\in\cB$ for $X\in\mathrm{End}(\mathbb{C}^d)$. Furthermore the decomposition of $\bQ(X)$ is of the form \begin{eqnarray} \bQ(X) \cong \bigoplus_{\lambda\vdash (k,d)}\bQ_\lambda(X)\otimes \mathbb{1}_{\mathbf{P}_\lambda}. \end{eqnarray} Therefore \begin{eqnarray} X^{\otimes k}\mathbf{P}(\pi) = \mathbf{P}(\pi)X^{\otimes k} \cong \bigoplus_{\lambda\vdash (k,d)}\bQ_\lambda(X)\otimes \mathbf{P}_\lambda(\pi). \end{eqnarray} \end{remark} The dimensions of the paired irreps of $\unitary{d}$ and $S_k$ in Schur-Weyl duality can be computed by the so-called \emph{hook length formulae}. The hook of box $(i, j)$ in a Young diagram determined by a partition $\lambda$ consists of the box itself together with the boxes to its right and below it; the hook length $h(i,j)$ is the number of boxes in the hook. Specifically, we have the following result, stated without proof: \begin{thrm}[Hook length formulae] The dimensions of the paired irreps of $\unitary{d}$ and $S_k$ in Schur-Weyl duality are given by \begin{eqnarray} \dim(\bQ_\lambda) &=& \prod_{(i,j)\in\lambda}\frac{d+j-i}{h(i,j)} = \prod_{1\leqslant i<j\leqslant d}\frac{\lambda_i - \lambda_j + j-i}{j-i}, \\ \dim(\mathbf{P}_\lambda) &=& \frac{k!}{\prod_{(i,j)\in\lambda}h(i,j)}. \end{eqnarray} \end{thrm} We illustrate with the simplest concrete example: \begin{exam} Suppose that $k = 2$ and $d>1$.
Then Schur-Weyl duality is the statement that the space of two-tensors decomposes into symmetric and antisymmetric parts, each of which is also an irreducible module for $\rG\rL(d,\mathbb{C})$: \begin{eqnarray} \mathbb{C}^d\otimes\mathbb{C}^d = \vee^2\mathbb{C}^d\oplus \wedge^2\mathbb{C}^d. \end{eqnarray} The symmetric group $S_2$ consists of two elements and has two irreducible representations: the trivial representation and the sign representation. The trivial representation of $S_2$ gives rise to the symmetric tensors, which are invariant under permutation of the factors, and the sign representation corresponds to the skew-symmetric tensors, which flip sign. \end{exam} \section{Matrix integrals over unitary groups} In this section, we prove several integral formulas over the unitary group $\unitary{d}$, equipped with its normalized Haar measure $\mu$. We also use the vec-operator correspondence. The $\vec$ mapping is defined by \begin{eqnarray} \vec(\out{i}{j}) = \ket{ij}. \end{eqnarray} Thus $\vec(\mathbb{1}_d) = \sum^d_{j=1}\ket{jj}$. Clearly \begin{eqnarray} \vec(AXB)=\Pa{A\otimes B^\t}\vec(X). \end{eqnarray} The $\vec$ mapping remains valid in the bipartite case: \begin{eqnarray} \vec(\out{m}{n}\otimes \out{\mu}{\nu}) = \vec(\out{m\mu}{n\nu}) = \ket{m\mu n\nu}. \end{eqnarray} In what follows, we employ Schur-Weyl duality to compute integrals of the following forms: \begin{eqnarray} \int_{\unitary{d}} U^{\otimes k} A (U^{\otimes k})^\dagger d\mu(U)~~\text{and}~~\int_{\unitary{d}} U^{\otimes k} \otimes (U^{\otimes k})^\dagger d\mu(U). \end{eqnarray} We derive the integral formulae for the special cases $k=1,2$ with detailed proofs, since these have extremely important applications in quantum information theory.
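Before computing these integrals, the vec identity and the $k=1$ twirl can be checked numerically. The sketch below uses numpy; for the second check we use the standard fact, not proved in this text, that for $d=2$ the four Pauli operators form an exact unitary 1-design, so the Haar average reduces to a finite sum:

```python
import numpy as np

rng = np.random.default_rng(4)
d = 3
A, X, B = [rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
           for _ in range(3)]

# vec identity: with vec(|i><j|) = |ij>, vec is row-major flattening,
# and vec(AXB) = (A (x) B^T) vec(X).
assert np.allclose((A @ X @ B).ravel(), np.kron(A, B.T) @ X.ravel())

# k = 1 Haar twirl for d = 2, replacing the Haar average by the exact
# average over the Pauli 1-design: (1/4) sum_P P A P^dag = (Tr A / 2) I.
paulis = [np.eye(2, dtype=complex),
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]], dtype=complex)]
A2 = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
twirl = sum(P @ A2 @ P.conj().T for P in paulis) / 4
assert np.allclose(twirl, np.trace(A2) / 2 * np.eye(2))
```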
Analogously, we also obtain explicit formulas for integrals of the following forms: \begin{eqnarray} \int_{\unitary{d}} U^k A (U^k)^\dagger d\mu(U)~~\text{and}~~\int_{\unitary{d}} U^k \otimes (U^k)^\dagger d\mu(U). \end{eqnarray} \subsection{The $k=1$ case} \begin{prop}[Completely depolarizing channel]\label{prop:u-integral} It holds that \begin{eqnarray} \int_{\unitary{d}}UAU^\dagger d\mu(U) = \frac{\Tr{A}}{d}\mathbb{1}_d, \end{eqnarray} where $A\in M_d(\mathbb{C})$. \end{prop} \begin{proof} For any $V\in \unitary{d}$, we have \begin{eqnarray*} V\Pa{\int_{\unitary{d}}UAU^\dagger d\mu(U)}V^\dagger &=& \int_{\unitary{d}}(VU)A(VU)^\dagger d\mu(U)\\ &=& \int_{\unitary{d}}(VU)A(VU)^\dagger d\mu(VU)\\ &=& \int_{\unitary{d}}WAW^\dagger d\mu(W) = \int_{\unitary{d}}UAU^\dagger d\mu(U), \end{eqnarray*} implying that $\int_{\unitary{d}}UAU^\dagger d\mu(U)$ commutes with every $V\in\unitary{d}$. Since the defining representation of $\unitary{d}$ is irreducible, Schur's lemma gives $\int_{\unitary{d}}UAU^\dagger d\mu(U) = \lambda_A\mathbb{1}_d$. Taking the trace of both sides, we get $\lambda_A = \frac{\Tr{A}}d$. Therefore the desired conclusion is obtained. \end{proof} An application of Proposition~\ref{prop:u-integral} can be found in \cite{Frank}. \begin{cor}\label{cor:randomized} It holds that \begin{eqnarray} \int_{\unitary{d_A}} (U_A\otimes\mathbb{1}_B)X_{AB}(U_A\otimes\mathbb{1}_B)^\dagger d\mu(U_A) = \frac{\mathbb{1}_A}{d_A}\otimes\Ptr{A}{X_{AB}}. \end{eqnarray} \end{cor} \begin{proof} We choose an orthonormal basis $\set{\ket{\mu}: \mu=1,\ldots,d_B}$ for the second Hilbert space $B$.
Then $X_{AB} = \sum_{\mu,\nu=1}^{d_B}X^A_{\mu\nu}\otimes\out{\mu}{\nu}$, so that \begin{eqnarray} &&\int_{\unitary{d_A}} (U_A\otimes\mathbb{1}_B)X_{AB}(U_A\otimes\mathbb{1}_B)^\dagger d\mu(U_A) = \sum_{\mu,\nu=1}^{d_B} \Pa{\int_{\unitary{d_A}} U_AX^A_{\mu\nu}U^\dagger_A d\mu(U_A)}\otimes\out{\mu}{\nu}\\ &&=\sum_{\mu,\nu=1}^{d_B} \Pa{\Tr{X^A_{\mu\nu}}\frac{\mathbb{1}_A}{d_A}}\otimes\out{\mu}{\nu} = \frac{\mathbb{1}_A}{d_A}\otimes \Pa{\sum_{\mu,\nu=1}^{d_B}\Tr{X^A_{\mu\nu}}\out{\mu}{\nu}} = \frac{\mathbb{1}_A}{d_A}\otimes\Ptr{A}{X_{AB}}. \end{eqnarray} This completes the proof. \end{proof} \begin{cor}\label{cor:randomized-systems} It holds that \begin{eqnarray} \int_{\unitary{d_A}}\int_{\unitary{d_B}} (U_A\otimes U_B)X_{AB}(U_A\otimes U_B)^\dagger d\mu(U_A)d\mu(U_B) = \Ptr{AB}{X_{AB}}\frac{\mathbb{1}_A}{d_A}\otimes\frac{\mathbb{1}_B}{d_B}. \end{eqnarray} \end{cor} \begin{cor}\label{cor:UU} It holds that \begin{eqnarray} \int_{\unitary{d}}U\otimes\overline{U} d\mu(U) = \frac1d\out{\vec(\mathbb{1}_d)}{\vec(\mathbb{1}_d)}. \end{eqnarray} \end{cor} \begin{proof} Since \begin{eqnarray*} \vec\Pa{\int_{\unitary{d}}UAU^\dagger d\mu(U)}&=& \Pa{\int_{\unitary{d}}U\otimes\overline{U} d\mu(U)}\ket{\vec(A)},\\ \vec\Pa{\frac{\Tr{A}}{d}\mathbb{1}_d}&=& \frac1d\ket{\vec(\mathbb{1}_d)}\inner{\vec(\mathbb{1}_d)}{\vec(A)}, \end{eqnarray*} Proposition~\ref{prop:u-integral} yields \begin{eqnarray*} \int_{\unitary{d}}U\otimes\overline{U} d\mu(U) = \frac1d\out{\vec(\mathbb{1}_d)}{\vec(\mathbb{1}_d)}, \end{eqnarray*} implying the result. \end{proof} \begin{cor}\label{cor:uu-swap} It holds that \begin{eqnarray} \int_{\unitary{d}} U\otimes U^\dagger d\mu(U) = \frac{F}{d}, \end{eqnarray} where $F$ is the swap operator defined as $F=\sum^d_{i,j=1}\out{ij}{ji}$. \end{cor} \begin{proof}[The first proof] Taking the partial transpose on the second subsystem of both sides of the identity in Corollary~\ref{cor:UU} gives the desired identity.
\end{proof} \begin{proof}[The second proof] Let $M=\int_{\unitary{d}} U\otimes U^\dagger d\mu(U)$. Since the Haar measure $\mu$ is invariant under the map $U\mapsto U^\dagger$, we have $M^\dagger=M$. From the elementary fact that $\Tr{(A\otimes B) F} =\Tr{AB}$, we have $\Tr{MF}=d$. Since the Haar measure is left-invariant, it follows that \begin{eqnarray*} (V\otimes\mathbb{1})M(\mathbb{1}\otimes V^\dagger) &=& \int_{\unitary{d}}VU\otimes U^\dagger V^\dagger d\mu(U) \\ &=& \int_{\unitary{d}}VU\otimes (VU)^\dagger d\mu(VU) = M. \end{eqnarray*} That is, $(V\otimes\mathbb{1})M(\mathbb{1}\otimes V^\dagger) = M$ for all $V\in\unitary{d}$. Taking traces of both sides, we have \begin{eqnarray*} \Tr{M} = \Tr{(V\otimes\mathbb{1})M(\mathbb{1}\otimes V^\dagger)} = \Tr{M(V\otimes V^\dagger)}. \end{eqnarray*} Integrating both sides over $V$, we have \begin{eqnarray*} \int_{\unitary{d}}\Tr{M}d\mu(V) = \int_{\unitary{d}}\Tr{M(V\otimes V^\dagger)}d\mu(V), \end{eqnarray*} which means that $\Tr{M}=\Tr{M^2}$. By the Cauchy-Schwarz inequality, we get $$ d^2 = \Br{\Tr{MF}}^2\leqslant \Tr{M^2}\Tr{F^2} = d^2\Tr{M}, $$ which implies that $\Tr{M}\geqslant1$. In what follows, we show that $\Tr{M}=1$. By the definition of $M$, we have \begin{eqnarray*} &&\Tr{M} = \int_{\unitary{d}} \abs{\Tr{U}}^2d\mu(U) \\ &&= \int_{\unitary{d}} \inner{\vec(\mathbb{1}_d)}{\vec(U)}\inner{\vec(U)}{\vec(\mathbb{1}_d)}d\mu(U)\\ &&= \Innerm{\vec(\mathbb{1}_d)}{\int_{\unitary{d}}\out{\vec(U)}{\vec(U)}d\mu(U)}{\vec(\mathbb{1}_d)}. \end{eqnarray*} Define a unital quantum channel $\Gamma$ by $$ \Gamma=\int_{\unitary{d}}\mathrm{Ad}_{U}d\mu(U). $$ Thus by Proposition~\ref{prop:u-integral}, we have $\Gamma(X)=\Tr{X}\frac{\mathbb{1}_d}{d}$.
By the Choi-Jamio{\l}kowski isomorphism, it follows that $$ \jam{\Gamma} = (\Gamma\otimes\mathbb{1})(\out{\vec(\mathbb{1}_d)}{\vec(\mathbb{1}_d)}) = \int_{\unitary{d}}\out{\vec(U)}{\vec(U)}d\mu(U). $$ For the completely depolarizing channel $\Gamma(X) = \Tr{X}\frac{\mathbb{1}_d}{d}$, we already know that $\jam{\Gamma} = \frac1d\mathbb{1}_d\otimes\mathbb{1}_d$. Therefore \begin{eqnarray} \int_{\unitary{d}}\out{\vec(U)}{\vec(U)}d\mu(U) = \frac1d\mathbb{1}_d\otimes\mathbb{1}_d. \end{eqnarray} Finally $\Tr{M}=\frac1d\inner{\vec(\mathbb{1}_d)}{\vec(\mathbb{1}_d)} = 1$. This indicates that the Cauchy-Schwarz inequality above is saturated, which happens if and only if $M\propto F$. Let $M=\lambda F$. Taking the trace of both sides, we get $\lambda=\frac1d$. The desired conclusion is obtained. \end{proof} \begin{proof}[The third proof] The integral formula can be derived directly from the Schur orthogonality relations for a compact Lie group; see Section~\ref{sect:concluding-remarks}. \end{proof} \begin{cor}\label{cor:integral-of-visibility} It holds that \begin{eqnarray} \int_{\unitary{d}} \abs{\Tr{AU}}^2 d\mu(U) = \frac1d\Tr{A^\dagger A}, \end{eqnarray} where $A\in M_d(\mathbb{C})$. \end{cor} \begin{proof} In fact, $$ \abs{\Tr{AU}}^2 = \Tr{AU}\overline{\Tr{AU}} = \Tr{(A\otimes A^\dagger)(U\otimes U^\dagger)}. $$ It follows that \begin{eqnarray*} \int_{\unitary{d}} \abs{\Tr{AU}}^2 d\mu(U) &=& \Tr{(A\otimes A^\dagger)\int_{\unitary{d}} U\otimes U^\dagger d\mu(U)}\\ &=& \frac1d\Tr{(A\otimes A^\dagger)F} = \frac1d\Tr{AA^\dagger}, \end{eqnarray*} implying the result. \end{proof} \begin{cor}\label{cor:local-integral} It holds that \begin{eqnarray} \int_{\unitary{d_1}}\int_{\unitary{d_2}} \abs{\Tr{A(U\otimes V)}}^2d\mu(U)d\mu(V) = \frac1{d_1d_2}\Tr{A^\dagger A}, \end{eqnarray} where $A\in M_{d_1d_2}(\mathbb{C})$.
\end{cor} \begin{proof} By the singular value decomposition, we have \begin{eqnarray} A = \sum_j s_j \out{\Phi_j}{\Psi_j}, \end{eqnarray} where the $s_j := s_j(A)$ are the singular values of $A$ and $\ket{\Phi_j},\ket{\Psi_j}\in \mathbb{C}^{d_1}\otimes \mathbb{C}^{d_2}$. From the properties of the vec mapping, there exist $d_1\times d_2$ matrices $X_j$ and $Y_j$, respectively, such that \begin{eqnarray} \ket{\Phi_j} = \vec(X_j),~~\ket{\Psi_j} = \vec(Y_j). \end{eqnarray} This indicates that \begin{eqnarray*} &&\abs{\Tr{A(U\otimes V)}}^2 = \abs{\sum_j s_j\Innerm{\Psi_j}{U\otimes V}{\Phi_j}}^2\\ &&= \sum_{i,j} s_is_j\Innerm{\Psi_i}{U\otimes V}{\Phi_i}\overline{\Innerm{\Psi_j}{U\otimes V}{\Phi_j}}\\ &&= \sum_{i,j} s_is_j\Innerm{\Psi_i}{U\otimes V}{\Phi_i} \Innerm{\Phi_j}{U^\dagger\otimes V^\dagger}{\Psi_j}, \end{eqnarray*} which implies that \begin{eqnarray*} &&\abs{\Tr{A(U\otimes V)}}^2 = \sum_{i,j} s_is_j \Inner{Y_i}{UX_i V^\t}\Inner{X_j}{U^\dagger Y_j (V^\dagger)^\t}\\ &&= \sum_{i,j} s_is_j \Inner{Y_i\otimes X_j}{(U\otimes U^\dagger) (X_i\otimes Y_j)(V^\t\otimes (V^\t)^\dagger)}. \end{eqnarray*} Thus \begin{eqnarray*} &&\int_{\unitary{d_1}}\int_{\unitary{d_2}} \abs{\Tr{A(U\otimes V)}}^2d\mu(U)d\mu(V)\\ &&=\sum_{i,j} s_is_j \Inner{Y_i\otimes X_j}{\Pa{\int_{\unitary{d_1}}U\otimes U^\dagger d\mu(U)} (X_i\otimes Y_j)\Pa{\int_{\unitary{d_2}} V^\t\otimes (V^\t)^\dagger d\mu(V)}}\\ &&= \frac1{d_1d_2} \sum_{i,j} s_is_j \Inner{Y_i\otimes X_j}{F_{11} (X_i\otimes Y_j)F_{22}}\\ &&=\frac1{d_1d_2} \sum_{i,j} s_is_j \Tr{(Y_i\otimes X_j)^\dagger F_{11} (X_i\otimes Y_j)F_{22}}, \end{eqnarray*} where $F_{11}$ is the swap operator on $\mathbb{C}^{d_1}\otimes\mathbb{C}^{d_1}$ and $F_{22}$ is the swap operator on $\mathbb{C}^{d_2}\otimes\mathbb{C}^{d_2}$. Taking orthonormal bases $\ket{\mu}$ of $\mathbb{C}^{d_1}$ and $\ket{m}$ of $\mathbb{C}^{d_2}$ gives $$ F_{11} = \sum^{d_1}_{\mu,\nu=1} \out{\mu\nu}{\nu\mu},~~F_{22} = \sum^{d_2}_{m,n=1}\out{mn}{nm}.
$$ Substituting both operators into the above expression, and using the orthonormality relations $\Tr{X_i X^\dagger_j}=\inner{\Phi_j}{\Phi_i}=\delta_{ij}$ and $\Tr{Y^\dagger_i Y_j}=\inner{\Psi_i}{\Psi_j}=\delta_{ij}$, it follows that \begin{eqnarray*} \int_{\unitary{d_1}}\int_{\unitary{d_2}} \abs{\Tr{A(U\otimes V)}}^2d\mu(U)d\mu(V) &=& \frac1{d_1d_2} \sum_{i,j} s_is_j \Tr{X_i X^\dagger_j}\Tr{Y^\dagger_i Y_j}\\ &=& \frac1{d_1d_2}\Tr{A^\dagger A}. \end{eqnarray*} The proof is complete. \end{proof} Note that Corollaries~\ref{cor:uu-swap}, \ref{cor:integral-of-visibility}, and \ref{cor:local-integral} are used in the recent paper \cite{Lin} to establish an interesting relationship between quantum correlation and interference visibility. In what follows, we obtain a more general result: \begin{prop} It holds that \begin{eqnarray} &&\int_{\unitary{d_1}}\int_{\unitary{d_2}}\cdots\int_{\unitary{d_n}} \abs{\Tr{A(U_{(1)}\otimes U_{(2)}\otimes \cdots\otimes U_{(n)})}}^2d\mu(U_{(1)})d\mu(U_{(2)})\cdots d\mu(U_{(n)}) \nonumber\\ &&= \frac1d\Tr{A^\dagger A}, \end{eqnarray} where $A\in M_d(\mathbb{C})$ for $d=\prod^n_{j=1}d_j$. \end{prop} This result will be useful in the investigation of multipartite quantum correlations. The details of the proof are as follows. \begin{proof} First we note from Corollary~\ref{cor:randomized} that \begin{eqnarray} \int_{\unitary{d_1}} (U_1\otimes\mathbb{1}_{2\ldots n})X_{12\ldots n}(U_1\otimes\mathbb{1}_{2\ldots n})^\dagger d\mu(U_1) = \frac{\mathbb{1}_1}{d_1}\otimes\Ptr{1}{X_{12\ldots n}}. \end{eqnarray} Furthermore, we have \begin{eqnarray} \int_{\unitary{d_1}}\int_{\unitary{d_2}} (U_1\otimes U_2\otimes\mathbb{1}_{3\ldots n})X_{12\ldots n}(U_1\otimes U_2\otimes\mathbb{1}_{3\ldots n})^\dagger d\mu(U_1)d\mu(U_2) = \frac{\mathbb{1}_1}{d_1}\otimes\frac{\mathbb{1}_2}{d_2}\otimes\Ptr{12}{X_{12\ldots n}}.
\end{eqnarray} By induction, we have \begin{eqnarray} &&\int_{\unitary{d_1}}\int_{\unitary{d_2}}\cdots \int_{\unitary{d_n}}(U_1\otimes U_2\otimes\cdots \otimes U_n)X_{12\ldots n}(U_1\otimes U_2\otimes\cdots \otimes U_n)^\dagger d\mu(U_1)d\mu(U_2)\cdots d\mu(U_n) \nonumber\\ &&= \Ptr{12\ldots n}{X_{12\ldots n}}\frac{\mathbb{1}_1}{d_1}\otimes\frac{\mathbb{1}_2}{d_2}\otimes\cdots\otimes\frac{\mathbb{1}_n}{d_n}. \end{eqnarray} By the same Choi-Jamio{\l}kowski argument as in the second proof of Corollary~\ref{cor:uu-swap} (with $\out{\cdot}{\cdot}$ understood via the $\vec$ correspondence), this implies that \begin{eqnarray} &&\int_{\unitary{d_1}}\int_{\unitary{d_2}}\cdots \int_{\unitary{d_n}}\out{U_1\otimes U_2\otimes\cdots \otimes U_n}{U_1\otimes U_2\otimes\cdots \otimes U_n} d\mu(U_1)d\mu(U_2)\cdots d\mu(U_n)\\ &&=\frac1d \mathbb{1}_{12\ldots n}\otimes\mathbb{1}_{12\ldots n}, \end{eqnarray} where $d=\prod^n_{j=1}d_j$. Now $$ \abs{\Tr{A(U_1\otimes U_2\otimes\cdots\otimes U_n)}}^2 = \Inner{A^\dagger}{U_1\otimes U_2\otimes\cdots\otimes U_n}\Inner{U_1\otimes U_2\otimes\cdots\otimes U_n}{A^\dagger}, $$ implying \begin{eqnarray} &&\int_{\unitary{d_1}}\cdots\int_{\unitary{d_n}}\abs{\Tr{A(U_1\otimes U_2\otimes\cdots\otimes U_n)}}^2 d\mu(U_1)\cdots d\mu(U_n) \\ &&= \Innerm{A^\dagger}{\int_{\unitary{d_1}}\cdots \int_{\unitary{d_n}}\out{U_1\otimes U_2\otimes\cdots \otimes U_n}{U_1\otimes U_2\otimes\cdots \otimes U_n} d\mu(U_1)\cdots d\mu(U_n)}{A^\dagger}\\ &&= \frac1d\inner{A^\dagger}{A^\dagger} = \frac1d\Tr{A^\dagger A}. \end{eqnarray} We are done. \end{proof} \subsection{The $k=2$ case} \begin{prop}\label{prop:uu-integral} It holds that \begin{eqnarray} &&\int_{\unitary{d}}(U\otimes U)A(U\otimes U)^\dagger d\mu(U) \nonumber\\ &&= \Pa{\frac{\Tr{A}}{d^2-1} - \frac{\Tr{AF}}{d(d^2-1)}}\mathbb{1}_{d^2} - \Pa{\frac{\Tr{A}}{d(d^2-1)}- \frac{\Tr{AF}}{d^2-1}}F, \end{eqnarray} where $A\in M_{d^2}(\mathbb{C})$ and the swap operator $F$ is defined by $F\ket{ij}=\ket{ji}$ for all $i,j=1,\ldots,d$. \end{prop} \begin{proof} As in the proof of Proposition~\ref{prop:u-integral}, one checks that $\int_{\unitary{d}}(U\otimes U)A(U\otimes U)^\dagger d\mu(U)$ commutes with $\set{V\otimes V: V\in\unitary{d}}$.
Denote $P_\wedge := \tfrac12(\mathbb{1}_{d^2}-F)$ and $P_\vee := \tfrac12(\mathbb{1}_{d^2}+F)$. It is easy to see that $\Tr{P_\wedge} = \tfrac12(d^2-d)$ and $\Tr{P_\vee} = \tfrac12(d^2+d)$. Since $F=\sum_{i,j}\out{ij}{ji}$, it follows that $F^\dagger = F$ and $F^2=\mathbb{1}_{d^2}$. Thus both $P_\wedge$ and $P_\vee$ are projectors and $P_\wedge + P_\vee=\mathbb{1}_{d^2}$. Because $\mathbb{C}^d\otimes\mathbb{C}^d = \wedge^2\mathbb{C}^d\oplus \vee^2\mathbb{C}^d$, we have $P_\wedge(\mathbb{C}^d\otimes\mathbb{C}^d) P_\wedge= \wedge^2\mathbb{C}^d$ and $P_\vee(\mathbb{C}^d\otimes\mathbb{C}^d) P_\vee= \vee^2\mathbb{C}^d$. Besides, for any $V\in \unitary{d}$, with respect to this decomposition, $$ V\otimes V = \Br{\begin{array}{cc} P_\wedge(V\otimes V)P_\wedge & 0 \\ 0 & P_\vee(V\otimes V)P_\vee \end{array} }. $$ Now write $$ \int_{\unitary{d}}(U\otimes U)A(U\otimes U)^\dagger d\mu(U) = \Br{\begin{array}{cc} M_{11} & M_{12} \\ M_{21} & M_{22} \end{array}} $$ as a block matrix, where $M_{11}\in \mathrm{End}(\wedge^2 \mathbb{C}^d)$, $M_{22}\in \mathrm{End}(\vee^2 \mathbb{C}^d)$, and $$ M_{12}\in\mathrm{Hom}_{\unitary{d}}(\vee^2 \mathbb{C}^d,\wedge^2 \mathbb{C}^d),~~M_{21}\in\mathrm{Hom}_{\unitary{d}}(\wedge^2 \mathbb{C}^d,\vee^2 \mathbb{C}^d). $$ Thus \begin{eqnarray*} &&\Br{\begin{array}{cc} M_{11} & M_{12} \\ M_{21} & M_{22} \end{array}}\Br{\begin{array}{cc} P_\wedge(V\otimes V)P_\wedge & 0 \\ 0 & P_\vee(V\otimes V)P_\vee \end{array}} \\ &&=\Br{\begin{array}{cc} P_\wedge(V\otimes V)P_\wedge & 0 \\ 0 & P_\vee(V\otimes V)P_\vee \end{array}} \Br{\begin{array}{cc} M_{11} & M_{12} \\ M_{21} & M_{22} \end{array}}. \end{eqnarray*} We get that, for all $V\in\unitary{d}$, \begin{eqnarray*} \begin{cases} M_{11}(\wedge^2V) &= (\wedge^2V) M_{11},\\ M_{22}(\vee^2V) &= (\vee^2V) M_{22},\\ M_{12} (\vee^2V) &= (\wedge^2V) M_{12},\\ M_{21} (\wedge^2V) &= (\vee^2V) M_{21}.
\end{cases} \end{eqnarray*} Therefore, by Schur's lemma ($\wedge^2\mathbb{C}^d$ and $\vee^2\mathbb{C}^d$ are inequivalent irreducible representations of $\unitary{d}$), $$ M_{11} = \lambda(A) P_\wedge,~~M_{22} = \mu(A) P_\vee,~~ M_{12} = 0,~~ M_{21} = 0. $$ That is, \begin{eqnarray}\label{eq-u-design} \int_{\unitary{d}}(U\otimes U)A(U\otimes U)^\dagger d\mu(U) = \Br{\begin{array}{cc} \lambda(A) P_\wedge & 0 \\ 0 & \mu(A) P_\vee \end{array} } = \lambda(A) P_\wedge + \mu(A) P_\vee. \end{eqnarray} If $A=\mathbb{1}_{d^2}$ in Eq.~\eqref{eq-u-design}, then $\mathbb{1}_{d^2} = \lambda(\mathbb{1}_{d^2}) P_\wedge + \mu(\mathbb{1}_{d^2}) P_\vee$. Thus $\lambda(\mathbb{1}_{d^2})= \mu(\mathbb{1}_{d^2}) = 1$ since $\mathbb{1}_{d^2} = P_\wedge + P_\vee$ and $P_\wedge \bot P_\vee$. If $A = P_\wedge$ in Eq.~\eqref{eq-u-design}, then $P_\wedge = \lambda(P_\wedge) P_\wedge + \mu(P_\wedge) P_\vee$ since $U\otimes U$ commutes with $P_\wedge$. Thus $\lambda(P_\wedge) = 1$ and $\mu(P_\wedge)= 0$. Note that $\lambda(A)$ and $\mu(A)$ are linear functionals of $A$. Since $F = \mathbb{1}_{d^2} - 2P_\wedge$, we have $\lambda(F) = -1$ and $\mu(F) = 1$. This indicates that $$ \int_{\unitary{d}}(U\otimes U)F(U\otimes U)^\dagger d\mu(U) = \lambda(F) P_\wedge + \mu(F) P_\vee = P_\vee - P_\wedge = F. $$ A simpler approach to this identity is the following: Since $F(M\otimes N)F = N\otimes M$, it follows that $F(M\otimes N) = (N\otimes M)F$. Thus \begin{eqnarray*} \int_{\unitary{d}}(U\otimes U)F(U\otimes U)^\dagger d\mu(U) &=& \int_{\unitary{d}}F(U\otimes U)(U\otimes U)^\dagger d\mu(U)\\ &=& F\int_{\unitary{d}}d\mu(U) = F = P_\vee - P_\wedge. \end{eqnarray*} Similarly, \begin{eqnarray*} \int_{\unitary{d}}(U\otimes U)^\dagger F(U\otimes U) d\mu(U) = F = P_\vee - P_\wedge. \end{eqnarray*} Taking the trace of both sides of Eq.~\eqref{eq-u-design}, we get $$ \Tr{A} = \lambda(A)\Tr{P_\wedge} + \mu(A)\Tr{P_\vee}.
$$ Next, multiplying both sides of Eq.~\eqref{eq-u-design} by $F$ and taking the trace again, we get \begin{eqnarray*} &&\int_{\unitary{d}}\Tr{(U\otimes U)^\dagger F (U\otimes U)A}d\mu(U) \\ &&= \lambda(A) \Tr{P_\wedge F} + \mu(A) \Tr{P_\vee F}\\ &&= \mu(A) \Tr{P_\vee} - \lambda(A) \Tr{P_\wedge}, \end{eqnarray*} where we used the fact that $P_\wedge F = -P_\wedge$ and $P_\vee F = P_\vee$. Thus we have \begin{eqnarray*} \begin{cases} \frac{d(d-1)}2\lambda(A) + \frac{d(d+1)}2\mu(A) &= \Tr{A},\\ \frac{d(d+1)}2\mu(A) - \frac{d(d-1)}2\lambda(A) &= \Tr{AF}. \end{cases} \end{eqnarray*} Solving this system of two linear equations gives \begin{eqnarray*} \begin{cases} \lambda(A) =& \frac{\Tr{A} - \Tr{AF}}{d(d-1)},\\ \mu(A) =& \frac{\Tr{A} + \Tr{AF}}{d(d+1)}. \end{cases} \end{eqnarray*} Finally, we obtain the desired conclusion: \begin{eqnarray*} \int_{\unitary{d}}(U\otimes U)A(U\otimes U)^\dagger d\mu(U) = \frac{\Tr{A} - \Tr{AF}}{d(d-1)} P_\wedge + \frac{\Tr{A} + \Tr{AF}}{d(d+1)} P_\vee. \end{eqnarray*} We are done. \end{proof} Applications of Proposition~\ref{prop:uu-integral} in quantum information theory can be found in \cite{Roga,Dupuis}. \begin{cor}\label{cor:super-operator} It holds that \begin{eqnarray} &&\int_{\unitary{d}} U^\dagger AUXU^\dagger BU d\mu(U)\nonumber\\ && = \frac{d\Tr{AB}-\Tr{A}\Tr{B}}{d(d^2-1)}\Tr{X}\mathbb{1}_d + \frac{d\Tr{A}\Tr{B}-\Tr{AB}}{d(d^2-1)}X. \end{eqnarray} \end{cor} \begin{proof} It suffices to compute the integral $\int_{\unitary{d}} (U^\dagger AU) \otimes (U^\dagger BU) d\mu(U)$ since \begin{eqnarray}\label{eq:map} &&\vec\Pa{\int_{\unitary{d}} (U^\dagger AU)X(U^\dagger BU) d\mu(U)}\nonumber\\ && = \int_{\unitary{d}} (U^\dagger AU) \otimes (U^\dagger BU)^\t d\mu(U) \vec(X).
\end{eqnarray} Once we have the formula for $\int_{\unitary{d}} (U^\dagger AU) \otimes (U^\dagger BU) d\mu(U)$, taking the partial transpose relative to the second factor in the tensor product gives the formula for $\int_{\unitary{d}} (U^\dagger AU) \otimes (U^\dagger BU)^\t d\mu(U)$. Now by Proposition~\ref{prop:uu-integral}, we have \begin{eqnarray*} &&\int_{\unitary{d}} (U^\dagger AU) \otimes (U^\dagger BU) d\mu(U)= \int_{\unitary{d}} (U \otimes U)^\dagger(A\otimes B)(U\otimes U) d\mu(U)\\ &&=\Pa{\frac{\Tr{A}\Tr{B}}{d^2-1} - \frac{\Tr{AB}}{d(d^2-1)}}\mathbb{1}_{d^2} - \Pa{\frac{\Tr{A}\Tr{B}}{d(d^2-1)}- \frac{\Tr{AB}}{d^2-1}}F\\ &&= \frac{d\Tr{A}\Tr{B} -\Tr{AB}}{d(d^2-1)}\mathbb{1}_{d^2} + \frac{d\Tr{AB}-\Tr{A}\Tr{B}}{d(d^2-1)}F, \end{eqnarray*} implying that \begin{eqnarray*} &&\int_{\unitary{d}} (U^\dagger AU) \otimes (U^\dagger BU)^\t d\mu(U)\\ &&= \frac{d\Tr{A}\Tr{B} -\Tr{AB}}{d(d^2-1)}\mathbb{1}_{d^2} + \frac{d\Tr{AB}-\Tr{A}\Tr{B}}{d(d^2-1)}\out{\vec(\mathbb{1}_d)}{\vec(\mathbb{1}_d)}. \end{eqnarray*} Substituting this identity into \eqref{eq:map} gives the desired result. \end{proof} Recall that a super-operator $\Phi$ is \emph{unitarily invariant} if $\mathrm{Ad}_{U^\dagger}\circ\Phi\circ\mathrm{Ad}_U= \Phi$ for all $U\in\unitary{d}$. We also note that any super-operator $\Phi$ on $\mathrm{End}(\cH_d)$ can be represented as \begin{eqnarray} \Phi(X) = \sum_j A_j X B^\dagger_j. \end{eqnarray} Now we may give the specific form of any unitarily invariant super-operator in the following corollary. \begin{cor}\label{cor:u-invariant-super-operator} Let $\Phi$ be a unitarily invariant super-operator on $\mathrm{End}(\cH_d)$. Then \begin{eqnarray} \Phi(X) = \frac{d\Tr{\Phi(\mathbb{1}_d)}-\Tr{\Phi}}{d(d^2-1)}\Tr{X}\mathbb{1}_d + \frac{d\Tr{\Phi}-\Tr{\Phi(\mathbb{1}_d)}}{d(d^2-1)}X, \end{eqnarray} where $\Tr{\Phi}$ is the trace of the super-operator $\Phi$, defined by $\Tr{\Phi} := \sum_{\mu,\nu}\Innerm{\mu}{\Phi(\out{\mu}{\nu})}{\nu}$.
\end{cor} \begin{proof} Apparently $\mathrm{Ad}_{U^\dagger}\circ\Phi\circ\mathrm{Ad}_U= \Phi$ for all $U\in\unitary{d}$. This implies that, for the uniform Haar measure $d\mu(U)$ over the unitary group, \begin{eqnarray} \Phi(X) &=& \int_{\unitary{d}}\Phi(X)d\mu(U) = \int_{\unitary{d}}U^\dagger\Phi(UXU^\dagger)Ud\mu(U)\\ &=& \sum_j \int_{\unitary{d}}U^\dagger A_jUXU^\dagger B^\dagger_jUd\mu(U). \end{eqnarray} By Corollary~\ref{cor:super-operator}, \begin{eqnarray} &&\int_{\unitary{d}}U^\dagger A_jUXU^\dagger B^\dagger_jUd\mu(U) \\ &&= \frac{d\Tr{A_jB^\dagger_j}-\Tr{A_j}\Tr{B^\dagger_j}}{d(d^2-1)}\Tr{X}\mathbb{1}_d + \frac{d\Tr{A_j}\Tr{B^\dagger_j}-\Tr{A_jB^\dagger_j}}{d(d^2-1)}X. \end{eqnarray} Thus \begin{eqnarray} \Phi(X)&=& \frac{d\sum_j\Tr{A_jB^\dagger_j}-\sum_j\Tr{A_j}\Tr{B^\dagger_j}}{d(d^2-1)}\Tr{X}\mathbb{1}_d \\ &&+ \frac{d\sum_j\Tr{A_j}\Tr{B^\dagger_j}-\sum_j\Tr{A_jB^\dagger_j}}{d(d^2-1)}X\\ &=&\frac{d\Tr{\Phi(\mathbb{1}_d)}-\Tr{\Phi}}{d(d^2-1)}\Tr{X}\mathbb{1}_d + \frac{d\Tr{\Phi}-\Tr{\Phi(\mathbb{1}_d)}}{d(d^2-1)}X, \end{eqnarray} where we have used the fact that \begin{eqnarray} \Tr{\Phi} &=& \sum_{\mu,\nu} \Inner{\out{\mu}{\nu}}{\Phi(\out{\mu}{\nu})} = \sum_{\mu,\nu} \Innerm{\mu}{\Phi(\out{\mu}{\nu})}{\nu}\\ &=& \sum_j\sum_{\mu,\nu} \Innerm{\mu}{A_j\out{\mu}{\nu}B^\dagger_j}{\nu} = \sum_j \Pa{\sum_\mu \Innerm{\mu}{A_j}{\mu}}\Pa{\sum_\nu \Innerm{\nu}{B^\dagger_j}{\nu}}\\ &=& \sum_j\Tr{A_j}\Tr{B^\dagger_j}. \end{eqnarray} Note that the trace of the super-operator $\Phi$ is different from the trace of the operator $\Phi(\mathbb{1}_d)$. \end{proof} We can simplify this expression if we assume more structure on the super-operator. A trace-preserving unitarily invariant quantum operation $\Lambda$ is a depolarizing channel: for $\rho\in\density{\cH_d}$, \begin{eqnarray} \Lambda(\rho) = p\rho + (1-p)\frac{\mathbb{1}_d}{d},~~~\Pa{p=\frac{\Tr{\Lambda} - 1}{d^2-1}}.
\end{eqnarray} Indeed, this easily follows from the facts that $\Tr{\Lambda(\mathbb{1}_d)}=d$ and $\Tr{\rho}=1$. Let $\Phi$ be a super-operator on $\mathrm{End}(\cH_d)$. Define the \emph{twirled} super-operator \begin{eqnarray} \Phi_\T = \int_{\unitary{d}} \mathrm{Ad}_{U^\dagger}\circ\Phi\circ\mathrm{Ad}_U d\mu(U). \end{eqnarray} Clearly the twirled super-operator $\Phi_\T$ is unitarily invariant. \begin{remark} From the proof of Corollary~\ref{cor:u-invariant-super-operator}, we see that for any super-operator $\Phi$ on $\mathrm{End}(\cH_d)$, \begin{eqnarray} \int_{\unitary{d}}U^\dagger\Phi(UXU^\dagger)Ud\mu(U) = \frac{d\Tr{\Phi(\mathbb{1}_d)}-\Tr{\Phi}}{d(d^2-1)}\Tr{X}\mathbb{1}_d + \frac{d\Tr{\Phi}-\Tr{\Phi(\mathbb{1}_d)}}{d(d^2-1)}X. \end{eqnarray} Now let $d=d_Ad_B$ and $\cH_d=\cH_A\otimes\cH_B$ with $\dim(\cH_A)=d_A$ and $\dim(\cH_B)=d_B$. Assume that $X = \rho_{AB}$, a density matrix on $\cH_A\otimes\cH_B$. Fix an orthonormal basis $\set{\ket{\psi_{B,j}}:j=1,\ldots,d_B}$ for $\cH_B$, and suppose that $\Phi(X) = \ptr{B}{X}\otimes\mathbb{1}_B$. Then $\Phi$ can be rewritten as: $$ \Phi(X) = \sum^{d_B}_{i,j=1}(\mathbb{1}_A\otimes\out{\psi_{B,i}}{\psi_{B,j}})X(\mathbb{1}_A\otimes\out{\psi_{B,j}}{\psi_{B,i}}). $$ Clearly $\Phi(\mathbb{1}_A\otimes\mathbb{1}_B) = d_B\mathbb{1}_A\otimes\mathbb{1}_B$, implying that $$ \Tr{\Phi(\mathbb{1}_A\otimes\mathbb{1}_B)} = d_Ad^2_B~~\text{and}~~\Tr{\Phi} = \sum^{d_B}_{i,j=1}(d_A\delta_{ij})^2= d^2_Ad_B. $$ From the above discussion, we see that \begin{eqnarray} \int_{\unitary{d}}U^\dagger\Phi(U\rho_{AB}U^\dagger)Ud\mu(U) = \frac{dd_B-d_A}{d^2-1}\mathbb{1}_A\otimes\mathbb{1}_B + \frac{dd_A-d_B}{d^2-1}\rho_{AB}. \end{eqnarray} Denote $\rho'_{AB} = U\rho_{AB}U^\dagger$ and $\rho'_{A}=\ptr{B}{\rho'_{AB}}$. Then \begin{eqnarray*} \Tr{(\rho'_A)^2} &=& \Tr{(\rho'_A\otimes\mathbb{1}_B)\rho'_{AB}} = \Tr{\Phi(\rho'_{AB})\rho'_{AB}}\\ &=&\Tr{U^\dagger\Phi(U\rho_{AB}U^\dagger)U\rho_{AB}}.
\end{eqnarray*} Therefore \begin{eqnarray*} \left\langle\Tr{(\rho'_A)^2}\right\rangle := \int\Tr{(\rho'_A)^2}d\mu(U) = \Tr{\int U^\dagger\Phi(U\rho_{AB}U^\dagger)Ud\mu(U)\rho_{AB}}. \end{eqnarray*} That is, \begin{eqnarray} \left\langle\Tr{(\rho'_A)^2}\right\rangle = \frac{dd_B-d_A}{d^2-1}+ \frac{dd_A-d_B}{d^2-1}\Tr{\rho^2_{AB}}. \end{eqnarray} In particular, if $\rho_{AB}$ is a bipartite pure state, then $\Tr{\rho^2_{AB}}=1$, and \begin{eqnarray} \left\langle\Tr{(\rho'_A)^2}\right\rangle=\frac{d_A+d_B}{d_Ad_B+1}. \end{eqnarray} \end{remark} \begin{cor} Let $X,Y\in\mathrm{End}(\mathbb{C}^d)$. Then the uniform average of $\Innerm{\psi}{X}{\psi}\Innerm{\psi}{Y}{\psi}$ over state vectors $\ket{\psi}$ on the unit sphere $\mathbb{S}^{2d-1}$ in $\mathbb{C}^d$ is given by \begin{eqnarray} \int_{\mathbb{S}^{2d-1}} \Innerm{\psi}{X}{\psi}\Innerm{\psi}{Y}{\psi} d\ket{\psi} = \frac{\Tr{XY} + \Tr{X}\Tr{Y}}{d(d+1)}. \end{eqnarray} \end{cor} \begin{proof} Writing $\ket{\psi} = U\ket{\psi_0}$ for a fixed unit vector $\ket{\psi_0}$, the original integral reduces to computing the following integral: \begin{eqnarray*} &&\int_{\unitary{d}} (U\otimes U) (X\otimes Y)(U\otimes U)^\dagger d\mu(U)\\ &&=\frac{\Tr{X\otimes Y} - \Tr{(X\otimes Y)F}}{d(d-1)} P_\wedge + \frac{\Tr{X\otimes Y} + \Tr{(X\otimes Y)F}}{d(d+1)} P_\vee. \end{eqnarray*} Since $P_\vee\ket{\psi_0\psi_0} =\ket{\psi_0\psi_0}$ and $P_\wedge\ket{\psi_0\psi_0} =0$, it follows that \begin{eqnarray*} &&\int_{\mathbb{S}^{2d-1}} \Innerm{\psi}{X}{\psi}\Innerm{\psi}{Y}{\psi} d\ket{\psi} \\ &&= \Innerm{\psi_0\psi_0}{\frac{\Tr{X}\Tr{Y} - \Tr{XY}}{d(d-1)} P_\wedge + \frac{\Tr{X}\Tr{Y} + \Tr{XY}}{d(d+1)} P_\vee}{\psi_0\psi_0} \\ &&= \frac{\Tr{XY} + \Tr{X}\Tr{Y}}{d(d+1)}. \end{eqnarray*} We are done. \end{proof} As a direct consequence of the above corollary, it follows that for any super-operator $\Phi$ on $\mathrm{End}(\cH_d)$, \begin{eqnarray} \int_{\mathbb{S}^{2d-1}} \Innerm{\psi}{\Phi(\out{\psi}{\psi})}{\psi} d\ket{\psi} = \frac{\Tr{\Phi(\mathbb{1}_d)} + \Tr{\Phi}}{d(d+1)}.
\end{eqnarray} \begin{cor}\label{cor:UUUU} It holds that \begin{eqnarray} &&\int_{\unitary{d}}U\otimes U\otimes\overline{U}\otimes\overline{U} d\mu(U)\nonumber \\ &&= \frac1{d^2-1} \Pa{\out{\vec(\mathbb{1}_{d^2})}{\vec(\mathbb{1}_{d^2})} + \out{\vec(F)}{\vec(F)}}\nonumber \\ &&~~~~- \frac1{d(d^2-1)}\Pa{\out{\vec(\mathbb{1}_{d^2})}{\vec(F)} + \out{\vec(F)}{\vec(\mathbb{1}_{d^2})}}. \end{eqnarray} \end{cor} \begin{proof} Apparently, this result can be derived from Proposition~\ref{prop:uu-integral}. \end{proof} \begin{cor}\label{cor:uuuu} It holds that \begin{eqnarray} \int_{\unitary{d}} U\otimes U\otimes U^\dagger\otimes U^\dagger d\mu(U) = \frac{\mathbf{P}_{(13)(24)} + \mathbf{P}_{(14)(23)}}{d^2-1} - \frac{\mathbf{P}_{(1423)} + \mathbf{P}_{(1324)}}{d(d^2-1)}. \end{eqnarray} \end{cor} \begin{proof} By taking partial transposes relative to the third and fourth subsystems, respectively, over both sides in Corollary~\ref{cor:UUUU}, it suffices to show that \begin{eqnarray*} (\out{\vec(\mathbb{1}_{d^2})}{\vec(\mathbb{1}_{d^2})})^{\t_{3,4}} &=& \mathbf{P}_{(13)(24)},\\ (\out{\vec(F)}{\vec(F)})^{\t_{3,4}} &=& \mathbf{P}_{(14)(23)},\\ (\out{\vec(\mathbb{1}_{d^2})}{\vec(F)})^{\t_{3,4}} &=& \mathbf{P}_{(1423)},\\ (\out{\vec(F)}{\vec(\mathbb{1}_{d^2})})^{\t_{3,4}} &=& \mathbf{P}_{(1324)}.
\end{eqnarray*} Note that $$ \vec(\mathbb{1}_{d^2}) = \sum^{d}_{i,j=1}\ket{ijij},~~\vec(F) = \sum^{d}_{i,j=1}\ket{ijji}. $$ It follows that \begin{eqnarray*} \out{\vec(\mathbb{1}_{d^2})}{\vec(\mathbb{1}_{d^2})} &=& \sum^{d}_{i,j,k,l=1}\out{ijij}{klkl},\\ \out{\vec(F)}{\vec(F)} &=& \sum^{d}_{i,j,k,l=1}\out{ijji}{kllk},\\ \out{\vec(\mathbb{1}_{d^2})}{\vec(F)} &=& \sum^{d}_{i,j,k,l=1}\out{ijij}{kllk},\\ \out{\vec(F)}{\vec(\mathbb{1}_{d^2})} &=& \sum^{d}_{i,j,k,l=1}\out{ijji}{klkl}. \end{eqnarray*} Therefore we have \begin{eqnarray*} (\out{\vec(\mathbb{1}_{d^2})}{\vec(\mathbb{1}_{d^2})})^{\t_{3,4}} &=& \sum^{d}_{i,j,k,l=1}\out{ijkl}{klij} = \mathbf{P}_{(13)(24)},\\ (\out{\vec(F)}{\vec(F)})^{\t_{3,4}} &=& \sum^{d}_{i,j,k,l=1}\out{ijlk}{klji} = \mathbf{P}_{(14)(23)},\\ (\out{\vec(\mathbb{1}_{d^2})}{\vec(F)})^{\t_{3,4}} &=& \sum^{d}_{i,j,k,l=1}\out{ijlk}{klij} = \mathbf{P}_{(1423)},\\ (\out{\vec(F)}{\vec(\mathbb{1}_{d^2})})^{\t_{3,4}} &=& \sum^{d}_{i,j,k,l=1}\out{ijkl}{klji} = \mathbf{P}_{(1324)}. \end{eqnarray*} The proof is complete. \end{proof} \begin{cor} It holds that \begin{eqnarray} \int_{\unitary{d}} \abs{\Tr{AU}}^4d\mu(U) = \frac2{d^2-1}\Br{\Tr{A^\dagger A}}^2 - \frac2{d(d^2-1)}\Tr{(A^\dagger A)^2}, \end{eqnarray} where $A\in M_d(\mathbb{C})$. \end{cor} \begin{proof} Note that $$ \abs{\Tr{AU}}^4 = \Tr{\Br{A^{\otimes 2}\otimes (A^{\otimes 2})^\dagger} \Br{U^{\otimes 2}\otimes(U^{\otimes 2})^\dagger}}. $$ By Corollary~\ref{cor:uuuu}, we obtain the final result. \end{proof} \subsection{The general case} Parts of this subsection are based on the results in \cite{Benoit,Collins}.
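Before developing the general machinery, the $k=2$ formulas derived above admit a quick numerical consistency check. The following sketch (assuming NumPy; the helper name \texttt{twirl} is ours, not from the text) builds the right-hand side of Proposition~\ref{prop:uu-integral} and verifies the properties that uniquely characterize the Haar twirl: it fixes $\mathbb{1}_{d^2}$ and $F$, preserves $\Tr{A}$ and $\Tr{AF}$, is invariant under conjugation by $V\otimes V$, and is idempotent. It also evaluates the $\abs{\Tr{AU}}^4$ formula at $A=\mathbb{1}_d$, where the Haar integral is known to equal $2$ for $d\geqslant 2$.

```python
import numpy as np

rng = np.random.default_rng(42)
d = 3

# Swap operator F|ij> = |ji> on C^d (x) C^d
F = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        F[j * d + i, i * d + j] = 1.0
I2 = np.eye(d * d)
P_wedge = (I2 - F) / 2   # antisymmetric projector P_wedge
P_vee = (I2 + F) / 2     # symmetric projector P_vee

def twirl(A):
    """Closed form claimed for the Haar average of (U (x) U) A (U (x) U)^dagger."""
    lam = (np.trace(A) - np.trace(A @ F)) / (d * (d - 1))
    mu = (np.trace(A) + np.trace(A @ F)) / (d * (d + 1))
    return lam * P_wedge + mu * P_vee

A = rng.standard_normal((d * d, d * d)) + 1j * rng.standard_normal((d * d, d * d))
TA = twirl(A)

# Properties that uniquely determine the Haar twirl, checked to float precision:
assert np.allclose(twirl(I2), I2) and np.allclose(twirl(F), F)   # fixes 1 and F
assert np.isclose(np.trace(TA), np.trace(A))                     # preserves Tr A
assert np.isclose(np.trace(TA @ F), np.trace(A @ F))             # preserves Tr AF
V, _ = np.linalg.qr(rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)))
VV = np.kron(V, V)
assert np.allclose(twirl(VV @ A @ VV.conj().T), TA)              # (V (x) V)-invariance
assert np.allclose(twirl(TA), TA)                                # idempotent

# Fourth-moment corollary at A = 1_d: the closed form must give the known value
# of the Haar integral of |Tr U|^4, namely 2, for every d >= 2.
BB = np.eye(d)  # A^dagger A for A = 1_d
m4 = 2 / (d**2 - 1) * np.trace(BB).real**2 \
     - 2 / (d * (d**2 - 1)) * np.trace(BB @ BB).real
assert np.isclose(m4, 2.0)
```

Since the Haar twirl is the orthogonal projection onto $\mathrm{span}\set{\mathbb{1}_{d^2},F}$ that preserves the inner products with $\mathbb{1}_{d^2}$ and $F$, these checks pin down the formula without sampling Haar-random unitaries.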
We recall that for an algebra inclusion $\cM\subset\cN$, a \emph{conditional expectation} is an $\cM$-bimodule map $\sE: \cN\to\cM$ such that $\sE(\mathbb{1}_\cN) = \mathbb{1}_\cM$. For $A\in\mathrm{End}((\mathbb{C}^d)^{\otimes k})$, we define \begin{eqnarray} \sE_k(A) = \int_{\unitary{d}} U^{\otimes k}A \Pa{U^{\otimes k}}^\dagger d\mu(U). \end{eqnarray} Clearly $\sE_k: \mathrm{End}((\mathbb{C}^d)^{\otimes k})\to \mathbf{P}(\mathbb{C}[S_k])$ is a conditional expectation. Moreover $\sE_k$ is an orthogonal projection onto $\mathbf{P}(\mathbb{C}[S_k])$. It is compatible with the trace in the sense that $\trace\circ\sE_k = \trace$. For $A\in\mathrm{End}((\mathbb{C}^d)^{\otimes k})$, we set \begin{eqnarray} \Delta(A) &=& \sum_{\pi\in S_k} \Inner{\mathbf{P}(\pi)}{A}\mathbf{P}(\pi)\nonumber\\ &=& \sum_{\pi\in S_k} \Tr{A\mathbf{P}(\pi^{-1})}\mathbf{P}(\pi) \in \mathbf{P}(\mathbb{C}[S_k]).
\end{eqnarray} \begin{prop} $\Delta$ enjoys the following properties: \begin{enumerate}[(i)] \item $\Delta$ is a $\mathbf{P}(\mathbb{C}[S_k])$-$\mathbf{P}(\mathbb{C}[S_k])$ bimodule morphism in the sense that $$ \Delta(A\mathbf{P}(\sigma)) = \Delta(A)\mathbf{P}(\sigma),~~\Delta(\mathbf{P}(\sigma)A) = \mathbf{P}(\sigma)\Delta(A). $$ \item $\Delta(\mathbb{1})$ is determined by the character of $\mathbf{P}$; it is equal to $$ \Delta(\mathbb{1}) = k! \sum_{\lambda\vdash (k,d)} \frac{s_{\lambda}(1^{\times d})}{f^\lambda}C_\lambda $$ and is an invertible element of $\mathbf{P}(\mathbb{C}[S_k])$; its inverse, called the Weingarten function, is equal to $$ \mathrm{Wg} = \frac1{(k!)^2}\sum_{\lambda\vdash (k,d)} \frac{(f^\lambda)^2}{s_{\lambda}(1^{\times d})}\chi_\lambda; $$ \item the relation between $\Delta(A)$ and $\sE_k(A)$ is explicitly given by $$ \Delta(A) = \sE_k(A)\Delta(\mathbb{1}); $$ \item the range of $\Delta$ is equal to $\mathbf{P}(\mathbb{C}[S_k])$; \item the following holds true in $\mathbf{P}(\mathbb{C}[S_k])$: $$ \Delta(A\sE_k(B)) = \Delta(A)\Delta(B)\Delta(\mathbb{1})^{-1}. $$ \end{enumerate} \end{prop} \begin{proof} (i).
Clearly we have: \begin{eqnarray*} \Delta(A\mathbf{P}(\sigma)) &=& \sum_{\pi\in S_k} \Tr{[A\mathbf{P}(\sigma)]\mathbf{P}(\pi^{-1})}\mathbf{P}(\pi)\\ &=& \sum_{\pi\in S_k} \Tr{A\mathbf{P}(\sigma\pi^{-1})}\mathbf{P}((\sigma\pi^{-1})^{-1})\mathbf{P}(\sigma)\\ &=&\Delta(A)\mathbf{P}(\sigma). \end{eqnarray*} Similarly, we also have: $\Delta(\mathbf{P}(\sigma)A) = \mathbf{P}(\sigma)\Delta(A)$. Furthermore we get \begin{eqnarray} \Delta(\mathbf{P}(\sigma_l) A\mathbf{P}(\sigma_r)) = \mathbf{P}(\sigma_l)\Delta(A)\mathbf{P}(\sigma_r), \end{eqnarray} where $\sigma_l,\sigma_r\in S_k$. Therefore $\Delta$ is a bimodule morphism.\\ (ii). Let $A=\mathbb{1}$ in the definition of $\Delta$.
We get that \begin{eqnarray} \Delta(\mathbb{1}) = \sum_{\pi\in S_k} \Tr{\mathbf{P}(\pi^{-1})}\mathbf{P}(\pi) = \sum_{\pi\in S_k} \chi(\pi^{-1})\mathbf{P}(\pi). \end{eqnarray} By Schur-Weyl duality, we have $$ (\mathbb{C}^d)^{\otimes k} \cong \bigoplus_{\lambda\vdash(k,d)} \bQ_\lambda\otimes\mathbf{P}_\lambda $$ and $$ \chi = \sum_{\lambda\vdash (k,d)} d_\lambda \chi_\lambda, $$ where $d_\lambda$ is the multiplicity of $\mathbf{P}_\lambda$, i.e. $d_\lambda = \dim(\bQ_\lambda)=s_{\lambda}(1^{\times d})$. Hence $$ \chi(\pi^{-1}) = \sum_{\lambda\vdash (k,d)} d_\lambda \chi_\lambda(\pi^{-1})= \sum_{\lambda\vdash (k,d)} s_{\lambda}(1^{\times d}) \chi_\lambda(\pi^{-1}). $$ Substituting this into the right-hand side of the expression for $\Delta(\mathbb{1})$ above gives \begin{eqnarray*} \Delta(\mathbb{1}) &=& \sum_{\pi\in S_k}\Pa{\sum_{\lambda\vdash (k,d)} s_{\lambda}(1^{\times d}) \chi_\lambda(\pi^{-1})}\mathbf{P}(\pi)\\ &=& \sum_{\lambda\vdash (k,d)} s_{\lambda}(1^{\times d})\Pa{\sum_{\pi\in S_k} \chi_\lambda(\pi^{-1})\mathbf{P}(\pi)}.
\end{eqnarray*} Since the minimal central projection $C_\lambda$ in $\mathbf{P}(\mathbb{C}[S_k])$ must be of the following form: $$ C_\lambda = \frac{f^\lambda}{k!}\sum_{\pi\in S_k}\chi_\lambda(\pi^{-1})\mathbf{P}(\pi), $$ it follows that $$ \sum_{\pi\in S_k}\chi_\lambda(\pi^{-1})\mathbf{P}(\pi) = \frac{k!}{f^\lambda}C_\lambda. $$ Thus $$ \Delta(\mathbb{1}) = k!\sum_{\lambda\vdash (k,d)}\frac{s_{\lambda}(1^{\times d})}{f^\lambda}C_\lambda. $$ Moreover $\Delta(\mathbb{1})$ is invertible in $\mathbf{P}(\mathbb{C}[S_k])$ and $$ \Delta(\mathbb{1})^{-1} = \frac1{k!}\sum_{\lambda\vdash (k,d)}\frac{f^\lambda}{s_{\lambda}(1^{\times d})}C_\lambda. $$ We denote by $\mathrm{Wg}$ the function corresponding to $\Delta(\mathbb{1})^{-1}$, i.e. $$ \mathrm{Wg} = \frac1{(k!)^2}\sum_{\lambda\vdash (k,d)} \frac{(f^\lambda)^2}{s_{\lambda}(1^{\times d})}\chi_\lambda. $$ (iii).
Since $\bQ(U)$ commutes with $\mathbf{P}(\pi)$, it follows that \begin{eqnarray*} \Delta(\sE_k(A)) &=& \sum_{\pi\in S_k} \Tr{\sE_k(A)\mathbf{P}(\pi^{-1})}\mathbf{P}(\pi)\\ &=& \sum_{\pi\in S_k} \Tr{\int_{\unitary{d}}\bQ(U)A\bQ(U)^\dagger d\mu(U)\mathbf{P}(\pi^{-1})}\mathbf{P}(\pi)\\ &=& \sum_{\pi\in S_k} \Tr{A\int_{\unitary{d}}\bQ(U)^\dagger\mathbf{P}(\pi^{-1})\bQ(U)d\mu(U)}\mathbf{P}(\pi)\\ &=&\sum_{\pi\in S_k} \Tr{A\mathbf{P}(\pi^{-1})}\mathbf{P}(\pi) = \Delta(A), \end{eqnarray*} implying \begin{eqnarray*} \Delta(A) = \Delta(\sE_k(A)\mathbb{1}) = \sE_k(A)\Delta(\mathbb{1}) \end{eqnarray*} by the fact that $\Delta$ is a bimodule morphism and $\sE_k(A)\in\mathbf{P}(\mathbb{C}[S_k])$. In fact, more is true: \begin{eqnarray*} \Delta(A) = \Delta(\sE_k(A)) = \sE_k(A)\Delta(\mathbb{1}) = \Delta(\mathbb{1})\sE_k(A).
\end{eqnarray*} This indicates that \begin{eqnarray}\label{eq:general-integral} \sE_k(A) &=& \Delta(A)\Delta(\mathbb{1})^{-1} = \Delta(\mathbb{1})^{-1}\Delta(A)\\ &=& \frac1{k!}\Pa{\sum_{\pi\in S_k} \Tr{A\mathbf{P}(\pi^{-1})}\mathbf{P}(\pi)}\Pa{\sum_{\lambda\vdash (k,d)}\frac{f^\lambda}{s_{\lambda}(1^{\times d})}C_\lambda}. \end{eqnarray} (iv). It follows directly from (ii) and (iii).\\ (v). It is easily seen that \begin{eqnarray} \Delta(A\sE_k(B)) = \Delta(A)\sE_k(B) = \Delta(A)\Delta(B)\Delta(\mathbb{1})^{-1}. \end{eqnarray} We are done. \end{proof} \begin{cor} Let $k$ be a positive integer and $\mathbf{i}=(i_1,\ldots,i_k)$, $\mathbf{i}'=(i'_1,\ldots,i'_k)$, $\mathbf{j}=(j_1,\ldots,j_k)$, $\mathbf{j}'=(j'_1,\ldots,j'_k)$ be $k$-tuples of positive integers. Then \begin{eqnarray} &&\int_{\unitary{d}} U_{i_1 j_1}\cdots U_{i_k j_k} \overline{U_{i'_1 j'_1}}\cdots\overline{U_{i'_k j'_k}}d\mu(U) \\ &&= \sum_{\sigma,\tau\in S_k}\mathrm{Wg}(\sigma\tau^{-1}) \iinner{i_1}{i'_{\sigma(1)}}\cdots \iinner{i_k}{i'_{\sigma(k)}}\iinner{j_1}{j'_{\tau(1)}}\cdots\iinner{j_k}{j'_{\tau(k)}}. \end{eqnarray} \end{cor} \begin{proof} Note that $$ \Delta(A\sE_k(B)) = \Delta(A)\Delta(B)\Delta(\mathbb{1})^{-1}. $$ In order to show our result, it is enough to take $A=\out{\mathbf{i}'}{\mathbf{i}}$ and $B=\out{\mathbf{j}}{\mathbf{j}'}$, where $\ket{\mathbf{i}}=\ket{i_1\cdots i_k}$, etc. Note that \begin{eqnarray} \int_{\unitary{d}} U_{i_1 j_1}\cdots U_{i_k j_k} \overline{U_{i'_1 j'_1}}\cdots\overline{U_{i'_k j'_k}}d\mu(U) = \Tr{A\sE_k(B)}.
\end{eqnarray} By the definition of $\Delta$, we get \begin{eqnarray*} \Delta(A\sE_k(B)) &=& \sum_{\pi\in S_k} \Tr{A\sE_k(B)\mathbf{P}(\pi^{-1})}\mathbf{P}(\pi) \\ &=& \Tr{A\sE_k(B)}\mathbb{1} + \sum_{\pi\in S_k\backslash\set{e}} \Tr{A\sE_k(B)\mathbf{P}(\pi^{-1})}\mathbf{P}(\pi) \end{eqnarray*} and \begin{eqnarray*} \Delta(A) &=& \sum_{\sigma\in S_k} \Tr{A\mathbf{P}(\sigma^{-1})}\mathbf{P}(\sigma)\\ &=& \sum_{\sigma\in S_k}\Innerm{\mathbf{i}}{\mathbf{P}(\sigma^{-1})}{\mathbf{i}'}\mathbf{P}(\sigma)\\ &=& \sum_{\sigma\in S_k}\iinner{i_1}{i'_{\sigma(1)}}\cdots \iinner{i_k}{i'_{\sigma(k)}}\mathbf{P}(\sigma), \end{eqnarray*} where $\mathbf{P}(\sigma)\ket{i_1\cdots i_k} = \ket{i_{\sigma^{-1}(1)}\cdots i_{\sigma^{-1}(k)}}$ or $\mathbf{P}(\sigma)\ket{i_{\sigma(1)}\cdots i_{\sigma(k)}} = \ket{i_1\cdots i_k}$. That is, $$ \mathbf{P}(\sigma) = \sum_{i_1,\ldots, i_k\in[d]}\out{i_1\cdots i_k}{i_{\sigma(1)}\cdots i_{\sigma(k)}}.
$$ Note also that $\mathbf{P}(\sigma)^\dagger = \mathbf{P}(\sigma^{-1})$. Therefore $$ \mathbf{P}(\sigma^{-1}) = \mathbf{P}(\sigma)^\dagger = \sum_{i_1,\ldots, i_k\in[d]}\out{i_{\sigma(1)}\cdots i_{\sigma(k)}}{i_1\cdots i_k}. $$ Similarly \begin{eqnarray*} \Delta(B) &=& \sum_{\tau\in S_k} \Innerm{\mathbf{j}'}{\mathbf{P}(\tau^{-1})}{\mathbf{j}}\mathbf{P}(\tau) = \sum_{\tau\in S_k}\iinner{j'_1}{j_{\tau(1)}}\cdots \iinner{j'_k}{j_{\tau(k)}}\mathbf{P}(\tau)\\ &=&\sum_{\tau\in S_k}\iinner{j_{\tau(1)}}{j'_1}\cdots \iinner{j_{\tau(k)}}{j'_k}\mathbf{P}(\tau) = \sum_{\tau\in S_k}\iinner{j_1}{j'_{\tau^{-1}(1)}}\cdots \iinner{j_k}{j'_{\tau^{-1}(k)}}\mathbf{P}(\tau)\\ &=&\sum_{\tau\in S_k}\iinner{j_1}{j'_{\tau(1)}}\cdots \iinner{j_k}{j'_{\tau(k)}}\mathbf{P}(\tau^{-1}).
\end{eqnarray*} Note that \begin{eqnarray*} \Delta(\mathbb{1})^{-1} &=& \Pa{k!\sum_{\lambda\vdash (k,d)}\frac{s_{\lambda}(1^{\times d})}{f^\lambda}C_\lambda}^{-1} = \frac1{k!}\sum_{\lambda\vdash (k,d)} \frac{f^\lambda}{s_{\lambda}(1^{\times d})}C_\lambda\\ &=& \sum_{\pi\in S_k}\Pa{\frac1{(k!)^2}\sum_{\lambda\vdash (k,d)} \frac{(f^\lambda)^2}{s_{\lambda}(1^{\times d})}\chi_\lambda(\pi^{-1})}\mathbf{P}(\pi)\\ &=& \sum_{\pi\in S_k}\mathrm{Wg}(\pi^{-1})\mathbf{P}(\pi), \end{eqnarray*} where $$ C_\lambda := \sum_{\pi\in S_k}\frac{f^\lambda}{k!}\chi_\lambda(\pi^{-1})\mathbf{P}(\pi) $$ is the minimal central projection and $$ \mathrm{Wg} := \frac1{(k!)^2}\sum_{\lambda\vdash (k,d)} \frac{(f^\lambda)^2}{s_{\lambda}(1^{\times d})}\chi_\lambda $$ is the \emph{Weingarten function}. Putting these together, we get \begin{eqnarray*} &&\Delta(A)\Delta(B)\Delta(\mathbb{1})^{-1} \\ &&= \sum_{\sigma,\tau,\pi\in S_k} \iinner{i_1}{i'_{\sigma(1)}}\cdots \iinner{i_k}{i'_{\sigma(k)}} \iinner{j_1}{j'_{\tau(1)}}\cdots \iinner{j_k}{j'_{\tau(k)}}\mathrm{Wg}(\pi^{-1}) \mathbf{P}(\sigma\tau^{-1}\pi)\\ &&= \sum_{\sigma,\tau\in S_k}\iinner{i_1}{i'_{\sigma(1)}}\cdots \iinner{i_k}{i'_{\sigma(k)}} \iinner{j_1}{j'_{\tau(1)}}\cdots \iinner{j_k}{j'_{\tau(k)}}\mathrm{Wg}(\sigma\tau^{-1})\mathbb{1} \\ &&~~~+\sum_{\sigma,\tau,\pi\in S_k: \sigma\tau^{-1}\pi\neq e} \iinner{i_1}{i'_{\sigma(1)}}\cdots \iinner{i_k}{i'_{\sigma(k)}} \iinner{j_1}{j'_{\tau(1)}}\cdots \iinner{j_k}{j'_{\tau(k)}}\mathrm{Wg}(\pi^{-1}) \mathbf{P}(\sigma\tau^{-1}\pi).
\end{eqnarray*} Comparing both sides, we get \begin{eqnarray*} &&\int_{\unitary{d}} U_{i_1 j_1}\cdots U_{i_k j_k} \overline{U_{i'_1 j'_1}}\cdots\overline{U_{i'_k j'_k}}d\mu(U) = \Tr{A\sE_k(B)} \\ &&= \sum_{\sigma,\tau\in S_k}\mathrm{Wg}(\sigma\tau^{-1}) \iinner{i_1}{i'_{\sigma(1)}}\cdots \iinner{i_k}{i'_{\sigma(k)}}\iinner{j_1}{j'_{\tau(1)}}\cdots\iinner{j_k}{j'_{\tau(k)}}. \end{eqnarray*} This completes the proof. \end{proof} \begin{cor} If $k\neq l$, then $$ \int_{\unitary{d}}U_{i_1 j_1}\cdots U_{i_k j_k} \overline{U_{i'_1 j'_1}}\cdots\overline{U_{i'_l j'_l}}d\mu(U) = 0. $$ \end{cor} \begin{proof} For every $z\in \unitary{1}$, the map $\unitary{d}\ni U\mapsto zU\in\unitary{d}$ is measure-preserving, therefore \begin{eqnarray} &&\int_{\unitary{d}}U_{i_1 j_1}\cdots U_{i_k j_k} \overline{U_{i'_1 j'_1}}\cdots\overline{U_{i'_l j'_l}}d\mu(U) \\ &&= \int_{\unitary{d}}zU_{i_1 j_1}\cdots zU_{i_k j_k} \overline{zU_{i'_1 j'_1}}\cdots\overline{zU_{i'_l j'_l}}d\mu(U)\\ &&= z^{k-l}\int_{\unitary{d}}U_{i_1 j_1}\cdots U_{i_k j_k} \overline{U_{i'_1 j'_1}}\cdots\overline{U_{i'_l j'_l}}d\mu(U), \end{eqnarray} implying that $$ (1-z^{k-l})\int_{\unitary{d}}U_{i_1 j_1}\cdots U_{i_k j_k} \overline{U_{i'_1 j'_1}}\cdots\overline{U_{i'_l j'_l}}d\mu(U) = 0. $$ Since $k\neq l$, there exists a $z_0\in\unitary{1}$ with $z_0^{k-l}\neq1$; choosing $z=z_0$ forces the integral to vanish. \end{proof} \begin{remark} What is $\int_{\unitary{d}} Ud\mu(U)$? One way to see this is to form the $d\times d$ matrix $M$ whose $(i,j)$-th entry is $\int_{\unitary{d}} U_{ij}d\mu(U)$, for $1\leqslant i,j\leqslant d$. In matrix form, for any fixed $V\in \unitary{d}$ we have $$ M = \int_{\unitary{d}} Ud\mu(U) = \int_{\unitary{d}} VUd\mu(U) = V\int_{\unitary{d}} Ud\mu(U) = VM, $$ where we used the left invariance of the Haar measure $\mu$. But $VM=M$ for all unitary $V$ can only hold if $M=0$. That is, $$ \int_{\unitary{d}} Ud\mu(U) = 0.
$$ \end{remark} \begin{cor} It holds that \begin{eqnarray} \int_{\unitary{d}} U^{\otimes k}\otimes \Pa{U^{\otimes k}}^\dagger d\mu(U) = \sum_{\sigma,\tau\in S_k} \mathrm{Wg}(\sigma\tau^{-1})\mathbf{P}_{\tau+k,\sigma^{-1}}, \end{eqnarray} where, for any $\pi_1,\pi_2\in S_k$, \begin{eqnarray} \mathbf{P}_{\pi_1+k,\pi_2}\ket{j_1\cdots j_k i'_1\cdots i'_k} := \Ket{i'_{\pi_2^{-1}(1)}\cdots i'_{\pi_2^{-1}(k)}j_{\pi_1^{-1}(1)}\cdots j_{\pi_1^{-1}(k)}}. \end{eqnarray} \end{cor} \begin{proof} Clearly \begin{eqnarray*} &&\Innerm{\mathbf{ij}'}{\int_{\unitary{d}} U^{\otimes k}\otimes \Pa{U^{\otimes k}}^\dagger d\mu(U)}{\mathbf{ji}'} = \int_{\unitary{d}} U_{i_1 j_1}\cdots U_{i_k j_k} \overline{U_{i'_1 j'_1}}\cdots\overline{U_{i'_k j'_k}}d\mu(U) \\ &&= \sum_{\sigma,\tau\in S_k}\mathrm{Wg}(\sigma\tau^{-1}) \iinner{i_1}{i'_{\sigma(1)}}\cdots \iinner{i_k}{i'_{\sigma(k)}}\iinner{j'_1}{j_{\tau^{-1}(1)}}\cdots\iinner{j'_k}{j_{\tau^{-1}(k)}}. \end{eqnarray*} Next, by the definition of $\mathbf{P}_{\pi_1+k,\pi_2}$, we get \begin{eqnarray} \mathbf{P}_{\tau+k,\sigma^{-1}}\ket{j_1\cdots j_k i'_1\cdots i'_k} = \Ket{i'_{\sigma(1)}\cdots i'_{\sigma(k)}j_{\tau^{-1}(1)}\cdots j_{\tau^{-1}(k)}}, \end{eqnarray} where $\pi+k$ means $$ \pi+k\equiv\Pa{\begin{array}{ccc} 1 & \cdots & k \\ \pi(1)+k & \cdots & \pi(k)+k \end{array} }, $$ implying that \begin{eqnarray} \Innerm{\mathbf{ij}'}{\int_{\unitary{d}} U^{\otimes k}\otimes \Pa{U^{\otimes k}}^\dagger d\mu(U)}{\mathbf{ji}'} = \Innerm{\mathbf{ij}'}{\sum_{\sigma,\tau\in S_k}\mathrm{Wg}(\sigma\tau^{-1})\mathbf{P}_{\tau+k,\sigma^{-1}}}{\mathbf{ji}'}.
\end{eqnarray} The proof is complete. \end{proof} \begin{remark} In recent papers \cite{Horodecki}, the authors modified Schur-Weyl duality so that the commutant of $U^{\otimes k-1}\otimes \overline{U}$ can be computed explicitly. In \cite{mhs,schm} they use the resulting commutant theorem to investigate several questions in quantum information theory. \end{remark} \begin{cor} The uniform average of $\out{\psi}{\psi}^{\otimes k}$ over state vectors $\ket{\psi}$ on the unit sphere $\mathbb{S}^{2d-1}$ in $\mathbb{C}^d$ is given by \begin{eqnarray} \int_{\mathbb{S}^{2d-1}} \out{\psi}{\psi}^{\otimes k} d\ket{\psi} = \frac1{s_{(k)}(1^{\times d})}C_{(k)} = \frac1{\binom{k+d-1}{k}}C_{(k)}, \end{eqnarray} where $C_\lambda$ is defined in \eqref{eq:central-proj}; here $\lambda=(k)$. \end{cor} \begin{proof} In fact, this result is a direct consequence of \eqref{eq:general-integral}. More explicitly, let us fix a vector $\ket{\psi_0}$. Then every $\ket{\psi}$ can be written as $\ket{\psi}=U\ket{\psi_0}$ for a Haar-distributed unitary $U$. Thus \begin{eqnarray} \int_{\mathbb{S}^{2d-1}} \out{\psi}{\psi}^{\otimes k} d\ket{\psi} &=& \int_{\unitary{d}} U^{\otimes k}\out{\psi_0}{\psi_0}^{\otimes k}U^{\otimes k,\dagger}d\mu(U). \end{eqnarray} Taking $A=\out{\psi_0}{\psi_0}^{\otimes k}$ in \eqref{eq:general-integral} gives rise to \begin{eqnarray} &&\int_{\unitary{d}} U^{\otimes k}\out{\psi_0}{\psi_0}^{\otimes k}U^{\otimes k,\dagger}d\mu(U) = \Pa{\frac1{k!}\sum_{\pi\in S_k} \mathbf{P}(\pi)}\Pa{\sum_{\lambda\vdash k}\frac{f^\lambda}{s_{\lambda}(1^{\times d})}C_\lambda}\\ &&= C_{(k)}\Pa{\sum_{\lambda\vdash k}\frac{f^\lambda}{s_{\lambda}(1^{\times d})}C_\lambda} = \frac{f^{(k)}}{s_{(k)}(1^{\times d})}C_{(k)}, \end{eqnarray} implying the desired result since $f^{(k)}=1$. \end{proof} The following compact version of $\sE_k(A)$ is given by Audenaert in \cite{kmra}.
The detailed derivation is deferred to the Appendix below. \begin{prop} Let $\cH_{\mathrm{in}}$ and $\cH_{\mathrm{out}}$ be two copies of the Hilbert space $\cH = (\mathbb{C}^d)^{\otimes k}$. Let $C_{(k)}^\vee$ be the projector on the totally symmetric subspace of $\cH_{\mathrm{out}}\otimes \cH_{\mathrm{in}}$. Then it holds that \begin{eqnarray} \int_{\unitary{d}} \out{U^{\otimes k}}{U^{\otimes k}} d\mu(U)= C_{(k)}^\vee\Pa{\Br{\Ptr{\mathrm{in}}{C_{(k)}^\vee}}^{-1}\otimes\mathbb{1}_{\mathrm{in}}}. \end{eqnarray} \end{prop} \begin{cor} Let $A\in\mathrm{End}((\mathbb{C}^d)^{\otimes k})$. Then it holds that \begin{eqnarray} \int_{\unitary{d}} \Pa{U^{\otimes k}}A\Pa{U^{\otimes k}}^\dagger d\mu(U)= \frac1{k!}\Br{\sum_{\pi\in S_k}\Tr{A\mathbf{P}(\pi^{-1})}\mathbf{P}(\pi)}\Br{\Ptr{\mathrm{in}}{C_{(k)}^\vee}}^{-1}. \end{eqnarray} \end{cor} \begin{cor} Assume $X\in\mathrm{End}(\mathbb{C}^d)$ with spectrum $\set{x_j: j=1,\ldots,d}$. It holds that \begin{eqnarray} \int_{\unitary{d}} \Pa{U^{\otimes k}}X^{\otimes k}\Pa{U^{\otimes k}}^\dagger d\mu(U)=\sum_{\lambda\vdash(k,d)}\frac{s_\lambda(x_1,\ldots,x_d)}{s_\lambda(1^{\times d})}C_\lambda=\sum_{\lambda\vdash(k,d)}\frac{\Tr{C_\lambda X^{\otimes k}}}{\Tr{C_\lambda}}C_\lambda. \end{eqnarray} \end{cor} \begin{proof} We give a very simple derivation of this identity via Schur-Weyl duality, i.e., Theorem~\ref{th:S-W-Duality}. Indeed, the integral in question can be rewritten as \begin{eqnarray} \int_{\unitary{d}} \Pa{U^{\otimes k}}X^{\otimes k}\Pa{U^{\otimes k}}^\dagger d\mu(U)=\int_{\unitary{d}} \bQ(U)\bQ(X) \bQ^\dagger(U) d\mu(U).
\end{eqnarray} Now by Schur-Weyl duality, we have \begin{eqnarray*} \bQ(U) = \bigoplus_{\lambda\vdash (k,d)} \bQ_\lambda(U)\otimes\mathbb{1}_{\mathbf{P}_\lambda},~~\bQ(X) = \bigoplus_{\lambda\vdash (k,d)} \bQ_\lambda(X)\otimes\mathbb{1}_{\mathbf{P}_\lambda},~~\bQ^\dagger(U) = \bigoplus_{\lambda\vdash (k,d)} \bQ^\dagger_\lambda(U)\otimes\mathbb{1}_{\mathbf{P}_\lambda}. \end{eqnarray*} Thus, by Schur's lemma, \begin{eqnarray} \int_{\unitary{d}} \bQ(U)\bQ(X) \bQ^\dagger(U) d\mu(U) &=& \sum_{\lambda\vdash(k,d)} \Pa{\int_{\unitary{d}} \bQ_\lambda(U)\bQ_\lambda(X) \bQ^\dagger_\lambda(U)d\mu(U)} \otimes \mathbb{1}_{\mathbf{P}_\lambda} \\ &=& \sum_{\lambda\vdash(k,d)} \Pa{\frac1{\dim(\bQ_\lambda)}\Tr{\bQ_\lambda(X)}\mathbb{1}_{\bQ_\lambda}} \otimes \mathbb{1}_{\mathbf{P}_\lambda}\\ &=& \sum_{\lambda\vdash(k,d)} \frac1{\dim(\bQ_\lambda)}\Tr{\bQ_\lambda(X)} \mathbb{1}_{\bQ_\lambda}\otimes \mathbb{1}_{\mathbf{P}_\lambda}, \end{eqnarray} which implies the desired result, where we have used the facts that \begin{eqnarray*} \Tr{\bQ_\lambda(X)} &=& s_\lambda(x_1,\ldots,x_d), \\ \dim(\bQ_\lambda)&=& s_\lambda(1^{\times d}),~ C_\lambda = \mathbb{1}_{\bQ_\lambda} \otimes \mathbb{1}_{\mathbf{P}_\lambda}. \end{eqnarray*} This completes the proof.
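As a numerical sanity check of this corollary, one can test the case $k=2$ directly: there $C_{(2)}=(\mathbb{1}+S)/2$ and $C_{(1,1)}=(\mathbb{1}-S)/2$, with $S$ the swap operator on $\mathbb{C}^d\otimes\mathbb{C}^d$. The following is a minimal Monte Carlo sketch (assuming \texttt{numpy}; the helper names are ours), comparing a sample average over Haar-random unitaries against $\sum_\lambda \frac{\Tr{C_\lambda X^{\otimes 2}}}{\Tr{C_\lambda}}C_\lambda$.

```python
# Monte Carlo check of the k = 2 case of the corollary (a sketch; numpy assumed):
#   int (U x U) X^{(x)2} (U x U)^dagger dmu(U)
#     = sum_lambda Tr(C_lambda X^{(x)2}) / Tr(C_lambda) * C_lambda,
# with C_(2) = (1 + S)/2, C_(1,1) = (1 - S)/2, S the swap on C^d (x) C^d.
import numpy as np

rng = np.random.default_rng(0)
d = 2

def haar_unitary(d):
    # QR of a complex Ginibre matrix, with the phases of R's diagonal
    # divided out, yields a Haar-distributed unitary.
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

# Swap operator S|ij> = |ji> and the two minimal central projections.
S = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        S[j * d + i, i * d + j] = 1.0
C_sym = (np.eye(d * d) + S) / 2
C_asym = (np.eye(d * d) - S) / 2

X = np.diag([1.0, 2.0])          # any test operator works here
X2 = np.kron(X, X)

# Right-hand side of the corollary.
rhs = sum(np.trace(C @ X2) / np.trace(C) * C for C in (C_sym, C_asym))

# Monte Carlo estimate of the left-hand side.
N = 20000
lhs = np.zeros((d * d, d * d), dtype=complex)
for _ in range(N):
    U = haar_unitary(d)
    U2 = np.kron(U, U)            # note: the SAME U in both factors
    lhs += U2 @ X2 @ U2.conj().T
lhs /= N

assert np.linalg.norm(lhs - rhs) < 0.15
```

The statistical error of the sample mean is of order $\|X^{\otimes2}\|_F/\sqrt{N}\approx 0.035$ here, so the tolerance above is a comfortable margin.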
\end{proof} \begin{exam} Note that we get the following decomposition via Schur-Weyl duality \begin{eqnarray} (\mathbb{C}^d)^{\ot3}\cong \bQ_{(3)}\otimes\mathbf{P}_{(3)}\bigoplus\bQ_{(2,1)}\otimes\mathbf{P}_{(2,1)}\bigoplus\bQ_{(1,1,1)}\otimes\mathbf{P}_{(1,1,1)} \end{eqnarray} where \begin{eqnarray} \dim(\bQ_\lambda)= \begin{cases} \frac{d(d+1)(d+2)}6,&\text{if}~ \lambda=(3),\\ \frac{(d-1)d(d+1)}3, &\text{if}~ \lambda=(2,1),\\ \frac{(d-2)(d-1)d}6,&\text{if}~ \lambda=(1,1,1), \end{cases}~\text{and}~\dim(\mathbf{P}_\lambda)= \begin{cases} 1,&\text{if}~ \lambda=(3),\\ 2, &\text{if}~ \lambda=(2,1),\\ 1,&\text{if}~ \lambda=(1,1,1). \end{cases} \end{eqnarray} Hence \begin{eqnarray} C_\lambda = \begin{cases} \frac16\Pa{\mathbf{P}_{(1)}+\mathbf{P}_{(12)}+\mathbf{P}_{(13)}+\mathbf{P}_{(23)}+\mathbf{P}_{(123)}+\mathbf{P}_{(132)}},&\text{if}~ \lambda=(3),\\ \frac13\Pa{2\mathbf{P}_{(1)}-\mathbf{P}_{(123)}-\mathbf{P}_{(132)}},&\text{if}~\lambda=(2,1),\\
\frac16\Pa{\mathbf{P}_{(1)}-\mathbf{P}_{(12)}-\mathbf{P}_{(13)}-\mathbf{P}_{(23)}+\mathbf{P}_{(123)}+\mathbf{P}_{(132)}},&\text{if}~\lambda=(1,1,1). \end{cases} \end{eqnarray} It follows that \begin{eqnarray} \Tr{C_\lambda} = \begin{cases} \frac{d(d+1)(d+2)}6,&\text{if}~ \lambda=(3),\\ \frac{2(d-1)d(d+1)}3,&\text{if}~\lambda=(2,1),\\ \frac{(d-2)(d-1)d}6,&\text{if}~\lambda=(1,1,1) \end{cases} \end{eqnarray} and \begin{eqnarray} \Tr{X^{\otimes 3}C_\lambda} = \begin{cases} \frac16\Br{\Tr{X}^3+3\Tr{X^2}\Tr{X}+2\Tr{X^3}},&\text{if}~ \lambda=(3),\\ \frac23\Br{\Tr{X}^3-\Tr{X^3}},&\text{if}~\lambda=(2,1),\\ \frac16\Br{\Tr{X}^3-3\Tr{X^2}\Tr{X}+2\Tr{X^3}},&\text{if}~\lambda=(1,1,1). \end{cases} \end{eqnarray} Therefore \begin{eqnarray} \int (UXU^\dagger)^{\ot3}d\mu(U) = \Delta^{(3)}_3C_{(3)} + \Delta^{(2,1)}_3C_{(2,1)} +\Delta^{(1,1,1)}_3C_{(1,1,1)}, \end{eqnarray} where \begin{eqnarray} \Delta^{(3)}_3 &:=& \frac{\Tr{X}^3+3\Tr{X^2}\Tr{X}+2\Tr{X^3}}{d(d+1)(d+2)},\\ \Delta^{(2,1)}_3 &:=&\frac{\Tr{X}^3-\Tr{X^3}}{(d-1)d(d+1)},\\ \Delta^{(1,1,1)}_3 &:=&\frac{\Tr{X}^3-3\Tr{X^2}\Tr{X}+2\Tr{X^3}}{(d-2)(d-1)d}.
\end{eqnarray} \end{exam} \begin{exam} Similarly, we get the following decomposition: \begin{eqnarray} (\mathbb{C}^d)^{\otimes 4}&\cong& \bQ_{(4)}\otimes\mathbf{P}_{(4)}\bigoplus \bQ_{(3,1)}\otimes\mathbf{P}_{(3,1)}\bigoplus \bQ_{(2,2)}\otimes\mathbf{P}_{(2,2)}\nonumber\\ &&\bigoplus \bQ_{(2,1,1)}\otimes\mathbf{P}_{(2,1,1)}\bigoplus \bQ_{(1,1,1,1)}\otimes\mathbf{P}_{(1,1,1,1)}, \end{eqnarray} where \begin{eqnarray} \dim(\bQ_\lambda)= \begin{cases} \frac{d(d+1)(d+2)(d+3)}{24},&\text{if}~ \lambda=(4),\\ \frac{(d-1)d(d+1)(d+2)}8, &\text{if}~ \lambda=(3,1),\\ \frac{(d-1)d^2(d+1)}{12},&\text{if}~ \lambda=(2,2),\\ \frac{(d-2)(d-1)d(d+1)}8,&\text{if}~ \lambda=(2,1,1),\\ \frac{(d-3)(d-2)(d-1)d}{24},&\text{if}~ \lambda=(1,1,1,1), \end{cases}~\text{and}~\dim(\mathbf{P}_\lambda)= \begin{cases} 1,&\text{if}~ \lambda=(4),\\ 3, &\text{if}~ \lambda=(3,1),\\ 2,&\text{if}~ \lambda=(2,2),\\ 3,&\text{if}~ \lambda=(2,1,1),\\ 1,&\text{if}~ \lambda=(1,1,1,1).
\end{cases} \end{eqnarray} Hence \begin{eqnarray} C_{(4)} &=& \frac1{24}\mathbf{P}_{(1)}+\frac1{24}\Pa{\mathbf{P}_{(12)}+\mathbf{P}_{(13)}+\mathbf{P}_{(14)}+\mathbf{P}_{(23)}+\mathbf{P}_{(24)}+\mathbf{P}_{(34)}}\\ &&+\frac1{24}\Pa{\mathbf{P}_{(12)(34)}+\mathbf{P}_{(13)(24)}+\mathbf{P}_{(14)(23)}}\\ &&+\frac1{24}\Pa{\mathbf{P}_{(123)}+\mathbf{P}_{(132)}+\mathbf{P}_{(124)}+\mathbf{P}_{(142)}+\mathbf{P}_{(134)}+\mathbf{P}_{(143)}+\mathbf{P}_{(234)}+\mathbf{P}_{(243)}}\\
&&+\frac1{24}\Pa{\mathbf{P}_{(1234)}+\mathbf{P}_{(1243)}+\mathbf{P}_{(1324)}+\mathbf{P}_{(1342)}+\mathbf{P}_{(1423)}+\mathbf{P}_{(1432)}} \end{eqnarray} \begin{eqnarray} C_{(3,1)} &=& \frac38\mathbf{P}_{(1)}+\frac18\Pa{\mathbf{P}_{(12)}+\mathbf{P}_{(13)}+\mathbf{P}_{(14)}+\mathbf{P}_{(23)}+\mathbf{P}_{(24)}+\mathbf{P}_{(34)}}\\ &&-\frac18\Pa{\mathbf{P}_{(12)(34)}+\mathbf{P}_{(13)(24)}+\mathbf{P}_{(14)(23)}}\\
&&-\frac18\Pa{\mathbf{P}_{(1234)}+\mathbf{P}_{(1243)}+\mathbf{P}_{(1324)}+\mathbf{P}_{(1342)}+\mathbf{P}_{(1423)}+\mathbf{P}_{(1432)}} \end{eqnarray} \begin{eqnarray} C_{(2,2)} &=& \frac16\mathbf{P}_{(1)}+\frac16\Pa{\mathbf{P}_{(12)(34)}+\mathbf{P}_{(13)(24)}+\mathbf{P}_{(14)(23)}}\\ &&-\frac1{12}\Pa{\mathbf{P}_{(123)}+\mathbf{P}_{(132)}+\mathbf{P}_{(124)}+\mathbf{P}_{(142)}+\mathbf{P}_{(134)}+\mathbf{P}_{(143)}+\mathbf{P}_{(234)}+\mathbf{P}_{(243)}} \end{eqnarray} \begin{eqnarray} C_{(2,1,1)} &=&
\frac38\mathbf{P}_{(1)}-\frac18\Pa{\mathbf{P}_{(12)}+\mathbf{P}_{(13)}+\mathbf{P}_{(14)}+\mathbf{P}_{(23)}+\mathbf{P}_{(24)}+\mathbf{P}_{(34)}}\\ &&-\frac18\Pa{\mathbf{P}_{(12)(34)}+\mathbf{P}_{(13)(24)}+\mathbf{P}_{(14)(23)}}\\ &&+\frac18\Pa{\mathbf{P}_{(1234)}+\mathbf{P}_{(1243)}+\mathbf{P}_{(1324)}+\mathbf{P}_{(1342)}+\mathbf{P}_{(1423)}+\mathbf{P}_{(1432)}} \end{eqnarray} \begin{eqnarray} C_{(1,1,1,1)} &=&
\frac1{24}\mathbf{P}_{(1)}-\frac1{24}\Pa{\mathbf{P}_{(12)}+\mathbf{P}_{(13)}+\mathbf{P}_{(14)}+\mathbf{P}_{(23)}+\mathbf{P}_{(24)}+\mathbf{P}_{(34)}}\\ &&+\frac1{24}\Pa{\mathbf{P}_{(12)(34)}+\mathbf{P}_{(13)(24)}+\mathbf{P}_{(14)(23)}}\\ &&+\frac1{24}\Pa{\mathbf{P}_{(123)}+\mathbf{P}_{(132)}+\mathbf{P}_{(124)}+\mathbf{P}_{(142)}+\mathbf{P}_{(134)}+\mathbf{P}_{(143)}+\mathbf{P}_{(234)}+\mathbf{P}_{(243)}}\\
&&-\frac1{24}\Pa{\mathbf{P}_{(1234)}+\mathbf{P}_{(1243)}+\mathbf{P}_{(1324)}+\mathbf{P}_{(1342)}+\mathbf{P}_{(1423)}+\mathbf{P}_{(1432)}} \end{eqnarray} \begin{eqnarray} \int (UXU^\dagger)^{\ot4}d\mu(U) = \Delta^{(4)}_4C_{(4)}+\Delta^{(3,1)}_4C_{(3,1)} +\Delta^{(2,2)}_4C_{(2,2)}+\Delta^{(2,1,1)}_4C_{(2,1,1)} +\Delta^{(1,1,1,1)}_4C_{(1,1,1,1)}, \end{eqnarray} where \begin{eqnarray} \Delta^{(4)}_4&:=&\frac{\Tr{X}^4+6\Tr{X^2}\Tr{X}^2+3\Tr{X^2}^2 +8\Tr{X^3}\Tr{X}+6\Tr{X^4}}{d(d+1)(d+2)(d+3)},\\ \Delta^{(3,1)}_4&:=&\frac{\Tr{X}^4+2\Tr{X^2}\Tr{X}^2-\Tr{X^2}^2-2\Tr{X^4}}{(d-1)d(d+1)(d+2)},\\ \Delta^{(2,2)}_4&:=&\frac{\Tr{X}^4+3\Tr{X^2}^2-4\Tr{X^3}\Tr{X}}{(d-1)d^2(d+1)},\\ \Delta^{(2,1,1)}_4&:=&\frac{\Tr{X}^4-2\Tr{X^2}\Tr{X}^2-\Tr{X^2}^2+2\Tr{X^4}}{(d-2)(d-1)d(d+1)},\\ \Delta^{(1,1,1,1)}_4&:=&\frac{\Tr{X}^4-6\Tr{X^2}\Tr{X}^2+3\Tr{X^2}^2+8\Tr{X^3}\Tr{X}-6\Tr{X^4}}{(d-3)(d-2)(d-1)d}. \end{eqnarray} \end{exam} \section{Some matrix integrals related to random matrix theory} This section is based on Taylor's Lectures on Lie groups \cite{Tay}. We give a direct derivation of a formula for \begin{eqnarray} \int_{\unitary{d}} \abs{\tr{U^k}}^2dU, \end{eqnarray} which is of use in random matrix theory. We also calculate a more refined object, \begin{eqnarray} \int_{\unitary{d}} U^k\otimes (U^k)^\dagger dU=\int_{\unitary{d}} U^k\otimes U^{-k} dU, \end{eqnarray} which in turn yields a formula for \begin{eqnarray} \int_{\unitary{d}} f(U)\otimes g(U) dU.
\end{eqnarray} Let $f:\mathbb{S}^1\to\mathbb{C}$ be a bounded Borel function, where $\mathbb{S}^1=\Set{e^{\sqrt{-1}\theta}: \theta\in(0,2\pi]}=\set{z\in\mathbb{C}: \abs{z}=1}$. Given $U\in \unitary{d}$, we define $f(U)\in \mathrm{End}(\mathbb{C}^d)$ by the spectral decomposition: If $U=\sum^d_{j=1}e^{\sqrt{-1}\theta_j}\out{u_j}{u_j}$ with $\set{\ket{u_j}:j=1,\ldots,d}$ being an orthonormal basis for $\mathbb{C}^d$, then $f(U)$ is defined as \begin{eqnarray} f(U) := \sum^d_{j=1}f\Pa{e^{\sqrt{-1}\theta_j}}\out{u_j}{u_j}. \end{eqnarray} For instance, $U^k= \sum^d_{j=1}e^{\sqrt{-1}k\theta_j}\out{u_j}{u_j}$. We are interested in formulae for \begin{eqnarray} \int_{\unitary{d}} \Tr{f(U)}\Tr{g(U)} dU. \end{eqnarray} Note that the above is equal to the trace of \begin{eqnarray}\label{eq:comput-of-integral} \int_{\unitary{d}} f(U)\otimes g(U) dU. \end{eqnarray} The notion of Fourier series will be used here. On $\mathbb{S}^1$, let $d\mu(z)$ be the uniform normalized Haar measure. Thus for $z=e^{\sqrt{-1}\theta}\in\mathbb{S}^1$, $$ d\mu(z) = \frac{d\theta}{2\pi}. $$ The functions $\chi_m(z)=z^m$, i.e. $\theta\mapsto e^{\sqrt{-1}m\theta}$, for $m\in\mathbb{Z}$ are just the irreducible characters of $\unitary{1}=\mathbb{S}^1$, and thus form an orthonormal basis of the complex space $L^2(\mathbb{S}^1,\mu)$. For $f\in L^1(\mathbb{S}^1,\mu)$, defining the coefficients \begin{eqnarray} \widehat f(k):=\int_{\mathbb{S}^1}f(z)z^{-k}d\mu(z) \end{eqnarray} yields the formal series \begin{eqnarray}\label{eq:formal-series} f \sim \sum_{k\in\mathbb{Z}} \widehat f(k)z^k. \end{eqnarray} If $f\in L^2(\mathbb{S}^1,\mu)$, the series converges unconditionally to $f$ in $L^2(\mathbb{S}^1,\mu)$. For general $f\in L^1(\mathbb{S}^1,\mu)$ and $z\in\mathbb{S}^1$, let \begin{eqnarray} S_mf(z):=\sum_{k=-m}^m \widehat f(k)z^k,~~~m=0,1,2,\ldots.
\end{eqnarray} There is a one-to-one correspondence between functions $F:\mathbb{R}\to\mathbb{C}$, periodic of period $2\pi$, and functions $f:\mathbb{S}^1\to\mathbb{C}$, given by $F(\theta)=f(e^{\sqrt{-1}\theta}),\theta\in\mathbb{R}$. We will set $\widehat F(k)=\widehat f(k)$. For $F\in L^1([0,2\pi];\mathbb{C})$ (with respect to Lebesgue measure) the formal series \eqref{eq:formal-series} corresponds to \begin{eqnarray} F(\theta) \sim \sum_{k\in\mathbb{Z}} \widehat F(k)e^{\sqrt{-1}k\theta}, \end{eqnarray} which is called the \emph{exponential Fourier series} of $F$. In terms of trigonometric functions we get another series \begin{eqnarray} F(\theta)\sim c_0 + \sum^\infty_{k=1}a_k\cos(k\theta)+b_k\sin(k\theta), \end{eqnarray} called the \emph{Fourier series} of $F$ (or of $f$). Here $a_k:=\widehat f(k)+\widehat f(-k)$ and $b_k:=\sqrt{-1}(\widehat f(k)-\widehat f(-k))$. In particular, the Fourier series of $F$ converges to $F$ at a given $\theta$ if and only if $\lim_{m\to\infty}S_mf(e^{\sqrt{-1}\theta})=f(e^{\sqrt{-1}\theta})$. Now we find that \begin{eqnarray} f(U) = \sum^{+\infty}_{k=-\infty}\widehat f(k)U^k, \end{eqnarray} where $$ \widehat f(k):= \frac1{2\pi}\int^{2\pi}_0 F(\theta)e^{-\sqrt{-1}k\theta}d\theta = \frac1{2\pi}\int^{2\pi}_0 f(e^{\sqrt{-1}\theta})e^{-\sqrt{-1}k\theta}d\theta $$ and $$ F(\theta) = \sum_{k\in\mathbb{Z}}\widehat F(k)e^{\sqrt{-1}k\theta}\Longleftrightarrow f(e^{\sqrt{-1}\theta})=\sum_{k\in\mathbb{Z}}\widehat f(k)e^{\sqrt{-1}k\theta}. $$ Thus we have \begin{eqnarray} \int_{\unitary{d}} f(U)\otimes g(U) dU &=& \int_{\unitary{d}} \Pa{\sum^{+\infty}_{i=-\infty}\widehat f(i)U^i}\otimes\Pa{\sum^{+\infty}_{j=-\infty}\widehat g(j)U^j}dU\\ &=&\sum_{i,j\in\mathbb{Z}} \widehat f(i)\widehat g(j)\int_{\unitary{d}}U^i\otimes U^j dU\\ &=&\sum_{i,j\in\mathbb{Z}} \widehat f(i)\widehat g(j) M_{ij}, \end{eqnarray} where $M_{ij} = \int_{\unitary{d}}U^i\otimes U^j dU$.
Performing the measure-preserving transformation $U\mapsto e^{\sqrt{-1}\psi}U$ on $\unitary{d}$, we see that \begin{eqnarray} M_{ij}= e^{\sqrt{-1}(i+j)\psi}M_{ij}~~~\text{for all}~\psi\in\mathbb{R}. \end{eqnarray} Thus $M_{ij}=0$ for $i\neq -j$. Hence \begin{eqnarray} \int_{\unitary{d}} f(U)\otimes g(U) dU = \sum_{k\in\mathbb{Z}} \widehat f(k)\widehat g(-k) M_k, \end{eqnarray} where $M_k:=\int_{\unitary{d}}U^k\otimes U^{-k}dU$, which implies that \begin{eqnarray} \int_{\unitary{d}} \tr{f(U)}\tr{g(U)} dU = \sum_{k\in\mathbb{Z}} \widehat f(k)\widehat g(-k) \tr{M_k}. \end{eqnarray} It remains to compute the following integral \begin{eqnarray} \Tr{M_k}=\int_{\unitary{d}} \abs{\tr{U^k}}^2dU. \end{eqnarray} Here we establish the following identity: \begin{prop}\label{th:u^k-int} It holds that \begin{eqnarray} \int_{\unitary{d}} \abs{\tr{U^k}}^2dU = \min(k,d). \end{eqnarray} \end{prop} \begin{proof} Here we give a natural proof, based on Weyl's integration formula, which implies that whenever $\varphi:\unitary{d}\to\mathbb{C}$ is invariant under conjugation, then \begin{eqnarray} \int_{\unitary{d}}\varphi(U)dU =C_d \int_{\mathbb{T}^d}\varphi(D(\theta))J(\theta)dD(\theta) = \frac{C_d}{(2\pi)^d}\overbrace{\int^{2\pi}_0\cdots\int^{2\pi}_0}^{d}\varphi(D(\theta))J(\theta)d\theta \end{eqnarray} where $D(\theta)=\mathrm{diag}(e^{\sqrt{-1}\theta_1},\ldots,e^{\sqrt{-1}\theta_d})$, and $J(\theta)=\prod_{i<j}\abs{e^{\sqrt{-1}\theta_i} - e^{\sqrt{-1}\theta_j}}^2$. We will verify in the calculations below that $C_d=1/d!$. Now \begin{eqnarray} \int_{\unitary{d}} \abs{\tr{U^k}}^2dU = \frac{C_d}{(2\pi)^d} \overbrace{\int^{2\pi}_0\cdots\int^{2\pi}_0}^{d}\abs{e^{\sqrt{-1}k\theta_1}+\cdots+e^{\sqrt{-1}k\theta_d}}^2 J(\theta)d\theta. \end{eqnarray} We restate this as follows.
Set $\zeta_j=e^{\sqrt{-1}\theta_j}$, so \begin{eqnarray} \abs{e^{\sqrt{-1}k\theta_1}+\cdots+e^{\sqrt{-1}k\theta_d}}^2 = \abs{\zeta^k_1+\cdots+\zeta^k_d}^2 = \sum_{p,q=1}^d \zeta^k_p\zeta^{-k}_q \end{eqnarray} and \begin{eqnarray} J(\theta) &=& \prod_{i<j}\abs{\zeta_i-\zeta_j}^2 = \prod_{i<j} (\zeta_i-\zeta_j)(\zeta^{-1}_i-\zeta^{-1}_j)\\ &=& (\op{sign}\tau)(\zeta_1\cdots\zeta_d)^{-(d-1)}\prod_{i<j}(\zeta_i-\zeta_j)^2, \end{eqnarray} where $\tau=(d\cdots 21)$, i.e. $\tau(j)=d+1-j$ or $\tau$ is written as $$ \tau:=\left( \begin{array}{cccc} 1 & 2 & \cdots & d \\ d & d-1 & \cdots & 1 \\ \end{array} \right). $$ Note that $\op{sign}\tau=(-1)^{\frac{d(d-1)}2}$. We see that $\int_{\unitary{d}} \abs{\tr{U^k}}^2dU$ is the constant term in \begin{eqnarray} C_d(\op{sign}\tau)(\zeta_1\cdots\zeta_d)^{-(d-1)}\Pa{\sum_{p,q=1}^d \zeta^k_p\zeta^{-k}_q}\prod_{i<j}(\zeta_i-\zeta_j)^2. \end{eqnarray} Thus our task is to identify the constant term in this Laurent polynomial. To work on the last factor, we recognize \begin{eqnarray} V(\zeta) = \prod_{i<j} (\zeta_i-\zeta_j) \end{eqnarray} as a Vandermonde determinant; hence \begin{eqnarray} V(\zeta) = \sum_{\sigma\in S_d} (\op{sign}\sigma)\zeta^{\sigma(1)-1}_1\cdots\zeta^{\sigma(d)-1}_d. \end{eqnarray} Hence \begin{eqnarray} \prod_{i<j}(\zeta_i-\zeta_j)^2 = V(\zeta)^2 = \sum_{\sigma,\pi\in S_d} (\op{sign}\sigma)(\op{sign}\pi)\zeta^{\sigma(1)+\pi(1)-2}_1\cdots\zeta^{\sigma(d)+\pi(d)-2}_d. \end{eqnarray} Let us first identify the constant term in \begin{eqnarray} J(\theta) = (\op{sign}\tau)(\zeta_1\cdots\zeta_d)^{-(d-1)}V(\zeta)^2. 
\end{eqnarray} We see this constant term is equal to \begin{eqnarray*} &&\frac1{(2\pi)^d} \overbrace{\int^{2\pi}_0\cdots\int^{2\pi}_0}^{d}J(\theta)d\theta \\ &&= (\op{sign}\tau)\frac1{(2\pi)^d} \overbrace{\int^{2\pi}_0\cdots\int^{2\pi}_0}^{d}\Pa{\sum_{\sigma,\pi\in S_d}(\op{sign}\sigma)(\op{sign}\pi)\zeta^{\sigma(1)+\pi(1)-d-1}_1\cdots\zeta^{\sigma(d)+\pi(d)-d-1}_d}d\theta\\ &&=(\op{sign}\tau) \sum_{\sigma,\pi\in S_d}(\op{sign}\sigma)(\op{sign}\pi)\Pa{\frac1{2\pi}\int^{2\pi}_0 \zeta^{\sigma(1)+\pi(1)-d-1}_1d\theta_1}\times\cdots\times \Pa{\frac1{2\pi}\int^{2\pi}_0 \zeta^{\sigma(d)+\pi(d)-d-1}_dd\theta_d}\\ &&=(\op{sign}\tau) \sum_{(\sigma,\pi)\in S_d\times S_d:\forall j,\sigma(j)+\pi(j)=d+1}(\op{sign}\sigma)(\op{sign}\pi)=(\op{sign}\tau) \sum_{(\sigma,\pi)\in S_d\times S_d:\pi=\tau\sigma}(\op{sign}\sigma)(\op{sign}\pi). \end{eqnarray*} Note that the sum is over all $(\sigma,\pi)\in S_d\times S_d$ such that $\sigma(j)+\pi(j)=d+1$ for each $j\in\set{1,\ldots,d}$. In other words, we get $\pi(j)=d+1-\sigma(j)=\tau(\sigma(j))$ for all $j\in\set{1,\ldots,d}$, i.e. $\pi=\tau\sigma$. Thus the sum is equal to $$ (\op{sign}\tau)\sum_{\sigma\in S_d}(\op{sign}\sigma)(\op{sign}\tau\sigma) = d!, $$ which gives rise to $C_d=1/d!$. 
Clearly \begin{eqnarray} &&C_d(\op{sign}\tau)(\zeta_1\cdots\zeta_d)^{-(d-1)}\Pa{\sum_{p,q=1}^d \zeta^k_p\zeta^{-k}_q}\prod_{i<j}(\zeta_i-\zeta_j)^2\\ &&= C_d(\op{sign}\tau)(\zeta_1\cdots\zeta_d)^{-(d-1)} \sum^d_{p,q=1}\sum_{(\sigma,\pi)\in S_d\times S_d}(\op{sign}\sigma)(\op{sign}\pi)\\ &&~~~\times \zeta^k_{p}\zeta^{-k}_{q}\zeta^{\sigma(1)+\pi(1)-2}_1\cdots\zeta^{\sigma(d)+\pi(d)-2}_d,\\ &&=C_d(\op{sign}\tau)(V_1(\zeta)+V_2(\zeta)), \end{eqnarray} where \begin{eqnarray} V_1(\zeta) &:=& (\zeta_1\cdots\zeta_d)^{-(d-1)} \sum^d_{p=1}\sum_{(\sigma,\pi)\in S_d\times S_d}(\op{sign}\sigma)(\op{sign}\pi)\zeta^{\sigma(1)+\pi(1)-2}_1\cdots\zeta^{\sigma(d)+\pi(d)-2}_d,\\ V_2(\zeta) &:=& (\zeta_1\cdots\zeta_d)^{-(d-1)} \sum_{p\neq q}\sum_{(\sigma,\pi)\in S_d\times S_d}(\op{sign}\sigma)(\op{sign}\pi)\zeta^k_{p}\zeta^{-k}_{q}\zeta^{\sigma(1)+\pi(1)-2}_1\cdots\zeta^{\sigma(d)+\pi(d)-2}_d. \end{eqnarray} Now \begin{eqnarray} V_1(\zeta) &:=& d\times \sum_{(\sigma,\pi)\in S_d\times S_d}(\op{sign}\sigma)(\op{sign}\pi)\zeta^{\sigma(1)+\pi(1)-d-1}_1\cdots\zeta^{\sigma(d)+\pi(d)-d-1}_d, \end{eqnarray} implying \begin{eqnarray} \frac1{(2\pi)^d} \overbrace{\int^{2\pi}_0\cdots\int^{2\pi}_0}^{d} V_1(\zeta)d\theta :=d \cdot d!(\op{sign}\tau). \end{eqnarray} It remains to consider the integral involved in $V_2(\zeta)$. That is, \begin{eqnarray} \frac1{(2\pi)^d} \overbrace{\int^{2\pi}_0\cdots\int^{2\pi}_0}^{d} V_2(\zeta)d\theta. \end{eqnarray} We see that for a given $p\neq q$, a pair $(\sigma,\pi)\in S_d\times S_d$ contributes to the constant term in $V_2(\zeta)$ if and only if \begin{eqnarray} \sigma(j)+\pi(j) = \begin{cases} d+1,&\text{if}~j\in\set{1,\ldots,d}\backslash\set{p,q}\\ d+1-k,&\text{if}~j=p,\\ d+1+k,&\text{if}~j=q. \end{cases} \end{eqnarray} That is, \begin{eqnarray} \pi(j) = \begin{cases} d+1-\sigma(j),&\text{if}~j\in\set{1,\ldots,d}\backslash\set{p,q}\\ d+1-\sigma(j)-k,&\text{if}~j=p,\\ d+1-\sigma(j)+k,&\text{if}~j=q. 
\end{cases} \end{eqnarray} By the definition of $\tau$, $d+1-\sigma(j) = \tau(\sigma(j))$ for all $j$. Thus \begin{eqnarray} \pi(j) = \begin{cases} \tau(\sigma(j)),&\text{if}~j\in\set{1,\ldots,d}\backslash\set{p,q}\\ \tau(\sigma(j))-k,&\text{if}~j=p,\\ \tau(\sigma(j))+k,&\text{if}~j=q. \end{cases} \end{eqnarray} Define \begin{eqnarray} \omega_{pq}(j) = \begin{cases} j,&\text{if}~j\in\set{1,\ldots,d}\backslash\set{j_p,j_q}\\ j-k,&\text{if}~j=j_p,\\ j+k,&\text{if}~j=j_q, \end{cases} \end{eqnarray} where $j_p=\tau(\sigma(p))$ and $j_q=\tau(\sigma(q))$. Therefore $\pi=\omega_{pq}\tau\sigma$, where $\omega_{pq}=(j_pj_q)$ with $j_p-j_q=k$. Note that the possible choices of $\omega_{pq}$ are determined by the possible values of the positive integer $j_p$; there are $d-k$ of them in total, since $k+1\leqslant j_p\leqslant d$. Now \begin{eqnarray} V_2(\zeta) = \sum_{p\neq q}\sum_{(\sigma,\pi)\in S_d\times S_d}(\op{sign}\sigma)(\op{sign}\pi)\zeta^k_{p}\zeta^{-k}_{q}\zeta^{\sigma(1)+\pi(1)-d-1}_1\cdots\zeta^{\sigma(d)+\pi(d)-d-1}_d, \end{eqnarray} implying that if $1\leqslant k\leqslant d-1$, \begin{eqnarray} \frac1{(2\pi)^d} \overbrace{\int^{2\pi}_0\cdots\int^{2\pi}_0}^{d} V_2(\zeta)d\theta &=& \sum_{p\neq q}\sum_{(\sigma,\pi)\in S_d\times S_d:\pi=\omega_{pq}\tau\sigma}(\op{sign}\sigma)(\op{sign}\pi)\\ &=& \sum_{p\neq q}\sum_{\sigma\in S_d}(\op{sign}\sigma)(\op{sign}\omega_{pq}\tau\sigma)\\ &=& d!(\op{sign}\tau)\sum_{p\neq q}\op{sign}\omega_{pq} = -(d-k)\cdot d!(\op{sign}\tau), \end{eqnarray} where we used the fact that $\sum_{p\neq q}\op{sign}\omega_{pq} = -(d-k)$. Indeed, for some pairs $(p,q)$ with $p\neq q$ the permutation $\omega_{pq}$ does not exist; it exists for exactly $d-k$ pairs, and each such $\omega_{pq}$ is a transposition with $\op{sign}\omega_{pq}=-1$. We also note that if $k\geqslant d$, there is no admissible $\omega_{pq}$ at all, so the integral involved in $V_2(\zeta)$ is zero.
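Before closing the proof, we remark that the identity just established is easy to check numerically; a minimal Monte Carlo sketch (assuming Python with NumPy; the QR-based Haar sampler is the standard Ginibre recipe, and the function names are ours):

```python
import numpy as np

def haar_unitary(d, rng):
    # Haar-random unitary via QR of a complex Ginibre matrix,
    # with the phases of R's diagonal absorbed into Q (Mezzadri's recipe).
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def second_moment(d, k, n_samples=4000, seed=0):
    # Monte Carlo estimate of the integral of |tr U^k|^2 over U(d).
    rng = np.random.default_rng(seed)
    vals = [abs(np.trace(np.linalg.matrix_power(haar_unitary(d, rng), k))) ** 2
            for _ in range(n_samples)]
    return float(np.mean(vals))

for d, k in [(3, 1), (3, 2), (3, 5)]:
    print(d, k, round(second_moment(d, k), 2), min(k, d))
```

With a few thousand samples the estimates settle near $\min(k,d)$; for $d=3$ the three runs above approach $1$, $2$ and $3$ respectively.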
\end{proof} \begin{cor} For $k\geqslant 1,d\geqslant 2$, we have \begin{eqnarray} \int_{\unitary{d}}U^k\otimes U^{-k} dU = \frac{\min(k,d)-1}{d^2-1}\mathbb{1}_{d^2} + \frac{d^2-\min(k,d)}{d(d^2-1)}F, \end{eqnarray} where $F=\sum^d_{i,j=1}\out{ij}{ji}$ is the swap operator on $\mathbb{C}^d\otimes\mathbb{C}^d$. \end{cor} \begin{proof} Clearly $[M_k,V\otimes V]=0$ for all $V\in \unitary{d}$. It follows from Proposition~\ref{prop:uu-integral} that \begin{eqnarray} M_k &=& \int_{\unitary{d}} (V\otimes V)M_k(V\otimes V)^{-1}dV\\ &=& \Pa{\frac{\Tr{M_k}}{d^2-1} - \frac{\Tr{M_kF}}{d(d^2-1)}}\mathbb{1}_{d^2} + \Pa{\frac{\Tr{M_kF}}{d^2-1} - \frac{\Tr{M_k}}{d(d^2-1)}}F. \end{eqnarray} It suffices to compute $\tr{M_k}$ and $\tr{M_kF}$. Clearly $\tr{M_kF}=d$. By Proposition~\ref{th:u^k-int}, we have $\tr{M_k}=\min(k,d)$. Therefore the desired conclusion is obtained. \end{proof} \begin{cor} For $k\geqslant 1,d\geqslant 2$, we have \begin{eqnarray} \int_{\unitary{d}}U^k A(U^k)^\dagger dU = \frac{\min(k,d)-1}{d^2-1}A + \frac{d^2-\min(k,d)}{d(d^2-1)}\Tr{A}\mathbb{1}_d. \end{eqnarray} \end{cor} \begin{proof} Firstly we have \begin{eqnarray} \int_{\unitary{d}}U^k \otimes\overline{U}^k dU = \frac{\min(k,d)-1}{d^2-1}\mathbb{1}_{d^2} + \frac{d^2-\min(k,d)}{d(d^2-1)}\out{\vec(\mathbb{1}_d)}{\vec(\mathbb{1}_d)}, \end{eqnarray} which indicates that \begin{eqnarray} &&\Pa{\int_{\unitary{d}}U^k \otimes\overline{U}^k dU}\ket{\vec(A)} \\ &&= \frac{\min(k,d)-1}{d^2-1}\mathbb{1}_{d^2}\ket{\vec(A)} + \frac{d^2-\min(k,d)}{d(d^2-1)}\ket{\vec(\mathbb{1}_d)}\Inner{\vec(\mathbb{1}_d)}{\vec(A)}. \end{eqnarray} Therefore \begin{eqnarray} \int_{\unitary{d}}U^k A(U^k)^\dagger dU = \frac{\min(k,d)-1}{d^2-1}A + \frac{d^2-\min(k,d)}{d(d^2-1)}\Tr{A}\mathbb{1}_d, \end{eqnarray} implying the desired result. When $k=1$, the result of the present corollary reduces to Proposition~\ref{prop:u-integral}.
\end{proof} We can now complete the computation of \eqref{eq:comput-of-integral}. We obtain \begin{prop} It holds that \begin{eqnarray} \int_{\unitary{d}} f(U)\otimes g(U) dU = \frac{h(0)-\mathcal{F}_dh(0)}{d^2-1}\Pa{\mathbb{1}_{d^2}-\frac1dF} - \frac{h(0)-\widehat h(0)}{d^2-1}(\mathbb{1}_{d^2}-dF) + \widehat h(0)\mathbb{1}_{d^2}, \end{eqnarray} where \begin{eqnarray} h(\theta) = \frac1{2\pi}\int^{2\pi}_0f(t)g(t-\theta)dt \end{eqnarray} and $\mathcal{F}_dh$ denotes the $d$-th Fej\'{e}r mean of the Fourier series of $h$: \begin{eqnarray} \mathcal{F}_dh(\theta) = \sum^d_{j=-d}\Pa{1-\frac{\min(\abs{j},d)}d}\widehat h(j)e^{\sqrt{-1}j\theta}. \end{eqnarray} \end{prop} \begin{prop} It holds that \begin{eqnarray} \int_{\unitary{d}} \abs{\tr{U^k}}^4 dU = \begin{cases} 2k^2, &\text{if}~1\leqslant 2k\leqslant d-1; \\ 2k^2+2k-d,&\text{if}~d\leqslant 2k\leqslant 2(d-1),\\ d(2d-1), &\text{if}~k\geqslant d. \end{cases} \end{eqnarray} \end{prop} \begin{proof} With the notation of Proposition~\ref{th:u^k-int}, we have \begin{eqnarray} \int_{\unitary{d}} \abs{\tr{U^k}}^4 dU = \frac1{(2\pi)^dd!} \overbrace{\int^{2\pi}_0\cdots\int^{2\pi}_0}^{d}\abs{e^{\sqrt{-1}k\theta_1}+\cdots+e^{\sqrt{-1}k\theta_d}}^4 J(\theta)d\theta. \end{eqnarray} Expanding, \begin{eqnarray} \abs{e^{\sqrt{-1}k\theta_1}+\cdots+e^{\sqrt{-1}k\theta_d}}^4 = \abs{\zeta^k_1+\cdots+\zeta^k_d}^4 = \sum_{p,q,r,s=1}^d \zeta^k_p\zeta^{-k}_q\zeta^k_r\zeta^{-k}_s \end{eqnarray} and \begin{eqnarray} J(\theta) =(\op{sign}\tau)\sum_{\sigma,\pi\in S_d} (\op{sign}\sigma)(\op{sign}\pi)\zeta^{\sigma(1)+\pi(1)-d-1}_1\cdots\zeta^{\sigma(d)+\pi(d)-d-1}_d.
\end{eqnarray} We see that $\int_{\unitary{d}} \abs{\tr{U^k}}^4dU$ is the constant term in \begin{eqnarray} &&\frac1{d!}(\op{sign}\tau)\sum^d_{p,q,r,s=1}\sum_{(\sigma,\pi)\in S_d\times S_d}(\op{sign}\sigma)(\op{sign}\pi) \zeta^k_{p}\zeta^{-k}_{q}\zeta^k_{r}\zeta^{-k}_{s}\cdot\zeta^{\sigma(1)+\pi(1)-d-1}_1\cdots\zeta^{\sigma(d)+\pi(d)-d-1}_d\\ &&=\frac1{d!}(\op{sign}\tau)(V_{11}(\zeta)+V_{12}(\zeta)+V_{21}(\zeta)+V_{22}(\zeta)), \end{eqnarray} where \begin{eqnarray} V_{11}(\zeta) &:=& \sum^d_{p=q=1}\sum^d_{r=s=1} \sum_{(\sigma,\pi)\in S_d\times S_d}(\op{sign}\sigma)(\op{sign}\pi) \zeta^k_{p}\zeta^{-k}_{q}\zeta^k_{r}\zeta^{-k}_{s}\cdot\zeta^{\sigma(1)+\pi(1)-d-1}_1\cdots\zeta^{\sigma(d)+\pi(d)-d-1}_d,\\ V_{12}(\zeta) &:=& \sum^d_{p=q=1}\sum_{r\neq s} \sum_{(\sigma,\pi)\in S_d\times S_d}(\op{sign}\sigma)(\op{sign}\pi) \zeta^k_{p}\zeta^{-k}_{q}\zeta^k_{r}\zeta^{-k}_{s}\cdot\zeta^{\sigma(1)+\pi(1)-d-1}_1\cdots\zeta^{\sigma(d)+\pi(d)-d-1}_d,\\ V_{21}(\zeta) &:=& \sum_{p\neq q}\sum^d_{r=s=1}\sum_{(\sigma,\pi)\in S_d\times S_d}(\op{sign}\sigma)(\op{sign}\pi) \zeta^k_{p}\zeta^{-k}_{q}\zeta^k_{r}\zeta^{-k}_{s}\cdot\zeta^{\sigma(1)+\pi(1)-d-1}_1\cdots\zeta^{\sigma(d)+\pi(d)-d-1}_d,\\ V_{22}(\zeta) &:=& \sum_{p\neq q}\sum_{r\neq s}\sum_{(\sigma,\pi)\in S_d\times S_d}(\op{sign}\sigma)(\op{sign}\pi) \zeta^k_{p}\zeta^{-k}_{q}\zeta^k_{r}\zeta^{-k}_{s}\cdot\zeta^{\sigma(1)+\pi(1)-d-1}_1\cdots\zeta^{\sigma(d)+\pi(d)-d-1}_d. \end{eqnarray} Next our task is to identify the constant term in this Laurent polynomial. Now \begin{eqnarray} V_{11}(\zeta) &:=& d^2\times \sum_{(\sigma,\pi)\in S_d\times S_d}(\op{sign}\sigma)(\op{sign}\pi)\zeta^{\sigma(1)+\pi(1)-d-1}_1\cdots\zeta^{\sigma(d)+\pi(d)-d-1}_d, \end{eqnarray} implying \begin{eqnarray} \frac1{(2\pi)^d} \overbrace{\int^{2\pi}_0\cdots\int^{2\pi}_0}^{d} V_{11}(\zeta)d\theta :=d^2 \cdot d!(\op{sign}\tau). \end{eqnarray} Then we consider the integral involved in $V_{12}(\zeta)$. 
Clearly, \begin{eqnarray} V_{12}(\zeta) &=& d\times \sum_{r\neq s}\sum_{(\sigma,\pi)\in S_d\times S_d}(\op{sign}\sigma)(\op{sign}\pi)\zeta^k_r\zeta^{-k}_s\zeta^{\sigma(1)+\pi(1)-d-1}_1\cdots\zeta^{\sigma(d)+\pi(d)-d-1}_d, \end{eqnarray} implying that \begin{eqnarray} \frac1{(2\pi)^d} \overbrace{\int^{2\pi}_0\cdots\int^{2\pi}_0}^{d} V_{12}(\zeta)d\theta = \begin{cases} - d(d-k)d!(\op{sign}\tau), &\text{if}~1\leqslant k\leqslant d-1; \\ 0,&\text{if}~k\geqslant d. \end{cases} \end{eqnarray} Similarly, \begin{eqnarray} V_{21}(\zeta) &=& d\times \sum_{p\neq q}\sum_{(\sigma,\pi)\in S_d\times S_d}(\op{sign}\sigma)(\op{sign}\pi)\zeta^k_p\zeta^{-k}_q\zeta^{\sigma(1)+\pi(1)-d-1}_1\cdots\zeta^{\sigma(d)+\pi(d)-d-1}_d, \end{eqnarray} and it follows that \begin{eqnarray} \frac1{(2\pi)^d} \overbrace{\int^{2\pi}_0\cdots\int^{2\pi}_0}^{d} V_{21}(\zeta)d\theta = \begin{cases} - d(d-k)d!(\op{sign}\tau), &\text{if}~1\leqslant k\leqslant d-1; \\ 0,&\text{if}~k\geqslant d. \end{cases} \end{eqnarray} It remains to consider the integral involved in $V_{22}(\zeta)$. We have that \begin{eqnarray} V_{22}(\zeta) &=& \sum_{p\neq q}\sum_{r\neq s} \sum_{(\sigma,\pi)\in S_d\times S_d}(\op{sign}\sigma)(\op{sign}\pi)\zeta^k_p\zeta^{-k}_q\zeta^k_r\zeta^{-k}_s\zeta^{\sigma(1)+\pi(1)-d-1}_1\cdots\zeta^{\sigma(d)+\pi(d)-d-1}_d. \end{eqnarray} We still need to split $V_{22}(\zeta)$ into several parts. For convenience, we introduce the following notation: $\cI:= \Set{(\mu,\nu): \mu,\nu\in[d]~\text{and}~\mu\neq \nu}$, where $[d]:=\set{1,2,\ldots,d}$. We also denote \begin{eqnarray} \Lambda_1&:=&\Set{((p,q),(r,s)):(p,q),(r,s)\in\cI~\text{and}~(p,q)=(r,s)},\\ \Lambda_2&:=&\Set{((p,q),(r,s)):(p,q),(r,s)\in\cI~\text{and}~(p,q)=(s,r)},\\ \Lambda_3&:=&\Set{((p,q),(r,s)):(p,q),(r,s)\in\cI~\text{and}~(p,q)\neq(r,s)~\text{and}~(p,q)\neq(s,r)}.
\end{eqnarray} Thus we can get a partition of $\cI\times\cI=\Lambda_1\cup\Lambda_2\cup\Lambda_3$ \begin{eqnarray} V_{22}(\zeta) = V^{(1)}_{22}(\zeta) + V^{(2)}_{22}(\zeta) + V^{(3)}_{22}(\zeta), \end{eqnarray} where \begin{eqnarray} V^{(1)}_{22}(\zeta) &:=& \sum_{((p,q),(r,s))\in\Lambda_1}\sum_{(\sigma,\pi)\in S_d\times S_d}(\op{sign}\sigma)(\op{sign}\pi)\zeta^k_p\zeta^{-k}_q\zeta^k_r\zeta^{-k}_s\zeta^{\sigma(1)+\pi(1)-d-1}_1\cdots\zeta^{\sigma(d)+\pi(d)-d-1}_d\\ &=&\sum_{(p,q)\in\cI}\sum_{(\sigma,\pi)\in S_d\times S_d}(\op{sign}\sigma)(\op{sign}\pi)\zeta^{2k}_p\zeta^{-2k}_q\zeta^{\sigma(1)+\pi(1)-d-1}_1\cdots\zeta^{\sigma(d)+\pi(d)-d-1}_d,\\ V^{(2)}_{22}(\zeta) &:=& \sum_{((p,q),(r,s))\in\Lambda_2}\sum_{(\sigma,\pi)\in S_d\times S_d}(\op{sign}\sigma)(\op{sign}\pi)\zeta^k_p\zeta^{-k}_q\zeta^k_r\zeta^{-k}_s\zeta^{\sigma(1)+\pi(1)-d-1}_1\cdots\zeta^{\sigma(d)+\pi(d)-d-1}_d\\ &=&\sum_{(p,q)\in\cI}\sum_{(\sigma,\pi)\in S_d\times S_d}(\op{sign}\sigma)(\op{sign}\pi)\zeta^{\sigma(1)+\pi(1)-d-1}_1\cdots\zeta^{\sigma(d)+\pi(d)-d-1}_d\\ &=&\binom{d}{2}2!\sum_{(\sigma,\pi)\in S_d\times S_d}(\op{sign}\sigma)(\op{sign}\pi)\zeta^{\sigma(1)+\pi(1)-d-1}_1\cdots\zeta^{\sigma(d)+\pi(d)-d-1}_d,\\ V^{(3)}_{22}(\zeta) &:=& \sum_{((p,q),(r,s))\in\Lambda_3}\sum_{(\sigma,\pi)\in S_d\times S_d}(\op{sign}\sigma)(\op{sign}\pi)\zeta^k_p\zeta^{-k}_q\zeta^k_r\zeta^{-k}_s\zeta^{\sigma(1)+\pi(1)-d-1}_1\cdots\zeta^{\sigma(d)+\pi(d)-d-1}_d. \end{eqnarray} This indicates that \begin{eqnarray} \frac1{(2\pi)^d} \overbrace{\int^{2\pi}_0\cdots\int^{2\pi}_0}^{d} V^{(1)}_{22}(\zeta)d\theta &=& \begin{cases} - (d-2k)d!(\op{sign}\tau), &\text{if}~1\leqslant 2k\leqslant d-1; \\ 0,&\text{if}~2k\geqslant d \end{cases}\\ \frac1{(2\pi)^d} \overbrace{\int^{2\pi}_0\cdots\int^{2\pi}_0}^{d} V^{(2)}_{22}(\zeta)d\theta &=&d(d-1)d!(\op{sign}\tau). 
\end{eqnarray} Moreover we separate the index set $\Lambda_3$ into some disjoint unions: $\Lambda_3= \Lambda^{(1)}_3\cup\Lambda^{(2)}_3\cup\Lambda^{(3)}_3\cup\Lambda^{(4)}_3\cup\Lambda^{(5)}_3$, where \begin{eqnarray} \Lambda^{(1)}_3 &:=& \Set{((p,q),(r,s))\in\Lambda_3: p=r},\\ \Lambda^{(2)}_3 &:=& \Set{((p,q),(r,s))\in\Lambda_3: p=s},\\ \Lambda^{(3)}_3 &:=& \Set{((p,q),(r,s))\in\Lambda_3: q= r},\\ \Lambda^{(4)}_3 &:=& \Set{((p,q),(r,s))\in\Lambda_3: q= s},\\ \Lambda^{(5)}_3 &:=& \Set{((p,q),(r,s))\in\Lambda_3: p\neq r,p\neq s,q\neq r,q\neq s}. \end{eqnarray} Thus $V^{(3)}_{22}(\zeta)$ is partitioned as five subparts: \begin{eqnarray} V^{(3)}_{22}(\zeta)= V^{(31)}_{22}(\zeta)+V^{(32)}_{22}(\zeta)+V^{(33)}_{22}(\zeta)+V^{(34)}_{22}(\zeta)+V^{(35)}_{22}(\zeta). \end{eqnarray} We see that for a given $p\neq q$ and $r\neq s$, if $p=r$, then a pair $(\sigma,\pi)\in S_d\times S_d$ contributes to the constant term in $V^{(31)}_{22}(\zeta)$ if and only if \begin{eqnarray} \sigma(j)+\pi(j) = \begin{cases} d+1,&\text{if}~j\in\set{1,\ldots,d}\backslash\set{p,q,s}\\ d+1-2k,&\text{if}~j=p,\\ d+1+k,&\text{if}~j=q,s. \end{cases} \end{eqnarray} That is, \begin{eqnarray} \pi(j) = \begin{cases} d+1-\sigma(j),&\text{if}~j\in\set{1,\ldots,d}\backslash\set{p,q,s}\\ d+1-\sigma(j)-2k,&\text{if}~j=p,\\ d+1-\sigma(j)+k,&\text{if}~j=q,s. \end{cases} \end{eqnarray} By the definition of $\tau$, $d+1-\sigma(j) = \tau(\sigma(j))$ for all $j$. Thus \begin{eqnarray} \pi(j) = \begin{cases} \tau(\sigma(j)),&\text{if}~j\in\set{1,\ldots,d}\backslash\set{p,q,s}\\ \tau(\sigma(j))-2k,&\text{if}~j=p,\\ \tau(\sigma(j))+k,&\text{if}~j=q,s. \end{cases} \end{eqnarray} Define \begin{eqnarray} \omega_{pqs}(j) = \begin{cases} j,&\text{if}~j\in\set{1,\ldots,d}\backslash\set{j_p,j_q,j_s}\\ j-2k,&\text{if}~j=j_p,\\ j+k,&\text{if}~j=j_q,j_s, \end{cases} \end{eqnarray} where $j_p=\tau(\sigma(p)),j_q=\tau(\sigma(q))$ and $j_s=\tau(\sigma(s))$. 
Therefore $\pi=\omega_{pqs}\tau\sigma$, where $\omega_{pqs}=(j_pj_qj_s)$ or $(j_pj_sj_q)$. Note that the possible choices of $\omega_{pqs}$ depend on the possible values of the positive integer $j_p$. If $\omega_{pqs}=(j_pj_qj_s)$, then $j_p=j_q+2k$ and $j_s=j_q+k$, thus $1\leqslant j_q\leqslant d-2k$. If $\omega_{pqs}=(j_pj_sj_q)$, then $j_p=j_s+2k$ and $j_q=j_s+k$, thus $1\leqslant j_s\leqslant d-2k$. This implies that \begin{eqnarray} \frac1{(2\pi)^d} \overbrace{\int^{2\pi}_0\cdots\int^{2\pi}_0}^{d} V^{(31)}_{22}(\zeta)d\theta &=&\begin{cases} (d-2k)d!(\op{sign}\tau), &\text{if}~1\leqslant 2k\leqslant d-1; \\ 0,&\text{if}~2k\geqslant d. \end{cases} \end{eqnarray} A similar analysis applies to $V^{(34)}_{22}(\zeta)$. We see that for a given $p\neq q$ and $r\neq s$, if $q=s$, then a pair $(\sigma,\pi)\in S_d\times S_d$ contributes to the constant term in $V^{(34)}_{22}(\zeta)$ if and only if \begin{eqnarray} \sigma(j)+\pi(j) = \begin{cases} d+1,&\text{if}~j\in\set{1,\ldots,d}\backslash\set{p,q,r}\\ d+1+2k,&\text{if}~j=q,\\ d+1-k,&\text{if}~j=p,r. \end{cases} \end{eqnarray} That is, \begin{eqnarray} \pi(j) = \begin{cases} d+1-\sigma(j),&\text{if}~j\in\set{1,\ldots,d}\backslash\set{p,q,r}\\ d+1-\sigma(j)+2k,&\text{if}~j=q,\\ d+1-\sigma(j)-k,&\text{if}~j=p,r. \end{cases} \end{eqnarray} By the definition of $\tau$, $d+1-\sigma(j) = \tau(\sigma(j))$ for all $j$. Thus \begin{eqnarray} \pi(j) = \begin{cases} \tau(\sigma(j)),&\text{if}~j\in\set{1,\ldots,d}\backslash\set{p,q,r}\\ \tau(\sigma(j))+2k,&\text{if}~j=q,\\ \tau(\sigma(j))-k,&\text{if}~j=p,r. \end{cases} \end{eqnarray} Define \begin{eqnarray} \omega_{pqr}(j) = \begin{cases} j,&\text{if}~j\in\set{1,\ldots,d}\backslash\set{j_p,j_q,j_r}\\ j+2k,&\text{if}~j=j_q,\\ j-k,&\text{if}~j=j_p,j_r, \end{cases} \end{eqnarray} where $j_p=\tau(\sigma(p)),j_q=\tau(\sigma(q))$ and $j_r=\tau(\sigma(r))$. Therefore $\pi=\omega_{pqr}\tau\sigma$, where $\omega_{pqr}=(j_pj_qj_r)$ or $(j_pj_rj_q)$.
Note that the possible choices of $\omega_{pqr}$ depend on the possible values of the positive integer $j_p$. If $\omega_{pqr}=(j_pj_qj_r)$, then $j_p=j_q+2k$ and $j_r=j_q+k$, thus $1\leqslant j_q\leqslant d-2k$. If $\omega_{pqr}=(j_pj_rj_q)$, then $j_p=j_r+2k$ and $j_q=j_r+k$, thus $1\leqslant j_r\leqslant d-2k$. This implies that \begin{eqnarray} \frac1{(2\pi)^d} \overbrace{\int^{2\pi}_0\cdots\int^{2\pi}_0}^{d} V^{(34)}_{22}(\zeta)d\theta &=&\begin{cases} (d-2k)d!(\op{sign}\tau), &\text{if}~1\leqslant 2k\leqslant d-1; \\ 0,&\text{if}~2k\geqslant d. \end{cases} \end{eqnarray} The formulae for $V^{(32)}_{22}(\zeta)$ and $V^{(33)}_{22}(\zeta)$ are easily obtained: \begin{eqnarray} \frac1{(2\pi)^d} \overbrace{\int^{2\pi}_0\cdots\int^{2\pi}_0}^{d} V^{(32)}_{22}(\zeta)d\theta &=&\begin{cases} -d(d-1-k)d!(\op{sign}\tau), &\text{if}~1\leqslant k\leqslant d-1; \\ 0,&\text{if}~k\geqslant d \end{cases} \end{eqnarray} and \begin{eqnarray} \frac1{(2\pi)^d} \overbrace{\int^{2\pi}_0\cdots\int^{2\pi}_0}^{d} V^{(33)}_{22}(\zeta)d\theta &=&\begin{cases} -d(d-1-k)d!(\op{sign}\tau), &\text{if}~1\leqslant k\leqslant d-1; \\ 0,&\text{if}~k\geqslant d. \end{cases} \end{eqnarray} It remains to compute the integral involved in $V^{(35)}_{22}(\zeta)$. We see that for a given $p\neq q$ and $r\neq s$, if $p\neq r,p\neq s,q\neq r,q\neq s$, then a pair $(\sigma,\pi)\in S_d\times S_d$ contributes to the constant term in $V^{(35)}_{22}(\zeta)$ if and only if \begin{eqnarray} \sigma(j)+\pi(j) = \begin{cases} d+1,&\text{if}~j\in\set{1,\ldots,d}\backslash\set{p,q,r,s}\\ d+1-k,&\text{if}~j=p,r\\ d+1+k,&\text{if}~j=q,s. \end{cases} \end{eqnarray} That is, \begin{eqnarray} \pi(j) = \begin{cases} d+1-\sigma(j),&\text{if}~j\in\set{1,\ldots,d}\backslash\set{p,q,r,s}\\ d+1-\sigma(j)-k,&\text{if}~j=p,r,\\ d+1-\sigma(j)+k,&\text{if}~j=q,s. \end{cases} \end{eqnarray} By the definition of $\tau$, $d+1-\sigma(j) = \tau(\sigma(j))$ for all $j$.
Thus \begin{eqnarray} \pi(j) = \begin{cases} \tau(\sigma(j)),&\text{if}~j\in\set{1,\ldots,d}\backslash\set{p,q,r,s}\\ \tau(\sigma(j))-k,&\text{if}~j=p,r,\\ \tau(\sigma(j))+k,&\text{if}~j=q,s. \end{cases} \end{eqnarray} Define \begin{eqnarray} \omega_{pqrs}(j) = \begin{cases} j,&\text{if}~j\in\set{1,\ldots,d}\backslash\set{j_p,j_q,j_r,j_s}\\ j-k,&\text{if}~j=j_p,j_r,\\ j+k,&\text{if}~j=j_q,j_s, \end{cases} \end{eqnarray} where $j_p=\tau(\sigma(p)),j_q=\tau(\sigma(q))$ and $j_r=\tau(\sigma(r)),j_s=\tau(\sigma(s))$. Therefore $\pi=\omega_{pqrs}\tau\sigma$, where $\omega_{pqrs}=(j_pj_q)(j_rj_s)$ or $(j_pj_s)(j_qj_r)$. Therefore \begin{eqnarray} \frac1{(2\pi)^d} \overbrace{\int^{2\pi}_0\cdots\int^{2\pi}_0}^{d} V^{(35)}_{22}(\zeta)d\theta &=&\begin{cases} 2(d-k)(d-k-1)d!(\op{sign}\tau), &\text{if}~1\leqslant k\leqslant d-1; \\ 0,&\text{if}~k\geqslant d. \end{cases} \end{eqnarray} Finally we get that \begin{eqnarray} \int_{\unitary{d}} \abs{\tr{U^k}}^4 dU = \begin{cases} 2k^2, &\text{if}~1\leqslant 2k\leqslant d-1; \\ 2k^2+2k-d,&\text{if}~d\leqslant 2k\leqslant 2(d-1),\\ d(2d-1), &\text{if}~k\geqslant d. \end{cases} \end{eqnarray} We are done. \end{proof} In fact, when $k>d$, the value $\int_{\unitary{d}} \abs{\tr{U^k}}^4 dU = d(2d-1)$ can also be found in \cite{Pastur2004}; the present proposition gives more. \begin{remark} Based on the above discussion, we can consider the following computations: \begin{enumerate}[(i)] \item $\int_{\unitary{d}} (U^k)^{\otimes n}\otimes (U^{-k})^{\otimes n}dU$; \item $\int_{\unitary{d}} (U^k)^{\otimes n}A(U^{-k})^{\otimes n}dU$; \item $\int_{\unitary{d}} \abs{\Tr{U^k}}^{2n}dU$. \end{enumerate} Indeed, for (iii), we see from the results in \cite{Diaconis2001} that if the integer $k$ satisfies the condition $1\leqslant kn \leqslant d$, then \begin{eqnarray} \int_{\unitary{d}} \abs{\Tr{U^k}}^{2n}dU = k^n\cdot n!. \end{eqnarray} What happens when $kn>d$? We leave these questions open for future research.
\end{remark} \section{Discussion and concluding remarks}\label{sect:concluding-remarks} All the integrals considered in this paper are over the unitary group $\unitary{d}$. As a matter of fact, analogous problems can be considered when the unitary group $\unitary{d}$ is replaced by a compact Lie group $G$ with some particular property; for instance, we may assume that $G$ is a \emph{gauge group} (see \cite{Mashhad,Spekkens}), a certain kind of subgroup of $\unitary{d}$. In addition, we can derive some similar results from the Schur orthogonality relations. Recall that for a compact Lie group $G$, $\set{g\to V^{(\mu)}(g)}$ denotes the set of all inequivalent unitary irreps on the underlying vector space $\cV$. Consider the matrix entries of all these unitary matrices as a set of functions from $G$ to $\mathbb{C}$, denoted by $\set{V^{(\mu)}_{i,j}}$. Then they satisfy the following Schur orthogonality relations: \begin{eqnarray}\label{eq:Schur-Ortho-Relation} \int_G V^{(\mu)}_{i,j}(g) \overline{V}^{(\nu)}_{k,l}(g) dg = \frac1{d_\mu}\delta_{\mu\nu}\delta_{ik}\delta_{jl}, \end{eqnarray} where $dg$ is the uniform probability Haar measure on $G$, the bar denotes complex conjugation, and $d_\mu$ is the dimension of irrep $\mu$. We can analyze \eqref{eq:Schur-Ortho-Relation} as follows: For the orthonormal bases $\set{\ket{i}:i=1,\ldots,d_\mu}$ and $\set{\ket{k}: k=1,\ldots,d_\nu}$, we have \begin{eqnarray} V^{(\mu)}_{i,j}(g) = \Innerm{i}{V^{(\mu)}(g)}{j},~~~\overline{V}^{(\nu)}_{k,l}(g) = \Innerm{k}{\overline{V}^{(\nu)}(g)}{l}. \end{eqnarray} Then \begin{eqnarray*} \int_G V^{(\mu)}(g)\otimes \overline{V}^{(\nu)}(g)dg = \frac1{d_\mu}\delta_{\mu\nu}\sum_{i,j=1}^{d_\mu} \sum_{k,l=1}^{d_\nu}\delta_{ik}\delta_{jl}\out{ik}{jl}.
\end{eqnarray*} That is \begin{eqnarray} \int_G V^{(\mu)}(g)\otimes \overline{V}^{(\nu)}(g)dg = \begin{cases} 0,&~\text{if}~\mu\neq \nu, \\ \frac1{d_\mu}\out{\vec(\mathbb{1}_\mu)}{\vec(\mathbb{1}_\mu)}, &~\text{if}~\mu= \nu. \end{cases} \end{eqnarray} Here $\vec(\mathbb{1}_\mu):=\sum_{i=1}^{d_\mu}\ket{ii}$. This indicates that \begin{eqnarray} \int_G V^{(\mu)}(g)XV^{(\mu),\dagger}(g)dg = \frac1{d_\mu}\Tr{X}\mathbb{1}_\mu, \end{eqnarray} i.e. the map $X\mapsto\int_G V^{(\mu)}(g)XV^{(\mu),\dagger}(g)dg$ is a completely depolarizing channel. Therefore for $\mu\neq\nu$, $\int_G V^{(\mu)}(g)\otimes V^{(\nu),\dagger}(g)dg = 0$, and \begin{eqnarray} \int_G V^{(\mu)}(g)\otimes V^{(\mu),\dagger}(g)dg = \frac1{d_\mu} F^{(\mu)}, \end{eqnarray} where $F^{(\mu)}$ is the swap operator on the 2-fold tensor space of irrep $\mu$. In view of this point, we naturally want to know if the integral \begin{eqnarray}\label{eq:g-g*} \int_G V(g) \otimes V^\dagger(g)dg \end{eqnarray} can be computed explicitly, where $\set{g\to V(g)}$ is any unitary representation of $G$. In particular, when $G = \unitary{d}$ and $V(g) = \bQ(g)$, the integral \eqref{eq:g-g*} is reduced to the form: \begin{eqnarray} \int_{\unitary{d}} \bQ(g) \otimes \bQ^\dagger(g)dg, \end{eqnarray} for which we have derived an explicit formula in the present paper. We leave these topics for future research. \section{Appendix} To better understand Schur-Weyl duality, i.e. the irreps of the unitary and symmetric groups, we collect some relevant materials. The details presented in the Appendix are based on notes of Audenaert \cite{kmra}. \subsection{Partitions} A partition is a sequence $\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_r,\ldots)$ of non-negative integers in non-increasing order $$ \lambda_1\geqslant\lambda_2\geqslant\cdots\geqslant \lambda_r\geqslant\cdots $$ and containing finitely many non-zero terms. The non-vanishing terms $\lambda_j$ are called the \emph{parts} of $\lambda$. The \emph{length} of $\lambda$, denoted $\ell(\lambda)$, is the number of parts of $\lambda$.
The \emph{weight} of $\lambda$, denoted $\abs{\lambda}$, is the sum of the parts: $\abs{\lambda} := \sum_j \lambda_j$. A partition $\lambda$ with weight $\abs{\lambda}=k$ is also called a partition of $k$, and this is denoted $\lambda\vdash k$. We will also use the notation $\lambda\vdash(k,d)$ to indicate that $\lambda\vdash k$ and $\ell(\lambda)\leqslant d$ in one statement. For $\lambda\vdash k$, we use the shorthand $\bar \lambda:= \frac\lambda k$. For $j\geqslant1$, the $j$-th element of $\lambda$ is denoted by $\lambda_j$. This element is a part if $j\leqslant \ell(\lambda)$, otherwise it is 0. It is frequently convenient to use a different notation that indicates the number of times each integer $j=1,2,\ldots,\abs{\lambda}$ occurs as a part, the so-called \emph{multiplicity} $m_j$ of $j$: $$ \lambda = (1^{m_1}2^{m_2}\ldots r^{m_r}\ldots). $$ As a shorthand we will use a superscripted index: $\lambda^j = m_j(\lambda)$. Now one has the relations \begin{eqnarray*} \begin{cases} \sum^k_{j=1}\lambda^j &= \ell(\lambda),\\ \sum^k_{j=1}j\lambda^j &= \abs{\lambda} = k. \end{cases} \end{eqnarray*} When dealing with numerical calculations it is necessary to impose an ordering on the set of partitions. We will adhere here to the lexicographic ordering, in which $\lambda$ precedes $\mu$, denoted $\lambda>\mu$, if and only if the first non-zero difference $\lambda_j-\mu_j$ is positive. \begin{exam} With the above convention, the partitions of $5$ are ordered as follows: $$ (5),~(41),~(32),~(31^2),~(2^21),~(21^3),~(1^5). $$ It is easily seen that the lexicographic ordering is a total order. \end{exam} \subsection{Young frames and Young tableaux} Partitions can be graphically represented by Young diagrams, which are Young tableaux with empty boxes. The $j$-th part $\lambda_j$ corresponds to the $j$-th row of the diagram, consisting of $\lambda_j$ boxes. Conversely, the Young diagrams of $k$ boxes can be uniquely labeled by a partition $\lambda\vdash k$.
We will therefore identify a Young diagram with the partition labeling it. A \emph{Young tableau} (YT) of $d$ objects and of shape $\lambda\vdash k$ is a Young diagram $\lambda$ in which the boxes are labeled by numbers $\set{1,\ldots,d}$. A \emph{standard Young tableau} (SYT) of shape $\lambda\vdash k$ is a Young tableau of $d=k$ objects such that the labels appear \emph{increasing} in every row from left to right, and \emph{increasing} in every column downwards; hence every number occurs exactly once. A \emph{semi-standard Young tableau} (SSYT) of shape $\lambda\vdash k$ is a Young tableau such that the labels appear \emph{non-decreasing} in every row from left to right, and \emph{increasing} in every column downwards. The number of SSYTs of $d$ objects and of shape $\lambda\vdash k$ (imposing the condition $\ell(\lambda)\leqslant d$) is given by $s_\lambda(1^{\times d}) \equiv s_{\lambda,d}(1)$; see below for an explanation. The number $f^\lambda$ of SYTs of shape $\lambda \vdash (k,d)$ is \begin{eqnarray} f^\lambda = k!\frac{\Delta(\mu_1,\ldots,\mu_d)}{\mu_1!\cdots \mu_d!},~d=\ell(\lambda), \end{eqnarray} where $\Delta(\mu_1,\ldots,\mu_d)$ denotes the \emph{difference product} of a non-increasing sequence $$ \Delta(\mu_1,\ldots,\mu_d) := \prod_{1\leqslant i<j\leqslant d} (\mu_i - \mu_j), $$ and the numbers $\mu_j = \mu_j(\lambda)$ are defined by $$ \mu_j(\lambda) := \lambda_j + \ell(\lambda) - j,\text{~for~}j=1,2,\ldots, \ell(\lambda). $$ \subsection{Permutations} We can display a permutation $\pi$ using \emph{cycle notation}. Given $j\in\set{1,\ldots,k}:=[k]$, the elements of the sequence $j,\pi(j),\ldots$ cannot all be distinct. Taking the first power $n$ such that $\pi^n(j)=j$, we have the cycle $$ (j,\pi(j),\ldots,\pi^{n-1}(j)). $$ Equivalently, the cycle $(i,j,\ldots, l)$ means that $\pi$ sends $i$ to $j,\ldots $, and $l$ back to $i$. Now pick an element not in the cycle containing $i$ and iterate this process until all members of $[k]$ have been used.
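The cycle-extraction procedure just described is easy to implement; a small illustrative sketch (assuming Python; the function name is ours, not from the text):

```python
def cycles(perm):
    """Decompose a permutation of {1,...,k} into disjoint cycles.

    `perm` maps j -> perm[j-1]; each cycle starts at its smallest unused
    element, mirroring the iteration described above.
    """
    k, seen, result = len(perm), set(), []
    for start in range(1, k + 1):
        if start in seen:
            continue
        cycle, j = [], start
        while j not in seen:  # iterate j, pi(j), ... until we return to start
            seen.add(j)
            cycle.append(j)
            j = perm[j - 1]
        result.append(tuple(cycle))
    return result

# pi in S_5 sending 1->2, 2->3, 3->1 and fixing 4 and 5:
print(cycles([2, 3, 1, 4, 5]))  # [(1, 2, 3), (4,), (5,)]
```

The output matches the cycle notation $(1,2,3)(4)(5)$ used in the example below.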
For example, take $\pi\in S_5$ with $\pi=(1,2,3)(4)(5)$ in cycle notation. Note that cyclically permuting the elements within a cycle or reordering the cycles themselves does not change the permutation. Thus $$ (1,2,3)(4)(5) = (2,3,1)(4)(5) = (4)(2,3,1)(5) = (4)(5)(3,1,2). $$ A $k$-cycle, or \emph{cycle of length} $k$, is a cycle containing $k$ elements. The \emph{cycle type}, or simply the \emph{type}, of $\pi$ is an expression of the form $$ (1^{m_1},2^{m_2},\ldots,k^{m_k}), $$ where $m_k$ is the number of cycles of length $k$ in $\pi$. A 1-cycle of $\pi$ is called a \emph{fixed-point}. Fixed-points are usually dropped from the cycle notation if no confusion will result. It is easy to see that a permutation $\pi$ satisfies $\pi^2=\mathbb{1}$ if and only if all of $\pi$'s cycles have length 1 or 2. Another way to give the cycle type is as a partition. A \emph{partition} of $k$ is a sequence $$ \lambda = (\lambda_1,\lambda_2,\ldots,\lambda_d), $$ where the $\lambda_i$ are weakly decreasing and $\sum^d_{i=1}\lambda_i=k$. Thus $\pi=(1,2,3)(4)(5)$ corresponds to a partition $(3,1,1)$, and a cycle type $(1^2,2^0,3^1,4^0,5^0)$. In $S_k$, it is not hard to see that if $$ \pi = (i_{11},i_{12},\ldots,i_{1j_1})\cdots (i_{m1},i_{m2},\ldots,i_{mj_m}) $$ in cycle notation, then for any $\sigma\in S_k$ $$ \sigma\pi \sigma^{-1} = (\sigma(i_{11}),\sigma(i_{12}),\ldots,\sigma(i_{1j_1}))\cdots (\sigma(i_{m1}),\sigma(i_{m2}),\ldots,\sigma(i_{mj_m})). $$ It follows that two permutations are in the same conjugate class if and only if they have the \emph{same cycle type}. Thus there is a natural one-to-one correspondence between partitions of $k$ and conjugate classes of $S_k$. We can compute the size of a conjugate class in the following manner. Let $G$ be any group and consider the \emph{centralizer} of $g\in G$ defined by $$ Z_g := \Set{h\in G: hgh^{-1}=g}, $$ i.e., the set of all elements that commute with $g$.
Now, there is a bijection between the cosets of $Z_g$ and the elements of $K_g$, where $K_g$ is the conjugate class of $g$---the set of all elements conjugate to a given $g$, so that $$ \abs{K_g} = \frac{\abs{G}}{\abs{Z_g}}. $$ Now let $G=S_k$ and use $K_\gamma$ for $K_g$ when $g$ has type $\gamma$. Thus if $\gamma=(1^{m_1},2^{m_2},\ldots,k^{m_k})$ and $g\in S_k$ has type $\gamma$, then $\abs{Z_g}$ depends only on $\gamma$ and $$ z_\gamma\stackrel{\smash{\textnormal{\tiny def}}}{=} \abs{Z_g}= 1^{m_1}m_1!2^{m_2}m_2!\cdots k^{m_k}m_k!. $$ The number $\abs{K_g}$ of elements in a conjugacy class $\gamma$ of $S_k$, denoted $h_\gamma$, is given by $$ h_\gamma = \frac{k!}{z_\gamma}. $$ We know that every permutation $\pi\in S_k$ decomposes uniquely as a product of disjoint cycles. The orders of the cycles, sorted in non-increasing order, determine the cycle type of the permutation. Evidently, the cycle type of a permutation $\pi\in S_k$ is a partition of $k$. We will denote the cycle type of a permutation $\pi\in S_k$ by $\gamma = \gamma(\pi)\vdash k$. We shall identify the conjugacy classes with their cycle type, and even write $\pi\in\gamma$ for a permutation $\pi$ with cycle type $\gamma$. For instance, $h_{(k)} = (k-1)!$ and $h_{(1^k)}=1$. Since the class sizes sum to $k!$, we must have $\sum_{\gamma\vdash k} \frac1{z_\gamma} =1$. \subsection{Products of power sums} For an integer $r\geqslant1$, the $r$-th \emph{power sum} in the variables $x_j$ is $p_r = \sum_j x^r_j$. For a partition $\gamma\vdash (k,r)$, the \emph{power sum products} $p_\gamma$ are defined by \begin{eqnarray} p_\gamma &:=& p_{\gamma_1}p_{\gamma_2}\cdots p_{\gamma_r}=\Pa{\sum_j x^{\gamma_1}_j}\Pa{\sum_j x^{\gamma_2}_j}\cdots\Pa{\sum_j x^{\gamma_r}_j}. \end{eqnarray} As a special case, $p_\gamma(1^{\times d}) = d^r$, where $r=\ell(\gamma)$ is nothing but the number of cycles in $\gamma$.
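The counting formulae for $z_\gamma$ and $h_\gamma$ above are easy to verify by brute force; a small Python sketch (helper names are ours; partitions are represented as non-increasing tuples):

```python
from fractions import Fraction
from math import factorial

def partitions(k, max_part=None):
    # All partitions of k as non-increasing tuples.
    if max_part is None:
        max_part = k
    if k == 0:
        yield ()
        return
    for first in range(min(k, max_part), 0, -1):
        for rest in partitions(k - first, first):
            yield (first,) + rest

def z(gamma):
    # z_gamma = prod_j j^{m_j} m_j!, with m_j the multiplicity of the part j.
    out = 1
    for j in set(gamma):
        mj = gamma.count(j)
        out *= j ** mj * factorial(mj)
    return out

k = 5
assert sum(Fraction(1, z(g)) for g in partitions(k)) == 1  # class sizes sum to k!
print({g: factorial(k) // z(g) for g in partitions(k)})    # h_gamma for every class
```

For $k=5$ this reproduces, e.g., $h_{(5)}=4!=24$ and $h_{(1^5)}=1$.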
\subsection{Schur functions} To define the Schur symmetric functions, or S-functions, it is best to start with the polynomial case, i.e. with a finite number $d$ of variables $x_1,\ldots, x_d$. The complete set of S-functions is obtained by letting $d$ tend to infinity. The S-functions $s_\lambda$ of $d$ variables and of homogeneity order $k$ are labeled by partitions $\lambda\vdash (k,d)$, and are defined by \begin{eqnarray} s_\lambda(x_1,\ldots,x_d):= \frac{\det\Pa{x^{\lambda_j+d-j}_i}^d_{i,j=1}}{\det\Pa{x^{d-j}_i}^d_{i,j=1}} \end{eqnarray} (recall again that for $j>\ell(\lambda)$, $\lambda_j=0$). For $\ell(\lambda)>d$, one again has $s_\lambda(x_1,\ldots,x_d)=0$. If some variables assume equal values, a limit has to be taken, since both numerator and denominator vanish in that case. The denominator in the definition of the S-function is a Vandermonde determinant and is thus equal to $\Delta(x_1,\ldots,x_d)$. The numerator is divisible (in the ring of polynomials) by each of the differences $x_i-x_j$, and therefore also by the denominator; hence the S-functions in a finite number of variables really are polynomials. For the important case where all $d$ variables assume the value 1 (which yields the number of semi-standard Young tableaux of shape $\lambda$ with entries in $\set{1,\ldots,d}$), we get, for $\ell(\lambda)\leqslant d$: \begin{eqnarray} s_\lambda(1^{\times d}) = \frac{\Delta(\lambda_1+d-1,\lambda_2+d-2,\ldots,\lambda_d)}{\Delta(d-1,d-2,\ldots,0)}, \end{eqnarray} and, again, $s_\lambda(1^{\times d})=0$ for $\ell(\lambda)>d$. Note that $\Delta(d-1,d-2,\ldots,0)=1!2!\cdots (d-1)!$. In particular, if $\lambda=(k)$, one finds that $s_{(k)}(1^{\times d}) = \binom{k+d-1}{k}$. \subsection{Characters of the symmetric group and unitary group} In the case of the symmetric group, the irreps are labeled by Young diagrams $\lambda$. The character of a permutation $\pi\in S_k$ in irrep $\lambda$ is denoted $\chi_\lambda(\pi)$.
Since characters are class functions, one only needs to find the characters of any representative of a conjugacy class, so that one can use the symbol $\chi_{\lambda,\gamma}$, with $$ \chi_{\lambda,\gamma} = \chi_\lambda(\pi),~~~\forall \pi\in\gamma. $$ The character table is the matrix with elements $\chi_{\lambda,\gamma}$, where $\lambda$ is the row index and $\gamma$ the column index (assuming lexicographic ordering for both). As the conjugacy classes of $S_k$ are labeled by partitions of $k$, there are as many rows as columns, hence the character table is a square matrix. The character of the identity permutation $e$ equals the degree of the representation in the given irrep. One can show that this degree is equal to the number of standard Young tableaux of shape $\lambda$ $$ \chi_\lambda(e) = f^\lambda. $$ The characters in irrep $\lambda=(k)$ are all 1: $$ \chi_{(k),\gamma} = 1,~~~\forall \gamma\vdash k. $$ Thus $f^{(k)}=1$. For $\gamma$ consisting of one cycle, $\gamma=(k)$, the characters vanish except on hook shapes: $$ \chi_{\lambda,(k)} = \begin{cases} (-1)^r,&\lambda = (k-r,1^r),~0\leqslant r\leqslant k-1,\\ 0,&\text{otherwise}. \end{cases} $$ We now briefly consider the irreducible polynomial representations of the full linear group $\rG\rL(d,\mathbb{C})$ (note that the full linear group $\rG\rL(d,\mathbb{C})$ and the unitary group $\unitary{d}$ share the same irreps). These representations get their name from the fact that their matrix elements are polynomials in the elements of the represented matrix. Just like the irreps of the symmetric group, the polynomial irreps of $\rG\rL(d,\mathbb{C})$ are labeled by Young diagrams. The conjugacy classes of $\rG\rL(d,\mathbb{C})$ consist of all matrices $A\in\rG\rL(d,\mathbb{C})$ with the same eigenvalues $(a_1,\ldots,a_d)$ and thus can be labeled by these eigenvalues. The simple characters (known, in this context, as characteristics) are denoted $\phi_\lambda(A) = \phi_\lambda(a_1,\ldots,a_d)$.
According to a famous result by Schur, these characters are the Schur functions (polynomials) of the eigenvalues $$ \phi_\lambda(a_1,\ldots,a_d) = s_\lambda(a_1,\ldots,a_d). $$ \subsection{Representations of $S_k$ and $\rG\rL(d,\mathbb{C})$ on the tensor product space $(\mathbb{C}^d)^{\otimes k}$} By Schur--Weyl duality, the commuting actions of $\rG\rL(d,\mathbb{C})$ (acting diagonally) and of $S_k$ (permuting the tensor factors) decompose the tensor space as $(\mathbb{C}^d)^{\otimes k}\cong \bigoplus_{\lambda\vdash(k,d)}\bQ_\lambda\otimes\mathbf{P}_\lambda$. Here we have denoted the dimension of the subspace $\bQ_\lambda$ by $t^\lambda(d)$, and the dimension of $\mathbf{P}_\lambda$ by $f^\lambda$. The matrix $\bQ_\lambda(A)$ is an irrep of $A\in\rG\rL(d,\mathbb{C})$ of degree $t^\lambda(d)$, operating on $\bQ_\lambda$. The matrix $\mathbf{P}_\lambda(\pi)$ is an irrep of $\pi\in S_k$ of degree $f^\lambda$, operating on $\mathbf{P}_\lambda$. Taking traces yields the corresponding simple characters \begin{eqnarray} \begin{cases} \Tr{\bQ_\lambda(A)} &= s_\lambda(a_1,\ldots,a_d),\\ \Tr{\mathbf{P}_\lambda(\pi)} &= \chi_\lambda(\pi) = \chi_{\lambda,\gamma(\pi)}, \end{cases} \end{eqnarray} where $a_1,\ldots,a_d$ are the eigenvalues of $A$. For the dimensions one finds \begin{eqnarray} \begin{cases} t^\lambda(d) &= \Tr{\bQ_\lambda(\mathbb{1}_d)} = s_\lambda(1^{\times d}),\\ f^\lambda &= \Tr{\mathbf{P}_\lambda(e)} = \chi_\lambda(e), \end{cases} \end{eqnarray} i.e. $t^\lambda(d)$ is the number of semi-standard Young tableaux of shape $\lambda$ with entries in $\set{1,\ldots,d}$, and $f^\lambda$ is the number of standard Young tableaux of shape $\lambda$. In accordance with these decompositions, the tensor space $(\mathbb{C}^d)^{\otimes k}$ splits up into invariant subspaces.
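The resulting dimension count, $d^k=\sum_{\lambda\vdash(k,d)} f^\lambda\, t^\lambda(d)$, can be checked directly: the sketch below (helper names are ours) computes $f^\lambda$ by the well-known hook length formula and $s_\lambda(1^{\times d})$ by the ratio of $\Delta$'s given above.

```python
from math import comb, factorial

def partitions(k, max_part=None):
    """Yield all partitions of k as weakly decreasing tuples."""
    if max_part is None:
        max_part = k
    if k == 0:
        yield ()
        return
    for first in range(min(k, max_part), 0, -1):
        for rest in partitions(k - first, first):
            yield (first,) + rest

def f_dim(la):
    """f^lambda = number of standard Young tableaux, via the hook length formula."""
    k = sum(la)
    conj = [sum(1 for p in la if p > j) for j in range(la[0])]
    hooks = 1
    for i, row in enumerate(la):
        for j in range(row):
            hooks *= (row - j) + (conj[j] - i) - 1  # arm + leg + 1
    return factorial(k) // hooks

def s_one(la, d):
    """s_lambda(1^d) = Delta(lambda_i + d - i) / Delta(d - 1, ..., 0); zero if l(lambda) > d."""
    if len(la) > d:
        return 0
    mu = [(la[i] if i < len(la) else 0) + d - 1 - i for i in range(d)]
    num = den = 1
    for i in range(d):
        for j in range(i + 1, d):
            num *= mu[i] - mu[j]
            den *= j - i
    return num // den

k, d = 4, 3
# dimension count of the decomposition: d^k = sum_lambda f^lambda * s_lambda(1^d)
assert sum(f_dim(la) * s_one(la, d) for la in partitions(k)) == d ** k
# and s_(k)(1^d) = binom(k + d - 1, k)
assert s_one((k,), d) == comb(k + d - 1, k)
print("dimension checks passed")
```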
The subspaces $\bQ_\lambda\otimes\mathbf{P}_\lambda$ are invariant under all $A^{\otimes k}$ and all $\mathbf{P}(\pi)$. They are further reducible into direct sums of $f^\lambda$ subspaces of dimension $t^\lambda(d)$, invariant under the transformations $A^{\otimes k}$ but no longer invariant under permutations $\mathbf{P}(\pi)$. These irreducible invariant subspaces are called the symmetry classes of the tensor space. They are labeled by standard Young tableaux of shape $\lambda$. We now consider the invariant subspaces $\bQ_\lambda\otimes \mathbf{P}_\lambda$, corresponding to the Young diagrams $\lambda$. Their dimension is $f^\lambda s_\lambda(1^{\times d})$. We will denote the projectors on these subspaces by $C_\lambda$. They are the sum of the Young projectors corresponding to the standard Young tableaux $\lambda$. We will consider the Young projectors themselves in the next subsection. The projectors $C_\lambda$ form an orthogonal set and add up to the identity on the full tensor space: \begin{eqnarray} C_\lambda C_{\lambda'} = \delta_{\lambda\lambda'}C_\lambda,~~~\sum_{\lambda\vdash k}C_\lambda = \mathbb{1}_{(\mathbb{C}^d)^{\otimes k}},~~~\Tr{C_\lambda} = f^\lambda s_\lambda(1^{\times d}). \end{eqnarray} Consider the conjugacy classes $\gamma$ of $S_k$ with cycle type $\gamma\vdash k$. We define the ``class average'' of all permutation matrices with cycle type $\gamma$ as \begin{eqnarray} C^\gamma:= \frac1{h_\gamma} \sum_{\pi\in\gamma}\mathbf{P}(\pi).
\end{eqnarray} Note the distinction between the notations $C_\lambda$, where the subscript $\lambda$ labels an irrep, and $C^\gamma$, where the superscript $\gamma$ labels a conjugacy class. Alternatively, for any fixed $\pi\in\gamma$ we can write \begin{eqnarray} C^\gamma = \frac1{k!} \sum_{\sigma\in S_k}\mathbf{P}(\sigma\pi\sigma^{-1}). \end{eqnarray} The projectors $C_\lambda$ can be expressed in terms of the permutations $\mathbf{P}(\pi)$, according to a general relation, as: \begin{eqnarray}\label{eq:central-proj} C_\lambda = \frac{f^\lambda}{k!} \sum_{\pi\in S_k}\chi_\lambda(\pi)\mathbf{P}(\pi), \end{eqnarray} and in terms of the class averages $C^\gamma$ as: \begin{eqnarray}\label{eq:central-proj-class} C_\lambda = f^\lambda \sum_{\gamma\vdash k} \frac1{z_\gamma}\chi_{\lambda,\gamma}C^\gamma. \end{eqnarray} Let $A$ be a matrix with eigenvalues $(a_1,\ldots,a_d)$. The decomposition $$ A^{\otimes k} = \bigoplus_{\lambda\vdash k} \bQ_\lambda(A)\otimes \mathbb{1}_{\mathbf{P}_\lambda} $$ immediately yields $C_\lambda A^{\otimes k}C_\lambda = \bQ_\lambda(A)\otimes \mathbb{1}_{\mathbf{P}_\lambda}$, and, taking the trace of the $\lambda$-term, \begin{eqnarray} \Tr{C_\lambda A^{\otimes k}} = f^\lambda s_\lambda(a_1,\ldots,a_d). \end{eqnarray} For $\pi\in \gamma\vdash k$, it is easy to see that \begin{eqnarray} \Tr{\mathbf{P}(\pi) A^{\otimes k}} = \Tr{C^\gamma A^{\otimes k}} = p_\gamma(a_1,\ldots,a_d).
\end{eqnarray} Combining this with \eqref{eq:central-proj} gives the famous Frobenius formula, relating the characteristics of the full linear group to the characters of the symmetric group: \begin{eqnarray} s_\lambda(a_1,\ldots,a_d) = \sum_{\gamma\vdash k} \frac1{z_\gamma}\chi_{\lambda,\gamma}p_\gamma(a_1,\ldots,a_d). \end{eqnarray} As this holds for any $A$, and thus for any set of values $a_j$ of whatever dimension, it yields the transition matrix from the $p_\gamma$ symmetric functions to the S-functions \begin{eqnarray} s_\lambda = \sum_{\gamma\vdash k} \frac1{z_\gamma} \chi_{\lambda,\gamma}p_\gamma. \end{eqnarray} Using the orthogonality relations of the characters, we find \begin{eqnarray} C^\gamma = \sum_{\lambda\vdash k} \frac1{f^\lambda}\chi_{\lambda,\gamma}C_\lambda,~~~ p_\gamma = \sum_{\lambda\vdash k} \frac1{f^\lambda}\chi_{\lambda,\gamma} s_\lambda. \end{eqnarray} \subsection{Symmetric functions and representations of tensor products} A simple but powerful property of index permutation matrices is that, over a tensor product of Hilbert spaces, they factor as tensor products themselves. With a minor abuse of notation we identify $(\cH_A\otimes\cH_B)^{\otimes k}$ with $\cH_A^{\otimes k}\otimes\cH_B^{\otimes k}$ and write \begin{eqnarray} \mathbf{P}(\pi)(\cH_A\otimes\cH_B) = \mathbf{P}(\pi)(\cH_A)\otimes \mathbf{P}(\pi)(\cH_B). \end{eqnarray} Here $\mathbf{P}(\pi)(\cH_A)$ acts on $\cH^{\otimes k}_A$, and $\mathbf{P}(\pi)(\cH_B)$ acts on $\cH^{\otimes k}_B$.
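This factorization can be verified numerically. In the sketch below (helper names are ours), $R$ is the basis permutation implementing the identification of $(\cH_A\otimes\cH_B)^{\otimes k}$ with $\cH_A^{\otimes k}\otimes\cH_B^{\otimes k}$, and we check the identity for $k=3$ with $d_A=2$, $d_B=3$:

```python
import numpy as np
from itertools import product

def flat(idx, d):
    """Flatten a base-d multi-index (i_1, ..., i_k) to an integer."""
    r = 0
    for i in idx:
        r = r * d + i
    return r

def perm_op(pi, d, k):
    """Index-permutation matrix on (C^d)^{otimes k}: the j-th slot of the
    output basis vector is the pi(j)-th slot of the input (0-indexed)."""
    P = np.zeros((d**k, d**k))
    for idx in product(range(d), repeat=k):
        out = tuple(idx[pi[j]] for j in range(k))
        P[flat(out, d), flat(idx, d)] = 1.0
    return P

dA, dB, k = 2, 3, 3
pi = (1, 2, 0)  # a 3-cycle in S_3

# regrouping permutation R: (H_A (x) H_B)^{(x) k} -> H_A^{(x) k} (x) H_B^{(x) k}
N = (dA * dB) ** k
R = np.zeros((N, N))
for a in product(range(dA), repeat=k):
    for b in product(range(dB), repeat=k):
        col = flat(tuple(ai * dB + bi for ai, bi in zip(a, b)), dA * dB)
        R[flat(a, dA) * dB**k + flat(b, dB), col] = 1.0

lhs = R @ perm_op(pi, dA * dB, k) @ R.T
rhs = np.kron(perm_op(pi, dA, k), perm_op(pi, dB, k))
assert np.array_equal(lhs, rhs)
print("P^{AB}(pi) = P^A(pi) (x) P^B(pi) verified")
```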
Clearly $\mathbf{P}(\pi)(\cH_A\otimes\cH_B)$ acts on $(\cH_A\otimes\cH_B)^{\otimes k}$. As a shorthand, the above equation can be written as $$ \mathbf{P}^{AB}(\pi) = \mathbf{P}^A(\pi)\otimes\mathbf{P}^B(\pi). $$ This corresponds to considering symmetric functions of tensor products of variables. If $x=(x_1,x_2,\ldots)$ and $y=(y_1,y_2,\ldots)$, then their tensor product, which is customarily denoted $xy$ rather than $x\otimes y$, consists of all possible products $x_iy_j$. For power sum products one immediately sees \begin{eqnarray} p_\gamma(xy) =p_\gamma(x)p_\gamma(y). \end{eqnarray} This yields for Schur functions \begin{eqnarray}\label{eq:KC} s_\lambda(xy) = \sum_{\mu,\nu\vdash k} g_{\lambda\mu\nu}s_\mu(x)s_\nu(y), \end{eqnarray} where $g_{\lambda\mu\nu}$ are the so-called \emph{Kronecker coefficients} \begin{eqnarray} g_{\lambda\mu\nu} := \frac1{k!} \sum_{\pi\in S_k} \chi_\lambda(\pi)\chi_\mu(\pi)\chi_\nu(\pi) = \sum_{\gamma\vdash k}\frac1{z_\gamma}\chi_{\lambda,\gamma}\chi_{\mu,\gamma}\chi_{\nu,\gamma}. \end{eqnarray} One of the rare cases in which a closed formula can be given for the Kronecker coefficients is $\lambda = (k)$. One finds $$ g_{(k)\mu\nu} =\delta_{\mu\nu}~~~\text{and}~~~ s_{(k)}(xy) = \sum_{\lambda\vdash k} s_\lambda(x)s_\lambda(y).
$$ A consequence of \eqref{eq:KC} is that for $X$ and $Y$, acting on $\cH_A$ and $\cH_B$, respectively, \begin{eqnarray} \frac1{f^\lambda}\Tr{C_\lambda(X\otimes Y)^{\otimes k}} = \sum_{\mu,\nu\vdash k}g_{\lambda\mu\nu} \Pa{\frac1{f^\mu}\Tr{C_\mu X^{\otimes k}}}\Pa{\frac1{f^\nu}\Tr{C_\nu Y^{\otimes k}}}, \end{eqnarray} where $C_\lambda$ acts on $(\cH_A\otimes\cH_B)^{\otimes k}$, $C_\mu$ on $\cH^{\otimes k}_A$, and $C_\nu$ on $\cH^{\otimes k}_B$. In terms of the irreps of $\rG\rL(d,\mathbb{C})$ we have \begin{eqnarray} \bQ_\lambda(X\otimes Y) \cong \bigoplus_{\mu,\nu\vdash k} g_{\lambda\mu\nu} \bQ_\mu(X) \otimes \bQ_\nu(Y), \end{eqnarray} where $g_{\lambda\mu\nu}$ counts the number of copies of $\bQ_\mu(X) \otimes \bQ_\nu(Y)$ in the direct sum. Consider now the partial trace $$ \Ptr{B}{C^{AB}_\lambda(\mathbb{1}_{\cH^{\otimes k}_A}\otimes C^B_\nu)}, $$ where $C^{AB}_\lambda$ acts on $(\cH_A\otimes\cH_B)^{\otimes k}$ and $C^B_\nu$ on $\cH^{\otimes k}_B$. We have \begin{eqnarray} C^{AB}_\lambda = \frac{f^\lambda}{k!}\sum_{\pi\in S_k}\chi_\lambda(\pi)\mathbf{P}^{AB}(\pi) = \frac{f^\lambda}{k!}\sum_{\pi\in S_k}\chi_\lambda(\pi)\mathbf{P}^A(\pi)\otimes\mathbf{P}^B(\pi), \end{eqnarray} which, together with $C^B_\nu\mathbf{P}^B(\pi)C^B_\nu = \mathbb{1}_{\bQ_\nu}\otimes \mathbf{P}_\nu(\pi)$, implies that \begin{eqnarray} \Ptr{B}{C^{AB}_\lambda(\mathbb{1}_{\cH^{\otimes k}_A}\otimes C^B_\nu)} &=& \frac{f^\lambda}{k!}\sum_{\pi\in
S_k}\chi_\lambda(\pi)\mathbf{P}^A(\pi)\Tr{\mathbf{P}^B(\pi)C^B_\nu}\\ &=&\frac{f^\lambda}{k!}\sum_{\pi\in S_k}\chi_\lambda(\pi)\mathbf{P}^A(\pi)s_\nu(1^{\times d_B})\chi_\nu(\pi)\\ &=&\frac{f^\lambda s_\nu(1^{\times d_B})}{k!}\sum_{\pi\in S_k}\chi_\lambda(\pi)\chi_\nu(\pi)\mathbf{P}^A(\pi)\\ &=& f^\lambda s_\nu(1^{\times d_B})\sum_{\gamma\vdash k}\frac1{z_\gamma}\chi_{\lambda,\gamma}\chi_{\nu,\gamma}C^\gamma\\ &=& f^\lambda s_\nu(1^{\times d_B})\sum_{\gamma\vdash k}\frac1{z_\gamma}\chi_{\lambda,\gamma}\chi_{\nu,\gamma}\Pa{\sum_{\mu\vdash k}\frac1{f^\mu}\chi_{\mu,\gamma}C^A_\mu}\\ &=&f^\lambda s_\nu(1^{\times d_B})\sum_{\mu\vdash k}\frac1{f^\mu}\Pa{\sum_{\gamma\vdash k}\frac1{z_\gamma}\chi_{\lambda,\gamma}\chi_{\mu,\gamma}\chi_{\nu,\gamma}}C^A_\mu\\ &=&f^\lambda s_\nu(1^{\times d_B})\sum_{\mu\vdash k}\frac{g_{\lambda\mu\nu}}{f^\mu}C^A_\mu. \end{eqnarray} Therefore we have \begin{eqnarray} \Ptr{B}{C^{AB}_\lambda(\mathbb{1}_{\cH^{\otimes k}_A}\otimes C^B_\nu)} = f^\lambda s_\nu(1^{\times d_B})\sum_{\mu\vdash k}\frac{g_{\lambda\mu\nu}}{f^\mu}C^A_\mu. \end{eqnarray} This fact implies that \begin{eqnarray} \Ptr{B}{C^{AB}_\lambda} = \sum_{\nu\vdash k}\Ptr{B}{C^{AB}_\lambda(\mathbb{1}_{\cH^{\otimes k}_A}\otimes C^B_\nu)} = \sum_{\nu\vdash k}f^\lambda s_\nu(1^{\times d_B})\sum_{\mu\vdash k}\frac{g_{\lambda\mu\nu}}{f^\mu}C^A_\mu. \end{eqnarray} In particular, for $\lambda=(k)$, \begin{eqnarray} \Ptr{B}{C^{AB}_{(k)}} = \sum_{\mu\vdash k}\frac{s_\mu(1^{\times d_B})}{f^\mu}C^A_\mu.
\end{eqnarray} In addition, we have \begin{eqnarray} \Ptr{B}{C^{AB}_\lambda(C^A_\mu\otimes C^B_\nu)} = f^\lambda s_\nu(1^{\times d_B})\frac{g_{\lambda\mu\nu}}{f^\mu}C^A_\mu, \end{eqnarray} implying \begin{eqnarray} \Tr{C^{AB}_\lambda(C^A_\mu\otimes C^B_\nu)} = f^\lambda g_{\lambda\mu\nu}s_\mu(1^{\times d_A})s_\nu(1^{\times d_B}). \end{eqnarray} Summing over all $\lambda\vdash k$ gives rise to \begin{eqnarray} \Pa{f^\mu s_\mu(1^{\times d_A})} \Pa{f^\nu s_\nu(1^{\times d_B})} &=& \Tr{C^A_\mu\otimes C^B_\nu} = \sum_{\lambda\vdash k}\Tr{C^{AB}_\lambda(C^A_\mu\otimes C^B_\nu)} \\ &=&\Pa{\sum_{\lambda\vdash k}f^\lambda g_{\lambda\mu\nu}}s_\mu(1^{\times d_A})s_\nu(1^{\times d_B}), \end{eqnarray} implying that $f^\mu f^\nu = \sum_{\lambda\vdash k}f^\lambda g_{\lambda\mu\nu}$. \section{Weyl integration formula} This section is based on Bump's book \cite{Bump}. \subsection{Haar measure} If $G$ is a locally compact group, there is, up to a constant multiple, a unique regular Borel measure $\mu_L$ that is invariant under left translation. Here \emph{left translation invariance} means that $\mu_L(M) = \mu_L(gM)$ for all measurable sets $M$ and all $g\in G$. \emph{Regularity} means that \begin{eqnarray} \mu(M) &=& \inf\set{\mu(\cO): M\subseteq \cO, \cO~\text{open}}\\ &=&\sup\set{\mu(\cC): M\supseteq \cC, \cC~\text{compact}}. \end{eqnarray} Such a measure is called a \emph{left Haar measure}. It has the properties that any compact set has finite measure and any nonempty open set has positive measure. We will not prove the existence and uniqueness of the Haar measure, which are well established. Left-invariance of the measure amounts to left-invariance of the corresponding integral, \begin{eqnarray} \int_G f(g'g)d\mu_L(g) = \int_G f(g)d\mu_L(g), \end{eqnarray} for any integrable function $f$ on $G$ and any $g'\in G$. There is also a right-invariant measure $\mu_R$, unique up to constant multiple, called a \emph{right Haar measure}. Left and right Haar measures may or may not coincide.
For example, if $$ G=\Set{\left( \begin{array}{cc} y & x \\ 0 & 1 \\ \end{array} \right): x,y\in \mathbb{R},y>0 }, $$ then it is easy to see that the left- and right-invariant measures are, respectively, $$ d\mu_L=y^{-2}dxdy,~~~d\mu_R=y^{-1}dxdy. $$ They are not the same. However, there are many cases where they do coincide, and if the left Haar measure is also right-invariant, we call $G$ \emph{unimodular}. Conjugation is an automorphism of $G$, and so it takes a left Haar measure to another left Haar measure, which must be a constant multiple of the first. Indeed, \begin{eqnarray} \int_G f(x^{-1}gx)d\mu_L(g) = \int_G f(g)d\mu_L(xgx^{-1}) = \int_G f(g)d\mu_L(gx^{-1}). \end{eqnarray} Clearly $d\mu^x_L(g) := d\mu_L(gx^{-1})$ defines a new left Haar measure. By the uniqueness of left Haar measure, up to constant multiple, $d\mu^x_L(g)=\delta(x)d\mu_L(g)$, which implies that \begin{eqnarray}\label{eq:one-dimen-homo} \int_G f(x^{-1}gx)d\mu_L(g) = \delta(x)\int_G f(g)d\mu_L(g). \end{eqnarray} \begin{prop} The function $\delta:G\to\mathbb{R}^\times_+$ is a continuous homomorphism. The measure $\delta(g)d\mu_L(g)$ is right-invariant, and is denoted $d\mu_R(g)$. \end{prop} \begin{proof} Conjugation by first $x_1$ and then $x_2$ is the same as conjugation by $x_1x_2$ in one step. Indeed, setting $x=x_1x_2$ in \eqref{eq:one-dimen-homo}, we have \begin{eqnarray} \int_G f(x_2^{-1}x_1^{-1}gx_1x_2)d\mu_L(g) = \delta(x_1x_2)\int_G f(g)d\mu_L(g) \end{eqnarray} and \begin{eqnarray} &&\int_G f(x_2^{-1}x_1^{-1}gx_1x_2)d\mu_L(g) = \int_G f_{x_2}(x_1^{-1}gx_1)d\mu_L(g) = \delta(x_1)\int_G f_{x_2}(g)d\mu_L(g)\\ &&=\delta(x_1)\int_G f(x_2^{-1}gx_2)d\mu_L(g) = \delta(x_1)\delta(x_2)\int_G f(g)d\mu_L(g), \end{eqnarray} where $f_{x_2}(g):= f(x_2^{-1}gx_2)$. That is, $$ \delta(x_1x_2)=\delta(x_1)\delta(x_2).
$$ Replacing $f$ by $f\delta$ in \begin{eqnarray} \int_G f(gx)d\mu_L(g) = \delta(x)\int_G f(g)d\mu_L(g), \end{eqnarray} we get \begin{eqnarray} \int_G f(gx)\delta(gx)d\mu_L(g) = \delta(x)\int_G f(g)\delta(g)d\mu_L(g), \end{eqnarray} which gives rise to \begin{eqnarray} \int_G f(gx)\delta(g)d\mu_L(g) = \int_G f(g)\delta(g)d\mu_L(g), \end{eqnarray} that is, \begin{eqnarray} \int_G f(gx)d\mu_R(g) = \int_G f(g)d\mu_R(g), \end{eqnarray} completing the proof. \end{proof} \begin{prop} If $G$ is compact, then $G$ is unimodular and $\mu_L(G)<\infty$. \end{prop} \begin{proof} Since $\delta$ is a homomorphism, the image of $\delta$ is a subgroup of $\mathbb{R}^\times_+$. Since $G$ is compact, $\delta(G)$ is also compact, and the only compact subgroup of $\mathbb{R}^\times_+$ is just $\set{1}$. Thus $\delta$ is trivial, so a left Haar measure is right-invariant. We have already noted that the Haar volume of any compact subset of a locally compact group is finite, so if $G$ is compact, its Haar volume is finite. \end{proof} If $G$ is compact, then it is natural to normalize the Haar measure so that $G$ has volume 1. To simplify our notation, we will denote $\int_G f(g)d\mu_L(g)$ by $\int_Gf(g)dg$. \begin{prop} If $G$ is unimodular, then the map $g\to g^{-1}$ is an isometry. \end{prop} \begin{proof} It is easy to see that $g\to g^{-1}$ turns a left Haar measure into a right Haar measure. If left and right Haar measures agree, then $g\to g^{-1}$ multiplies the left Haar measure by a positive constant, which must be 1 since the map has order 2. \end{proof} \subsection{Weyl integration formula} Let $G$ be a compact, connected Lie group, and let $T$ be a maximal torus. It is already known that every conjugacy class meets $T$. Thus we should be able to reduce a Haar integral over $G$ to an integral over $T$. The following formula, the \emph{Weyl integration formula}, allows this; it is therefore fundamental in representation theory and in other areas, such as random matrix theory.
\begin{eqnarray} \int_G f(g)dg = \frac1{\abs{W(G)}}\int_T \Pa{\int_{G/T}f(gtg^{-1})\abs{\det\Pa{(\mathbb{1}-\mathrm{Ad}(t))|_{\mathfrak{t}^\perp}}}d(gT)}dt. \end{eqnarray} If $G$ is a locally compact group and $H$ a closed subgroup, then the quotient space $G/H$ consisting of all cosets $gH$ with $g\in G$, given the quotient topology, is a locally compact Hausdorff space. If $X$ is a locally compact Hausdorff space, let $C_c(X)$ be the space of continuous, compactly supported functions on $X$. A linear functional $\cI$ on $C_c(X)$ is called \emph{positive} if $\cI(f)\geqslant0$ whenever $f$ is nonnegative. According to the \emph{Riesz representation theorem}, every such $\cI$ is of the form \begin{eqnarray} \cI(f) = \int_X f d\mu \end{eqnarray} for some regular Borel measure $d\mu$. \begin{prop} Let $G$ be a locally compact group, and let $H$ be a compact subgroup. Let $d\mu_G$ and $d\mu_H$ be left Haar measures on $G$ and $H$, respectively. Then there exists a regular Borel measure $d\mu_{G/H}$ on $G/H$ which is invariant under the action of $G$ by left translation. The measure $d\mu_{G/H}$ may be normalized so that, for $f\in C_c(G)$, we have \begin{eqnarray} \int_G f(g)d\mu_G(g) = \int_{G/H} \Pa{\int_H f(gh)d\mu_H(h)}d\mu_{G/H}(gH). \end{eqnarray} Here the function $g\mapsto \int_Hf(gh)d\mu_H(h)$ is constant on the cosets $gH$, and we are therefore identifying it with a function on $G/H$. \end{prop} \begin{proof} We may choose the normalization of $d\mu_H$ so that $H$ has total volume 1. We define a map $\Lambda: C_c(G)\to C_c(G/H)$ by \begin{eqnarray} (\Lambda f)(g) = \int_H f(gh)d\mu_H(h). \end{eqnarray} Note that $\Lambda f$ is a function on $G$ which is right invariant under translation by elements of $H$, so it may be regarded as a function on $G/H$. Since $H$ is compact, $\Lambda f$ is compactly supported.
If $\phi\in C_c(G/H)$, regarding $\phi$ as a function on $G$, we have $\Lambda\phi=\phi$ because \begin{eqnarray} (\Lambda \phi)(g) = \int_H \phi(gh)d\mu_H(h) = \int_H \phi(g)d\mu_H(h) = \phi(g). \end{eqnarray} This shows that $\Lambda$ is surjective. We may therefore define a linear functional $\cI$ on $C_c(G/H)$ by \begin{eqnarray} \cI(\Lambda f) = \int_G f(g)d\mu_G(g),~~~f\in C_c(G) \end{eqnarray} provided we check that this is well-defined. We must show that if $\Lambda f=0$, then \begin{eqnarray} \cI(\Lambda f) = 0, \end{eqnarray} i.e. $\int_G f(g)d\mu_G(g)=0$. We note that the function $(g,h)\mapsto f(gh)$ is compactly supported and continuous on $G\times H$, so if $\Lambda f=0$, we may use Fubini's theorem to write \begin{eqnarray} 0&=&\int_G (\Lambda f)(g) d\mu_G(g) = \int_G\Pa{\int_H f(gh)d\mu_H(h)}d\mu_G(g)\\ &=&\int_H\Pa{\int_G f(gh)d\mu_G(g)}d\mu_H(h). \end{eqnarray} In the inner integral on the right-hand side we make the variable change $g\mapsto gh^{-1}$. Recalling that $d\mu_G(g)$ is \emph{left} Haar measure, this produces a factor of $\delta_G(h)$, where $\delta_G(h)$ is the modular homomorphism. Thus $$ 0=\int_H \delta_G(h) \Pa{\int_G f(g)d\mu_G(g)}d\mu_H(h). $$ Now the group $H$ is compact, so its image under $\delta_G$ is a compact subgroup of $\mathbb{R}^\times_+$, which must be $\set{1}$. Thus $\delta_G(h)=1$ for all $h\in H$, and we obtain $\int_G f(g)d\mu_G(g)=0$, justifying the definition of the functional $\cI$. The existence of the measure on $G/H$ now follows from the Riesz representation theorem. \end{proof} \begin{exam} Suppose that $G=\unitary{n}$. A maximal torus is $$ \mathbb{T} = \Set{\mathrm{diag}(t_1,\ldots,t_n): \abs{t_1}=\cdots=\abs{t_n}=1 }. $$ Its normalizer $N(\mathbb{T})$ consists of all monomial matrices (matrices with a single nonzero entry in each row and column) so that the quotient $N(\mathbb{T})/\mathbb{T}\cong S_n$. 
\end{exam} \begin{prop} Let $T$ be a maximal torus in the compact connected Lie group $G$, and let $\mathfrak{t},\mathfrak{g}$ be the Lie algebras of $T$ and $G$, respectively. \begin{enumerate}[(i)] \item Any vector in $\mathfrak{g}$ fixed by $\mathrm{Ad}(T)$ is in $\mathfrak{t}$. \item We have $\mathfrak{g}=\mathfrak{t}\oplus \mathfrak{t}^\perp$, where $\mathfrak{t}^\perp$ is invariant under $\mathrm{Ad}(T)$. Under the restriction of $\mathrm{Ad}$ to $T$, $\mathfrak{t}^\perp$ decomposes into a direct sum of two-dimensional real irreps of $T$. \end{enumerate} \end{prop} Let $W(G)$ be the Weyl group of $G$. The Weyl group acts on $T$ by conjugation. Indeed, the elements of the Weyl group are cosets $w=nT$ for $n\in N(T)$. If $t\in T$, the element $ntn^{-1}$ depends only on $w$, so by abuse of notation we denote it $wtw^{-1}$. \begin{thrm} (i) Two elements of $T$ are conjugate in $G$ if and only if they are conjugate in $N(T)$.\\ (ii) The inclusion $T\to G$ induces a bijection between the orbits of $W(G)$ on $T$ and the conjugacy classes of $G$. \end{thrm} \begin{proof} Suppose that $t,u\in T$ are conjugate in $G$, say $gtg^{-1}=u$. Let $H$ be the connected component of the identity in the centralizer of $u$ in $G$. It is a closed Lie subgroup of $G$. Both $T$ and $gTg^{-1}$ are contained in $H$ since they are connected commutative groups containing $u$. As they are maximal tori in $G$, they are maximal tori in $H$, and so they are conjugate in the compact connected group $H$. If $h\in H$ is such that $hTh^{-1}=gTg^{-1}$, then $w=h^{-1}g\in N(T)$. Since $wtw^{-1}=h^{-1}uh=u$, we see that $t$ and $u$ are conjugate in $N(T)$. Since $G$ is the union of the conjugates of $T$, (ii) is a restatement of (i). \end{proof} \begin{prop} The centralizer $C(T)=T$. \end{prop} \begin{prop} There exists a dense open set $\Omega$ of $T$ such that the $\abs{W(G)}$ elements $wtw^{-1}~(w\in W(G))$ are all distinct for $t\in \Omega$.
\end{prop} \begin{proof} If $w\in W(G)$, let $$ \Omega_w=\set{t\in T: wtw^{-1}\neq t}. $$ It is an open subset of $T$ since its complement is evidently closed. If $w\neq\mathbb{1}$ and $t$ is a generator of $T$, then $t\in\Omega_w$: otherwise, if $n\in N(T)$ represents $w$, then $n\in C(t)=C(T)$, so $n\in T$, contradicting $w\neq\mathbb{1}$. By Kronecker's theorem the generators of $T$ are dense in $T$, and it follows that $\Omega_w$ is a dense open set. The finite intersection $\Omega=\cap_{w\neq\mathbb{1}}\Omega_w$ thus has the required properties. \end{proof} \begin{thrm}[Weyl] If $f$ is a class function, and if $dg$ and $dt$ are Haar measures on $G$ and $T$ (normalized so that $G$ and $T$ have volume 1), then \begin{eqnarray} \int_G f(g)dg = \frac1{\abs{W(G)}}\int_T f(t)\det\Pa{\Br{\mathrm{Ad}(t^{-1})-\mathbb{1}_{\mathfrak{t}^\perp}}|_{\mathfrak{t}^\perp}}dt. \end{eqnarray} \end{thrm} \begin{proof} Let $\cX=G/T$. We give $\cX$ the measure $d_\cX$ invariant under left translation by $G$ such that $\cX$ has volume 1. Consider the map $$ \phi: \cX\times T\to G,~~~\phi(xT,t)=xtx^{-1}. $$ Both $\cX\times T$ and $G$ are orientable manifolds of the same dimension. Of course, $G$ and $T$ are both given the Haar measures such that $G$ and $T$ have volume 1. We choose volume elements on the Lie algebras $\mathfrak{g}$ and $\mathfrak{t}$ of $G$ and $T$, respectively, so that the Jacobians of the exponential maps $\mathfrak{g}\to G$ and $\mathfrak{t}\to T$ at the identity are 1. We compute the Jacobian $J\phi$ of $\phi$. Parameterize a neighborhood of $xT$ in $\cX$ by a chart based on a neighborhood of the origin in $\mathfrak{t}^\perp$. This chart is the map $$ \mathfrak{t}^\perp \ni A\mapsto xe^AT. $$ We also make use of the exponential map to parameterize a neighborhood of $t\in T$. This is the chart $\mathfrak{t}\ni B\mapsto te^B$.
We therefore have the chart near the point $(xT,t)$ in $\cX\times T$ mapping $$ \mathfrak{t}^\perp\times \mathfrak{t}\ni (A,B)\to (xe^AT,te^B)\in\cX\times T $$ and, in these coordinates, $\phi$ is the map $$ (A,B)\mapsto xe^Ate^Be^{-A}x^{-1}. $$ To compute the Jacobian of this map, we translate on the left by $t^{-1}x^{-1}$ and on the right by $x$. There is no harm in this because these maps are Haar isometries. We are reduced to computing the Jacobian of the map $$ (A,B)\mapsto t^{-1}e^Ate^Be^{-A} = e^{\mathrm{Ad}(t^{-1})A}e^Be^{-A}. $$ Identifying the tangent space of the real vector space $\mathfrak{t}^\perp\times \mathfrak{t}$ with itself (that is, with $\mathfrak{g}=\mathfrak{t}^\perp\times \mathfrak{t}$), the differential of this map is $$ A\oplus B\mapsto \Pa{\mathrm{Ad}(t^{-1})-\mathbb{1}_{\mathfrak{t}^\perp}}A\oplus B. $$ The Jacobian is the determinant of the differential, so \begin{eqnarray}\label{eq:Jacobian} (J\phi)(xT,t) = \det\Pa{\Br{\mathrm{Ad}(t^{-1})-\mathbb{1}_{\mathfrak{t}^\perp}}|_{\mathfrak{t}^\perp}}. \end{eqnarray} The map $\phi: \cX\times T\to G$ is a $\abs{W(G)}$-fold cover over a dense open set and so, for any function $f$ on $G$, we have \begin{eqnarray} \int_G f(g)dg = \frac1{\abs{W(G)}}\int_{\cX\times T} f(\phi(xT,t))(J\phi)(xT,t)\,d_\cX\, dt. \end{eqnarray} The integrand $f(\phi(xT,t))(J\phi)(xT,t)=f(t)\det\Pa{\Br{\mathrm{Ad}(t^{-1})-\mathbb{1}_{\mathfrak{t}^\perp}}|_{\mathfrak{t}^\perp}}$ is independent of $x$ since $f$ is a class function, and the result follows. \end{proof} \begin{remark} Let $G$ be a Lie group and $\mathfrak{g}$ its Lie algebra. Identify both $\rT_0\mathfrak{g}$ and $\rT_eG$ with $\mathfrak{g}$. Then, $(d\exp)_0: \rT_0\mathfrak{g}\to \rT_eG$ is the identity map. Indeed, \begin{eqnarray*} (d\exp)_0(A) = \left.\frac{d}{dt}\right|_{t=0} \exp(0+tA) = A. \end{eqnarray*} That is, $(d\exp)_0$ is the identity map over $\mathfrak{g}$.
\end{remark} \begin{prop} Let $G=\unitary{n}$, and let $\mathbb{T}$ be the diagonal torus. Writing $$ t=\mathrm{diag}(t_1,\ldots,t_n)\in \mathbb{T}, $$ and letting $\int_\mathbb{T}dt$ be the Haar measure on $\mathbb{T}$ normalized so that its volume is 1, we have \begin{eqnarray} \int_G f(g)dg = \frac1{n!}\int_\mathbb{T}f(t)\prod_{i<j}\abs{t_i-t_j}^2dt. \end{eqnarray} \end{prop} \begin{proof} We need to check that $$ \det\Pa{\Br{\mathrm{Ad}(t^{-1})-\mathbb{1}_{\mathfrak{t}^\perp}}|_{\mathfrak{t}^\perp}} = \prod_{i<j}\abs{t_i-t_j}^2. $$ To compute this determinant, we may as well consider the linear transformation induced by $\mathrm{Ad}(t^{-1})-\mathbb{1}_{\mathfrak{t}^\perp}$ on the complexified vector space $\mathbb{C}\otimes \mathfrak{t}^\perp$. We may identify $\mathbb{C}\otimes\mathfrak{u}(n)$ with $\mathfrak{gl}(n,\mathbb{C})=M_n(\mathbb{C})$. We recall that $\mathbb{C}\otimes \mathfrak{t}^\perp$ is spanned by the $T$-eigenspaces in $\mathbb{C}\otimes\mathfrak{u}(n)$ corresponding to nontrivial characters of $T$. These are spanned by the elementary matrices $E_{ij}$ with a 1 in the $(i,j)$-th position and zeros elsewhere, where $1\leqslant i,j\leqslant n$ and $i\neq j$. The eigenvalue of $t$ on $E_{ij}$ is $t_it^{-1}_j$. Hence \begin{eqnarray} \det\Pa{\Br{\mathrm{Ad}(t^{-1})-\mathbb{1}_{\mathfrak{t}^\perp}}|_{\mathfrak{t}^\perp}} =\prod_{i\neq j}(t_it^{-1}_j - 1) = \prod_{i<j}(t_it^{-1}_j - 1)(t_jt^{-1}_i - 1). \end{eqnarray} Since $\abs{t_i}=\abs{t_j}=1$, we have $$ (t_it^{-1}_j - 1)(t_jt^{-1}_i - 1) = (t_i-t_j)(t^{-1}_i-t^{-1}_j)=\abs{t_i-t_j}^2. $$ This completes the proof. \end{proof} \begin{remark} Let $G=\unitary{1}=\mathbb{S}^1$, and let $\rho_n:\mathbb{S}^1\to \rG\rL(1,\mathbb{C})$ be given by $\rho_n(e^{\sqrt{-1}\theta}) = e^{\sqrt{-1}n\theta}$. Then $dg=d\theta/2\pi$ and $$ \frac1{2\pi}\int^{2\pi}_{0} e^{\sqrt{-1}n\theta}e^{-\sqrt{-1}m\theta}d\theta=\delta_{mn}.
$$ \end{remark} \begin{cor}\label{cor:class-integration} If $f$ is a class function over $\unitary{n}$, then \begin{eqnarray} \int_{\unitary{n}} f(u)du &=& \frac1{n!}\int_{\mathbb{T}^n}f(D(\theta))J(\theta)dD(\theta)\\ &=&\frac1{(2\pi)^nn!}\overbrace{\int^{2\pi}_0\cdots\int^{2\pi}_0}^{n} f(D(\theta))J(\theta)d\theta_1\cdots d\theta_n, \end{eqnarray} where $$ D(\theta):= \mathrm{diag}\Pa{e^{\sqrt{-1}\theta_1},\ldots,e^{\sqrt{-1}\theta_n}}~~\text{and}~~J(\theta) := \prod_{i<j}\abs{e^{\sqrt{-1}\theta_i}-e^{\sqrt{-1}\theta_j}}^2. $$ \end{cor} \begin{remark} We know that for the one-dimensional torus $\unitary{1}$, the normalized Haar measure is defined as $du:=\frac{d\theta}{2\pi}$. This implies that for the $n$-dimensional torus of $\unitary{n}$: $$ \mathbb{T}^n=\overbrace{\unitary{1}\times\cdots\times \unitary{1}}^n, $$ the normalized Haar measure is given by the product measure of $n$ one-dimensional measures of $\unitary{1}$. Thus for $D(\theta)\in\mathbb{T}^n$, described by $D(\theta)= u_1(\theta)\times\cdots\times u_n(\theta)$ with $du_j(\theta)=d\theta_j/2\pi$, the normalized Haar measure is defined as \begin{eqnarray*} dD(\theta) := du_1\times\cdots \times du_n = \frac{d\theta_1}{2\pi}\times\cdots\times\frac{d\theta_n}{2\pi} = \frac1{(2\pi)^n}d\theta_1\cdots d\theta_n = \frac1{(2\pi)^n}d\theta, \end{eqnarray*} where $d\theta:= d\theta_1\cdots d\theta_n$. \end{remark} Taking $f\equiv1$ in Corollary~\ref{cor:class-integration}, we have \begin{eqnarray} 1 &=& \int_{\unitary{n}}du = \frac1{n!}\int_{\mathbb{T}^n}J(\theta)dD(\theta)\\ &=&\frac1{(2\pi)^nn!}\overbrace{\int^{2\pi}_0\cdots\int^{2\pi}_0}^{n} J(\theta)d\theta_1\cdots d\theta_n, \end{eqnarray} implying \begin{eqnarray}\label{eq:n!} n!=\int_{\mathbb{T}^n}J(\theta)dD(\theta) = \frac1{(2\pi)^n}\overbrace{\int^{2\pi}_0\cdots\int^{2\pi}_0}^{n} J(\theta)d\theta. \end{eqnarray} In what follows, we give a check on the identity in \eqref{eq:n!}. Here is another way of writing $J(\theta)$, which is useful.
Set $e^{\sqrt{-1}\theta_j}=\zeta_j$. Then \begin{eqnarray} J(\theta) = V(\zeta)V(\bar\zeta), \end{eqnarray} where $$ V(\zeta):=V(\zeta_1,\ldots,\zeta_n) = \prod_{1\leqslant i<j\leqslant n} (\zeta_j-\zeta_i). $$ Now $V(\zeta)$ is a Vandermonde determinant: $$ V(\zeta) = \det\left( \begin{array}{cccc} 1 & 1 & \cdots & 1 \\ \zeta_1 & \zeta_2 & \cdots & \zeta_n \\ \vdots & \vdots & \ddots & \vdots \\ \zeta^{n-1}_1 & \zeta^{n-1}_2 & \cdots & \zeta^{n-1}_n \\ \end{array} \right). $$ Define $a_{ij}:=\zeta^{i-1}_j~(i,j\in\set{1,\ldots,n})$. We can form an $n\times n$ matrix $A=[a_{ij}]$ in terms of $a_{ij}$. Clearly, $V(\zeta)=\det(A)$. By the Leibniz formula, the determinant expands as \begin{eqnarray} \det(A) = \sum_{\pi\in S_n}\op{sign}(\pi)a_{\pi(1)1}\cdots a_{\pi(n)n}. \end{eqnarray} Since $a_{\pi(i)j}=\zeta^{\pi(i)-1}_j$, we obtain that \begin{eqnarray} V(\zeta) = \sum_{\pi\in S_n} \op{sign}(\pi)\zeta^{\pi(1)-1}_1\zeta^{\pi(2)-1}_2\cdots\zeta^{\pi(n)-1}_n. \end{eqnarray} Now $\bar\zeta_j=\zeta^{-1}_j$ for $\zeta_j\in \unitary{1}$, so \begin{eqnarray} J(\theta) = \sum_{(\pi,\sigma)\in S_n\times S_n} \op{sign}(\pi)\op{sign}(\sigma)\zeta^{\pi(1)-\sigma(1)}_1\zeta^{\pi(2)-\sigma(2)}_2\cdots\zeta^{\pi(n)-\sigma(n)}_n.
\end{eqnarray} Hence \begin{eqnarray*} &&\int_{\mathbb{T}^n}J(\theta)dD(\theta)=\sum_{(\pi,\sigma)\in S_n\times S_n}\op{sign}(\pi)\op{sign}(\sigma)\int_{\mathbb{T}^n}dD(\theta)\Pa{ \zeta^{\pi(1)-\sigma(1)}_1\zeta^{\pi(2)-\sigma(2)}_2\cdots\zeta^{\pi(n)-\sigma(n)}_n}\\ &&=\sum_{(\pi,\sigma)\in S_n\times S_n}\op{sign}(\pi)\op{sign}(\sigma) \Pa{\frac1{2\pi}\int^{2\pi}_0e^{\sqrt{-1}(\pi(1)-\sigma(1))\theta_1}d\theta_1}\times\cdots\times\Pa{\frac1{2\pi}\int^{2\pi}_0e^{\sqrt{-1}(\pi(n)-\sigma(n))\theta_n}d\theta_n}\\ &&=\sum_{(\pi,\sigma)\in S_n\times S_n}\op{sign}(\pi)\op{sign}(\sigma)\delta_{\pi(1)\sigma(1)}\cdots\delta_{\pi(n)\sigma(n)}=\sum_{\pi\in S_n}1 = n!, \end{eqnarray*} where we used the fact that $$ \frac1{2\pi}\int^{2\pi}_0e^{\sqrt{-1}(\pi(k)-\sigma(k))\theta_k}d\theta_k=\delta_{\pi(k)\sigma(k)}. $$ Denote $\theta = (\theta_1,\ldots,\theta_n)$ and define functionals $\alpha_{ij}(\theta) = \theta_i-\theta_j$. We mention another way of writing $J(\theta)$, i.e. \begin{eqnarray} J(\theta) = A(\theta)\overline{A(\theta)}, \end{eqnarray} where $$ A(\theta) := \prod_{i<j}\Pa{1 - e^{-\sqrt{-1}\alpha_{ij}(\theta)}}. $$ \begin{remark} In fact, by using Selberg's integral \cite[Eq.(17.7.1), pp323]{Mehta}, one gets the following more general identity: \begin{eqnarray} \frac1{(2\pi)^n}\int^{2\pi}_0\cdots \int^{2\pi}_0 \prod_{1\leqslant i<j\leqslant n}\abs{e^{\sqrt{-1}\theta_i}-e^{\sqrt{-1}\theta_j}}^{2\gamma} d\theta_1\cdots d\theta_n = \frac{(n\gamma)!}{(\gamma!)^n}, \end{eqnarray} where $\gamma\in\mathbb{N}$. Then, letting $\gamma=1$ gives that \begin{eqnarray} \frac1{(2\pi)^n}\int^{2\pi}_0\cdots \int^{2\pi}_0 \prod_{1\leqslant i<j\leqslant n}\abs{e^{\sqrt{-1}\theta_i}-e^{\sqrt{-1}\theta_j}}^2 d\theta_1\cdots d\theta_n = n!, \end{eqnarray} as obtained above.
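The identity \eqref{eq:n!} can also be cross-checked numerically. The sketch below (a hedged illustration assuming NumPy is available; function names and grid sizes are our own choices) approximates the normalized integral of $J(\theta)$ by a uniform-grid rule, which is exact here since $J$ is a trigonometric polynomial of low degree in each variable:

```python
import numpy as np
from itertools import product

def weyl_density(thetas):
    """J(theta) = prod_{i<j} |e^{i th_i} - e^{i th_j}|^2."""
    z = np.exp(1j * np.asarray(thetas))
    n = len(z)
    J = 1.0
    for i in range(n):
        for j in range(i + 1, n):
            J *= abs(z[i] - z[j]) ** 2
    return J

def normalized_integral(n, grid=32):
    """Approximate (2*pi)^{-n} * int J(theta) dtheta on a uniform grid;
    the (2*pi)^{-n} factor and the grid cell volumes cancel exactly."""
    pts = np.arange(grid) * 2 * np.pi / grid
    return sum(weyl_density(t) for t in product(pts, repeat=n)) / grid**n

print(normalized_integral(2))           # ~ 2.0 = 2!
print(normalized_integral(3, grid=24))  # ~ 6.0 = 3!
```

Because all nonzero Fourier frequencies of $J$ are far below the grid size, the rule recovers $n!$ up to floating-point error.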
In addition, another integral of interest is the following \cite[Eq.(17.11.11), pp331]{Mehta}: \begin{eqnarray} \cI_n(k,\gamma):=\frac1{(2\pi)^n}\frac{(\gamma!)^n}{(n\gamma)!}\int^{2\pi}_0\cdots \int^{2\pi}_0 \abs{\sum^n_{l=1}e^{\sqrt{-1}\theta_l}}^{2k}\prod_{1\leqslant i<j\leqslant n}\abs{e^{\sqrt{-1}\theta_i}-e^{\sqrt{-1}\theta_j}}^{2\gamma} d\theta_1\cdots d\theta_n. \end{eqnarray} In fact, if $\gamma=1$, then \begin{eqnarray} \cI_n(k,1)= \int_{\unitary{n}} \abs{\Tr{U}}^{2k}d\mu(U). \end{eqnarray} For $0\leqslant k\leqslant n$, we have $\cI_n(k,1)=k!$. However, $\cI_n(k,\gamma)$ is not known in closed form for general $\gamma>1$. \end{remark} \subsubsection*{Acknowledgement} LZ acknowledges the financial support from National Natural Science Foundation of China (No.11301124). LZ would also like to thank Hai-Jiang Yu and Samuel Braunstein for their comments.
\section{Appendix} \input{parts/appendix} \end{document} \subsection{Proofs of \cref{lemma:reparam,lemma:scaled-Brenier}} \label{subsec:proof-scaled-brenier} \begin{proof}[Proof of \cref{lemma:reparam}] Note that the continuity of $\psi_1$ and $\psi_2$ and their inverses ensures their measurability. We have the following equalities: \begin{align*} \argmin_{\pi \in \Pi(\mu,\nu)} \int c(\psi_1 (x), \psi_2(y))\, \mathrm d\pi(x,y)& =\argmin_{\pi \in \Pi(\mu,\nu)} \int c( u,v)\, \mathrm d(\psi_1,\psi_2)_\#\pi(u,v) \\ & =(\psi_1 ^{-1} ,\psi_2^{-1} )_\# \argmin_{\tilde\pi \in \Pi(\psi_{1\#} \mu, \psi_{2\#}\nu)} \int c(u,v)\, \mathrm d\tilde\pi(u,v) \end{align*} since the mapping $(\psi_1 ^{-1} ,\psi_2^{-1} )_\#$ is a one-to-one correspondence from $\Pi(\psi_{1\#} \mu, \psi_{2\#}\nu)$ to $\Pi(\mu,\nu)$ by bijectivity of $\psi_1$ and $\psi_2$. This bijectivity ensures that any optimal deterministic transport plan $\tilde\pi^\star$ between $\psi_{1\#}\mu$ and $\psi_{2\#}\nu$ induces an optimal deterministic transport plan $\pi^\star$ between $\mu$ and $\nu$, and \textit{vice versa}. Writing $\tilde\pi^\star =(\operatorname{id},T)_\#(\psi_{1\#}\mu)$, this plan $\pi^\star$ is given by \begin{align*} \pi ^\star &= (\psi_1 ^{-1} ,\psi_2^{-1} )_\#\tilde \pi ^\star \\ & =(\psi_1 ^{-1} ,\psi_2^{-1} )_\#(\operatorname{id},T)_\#\psi_{1\#} \mu \\ & =(\operatorname{id},\psi_2^{-1}\circ T\circ\psi_1)_\#\mu\,. \qedhere \end{align*} \end{proof} \begin{proof}[Proof of \cref{lemma:scaled-Brenier}] As $\psi_{1\#}\mu$ has a density w.r.t.~the Lebesgue measure since $\psi_1$ is a diffeomorphism and $\psi_{1\#}\mu$ and $\psi_{2\#}\nu$ have compact support, Brenier's theorem states that there exists a unique optimal transport plan between $\psi_{1\#}\mu$ and $\psi_{2\#}\nu$ and that it is induced by a map $\nabla \phi$, where $\phi$ is a convex function. Using \cref{lemma:reparam} then gives the result.
\end{proof} \begin{remark}[Discussion on the hypothesis of \cref{lemma:scaled-Brenier}] In the proof of \cref{lemma:scaled-Brenier}, we only needed (i) $\psi_1$, $\psi_2$ and their inverses to be measurable, (ii) $\psi_{1\#}\mu$ to have a density w.r.t.~Lebesgue, and (iii) $\psi_{1\#}\mu$ and $\psi_{2\#}\nu$ to have compact support. Imposing $\psi_1$ to be a diffeomorphism and $\psi_2$ to be a homeomorphism ensures both (i) and (ii) and is natural to expect. \end{remark} \subsection{Measurable selection of maps in the manifold setting} \subsubsection{Measurability of set-valued maps} \label{sec:measurable_set_valued} Let $X, U$ be two topological spaces, and let $\mathcal{A}$ denote the Borel $\sigma$-algebra on $X$. A set-valued map $S$ is a map from $X$ to $P(U)$ (the set of subsets of $U$). This will be denoted by $S : X \rightrightarrows U$. The idea is to introduce notations which are consistent with the case where $S(x) = \{u\}$ for all $x$ in $X$, where we want to retrieve the standard case of maps $X \to U$. Definitions are taken from \cite{rockafellar2009variational}, where measurability is studied when $U = \mathbb{R}^n$. Most results and proofs adapt to a more general setting---in particular when $U$ is a complete Riemannian manifold $M$. For the sake of completeness, we provide all the proofs, and highlight those that require specific care by replacing $\mathbb{R}^n$ by such a manifold. Of importance for our proofs, we define: \begin{itemize} \item The \emph{pre-image} of a set $B \subset U$ is given by \[ S^{-1}(B) = \{ x \in X,\ S(x) \cap B \neq \varnothing\}. \] \item The \emph{domain} of $S$ is $S^{-1}(U)$, that is $\{ x \in X,\ S(x) \neq \varnothing \}$. \end{itemize} We will often use the following relation: if a set $A$ can be written as $A = \bigcup A_k$, then $S^{-1}(A) = \bigcup S^{-1}(A_k)$.
Indeed, $x \in S^{-1}(A) \Leftrightarrow S(x) \cap A \neq \varnothing \Leftrightarrow \exists k,\ S(x) \cap A_k \neq \varnothing \Leftrightarrow \exists k,\ x \in S^{-1}(A_k) \Leftrightarrow x \in \bigcup_k S^{-1}(A_k)$. A set-valued map $S : X \rightrightarrows U$ is said to be \emph{measurable} if, for any open set $O \subset U$, \begin{equation} S^{-1}(O) \in \mathcal{A}. \end{equation} Note that if $S$ is measurable (as a set-valued map), then its domain must be measurable as well (as an element of $\mathcal{A}$). We say that $S : X \rightrightarrows U$ is \emph{closed-valued} if $S(x)$ is a closed subset of $U$ for all $x \in X$. \begin{proposition}[Theorem 14.3.c in \cite{rockafellar2009variational}]\label{prop:equiv_measure_closedvalued} A closed-valued map $S$ is measurable if and only if $S^{-1}(B) \in \mathcal{A}$ for all $B \subset U$ that are either: \begin{enumerate}[label=(\alph*),nolistsep] \item open (the definition); \item compact; \item closed. \end{enumerate} \end{proposition} \begin{proof}[Proof of \cref{prop:equiv_measure_closedvalued}]\ \begin{itemize}[nolistsep] \item(a) $\Rightarrow$ (b): For a compact $B \subset U$, let $B_k = \{ x \in U,\ d(x,B) < k^{-1} \}$, $k \geq 1$ (each $B_k$ is open). Note that $x \in S^{-1}(B) \Leftrightarrow S(x) \cap B \neq \varnothing \Leftrightarrow S(x) \cap B_k \neq \varnothing$ for all $k$ because $S(x)$ is a closed set. Hence $S^{-1}(B) = \bigcap_k S^{-1}(B_k)$. All the $S^{-1}(B_k)$ are measurable, hence so is $S^{-1}(B)$ as a countable intersection of measurable sets. \item (b) $\Rightarrow$ (a): Fix $O$ an open set of $U$. As we assume $U$ to be a complete separable Riemannian manifold, $O$ can be written as a countable union of compact balls: $O = \bigcup_n \overline{B(x_n, r_n)}$. Hence $S^{-1}(O) = \bigcup_n S^{-1}(\overline{B(x_n, r_n)})$ is measurable as a countable union of measurable sets. \item (b) $\Rightarrow$ (c): Immediate. \item (c) $\Rightarrow$ (b): A closed set $B$ can be obtained as a countable union of compact sets by letting $B = \bigcup_n B \cap \overline{B(x_0, n)}$ for some $x_0$.
Hence $S^{-1}(B) = \bigcup_n S^{-1}(B \cap \overline{B(x_0,n)})$ is in $\mathcal{A}$. \qedhere \end{itemize} \end{proof} Now, we introduce a proposition on operations that preserve measurability of closed-set valued maps. The proof requires adaptation from the one of \cite{rockafellar2009variational} because the latter uses explicitly the fact that one can compute Minkowski sums of sets (which may not make sense on a manifold). \begin{proposition}[Proposition 14.11 in \cite{rockafellar2009variational}, adapted to the manifold case]\label{prop:intersection_of_measurable} Let $S_1$ and $S_2 : X \rightrightarrows U$ be two measurable closed-set valued maps. Then \begin{itemize}[nolistsep] \item $P : x \mapsto S_1(x) \times S_2(x)$ is measurable as a closed-valued map in $U \times U$ (equipped with the product topology). \item $Q : x \mapsto S_1(x) \cap S_2(x)$ is measurable. \end{itemize} \end{proposition} \begin{proof} The first point can be proved in the same spirit as the proof proposed by Rockafellar and Wets. Namely, let $O'$ be an open set in $U \times U$. By definition of the product topology, $O'$ can be obtained as $\bigcup_n O_1^{(n)} \times O_2^{(n)}$ where $O_1^{(n)}$ and $O_2^{(n)}$ are open sets in $U$. Then $P^{-1}(O') = \bigcup_n P^{-1}(O_1^{(n)} \times O_2^{(n)})$. Now, observe that $P^{-1}(A \times B) = \{ x,\ (S_1(x) \times S_2(x)) \cap (A \times B) \neq \varnothing \} = \{x,\ S_1(x) \cap A \neq \varnothing \text{ and } S_2(x) \cap B \neq \varnothing \} = S_1^{-1}(A) \cap S_2^{-1}(B)$, so that finally, $P^{-1}(O') = \bigcup_n S_1^{-1}(O_1^{(n)}) \cap S_2^{-1}(O_2^{(n)})$, which is measurable as a countable union of (finite) intersections of measurable sets (given that $S_1, S_2$ are measurable). Note that this does not require $S_1,S_2$ to be closed-valued. Now, let us focus on the second point, that requires more attention. Thanks to the previous proposition, it is sufficient to show that $Q^{-1}(C) \in \mathcal{A}$ for any compact set $C \subset U$.
In \cite{rockafellar2009variational}, this is done by writing $Q^{-1}(C) = \{ x,\ S_1(x) \cap S_2(x) \cap C \neq \varnothing \} = \{x,\ R_1(x) \cap R_2(x) \neq \varnothing \} = \{x,\ 0 \in (R_1(x) - R_2(x)) \} = (R_1 - R_2)^{-1}(\{0\})$, where $R_j(x) = S_j(x) \cap C$ (that is also closed valued), and using the fact that the (Minkowski) difference of measurable closed-valued maps is measurable as well \cite[Prop.~14.11.c]{rockafellar2009variational}. To adapt this idea (we cannot consider Minkowski difference in our setting), we introduce the diagonal $\Delta = \{(u,u),\ u \in U\} \subset U \times U$. Now, observe that $R_1(x) \cap R_2(x) \neq \varnothing \Leftrightarrow (R_1(x) \times R_2(x)) \cap \Delta \neq \varnothing$, that is $x \in R^{-1}(\Delta)$, where $R(x) = R_1(x) \times R_2(x)$. Now, since the maps $R_1$ and $R_2$ are measurable closed-valued maps (inherited from $S_1,S_2$), so is $R$ according to the previous point. And since $\Delta$ is closed, $R^{-1}(\Delta) = Q^{-1}(C)$ is measurable. \end{proof} \subsubsection{Proof of \cref{prop:selection-manifold}} The proof is essentially an adaptation of the one of \cite{fontbona2010measurability}, with additional care required due to the fact that we do not have access to a linear structure on $M$. It relies on measurability of set-valued maps (see \cite[Ch. 5 and 14]{rockafellar2009variational} and \cref{sec:measurable_set_valued} for a summary). The crucial point regarding measurability is the following proposition. \begin{proposition}\label{prop:Bnk_measurable} The set \begin{equation} B_{n,k} = \{ (u,x),\ T_u(x) \in A_{n,k} \}. \end{equation} is measurable. \end{proposition} Its proof relies on a core lemma: \begin{lemma}\label{lemma:ferme_mesurable} Let $F \subset M$ be a closed set. Then the set \[ B_F = \{ (u,x),\ T_u(x) \in F \} \] is measurable. \end{lemma} The key will be to identify this set as the domain of a measurable set-valued map, see \cref{sec:measurable_set_valued}. 
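Before turning to the proof, the set-valued notions at play (pre-image, domain, pointwise intersection) can be made concrete on a finite toy example; the sets and names below are chosen purely for illustration and do not come from the text:

```python
# Toy set-valued maps on finite sets (illustration only).
# Pre-image: S^{-1}(B) = {x : S(x) ∩ B ≠ ∅}; domain: S^{-1}(U).
U = {"a", "b", "c"}
S1 = {0: {"a", "b"}, 1: {"c"}, 2: set(), 3: {"b"}}   # S1(2) is empty
S2 = {0: {"b"}, 1: {"a", "c"}, 2: {"a"}, 3: {"a"}}

def preimage(S, B):
    return {x for x, vals in S.items() if vals & set(B)}

def intersection(S, T):
    """Pointwise intersection, analogous to the map Q above."""
    return {x: S[x] & T[x] for x in S}

assert preimage(S1, U) == {0, 1, 3}        # the domain of S1
assert preimage(S1, {"b"}) == {0, 3}
# The union relation S^{-1}(A1 ∪ A2) = S^{-1}(A1) ∪ S^{-1}(A2):
A1, A2 = {"a"}, {"c"}
assert preimage(S1, A1 | A2) == preimage(S1, A1) | preimage(S1, A2)
Q = intersection(S1, S2)                   # Q(0) = {"b"}, Q(3) = ∅, ...
assert preimage(Q, {"b"}) == {0}
```

In the finite setting every pre-image is trivially measurable; the point of the surrounding propositions is precisely to recover this behavior for closed-valued maps on a manifold.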
\begin{proof}[Proof of \cref{lemma:ferme_mesurable}] Observe that $B_F = \{(u,x),\ (\{x\} \times F) \cap \mathrm{gph}(T_u) \neq \varnothing\}$, where $\mathrm{gph}(T_u) = \overline{\{(x,T_u(x)),\ x \in M\}}$ denotes the topological closure of the graph of the optimal transport map $T_u$ that pushes $\mu_u$ onto $\nu_u$. Let $S_1 : (u,x) \mapsto \{x\} \times F$ and $S_2 : (u,x) \mapsto \mathrm{gph}(T_u)$, so that $B_F = \mathrm{dom}(S)$, where $S(u,x) = S_1(u,x) \cap S_2(u,x)$. According to \cref{prop:intersection_of_measurable}, given that $S_1$ and $S_2$ are closed-valued, if they are measurable, so is $S$, and so is $B_F$ as the domain of a measurable map. The measurability of these two maps can be easily adapted from the work of \cite{fontbona2010measurability}; we give details for the sake of completeness. \textbf{Measurability of $S_1$:} Let $O \subset M \times M$ be open. The set $S_1^{-1}(O) = \{x,\ (\{x\} \times F) \cap O \neq \varnothing \}$ is open (thus measurable): for any $x \in S_1^{-1}(O)$, there is $z \in F$ with $(x,z) \in O$, and since $O$ is open there is $\epsilon > 0$ such that $B(x,\epsilon) \times \{z\} \subset O$, so that $B(x,\epsilon) \subset S_1^{-1}(O)$. It proves the measurability of $S_1$. \textbf{Measurability of $S_2$:} Given that $u \mapsto (\mu_u,\nu_u)$ is measurable by assumption, and that measurability is preserved by composition, we want to show that (i) the map $S : (\mu,\nu) \mapsto \Pi^*(\mu,\nu)$ (the set of optimal transport plans between $\mu$ and $\nu$) is measurable and (ii) the map $U : \pi \in P(M^2) \mapsto \operatorname{supp}\pi$ is such that $U^{-1}(O) \subset P(M^2)$ is open for any open set $O \subset M^2$. From these two points, we get that $(U \circ S)^{-1}(O)$ is measurable, thus the measurability of $S_2$. To get (i), observe first that $S$ is closed-valued, so that it is sufficient to prove that $S^{-1}(C)$ is measurable for any closed set $C \subset P(M^2)$ according to \cref{prop:equiv_measure_closedvalued}. Let $C \subset P(M^2)$ be closed.
Then, $S^{-1}(C) = \{(\mu,\nu),\ \Pi^*(\mu,\nu) \cap C \neq \varnothing \}$, and consider a sequence $(\mu_n, \nu_n)_n$ in $S^{-1}(C)$ that converges to $(\mu,\nu)$ for the weak topology. Let $\pi_n \in \Pi^*(\mu_n,\nu_n) \cap C$. According to \cite[Thm.~5.20]{villani2009optimal}, $(\pi_n)_n$ admits a weak limit $\pi$ in $\Pi^*(\mu,\nu)$, but also, since $C$ is closed, $\pi \in C$, so that $(\mu,\nu) \in S^{-1}(C)$. Hence $S^{-1}(C)$ is closed (thus measurable), proving the measurability of $S$. (ii) simply follows from the fact that $U^{-1}(O) = \{ \pi,\ \operatorname{supp}\pi \cap O \neq \varnothing \} = \{ \pi,\ \pi(O) > 0 \}$ is open. Indeed, the Portmanteau theorem gives that if $\pi_n \to \pi$ (weakly) and $\pi_n(O) = 0$, then $0 = \liminf \pi_n(O) \geq \pi(O) \geq 0$, so $\pi(O) = 0$. The complementary set of $U^{-1}(O)$ is thus closed, that is, $U^{-1}(O)$ is open. \end{proof} \begin{proof}[Proof of \cref{prop:Bnk_measurable}] This follows from the assumption that $A_{n,k}$ can be inner-approximated by a sequence of closed sets $F_j \subset A_{n,k}$ and the fact that the $B_{F_j}$ are measurable. \end{proof} We can now prove our main theorem. The proof is clearly inspired by the one of \cite{fontbona2010measurability}, though it requires, in a few places, careful adaptation. \begin{proof}[Proof of \cref{prop:selection-manifold}] Recall that we assume that $M = \bigsqcup_n A_{n,k}$. For each $n,k$, select (in a measurable way) a point $a_{n,k}$ in $A_{n,k}$. Then, define the map \begin{equation} T^{(k)} : (u,x) \mapsto a_{n,k},\ \text{ such that } T_u(x) \in A_{n,k}. \end{equation} This map is measurable. Indeed, the map $\Phi_k : (u,x) \mapsto A_{n,k}$ where $T_u(x) \in A_{n,k}$ is measurable, because $\Phi_k^{-1}(O) = \bigcup_{n \,:\, A_{n,k} \cap O \neq \varnothing} B_{n,k}$, which is measurable. Now, for two maps $f,g : B \times M \to M$, let $D_1$ denote the natural $L_1$ distance on $M$, that is \begin{equation} D_1(f,g) = \int_B \int_M d(f(u,x), g(u,x)) \,\mathrm d \mu_u(x) \,\mathrm d m(u)\,.
\end{equation} This yields a complete metric space, and we can observe that $(T^{(k)})_k$ is a Cauchy sequence for this distance. Indeed, for $k \leq j$ two integers, recall that we assume that $(A_{n,j})_n$ is a refinement of $(A_{n,k})_n$, yielding \begin{align*} D_1(T^{(k)}, T^{(j)}) &= \int_{B} \int_M d(T^{(k)}(u,x), T^{(j)}(u,x)) \,\mathrm d \mu_u (x) \,\mathrm d m(u) \\ &= \int_{B} \int_M \sum_n \sum_{n' : A_{n',j} \subset A_{n,k}} 1_{B_{n',j}}(u,x) \cdot d(a_{n,k}, a_{n',j}) \,\mathrm d \mu_u(x) \,\mathrm d m(u) \\ &= \int_B \sum_n \sum_{n' : A_{n',j} \subset A_{n,k}} d(a_{n,k},a_{n',j}) \, \nu_u(A_{n',j}) \,\mathrm d m(u) \\ &\leq 2^{-k} \end{align*} where we use that for all $u$, $\int_{x \in M} 1_{B_{n',j}}(u,x) \,\mathrm d \mu_u(x) = \nu_u(A_{n',j})$ by construction (recall that $(u,x) \in B_{n',j} \Leftrightarrow T_u(x) \in A_{n',j} \Leftrightarrow x \in T_u^{-1}(A_{n',j})$, and $\mu_u(T_u^{-1}(A_{n',j})) = T_{u\#} \mu_u(A_{n',j}) = \nu_u(A_{n',j})$ since $T_u$ transports $\mu_u$ onto $\nu_u$), and then that the diameter of each cell $A_{n,k}$ is less than or equal to $2^{-k}$ and that $\nu_u$ and $m$ are probability measures. Now, let $T$ denote the limit of $(T^{(k)})_k$ (that is measurable). It remains to show that $T(u,x) = T_u(x)$ for $m$-a.e.~$u$ and $\mu_u$-a.e.~$x$. This can be obtained by proving that \begin{equation} \int g(x) f(T(u,x)) \,\mathrm d \mu_u(x) = \int g(x) f(T_u(x)) \,\mathrm d \mu_u(x), \end{equation} for any pair $f,g : M \to \mathbb{R}$ of bounded Lipschitz-continuous functions \cite[Lemma 2.24]{van2000asymptotic}. As in \cite{fontbona2010measurability}, let $ \|f\| = \sup_{x \neq y} \frac{|f(x) - f(y)|}{d(x,y)} + \sup_x |f(x)|$. The difference between these two terms can be bounded using the partition $(A_{n,k})_n$.
We have, for $m$-a.e.~$u$: \begin{align} &\left| \int g(x) f(T_u(x)) \,\mathrm d \mu_u(x) - \int g(x) f(T(u,x)) \,\mathrm d \mu_u(x) \right| \\ \leq &\left| \int g(x) f(T_u(x)) \,\mathrm d \mu_u(x) - \int g(x) f(T^{(k)}(u,x)) \,\mathrm d \mu_u(x) \right| + \|g\| \|f\| \int d\left( T^{(k)}(u,x), T(u,x) \right) \,\mathrm d \mu_u(x). \end{align} Since $T^{(k)} \to T$ in $D_1$, up to a subsequence, $\int_M d(T^{(k)}(u,x), T(u,x)) \,\mathrm d \mu_u(x) \to 0$ as $k \to \infty$ for $m$-a.e.~$u$. To treat the first term and show that it goes to $0$ as $k \to \infty$ for a subset of $B$ with full $m$-measure, we write, for $m$-a.e.~$u$: \begin{align*} &\left| \int g(x) f(T_u(x)) \,\mathrm d \mu_u(x) - \int g(x) f(T^{(k)}(u,x)) \,\mathrm d \mu_u(x) \right| \\ \leq &\int |g(x)| \left| f(T_u(x)) - f(T^{(k)}(u,x))\right| \,\mathrm d \mu_u(x) \\ \leq &\|g\| \|f\| \int d(T_u(x),T^{(k)}(u,x)) \,\mathrm d \mu_u(x) \\ \leq &\|g\| \|f\| 2^{-k} \sum_n \nu_u(A_{n,k}) = \|g\| \|f\| 2^{-k}. \end{align*} This concludes the proof. \end{proof} \subsection{Measure disintegration} \label{sec:disintegration} \begin{defi}[Measure disintegration] Let $\mathcal{X}$ and $\mathcal{Z}$ be two Radon spaces, $\mu \in P(\mathcal{X})$ and $p : \mathcal{X} \to \mathcal{Z}$ a Borel-measurable function.
A family of probability measures $\{\mu_{u}\}_{u \in \mathcal{Z}} \subset P(\mathcal{X})$ is a \emph{disintegration} of $\mu$ by $p$ if: \begin{enumerate}[label=(\roman*)] \item the function $u \mapsto \mu_{u}$ is Borel-measurable; \item $\mu_{u}$ lives on the fiber $p^{-1}(u)$: for $p_*\mu$-a.e.~$u \in \mathcal{Z}$, $$\mu_{u}(\mathcal{X} \setminus p^{-1}(u))=0\,,$$ and so $\mu_{u}(B)=\mu_{u}(B \cap p^{-1}(u))$ for any Borel $B\subset \mathcal{X}$; \item for every measurable function $f: \mathcal{X} \rightarrow[0, \infty]$, $$\int_{\mathcal{X}} f(x) \,\mathrm d \mu(x)=\int_{\mathcal{Z}} \left(\int_{p^{-1}(u)} f(x) \,\mathrm d \mu_{u}(x)\right) \,\mathrm d (p_*\mu)(u)\,.$$ In particular, for any Borel $B \subset \mathcal{X}$, taking $f$ to be the indicator function of $B$, $$\mu(B)=\int_{\mathcal{Z}} \mu_{u}(B) \,\mathrm d (p_*\mu)(u)\,.$$ \end{enumerate} \end{defi} \begin{theorem}[Disintegration theorem] Let $\mathcal{X}$ and $\mathcal{Z}$ be two Radon spaces, $\mu \in P(\mathcal{X})$ and $p : \mathcal{X} \to \mathcal{Z}$ a Borel-measurable function. There exists a $p_*\mu$-a.e.~uniquely determined family of probability measures $\{\mu_{u}\}_{u \in \mathcal{Z}} \subset P(\mathcal{X})$ that provides a disintegration of $\mu$ by $p$.
\end{theorem} \section{Applications to the quadratic and inner-product GW problems} \label{subsec:applications} \subsection{The inner-product cost} \label{subsec:applications_innerProduct} We recall the \cref{eqn:GW-inner-prod} problem: \begin{equation} \tag{GW-IP} \min _{\pi \in \Pi\left(\mu, \nu\right)} \int_{\mathcal{X}\times\mathcal{Y}}\int_{\mathcal{X}\times\mathcal{Y}}\left|\langle x,\, x'\rangle-\langle y,\, y'\rangle\right|^{2} \,\mathrm d\pi(x, y) \,\mathrm d\pi(x', y')\,, \label{eqn:GW-inner-prod-in-subsection} \end{equation} Expanding the integrand and using the fact that $\iint\langle x,\,x'\rangle^2 \, \mathrm d\pi\, \mathrm d\pi=\iint\langle x,\,x'\rangle^2 \, \mathrm d\mu\, \mathrm d\mu$ is constant (the same goes for the terms that depend on $\nu$), \cref{eqn:GW-inner-prod-in-subsection} is equivalent to \begin{equation*} \min _{\pi \in \Pi\left(\mu, \nu\right)} \iint-\langle x,\, x'\rangle\langle y,\, y'\rangle \,\mathrm d\pi(x, y) \,\mathrm d\pi(x', y')\,. \end{equation*} This problem is not invariant to translations but it is invariant under the action of $O_n(\mathbb{R}) \times O_d(\mathbb{R})$. Given an optimal correspondence plan $\pi^\star$, this plan is also an optimal transport plan for the linearized problem \cref{eq:linearized} with cost \begin{align*} C_{\pi^\star}(x,y)=-\int\langle x,\, x'\rangle\langle y,\, y'\rangle \,\mathrm d\pi^\star(x', y')=\scal{-\int(y'\otimes x')x \,\mathrm d\pi^\star(x', y')}{y}=-\langle M^\star x,\,y\rangle\,, \end{align*} where $M^\star\triangleq\int y'\otimes x' \,\mathrm d\pi^\star(x', y')\in \mathbb{R}^{d\times n}$. This linearized cost satisfies the \cref{eq:twist} condition if and only if $M^\star$ is of full rank, hence in this case the solution $\pi^\star$ of \cref{eqn:GW-inner-prod-in-subsection} is unique and induced by a map, and \cite[Theorem 4.2.3]{vayer2020contribution} gives a result on the structure of this map.
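On discrete measures these objects can be assembled explicitly. The sketch below (a toy example with randomly generated points, assuming NumPy; names and sizes are illustrative only) forms $M^\star$ from a coupling matrix and checks that pairing the linearized cost with $\pi$ recovers the $\pi$-dependent double integral, which equals $-\|M\|_F^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n, d = 6, 3, 2                        # toy sizes (illustration only)
X = rng.normal(size=(N, n))              # support of mu in R^n
Y = rng.normal(size=(N, d))              # support of nu in R^d
pi = np.full((N, N), 1.0 / N**2)         # product coupling: a valid (non-optimal) plan

# M = int y' (x')^T dpi(x', y'), a d x n matrix
M = np.einsum("ij,jb,ia->ba", pi, Y, X)

# Linearized cost C(x, y) = -<M x, y>, assembled on the support points
C = -X @ M.T @ Y.T                       # C[i, j] = -<M x_i, y_j>

# <C, pi> equals the pi-dependent term -iint <x,x'><y,y'> dpi dpi = -||M||_F^2
double_integral = -np.einsum("ij,kl,ik,jl->", pi, pi, X @ X.T, Y @ Y.T)
assert np.isclose((C * pi).sum(), double_integral)
assert np.isclose(double_integral, -np.sum(M * M))
```

Any coupling with the prescribed marginals can be substituted for \texttt{pi}: the two assertions only use the algebraic identities above, not optimality of the plan.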
We can actually generalize this result to the case where $M^\star$ is arbitrary: \begin{theorem}[Existence of an optimal map for the inner product cost] \label{theorem:inner-main} Let $n\geq d$ and let $(\mu, \nu)\in \mathcal{P}(\mathbb{R}^n)\times\mathcal{P}(\mathbb{R}^d)$ be two measures with compact supports. Suppose that $\mu\ll\mathcal{L}^n$. Then there exists an optimal map for \cref{eqn:GW-inner-prod-in-subsection} that can be written as \begin{equation} T=O_\mathcal{Y}^\top \circ (T_0\circ p_{\mathbb{R}^d})\circ O_\mathcal{X}\,, \label{eq:T0-form-1} \end{equation} where $O_\mathcal{X}$ and $O_\mathcal{Y}$ are change-of-basis matrices of $\mathbb{R}^n$ and $\mathbb{R}^d$, $p_{\mathbb{R}^d}: \mathbb{R}^n \to \mathbb{R}^d$ is defined by $p_{\mathbb{R}^d}(x_1,\dots,x_n)=(x_1,\dots,x_d)$, and \begin{equation} T_0(x_1,\dots,x_d)=(\nabla f\circ\Sigma(x_1,\dots,x_h), \nabla g_{x_1,\dots,x_h}(x_{h+1},\dots,x_d)) \label{eq:T0-form-2} \end{equation} with $h\leq d$, $\Sigma\in \mathbb{R}^{h\times h}$ diagonal with positive entries, $f: \mathbb{R}^h \to \mathbb{R}$ convex and all $g_{x_1,\dots,x_h}: \mathbb{R}^{d-h} \to \mathbb{R}$ convex. \end{theorem} In order to show this, we will need two simple lemmas that we state now and prove in \cref{subsec:proof-scaled-brenier}, the second one being a simple corollary of the first: \begin{lemma} \label{lemma:reparam} Let $\mu,\nu\in \mathcal{P}(E)$ and let $\psi_1,\psi_2:E\to F$ be homeomorphisms. Let $c:F\times F\to\mathbb{R}$ and consider the cost $\tilde c:(x,y)\mapsto c( \psi_1(x),\psi_2(y))$. Then a map is optimal for the cost $\tilde c$ between $\mu$ and $\nu$ if and only if it is of the form $\psi_2^{-1}\circ T\circ\psi_1$ with $T$ optimal for the cost $c$ between $\psi_{1\#}\mu$ and $\psi_{2\#}\nu$. \end{lemma} \begin{lemma}[Brenier with scaled inner product] \label{lemma:scaled-Brenier} Let $h\geq 1$ and $\mu,\nu\in \mathcal{P}(\mathbb{R}^h)$ with compact supports and $\mu\ll\mathcal{L}^h$.
Consider the cost $c(x, y)= -\langle \psi_1(x),\,\psi_2(y)\rangle$ where $\psi_1,\psi_2:\mathbb{R}^h\to \mathbb{R}^h$ are diffeomorphisms. Then, there exists a unique optimal transport plan between $\mu$ and $\nu$ for the cost $c$, and it is induced by a map $t:\mathbb{R}^h\to \mathbb{R}^h$ of the form $t=\psi_2^{-1}\circ\nabla f\circ\psi_1$, with $f$ convex. \end{lemma} \noindent We are now ready to prove \cref{theorem:inner-main}: \begin{proof}[Proof of \cref{theorem:inner-main}] Using a singular value decomposition, we have $M^\star=O_\mathcal{Y}^\top \Sigma O_\mathcal{X}\in\mathbb{R}^{d\times n}$ with $(O_\mathcal{X},O_\mathcal{Y})\in O_n(\mathbb{R})\times O_d(\mathbb{R})$ orthogonal matrices of each Euclidean space and $\Sigma\in\mathbb{R}^{d\times n}$ diagonal with non-negative coefficients. The cost then becomes $C_{\pi^\star}(x, y)=-\langle O_\mathcal{Y}^\top\Sigma O_\mathcal{X} x,\,y\rangle =-\langle \Sigma (O_\mathcal{X} x),\, O_\mathcal{Y} y\rangle$. Using \cref{lemma:reparam}, the problem transforms into an optimal transportation problem between $\mu'\triangleq O_\mathcal{X}\mu$ and $\nu'\triangleq O_\mathcal{Y}\nu$; and choosing $O_\mathcal{Y}$ and $O_\mathcal{X}$ that sort the singular values in decreasing order, \textit{i.e.}~assuming $\sigma_1 \geq \dots \geq \sigma_h>0$ with $h\triangleq \rk(M^\star)\leq d$, the problem transforms into $\min_{\tilde\pi} \langle c_\Sigma,\, \tilde\pi\rangle$ for $\tilde\pi \in \Pi(\mu', \nu')$, where $c_\Sigma(\tilde x,\tilde y)=-\sum_{i=1}^h \sigma_i \tilde x_i \tilde y_i\triangleq-\langle p(\tilde x),\, p(\tilde y)\rangle_\sigma$, $p$ being the orthogonal projection on $\mathbb{R}^h$.
We reduce to the case where both measures live in the same space by noting that since $c_\Sigma(\tilde x,\tilde y)=c_\Sigma(p_{\mathbb{R}^d}(\tilde x), \tilde y)$ for all $\tilde x$ and $\tilde y$, any map $T_0$ optimal between $\mu''\triangleq p_{\mathbb{R}^d\#}\mu'$ and $\nu'$ will induce a map $T=T_0\circ p_{\mathbb{R}^d}$ optimal between $\mu'$ and $\nu'$\footnote{by \cref{lemma:decomp-manifold} it suffices to check that $(p_{\mathbb{R}^d},\operatorname{id})_\#(\operatorname{id},T)_\#\mu'$ is in $\Pi^\star(p_{\mathbb{R}^d\#}\mu',\nu')$: $$(p_{\mathbb{R}^d},\operatorname{id})_\#(\operatorname{id},T)_\#\mu'=(p_{\mathbb{R}^d},T_0\circ p_{\mathbb{R}^d})_\#\mu'=(\operatorname{id},T_0)_\#p_{\mathbb{R}^d\#}\mu'\,.$$}. One can then recover an optimal map between $\mu$ and $\nu$ by composing with $O_\mathcal{X}$ and $O_\mathcal{Y}^\top$ (\cref{lemma:reparam}), hence \Cref{eq:T0-form-1}. The existence of such a map $T_0$ optimal between $\mu''$ and $\nu'$ satisfying \cref{eq:T0-form-2} follows from the application of \cref{theo:fibers-main} for $E=E_0=\mathbb{R}^d=\mathbb{R}^h\times\mathbb{R}^{d-h}=B_0\times F$ and $\varphi=p$. Indeed, $B_0$ and $F$ are complete Riemannian manifolds; $\langle \cdot,\,\cdot\rangle_\sigma$ is twisted on $B_0\times B_0$; $p_\#\mu''$ has a density on $B_0$ and every $(\mu'')_u$ has a density w.r.t.~the Lebesgue measure on $F$ as a conditional probability. We then make $t_B$ explicit. One has that $c_\Sigma(x,y)=-\langle \tilde\Sigma x,\,y\rangle$, where $\tilde\Sigma=\diag({\sigma_i})_{1\leq i\leq h}$. As $p_\#\mu''$ has a density, we can apply \cref{lemma:scaled-Brenier} stated above with $(\psi_1,\psi_2)=(\tilde\Sigma,\operatorname{id})$ to obtain that there exists a unique optimal transport plan $\pi_B^\star$ between $p_\#\mu''$ and $p_\#\nu'$ for the cost $c_\Sigma$ and that it is induced by a map $t_B:B_0\to B_0$ of the form $t_B=\nabla f\circ\tilde\Sigma$, with $f$ convex.
\end{proof} \begin{remark} A special case of our theorem is Theorem 4.2.3 from \cite{vayer2020contribution} (\cref{prop:sota-titouan} in this work): when $h=d$, the optimal map between $O_\mathcal{X}\mu$ and $O_\mathcal{Y}\nu$ can be written as $T_0\circ p_{\mathbb{R}^d}$ with $T_0=\nabla f\circ \tilde\Sigma$. The induced optimal map between $\mu$ and $\nu$ is then: \begin{align*} T&=O_\mathcal{Y}^\top\circ (T_0\circ p_{\mathbb{R}^d})\circ O_\mathcal{X} \\ &=O_\mathcal{Y}^\top\circ (\nabla f\circ \tilde\Sigma\circ p_{\mathbb{R}^d})\circ O_\mathcal{X} \\ &=O_\mathcal{Y}^\top\circ (\nabla f\circ \Sigma)\circ O_\mathcal{X} \\ &= \nabla( f\circ O_\mathcal{Y})\circ O_\mathcal{Y}^\top\circ \Sigma \circ O_\mathcal{X} & & \text{since } \nabla ( f\circ A)=A^\top \circ \nabla f \circ A\\ & = \nabla\tilde f\circ M^\star\,, \end{align*} where $\tilde f\triangleq f\circ O_\mathcal{Y}$ is convex. \end{remark} \subsection{The quadratic cost} \label{subsec:applications_quadratic} We recall the \cref{eqn:GW-quadratic} problem: \begin{equation} \tag{GW-Q} \min _{\pi \in \Pi\left(\mu, \nu\right)} \int_{\mathcal{X}\times\mathcal{Y}}\int_{\mathcal{X}\times\mathcal{Y}}\left||x-x'|^2-|y-y'|^2\right|^{2} \,\mathrm d\pi(x, y) \,\mathrm d\pi(x', y')\,, \label{eqn:GW-quadratic-in-subsection} \end{equation} which is invariant by translation of $\mu$ and $\nu$. Without loss of generality, we suppose both measures centered. Expanding the integrand provides $$\left||x-x'|^{2}-|y-y'|^{2}\right|^2=|x-x'|^4+|y-y'|^4-2|x-x'|^{2}|y-y'|^{2}\,,$$ and the first two terms only depend on $\mu$ and $\nu$, not on $\pi$. Expanding the remaining term yields nine terms. Two of them also lead to a constant contribution: $-|x|^2|y'|^2$ and $-|x'|^2|y|^2$; four lead to vanishing integrals since $\mu$ and $\nu$ are centered: $2|x|^{2}\langle y,\,y'\rangle$, $2|x'|^{2}\langle y,\,y'\rangle$, $2|y|^{2}\langle x,\,x'\rangle$ and $2|y'|^{2}\langle x,\,x'\rangle$.
The remaining three terms then yield the following equivalent problem: \begin{equation*} \min _{\pi \in \Pi\left(\mu, \nu\right)} \int-|x|^2|y|^2\,\mathrm d\pi(x, y)+2\iint-\langle x,\, x'\rangle\langle y,\, y'\rangle \,\mathrm d\pi(x, y) \,\mathrm d\pi(x', y')\,. \end{equation*} Given an optimal correspondence plan $\pi^\star$, it is also an optimal transport plan for the linearized problem \cref{eq:linearized} with cost \begin{align*} C_{\pi^\star}(x,y)=-|x|^2|y|^2-4\int\langle x,\, x'\rangle\langle y,\, y'\rangle \,\mathrm d\pi^\star(x', y') =-|x|^2|y|^2-4\langle M^\star x,\,y\rangle\,, \end{align*} where $M^\star\triangleq\int y'\otimes x' \,\mathrm d\pi^\star(x', y')\in \mathbb{R}^{d\times n}$. In the cases where the rank of $M^\star$ is $d$ (resp.~$d-1$), this linearized cost satisfies \cref{eq:subtwist} (resp.~\cref{eq:mtwist} with $m=2$), yielding an optimal map/anti-map (resp.~bimap) structure by compactness of the support of $\mu$ and $\nu$ when $\mu$ has a density. In the case where $\rk M^\star\leq d-2$, nothing can be said and there is \textit{a priori} little hope for the existence of an optimal correspondence map; but, perhaps surprisingly, we can actually prove it. \begin{theorem}[Existence of an optimal map, bimap or map/anti-map for the quadratic cost] \label{theorem:quad-main} Let $n\geq d$ and let $(\mu, \nu)\in \mathcal{P}(\mathbb{R}^n)\times\mathcal{P}(\mathbb{R}^d)$ be two measures with compact supports. Suppose that $\mu\ll\mathcal{L}^n$. Let $\pi^\star$ be a solution of \cref{eqn:GW-quadratic-in-subsection} and $M^\star\triangleq\int y'\otimes x' \,\mathrm d\pi^\star(x', y')$.
Then: \begin{itemize} \item if $\rk M^\star=d$, there exists an optimal plan that is induced by a map/anti-map; \item if $\rk M^\star=d-1$, there exists an optimal plan that is induced by a bimap; \item if $\rk M^\star\leq d-2$, there exists an optimal plan that is induced by a map that can be written as \begin{equation*} T=O_\mathcal{Y}^\top \circ T_0\circ O_\mathcal{X}\,, \end{equation*} where $O_\mathcal{X}$ and $O_\mathcal{Y}$ are change-of-basis matrices of $\mathbb{R}^n$ and, writing $\Phi(x)\triangleq ((x_u,|x_v|^2),x_v/|x_v|)\triangleq(x_B,x_F)$ for any $x\in\mathbb{R}^n$, \begin{equation*} \Phi\circ T_0(x)=\left(\tilde c\text{-}\exp_{x_B}(\nabla f(x_B)), \exp_{x_F}(\nabla g_{x_B}(x_F))\right) \end{equation*} with $h\triangleq\rk M^\star$, $f: \mathbb{R}^{h+1} \to \mathbb{R}$ being $\tilde c$-convex and all $g_{x_B}: \mathbb{R}^{n-h} \to \mathbb{R}$ being $d_{S^{n-h-1}}^2/2$-convex. \end{itemize} \end{theorem} The case $\rk M^\star\leq d-2$ is a consequence of \cref{theo:fibers-main} and the proof is as follows: \begin{proof} We consider the measure $\nu$ as a measure on $\mathbb{R}^n$ with $d$-dimensional support. Similarly to the inner product cost, by SVD the cost becomes $c(x, y)=-|x|^2|y|^2-\langle O_\mathcal{Y}^\top\Sigma O_\mathcal{X} x,\,y\rangle =-|O_\mathcal{X} x|^2|O_\mathcal{Y} y|^2-\langle \Sigma (O_\mathcal{X} x),\, O_\mathcal{Y} y\rangle$. Using \cref{lemma:reparam} and similarly to the inner product case, the problem transforms into $\min_{\tilde\pi} \langle c_\Sigma,\, \tilde\pi\rangle$ for $\tilde\pi \in \Pi(O_\mathcal{X}\mu, O_\mathcal{Y}\nu)$, where $c_\Sigma(x,y)\triangleq -|x|^2|y|^2-\langle \Sigma x,\, y\rangle$.
Further assuming $\sigma_1 \geq \dots \geq \sigma_h>0$ and writing any $z\in\mathbb{R}^n$ as $z=(z_u,z_v)\in\mathbb{R}^h\times\mathbb{R}^{n-h}$, \begin{align*} c_\Sigma(x,y) & =-|x_u|^{2}|y_u|^{2}-|x_u|^{2}|y_v|^{2}-|x_v|^{2}|y_u|^{2}-|x_v|^{2}|y_v|^{2}-\langle \tilde \Sigma x_u,\, y_u\rangle & & \text{with }\tilde \Sigma =\diag(\sigma_{1},\dots,\sigma_{h})\\ & =-|x_u|^{2}|y_u|^{2}-|x_u|^{2}n_{y_v}-n_{x_v}|y_u|^{2}-n_{x_v}n_{y_v}-\langle \tilde \Sigma x_u,\, y_u\rangle & & \text{with }n_{x_v}=|x_v|^2 \text{ and }n_{y_v}=|y_v|^2\\ & \triangleq \tilde c(\phi(x),\phi(y)) & & \text{with }\phi: x\mapsto (x_u,|x_v|^2)\,. \end{align*} The cost $c_\Sigma(x,y)$ therefore only depends on the values of $\phi(x)$ and $\phi(y)$. Let us now examine the injectivity of $\nabla_{x}\tilde c(\tilde x,\cdot)$ for a fixed $\tilde x=\big(\begin{smallmatrix}x_u\\n_{x_v}\end{smallmatrix}\big)$. For any $\tilde y=\big(\begin{smallmatrix}y_u\\n_{y_v}\end{smallmatrix}\big)$: $$\nabla_{x}\tilde c(\tilde x,\tilde y)= (w,t) \iff \begin{cases} w=2(|y_u|^{2}+n_{y_v})x_u+\tilde \Sigma y_u \\ t=|y_u|^{2}+n_{y_v} \end{cases} \iff \begin{cases} y_u=\tilde \Sigma ^{-1}(w-2tx_u) \\ n_{y_v}=t-|y_u|^{2} \end{cases}$$ hence $\tilde c$ satisfies the twist condition. Now, the same reasoning as in \cref{example:fiber-main} applies, but this time with $E_0 = \mathbb{R}^h\times\mathbb{R}^{n-h}$, $E = E_0\backslash (\mathbb{R}^h\times\{0\})$, $B_0 = \mathbb{R}^h\times\mathbb{R}$ and $F = S^{n-h-1} = \{x \in E_0\mid |x_v|=1\}$.
This ensures the existence of a structured Monge map between $\mu$ and $\nu$ for the cost $c$: it decomposes for almost all $x \in \mathbb{R}^n$ as a Monge map on the basis $B_0 = \mathbb{R}^{h+1}$ obtained as the gradient of a $\tilde c$-convex function $f:\mathbb{R}^{h+1}\to\mathbb{R}$ (via the $\tilde c$-exponential map on $\mathbb{R}^{h+1}$) and a Monge map on each fiber $F = S^{n-h-1}$, also built from gradients of convex functions $g_{(x_u,|x_v|^2)}:S^{n-h-1}\to\mathbb{R}$ (via the exponential map on the sphere); hence the result. Lastly, note that the case where $M^\star = 0$ has not been explicitly treated. In this case, the cost is simply $c(x,y)=-|x|^2|y|^2=\tilde c(n_{x_v},n_{y_v})$ and the strategy above directly applies. \end{proof} \section{Existence of Monge maps for fiber-invariant costs} \label{subsec:general_thm} This section provides the main result on existence of Monge maps for OT problems for which the cost satisfies an invariance property. As detailed in \Cref{subsec:applications}, this property will be satisfied by the transport costs $C_{\pi^\star}$ arising from the first-order condition of \eqref{eqn:GW-quadratic} and \eqref{eqn:GW-inner-prod}---see \Cref{subsec:intro_GW_and_OT}. \subsection{Statement of the results} The idea is the following: let $\mu,\nu$ be two probability measures supported on a measurable space $(E,\Sigma_E)$ and consider a measurable map $\varphi : E \to B$, for some measurable space $(B,\Sigma_B)$; we will omit mention of the $\sigma$-algebras afterwards. We sometimes use the name \emph{base space} for the space $B$. Let $(\mu_u)_{u \in B}$ (resp.~$(\nu_u)_{u \in B}$) denote a disintegration of $\mu$ (resp.~$\nu$) with respect to $\varphi$ (see \cref{sec:disintegration} for a definition of measure disintegration).
Consider a cost $c : E \times E \to \mathbb{R}$ that is invariant on the fibers of $\varphi$ (the fibers being simply the pre-images of points in the base $B$ by $\varphi$), that is, $c(x,y) = \tilde{c}(\varphi(x), \varphi(y))$ for all $x,y \in E$ and some cost function $\tilde{c}$ on $B \times B$. Solving the OT problem between $\mu$ and $\nu$ for $c$ boils down to the OT problem between $\varphi_\# \mu$ and $\varphi_\# \nu$ on $B \times B$ for $\tilde{c}$. If we can ensure that there exists a Monge map $t_B$ between $\varphi_\#\mu$ and $\varphi_\#\nu$ (for instance, if we can use \Cref{theorem:brenier}), we may try to build a Monge map $T$ between $\mu$ and $\nu$ by (i) transporting each fiber $\mu_u$ onto $\nu_{t_B(u)}$ \emph{using a map} $T_u$, and (ii) gluing the $(T_u)_{u \in B}$ together to define a \emph{measurable} map $T$ satisfying $T _\# \mu = \nu$ that will be optimal as it coincides with $t_B$ on $B$ and the cost $c$ does not depend on the fibers $(\varphi^{-1}(u))_{u \in B}$. We stress that ensuring the measurability of the map $T$ is non-trivial and crucial from a theoretical standpoint. \begin{figure}[!h] \centering \input{figures/visu_fibers} \caption{Illustration of the construction of the Monge map between $\mu$ and $\nu$: we optimally transport the projections of the measures in $B$ and then ``lift'' the resulting map $t_B$ to $E$ by sending each fiber $\mu_u$ onto the fiber $\nu_{t_B(u)}$, resulting respectively from the disintegrations of $\mu$ and $\nu$ by $\varphi$.} \label{fig:visu-fibers} \end{figure} We formalize this idea by means of two theorems: the first one guarantees in a fairly general setting the existence of a Monge map for the \eqref{eq:gw} problem, but its construction is quite convoluted and there is little to no hope that it can be leveraged in practice, either from a theoretical or computational perspective.
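Steps (i) and (ii) above can be sketched numerically. The following is a minimal illustration on empirical measures with uniform weights and equal sample sizes (so that transport plans are permutations); the base map $\varphi=|\cdot|$, the cost and the sample points are illustrative assumptions, not data from the text:

```python
import itertools, math

def phi(x):
    # projection onto the base B = R_+ (the fiber of a point is a circle)
    return math.hypot(*x)

# empirical measures: 4 uniform atoms each (illustrative data)
mu = [(1.0, 0.0), (0.0, 2.0), (-3.0, 0.0), (0.0, -4.0)]
nu = [(0.0, 1.5), (2.5, 0.0), (0.0, -3.5), (-4.5, 0.0)]

def cost(perm):
    # total cost of the plan x_i -> y_{perm[i]} for c(x, y) = (phi(x) - phi(y))^2,
    # which only depends on the base coordinates phi(x), phi(y)
    return sum((phi(x) - phi(nu[j])) ** 2 for x, j in zip(mu, perm))

# brute-force optimum over all plans (permutations of 4 atoms)
best = min(itertools.permutations(range(len(nu))), key=cost)

# (i)-(ii): solve OT on the base by monotone (quantile) matching of the
# values of phi, then lift the base map t_B by pairing the fibers accordingly
order_mu = sorted(range(len(mu)), key=lambda i: phi(mu[i]))
order_nu = sorted(range(len(nu)), key=lambda j: phi(nu[j]))
lift = [None] * len(mu)
for i, j in zip(order_mu, order_nu):
    lift[i] = j

# the lifted map matches the brute-force optimal cost
assert cost(tuple(lift)) <= cost(best) + 1e-12
```

Since the cost is invariant on the fibers, any lift of the optimal base map is optimal; in the atomless setting of the theorems below, the point is precisely to choose such a lift that is a measurable map.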
Assuming more structure, in particular on the fibers of $\varphi$, enables the construction of a Monge map for \eqref{eq:gw} with a structure akin to \Cref{prop:quad-cost-manifold-villani}. As detailed in \Cref{subsec:applications}, both \eqref{eqn:GW-quadratic} and \eqref{eqn:GW-inner-prod} fall in the latter setting. \begin{theorem}\label{theo:fibers-nonsense} Let $\mathcal{X}$ and $\mathcal{Y}$ be two measurable spaces for which there exist two measurable maps $\Phi_\mathcal{X} : \mathcal{X} \to \mathbb{R}^d$ and $\Phi_\mathcal{Y} : \mathcal{Y} \to \mathbb{R}^d$ that are injective, and whose inverses are measurable. Let $\mu \in \mathcal{P}(\mathcal{X})$ and $\nu \in \mathcal{P}(\mathcal{Y})$ be two probability measures. Let $c : \mathcal{X} \times \mathcal{Y} \to \mathbb{R}$ be a cost function, and $B_+,B_-$ be two measurable spaces along with measurable maps $\varphi : \mathcal{X} \to B_+$ and $\psi : \mathcal{Y} \to B_-$. Assume that there exists a cost $\tilde{c} : B_+ \times B_- \to \mathbb{R}$ such that \[ c(x,y)=\tilde c(\varphi(x),\psi(y))\quad \text{ for all } (x,y)\in \mathcal{X} \times \mathcal{Y}\,,\] and that there exists a Monge map $t_B : B_+ \to B_-$ that transports $\varphi _\# \mu$ onto $\psi _\# \nu$ for the cost $\tilde{c}$. Assume that there exists a disintegration $(\mu_u)_{u \in B_+}$ of $\mu$ with respect to $\varphi$ such that $\varphi_\#\mu$-a.e., $\mu_u$ is atomless. Then there exists a Monge map $T$ between $\mu$ and $\nu$ for the cost $c$. Furthermore, it projects onto $t_B$ through $(\varphi,\psi)$, in the sense that $(\varphi,\psi)_\#(\operatorname{id},T)_\# \mu = (\operatorname{id}, t_B) _\#(\varphi _\# \mu)$. \end{theorem} The proof of this theorem is provided in \Cref{subsec:proof-nonsense}.
\begin{remark} The atomless assumption on the disintegration $(\mu_u)_u$ is a natural minimal requirement to expect the existence of a map (without specific assumptions on the target measure $\nu$) and implies in particular that the fibers $(\varphi^{-1}(u))_{u \in B_+}$ should not be discrete (at least $\varphi_\#\mu$-a.e.). Indeed, if for instance $\mathcal{X} = \mathcal{Y} = B_+ = B_- = \mathbb{R}$ and $\varphi : x \mapsto |x|$, the fibers of $\varphi$ are of the form $\{-u, u\}$ (for $u \geq 0$), hence the disintegrations $(\mu_u)_{u \geq 0}$ and $(\nu_u)_{u \geq 0}$ are discrete and given by $\mu_u(u) \delta_u + (1-\mu_u(u)) \delta_{-u}$ and $\nu_u(u) \delta_u + (1-\nu_u(u)) \delta_{-u}$, and there is in general no map $T_u$ between two such discrete measures, unless we assume that $\mu_u(u) = \nu_u(u)$ or $1-\nu_u(u)$, $\varphi_\#\mu$-a.e. Observe also that $\varphi_\#\mu$ may have atoms: as we assume the existence of the Monge map $t_B$, it implies in that case that $\psi_\#\nu$ must also have atoms. \end{remark} \begin{remark} The ``projection'' property $(\varphi,\psi)_\#(\operatorname{id},T)_\# \mu = (\operatorname{id}, t_B) _\#(\varphi _\# \mu)$ can also be written $\psi \circ T(x) = t_B \circ \varphi(x)$, for $\mu$-a.e.~$x$. A converse implication, that is ``every Monge map between $\mu$ and $\nu$ projects onto a Monge map between $\varphi_\#\mu$ and $\psi_\#\nu$'', may not hold in general. It is however true if we can guarantee that there is a unique optimal transport plan between $\varphi_\#\mu$ and $\psi_\#\nu$ and that it is of the form $(\operatorname{id},t_B)_\#(\varphi_\#\mu)$ (\textit{e.g.}~if we can apply \cref{theorem:brenier})---in that case, $T$ necessarily projects onto $t_B$ in the aforementioned sense. \end{remark} Under additional assumptions, we can build a more structured Monge map.
Namely, as our goal is to apply \Cref{prop:quad-cost-manifold-villani}, we will assume that the (common) basis $B = B_+ = B_-$ is a manifold, that \emph{almost all} the fibers of $\varphi : E \to B$ are homeomorphic to the \emph{same} manifold $F$, and that every source measure of interest ($\mu$, $\mu_u$, $\varphi_\#\mu$) has a density. We also introduce the following convention: if $\mu \in \mathcal{P}(E)$ for some measurable space $E$, $E' \subset E$, and $\varphi : E' \to B$, we let $\varphi_\#\mu$ be the (non-negative) measure supported on $B$ defined by $\varphi_\#\mu(A) = \mu(\varphi^{-1}(A))$ for $A \subset B$ measurable. If $\mu(E') = 1$, note that $\varphi_\#\mu$ defines a probability measure on $B$ (\textit{i.e.}~it has mass one). This formalism allows us to state our theorem even when some assumptions only hold $\lambda$-a.e. \begin{theorem} \label{theo:fibers-main} Let $E_0$ be a measurable space and $B_0$ and $F$ be complete Riemannian manifolds. Let $\mu,\nu \in \mathcal{P}(E_0)$ be two probability measures with compact support. Assume that there exists a set $E\subset E_0$ such that $\mu(E) = 1$ and that there exists a measurable map $\Phi : E \to B_0 \times F$ that is injective and whose inverse on its image is measurable as well. Let $p_B, p_F$ denote the projections of $B_0 \times F$ on $B_0$ and $F$ respectively. Let $\varphi\triangleq p_ B \circ \Phi: E\to B_0$. Let $c: E_0 \times E_0 \to \mathbb{R}$ and suppose that there exists a twisted $\tilde c: B_0 \times B_0 \to \mathbb{R}$ such that \[ c(x,y)=\tilde c(\varphi(x),\varphi(y))\quad \text{ for all } x,y\in E_0\,.\] Assume that $\varphi_\#\mu$ is absolutely continuous w.r.t.~the Lebesgue measure on $B_0$ and thus let $t_B$ denote the unique Monge map between $\varphi_\#\mu$ and $\varphi_\#\nu$ for this cost.
Suppose that there exists a disintegration $((\Phi_\#\mu)_u)_{u}$ of $\Phi_\#\mu$ by $p_B$ such that for $\varphi_\#\mu$-a.e.~$u$, $(\Phi_\#\mu)_{u}$ is absolutely continuous w.r.t.~the volume measure on $F$. Then there exists an optimal map $T$ between $\mu$ and $\nu$ for the cost $c$ that can be decomposed as \begin{equation}\label{eq:structureMongeMap} \Phi \circ T\circ \Phi^{-1}(u,v)=(t_ B (u),t_ F (u,v))=(\tilde c\text{-}\exp_u(\nabla f(u)), \exp_v(\nabla h_u(v)))\,, \end{equation} with $f: B_0 \to\mathbb{R}$ $\tilde c$-convex and $h_u: F \to\mathbb{R}$ $d_ F ^2/2$-convex for $\varphi_\#\mu$-a.e.~$u$. Note that $t_ F$ could actually be any function that sends each fiber $(\Phi_\#\mu)_u$ onto $(\Phi_\#\nu)_{t_ B (u)}$ in a measurable way. \end{theorem} The proof of this theorem is provided in \Cref{subsec:proof-fibers-main}. Let us give a simple example that illustrates the role played by our assumptions. This example has connections with \eqref{eqn:GW-quadratic} as detailed in \Cref{subsec:applications_quadratic}. \begin{example}\label{example:fiber-main} Let $E_0 = \mathbb{R}^d$ and $E = E_0\backslash \{0\}$, let $B_0 = \mathbb{R}$ and $F = S^{d-1} = \{x \in E_0\mid |x|=1\}$. For convenience, we also introduce the space $B = \mathbb{R}_+^*$. Consider the cost function $c(x,y) = (|x| - |y|)^2$, so that $c$ only depends on the norm of its entries. The fibers of the map $x \mapsto |x|$ are spheres---with the exception of $x=0$, which invites us to consider the diffeomorphism \begin{align*} \Phi : E &\to \mathbb{R}_+^* \times S^{d-1} = B \times F \subset B_0 \times F \\ x &\mapsto \left(|x|, \frac{x}{|x|}\right)\,. \end{align*} From this, we can write $c(x,y) = \tilde{c}(\varphi(x),\varphi(y))$ where $\varphi(x) = |x|$ and $\tilde{c}(u,u') = (u - u')^2$ (which is twisted). Now, if $\mu$ has a density on $\mathbb{R}^d$, so does $\Phi_\# \mu$ (as $\Phi$ is a diffeomorphism) on $B_0 \times F$. 
The coarea formula gives the existence of a disintegration $(\mu_u)_{u \in B}$ of $\Phi_\#\mu$ by $p_B : (u,v) \mapsto u$ (note that $p_{B\#}(\Phi_\#\mu) = \varphi_\#\mu$ also has a density) such that all the $\mu_u$ admit a density on $S^{d-1}$. Our theorem thus applies, ensuring the existence of a structured Monge map between $\mu$ and (any) $\nu$ for the cost $c$: it decomposes for almost all $x \in \mathbb{R}^d$ as a Monge map on the basis $B_0 = \mathbb{R}$ (although it is actually only characterized on the image of $\varphi$, that is $B = \mathbb{R}_+^*$) obtained as the gradient of a convex function $f$ (there is no need for the exponential map here since $\nabla f$ is the increasing mapping between the quantiles of $\varphi_\#\mu$ and $\varphi_\#\nu$) and a Monge map on each fiber $F = S^{d-1}$, also built from gradients of convex functions (via the exponential map on the sphere). Note that our theorem only requires assumptions to hold almost everywhere on $E_0 = \mathbb{R}^d$, which is important to allow us to ignore the singularity of $\varphi$ at $x = 0$. \end{example} \subsection{Proof of \Cref{theo:fibers-nonsense}} \label{subsec:proof-nonsense} The proof decomposes into three steps. \paragraph{Step 1: Existence and optimality of lifts.} We know by assumption that there exists a Monge map $t_B$ that is optimal between the pushforward measures $\varphi_\#\mu$ and $\psi _\#\nu$. As our goal is to build a Monge map between the initial measures $\mu$ and $\nu$, we first show that (i) there exists a transport plan $\pi \in \Pi(\mu,\nu)$ such that $(\varphi,\psi)_\#\pi = (\operatorname{id}, t_B)_\#(\varphi_\#\mu)$ and (ii) any such $\pi$ is an optimal transport plan between $\mu$ and $\nu$ for the cost $c$. This is formalized by the following lemmas.
\begin{lemma}[Existence of a lift] \label{lemma:pullback-manifold} For any transport plan $\tilde\pi \in \Pi(\varphi_\#\mu,\psi_\#\nu)$, there exists a transport plan $\pi \in \Pi(\mu,\nu)$ such that $(\varphi,\psi)_\#\pi=\tilde\pi$. \end{lemma} \begin{proof} Let $(\mu_u)_{u\in B_+}$ and $(\nu_v)_{v\in B_-}$ be disintegrations of $\mu$ and $\nu$ by $\varphi$ and $\psi$ respectively. Given $\tilde\pi \in \Pi(\varphi_\#\mu,\psi_\#\nu)$, we define $$\pi\triangleq \iint _{ B_+\times B_-}(\mu_{u}\otimes \nu_{v})\, \mathrm d \tilde\pi(u,v)\,,$$ \textit{i.e.}~trivially sending every fiber $\mu_{u}$ onto every fiber $\nu_{v}$, while weighting by $\tilde\pi$. See \cite[Sec. 5.3]{ambrosio2005gradient} for the notation. Then, for any Borel set $A\subset \mathcal{X}$, \begin{align*} \pi(A\times \mathcal{Y}) & =\iint _{ B_+\times B_-}\mu_{u}(A)\nu_{v}(\mathcal{Y})\, \mathrm d\tilde\pi(u,v) \\ & =\iint _{ B_+\times B_-}\mu_{u}(A)\, \mathrm d\tilde\pi(u,v) \\ & =\int _{ B_+}\mu_{u}(A)\, \mathrm d(\varphi_\#\mu)(u) & & \text{since the first marginal of }\tilde\pi \text{ is } \varphi_\#\mu\\ & = \mu(A)\,, & & \text{(disintegration theorem)} \end{align*} and similarly for $\nu$; hence $\pi \in \Pi(\mu,\nu)$. Now, let us show that $(\varphi,\psi)_\#\pi=\tilde\pi$. 
For $U$ and $V$ Borel sets of $B_+$ and $B_-$, \begin{align*} ((\varphi,\psi)_\#\pi)(U\times V) & =\iint _{U\times V}\mathrm d((\varphi,\psi)_\#\pi)(u,v) \\ & =\iint _{\varphi ^{-1}(U)\times \psi ^{-1}(V)}\mathrm d\pi(x,y) \\ & =\iint _{\varphi ^{-1}(U)\times \psi ^{-1}(V)}\iint _{ B_+\times B_-}\mathrm d(\mu_{u}\otimes \nu_{v})(x,y)\, \mathrm d\tilde\pi(u,v) \\ & =\iint_{ B_+\times B_-}\left(\int _{\varphi ^{-1}(U)}\mathrm d\mu_{u}(x)\int _{ \psi ^{-1}(V)}\mathrm d \nu_{v}(y)\right)\, \mathrm d\tilde\pi(u,v) & & \text{by Fubini's theorem}\\ & =\iint_{ B_+\times B_-}\mu_u(\varphi ^{-1}(U))\nu_v(\psi ^{-1}(V))\, \mathrm d\tilde\pi(u,v)\\ & =\iint_{ B_+\times B_-}\delta_{U}(u)\delta_{V}(v)\, \mathrm d\tilde\pi(u,v)\\ & =\iint_{U\times V}\mathrm d\tilde\pi(u,v)\\ & =\tilde\pi(U\times V)\,. && \qedhere \end{align*} \end{proof} \begin{lemma}[Decomposition of optimal plans for the base space cost] \label{lemma:decomp-manifold} Let $c: \mathcal{X} \times \mathcal{Y} \to \mathbb{R}$ and $\tilde c: B_+ \times B_-\to \mathbb{R}$ such that $$c(x,y)=\tilde c(\varphi(x),\psi(y))\quad \text{ for all } (x,y)\in \mathcal{X} \times \mathcal{Y}\,.$$ Then $$\Pi^\star_{\tilde c}(\varphi_\#\mu,\psi_\#\nu)=(\varphi,\psi)_\#\Pi^\star_{c}(\mu,\nu)\,,$$ where $\Pi^\star_{c}(\mu,\nu)$ denotes the set of optimal transport plans between $\mu$ and $\nu$ for the cost $c$, and similarly for $\Pi^\star_{\tilde c}(\varphi_\#\mu,\psi_\#\nu)$. \end{lemma} \begin{proof} Let us first remark that for every $\tilde\pi\in\Pi(\varphi_\#\mu,\psi_\#\nu)$ and $\pi\in\Pi(\mu,\nu)$, \begin{equation} \label{eq:equality-proj-manifold} \text{if } \tilde\pi=(\varphi,\psi)_\#\pi, \text{ then } \langle c,\,\pi\rangle=\langle \tilde c,\,\tilde\pi\rangle.
\end{equation} Indeed, for such a $\tilde\pi$ \begin{align*} \iint_{ B_+\times B_-}\tilde c(u,v)\,\mathrm d \tilde\pi(u,v)&=\iint_{\mathcal{X} \times \mathcal{Y}}\tilde c(\varphi(x),\psi(y))\,\mathrm d \pi(x,y)&&\text{by definition of the pushforward}\\ &=\iint_{\mathcal{X} \times \mathcal{Y}}c(x,y)\,\mathrm d \pi(x,y)\,. \end{align*} $\subset$. Let $\tilde\pi^\star \in \Pi ^\star _{\tilde c}(\varphi_\#\mu,\psi_\#\nu)$. By \Cref{lemma:pullback-manifold}, there exists a $\pi \in \Pi(\mu,\nu)$ such that $(\varphi,\psi)_\#\pi=\tilde\pi^\star$. Then for any $\gamma \in \Pi(\mu,\nu)$, $$\langle c,\,\pi\rangle \stackrel{ \cref{eq:equality-proj-manifold} }{ = } \langle \tilde c,\,\tilde\pi^\star \rangle \stackrel{ (*) }{ \leq } \langle \tilde c,\,(\varphi,\psi)_\#\gamma\rangle \stackrel{ \cref{eq:equality-proj-manifold} }{ = } \langle c,\,\gamma\rangle\,,$$ where $(*)$ follows from the optimality of $\tilde\pi^\star$. Hence the optimality of $\pi$.\\ $\supset$. Let $\pi ^\star \in \Pi ^\star _{c}(\mu,\nu)$. By \Cref{lemma:pullback-manifold}, for any $\tilde\gamma \in \Pi (\varphi_\#\mu,\psi_\#\nu)$ there exists a $\gamma \in \Pi(\mu,\nu)$ such that $(\varphi,\psi)_\#\gamma=\tilde\gamma$. We then have $$\langle \tilde c,\,(\varphi,\psi)_\#\pi ^\star \rangle\stackrel{ \cref{eq:equality-proj-manifold} }{ = }\langle c,\,\pi ^\star \rangle \stackrel{ (*) }{ \leq } \langle c,\,\gamma\rangle\stackrel{ \cref{eq:equality-proj-manifold} }{ = }\langle \tilde c,\,\tilde\gamma\rangle\,,$$ where $(*)$ follows from the optimality of $\pi^\star$. Hence the optimality of $(\varphi,\psi)_\#\pi ^\star$. 
\end{proof} \paragraph{Step 2: Existence of Monge maps between the fibers.} Using \Cref{lemma:pullback-manifold} with $\tilde{\pi} = (\operatorname{id},t_B)_\#(\varphi_\#\mu)$, we know that we can build an optimal transportation plan $\pi \in \Pi(\mu,\nu)$ that essentially coincides with $t_B$ on $B_+ \times B_-$ and transports each fiber $\mu_u$ onto $\nu_{t_B(u)}$ for $\varphi_\#\mu$-a.e.~$u \in B_+$. In order to build a Monge map between $\mu$ and $\nu$, we must show that one can actually transport almost all $\mu_u$ onto $\nu_{t_B(u)}$ using a map rather than a plan. For this, we use the following result, see \cite[Rem.~1.23, Lemma 1.28, Cor.~1.29]{santambrogio2015optimal}. \begin{proposition}\label{prop:fact_nonsense} Let $\alpha,\beta$ be two measures supported on $\mathbb{R}^d$ with $\alpha$ atomless. Then: \begin{enumerate}[label=(\roman*),nolistsep] \item if $d=1$, there exists a transport map $\tilde{T}$ that pushes $\alpha$ onto $\beta$. Furthermore, it is the \emph{unique} optimal map between these measures for the quadratic cost $(x,y) \mapsto |x - y|^2$; \item \label{item:sigma_d} there exists a map $\sigma_d : \mathbb{R}^d \to \mathbb{R}$ (that does not depend on $\alpha,\beta$) that is (Borel) measurable, injective, and whose inverse is measurable as well. \end{enumerate} \end{proposition} As we assumed that the ground spaces $\mathcal{X}$ and $\mathcal{Y}$ can be embedded in $\mathbb{R}^d$ using the injective, measurable maps $\Phi_\mathcal{X}$ and $\Phi_\mathcal{Y}$, we can apply \Cref{prop:fact_nonsense} using $\sigma_\mathcal{X} = \sigma_d \circ \Phi_\mathcal{X}$ and $\sigma_\mathcal{Y} = \sigma_d \circ \Phi_\mathcal{Y}$. As $\sigma_\mathcal{X}$ is injective and $\mu_u$ is atomless, $\sigma_{\mathcal{X}\#} \mu_u$ is atomless on $\mathbb{R}$, and we can thus consider the \emph{unique} Monge map $\tilde{T}_u$ between $\sigma_{\mathcal{X}\#}\mu_u$ and $\sigma_{\mathcal{Y}\#} \nu_{t_B(u)}$ for the quadratic cost on $\mathbb{R}$.
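Both ingredients of the proposition can be illustrated with a small sketch. Digit interleaving is one classical way to build such an injection $\sigma_d$ (here $d=2$); we work on truncated decimal expansions encoded as strings, an assumption of this sketch that keeps the round trip exact. The names `sigma2`, `sigma2_inv` and `t1` are hypothetical:

```python
# (ii): a Borel injection sigma_2 : [0,1)^2 -> [0,1) can be built by
# interleaving decimal digits; we sketch it on truncated expansions.
def sigma2(x_digits: str, y_digits: str) -> str:
    return "".join(a + b for a, b in zip(x_digits, y_digits))

def sigma2_inv(z_digits: str):
    # de-interleave: even positions come from x, odd positions from y
    return z_digits[0::2], z_digits[1::2]

x, y = "12345678", "87654321"      # digits of 0.12345678 and 0.87654321
z = sigma2(x, y)                    # digits of sigma_2(x, y)
assert sigma2_inv(z) == (x, y)      # injective, with an explicit inverse

# (i): on R, the monotone (quantile) coupling; on uniform samples of equal
# size it is plain rank matching, the unique optimal map for |x - y|^2.
xs, ys = sorted([0.3, -1.2, 2.5]), sorted([5.0, 0.1, -0.7])
t1 = dict(zip(xs, ys))              # x_(i) -> y_(i)
assert t1[-1.2] == -0.7 and t1[0.3] == 0.1 and t1[2.5] == 5.0
```

On full (non-truncated) expansions the same interleaving idea yields a measurable injection with measurable inverse, which is what the proof uses to reduce the fiberwise problem to the real line.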
From this, as the maps $\sigma_\mathcal{X}$ and $\sigma_\mathcal{Y}$ are measurable and injective (thus invertible on their image), we can define $T_u = \sigma_\mathcal{Y}^{-1} \circ \tilde{T}_u \circ \sigma_\mathcal{X} : \mathcal{X} \to \mathcal{Y}$, which defines a (measurable) transport map between $\mu_u$ and $\nu_{t_B(u)}$. \paragraph{Step 3: building a measurable global map.} Now that we have maps $(T_u)_u$ between each $\mu_u$ and $\nu_{t_B(u)}$, it may be tempting to simply define a map $T : \mathcal{X} \to \mathcal{Y}$ by $T(x) = T_{\varphi(x)}(x)$ when $\mu_{\varphi(x)}$ is atomless (which, by assumption, holds $\mu$-a.e.). Intuitively, this map induces a transport plan $(\operatorname{id}, T)_\#\mu$ that satisfies $(\varphi,\psi)_\#(\operatorname{id},T)_\#\mu = (\operatorname{id}, t_B)_\#(\varphi _\# \mu)$ on $B_+ \times B_-$ and thus must be optimal according to \Cref{lemma:decomp-manifold}. One remaining step, though, is to prove that this map $T$ can be defined in a measurable way. For this, we use the following \emph{measurable selection theorem} due to \cite[Thm.~1.1]{fontbona2010measurability}, which reads: \begin{proposition}\label{prop:fontbonaRd} Let $(B,\Sigma,m)$ be a $\sigma$-finite measure space and consider a measurable function $B \ni u \mapsto (\mu_u,\nu_u) \in \mathcal{P}(\mathbb{R}^d)^2$. Let $c : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}$ be a cost function, and assume that for $m$-a.e.~$u \in B$, there is a (unique) Monge map $T_u$ between $\mu_u$ and $\nu_u$ for the cost $c$. Then there exists a measurable function $(u,x) \mapsto T(u,x)$ such that $m$-a.e., $T(u,x) = T_u(x)$, $\mu_u$-a.e.
\end{proposition} We can apply this result in the case $d=1$ to the family of measures $(\sigma_{\mathcal{X}\#} \mu_u, \sigma_{\mathcal{Y}\#} \nu_{t_B(u)})_{u \in B_+}$, where the reference measure on $B_+$ is $\varphi_\#\mu$.\footnote{Note that we cannot apply \cref{prop:fontbonaRd} to the measures $(\mu_u,\nu_{t_B(u)})_u$ and the maps $(T_u)_u$ directly, as $T_u$ may not be the unique Monge map between the measures, a required assumption of the proposition.} We first need to show the measurability of this family of measures. By definition of the disintegration of measures (see for instance \cite[Thm.~5.3.1]{ambrosio2005gradient}), the map $u_- \in B_- \mapsto \nu_{u_-}$ is measurable; since the Monge map $t_B$ is measurable as well, so is the map $B_+ \ni u \mapsto \sigma_{\mathcal{Y}\#} \nu_{t_B(u)}$ by composition of measurable maps, and thus so is the map $u \mapsto (\sigma_{\mathcal{X}\#}\mu_u, \sigma_{\mathcal{Y}\#}\nu_{t_B(u)})$. Therefore, \cref{prop:fontbonaRd} applies and guarantees the existence of a measurable map $\tilde{T} : B_+ \times \mathbb{R} \to \mathbb{R}$ such that $\tilde{T}(u,x) = \tilde{T}_u(x)$ for $\varphi_\#\mu$-almost all $u$ and $\sigma_{\mathcal{X}\#}\mu_u$-almost all $x$. Now, we can define \begin{align*} T : \mathcal{X} &\to \mathcal{Y} \\ x &\mapsto \sigma_\mathcal{Y}^{-1}\left( \tilde{T}(\varphi(x), \sigma_\mathcal{X}(x))\right)\,. \end{align*} This map is measurable as a composition of measurable maps. Let us prove that this defines a transport map between $\mu$ and $\nu$.
For any function $g : \mathcal{Y} \to \mathbb{R}$ continuous with compact support, we can write \begin{equation*} \int_\mathcal{Y} g(y) \,\mathrm d T_\#\mu(y) = \int_\mathcal{X} g(T(x)) \,\mathrm d \mu(x) = \int_{u \in B_+} \int_{x \in \varphi^{-1}(\{u\})} g\left(\sigma_\mathcal{Y}^{-1}\left(\tilde{T}_u(\sigma_\mathcal{X}(x))\right)\right) \,\mathrm d \mu_u(x) \,\mathrm d \varphi _\# \mu(u)\,, \end{equation*} where we use the disintegration of $\mu$ w.r.t.~$\varphi$ and the fact that the $\mu_u$ are supported on $\varphi^{-1}(\{u\})$, allowing us to write $\tilde{T}(\varphi(x),\sigma_\mathcal{X}(x)) = \tilde{T}_u(\sigma_\mathcal{X}(x))$ on that fiber ($\varphi_\#\mu$-a.e.). Now, recall that $T_u : x \mapsto \sigma_\mathcal{Y}^{-1}\left(\tilde{T}_u(\sigma_\mathcal{X}(x))\right)$ defines a transport map between $\mu_u$ and $\nu_{t_B(u)}$. In particular, the image of the fiber $\varphi^{-1}(\{u\})$ by this map is $\psi^{-1}(\{t_B(u)\}) \subset \mathcal{Y}$. Therefore, we get \begin{align*} \int_\mathcal{Y} g(y) \,\mathrm d T_\# \mu(y) &= \int_{u \in B_+} \int_{y \in \psi^{-1}(\{t_B(u)\})} g(y) \,\mathrm d \nu_{t_B(u)}(y) \,\mathrm d \varphi_\#\mu(u) \\ &= \int_{u \in B_+} \int_{y \in \mathcal{Y}} g(y) \,\mathrm d \nu_{t_B(u)}(y) \,\mathrm d \varphi_\#\mu(u) & & \text{as $\nu_{t_B(u)}$ is supported on $\psi^{-1}(\{t_B(u)\})$} \\ &= \int_{v \in B_-} \int_{y \in \mathcal{Y}} g(y) \,\mathrm d \nu_{v}(y) \,\mathrm d t_{B\#} (\varphi_\#\mu)(v) & & \text{by change of variable $v = t_B(u)$} \\ &= \int_{v \in B_-} \int_{y \in \mathcal{Y}} g(y) \,\mathrm d \nu_{v}(y) \,\mathrm d \psi_\#\nu(v) & & \text{as $t_B$ pushes $\varphi_\#\mu$ to $\psi_\#\nu$}\\ &= \int_{y \in \mathcal{Y}} g(y) \,\mathrm d \nu(y) & & \text{as $(\nu_v)_v$ is a disintegration of $\nu$ by $\psi$}\,, \end{align*} proving that $T _\# \mu = \nu$.
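The pushforward identity can be sanity-checked numerically. Below, empirical measures with uniform atoms serve as a discrete stand-in for the atomless setting of the theorem (the data, and the choice of the fiber-invariant cost from the polar example, are illustrative assumptions):

```python
import math

# empirical stand-ins for mu and nu in R^2 \ {0} (illustrative data),
# with the fiber-invariant cost c(x, y) = (|x| - |y|)^2
mu = [(1.0, 0.0), (0.0, -1.0), (3.0, 4.0), (-2.0, 0.0)]
nu = [(0.0, 0.5), (-5.0, 0.0), (0.0, 2.0), (1.0, 0.0)]

# base map t_B: monotone rearrangement between the norms, then glue by
# sending the atom on each fiber to the atom on the image fiber
order_mu = sorted(range(len(mu)), key=lambda i: math.hypot(*mu[i]))
order_nu = sorted(range(len(nu)), key=lambda j: math.hypot(*nu[j]))
T = {mu[i]: nu[j] for i, j in zip(order_mu, order_nu)}

# T_# mu = nu: the image multiset of the atoms of mu is exactly nu
assert sorted(T[x] for x in mu) == sorted(nu)

# projection property psi(T(x)) = t_B(phi(x)): norms are matched monotonically
pairs = sorted((math.hypot(*x), math.hypot(*T[x])) for x in mu)
assert pairs == sorted(pairs, key=lambda p: p[1])
```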
By \cref{lemma:decomp-manifold}, to prove that $T$ is optimal it suffices to show that $(\varphi,\psi)_\#(\operatorname{id},T)_\# \mu = (\operatorname{id}, t_B)_\#(\varphi_\#\mu)$: as $(\operatorname{id}, t_B)_\#(\varphi_\#\mu)$ is an optimal transport plan between $\varphi_\#\mu$ and $\psi_\#\nu$, this makes $(\operatorname{id},T)_\#\mu$ optimal between $\mu$ and $\nu$ (hence $T$ a Monge map). For this, let $g : B_+ \times B_- \to \mathbb{R}$ be a continuous function with compact support. We have \begin{align*} \iint_{B_+ \times B_-} g(u,v) \,\mathrm d (\varphi,\psi)_\#(\operatorname{id},T)_\# \mu(u,v) &= \iint_{\mathcal{X} \times \mathcal{Y}} g(\varphi(x),\psi(y)) \,\mathrm d (\operatorname{id},T)_\#\mu(x,y) \\ &= \int_\mathcal{X} g(\varphi(x),\psi(T(x))) \,\mathrm d \mu(x) \\ &= \int_{u \in B_+} \int_{x \in \varphi^{-1}(\{u\})} g(u, \psi(\sigma_\mathcal{Y}^{-1}(\tilde{T}_u(\sigma_\mathcal{X}(x))))) \,\mathrm d \mu_u(x) \,\mathrm d \varphi _\# \mu(u) \\ &= \int_{u \in B_+} \int_{y \in \psi^{-1}(\{t_B(u)\})} g(u, t_B(u)) \,\mathrm d \nu_{t_B(u)}(y) \,\mathrm d \varphi _\# \mu(u) \\ &= \int_{u \in B_+} g(u, t_B(u)) \,\mathrm d \varphi_\#\mu(u) \\ &= \iint_{B_+ \times B_-} g(u,v) \,\mathrm d (\operatorname{id},t_B)_\# \varphi _\#\mu(u,v)\,, \end{align*} proving the required equality and thus that $T$ is a Monge map between $\mu$ and $\nu$. \subsection{Proof of \Cref{theo:fibers-main}} \label{subsec:proof-fibers-main} To alleviate notations, we let $\mu' \triangleq \Phi_\#\mu$ and $\nu' \triangleq \Phi_\#\nu$ in the following. We also denote by $B$ the image of $\varphi = p_B \circ \Phi$, so that $\mu',\nu'$ are supported on $B \times F \subset B_0 \times F$.
\paragraph{Step 1: Construction of the structured Monge map.} Given that $\varphi_\#\mu$ is absolutely continuous w.r.t.~the Lebesgue measure on the complete (separable) Riemannian manifold $B_0$, by \cref{theorem:brenier} there exists a unique optimal transport plan $\pi ^\star_{ B }$ between $\varphi_\#\mu$ and $\varphi _\# \nu$ for the cost $\tilde c$, and it is induced by a map $t_{ B }: B_0 \to B_0$ of the form $t_{ B }(u)=\tilde c\text{-}\exp_u(\nabla f(u))$, with $f$ locally Lipschitz and $\tilde c$-convex. By \cref{lemma:decomp-manifold}, we know that any transport plan $\pi \in \Pi(\mu,\nu)$ that satisfies $(\varphi,\varphi) _\# \pi = (\operatorname{id}, t_B)_\#(\varphi_\#\mu)$ must be optimal. Therefore, if $\pi$ happens to be induced by a map $T$, that is $\pi = (\operatorname{id}, T)_\#\mu$, we would obtain a Monge map between $\mu$ and $\nu$. To build such a $T$, we proceed as in \Cref{subsec:proof-nonsense}: we define a Monge map $T_u$ between $\mu'_u$ and $\smash{\nu'_{t_B(u)}}$ (recall that $(\mu'_u)_u$ and $(\nu'_u)_u$ are the disintegrations of $\Phi_\#\mu = \mu'$ and $\Phi_\#\nu=\nu'$ with respect to $p_B$) for $\varphi_\#\mu$-a.e.~$u$ and build a global map between $\mu'$ and $\nu'$ by (roughly) setting $T(u,x) = T_u(x)$. As in \Cref{subsec:proof-nonsense}, proving the measurability of such a $T$ requires care. \paragraph{Step 2: Transport between the fibers.} For $\varphi_\#\mu$-a.e.~$u$, $\mu'_u$ has a density w.r.t.~the volume measure on $F$ and the optimal cost between $\mu'_u$ and $\smash{\nu'_{t_{ B }(u)}}$ is finite by assumption. Whenever $\mu'_u$ has a density, we can therefore apply \cref{prop:quad-cost-manifold-villani} between $\mu'_u$ and $\smash{\nu'_{t_{ B }(u)}}$ with the cost $d_{ F }^2$ to obtain that there exists a plan $\pi_u$ between these fibers that is induced by a map $T_u: F \to F$ that can be expressed as $T_u(v)=\exp_v(\nabla h_u(v))$ with $h_u$ being $d_ F ^2/2$-convex on $F$.
\paragraph{Step 3: Measurability of the global map.} Now that we have built structured maps $T_u$ between corresponding fibers (through $t_B$), it remains to prove the existence of a measurable map $T : B_0 \times F \to B_0 \times F$ transporting $\mu'$ onto $\nu'$ and satisfying $T(u,x) = (t_B(u),T_u(x))$ for $\varphi_\#\mu$-almost every $u$ and $\mu'_u$-almost every $x$. For this, we need an adaptation of \cref{prop:fontbonaRd} to the manifold setting. Namely, we have the following: \begin{proposition}[Measurable selection of maps, manifold case]\label{prop:selection-manifold} Let $M$ be a complete Riemannian manifold and $(B, \Sigma, m)$ a measure space. Consider a measurable function $B\ni u \mapsto (\mu_u, \nu_u) \in \mathcal{P}(M)^2$. Assume that for $m$-almost every $u\in B$, $\mu_u\ll\operatorname{vol}_M$ and $\mu_u$ and $\nu_u$ have a finite transport cost. Let $T_u$ denote the (unique by \cref{prop:quad-cost-manifold-villani}) optimal transport map induced by the quadratic cost $d_M^2$ on $M$ between $\mu_u$ and $\nu_u$.\\ Then there exists a function $(u,x)\mapsto T(u,x)$, measurable w.r.t.~$\Sigma \otimes \mathcal{B}(M)$, such that $m$-a.e., $$T(u,x)=T_{u}(x)\quad \mu_{u}\text{-a.e.}$$ \end{proposition} This proposition can essentially be proved by adapting the proof of \cite{fontbona2010measurability} to the manifold setting, and most steps adapt seamlessly. We provide a sketch of proof below. A complete proof, where we stress the points that need specific care in the adaptation, is deferred to the appendix. \begin{proof}[Sketch of proof of \cref{prop:selection-manifold}] The proof relies on the theory of measurable set-valued maps \cite[Ch. 5 and 14]{rockafellar2009variational}.
The main steps are the following: \begin{enumerate} \item For $k \in \mathbb{N}$, let $(A_{n,k})_n$ be a partition of $M$ into cells of volume at most $2^{-kD}$ (where $D$ denotes the dimension of $M$) and such that $(A_{n,j})_n$ is a refinement of $(A_{n,k})_n$ for $j \geq k$, that is, each $A_{n,k}$ is partitioned by a subset of the $(A_{n,j})_n$. Then, for all $n,k$, the set $\{(u,x),\ T_u(x) \in A_{n,k} \}$ is measurable. \item Consider $a_{n,k} \in A_{n,k}$ for each $n,k$, chosen in a measurable way. Build a sequence of measurable maps $(T^{(k)})_k$ defined by $T^{(k)} : (u,x) \mapsto a_{n,k}$ where $T_u(x) \in A_{n,k}$. This is a Cauchy sequence for the metric $D_1(f,g) \triangleq \int d_M(f(u,x), g(u,x)) \,\mathrm d \mu_u(x) \,\mathrm d m(u)$ for $f,g \in L^1(B \times M \to M, \mu_u \otimes m)$, that is, the space of functions $f$ such that $\int d_M(f(u,x), z) \,\mathrm d \mu_u(x) \,\mathrm d m(u) < \infty$ for some $z \in M$. This space is complete \cite{chiron2007definitions}, so we can consider the (measurable) map $T = \lim_k T^{(k)}$. \item Prove that we indeed have $T(u,x) = T_u(x)$, roughly using that $T^{(k)}(u,x)$ approximates both $T_u(x)$ (by construction) and $T(u,x)$ (as the limit of the sequence).\qedhere \end{enumerate} \end{proof} We can apply this proposition with the manifold being the (common) fiber $F$ on which the $\mu'_u,\smash{\nu'_{t_B(u)}}$ are supported for $\varphi_\#\mu$-a.e.~$u$, and for which we have access to the (unique) Monge map $T_u$. It gives the existence of a global map $t$ satisfying $t(u,v) = T_u(v)$ for $\varphi_\#\mu$-a.e.~$u$ and $\mu'_u$-a.e.~$v$, and we can thus define the (measurable) map $T(u,x) = (t_B(u), t(u,x))$.
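The dyadic refinement scheme in steps 1--2 of this sketch can be illustrated numerically. The following Python snippet is a minimal sketch under simplifying assumptions ($M=[0,1]$, an arbitrary hypothetical map $T$, and cell midpoints playing the role of the measurably chosen representatives $a_{n,k}$); it checks that snapping $T$ to cells of width $2^{-k}$ produces approximations whose empirical $D_1$-distance to $T$ shrinks geometrically, as required for the Cauchy argument:

```python
import numpy as np

def snap_to_cells(values, k):
    """Replace each value by the midpoint of its dyadic cell of width 2**-k."""
    cells = np.floor(values * 2**k)
    cells = np.minimum(cells, 2**k - 1)   # the value 1.0 falls in the last cell
    return (cells + 0.5) / 2**k           # midpoints play the role of a_{n,k}

rng = np.random.default_rng(0)
x = rng.random(10_000)                    # samples standing in for mu
T = lambda t: t**2                        # a hypothetical map to approximate

errors = []
for k in range(1, 8):
    Tk = snap_to_cells(T(x), k)
    errors.append(np.mean(np.abs(Tk - T(x))))   # empirical D_1(T^{(k)}, T)

# the piecewise-constant approximations T^{(k)} converge: the error
# is bounded by half the cell width, 2**-(k+1), and roughly halves per level
```

The geometric decay of the errors is what makes $(T^{(k)})_k$ a Cauchy sequence in the complete space $L^1(B\times M\to M,\mu_u\otimes m)$.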
One then has, for any continuous function $z$ with compact support: \begin{align*} \int_{ B_0 \times F } z(u',v')\, \mathrm d(T_*\Phi_*\mu)(u',v') & =\int_{ B_0 \times F }z(t_ B (u),T_u(v))\, \mathrm d(\Phi_*\mu)(u,v) & & \text{(pushforward } T \text{ on }\Phi_*\mu \text{)} \\ & =\iint_{ B_0 \times F }z(t_ B (u),T_u(v))\, \mathrm d(\Phi_*\mu)_{u}(v)\, \mathrm d(\varphi_*\mu)(u) & & \text{(disintegration theorem)} \\ & =\iint_{ B_0 \times F }z(t_ B (u),v')\, \mathrm d(T_{u*}(\Phi_*\mu)_{u})(v')\, \mathrm d(\varphi_*\mu)(u) & & \text{(pushforward } T_u \text{ on }(\Phi_*\mu)_{u} \text{)} \\ & =\iint_{ B_0 \times F }z(t_ B (u),v')\, \mathrm d((\Phi_*\nu)_{t_ B (u)})(v')\, \mathrm d(\varphi_*\mu)(u) & & (T_{u*}(\Phi_*\mu)_{u}=(\Phi_*\nu)_{t_ B (u)}) \\ & =\iint_{ B_0 \times F }z(u',v')\, \mathrm d((\Phi_*\nu)_{u'})(v')\, \mathrm d(t_{ B *}\varphi_*\mu)(u') & & \text{(pushforward } t_ B \text{ on }\varphi_*\mu \text{)} \\ & =\iint_{ B_0 \times F }z(u',v')\, \mathrm d((\Phi_*\nu)_{u'})(v')\, \mathrm d(\varphi_*\nu)(u') & & (t_{ B *}(\varphi_*\mu)=\varphi_*\nu) \\ & =\int_{ B_0 \times F }z(u',v')\, \mathrm d(\Phi_*\nu)(u',v')\,, & & \text{(disintegration theorem)} \end{align*} hence $T$ sends $\Phi_*\mu$ to $\Phi_*\nu$, and $T_E\triangleq\Phi^{-1}\circ T\circ\Phi$ therefore sends $\mu$ to $\nu$; and since \[ (\varphi,\varphi)_*(\operatorname{id},T_E)_*\mu=(\varphi,\varphi\circ T_E)_*\mu=(\varphi,t_{ B }\circ \varphi)_*\mu=(\operatorname{id},t_{ B })_*\varphi_*\mu=\pi_{ B }^\star \,,\] we have that $T_E$ is an optimal map between $\mu$ and $\nu$. \section{Introduction} \label{sec:introduction} Finding correspondences between objects that do not live on the same metric space is a problem of fundamental interest both in applications and in theory, in fields as different as computer vision \cite{AlexBerg,memoli2011gromov}, mathematics \cite{sturm2012space}, biology \cite{demetci2020gromov} and machine learning \cite{titouan2020co,alvarez2018gromov}.
The problem of graph matching \cite{GraphMatchingDeLaTorre} is a prominent example of such a situation. On the mathematical side, the comparison of metric measure spaces has attracted interest \cite{sturm2006geometry,sturm2012space,DePonti2022EntropyTransportDB} over the past decade. A common answer to such problems consists in seeking a map of low distortion between the two objects. In the case of measures living on a metric space, this distortion is measured in terms of the distances. To make the problem well-posed and symmetric, it is relaxed to superpositions of deterministic maps, called plans or couplings as in optimal transport \cite{santambrogio2015optimal}. In optimal transport, the question of when the optimization can actually be reduced to the space of maps has been extensively developed since Brenier's work \cite{brenier1987decomposition} and further generalized \cite{mccann2001polar}. Brenier's result essentially states that for the quadratic cost in Euclidean spaces, the optimal map is given by the gradient of a convex function. Such results on the structure of optimal plans/maps are of great interest in order to reduce the optimization set \cite{makkuva2020optimal}. In stark contrast to optimal transport, which is a linear programming problem, the formulations of the problems mentioned above fall in the class of quadratic assignment problems \cite{koopmans1957assignment}, which are computationally harder. As a consequence, it is not surprising that fewer results are available in the literature. In fact, the problem of understanding the structure of optimal plans, and of determining when they are actual maps, was raised by Sturm in \cite[Challenge 3.6]{sturm2012space}. In this work, we address this question in two particular cases in Euclidean spaces. The first one is when the distortion is measured in terms of the scalar product; we show the existence of optimal maps and we detail their structure.
The second case is that of the squared Euclidean distance, for which the problem seems to have less structure; we show that optimal plans can be chosen to be supported on the union of a graph and an anti-graph of maps. We also study further the one-dimensional case, which has attracted recent attention \cite{vayer2020contribution,beinert2022assignment}. Indeed, in the latter article, a counter-example is given to the claim that the monotone (increasing or decreasing) mapping is optimal in the discrete case. We improve on these results in two directions: first, by showing that this alternative (monotone increasing/decreasing) is actually true under some conditions on the measures; second, by providing numerical evidence for a counter-example to the existence of optimal maps between a density and a measure. We refer the reader to \Cref{subsec:intro_contribs} for a detailed account of our contributions, while the background and state-of-the-art are presented respectively in \Cref{subsec:intro_GW,subsec:intro_relatedworks}. \subsection{The Gromov--Wasserstein problem} \label{subsec:intro_GW} \subsubsection{Formulation} The Gromov--Wasserstein (GW) problem, initially introduced in \cite{memoli2011gromov}, can be seen as an extension of the Gromov--Hausdorff distance \cite{gromov1999metric}, see also \cite{sturm2006geometry} for a similar extension, to the context of measured spaces $(\mathcal{X},\mu)$ equipped with a cost function $c_\mathcal{X} : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ (typically, $c_\mathcal{X}$ can be a distance on $\mathcal{X}$). Given $(\mathcal{X},\mu)$ and $(\mathcal{Y},\nu)$ equipped with costs $c_\mathcal{X}, c_\mathcal{Y}$ respectively, and random variables $X,X' \sim \mu$ and $Y,Y' \sim \nu$, the GW problem seeks a correspondence (\textit{i.e.}~a joint law) between $\mu$ and $\nu$ that makes the distribution of $c_\mathcal{X}(X,X')$ as close as possible to that of $c_\mathcal{Y}(Y,Y')$, in an $L^p$ sense.
Formally, it reads \begin{defi}\label{def:gw} Let $\mathcal{X}$ and $\mathcal{Y}$ be Polish spaces and $p\geq 1$. Given two probability measures $\mu \in \mathcal{P}(\mathcal{X})$ and $\nu \in \mathcal{P}(\mathcal{Y})$, and two continuous symmetric functions $c_{\mathcal{X}}:\mathcal{X}\times \mathcal{X}\to \mathbb{R}$ and $c_{\mathcal{Y}}:\mathcal{Y}\times \mathcal{Y}\to \mathbb{R}$, the $p$\emph{-Gromov--Wasserstein problem} aims at finding \begin{align*} \tag{GW} \operatorname{GW}_p(\mu,\nu)=\inf_{\pi \in \Pi(\mu,\nu)} \left( \int _{\mathcal{X}\times \mathcal{Y}}\int _{\mathcal{X}\times \mathcal{Y}}|c_{\mathcal{X}}(x,x')-c_{\mathcal{Y}}(y,y')|^p \, \mathrm d\pi(x,y)\, \mathrm d\pi(x',y')\right)^{1/p}, \label{eq:gw} \end{align*} \newpage \noindent where $\Pi(\mu,\nu)$ denotes the subset of $\mathcal{P}(\mathcal{X} \times \mathcal{Y})$ of probability measures that admit $\mu$ (resp.~$\nu$) as first (resp.~second) marginal. Any $\pi^\star$ minimizing \cref{eq:gw} is said to be an \emph{optimal correspondence plan} between $\mu$ and $\nu$. Whenever $\pi^\star$ can be written as $\pi^\star = (\operatorname{id},T)_\# \mu$, where the measurable map $T : \mathcal{X} \to \mathcal{Y}$ satisfies $T_\# \mu(A) \triangleq \mu(T^{-1}(A)) = \nu(A)$ for all Borel $A$, $T$ is said to be an \emph{optimal correspondence map}, or a \emph{Monge map} between $\mu$ and $\nu$. \end{defi} While the existence of optimal correspondence plans holds by compactness arguments as long as the above infimum is not $+\infty$, much less is known about the existence of optimal correspondence maps, even in simple cases.
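For intuition, the double integral defining \cref{eq:gw} can be evaluated directly in the discrete case as a tensor contraction over a coupling matrix. A minimal numpy sketch (the point cloud and couplings are hypothetical; $p=2$ and squared-distance costs are assumed), comparing the identity coupling between two copies of the same space with the independent one:

```python
import numpy as np

def gw_cost(Cx, Cy, pi):
    """Double integral in (GW) for p = 2 and a discrete coupling pi (N x M)."""
    # K[i, j, k, l] = |Cx[i, k] - Cy[j, l]|^2
    K = (Cx[:, None, :, None] - Cy[None, :, None, :])**2
    return np.einsum('ijkl,ij,kl->', K, pi, pi)

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))
Cx = np.sum((X[:, None] - X[None, :])**2, axis=-1)   # squared distances
Cy = Cx.copy()                                        # an identical copy of the space

N = len(X)
pi_id = np.eye(N) / N                 # the identity coupling
pi_prod = np.full((N, N), 1 / N**2)   # the independent (product) coupling

# the identity coupling between isomorphic spaces has zero distortion,
# while the independent coupling does not
```

The identity coupling achieves cost $0$ here, consistently with the fact that $\operatorname{GW}_p(\mu,\nu)=0$ exactly for strongly isomorphic spaces.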
In this work, we will consider two specific instances of this problem, both assuming that $\mathcal{X} \subset \mathbb{R}^n$ and $\mathcal{Y} \subset \mathbb{R}^{d}$ for two integers $n\geq d$, and using $p=2$: \begin{enumerate}[label=(\roman*)] \item the \emph{inner product case}, where $c_\mathcal{X}$ and $c_\mathcal{Y}$ denote the inner products on $\mathbb{R}^n$ and $\mathbb{R}^d$ (both denoted by $\langle \cdot, \cdot \rangle$), respectively: \begin{equation} \tag{GW-IP} \min _{\pi \in \Pi\left(\mu, \nu\right)} \int_{\mathcal{X}\times\mathcal{Y}}\int_{\mathcal{X}\times\mathcal{Y}}\left|\langle x,\, x'\rangle-\langle y,\, y'\rangle\right|^{2} \,\mathrm d\pi(x, y) \,\mathrm d\pi(x', y')\,, \label{eqn:GW-inner-prod} \end{equation} which essentially compares the distributions of angles in $(\mathcal{X},\mu)$ and $(\mathcal{Y},\nu)$; \item the \emph{quadratic case}, where $c_\mathcal{X}$ and $c_{\mathcal{Y}}$ are the squared Euclidean distances on $\mathbb{R}^n$ and $\mathbb{R}^d$, respectively: \begin{equation} \tag{GW-Q} \min _{\pi \in \Pi\left(\mu, \nu\right)} \int_{\mathcal{X}\times\mathcal{Y}}\int_{\mathcal{X}\times\mathcal{Y}}\left||x-x'|^2-|y-y'|^2\right|^{2} \,\mathrm d\pi(x, y) \,\mathrm d\pi(x', y')\,, \label{eqn:GW-quadratic} \end{equation} where by $|\cdot|$ we mean $\|\cdot\|_2$, a notation that we keep in the rest of the paper for the sake of clarity. This choice for $c_{\mathcal{X}}$ and $c_{\mathcal{Y}}$ is standard, as we have the following property: if $\operatorname{GW}_p(\mu,\nu) = 0$, the measured metric spaces (mms) $(\mathcal{X}, c_\mathcal{X}, \mu)$ and $(\mathcal{Y}, c_\mathcal{Y}, \nu)$ are \emph{strongly isomorphic}, that is, there exists an isometry $\varphi : (\mathcal{X}, d_\mathcal{X}) \to (\mathcal{Y}, d_\mathcal{Y})$ such that $\varphi _\# \mu = \nu$, see \cite{vayer2020contribution}.
A subcase of this problem arises when $\mu = \frac{1}{N} \sum_{i=1}^N \delta_{x_i}$ and $\nu = \frac{1}{N} \sum_{j=1}^N \delta_{y_j}$ are uniform probability distributions supported on $N$ points each. In this scenario, optimal correspondence plans $\pi$ can be chosen as permutations $\sigma$ of $\{1,\dots,N\}$ \cite[Thm.~4.1.2]{vayer2020contribution}, and the problem optimizes over the set of such permutations $\mathfrak{S}_N$, \begin{equation} \tag{QAP} \min_{\sigma\in \mathfrak{S}_N}\ \sum_{i,j} \left| |x_i - x_j|^2 - |y_{\sigma(i)} - y_{\sigma(j)}|^2 \right|^2, \label{eq:QAP} \end{equation} which is a particular case of the \emph{Quadratic Assignment Problem} (QAP) introduced in \cite{koopmans1957assignment}. \end{enumerate} \subsubsection{Relation with the optimal transportation problem: a tight bi-convex relaxation} \label{subsec:intro_GW_and_OT} Let us first recall the formulation of the Optimal Transportation (OT) problem, also known as the Kantorovich problem, which will play a central role in this work. \begin{defi}[Kantorovich problem] Given two probability measures $\mu \in\mathcal{P}(\mathcal{X})$ and $\nu \in\mathcal{P}(\mathcal{Y})$ and a cost function $c:\mathcal{X}\times \mathcal{Y}\to \mathbb{R}\cup\{\infty\}$, we consider the problem \begin{equation} \tag{OT} \min_{\pi \in \Pi(\mu,\nu)} \int _{\mathcal{X}\times \mathcal{Y}}c(x,y)\, \mathrm d\pi(x,y) \,. \label{eq:OT} \end{equation} A \emph{transport plan} $\pi\in\Pi(\mu,\nu)$ realizing \cref{eq:OT} is called an \emph{optimal transport plan}, or \emph{optimal coupling}. Whenever it can be written as $(\operatorname{id}, T) _\# \mu$ for some map $T : \mathcal{X} \to \mathcal{Y}$, $T$ is said to be an \emph{optimal transport map}, or a \emph{Monge map} between $\mu$ and $\nu$ for the cost $c$.
\end{defi} The minimization problem in \cref{eq:gw} can be interpreted as the minimization of the map $\pi \mapsto F(\pi,\pi) \triangleq \iint k \,\mathrm d \pi \otimes \pi$ where $k((x,y),(x',y')) = |c_\mathcal{X}(x,x') - c_\mathcal{Y}(y,y')|^2$, so that $F$ is a symmetric bilinear map. By the first-order condition, if $\pi^\star$ minimizes \cref{eq:gw}, then it also minimizes $\pi \mapsto 2 F(\pi, \pi^\star)$. If we let $C_{\pi^\star}(x,y) = \int_{\mathcal{X} \times \mathcal{Y}} k((x,y),(x',y')) \,\mathrm d \pi^\star(x', y')$, we obtain the linear problem \begin{equation} \label{eq:linearized} \min_{\pi \in \Pi(\mu,\nu)} \int_{\mathcal{X} \times \mathcal{Y}} C_{\pi^\star}(x,y) \,\mathrm d \pi(x,y), \end{equation} which is nothing but the \cref{eq:OT} problem induced by the cost $C_{\pi^\star}$ on $\mathcal{X} \times \mathcal{Y}$. Therefore, any optimal \emph{correspondence} plan for \cref{eq:gw} with costs $c_\mathcal{X}, c_\mathcal{Y}$ must be an optimal \emph{transportation} plan for \cref{eq:OT} with cost $C_{\pi^\star}$. A crucial point, proved in \cite[Thm.~3]{sejourne2021unbalanced} as a generalization of \cite{konno1976maximization}, is that if $k$ is symmetric negative on the set of (signed) measures on $\mathcal{X} \times \mathcal{Y}$ with null marginals, that is, $\iint k \,\mathrm d \alpha \otimes \alpha \leq 0$ for all such $\alpha$, then the converse implication holds: any solution $\gamma^\star \in \Pi(\mu,\nu)$ of the OT problem with cost $C_{\pi^\star}$ is also a solution of the GW problem, that is, $F(\pi^\star,\pi^\star) = F(\gamma^\star,\gamma^\star) = F(\pi^\star,\gamma^\star)$. In fact, when $k$ is symmetric negative, the function $\pi\mapsto F(\pi,\pi)$ is concave, and the Gromov--Wasserstein problem falls in the category of concave \emph{minimization} problems over a convex set.
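In the discrete uniform case, this tightness can be checked by brute force: since $F$ is concave here, a permutation minimizing the QAP objective is a global minimizer over $\Pi(\mu,\nu)$, and it must therefore also minimize the linearized cost $C_{\pi^\star}$. A small sketch in Python (point clouds and sizes are illustrative, only the quadratic cost is used):

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
N = 5
X = rng.normal(size=(N, 2))
Y = rng.normal(size=(N, 2))
Dx = np.sum((X[:, None] - X[None, :])**2, axis=-1)   # |x_i - x_k|^2
Dy = np.sum((Y[:, None] - Y[None, :])**2, axis=-1)

perms = list(itertools.permutations(range(N)))

def gw_cost(s):
    s = list(s)
    return np.sum((Dx - Dy[np.ix_(s, s)])**2)        # QAP objective

sigma_star = min(perms, key=gw_cost)                 # GW-optimal permutation

# linearized cost: C[i, j] = sum_k |Dx[i, k] - Dy[j, sigma*(k)]|^2 / N
C = np.array([[np.sum((Dx[i] - Dy[j, list(sigma_star)])**2) for j in range(N)]
              for i in range(N)]) / N

def lin_cost(s):
    return sum(C[i, s[i]] for i in range(N))

best_lin = min(lin_cost(s) for s in perms)

# a GW-optimal permutation also solves the linearized OT problem
assert np.isclose(lin_cost(sigma_star), best_lin)
```

The final assertion is exactly the statement that the bi-convex relaxation is tight on this instance.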
As an immediate consequence, in the finite case where the measures are sums of Dirac masses of equal mass, there exists an optimum which is a permutation matrix. In \cite{maron2018probably}, this property is indeed used to propose a concave relaxation of the Euclidean graph matching problem. Since the solutions of \cref{eq:gw} are in correspondence with the solutions of an OT problem, the extensively developed tools and knowledge from optimal transportation can be used to derive the existence and structure of optimal maps, see \Cref{subsec:intro_relatedworks}. In particular, this holds for our two problems of interest \cref{eqn:GW-quadratic} and \cref{eqn:GW-inner-prod}: if $\alpha$ denotes a signed measure on $\mathcal{X} \times \mathcal{Y} \subset \mathbb{R}^n \times \mathbb{R}^d$ with $0$ marginals, observe that \begin{align*} &\hspace{-5mm}\int \left| |x-x'|^2 - |y-y'|^2\right|^2 \,\mathrm d \alpha(x,y) \,\mathrm d \alpha(x',y') \\ &= \underbrace{\int |x-x'|^4 \,\mathrm d \alpha \otimes \alpha}_{=\ 0} + \underbrace{\int |y-y'|^4 \,\mathrm d \alpha \otimes \alpha}_{=\ 0} \ -\ 2 \int |x-x'|^2 |y-y'|^2 \,\mathrm d \alpha \otimes \alpha \\ & = -2 \int (|x|^2 - 2 \langle x, x' \rangle + |x'|^2)(|y|^2 - 2 \langle y , y' \rangle + |y'|^2) \,\mathrm d \alpha \otimes \alpha. \end{align*} Developing the remaining factor involves nine terms, but given that $\alpha$ has zero marginals (in particular, zero mass), we obtain that $\int |x|^2 |y|^2 \,\mathrm d \alpha \otimes \alpha = 0$ (and similarly for the terms involving $|x'|^2 |y'|^2$, $|x|^2|y'|^2$ and $|x'|^2|y|^2$), and also that $\int |x|^2 \langle y,y' \rangle \,\mathrm d \alpha \otimes \alpha = 0$ (and similarly for the other terms).
Eventually, the only remaining term is \begin{equation} - 8 \int \langle x, x' \rangle \langle y,y' \rangle \,\mathrm d \alpha \otimes \alpha = - 8 \left\| \int x \otimes y \,\mathrm d \alpha(x,y) \right\|_F^2 \leq 0\,, \label{EqCorrelation1} \end{equation} where $x \otimes y \in \mathbb{R}^{n \times d}$ is the matrix $(x_i y_j)_{i,j}$, with $x = (x_1,\dots,x_n)$ and $y = (y_1,\dots, y_d)$, and $\| \cdot \|_F$ denotes the Frobenius norm of a matrix. The non-positivity of this term ensures that the solutions of \cref{eqn:GW-quadratic} are exactly the solutions of an OT problem. Computations for \cref{eqn:GW-inner-prod} are similar---actually, they immediately boil down to the same last two equalities. More generally, when one considers a cost such as $(d_\mathcal{X}(x,x') - d_\mathcal{Y}(y,y'))^2$, by expanding the square, the only term that matters in the optimization is $-2d_\mathcal{X}(x,x')d_\mathcal{Y}(y,y')$. Let us assume that it is possible to write both distances $d_\mathcal{X}$ and $d_\mathcal{Y}$ as squared distances in Hilbert spaces, namely $d_\mathcal{X}(x,x') = \|\varphi(x) - \varphi(x')\|_{H_\mathcal{X}}^2$ and $d_\mathcal{Y}(y,y') = \|\psi(y) - \psi(y')\|_{H_\mathcal{Y}}^2$ for an embedding $\varphi : \mathcal{X} \to H_\mathcal{X}$ into a Hilbert space $H_\mathcal{X}$, and similarly for $\mathcal{Y}$. Then the computation \cref{EqCorrelation1} carries over to this case. Such a property depends on the metric space; when it is satisfied, the metric space is said to be of negative type, or the distance to be Hilbertian. Another equivalent formulation is to say that $d_\mathcal{X}$ is a conditionally negative kernel on $\mathcal{X}$. We refer to \cite{lyons2013distance} for a thorough discussion.
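The identity \cref{EqCorrelation1} can be sanity-checked numerically: for a discrete signed measure $\alpha$ with null marginals (e.g.~the difference of two couplings sharing the same marginals), the quadratic form equals $-8\|\int x\otimes y\,\mathrm d\alpha\|_F^2$ exactly. A minimal numpy sketch (sizes and points are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
N, n, d = 6, 3, 2
X = rng.normal(size=(N, n))
Y = rng.normal(size=(N, d))

# signed measure with null marginals: difference of two couplings sharing them
P = rng.random((N, N)); P /= P.sum()
Q = np.outer(P.sum(1), P.sum(0))      # independent coupling, same marginals
alpha = P - Q
assert np.allclose(alpha.sum(0), 0) and np.allclose(alpha.sum(1), 0)

Dx = np.sum((X[:, None] - X[None, :])**2, axis=-1)    # |x_i - x_k|^2
Dy = np.sum((Y[:, None] - Y[None, :])**2, axis=-1)
K = (Dx[:, None, :, None] - Dy[None, :, None, :])**2  # k((x_i,y_j),(x_k,y_l))
lhs = np.einsum('ijkl,ij,kl->', K, alpha, alpha)      # quadratic form in alpha

M = np.einsum('ij,ia,jb->ab', alpha, X, Y)            # int x (x) y d(alpha)
rhs = -8 * np.sum(M**2)                               # -8 ||M||_F^2

assert np.isclose(lhs, rhs)                           # the computed identity
```

In particular `lhs` is non-positive, which is the symmetric-negativity of $k$ used for the tight relaxation.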
\begin{defi} A function $k_\mathcal{X}:\mathcal{X} \times \mathcal{X} \to \mathbb{R}$ is a \emph{conditionally negative definite kernel} if it is symmetric and if, for all $N\geq 1$, $x_1,\ldots,x_N \in \mathcal{X}$ and $\omega_1,\ldots,\omega_N \in \mathbb{R}$ such that $\sum_{i=1}^N \omega_i = 0$, one has $\sum_{i,j\leq N}\omega_i\omega_jk_\mathcal{X}(x_i,x_j) \leq 0$. \end{defi} Every conditionally positive definite kernel can be written as $k_\mathcal{X}(x,x') = f(x) + f(x') - \frac 12 \| \varphi(x) - \varphi(x')\|^2_{H}$ for an embedding $\varphi: \mathcal{X} \to H$ into a Hilbert space $H$, as shown in \cite{schoenberg1938metric}. With respect to the Gromov--Wasserstein functional, our discussion above shows that $c_\mathcal{X}$ can actually be replaced with a conditionally negative definite kernel and that the relaxation still holds. To sum up our review of the literature: \begin{proposition} Let $(\mathcal{X},k_\mathcal{X},\mu)$ and $(\mathcal{Y},k_\mathcal{Y},\nu)$ be two spaces, each endowed with a conditionally positive definite kernel and a probability measure; then the bi-convex relaxation of $\operatorname{GW}_2^2$ is tight. The corresponding kernel $k((x,y),(x',y'))$ is indeed non-positive on signed measures with null marginals on $\mathcal{X} \times \mathcal{Y}$. \end{proposition} Remark that the problem of minimizing $F(\pi,\gamma)$ is indeed a bi-convex problem, since it is linear in each variable $\pi,\gamma$. Several important Riemannian manifolds are of negative type, among them the real hyperbolic space, the sphere and the Euclidean space. Counter-examples include, in finite dimension, the quaternionic hyperbolic space \cite{faraut1974distances} and, in infinite dimension, the $L^2$-Wasserstein space over $\mathbb{R}^d$ for $d\geq 3$, as proven in \cite{andoni2018snowflake}.
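For instance, the fact that the squared Euclidean distance is a conditionally negative definite kernel can be verified directly: with zero-sum weights, expanding $|x_i-x_j|^2$ leaves only the cross term $-2\|\sum_i \omega_i x_i\|^2$. A minimal sketch in Python (random points and weights, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N, d = 30, 3
X = rng.normal(size=(N, d))
D2 = np.sum((X[:, None] - X[None, :])**2, axis=-1)   # squared distances

w = rng.normal(size=N)
w -= w.mean()                 # zero-sum weights, as in the definition
q = w @ D2 @ w                # sum_{i,j} w_i w_j |x_i - x_j|^2

# with zero-sum weights the |x_i|^2 and |x_j|^2 terms vanish, leaving
# q = -2 |sum_i w_i x_i|^2 <= 0: the kernel is conditionally negative definite
assert q <= 1e-10
assert np.isclose(q, -2 * np.sum((w @ X)**2))
```

The same check with the inner-product kernel (and no sign flip) illustrates conditional positive definiteness.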
\subsection{Related works} \label{subsec:intro_relatedworks} \subsubsection{Monge maps for the OT problem} The \cref{eq:OT} problem has been extensively studied (see \cite{santambrogio2015optimal,villani2009optimal,peyre2019computational} for a thorough introduction) and particular attention has been devoted to situations where the existence of Monge maps, or variations thereof, can be ensured. Brenier's theorem, stated below, is the best-known such case where the optimal plan is a map. \begin{theorem}[Brenier's theorem] \label{theorem:brenier} Let $\mathcal{X}=\mathcal{Y}=\mathbb{R}^d$, $\mu,\nu \in\mathcal{P}(\mathbb{R}^d)$ such that the optimal cost between $\mu$ and $\nu$ is finite, and $c(x,y)=|x-y|^{2}$. If $\mu\ll \mathcal{L}^d$, then there exists a unique (up to a set of $\mu$-measure zero) solution of \cref{eq:OT} and it is induced by a map $T$. This map is characterized as the unique gradient of a convex function $T=\nabla f$ such that $(\nabla f)_\#\mu=\nu$. \end{theorem} \noindent This central result admits a generalization in the manifold setting that we shall use later on. \begin{proposition}[{\cite[Thm.~10.41]{villani2009optimal}}, Solution of the Monge problem for the square distance] \label{prop:quad-cost-manifold-villani} Let $M$ be a Riemannian manifold, and $c(x, y)=d(x, y)^{2}$. Let $\mu, \nu\in\mathcal{P}(M)$ be such that the optimal cost between $\mu$ and $\nu$ is finite. If $\mu\ll\operatorname{vol}_M$, then there is a unique solution of the Monge problem between $\mu$ and $\nu$, and it can be written as $$y=T(x)=\exp _{x}(\tilde{\nabla} f(x)),$$ where $f$ is some $d^{2} / 2$-convex function. The approximate gradient $\tilde\nabla$ can be replaced by a true gradient if any one of the following conditions is satisfied: \begin{enumerate}[label=(\alph*),nolistsep] \item $\mu$ and $\nu$ are compactly supported; \item $M$ has nonnegative sectional curvature; \item $\nu$ is compactly supported and $M$ has asymptotically nonnegative curvature.
\end{enumerate} \end{proposition} Brenier's theorem can be extended in a few directions. The condition that $\mu$ has a density can be weakened to the requirement that it does not give mass to sets of Hausdorff dimension at most $d-1$ (\textit{e.g.}~hypersurfaces), and $c$ can actually be a bit more general than the squared distance function, as long as it satisfies the \emph{twist condition}, which we define now together with its variants. In the following, let $\mathcal{X}=\mathcal{Y}$ be complete Riemannian manifolds and let $c:\mathcal{X}\times\mathcal{Y}\to \mathbb{R}$ be a continuous cost function, differentiable in $x$. We refer to \cite{mccann2011five,chiappori2010hedonic,villani2009optimal} for more information on the twist condition, to \cite{ahmad2011optimal,mccann2012glimpse} on the subtwist condition, and to \cite{moameni2016characterization} on the $m$-twist and generalized twist conditions. \begin{proposition}[Twist] \label{prop:twist} We say that $c$ satisfies the \emph{twist condition} if \begin{equation} \tag{Twist} \text{for all }x_0\in\mathcal X,\quad y\mapsto \nabla_x c(x_0,y)\in T_{x_0}\mathcal{X} \text{ is injective.} \label{eq:twist} \end{equation} Suppose that $c$ satisfies \cref{eq:twist} and assume that any $c$-concave function is differentiable $\mu$-a.e.~on its domain. If $\mu$ and $\nu$ have finite transport cost, then \cref{eq:OT} admits a unique optimal transport plan $\pi^\star$, and it is induced by the $c$-exponential of the gradient of a $c$-convex function $f:\mathcal{X}\to\mathbb{R}$: $$\pi^\star=(\operatorname{id},c\text{-}\exp_x(\nabla f))_*\mu\,.$$ \end{proposition} \begin{remark} Following \cite{mccann2011five,villani2009optimal}, we recall that the $c$-\emph{exponential map} is defined on the image of $-\nabla_xc$ by the formula $c$-$\exp_x(p)=(\nabla_x c)^{-1}(x,-p)$, \textit{i.e.}~$c$-$\exp_x(p)$ is the unique $y$ such that $\nabla_xc(x,y)+p=0$.
This notion particularizes into the usual Riemannian exponential map when $c(x,y)=d(x,y)^2/2$. \end{remark} \begin{remark} Costs of the form $c(x,y)=h(x-y)$ with $h$ strictly convex, and in particular the costs $c(x,y)=|x-y|^p$ for $p>1$, do satisfy the twist condition. \end{remark} The twist condition is equivalent to the fact that for all $y_1\neq y_2\in\mathcal Y$, the function $x\in\mathcal X\mapsto c(x,y_1)-c(x,y_2)$ has no critical point. Remark that on a compact manifold, if the cost is $C^1$, this condition can never be satisfied. Note also that the squared Riemannian distance is not $C^1$ everywhere, and one can still prove the existence of a Monge map; see \Cref{prop:quad-cost-manifold-villani}. This justifies the introduction of two weaker notions, which turn out to remain sufficient to obtain some (albeit less) structure on the optimal plans: \begin{proposition}[Subtwist] \label{prop:subtwist} We say that $c$ satisfies the \emph{subtwist condition} if \begin{equation} \tag{Subtwist} \text{for all } y_1\neq y_2\in\mathcal Y,\quad x\in \mathcal X\mapsto c(x,y_1)-c(x,y_2)\quad \text{ has at most 2 critical points.} \label{eq:subtwist} \end{equation} Suppose that $c$ satisfies \cref{eq:subtwist} and assume that any $c$-concave function is differentiable $\mu$-a.e.~on its domain. If $\mu$ and $\nu$ have finite transport cost, then \cref{eq:OT} admits a unique optimal transport plan $\pi^\star$, and it is induced by the union of a map and an anti-map: \begin{equation*} \pi^\star=(\operatorname{id} , G)_\# \bar\mu+(H, \operatorname{id})_\#(\nu-G_\# \bar\mu) \end{equation*} for some Borel measurable maps $G:\mathcal{X}\to\mathcal{Y}$ and $H: \mathcal{Y}\to\mathcal{X}$ and a non-negative measure $\bar\mu \leq \mu$ such that $\nu-G_\# \bar\mu$ vanishes on the range of $G$.
\end{proposition} \begin{proposition}[$m$-twist] \label{prop:mtwist} We say that $c$ satisfies the $m$\emph{-twist} (resp.~\emph{generalized twist}) \emph{condition} if \begin{equation} \tag{$m$-twist} \text{for all } x_{0} \in \mathcal{X},y_{0} \in \mathcal{Y}\text{, the set} \left\{y \mid \nabla_x c\left(x_{0}, y\right)=\nabla_x c\left(x_{0}, y_{0}\right)\right\}\text{ has at most } m \text{ elements} \label{eq:mtwist} \end{equation} (resp.~is a finite subset of $\mathcal{Y}$). Suppose that $c$ is bounded, satisfies \cref{eq:mtwist} and assume that any $c$-concave function is differentiable $\mu$-almost surely on its domain. If $\mu$ has no atoms and $\mu$ and $\nu$ have finite transport cost, then each optimal plan $\pi^\star$ of \cref{eq:OT} is supported on the graphs of $k$ measurable maps, with $k\in[\![m]\!]$ (resp.~$k\in\mathbb{N}\cup\{\infty\}$), \textit{i.e.}~there exists a sequence $\{\alpha_i\}_{i=1}^k$ of non-negative functions from $\mathcal{X}$ to $[0,1]$ and Borel measurable maps $T_i:\mathcal{X}\to\mathcal{Y}$ such that $$\pi^\star=\sum_{i=1}^{k} \alpha_{i}\left(\operatorname{id}, T_i\right)_\# \mu\,,$$ in the sense that $\pi^\star(S)=\sum_{i=1}^k\int_\mathcal{X}\alpha_i(x)\mathbbm{1}_S(x,T_i(x))\,\mathrm d\mu$ for any Borel $S\subset \mathcal{X}\times\mathcal{Y}$. \end{proposition} \begin{example}\label{example:2-twist} If $\mathcal{X} = \mathcal{Y} = \mathbb{R}$, and $c(x,y)$ is a second-order polynomial in $xy$ with non-zero degree-one coefficient, such as $c(x,y) = x^2 y^2 + \lambda xy$ for some $\lambda \neq 0$, the $2$-twist condition holds. As we shall see in \Cref{subsec:quadra1D}, such costs are closely related to the quadratic GW problem \cref{eqn:GW-quadratic} in dimension 1. \end{example} \begin{remark} Notice that although the $m$-twist condition is a generalization of the twist condition (which is the 1-twist condition, since $y_0$ always belongs to the set), it is not a generalization of the subtwist condition.
\end{remark} \begin{remark} Following \cite[Rem. 10.33]{villani2009optimal}, when the measures $\mu$ and $\nu$ have compact support and $\mu$ has a density---which belongs to our set of assumptions in the following---all the conditions of \cref{prop:twist,prop:subtwist,prop:mtwist} are satisfied. \end{remark} \subsubsection{Monge maps for the GW problem} In sharp contrast with the optimal transportation problem, there are very few results that ensure the existence of a Monge map for the Gromov--Wasserstein problem, even in the particular cases considered in this work. In the inner product case, \cite[Thm. 4.2.3]{vayer2020contribution} gives a result on the existence of a Monge map under some assumptions: \begin{proposition}[Inner product cost: optimal map under condition] \label{prop:sota-titouan} Let $n\geq d$ and $(\mu, \nu)\in \mathcal{P}(\mathbb{R}^n)\times \mathcal{P}(\mathbb{R}^d)$ be two measures with finite second-order moments, with $\mu\ll\mathcal{L}^n$. Suppose that there exists a solution $\pi^\star$ of \cref{eqn:GW-inner-prod} such that $M^\star=\int y\otimes x \,\mathrm d\pi^\star(x, y)$ is of full rank. Then there exists an optimal map between $\mu$ and $\nu$ that can be written as $T=\nabla f\circ M^\star$ with $f: \mathbb{R}^d \to \mathbb{R}$ convex. \end{proposition} For the quadratic case, there are only very few results. It is claimed in \cite[Thm.~4.1.1]{vayer2020contribution} that in the discrete case in dimension 1, with uniform masses and the same number of points $N$, the optimal solution of \cref{eq:QAP} is either the identity $\sigma(i)=i$ or the anti-identity $\sigma(i)=N+1-i$. However, a counter-example to this claim has recently been provided by \cite{beinert2022assignment}. \noindent To the best of our knowledge, the only positive results on the existence of Monge maps for the quadratic cost are the following.
\begin{proposition}[{\cite[Thm.~9.21]{sturm2012space}}] Let absolutely continuous probability measures $\mu_0$ and $\mu_1$ on $\mathbb{R}^n$ be given, each of them being rotationally invariant around its barycenter $z_0$ or $z_1$ resp., that is, $\left(U_i\right)_\# \mu_i=\mu_i$ for each $U \in O(n)$ and $i=0,1$, where $U_i(x)\triangleq U\left(x-z_i\right)+z_i$. Then every $\pi \in \Pi\left(\mu_0, \mu_1\right)$ which minimizes \cref{eqn:GW-quadratic} is induced by a transport map $T$, unique up to composition with rotations. The transport map is constructed as follows: for $i=0,1$, let $\nu_i$ be the radial distribution of $\mu_i$ around $z_i$, and let $F_i$ be the respective distribution function, \textit{i.e.} $$F_i(r)\triangleq \nu_i([0, r])\triangleq \mu_i(\bar{B}_r(z_i))\,.$$ Then the monotone rearrangement $F_1^{-1} \circ F_0: \mathbb{R}_{+} \to \mathbb{R}_{+}$ pushes forward $\nu_0$ to $\nu_1$. \end{proposition} \begin{proposition}[{\cite[Prop.~4.2.4]{vayer2020contribution}}] Let $(\mu, \nu)\in \mathcal{P}(\mathbb{R}^n)\times \mathcal{P}(\mathbb{R}^d)$ with compact support, with $n\geq d$. Assume that $\mu\ll\mathcal{L}^n$ and that both $\mu$ and $\nu$ are centered. Suppose that there exists a solution $\pi^\star$ of \cref{eqn:GW-quadratic} such that $M^\star=\int y\otimes x \,\mathrm d\pi^\star(x, y)$ is of full rank. Then there exists $f:\mathbb{R}^d\to\mathbb{R}$ convex such that $T=\nabla f \circ M^\star$ pushes $\mu$ to $\nu$. Moreover, if there exists a differentiable convex $F:\mathbb{R}\to\mathbb{R}$ such that $|T(x)|_2^2=F'(|x|^2_2)$ $\mu$-a.e., then $T$ is optimal for \cref{eqn:GW-quadratic}. \end{proposition} \subsection{Outline and Contributions} \label{subsec:intro_contribs} This work is organized in the following way. \Cref{subsec:general_thm} provides a general setting in which the existence of optimal \emph{transport} maps can be shown for costs defined by submersions.
We provide two versions of the result, one (\cref{theo:fibers-nonsense}) which imposes no structure and is fairly general, and one (\cref{theo:fibers-main}) which imposes a more structured setting and thus recovers more structure in the optimal maps; the latter has the benefit of being more usable in practice. The proof of the second version requires a measurability argument which is addressed in detail in \cref{prop:selection-manifold}. Following the connection between GW and OT problems through the linearization result exposed in \Cref{eq:linearized}, applications of these general results to the Gromov--Wasserstein problems are given in \Cref{subsec:applications} for the scalar product cost and the squared distance, both in Euclidean spaces. Finally, \Cref{subsec:quadra1D} focuses on the one-dimensional case with quadratic cost and consists of two parts: first, we conduct a numerical exploration in order to assess whether our previous structural results are sharp in dimension one; then, we prove a positive result on the optimality of monotone maps, which partly explains why a monotone map is often optimal and highlights the importance of long-range effects of the cost. \subsection{Complementary study of the quadratic cost in the one-dimensional case} \label{subsec:quadra1D} Recalling that \cref{eqn:GW-inner-prod-in-subsection} is invariant by translation, we assume that measures $\mu$ and $\nu$ below are centered. In the one-dimensional case $\mathcal{X},\mathcal{Y}\subset\mathbb{R}$, the linearized quadratic GW problem reads, with $\pi^\star$ an optimal correspondence plan: \begin{equation} \label{eq:gw-1d-cont} \min_{\pi\in\Pi(\mu,\nu)} \int_{\mathcal{X}\times\mathcal{Y}} (-x^2y^2-4mxy)\,\mathrm d\pi(x,y)\,,\quad\text{where }m=\int_{\mathcal{X}\times\mathcal{Y}} x'y'\,\mathrm d\pi^\star(x',y')\,, \end{equation} and for any plan $\pi\in\Pi(\mu,\nu)$ (not necessarily optimal), we denote by $m(\pi)=\int xy\,\mathrm d\pi(x,y)$ what we call the \emph{correlation} of $\pi$.
The associated OT cost function $c(x,y)=-x^2y^2-4mxy$ only satisfies the subtwist condition when $m\neq0$, which does not allow us to conclude on the deterministic structure of optimal correspondence plans in the general case. However, in the one-dimensional case one has at one's disposal a useful additional proposition when the cost $c$ is \emph{submodular}, sometimes called the Spence--Mirrlees condition, which guarantees the optimality of the increasing (resp.~decreasing) matching $\pi_\text{mon}^\oplus$ (resp.~$\pi_\text{mon}^\ominus$) \cite{carlier2008remarks,santambrogio2015optimal}: \begin{proposition}[Submodular cost] \label{prop:submod} Let $\mathcal{X},\mathcal{Y}\subset\mathbb{R}$. We say that a twice-differentiable function $c:\mathcal{X}\times\mathcal{Y}\to\mathbb{R}$ is \emph{submodular} if \begin{equation} \tag{Submod} \text{for all } (x,y)\in\mathcal{X}\times\mathcal{Y},\quad \partial_{xy}c(x,y)\leq0\,. \label{eq:submod} \end{equation} Let $\mu,\nu \in\mathcal{P}(\mathcal{X})\times\mathcal{P}(\mathcal{Y})$ be of finite transport cost. If $c$ satisfies \cref{eq:submod}, then $\pi_\text{mon}^\oplus$ is an optimal plan for \cref{eq:OT}, with uniqueness if the inequality is strict.\\ Similarly, \emph{supermodularity} is defined with the reversed inequality and induces the optimality of $\pi_\text{mon}^\ominus$. \end{proposition} The linearized quadratic GW cost with parameter $m\geq0$ is submodular on the region $S=\{(x,y)\mid xy\geq -m \}$ and supermodular elsewhere (see \cref{fig:submod} for an illustration); so we cannot directly apply this proposition. Still, it is reasonable to expect that optimal correspondence plans exhibit a monotone increasing structure on $S$ (written $\oplus$ in \cref{fig:submod}) and a monotone decreasing one elsewhere (written $\ominus$), and we can actually leverage this type of property to obtain the optimality of the monotone rearrangements in some particular cases (see \cref{subsec:quadra_1D_positive}).
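The shape of these regions follows from a direct computation: for $c(x,y)=-x^2y^2-4mxy$ one gets $\partial_{xy}c(x,y)=-4(xy+m)$, which is nonpositive exactly on $S$. A quick numerical check of this hand computation (plain Python, illustrative function names):

```python
def cross_derivative(x, y, m):
    # Hand-computed d^2 c / (dx dy) for c(x, y) = -x^2 y^2 - 4 m x y
    return -4.0 * (x * y + m)

def cross_fd(x, y, m, h=1e-3):
    # Central mixed finite difference; exact (up to rounding) for this polynomial
    c = lambda u, v: -u * u * v * v - 4.0 * m * u * v
    return (c(x + h, y + h) - c(x + h, y - h)
            - c(x - h, y + h) + c(x - h, y - h)) / (4.0 * h * h)

m = 1.0
assert abs(cross_derivative(0.5, -0.3, m) - cross_fd(0.5, -0.3, m)) < 1e-6
assert cross_derivative(1.0, 1.0, m) < 0    # x*y >= -m: submodular region S
assert cross_derivative(2.0, -1.0, m) > 0   # x*y < -m: supermodular region
```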
\begin{figure} \centering \include{figures/submod_GW} \vspace{-7mm} \caption{Submodularity (symbol $\oplus$, in \textcolor{tabgreen!80}{light green}) and supermodularity (symbol $\ominus$) regions for the linearized quadratic GW cost with parameter $m\geq0$.} \label{fig:submod} \end{figure} We also recall the discrete formulation of \cref{eq:OT} in dimension one. Given two sets $\left\{ x_{1},\dots,x_{N} \right\}$ and $\left\{ y_{1},\dots,y_{M} \right\}$ of points of $\mathbb{R}$ and two probability vectors $a$ and $b$, the \cref{eq:OT} problem between the discrete measures $\mu=\sum_{i=1}^{N}a_{i}\delta_{x_{i}}$ and $\nu=\sum_{j=1}^{M}b_{j}\delta_{y_{j}}$ reads \begin{equation*} \min_{\pi\in U(a,b)}\ \langle C,\,\pi\rangle\,, \end{equation*} where $U(a,b)\triangleq \{ \pi\in\mathbb{R}^{N\times M}\mid \pi\mathbbm{1}_M=a,\pi^\top\mathbbm{1}_N=b\}$ is the \emph{transport polytope}, $C=(c(x_i,y_j))_{i,j}$ is the cost matrix and $\langle \cdot,\,\cdot\rangle$ is the Frobenius inner product. In the case of the linearized problem \cref{eq:gw-1d-cont}, we denote by $C_{\operatorname{GW}(m)}$ the cost matrix, with coefficients $(C_{\operatorname{GW}(m)})_{i,j}=-x_i^2y_j^2-4m x_iy_j$ with $m=\langle C_{xy},\, \pi^\star\rangle$ and $(C_{xy})_{i,j}=x_iy_j$. In the following sections, we study the optimality of the monotone increasing and decreasing rearrangements $\pi_\text{mon}^\oplus$ and $\pi_\text{mon}^\ominus$. It is worth noting that by submodularity of $(x,y)\mapsto -xy$, these two correspondence plans have respective correlations $m_\text{max}$ and $m_\text{min}$, where \begin{align} \label{eq:m-min-max} \begin{cases} m_\text{min}&= \min_{\pi}\ \langle C_{xy},\, \pi\rangle\\ m_\text{max}&= \max_{\pi}\ \langle C_{xy},\, \pi\rangle \end{cases},\quad\text{with }(C_{xy})_{i,j}=x_iy_j\,, \end{align} and that for any correspondence plan $\pi$, the value of its correlation $m(\pi)$ lies in the interval $[m_\text{min},m_\text{max}]$.
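For uniform discrete measures, these extreme correlations reduce, by the rearrangement inequality, to the sorted pairings; a minimal NumPy sketch (illustrative variable names, random supports):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50
x = np.sort(rng.uniform(-1.0, 1.0, N))  # support of mu, uniform mass 1/N
y = np.sort(rng.uniform(-1.0, 1.0, N))  # support of nu, uniform mass 1/N

def correlation(x, y, sigma):
    # Correlation m(pi) of the permutation plan pairing x_i with y_{sigma(i)}
    return float(x @ y[sigma]) / len(x)

# By the rearrangement inequality (submodularity of (x, y) -> -xy), the
# extreme correlations are reached by the two monotone rearrangements:
m_max = correlation(x, y, np.arange(N))        # increasing matching pi_mon^+
m_min = correlation(x, y, np.arange(N)[::-1])  # decreasing matching pi_mon^-

sigma = rng.permutation(N)                     # any other permutation plan
assert m_min - 1e-12 <= correlation(x, y, sigma) <= m_max + 1e-12
```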
\\We provide in the following a complementary study of the quadratic cost in dimension one, namely \begin{enumerate}[label=(\roman*),nolistsep] \item a procedure to find counter-examples to the optimality of the monotone rearrangements; \item empirical evidence for the tightness of \cref{theorem:quad-main}; \item empirical evidence for the instability of having a monotone rearrangement as optimal correspondence plan; \item a new result on the optimality of the monotone rearrangements when the measures are composed of two distant parts. \end{enumerate} All experiments are reproducible and the code can be found on GitHub\footnote{code available at: \href{https://github.com/theodumont/monge-gromov-wasserstein}{https://github.com/theodumont/monge-gromov-wasserstein}.}. \subsubsection{Adversarial computation of non-monotone optimal correspondence plans} \label{subsec:quadra1D_adversarial} Theorem 4.1.1 of \cite{vayer2020contribution} claims that in the discrete case in dimension 1 with $N=M$ and $a=b=\mathbbm{1}_N$, the optimal solution of \cref{eq:QAP} is either the monotone increasing rearrangement $\pi_\text{mon}^\oplus$ or the monotone decreasing one $\pi_\text{mon}^\ominus$ (or equivalently the identity $\sigma(i)=i$ or the anti-identity $\sigma(i)=N+1-i$); empirically, this indeed appears to hold with high probability when generating random discrete measures. While this claim is true for $N=1,2$ and $3$, a counter-example for $N\geq 7$ points has recently been exhibited in \cite{beinert2022assignment}. We further propose a procedure to automatically obtain additional counter-examples, demonstrating empirically that such adversarial distributions occupy a non-negligible place in the space of empirical measures.
We propose to move away from distributions of optimal plans $\pi_\text{mon}^\oplus$ and $\pi_\text{mon}^\ominus$ by performing a gradient descent over the space of empirical distributions with $N$ points using an objective function that favors the strict sub-optimality of the monotone rearrangements; we now detail this procedure. For $N\geq 1$, we consider the set of empirical distributions over $\mathcal{X}\times \mathcal{Y}=\mathbb{R}\times\mathbb{R}$ with $N$ points and uniform mass, \textit{i.e.}~of the form $\smash{\pi=\frac{1}{N}\sum_{i=1}^{N}\delta_{(x_{i},y_{i})}}$. Such plans $\pi$ can be seen as the identity mapping between vectors $X=(x_1,\dots,x_N)$ and $Y=(y_1,\dots,y_N)$, and we therefore write $\pi=\operatorname{id}(X,Y)$. Denoting by $c_{\operatorname{GW}}$ the functional that takes a correspondence plan and returns its cost on the GW problem, we then define $\mathcal{F}$ on $\mathbb{R}^N\times\mathbb{R}^N$ by $$\mathcal{F}(X,Y)\triangleq c_{\operatorname{GW}}(\pi)-\min \left\{ c_{\operatorname{GW}}(\pi_\text{mon}^\oplus),\, c_{\operatorname{GW}}(\pi_\text{mon}^\ominus)\right\},$$ \vspace{-3mm}$$\text{where}\quad\begin{cases} \pi=\operatorname{id}(X,Y)\\ \pi_\text{mon}^\oplus\text{ and }\pi_\text{mon}^\ominus\text{ are the monotone rearrangements between }X\text{ and }Y . \end{cases}$$ This quantifies how well the plan $\pi$ performs when compared to the best of the two monotone rearrangements. We generate $N$ points at random in $[0,1]^2$ and then perform a simple gradient descent over the positions of the points $(X,Y)=(x_{i},y_{i})_i$ following the objective $$\min_{X,Y \in \mathbb{R}^N}\ \mathcal{F}(X,Y)\,.$$ We include an early-stopping threshold $t$, since when $\mathcal{F}(X,Y)$ becomes negative (\textit{i.e.}~we found a slightly adversarial example), the objective function often starts to decrease exponentially fast, exploiting the adversarial behaviour of the plan as much as it can.
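For reference, a plain (non-differentiable) NumPy evaluation of the objective $\mathcal{F}$ could look as follows; this is a sketch, the actual implementation uses PyTorch with a differentiable sort:

```python
import numpy as np

def gw_cost(x, y):
    """Discrete GW cost of the uniform plan id(X, Y) pairing x_i with y_i:
    (1/N^2) * sum_{i,k} ((x_i - x_k)^2 - (y_i - y_k)^2)^2."""
    dx = (x[:, None] - x[None, :]) ** 2
    dy = (y[:, None] - y[None, :]) ** 2
    return float(np.mean((dx - dy) ** 2))

def objective_F(x, y):
    """F(X, Y): cost of id(X, Y) minus the best of the two monotone plans."""
    xs, ys = np.sort(x), np.sort(y)
    best_monotone = min(gw_cost(xs, ys), gw_cost(xs, ys[::-1]))
    return gw_cost(x, y) - best_monotone

# The GW cost only depends on the plan, not on the labelling of its atoms:
rng = np.random.default_rng(1)
x, y = rng.normal(size=20), rng.normal(size=20)
p = rng.permutation(20)
assert np.isclose(gw_cost(x[p], y[p]), gw_cost(x, y))
```

A negative value of `objective_F` certifies that both monotone rearrangements are strictly suboptimal for the given point clouds.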
We found that choosing $t=-2$ gave good results in our experiments. The procedure can be found in \cref{algorithm:gd} below. We implemented it using PyTorch's autodiff \cite{pytorch} and used \cite{blondel2020fast} to implement a differentiable sorting operator to compute the monotone rearrangements. Adversarial plans $\pi_f=\operatorname{id}(X_f,Y_f)$ obtained by \cref{algorithm:gd} are not \textit{a priori} optimal for the GW cost between their marginals; but they at least achieve a better cost than the monotone rearrangements since $\mathcal{F}(X_f,Y_f)< 0$, proving the sub-optimality of the latter. \begin{figure}[H] \centering \begin{minipage}{.8\linewidth} \begin{algorithm}[H] \caption{Simple gradient descent over the positions $(x_i)_i$ and $(y_i)_i$.} \label{algorithm:gd} \vspace{1mm} \textbf{Parameters:} \begin{itemize}[nolistsep] \item $N$: number of points of the distributions \item $N_\text{iter}$: maximum number of iterations \item $\eta$: step size \item $t$: early stopping threshold \end{itemize} \vspace{3mm} \textbf{Algorithm:} \begin{algorithmic}[1] \State $X\gets$ $N$ random values in $[0,1]$, then centered \State $Y\gets$ $N$ random values in $[0,1]$, then centered \For{$i\in\{1,\dots,N_\text{iter}\}$} \State $\pi_\text{mon}^\oplus\gets\texttt{id(sort(}X\texttt{)},\texttt{sort(}Y\texttt{))}$ \Comment{\texttt{id} is the identity mapping} \State $\pi_\text{mon}^\ominus\gets\texttt{id(sort(}X\texttt{)},\texttt{sort(}Y\texttt{)[::-1])}$ \State $\pi_{\phantom{\text{mon}}}\gets\texttt{id(}X,Y\texttt{)}$ \State $\mathcal{F}(X,Y)\gets \texttt{GW(}\pi\texttt{)}-\texttt{min(GW(}\pi_\text{mon}^\oplus\texttt{)},\texttt{GW(}\pi_\text{mon}^\ominus\texttt{))}$ \State \textbf{if} $\mathcal{F}(X,Y)< t$ \textbf{then} stop \Comment{early stopping} \State $(X,Y)\gets (X,Y)-\eta\nabla\mathcal{F}(X,Y)$ \Comment{step of gradient descent} \EndFor \State return $\pi_f=\texttt{id(}X,Y\texttt{)}$ \end{algorithmic} \vspace{3mm} \textbf{Output:} a plan $\pi_f$ with better GW
cost than $\pi_\text{mon}^\oplus$ and $\pi_\text{mon}^\ominus$ \end{algorithm} \end{minipage} \end{figure} \Cref{fig:res-GD} displays an example of an adversarial plan obtained by following this procedure. It can be observed that during the descent, the plan $\pi$ has difficulty escaping what seems to be a saddle point, namely being a monotone rearrangement between its marginals. Moreover, it is worth noting that the marginals of our typical adversarial plans, such as the one of \cref{fig:res-GD}, are often similar to the counter-example proposed in \cite{beinert2022assignment}, where both measures have their mass concentrated near zero, except for one outlier for $\nu$ and two for $\mu$, one on each tail. \begin{figure}[h] \centering \begin{subfigure}[b]{.33\linewidth} \centering \include{figures/GD-obj} \vspace{-8mm} \end{subfigure} \begin{subfigure}[b]{.33\linewidth} \centering \include{figures/GD-initial} \vspace{-8mm} \end{subfigure} \begin{subfigure}[b]{.33\linewidth} \centering \include{figures/GD-final} \vspace{-8mm} \end{subfigure} \caption{Gradient descent results with parameters $N=122$, $\eta=26$, $t=-2$. \textbf{(Left)\ } Evolution of the objective function $\mathcal{F}$. \textbf{(Center)\ } Initial plan $\pi_0$, generated at random. \textbf{(Right)\ } Final plan $\pi_f$ (iter. 66).} \label{fig:res-GD} \end{figure} \newpage Furthermore, examining the optimal correspondence plan for these adversarial examples allows us to exhibit cases where it is not a map, providing empirical evidence for the following conjecture: \begin{conjecture} \label{conj:tight} \cref{theorem:quad-main} is tight, \textit{i.e.}~there exist $\mu$ and $\nu$ for which optimal correspondence plans for \cref{eqn:GW-quadratic-in-subsection} are not maps but rather a union of two graphs (either that of two maps or that of a map and an anti-map); and this even if $\mu$ has a density, the classical OT assumption for the existence of an optimal transport map.
\end{conjecture} In order to approximate numerically the case of a measure which has a density w.r.t.~the Lebesgue measure, we convolve our distributions $\mu=(X_f,\mathbbm{1}_n)$ and $\nu=(Y_f,\mathbbm{1}_n)$ with a Gaussian of standard deviation $\sigma$ and represent them in Eulerian coordinates; that is, we evaluate the closed-form density on a fine enough grid. When $\sigma$ is large, the optimal correspondence plan for GW is probably induced by a monotone map, as is very frequently the case empirically; on the contrary, if $\sigma$ is sufficiently small, \textit{i.e.}~when the distributions are very close to their discrete analogues, the optimal correspondence plan should not be a monotone map, by construction of $\mu$ and $\nu$. \begin{remark} Because of the adversarial nature of $\pi_f$ for the sub-optimality of $\pi_\text{mon}^\oplus$ and $\pi_\text{mon}^\ominus$, we know that when $\sigma$ is sufficiently small, the optimal correspondence plan is not a monotone rearrangement. Still, it could be the case that this optimal plan is a map, but not a monotone one, and there is \textit{a priori} no reason to believe that $\pi_f$ will agree with \cref{conj:tight}. Surprisingly, it does, as numerical experiments below suggest. \end{remark} In order to find the optimal correspondence plan $\pi^\star$ between $\mu$ and $\nu$, we leverage the fact that $\pi^\star$ is a solution of its associated linearized problem. Therefore, a minimizer of the $\operatorname{GW}$ functional is given by \begin{equation} \argmin\ \Big\{ \operatorname{GW}(\pi_{m}^\star)\mid \pi_m^\star \in \argmin_{\pi \in U(a,b)}\ \langle C_{\operatorname{GW}(m)},\,\pi\rangle \,,\, m \in [m_\text{min},m_\text{max}]\Big\} \,, \end{equation} where $(C_{\operatorname{GW}(m)})_{i,j}=-x_i^2y_j^2-4m x_iy_j$.
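The inner linear step, namely solving $\min_{\pi\in U(a,b)}\ \langle C_{\operatorname{GW}(m)},\,\pi\rangle$ for a fixed $m$, can be sketched with an off-the-shelf LP solver (illustrative code using \texttt{scipy}, not the paper's implementation):

```python
import numpy as np
from scipy.optimize import linprog

def solve_linearized_gw(x, y, a, b, m):
    """Minimize <C_GW(m), pi> over the transport polytope U(a, b)."""
    N, M = len(x), len(y)
    C = -np.outer(x**2, y**2) - 4.0 * m * np.outer(x, y)  # (C_GW(m))_ij
    A_eq = np.zeros((N + M, N * M))  # marginal constraints pi 1 = a, pi^T 1 = b
    for i in range(N):
        A_eq[i, i * M:(i + 1) * M] = 1.0
    for j in range(M):
        A_eq[N + j, j::M] = 1.0
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]),
                  bounds=(0, None), method="highs")
    return res.x.reshape(N, M), res.fun

# With positive supports and m >= 0, the cost is strictly submodular, so the
# optimum must be the increasing matching (cf. the submodular-cost proposition):
x = np.array([0.1, 0.5, 1.0]); y = np.array([0.2, 0.6, 0.9])
a = b = np.full(3, 1.0 / 3.0)
pi, val = solve_linearized_gw(x, y, a, b, m=1.0)
C = -np.outer(x**2, y**2) - 4.0 * np.outer(x, y)
assert np.isclose(val, np.trace(C) / 3.0)  # diagonal (increasing) plan
```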
We therefore compute both $m_\text{min}$ and $m_\text{max}$ by solving the linear programs in \cref{eq:m-min-max}, discretize the interval $[m_\text{min},m_\text{max}]$ with $N_{\Delta m}$ points, solve the corresponding linear program for every value of the parameter $m$, and evaluate the $\operatorname{GW}$ cost of each resulting optimal plan. We then check whether the optimal plan exhibits a bimap or a map/anti-map structure. The procedure is described in \cref{algorithm:bimap}. \begin{figure}[h] \centering \begin{minipage}{.8\linewidth} \begin{algorithm}[H] \caption{Generating bimaps from adversarial examples.} \label{algorithm:bimap} \vspace{1mm} \textbf{Input:} an adversarial plan $\pi_f=\operatorname{id}(X_f,Y_f)$ obtained from \cref{algorithm:gd} \vspace{3mm} \noindent\textbf{Parameters:} \begin{itemize}[nolistsep] \item $\sigma$: standard deviation of convolution \item $N_{\Delta x}$: discretization precision \item $N_{\Delta m}$: discretization precision of the interval $[m_\text{min},m_\text{max}]$ \end{itemize} \vspace{3mm} \textbf{Algorithm:} \begin{algorithmic}[1] \State $a\gets \texttt{convolution}(X_f,\sigma,N_{\Delta x})$ \State $b\gets \texttt{convolution}(Y_f,\sigma,N_{\Delta x})$ \Comment{optional (see below)} \State $m_{\vphantom{\text{max}}\text{min}}\gets \min_{\pi\in U(a,b)}\ \langle C_{xy},\,\pi\rangle$ \Comment{solve linear programs} \State $m_{\vphantom{\text{min}}\text{max}}\gets \max_{\pi\in U(a,b)}\ \langle C_{xy},\,\pi\rangle$ \State \texttt{scores} $\gets \texttt{[]}$ \For{$m\in\{m_\text{min},\dots,m_\text{max}\}$} \Comment{with $N_{\Delta m}$ points} \State $\pi_{m}\gets \argmin_{\pi\in U(a,b)}\ \langle C_{\operatorname{GW}(m)},\,\pi\rangle$ \Comment{solve linear program} \State append $\operatorname{GW}(\pi_{m})$ to \texttt{scores} \EndFor \State $\pi^\star\gets\argmin_\pi$ \texttt{scores} \Comment{take best plan for GW} \State $b\gets$ ``$\pi^\star$ is a bimap'' \State return $\pi^\star$, $b$
\end{algorithmic} \vspace{3mm} \textbf{Outputs:} \begin{itemize}[nolistsep] \item $\pi^\star$: optimal plan for GW \item $b$: boolean asserting if $\pi^\star$ is a bimap \end{itemize} \end{algorithm} \end{minipage} \end{figure} We display the results on \cref{fig:res-bimap}, where we plot the optimal correspondence plan $\pi^\star$ in two cases: \begin{enumerate}[label=(\alph*),nolistsep] \item starting from an adversarial plan with both marginals convolved so as to simulate densities; \item starting from an adversarial plan with only the first marginal convolved and the second marginal being a sum of Dirac measures. \end{enumerate} To facilitate reading, we draw a blue pixel at a location $x$ on the discretized $x$-axis (resp.~$y$ on the $y$-axis) each time $x$ (resp.~$y$) has two (disjoint) images (resp.~antecedents), making $\pi^\star$ a bimap (resp.~a bi-anti-map), or the union of a graph and an anti-graph. In both cases, we observe that $\pi^\star$ is not a map but a bimap instead, similarly to \cite[Sec. 4.5]{chiappori2010hedonic}. Note that in case (b), $\nu$ being atomic, there cannot be a map from $\nu$ to $\mu$, so in both (a) and (b) we numerically exhibit an instance where there is \textit{a priori} no map from either $\mu$ to $\nu$ or from $\nu$ to $\mu$. We also plot the submodularity regions of the linearized GW cost function with parameter $m(\pi^\star)$ as an overlay and we observe that when the plan gives mass to a region where the cost is submodular (resp.~supermodular), it has a monotone increasing (resp.~decreasing) behaviour in this region.
\begin{figure}[h] \centering \begin{subfigure}[b]{.49\linewidth} \centering \include{figures/bimap_plan_double} \vspace{-8mm} \end{subfigure} \hfill \begin{subfigure}[b]{.49\linewidth} \centering \include{figures/bimap_plan_bis} \vspace{-8mm} \end{subfigure} \caption{Optimal correspondence plan (in log scale) obtained with our procedure, starting either from a plan with both marginals convolved \textbf{(Left)\ } or with only the first marginal convolved \textbf{(Right)\ }; bimap and anti-bimap coordinates (\textcolor{tabblue}{blue}); submodularity regions (\textcolor{tabgreen!60}{light green}). Parameters: $\sigma=\smash{5\cdot10^{-3}}$, $N_{\Delta x}=150$, $N_{\Delta m}=2000$. } \label{fig:res-bimap} \end{figure} \begin{remark} Although the region on which the optimal plan $\pi^\star$ is a bimap is of small size on \cref{fig:res-bimap} right, we cannot expect better due to the form of the adversarial example $\pi_f$. Indeed, the bimap behaviour is governed by the outliers of the distributions (see \cref{fig:res-GD}), as points in the right tail of $\mu$ are encouraged to split in half between points in the right and left tails of $\nu$. As the bimap region only spans the outlier region, it stays of small size when $\mu$ and $\nu$ have only few outliers. \end{remark} \clearpage \subsubsection{Empirical instability of the optimality of monotone rearrangements} \label{subsec:quadra1D_instability} The above study demonstrates that there exist probability measures $\mu$ and $\nu$ for which property \begin{equation*} P(\mu,\nu)\text{ : }\quad \pi_\text{mon}^\oplus \text{ or } \pi_\text{mon}^\ominus \text{ is an optimal correspondence plan between }\mu\text{ and }\nu \end{equation*} does not hold.
However, it very likely holds in practice when generating empirical distributions at random; one could ask whether property $P(\mu,\nu)$ is at least \emph{stable}, \textit{i.e.}~whether, when we have $\mu_0$ and $\nu_0$ satisfying $P(\mu_0,\nu_0)$, there is a small ball around $\mu_0$ and $\nu_0$ (for a given distance, say Wasserstein-$p$) inside which property $P$ remains valid. We believe that this is not the case. In order to illustrate this, we start from the counter-example given in \cite{beinert2022assignment} with $N=7$ points and $\varepsilon=10^{-2}$, which we convolve with a Gaussian of standard deviation $\sigma$ as before. We then plot as a function of $m\in[m_\text{min},m_\text{max}]$ the (true) GW cost of a plan $\pi_m^\star$, optimal for the linearized GW problem $\pi^\star_m\in\argmin_{\pi}\ \langle C_{\operatorname{GW}(m)},\,\pi\rangle$. The minimum values of this graph are attained at the correlations of optimal correspondence plans, as explained in \Cref{subsec:quadra1D_adversarial}. Hence if $\sigma$ is small, this optimal plan is not a monotone rearrangement by construction and the minima are not located on the boundary of the domain. On the contrary, when $\sigma$ is large, the convolved measures stop being adversarial and the monotone rearrangements start being optimal again. In order to study the phase transition, we plot on \Cref{fig:no-stab} the landscape of $m\mapsto \operatorname{GW}(\pi^\star_m)$ while gradually increasing the value of $\sigma$.
\begin{figure}[h] \centering \begin{subfigure}[b]{.24\linewidth} \centering \include{figures/stab_1} \vspace{-8mm} \caption*{$\sigma_1=8\cdot10^{-3}$} \end{subfigure} \begin{subfigure}[b]{.24\linewidth} \centering \include{figures/stab_25} \vspace{-8mm} \caption*{$\sigma_2=8.8\cdot10^{-3}$} \end{subfigure} \begin{subfigure}[b]{.24\linewidth} \centering \include{figures/stab_4} \vspace{-8mm} \caption*{$\sigma_3=10^{-2}$} \end{subfigure} \begin{subfigure}[b]{.24\linewidth} \centering \include{figures/stab_5} \vspace{-8mm} \caption*{$\sigma_4=3\cdot10^{-2}$} \end{subfigure} \caption{Evolution of the graph of $m\mapsto \operatorname{GW}(\pi^\star_m)$ when varying $\sigma$ on the counter-example of \cite{beinert2022assignment} with $N=7$ points and $\varepsilon=10^{-2}$. Parameters: $N_{\Delta x}=100$, $N_{\Delta m}=150$.} \label{fig:no-stab} \end{figure} Looking at \cref{fig:no-stab}, it is worth noting that there is an incentive for plans of correlation close to $m_\text{min}$ or $m_\text{max}$ to be the monotone rearrangements, as the horizontal portions of the plot suggest. More importantly, it can be observed that when $\sigma=\sigma_3$ or $\sigma_4$, the monotone rearrangements are optimal, as their correlations realize the minimum of $m\mapsto \operatorname{GW}(\pi^\star_m)$; unlike for $\sigma_1$ and $\sigma_2$, for which the minimum value of the plot is located near zero. Hence there exists a $\sigma_0\in(\sigma_2,\sigma_3)$ for which the convolved measures have $\pi_\text{mon}^\oplus$, $\pi_\text{mon}^\ominus$, and another plan $\pi_0$ as optimal correspondence plans.
It follows directly that property $P$ does not hold in any neighbourhood of these specific measures $\mu_0$ and $\nu_0$; hence the following statement, which we state as a conjecture since we have only provided numerical evidence for it: \begin{conjecture}[Instability of the optimality of monotone rearrangements] \label{theo:no-stab} There exist two measures $\mu,\nu$ on $\mathbb{R}$ such that the optimal plan is supported by the graph of a monotone map, together with sequences $\mu_n,\nu_n$ weakly converging to $\mu,\nu$ whose optimal plans are never supported by a monotone map. \end{conjecture} \subsubsection{A positive result for measures with two components} \label{subsec:quadra_1D_positive} In the following, $\mu_1$, $\mu_2$, $\nu_1$ and $\nu_2$ are four probability measures supported on a compact interval $A \subset \mathbb{R}$. Denote $\Delta = \mathrm{diam}(A)$, and fix $t \in (0,1)$ and $K > \Delta$. Let $\tau_K : x \mapsto x + K$ denote the translation by $K$, and $A+K = \tau_K(A) = \{x + K\mid x \in A\}$. Now, introduce the measures \begin{equation} \mu = (1-t) \mu_1 + t \tau_{K\#} \mu_2 \quad \text{and} \quad \nu = (1-t) \nu_1 + t \tau_{K\#} \nu_2\,. \end{equation} Note that $\mu_1$ and $\tau_{K\#} \mu_2$ (resp.~$\nu_1$ and $\tau_{K\#}\nu_2$) have disjoint supports. We want to prove the following: \begin{proposition}\label{prop:measure_separation} For $K$ large enough, the unique optimal plan for the quadratic cost between $\mu$ and $\nu$ is given by one of the two monotone maps (increasing or decreasing). \end{proposition} \begin{remark} The hypothesis of the proposition illustrates that monotone maps are favored when $\mu$ and $\nu$ both contain one or more outliers. The proof actually shows the importance of long-range correspondences, or global effects, over local correspondences in the plan. In other words, even though monotone maps may not be locally optimal, global correspondences favor them.
Moreover, these global correspondences have proportionally more weight in the GW functional since the cost is the squared difference of the squared distances. In conclusion, pairs of points which are far apart tend to be put in correspondence. In turn, this correspondence, as shown in the proof, favors monotone matching. Although non-quantitative, this argument gives some insight into why a monotone map is often optimal. \end{remark} \noindent We first prove the following lemma: \begin{lemma}\label{lemma:measure_separation} In the setting described above, there exists $K_0>0$ such that if $K \geq K_0$, every optimal plan $\pi$ for $\operatorname{GW}(\mu,\nu)$ can be decomposed as $\pi = \pi_1 + \pi_2$, where either: \begin{enumerate} \item $\pi_1$ is supported on $A \times A$ and $\pi_2$ on $(A + K) \times (A + K)$ (that is, we separately transport $\mu_1$ to $\nu_1$ and $\tau_{K\#}\mu_2$ to $\tau_{K\#}\nu_2$), or \item $\pi_1$ is supported on $A \times (A+K)$ and $\pi_2$ on $(A + K) \times A$ (that is, we transport $\mu_1$ to $\tau_{K\#}\nu_2$ and $\tau_{K\#}\mu_2$ to $\nu_1$). \end{enumerate} Furthermore, whenever $t \neq \frac{1}{2}$, only the first point can occur. \end{lemma} \begin{figure}[h] \centering \input{figures/illu_gw_separated} \caption{Visual sketch of the proof of \cref{lemma:measure_separation}.} \label{FigPositiveResult} \end{figure} \begin{proof} Consider first the case $t = \frac{1}{2}$. To shorten notation, we write $A_1 = A$ and $A_2 = A+K$. We can now decompose any plan $\pi$ as $\pi_{11} + \pi_{12} + \pi_{21} + \pi_{22}$ where for instance $\pi_{12}$ denotes the restriction of the plan $\pi$ to the product $A_1 \times A_2$. Let us also denote by $r$ the mass of $\pi_{12}$; one has $0 \leq r\leq 1/2$, and by symmetry one can assume that $r\leq 1/4$: otherwise, we exchange $A_1$ and $A_2$ for the second measure since the cost is invariant to isometries.
Remark that, due to marginal constraints, the total mass of $\pi_{11}$ and $\pi_{22}$ is $1/2 - r$ and the mass of $\pi_{21}$ is $r$. Therefore, it is possible to consider a coupling plan $\tilde \pi_{11}$ between the first marginal of $\pi_{12}$ and the second marginal of $\pi_{21}$, and similarly, let $\tilde \pi_{22}$ be a coupling plan between the first marginal of $\pi_{21}$ and the second marginal of $\pi_{12}$. We then define a competitor plan $\tilde \pi = \pi_{11} + \tilde \pi_{11} + \pi_{22} + \tilde \pi_{22}$. The first step is to get a lower bound on the term $\operatorname{GW}(\pi,\pi)$. Slightly overloading the notations, we introduce \begin{equation}\label{EqBilinearGW} \operatorname{GW}(\pi,\gamma) = \int c \,\mathrm d \pi \otimes \gamma\,. \end{equation} We expand $\operatorname{GW}$ by bilinearity \begin{equation*} \operatorname{GW}(\pi,\pi) = \sum_{i,j,i',j'} \operatorname{GW}(\pi_{ij},\pi_{i'j'}) = \sum_{i,j} \operatorname{GW}(\pi_{ii},\pi_{jj}) + R\,, \end{equation*} where $R$ is the remainder, which contains 12 terms of two types. 8 terms are of the type $\operatorname{GW}(\pi_{12},\pi_{11}) \geq r(1/2-r)(K^2 - \Delta^2)^2$. Indeed, one compares pairs of points $(x,x')$ and $(y,y')$ for $(x,y) \in A_1 \times A_1$ and $(x',y') \in A_1 \times A_2$, therefore $(x - x')^2$ is upper bounded by $\Delta^2$ and $(y - y')^2$ lower bounded by $K^2$ and the bound above follows after integration against the corresponding measures. The second type is $\operatorname{GW}(\pi_{12},\pi_{21}) \geq 0$; there are $4$ such terms. We thus have \begin{equation*} R \geq 8 r(1/2-r)(K^2 - \Delta^2)^2\,. \end{equation*} We now upper-bound the competitor.
Similarly, one has \begin{equation*} \operatorname{GW}(\tilde\pi,\tilde\pi) = \sum_{i,j} \operatorname{GW}(\pi_{ii},\pi_{jj}) + \tilde R\, \end{equation*} where $\tilde R = 2\operatorname{GW}(\tilde \pi_{11},\pi_{22} + \tilde \pi_{22}) +2 \operatorname{GW}(\tilde \pi_{22},\pi_{11} + \tilde \pi_{11}) + 2\operatorname{GW}(\pi_{11},\tilde \pi_{11}) + 2\operatorname{GW}(\pi_{22},\tilde \pi_{22})$. The last two terms can be upper bounded by $2r(1/2-r) \Delta^2$. Indeed, one compares squared distances between pairs of points in $A_1$ to squared distances between pairs of points in $A_1$, so each is upper bounded by $\Delta^2$. Again by elementary inequalities (see \cref{FigPositiveResult}), the first two terms can be upper bounded by $r(2K\Delta + \Delta^2)^2$. Note that the total mass of the plan $\pi_{11} + \tilde \pi_{11}$ is $1/2$, which explains why $(1/2-r)$ does not appear. Therefore, the difference between the two values of $\operatorname{GW}$ is \begin{equation}\label{EqComparison} \operatorname{GW}(\pi,\pi) - \operatorname{GW}(\tilde \pi,\tilde \pi) \geq r\left(8 (1/2-r)(K^2 - \Delta^2)^2 - 4 (1/2-r) \Delta^2 - 2(2K\Delta + \Delta^2)^2\right)\,. \end{equation} Then, since $1/2 - r\geq 1/4$, the limit in $K$ of the polynomial function on the r.h.s. of \Cref{EqComparison} is $+\infty$ uniformly in $r\in [0,\frac 14]$, and the result follows: the polynomial function above is nonnegative for $K$ large enough, for instance for any $K\geq\max(0,K_0)$ where $K_0$ is its largest root. The proof in the case $t > 1/2$ (the case $t<1/2$ is symmetric) is even simpler since $t- r > t-1/2$ and consequently there is no choice in the matching of the two measures; it is determined by the corresponding masses. One can directly apply the argument above. \qedhere \end{proof} \noindent We now prove \cref{prop:measure_separation}.
\begin{proof}[Proof of \cref{prop:measure_separation}] Thanks to \cref{lemma:measure_separation}, we know that we can restrict to transportation plans $\pi = \pi_1 + \pi_2$ where, up to flipping $\nu$, we can assume that $\pi_1$ is supported on $A \times A$ and $\pi_2$ on $(A+K) \times (A+K)$.\footnote{Note: this is where the choice is made, as in the proof of \cref{lemma:measure_separation}, between the increasing and the decreasing mappings. Using this convention, the increasing monotone map is shown to be optimal.} Using again the bilinear form $\operatorname{GW}(\pi,\gamma)$ defined in \cref{EqBilinearGW}, the objective value reached by any transport plan $\pi = \pi_1 + \pi_2$ actually decomposes as \[ \operatorname{GW}(\pi,\pi) = \operatorname{GW}(\pi_1,\pi_1) + 2\operatorname{GW}(\pi_1,\pi_2) + \operatorname{GW}(\pi_2,\pi_2)\,. \] Now, assume that we have found $\pi_2^\star$ optimal. Let us minimize in $\pi_1$ the resulting quadratic problem: \[ \min_{\pi_1}\ \operatorname{GW}(\pi_1,\pi_1) + 2\operatorname{GW}(\pi_1,\pi_2^\star)\,. \] We know that if $\pi_1^\star$ is a minimizer of this quantity, it must also be a solution of the \emph{linear} problem \[ \min_{\pi_1}\ \operatorname{GW}(\pi_1, \pi_1^\star) + \operatorname{GW}(\pi_1,\pi_2^\star)\,.\] This minimization problem is exactly the optimal transportation problem for the cost \begin{multline*} c(x,y) = \int_{A \times A} ((x-x')^2 - (y-y')^2)^2 \,\mathrm d \pi_1^\star(x',y') \\+ \int_{(A+K)^2} ((x-x'')^2 - (y-y'')^2)^2 \,\mathrm d \pi_2^\star(x'',y'')
\end{multline*} Now, using the relation $((x-x'')^2 - (y-y'')^2)^2 = ((x-y)-(x''-y''))^2 ((x+y) - (x''+y''))^2$, and that $\pi_2^\star$ is a transportation plan between $\tau_{K\#}\mu_2$ and $\tau_{K\#} \nu_2$ so that we can make a change of variable, observe that \begin{multline*} c(x,y) = \int_{A\times A} ((x-x')^2 - (y-y')^2)^2 \,\mathrm d \pi_1^\star(x',y') \\+ \int_{A \times A} ((x-y)-(x''-y''))^2 ((x+y) - (x'' + y'' +2K))^2 \,\mathrm d (\tau_{-K},\tau_{-K})_\#\pi_2^\star(x'',y'')\,. \end{multline*} Now, observe that $\partial_{xy} c(x,y)$ is a polynomial function in $K,x,y$ whose dominant term in $K$ is simply $-2K^2$; recall that $A$ is compact, so that this polynomial function is bounded in $x,y$. We conclude \[ \partial_{xy} c(x,y) = -2 K^2 + O(K) < 0 \] for $K$ large enough, for all $x,y \in A$. The plan $\pi_1^\star$ is optimal for a submodular cost, and by \cref{prop:submod} must be the increasing matching between $\mu_1$ and $\nu_1$. By symmetry, so is $\pi_2^\star$. \end{proof}
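The final step above, the optimality of the increasing matching for a submodular cost, can be illustrated numerically. The following sketch (with made-up points, and a surrogate cost whose mixed derivative is the constant $-2$, mimicking the dominant $-2K^2$ term of $\partial_{xy}c$ derived in the proof) checks by brute force that the optimal one-to-one matching is the increasing one:

```python
import itertools

# Illustrative check (made-up points): for a cost with negative mixed
# derivative (submodular), the optimal one-to-one matching between
# increasingly sorted point sets is the increasing (identity) matching.
xs = [0.0, 0.3, 1.1, 2.0]
ys = [0.1, 0.5, 0.9, 1.7]

def cost(x, y):
    # Surrogate cost with d^2 c / dx dy = -2 < 0 everywhere.
    return -2.0 * x * y

def total_cost(perm):
    return sum(cost(x, ys[j]) for x, j in zip(xs, perm))

best = min(itertools.permutations(range(len(xs))), key=total_cost)
print(best)  # (0, 1, 2, 3): the increasing matching
```

By the rearrangement inequality, minimising $\sum_i c(x_i, y_{\pi(i)})$ with $\partial_{xy}c<0$ amounts to maximising $\sum_i x_i y_{\pi(i)}$, which is achieved by the sorted pairing.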
\section{Introduction} Whether self-gravitating stationary/static configurations exist is a fundamental issue in general relativity. The famous and elegant Lichnerowicz theorem tells us that vacuum, strictly stationary spacetimes should be static \cite{Lich1955}. The phrase ``strict stationarity'' means that the existence of a timelike Killing vector field in the whole region of the spacetime is assumed (no black holes!). Since the total mass of the spacetime is zero, the positive mass theorem shows us that the spacetime should be the Minkowski spacetime \cite{Schoen81, Witten81}. A similar discussion has been extended to asymptotically anti-deSitter spacetimes \cite{BGH}. However, we know that static non-trivial solutions exist for Einstein-Yang-Mills systems \cite{Bartnik88} (see Ref. \cite{Winstanley2008} and references therein). Holographic arguments for condensed matter are deeply related to the existence of non-trivial self-gravitating configurations in asymptotically anti-deSitter spacetimes (see Ref. \cite{Hartnoll:2011fn} for a review). Recently, interesting new issues have arisen concerning self-gravitating objects and the final fate of gravitational collapse in asymptotically anti-deSitter spacetimes \cite{Dias:2011at, Bizon:2011gg, Dias:2011ss} (see also Refs. \cite{Stotyn2012, Gentle2012}). Therein, non-stationary numerical solutions with a single Killing vector field were found in the Einstein-complex scalar system \cite{Dias:2011at} (these are a kind of boson star; for boson stars, see Ref. \cite{Liebling2012} and references therein). A no-go theorem, or Lichnerowicz-type theorem, is also important, because such theorems provide us with definite information about the above issue in an implicit way. In this paper we present a no-go theorem for non-trivial self-gravitating configurations composed of $p$-form fields/complex scalar fields in strictly stationary spacetimes. We will consider asymptotically flat or anti-deSitter spacetimes.
This no-go result does not contradict the results of recent works \cite{Dias:2011at, Bizon:2011gg, Dias:2011ss} because the spacetimes there are not strictly stationary, or there are couplings between gauge fields and scalar fields, and so on. See Ref. \cite{Gibbons:1990um} for related issues about static configurations. The organisation of this paper is as follows. In Sec. II, we discuss strictly stationary spacetimes in four dimensions, and show that the Maxwell field and complex scalar field cannot have a non-trivial configuration. In Sec. III, we generalise this to higher dimensions with $p$-form fields and complex scalar fields. In Appendix A, we present a technical detail. In Appendix B, we discuss an alternative argument for asymptotically anti-deSitter spacetimes. \section{Four dimensions} Bearing the recent work \cite{Dias:2011at} in mind, we consider the following system \begin{equation} L=R-\frac{1}{2}F^2-2|\partial \pi|^2-2\Lambda, \end{equation} where $F$ and $\pi$ are the field strength of the Maxwell field and a complex scalar field, respectively. There is no source term for the Maxwell field and no potential for $\pi$. The Einstein equations are \begin{equation} R_{ab}=F_a^{~c}F_{bc}-\frac{1}{4}g_{ab}F^2+\partial_a \pi \partial_b \pi^*+\partial_a \pi^* \partial_b \pi + \Lambda g_{ab}. \end{equation} Let us focus on strictly stationary spacetimes, that is, we assume that there is a timelike Killing vector field $k^a$ everywhere. In addition, we assume that the Maxwell field and complex scalar field are also stationary, $\mbox \pounds_k F=0, k^a \partial_a \pi =0$. Then we see $\mbox \pounds_k T_{ab}=0$, which is consistent with the stationarity of the spacetime. The twist vector $\omega^a$ is defined by \begin{equation} \omega_a=\frac{1}{2}\epsilon_a^{~bcd}k_b \nabla_c k_d. \end{equation} Then, from the definition of $\omega_a$, one can show \begin{equation} \nabla_a (\omega^a V^{-4})=0, \label{divtwist} \end{equation} where $V^2=-k_ak^a$.
We introduce the electric and magnetic components of the Maxwell field as \begin{equation} E_a = k^b F_{ba} \end{equation} and \begin{equation} B_a = - \frac{1}{2} \epsilon_{abcd}F^{bc}k^d, \end{equation} respectively. Using $E^a$ and $B^a$, the field strength of the Maxwell field is written as \begin{equation} V^2F_{ab}=-2k_{[a}E_{b]}+\epsilon_{abcd}k^cB^d. \end{equation} The source-free Maxwell equation becomes \begin{equation} \nabla_{[a}E_{b]}=0, \end{equation} \begin{equation} \nabla_{[a}B_{b]}=0, \end{equation} \begin{equation} \nabla_a (E^aV^{-2})-2 \omega_a B^a V^{-4}=0 \end{equation} and \begin{equation} \nabla_a (B^aV^{-2})+2 \omega_a E^a V^{-4}=0. \end{equation} From the first two equations, we see that $E^a$ and $B^a$ have potentials, \begin{equation} E_a=\nabla_a \Phi,~~B_a=\nabla_a \Psi. \end{equation} Using the Einstein equations, we can show that \begin{equation} \nabla_{[a}\omega_{b]}=B_{[a}E_{b]} \end{equation} holds. The right-hand side is the Poynting flux. To show the above, we used the assumed stationarity of the complex scalar field. Using the fact that the electric and magnetic fields are each written in terms of a potential, we can rewrite the above in several different ways; for example, we have the following two typical equations \begin{equation} \nabla_{[a}(\omega_{b]}-\Psi E_{b]})=0 \end{equation} and \begin{equation} \nabla_{[a}(\omega_{b]}+\Phi B_{b]})=0. \end{equation} Therefore, the existence of scalar functions is guaranteed: \begin{equation} \omega_a-\Psi E_a=\nabla_a U_E \label{Ue} \end{equation} and \begin{equation} \omega_a+\Phi B_a=\nabla_a U_B. \label{Ub} \end{equation} Then, using the Maxwell equation, Eqs. (\ref{divtwist}), (\ref{Ue}) and (\ref{Ub}) give us \begin{equation} \nabla_a \Bigl(U_E \frac{\omega^a}{V^4}-\frac{\Psi}{2V^2}B^a \Bigr)=\frac{\omega_a \omega^a}{V^4}-\frac{B_a B^a}{2V^2}.
\label{UE} \end{equation} and \begin{equation} \nabla_a \Bigl(U_B \frac{\omega^a}{V^4}-\frac{\Phi}{2V^2}E^a \Bigr)=\frac{\omega_a \omega^a}{V^4}-\frac{E_a E^a}{2V^2}. \label{UB} \end{equation} They correspond to Eq. (6.35) on page 156 of Ref. \cite{Carter}, which has a sign error (see, for example, Ref. \cite{Heusler1998}). On the other hand, the Einstein equations give us \begin{equation} \frac{2}{V^2}R_{ab}k^ak^b= \nabla_a \Bigl(\frac{\nabla^a V^2}{V^2} \Bigr) + 4 \frac{\omega_a \omega^a}{V^4} =-2\Lambda + \frac{E_a E^a+B_a B^a}{V^2}. \label{rkk} \end{equation} Note that this corresponds to the Raychaudhuri equation for non-geodesics. It is easy to see that Eqs. (\ref{UE}), (\ref{UB}) and (\ref{rkk}) imply \begin{equation} \nabla_a \Bigl(\frac{\nabla^a V^2}{V^2}+W^a \Bigr)=-2\Lambda, \end{equation} where \begin{equation} W^a=2(U_E+U_B)\frac{\omega^a}{V^4}-\frac{\Psi B^a+\Phi E^a}{V^2}; \end{equation} indeed, rewriting Eq. (\ref{rkk}) as an expression for $\nabla_a(\nabla^a V^2/V^2)$ and adding twice Eqs. (\ref{UE}) and (\ref{UB}), the $\omega_a\omega^a$ terms and the $E_aE^a$, $B_aB^a$ terms cancel. Let us first consider the cases with $ \Lambda=0$. Then we see \begin{equation} \nabla_a \Bigl(\frac{\nabla^a V^2}{V^2}+W^a \Bigr)=0. \label{vw} \end{equation} The spatial volume integral of the above reduces to a surface integral. Since $W^a$ does not contribute to the surface integral, the surface term gives the total mass, and then \begin{equation} M=0 \end{equation} (see Appendix A). Here note that the positive mass theorem holds because the dominant energy condition is satisfied. Thus, the corollary of the positive mass theorem tells us that the spacetime should be the Minkowski spacetime \cite{Schoen81, Witten81}. This means that the electromagnetic field and the complex scalar field vanish. Next we consider the cases with $\Lambda <0$. Introducing a vector field $r^a$ satisfying \begin{equation} \nabla_a r^a=-2\Lambda, \label{divr} \end{equation} we find \begin{equation} \nabla_a \Bigl(\frac{\nabla^a V^2}{V^2}-r^a+W^a \Bigr)=0. \label{surface} \end{equation} The volume integral shows us \begin{equation} M=0 \end{equation} again.
Then the positive mass theorem implies that the spacetime should be exactly the anti-deSitter spacetime \cite{Gibbons83, Gibbons83-2}. In Appendix A, we discuss the existence of $r^a$ satisfying Eq. (\ref{divr}). However, one may not want to introduce this $r^a$; avoiding it is possible in a restricted case. In Appendix B, we present an alternative proof for asymptotically anti-deSitter spacetimes. The price we have to pay is that we cannot include the Maxwell field in the argument. One may wonder if one can extend this result to the cases with a positive cosmological constant, $\Lambda>0$. Although there are some efforts \cite{dSPET}, we do not have a positive mass theorem which holds for general asymptotically deSitter spacetimes. Thus, we cannot obtain the same statement with our current approach. Note that almost all of the basic equations presented here were derived in Ref. \cite{Heusler1996}. However, there these equations were applied to stationary {\it axisymmetric} black holes to derive the Smarr formula and so on. On the other hand, we have focused on strictly stationary spacetimes, which are not restricted to be axisymmetric in general and do not contain black holes. \section{Higher dimensions} Let us examine the same issue in higher dimensions. The Lagrangian we consider is \begin{equation} {\cal L}=R-\frac{1}{p!}H_{(p)}^2-2|\partial \pi|^2-2\Lambda, \end{equation} where $H_{(p)}$ is the field strength of a $(p-1)$-form field potential and $\pi$ is a complex scalar field. We consider strictly stationary spacetimes with stationary $p$-form fields and complex scalar fields, $\mbox \pounds_k H_{(p)}=0, \mbox \pounds_k \pi=0$. The Einstein equations become \begin{eqnarray} R_{ab} = \frac{1}{p!}\Bigl(pH_a^{~c_1 \cdots c_{p-1}}H_{bc_1 \cdots c_{p-1}}-\frac{p-1}{n-2}g_{ab}H_{(p)}^2 \Bigr) +\partial_a \pi \partial_b \pi^*+\partial_a \pi^* \partial_b \pi +\frac{2}{n-2}\Lambda g_{ab}.
\end{eqnarray} The field equations for the source-free $p$-form field are \begin{eqnarray} \nabla_a H^{a_1 \cdots a_{p-1}a}=0 \end{eqnarray} and the Bianchi identity. Let us decompose the $p$-form field strength into the electric~($E_{a_1 \cdots a_{p-1}}$) and magnetic parts~($B_{a_1 \cdots a_{n-p-1}}$) as \begin{eqnarray} V^2H_{a_1 \cdots a_{p}} = -p k_{[a_1}E_{a_2 \dots a_{p}]} +\epsilon_{a_1 \cdots a_{p}a_{p+1}a_{p+2}\cdots a_n} k^{a_{p+1}}B^{a_{p+2}\cdots a_n}, \end{eqnarray} where we define each component by \begin{eqnarray} E_{a_1 \dots a_{p-1}}=H_{aa_1 \cdots a_{p-1}}k^a \end{eqnarray} and \begin{eqnarray} B_{a_1 \dots a_{n-p-1}}=\frac{1}{p!(n-p-1)!}\epsilon_{b_1 \cdots b_p c a_1 \cdots a_{n-p-1}}k^cH^{b_1 \cdots b_{p}}. \end{eqnarray} Here we define the twist tensor $\omega_{a_1 \cdots a_{n-3}}$ as \begin{eqnarray} \omega_{a_1 \cdots a_{n-3}}=\alpha \epsilon_{a_1 \cdots a_{n-3}bcd}k^b \nabla^c k^d, \end{eqnarray} where $\alpha$ is a constant. From the definition of the twist, it is easy to check that \begin{eqnarray} \nabla_{a_{n-3}}\Bigl(\frac{\omega^{a_1 \cdots a_{n-3}}}{V^4} \Bigr)=0 \label{divw} \end{eqnarray} holds. From the field equations, we have \begin{eqnarray} \nabla_{[a_1} E_{a_2 \cdots a_{p}]}=0 \end{eqnarray} and \begin{eqnarray} \nabla_{[a_1} B_{a_2 \cdots a_{n-p}]}=0. \end{eqnarray} Then there are the potentials, that is, \begin{eqnarray} E_{a_1 \cdots a_{p-1}}=\nabla_{[a_1} \Phi_{a_2 \cdots a_{p-1}]} \end{eqnarray} and \begin{eqnarray} B_{a_1 \cdots a_{n-p-1}}=\nabla_{[a_1} \Psi_{a_2 \cdots a_{n-p-1}]}. \end{eqnarray} It is seen from the definition of $E_{a_1 \cdots a_{p-1}}$ and $B_{a_1 \cdots a_{n-p-1}}$ that $k^{a_1} \Phi_{a_1\dots a_{p-2}}= 0$ and $k^{a_1} \Psi_{a_1\dots a_{n-p-2}}=0$ hold.
The other field equations give us \begin{eqnarray} \Phi_{a_1 \cdots a_{p-2}}\nabla_{a_{p-1}} \Bigl(\frac{E^{a_1 \dots a_{p-2}a_{p-1}}}{V^2} \Bigr) =\alpha^{-1}(-1)^n \omega^{a_1 \cdots a_{p-2}b_1 \cdots b_{n-p-1}}\Phi_{a_1 \cdots a_{p-2}}\frac{B_{b_1 \cdots b_{n-p-1}}}{V^4} \label{twistB} \end{eqnarray} and \begin{eqnarray} \Psi_{a_1 \cdots a_{n-p-2}}\nabla_{a} \Bigl(\frac{B^{a_1 \dots a_{n-p-2}a}}{V^2} \Bigr) =-\alpha^{-1}\frac{(-1)^p}{(n-p-1)!(p-1)!} \omega^{b_1 \cdots b_{p-1}a_1 \cdots a_{n-p-2}}\Psi_{a_1 \cdots a_{n-p-2}} \frac{E_{b_1 \cdots b_{p-1}}}{V^4}. \end{eqnarray} Using the Einstein equations, we can show \begin{eqnarray} \alpha^{-1}\epsilon^{abcd_1 \cdots d_{n-3}}\nabla_c \omega_{d_1 \cdots d_{n-3}} & = & 2(n-3)!(-1)^n (k^aR^b_{~c}-k^bR^a_{~c})k^c \nonumber \\ & = & -\frac{2(-1)^{n+p}(n-3)!}{(p-1)!} \epsilon^{abcd_1 \cdots d_{n-3}}E_{cd_1 \cdots d_{p-2}}B_{d_{p-1}\cdots d_{n-3}}. \end{eqnarray} Note that we used the stationarity of the complex scalar field. Then we see that there are $(n-4)$-forms $U^E$ and $U^B$ satisfying \begin{eqnarray} \nabla_{[a_1}U^E_{a_2 \cdots a_{n-3}]} =\omega_{a_1 \cdots a_{n-3}}-\frac{(-1)^n 2\alpha (n-3)!}{(p-1)!} E_{[a_1 \cdots a_{p-1}}\Psi_{a_p \cdots a_{n-3}]} \end{eqnarray} and \begin{eqnarray} \nabla_{[a_1}U^B_{a_2 \cdots a_{n-3}]} =\omega_{a_1 \cdots a_{n-3}}+\frac{(-1)^{n+p} 2\alpha (n-3)!}{(p-1)!} \Phi_{[a_1 \cdots a_{p-2}}B_{a_{p-1} \cdots a_{n-3}]}, \end{eqnarray} respectively. Using $U^{E,B}$ and Eq.
(\ref{divw}), we obtain the following equations \begin{eqnarray} \nabla_a \Bigl( U^E_{a_1 \cdots a_{n-4}}\frac{\omega^{a_1 \cdots a_{n-4}a}}{V^4} -(-1)^p \alpha^2 \beta \frac{\Psi_{a_1 \cdots a_{n-p-2}}B^{a_1 \cdots a_{n-p-2}a}}{V^2} \Bigr) =(-1)^n \Bigl(\frac{\omega^2}{V^4}-\alpha^2 \beta \frac{B^2}{V^2}\Bigr) \label{idE} \end{eqnarray} and \begin{eqnarray} \nabla_a \Bigl( U^B_{a_1 \cdots a_{n-4}}\frac{\omega^{a_1 \cdots a_{n-4}a}}{V^4} -(-1)^{n+p} \alpha^2 \gamma \frac{\Phi_{a_1 \cdots a_{p-2}}E^{a_1 \cdots a_{p-2}a}}{V^2} \Bigr) =(-1)^n \Bigl(\frac{\omega^2}{V^4}-\alpha^2 \gamma \frac{E^2}{V^2}\Bigr),\label{idB} \end{eqnarray} where $\beta=2(n-3)!(n-p-1)!$ and $\gamma=2(n-3)!/(p-1)!$. On the other hand, the Einstein equations give us \begin{eqnarray} \frac{2}{V^2} R_{ab}k^ak^b=\nabla_a \Bigl( \frac{\nabla^a V^2}{V^2} \Bigr) + \frac{1}{\alpha^2 (n-3)!} \frac{\omega^2}{V^4} =\frac{2(n-p-1)}{(p-1)!(n-2)}\frac{E^2}{V^2}+\frac{2(p-1)(n-p-1)!}{n-2}\frac{B^2}{V^2} -\frac{4}{n-2}\Lambda. \label{ein} \end{eqnarray} Then, together with Eqs. (\ref{idE}) and (\ref{idB}), this implies \begin{eqnarray} \nabla_a \Bigl( \frac{\nabla^a V^2}{V^2}+X^a \Bigr)=-\frac{4}{n-2}\Lambda, \label{surface2} \end{eqnarray} where $X^a$ is defined by \begin{eqnarray} X^a& = & \frac{(-1)^n}{\alpha^2(n-2)!}\Bigl( (p-1)U^E_{a_1 \cdots a_{n-4}}+(n-p-1)U^B_{a_1 \cdots a_{n-4}} \Bigr) \frac{\omega^{a_1 \cdots a_{n-4}a}}{V^4}\nonumber \\ & & -(-1)^{n+p}\frac{2(p-1)(n-p-1)!}{n-2} \frac{\Psi_{a_1 \cdots a_{n-p-2}}B^{a_1 \cdots a_{n-p-2}a}}{V^2} -(-1)^p \frac{2(n-p-1)}{(p-1)!(n-2)} \frac{\Phi_{a_1 \cdots a_{p-2}}E^{a_1 \cdots a_{p-2}a}}{V^2}. \end{eqnarray} By an argument similar to the four-dimensional case, we can show that the mass vanishes.
Then, using the positive mass theorem in higher dimensions \footnote{If one considers spin manifolds, the positive mass theorem is easily proven as in Witten's four-dimensional version \cite{Witten81}.}, this means that the spacetime is exactly the Minkowski or anti-deSitter spacetime, depending on the presence of a negative cosmological constant. In either case, the $p$-form fields and complex scalar fields vanish. For the asymptotically anti-deSitter case, we had to introduce the vector field $r^a$ satisfying \begin{eqnarray} \nabla_a r^a=-\frac{4}{n-2}\Lambda. \label{defr} \end{eqnarray} The existence of this $r^a$ is discussed in Appendix A. As in four dimensions, an argument which avoids introducing $r^a$ is given in Appendix B. \section{Summary and discussion} In this paper we showed that strictly stationary spacetimes with $p$-form and complex scalar fields should be the Minkowski or anti-deSitter spacetime, depending on the presence of a negative cosmological constant. Our result leaves no room for self-gravitating solutions composed of complex scalar fields in strictly stationary spacetimes. Therefore, if one wishes to explore new solutions, one has to consider set-ups which break some of the assumptions imposed here. For example, one can find a new configuration which is non-stationary but possesses a single Killing vector field \cite{Dias:2011at}. For asymptotically anti-deSitter spacetimes, we had to introduce a vector $r^a$ to show the no-go result. As shown in Appendix B, there is a way to avoid this additional treatment for the Einstein-complex scalar system. However, it is quite hard to extend this to the cases with $p$-form fields. This is left for future work. \begin{acknowledgments} TS is partially supported by Grant-in-Aid for Scientific Research from Ministry of Education, Culture, Sports, Science and Technology(MEXT) of Japan (No.~21244033). SO is supported by JSPS Grant-in-Aid for Scientific Research (No. 23-855).
SO and RS are supported by the Grant-in-Aid for the Global COE Program ``The Next Generation of Physics, Spun from Universality and Emergence'' from MEXT of Japan. The authors also thank the Yukawa Institute for Theoretical Physics at Kyoto University, where this work was initiated during the YITP-T-11-08 workshop on ``Recent advances in numerical and analytical methods for black hole dynamics''. \end{acknowledgments}
\section{Introduction} One of the simplest and most general consequences of the effective string description of quark confinement \cite{first,Nambu:1974zg,Nambu:1978bd} is the presence of measurable effects on physical observables of the gauge theory, produced by the quantum fluctuations of that string \cite{lsw,lu}. The most widely known is the L\"uscher correction to the interquark confining potential $V$ at large distance $r$ \begin{equation} V(r)=\sigma\,r+2\,\mu-(d-2)\frac\pi{24r}+O(1/r^2)\,, \label{pot} \end{equation} where $\sigma$ is the string tension and $\mu$ a self-energy term. The attractive, Coulomb-like correction is universal in the sense that it is the same for any confining gauge theory and depends only on the $d-2$ transverse oscillation modes of the string. A similar universal effect has been found in the low temperature behaviour of the string tension \cite{pa} \begin{equation} \sigma(T)=\sigma-(d-2)\frac\pi6 T^2+O(T^4)\,. \label{st} \end{equation} Both effects may be rephrased by saying that the infrared limit of the effective string is described by a two-dimensional conformal field theory (CFT) with conformal anomaly $c=d-2$. In this language, the (generalised) L\"uscher term $-\frac{c\,\pi}{24\,r}$ is the zero-point energy of a 2D system of size $r$ with Dirichlet boundary conditions, while the $-\frac{c\,\pi}{6}T^2$ term is the zero-point energy density in a cylinder (i.e. the string world-sheet of the Polyakov correlator) of period $L=1/T$ \cite{bcn}. In this paper we propose a method to extend these results to a more general class of confining objects of $\mathop{\rm SU}(N)$ gauge theories, the k-strings, describing the infrared behaviour of the flux tube joining sources in representations with $N-$ality $k$.
No matter what representation $\mathscr{R}$ is chosen, the stable string tension $\sigma_k$ depends only on the $N-$ality $k$ of $\mathscr{R}$, i.e. on the number (modulo $N$) of copies of the fundamental representation needed to build $\mathscr{R}$ by tensor product, since the sources may be screened by gluons. As a consequence, at sufficiently large distances, the heavier strings find it energetically favourable to decay into the string of smallest string tension, called the k-string. The spectrum of k-string tensions has been extensively studied in recent years, in the continuum \cite{ds}--\cite{jr} as well as on the lattice \cite{lt1}--\cite{ddpv}. So far, in numerical analyses one typically measured the temperature-dependent k-string tensions $\sigma_k(T)$ through the Polyakov correlators and then extrapolated to $T=0$ using (\ref{st}), hence assuming a free bosonic string behaviour. Recently this assumption has been questioned by a numerical experiment. It showed that in a 3D $\mathbb{Z}_4$ gauge theory, although the 1-string fitted the free string formulae perfectly, with much higher precision than in the $\mathop{\rm SU}(N)$ case, the 2-string failed to meet the free string expectations \cite{z4kstr_pietro}. One could object that there is no compelling reason for a 2-string of a $\mathbb{Z}_4$ gauge system to behave like a 2-string of an $\mathop{\rm SU}(N)$ gauge system; a k-string can be seen as a bound state of $k$ 1-strings, and the binding force would presumably depend on the specific properties of the gauge system. On the other hand, from a theoretical point of view there are good reasons to expect values of $c$ larger than $d-2$. In fact the conformal anomaly can be thought of as counting the number of degrees of freedom of the k-string. From this point of view, the relevant degrees of freedom are not only the transverse displacements but also the splitting of the k-string into its constituent strings.
If the mutual interactions were negligible, each constituent string could vibrate independently, so we would have $c=k\,(d-2)$. Thus we expect that $c$ can vary in the range \begin{equation} d-2 \le\, c\,\le k\,(d-2)\,. \end{equation} An unexpected insight into the actual value of $c$ comes when considering the infrared properties of the N-point Polyakov correlators related to the baryon vertex of $\mathop{\rm SU}(N)$ \begin{equation} \langle\, P_f(\vec{r}_1)\,P_f(\vec{r}_2)\dots P_f(\vec{r}_N)\,\rangle_T\,, \end{equation} where $P_f(\vec{r}_i)$ is the Polyakov line in the fundamental representation identified by $d-1$ spatial coordinates $\vec{r}_i$ and directed along the periodic temporal direction of size $L=1/T$. If all the distances $\vert\vec{r}_i-\vec{r}_j\vert$ are much larger than any other relevant scale, this correlator is expected to obey a simple scaling property. When combining this fact with the circumstance that, depending on the positions $\vec{r}_i$ of the sources, some strings of the baryon vertex may coalesce into k-strings \cite{Hartnoll:2004yr,bary}, one obtains the geometric constraint \begin{equation} \sigma_k(T)=\sigma_k-(d-2)\frac{\pi\,\sigma_k}{6\,\sigma}\,T^2+O(T^3)\,, \label{main} \end{equation} which is the main result of this paper. We check this formula in a 3D $\mathbb{Z}_4$ gauge model where, thanks to duality, very efficient simulation techniques are available, yielding high precision results which give fairly convincing evidence for the scaling law (\ref{main}). The contents of this paper are as follows. In the next Section we present in detail the above-mentioned scaling argument, while in the following Section we describe a lattice calculation on a three-dimensional $\mathbb{Z}_4$ gauge theory where, combining duality transformations with highly efficient simulation techniques, it is possible to confirm that the stable 2-string nicely matches Eq. (\ref{main}). We finish with a discussion of our results and some of their implications.
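The content of Eq. (\ref{main}) can be illustrated with a short numerical sketch (the values of $d$, $\sigma$ and $\sigma_k$ below are arbitrary examples, not measured ones): with the conformal anomaly $c_k=(d-2)\sigma_k/\sigma$, the truncated ratio $\sigma_k(T)/\sigma(T)$ is exactly $T$-independent, while the naive free-string choice $c_k=d-2$ produces an $O(T^2)$ drift.

```python
import math

# Hypothetical example values (not fitted data).
d, sigma, sigma_k = 3, 1.0, 1.6

def ratio(T, c_k):
    # Ratio of the expansions sigma_k(T) and sigma(T), truncated at O(T^2).
    return (sigma_k - c_k * math.pi * T**2 / 6) / \
           (sigma - (d - 2) * math.pi * T**2 / 6)

c_scaling = (d - 2) * sigma_k / sigma   # the value singled out by Eq. (main)
c_free = d - 2                          # naive free-string guess

T = 1e-3
dev_scaling = abs(ratio(T, c_scaling) - sigma_k / sigma)
dev_free = abs(ratio(T, c_free) - sigma_k / sigma)
print(dev_scaling, dev_free)  # the first deviation vanishes identically
```

The cancellation is algebraic: with $c_k=(d-2)\sigma_k/\sigma$ both numerator and denominator carry the same relative $T^2$ correction, so the ratio of tensions, and hence the junction geometry it controls, does not move with $T$.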
\section{Scaling form of the Polyakov correlators} The main role of the string picture of confinement is to fix the functional form of the vacuum expectation value of gauge invariant operators in the infrared limit. It predicts two different asymptotic behaviours of the correlation function of a pair of Polyakov loops $\langle P_f(\vec{r}_1)\,P^\dagger_f(\vec{r}_2)\rangle_T$, when both $r\equiv\vert\vec{r}_1-\vec{r}_2\vert$ and $L\equiv1/T$ are large, depending on the value of the ratio $2r/L$. Using (\ref{pot}) and (\ref{st}) we can write \begin{equation} -\frac{1}{L}\log\langle P_f(\vec{r}_1)\,P^\dagger_f(\vec{r}_2)\rangle_T= V(r)+ O(e^{-\pi\,L/r})~,~~2\,r<\,L\,, \end{equation} \begin{equation} -\frac{1}{L}\log\langle P_f(\vec{r}_1)\,P^\dagger_f(\vec{r}_2)\rangle_T= \sigma(T)\,r+2\mu+\frac{1}{2L}\log{\frac{2r}{L}}+O(e^{-4\pi r/L})~,~~2\,r>\,L\,, \label{logppst} \end{equation} \noindent where $V(r)$ and $\sigma(T)$ are given by (\ref{pot}) and (\ref{st}). There are strong indications that the $r^{-3}$ and $T^3$ terms are zero and the $r^{-4}$ and $T^4$ terms are universal (see for instance the discussion in \cite{jk} and references quoted therein). For our purposes we need only the first universal term, which is directly related to the central charge of the CFT describing the IR limit of the underlying confining string. In this approximation the Polyakov loop correlation functions should decay at large $r$ and fixed $T$ as \begin{equation} \langle P_f(\vec{r}_1)\,P^\dagger_f(\vec{r}_2)\rangle_T \stackrel{\sim}{\propto} \exp\left[-\sigma(T)\,r\,L-2\mu\,L\right]\,. \label{ppst} \end{equation} Similar expansions are expected to be valid also for Polyakov correlators describing more specific features of $\mathop{\rm SU}(N)$ gauge theory, like those involving baryonic vertices.
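The extraction procedure implied by Eq. (\ref{logppst}) — reading off $\sigma(T)$ from the large-$r$ slope and then extrapolating to $T=0$ through the $T^2$ law (\ref{st}) — can be sketched with synthetic numbers (no real data; the chosen values are arbitrary):

```python
import math

# Synthetic illustration: "measured" sigma(T) values obeying the
# free-string law exactly; a linear fit in T^2 recovers sigma and c.
d, sigma_true = 3, 0.045
Ts = [0.05, 0.08, 0.11, 0.14]
sig = [sigma_true - (d - 2) * math.pi * T**2 / 6 for T in Ts]

# Closed-form least-squares fit of sigma(T) = a + b * T^2.
xs = [T**2 for T in Ts]
n = len(xs)
mx, my = sum(xs) / n, sum(sig) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, sig)) \
    / sum((x - mx)**2 for x in xs)
a = my - b * mx

c_fit = -6 * b / math.pi   # central charge read off from the slope
print(a, c_fit)  # ~ (0.045, 1.0)
```

Since the synthetic points obey Eq. (\ref{st}) exactly, the fit returns the input string tension and $c=d-2$; with real data the same fit is how a measured central charge would be read off.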
\label{Figure:1}}} A baryon vertex is a gauge-invariant coupling of $N$ multiplets in the fundamental representation $f$ which gives rise to configurations of finite energy, or baryonic potential, with $N$ external sources. At finite temperature $T$ these sources are the Polyakov lines $P_f(\vec{r}_i)$. Assume for a moment $N=3$. When the mutual distances $\vert\vec{r}_i-\vec{r}_j\vert$ $(i,j=1,2,3)$ are all large, three confining strings of chromo-electric flux form, which, starting from the three sources, meet a common junction; their world-sheet forms a three-bladed surface with a common intersection \cite{deForcrand:2005vv,Bissey:2006bz} (see Fig.\ref{Figure:1}). The total area $\mathbb{A}$ of this world-sheet is $L\,\ell(\vec{r}_1,\vec{r}_2 ,\vec{r}_3 )$ where $\ell$ is the total length of the string. The stable configuration (hence the position of the common junction) is the one minimising $\mathbb{A}$. The balance of tensions implies angles of $\frac{2\pi}3$ between the blades. The complete functional form of the 3-point correlator is substantially unknown. Nonetheless, the static baryon potential, defined as \begin{equation} V=-\lim_{T\to\infty}T\,\log\langle\, P_f(\vec{r}_1)\,P_f(\vec{r}_2)\, P_f(\vec{r}_3)\,\rangle_T\,, \end{equation} has a simple form in the IR limit: \begin{equation} V=\sigma\,\ell(\vec{r}_1,\vec{r}_2 ,\vec{r}_3)+3\mu+\dots \end{equation} The universal $1/r$ corrections have been calculated in \cite{Jahn:2003uz}. In the IR limit at finite temperature, i.e. 
$\vert\vec{r}_i-\vec{r}_j\vert\gg L ~ \forall i\not=j$, we assume, in analogy with (\ref{ppst}), \begin{equation} \langle\, P_f(\vec{r}_1)\,P_f(\vec{r}_2)\dots P_f(\vec{r}_N)\,\rangle_T =e^{-F_N} \sim\exp\left[-\sigma(T)\,L\,\ell(\vec{r}_1,\vec{r}_2 ,\dots,\vec{r}_N) -N\mu\,L\right]\,, \label{pnst} \end{equation} or, more explicitly, \begin{equation} F_N(\ell,L)\sim\sigma\,L\ell-(d-2)\frac{\pi\,\ell}{6\,L}+N\mu\,L~, \, ~(\vert\vec{r}_i-\vec{r}_j\vert\gg L ~ \forall i\not=j)\,, \label{fb} \end{equation} where the coefficient of the $\ell/L$ term specifies that in this IR limit the behaviour of the baryon flux distribution is described by a CFT with conformal anomaly $d-2$ on the string world-sheet singled out by the position of the external sources. The above considerations are completely standard and nothing new happens so far. \FIGURE{ \includegraphics[width=7cm]{ckfig/tetrabary.eps} \caption{Confining string configuration of an SU(4) static baryon. The thick line is a 2-string. \label{Figure:2}}} \noindent The surprise comes when considering the latter expression in the case $N>3$. Notice that, depending on the positions $\vec{r}_i$ of the sources, some fundamental strings of the baryon vertex may coalesce into k-strings \cite{Hartnoll:2004yr,bary}. As a consequence, the shape of the world-sheet changes in order to balance the string tensions, and $\ell$ becomes a weighted sum, where a k-string of length $\lambda$ contributes a term $\lambda\,\frac{\sigma_k(T)}{\sigma(T)}\,$, where $\sigma(T)$ is given by (\ref{st}), while \begin{equation} \sigma_k(T)=\sigma_k-c_k\,\frac\pi6 T^2+O(T^3)\,, \label{skt} \end{equation} where $c_k$ is the conformal anomaly of the k-string. When the string is in the fundamental representation the $T^3$ term in (\ref{st}) is missing \cite{lw4,dr,hdm}; however, we cannot exclude it for k-strings with $k>1$.
The string tension ratio is \begin{equation} \frac{\sigma_k(T)}{\sigma(T)}=\frac{\sigma_k}{\sigma} +\frac{\sigma_k}{\sigma} \left(\frac{d-2}\sigma-\frac{c_k}{\sigma_k}\right) \frac\pi6 T^2+O(T^3)\,; \label{rskt} \end{equation} if the coefficient of the $T^2$ term does not vanish the asymptotic functional form (\ref{fb}) of the free energy gets modified. As a simple, illustrative example, let us consider the chromo-electric flux distribution of a 4D $SU(4)$ gauge system generated by four external quarks placed at the vertices of a regular tetrahedron (see Fig.~\ref{Figure:2}). Preparing the external sources in a symmetric configuration does not necessarily imply that the distribution of the gauge flux preserves the tetrahedral symmetry. In fact, the formation of a 2-string breaks this symmetry (see Fig.\ref{Figure:2}). The actual symmetry breaking or restoration depends on the cost in free energy of the configuration, of course. It is easy to show that, when $\frac{\sigma_2}{\sigma}<\frac2{\sqrt{3}}$, the tetrahedral symmetry is spontaneously broken \cite{bary} and the length $\lambda$ of the 2-string, which is a function of the ratio $\sigma_2/\sigma$, is given by \begin{equation} \lambda=\frac r{\sqrt{2}}-\frac{r\,\sigma_2}{\sqrt{4\,\sigma^2- \sigma_2^2}}\,, \label{la} \end{equation} where $r$ is the edge length. The free energy $F_4$ of the four-quark system is \begin{equation} F_4=\sigma(T)\,L\,\ell(T)+4\mu L=\sigma(T)\,L\,\left(2\,r\sqrt{1- \left(\frac{\sigma_2(T)}{2\sigma(T)}\right)^2}+\frac{r}{\sqrt{2}}\, \frac{\sigma_2(T)}{\sigma(T)}\right)+4\mu\, L\,. \end{equation} Now the total length $\ell$ of the string may depend on $T$ through the ratio $\sigma_2(T)/\sigma(T)$. Expanding in $T=1/L$ as in (\ref{rskt}) we get \begin{equation} F_4=\sigma\,L\,\ell(0)-\frac{\pi}6\left[(d-2)\frac{\ell(0)}L- {\sigma_2}\left(\frac{d-2}\sigma-\frac{c_2}{\sigma_2}\right) \frac\lambda{L}\right] +4\mu\,L+O(1/L^2)\,. 
\end{equation} Clearly the term proportional to $\lambda$ violates the expected asymptotic form of the free energy (\ref{fb}), unless $c_2=(d-2)\,\sigma_2/\sigma\,$. \FIGURE{\epsfig{file=ckfig/sangles.eps,width=7 cm}\caption{The balance of the string tensions. \label{Figure:3}}} More generally, the baryonic free energy keeps the expected asymptotic form (\ref{fb}) only if the world-sheet shape does not change while varying $T$. Now in a generic string configuration contributing to $\mathop{\rm SU}(N)$ baryon potential, the angles at a junction of three arbitrary k-strings are given by (see Fig.\ref{Figure:3}) \begin{equation} \cos\theta_i=\frac{\sigma_j(T)^2+\sigma_k(T)^2-\sigma_i(T)^2} {2\,\sigma_j(T)\,\sigma_k(T)} \end{equation} and others obtained by cyclic permutations of the indices $i,j,k$. As a consequence, these angles are kept fixed only if all the string tension ratios are constant up to $T^3$ terms, i.e. only if \begin{equation} \frac{c_i}{\sigma_i}=\frac{c_j}{\sigma_j}=\frac{c_k}{\sigma_k}= \frac{(d-2)}{\sigma} \end{equation} which leads directly to \begin{equation} \sigma_k(T)=\sigma_k-(d-2)\frac{\pi\,\sigma_k}{6\,\sigma}\,T^2+O(T^3) \end{equation} as anticipated in the Introduction. \section{The 3D $\mathbb{Z}_4$ gauge model and its dual} The above general argument on the finite temperature corrections of the k-string tensions suggests a different behaviour with respect to the usual assumption that these corrections are those produced by a free bosonic string. Since the comparison with theoretical predictions of k-string tensions is sensitive to this behaviour, it is important to check its validity. In this Section we address such a question with a lattice calculation in a 3D $\mathbb{Z}_4$ gauge theory which is perhaps the simplest gauge system where there is a stable 2-string. We work on a periodic cubic lattice $L\times L\times L_\tau$. 
The degrees of freedom are the fourth roots of the identity $\zeta_l$ $~(\zeta^4=1)$, defined on the links $l$ of the lattice. The partition function is \begin{equation} Z(\beta_f,\beta_{ff})=\prod_l\sum_{\zeta_l=\pm1,\pm i}e^{\sum_p (\beta_f{\cal U}_p+\beta_{ff}{\cal U}^2_p/2+c.c.)}\,, \end{equation} where the sum is extended to all plaquettes $p$ of the lattice and ${\cal U}_p=\prod_{l\in p}\zeta_l$; $\beta_f$ and $\beta_{ff}$ are two coupling constants. When they vary in a suitable range the system belongs to a confining phase. In analogy with the $\mathop{\rm SU}(N)$ case we say that ${\cal U}_p$ is in the fundamental ($f$) representation while ${\cal U}_p^2$ lies in the double-fundamental ($ff$) representation. From a computational point of view it is convenient to recast $Z$ as the partition function of two coupled $\mathbb{Z}_2$ gauge systems \begin{equation} Z(\beta_f,\beta_{ff}) =\prod_l \sum_{ \{ U_l = \pm 1 , V_l = \pm 1 \} } e^{\sum_p\beta_f(U_p+V_p)+\beta_{ff} U_p V_p}\,, ~\left(U_p=\prod_{l\in p}U_l\,,V_p=\prod_{l\in p}V_l\right)\,. \label{cz2} \end{equation} The external sources generating the 1-string and the 2-string are given respectively by the two products \begin{equation} P_f(\vec{r})\equiv\prod_{l\in\gamma_{\vec{r}}}U_l~{\rm or}~ \prod_{l\in\gamma_{\vec{r}}}V_l~;~~ P_{ff}(\vec{r})\equiv \prod_{l\in\gamma_{\vec{r}}}U_l\,V_l\,, \label{pfpff} \end{equation} where $\gamma_{\vec{r}}$ is a closed path in the lattice which winds once around the temporal direction $L_\tau$ and passes through $\vec{r}$. This model, as any three-dimensional abelian gauge system, admits a spin model as its dual. 
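The recasting in Eq.~(\ref{cz2}) can be checked numerically plaquette by plaquette. The sketch below uses the identification $\zeta=[(U+V)+i(U-V)]/2$ (one convenient choice of ours, not quoted in the text) and arbitrary couplings:

```python
import itertools, math

beta_f, beta_ff = 0.3, 0.15          # arbitrary illustrative couplings

def z4_weight(zeta):
    """Single-plaquette Boltzmann weight of the Z4 action."""
    expo = beta_f * (zeta + zeta.conjugate()) \
         + beta_ff * (zeta**2 + zeta.conjugate()**2) / 2
    return math.exp(expo.real)

def z2z2_weight(U, V):
    """The same weight in the coupled-Ising form of Eq. (cz2)."""
    return math.exp(beta_f * (U + V) + beta_ff * U * V)

# zeta = ((U+V) + i(U-V))/2 maps the four (U,V) pairs onto the fourth
# roots of unity and reproduces the weight on every plaquette state
for U, V in itertools.product((1, -1), repeat=2):
    zeta = complex(U + V, U - V) / 2
    assert abs(z4_weight(zeta) - z2z2_weight(U, V)) < 1e-12
```

The four assertions pass because $\zeta+\bar\zeta=U+V$ and $(\zeta^2+\bar\zeta^2)/2=UV$ under this identification.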
We have recently shown \cite{Giudice:2006hw} that this $\mathbb{Z}_4$ gauge model is dual to a spin model with global $\mathbb{Z}_4$ symmetry which can be written as a symmetric Ashkin-Teller (AT) model, i.e.~two coupled Ising models defined by the two-parameter action \begin{equation} S_{AT} = -\sum_{\link{xy}} \left[ \beta (\sigma_x \sigma_y + \tau_x \tau_y) + \alpha (\sigma_x \sigma_y \tau_x \tau_y) \right]\,, \end{equation} where $\sigma_x$ and $\tau_x$ are Ising variables ($\sigma,\tau=\pm 1$) associated to the site $x$ and the sum is over all the links $\link{xy}$ of the dual cubic lattice. The two couplings $\alpha$ and $\beta$ are related to the gauge couplings $\beta_f$ and $\beta_{ff}$ by \cite{Giudice:2006hw} \begin{eqnarray} \alpha & = & \frac{1}{4} \log \left( \frac{(\coth\beta_f+\tanh\beta_f\tanh\beta_{ff}) (\coth\beta_f+\tanh\beta_f\coth\beta_{ff})} {2+\tanh\beta_{ff}+\coth\beta_{ff}} \right) \,, \\ \beta & = & \frac{1}{4} \log \left( \frac{1+\tanh^2\beta_f\tanh\beta_{ff}} {\tanh^2\beta_f+\tanh\beta_{ff}} \right) \, . \end{eqnarray} The duality transformation maps any physical observable of the gauge theory into a corresponding observable of the spin model. In particular it is well known that the Wilson loops are related to suitable \emph{flips} of the couplings of the spin model. We have found the identity \begin{equation} \avg{V_P}_{gauge} = \avg{e^{-2(\beta+\alpha\tau_x\tau_y) \sigma_x\sigma_y}}_{AT} \end{equation} generalising the known dual identity of the Ising model. Similarly, flipping the signs of both spins $\sigma_x$ and $\tau_x$ we get the plaquette variable in the $k=2$ representation as $\avg{U_P V_P}$. Combining together a suitable set of plaquettes we may build up any Wilson loop or Polyakov-Polyakov correlator with $k=1$ or $k=2$. \subsection{Monte Carlo simulations} Our interest in writing this model in terms of Ising variables is twofold. 
First, for the simulations we can easily apply a very efficient non-local algorithm \cite{wd}, basically similar to the standard Fortuin-Kasteleyn cluster method for Ising systems: each update step is composed of an update of the $\sigma$ variables using the current values of the $\tau$'s as a background (thus locally changing the coupling from $\beta$ to $\beta \pm \alpha$ according to the value of $\tau_x \tau_y$ on the link $\link{xy}$), followed by an update of the $\tau$'s using the $\sigma$ values as background. \TABULAR{cc|c|c|}{\cline{3-4} &&$\alpha=0.05$&$\alpha=0.07$\\ &&$\beta=0.2070$&$\beta=0.1975$\\ \hline \multicolumn{2}{|c|}{$\sigma\equiv\sigma_f$}&0.02085(10)&0.0157(1)\\ \multicolumn{2}{|c|}{ $\sigma_2\equiv\sigma_{ff}$}&0.0328(5)&0.0210(5)\\ \multicolumn{2}{|c|}{$\sigma_2-\sigma$}&0.01195(51)&0.0053(5) \\ \hline \multicolumn{1}{|c|}{~}&$N_\tau=9$& 0.00864(6)&--\\ \multicolumn{1}{|c|}{~}&$N_\tau=10$& 0.00951(8)&0.003700(30)\\ \multicolumn{1}{|c|}{~}&$N_\tau=11$& 0.01010(10)&0.004220(35)\\ \multicolumn{1}{|c|}{~}&$N_\tau=12$& 0.01050(15)&0.004550(35)\\ \multicolumn{1}{|c|}{$\Delta\,\sigma(T)$}&$N_\tau=13$& 0.01080(20)&0.004750(40)\\ \multicolumn{1}{|c|}{~}&$N_\tau=14$& 0.01100(25)&0.004910(40)\\ \multicolumn{1}{|c|}{~}&$N_\tau=15$& --&0.005020(45)\\ \multicolumn{1}{|c|}{~}&$N_\tau=16$& --&0.005110(50)\\ \hline \multicolumn{2}{|c|}{ $\Delta\,\sigma(0)$}&0.01271(2)&0.00591(1)\\ \hline} {String tension differences $\Delta\sigma(T)$ resulting from fits to Eq.~(\ref{expo}). $\Delta\,\sigma(0)$ is evaluated from a fit to Eq.~(\ref{fit}) using the $\Delta\sigma(T)$ data. All quantities are expressed in lattice spacing units.\label{Table:1}} Secondly, by flipping a suitable set of couplings, we can insert any Wilson loop or Polyakov correlator directly into the Boltzmann factor, producing results with very high precision.
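In two dimensions, this alternating cluster update can be sketched in a few lines (a simplified Swendsen-Wang-style illustration of ours, not the production code; we assume $\beta>|\alpha|$ so that every effective coupling $\beta\pm\alpha$ is ferromagnetic):

```python
import math, random

def update_sigma(sigma, tau, beta, alpha, rng):
    """One cluster update of the sigma spins with the tau spins frozen as a
    background.  The link coupling is beta + alpha * tau_x * tau_y, assumed
    positive.  2D periodic lattice for brevity; the 3D case is analogous."""
    L = len(sigma)
    parent = list(range(L * L))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # activate Fortuin-Kasteleyn bonds between aligned neighbours
    for x in range(L):
        for y in range(L):
            for dx, dy in ((1, 0), (0, 1)):
                xn, yn = (x + dx) % L, (y + dy) % L
                if sigma[x][y] == sigma[xn][yn]:
                    b_eff = beta + alpha * tau[x][y] * tau[xn][yn]
                    if rng.random() < 1.0 - math.exp(-2.0 * b_eff):
                        parent[find(x * L + y)] = find(xn * L + yn)

    # flip each FK cluster with probability 1/2
    flip = {}
    for x in range(L):
        for y in range(L):
            root = find(x * L + y)
            if root not in flip:
                flip[root] = rng.random() < 0.5
            if flip[root]:
                sigma[x][y] = -sigma[x][y]

rng = random.Random(0)
L = 8
sigma = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
tau = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
for _ in range(10):      # one full step: update sigma, then tau, symmetrically
    update_sigma(sigma, tau, 0.2, 0.05, rng)
    update_sigma(tau, sigma, 0.2, 0.05, rng)
```

By the symmetry of the action, the same routine updates the $\tau$'s with the $\sigma$'s as background, as described above.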
If, for instance, we generate a sequence of Monte Carlo configurations where the $\sigma_x$ couplings of all the links crossing the cylindrical surface bounded by the Polyakov lines $\gamma_{\vec{r}_1}$ and $\gamma_{\vec{r}_2}$ are flipped, then the average of any observable ${\cal Q}$ is actually the quantity \begin{equation} \frac{\langle\,{\cal Q}\,P_f(\vec{r}_1)\,P^\dagger_f(\vec{r}_2)\rangle_T}{ \langle\,P_f(\vec{r}_1)\,P^\dagger_f(\vec{r}_2)\rangle_T}\,. \end{equation} In our numerical experiment we choose ${\cal Q}= P_f(\vec{r}_1)\, P^\dagger_f(\vec{r}_2)$; therefore the corresponding averages directly yield, according to (\ref{pfpff}), the ratio \begin{equation} R(\vert\vec{r}_1-\vec{r}_2\vert,T)=\frac{\langle\,P_{ff}(\vec{r}_1) \,P_{ff}(\vec{r}_2)\rangle_T}{ \langle\,P_f(\vec{r}_1)\,P^\dagger_f(\vec{r}_2)\rangle_T}\,. \end{equation} We estimated this vacuum expectation value with a very powerful method based on the linking properties of the FK clusters \cite{gv}: for each FK configuration generated by the above-mentioned algorithm one looks for paths in the clusters linked with the loops $\gamma_{\vec{r}_1}$ and $\gamma_{\vec{r}_2}$. If there is no path of this kind we set ${\cal Q}=1$, otherwise we set ${\cal Q}=0$. The algorithm we used to determine the linking properties is described in~\cite{ziff}. This method leads to an estimate of $R(\vert\vec{r}_1-\vec{r}_2\vert,T)$ with reduced variance with respect to conventional numerical estimates. \subsection*{Results} We performed our Monte Carlo simulations on the AT model at two different points of the confining region, for which we had previously measured the string tensions \cite{z4kstr_pietro} (see Table~\ref{Table:1}). We worked on a cubic periodic lattice of size $128\times128\times N_\tau$, with $N_\tau$ chosen in such a way that the temperature of our simulations ranged from $T/T_c\simeq0.5$ to $T/T_c\simeq0.8$, and we took averages over $10^6$ configurations at each point.
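The $\Delta\sigma(0)$ entries of Table~\ref{Table:1} follow from a one-parameter least-squares fit of the tabulated $\Delta\sigma(T)$ values to Eq.~(\ref{fit}); a sketch for the first data set (our code, with the $N_\tau=9$ point, closest to $T_c$, dropped as described below):

```python
import math

sigma = 0.02085                                   # sigma_f at alpha=0.05 (Table 1)
dsigma = {10: 0.00951, 11: 0.01010, 12: 0.01050,  # Delta sigma(T) vs N_tau;
          13: 0.01080, 14: 0.01100}               # N_tau = 9 (closest to T_c) dropped

# model: Delta sigma(T) = Delta sigma(0) * (1 - pi*T^2/(6*sigma)), T = 1/N_tau;
# linear in the single parameter, so least squares has a closed form
f = {n: 1.0 - math.pi / (6.0 * sigma * n**2) for n in dsigma}
dsigma0 = sum(dsigma[n] * f[n] for n in dsigma) / sum(f[n]**2 for n in dsigma)
print(round(dsigma0, 5))                          # close to the quoted 0.01271(2)
```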
\FIGURE{\epsfig{file=ckfig/graf-corr-tot.eps,width=10 cm}\caption{Plot of $G(r)_{ff}/G(r)_{f}$ on a log scale. \label{Figure:ratiocorr}}} The large distance behaviour of the data is well described by a pure exponential (see Fig.~\ref{Figure:ratiocorr}) \FIGURE {\epsfig{file=ckfig/c2R.eps,angle=270,width=11 cm} \caption{The value of $\Delta\sigma$ fitted to Eq.~(\ref{expo}) as a function of the minimal distance $r_{min}$ included in the fit, for different values of $N_\tau$. The large plateaux show the stability of the fit. \label{Figure:4}}} \begin{equation} G(r)_{ff}/G(r)_{f}=\frac{\langle\,P_{ff}(\vec{r}_1)\,P_{ff}(\vec{r}_2)\rangle_T} {\langle\,P_f(\vec{r}_1)\, P^\dagger_f(\vec{r}_2)\rangle_T}\propto e^{-\Delta\sigma\,r\,N_\tau}\,, \label{expo} \end{equation} with $\Delta\,\sigma=\sigma_{ff}-\sigma_f$. Comparison with (\ref{logppst}) shows that the logarithmic term, which is a potential source of systematic errors when neglected in Polyakov correlators, is cancelled here in the ratio. Since (\ref{expo}) is an asymptotic expression, valid in the IR limit, we fitted the data to the exponential by progressively discarding the short-distance points, taking all the values in the range $r_{min}\leq r\leq r_{max}=60\,a$ with $r_{min}$ varying from 15 to 40 lattice spacings $a$. The resulting value of $\Delta\sigma$ turns out to be very stable, as Fig.~\ref{Figure:4} shows. The whole set of values of $\Delta\,\sigma(T)$ as a function of the inverse temperature $N_\tau=1/T$ is reported in Table~\ref{Table:1}. According to Eq.~(\ref{main}), in the low temperature limit we expect the asymptotic behaviour \begin{equation} \Delta\,\sigma(T)=\Delta\,\sigma(0)\left(1-\frac\pi{6\,\sigma}T^2\right) + O(T^3) \,. \label{fit} \end{equation} Assuming for $\sigma$ the values estimated in \cite{z4kstr_pietro}, we used $\Delta\sigma(0)$ as the only fitting parameter. Neglecting one or two points too close to $T_c$ we got very good fits to (\ref{fit}) as shown in Figs.
\ref{Figure:5} and \ref{Figure:6}. The fitting parameter $\Delta\,\sigma(0)$ as well as the estimates of $\Delta\,\sigma(T)$ are reported in Table~\ref{Table:1}. Note that $\Delta\,\sigma(0)$ agrees with the difference of the previous estimates \cite{z4kstr_pietro}, also reported in Table~\ref{Table:1}; however, the error is reduced by a factor of 25 (first data set) and even of 50 (second data set). This gain in precision is due to the fact that $\sigma_2$ was previously evaluated from a fit to (\ref{ppst}), even taking into account the next-to-leading-order terms, which was rather poor because the 2-string does not behave as a free bosonic string \cite{z4kstr_pietro}. By contrast, our fits to (\ref{expo}) and (\ref{fit}) are very stable and the corresponding $\chi^2/{\rm d.o.f.}$ is of order 1 or less. \FIGURE {\epsfig{file=ckfig/c1.eps,angle=270,width=10 cm} \caption{Values of $\Delta\sigma(T)$ from the first set of data of Table~\ref{Table:1} versus $T^2=1/N^2_\tau$. The solid line is a one-parameter fit to Eq.~(\ref{fit}). \label{Figure:5}}} \FIGURE {\epsfig{file=ckfig/c2.eps,angle=270,width=10 cm} \caption{Same as Fig.~\ref{Figure:5}, but for the second set of data of Table~\ref{Table:1}. \label{Figure:6}}} \section{Discussion} Our calculations in this paper were in two parts. In the first part we investigated the string nature of long flux tubes generated in an $\mathop{\rm SU}(N)$ gauge theory by external sources in representations with $N-$ality $k>1$. In a previous work \cite{z4kstr_pietro} we were led to the conclusion that these k-strings are not adequately described by the free bosonic string picture. In this paper we argued that the central charge of the effective k-string theory is not simply $d-2$, as in the free bosonic string, but $c_k=(d-2)\frac{\sigma_k}{\sigma}$.
This simple recipe was a straightforward consequence of demanding that the asymptotic functional form of Polyakov correlators associated to the baryon vertex should not change when varying the mutual positions of the Polyakov lines, the reason being that certain positions create k-strings inside the baryon vertex which would modify the functional form of the correlator unless $\frac{\sigma_k(T)}{\sigma(T)}={\rm constant}+O(T^3)$. This in turn fixes unambiguously the value of $c_k$. Although the geometric derivation is quite general and the resulting expression for $c_k$ is appealing for its simplicity, it would be very important to find some independent quantum argument in support of it. The second part of the paper dealt with 2-strings in a 3D $\mathbb{Z}_4$ gauge model, and in particular with the difference $\Delta\sigma(T)=\sigma_2(T)-\sigma(T)$ of the string tensions as a function of the temperature. Combining three essential ingredients, namely the duality transformation, an efficient non-local cluster algorithm, and a choice of flipped links which allows us to directly measure the ratio of Polyakov correlators belonging to different representations, we obtained values of $\Delta\sigma(T)$ with unprecedented precision, which nicely agree with our general formula (\ref{main}). There is, however, much scope for improving these calculations: as Figs.~\ref{Figure:5} and \ref{Figure:6} show, with little more effort it would be possible to also evaluate the corrections of order $T^3$ and $T^4$, which in the case of the fundamental string are expected to be universal. \acknowledgments We are grateful to Michele Caselle, Paolo Grinza and Ettore Vicari for useful discussions.
\section*{Abstract} \paragraph*{} Genetic and pharmacological perturbation experiments, such as deleting a gene and monitoring gene expression responses, are powerful tools for studying cellular signal transduction pathways. However, it remains a challenge to automatically derive knowledge of a cellular signaling system at a conceptual level from systematic perturbation-response data. In this study, we explored a framework that unifies knowledge mining and data mining approaches towards this goal. The framework consists of the following automated processes: 1) applying an ontology-driven knowledge mining approach to identify functional modules among the genes responding to a perturbation, in order to reveal potential signals affected by the perturbation; 2) applying a graph-based data mining approach to search for perturbations that affect a common signal with respect to a functional module; and 3) revealing the architecture of the signaling system by organizing signaling units into a hierarchy based on their relationships. Applying this framework to a compendium of yeast perturbation-response data, we successfully recovered many well-known signal transduction pathways; in addition, our analysis led to many hypotheses regarding the yeast signal transduction system; finally, our analysis automatically organized the perturbed genes into a graph reflecting the architecture of the yeast signaling system. Importantly, this framework transformed molecular findings from a gene level to a conceptual level, which can readily be translated into computable knowledge in the form of rules regarding the yeast signaling system, such as ``if genes involved in MAPK signaling are perturbed, genes involved in pheromone responses will be differentially expressed''.
\newpage{} \section*{Introduction} Model organisms, such as \textit{Saccharomyces cerevisiae} and \textit{Drosophila melanogaster}, are powerful systems to study cellular signal transduction, because they are amenable to systematic genetic and pharmacological perturbations, enabling biologists to infer whether a gene is involved in a signal transduction pathway through studying perturbation-response data. The premise for elucidating signal transduction pathways from systematic perturbation experiments is that, if perturbation of a set of genes consistently causes a common cellular response, e.g., a phenotype presented as the differential expression of a module of genes, the perturbed genes are likely the members (or modulators) of the signal transduction pathway that leads to the phenotype. In this study, we refer to a \textit{signal} from an information theory \citep{Cover2006} point of view, in which a signal is a latent variable whose state contains information with respect to another variable, e.g., the expression state of a gene module or the state of another signal. From the same viewpoint, a signaling system consists of a set of latent variables connected as a network, in which an edge exists between a pair of signals if the state of one signal affects that of the other, i.e., information can be transmitted between the signals, and the relay of signals along the paths in the network enables the system to encode complex information. From a cell biology viewpoint, a signal transduction pathway consists of a collection of signaling molecules that detect and transmit a signal that has a physical or chemical form, e.g., the presence of pheromone in the environment. In such a system, a signal is encoded as a change in the state of a signaling molecule, often manifested as a change in the structural conformation of a protein, chemical modification of a signaling molecule, or a change in the concentration of a signaling molecule.
While it would be ideal to find a one-to-one mapping between the signaling molecules in cells and the signals in the information theory framework, such a mapping can be difficult to obtain and too complex to represent. Representing cellular signaling systems within the abstract information-theory framework provides the following advantages: 1) it enables us to use latent variables to represent the state of yet unknown signaling molecules; 2) it allows us to represent the biological signals encoded by a group of signaling molecules as a single-bit signal, if the signals encoded by these molecules convey a common piece of information with respect to other variables. We refer to such a group of signaling molecules as a \textit{signaling unit}. The following example illustrates the parallelism between biological entities and their counterparts in a computational model. A pheromone receptor in a yeast cell and its associated G-proteins can be thought of as one signaling unit, as they function together to detect the signal of pheromone in an inseparable manner. Another exemplary signaling unit is the cascade of mitogen-activated protein kinases (MAPKs), which transduce signals among themselves through a chain of protein phosphorylation reactions in an almost deterministic fashion. The states of these signaling units can be represented as two single-bit signals in a computational model. When a yeast cell is exposed to pheromone, the receptor unit detects the signal and transmits it to the MAPK unit~\citep{Gustin,Herskowitz1995}, which further relays the signal to downstream signaling units to regulate the expression of downstream genes involved in mating. These relationships between signaling units can be represented as edges in the model.
Moreover, in addition to pheromone response, the MAPK signaling unit also interacts with other signaling units to transmit the signals that affect filamentation/invasion processes \citep{Gustin,Herskowitz1995}; such branching and cross-talks between different signaling pathways can be represented as a network connecting signals in the computational model. Thus, the general task of using systematic perturbation data to study a cellular signaling system can be reduced to the following specific tasks: 1) revealing the signals embedded in the convoluted molecular phenotype data such as microarrays, 2) identifying perturbed genes that affect a common signal, 3) grouping perturbed genes into signaling units based on the information they encode, and 4) inferring the paths between signaling units where a path may or may not correspond to a signal transduction pathway in conventional cell biology. In the seminal work by \cite{Hughes}, yeast cells were subjected to over 300 types of systematic perturbations (gene deletions and chemical treatments\footnote{From here on, we refer to such a treatment experiment as a perturbation instance}) and the transcriptional responses to the perturbations were measured using microarrays. This dataset has been widely used to test different computational approaches for investigating the relationship between perturbed genes and responding genes \citep{Hughes,TanaySAMBA2002,OurfaliSPINE2007,MarkowetzNEM2007,Yeger-Lotem,Huang}. For example, using a conventional hierarchical clustering approach, Hughes \textit{et al} grouped perturbed genes into clusters to elucidate the cellular functions of some genes, based on the fact that perturbing these genes produced gene expression profiles similar to those resulting from perturbing the known members of certain pathways. 
To relax the requirement of global similarity imposed by hierarchical clustering, other researchers have studied approaches that connect a subset of perturbation instances to a subset of responding genes in order to find context-specific information between the perturbations and the responses~\citep{TanaySAMBA2002}. Such a task is often cast as a biclustering problem \citep{MadeiraBiClustering2004, Cheng2000, Erten2010}. More recently, sophisticated graph-based algorithms have been applied to the dataset to study potential signaling pathways \citep{Yeger-Lotem,Huang,OurfaliSPINE2007}. The basic idea underlying the studies by Yeger-Lotem \textit{et al} and Huang \textit{et al} is to model the information flow from perturbed genes to responding genes through a PPI network by employing graph search algorithms, e.g., prize-collecting Steiner tree algorithms. While the above studies have led to many biological insights regarding the system at a gene level, they did not address the task of discovering signaling units and representing the findings at a conceptual level in order to derive computable knowledge, such as the rule: \textit{if a gene involved in the MAPK pathway is deleted, the cellular response to pheromone will be affected}. Transforming experimental data into concepts and further elucidating the relationships among the concepts are critical steps of knowledge acquisition and knowledge representation. The scale of contemporary biotechnologies further necessitates computational approaches to perform such tasks in an automated manner in order to facilitate knowledge discovery by human experts. Yet, the development of such techniques is severely lagging behind the pace of data generation. In this paper, we report a proof-of-concept framework that unifies knowledge mining and data mining to derive knowledge regarding a signaling system in an automatic manner, and we refer to the overall approach as ontology-driven knowledge discovery of signaling pathways (OKDSP).
We tested the framework using the yeast perturbation-response data by \cite{Hughes} to illustrate its utility. \section*{Results and Discussion} A key step of ``reverse engineering'' signaling pathways using systematic perturbation data is to identify perturbations that convey the same information, in other words, to first find the ``jigsaw puzzle'' pieces belonging to a signal transduction pathway. For example, a classic yeast genetic approach is to search for deletion strains that exhibit a common phenotype as a means of identifying genes potentially involved in a signaling pathway carrying information with respect to the phenotype \citep{Winzeler1999}. The advent of genome technologies enables biologists to use genome-scale data, such as gene expression data, as ``molecular phenotypes'' to study the impact of systematic perturbations \citep{Hughes}. In general, a perturbation treatment, such as deleting a gene, often affects multiple biological processes. For example, deleting a gene involved in ergosterol metabolism will affect the organization of the cell membrane, which in turn will affect multiple signaling pathways located in the membrane. As such, the overall cellular response to a perturbation instance, which often manifests as a long list of differentially expressed genes, inevitably reflects a mixture of responses to multiple signals. Thus, we are confronted with two fundamental tasks when studying systematic perturbation data: 1) dissecting signals from the convoluted gene expression responses to a perturbation instance, i.e., finding a module of genes whose expression state reflects the state of a signal transduced along a signaling pathway; and 2) identifying a set of perturbation instances that affect the signal regulating a common expression module.
To address the tasks, we hypothesize that, if a module of genes---whose functions are coherently related---responds to multiple perturbation instances in a coordinated manner, the genes in the module are likely regulated by a common signal, and the perturbation instances affect this signal. Based on this assumption, we can first decompose the overall expression response to a perturbation instance into functional modules, with each module potentially responding to a distinct signal; then we can investigate if a functional module is repeatedly affected in multiple perturbation instances. In this study, we developed an ontology-based knowledge mining approach to identify functional modules, and we then developed a novel bipartite-graph-based data mining approach to search for perturbation instances affecting a common signal. Based on the results from the steps above, we further identified signaling units and revealed their organization in a signaling system using a graph-based algorithm. \subsection*{Identifying functional modules through knowledge mining} The Gene Ontology \citep{Ashburner00} (GO) contains a collection of biological concepts (GO terms) describing molecular biology aspects of genes. The relationship among the concepts are represented in a directed acyclic graph (DAG). An edge reflecting an ``is-a'' relationship between a pair of GO terms indicates that the concept encoded by the parent term is more general and subsumes the concept of the child term. The GO has been widely used to annotate the function of genes of different model organisms, therefore it is natural to treat a set of genes annotated with a common GO term as a \textit{functional module}, a widely used approach in bioinformatics analyses \citep{SegalModuleMap2004,Subramanian2005}. 
\begin{figure}[h] \centering \scalebox{0.4}{\includegraphics{Figure_1.eps}} \caption{{\bf Characterization of the summary GO terms.} {\bf A.} The histograms of the number of genes associated with each GO term before and after ontology-guided knowledge mining: 1) the original GO annotations for all responding genes (blue), and 2) the GO terms returned by the instance-based module search (red). {\bf B.} The distribution of the levels of the above GO term sets in the ontology hierarchy is shown as normalized histograms. Level $0$ represents the root of the Biological Process namespace.} \label{FigureHistGrams} \end{figure} We first investigated whether the original GO annotations from the GO database are suitable for representing the major functional themes of genes responding to perturbations in our setting. Based on the results of the gene expression analysis performed by Hughes \textit{et al}, $5,289$ genes were determined to be differentially expressed in response to one or more perturbation instance(s). We identified all the GO terms that have been used to annotate these genes and retained the subset belonging to the Biological Process domain of the GO, which consisted of 1,739 unique GO terms. We studied the distribution of the number of genes annotated by each GO term, and the results are shown as a histogram in Figure~\ref{FigureHistGrams}. The figure shows that a large number of the original GO annotations were associated with only a few genes; in fact, almost half ($43.93\%$) of the GO terms were associated with only 1 or 2 genes. The results reflect the fact that, while the original GO annotations are highly specific and informative with regard to individual genes, they would fail to represent the major functional themes of a set of genes. Therefore, there is a need to identify more general terms to represent major functional themes.
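The annotation-size statistics behind Figure~\ref{FigureHistGrams}A reduce to counting genes per term over the gene-to-annotation map; a sketch with made-up toy annotations (gene and term IDs are illustrative only):

```python
from collections import Counter

# toy gene -> GO term annotation map (illustrative IDs only)
annotations = {
    "HXT1": ["GO:0005353", "GO:0008643"],
    "HXT2": ["GO:0005353", "GO:0008643"],
    "SNF3": ["GO:0051594", "GO:0005536"],
    "RGT2": ["GO:0051594"],
}

genes_per_term = Counter(t for terms in annotations.values() for t in terms)
size_histogram = Counter(genes_per_term.values())   # term size -> number of terms
print(size_histogram)
```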
We then formulated the task of finding functional modules as follows: given a list of genes responding to a perturbation instance and their GO annotations, assign the genes to non-disjoint functional modules, such that genes within a module participate in coherently related biological processes. This was achieved by utilizing the hierarchical organization of the GO to group a subset of genes under a suitable GO term that retains as much of the original semantic information as possible. We developed novel quantitative metrics for objectively assessing the fitness of a summarizing GO term, which enabled us to find a term that covered many genes and yet minimized the loss of semantic information relative to the original annotations. Our criteria for a summarizing GO term included: 1) requiring the summarizing term to be statistically enriched in the input gene list, and 2) requiring the functions of the genes in a module to be semantically coherent when measured with a functional coherence metric previously developed by our group (\cite{Richards2010}, see Methods section). This enabled us to dynamically search for suitable terms along the GO hierarchy and to group genes under suitable summary terms in a manner that is specific to each input gene list, rather than using pre-fixed annotations \citep{Subramanian2005}. We refer to this approach as a knowledge mining approach because it searches for a new representation of the function of genes through assimilating knowledge represented by the original annotations. \begin{figure}[h] \centering \scalebox{0.4}{\includegraphics{Figure_3_new.eps}} \caption{ {\bf Functional coherence of modules.} {\bf A.} The cumulative distribution of functional coherence p-values of the responding modules identified by different methods: MBSEC with module-based input graphs (red), SAMBA with module-based input graphs (green), and SAMBA with the global input graph (blue).
{\bf B.} The cumulative distribution of functional coherence p-values of the perturbation modules identified by different methods: MBSEC with module-based input graphs (red), SAMBA with module-based input graphs (green), and SAMBA with the global input graph (blue). } \label{ModuleCoherence} \end{figure} Applying this approach, we identified functionally coherent modules for each perturbation experiment. Further, we merged the modules from different perturbation instances that shared a common GO annotation. The procedure led to a total of 527 distinct functional modules, each summarized with a distinct GO term. The statistics of the modules, the number of genes annotated by summarizing terms and the levels of the terms in the GO hierarchy, are shown in Figure \ref{FigureHistGrams}. It is interesting to note that while the summarizing GO terms tend to annotate more genes than the original ones, the distribution of the terms along the GO hierarchy is quite close to the original annotations, indicating that our approach retained a level of semantic specificity similar to the original annotations. We further investigated the modules and found the results biologically sensible. For example, we found that 38 genes were grouped into a module annotated with the term GO:0008643 ({\it carbohydrate transport}) (from here on, we name a functional module using its summary GO term), including 17 genes in hexose transport \{HXT1, HXT2, ..., HXT17\}. The original annotations of the genes in the module included GO:0051594 ({\it detection of glucose}, covering 3 genes), GO:0005536 ({\it glucose binding}, covering 3 genes), GO:0005338 ({\it nucleotide-sugar transmembrane transporter activity}, covering 4 genes), GO:0005353 ({\it fructose transmembrane transporter activity}, covering 16 genes), and so on. 
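The walk from such specific annotations to a summarizing ancestor can be sketched as a breadth-first search up the GO DAG; in this sketch of ours, `is_enriched` and `is_coherent` are placeholders for the enrichment test and the coherence metric of \cite{Richards2010}, and the toy DAG edge is illustrative rather than real GO structure:

```python
def summarize(genes, term, parents, annotated, is_enriched, is_coherent):
    """Breadth-first walk from an over-specific GO term toward the root,
    returning the first ancestor that is both enriched and functionally
    coherent for the input gene set.  `is_enriched` and `is_coherent` are
    placeholders for the enrichment test and the coherence metric."""
    frontier, seen = [term], {term}
    while frontier:
        t = frontier.pop(0)                 # most specific candidates first
        covered = genes & annotated.get(t, set())
        if covered and is_enriched(t, covered) and is_coherent(covered):
            return t, covered
        for p in parents.get(t, []):
            if p not in seen:
                seen.add(p)
                frontier.append(p)
    return None, set()

# toy fragment of the DAG (the edge below is illustrative, not real GO structure)
parents = {"GO:0005536": ["GO:0008643"]}    # glucose binding -> carbohydrate transport
annotated = {"GO:0005536": {"g1"},
             "GO:0008643": {"g1", "HXT1", "HXT2"}}
term, covered = summarize({"g1", "HXT1", "HXT2"}, "GO:0005536",
                          parents, annotated,
                          lambda t, cov: len(cov) >= 2,   # placeholder enrichment
                          lambda cov: True)               # placeholder coherence
print(term)                                 # -> GO:0008643
```

The over-specific term covers too few genes to be enriched, so the walk settles on the more general ancestor.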
Our algorithm summarized the function of the genes using the term GO:0008643 ({\it carbohydrate transport}), which we believe does not result in a significant loss of information regarding the individual genes, thus providing a sensible representation of the overall function of a larger group of genes. A list of functional modules is available on the supplementary website. \begin{figure}[h] \centering \scalebox{0.4}{\includegraphics{Figure_Bipartite_density.eps}} \caption{ {\bf Subgraph connectivity.} Cumulative distribution of within bipartite subgraph connectivity of the modules identified in three experiments: MBSEC with module-based input graphs (red), SAMBA with module-based input graphs (green), and SAMBA with global input graph (blue). } \label{WithinModuleConn} \end{figure} \begin{figure}[h] \centering \scalebox{0.4}{\includegraphics{Figure_5_new.eps}} \caption{ {\bf Protein-protein physical and genetic interactions within modules.} {\bf A.} The cumulative distribution of the within module PPI/GI connectivity ratios of responding modules identified by different methods: MBSEC with module-based input graphs (red), SAMBA with module-based input graphs (green), and SAMBA with the global input graph (blue). {\bf B.} The cumulative distribution of the connectivity ratios within perturbation modules identified by different methods: MBSEC with module-based input graphs (red), SAMBA with module-based input graphs (green), and SAMBA with the global input graph (blue). } \label{ModuleInteraction} \end{figure} \subsection*{Searching for perturbation instances affecting a common signal} Using a functional module from the previous section as a putative unit responding to a cellular signal, we further searched for the perturbation instances that affected the functional module.
Success in finding a set of functionally coherent genes that repeatedly co-responded to multiple perturbation instances would provide a strong indication that the responding genes are regulated as a unit by a common signal and that the perturbation instances may have affected such a signal. We addressed the search task in the following steps: 1) Given a functional module, we first created a bipartite graph using all perturbation instances on one side and the genes in the functional module on the other side, referred to as a functional-module-based graph. In such a graph, an edge between a perturbation instance and a responding gene indicates that the gene is differentially expressed in response to the instance. 2) We then searched for a densely connected subgraph satisfying the following conditions: a) each vertex was on average connected to a given percentage, $r$, of the vertices on the opposite side, and b) the size (number of vertices) of the subgraph was maximized. We refer to the vertices on the perturbation side of a densely connected subgraph as a \textit{perturbation module}, and those on the responding side as a \textit{response module}. The problem of finding such a subgraph in a bipartite graph belongs to a family of biclustering problems \citep{MadeiraBiClustering2004, Cheng2000,Erten2010}, which are NP-hard. There are many approximate algorithms for solving the problem (see the review by \cite{MadeiraBiClustering2004}), but our formulation has distinct objectives, which allow us to specify the degree of connectivity between perturbation and responding modules. We have developed and implemented a greedy algorithm, referred to as the maximal bipartite subgraph with expected connectivity (MBSEC) algorithm, to solve this problem; see Methods.
We performed experiments to test the following two hypotheses: 1) using functional-module-based graphs as inputs for a dense-subnetwork searching algorithm would enhance the capability of identifying signaling pathways; 2) specifically pursuing high subgraph density enhances the capability of finding signaling pathways. To test the first hypothesis, we applied an algorithm referred to as the statistical-algorithmic method for bicluster analysis (SAMBA) by \cite{TanaySAMBA2002} to assess the impact of different inputs on the quality of perturbation-response modules. SAMBA is a well-established algorithm that solves the biclustering problem under a bipartite graph setting, which is similar to our problem setting. We first applied SAMBA (implemented in the Expander program, v5.2) with default settings to the global bipartite graph consisting of all 5,289 responding genes and 300 perturbations, which returned a total of $304$ subgraphs. We then applied the SAMBA program to each of the functional-module-based graphs, and a total of $131$ subgraphs were returned. To test the second hypothesis, we applied the MBSEC algorithm to the same functional-module-based graphs as in the previous experiment, using the following parameter settings: $r \geq 0.75$ and $s \geq 4$. This experiment identified a total of $122$ subgraphs that satisfied the requirements. We assessed the overall quality of a perturbation (or a responding) module by determining the functional coherence score of the module using the method previously developed by our group \citep{Richards2010}. This method measures the functional relatedness of a set of genes based on the semantic similarity of their functional annotations and provides a p-value for the coherence score of a gene set.
The key idea of this method is as follows: given a set of genes, map the genes to a weighted graph representing the ontology structure of the GO, in which the weight of an edge reflects the semantic distance between the concepts represented by a pair of GO terms; identify a Steiner tree that connects the GO terms annotating these genes and measure how closely the genes are located within the graph using the total length of the tree; and apply a statistical model to assess whether the genes in the set are more functionally related than those from a random gene set. A gene set with a small p-value would indicate that the functions of the genes are coherently related to each other. Figure \ref{ModuleCoherence} shows the results of the functional coherence analysis of responding modules (Panel A) and perturbation modules (Panel B) by plotting the cumulative distribution of the modules based on their p-values. Panel A shows that all responding modules returned by our MBSEC algorithm and those returned by SAMBA with functional-module-based graphs as input were assessed as functionally coherent. This is not surprising, as all the input modules were functionally coherent (p-value $\le 0.05$), and therefore the returned responding modules, which were subsets of the input modules, were likely to be coherent. In comparison, when using the global perturbation-response bipartite graph as input, about {70\%} of the responding modules identified by SAMBA were assessed to be coherent. The results indicate that, while the SAMBA algorithm is capable of identifying biclusters with coherent responding modules, a high percentage of the returned responding modules contained a mixture of genes involved in diverse biological processes. Since the goal is to find perturbation instances that likely constitute a signaling pathway, it is more interesting to inspect whether the genes in a perturbation module are coherently related.
We assessed the functional coherence of the perturbation modules returned from the three experiments to evaluate the impact of different inputs and algorithms on the results (see Panel B of Figure \ref{ModuleCoherence}). A higher percentage of perturbation modules was functionally coherent when functional-module-based graphs were used as inputs for SAMBA compared with those from SAMBA with the global graph, indicating that perturbation instances densely connected to a functionally coherent responding module were indeed more coherent themselves, i.e., they were more likely to function together. When comparing the results from the MBSEC algorithm with those from SAMBA, our algorithm returned the highest percentage of functionally coherent perturbation modules. The results indicate that, when inputs are the same, specifically pursuing high-density subgraphs enhances the quality of the identified perturbation modules. We further inspected the within-subgraph connectivity, defined as the number of edges within a subgraph over the maximum possible number of edges ($n \times m$, with $n$ and $m$ representing the number of vertices on each side, respectively), to investigate whether the differences in functional coherence of the modules were related to the capabilities of the algorithms to find densely connected graphs. Figure \ref{WithinModuleConn} shows that there were striking differences in the connectivity of the subgraphs returned from the three experiments. The results also support the notion that an enhanced capability of finding densely connected perturbation-response bipartite graphs underlies the capability of identifying coherent modules. In addition to assessing the functional relationship of the genes, we further quantified and compared within-module physical and genetic interactions, which provided another line of evidence for assessing whether the genes in the modules were functionally related.
Using protein-protein physical interaction and genetic interaction data from BioGRID \citep{StarkBioGrid2010}, we calculated the ratio of the number of known interactions within a module containing $N$ genes over the maximum number of possible interactions for the module ($N(N-1)/2$). We plot the cumulative distributions of modules based on their interaction ratios in Figure \ref{ModuleInteraction}. The figure shows that there are more physical and/or genetic interactions within both the perturbation and responding modules identified by our methods, indicating that the genes in these modules are indeed more likely to function together. Taken together, these results indicate that, by constraining the search space to \textit{functionally coherent genes} and explicitly requiring a degree of connectivity of subgraphs, our approach enhances the capability of identifying perturbation modules in which the genes are more likely to physically interact with each other to participate in coherently related biological processes. Thus they likely participate in a common signaling pathway and carry a common signal. \subsection*{Discovering signaling pathways based on perturbation-responding subgraphs} A subgraph consisting of a perturbation and a responding module reflects the fact that the perturbation instances affected the signal controlling the expression state of the genes in the responding module. It is interesting to see whether a perturbation module contains the members and/or modulators of a signaling pathway. Indeed, we found that many of the identified perturbation modules corresponded to well-known signaling pathways.
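As a concrete illustration, the two density measures used in this analysis (the within-subgraph connectivity over $n \times m$ and the within-module interaction ratio over $N(N-1)/2$) can be computed as follows; the function names and input layout are ours, not from the original implementation:

```python
def within_subgraph_connectivity(edges, n_left, n_right):
    """Fraction of realized edges in a bipartite subgraph whose sides have
    n_left and n_right vertices; the maximum possible is n_left * n_right."""
    return len(edges) / (n_left * n_right)

def interaction_ratio(interactions, module_genes):
    """Ratio of known PPI/GI pairs falling within a module of N genes to the
    maximum number of unordered pairs, N*(N-1)/2."""
    genes = set(module_genes)
    n = len(genes)
    if n < 2:
        return 0.0
    within = sum(1 for a, b in interactions
                 if a in genes and b in genes and a != b)
    return within / (n * (n - 1) / 2)
```

Both quantities lie in $[0,1]$, so cumulative distributions over modules are directly comparable across methods.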
For example, our analysis identified a subgraph consisting of a responding module of $8$ genes annotated by the GO term GO:0019236 ({\it response to pheromone}) and a perturbation module consisting of $16$ perturbation instances: \{{\color{blue}$STE11$}, {\color{blue}$STE4$}, {\color{blue}$DIG1$}, {\color{blue}$DIG2$}, $HMG2$, {\color{blue}$FUS3$}, $KSS1$, $RAD6$, {\color{blue}$STE7$}, {\color{blue}$STE18$}, {\color{blue}$STE5$}, {\color{blue}$CDC42$}, {\color{blue}$STE12$}, $STE24$, $SOD1$, $ERG28$ \}. In the list of perturbation instances, we highlighted (in blue font) the genes that are known members of the well-studied yeast pheromone response pathway reviewed by \cite{Gustin}, which listed 20 gene products as members of the pathway. In the study by \cite{Hughes}, 12 out of those 20 genes were deleted. We found that 10 of these 12 perturbation instances were included in the perturbation module of this subgraph. This result indicates that our approach is capable of reconstituting the majority of the genes involved in the pheromone signaling pathway. The inclusion of the ergosterol metabolism enzymes ERG28 and HMG2 in the perturbation module indicates that our approach can also identify the modulators of a signaling pathway. \begin{figure*}[ht] \centering \scalebox{.5}{\includegraphics{Figure_6c.eps}} \caption{ {\bf Example perturbation-responding subgraphs.} Two example subgraphs are shown: {\bf Panel A}: GO:0019236 ({\it response to pheromone}) and {\bf Panel B}: GO:0006826 ({\it iron ion transport}). For each subgraph, the perturbation instances (green hexagons) are shown in the top tier; responding genes (blue circles) are shown in the middle tiers; and the transcription factor modules (grey triangles) are shown in the bottom tier. To avoid an overly crowded figure, a red dashed line indicates that a perturbation instance and a responding gene are NOT connected.
} \label{fig:PertbNetwork} \end{figure*} In addition to ``re-discovering'' known signaling pathways, analysis of the subgraphs obtained in this study led to novel hypotheses. For example, in one subgraph, the responding module was annotated with GO:0006826 ({\it iron ion transport}) and consisted entirely of genes involved in cellular iron homeostasis, including iron transporters and ferric reductases, shown in Panel B of Figure \ref{fig:PertbNetwork}. These genes are known to be primarily regulated by the iron-responsive transcription factor Aft1p and partially comprise the iron regulon in yeast \citep{YamaguchiIwai1995}. Intriguingly, the perturbed gene set consisted largely of proteins involved in mitochondrial translation, including gene products involved in mitochondrial ribosomal subunits ($RML2$, $RSM18$, $MRPL33$), translation ($HER2$, $DIA4$, $AEP2$), and RNA processing ($MSU1$). These data lead to the novel hypothesis that perturbation of mitochondrial protein synthesis leads to changes in the iron-sensing process. In fact, such a link has only recently been suggested, in that iron-sulfur complex synthesis in mitochondria, which requires a set of 10 distinct protein components \citep{Lill2000}, directly impacts cellular iron uptake and utilization \citep{Hausmann2008,Rutherford2005}. Indeed, these data provide a rationale for the new hypothesis that mitochondrial translation plays an essential role in cellular iron homeostasis through iron-sulfur complex synthesis. We have visualized all the perturbation-responding module pairs identified in our experiments and show the results on the supplementary website. The data allow readers, particularly yeast biologists, to inspect the results and assess the quality of the modules, and more importantly, to explore new hypotheses regarding yeast signaling systems. In Figure \ref{fig:PertbNetwork}, we show the subgraphs related to GO:0019236 ({\it response to pheromone}) and GO:0006826 ({\it iron ion transport}).
In this figure, we show the perturbation instances (green hexagons) and responding modules (blue circles) in two tiers. Because the connections between the perturbation and responding modules are very dense, which would interfere with visualization, we instead indicate the perturbation instances and responding genes that were NOT connected, shown as red dashed lines in the figure. Using a graph-based algorithm \citep{LuS2011}, we further identified transcription factor modules (red triangles) that are likely responsible for the co-expression of the genes in the responding modules. Including TF information in the data visualization further enhances interpretation of the subgraphs. For example, the fact that each responding module in this figure is connected to (and thus potentially regulated by) a TF module further strengthens the hypothesis that the genes are co-regulated as a unit responding to a common signal. \subsection*{Revealing organization of cellular signals} Our approach enabled us to use responding modules to reflect the major signals in a cellular system and to identify the perturbation instances that affect these signals. We found that many perturbation instances were involved in multiple perturbation-response subgraphs, indicating that the signal affected by such a perturbed instance was connected to multiple signals through cross-talk. This observation offered us an opportunity to further investigate the organization of cellular signals by studying which signals each perturbation instance affects, and how the signals are related to each other. For example, it is interesting to investigate whether a set of perturbation instances affects a common set of responding modules---that is, whether the information encoded by these genes is identical---so that we can group them as a signaling unit.
Similarly, it is of interest to investigate whether the responding modules (signals) affected by one perturbed gene are a subset of those affected by another perturbed gene, and to utilize such a relationship to organize the signals. The latter task is closely related to that addressed by the nested effect model \citep{MarkowetzNEM2007}, which aims to capture the hierarchical relationship among perturbation instances based on the genes they affect. Since the nested effect model used an individual gene as a responding unit, the scale of the problem became intractable (exponential) and a Markov chain Monte Carlo algorithm was employed. In contrast, our approach used conceptualized responding modules, which provided two advantages: 1) the projection of high-dimensional data at the gene level to a low-dimensional and semantically rich concept level reduces the complexity of the task; 2) the unique annotation associated with each module renders the determination of subset relationships among perturbation instances trivial. These characteristics enabled us to develop a polynomial-time algorithm (see Methods) to organize the perturbation instances into a directed acyclic graph (DAG). In such a graph, each node comprises a set of perturbation instances that share common responding modules, i.e., a signaling unit; an edge between a pair of nodes indicates that the signals affected by the parent node subsume those carried by the child node. We collected all perturbation modules that contained at least $8$ perturbation instances and organized the perturbation instances into a DAG, as shown in Figure \ref{fig:hierachyGraph}. Inspecting the perturbation nodes containing multiple genes, we found that the genes in these nodes tend to participate in coherently related biological processes, and they often physically interact with each other at high frequencies (data not shown).
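At the concept level, the grouping and subsumption steps reduce to elementary set operations. The following is our own minimal sketch, with a hypothetical input layout, of building the signaling-unit DAG:

```python
def build_signal_dag(pert_to_rms):
    """Group perturbation instances into signaling units and add subsumption
    edges. pert_to_rms maps each perturbation instance to the set of
    responding-module labels it connects to (a hypothetical input layout)."""
    # 1) instances sharing an identical RM set form one perturbation node
    nodes = {}
    for pert, rms in pert_to_rms.items():
        nodes.setdefault(frozenset(rms), []).append(pert)
    # 2) add a directed edge n1 -> n2 whenever the RM set of n2 is a proper
    #    subset of that of n1 (the parent's signals subsume the child's)
    edges = []
    keys = list(nodes)
    for a in keys:
        for b in keys:
            if a != b and b < a:  # '<' on frozensets tests proper subset
                edges.append((a, b))
    return nodes, edges
```

The Methods section additionally removes transitive edges to simplify the DAG; that cleanup step is omitted here.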
For example, one perturbation node (highlighted with a blue border) in Figure~\ref{fig:hierachyGraph} contains multiple STE (sterility) genes, a set of well-studied genes that mediate pheromone signaling in yeast, and they share common responding modules annotated with the functions ``response to pheromone'' (GO:0019236) and ``sexual reproduction'' (GO:0019953). Thus our method is capable of identifying perturbed instances whose information can be encoded using a one-bit signal---a switch affecting the expression of the genes responding to pheromone signaling. Visualization of the relationships of perturbation instances in a DAG enables a biologist to investigate how signals are combined to generate a cellular response. For example, there is a perturbation node (highlighted with a red border) in Figure~\ref{fig:hierachyGraph} containing $DIG1$, $DIG2$, $SOD1$, $FUS3$ and $KSS1$, all of which, except $SOD1$, are involved in MAPK activities. Our results show that there is a path connecting this node to the aforementioned STE node, and then further to the ``response to pheromone'' responding module, indicating that the gene products of the two nodes work together to transmit signals in response to pheromone. Indeed, it is well known that MAPK activities are required in the pheromone signaling pathway \citep{Gustin,Herskowitz1995}. Yet our results also clearly show that the MAPK node carries information beyond the pheromone response; for example, it also affects the ``proteolysis'' (GO:0006508) process. \begin{figure*}[ht] \centering \scalebox{0.9}{\includegraphics{Figure_7c.eps}} \caption{{\bf Organizing perturbation instances and responding modules.} In this graph, responding modules are represented as green oval nodes, with each being annotated by a GO term. The rectangular nodes are perturbation nodes, which may contain one or more genes that share a common set of responding modules.
} \label{fig:hierachyGraph} \end{figure*} Another interesting observation is that the hierarchical organization of the perturbation instances reflects their relative position in a signaling cascade. For example, perturbation of the ergosterol metabolism genes ERG2, ERG3, HMG2, ERG11, and ERG28 tends to have a broad impact on different signals, including the pheromone response pathway. This is understandable: as a critical component of the plasma membrane, ergosterol influences the organizational compartments of the plasma membrane, such as lipid rafts \citep{Simon2011}, which in turn affect the organization of signaling molecules in the membrane. As such, perturbation of these genes has a broad impact on diverse cellular signals. Our results indicate that $HMG2$ and $ERG28$ are connected to the STE node to influence the expression of the pheromone responding module. The role of ergosterol metabolism in modulating pheromone response signaling has only recently been studied by \cite{JinH2008}. More interestingly, our results indicate that perturbation of distinct enzymes of ergosterol metabolism leads to distinct cellular signals, presumably by perturbing the production of distinct ergosterol species. The view that distinct lipid species encode/regulate disparate signals is widely accepted in the lipidomics research domain \citep{Parks1995}. \section*{Summary} In this study, we developed a proof-of-concept framework for unifying knowledge mining and data mining to conceptualize the findings from systematic perturbation experiments in order to enhance the capability of identifying signal transduction pathways.
The innovations of our approach are reflected in the following aspects: 1) an ontology-driven approach for identifying functional modules from a gene list in a dynamic and data-driven (instance-based) manner and projecting molecular findings to a conceptual level; 2) an innovative formulation of the biclustering problem in terms of a constrained search space and new objective functions; and 3) a novel graph algorithm that, for the first time, enables organizing signaling molecules at a system level in a tractable manner. We have demonstrated that conceptualization of cellular responses to systematic perturbations enhances the capability of identifying perturbation instances that participate in specific signal transduction pathways. To the best of our knowledge, this is the first report of a computational framework capable of automatically assimilating the information from systematic perturbation data to reveal the architecture of a cellular signaling system at a \textit{conceptual} level that can be readily interpreted by biologists to gain insights into a system. More importantly, conceptualization of experimental results is a critical step towards the ultimate goal of systems biology---acquiring computable knowledge from experimental data for reasoning and hypothesis generation. Our results already lay the foundation for deriving abstract knowledge. For example, one can translate a path from a perturbation node to a responding module in Figure \ref{fig:hierachyGraph} into a rule as follows: ``if genes involved in MAPK signaling are perturbed, genes involved in pheromone responses will be differentially expressed''. A rule like this represents the relationships between perturbed genes and responding genes at a conceptual level.
Equipped with rules and facts, a computing agent can then make a prediction that perturbation of a newly discovered gene may lead to the differential expression of genes involved in \textit{pheromone responses}, if the gene is found to be involved in \textit{MAPK signaling}. Ongoing research is devoted to acquiring and representing facts, assertions and rules from systems biology data in an accurate and generalizable manner. \begin{figure}[htb] \vspace{-3mm} \footnotesize \begin{tabbing} xxxx\=xx\=xx\=xx\=xx\=xx\=xx\=xx\=xx\=xx\=xx\=xx\=xx\=\kill {\bf Algorithm-1 HDSubgraph$(G, r,s)$}\\ \textbf{Input:} $G=(V_1,V_2,E)$ -- a bipartite graph, $r$ -- the connectivity ratio of the subgraph,\\ \>\>~ and $s$ -- the minimum number of perturbations in the solution.\\ \textbf{Output:} A highly dense subgraph\\ \\ 1. $G_{sub}=\emptyset$; $Score_{best}=-1$;\\ 2. {\bf for each} subset $S_1$ of size $s-1$ in $V_1$ {\bf do}\\ 3. \> $V_{remain} = V_1 - S_1$; $V''_1=S_1$; $Status = 1$; \\ 4. \> {\bf while} $Status=1$ {\bf do}\\ 5. \>\> $Score_{temp}=-1$; $G'_{sub}=\emptyset$; $Status = 0$;\\ 6. \>\> {\bf for each} $u \in V_{remain}$ {\bf do}\\ 7. \>\>\> $V'_1=V''_1 \cup \{u\}$; $V'_2=\{v| v \in V_2$ and $v$ connects to at least\\ \>\>\>\> $r|V'_1|$ vertices in $V'_1$\};\\ 8. \>\>\> Calculate the score of induced subgraph $G'=(V'_1,V'_2,E')$\\ \>\>\>\> and save the score to $SC$;\\ 9. \>\>\> {\bf if} $SC>Score_{temp}$ {\bf then}\\ 10. \>\>\>\> $Score_{temp}=SC$; $G'_{sub}=G'$;\\ 11. \>\> {\bf if} $Score_{best}<Score_{temp}$ {\bf then}\\ 12. \>\>\> $G_{sub}=G'_{sub}$; $Score_{best}=Score_{temp}$; $Status = 1$;\\ 13. \>\>\> Assign $V'_1$ of $G_{sub}$ to $V''_1$; $V_{remain}=V_1-V''_1$;\\ 14. {\bf return} $G_{sub}$;\\ {\bf Note:} 1. $score(G') = \sum_{x \in V'_1}((1+0.001)|V'_2| - 1/(1-r)(|V'_2| - degree_{G'}(x)))$. When a new node $x$ is added, \\ \>\>\>~ there is a score gain if the degree of $x$ in $G'$ is at least $r|V'_2|$; else a penalty will be applied.\\ \>\> 2.
$s$ is smaller than or equal to the minimum number of perturbations in the solution. The growth \\ \>\>\>~ of $s$ greatly increases the running time of the algorithm. \end{tabbing} \vspace*{-3mm} \caption{Greedy algorithm to find the highly dense bipartite subgraph} \label{GreedyAlg_2} \end{figure} \section*{Materials and Methods} The microarray data from the systematic perturbation experiments by \cite{Hughes} were collected, and differentially expressed genes responding to each perturbation were identified based on the analysis of the original paper. Given a list of differentially expressed genes responding to a perturbation instance, we represent the genes and their annotations using a data structure referred to as a GOGene graph \citep{Muller09}. In such a graph, a node represents a GO term and a directed edge between a pair of nodes reflects an ``is-a'' relationship between the GO terms; in addition, each node keeps track of the genes it annotates, so the graph contains information on both GO terms and genes. The procedure for searching for summarizing GO terms iterates through the following steps: 1) perform an enrichment analysis \citep{Khatri05} for each leaf GO term among the instance-specific responding genes; 2) select the GO term with the largest p-value (least enriched) and merge its genes into the parent node with the shortest semantic distance, as defined by Jin \textit{et al.} \citep{Jin2010}; 3) trim the term off the graph; 4) repeat the above steps. We stop trimming a GO term once it is significantly enriched (p-value $\leq$ 0.05) and the genes summarized by the term remain functionally coherent \citep{Richards2010}; its associated genes are then treated as a functionally coherent module. Otherwise, all non-significant terms are eventually merged into the root node of the GO hierarchy and their associated genes are deemed not coherently related.
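The trimming loop can be sketched as follows. This is our own minimal illustration: the enrichment test, semantic distance, and coherence test are supplied as callables standing in for the cited methods, and we assume merging always proceeds toward a parent that is still present in the graph:

```python
def summarize_modules(parents, genes, pvalue, distance, coherent, alpha=0.05):
    """Iteratively trim leaf GO terms, merging genes upward.
    parents: term -> list of parent terms (the root has no entry);
    genes: term -> set of responding genes annotated by the term;
    pvalue(term, gene_set): enrichment p-value (smaller = more enriched);
    distance(child, parent): semantic distance; coherent(gene_set): bool.
    Returns a dict term -> gene set for the accepted functional modules."""
    genes = {t: set(g) for t, g in genes.items()}
    modules = {}

    def leaves():
        nonroot = [t for t in genes if parents.get(t)]
        internal = {p for t in nonroot for p in parents[t]}
        return [t for t in nonroot if t not in internal]

    while leaves():
        lv = leaves()
        # significantly enriched, coherent leaves are accepted as modules
        done = [t for t in lv
                if pvalue(t, genes[t]) <= alpha and coherent(genes[t])]
        if done:
            for t in done:
                modules[t] = genes.pop(t)
            continue
        # otherwise trim the least enriched leaf, merging its genes into the
        # semantically closest parent
        worst = max(lv, key=lambda t: pvalue(t, genes[t]))
        target = min(parents[worst], key=lambda p: distance(worst, p))
        genes[target] |= genes.pop(worst)
    return modules
```

Genes that end up merged into the root without ever passing the tests are, as in the text, deemed not coherently related.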
To assess functional coherence, we applied the method developed by \cite{Richards2010}. In this approach, the ontology structure of the GO is represented as a weighted graph, in which an edge weight represents the semantic distance between a pair of GO terms. Given a list of genes, the genes are associated with their annotating GO terms and a Steiner tree connecting all the genes is identified. Using the total length of the Steiner tree as a score reflecting the functional relatedness of the genes, a statistical model is applied to assess the probability of observing such a score if gene sets of the same size are randomly drawn from the yeast genome. See \cite{Richards2010} for details. To search for a densely connected perturbation-responding subgraph in a bipartite graph, we formulated the task as follows: given a bipartite graph $G$, find a subgraph $G'=(V'_1,V'_2,E')$ of $G$ that satisfies the following conditions: 1) $(|V'_1| \geq s ) \bigcap (|V'_2| \geq s)$, where $s$ is a user-defined threshold for cluster size; 2) each vertex in $V'_1$ connects to at least $|V'_2| \times r$ vertices in $V'_2$, and each vertex in $V'_2$ connects to at least $|V'_1|\times r$ vertices in $V'_1$, where the parameter $r\in [0, 1]$ is a connectivity ratio defined by users; and 3) the size of the subgraph {($|V'_1| + |V'_2|$ )} is maximized. We set the parameters as follows: $s=4$ and $r = 0.75$. The algorithm for searching for such a subgraph is shown in Figure \ref{GreedyAlg_2}.
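A compact re-implementation of the greedy search (Algorithm-1) is sketched below; it is illustrative rather than the original code, and it enumerates all seeds of size $s-1$, which is practical only for small graphs:

```python
from itertools import combinations

def hd_subgraph(adj, V1, V2, r, s):
    """Greedy search for a highly dense bipartite subgraph.
    adj maps each vertex of V1 (a set) to the set of its neighbours in V2;
    r is the required connectivity ratio, s the minimum size of V1'."""
    def induced_v2(sub1):
        # vertices of V2 connected to at least r*|V1'| members of V1'
        need = r * len(sub1)
        return {v for v in V2 if sum(v in adj[u] for u in sub1) >= need}

    def score(sub1, sub2):
        # per-vertex gain 1.001*|V2'| minus a penalty for missing edges
        return sum(1.001 * len(sub2)
                   - (len(sub2) - len(adj[x] & sub2)) / (1.0 - r)
                   for x in sub1)

    best, best_score = None, float('-inf')
    for seed in combinations(sorted(V1), s - 1):
        cur, cur_score = set(seed), float('-inf')
        while True:
            grown = max(((cur | {u}, induced_v2(cur | {u})) for u in V1 - cur),
                        key=lambda g: score(*g), default=None)
            if grown is None or not grown[1] or score(*grown) <= cur_score:
                break
            cur, cur_score = set(grown[0]), score(*grown)
            if cur_score > best_score:
                best, best_score = grown, cur_score
    return best
```

On a toy graph with a planted dense block, the search recovers that block; a vertex with few connections to the current $V'_2$ is penalized through the $1/(1-r)$ term, exactly as in the note to Algorithm-1.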
\begin{figure}[htb] \footnotesize \begin{tabbing} xxxx\=xx\=xx\=xx\=xx\=xx\=xx\=xx\=xx\=xx\=xx\=xx\=xx\=\kill {\bf Algorithm organizing signaling components}\\ \textbf{Input:} A set of perturbation-responding subgraphs represented as a dictionary $D$, in which a key is \\ \>\>a perturbation instance and its value is a list of the responding modules (RMs) it connects\\ \textbf{Output:} A DAG organization of perturbation instances and RMs\\ \\ {\it \# Create a DAG consisting of perturbation instances and RMs}\\ 1. Create an empty graph $G$;\\ 2. Add all RMs to $G$ as RM nodes; \\ 3. Combine perturbation instances that connect to an identical set of RMs \\ \>into a joint perturbation node; add all resulting perturbation nodes into $G$;\\ 4. Add directed edges between perturbation nodes and RM nodes as specified in $D$;\\ 5. Add directed edges between a pair of perturbation nodes, $n_1$ and $n_2$,\\ \> if the set of RMs associated with $n_2$ is a subset of that associated with $n_1$; \\ \\ {\it \#Simplify the DAG}\\ 6. {\bf for} each node $n_1$ {\bf do}\\ 6.1.\> {\bf for} each node $n_2$ that is a descendant of node $n_1$ {\bf do}\\ 6.2.\>\> {\bf if} $n_2$ has a parent node that is a descendant of $n_1$ {\bf then}\\ 6.3.\>\>\> Remove edge $(n_1,n_2)$;\\ 7. {\bf return} $G$; \end{tabbing} \vspace*{-3mm} \caption{Algorithm for organizing perturbation instances and RMs} \label{GreedyAlg_3} \end{figure} To organize perturbation instances based on their signals, we developed an algorithm to organize the perturbed instances into a DAG. In such a graph, there are two types of nodes: responding module nodes and perturbation nodes. Our algorithm groups perturbation instances that share identical responding modules into a common perturbation node, a signaling unit, and connects the perturbation node to its corresponding responding modules.
The algorithm further organizes perturbation nodes such that, if the signals of one perturbation node subsume those of another, a directed edge pointing to the subsumed node is added between them. The algorithm is shown in Figure \ref{GreedyAlg_3}. \section*{Acknowledgement} The authors would like to thank Ms Vicky Chen and Joyeeta Dutta-Moscato for reading and editing the manuscript, and Drs. Nabil Matmati and David Montefusco for discussions. \paragraph{\it Funding:} This research was partially supported by the following NIH grants: R01LM011155 and R01LM010144. \section*{Author Contribution} XL conceived the project; SL performed the majority of the analyses; BJ contributed to the methods of knowledge mining; LAC helped with biological interpretation of results; XL, SL and LAC drafted the manuscript.
\section{Introduction} The metal-to-insulator transition (MIT) in a nanostructure can be induced by the Coulomb interaction, as proposed by Mott-Hubbard \cite{16ref1,17ref1}, or by disorder, as proposed by Anderson \cite{14ref1}. In the presence of both correlations and randomness one faces the interesting, and far from fully understood, Mott-Anderson physics \cite{1, 2, 5, 7, 8, 10, 21, 22, ent1, ent2, francaAmico2011}. Theoretical investigations of the MIT in complex systems via exact methods are challenging and restricted to small systems. Most studies instead apply dynamical mean-field theory (DMFT) \cite{1ref6}, which properly accounts for the electronic interaction and the disorder potential but is still computationally demanding and limited to simple systems. Recently we have proposed an alternative approach in which the quantum entanglement $-$ quantified via density-functional theory (DFT) \cite{6ref6, 7ref6} calculations $-$ is used to explore the MIT in interacting disordered chains \cite{prb}. This methodology has been proven reliable when compared to exact density-matrix renormalization group (DMRG) data and has also been successfully applied to investigate the superfluid-to-insulator transition (SIT) in disordered superfluids \cite{gui1, gui2}. In both the MIT and SIT cases entanglement was found to be a witness of {\it i)} the so-called full Anderson localization, associated with a real-space localization of pairs; {\it ii)} the Mott localization; and {\it iii)} the Mott-like localization, associated with an effective density phenomenon. However, the MIT study \cite{prb} was restricted to a fixed interaction strength, non-magnetized systems and zero temperature. Here we apply the same methodology to explore the Mott-Anderson physics in all regimes of interaction and to investigate the impact of the magnetization and of the temperature on the MIT.
We find that the minimum disorder strength necessary for the full Anderson localization is strongly dependent on the interaction regime. We also find an intrinsic connection between the degree of localization and the interplay between interaction and disorder. In magnetized systems, we find that the entanglement minimum characterizing the full Anderson localization is split into two minima, one for each spin species. Although temperature washes out all types of localization, our results reveal that the full Anderson localization survives at higher temperatures than the Mott-like localization. \section{Theoretical Model} We simulate the disordered interacting lattices via the one-dimensional Hubbard model, \begin{equation} H = -t\sum_{\langle ij \rangle \sigma}(\hat{c}^{\dagger}_{i\sigma}\hat{c}_{j\sigma}+ H.c.) + U\sum_i\hat{n}_{i\uparrow}\hat{n}_{i\downarrow} + \sum_{i\sigma}V_i\hat{n}_{i\sigma}, \end{equation} with on-site disorder potential $V_i$ characterized by a certain concentration $C\equiv L_V/L$ of randomly distributed impurities, where $L_V$ is the number of impurity sites and $L$ the chain size. The density operator is $\hat{n}_{i\sigma} = \hat{c}^{\dagger}_{i\sigma}\hat{c}_{i\sigma}$, the average density is $n=N/L=n_\uparrow+n_\downarrow$ and the magnetization is $m=n_\uparrow-n_\downarrow$, where $N = N_{\uparrow} + N_{\downarrow}$ is the total number of particles and $\hat{c}^{\dagger}_{i\sigma}$ ($\hat{c}_{i\sigma}$) is the creation (annihilation) operator, with $z$-spin component $\sigma = \uparrow,\downarrow$, at site $i$. All energies are in units of $t$ and we set $t=1$. We consider the average single-site entanglement: the bipartite entanglement between each site and the remaining $L-1$ sites, averaged over the sites. This ground-state entanglement is quantified via the average linear entropy,
This ground-state entanglement is quantified via the average linear entropy, \begin{equation} \mathcal L=\frac{1}{L}\sum_i \mathcal L_i=1-\frac{1}{L}\sum_i\left(\text{w}_{\uparrow,i}^2+\text{w}_{\downarrow,i}^2+\text{w}_{2,i}^2+\text{w}_{0,i}^2\right), \end{equation} where $\text{w}_{\uparrow,i}$, $\text{w}_{\downarrow,i}$, $\text{w}_{2,i}$ and $\text{w}_{0,i}$ are the occupation probabilities for the four possible states of site $i$: single occupation with spin up, single occupation with spin down, double occupation and empty, respectively. At finite temperature, the probabilities are calculated with respect to a thermal state $\rho_\beta \propto \sum_n e^{-\beta E_n} \ket{n} \bra{n}$, where $\ket{n}$ is an eigenstate of the Hamiltonian with energy $E_n$ and $\beta = 1/k_B T$ is the inverse temperature. Thus for small chains ($L=8$) we calculate Eq.~(2) by diagonalizing the full Hamiltonian. We also explore larger ($L=100$) disordered chains at $T=0$ via density-functional theory calculations. In this case, instead of Eq.~(2), we adopt an approximate density functional \cite{francaAmico2011} for the linear entropy of homogeneous chains, \begin{equation} \mathcal L^{hom}(n,U>0)\approx2n-\frac{3n^2}{2}+[(4n-2)\alpha(U)-4\alpha(U)^2]\times\Theta[n-\alpha(U)-1/2], \end{equation} where $\Theta(x)$ is the step function, with $\Theta(x)=0$ for $x<0$ and $\Theta(x)=1$ for $x\geq 0$, and $\alpha(U)$ is given by \begin{equation} \alpha(U)=2\int_0^\infty \frac{J_0(x)J_1(x)\exp\left[Ux/2\right]}{\left(1+\exp[Ux/2]\right)^2}\,{\rm d}x, \end{equation} where $J_k(x)$ are Bessel functions of order $k$. This density functional, Eq.~(3), was specially designed to be used in local-density approximations (LDA) for calculating the linear entropy of inhomogeneous systems via DFT calculations. 
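To make these quantities concrete, the sketch below evaluates Eq.~(2) exactly for a toy chain with one particle per spin (a much smaller filling than the chains studied here, chosen only to keep the diagonalization minimal) and implements the homogeneous functional of Eqs.~(3)--(4); the function names and toy parameters are our own illustrative choices, not part of the actual calculations of this work.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, j1

# Eq. (2) for a toy case: one spin-up and one spin-down fermion on an open
# chain, basis |i_up, i_dn>; V is the list of on-site potentials V_i.
def average_linear_entropy(L, U, V, t=1.0):
    idx = lambda iu, idn: iu * L + idn
    H = np.zeros((L * L, L * L))
    for iu in range(L):
        for idn in range(L):
            a = idx(iu, idn)
            H[a, a] = V[iu] + V[idn] + (U if iu == idn else 0.0)
            if iu + 1 < L:   # spin-up hopping
                H[a, idx(iu + 1, idn)] = H[idx(iu + 1, idn), a] = -t
            if idn + 1 < L:  # spin-down hopping
                H[a, idx(iu, idn + 1)] = H[idx(iu, idn + 1), a] = -t
    p = np.linalg.eigh(H)[1][:, 0].reshape(L, L) ** 2  # |psi_gs(i_up,i_dn)|^2
    S = 0.0
    for i in range(L):
        w2 = p[i, i]                          # double occupation of site i
        wup = p[i, :].sum() - w2              # only spin-up on site i
        wdn = p[:, i].sum() - w2              # only spin-down on site i
        w0 = 1.0 - w2 - wup - wdn             # site i empty
        S += 1.0 - (wup**2 + wdn**2 + w2**2 + w0**2)
    return S / L

# Eqs. (3)-(4): homogeneous functional; exp(Ux/2)/(1+exp(Ux/2))^2 is
# rewritten as 1/(4 cosh^2(Ux/4)) for numerical stability at large Ux.
def alpha(U):
    f = lambda x: 0.0 if U * x / 4 > 350 else j0(x) * j1(x) / (2 * np.cosh(U * x / 4) ** 2)
    return quad(f, 0, np.inf, limit=500)[0]

def L_hom(n, U):
    a = alpha(U)
    return 2 * n - 1.5 * n ** 2 + ((4 * n - 2) * a - 4 * a ** 2) * (n >= a + 0.5)

# Clean half-filled dimer at large U: a Heitler-London singlet, so the
# entropy tends to 1/2, consistent with the Mott limit of L_hom(n=1, U).
print(average_linear_entropy(2, 1000.0, [0.0, 0.0]), L_hom(1.0, 50.0))
```

For $U=0$ the clean dimer gives $\mathcal L=3/4$ (all four site states equally probable), while a strong attractive impurity on one site drives $\mathcal L\to 0$, the real-space pair localization discussed below.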
Thus the entanglement in our large disordered chains is approximately obtained via LDA: \begin{equation} \mathcal L\approx \mathcal L^{LDA}\equiv \frac{1}{L}\sum_i\mathcal L^{hom}(n,U>0)|_{n\rightarrow n_i}, \end{equation} where the density profile $\{n_i\}$ is calculated via standard (Kohn-Sham iterative scheme) DFT calculations within LDA for the energy, in which the exact Lieb-Wu \cite{lw} energy is used as the homogeneous input. For each set of parameters ($C,V;U,n,m$), $\mathcal L$ is obtained through an average over $100$ random disorder configurations to ensure that the results do not depend on specific configurations of impurities. Notice that this huge amount of data would be impractical via exact methods such as DMRG (for a comparison between our DFT approach and DMRG calculations, see the Supplementary Material of Ref.~\cite{prb}). \begin{figure} [t!] \centering \includegraphics[width=\textwidth]{fig1.pdf} \caption{Entanglement of disordered nanostructures as a function of the particle interaction: (a) for several concentrations $C$ of impurities with strength $V=-20t$ and (b) for several disorder strengths $V$ at the critical concentration $C_C=100n/2=40\%$.} \end{figure} \section{Results and Discussion} We start by exploring the Mott-Anderson physics at zero temperature via the entanglement as a function of the interaction. In Figure 1a we consider several concentrations of impurities with a fixed strength $V=-20t$, thus ranging from strong ($|V|\gg U$) to moderate ($|V|\approx U$) disorder. As disorder becomes more relevant, i.e., for $U\rightarrow 0$, the entanglement decreases and saturates for any concentration $C>0$. This saturation characterizes the localization: the disorder potential freezes the electronic degrees of freedom such that $\mathcal L\rightarrow \text{constant}$. Fig. 
1a also shows the non-monotonic behavior of the entanglement with $C$, whose minimum occurs at the critical concentration $C_C=100n/2=40\%$ for $V<0$ (for $V>0$, $C_C=100(1-n/2)$), observed previously in the MIT \cite{prb} and in the SIT \cite{gui1,gui2}. This minimum entanglement has been associated $-$ in both MIT and SIT cases $-$ with a {\it fully localized} state, marked by $\mathcal L\rightarrow 0$ for $|V|\rightarrow \infty$ due to real-space localization of pairs (as also confirmed by the average occupation probabilities, see Figure 2). While for the MIT the full localization was found to appear for $|V|\geq V_{min}\approx 3t$ at $U=5t$, for the SIT the same minimum $V_{min}\approx 3t$ was found for any interaction \cite{gui2}. However we now observe a distinct feature: Fig. 1a reveals that depending on the interaction strength ($U>10t$) even a disorder potential as strong as $V=-20t$ is not enough to fully localize the system, i.e., $\mathcal L\neq 0$ at $C_C=40\%$. In other words, $V_{min}$ in the MIT case is strongly affected by the interaction. To further analyze this interplay between $U$ and $V$, in Fig. 1b we focus on the critical concentration $C_C$ and vary instead the potential strength $V$. We find that for $|V|\lesssim U$ the degree of entanglement is essentially independent of $V$, suggesting that the system presents the same degree of localization for a given $U$ and that this is a weak localization, since the degree of entanglement is very close to that of the clean $V=0$ case. In contrast, for $|V|\gtrsim U$ the degree of entanglement decreases with decreasing $U$ and is smaller for larger $|V|$, reaching the full localization when $|V|\rightarrow \infty$. As $U$ increases, a stronger $V$ is required for $\mathcal L \rightarrow 0$, thus confirming that the full localization in the MIT requires a minimum disorder strength $V_{min}$ which depends on the interaction. 
For the particular case of $V=-1t$, such that $U\gtrsim |V|$ always, we do not find the characteristic decrease of the entanglement when $U\rightarrow 0$, indicating that the full localization does not occur in this case. \begin{figure} [t!] \centering \includegraphics[scale=.65]{fig2_new.pdf} \caption{Entanglement of disordered nanostructures as a function of the impurities' concentration for several attractive (a) and repulsive (b) disorder strengths at a fixed $U$, and for several interaction strengths at a fixed $V$ (c). (d) and (e): Average occupation probabilities as a function of the impurities' concentration: double occupancies (d) and single-occupation probabilities (e) at impurity ($\bar{\text{w}}_2^V$, $\bar{\text{w}}_\uparrow^V$) and non-impurity sites ($\bar{\text{w}}_2^{V = 0}$, $\bar{\text{w}}_\uparrow^{V = 0}$). In all cases $L=100$ and $n = 0.8$.} \end{figure} \begin{figure}[t!] \centering \includegraphics[scale=.9]{fig3.pdf} \caption{(a) Entanglement as a function of the concentration of impurities for several temperatures for $n=0.75$. (b) Entanglement as a function of the temperature for several concentrations for $n=0.75$. (c) Entanglement as a function of the concentration for several magnetizations $m=n_\uparrow - n_\downarrow$ for $n=1.0$. In all cases $L=8$, $U=5t$ and $V=-10t$.} \end{figure} Next we analyze the impact of the impurities' concentration on the entanglement for several attractive, Figure 2a, and repulsive, Figure 2b, disorder strengths. In both cases we see the signature of the full Anderson localization for $|V|\gtrsim U$: minimum entanglement at the critical concentration $C_C=100n/2$ for $V<0$ and $C_C=(1-n/2)100$ for $V>0$, with $\mathcal L\rightarrow 0$ for $|V|\rightarrow \infty$. For $|V|\lesssim U$ the minimum at $C_C$ disappears, so the system does not fully localize. We also see the extra minimum at $C^\text{\small *}_C=100n$ for $V<0$ (Fig. 2a) and at $C_C^\text{\small *}=(1-n)100$ for $V>0$ (Fig. 
2b) associated with a Mott-like localization \cite{prb}, in which the effective density is equal to 1 either at the impurity sites (for $V<0$) or at the non-impurity sites (for $V>0$). For attractive disorder this means that the average double occupancy at the impurity sites ($\bar{\text{w}}_2^V$) tends to zero due to the repulsion $U$, while the single-particle probability ($\bar{\text{w}}_\uparrow^V$) tends to a maximum, as confirmed by Figures 2d and 2e (for repulsive disorder, the same holds for the non-impurity sites: $\bar{\text{w}}_2^{V=0}\rightarrow 0$, $\bar{\text{w}}_\uparrow^{V=0}\rightarrow \text{maximum}$). Notice however that the Mott-like MIT requires a minimum amount of disorder to occur. Thus the two entanglement minima $-$ full Anderson and Mott-like localizations $-$ are intrinsically connected through the interplay between interaction and disorder. In Figure 2c one can see that if the interaction is too small compared to the disorder strength ($U\lesssim |V|/2$) only the minimum related to the full Anderson localization persists, while if $U$ is strong compared to $V$ ($U\gtrsim |V|$) only the minimum related to the Mott-like localization holds, with the two minima appearing together only for $U\gtrsim 10t$ and $|V|\gtrsim U$. In Figures 3a and 3b we show the impact of the temperature on both the full Anderson and the Mott-like localization. As the temperature increases the two minima $-$ at $C=100n/2=37.5\%$ (full Anderson) and at $C=100n=75\%$ (Mott-like) $-$ are attenuated. Our results reveal that the full Anderson localization survives to higher temperatures than the Mott-like localization; however for $T=20$ no localization remains in the system, since the entanglement is close to maximal for any concentration. Finally, while all the above calculations were performed for non-magnetized chains, i.e. 
for $n_\uparrow=n_\downarrow=n/2$, in Figure 3c we analyze the impact of the magnetization $m=n_\uparrow - n_\downarrow\neq 0$ on the entanglement minimum related to the full Anderson localization. We find that the minimum at $C_C=100n/2=50\%$ for $m=0$ is now split into two minima: one at $C_C=100n_\uparrow$ and the other at $C_C=100n_\downarrow$. Our results thus reveal that the localization occurs separately for each species, the critical condition corresponding to $N_\sigma=L_V$ for each spin component $\sigma$. Figure 3c also shows, however, that the magnetized systems never reach the full localization: spin degrees of freedom remain due to the unpaired majority species, such that $\mathcal{L}$ saturates at finite values. \section{Conclusion} In summary, we have explored the Mott-Anderson physics by analyzing the entanglement of interacting disordered chains. We find that the interplay between the interaction ($U$) and the disorder strength ($V$) defines the type and the degree of localization. For weak interactions, $U\lesssim |V|/2$, only the full Anderson localization appears, marked by the entanglement approaching zero when $|V|\rightarrow \infty$. In contrast, for weak disorder, $|V|\lesssim U$, only the Mott-like localization holds, associated with an effective density equal to 1. The two types of localization, full Anderson and Mott-like, occur together only when both $U$ and $V$ are strong enough: $U\gtrsim 10t$, $|V|\gtrsim U$. For sufficiently strong interaction, $U\gtrsim |V|$, the entanglement is independent of the disorder potential and very close to that of the clean (non-disordered) case, suggesting that the localization is weak in this case. Our results also show that temperature fades the localization phenomena, but that the full Anderson localization minimum survives to higher temperatures ($T\sim 2$) than the Mott-like localization. 
Finally, we have shown that the entanglement minimum related to the full Anderson localization is split into two when the system is magnetized, one minimum for each spin species, but in this case the localization is weaker due to the remaining spin degrees of freedom, with the entanglement saturating at finite values. \section{Data Availability} The datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request.
\section{\label{sec:intr}Introduction} Since the experimental discovery of the neutron halo phenomenon in $^{11}$Li~\cite{PRL1985Tanihata_55_2676}, the structure of exotic nuclei, especially those close to the neutron or proton drip line, has attracted wide attention in theory and experiment~\cite{PPNP1996PRing_37_193,PR2004Jonson_389_1,PR2005Vretenar_409_101,PPNP2006MengJ_57_470,PPNP2013Tanihata_68_215,JPG2015JMeng_42_093101}. In these drip-line nuclei, since the neutron or proton Fermi surface is very close to the continuum threshold, the valence nucleons can easily be scattered to the single-particle resonant states in the continuum by the pairing correlation, and thus the couplings between the bound states and the continuum become very important~\cite{PRC1996Dobaczewski_53_2809,PRL1996Meng_77_3963,PRL1997Poschl_79_3841,PRC2015Changizi_91_024305,PRC2015Qi_92_051304}. Unexpected properties very different from those of normal nuclei have been observed or predicted, such as the halo phenomenon~\cite{PRL1996Meng_77_3963,PRL1997Poschl_79_3841}, giant halos~\cite{PRL1998MengJ_80_460,PRC2002MengJ_65_041302, PRC2003Sandulescu_68_054323,SCP2003ZhangSQ_46_632,PRC2006TerasakiJ_74_054318,PRC2006Grasso_74_064317}, new magic numbers~\cite{PRL2000Ozawa_84_5493}, and deformed halos~\cite{PRC2010Hamamoto_81_021304,PRC2010Zhou_82_011301,PRC2012Li_85_024312,PLB2014SSZhang_730_30}. Therefore, the resonant states close to the continuum threshold are essential for the investigation of exotic nuclei, and are also closely relevant to the nucleosynthesis of chemical elements in the universe~\cite{PRC2012SSZhang_86_032802,PRC2015Faestermann_92_052802}. Moreover, because the majority of nuclei far from stability are deformed, the interplay between deformation and near-threshold orbitals becomes particularly important as the shell structure evolves with deformation. 
Based on conventional scattering theory, many approaches, such as the R-matrix theory~\cite{PR1947Wigner_72_29,PRL1987Hale_59_763}, K-matrix theory~\cite{PRC1991Humblet_44_2530}, S-matrix theory~\cite{Book1972Taylor-ScatteringTheor,PRC2002CaoLG_66_024311}, the Jost function approach~\cite{PRL2012LuBN_109_072501, PRC2013LuBN_88_024323}, and the scattering phase shift (SPS) method~\cite{Book1972Taylor-ScatteringTheor,PRC2010LiZP_81_034311,SCP2010Li_53_773}, have been developed to study the single-particle resonant states. Meanwhile, techniques for bound states have been extended to the single-particle resonant states, such as the analytical continuation of the coupling constant~(ACCC) method~\cite{Book1989Kukulin-TheorResonance,PRC1997Tanaka_56_562,PRC1999Tanaka_59_1391,PRC2000Cattapan_61_067301,CPL2001YangSC_18_196,PRC2004ZhangSS_70_034308,PRC2005Guo_72_054319,EPJA2007SSZhang_32_43,PRC2012SSZhang_86_032802,EPJA2012SSZhang_48_40,EPJA2013SSZhang_49_77,PLB2014SSZhang_730_30,PRC2015Xu_92_024324}, the real stabilization method~(RSM)~\cite{PRA1970Hazi_1_1109,PRL1993Mandelshtam_70_1932,PRA1999Kruppa_59_3556,NPA2004Hagino_735_55,PRC2008ZhangL_77_014312}, the complex scaling method~(CSM)~\cite{PR1983Ho_99_1,PRC1986Gyarmati_34_95,PRC1988Kruppa_37_383,PRL1997Kruppa_79_2217,PRC2006Arai_74_064311,CPC2010Guo_181_550,PRC2010JYGuo_82_034318,PRC2014ZLZhou_89_034307,PRC2015MShi_92_054313,PRC2012QLiu_86_054312,PRC2014Shi_90_034319,PRC2016Shi_94_024302,EPJA2017Shi_53_40}, the complex-scaled Green's function (CGF) method~\cite{PLB1998Kruppa_431_237,PTP2005Suzuki_113_1273,PRC2016Shi_94_024302,EPJA2017Shi_53_40}, and the complex momentum representation~(CMR) method~\cite{NPA1968Berggren_109_265,PRC1978Kwon_18_932,JPA1979Sukumar_12_1715,PRC2006Hagen_73_034321,PRL2016Li_117_062502,PRC2017Fang_95_024311,PRC2017Tian_95_064329}. 
In particular, the SPS~\cite{PRC2010LiZP_81_034311,SCP2010Li_53_773}, ACCC~\cite{PRC2000Cattapan_61_067301,PLB2014SSZhang_730_30,PRC2015Xu_92_024324}, CSM~\cite{PRC2012QLiu_86_054312,PRC2014Shi_90_034319,PRC2016Shi_94_024302,EPJA2017Shi_53_40}, CMR~\cite{PRC2017Fang_95_024311}, CGF~\cite{PRC2016Shi_94_024302,EPJA2017Shi_53_40}, and RSM~\cite{NPA2004Hagino_735_55} methods have been developed to investigate resonances in deformed systems. The Green's function (GF) method~\cite{PRB1992Tamura_45_3271,PRA2004Foulis_70_022706,Book2006Eleftherios-GF} is also an efficient tool for studying the single-particle resonant states, with the following advantages: it treats the discrete bound states and the continuum on the same footing, provides both the energies and the widths of the resonant states directly, and yields the correct asymptotic behaviors of the wave functions. Both nonrelativistically and relativistically, there are already many applications of the GF method in nuclear physics to study the contribution of the continuum to the ground and excited states. Nonrelativistically, in the spherical case, in 1987, Belyaev \emph{et~al.} constructed the Green's function in the Hartree-Fock-Bogoliubov (HFB) theory in the coordinate representation~\cite{SJNP1987Belyaev_45_783}. Afterwards, Matsuo applied this Green's function to the quasiparticle random-phase approximation (QRPA)~\cite{NPA2001Matsuo_696_371}, which was further used to describe collective excitations coupled to the continuum~\cite{PTPS2002Matsuo_146_110, PRC2005Matsuo_71_064326,NPA2007Matsuo_788_307,PTP2009Serizawa_121_97,PRC2009Mizuyama_79_024313,PRC2010Matsuo_82_024318,PRC2011Shimoyama_84_044317}, the microscopic structures of monopole pair vibrational modes and the associated two-neutron transfer amplitudes in neutron-rich Sn isotopes~\cite{PRC2013Shimoyama_88_054308}, and neutron capture reactions in neutron-rich nuclei~\cite{PRC2015Matsuo_91_034604}. 
Recently, Zhang \emph{et~al.} developed the fully self-consistent continuum Skyrme-HFB theory with the GF method~\cite{PRC2011ZhangY_83_054301,PRC2012YZhang_86_054318}, which was further extended to odd-$A$ nuclei~\cite{arXivSun2013}. In the deformed case, in 2009, Oba \emph{et~al.} extended the continuum HFB theory to include deformation on the basis of a coupled-channel representation and explored the properties of the continuum and pairing correlation in deformed nuclei near the neutron drip line~\cite{PRC2009Oba_80_024301}. Relativistically, in the spherical case, in Refs.~\cite{PRC2009Daoutidis_80_024309,PRC2010DYang_82_054305}, the fully self-consistent relativistic continuum random-phase approximation (RCRPA) was developed with the Green's function of the Dirac equation and used to study the contribution of the continuum to nuclear collective excitations. Recently, we developed the continuum covariant density functional theory (CDFT) based on the GF method and calculated accurate energies and widths of the single-neutron resonant states for the first time~\cite{PRC2014TTSun_90_054321}. This method has been further extended to describe single-particle resonances for protons~\cite{JPG2016TTSun_43_045107} and $\Lambda$ hyperons~\cite{PRC2017Ren_95_054318}. In 2016, further including the pairing correlation, we developed the Green's function relativistic continuum Hartree-Bogoliubov (GF-RCHB) theory, in which the continuum is treated exactly, and studied the giant halo phenomena in neutron-rich Zr isotopes~\cite{Sci2016Sun_46_12006}. In this work, we aim to develop a self-consistent deformed continuum covariant density functional theory with the GF method, which can treat the pairing correlation, deformation, and the continuum in a unified way. 
As the first step, the GF method for a deformed Dirac equation, {\it i.e.}, a deformed GF-Dirac model, will be implemented and applied to an illustrative calculation of single-neutron bound and resonant states in a quadrupole-deformed Woods-Saxon potential. The paper is organized as follows. In Sec.~\ref{sec:form}, we give the formulations of the coupled-channel Dirac equation and the Green's function partial-wave expansion with exact boundary conditions. After the numerical details in Sec.~\ref{sec:Numer}, we present results and discussions in Sec.~\ref{sec:resu}. Finally, a brief summary is given in Sec.~\ref{sec:Sum}. \section{\label{sec:form}Formalism} \subsection{Coupled-channel Dirac equation} In the CDFT~\cite{ANP1986Serot_16_1, RPP1989Reinhard_52_439, PPNP1996PRing_37_193,PR2005Vretenar_409_101, PPNP2006MengJ_57_470}, nucleons are described as Dirac spinors moving in a mean-field potential, and the corresponding Dirac equation is \begin{equation} [{\bm \alpha}\cdot{\bm p}+V({\bm r})+\gamma_0(M+S({\bm r}))]\psi({\bm r})=\varepsilon\psi({\bm r}), \label{Eq:DiracEq} \end{equation} where ${\bm \alpha}$ and $\gamma_0$ are the Dirac matrices, $M$ is the nucleon mass, and $S({\bm r})$ and $V({\bm r})$ are the scalar and vector potentials, respectively. In the following discussions, for simplicity, only axial quadrupole deformation is considered and the potentials take the following form, \begin{subequations} \begin{eqnarray} &&S({\bm r})=S_0(r)+S_2(r)Y_{20}(\theta,\phi),\\ &&V({\bm r})=V_0(r)+V_2(r)Y_{20}(\theta,\phi), \end{eqnarray}% \label{Eq:Potential}% \end{subequations}% where $S_0(r)$ and $V_0(r)$ are the spherical parts and $S_2(r)Y_{20}(\theta,\phi)$ and $V_2(r)Y_{20}(\theta,\phi)$ are the quadrupole parts. 
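For orientation, the deformed potential of Eq.~(\ref{Eq:Potential}) can be sketched numerically. The snippet below uses the Woods-Saxon radial shapes and the parameter values quoted later in the numerical details (Sec.~\ref{sec:Numer}); the deformation $\beta=0.3$ and all function names are illustrative choices of ours, not values used in this work.

```python
import numpy as np

# Sketch of the axially deformed mean field S(r)+V(r) of Eq. (2), with the
# Woods-Saxon radial shapes and parameters quoted in the numerical details
# (S_WS = -420.3 MeV, V_WS = 349.7 MeV, R = 3.705 fm, a = 0.67 fm); the
# deformation beta = 0.3 is an arbitrary illustrative choice.
S_WS, V_WS, R, a, beta = -420.3, 349.7, 3.705, 0.67, 0.3

def f(r):                       # spherical Woods-Saxon form factor
    return 1.0 / (1.0 + np.exp((r - R) / a))

def k(r):                       # k(r) = r df/dr, peaked at the surface
    e = np.exp((r - R) / a)
    return -r * e / (a * (1.0 + e) ** 2)

def Y20(theta):                 # spherical harmonic Y_{20} (m = 0, real)
    return np.sqrt(5.0 / (16.0 * np.pi)) * (3.0 * np.cos(theta) ** 2 - 1.0)

def mean_field(r, theta):       # S(r) + V(r) entering the Dirac equation
    return (S_WS + V_WS) * (f(r) - beta * k(r) * Y20(theta))

# For beta > 0 the surface region is more attractive along the symmetry
# axis (theta = 0) than at the equator, i.e. a prolate potential.
print(mean_field(R, 0.0), mean_field(R, np.pi / 2))
```

The quadrupole term vanishes at the origin (where $k(r)=0$), so the deformation acts mainly in the nuclear surface region.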
For a nucleon in the axially quadrupole-deformed potential, the parity $\pi$ and the angular momentum $z$ component $\Omega$ are good quantum numbers, and the single-particle wave function can be expanded in terms of spherical Dirac spinors, \begin{equation} \psi_\Omega=\sum_{lj} \left( \begin{array}{c} {\displaystyle i \frac{G_{\Omega lj}(r)}{r}}\\ {\displaystyle \frac{F_{\Omega lj}(r)}{r}{\sigma\cdot\hat {\bm r}}} \end{array} \right) Y_{lj\Omega}(\theta,\phi),% \label{Eq:Dspinor}% \end{equation}% where $G_{\Omega lj}(r)/r$ and $F_{\Omega lj}(r)/r$ are, respectively, the radial wave functions for the upper and lower components, and $Y_{lj\Omega}(\theta,\phi)$ are the spinor spherical harmonics~\cite{Book1988Varshalovich-QuantumTheory}. Hereafter, the ``channels'' are labeled by the quantum number $L=(lj)$ for convenience. The Dirac equation~(\ref{Eq:DiracEq}) is then transformed into a coupled-channel form for the radial wave functions, \begin{subequations} \begin{eqnarray} \label{Eq:G} &&\frac{{\rm d}G_{\Omega L}}{{\rm d}r}+\frac{\kappa}{r}G_{\Omega L}-(\varepsilon_\Omega+2M)F_{\Omega L}\nonumber\\ &&~~~~~~+\sum_{L'\lambda}(V_\lambda-S_\lambda)A(\lambda,L',L,\Omega)F_{\Omega L'}=0,\\ \label{Eq:F} &&\frac{{\rm d}F_{\Omega L}}{{\rm d}r}-\frac{\kappa}{r}F_{\Omega L}+\varepsilon_\Omega G_{\Omega L} \nonumber\\ &&~~~~~~-\sum_{L'\lambda}(V_\lambda+S_\lambda)A(\lambda,L',L,\Omega)G_{\Omega L'}=0,% \end{eqnarray}% \label{Eq:DDirac}% \end{subequations}% where $\kappa=(-1)^{j+l+1/2}(j+1/2)$, the index of the potential $\lambda$ takes the values $0$ and $2$ for the spherical and quadrupole parts of the potential, respectively, and $A(\lambda,L',L,\Omega)$ has the form, \begin{eqnarray} &&A(\lambda,L',L,\Omega)= \langle Y_{L\Omega}|Y_{\lambda 0}|Y_{L'\Omega}\rangle\nonumber\\ &&\hspace{2.21cm}=(-1)^{1/2+\Omega}\sqrt{\frac{(2j+1)(2j'+1)}{4\pi}} \nonumber\\ &&\hspace{2.5cm}\times \left( \begin{array}{ccc} j & \lambda & j'\\ -\Omega & 0 & \Omega \end{array} \right) \left( \begin{array}{ccc} j' & 
\lambda & j\\ \frac{1}{2} & 0 & -\frac{1}{2} \end{array} \right). \label{Eq:A-CG} \end{eqnarray}% Note that the single-particle energy in Eq.~(\ref{Eq:DDirac}) is measured with respect to $M$, in contrast to that in Eq.~(\ref{Eq:DiracEq}). The couplings among different channels in Eq.~(\ref{Eq:DDirac}) are governed by the deformed potentials, \begin{equation} v^{\pm}_{LL'}=\sum_{\lambda}(V_\lambda\pm S_\lambda)A(\lambda,L',L,\Omega). \end{equation} In practical calculations, we have to truncate the partial-wave expansion, and we denote by $N$ the number of partial waves included. \subsection{Boundary conditions for the wave functions} To describe the single-particle states properly, the correct asymptotic behaviors at $r\rightarrow 0$ and $r\rightarrow\infty$ must be imposed. At $r\rightarrow 0$, the Dirac spinor is regular and satisfies \begin{eqnarray} \left( \begin{array}{c} G_{\Omega L}(r) \\ F_{\Omega L}(r) \\ \end{array} \right) &\longrightarrow& r\left( \begin{array}{c} j_l(k r) \\ \frac{\kappa}{|\kappa|}\frac{\varepsilon-v^{+}_{LL'}}{k}j_{\tilde{l}}(kr)\\ \end{array} \right)\nonumber\\ &\longrightarrow&\left( \begin{array}{c} \frac{r}{(2l+1)!!}(kr)^l \\ \frac{\kappa}{|\kappa|} \frac{r(\varepsilon-v^{+}_{LL'})}{k(2\tilde{l}+1)!!}(k r)^{\tilde{l}} \\ \end{array} \right), \label{Eq:behavior_r0} \end{eqnarray} where $k^2=(\varepsilon-v^{+}_{LL'})(\varepsilon-v^{-}_{LL'}+2M)>0$, the quantum number $\tilde{l}$ is defined as $\tilde{l}=l+(-1)^{j+l+1/2}$, and $j_l(k r)$ is the spherical Bessel function of the first kind. 
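The angular coupling coefficient $A(\lambda,L',L,\Omega)$ of Eq.~(\ref{Eq:A-CG}) is straightforward to evaluate with exact Wigner $3j$ symbols; a minimal sketch using SymPy (the helper names are our own):

```python
from sympy import Rational, sqrt, pi
from sympy.physics.wigner import wigner_3j

# A(lambda, L', L, Omega) of Eq. (5); jp and j are the total angular momenta
# of the channels L' = (l'j') and L = (lj).
def A(lam, jp, j, Omega):
    return ((-1) ** (Rational(1, 2) + Omega)
            * sqrt((2 * j + 1) * (2 * jp + 1) / (4 * pi))
            * wigner_3j(j, lam, jp, -Omega, 0, Omega)
            * wigner_3j(jp, lam, j, Rational(1, 2), 0, -Rational(1, 2)))

half, j32 = Rational(1, 2), Rational(3, 2)
# lambda = 0 (spherical part): diagonal in the channels, A = 1/sqrt(4*pi)
print(float(A(0, j32, j32, half)))
# lambda = 2 (quadrupole part): vanishes unless the triangle rule holds
print(float(A(2, half, half, half)))
```

The $\lambda=0$ term reproduces $\langle Y_{L\Omega}|Y_{00}|Y_{L\Omega}\rangle=1/\sqrt{4\pi}$, while the $\lambda=2$ term couples channels obeying $|j-j'|\leq 2\leq j+j'$, which is what generates the channel mixing in Eq.~(\ref{Eq:DDirac}).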
At $r\rightarrow\infty$, the Dirac spinor is exponentially decaying for the bound states and oscillating outgoing for the continuum, {\it i.e.}, we have \begin{eqnarray} \left( \begin{array}{c} G_{\Omega L}(r) \\ F_{\Omega L}(r) \\ \end{array} \right) &\longrightarrow& \left( \begin{array}{c} r\sqrt{\frac{2Kr}{\pi}}K_{l+\frac{1}{2}}(Kr) \\ \frac{-Kr}{\varepsilon+2M} \sqrt{\frac{2Kr}{\pi}}K_{\tilde{l}+\frac{1}{2}}(Kr)\\ \end{array} \right)\nonumber\\ &\longrightarrow&\left( \begin{array}{c} 1 \\ -\frac{K}{\varepsilon+2M}\\ \end{array} \right)e^{-Kr}, \label{Eq:behavior_rinf-2} \end{eqnarray} for the Dirac spinor with $\varepsilon<0$ and \begin{eqnarray} \left( \begin{array}{c} G_{\Omega L}(r) \\ F_{\Omega L}(r) \\ \end{array} \right) &\longrightarrow&\left( \begin{array}{c} r h^{(1)}_l(k r) \\ \frac{\kappa}{|\kappa|}\frac{ik r}{\varepsilon+2M}h^{(1)}_{\tilde{l}}(k r) \\ \end{array} \right)\nonumber\\ &\longrightarrow&\left( \begin{array}{c} 1 \\ \frac{\kappa}{|\kappa|} \frac{ik}{\varepsilon+2M} \\ \end{array} \right)e^{ikr}, \label{Eq:behavior_rinf-1} \end{eqnarray} for the Dirac spinor with $\varepsilon >0$. Here, $K_{l+\frac{1}{2}}(Kr)$ is the modified spherical Bessel function with $K^2=-(\varepsilon-v^{+}_{LL'})(\varepsilon-v^{-}_{LL'}+2M)>0$ and $h^{(1)}_l(k r)$ is the spherical Hankel function of the first kind. \subsection{Green's function partial-wave expansion} The Green's function defined for the Dirac equation in the coordinate space obeys \begin{equation} [\varepsilon-\hat{h}({\bm r})]\mathcal{G}({\bm r},{\bm r'};\varepsilon)=\delta({\bm r}-{\bm r'}), \label{Eq:GF-definition} \end{equation} where $\hat{h}({\bm r})$ is the Dirac Hamiltonian and $\varepsilon$ is an arbitrary single-particle energy. 
With a complete set of eigenstates $\psi_i({\bm r})$~($i=\Omega^\pi$) and eigenvalues $\varepsilon_i$ of the Dirac equation, the Green's function can also be represented as \begin{equation} \mathcal{G}({\bm r},{\bm r'};\varepsilon)=\sum_i\frac{\psi_i({\bm r})\psi_{i}^\dag({\bm r'})}{\varepsilon-\varepsilon_i}. \end{equation} Corresponding to the upper and lower components of the Dirac spinor $\psi_{i}({\bm r})$, the Green's function can be written in a $2\times 2$ matrix form, \begin{equation} \mathcal{G}({\bm r},{\bm r'};\varepsilon)= \left( \begin{array}{cc} \mathcal{G}^{(11)}({\bm r},{\bm r'};\varepsilon) & \mathcal{G}^{(12)}({\bm r},{\bm r'};\varepsilon) \\ \mathcal{G}^{(21)}({\bm r},{\bm r'};\varepsilon) & \mathcal{G}^{(22)}({\bm r},{\bm r'};\varepsilon) \\ \end{array} \right). \end{equation} Using the partial-wave expansion, the Green's function with a given $\Omega$ can be expanded as \begin{equation} \mathcal{G}_{\Omega}({\bm r},{\bm r'};\varepsilon)=\sum_{LL'}Y_{L\Omega}(\theta,\phi)\frac{\mathcal{G}_{\Omega LL'}(r,r';\varepsilon)}{rr'}Y_{L'\Omega }^{*}(\theta',\phi'), \label{Eq:GF-expansion} \end{equation} where $\mathcal{G}_{\Omega LL'}(r,r';\varepsilon)$ is the radial Green's function coupling the partial waves $L$ and $L'$. Correspondingly, we can also write $\mathcal{G}_{\Omega LL'}(r,r';\varepsilon)$ in a $2\times2$ matrix form, \begin{equation} \mathcal{G}_{\Omega LL'}(r,r';\varepsilon) = \left( \begin{array}{cc} \mathcal{G}_{~\Omega LL'}^{(11)}(r,r';\varepsilon) & \mathcal{G}_{~\Omega LL'}^{(12)}(r,r';\varepsilon) \\ \mathcal{G}_{~\Omega LL'}^{(21)}(r,r';\varepsilon) & \mathcal{G}_{~\Omega LL'}^{(22)}(r,r';\varepsilon) \\ \end{array} \right), \end{equation} or, taking into account the truncation to $N$ partial waves, in a $2N\times 2N$ matrix form. 
According to the definition of the Green's function for the Dirac equation in Eq.~(\ref{Eq:GF-definition}), it can be easily derived that the radial Green's function $\mathcal{G}_{\Omega LL'}(r,r';\varepsilon)$ satisfies the following coupled-channel equation, \begin{eqnarray} &&\hspace{-0.5cm}\left( \begin{array}{cc} -\varepsilon & {\displaystyle -\frac{d}{dr}+\frac{\kappa}{r}} \\ {\displaystyle \frac{d}{dr}+\frac{\kappa}{r}} & -\varepsilon-2M \\ \end{array} \right)\mathcal{G}_{\Omega LL'}(r,r';\varepsilon)\nonumber\\ &&\hspace{-0.5cm}+\sum_{ L''}\left( \begin{array}{cc} {\displaystyle v^+_{LL''}} & 0 \\ 0 & {\displaystyle v^-_{LL''}} \\ \end{array} \right) \mathcal{G}_{\Omega L''L'}(r,r';\varepsilon) =\frac{\delta(r-r')}{rr'}J,~~ \label{Eq:GF-partial} \end{eqnarray} where \begin{equation} J=\left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \\ \end{array} \right)\otimes I_{N}, \end{equation} with $I_{N}$ being the $N$-dimensional unit matrix. Finally, a Green's function considering the exact asymptotic behaviors of Dirac spinors and satisfying Eq.~(\ref{Eq:GF-partial}) can be constructed as \begin{eqnarray} &&\mathcal{G}_{\Omega LL'}(r,r';\varepsilon)=\nonumber\\ &&\sum_{L''L'''}\left[\phi^{({\rm in})*}_{\Omega L''L}(r,\varepsilon)W^{-1}_{ \Omega L''L'''}\phi^{({\rm out})}_{\Omega L'''L'}(r', \varepsilon)\theta(r'-r)\right.\nonumber\\ &&\hspace{0.22cm}\left.+\phi^{({\rm out})*}_{\Omega L''L}(r,\varepsilon)W^{-1}_{\Omega L'''L''}\phi^{({\rm in})}_{\Omega L'''L'}(r',\varepsilon)\theta(r-r')\right],% \label{Eq:GF_element}% \end{eqnarray}% where $\theta(r-r')$ is the radial step function, $\phi^{({\rm in})}_{\Omega LL''}(r, \varepsilon)$ and $\phi^{({\rm out})}_{\Omega LL''}(r, \varepsilon)$ are two linearly independent Dirac spinors \begin{eqnarray} \phi^{({\rm in})}_{\Omega LL''}(r,\varepsilon)&=&\left( \begin{array}{c} G^{({\rm in})}_{\Omega LL''}(r,\varepsilon) \\ F^{({\rm in})}_{\Omega LL''}(r,\varepsilon) \\ \end{array} \right),\nonumber\\ \phi^{({\rm out})}_{\Omega 
LL''}(r,\varepsilon)&=&\left( \begin{array}{c} G^{({\rm out})}_{\Omega LL''}(r,\varepsilon) \\ F^{({\rm out})}_{\Omega LL''}(r,\varepsilon) \\ \end{array} \right), \end{eqnarray} which are obtained by integrating the following coupled-channel Dirac equation \begin{eqnarray} &&\hspace{-0.5cm}\left( \begin{array}{cc} -\varepsilon & {\displaystyle -\frac{d}{dr}+\frac{\kappa}{r}} \\ {\displaystyle \frac{d}{dr}+\frac{\kappa}{r}} & -\varepsilon-2M \\ \end{array} \right)\left( \begin{array}{c} G_{\Omega LL''}(r, \varepsilon) \\ F_{\Omega LL''}(r, \varepsilon) \\ \end{array} \right) \nonumber \\ &&\hspace{-0.7cm} +\sum_{L'}\left( \begin{array}{cc} {\displaystyle v^+_{LL'}} & 0 \\ 0 & {\displaystyle v^-_{LL'}} \\ \end{array} \right)\left( \begin{array}{c} G_{\Omega L'L''}(r, \varepsilon) \\ F_{\Omega L'L''}(r, \varepsilon) \\ \end{array} \right)=0,% \end{eqnarray}% in the whole $r$ space using a fourth-order Runge-Kutta algorithm starting from the boundary conditions at $r\rightarrow 0$ and $r\rightarrow \infty$, respectively. The Dirac spinor matrices $G_\Omega^{\rm (in/out)}$ and $F_\Omega^{\rm (in/out)}$ take the following form at the boundaries \begin{equation} \hspace{-0.5cm} \left( \begin{array}{c} G^{\rm (in/out)}_{\Omega} \\ F^{\rm (in/out)}_{\Omega} \\ \end{array} \right) \rightarrow\left( \begin{array}{cccc} G^{\rm (in/out)}_{\Omega 11} & 0 & \cdots & 0 \\ 0 & G^{\rm (in/out)}_{\Omega 22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots &G^{\rm (in/out)}_{\Omega NN} \\ F^{\rm (in/out)}_{\Omega 11} & 0 & \cdots & 0 \\ 0 & F^{\rm (in/out)}_{\Omega 22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots &F^{\rm (in/out)}_{\Omega NN} \\ \end{array} \right), \end{equation} where $G^{\rm (in)}_{\Omega LL}$ and $F^{\rm (in)}_{\Omega LL}$ are the asymptotic solutions at $r\to 0$ in Eq. (\ref{Eq:behavior_r0}) and $G^{\rm (out)}_{\Omega LL}$ and $F^{\rm (out)}_{\Omega LL}$ are those at $r\to\infty$ in Eqs. 
(\ref{Eq:behavior_rinf-2}, \ref{Eq:behavior_rinf-1}). In Eq.~(\ref{Eq:GF_element}), $W_{\Omega L''L'''}$ is the Wronskian matrix element \begin{eqnarray} &&W_{\Omega L''L'''}=\sum_{L}\left[G^{({\rm out})}_{\Omega L''L}(r,\varepsilon)F^{({\rm in})*}_{\Omega L'''L}(r,\varepsilon)\right.\nonumber\\ &&\hspace{2.3cm}\left.-F^{({\rm out})}_{\Omega L''L}(r,\varepsilon)G^{({\rm in})*}_{\Omega L'''L}(r,\varepsilon)\right], \end{eqnarray} which is $r$-independent. \subsection{Density of states} In the GF-CDFT model, the single-particle energies of the bound states and the energies and widths of the resonant states can be extracted directly from the density of states $n(\varepsilon)$~\cite{Book2006Eleftherios-GF}, \begin{equation} n(\varepsilon)=\sum_i\delta(\varepsilon-\varepsilon_i), \end{equation} where $\varepsilon_{i}$ is an eigenvalue of the Dirac equation, $\varepsilon$ is a real single-particle energy, and $\sum_{i}$ denotes a summation over the discrete states and an integral over the continuum. For the bound states, the density of states $n(\varepsilon)$ exhibits discrete $\delta$-function peaks at $\varepsilon=\varepsilon_{i}$, while in the continuum $n(\varepsilon)$ has a continuous distribution. By introducing an infinitesimal imaginary part ``$i\epsilon$'' to the energy $\varepsilon$, it can be proved that $n(\varepsilon)$ can be calculated by integrating the imaginary part of the Green's function over the coordinate space~\cite{PRC2014TTSun_90_054321}, \begin{equation} n(\varepsilon)=-\frac{2}{\pi }\int d{\bm r}{\rm Im}[\mathcal{G}^{(11)}({\bm r},{\bm r};\varepsilon+i\epsilon) +\mathcal{G}^{(22)}({\bm r},{\bm r};\varepsilon+i\epsilon)]. \end{equation} The density of states for each $\Omega^\pi$ is \begin{eqnarray} &&n_{\Omega}(\varepsilon)=-\frac{2}{\pi }\sum_L\int d{r}{\rm Im}[\mathcal{G}_{\Omega LL}^{(11)}({r},{r};\varepsilon+i\epsilon)\nonumber \\ &&\hspace{3.3cm}+\mathcal{G}_{\Omega LL}^{(22)}({r},{ r};\varepsilon+i\epsilon)]. 
\label{EQ:DOS} \end{eqnarray} Note that with the infinitesimal imaginary part $``i\epsilon"$ in the single-particle energy, the $\delta$-function-shaped density of states for discrete single-particle states (which have no width) is simulated by a Lorentzian function with a full-width at half-maximum (FWHM) of $2\epsilon$. \section{Numerical details}\label{sec:Numer} In this work, the radial parts of the quadrupole-deformed potentials in the Dirac equation~(\ref{Eq:DiracEq}) are taken in a Woods-Saxon form as follows, \begin{eqnarray} S_{0}(r)&=&S_{\rm WS}f(r),\nonumber\\ V_{0}(r)&=&V_{\rm WS}f(r),\nonumber\\ S_{2}(r)&=&-\beta S_{\rm WS}k(r),\nonumber\\ V_{2}(r)&=&-\beta V_{\rm WS}k(r), \end{eqnarray} with \begin{equation} {\displaystyle f(r)=\frac{1}{1+{\rm{exp}}(\frac{r-R}{a})}~~\text{and}~~k(r)=r\frac{df(r)}{dr}}. \end{equation} To compare the present results with those by the ACCC and SPS methods, we adopt the same parameters for the potentials as in Ref.~\cite{PRC2015Xu_92_024324}, which are determined by reproducing the results for the $p$-wave halo candidate nucleus $^{37}$Mg~\cite{PRL2014Kobayashi_112_242501} by the self-consistent spherical RHB theory with the PC-PK1 parameter set~\cite{PRC2010Zhao_82_054319}. Specifically, the depths of the scalar and vector potentials are chosen as $S_{\rm WS}=-420.3$~MeV and $V_{\rm WS}=349.7$~MeV, respectively, the radius $R=3.705$~fm, the diffuseness $a=0.67$~fm, and $\beta$ is the axial deformation parameter of the potential. The coupled-channel Dirac equation is solved in the radial space with a mesh step of $0.1$~fm and a cutoff at $R_{\rm box}=20$~fm. To calculate the density of states $n_{\Omega}(\varepsilon)$, the parameter $\epsilon$ in Eq.~(\ref{EQ:DOS}) is taken as $1\times10^{-6}$~MeV and the energy step $d\varepsilon$ is $1\times10^{-4}$~MeV. With this energy step, the energies and widths of the single-particle resonant states are determined to an accuracy of $0.1$~keV.
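As a cross-check of these inputs, the Woods-Saxon form factors and potential depths can be evaluated directly. The following is a minimal Python sketch (not part of the published code) that implements $f(r)$ and $k(r)=r\,df/dr$ analytically, with the parameter values quoted above:

```python
import math

# Woods-Saxon parameters from the text (depths in MeV, lengths in fm)
S_WS, V_WS = -420.3, 349.7
R, a = 3.705, 0.67

def f(r):
    """Woods-Saxon form factor f(r) = 1 / (1 + exp((r - R)/a))."""
    return 1.0 / (1.0 + math.exp((r - R) / a))

def k(r):
    """k(r) = r * df/dr, using the analytic derivative of f."""
    e = math.exp((r - R) / a)
    return -r * e / (a * (1.0 + e) ** 2)

def potentials(r, beta):
    """Radial parts (S0, V0, S2, V2) of the quadrupole-deformed potential."""
    return (S_WS * f(r), V_WS * f(r),
            -beta * S_WS * k(r), -beta * V_WS * k(r))
```

At the half-density radius $r=R$ one has $f(R)=0.5$ by construction, and the surface-peaked function $k(r)$ vanishes both at the origin and at large $r$, so the deformed terms act mainly at the nuclear surface.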
\section{\label{sec:resu}Results and discussion} \begin{figure}[!t] \centering \includegraphics[width=0.48\textwidth]{Fig1} \caption{(Color online) Density of states $n_{\Omega}(\varepsilon)$ for neutrons with $\Omega^{\pi}=1/2^+$ (a) and $\Omega^{\pi}=1/2^-$ (b), obtained by solving the coupled-channel Dirac equation using the GF method by setting the deformation parameter $\beta=0$ and the number of coupled partial waves $N=5$~(black-solid lines), in comparison with the results calculated from the spherical GF code (colored-dashed lines). The red-dashed line in the continuum is the total density of states obtained by summing those for five spherical states. The black-dashed line at $\varepsilon=0$~MeV indicates the position of the continuum threshold.} \label{fig1} \end{figure} \begin{table*}[htp!] \center \caption{Neutron single-particle energies $\varepsilon$ for bound states (upper part) and energies and widths $\varepsilon(\Gamma)$ for resonant states (lower part) extracted from the density of states $n_{\Omega}(\varepsilon)$ obtained by solving the coupled-channel Dirac equation (coupled-Dirac) and spherical radial Dirac equation (radial-Dirac) using the GF method. For comparison, the deformation parameter $\beta=0$ and the number of coupled partial waves $N=1$ are chosen in the calculation of the coupled-channel Dirac equation.
All quantities are in MeV.} \label{Tab1} \begin{tabular}{ccccccc} \hline\hline \multirow{2}*{$nl_{j}$ }& \multicolumn{2}{c}{positive parity } &~~~& \multirow{2}*{$nl_{j}$ } & \multicolumn{2}{c}{negative parity }\\ \cline{2-3}\cline{6-7} &coupled-Dirac&radial-Dirac& & &coupled-Dirac&radial-Dirac\\\hline $1s_{1/2}$&$-47.6536$ &$-47.6536$ && $1p_{3/2}$ &$-31.4805$ &$-31.4805$ \\ $1d_{5/2}$&$-15.7474$ &$-15.7474$ && $1p_{1/2}$ &$-27.4582$ &$-27.4582$ \\ $2s_{1/2}$&$-10.3644$ &$-10.3644$ && $1f_{7/2}$ &$-1.5032$ &$-1.5032$ \\ $1d_{3/2}$&$-9.6520$ &$-9.6520$ && $2p_{3/2}$ &$-0.1006$ &$-0.1006$ \\ \hline $1g_{9/2}$ &$9.4879(2.2233)$&$9.4879(2.2233)$&&$2p_{1/2}$&$0.3917(0.3520)$&$0.3917(0.3520)$\\ & & &&$1f_{5/2}$&$3.8856(0.7780)$&$3.8856(0.7780)$\\ \hline\hline \end{tabular} \end{table*} First, a numerical check of the newly developed deformed GF-Dirac code is carried out by comparing the results calculated from the coupled-channel Dirac equation with the deformation parameter set to $\beta=0$ with those from the spherical GF code~\cite{PRC2014TTSun_90_054321}. In Fig.~\ref{fig1}, the density of states $n_{\Omega}(\varepsilon)$ for neutrons with $\Omega^{\pi}=1/2^+$ and $1/2^-$ is plotted as a function of the single-particle energy $\varepsilon$. The $\delta$-function-shaped peaks below the continuum threshold correspond to single-particle bound states, while the spectrum with $\varepsilon>0$ is the continuum. In each panel, the black-solid line denotes the results obtained by solving the coupled-channel Dirac equation with the number of coupled partial waves $N=5$, while the colored-dashed lines are those obtained by solving the spherical radial Dirac equation. In the continuum, the red-dashed line is the total density of states calculated by summing those of the corresponding five states in the spherical Dirac equation.
In panel~(a), the five partial waves for the state $\Omega^{\pi}=1/2^+$ are chosen as $s_{1/2}$, $d_{3/2}$, $d_{5/2}$, $g_{7/2}$, and $g_{9/2}$, and in panel~(b), those for $\Omega^{\pi}=1/2^-$ are $p_{1/2}$, $p_{3/2}$, $f_{5/2}$, $f_{7/2}$, and $h_{9/2}$. Note that a factor of $2/(2j+1)$ is applied when calculating the density of states for the corresponding spherical state in the spherical code, to account for the different degeneracies in the deformed and spherical codes. It is remarkable that all the peaks below the threshold and the spectrum in the continuum obtained by the deformed and spherical codes are completely consistent. \begin{figure}[tp!] \centering \includegraphics[width=0.45\textwidth]{Fig2} \caption{(Color online) Density of states $n_{\Omega}(\varepsilon)$ for neutrons with $\Omega^{\pi}=1/2^\pm, 3/2^\pm, 5/2^\pm, 7/2^\pm$ obtained by solving the coupled-channel Dirac equation with a quadrupole-deformed Woods-Saxon potential using the GF method (blue-solid lines). The deformation parameter $\beta=0.47$ and the number of coupled partial waves $N=8$ are chosen in the calculations. Above the continuum threshold denoted by the black-dashed line, the results are compared with those for free neutrons calculated with potentials $V(r)=S(r)=0$ (red-solid lines).} \label{fig2} \end{figure} \begin{table*}[!ht] \center \caption{Energies and widths $\varepsilon(\Gamma)$ of the single-neutron resonant states $\Omega[Nn_z\Lambda]$ obtained by solving the coupled-channel Dirac equation with a quadrupole-deformed Woods-Saxon potential using the GF method, in comparison with those from the ACCC and SPS approaches. The deformation parameter $\beta=0.47$ and the number of coupled partial waves $N=8$ are chosen in all the calculations. All quantities are in MeV.
} \label{Tab2} \begin{tabular}{ccccc} \hline\hline positive parity&$1/2[440]$ &$ 3/2[431]$ &$5/2[422]$ &$7/2[413]$ \\\hline GF &$2.4577(1.2302)$ &$4.7227(3.5229)$ &$6.9033(3.1913)$ &$10.0278(2.8531)$ \\ ACCC &$1.5015(2.8036)$ &$4.1096(2.1137)$ &$7.0017(2.5306)$ &$10.1481(3.5502)$ \\ SPS &$1.7000(3.4072)$ &$4.4800(3.5874)$ &$7.0600(2.5575)$ &$10.2000(4.2735)$ \\ \hline\hline negative parity&$1/2[301]$ &$3/2[301]$ &$5/2[303]$ &$7/2[303]$ \\\hline GF &$0.4038(0.2454)$ &$2.9801(1.3166)$ &$8.0261(5.2618)$ &$3.0749(0.3915)$\\ ACCC &$0.4616(0.1909)$ &$2.5477(1.9764)$ &$8.2797(5.2905)$ &$3.1054(0.2958)$\\ SPS &$0.3750(0.3438)$ &$2.9500(1.0724)$ &$8.2200(5.9613)$ &$3.1000(0.3158)$\\ \hline\hline \end{tabular} \end{table*} \begin{figure}[t!] \centering \includegraphics[width=0.45\textwidth]{Fig3} \caption{(Color online) Energies $\varepsilon$ and widths $\Gamma$ of the single-neutron resonant states in the quadrupole-deformed Woods-Saxon potential with $\beta=0.47$ obtained by the GF method, in comparison with those by the ACCC and SPS approaches.} \label{fig3} \end{figure} From the density of states, the energies for the bound states and the energies ($\varepsilon$) and widths ($\Gamma$) for the resonant states can be extracted. Here, $\varepsilon$ and $\Gamma$ are defined as the positions and FWHM of the resonant peaks in the difference between the density of states for neutrons in the mean-field potential and that for free neutrons~\cite{PRC2014TTSun_90_054321}. In Table~\ref{Tab1}, we list in the upper part the single-particle energies $\varepsilon$ for bound states, and in the lower part the energies and widths $\varepsilon(\Gamma)$ for resonant states obtained by solving the coupled-channel Dirac equation (coupled-Dirac), in comparison with the results from solving the spherical radial Dirac equation (radial-Dirac).
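The extraction of $\varepsilon$ and $\Gamma$ as the position and FWHM of a resonant peak can be illustrated with a toy Python sketch. Here an isolated Lorentzian peak with the GF values quoted for the $1/2[440]$ state is assumed; this is illustrative only, not the actual analysis code:

```python
import math

def lorentzian_dos(e, e_res, gamma):
    """Density of states of an isolated resonance: a Lorentzian of FWHM gamma,
    normalized to unit integral."""
    return (gamma / (2.0 * math.pi)) / ((e - e_res) ** 2 + (gamma / 2.0) ** 2)

def extract_peak(energies, dos):
    """Return (position, FWHM) of the highest peak on a uniform energy grid."""
    i_max = max(range(len(dos)), key=lambda i: dos[i])
    half = dos[i_max] / 2.0
    # walk left and right from the maximum to the half-maximum crossings
    lo = i_max
    while lo > 0 and dos[lo] > half:
        lo -= 1
    hi = i_max
    while hi < len(dos) - 1 and dos[hi] > half:
        hi += 1
    return energies[i_max], energies[hi] - energies[lo]

# Toy grid mimicking the energy step of 1e-4 MeV quoted in the text
de = 1e-4
grid = [i * de for i in range(60000)]  # 0 to 6 MeV
dos = [lorentzian_dos(e, 2.4577, 1.2302) for e in grid]
e_res, width = extract_peak(grid, dos)
```

With the fine grid, the recovered position and width agree with the input values to within a few grid steps, consistent with the 0.1~keV accuracy quoted in the numerical details.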
For comparison, the deformation parameter $\beta=0$ and the number of coupled partial waves $N=1$ are chosen in the calculation of the coupled-channel Dirac equation. From Table~\ref{Tab1}, it can be seen that all the energies and widths obtained by the two codes are exactly equal. In short, the results in Fig.~\ref{fig1} and Table~\ref{Tab1} demonstrate that the present model is fully correct in the decoupled case with $\beta=0$. Second, we further check the newly developed model by comparing the results with those from the ACCC and SPS methods \cite{PRC2015Xu_92_024324} for a specific deformed potential with $\beta=0.47$ and the number of coupled partial waves $N=8$. In Fig.~\ref{fig2}, the density of states $n_{\Omega}(\varepsilon)$ for neutrons with $\Omega^{\pi}=1/2^\pm, 3/2^\pm, 5/2^\pm$, and $7/2^\pm$ is plotted as a function of single-neutron energy $\varepsilon$, obtained by solving the coupled-channel Dirac equation with a quadrupole-deformed potential. As in Fig.~\ref{fig1}, the $\delta$-function-shaped peaks below the continuum threshold correspond to bound states, while the spectrum with $\varepsilon>0$ is the continuum. By comparing the density of states for neutrons (blue-solid lines) and those for free neutrons obtained with potentials $V(r)=S(r)=0$ (red-solid lines) in the continuum, one can readily identify the resonant states. It is clear that the density of states $n_{\Omega}(\varepsilon)$ for the resonant states sits atop that for free particles. Accordingly, single-neutron resonant states are observed in all the $\Omega^\pi$ blocks. From the density of states in Fig.~\ref{fig2}, we extract the energies $\varepsilon$ and widths $\Gamma$ of single-neutron resonant states, which are listed in Table~\ref{Tab2} and also plotted on the $\varepsilon$-$\Gamma$ plane in Fig.~\ref{fig3}. For comparison, the results calculated by the ACCC and SPS methods~\cite{PRC2015Xu_92_024324} are also shown.
All the states here are labeled by the Nilsson quantum numbers $\Omega[Nn_z\Lambda]$, in which $N$ is the principal quantum number, $n_z$ is the number of nodes of the wave function in the $z$ direction, and $\Lambda$ is the projection of the orbital angular momentum $l$ onto the $z$ axis. In Table~\ref{Tab2}, it can be seen that the energies and widths of the single-neutron resonant states obtained by solving the coupled-channel Dirac equation using the GF method are in excellent agreement with the ACCC and SPS methods for all the single-neutron resonant states except the very broad $1/2[440]$ state. In Fig.~\ref{fig3}, good agreement between the GF method and the other two methods is also seen for most of the resonant states. However, the GF and SPS methods do not predict resonant states with widths greater than $6$~MeV, which is due to the large mixing of the low-$l$ ($p$-wave) components~\cite{PRC2010LiZP_81_034311}. Overall, according to the results in Table~\ref{Tab2} and Fig.~\ref{fig3}, the resonant states predicted by the GF method in a quadrupole-deformed coupled-channel Dirac equation are reliable. \begin{figure}[t!] \centering \includegraphics[width=0.5\textwidth]{Fig4} \caption{(Color online) Single-neutron levels $\Omega[N n_{z} \Lambda]$ as a function of deformation parameter $\beta$ in a quadrupole-deformed Woods-Saxon potential.} \label{fig4} \end{figure} Finally, we apply the deformed GF-Dirac model to investigate the recently reported halo candidate nucleus $^{37}$Mg~\cite{PRL2014Kobayashi_112_242501}, in which the interplay between deformation and orbital structure near threshold is very important. In Fig.~\ref{fig4}, the single-neutron levels for bound and resonant orbitals are presented as a function of the deformation parameter $\beta$ in a quadrupole-deformed Woods-Saxon potential.
In the spherical case with $\beta=0$, clear energy gaps at $N=8$ and $20$ are obtained, while the $N=28$ gap disappears as the energy difference between the states $1f_{7/2}$ and $2p_{3/2}$ is only $\sim1.5$~MeV. This is consistent with the results obtained by the ACCC calculations in Ref.~\cite{PRC2015Xu_92_024324} and the systematic calculations on $N=28$ isotones by the triaxial relativistic Hartree-Bogoliubov model with the DD-PC1 density functional~\cite{PRC2011Li_84_054304}. To study the halo phenomena, the weakly bound states and low-lying resonant states near the threshold are of great importance, such as the levels $1/2[321]$, $3/2[312]$, and $1/2[301]$ in Fig.~\ref{fig4}. In particular, a crossing between the configurations $1/2[321]$ and $5/2[312]$ occurs at a deformation of approximately $0.5$, which may enhance the probability of occupying the $1/2[321]$ orbital originating from $2p_{3/2}$ and explain the recent observation of a $p$-wave one-neutron halo configuration in $^{37}$Mg~\cite{PRL2014Kobayashi_112_242501}. Similar conclusions are obtained in the studies using the ACCC method~\cite{PRC2015Xu_92_024324} and the coupled-channel Schr\"{o}dinger equation by Hamamoto~\cite{PRC2007Hamamoto_76_054319}, though some differences still exist. In our and Hamamoto's calculations, the resonant states $1/2[310]$ and $3/2[312]$ disappear for large deformation. Moreover, in Hamamoto's calculation, the $1/2[301]$ state is not obtained. \section{\label{sec:Sum}Summary} In this work, we apply the Green's function method to investigate the resonant states in a quadrupole-deformed Woods-Saxon potential by solving a coupled-channel Dirac equation for the first time. The detailed formalism for constructing the Green's function in the coupled-channel representation is presented in coordinate space. To verify the deformed GF-Dirac code, numerical checks are carried out in two steps.
First, we solve the coupled-channel Dirac equation with the deformation parameter $\beta=0$ and compare the obtained density of states, as well as the extracted energies and widths for resonant states, with those from solving the spherical radial Dirac equation. Exactly consistent results obtained with the two methods demonstrate that the deformed GF-Dirac model is fully correct in the decoupled case. Second, for a specific deformed potential with $\beta=0.47$, we perform calculations using the GF method and compare the results with those obtained by the ACCC and SPS methods. The agreement among these three methods indicates that the GF method is reliable for predicting resonant states in a quadrupole-deformed potential. Finally, as an application, we investigate the recently reported halo candidate nucleus $^{37}$Mg~\cite{PRL2014Kobayashi_112_242501} and present the evolution of the single-neutron levels for bound and resonant states as a function of the deformation parameter. The pattern of the Nilsson levels calculated by the deformed GF-Dirac model is consistent with that from the deformed ACCC method~\cite{PRC2015Xu_92_024324} and the coupled-channel Schr\"{o}dinger equation~\cite{PRC2007Hamamoto_76_054319}. It is found that a crossing between the configurations $1/2[321]$ and $5/2[312]$ occurs at a deformation of approximately $0.5$, which may enhance the probability of occupying the $1/2[321]$ orbital originating from $2p_{3/2}$ and explain the observation of a $p$-wave one-neutron halo configuration in $^{37}$Mg. \begin{acknowledgements} This work was partly supported by the National Natural Science Foundation of China (Grant Nos.~11505157, 11475140, 11375015, 11175002, and 11335002) and the Physics Research and Development Program of Zhengzhou University (Grant No.~32410017). \end{acknowledgements}
\section{INTRODUCTION} The $\psi(3770)$ resonance is the lowest-energy charmonium state above the threshold for decay to charmed meson pairs. The expectation that the $\psi(3770)$ should decay predominantly to $D^0\bar{D}^0$ and $D^+D^-$ has been validated by experiment~\cite{Olive:2016xmw}, although inconsistent results for the branching fraction of $\psi(3770)$ to non-$D\bar{D}$ final states have been reported~\cite{Ablikim:2008zzb,Besson:2005hm}. The cross sections $\sigma(e^+e^- \rightarrow D^0\bar{D}^0$) and $\sigma(e^+e^- \rightarrow D^+D^-$) at center-of-mass energy $E_{\rm{cm}}=$3.773~GeV, the peak of the $\psi(3770)$ resonance, can be measured precisely and are necessary input for normalizing some measurements of charmed meson properties in $\psi(3770)$ decays. The most precise determinations to date are from the CLEO-c Collaboration~\cite{Bonvicini:2014ab} using 818~pb$^{-1}$ of $e^+e^-$ annihilation data at $E_{\rm{cm}}=3774 \pm 1$~MeV, $\sigma(e^+e^- \rightarrow D^0\bar{D}^0)=(3.607 \pm 0.017 \pm 0.056)$~nb and $\sigma(e^+e^- \rightarrow D^+D^-)=(2.882 \pm 0.018 \pm 0.042)$~nb. In this paper we report measurements of the $D\bar{D}$ cross sections using fully reconstructed $D^0$ and $D^+$ mesons in a $\psi(3770)$ data sample that is approximately $3.6$ times larger than CLEO-c's. Here and throughout this paper, charge-conjugate modes are implied unless explicitly stated. Our procedure is an application of the $D$-tagging technique developed by the MARK~III Collaboration~\cite{Baltrusaitis:1985iw}, exploiting the kinematics of $D\bar{D}$ production just above threshold at the $\psi(3770)$ resonance. We use ratios of fully reconstructed $D$ mesons (``single tags'') and $D \bar{D}$ events (``double tags'') to determine the total numbers of $D \bar{D}$ pairs. 
This procedure benefits from the cancellation of systematic uncertainties associated with efficiencies and input branching fractions, giving better overall precision than measurements based on single tags. The production of $D^0\bar{D}^0$ pairs in a pure $C=-1$ state complicates the interpretation of measurements at $\psi(3770)$ by introducing correlations between the $D^0$ and $\bar{D}^0$ decays. We apply corrections derived by Asner and Sun~\cite{Asner:2005wf} to remove the bias introduced by these correlations. \section{BESIII DETECTOR} Our measurement has been made with the BESIII detector at the BEPCII collider of the Institute for High Energy Physics in Beijing. Data were collected at the $\psi(3770)$ peak, with $E_{\rm{cm}}=3.773$~GeV. The integrated luminosity of this sample has previously been determined with large-angle Bhabha scattering events to be 2.93~fb$^{-1}$~\cite{Ablikim:2014gna,Ablikim:2015orh}, with an uncertainty of 0.5\% dominated by systematic effects. An additional data sample of 44.9~pb$^{-1}$ at $E_{\rm{cm}}=3.650$~GeV has been used to assess potential background from continuum production under the $\psi(3770)$. BESIII is a general-purpose magnetic spectrometer with a geometrical acceptance of 93\% of $4\pi$. Charged particles are reconstructed in a 43-layer helium-gas-based drift chamber~(MDC), which has an average single-wire resolution of 135~$\mu$m. A uniform axial magnetic field of 1~T is provided by a superconducting solenoid, allowing the precise measurement of charged particle trajectories. The resolution varies as a function of momentum, and is 0.5\% at 1.0~GeV/$c$. The MDC is also instrumented to measure the specific ionization ($dE/dx$) of charged particles for particle identification. Additional particle identification is provided by a time-of-flight system (TOF) constructed as a cylindrical (``barrel") structure with two 5-cm-thick plastic-scintillator layers and two ``end caps'' with one 5-cm layer. 
The time resolution in the barrel is approximately 80~ps, and in the end caps it is 110~ps. Just beyond the TOF is an electromagnetic calorimeter~(EMC) consisting of 6240 CsI(Tl) crystals, also configured as a barrel and two end caps. For 1.0-GeV photons, the energy resolution is 2.5\% in the barrel and it is 5\% in the end caps. This entire inner detector resides in the solenoidal magnet, which is supported by an octagonal flux-return yoke instrumented with resistive-plate counters interleaved with steel for muon identification (MUC). More detailed information on the design and performance of the BESIII detector can be found in Ref.~\cite{Ablikim:2009aa}. \section{TECHNIQUE} To select a $D\bar{D}$ event, we fully reconstruct a $D$ using tag modes that have sizable branching fractions and can be reconstructed with good efficiency and reasonable background. We use three $D^0$ and six $D^+$ tag modes: $D^0 \to K^- \pi^+$, $D^0 \to K^- \pi^+ \pi^0$, $D^0 \to K^- \pi^+ \pi^+ \pi^-$, $D^+ \to K^-\pi^+ \pi^+$, $D^+ \to K^- \pi^+ \pi^+ \pi^0$, $D^+ \to K_S^0 \pi^+$, $D^+ \to K_S^0 \pi^+ \pi^0$, $D^+ \to K_S^0 \pi^+ \pi^+ \pi^-$, and $D^+ \to K^- K^+ \pi^+$. When both the $D$ and $\bar{D}$ in an event decay to tag modes we can fully reconstruct the entire event. These double-tag events are selected when the event has two single tags and satisfies the additional requirements that the reconstructed single tags have opposite net charge, opposite-charm $D$ parents and no shared tracks. The yield $X_{i}$ for single-tag mode $i$ is given by Eq.~\autoref{eq:stagnd}: \begin{equation} X_{i} = N_{D\bar{D}} \cdot \mathcal{B}(D\to i)\cdot \epsilon_{i}, \label{eq:stagnd} \end{equation} \noindent where $N_{D\bar{D}}$ is the total number of $D\bar{D}$ events, $\mathcal{B}(D\to i)$ is the branching fraction for decay mode $i$, and $\epsilon_{i}$ is the reconstruction efficiency for the mode, determined with Monte Carlo (MC) simulation. 
Extending this reasoning, the yields for $\bar{D}$ decaying to mode $j$ and for $ij$ double-tag events, in which the $D$ decays to mode $i$ and the $\bar{D}$ decays to mode $j$, are given as follows: \begin{equation} Y_{j} = N_{D\bar{D}}\cdot \mathcal{B}(\bar{D}\to j)\cdot \epsilon_{j} \label{eq:stagndbar} \end{equation} \noindent and \begin{equation} Z_{ij}=N_{D\bar{D}}\cdot \mathcal{B}(D\to i)\cdot \mathcal{B}(\bar{D}\to j)\cdot \epsilon_{ij}. \label{eq:dtagnd} \end{equation} \noindent In these equations, $Z_{ij}$ is the yield for the double-tag mode $ij$, and $\epsilon_{ij}$ is the efficiency for reconstructing both tags in the same event. Combining \autoref{eq:stagnd}, \autoref{eq:stagndbar}, and \autoref{eq:dtagnd}, $N_{D\bar{D}}$ can be expressed as \begin{equation} N_{D\bar{D}} = \frac{X_{i}\cdot Y_{j}\cdot \epsilon_{ij}}{Z_{ij}\cdot \epsilon_{i}\cdot \epsilon_{j}}. \end{equation} \noindent The cancellation of systematic uncertainties occurs through the ratio of efficiencies $\epsilon_{ij}/(\epsilon_i \cdot \epsilon_j)$. The measured $N_{D\bar{D}}$ values from each combination of $i$ and $j$ are then averaged, weighted by their statistical uncertainties. Finally, to determine cross sections we divide $N_{D\bar{D}}$ by the integrated luminosity $\mathcal{L}$ of the $\psi(3770)$ sample, $\sigma(e^+e^- \rightarrow D\bar{D}) =N_{D\bar{D}}/{\mathcal{L}}$. \section{PARTICLE RECONSTRUCTION} Detection efficiencies and backgrounds for this analysis have been studied with detailed simulations of the BESIII detector based on GEANT4~\cite{Agostinelli:2002hh}. High-statistics MC samples were produced for generic $D^0\bar{D}^0$ and $D^+D^-$ decays from $\psi(3770)$, $q\bar{q} \to\text{light hadrons}$ $(q = u, d$ or $s)$, $\tau^+\tau^-$, and radiative return to $J/\psi$ and $\psi(3686)$.
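The tag-counting relations introduced at the start of this section can be illustrated with a small numerical sketch. The Python example below uses invented yields and efficiencies constructed to be internally consistent (they are not the measured values); it shows that the branching fractions cancel and the efficiencies enter only as the ratio $\epsilon_{ij}/(\epsilon_i\cdot\epsilon_j)$:

```python
# Invented inputs for one (i, j) tag-mode pair, constructed so that
# X_i = N * B_i * eps_i,  Y_j = N * B_j * eps_j,  Z_ij = N * B_i * B_j * eps_ij
N_true = 2.0e7           # assumed true number of D Dbar pairs
B_i, B_j = 0.039, 0.039  # assumed branching fractions
eps_i, eps_j = 0.63, 0.64
eps_ij = 0.42

X_i = N_true * B_i * eps_i
Y_j = N_true * B_j * eps_j
Z_ij = N_true * B_i * B_j * eps_ij

# Invert: the branching fractions cancel exactly, and the single-tag
# efficiencies cancel against the double-tag efficiency ratio.
N_DD = X_i * Y_j * eps_ij / (Z_ij * eps_i * eps_j)

lumi_pb = 2.93e3         # integrated luminosity in pb^-1 (from the text)
sigma_pb = N_DD / lumi_pb
```

In the real analysis, one such estimate is formed for every $(i,j)$ pair and the results are averaged, weighted by their statistical uncertainties.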
The $D^0\bar{D}^0$, $D^+D^-$, $q\bar{q}$, and $\tau^+\tau^-$ states were generated using KKMC~\cite{Jadach:1999vf,Jadach:2000ir}, while the $\gamma J/\psi$ and $\gamma \psi(3686)$ were generated with EvtGen~\cite{Lange:2001uf}. All were then decayed with EvtGen, except for the $q\bar{q}$ and $\tau^+\tau^-$, which were modeled with the LUNDCHARM~\cite{Chen:2000tv} and the TAUOLA~\cite{taula26,Jadach:1999vf} generators, respectively. Data and MC samples are treated identically for the selection of $D$ tags. All particles used to reconstruct a candidate must pass requirements specific to the particle type. Charged particles are required to be within the fiducial region for reliable tracking ($|\cos\theta| < 0.93$, where $\theta$ is the polar angle relative to the beam direction) and to pass within 1~cm (10~cm) of the interaction point in the plane transverse to the beam direction (along the beam direction). Particle identification is based on TOF and $dE/dx$ measurements, with the identity as a pion or kaon assigned based on which hypothesis has the higher probability. To be selected as a photon, an EMC shower must not be associated with any charged track~\cite{Ablikim:2015ycp}, must have an EMC hit time between $0$ and $700$~ns to suppress activity that is not consistent with originating from the collision event, must have an energy of at least $25$~MeV if it is in the barrel region of the detector ($|\cos\theta| < 0.8$), and $50$~MeV if it is in the end cap region ($0.84 < |\cos\theta| < 0.92$) to suppress noise in the EMC as a potential background to real photons. Showers in the transition region between the barrel and end cap are excluded. $K_S^0$ mesons are reconstructed from the decay into $\pi^+\pi^-$. Because of the cleanliness of the selection and the possibility of a measurably displaced decay vertex, the pions are not required to pass the usual particle identification or interaction-point requirements. 
A fit is performed with the pions constrained to a common vertex and the $K_S^0$ candidate is accepted if the fit satisfies $\chi^2<100$ and the candidate mass is within $\sim3\sigma$ of the nominal $K_S^0$ mass ($487-511$~MeV/$c^2$). The momentum of the $K_S^0$ that is obtained from the constrained-vertex fit is used for the subsequent reconstruction of $D$-tag candidates. $\pi^0$ mesons are reconstructed through the decay into two photons. Both photons for a $\pi^0$ candidate must pass the above selection criteria, and at least one of them must be in the barrel region of the detector. To be accepted a $\pi^0$ candidate must have an invariant mass between $115$~MeV/$c^2$ and $150$~MeV/$c^2$. The photons are then refitted with a $\pi^0$ mass constraint and the resulting $\pi^0$ momentum is used for the reconstruction of $D$-tag candidates. \section{EVENT SELECTION} In addition to the requirements on the final-state particles, the reconstructed $D$-tag candidates must pass several additional requirements that ensure the measured candidate energy and momentum are close to the expected values for production via $\psi(3770) \to D\bar{D}$. The first of these requirements is $\Delta E = E_D - E_{\rm{beam}}\simeq 0$, where $E_D$ is the energy of the reconstructed $D$ candidate and $E_{\rm{beam}}$ is the beam energy. In calculating $\Delta E$ we use the beam energy calibrated with $D^0$ and $D^+$ decays, combining groups of nearby runs to obtain sufficient statistics. Selection requirements on $\Delta E$ are determined separately for each tag mode for data and MC to account for differing resolutions. As shown in Table~\ref{table:delecuts}, for modes decaying into all charged tracks, the requirements are set to $\pm 3\sigma$ about the mean, while for modes with a $\pi^0$, the requirements are asymmetric about the mean, extending on the low side to $-4\sigma$ to accommodate the tail from the photon energy resolution. 
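The mode-dependent window logic can be written compactly. In the Python sketch below, the helper function is hypothetical (not from the analysis code), and the example numbers are the data values for two $D^0$ modes from Table~\ref{table:delecuts}:

```python
def delta_e_window(sigma, mean, has_pi0):
    """Return (lower, upper) bounds on Delta E in MeV.

    Modes with a pi0 use an asymmetric window extending to -4 sigma on the
    low side; all-charged modes use a symmetric +-3 sigma window.
    """
    n_low = 4.0 if has_pi0 else 3.0
    return (mean - n_low * sigma, mean + 3.0 * sigma)

# Data values: D0 -> K- pi+ (no pi0) and D0 -> K- pi+ pi0 (with pi0)
win_kpi = delta_e_window(9.4, -0.8, has_pi0=False)
win_kpipi0 = delta_e_window(15.4, -7.6, has_pi0=True)
```

The extended lower bound for $\pi^0$ modes keeps the low-side tail from the photon energy resolution inside the accepted window.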
\begin{table*}[htbp] \caption{The selected range on $\Delta E$ is $\pm3\sigma$ about the mean, except that for modes with a $\pi^0$ an extended lower bound of $-4\sigma$ is used. The resolutions and means are extracted by fitting with a double Gaussian, weighted by the two Gaussian yields, and determined separately for data and MC. } \label{table:delecuts} \begin{center} \small \begin{tabular}{l|c|c|c|c} \hline \multicolumn{1}{c|}{} & \multicolumn{ 2}{c|}{MC} & \multicolumn{2}{c}{Data} \\ \hline \multicolumn{1}{c|}{Tag mode} & \multicolumn{1}{c|}{$\sigma$ (MeV)} & \multicolumn{1}{c|}{Mean (MeV)} & \multicolumn{1}{c|}{$\sigma$ (MeV)} & \multicolumn{1}{c}{Mean (MeV)} \\ \hline $D^0 \to K^- \pi^+$ & $7.6$ & $-0.4$ & $9.4$ & $-0.8$ \\ \hline $D^0 \to K^- \pi^+ \pi^0$ & $14.1$ & $-7.6$ & $15.4$ & $-7.6$ \\ \hline $D^0 \to K^- \pi^+ \pi^+ \pi^-$ & $8.2$ & $-1.4$ & $9.8$ & $-2.0$ \\ \hline $D^+ \to K^- \pi^+ \pi^+$ & $7.2$ & $-0.9$ & $8.6$ & $-1.2$ \\ \hline $D^+ \to K^- \pi^+ \pi^+ \pi^0$ & $12.8$ & $-6.9$ & $13.7$ & $-6.9$ \\ \hline $D^+ \to K^{0}_{S} \pi^+$ & $6.7$ & $0.4$ & $8.4$ & $-0.1$ \\ \hline $D^+ \to K^{0}_{S} \pi^+ \pi^0$ & $14.6$ & $-7.7$ & $16.2$ & $-7.9$ \\ \hline $D^+ \to K^{0}_{S} \pi^+ \pi^+ \pi^-$ & $8.2$ & $-1.1$ & $10.4$ & $-1.7$ \\ \hline $D^+ \to K^+ K^- \pi^+$ & $6.2$ & $-1.1$ & $7.2$ & $-1.5$ \\ \hline \end{tabular} \end{center} \end{table*} Figure~\ref{fig:deltae} shows the data and MC overlays of the $\Delta E$ distributions by mode. \begin{center} \includegraphics[width=8.5cm]{DelEOverlays.pdf} \figcaption {\label{fig:deltae}(color online) $\Delta E$ line shape for various single-tag mode (arbitrarily scaled). Starting from the top left, the modes are: (a) $D^0 \to K^- \pi^+$, (b) $D^0 \to K^- \pi^+ \pi^0$, (c) $D^0 \to K^- \pi^+ \pi^+ \pi^-$, (d) $D^+ \to K^- \pi^+ \pi^+$, (e) $D^+ \to K^- \pi^+ \pi^+ \pi^0$, (f) $D^+ \to K_S^0 \pi^+$, (g) $D^+ \to K_S^0 \pi^+ \pi^0$, (h) $D^+ \to K_S^0 \pi^+ \pi^+ \pi^-$, and (i) $D^+ \to K^+ K^- \pi^+$. 
These plots overlay the 3.773~GeV data (blue dashed histograms) and the corresponding narrower-width MC (red solid histograms). Only requirements on the constituent particles and a very loose $\text{M}_{\text{BC}}$ requirement ($1.83$~GeV/$c^2$ $\leq\text{M}_{\text{BC}}\leq 1.89$~GeV/$c^2$) have been applied.} \end{center} The second variable used in selecting $D$ tags is the beam-constrained mass $\text{M}_{\text{BC}} c^{2} = \sqrt{E_{\rm{beam}}^2 - |\textbf{p}_{\rm{tag}}c|^2}$, where $\textbf{p}_{\rm{tag}}$ is the 3-momentum of the candidate $D$. We use $\text{M}_{\text{BC}}$ rather than the invariant mass because of the excellent precision with which the beam energy is known. The requirement that $\text{M}_{\text{BC}}$ be close to the known $D$ mass ensures that the $D$ tag has the expected momentum. After application of the $\Delta E$ requirement to single-tag candidates of a given mode, we construct an $\text{M}_{\text{BC}}$ distribution in the region of the known masses of charmed mesons ($1.83-1.89$~GeV/$c^2$). For the MC a small upward shift of just under 1~MeV/$c$ is applied to the measured $D$ momentum for the calculation of $\text{M}_{\text{BC}}$ to compensate for input parameters that do not precisely match data. Initial inspection of the distribution in data for the two-body mode $D^0 \to K^- \pi^+$ exhibited peaking near the high end of the $\text{M}_{\text{BC}}$ range not seen in MC. We demonstrated this to be background from cosmic ray and QED events. To eliminate it from the distribution, additional requirements are applied in selecting $D^0 \to K^- \pi^+$ candidates with exactly two charged tracks. 
We veto these events if they satisfy at least one of the following conditions: TOF information consistent with a cosmic ray event, particle identification information consistent with an $e^+e^-$ hypothesis, two tracks with EMC energy deposits consistent with an $e^+e^-$ hypothesis, or either track with particle identification and MUC information consistent with being a muon. \section{YIELDS AND EFFICIENCIES} The $\text{M}_{\text{BC}}$ distribution for single-tag candidates for each mode is fitted with an MC-derived signal shape and an ARGUS function background~\cite{Albrecht:1990am}. The signal shape is convolved with a double Gaussian with a common mean to allow for differences in $\text{M}_{\text{BC}}$ resolution between data and MC. Charge-conjugate modes are fitted simultaneously with the double-Gaussian signal-shape parameters constrained to be the same and the normalizations and background parameters allowed to vary independently in the fit. Peaking backgrounds contributed by decay modes that have similar final states to the signal mode are included in the signal shape, although the yields are corrected after the fit to count only true signal events. An example $\text{M}_{\text{BC}}$ fit is shown in Fig.~\ref{fig:fullparamstagfits}. (The full set of fits is provided in the Appendix.) In events with multiple single-tag candidates, the best candidate is chosen per mode and per charm to be the one with the smallest $|\Delta E|$. Based on the fit results, tight mode-dependent requirements on $\Delta E$ are applied. To determine the tag yield, the $\text{M}_{\text{BC}}$ histogram is integrated within the signal region, $1.8580$ GeV/$c^2\le\text{M}_{\text{BC}}\le1.8740$ GeV/$c^2$ for $D^0$ modes and $1.8628$ GeV/$c^2\le\text{M}_{\text{BC}}\le1.8788$ GeV/$c^2$ for $D^+$ modes, and then the analytic integral of the ARGUS function in this region is subtracted.
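This cut-and-count step can be sketched as follows. The Python example uses the standard ARGUS shape with an assumed endpoint, slope, and normalization, together with an assumed histogram integral for the $D^0$ signal window; none of these numbers are the fitted values:

```python
import math

def argus(m, m0, c, norm):
    """Standard ARGUS background shape with kinematic endpoint m0 (GeV/c^2)."""
    x = 1.0 - (m / m0) ** 2
    if x <= 0.0:
        return 0.0  # the shape vanishes at and above the endpoint
    return norm * m * math.sqrt(x) * math.exp(c * x)

def integral(fn, lo, hi, n=1000):
    """Simple trapezoidal integral of fn over [lo, hi]."""
    h = (hi - lo) / n
    s = 0.5 * (fn(lo) + fn(hi))
    s += sum(fn(lo + i * h) for i in range(1, n))
    return s * h

# Illustrative numbers: D0 signal window from the text, assumed ARGUS
# parameters (endpoint near the beam energy) and an assumed window count.
m0, c, norm = 1.8865, -20.0, 1.0e6
lo, hi = 1.8580, 1.8740
counts_in_window = 265000.0
bkg = integral(lambda m: argus(m, m0, c, norm), lo, hi)
signal_yield = counts_in_window - bkg
```

In the analysis itself, the ARGUS integral over the signal region is evaluated analytically from the fitted parameters rather than numerically as in this sketch.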
The efficiency for each of the 18 single-tag modes is found by using MC truth information to determine the total number generated for the denominator and using the same cut-and-count method as used for data to determine the numerator. The single-tag yields and efficiencies are summarized in Table~\ref{table:stagyieldseffs}, where the efficiencies include branching fractions for $\pi^0\to\gamma\gamma$ and $K_S^0\to\pi^+\pi^-$ decays. \begin{center} \includegraphics[width=8.5cm]{ST_mbc_201Plus.pdf} \figcaption {\label{fig:fullparamstagfits}(color online) $\text{M}_{\text{BC}}$ fit for single-tag mode $D^+ \to K^- \pi^+ \pi^+ \pi^0$, from data. Blue dash-dot (green dashed) line represents the total fit (the fitted background shape) and the red solid curve corresponds to the fitted signal shape.} \end{center} \begin{table*}[htbp] \begin{center} \tabcaption{\label{table:stagyieldseffs} Single-tag yields after subtracting their corresponding peaking backgrounds from data and efficiencies from MC, as described in the text. 
The uncertainties are statistical only.} \scalebox{0.85} { \begin{tabular}{l|rcr|rcl||l|rcr|rcl} \hline \multicolumn{1}{c|}{Tag mode} & \multicolumn{3}{c|}{Yield} & \multicolumn{3}{c||}{Efficiency (\%)} & \multicolumn{1}{c|}{Tag mode} & \multicolumn{3}{c|}{Yield} & \multicolumn{3}{c}{Efficiency (\%)} \\ \hline $D^0 \to K^- \pi^+ $ & $260,915$ & $\pm$ & $520$ & $63.125$ & $\pm$ &$0.007$ & $\bar{D}^0 \to K^+ \pi^-$ & $262,356$ & $\pm$ & $522$ & $64.272$ & $\pm$ & $0.006$ \\ \hline $D^0 \to K^- \pi^+ \pi^0 $ & $537,923$ & $\pm$ & $845$ & $35.253$ & $\pm$ & $0.007$ & $\bar{D}^0 \to K^+ \pi^- \pi^0$ & $544,252$ & $\pm$ & $852$ & $35.761$ & $\pm$ & $0.007$ \\ \hline $D^0 \to K^- \pi^+ \pi^+ \pi^- $ & $346,583$ & $\pm$ & $679$ & $38.321$ & $\pm$ & $0.007$ & $\bar{D}^0 \to K^+ \pi^+ \pi^- \pi^-$ & $351,573$ & $\pm$ & $687$ & $39.082$ & $\pm$ & $0.007$ \\ \hline $D^+ \to K^- \pi^+ \pi^+$ & $391,786$ & $\pm$ & $653$ & $50.346$ & $\pm$ & $0.005$ & $D^- \to K^+ \pi^- \pi^-$ & $394,749$ & $\pm$ & $656$ & $51.316$ & $\pm$ & $0.005$ \\ \hline $D^+ \to K^- \pi^+ \pi^+ \pi^0$ & $124,619$ & $\pm$ & $529$ & $26.138$ & $\pm$ & $0.014$ & $D^- \to K^+ \pi^- \pi^- \pi^0$ & $128,203$ & $\pm$ & $539$ & $26.586$ & $\pm$ & $0.015$ \\ \hline $D^+ \to K^{0}_{S} \pi^+$ & $48,185$ & $\pm$ & $229$ & $36.726$ & $\pm$ & $0.008$ & $D^- \to K^{0}_{S} \pi^-$ & $47,952$ & $\pm$ & $228$ & $36.891$ & $\pm$ & $0.008$ \\ \hline $D^+ \to K^{0}_{S} \pi^+ \pi^0$ & $114,919$ & $\pm$ & $471$ & $20.687$ & $\pm$ & $0.011$ & $D^- \to K^{0}_{S} \pi^- \pi^0$ & $116,540$ & $\pm$ & $472$ & $20.690$ & $\pm$ & $0.011$ \\ \hline $D^+ \to K^{0}_{S} \pi^+ \pi^+ \pi^-$ & $63,018$ & $\pm$ & $421$ & $21.966$ & $\pm$ & $0.019$ & $D^- \to K^{0}_{S} \pi^+ \pi^- \pi^-$ & $62,982$ & $\pm$ & $421$ & $21.988$ & $\pm$ & $0.019$ \\ \hline $D^+ \to K^+ K^- \pi^+$ & $34,416$ & $\pm$ & $258$ & $41.525$ & $\pm$ & $0.042$ & $D^- \to K^+ K^- \pi^-$ & $34,434$ & $\pm$ & $257$ & $41.892$ & $\pm$ & $0.042$ \\ \hline \end{tabular} } 
\end{center} \end{table*} Double tags are fully reconstructed events in which both the $D$ and the $\bar {D}$ pass the selection criteria for one of the tag modes. In events with multiple double-tag candidates, the best candidate per mode combination per event is chosen with the $[\text{M}_{\text{BC}}(D) + \text{M}_{\text{BC}}(\bar{D})]/2$ closest to the known $D$ mass. Following a procedure similar to the single-tag counting, we fit the two-dimensional distribution of $\text{M}_{\text{BC}}(\bar{D})$ vs. $\text{M}_{\text{BC}}(D)$ for the selected single-tag modes to define the signal region for a cut-and-count determination of the double-tag yield. A more sophisticated treatment of the background is required because of the correlations between the tags. The signal shape is again derived from MC, using truth information and including peaking backgrounds with the signal. We found that convolving the MC shape with smearing functions to account for the small data/MC resolution difference did not appreciably improve the accuracy of the tag yields, so no signal smearing is included in the double-tag fits. The background shapes in the double-tag fits correspond to four possible ways of mis-reconstructing an event, as shown in Fig.~\ref{fig:twodsbsubtraction}. A direct product of a MC-derived signal shape with an analytic ARGUS function background, with shape parameters fixed to those of the corresponding single-tag fit, is used to represent the background contributed by events with a correctly reconstructed $D$ and incorrectly reconstructed $\bar{D}$. The background shape for the charm-conjugate case is similarly constructed. For completely reconstructed continuum events or fully reconstructed but mispartitioned $D\bar{D}$ events (with particles assigned incorrectly to the $D$ and $\bar{D}$), a direct product of a double-Gaussian function and an ARGUS function rotated by $45^{\circ}$ is used. 
The kinematic limit and exponent parameters of the rotated ARGUS function are fixed, while the slope parameter is allowed to be free in the fit. Finally, the remaining background events with neither $D$ nor ${\bar D}$ correctly reconstructed are modeled with a direct product of two ARGUS functions, with parameters taken from the corresponding single-tag fits. An example fit to data is shown in Fig.~\ref{fig:detailedtagfit}. (The full set of fits is provided in the Appendix.) \begin{center} \includegraphics[width=8.5cm]{TwoDmBCSBsubtraction.pdf} \figcaption {\label{fig:twodsbsubtraction}(color online) The two-dimensional $\text{M}_{\text{BC}}$ plane divided into regions dominated by signal and various backgrounds. These regions represent the shapes used in the double-tag fitting method and sideband corrections described in the text.} \end{center} After the two-dimensional fit is performed, the $\text{M}_{\text{BC}}$ histogram is integrated within the same signal region as the single-tag fits, and the integrals of the four background shapes are subtracted from this total. The resultant double-tag yields and efficiencies, which include branching fractions for $\pi^0\to\gamma\gamma$ and $K_S^0\to\pi^+\pi^-$ decays, are summarized in Tables~\ref{table:dtagd0yieldseffs} and \ref{table:dtagdpyieldseffs}. \begin{table*}[htbp] \caption{$D^0\bar{D}^0$ double-tag yields from data and efficiencies from MC, as described in the text. 
The uncertainties are statistical only.} \label{table:dtagd0yieldseffs} \begin{center} \begin{tabular}{l|rcr|rcl} \hline \multicolumn{1}{c|}{Tag mode} & \multicolumn{3}{c|}{Yield} & \multicolumn{3}{c}{Efficiency (\%)} \\ \hline $D^0 \to K^- \pi^+ $~vs.~$ \bar{D}^0 \to K^+ \pi^-$ & $6,545$ & $\pm$ & $81$ & $42.58$ & $\pm$&$0.13$ \\ \hline $D^0 \to K^- \pi^+ $~vs.~$ \bar{D}^0 \to K^+ \pi^- \pi^0$ & $14,701$ & $\pm$ & $122$ & $24.90$ & $\pm$ & $0.06$ \\ \hline $D^0 \to K^- \pi^+ $~vs.~$ \bar{D}^0 \to K^+ \pi^+ \pi^- \pi^-$ & $9,096$ & $\pm$ & $96$ & $25.54$ & $\pm$ & $0.08$ \\ \hline $D^0 \to K^- \pi^+ \pi^0 $~vs.~$ \bar{D}^0 \to K^+ \pi^-$ & $14,526$ & $\pm$ & $122$ & $24.94$ & $\pm$ & $0.06$ \\ \hline $D^0 \to K^- \pi^+ \pi^0 $~vs.~$ \bar{D}^0 \to K^+ \pi^- \pi^0$ & $30,311$ & $\pm$ & $176$ & $13.94$ & $\pm$ & $0.03$ \\ \hline $D^0 \to K^- \pi^+ \pi^0 $~vs.~$ \bar{D}^0 \to K^+ \pi^+ \pi^- \pi^-$ & $18,651$ & $\pm$ & $139$ & $14.35$ & $\pm$ & $0.03$ \\ \hline $D^0 \to K^- \pi^+ \pi^+ \pi^- $~vs.~$ \bar{D}^0 \to K^+ \pi^-$ & $8,988$ & $\pm$ & $96$ & $25.77$ & $\pm$ & $0.08$ \\ \hline $D^0 \to K^- \pi^+ \pi^+ \pi^- $~vs.~$ \bar{D}^0 \to K^+ \pi^- \pi^0$ & $18,635$ & $\pm$ & $139$ & $14.32$ & $\pm$ & $0.03$ \\ \hline $D^0 \to K^- \pi^+ \pi^+ \pi^- $~vs.~$ \bar{D}^0 \to K^+ \pi^+ \pi^- \pi^-$ & $11,572$ & $\pm$ & $110$ & $14.86$ & $\pm$ & $0.04$ \\ \hline \end{tabular} \end{center} \end{table*} \begin{table*}[htbp] \caption{$D^+D^-$ double-tag yields from data and efficiencies from MC, as described in the text. 
The uncertainties are statistical only.} \label{table:dtagdpyieldseffs} \begin{center} \begin{tabular}{l|rcr|rcl} \hline \multicolumn{1}{c|}{Tag mode} & \multicolumn{3}{c|}{Yield} & \multicolumn{3}{c}{Efficiency (\%)} \\ \hline $D^+ \to K^- \pi^+ \pi^+ $~vs.~$ D^- \to K^+ \pi^- \pi^-$ & $18,800$ & $\pm$ & $138$ & $26.02$ & $\pm$ & $0.05$ \\ \hline $D^+ \to K^- \pi^+ \pi^+ $~vs.~$ D^- \to K^+ \pi^- \pi^- \pi^0$ & $5,981$ & $\pm$ & $80$ & $13.62$ & $\pm$ & $0.05$ \\ \hline $D^+ \to K^- \pi^+ \pi^+ $~vs.~$ D^- \to K^{0}_{S} \pi^-$ & $2,368$ & $\pm$ & $49$ & $18.45$ & $\pm$ & $0.12$ \\ \hline $D^+ \to K^- \pi^+ \pi^+ $~vs.~$ D^- \to K^{0}_{S} \pi^- \pi^0$ & $5,592$ & $\pm$ & $75$ & $10.51$ & $\pm$ & $0.04$ \\ \hline $D^+ \to K^- \pi^+ \pi^+ $~vs.~$ D^- \to K^{0}_{S} \pi^+ \pi^- \pi^-$ & $2,826$ & $\pm$ & $53$ & $10.82$ & $\pm$ & $0.06$ \\ \hline $D^+ \to K^- \pi^+ \pi^+ $~vs.~$ D^- \to K^+ K^- \pi^-$ & $1,597$ & $\pm$ & $40$ & $20.87$ & $\pm$ & $0.15$ \\ \hline $D^+ \to K^- \pi^+ \pi^+ \pi^0 $~vs.~$ D^- \to K^+ \pi^- \pi^-$ & $6,067$ & $\pm$ & $80$ & $13.48$ & $\pm$ & $0.05$ \\ \hline $D^+ \to K^- \pi^+ \pi^+ \pi^0 $~vs.~$ D^- \to K^+ \pi^- \pi^- \pi^0$ & $1,895$ & $\pm$ & $53$ & $6.79$ & $\pm$ & $0.06$ \\ \hline $D^+ \to K^- \pi^+ \pi^+ \pi^0 $~vs.~$ D^- \to K^{0}_{S} \pi^-$ & $693$ & $\pm$ & $26$ & $9.82$ & $\pm$ & $0.11$ \\ \hline $D^+ \to K^- \pi^+ \pi^+ \pi^0 $~vs.~$ D^- \to K^{0}_{S} \pi^- \pi^0$ & $1,726$ & $\pm$ & $44$ & $5.22$ & $\pm$ & $0.04$ \\ \hline $D^+ \to K^- \pi^+ \pi^+ \pi^0 $~vs.~$ D^- \to K^{0}_{S} \pi^+ \pi^- \pi^-$ & $857$ & $\pm$ & $33$ & $5.41$ & $\pm$ & $0.06$ \\ \hline $D^+ \to K^- \pi^+ \pi^+ \pi^0 $~vs.~$ D^- \to K^+ K^- \pi^-$ & $549$ & $\pm$ & $24$ & $10.78$ & $\pm$ & $0.15$ \\ \hline $D^+ \to K^{0}_{S} \pi^+ $~vs.~$ D^- \to K^+ \pi^- \pi^-$ & $2,352$ & $\pm$ & $48$ & $18.96$ & $\pm$ & $0.12$ \\ \hline $D^+ \to K^{0}_{S} \pi^+ $~vs.~$ D^- \to K^+ \pi^- \pi^- \pi^0$ & $722$ & $\pm$ & $27$ & $9.80$ & $\pm$ & $0.12$ \\ \hline $D^+ \to 
K^{0}_{S} \pi^+ $~vs.~$ D^- \to K^{0}_{S} \pi^-$ & $269$ & $\pm$ & $16$ & $13.95$ & $\pm$ & $0.27$ \\ \hline $D^+ \to K^{0}_{S} \pi^+ $~vs.~$ D^- \to K^{0}_{S} \pi^- \pi^0$ & $678$ & $\pm$ & $26$ & $7.67$ & $\pm$ & $0.10$ \\ \hline $D^+ \to K^{0}_{S} \pi^+ $~vs.~$ D^- \to K^{0}_{S} \pi^+ \pi^- \pi^-$ & $383$ & $\pm$ & $20$ & $7.90$ & $\pm$ & $0.13$ \\ \hline $D^+ \to K^{0}_{S} \pi^+ $~vs.~$ D^- \to K^+ K^- \pi^-$ & $191$ & $\pm$ & $14$ & $15.2$ & $\pm$ & $0.34$ \\ \hline $D^+ \to K^{0}_{S} \pi^+ \pi^0 $~vs.~$ D^- \to K^+ \pi^- \pi^-$ & $5,627$ & $\pm$ & $75$ & $10.64$ & $\pm$ & $0.04$ \\ \hline $D^+ \to K^{0}_{S} \pi^+ \pi^0 $~vs.~$ D^- \to K^+ \pi^- \pi^- \pi^0$ & $1,708$ & $\pm$ & $43$ & $5.28$ & $\pm$ & $0.04$ \\ \hline $D^+ \to K^{0}_{S} \pi^+ \pi^0 $~vs.~$ D^- \to K^{0}_{S} \pi^-$ & $624$ & $\pm$ & $25$ & $7.67$ & $\pm$ & $0.10$ \\ \hline $D^+ \to K^{0}_{S} \pi^+ \pi^0 $~vs.~$ D^- \to K^{0}_{S} \pi^- \pi^0$ & $1,557$ & $\pm$ & $40$ & $4.08$ & $\pm$ & $0.03$ \\ \hline $D^+ \to K^{0}_{S} \pi^+ \pi^0 $~vs.~$ D^- \to K^{0}_{S} \pi^+ \pi^- \pi^-$ & $747$ & $\pm$ & $28$ & $4.26$ & $\pm$ & $0.05$ \\ \hline $D^+ \to K^{0}_{S} \pi^+ \pi^0 $~vs.~$ D^- \to K^+ K^- \pi^-$ & $503$ & $\pm$ & $23$ & $8.51$ & $\pm$ & $0.13$ \\ \hline $D^+ \to K^{0}_{S} \pi^+ \pi^+ \pi^- $~vs.~$ D^- \to K^+ \pi^- \pi^-$ & $2,857$ & $\pm$ & $53$ & $11.01$ & $\pm$ & $0.06$ \\ \hline $D^+ \to K^{0}_{S} \pi^+ \pi^+ \pi^- $~vs.~$ D^- \to K^+ \pi^- \pi^- \pi^0$ & $924$ & $\pm$ & $34$ & $5.44$ & $\pm$ & $0.06$ \\ \hline $D^+ \to K^{0}_{S} \pi^+ \pi^+ \pi^- $~vs.~$ D^- \to K^{0}_{S} \pi^-$ & $313$ & $\pm$ & $18$ & $7.72$ & $\pm$ & $0.13$ \\ \hline $D^+ \to K^{0}_{S} \pi^+ \pi^+ \pi^- $~vs.~$ D^- \to K^{0}_{S} \pi^- \pi^0$ & $778$ & $\pm$ & $29$ & $4.17$ & $\pm$ & $0.05$ \\ \hline $D^+ \to K^{0}_{S} \pi^+ \pi^+ \pi^- $~vs.~$ D^- \to K^{0}_{S} \pi^+ \pi^- \pi^-$ & $468$ & $\pm$ & $24$ & $4.28$ & $\pm$ & $0.06$ \\ \hline $D^+ \to K^{0}_{S} \pi^+ \pi^+ \pi^- $~vs.~$ D^- \to K^+ K^- \pi^-$ & $246$ & $\pm$ 
& $18$ & $8.96$ & $\pm$ & $0.19$ \\ \hline $D^+ \to K^+ K^- \pi^+ $~vs.~$ D^- \to K^+ \pi^- \pi^-$ & $1,576$ & $\pm$ & $40$ & $21.31$ & $\pm$ & $0.16$ \\ \hline $D^+ \to K^+ K^- \pi^+ $~vs.~$ D^- \to K^+ \pi^- \pi^- \pi^0$ & $509$ & $\pm$ & $23$ & $10.41$ & $\pm$ & $0.15$ \\ \hline $D^+ \to K^+ K^- \pi^+ $~vs.~$ D^- \to K^{0}_{S} \pi^-$ & $185$ & $\pm$ & $14$ & $14.48$ & $\pm$ & $0.33$ \\ \hline $D^+ \to K^+ K^- \pi^+ $~vs.~$ D^- \to K^{0}_{S} \pi^- \pi^0$ & $468$ & $\pm$ & $22$ & $8.23$ & $\pm$ & $0.13$ \\ \hline $D^+ \to K^+ K^- \pi^+ $~vs.~$ D^- \to K^{0}_{S} \pi^+ \pi^- \pi^-$ & $232$ & $\pm$ & $18$ & $8.62$ & $\pm$ & $0.19$ \\ \hline $D^+ \to K^+ K^- \pi^+ $~vs.~$ D^- \to K^+ K^- \pi^-$ & $156$ & $\pm$ & $16$ & $16.46$ & $\pm$ & $0.53$ \\ \hline \end{tabular} \end{center} \end{table*} We must correct the yields determined with the $\text{M}_{\text{BC}}$ fits (data and MC) for contributions from background processes that peak in the signal region. Such backgrounds come from other $D$ decays with similar kinematics and particle compositions as the specific signal mode. We rely on MC, generated with world-average branching fractions~\cite{Olive:2016xmw}, to determine the fraction of peaking background events, as well as to calculate their selection efficiencies. We apply MC-determined corrections for these in every case where more than 0.01\% of the fitted yield is attributable to peaking background. The largest contribution of peaking background is for $D^+ \to K^0_S \pi^+ \pi^+ \pi^-$, approximately 2.5\% of the fitted yield. $D^0 \to K^- \pi^+ \pi^+ \pi^-$ and $D^+ \to K^0_S \pi^+ \pi^0$ both have $\sim 2.0\%$ of their fitted yields from peaking backgrounds, and all other modes have less than $1.0\%$. 
Because the peaking backgrounds come from well-understood processes, such as doubly Cabibbo-suppressed modes, simultaneous misidentification of both a pion and a kaon in an event, and charged pion pairs not from $K^0_S$ decays that pass the $K^0_S$ invariant mass requirement, we are confident that they are well modeled by the MC. The analysis described above results in a set of measured values of $N_{D\bar{D},ij}$, the number of $D\bar{D}$ events determined with the single- and double-tag yields of positive tag mode $i$ and negative tag mode $j$. The uncertainties are highly mode dependent because of branching fractions, efficiencies, and backgrounds, so these measurements must be combined into an uncertainty-weighted mean, taking into account correlations within and between the mode-specific measurements. We use an analytic procedure for this and demonstrated its reliability with a toy MC study. \begin{center} \includegraphics[width=8.5cm]{DT_mbc.pdf} \figcaption {\label{fig:detailedtagfit}(color online) Example two-dimensional $\text{M}_{\text{BC}}$ double-tag fit from data as described in the text, for tag mode $K^+\pi^-\pi^-$ vs. $K^-\pi^+\pi^+\pi^0$. The top left figure is a scatter plot of the data and the top right is a scatter plot of the fit to the data. The bottom two plots are overlays of data and the fit projected onto the positive and negative charm $\text{M}_{\text{BC}}$ axes. The red dashed (blue solid) lines represent the total fits (the fitted signal shapes) and the solid green curves are the fitted background shapes. The magenta curve corresponds to the case when $D^-\to K^+\pi^-\pi^-$ is reconstructed correctly, while $D^+\to K^-\pi^+\pi^+\pi^0$ is not.} \end{center} For our full $2.93~\mathrm{fb}^{-1}$ $\psi(3770)$ data sample we find $N_{D^0\bar{D}^0}=(10,621\pm29)\times10^3$ and $N_{D^+D^-}=(8,296\pm31)\times10^3$. 
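The per-pair determination follows the standard double-tag relation, in which the tag-mode branching fractions cancel. A minimal numerical sketch (an illustration only, using the $K^\mp\pi^\pm$ central values from the yield and efficiency tables above; the peaking-background and quantum-correlation corrections discussed in the text are not applied here):

```python
def n_ddbar(s_i, s_j, d_ij, eff_i, eff_j, eff_ij):
    """Double-tag estimate of the number of produced D-Dbar pairs.

    With N pairs produced, mode i gives S_i = N*B_i*eff_i single tags,
    mode j gives S_j = N*B_j*eff_j, and D_ij = N*B_i*B_j*eff_ij double
    tags, so the branching fractions B cancel in

        N = S_i * S_j * eff_ij / (D_ij * eff_i * eff_j),

    and detector systematics largely cancel in eff_ij / (eff_i * eff_j).
    """
    return s_i * s_j * eff_ij / (d_ij * eff_i * eff_j)

# D0 -> K-pi+ vs. Dbar0 -> K+pi- (central values from the tables):
n = n_ddbar(260915, 262356, 6545, 0.63125, 0.64272, 0.4258)
# Reproduces N(D0 Dbar0) ~ 1.06e7 to within a few percent; the residual
# difference reflects the corrections omitted in this sketch.
```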
Using the integrated luminosity from Ref.~\cite{Ablikim:2015orh}, we obtain observed cross sections for $D {\bar D}$ production at the $\psi(3770)$ of $\sigma(e^+e^- \rightarrow D^0\bar{D}^0)=(3.623 \pm 0.010)$~nb and $\sigma(e^+e^- \rightarrow D^+D^-)=(2.830 \pm 0.011)$~nb. Here, the uncertainties are statistical only. The summed $\chi^2$ values relative to the mean for all pairs of tag modes are $13.2$ for $D^0\bar{D}^0$ ($9$ modes) and $53.6$ for $D^+D^-$ ($36$ modes). We verified the reliability of our yield measurements with an ``In vs. Out'' test with MC by randomly partitioning our MC (signal and background) into ten statistically independent data-sized sets. We determined single- and double-tag yields for these subsamples, calculated $N_{D\bar{D}}$ for each, and compared these to the true values. The overall $\chi^2$ for these ten tests was $10.7$ for $N_{D^0\bar{D}^0}$ and $12.4$ for $N_{D^+D^-}$, demonstrating that our procedure reliably determines both $N_{D\bar{D}}$ and its statistical uncertainty. In a second test, the data sample was partitioned in time into five subsamples of approximately 0.5~fb$^{-1}$ each, and we measured $\sigma(e^+e^- \rightarrow D^0\bar{D}^0)$ and $\sigma(e^+e^- \rightarrow D^+D^-)$ for each. The values of $\chi^2$ for the hypothesis of equal values for all intervals were $5.4$ and $6.0$, respectively. \section{EFFECTS OF QUANTUM CORRELATIONS} As mentioned earlier in this paper, the $D^0 \bar {D}^0$ yield and cross section must be corrected for correlations introduced by production through a pure $C=-1$ state at the $\psi(3770)$. Asner and Sun~\cite{Asner:2005wf} provide correction factors that can be applied directly to our measured yields, with Eq.~(\ref{eq:qceqn1}) for $D^0\to f$ and $\bar{D}^0\to f'$ and Eq.~(\ref{eq:qceqn2}) for the case $f=f'$. 
\begin{equation} N_{D^0\bar{D}^0}^{\rm{measured}} = N_{D^0\bar{D}^0}^{\rm{true}}\times (1+r_f\tilde{y}_f+r_{f'}\tilde{y}_{f'}+r_fr_{f'}v^-_{ff'})\\ \label{eq:qceqn1} \end{equation} \begin{equation} N_{D^0\bar{D}^0}^{\rm{measured}} = N_{D^0\bar{D}^0}^{\rm{true}}\times(1+2r_f\tilde{y}_f-r^2_f(2-z^2_f))\\ \label{eq:qceqn2} \end{equation} The quantities appearing in these equations can be expressed in terms of measured parameters of $D^0$ decays and $D^0\bar{D}^0$ mixing, with $v^-_{jk} = (z_jz_k - w_jw_k)/2$, $z_j = 2\cos{\delta_j}$ and $w_j = 2\sin{\delta_j}$. $r_j$ and $\delta_j$ are defined by $\langle j|\bar{D}^0\rangle/\langle j|D^0\rangle = r_je^{i\delta_j}$, where $r_j = |\langle j|\bar{D}^0\rangle/\langle j|D^0\rangle|$, and $\delta_j$ is the average strong phase difference for the Cabibbo-favored tag mode. The usual mixing parameters $x$ and $y$, which are related to the differences in masses and lifetimes of the two mass eigenstates, enter through $\tilde{y}_j = y\cos{\delta_j} + x\sin{\delta_j}$. The $D^0\to K^-\pi^+\pi^0$ and $K^-\pi^+\pi^+\pi^-$ tag modes require a slightly more complicated treatment because they are mixtures of modes with different phases. This requires introducing coherence factors $R_j$ to characterize the variation of $\delta_j$ over phase space, with $z_j$ and $w_j$ being redefined as $z_j = 2R_j\cos{\delta_j}$ and $w_j = 2R_j\sin{\delta_j}$~\cite{TQCA2}. Table~\ref{tab:qcinputs} shows the input parameters that are used to obtain the correction factors and Fig.~\ref{fig:D0xsec_eachmodes} shows the corrections to $\sigma(e^+e^- \rightarrow D^0\bar{D}^0)$ for each of the nine double-tag modes, along with the average. The overall effect is a relative change in $N_{D^0\bar{D}^0}$ of approximately $-0.2\%$, with final corrected values of $N_{D^0\bar{D}^0} = (10,597 \pm 28) \times 10^3$ and $\sigma(e^+e^- \rightarrow D^0\bar{D}^0)=(3.615 \pm 0.010)$~nb. The uncertainties are statistical only. 
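As a numerical illustration of Eq.~(\ref{eq:qceqn2}), the following sketch evaluates the correction factor for the same-mode tag $K^-\pi^+$ vs. $K^+\pi^-$ using the central values of the inputs listed in Table~\ref{tab:qcinputs} (illustrative only; parameter uncertainties and correlations are ignored):

```python
import math

def qc_factor_same_mode(x, y, r2, delta_deg):
    """Factor multiplying N_true in Eq. (2) for a same-mode tag (f = f'):
    1 + 2*r*ytilde - r^2*(2 - z^2), with z = 2*cos(delta) and
    ytilde = y*cos(delta) + x*sin(delta)."""
    d = math.radians(delta_deg)
    r = math.sqrt(r2)
    z = 2.0 * math.cos(d)
    ytilde = y * math.cos(d) + x * math.sin(d)
    return 1.0 + 2.0 * r * ytilde - r2 * (2.0 - z * z)

# Kpi vs. Kpi with the central values of x, y, r_Kpi^2, and delta_Kpi:
factor = qc_factor_same_mode(x=0.0037, y=0.0066, r2=0.00349, delta_deg=11.8)
# factor ~ 1.007: before correction, the measured Kpi-vs-Kpi yield
# overstates N_true by roughly 0.7% for this mode pair.
```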
The summed $\chi^2$ value relative to the mean for all pairs of tag modes is $11.8$ for $D^0\bar{D}^0$ ($9$ modes). \begin{center} \tabcaption{\label{tab:qcinputs} Input parameters for the quantum correlation corrections. } \def\1#1#2#3{\multicolumn{#1}{#2}{#3}} \scalebox{1.0} { \begin{tabular}{l l } \hline $x = 0.0037\pm0.0016$ & \cite{hfag}\\ $y = 0.0066^{+0.0007}_{-0.0010}$ & \cite{hfag}\\ $r_{K\pi}^2 = 0.00349\pm0.00004$ & \cite{hfag}\\ $\delta_{K\pi} = (11.8^{+9.5}_{-14.7})$\degree & \cite{hfag}\\ $r_{K\pi\pi^0} = 0.0447\pm0.0012$ & \cite{UKCLEO} \\ $\delta_{K\pi\pi^0} = (198^{+14}_{-15})$\degree (*) & \cite{UKCLEO} \\ $R_{K\pi\pi^0} = 0.81\pm0.06$ & \cite{UKCLEO} \\ $r_{K3\pi} = 0.0549\pm0.0006$ & \cite{UKCLEO} \\ $\delta_{K3\pi} = (128^{+28}_{-17})$\degree (*) & \cite{UKCLEO} \\ $R_{K3\pi} = 0.43^{+0.17}_{-0.13} $ & \cite{UKCLEO} \\ \hline \1{2}{l}{(*) $180$\degree difference in} \\ \1{2}{l}{phase convention from Ref.~\cite{hfag}.}\\ \end{tabular} } \end{center} \begin{center} \includegraphics[keepaspectratio=true,width=8.5cm,angle=0]{D0xsec_eachmodes.pdf}\\ \figcaption{\label{fig:D0xsec_eachmodes}(color online) $\sigma(e^+e^- \rightarrow D^0\bar{D}^0)$ for the nine double-tag modes, as labeled on the horizontal axis. The red (black) points show the $D^0\bar{D}^0$ cross section values with (without) the quantum correlation correction. The light red (black shaded) band denotes the one-standard-deviation bound of the weighted average of the corrected (uncorrected) measurements.} \end{center} \section{SYSTEMATIC UNCERTAINTIES} The sources of systematic uncertainty that have been considered for the $D^0\bar{D}^0$ and $D^+D^-$ cross section measurements are listed in Table~\ref{table:systematics}. 
\begin{table*}[htbp] \caption{Systematic uncertainties in the cross section measurements in \%.} \label{table:systematics} \begin{center} \begin{tabular}{l|c|c} \hline \multicolumn{1}{c|}{Source} & $\sigma(e^+e^- \to D^0\bar{D}^0)$ & $\sigma(e^+e^- \to D^+D^-)$ \\ \hline Multiplicity-dependent efficiency & $0.4$ & $0.1$ \\ Other-side multiplicity & $<0.01$ & $0.22$ \\ Best-candidate selection & $0.45$ & $0.07$ \\ Single tag fit background shape & $0.54$ & $0.64$ \\ Single tag fit signal shape & $0.26$ & $0.19$ \\ Double tag fit & $0.28$ & $0.19$ \\ Cosmic/lepton veto & $0.06$ & N/A \\ $\psi(3770)$ line shape for ISR & $0.15$ & $0.25$ \\ FSR simulation & $0.11$ & $0.10$ \\ Quantum correlation correction & $0.2$ & N/A \\ Integrated luminosity & $0.5$ & $0.5$ \\ \hline Total & $1.05$ & $0.93$ \\ \hline \end{tabular} \end{center} \end{table*} The double-tag technique used to determine the event yields and cross sections $\sigma(e^+e^- \rightarrow D^0\bar{D}^0)$ and $\sigma(e^+e^- \rightarrow D^+D^-)$ has the benefit of substantial cancellation of systematic uncertainties. Detector effects including tracking, particle identification, and $\pi^0$ and $K_S^0$ reconstruction, along with tag-mode resonant substructure and the $\Delta E$ requirement, all affect both single and double tags. There are, however, event-dependent effects that do not cancel in the efficiency ratio $\epsilon_{ij}/(\epsilon_i \cdot \epsilon_j)$. The event environment in which $D$ mesons are tagged affects the efficiency because higher multiplicities of charged tracks or $\pi^0$s lower the tagging efficiency. This can arise due to three possible sources: (1) differences in multiplicity-dependent efficiencies between data and MC, (2) differences between the other-side multiplicities in data and MC due to imperfect knowledge of $D$ meson decay modes and rates, and (3) sensitivity of the best candidate selection to the number of fake-tag background events. 
To assess a possible uncertainty due to the first source, we study efficiencies of tracking and particle identification for charged pions and kaons, as well as $\pi^0$ reconstruction, based on doubly tagged $D^0\bar{D}^0$ and $D^+D^-$ samples. We estimate uncertainties by observing how well our MC simulates these efficiencies in data with different particle multiplicities. We evaluate the effect of the second source for both tracks and $\pi^0$s by reweighting the MC to better match the multiplicities in data. In this we assume that data and MC are consistent in the single-track and $\pi^0$ reconstruction efficiencies. We obtain corrected efficiencies separately for each tag mode, and the difference from the nominal efficiency is used as the systematic uncertainty. The effect is larger for tag modes with greater multiplicity, and so the overall effect on $D^+D^-$ is greater than that on $D^0\bar{D}^0$. The third source arises because we resolve multiple-candidate events when choosing single tags based on the smallest $|\Delta E|$. This selection is imperfect and sometimes the wrong candidate is chosen, lowering the efficiency for multiple-candidate events relative to single-candidate events. Although a best-candidate selection is also applied to double tags, the number of multiple candidates in this case is small and the selection based on two beam-constrained masses is more reliable, so only the systematic uncertainty of best-candidate selection for single tags is considered. This uncertainty arises only when the multiple-candidate rate differs between data and MC and, at the same time, the single- and multiple-candidate efficiencies differ. These quantities can be measured in both data and MC, and the observed differences are propagated through to the systematic uncertainties in the cross sections. 
Even though we fit both single and double tags to obtain the yields and efficiencies, the differences between one- and two-dimensional fits and the much lower background levels of the double-tag $\text{M}_{\text{BC}}$ distributions limit the cancellation. We consider several variations of the fitting procedures and use the changes in efficiency-corrected yields to estimate the systematic uncertainties. The uncertainty due to the single-tag background shape is probed by substituting a MC-derived background for the ARGUS function. The uncertainty due to the signal shape is assessed by altering the smearing of the MC-derived shape (single-Gaussian-convolved instead of the double-Gaussian-convolved). To assess the uncertainty in the double-tag fitting procedure, we obtain double-tag yields and efficiencies with an alternative sideband-subtraction method, dividing the two-dimensional $\text{M}_{\text{BC}}$ plane into sections representing the signal and various background components, as shown in Fig.~\ref{fig:twodsbsubtraction}. The signal area is the same as that used when fitting. Horizontal and vertical bands are used to represent combinations with one correctly and one incorrectly reconstructed $D$; a diagonal band represents the background from completely reconstructed continuum events or mispartitioned $D\bar{D}$ events; and two triangles are used to represent the remaining background, which is mostly flat. An estimate of the flat background is scaled by the ratios of the sizes of each of the other background regions and subtracted to obtain estimates of the non-flat backgrounds. These backgrounds are then scaled with area and ARGUS background parameters obtained from single-tag fits to determine the overall background subtraction and yield for the signal region for a specific tag mode. 
The difference in efficiency-corrected double-tag yields for each mode between this method and the standard procedure is taken as the systematic uncertainty associated with the double-tag fitting method. The cosmic and lepton veto suppresses cosmic ray and QED background in the single-tag selection for the $D^0\to K^-\pi^+$ mode. A cosmic ray background event is produced by a single particle that is incorrectly reconstructed as two oppositely charged tracks. The net momentum of the two tracks is therefore close to zero, and typical QED events also have small net momentum. This small momentum produces $\text{M}_{\text{BC}}$ values close to the beam energy, so that residual cosmic ray and QED events passing the veto distort the $\text{M}_{\text{BC}}$ distribution. Because the processes responsible are not included in our MC samples or well described by the ARGUS background function, the fit results may be affected. To assess this effect, we performed alternative single-tag fits for $D^0\to K^-\pi^+$ with a cut-off in $\text{M}_{\text{BC}}$ at $1.88$~GeV/$c^2$, excluding the range where cosmic and QED events can contribute. We found the resulting difference from the standard fit procedure to be $0.18\%$, which we take as the systematic uncertainty due to this effect. The line shape of the $\psi(3770)$ affects our analysis through the modeling of initial-state radiation (ISR) at the peak of the resonance. The cross section for $\psi(3770)$ production in radiative events depends on the cross section value at the lower effective $E_{\text{cm}}$ that results from ISR. While this may partially cancel in the ratio, we treat it separately for single and double tags because yields and efficiencies are affected with opposite signs, and because correlations are introduced for the double-tag fits that are not present in the single-tag fits. 
The MC-determined efficiencies are affected through the $\Delta E$ requirements, which select against large ISR because the $\Delta E$ calculation assumes that the energy available to the $D$ is the full beam energy. The data yields are affected via the $\text{M}_{\text{BC}}$ fit shape, which acquires an asymmetric high-side tail through the contribution of $\psi(3770)$ production via ISR. More ISR causes a larger high-side tail in both the single- and double-tag signal shapes. Additionally, because both $D$ mesons lose energy when ISR occurs, double-tag events that include ISR will have a correlated shift in $\text{M}_{\text{BC}}$, causing such events to align with the diagonal on the high side of the signal region in the two-dimensional $\text{M}_{\text{BC}}$ plane. We use a preliminary BESIII measurement of the $\psi(3770)$ line shape to re-weight the MC and repeat the $D$-counting procedure. Combining the mode-by-mode variations in $N_{D\bar{D}}$ leads to the systematic uncertainty associated with the $\psi(3770)$ line shape given in Table~\ref{table:systematics}. The MC modeling of final-state radiation (FSR) may lead to a systematic difference between data and MC tag-reconstruction efficiencies. FSR affects our measurement from the tag side, so any systematic effect will also have some cancellation. To assess the uncertainty due to FSR, we created signal MC samples with and without modeling of FSR and measured the changes in tag reconstruction efficiencies. The largest difference was for $D^0 \to K^- \pi^+$, where the relative change in single-tag reconstruction efficiency was approximately $4\%$. The $D^0 \to K^- \pi^+$, $\bar{D}^0 \to K^+ \pi^-$ double-tag reconstruction efficiency also changed when FSR was turned off, but the cancellation was not complete, with the ratio of efficiencies changing by $1.2\%$. 
Because the variation of turning FSR modeling off entirely is judged to be too extreme (FSR is a well-established physical process), we take $25\%$ of this difference as our systematic uncertainty due to FSR modeling, a $0.3\%$ relative uncertainty on the MC reconstruction efficiency ratio. To be conservative, we take the largest change, for the $D^0 \to K^- \pi^+$ mode, as the systematic uncertainty for all modes. The correction to the $D^0\bar{D}^0$ cross section due to the treatment of quantum correlations incurs systematic uncertainty associated with the parameters $x$, $y$, $\delta_{K\pi}$, and $r_{K\pi}^2$, for which Ref.~\cite{hfag} provides correlation coefficients. Ref.~\cite{UKCLEO} provides a similar coefficient table for the rest of the variables. In evaluating our systematic uncertainty, we have doubled the reported uncertainties and treated them incoherently. Toy MC calculations were used to propagate these uncertainties to $N_{D^0\bar{D}^0}$, giving a systematic uncertainty in the $D^0\bar{D}^0$ cross section of $0.2\%$. Finally, for the calculation of cross sections, the relative systematic uncertainty due to the integrated luminosity measurement is determined in Refs.~\cite{Ablikim:2014gna,Ablikim:2015orh} to be $0.5\%$. \section{RESULTS AND CONCLUSIONS} The separate sources of systematic uncertainty given in Table~\ref{table:systematics} are combined, taking correlations among them into account, to give overall systematic uncertainties in the $D^0\bar{D}^0$ and $D^+D^-$ cross sections of $1.05\%$ and $0.93\%$, respectively. 
Including these systematic uncertainties, the final results of our analysis are as follows: \begin{equation*} \begin{array}{r c l} N_{D^0\bar{D}^0} & = & (10,597\pm28\pm98)\times10^3, \\ N_{D^+D^-} & = & (8,296\pm31\pm65)\times10^3, \\ \sigma(e^+e^-\to D^0\bar{D}^0) & = & (3.615\pm0.010\pm0.038)~\rm{nb}, \\ \sigma(e^+e^-\to D^+D^-) & = & (2.830\pm0.011\pm0.026)~\rm{nb}, \\ \sigma(e^+e^-\to D\bar{D}) & = & (6.445\pm0.015\pm0.048)~\rm{nb}, \\ \multicolumn{3}{c}{\text{and}}\\ \multicolumn{3}{l}{\sigma(e^+e^-\to D^+D^-)/\sigma(e^+e^-\to D^0\bar{D}^0)} \\ & = & (78.29\pm0.36\pm0.93)\%, \\ \end{array} \end{equation*} \noindent where the uncertainties are statistical and systematic, respectively. In the determinations of $\sigma(e^+e^-\to D\bar{D})$ and $\sigma(e^+e^-\to D^+D^-)/\sigma(e^+e^-\to D^0\bar{D}^0)$, the uncertainties of the charged and neutral cross sections are mostly uncorrelated, except the systematic uncertainties due to the assumed $\psi(3770)$ line shape, the FSR simulation, and the measurement of the integrated luminosity. In conclusion, we have used 2.93~fb$^{-1}$ of $e^+e^-$ annihilation data at the $\psi(3770)$ resonance collected by the BESIII detector at the BEPCII collider to measure the cross sections for the production of $D^0\bar{D}^0$ and $D^+D^-$. The technique is full reconstruction of three $D^0$ and six $D^+$ hadronic decay modes and determination of the number of $D^0\bar{D}^0$ and $D^+D^-$ events using the ratio of single-tag and double-tag yields. We find the cross sections to be $\sigma(e^+e^- \rightarrow D^0\bar{D}^0)=(3.615 \pm 0.010 \pm 0.038)$~nb and $\sigma(e^+e^- \rightarrow D^+D^-)=(2.830 \pm 0.011 \pm 0.026)$~nb, where the uncertainties are statistical and systematic, respectively. These results are consistent with and more precise than the previous best measurement by the CLEO-c Collaboration~\cite{Bonvicini:2014ab} and are necessary input for normalizing some measurements of charmed meson properties in $\psi(3770)$ decays. 
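As an arithmetic cross-check of the results above, each cross section is simply the corrected event yield divided by the integrated luminosity ($2.93~\mathrm{fb}^{-1}=2.93\times10^{6}~\mathrm{nb}^{-1}$); small residual differences at the $10^{-3}$ level reflect rounding of the luminosity:

```python
# Cross-check: observed cross sections are corrected yields over luminosity.
LUMI_INV_NB = 2.93e6          # 2.93 fb^-1 expressed in nb^-1 (rounded)

n_d0d0bar = 10.597e6          # corrected N(D0 Dbar0)
n_dpdm = 8.296e6              # N(D+ D-)

sigma_neutral = n_d0d0bar / LUMI_INV_NB   # ~3.62 nb
sigma_charged = n_dpdm / LUMI_INV_NB      # ~2.83 nb
sigma_total = sigma_neutral + sigma_charged
ratio = sigma_charged / sigma_neutral     # ~0.783
```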
\acknowledgments{The authors are grateful to Werner Sun of Cornell University for very helpful discussions. The BESIII collaboration thanks the staff of BEPCII and the computing center for their hard efforts.} \end{multicols} \section*{Appendix} \begin{center} \includegraphics[width=12.5cm]{st_d0.pdf} \figcaption{\label{std0} $\text{M}_{\text{BC}}$ fits for single-tag modes; (a) $D^0\to K^-\pi^+$, (b) $D^0\to K^-\pi^+\pi^0$, (c) $D^0\to K^-\pi^+\pi^+\pi^-$, (d) $\bar{D}^0\to K^+\pi^-$, (e) $\bar{D}^0\to K^+\pi^-\pi^0$, (f) $\bar{D}^0\to K^+\pi^+\pi^-\pi^-$. Blue solid, red dotted, and green dashed lines represent the total fits, the fitted signal shapes, and the fitted background shapes, respectively, while black histograms correspond to the expected peaking background components.} \end{center} \vspace{2mm} \begin{center} \includegraphics[width=12.5cm]{st_dp.pdf} \figcaption{\label{stdp} $\text{M}_{\text{BC}}$ fits for single-tag modes; (a) $D^+\to K^-\pi^+\pi^+$, (b) $D^+\to K^-\pi^+\pi^+\pi^0$, (c) $D^+\to K_S^0\pi^+$, (d) $D^-\to K^+\pi^-\pi^-$, (e) $D^-\to K^+\pi^-\pi^-\pi^0$, (f) $D^-\to K_S^0\pi^-$, (g) $D^+\to K_S^0\pi^+\pi^0$, (h) $D^+\to K_S^0\pi^+\pi^+\pi^-$, (i) $D^+\to K^+K^-\pi^+$, (j) $D^-\to K_S^0\pi^-\pi^0$, (k) $D^-\to K_S^0\pi^+\pi^-\pi^-$, (l) $D^-\to K^+K^-\pi^-$. Blue solid, red dotted, and green dashed lines represent the total fits, the fitted signal shapes, and the fitted background shapes, respectively, while black histograms correspond to the expected peaking background components.} \end{center} \begin{center} \includegraphics[width=13.5cm]{dt_d0.pdf} \figcaption{\label{dtd0} Two-dimensional $\text{M}_{\text{BC}}$ fits projected onto the positive and negative charm $\text{M}_{\text{BC}}$ axes for various double-tag modes; (a) $D^0\to K^-\pi^+$ vs. (b) $\bar{D}^0\to K^+\pi^-$, (c) $D^0\to K^-\pi^+$ vs. (d) $\bar{D}^0\to K^+\pi^-\pi^0$, (e) $D^0\to K^-\pi^+$ vs. (f) $\bar{D}^0\to K^+\pi^+\pi^-\pi^-$, (g) $D^0\to K^-\pi^+\pi^0$ vs. 
(h) $\bar{D}^0\to K^+\pi^-$, (i) $D^0\to K^-\pi^+\pi^0$ vs. (j) $\bar{D}^0\to K^+\pi^-\pi^0$, (k) $D^0\to K^-\pi^+\pi^0$ vs. (l) $\bar{D}^0\to K^+\pi^+\pi^-\pi^-$, (m) $D^0\to K^-\pi^+\pi^+\pi^-$ vs. (n) $\bar{D}^0\to K^+\pi^-$, (o) $D^0\to K^-\pi^+\pi^+\pi^-$ vs. (p) $\bar{D}^0\to K^+\pi^-\pi^0$, (q) $D^0\to K^-\pi^+\pi^+\pi^-$ vs. (r) $\bar{D}^0\to K^+\pi^+\pi^-\pi^-$. Red solid and blue dotted curves represent the total fits and the fitted signal shapes, respectively. Green long-dashed and orange solid lines correspond to the fitted non-peaking background shapes, while cyan and magenta short-dashed curves are the fitted peaking background components.} \end{center} \begin{center} \includegraphics[width=13.5cm]{dt_dp_200.pdf} \figcaption{\label{dtdp200} Two-dimensional $\text{M}_{\text{BC}}$ fits projected onto the positive and negative charm $\text{M}_{\text{BC}}$ axes for various double-tag modes; (a) $D^+\to K^-\pi^+\pi^+$ vs. (b) $D^-\to K^+\pi^-\pi^-$, (c) $D^+\to K^-\pi^+\pi^+$ vs. (d) $D^-\to K^+\pi^-\pi^-\pi^0$, (e) $D^+\to K^-\pi^+\pi^+$ vs. (f) $D^-\to K_S^0\pi^-$, (g) $D^+\to K^-\pi^+\pi^+$ vs. (h) $D^-\to K_S^0\pi^-\pi^0$, (i) $D^+\to K^-\pi^+\pi^+$ vs. (j) $D^-\to K_S^0\pi^+\pi^-\pi^-$, (k) $D^+\to K^-\pi^+\pi^+$ vs. (l) $D^-\to K^+K^-\pi^-$. Red solid and blue dotted curves represent the total fits and the fitted signal shapes, respectively. Green long-dashed and orange solid lines correspond to the fitted non-peaking background shapes, while cyan and magenta short-dashed curves are the fitted peaking background components.} \end{center} \vspace{2mm} \begin{center} \includegraphics[width=13.5cm]{dt_dp_201.pdf} \figcaption{\label{dtdp201} Two-dimensional $\text{M}_{\text{BC}}$ fits projected onto the positive and negative charm $\text{M}_{\text{BC}}$ axes for various double-tag modes; (a) $D^+\to K^-\pi^+\pi^+\pi^0$ vs. (b) $D^-\to K^+\pi^-\pi^-$, (c) $D^+\to K^-\pi^+\pi^+\pi^0$ vs. (d) $D^-\to K^+\pi^-\pi^-\pi^0$, (e) $D^+\to K^-\pi^+\pi^+\pi^0$ vs. 
(f) $D^-\to K_S^0\pi^-$, (g) $D^+\to K^-\pi^+\pi^+\pi^0$ vs. (h) $D^-\to K_S^0\pi^-\pi^0$, (i) $D^+\to K^-\pi^+\pi^+\pi^0$ vs. (j) $D^-\to K_S^0\pi^+\pi^-\pi^-$, (k) $D^+\to K^-\pi^+\pi^+\pi^0$ vs. (l) $D^-\to K^+K^-\pi^-$. Red solid and blue dotted curves represent the total fits and the fitted signal shapes, respectively. Green long-dashed and orange solid lines correspond to the fitted non-peaking background shapes, while cyan and magenta short-dashed curves are the fitted peaking background components.} \end{center} \begin{center} \includegraphics[width=13.5cm]{dt_dp_202.pdf} \figcaption{\label{dtdp202} Two-dimensional $\text{M}_{\text{BC}}$ fits projected onto the positive and negative charm $\text{M}_{\text{BC}}$ axes for various double-tag modes; (a) $D^+\to K_S^0\pi^+$ vs. (b) $D^-\to K^+\pi^-\pi^-$, (c) $D^+\to K_S^0\pi^+$ vs. (d) $D^-\to K^+\pi^-\pi^-\pi^0$, (e) $D^+\to K_S^0\pi^+$ vs. (f) $D^-\to K_S^0\pi^-$, (g) $D^+\to K_S^0\pi^+$ vs. (h) $D^-\to K_S^0\pi^-\pi^0$, (i) $D^+\to K_S^0\pi^+$ vs. (j) $D^-\to K_S^0\pi^+\pi^-\pi^-$, (k) $D^+\to K_S^0\pi^+$ vs. (l) $D^-\to K^+K^-\pi^-$. Red solid and blue dotted curves represent the total fits and the fitted signal shapes, respectively. Green long-dashed and orange solid lines correspond to the fitted non-peaking background shapes, while cyan and magenta short-dashed curves are the fitted peaking background components.} \end{center} \vspace{2mm} \begin{center} \includegraphics[width=13.5cm]{dt_dp_203.pdf} \figcaption{\label{dtdp203} Two-dimensional $\text{M}_{\text{BC}}$ fits projected onto the positive and negative charm $\text{M}_{\text{BC}}$ axes for various double-tag modes; (a) $D^+\to K_S^0\pi^+\pi^0$ vs. (b) $D^-\to K^+\pi^-\pi^-$, (c) $D^+\to K_S^0\pi^+\pi^0$ vs. (d) $D^-\to K^+\pi^-\pi^-\pi^0$, (e) $D^+\to K_S^0\pi^+\pi^0$ vs. (f) $D^-\to K_S^0\pi^-$, (g) $D^+\to K_S^0\pi^+\pi^0$ vs. (h) $D^-\to K_S^0\pi^-\pi^0$, (i) $D^+\to K_S^0\pi^+\pi^0$ vs. 
(j) $D^-\to K_S^0\pi^+\pi^-\pi^-$, (k) $D^+\to K_S^0\pi^+\pi^0$ vs. (l) $D^-\to K^+K^-\pi^-$. Red solid and blue dotted curves represent the total fits and the fitted signal shapes, respectively. Green long-dashed and orange solid lines correspond to the fitted non-peaking background shapes, while cyan and magenta short-dashed curves are the fitted peaking background components.} \end{center} \begin{center} \includegraphics[width=13.5cm]{dt_dp_204.pdf} \figcaption{\label{dtdp204} Two-dimensional $\text{M}_{\text{BC}}$ fits projected onto the positive and negative charm $\text{M}_{\text{BC}}$ axes for various double-tag modes; (a) $D^+\to K_S^0\pi^+\pi^+\pi^-$ vs. (b) $D^-\to K^+\pi^-\pi^-$, (c) $D^+\to K_S^0\pi^+\pi^+\pi^-$ vs. (d) $D^-\to K^+\pi^-\pi^-\pi^0$, (e) $D^+\to K_S^0\pi^+\pi^+\pi^-$ vs. (f) $D^-\to K_S^0\pi^-$, (g) $D^+\to K_S^0\pi^+\pi^+\pi^-$ vs. (h) $D^-\to K_S^0\pi^-\pi^0$, (i) $D^+\to K_S^0\pi^+\pi^+\pi^-$ vs. (j) $D^-\to K_S^0\pi^+\pi^-\pi^-$, (k) $D^+\to K_S^0\pi^+\pi^+\pi^-$ vs. (l) $D^-\to K^+K^-\pi^-$. Red solid and blue dotted curves represent the total fits and the fitted signal shapes, respectively. Green long-dashed and orange solid lines correspond to the fitted non-peaking background shapes, while cyan and magenta short-dashed curves are the fitted peaking background components.} \end{center} \vspace{2mm} \begin{center} \includegraphics[width=13.5cm]{dt_dp_205.pdf} \figcaption{\label{dtdp205} Two-dimensional $\text{M}_{\text{BC}}$ fits projected onto the positive and negative charm $\text{M}_{\text{BC}}$ axes for various double-tag modes; (a) $D^+\to K^+K^-\pi^+$ vs. (b) $D^-\to K^+\pi^-\pi^-$, (c) $D^+\to K^+K^-\pi^+$ vs. (d) $D^-\to K^+\pi^-\pi^-\pi^0$, (e) $D^+\to K^+K^-\pi^+$ vs. (f) $D^-\to K_S^0\pi^-$, (g) $D^+\to K^+K^-\pi^+$ vs. (h) $D^-\to K_S^0\pi^-\pi^0$, (i) $D^+\to K^+K^-\pi^+$ vs. (j) $D^-\to K_S^0\pi^+\pi^-\pi^-$, (k) $D^+\to K^+K^-\pi^+$ vs. (l) $D^-\to K^+K^-\pi^-$. 
Red solid and blue dotted curves represent the total fits and the fitted signal shapes, respectively. Green long-dashed and orange solid lines correspond to the fitted non-peaking background shapes, while cyan and magenta short-dashed curves are the fitted peaking background components.} \end{center} \vspace{10mm} \vspace{-1mm} \centerline{\rule{80mm}{0.1pt}} \vspace{2mm} \begin{multicols}{2}
\section{Introduction} In online minimization problems with deadlines, requests are released over a timeline. Each request has an associated deadline, by which it must be served by any feasible solution. The goal of an algorithm is to give a solution which minimizes the total cost incurred in serving the given requests. Another model, which generalizes the deadline model, is that of online problems with delay. In those problems, requests again arrive over a timeline. While requests no longer have a deadline, each pending request (i.e. a request which has been released but not yet served) incurs growing delay cost. The total cost of the algorithm is the cost of serving requests plus the total delay incurred over those requests; the delay cost thus motivates the algorithm to serve requests earlier. In this paper, we consider classic network design problems in the deadline/delay setting. In the classic (offline) setting of network design, one is given a graph of $n$ nodes and a set of connectivity requests (e.g. pairs of nodes to connect). The input contains a collection of elements (e.g. edges) with associated cost. A request is satisfied by any subset of elements which serves the connectivity request (e.g. a set of edges which connects the requested pair of nodes). A feasible solution for the offline problem is a set of elements which simultaneously satisfies all connectivity requests. Such an offline network design problem induces an online problem with deadlines/delay as follows. The input graph is again given in advance. The requests, however, arrive over a timeline (with either a deadline or a delay function). At any point in time, the algorithm may choose to \emph{transmit} an offline solution (i.e. a set of elements); each pending request that is served by the transmitted solution in the offline setting is served by this transmission in the online setting. 
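As a concrete illustration of this transmission semantics in the Steiner-tree instance, a pending request is served exactly when the transmitted edge set connects its terminal to the root, which is a plain connectivity test. The sketch below (union-find with made-up node labels, edges, and requests; root at node 0) is a hypothetical illustration of that check, not part of the algorithms in this paper.

```python
# Minimal union-find sketch: which pending Steiner-tree requests does a
# transmitted edge set serve?

class DSU:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, v):
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]  # path halving
            v = self.parent[v]
        return v

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def served_by_transmission(edges, pending_pairs, n):
    """Return the pending terminal pairs connected by the transmitted edges."""
    dsu = DSU(n)
    for u, v in edges:
        dsu.union(u, v)
    return [(s, t) for (s, t) in pending_pairs if dsu.find(s) == dsu.find(t)]

# Transmitting {(0,1), (1,2)} serves the request (2, 0) but not (4, 0).
print(served_by_transmission([(0, 1), (1, 2)], [(2, 0), (4, 0)], 5))
```

Note that elements are not retained between transmissions: serving $(4, 0)$ later would require transmitting a root-connecting subgraph again, at additional cost.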
In keeping with previous work on these problems, this paper considers the \emph{clairvoyant} model, in which the deadline of a request -- or its future accumulation of delay -- is revealed to the algorithm upon the release of the request. We next discuss such induced network design problems with deadlines/delay that have been previously considered. The usual solution for such problems is to randomly embed the general input into a tree, incurring a distortion to the metric space, and then solve the problem on the resulting tree. In this paper, we present frameworks which \emph{bypass} this usual mode of work, enabling improved guarantees, generality and simplicity. \paragraph{Steiner tree with deadlines/delay.}In this problem, requests are released on nodes of a graph with costs on the edges. Serving these requests comprises transmitting a subgraph which connects the request to a designated root node of the graph. This problem was studied in the case in which the graph is a tree -- in this case it is called the \textbf{multilevel aggregation problem} (first presented in \cite{DBLP:conf/esa/BienkowskiBBCDF16}). With $D$ the depth of the input tree, the best known results for multilevel aggregation are $O(D)$ competitiveness for the deadline model by Buchbinder \textit{et al.}~\cite{DBLP:conf/soda/BuchbinderFNT17}, and $O(D^2)$ competitiveness for the delay model in~\cite{DBLP:conf/focs/AzarT19}. Thus, a simple tree-embedding-based algorithm for general Steiner tree with deadlines/delay is to embed the general graph into a tree, and then use the best multilevel aggregation algorithms; in both the deadline and delay cases, this can be seen to yield $O(\log^2 n)$-competitive randomized algorithms. \paragraph{Facility location with deadlines/delay.}In this problem, presented in \cite{DBLP:conf/focs/AzarT19}, the input graph has weights on the edges and facility costs on the nodes. 
Requests arrive on the nodes of the graph, to be served by transmissions. A transmission consists of a set of facilities $U$, and a collection of pending requests $Q$. The transmission serves the requests of $Q$, and has a cost which is the sum of facility costs of the nodes in $U$, plus the sum of distances from each request of $Q$ to the closest facility in $U$. The best known algorithms for both the deadline and delay variants of this problem, also based on tree embedding, are randomized and $O(\log^2 n)$-competitive -- but apply only to the uniform problem, where the nodes' facility costs are identical. This paper introduces a general deterministic framework for solving such network design problems on general graphs, with deadlines or with delay, which does not rely on tree embeddings. This framework obtains improved results for both previous problems, as well as new results for Steiner forest, non-uniform facility location, multicut, Steiner network, node-weighted Steiner forest and directed Steiner tree. \subsection{Our Results} We now state our results for network design problems with deadlines/delay. Let $\mathcal{E}$ be the collection of elements in an offline network design problem. In this paper, we show the following results. \begin{enumerate} \item If there exists a deterministic (randomized) $\gamma$-approximation for the offline network design problem which runs in polynomial time, then there exists an $O(\gamma \log |\mathcal{E}|)$-competitive deterministic (randomized) algorithm for the induced problem with deadlines, which also runs in polynomial time. \item If there exists a deterministic (randomized) $\gamma$-approximation for the \emph{prize-collecting} variant of the offline network design problem, then there exists an $O(\gamma \log |\mathcal{E}|)$-competitive deterministic (randomized) algorithm for the induced problem with delay, which also runs in polynomial time. 
\end{enumerate} Each of those results is obtained through designing a framework which encapsulates the given approximation algorithm. We consider several network design problems on a graph of $n$ nodes, which are described in Subsection \ref{subsec:Intro_ConsideredProblems}. Plugging into our frameworks previously-known offline approximations (for either the original or prize-collecting variants) yields the results summarized in Table \ref{tab:Intro_ResultsTable}. Except for the algorithm for directed Steiner tree (which is randomized and runs in quasi-polynomial time due to the encapsulated approximation), all algorithms are deterministic and run in polynomial time. \begin{table}[h!] \begin{center} \caption{Framework Applications} \label{tab:Intro_ResultsTable} \begin{tabular}{l|c|c} & With Deadlines & With Delay\\ \hline Edge-weighted Steiner forest & $O(\log n)$ & $O(\log n)$\\ Multicut & $O(\log^2 n)$ & $O(\log^2 n)$\\ Edge-weighted Steiner network & $O(\log n)$ & $O(\log n)$\\ Node-weighted Steiner forest &$O(\log^2 n)$ & $O(\log^2 n)$\\ Facility location (non-uniform)& $O(\log n)$ & $O(\log n)$\\ Directed Steiner tree & $O\left(\frac{\log^3 n}{\log \log n}\right)$ & ? \footnotemark\\ \end{tabular} \end{center} \end{table} \footnotetext{We could find no approximation result for prize-collecting directed Steiner tree. We conjecture that such an approximation algorithm exists which loses only a constant factor apart from the best approximation for the original offline problem, in which case we obtain an identical guarantee to the deadline case.} Our frameworks improve on previous results in the following way: \begin{enumerate} \item For Steiner tree with deadlines/delay, we give $O(\log n)$-competitive deterministic algorithms, while the best previously-known algorithms are randomized and $O(\log^2 n)$-competitive \cite{DBLP:conf/esa/BienkowskiBBCDF16,DBLP:conf/focs/AzarT19}. 
\item For facility location with deadlines/delay, the best previously-known algorithms are randomized, $O(\log^2 n)$-competitive \cite{DBLP:conf/focs/AzarT19}, and apply only for the uniform case (where facilities have the same opening cost). We give $O(\log n)$-competitive, deterministic algorithms which apply also for the non-uniform case. \end{enumerate} For node-weighted Steiner forest and directed Steiner tree, our results are relatively close to the optimal solution -- in the appendix we show an $\Omega(\sqrt{\log n})$ lower bound on competitiveness by applying the lower bound of \cite{DBLP:journals/corr/abs-1807-08543} for set cover with delay. As an information-theoretic lower bound, it applies for algorithms with unbounded computational power. While the common regime in problems with deadlines/delay is that the number of requests $k$ is unbounded and the number of nodes $n$ is finite, we also address the opposite regime in which $k$ is small -- the latter being more popular in classic network design problems. We achieve the best of both worlds -- namely, we show a modification to the deadline/delay frameworks which replaces $n$ by $\min\{n,k\}$ in the competitiveness guarantees. This modification applies to all problems considered in this paper except for facility location, but we conjecture that a similar algorithm would apply there as well. \subsection{Our Techniques} The \textbf{deadline framework} performs services (i.e. transmissions) of various costs; the logarithmic class of the cost of a service is called its level. Pending requests also have levels, which are maintained by the algorithm. Whenever a pending request of level $j$ reaches its deadline, a service of level $j+1$ starts. This service is only meant to serve requests of lower or equal level (we call such requests eligible for the service). After a service concludes, the level of remaining eligible requests is raised to that of the service. 
Intuitively, this means that once a pending request has seen a service of cost $2^j$, it refuses to be served by any cheaper service. This makes use of the aggregation property -- higher-cost services tend to be more cost-effective per request. When a service is triggered, it has to choose which of the eligible requests to serve, subject to its budget constraint. The service prioritizes requests of earlier deadline, adding them until the budget is exceeded. The cost of serving those requests is estimated using the encapsulated approximation algorithm. The main idea of levels exists in the \textbf{delay framework} as well. However, handling general delay functions requires more intricate procedures -- namely, for triggering a service and for choosing which requests to serve. The delay framework maintains an \emph{investment counter} for each pending request, which allows a service to pay for the delay of a request (i.e. the delay cost is charged to the budget of the service). A service is started when a large amount of delay for which no service has paid has accumulated on the requests of a particular level $j$ -- the started service is of level $j+1$. When choosing which of the eligible requests to serve, the algorithm considers the first point in time in which an eligible request would accumulate delay which is not paid for by its investment counter. Using its budget of $2^j$, it then attempts to push back this point in time farthest into the future -- it does so either by raising the investment counters, or by serving requests. The way to balance these two methods is problem-specific -- the framework thus formulates a prize-collecting instance, where the penalties represent future delay, and calls the encapsulated prize-collecting approximation algorithm to solve it. \subsection{Considered Problems} \label{subsec:Intro_ConsideredProblems} In this paper, we consider the induced deadline/delay problems of several network design problems. 
We now introduce those problems. \textbf{Steiner tree and Steiner forest.} In the Steiner forest problem, each request is a pair of terminals (i.e. nodes in the input graph), and the elements are the edges. A request is satisfied by a set of edges if the two terminals of the request are connected by those edges. The Steiner tree problem is an instance of Steiner forest in which the input also designates a specific node as the root, such that every request contains the root as one of its two terminals. A special case of the Steiner tree problem is the multilevel aggregation problem, in which the graph is a tree. We also consider a stronger variant of the Steiner forest problem, in which each request is a \emph{subset} of nodes to be connected. While this problem is identical to the original Steiner forest in the offline setting (as the subset can be broken down to pairs), their induced deadline/delay problems are substantially different. \textbf{Multicut.} In the offline multicut problem, each request is again a pair of terminals, and the elements are again the edges. A request is satisfied by a set of edges which, if removed from the original graph, would disconnect the pair of terminals. As in Steiner forest, it makes sense to define the stronger variant in which each request is a subset of nodes which must be disconnected from each other -- while both variants are equivalent in the offline setting, their induced deadline/delay problems are distinct. \textbf{Node-weighted Steiner forest.} In this problem, the elements are the nodes, rather than edges. Each request is again a pair of terminals, and is satisfied by a solution which contains (in addition to the terminals themselves) nodes that connect the pair of terminals. \textbf{Edge-weighted Steiner network.} This problem is identical to the Steiner forest problem, except that each request $q$ comes with a demand $f(q) \in \mathbb{N}$. 
A request is satisfied by a set of edges that contains $f(q)$ edge-disjoint paths between the terminals. \textbf{Directed Steiner tree.} This problem is identical to the Steiner tree problem, except that the graph is now directed. Each pair request, where one of its terminals is the root, is satisfied by a set of edges that contains a directed path from the root to the other terminal. \textbf{Facility location.} In the facility location problem, the requests are on the nodes of the graph. The elements are the nodes of the graph, upon which facilities can be opened. The cost of the solution is the total cost of opened facilities (opening cost) plus the distances from each request to the closest facility (connection cost). The connection cost prevents facility location from being strictly compliant with the analysis of the framework we present. However, we nonetheless show that the framework itself applies to facility location as well. \subsection{Related Work} The classic online consideration of network design problems has been studied in numerous papers (e.g. \cite{DBLP:journals/siamdm/ImaseW91,DBLP:journals/algorithmica/Fotakis08,Berman1997,6108170,DBLP:journals/siamcomp/GuptaKR12,DBLP:journals/talg/AlonAABN06}). In this genre of problems, the connectivity requests arrive one after the other in a sequence (rather than over time), and must be served immediately by buying some elements which serve the request. These bought elements remain bought until the end of the sequence, and can thus be used to serve future requests. This is in contrast to the deadline/delay model considered in this paper, where the elements are \emph{transmitted} rather than bought, and thus future use of these elements requires transmitting them again (at additional cost). There is no connection between the classic online variant of a problem and the deadline/delay variant -- that is, neither problem is reducible to the other. 
There could be a stark difference in competitiveness between the two models, which depends on the network design problem. For some problems, the classic online admits much better competitive algorithms -- for example, in the multilevel aggregation problem, the classic online problem is Steiner tree on a tree, which is trivially $1$-competitive (while the best known algorithms for multilevel aggregation with deadlines/delay have logarithmic ratio). For other problems, the opposite is true -- for classic online directed Steiner tree, a lower bound of $\Omega(n^{1-\epsilon})$ exists on the competitiveness of any deterministic algorithm, for every $\epsilon>0$. In contrast, for directed Steiner tree with deadlines/delay, we present in this paper polylogarithmic-competitive algorithms. The multilevel aggregation problem was first considered by Bienkowski \textit{et al.}~\cite{DBLP:conf/esa/BienkowskiBBCDF16}, who gave an algorithm with competitiveness which is exponential in the depth $D$ of the input tree, for the delay model. This result was then improved, first to $O(D)$ for the deadline model by Buchbinder \textit{et al.}~\cite{DBLP:conf/soda/BuchbinderFNT17}, and then to $O(D^2)$ for the general delay model in~\cite{DBLP:conf/focs/AzarT19}. These results yield $O(\log^2 n)$-competitive randomized algorithms for Steiner tree with deadlines/delay on general graphs, through metric embeddings; for more general Steiner problems (e.g. Steiner forest, node-weighted Steiner tree) no previously-known algorithm exists. 
The multilevel aggregation also generalizes some past lines of work -- the TCP acknowledgement problem \cite{TCPAck_DBLP:conf/stoc/DoolyGS98,TCPAck_DBLP:journals/algorithmica/KarlinKR03,TCPAck_DBLP:conf/esa/BuchbinderJN07} is multilevel aggregation with $D=1$, and the joint replenishment problem \cite{JointRep_DBLP:conf/soda/BuchbinderKLMS08, JointRep_DBLP:journals/algorithmica/BritoKV12, JointRep_DBLP:conf/soda/BienkowskiBCJNS14} is multilevel aggregation with $D=2$. Another problem studied in the context of delay is that of matching with delay \cite{Matching_DBLP:conf/approx/AshlagiACCGKMWW17,Matching_DBLP:conf/ciac/EmekSW17,Matching_DBLP:conf/stoc/EmekKW16,Matching_DBLP:conf/waoa/AzarF18,Matching_DBLP:conf/waoa/BienkowskiKLS18,Matching_DBLP:conf/waoa/BienkowskiKS17}. In this problem, requests arrive on points of a metric space, and gather delay until served. The algorithm may choose to serve two pending requests, at a cost which is the distance between those two requests in the metric space. This problem seems hard without making assumptions on the delay function, and thus is usually considered when the delay functions are identical and linear. The $k$-server problem in the deadline/delay context has also been studied \cite{DBLP:conf/stoc/AzarGGP17,DBLP:conf/sirocco/BienkowskiKS18,DBLP:conf/focs/AzarT19}. In this problem, $k$ servers exist in a metric space, and requests again arrive on points of the space, gathering delay. To serve a request, the algorithm must move a server to that request, paying the distance between the server and the request. \section{Model and Deadline Framework} \label{sec:FWD} We are given a set $\mathcal{E}$ of elements, with costs $c:\mathcal{E} \to \mathbb{R}^+$. Requests are released over time, and we denote the release time of a request $q$ by $r_q$. Each request has a deadline $d_q$, by which it must be served. 
At any point in time, the algorithm may transmit a subset of elements $E\subseteq \mathcal{E}$, at a cost $\sum_{e\in E} c(e)$. Each request $q$ is satisfied by a collection of subsets $X_q \subseteq 2^{\mathcal{E}}$ which is \emph{upwards-closed} -- that is, if $E_1\subseteq E_2 \subseteq \mathcal{E}$ and we have that $E_1 \in X_q$ then $E_2 \in X_q$. If the algorithm transmits the set of elements $E$, then all pending requests $q$ such that $E \in X_q$ are served by that transmission. To give a concrete example of this abstract structure, consider the Steiner forest problem. In this problem, the elements $\mathcal{E}$ are the edges of a graph. For a request $q$ for the terminals $(u_1,u_2)$, the collection $X_q$ is the collection of edge sets $E'$ such that $(u_1, u_2)$ are in the same connected component in the spanning subgraph with edges $E'$. One can also look at the corresponding offline problem -- given a set of requests $Q$, find a subset of elements $E'$ of the minimal total cost such that $E'\in X_q$ for every $q\in Q$. Now, consider a class of problems of this form -- such as Steiner tree for example -- and denote this class by $\text{\sc ND}$. The main result of this section is the following. \begin{thm} \label{thm:FWD_Competitiveness} If there exists a $\gamma$ deterministic (randomized) approximation algorithm for $\text{\sc ND}$ which runs in polynomial time, then there exists an $O(\gamma\log |\mathcal{E}|)$-competitive deterministic (randomized) algorithm for $\text{\sc ND}$ with deadlines, which also runs in polynomial time. \end{thm} \begin{rem} \label{rem:FWD_QuasiPolynomialTime} If the approximation algorithm runs in \emph{quasi-polynomial} time, then the online algorithm also runs in quasi-polynomial time. \end{rem} \begin{rem} \label{rem:FWD_LasVegas} In this paper, we consider randomized approximation algorithms which have deterministic approximation guarantees and expected running time guarantees. 
Converting a randomized algorithm with an \emph{expected} approximation guarantee and a \emph{deterministic} running time guarantee to the format we consider can be achieved by repeatedly running the algorithm until the resulting approximation is within a factor of $2$ of the expected guarantee -- Markov's inequality ensures that the expected running time of this new algorithm is small. The only requirement for this conversion is that the algorithm is able to know whether its approximation meets the expected guarantee -- this requirement is met, for example, in all approximation algorithms based on LP solving + rounding (and in particular, all randomized algorithms in this paper). \end{rem} For a set of requests $Q$, we denote the solution for the offline problem returned by the $\gamma$ approximation by $\text{\sc ND}(Q)$. We also denote the optimal solution by $\text{\sc ND}^*(Q)$. \subsection{The Framework} We now present a framework for encapsulating an approximation algorithm for $\text{\sc ND}$ to obtain a competitive algorithm for $\text{\sc ND}$ with deadlines, thus proving Theorem \ref{thm:FWD_Competitiveness}. \paragraph*{Calls to approximation algorithm.} The framework makes calls to the approximation algorithm for $\text{\sc ND}$ -- we denote such a call on a set of requests $Q$ by $\text{\sc ND}(Q)$ (the universe of elements $\mathcal{E}$, and the elements' costs, are identical to those of the online problem). Similarly, we denote the optimal solution for this set of requests by $\text{\sc ND}^*(Q)$. The framework also makes calls to $\text{\sc ND}$ where the costs of the elements are modified -- namely, the cost of some subset of elements $E_0\subseteq \mathcal{E}$ is set to $0$. We use $\text{\sc ND}_{E_0 \gets 0}$ to denote such calls. When calling the approximation algorithm, we store the resulting solution (i.e. subset of elements) in a variable. If a solution is stored in a variable $S$, we use $c(S)$ to refer to the cost of that solution. 
Note that this cost is not necessarily the sum of costs of elements in that solution -- it is possible that the solution is for an instance in which the costs of some set of elements $E_0$ are set to $0$. \paragraph*{Algorithm's description.} The framework is given in Algorithm \ref{alg:FWD_Algorithm}. For each pending request $q$, the algorithm maintains a level $\ell_q$. Upon the arrival of a new request $q$, the function $\UponRequest$ is called. This function assigns the initial value of the level of $q$, which is initially supposed to be the logarithmic class of the cost of the least expensive (offline) solution for $q$ -- the algorithm approximates this by making a call to the approximation algorithm on $\{q\}$, then dividing by the approximation ratio $\gamma$. Over time, the level of a request may increase. Whenever a deadline of a pending request is reached, the function $\UponDeadline$ is called, and the algorithm starts a service. Services also have levels -- the level of a service $\lambda$, denoted by $\ell_\lambda$, is always $\ell_q +1$, where $q$ is the request which triggered the service. Intuitively, the service $\lambda$ is ``responsible'' for all pending requests of level at most $\ell_\lambda$ -- these requests are called the \emph{eligible} requests for $\lambda$. Overall, the service spends $O(\gamma \cdot 2^{\ell_\lambda})$ cost solely on serving these eligible requests. The service constructs a transmission, which occurs at the end of the service. First, the service adds to the transmission all ``cheap'' elements -- those that cost at most $\frac{2^{\ell_\lambda}}{|\mathcal{E}|}$. Then, the service decides which of the eligible requests to serve, using the following procedure. It considers the requests by order of increasing deadline, adding them to the set of requests to serve. 
This process stops when either the cost of serving those requests, as estimated by the approximation algorithm, exceeds the budget ($O(\gamma \cdot 2^{\ell_\lambda})$), or the requests are all served. Since the amount by which the budget was exceeded in the ultimate iteration is unknown, the service transmits the solution found in the \emph{penultimate} iteration, in addition to a ``singleton'' solution to the last request to be served. The final step in the service is to ``upgrade'' the level of all eligible requests which are still pending after the transmission of the service. The level of those requests is assigned the level of the service. \begin{algorithm} \renewcommand{\algorithmcfname}{Algorithm} \caption{\label{alg:FWD_Algorithm} Network Design with Deadlines Framework} \EFn{\UponRequest{$q$}}{ Set $ S_q \gets \text{\sc ND}(\{q\})$ Set $ I_q \gets \frac{c(S_q)}{\gamma} $. Set $\ell_q \gets \left\lfloor\log \left(I_q\right)\right\rfloor$ \tcp*[h]{the level of the request} } \BlankLine \EFn(\tcp*[h]{upon the deadline of a pending request $q$}){\UponDeadline{$q$}}{ Start a new service $\lambda$, which we now describe. Set $\ell_\lambda \gets \ell_q + 1$. \label{line:FWD_SetServiceLevel} Set $Q_\lambda \gets \emptyset$. \BlankLine \tcp*[h]{buy all cheap elements} Set $E_0 \gets \left\{ e\in \mathcal{E} \middle| c(e) \le \frac{2^{\ell_\lambda}}{|\mathcal{E}|} \right\}$. \label{line:FWD_BuyCheapEdges} \BlankLine \tcp*[h]{add eligible requests by order of deadline, until budget is exceeded} Set $S \gets \emptyset$. \While{there exists a pending $q' \notin Q_\lambda$ such that $\ell_{q^\prime} \le \ell_\lambda$}{\label{line:FWD_AddingRequestsToService} Let $q_{\text{last}} \notin Q_\lambda$ be the pending request with the earliest deadline such that $\ell_{q_{\text{last}}} \le \ell_\lambda$. Set $Q_\lambda \gets Q_\lambda \cup \{q_{\text{last}}\}$ Set $S' \gets \text{\sc ND}_{E_0 \gets 0}(Q_\lambda)$.
\lIf{$c(S') \ge \gamma\cdot 2^{\ell_\lambda}$}{\Break}\label{line:FWD_ServiceIsFull} Set $S \gets S'$. } \BlankLine Transmit the solution $E_0 \cup S \cup S_{q_{\text{last}}}$. \tcp*[h]{serve $Q_\lambda$} \label{line:FWD_ServeRequests} \BlankLine \tcp*[h]{upgrade still-pending requests to service's level} \ForEach{pending request $q^\prime$ such that $\ell_{q^\prime} \le \ell_\lambda$}{ Set $\ell_{q^\prime} \gets \ell_\lambda $ \label{line:FWD_ServiceSetsRequestLevel} } } \end{algorithm} \subsection{Analysis} To prove Theorem \ref{thm:FWD_Competitiveness}, we require the following definitions. \subsubsection*{\emph{Definitions and Algorithm's Properties}} Before delving into the proof of Theorem \ref{thm:FWD_Competitiveness}, we first define some terms used throughout the analysis, and prove some properties of the algorithm. For a service $\lambda$, we call the value set to $\ell_{\lambda}$ the \emph{level} of $\lambda$; observe that this value does not change once defined. Similarly, for a request $q$, we call $\ell_q$ the level of $q$. Note that unlike services, the level of a request may change over time (more specifically, the level can be increased). \begin{defn}[Service Pointer] Let $q$ be a request. We define $\operatorname{ptr}_q$ to be the last service $\lambda$ such that $\lambda$ sets $\ell_q \gets \ell_\lambda$ in Line \ref{line:FWD_ServiceSetsRequestLevel}. If there is no such service, we write $\operatorname{ptr}_q = \text{\sc null}$. Similarly, we define $\operatorname{ptr}_q(t)$ to be the last service $\lambda$ before time $t$ such that $\lambda$ sets $\ell_q \gets \ell_\lambda$ in Line \ref{line:FWD_ServiceSetsRequestLevel} (with $\operatorname{ptr}_q(t)=\text{\sc null}$ if there is no such service). \end{defn} \begin{defn}[Eligible Requests] \label{defn:FWD_EligibleRequest} Consider a service $\lambda$ and a request $q$ which is pending upon the start of $\lambda$, and has $\ell_q \le \ell_\lambda$ at that time. 
We say that $q$ was \emph{eligible} for $\lambda$. \end{defn} \begin{defn}[Types of Services] \label{defn:FWD_ServiceTypes} For a service $\lambda$, we say that: \begin{enumerate} \item $\lambda$ is \emph{charged} if there exists some future service $\lambda'$, which is triggered by a pending request $q$ reaching its deadline such that $\operatorname{ptr}_{q} (t_{\lambda'}) = \lambda$. We say that $\lambda'$ \emph{charged} $\lambda$. \item $\lambda$ is \emph{imperfect} if the $\Break$ command of Line \ref{line:FWD_ServiceIsFull} was reached in $\lambda$. Otherwise, we say that $\lambda$ is \emph{perfect}. \item $\lambda$ is \emph{primary} if, when triggered by the expired deadline of the pending request $q$, this request $q$ has $\operatorname{ptr}_q (t_\lambda) = \text{\sc null}$. Otherwise, $\lambda$ is \emph{secondary}. \end{enumerate} \end{defn} A visualization of a possible set of services can be seen in Figure \ref{fig:FWD_ServiceDiagram}. \begin{figure}[tb] \begin{center} \includegraphics{Figures/ServiceTypes} \end{center} This figure shows a possible set of services in a run of the algorithm. Each service is denoted by a star, where the location of the star indicates the time and level of the service. Primary services are denoted by red stars, and secondary services are denoted by blue stars. Each secondary service charges a previous service, of level one below its own; this charging is denoted by a directed edge from the secondary service to the charged service. Since every service can charge -- or be charged -- at most once, the edges form disjoint paths. A property maintained by the algorithm is that a service ``dominates'' the quadrant of lesser-or-equal level and time -- once such a service occurs, no future secondary service would charge a service in this quadrant. \caption{\label{fig:FWD_ServiceDiagram} Visualization of Services} \end{figure} Fix any input set of requests $Q$. We denote by $\Lambda$ the final set of services by the algorithm. 
For every service $\lambda \in \Lambda$, we denote by $Q_\lambda$ the set of requests served by $\lambda$ (this is identical to the final value of the variable $Q_\lambda$ in the algorithm). We define $c(\lambda)$ to be the cost of the service $\lambda$. For any subset $\Lambda' \subseteq \Lambda$, we also write $c(\Lambda')=\sum_{\lambda\in \Lambda'}c(\lambda)$. Note that $\text{\sc alg} = c(\Lambda)$. We denote the set of primary services made by the algorithm by $\Lambda_1$, and the set of secondary services by $\Lambda_2$, such that $\Lambda = \Lambda_1 \cup \Lambda_2$. We denote the set of charged services by $\Lambda^\circ$. \begin{prop} \label{prop:FWD_UniqueCharge} Each service $\lambda \in \Lambda^\circ$ is charged by at most one service. \end{prop} \begin{proof} Assume for contradiction that $\lambda$ is charged by both $\lambda_1$ and $\lambda_2$, at times $t_1$ and $t_2$ respectively, and assume without loss of generality that $t_1 <t_2$. $\lambda_2$ charged $\lambda$ due to the pending request $q_2$, such that $\ell_{q_2} = \ell_\lambda$ and $\operatorname{ptr}_{q_2}(t_{\lambda_2}) = \lambda$. Note that $q_2$ was pending at the times of both $\lambda$ and $\lambda_2$, and was thus pending at the time of $\lambda_1$. But after $\lambda_1$, all pending requests are of level at least $\ell_{\lambda_1} = \ell_\lambda + 1$, in contradiction to having $\ell_{q_2} = \ell_\lambda$ immediately before $\lambda_2$. \end{proof} The following lemma shows that for a set of requests that are all alive at a common point in time, the collection of charged services serving them contains at most one service of each level. \begin{defn} \label{defn:FWD_IntersectingSet} We say that a set of requests ${Q'}=\{q_{1},\cdots,q_{k}\}$ is \emph{intersecting} if there exists time $t$ such that $t\in [r_{q_i},d_{q_i}]$ for every $i\in\{1,\cdots,k\}$. We call $t$ an \emph{intersection time} of ${Q'}$. \end{defn} \begin{lem} \label{lem:FWD_UniqueClass} Let $Q'$ be an intersecting set of requests.
Let $\Lambda_{Q'} \subseteq \Lambda^{\circ}$ be the set of charged services in which a request from $Q'$ is served. Then for every $j\in\mathbb{Z}$, there exists at most one service $\lambda\in \Lambda_{Q'}$ such that $\ell_{\lambda}=j$. \end{lem} \begin{proof} Assume for contradiction that there exists $j\in\mathbb{Z}$ for which there exist two distinct services $\lambda_{1},\lambda_{2}\in \Lambda_{Q'}$ such that $\ell_{\lambda_{1}}=\ell_{\lambda_{2}}=j$. Assume without loss of generality that $t_{\lambda_1}<t_{\lambda_2}$. In addition, let $q_{1}\in {Q'}$ be a request served by $\lambda_{1}$, and define $q_{2}\in {Q'}$ to be a request served by $\lambda_{2}$. Let $t$ be an intersection time of ${Q'}$. Since $\lambda_1$ is charged, there exists a request $q'$ which was pending at its deadline, triggering a service $\lambda'$, such that $\operatorname{ptr}_{q'}(t_{\lambda'})= \lambda_1$. From the definition of $\operatorname{ptr}_{q'}$, we have that $\ell_{q'} = \ell_{\lambda_1} = j$ at time $t_{\lambda'}$. Thus, the service $\lambda'$ must be of level exactly $j+1$. Also note that $q'$ was eligible for $\lambda_1$. Consider the following two cases: \begin{enumerate} \item $t_{\lambda'}>t_{\lambda_2}$. Since $q'$ was pending at $t_{\lambda_1}$ and at $t_{\lambda'}$, and since $t_{\lambda_1} <t_{\lambda_2} <t_{\lambda'}$, we have that $q'$ was pending at $t_{\lambda_2}$. Observe that $\ell_{q'} = \ell_{\lambda_1}$ at $t_{\lambda_2}$, since $\lambda_1$ occurred before $\lambda_2$. But this means that $q'$ was eligible for $\lambda_2$, but was not served (since it was pending at $t_{\lambda'}$). Thus, $\lambda_2$ set $\ell_{q'} \gets \ell_{\lambda_2}$ in Line \ref{line:FWD_ServiceSetsRequestLevel}, in contradiction to having $\operatorname{ptr}_{q'} (t_{\lambda'})= \lambda_1$. \item $t_{\lambda'} < t_{\lambda_2}$. Consider that since $\operatorname{ptr}_{q'} (t_{\lambda'})= \lambda_1$, we know that $q'$ was eligible for $\lambda_1$.
The service $\lambda_1$ added eligible requests by order of increasing deadline, and thus we know that the deadline of $q'$ is after the deadline of $q_1$. We know that ${Q'}$ is an intersecting set of requests, and thus $r_{q_2} \le d_{q_1}$. Therefore, we have that $r_{q_2} < d_{q'} = t_{\lambda'} < t_{\lambda_2}$, and thus $q_2$ was pending at $t_{\lambda'}$. We know that $q_2$ was eligible for $\lambda_2$, and thus $\ell_{q_2} \le j$ at that time. But this contradicts the fact that after $\lambda'$, every pending request has level at least $\ell_{\lambda'} = j+1$. \end{enumerate} \end{proof} We now move on to proving Theorem \ref{thm:FWD_Competitiveness}. The proof consists of upper-bounding the cost of the algorithm and lower-bounding the cost of the optimal solution. \subsubsection*{\emph{Upper-bounding $\text{\sc alg}$}} We prove the following lemma, which provides an upper bound on the cost of the algorithm. \begin{lem} \label{lem:FWD_ALGUpperBound} $\text{\sc alg} \le O(\gamma) \cdot \left(\sum_{\lambda\in \Lambda_1} 2^{\ell_\lambda} + \sum_{\lambda\in \Lambda^\circ} 2^{\ell_\lambda}\right)$ \end{lem} \begin{prop} \label{prop:FWD_ServiceCostBoundedByLevel} The total cost of a service $\lambda$ is at most $O(\gamma)\cdot 2^{\ell_\lambda}$. \end{prop} \begin{proof} The cost of the service $\lambda$ is the cost of the transmission in Line \ref{line:FWD_ServeRequests}. The cost of this transmission is at most the sum of the three following costs: $c(E_0)$, $c(S)$, and $c(S_{q_\textup{last}})$. The total cost of $E_0$, by definition of $E_0$, is at most $2^{\ell_\lambda}$. The cost $c(S)$ is at most $\gamma \cdot 2^{\ell_\lambda}$. To see this, observe that the loop of Line \ref{line:FWD_AddingRequestsToService} either ends in the first iteration (in which case $S=\emptyset$ and the cost is zero), or continues for two or more iterations.
In the second case, consider the iteration before last -- since we did not break out of the loop, we have that $c(S) \le \gamma \cdot 2^{\ell_\lambda}$. As for the cost $c(S_{q_\textup{last}})$, consider the initial level of $q_{\textup{last}}$. Levels only increase over time, and we know that upon the service $\lambda$ we had that $\ell_{q_{\textup{last}}} \le \ell_\lambda$. Thus, the initial level of $q_{\textup{last}}$ was at most $\ell_\lambda$. According to the way in which the initial level is set, we thus have that $c(S_{q_\textup{last}}) \le 2\gamma\cdot 2^{\ell_\lambda}$. Summing over the three costs completes the proof. \end{proof} \begin{prop} \label{prop:FWD_OnlyFullServicesCharged} Only imperfect services can be charged. \end{prop} \begin{proof} Observe that a perfect service serves all eligible requests. Thus, Line \ref{line:FWD_ServiceSetsRequestLevel} is not called in such a service, which implies that the service is not charged. \end{proof} \begin{proof}[Proof of Lemma \ref{lem:FWD_ALGUpperBound}] Observe that $\text{\sc alg} = c(\Lambda_1) + c(\Lambda_2)$. First, observe that through Proposition \ref{prop:FWD_ServiceCostBoundedByLevel} we have that $c(\Lambda_1) \le O(\gamma) \cdot \sum_{\lambda\in \Lambda_1} 2^{\ell_\lambda}$. It remains to show that $c(\Lambda_2)\le O(\gamma)\cdot \sum_{\lambda\in \Lambda^\circ} 2^{\ell_\lambda}$. Observe that every secondary service $\lambda$ of level $j$ charges a previous service $\lambda'\in \Lambda^\circ$ of level $(j-1)$. From Proposition \ref{prop:FWD_ServiceCostBoundedByLevel}, we have that $c(\lambda) \le O(\gamma)\cdot 2^j$, and thus $c(\lambda) \le O(\gamma)\cdot 2^{\ell_{\lambda'}}$. Summing over all secondary services completes the proof, where Proposition \ref{prop:FWD_UniqueCharge} guarantees that no charged service is counted twice.
\end{proof} \subsubsection*{\emph{Lower-bounding $\text{\sc opt}$}} Fix the optimal solution for the given input, which consists of the services $\Lambda^*$ made at various points in time. Denote by $\text{\sc opt}$ the cost of this optimal solution. To complete the proof of Theorem \ref{thm:FWD_Competitiveness}, we require the following two lemmas which lower-bound the cost of the optimal solution. \begin{lem} \label{lem:FWD_PrimaryLowerBoundsOPT} $\sum_{\lambda\in \Lambda_1} 2^{\ell_\lambda} \le O(1)\cdot \text{\sc opt}$ \end{lem} \begin{lem} \label{lem:FWD_ChargeLowerBoundsOPT} $\sum_{\lambda\in \Lambda^\circ} 2^{\ell_\lambda} \le O(\log |\mathcal{E}|) \cdot \text{\sc opt}$ \end{lem} \begin{proof}[Proof of Lemma \ref{lem:FWD_PrimaryLowerBoundsOPT}] Observe that two primary services $\lambda_1,\lambda_2$ of the same level are triggered by two requests $q_1,q_2$ which are disjoint -- i.e. $[r_{q_1}, d_{q_1}] \cap [r_{q_2}, d_{q_2}] = \emptyset$. Otherwise, if $q_1$ and $q_2$ are not disjoint, then without loss of generality assume that $d_{q_1}\in [r_{q_2}, d_{q_2}]$. In this case, $\lambda_1$ would consider $q_2$, which is eligible (as $q_1,q_2$ are of the same level). This would either lead to $\lambda_1$ serving $q_2$, or $\operatorname{ptr}_{q_2}(t_{\lambda_2})\neq \text{\sc null}$, both of which are contradictions to $\lambda_2$ being primary. Therefore, the requests triggering primary services of any specific level form a set of disjoint intervals. Now, let $m_j$ be the number of primary services of level $j$, and let $j_{\max}$ be the maximum level of a primary service.
Denoting $x^+ = \max(x,0)$, we have that \begin{align*} \sum_{\lambda\in \Lambda_1} 2^{\ell_\lambda} & = \sum_{j=-\infty}^{j_{\max}} m_j \cdot 2^j \\ & \le \sum_{j=-\infty}^{j_{\max}} \left(m_j - \max_{j'>j}\{m_{j'}\} \right)^+ \cdot 2^{j+1} \\ & = 4 \cdot \sum_{j=-\infty}^{j_{\max}} \left(m_j - \max_{j'>j}\{m_{j'}\} \right)^+ \cdot 2^{j-1} \end{align*} where the inequality follows from changing the order of summation and summing a geometric series. Now, consider the optimal solution. For each primary service $\lambda$ triggered by a request $q$, we know that $\ell_{q} = \ell_\lambda -1$, and that $\operatorname{ptr}_{q}(t_\lambda) = \text{\sc null}$. Thus, $\ell_\lambda -1$ was the initial level of $q$, set in $\UponRequest$. Thus, we have that $\text{\sc ND}^*(\{q\})\ge \frac{\text{\sc ND}(\{q\})}{\gamma} \ge 2^{\ell_\lambda-1}$. This implies that the optimal solution must create $m_{j_{\max}}$ services of cost at least $2^{j_{\max}-1}$ each, to serve the (disjoint) requests which trigger level $j_{\max}$ primary services. In addition, the optimal solution must create at least $(m_{j_{\max}-1} - m_{j_{\max}})^+$ \emph{additional} services, of cost at least $2^{j_{\max}-2}$ each, to serve requests that trigger level $(j_{\max} -1)$ primary services. Repeating this argument, for each level $j$ the optimal solution must pay an additional cost of $\left(m_j - \max_{j'>j}\{m_{j'}\} \right)^+ \cdot 2^{j-1}$. Overall, we have that \[\text{\sc opt} \ge \sum_{j=-\infty}^{j_{\max}} \left(m_j - \max_{j'>j}\{m_{j'}\} \right)^+ \cdot 2^{j-1}\] and thus $\sum_{\lambda\in \Lambda_1} 2^{\ell_\lambda} \le 4\cdot \text{\sc opt}$. \end{proof} It remains to prove Lemma \ref{lem:FWD_ChargeLowerBoundsOPT}, i.e. charging $2^{\ell_\lambda}$ for each service $\lambda \in \Lambda^\circ$ to the optimal solution times $O(\log |\mathcal{E}|)$. To do this, we split this charge of $2^{\ell_\lambda}$ between the services of the optimal solution.
Proposition \ref{prop:FWD_LocalCharge} shows that this charge is valid. For a service $\lambda^*\in \Lambda^*$ made by the optimal solution, denote the set of requests served in $\lambda^*$ by $Q_{\lambda^*}$. Recall that for a service $\lambda \in \Lambda$ made by the algorithm, $Q_\lambda$ is the set of requests served by $\lambda$. For every $\lambda \in \Lambda$ and $\lambda^* \in \Lambda^*$, we define for ease of notation $Q_{\lambda \cap \lambda^*} \triangleq Q_\lambda \cap Q_{\lambda^*}$. For a set of requests $Q'$, we denote the cost of the optimal offline solution for $\text{\sc ND}$ on $Q'$ by $\text{\sc ND}^*(Q')$. We also use $\text{\sc ND}^*_{E_0 \gets 0}(Q')$ to refer to the cost of the optimal offline solution for $Q'$ where the costs of the elements $E_0 \subseteq \mathcal{E}$ are set to $0$. For a service $\lambda \in \Lambda$, we denote by $E_0^\lambda$ the value set to $E_0$ in Line \ref{line:FWD_BuyCheapEdges} during the service $\lambda$. The outline of the charging scheme is given in Figure \ref{fig:FWD_ChargingToOptimal}. \begin{figure}[tb] \subfloat[\label{subfig:FWD_ChargingToOptimal_Charging}Charging Scheme]{\includegraphics[width=0.45\textwidth]{Figures/ChargingScheme_Deadline.pdf}} \hfill \subfloat[\label{subfig:FWD_ChargingToOptimal_Sink}Charges to Optimal Service]{\includegraphics[width=0.45\textwidth]{Figures/ChargingScheme_OptLoadDeadline.pdf}}\\ \quad Subfigure \ref{subfig:FWD_ChargingToOptimal_Charging} shows the services of $\Lambda^\circ$ and the services of the optimal algorithm, as well as the charging of costs to the optimal solution. The amount $\min\{2^{\ell_\lambda}, \text{\sc ND}^*_{E_0^\lambda\gets 0}(Q_{\lambda\cap\lambda^*})\}$ is charged by the service $\lambda \in \Lambda^\circ$ to the optimal service $\lambda^*$. In the proof of Lemma \ref{lem:FWD_ChargeLowerBoundsOPT}, we show that these charges are sufficient, i.e. each service $\lambda\in \Lambda^\circ$ charges at least $2^{\ell_\lambda}$.
\quad Subfigure \ref{subfig:FWD_ChargingToOptimal_Sink} shows the validity of the charging, given in Proposition \ref{prop:FWD_LocalCharge}. This proposition shows that the total amount charged to an optimal service $\lambda^*$ exceeds its cost by a factor of at most $O(\log |\mathcal{E}|)$. This is shown by partitioning the services which charge cost to $\lambda^*$ into three types. The first type (green) is low-level services, which are shown to charge a total of at most $O(1)\cdot c(\lambda^*)$. The second type (yellow) is medium-level services. Each of these charges at most $c(\lambda^*)$, but there are at most $O(\log |\mathcal{E}|)$ such yellow services. The last type (red), high-level services, are shown to charge $0$ to $\lambda^*$. \caption{\label{fig:FWD_ChargingToOptimal} Visualization of the Charging Scheme} \end{figure} \begin{prop} \label{prop:FWD_LocalCharge} There exists a constant $\beta$ such that for every optimal service $\lambda^* \in \Lambda^*$, we have that \begin{equation} \label{eq:FWD_ChargePerOptimalService} \sum_{\lambda\in \Lambda^\circ}\min\{2^{\ell_\lambda},\text{\sc ND}^*_{E_0^\lambda\gets 0}(Q_{\lambda \cap \lambda^*})\} \le \beta\log |\mathcal{E}| \cdot c(\lambda^*) \end{equation} \end{prop} \begin{proof} Fix an optimal service $\lambda^* \in \Lambda^*$. Denote by $\Lambda'\subseteq \Lambda^\circ$ the subset of charged services made by the algorithm in which a request from $Q_{\lambda^*}$ is served (other services, for which $Q_{\lambda \cap \lambda^*}=\emptyset$, need not be considered). Observe that $Q_{\lambda^*}$ is an intersecting set, as the optimal solution served $Q_{\lambda^*}$ at a single point in time. Lemma \ref{lem:FWD_UniqueClass} implies that for every level $j$, there exists at most one $j$-level service in $\Lambda'$. Define $\ell = \lfloor \log (c(\lambda^*)) \rfloor$. Now, consider the following cases for a service $\lambda\in \Lambda'$: \begin{enumerate} \item $\ell_\lambda \le \ell$.
Each such $\lambda$ contributes at most $2^{\ell_\lambda}$ to the left-hand side of Equation \ref{eq:FWD_ChargePerOptimalService}. Summing over at most one service from each level yields a geometric sum which is at most $2^{\ell +1} \le 2\cdot c(\lambda^*)$. \item $\ell < \ell_\lambda < \ell + \lceil \log |\mathcal{E}| \rceil+ 1$. For such $\lambda$, observe that $\min\{2^{\ell_\lambda}, \text{\sc ND}^*_{E_0^\lambda \gets 0}(Q_{\lambda \cap \lambda^*})\} \le \text{\sc ND}^*(Q_{\lambda \cap \lambda^*}) \le c(\lambda^*)$. Summing over at most a single service from each level, the total contribution to the left-hand side of Equation \ref{eq:FWD_ChargePerOptimalService} from these levels is at most $\lceil \log |\mathcal{E}| \rceil\cdot c(\lambda^*)$. \item $\ell_\lambda \ge \ell + \lceil \log |\mathcal{E}| \rceil +1$. Observe that $\min\{2^{\ell_\lambda}, \text{\sc ND}^*_{E_0^\lambda \gets 0}(Q_{\lambda \cap \lambda^*})\} \le \text{\sc ND}^*_{E_0^\lambda \gets 0}(Q_{\lambda^*})$. We now claim that $\text{\sc ND}^*_{E_0^\lambda \gets 0}(Q_{\lambda^*}) =0$, which implies that the total contribution from these levels to the left-hand side of Equation \ref{eq:FWD_ChargePerOptimalService} is $0$. Indeed, consider that every element in $\lambda^*$ costs at most $c(\lambda^*) \le 2^{\ell +1}$. Thus, since $2^{\ell_\lambda} \ge 2^{\ell+1} \cdot |\mathcal{E}|$, we have that $\lambda$ added all elements of $\lambda^*$ to $E_0^\lambda$ in Line \ref{line:FWD_BuyCheapEdges}. Thus, the element set of $\lambda^*$ is itself a feasible solution for $Q_{\lambda^*}$ of cost $0$, proving the claim. \end{enumerate} Summing over the contributions from each level completes the proof.
\end{proof} \begin{proof}[Proof of Lemma \ref{lem:FWD_ChargeLowerBoundsOPT}] It is enough to show that for every charged service $\lambda \in \Lambda^\circ$, we have that \begin{equation} \label{eq:FWD_GlobalChargeIsLocalCharge} 2^{\ell_\lambda} \le \sum_{\lambda^* \in \Lambda^*} \min\{2^{\ell_\lambda},\text{\sc ND}^*_{E_0^\lambda\gets 0}(Q_{\lambda \cap \lambda^*})\} \end{equation} Summing over all $\lambda \in \Lambda^\circ$ and using Proposition \ref{prop:FWD_LocalCharge} would immediately yield the lemma. If one of the summands on the right-hand side of Equation \ref{eq:FWD_GlobalChargeIsLocalCharge} is $2^{\ell_\lambda}$, the claim clearly holds, and the proof is complete. Otherwise, the right-hand side is exactly $\sum_{\lambda^* \in \Lambda^*}\text{\sc ND}^*_{E_0^\lambda\gets 0}(Q_{\lambda \cap \lambda^*})$. Observe that $\bigcup_{\lambda^* \in \Lambda^*} Q_{\lambda \cap \lambda^*} = Q_\lambda$, and thus a feasible solution for $Q_\lambda$ is to take the union of the elements of the optimal solutions for $Q_{\lambda \cap \lambda^*}$ for every $\lambda^*$. This implies that \[ \text{\sc ND}^*_{E_0^\lambda\gets 0}(Q_\lambda) \le \sum_{\lambda^* \in \Lambda^*}\text{\sc ND}^*_{E_0^\lambda\gets 0}(Q_{\lambda \cap \lambda^*}) \] We claim that $2^{\ell_\lambda} \le \text{\sc ND}^*_{E_0^\lambda\gets 0}(Q_\lambda)$, which completes the proof. Indeed, from Proposition \ref{prop:FWD_OnlyFullServicesCharged}, we know that $\lambda$ is an imperfect service. This means that during the construction of $Q_\lambda$, the loop of Line \ref{line:FWD_AddingRequestsToService} was exited through the $\Break$ command of Line \ref{line:FWD_ServiceIsFull}. Observing the value of the variable $S'$ at that line, we have that $c(S') \ge \gamma\cdot 2^{\ell_\lambda}$.
Since $S'$ was obtained from a call to $\text{\sc ND}_{E_0^\lambda \gets 0}(Q_\lambda)$, the guarantee of the approximation algorithm for $\text{\sc ND}$ implies that $\text{\sc ND}^*_{E_0^\lambda \gets 0}(Q_\lambda) \ge 2^{\ell_\lambda}$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:FWD_Competitiveness}] The competitiveness of the algorithm results immediately from Lemmas \ref{lem:FWD_ALGUpperBound}, \ref{lem:FWD_PrimaryLowerBoundsOPT} and \ref{lem:FWD_ChargeLowerBoundsOPT}. As for the running time, it is clear that the main cost of the algorithm is calling the approximation algorithm $\text{\sc ND}$, and that this is done $O(|Q|)$ times (every iteration of the loop in Line \ref{line:FWD_AddingRequestsToService} adds a request to the ongoing service). \end{proof} \section{Applications and Extensions of the Deadline Framework} \label{sec:APP} In this section, we apply the framework to solving some network design problems in the deadline model, as well as describe some extensions of the framework. \subsection{Edge-Weighted Steiner Tree and Steiner Forest} \label{subsec:APP_SFD} In this subsection, we consider the edge-weighted Steiner tree problem with deadlines. In this problem, we are given a (simple) graph $G=(V,E)$ of $n$ nodes, with a cost function $c:E \to \mathbb{R}^+$ on the edges. In addition, the input designates a node $\rho \in V$ as the \emph{root}. Requests arrive over time, each with an associated deadline, where each request is a terminal $u\in V$. At any point in time, the algorithm may transmit some subset of edges $E'\subseteq E$, at a cost which is $\sum_{e\in E'} c(e)$. A pending request $q$ for a node $u\in V$ is considered served by this transmission if $u$ is in the same connected component as $\rho$ in the subgraph $G'=(V,E')$. A more general problem is the edge-weighted Steiner forest problem with deadlines. In this problem, we are again given a simple graph $G=(V,E)$ of $n$ nodes, and a cost function $c:E\to \mathbb{R}^+$ on the edges.
Each request is now a pair of terminals $(u_1,u_2)$, where $u_1,u_2\in V$. Again, the algorithm can transmit a subset of edges $E'$, paying $\sum_{e\in E'} c(e)$, and serving any pending request $q$ on $(u_1,u_2)$ such that $u_1,u_2$ are in the same connected component in $G'=(V,E')$. Observe that Steiner tree with deadlines is a special case of Steiner forest with deadlines where each requested pair contains the root $\rho$. The Steiner forest with deadlines problem is a special case of the $\text{\sc ND}$ problem we described in Section \ref{sec:FWD}. The collection of elements in this case is the set of edges. For a request $q$ between two terminals $(u_1,u_2)$, the set $X_q$ of transmissions satisfying $q$ is the set of all transmissions $E'\subseteq E$ such that $u_1$ and $u_2$ are in the same connected component in the subgraph $(V,E')$. We apply the framework of Section \ref{sec:FWD} to the Steiner forest with deadlines problem, thus obtaining an algorithm for both Steiner tree and Steiner forest with deadlines. The following theorem is due to Goemans and Williamson~\cite{Goemans1992}. \begin{pthm}[\cite{Goemans1992}] \label{thm:APP_SFD_GoemansWilliamson} There exists a deterministic $2$-approximation for (offline) edge-weighted Steiner forest. \end{pthm} Plugging the algorithm of Theorem \ref{thm:APP_SFD_GoemansWilliamson} into the framework of Section \ref{sec:FWD}, and observing that $\log |E| \le 2\log n$, we obtain the following theorem. \begin{thm} There exists an $O(\log n)$-competitive deterministic algorithm for edge-weighted Steiner forest with deadlines which runs in polynomial time. \end{thm} \subsubsection*{Strong Edge-Weighted Steiner Forest} In the original Steiner forest problem (without deadlines), requesting pairs could be used to ensure connectivity between more than two nodes in the graph. Indeed, one could guarantee connectivity between $k$ nodes by releasing $k-1$ pair requests. In the Steiner forest with deadlines problem, this is no longer the case.
Since the transmissions serving the $k-1$ pair requests can occur at different times, there is no guarantee that there exists a point in time in which \emph{all} $k$ nodes are connected. This motivates the \emph{strong} Steiner forest problem with deadlines, in which requests consist of \emph{subsets} of nodes which must be connected at the same time. The corresponding offline problem is still regular Steiner forest (since subset requests can be reduced to pair requests in the offline setting). Thus, we can apply the framework to the approximation algorithm of Goemans and Williamson~\cite{Goemans1992} as for the standard Steiner forest with deadlines, and obtain the following theorem. \begin{thm} There exists an $O(\log n)$-competitive deterministic algorithm for strong edge-weighted Steiner forest with deadlines which runs in polynomial time. \end{thm} \subsection{Multicut} In this subsection, we consider the multicut problem with deadlines. In this problem, we are again given a (simple) graph $G=(V,E)$ of $n$ nodes, with a cost function $c:E\to \mathbb{R}^+$ on the edges. Requests arrive over time, each with an associated deadline, where each request is a pair of terminals $\{u_1,u_2\}\subseteq V$. At any point in time, the algorithm may choose to momentarily disrupt a subset of edges $E'\subseteq E$, at a cost of $\sum_{e\in E'}c(e)$. A pending request $q$, which consists of the pair of terminals $\{u_1, u_2\}$, is served by this disruption if $u_1$ and $u_2$ are in two distinct connected components in the graph $G'=(V,E\backslash E')$. This problem is a special case of the $\text{\sc ND}$ problem we described in Section \ref{sec:FWD}. The collection of elements in this case is again the set of edges. For any request $q$ for a pair of terminals $\{u_1,u_2\}$, the set of satisfying transmissions $X_q$ is the collection of subsets of edges of the form $E'$ such that $u_1$ and $u_2$ are in two distinct connected components in the subgraph $(V, E\backslash E')$.
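To make the reduction concrete, membership of a disruption $E'$ in $X_q$ amounts to a connectivity test on the graph with the disrupted edges removed. The following Python sketch is our own illustration (the function and variable names are not from the framework); it uses a simple union-find structure:

```python
def serves_multicut_request(n, edges, disrupted, u1, u2):
    """Check whether disrupting the edge indices in `disrupted` serves the
    multicut request {u1, u2}, i.e. whether u1 and u2 lie in two distinct
    connected components of (V, E \\ E').

    n: number of nodes, labeled 0..n-1; edges: list of (a, b) pairs;
    disrupted: set of indices into `edges` forming the disruption E'.
    """
    parent = list(range(n))

    def find(x):
        # Find the representative of x's component, with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # Union the endpoints of every edge that was NOT disrupted.
    for i, (a, b) in enumerate(edges):
        if i not in disrupted:
            parent[find(a)] = find(b)

    return find(u1) != find(u2)
```

For example, on the path $0-1-2$, disrupting the edge $(1,2)$ serves the request $\{0,2\}$, while disrupting nothing does not.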
The following result is due to Garg \textit{et al.}~\cite{Garg1993}. \begin{pthm}[\cite{Garg1993}] \label{thm:APP_MCD_GargEtAl} There exists a deterministic, polynomial-time, $O(\log n)$-approximation for multicut. \end{pthm} Plugging the approximation algorithm of Theorem \ref{thm:APP_MCD_GargEtAl} into the framework of Section \ref{sec:FWD}, and observing that $\log |E| \le 2\log n$, yields the following theorem. \begin{thm} \label{thm:MCD_Competitiveness} There exists a deterministic $O(\log^2 n)$-competitive algorithm for multicut with deadlines which runs in polynomial time. \end{thm} \subsubsection*{Strong Multicut} As was the case in Steiner forest, using pair requests in the original offline multicut problem could ensure disconnection between subsets of nodes, which is not the case for the deadline problem. This again motivates a strong version of multicut with deadlines, in which each request is a collection of nodes to be simultaneously disconnected from one another through disrupting some edges. As in the Steiner forest problem, the fact that these subset requests can be reduced in the offline case to pair requests allows us to use the approximation algorithm of Theorem \ref{thm:APP_MCD_GargEtAl} in the framework of Section \ref{sec:FWD}, yielding the following theorem. \begin{thm} There exists an $O(\log^2 n)$-competitive deterministic algorithm for strong multicut with deadlines which runs in polynomial time. \end{thm} \subsection{Node-Weighted Steiner Forest} The Steiner forest (and Steiner tree) problems have also been considered in the setting in which vertices, rather than edges, are bought. In this subsection, we apply the framework in this setting. Formally, in the node-weighted Steiner forest with deadlines problem, we are given a graph $G=(V,E)$ such that $|V|=n$, and a cost function $c:V\to \mathbb{R}^+$ over the vertices. Each request $q$ consists of two terminals $u_1,u_2 \in V$, and comes with an associated deadline.
At any point in time, the algorithm may transmit a subset of vertices $V'\subseteq V$, at a cost of $\sum_{v\in V'} c(v)$. This transmission serves a pending request $q$ if $u_1$ and $u_2$ are in the same connected component in the subgraph induced by $V'$ (and in particular $u_1,u_2 \in V'$). The node-weighted Steiner forest is a special case of the $\text{\sc ND}$ problem we described in Section \ref{sec:FWD}. The collection of elements in this case is the set of nodes. For a request $q$ for a pair of terminals $(u_1,u_2)$, the set of satisfying transmissions $X_q$ is the collection of node subsets $V'\subseteq V$ such that $u_1$ and $u_2$ are connected in the subgraph induced by $V'$. We apply the framework of Section \ref{sec:FWD} to the node-weighted Steiner forest with deadlines problem, thus obtaining an algorithm for the node-weighted versions of both Steiner tree and Steiner forest with deadlines. The following theorem is due, independently, to Bateni \emph{et al.}~\cite{DBLP:journals/siamcomp/BateniHL18} and Chekuri \emph{et al.}~\cite{DBLP:conf/approx/ChekuriEV12}. \begin{pthm}[\cite{DBLP:journals/siamcomp/BateniHL18,DBLP:conf/approx/ChekuriEV12}] \label{thm:APP_NWSFD_BateniEtAl} There exists a polynomial-time, deterministic $O(\log n)$-approximation algorithm for node-weighted Steiner forest. \end{pthm} Applying the framework of Section \ref{sec:FWD} yields the following theorem. \begin{thm} There exists an $O(\log^2 n)$-competitive deterministic algorithm for node-weighted Steiner forest with deadlines which runs in polynomial time. \end{thm} \subsection{Edge-Weighted Steiner Network} The (edge-weighted) Steiner network problem with deadlines is identical to the Steiner forest with deadlines problem in Subsection \ref{subsec:APP_SFD}, except that every pair request $q$ on two terminals $u_1,u_2 \in V$ also has an associated demand $f(q) \in \mathbb{N}$. 
A transmission of edges $E'$ now serves a pending request $q$ if there exist $f(q)$ edge-disjoint paths from $u_1$ to $u_2$ in the graph $(V,E')$. The edge-weighted Steiner network is again a special case of $\text{\sc ND}$. As in the Steiner forest, the elements are the edges of the graph. For each request $q$ for a pair of terminals $\{u_1,u_2\}$ with demand $f(q)$, the set of satisfying transmissions $X_q$ is the collection of subsets of edges $E'\subseteq E$ such that there exist $f(q)$ edge-disjoint paths from $u_1$ to $u_2$ in $(V,E')$. The following theorem is due to Jain~\cite{DBLP:journals/combinatorica/Jain01}. \begin{pthm}[\cite{DBLP:journals/combinatorica/Jain01}] \label{thm:APP_SND_Jain} There exists a polynomial-time, deterministic, $2$-approximation for offline edge-weighted Steiner network. \end{pthm} Plugging the offline approximation algorithm of Theorem \ref{thm:APP_SND_Jain} into the framework of Section \ref{sec:FWD}, and again observing that $\log |E| \le 2\log n$, yields the following theorem. \begin{thm} \label{thm:APP_SND_Competitiveness} There exists an $O(\log n)$-competitive deterministic algorithm for edge-weighted Steiner network with deadlines which runs in polynomial time. \end{thm} \subsection{Directed Steiner Tree} In the directed Steiner tree problem with deadlines, we are given a (simple) directed graph $G=(V,E)$, a cost function $c:E\to \mathbb{R}^+$ on the edges, and a designated root $\rho \in V$. Each request $q$ is a terminal $v\in V$. At any point in time, the algorithm may transmit a set of directed edges $E'\subseteq E$. A pending request $q$ for a terminal $v$ is served by this transmission if there exists a (directed) path from $\rho$ to $v$ in the subgraph $G'=(V,E')$. This problem is also a special case of $\text{\sc ND}$ in the same way as the undirected Steiner tree.
That is, the elements are the edges of the graph, and a set of edges $E'\subseteq E$ is in $X_q$, for a request $q$ of a terminal $v$, if there exists a directed path from $\rho$ to $v$ in the graph $(V,E')$. The following theorem is due to Grandoni \textit{et al.}~\cite{Grandoni:2019:OAA:3313276.3316349}. \begin{pthm}[\cite{Grandoni:2019:OAA:3313276.3316349}] \label{thm:APP_DSD_Grandoni} There exists a randomized $O(\frac{\log^2 n }{\log \log n})$-approximation for directed Steiner tree, which runs in quasi-polynomial time (specifically, $O(n^{\log^5 n})$ time). \end{pthm} Plugging the algorithm of Theorem \ref{thm:APP_DSD_Grandoni} into the framework of Section \ref{sec:FWD}, and again observing that $\log |E| \le 2\log n$, yields the following theorem. \begin{thm} \label{thm:APP_DSD_Competitiveness} There exists a randomized $O(\frac{\log ^3 n}{\log \log n})$-competitive algorithm for directed Steiner tree with deadlines, which runs in quasi-polynomial time. \end{thm} \subsection{Facility Location} In the facility location with deadlines problem, we are given a graph $G=(V,E)$, such that $|V|=n$. We are also given a facility opening cost $f:V\to \mathbb{R}^+$, and weights $w:E\to \mathbb{R}^+$ on the edges. Requests arrive over time on the nodes of the graph, each with an associated deadline. At any point in time, the algorithm may choose a node $v\in V$, open a facility at that node, and choose some subset of pending requests $Q'$ to connect to that facility. This action serves the pending requests of $Q'$. Immediately after performing this atomic action, the facility disappears. The total cost of this transmission is $f(v)$ (the opening cost of the facility) plus $\sum_{q\in Q'} \delta(v,q)$, where $\delta$ is the shortest-path metric on nodes induced by the edge weights $w$. The set of elements in this case is the set of nodes $V$ (where buying a node means opening a facility at that node).
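The cost of a single transmission can be computed directly from this definition: run a shortest-path computation from the chosen facility node and add the opening cost. A minimal sketch (names are ours; `adj` maps each node to its list of weighted neighbors):

```python
import heapq

def transmission_cost(adj, f, v, request_nodes):
    """Cost of opening a facility at v and connecting the chosen pending
    requests: f(v) plus the sum of delta(v, q) over the connected
    requests, where delta is the shortest-path metric induced by the
    edge weights."""
    # Dijkstra from the facility node v.
    dist = {v: 0.0}
    heap = [(0.0, v)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for w, length in adj[u]:
            nd = d + length
            if nd < dist.get(w, float("inf")):
                dist[w] = nd
                heapq.heappush(heap, (nd, w))
    return f[v] + sum(dist[q] for q in request_nodes)
```

On the weighted path $0 \xrightarrow{2} 1 \xrightarrow{3} 2$ with opening cost $f(0)=5$, connecting a single request at node $2$ to a facility at node $0$ costs $5 + 5 = 10$.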
Observe that facility location does \textbf{not} conform neatly to the $\text{\sc ND}$ structure of the problems addressed in our framework -- indeed, opening facilities does not immediately serve requests, and paying an additional connection cost is required. One could force the problem into the framework by adding the connections (i.e. shortest paths from a request to a facility) as elements -- however, as each request requires a different connection, this would result in $\Theta(n|Q|)$ elements, where $Q$ is the set of requests. The resulting loss over the approximation algorithm in this case would be $\Theta(\log n +\log |Q|)$. Nevertheless, we show that the framework can be applied without any modification to the facility location problem, with only the facilities as elements, yielding the desired guarantee ($O(\log n)$ loss). In this subsection, we modify the necessary parts in the analysis of the framework in order to fit the facility location problem. First, we consider a constant-approximation algorithm for the offline facility location problem. There are many such algorithms; the following is due to Jain and Vazirani~\cite{Jain:2001:AAM:375827.375845}. \begin{pthm}[\cite{Jain:2001:AAM:375827.375845}] \label{thm:APP_FLD_JainVazirani} There exists a polynomial-time, deterministic $\gamma_{\text{\sc FL}}$-approximation for offline facility location, where $\gamma_{\text{\sc FL}}=3$. \end{pthm} We prove that plugging the approximation algorithm of Theorem \ref{thm:APP_FLD_JainVazirani} into the framework of Section \ref{sec:FWD} yields the following theorem. \begin{thm} \label{thm:APP_FLD_Competitiveness} There exists an $O(\log n)$-competitive deterministic algorithm for facility location with deadlines, which runs in polynomial time.
\end{thm} \begin{rem} \label{rem:APP_FLD_SolutionNature} While the framework for facility location is the same as for $\text{\sc ND}$, an important remark must be made about the nature of facility location solutions. In the original framework for $\text{\sc ND}$, we hold solutions in variables, where a solution $S$ is a subset of the universe of elements $\mathcal{E}$. In facility location, a solution $S$ to $\text{\sc FL}(Q)$ (the offline facility location problem on the set of requests $Q$) is of a different form -- $S$ contains a subset $F\subseteq \mathcal{E} = V$ of facilities to open, \emph{plus} a mapping $\phi:Q\to F$ from the input requests to the facilities of $F$, which determines the connection cost of the solution. The cost of the solution $S=(F,\phi)$, referred to as $c(S)$ in the framework, is now the opening cost $\sum_{v\in F} f(v)$ plus the connection cost $\sum_{q\in Q} \delta(q,\phi(q))$. As for transmissions in Line \ref{line:FWD_ServeRequests}, transmitting $E_0 \cup S \cup S_{q_{\text{last}}}$ refers to transmitting the facilities of $E_0$, $S$ and $S_{q_{\text{last}}}$, and connecting requests according to the mappings of $S$ and $S_{q_{\text{last}}}$. \end{rem} \subsubsection*{Analysis} Theorem \ref{thm:APP_FLD_Competitiveness} would follow immediately if we could reprove Lemmas \ref{lem:FWD_ALGUpperBound}, \ref{lem:FWD_PrimaryLowerBoundsOPT} and \ref{lem:FWD_ChargeLowerBoundsOPT} for facility location with deadlines. The proofs of Lemmas \ref{lem:FWD_ALGUpperBound} and \ref{lem:FWD_PrimaryLowerBoundsOPT} go through in an identical way to the original framework. As for Lemma \ref{lem:FWD_ChargeLowerBoundsOPT}, the only change required is in the proof of Proposition \ref{prop:FWD_LocalCharge}. We now go over the necessary changes. \begin{proof}[Proof of Proposition \ref{prop:FWD_LocalCharge} for facility location] We use the notation defined in the original proof of Proposition \ref{prop:FWD_LocalCharge}.
Observe that the proof of the proposition goes through until the case analysis of each service $\lambda \in \Lambda'$. The first two cases (namely, that $\ell_\lambda \le \ell$ or $\ell < \ell_\lambda < \ell + \lceil \log |\mathcal{E}| \rceil+ 1$) go through entirely. The difference is in the third case, in which $\ell_\lambda \ge \ell + \lceil \log |\mathcal{E}| \rceil +1$. As argued in the original proof, it holds that all facilities that were opened in $\lambda^*$ are also open in $\lambda$. Now, observe that there exists a solution for $Q_{\lambda \cap \lambda^*}$ which connects each request to its facility in $\lambda^*$. Therefore, we have that $\text{\sc ND}_{E_0^\lambda \gets 0}(Q_{\lambda \cap \lambda^*})$ is at most the connection cost of the requests of $Q_{\lambda \cap \lambda^*}$ in $\lambda^*$. Summing over all services $\lambda$ of this class yields that the total contribution to the left-hand side of Equation \ref{eq:FWD_ChargePerOptimalService} is at most the connection cost incurred by the optimal solution in $\lambda^*$, which is at most $c(\lambda^*)$. Combining this third case with the previous two cases completes the proof. \end{proof} \subsection{Exponential-Time Algorithms} In online algorithms, one is often interested in the information-theoretic bounds on competitiveness, without limitations on running time. The framework of Section \ref{sec:FWD} supports such constructions -- plugging in the algorithm which solves the offline problem optimally yields the following theorem. \begin{thm} There exists an $O(\log |\mathcal{E}|)$-competitive algorithm for $\text{\sc ND}$ with deadlines (with no guarantees on running time). In particular, there exists an $O(\log n)$-competitive algorithm for all problems in this paper, where $n$ is the number of nodes in the input graph. \end{thm} \section{Delay Framework} \label{sec:FWY} We now consider the $\text{\sc ND}$ problem with delay.
This problem is identical to the problem with deadlines, except that instead of a deadline, each request $q$ is associated with a continuous, monotone-nondecreasing delay function $d_q(t)$, which is defined for every $t$, and tends to infinity as $t$ tends to infinity (ensuring that every request must be served eventually). The framework we present for problems with delay requires an approximation algorithm for the prize-collecting variant of the offline problem. In the prize-collecting $\text{\sc ND}$ problem, denoted $\text{\sc PCND}$, the input is again a set of requests $Q$, and an additional penalty function $\pi:Q \to \mathbb{R}^+$. A solution is a subset of elements $E$ which serves some subset $Q'\subseteq Q$ of the requests. The cost of the solution is $\sum_{e\in E} c(e) + \sum_{q\in Q\backslash Q'} \pi(q)$ -- that is, the total cost of the elements bought plus the penalties for unserved requests. \begin{thm} \label{thm:FWY_Competitiveness} If there exists a deterministic (randomized) $\gamma$-approximation algorithm for $\text{\sc PCND}$ which runs in polynomial time, then there exists an $O(\gamma \log |\mathcal{E}|)$-competitive deterministic (randomized) algorithm for $\text{\sc ND}$ with delay, which runs in polynomial time. \end{thm} Note that Remarks \ref{rem:FWD_QuasiPolynomialTime} and \ref{rem:FWD_LasVegas} apply here as well. \subsection{The Framework} We now describe the framework for $\text{\sc ND}$ with delay. \paragraph*{Calls to the prize-collecting approximation algorithm.} The framework makes calls to the approximation algorithm $\text{\sc PCND}$ for the prize-collecting problem. Such a call is denoted by $\text{\sc PCND}(Q,\pi)$, where $Q$ is the set of requests and $\pi:Q\to \mathbb{R}^+$ is the penalty function. Some calls are made with the subscript $E_0 \gets 0$, for some subset of elements $E_0$. This notation means calling $\text{\sc PCND}$ on the modified input in which the cost of the elements of $E_0$ is set to $0$.
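The prize-collecting objective just defined can be evaluated as follows; the serving predicate is the problem-specific one (e.g. connectivity for Steiner forest), and the $E_0 \gets 0$ subscript amounts to zeroing the corresponding entries of the cost table before the call. A sketch with illustrative names:

```python
def pcnd_cost(cost, solution, requests, penalty, serves):
    """Objective of PCND: the cost of the bought elements plus the
    penalty of every request that the bought set fails to serve."""
    bought = sum(cost[e] for e in solution)
    unserved = sum(penalty[q] for q in requests if not serves(solution, q))
    return bought + unserved
```

For instance, with element costs $\{e_1: 3, e_2: 4\}$ and penalties $\{q_1: 10, q_2: 1\}$, buying $\{e_1\}$ when this serves only $q_1$ costs $3 + 1 = 4$, while buying nothing costs $10 + 1 = 11$.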
The framework also makes calls to $\text{\sc ND}$, an approximation algorithm for the original (not prize-collecting) variant of $\text{\sc ND}$. This approximation algorithm is obtained through calling $\text{\sc PCND}$ with penalties of $\infty$ for each request. \paragraph*{Investment counter.} The algorithm maintains for each request $q$ an \emph{investment counter} $h_q$. Raising this counter corresponds to paying for delay (both past and future) incurred by the request $q$. When referring to the value of the counter at a point in time $t$, we write $h_q(t)$. \begin{defn}[Residual delay] We define the \emph{residual delay} of a pending request $q$ at time $t$ to be $\rho_q(t) = \max(0, d_q(t) - h_q(t))$. Intuitively, this is the amount of delay incurred by $q$ which no service has covered until time $t$. For a set of requests $Q$ pending at time $t$, we also define $\rho_Q(t) = \sum_{q\in Q} \rho_q(t)$. \end{defn} \begin{defn}[Penalty function $\pi_{t \to t'}$] At a time $t$, and for every future time $t'>t$, we define the penalty function $\pi_{t\to t'}$ on pending requests at time $t$ in the following way. For a request $q$ pending at time $t$, we have that $\pi_{t \to t'}(q)=\max(0, d_q(t')-h_q(t))$. Intuitively, the penalty for a request, as evaluated at time $t$, is the future residual delay of the request if the algorithm does not raise its investment counter until time $t'$. \end{defn} As in the deadline framework, the delay framework assigns a level $\ell_q$ to each pending request $q$. \begin{defn}[Critical level] At any point during the algorithm, we say that a level $j$ becomes \emph{critical} if the total residual delay of requests of level at most $j$ reaches $2^j$. \end{defn} \paragraph*{Algorithm's description.} The framework is given in Algorithm \ref{alg:FWY_Algorithm}. The algorithm consists of waiting until any level $j$ becomes critical, and then calling $\UponCritical(j)$. 
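The bookkeeping in the definitions above can be sketched directly; here each pending request is a (level, delay function, counter) triple, and levels are scanned over a bounded range for simplicity (all names are ours, not part of the framework):

```python
def residual_delay(d_q, h_q, t):
    """rho_q(t) = max(0, d_q(t) - h_q(t)): the delay incurred by q that
    no service has covered yet."""
    return max(0.0, d_q(t) - h_q)

def critical_levels(pending, t, max_level=20):
    """Levels j that are critical at time t: the total residual delay of
    pending requests of level at most j has reached 2^j."""
    crit = []
    for j in range(max_level + 1):
        total = sum(residual_delay(d, h, t)
                    for lvl, d, h in pending if lvl <= j)
        if total >= 2 ** j:
            crit.append(j)
    return crit
```

For example, a single level-$0$ request with $d_q(t)=t$ and $h_q=0$ makes level $0$ critical at $t=1$, and levels $0,1,2$ critical at $t=4$.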
Whenever a new request $q$ is released, the function $\UponRequest(q)$ is called. The algorithm maintains the level of each pending request $q$, denoted $\ell_q$. This level is initially the logarithmic class of the cost of the cheapest solution (i.e. set of elements) serving $q$ (in fact, the algorithm estimates this by calling the approximation algorithm $\text{\sc ND}$ and dividing by its approximation ratio). Over time, the level of a request may increase. When a level $j$ becomes critical, this triggers a service $\lambda$ of level $\ell_{\lambda} = j+1$. Intuitively, the service $\lambda$ is responsible for all pending requests of level at most $\ell_\lambda$ -- these are called the eligible requests for $\lambda$. The service first starts by raising the investment counters of eligible requests until they all have zero residual delay. After doing so, the service observes the first point in the future in which such an eligible request has positive residual delay. The goal of the service is to push this point in time (called the forwarding time) as far into the future as possible, while spending at most $O(\gamma\cdot 2^{\ell_\lambda})$ cost. There are two methods of accomplishing this: the first is to raise the investment counters of the requests, and the second is to serve the requests. The best course of action is to combine both methods in a smart manner -- deciding which eligible requests are to be served, and raising the investment counters for the remainder of the eligible requests. To achieve this, the service finds a solution to a prize-collecting instance which captures the problem of pushing back the forwarding time to some future time $t'$. In this instance, the requests are the eligible requests for $\lambda$, and the penalty for a request $q$ is the amount by which its investment counter $h_q$ must be raised so that $q$'s future residual delay would be $0$ at time $t'$.
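In the procedure \ForwardTime, the next candidate time $t''$ is the point at which the total penalty increase of the still-unserved requests reaches $\gamma\cdot 2^j$. Since each $d_q$ is continuous, nondecreasing, and tends to infinity, this point can be located numerically, e.g. by binary search. A sketch under these assumptions (names are ours; `requests` maps a request to its delay function and current counter value $h_q(t)$):

```python
def forward_threshold_time(requests, t_prev, budget, hi=1e9, eps=1e-6):
    """Earliest time t'' >= t_prev at which the accumulated penalty
    increase sum_q (pi(q, t'') - pi(q, t_prev)) reaches `budget`
    (gamma * 2^j in the framework), found by binary search."""
    def pen(q, tt):  # pi_{t -> tt}(q) = max(0, d_q(tt) - h_q(t))
        d, h = requests[q]
        return max(0.0, d(tt) - h)
    def increase(tt):
        return sum(pen(q, tt) - pen(q, t_prev) for q in requests)
    lo, hi_t = t_prev, hi
    # Assumes the budget is reached before time `hi`; this holds for
    # large enough `hi`, since every d_q tends to infinity.
    while hi_t - lo > eps:
        mid = (lo + hi_t) / 2.0
        if increase(mid) >= budget:
            hi_t = mid
        else:
            lo = mid
    return hi_t
```

With a single request with $d_q(t)=t$ and counter $0$, a budget of $5$ is reached at $t''=5$; with two such requests the penalty accumulates twice as fast, so the same budget is reached at $t''=2.5$.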
The forwarding time, as well as the corresponding prize-collecting solution, are returned by the call to the function \ForwardTime. If the solution returned by $\ForwardTime$ does not serve any requests (i.e. it only raises investment counters), the service modifies it to serve some arbitrary eligible request. While this does not affect the approximation ratio of the algorithm, it bounds the number of services by the number of requests, which bounds the running time of the algorithm. Now, the algorithm increases the investment counter of eligible requests which are not served by the solution (paying for their future delay until the forwarding time). The algorithm also upgrades the level of those requests, in a similar way to the deadline algorithm. Finally, the service transmits its solution, serving the remainder of the eligible requests. \DontPrintSemicolon \begin{algorithm} \renewcommand{\algorithmcfname}{Algorithm} \caption{\label{alg:FWY_Algorithm}Network Design with Delay Framework} \algorithmfootnote{\footnotemark[\value{footnote}] For the sake of the algorithm and its analysis, no requests outside $Q'_\lambda$ are considered served by this transmission.} \BlankLine \BlankLine \EFn{\UponRequest{$q$}}{ Set $ S_q \gets \text{\sc ND}(\{q\})$ Set $ I_q \gets \frac{c(S_q)}{\gamma} $. Set $\ell_q \gets \left\lfloor\log \left(I_q\right)\right\rfloor$ \tcp*[h]{the level of the request} } \BlankLine \EFn(\tcp*[h]{Upon a level $j$ becoming critical at time $t$}){\UponCritical{$j$}}{ Start a new service $\lambda$, which we now describe. Set $\ell_\lambda \gets j + 1$. \label{line:FWY_SetServiceLevel} \BlankLine \ForEach(\tcp*[h]{Clean residual delay of eligible requests}){request $q$ such that $\ell_q \le \ell_\lambda$}{Set $h_q \gets h_q + \rho_q(t)$} \label{line:FWY_CleanEligibleDelay} \BlankLine Set $E_0 = \left\{ e\in \mathcal{E} \middle| c(e) \le \frac{2^{\ell_\lambda}}{|\mathcal{E}|} \right\}$. 
\tcp*[h]{Buy all cheap elements} \label{line:FWY_BuyCheapEdges} \BlankLine \tcp*[h]{Forward time} Let $Q_\lambda$ be all pending requests of level at most $\ell_\lambda$. Set $(\tau,S) \gets \ForwardTime(E_0, Q_\lambda,\ell_\lambda)$. Let $Q'_\lambda \subseteq Q_\lambda$ be the subset of requests served in $S$. \BlankLine \tcp*[h]{make sure that the service serves at least one pending request} \lIf{$Q'_\lambda = \emptyset$}{\label{line:FWY_ForceService}for an arbitrary $q\in Q_\lambda$, set $Q'_\lambda\gets \{q\}$ and $S \gets S_q$.} \BlankLine \tcp*[h]{pay for future delay of requests unserved by the transmission, and upgrade requests} \ForEach(){$q\in Q_\lambda\backslash Q'_\lambda$} { Raise $h_q$ by $\pi_{t\to \tau}(q)$.\label{line:FWY_MakeInvestment} Set $\ell_{q} \gets \ell_\lambda$. \label{line:FWY_ServiceSetsRequestLevel} } \BlankLine Transmit the solution $E_0 \cup S$, serving the requests $Q'_\lambda$.\footnotemark \label{line:FWY_PerformService} } \BlankLine \end{algorithm} \begin{algorithm} \renewcommand{\algorithmcfname}{Procedure} \caption{Time Forwarding Procedure} \tcc*[h]{This function, called at time $t$, returns a future time $t''$ and a solution $S\subseteq \mathcal{E}$ to transmit which is a ``good'' solution to minimize the future delay of $Q_{\lambda}$ until time $t''$. See Proposition \ref{prop:FWY_TimeForwardingGuarantee} for the formal guarantee of this function.} \Fn{\ForwardTime{$E_0$, $Q_\lambda$, $j$}}{ Set $t'\gets t$, $Q'_\lambda\gets \emptyset$ and $S \gets \emptyset$. \label{line:FWY_InitSolution} \While{$Q_\lambda \backslash Q'_\lambda \neq \emptyset$}{ Let $t'' > t$ be the time in which $\sum_{q\in Q_\lambda \backslash Q'_\lambda}(\pi_{t\to t''}(q)-\pi_{t\to t'}(q))$ reaches $\gamma\cdot 2^{j}$. Set $S' \gets \text{\sc PCND}_{E_0\gets 0}(Q_\lambda, \pi_{t\to t''})$. \lIf{$c(S') \ge \gamma\cdot 2^{j}$}{\Break} Set $Q'_\lambda\subseteq Q_\lambda$ to be the set of requests served in $S'$. Set $t' \gets t''$ and $S \gets S'$. 
} \Return{$(t'',S)$} } \end{algorithm} \subsection{Analysis} As in the deadline case, we first consider some definitions and properties of the algorithm before delving into the proof of Theorem \ref{thm:FWY_Competitiveness}. \subsubsection*{\emph{Definitions and Algorithm's Properties}} Let $\lambda$ be a service which occurs at some time $t$, making a call to $\ForwardTime(E_0, Q_\lambda, j)$. This call returns the time $\tau$ and a solution $S$ for $\text{\sc PCND}_{E_0 \gets 0}(Q_\lambda, \pi_{t_\lambda\to \tau})$, where $\pi_\tau$ is as defined in $\lambda$. We prove the following property. \begin{prop} \label{prop:FWY_TimeForwardingGuarantee} The time $\tau$ and solution $S$ returned by $\ForwardTime$ have the following properties: \begin{enumerate} \item The cost of $S$ as a solution to $\text{\sc PCND}_{E_0 \gets 0}(Q_\lambda, \pi_{t \to \tau})$ is at most $2\gamma\cdot 2^j$. \item Either $S$ serves all requests in $Q_\lambda$ \textbf{or} $\text{\sc PCND}^*_{E_0 \gets 0}(Q_\lambda, \pi_{t \to \tau}) \ge 2^j$. \end{enumerate} \end{prop} \begin{proof} To prove the first property, consider the final values of the variables $t'$ and $t''$ in \ForwardTime, where the final value of $t''$ is the returned time $\tau$. Observe that the function maintains that $S$ has a cost of at most $\gamma \cdot 2^j$ as a solution for $\text{\sc PCND}_{E_0 \gets 0}(Q_\lambda, \pi_{t \to t'}$. Observing the lines in which the final values of $t'$ and $t''$ were set, we have one of two cases. In the first case, in which $t''=t'$, we are done. Otherwise, we have that $\sum_{q\in Q_\lambda \backslash Q'_\lambda} (\pi_{t \to t''}(q) - \pi_{t \to t'}(q)) = \gamma \cdot 2^j$. In words, the total penalty increase for the requests not served in $S$ from $\pi_{t\to t'}$ to $\pi_{t\to t''}$ is $\gamma\cdot 2^j$. Thus, the solution $S$ has a total cost of at most $2\gamma \cdot 2^j$, proving the first property. As for the second property, consider the loop of \ForwardTime. 
If the loop exits through its condition, $S$ serves all requests in $Q_\lambda$ and we are done. Otherwise, the loop is ended by the \Break command, in which case we know that the cost of $S'$ as a solution to $\text{\sc PCND}_{E_0 \gets 0}(Q_\lambda, \pi_{t \to t''})$ is at least $\gamma \cdot 2^j$. But since $S'$ is a $\gamma$-approximation for this problem, we have that $\text{\sc PCND}^*_{E_0 \gets 0}(Q_\lambda,\pi_{t\to t''}) \ge 2^j$, completing the proof. \end{proof} For every service $\lambda$, we denote by $t_\lambda$ the time in which $\lambda$ occurred. In the running of $\lambda$, consider the time $\tau$ returned by \ForwardTime. We call this time the \emph{forwarding time} of $\lambda$, and denote it by $\tau_\lambda$. We call the value set to $\ell_{\lambda}$ the \emph{level} of $\lambda$; observe that this value does not change once defined. Similarly, for a request $q$, we call $\ell_q$ the level of $q$. Note that unlike services, the level of a request may change over time (more specifically, the level can only increase). We redefine some of the definitions we used in the deadline case to fit the delay case. \begin{defn}[Service Pointer] Let $q$ be a request. We define $\operatorname{ptr}_q$ to be the last service $\lambda$ such that $\lambda$ sets $\ell_q \gets \ell_\lambda$ in Line \ref{line:FWY_ServiceSetsRequestLevel}. If there is no such service, we write $\operatorname{ptr}_q = \text{\sc null}$. Similarly, we define $\operatorname{ptr}_q(t)$ to be the last service $\lambda$ before time $t$ such that $\lambda$ sets $\ell_q \gets \ell_\lambda$ in Line \ref{line:FWY_ServiceSetsRequestLevel} (with $\operatorname{ptr}_q(t)=\text{\sc null}$ if there is no such service). \end{defn} \begin{defn} \label{defn:FWY_EligibleRequest} Consider a service $\lambda$ and a request $q$ which is pending upon the start of $\lambda$, and has $\ell_q \le \ell_\lambda$ at that time. We say that $q$ was \emph{eligible} for $\lambda$.
\end{defn} In the algorithm, the set of eligible requests for a service $\lambda$ is the value of the variable $Q_\lambda$. We use this notation throughout the analysis, denoting the set of requests eligible for a service $\lambda$ by $Q_\lambda$. \begin{defn} \label{defn:FWY_ServiceTypes} For a service $\lambda$: \begin{enumerate} \item We say that $\lambda$ is \emph{charged} if there exists some future service $\lambda'$, which is triggered by some level $j$ becoming critical, and there exists a pending request $q$ which is of level $j$ and has positive residual delay immediately before $\lambda'$, such that $\operatorname{ptr}_{q} (t_{\lambda'}) = \lambda$. We say that $\lambda'$ charged $\lambda$. \item We say that $\lambda$ is \emph{perfect} if the solution $S$ returned by $\ForwardTime$ serves all of $Q_\lambda$. Otherwise, we say that $\lambda$ is \emph{imperfect}. \item We say that $\lambda$ is \emph{primary} if, when triggered upon $\ell_\lambda -1$ becoming critical, every pending request $q$ of level exactly $\ell_\lambda-1$ with positive residual delay has $\operatorname{ptr}_q (t_\lambda) = \text{\sc null}$. Otherwise, $\lambda$ is \emph{secondary}. \end{enumerate} \end{defn} Fix any input set of requests $Q$. We denote by $\Lambda$ the final set of services by the algorithm. We denote the set of primary services made by the algorithm by $\Lambda_1$, and the set of secondary services by $\Lambda_2$, such that $\Lambda = \Lambda_1 \cup \Lambda_2$. We denote the set of charged services by $\Lambda^\circ$. The algorithm explicitly maintains the following invariant. \begin{inv} \label{inv:FWY_SubcriticalInvariant} At any point $t$ during the algorithm, for every set of pending requests $Q'$ of level at most $j$, it holds that $\rho_{Q'}(t) \le 2^j$. \end{inv} The following observation is ensured by Lines \ref{line:FWY_PerformService} and \ref{line:FWY_MakeInvestment}. 
\begin{obs} \label{obs:FWY_NoResidualDelayUntilForwardTime} Let $\lambda$ be a service, and let $q$ be a request eligible for $\lambda$. Then $q$ has no residual delay between $t_\lambda$ and $\tau_\lambda$. \end{obs} \begin{prop} \label{prop:FWY_UniqueCharge} Each service is charged by at most one service. \end{prop} \begin{proof} Assume for contradiction that there exists a service $\lambda$ which is charged by both $\lambda_1$ and $\lambda_2$, at times $t_1$ and $t_2$ respectively, and assume without loss of generality that $t_1 <t_2$. Service $\lambda_2$ charged $\lambda$ due to the pending request $q_2$, such that $\ell_{q_2} = \ell_\lambda$ and $\operatorname{ptr}_{q_2}(t_{\lambda_2}) = \lambda$. $q_2$ was pending before both $\lambda$ and $\lambda_2$, and was thus pending before $\lambda_1$. But after $\lambda_1$, all pending requests are of level at least $\ell_{\lambda_1} = \ell_\lambda + 1$, in contradiction to having $\ell_{q_2} = \ell_\lambda$ immediately before $\lambda_2$. \end{proof} \begin{prop} \label{prop:FWY_ChargingOnlyAfterForwardTime} Suppose a service $\lambda\in \Lambda^\circ$ is charged by a service $\lambda'$. Then $t_{\lambda'} \ge \tau_\lambda$. \end{prop} \begin{proof} Suppose for contradiction that $t_{\lambda'} < \tau_\lambda$. Denote the level of service $\lambda$ by $j$. The service $\lambda'$ must be triggered by level $j$ becoming critical. Let $Q'$ be the set of requests of level at most $j$ with positive residual delay immediately before $t_{\lambda'}$. Since $\lambda'$ charged $\lambda$, there must be a request $q\in Q'$ such that $\operatorname{ptr}_q (t_{\lambda'}) = \lambda$. Thus, $q$ was eligible for $\lambda$. But then Observation \ref{obs:FWY_NoResidualDelayUntilForwardTime} implies that $q$ has no residual delay at $t_{\lambda'}$, contradicting $q\in Q'$. \end{proof} \begin{lem} \label{lem:FWY_UniqueClass} Let $Q'$ be a set of requests, and let $r_{Q'} = \max_{q\in Q'} r_q$ be the latest release time of a request in $Q'$.
Let $\Lambda$ be the set of charged services for which a request from $Q'$ was eligible and such that for every $\lambda\in \Lambda$ we have $\tau_\lambda \ge r_{Q'}$. Then for every $j\in\mathbb{Z}$, there exists at most one service $\lambda\in \Lambda$ such that $\ell_{\lambda}=j$. \end{lem} \begin{proof} Assume for contradiction that there exists $j\in\mathbb{Z}$ for which there exist two distinct services $\lambda_{1},\lambda_{2}\in\Lambda$ such that $\ell_{\lambda_{1}}=\ell_{\lambda_{2}}=j$. Assume without loss of generality that $t_{\lambda_1}<t_{\lambda_2}$. Let $\lambda'$ be the service that charged $\lambda_1$. The service $\lambda'$ must be a level $j+1$ service. Consider the two following cases: \begin{enumerate} \item $t_{\lambda'}>t_{\lambda_2}$. Since $\lambda'$ charged $\lambda_1$, there must be a request $q$ such that $\ell_q = \ell_{\lambda_1}$ and $\operatorname{ptr}_q (t_{\lambda'}) = \lambda_1$. Since $\operatorname{ptr}_q (t_{\lambda'}) = \lambda_1$, we have that $q$ was eligible for $\lambda_1$. Thus, since $t_{\lambda_1} < t_{\lambda_2} < t_{\lambda'}$, $q$ was pending at $\lambda_2$. Since the levels of requests can only increase over time, it must be that $\ell_q \le \ell_{\lambda_1} = \ell_{\lambda_2}$ immediately before $t_{\lambda_2}$. But then $q$ was eligible for $\lambda_2$; since $q$ is pending at $t_{\lambda'} > t_{\lambda_2}$, it was not served by $\lambda_2$, and thus $\lambda_2$ sets $\ell_q \gets \ell_{\lambda_2}$ in Line \ref{line:FWY_ServiceSetsRequestLevel}, in contradiction to having $\operatorname{ptr}_q (t_{\lambda'})= \lambda_1$. \item $t_{\lambda'} < t_{\lambda_2}$. Using Proposition \ref{prop:FWY_ChargingOnlyAfterForwardTime}, we know that $t_{\lambda'} \ge \tau_{\lambda_1}$. Since $\lambda_1 \in \Lambda$, we thus have that $t_{\lambda'} \ge r_{Q'}$. Now, consider all pending requests of $Q'$ before $\lambda_2$. Since $r_{Q'} \le t_{\lambda'} < t_{\lambda_2}$, these requests were also pending before $\lambda'$.
Since after $\lambda'$ all pending requests are of level at least $\ell_{\lambda'} = j+1$, none of these requests are eligible for $\lambda_2$. This is in contradiction to $\lambda_2 \in \Lambda$. \end{enumerate} This concludes the proof. \end{proof} \subsubsection*{\emph{Upper-bounding $\text{\sc alg}$.}} \begin{prop} \label{prop:FWY_DelayBoundedByCounters} The total delay cost of the algorithm is at most $\sum_{q\in Q} h_q$, for the final values of the counters $\{h_q\}_{q\in Q}$. \end{prop} \begin{proof} Consider a request $q$, served in some service $\lambda$ at time $t$. Since $q$ was served in $\lambda$, we know that $\ell_q \le \ell_\lambda$ at $t$. From Line \ref{line:FWY_CleanEligibleDelay}, we know that the service $\lambda$ raised $h_q$ so that the residual delay of $q$ becomes $0$. After this line, $h_q$ is at least $d_q(t)$. Since $q$ is served in $\lambda$, its delay does not increase further. \end{proof} To bound the cost of the algorithm, it is thus enough to bound the total cost of transmissions plus the sum of the final values of $h_q$ over requests $q\in Q$. We define the cost of a service $\lambda$, denoted by $c(\lambda)$, as the sum of the cost of the transmission made in that service and the total amount by which $\sum_{q\in Q} h_q$ is raised in that service. From Proposition \ref{prop:FWY_DelayBoundedByCounters}, we know that $\sum_{\lambda\in \Lambda} c(\lambda)$ is an upper bound on the cost of the algorithm. We denote this sum by $\widehat{\text{\sc alg}}$. \begin{lem} \label{lem:FWY_ALGUpperBound} $\widehat{\text{\sc alg}} \le O(\gamma) \cdot \left(\sum_{\lambda\in \Lambda_1} 2^{\ell_\lambda} + \sum_{\lambda\in \Lambda^\circ} 2^{\ell_\lambda}\right)$ \end{lem} \begin{prop} \label{prop:FWY_ServiceCostBoundedByLevel} The total cost of a service $\lambda$ is at most $O(\gamma)\cdot 2^{\ell_\lambda}$.
\end{prop} \begin{proof} The cost incurred in $\lambda$ is at most the sum of the following costs: \begin{enumerate} \item The cost of raising the investment counters at Line \ref{line:FWY_CleanEligibleDelay}, which is at most $2^{\ell_\lambda}$ (using Invariant \ref{inv:FWY_SubcriticalInvariant}). \item The cost of transmitting the elements $E_0$ in Line \ref{line:FWY_PerformService}, which is at most $2^{\ell_\lambda}$. \item The added cost of transmitting $S$ in Line \ref{line:FWY_PerformService} (given that the transmission already contains $E_0$), and the cost of raising investment counters of requests by $\pi_{t \to \tau}$ in Line \ref{line:FWY_MakeInvestment}. Observe that this cost is in fact the cost of $S$ as a solution for $\text{\sc PCND}_{E_0 \gets 0}(Q_\lambda, \pi_{t\to \tau})$. Since $S$ was obtained from a call to $\ForwardTime(E_0, Q_\lambda, \ell_{\lambda})$, and using Proposition \ref{prop:FWY_TimeForwardingGuarantee}, we have that this cost is at most $2\gamma\cdot 2^{\ell_\lambda}$. \item The cost of the possible transmission in Line \ref{line:FWY_ForceService}. The transmission is of $S_q$, for a request $q$ which is eligible for $\lambda$. Thus, we know that the cost of the transmission is at most $2\gamma\cdot 2^{\ell_\lambda}$. \end{enumerate} Overall, the costs sum to $O(\gamma)\cdot 2^{\ell_\lambda}$, as required. \end{proof} In a perfect service, all eligible requests are served. Thus, Line \ref{line:FWY_ServiceSetsRequestLevel} is never called in a perfect service. The next observation follows. \begin{obs} \label{obs:FWY_OnlyFullServicesCharged} Only imperfect services can be charged. \end{obs} \begin{proof}[Proof of Lemma \ref{lem:FWY_ALGUpperBound}] Observe that $\widehat{\text{\sc alg}} = c(\Lambda_1) + c(\Lambda_2)$. First, observe that through Proposition \ref{prop:FWY_ServiceCostBoundedByLevel} we have that $c(\Lambda_1) \le O(\gamma) \cdot \sum_{\lambda\in \Lambda_1} 2^{\ell_\lambda}$.
It remains to show that $c(\Lambda_2)\le O(\gamma)\cdot \sum_{\lambda\in \Lambda^\circ} 2^{\ell_\lambda}$. Observe that every secondary service $\lambda$ of level $j$ charges a previous service $\lambda'\in \Lambda^\circ$ of level $j-1$, which is imperfect by Observation \ref{obs:FWY_OnlyFullServicesCharged}. From Proposition \ref{prop:FWY_ServiceCostBoundedByLevel}, we have that $c(\lambda) \le O(\gamma)\cdot 2^j$, and thus $c(\lambda) \le O(\gamma)\cdot 2^{\ell_{\lambda'}}$. Summing over all secondary services completes the proof, where Proposition \ref{prop:FWY_UniqueCharge} guarantees that no charged service is counted twice. \end{proof} \subsubsection*{\emph{Lower-bounding $\text{\sc opt}$.}} Fix the set of services $\Lambda^*$ made in the optimal solution. To complete the proof of Theorem \ref{thm:FWY_Competitiveness}, we require the following two lemmas which lower-bound the cost of the optimal solution. \begin{lem} \label{lem:FWY_PrimaryLowerBoundsOPT} $\sum_{\lambda\in \Lambda_1} 2^{\ell_\lambda} \le O(1)\cdot \text{\sc opt}$ \end{lem} \begin{lem} \label{lem:FWY_ChargeLowerBoundsOPT} $\sum_{\lambda\in \Lambda^\circ} 2^{\ell_\lambda} \le O(\log n) \cdot \text{\sc opt}$ \end{lem} \begin{proof}[Proof of Lemma \ref{lem:FWY_PrimaryLowerBoundsOPT}] Consider a service $\lambda\in \Lambda_1$ of level $j$. $\lambda$ is triggered upon level $j-1$ becoming critical. Let $Q^\text{\sc crit}_\lambda$ be the set of requests with positive residual delay of level at most $j-1$ which triggered $\lambda$. Define $\sigma_\lambda$ to be the earliest release time of a request in $Q^\text{\sc crit}_\lambda$. Fix any level $j$. We claim that the intervals of the form $[\sigma_\lambda,t_\lambda]$ for every $j$-level service $\lambda\in \Lambda_1$ are disjoint. Assume otherwise, that some $ [\sigma_{\lambda_1},t_{\lambda_1}] $ and $ [\sigma_{\lambda_2},t_{\lambda_2}] $ intersect. Without loss of generality, assume that $t_{\lambda_1} \in [\sigma_{\lambda_2},t_{\lambda_2}]$. 
Then there exists a request $q \in Q^\text{\sc crit}_{\lambda_2}$ which was pending during $\lambda_1$, after which $\ell_q$ would be at least $j$, in contradiction to $q\in Q^\text{\sc crit}_{\lambda_2}$. Now, define $Q^=_\lambda\subseteq Q^\text{\sc crit}_\lambda$ to be the subset of requests in $Q^\text{\sc crit}_\lambda$ which are of level exactly $\ell_\lambda - 1$. Denote by $t_\lambda^-$ the time $t_\lambda$ immediately before the service $\lambda$. Since level $\ell_\lambda - 1$ became critical at $t_\lambda$, we have that $\rho_{Q^\text{\sc crit}_\lambda}(t_\lambda^-) \ge 2^{\ell_\lambda-1}$; using Invariant \ref{inv:FWY_SubcriticalInvariant}, we have that $\rho_{Q^\text{\sc crit}_\lambda \backslash Q^=_\lambda}(t_\lambda^-) \le 2^{\ell_\lambda-2} $. Thus, we have that $\rho_{Q^=_\lambda}(t_\lambda^-)\ge 2^{\ell_\lambda-2}$. In addition, since $\lambda \in \Lambda_1$, we have that $\operatorname{ptr}_q(t_\lambda) = \text{\sc null}$ for every $q\in Q^=_\lambda$. Thus, $I_q$ as defined in $\UponRequest$ is at least $2^{\ell_q} = 2^{\ell_\lambda - 1}$. Observe that according to the definition of $I_q$, and the approximation guarantee of $\text{\sc ND}$, we have that $I_q$ is a lower bound on the cost of any solution which serves $q$. Thus, we have that during the interval $[\sigma_\lambda,t_\lambda]$ the optimal solution has either served a request from $Q^=_\lambda$ (at a cost of at least $2^{\ell_\lambda-1}$), or paid a delay of $2^{\ell_\lambda-2}$ for the requests of $Q^=_\lambda$. Now, let $m_j$ be the number of primary services of level $j$, and let $j_{\max}$ be the maximum level of a primary service. Denoting $x^+ = \max(x,0)$, consider the optimal solution. It must pay at least $2^{j_{\max} -2}$ in either delay or service for each of the $m_{j_{\max}}$ intervals of the form $[\sigma_\lambda,t_\lambda]$ (for $\lambda\in \Lambda_1$ of level $j_{\max}$). For each such service $\lambda$, we charge the optimal solution $2^{j_{\max}-2}$ either for its delay or for a single service in the corresponding interval in which a request from $Q^=_\lambda$ was served. Now, consider the next level $j_{\max}-1$.
We know that the optimal solution must incur $2^{j_{\max} -3}$ for each of the $m_{j_{\max} -1}$ intervals of this level. However, the optimal solution might already be charged for a service of level $j_{\max}$, and might use this service to save costs, serving an interval with cost less than $2^{j_{\max} -3}$. But this can happen at most $m_{j_{\max}}$ times, and each such occurrence can affect only a single interval of level $j_{\max} -1$ (since those intervals are disjoint). Thus, we can charge at least $(m_{j_{\max}-1} -m_{j_{\max}})^+$ intervals an amount of $2^{j_{\max} -3}$, either for delay or for a single service of a level-$(j_{\max}-2)$ request. Repeating this argument, we get that the optimal solution pays at least $\left(m_j - \max_{j'>j} \{m_{j'} \} \right)^+ \cdot 2^{j-2}$ for each level $j$. As for the cost of the algorithm, we have that \begin{align*} c(\Lambda_1) &\le O(1)\cdot \sum_{j=-\infty}^{j_{\max}} m_j \cdot 2^j \\ & \le O(1)\cdot \sum_{j=-\infty}^{j_{\max}} \left(m_j - \max_{j'>j}\{m_{j'}\} \right)^+ \cdot 2^{j+1} \\ & \le O(1)\cdot \text{\sc opt} \end{align*} where the first inequality uses Proposition \ref{prop:FWY_ServiceCostBoundedByLevel}. The second inequality holds since $m_j \le \max_{j'\ge j}\{m_{j'}\} \le \sum_{j'\ge j} \left(m_{j'} - \max_{j''>j'}\{m_{j''}\}\right)^+$ (a telescoping bound); exchanging the order of summation and summing the geometric series $\sum_{j\le j'} 2^j \le 2^{j'+1}$ then yields the claim. \end{proof} It remains to prove Lemma \ref{lem:FWY_ChargeLowerBoundsOPT} by charging, for each service $\lambda \in \Lambda^\circ$, the amount $2^{\ell_\lambda}$ to the optimal solution, such that the total charge is at most $O(\log |\mathcal{E}|)$ times the optimal cost. As in the deadline case, we split the charge of $2^{\ell_\lambda}$ between the services made by the optimal solution, and show that each charge is locally valid. For a service $\lambda^* \in \Lambda^*$ of the optimal solution, we denote by $Q_{\lambda^*}$ the set of requests served by $\lambda^*$. We define the cost associated with $\lambda^*$, denoted by $c(\lambda^*)$, to be the transmission cost of $\lambda^*$ plus the total delay cost of the requests $Q_{\lambda^*}$ in the optimal solution.
Recall that for a service $\lambda \in \Lambda$ made by the algorithm, $Q_\lambda$ is the set of requests eligible for $\lambda$. We define $Q_{\lambda \cap \lambda^*} = Q_\lambda \cap Q_{\lambda^*}$. For a set of requests $Q'$, we denote the cost of the optimal offline solution for $\text{\sc PCND}$ on $Q'$, with respect to a penalty function $\pi:Q'\to \mathbb{R}^+$, by $\text{\sc PCND}^*(Q',\pi)$. We also use $\text{\sc PCND}^*_{E_0 \gets 0}(Q', \pi)$ to refer to the cost of the optimal offline solution for $Q'$ where the costs of the elements $E_0 \subseteq \mathcal{E}$ are set to $0$. We also write $\text{\sc PCND}^*(Q',\pi)$ where $\pi$ is defined on a \emph{superset} of $Q'$; the penalty function in this case is the restriction of $\pi$ to $Q'$. For a service $\lambda \in \Lambda$, we denote by $E_0^\lambda$ the value set to $E_0$ in Line \ref{line:FWY_BuyCheapEdges} during the service $\lambda$. The outline of the proof of Lemma \ref{lem:FWY_ChargeLowerBoundsOPT} is shown in Figure \ref{fig:FWY_ChargingToOptimal}. \begin{figure}[tb] \subfloat[\label{subfig:FWY_ChargingToOptimal_Charging}Charging Scheme]{\includegraphics[width=0.45\textwidth]{Figures/ChargingScheme_Delay.pdf}} \hfill \subfloat[\label{subfig:FWY_ChargingToOptimal_Sink}Charges to Optimal Service]{\includegraphics[width=0.45\textwidth]{Figures/ChargingScheme_OptLoadDelay.pdf}}\\ \quad In a similar way to Subfigure \ref{subfig:FWD_ChargingToOptimal_Charging}, Subfigure \ref{subfig:FWY_ChargingToOptimal_Charging} shows the services of $\Lambda^\circ$ and the services of the optimal algorithm, as well as the charging of costs to the optimal solution. The amount $\min\{2^{\ell_\lambda}, \text{\sc PCND}^*_{E_0^\lambda\gets 0}(Q_{\lambda\cap\lambda^*}, \pi_{t_\lambda \to \tau_\lambda})\}$ is charged by the service $\lambda \in \Lambda^\circ$ to the optimal service $\lambda^*$. The proof of Lemma \ref{lem:FWY_ChargeLowerBoundsOPT} shows that these charges are sufficient, i.e.
each service $\lambda\in \Lambda^\circ$ charges at least $2^{\ell_\lambda}$. \quad Subfigure \ref{subfig:FWY_ChargingToOptimal_Sink} shows the validity of the charging, given in Proposition \ref{prop:FWY_LocalCharge}. As in the deadline case, this proposition shows that the total amount charged to an optimal service $\lambda^*$ exceeds its cost by a factor of at most $O(\log |\mathcal{E}|)$. The argument is similar to Proposition \ref{prop:FWD_LocalCharge}. However, in addition to the three types of services in the deadline case (green, yellow, red), there is an additional type of service (pink), which consists of services $\lambda$ with $\tau_\lambda \le t_{\lambda^*}$. These pink services are shown to charge a total of at most $c(\lambda^*)$. \caption{\label{fig:FWY_ChargingToOptimal} Visualization of Services} \end{figure} \begin{prop} \label{prop:FWY_LocalCharge} There exists a constant $\beta$ such that for every optimal service $\lambda^* \in \Lambda^*$, we have that \begin{equation} \label{eq:FWY_ChargePerOptimalService} \sum_{\lambda\in \Lambda^\circ}\min\{ 2^{\ell_\lambda},\text{\sc PCND}^*_{E_0^\lambda \gets 0}(Q_{\lambda \cap \lambda^*}, \pi_{t_\lambda \to \tau_\lambda}) \} \le \beta\log |\mathcal{E}| \cdot c(\lambda^*) \end{equation} \end{prop} \begin{proof} Fix any service $\lambda^* \in \Lambda^*$ of the optimal solution. Observe that a service $\lambda \in \Lambda^\circ$ such that $Q_\lambda \cap Q_{\lambda^*} = \emptyset$ does not contribute to the left-hand side of Equation \ref{eq:FWY_ChargePerOptimalService}. Hence, it remains to consider only $\lambda \in \Lambda^\circ$ such that $Q_\lambda \cap Q_{\lambda^*} \neq \emptyset$; denote the set of such services by $\Lambda'$. Define $t^* = \max_{q\in Q_{\lambda^*}} r_q$. Each $\lambda \in \Lambda'$ is in one of the following cases. \paragraph*{Case 1: $\tau_\lambda \le t^*$.} Let $\Lambda^{\le t^*} \subseteq \Lambda'$ be the subset of such services.
For every request $q$ eligible for $\lambda$, define $h_q^\lambda$ to be the value of the investment counter $h_q$ upon the start of $\lambda$. We have: \begin{align*} \sum_{\lambda \in \Lambda^{\le t^*} }\min\{ 2^{\ell_\lambda},\text{\sc PCND}^*_{E_0^\lambda \gets 0}(Q_{\lambda \cap \lambda^*},\pi_{t_\lambda \to \tau_\lambda}) \} &\le \sum_{\lambda \in \Lambda^{\le t^*} } \text{\sc PCND}^*_{E_0^\lambda \gets 0}(Q_{\lambda \cap \lambda^*},\pi_{t_\lambda \to \tau_\lambda}) \\ & \le \sum_{\lambda \in \Lambda^{\le t^*} } \sum_{q\in Q_{\lambda \cap \lambda^*}}\pi_{t_\lambda \to \tau_\lambda}(q) \\ & = \sum_{\lambda \in \Lambda^{\le t^*} } \sum_{q\in Q_{\lambda \cap \lambda^*}} \max\{0,d_{q}(\tau_\lambda) - h_q^\lambda\}\\ & = \sum_{q\in Q_{\lambda^*}}\sum_{\lambda \in \Lambda^{\le t^*} | q\in Q_\lambda} \max\{0,d_{q}(\tau_\lambda) - h_q^\lambda\} \end{align*} Now, fix any request $q\in Q_{\lambda^*}$. We claim that $\sum_{\lambda\in \Lambda^{\le t^*} | q\in Q_\lambda} \max\{0,d_q(\tau_\lambda) - h_q^\lambda\} \le d_q(t^*)$. To see this, consider the services in the sum by order of occurrence, denoted $\lambda_1,\cdots, \lambda_l$. We prove by induction that $\sum_{i'=1}^i \max\{0,d_q(\tau_{\lambda_{i'}}) - h_q^{\lambda_{i'}}\} \le d_q(t^*)$ for every $i\in [l]$, which proves the claim. This holds for the base case of $i=1$, since $\max\{0,d_q(\tau_{\lambda_{1}}) - h_q^{\lambda_{1}}\} \le d_q(\tau_{\lambda_{1}}) \le d_q(t^*)$. We prove the inductive claim for $i>1$ by assuming it holds for $i-1$. Observe that each of $\lambda_1, \cdots, \lambda_{i-1}$ paid the penalty for $q$ (otherwise $q$ would not be eligible for $\lambda_i$), raising $h_q$ by the corresponding term $\max\{0,d_q(\tau_{\lambda_{i'}}) - h_q^{\lambda_{i'}}\}$. Thus, at the start of $\lambda_i$ we have that $h_q^{\lambda_i} \ge \sum_{i'=1}^{i-1} \max\{0,d_q(\tau_{\lambda_{i'}}) - h_q^{\lambda_{i'}}\}$. If $\max\{0,d_q(\tau_{\lambda_{i}}) - h_q^{\lambda_{i}}\} = 0$, the inductive claim follows directly from the induction hypothesis. Otherwise, \[\sum_{i'=1}^{i} \max\{0,d_q(\tau_{\lambda_{i'}}) - h_q^{\lambda_{i'}}\} \le h_q^{\lambda_i} + \left(d_q(\tau_{\lambda_{i}}) - h_q^{\lambda_{i}}\right) = d_q(\tau_{\lambda_{i}}) \le d_q(t^*)\] where the last inequality holds since $\tau_{\lambda_i} \le t^*$ for every $\lambda_i \in \Lambda^{\le t^*}$.
Overall, for this case, we have that \[\sum_{\lambda \in \Lambda^{\le t^*} }\min\{ 2^{\ell_\lambda},\text{\sc PCND}^*_{E_0^\lambda \gets 0}(Q_{\lambda \cap \lambda^*}, \pi_{t_\lambda \to \tau_\lambda}) \}\le \sum_{q\in Q_{\lambda^*}} d_q(t^*) \le c(\lambda^*) \] where the last inequality is due to the fact that $\lambda^*$ occurs no earlier than $t^*$, and thus the optimal solution incurs the delay of $Q_{\lambda^*}$ up to $t^*$. \paragraph*{Case 2: $\tau_\lambda > t^*$.} Denote by $\Lambda^{>t^*} \subseteq \Lambda'$ the set of such services. Using Lemma \ref{lem:FWY_UniqueClass}, for every level $j$ there exists at most one $j$-level service in $\Lambda^{>t^*}$. Define $\ell = \lfloor \log (c(\lambda^*)) \rfloor$, and consider the following subcases for $\lambda \in \Lambda^{>t^*}$: \begin{enumerate} \item $\ell_\lambda \le \ell$. In this case, we have that $\lambda$ contributes at most $2^{\ell_\lambda}$ to the left-hand side of Equation \ref{eq:FWY_ChargePerOptimalService}. Summing over at most a single service from each level yields a geometric sum which is at most $2^{\ell+1} \le 2\cdot c(\lambda^*)$. \item $\ell < \ell_\lambda < \ell + \lceil \log |\mathcal{E}| \rceil+ 1$. For such $\lambda$, observe that \[\min\{2^{\ell_\lambda}, \text{\sc PCND}^*_{E_0^\lambda \gets 0}(Q_{\lambda \cap \lambda^*}, \pi_{t_\lambda \to \tau_\lambda})\} \le \text{\sc ND}^*(Q_{\lambda^*}) \le c(\lambda^*) \] and thus the service $\lambda$ contributes at most $c(\lambda^*)$ to the left-hand side of Equation \ref{eq:FWY_ChargePerOptimalService}. Summing over at most one $\lambda$ from each level, their total contribution to the left-hand side of Equation \ref{eq:FWY_ChargePerOptimalService} is at most $\lceil \log |\mathcal{E}| \rceil \cdot c(\lambda^*)$. \item $\ell_\lambda \ge \ell + \lceil \log |\mathcal{E}| \rceil +1$.
We claim that $\text{\sc PCND}^*_{E_0^\lambda \gets 0}(Q_{\lambda \cap \lambda^*}, \pi_{t_\lambda \to \tau_\lambda}) = 0$, and thus the contribution to the left-hand side of Equation \ref{eq:FWY_ChargePerOptimalService} from these services is $0$. To prove this claim, observe that $\text{\sc PCND}^*_{E_0^\lambda \gets 0}(Q_{\lambda \cap \lambda^*}, \pi_{t_\lambda \to \tau_\lambda}) \le \text{\sc ND}^*_{E_0^\lambda \gets 0}(Q_{\lambda^*})$. Consider that every element bought in $\lambda^*$ costs at most $c(\lambda^*) \le 2^{\ell +1}$. Thus, since $2^{\ell_\lambda} \ge 2^{\ell+1} \cdot |\mathcal{E}|$, we have that $\lambda$ added all elements of $\lambda^*$ to $E_0$ in Line \ref{line:FWY_BuyCheapEdges}. Note that since $\lambda^*$ served $Q_{\lambda^*}$, we have that $\text{\sc ND}^*_{E_0^\lambda \gets 0}(Q_{\lambda^*}) =0$, as required. \end{enumerate} Summing over the contributions from each level completes the proof. \end{proof} \begin{proof}[Proof of Lemma \ref{lem:FWY_ChargeLowerBoundsOPT}] As in the deadline case, it is enough to show that for every charged service $\lambda \in \Lambda^\circ$, we have that \begin{equation} \label{eq:FWY_GlobalChargeIsLocalCharge} 2^{\ell_\lambda} \le \sum_{\lambda^*\in \Lambda^*} \min\{2^{\ell_\lambda}, \text{\sc PCND}^*_{E_0^\lambda \gets 0}(Q_{\lambda \cap \lambda^*}, \pi_{t_\lambda \to \tau_\lambda})\} \end{equation} Summing over all $\lambda \in \Lambda^\circ$ and using Proposition \ref{prop:FWY_LocalCharge} would immediately yield the lemma. If one of the summands on the right-hand side of Equation \ref{eq:FWY_GlobalChargeIsLocalCharge} is $2^{\ell_\lambda}$, the claim clearly holds, and the proof is complete. Otherwise, the right-hand side is exactly $\sum_{\lambda^* \in \Lambda^*} \text{\sc PCND}^*_{E_0^\lambda \gets 0}(Q_{\lambda \cap \lambda^*}, \pi_{t_\lambda \to \tau_\lambda})$.
Now, since $\bigcup_{\lambda^* \in \Lambda^*} Q_{\lambda \cap \lambda^*} = Q_\lambda$, we can construct a feasible solution for $\text{\sc PCND}_{E_0^\lambda\gets 0}(Q_\lambda, \pi_{t_\lambda \to \tau_\lambda})$ by buying the elements in $\text{\sc PCND}^*_{E_0^\lambda\gets 0}(Q_{\lambda \cap \lambda^*}, \pi_{t_\lambda \to \tau_\lambda})$ for every $\lambda^* \in \Lambda^*$, and paying the penalty for unserved requests. Clearly, the cost of this solution is at most $\sum_{\lambda^* \in \Lambda^*} \text{\sc PCND}^*_{E_0^\lambda \gets 0}(Q_{\lambda \cap \lambda^*}, \pi_{t_\lambda \to \tau_\lambda})$, and thus \[ \text{\sc PCND}^*_{E_0^\lambda\gets 0}(Q_\lambda, \pi_{t_\lambda \to \tau_\lambda}) \le \sum_{\lambda^* \in \Lambda^*} \text{\sc PCND}^*_{E_0^\lambda \gets 0}(Q_{\lambda \cap \lambda^*}, \pi_{t_\lambda \to \tau_\lambda}) \] From Observation \ref{obs:FWY_OnlyFullServicesCharged}, we know that $\lambda$ is an imperfect service. Proposition \ref{prop:FWY_TimeForwardingGuarantee} thus implies that $2^{\ell_\lambda} \le \text{\sc PCND}^*_{E_0^\lambda\gets 0}(Q_\lambda, \pi_{t_\lambda \to \tau_\lambda})$, which completes the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:FWY_Competitiveness}] The competitiveness guarantee follows immediately from Lemmas \ref{lem:FWY_ALGUpperBound}, \ref{lem:FWY_ChargeLowerBoundsOPT} and \ref{lem:FWY_PrimaryLowerBoundsOPT}. As for the running time of the algorithm, it is determined by either Line \ref{line:FWY_BuyCheapEdges}, which takes $O(|\mathcal{E}|)$ time in each service, or by the number of calls made to the prize-collecting approximation algorithm $\text{\sc PCND}$ in the function \ForwardTime. We claim that the total number of calls made in each service is $O(k^2)$, with $k = |Q|$ the number of requests in the input. To see this, fix any service $\lambda$ at time $t$.
Observe that the number of calls made to $\text{\sc PCND}$ in a service $\lambda$ is exactly the number of iterations of the loop in \ForwardTime. Denote the iterations of this loop in the service $\lambda$ by $I_1,\cdots, I_l$. For every iteration $I_i$, we denote by $t_i$ the value of the variable $t''$ set in iteration $I_i$, and denote by $S_i$ the $\text{\sc PCND}$ solution computed in $I_i$. Observe the state after iteration $I_k$ -- we know that the requests of $Q_\lambda$ gather a total delay of at least $k\gamma\cdot 2^{\ell_\lambda}$ between $t$ and $t_k$. Thus, there exists a request $q_1\in Q_\lambda$ which has delay of at least $\gamma\cdot 2^{\ell_\lambda}$. In any solution $S_i$ for $i>k$ (except possibly the final one $S_l$), we have that $q_1$ is served. This is since otherwise the cost of $S_i$ would exceed $\gamma\cdot 2^{\ell_\lambda}$, in contradiction to the loop not ending at the \Break command in \ForwardTime. Next, consider the iterations $I_{k+1},\cdots, I_{2k}$. Using the same argument, we know that there exists a request $q_2 \in Q_\lambda \backslash \{q_1\}$ that gathers at least $\gamma\cdot 2^{\ell_\lambda}$ delay until time $t_{2k}$. Thus, $S_i$ for $2k\le i < l$ serves $q_2$. Repeating this argument, we know that for $i\ge k^2$ the solution $S_i$ must serve all requests $Q_\lambda$, ending the loop. Note that the number of services performed in the algorithm is at most $k$, since each service serves some pending request (as ensured by Line \ref{line:FWY_ForceService}). Thus, the total running time consists of $O(k^3)$ calls to $\text{\sc PCND}$, and $O(k |\mathcal{E}|)$ time for Line \ref{line:FWY_BuyCheapEdges}. This completes the proof. \end{proof} \section{Applications and Extensions of the Delay Framework} In this section, we apply the framework of Section \ref{sec:FWY} to various problems, as we did for the deadline case. The requirement for the delay framework is an approximation algorithm for the prize-collecting problem.
For some of the problems we consider, we cite appropriate prize-collecting algorithms. For others, we use a simple construction which yields a prize-collecting approximation algorithm from an approximation algorithm for the original problem. \paragraph{Edge-Weighted Steiner Tree and Forest.} The following result is due to Hajiaghayi and Jain~\cite{Hajiaghayi2006}. \begin{pthm}[\cite{Hajiaghayi2006}] There exists a polynomial-time, deterministic $3$-approximation for EW prize-collecting Steiner forest. \end{pthm} Plugging the algorithm of the previous theorem into the framework of Section \ref{sec:FWY} yields the following result. \begin{thm} There exists an $O(\log n)$-competitive deterministic algorithm for EW Steiner forest with delay which runs in polynomial time. \end{thm} \paragraph{Multicut.} The result of Garg \textit{et al.}~\cite{Garg1993}, stated in Theorem \ref{thm:APP_MCD_GargEtAl}, is in fact an approximation with respect to the optimal fractional solution for the following LP relaxation (where $\mathcal{P}_q$ is the collection of paths connecting the two terminals of $q$). \begin{equation} \label{eq:MCD_PrimalLP} \arraycolsep=5pt \def\arraystretch{2.2} \begin{array}{lcr} \text{minimize} & \sum_{e\in E}x_e c(e)& \\ \text{subject to} & \sum_{e\in P} x_e \ge 1 & \forall q\in Q,\ \forall P\in \mathcal{P}_q\\ & x_e \ge 0 &\forall e\in E \end{array} \end{equation} The corresponding prize-collecting LP relaxation, for a penalty function $\pi$, is the following. \begin{equation} \label{eq:MCD_PC_PrimalLP} \arraycolsep=5pt \def\arraystretch{2.2} \begin{array}{lcr} \text{minimize} & \sum_{e\in E}x_e c(e) + \sum_{q \in Q} p_q \pi(q)& \\ \text{subject to} & \sum_{e\in P} x_e + p_q \ge 1 & \forall q\in Q,\ \forall P\in \mathcal{P}_q\\ & x_e \ge 0 &\forall e\in E \end{array} \end{equation} The following construction is a folklore method for obtaining a prize-collecting approximation algorithm from an approximation algorithm for the original problem.
First, we solve the prize-collecting LP in Equation \ref{eq:MCD_PC_PrimalLP} to obtain a solution $\left( \{x_e\}_{e\in E},\{p_q\}_{q\in Q} \right)$. For each request $q$ such that $p_q \ge \frac{1}{2}$ the algorithm pays the penalty. The remaining requests are served by calling the approximation algorithm for the original (non-prize-collecting) problem. This construction loses only a constant factor (namely, $2$) over the approximation ratio of the original approximation algorithm: the penalties paid are at most $2\sum_{q\in Q} p_q \pi(q)$, and for each remaining request $q$ we have $\sum_{e\in P} x_e \ge 1 - p_q > \frac{1}{2}$ for every $P \in \mathcal{P}_q$, so that $\{2x_e\}_{e\in E}$ is a feasible fractional solution for those requests. For the case of multicut, first observe that this construction is indeed implementable -- that is, the prize-collecting LP can be solved in polynomial time using a separation oracle based on shortest-path computations for each request. Thus, the resulting approximation guarantee for the construction is $O(\log n)$. Plugging the resulting algorithm into the framework of Section \ref{sec:FWY} yields the following result. \begin{thm} There exists a deterministic $O(\log^2 n)$-competitive algorithm for multicut with delay which runs in polynomial time. \end{thm} \paragraph{Node-Weighted Steiner Forest.} The following result is due to Bateni \textit{et al.}~\cite{DBLP:journals/siamcomp/BateniHL18}. \begin{pthm}[\cite{DBLP:journals/siamcomp/BateniHL18}] There exists a polynomial time, deterministic $O(\log n)$-approximation for node-weighted prize-collecting Steiner forest. \end{pthm} Plugging the algorithm of the previous theorem into the framework of Section \ref{sec:FWY} yields the following result. \begin{thm} There exists an $O(\log^2 n)$-competitive deterministic algorithm for node-weighted Steiner forest with delay which runs in polynomial time. \end{thm} \paragraph{Edge-Weighted Steiner Network.} The following result is due to Hajiaghayi and Nasri~\cite{DBLP:conf/latin/HajiaghayiN10}. \begin{pthm}[\cite{DBLP:conf/latin/HajiaghayiN10}] There exists a polynomial-time, deterministic $3$-approximation for EW prize-collecting Steiner network.
\end{pthm} Plugging the algorithm of the previous theorem into the framework of Section \ref{sec:FWY} yields the following result. \begin{thm} There exists an $O(\log n)$-competitive deterministic algorithm for EW Steiner network with delay which runs in polynomial time. \end{thm} \paragraph{Directed Steiner Tree.} The recent result of Grandoni \textit{et al.}~\cite{Grandoni:2019:OAA:3313276.3316349} for directed Steiner tree is based on an approximation algorithm for a problem called Group Steiner Tree on Trees with Dependency Constraint (GSTTD), which they show is equivalent to directed Steiner tree. Their algorithm for GSTTD is an approximation with respect to the optimal solution to a rather complex LP relaxation, which involves applying Sherali-Adams strengthening to a base relaxation for GSTTD. At the time of writing this paper, we could not find any treatment of the prize-collecting variant of directed Steiner tree. We conjecture that a construction similar to the one shown above would also apply to directed Steiner tree, yielding a prize-collecting algorithm with only a constant loss in approximation over the original algorithm of~\cite{Grandoni:2019:OAA:3313276.3316349}. While proving the existence of such a component is beyond the scope of this paper, we nonetheless state the resulting guarantee for directed Steiner tree with delay assuming that the component exists. \begin{thm} If there exists a $\gamma$-approximation for prize-collecting directed Steiner tree which runs in quasi-polynomial time, then there exists an $O(\gamma \log n)$-competitive algorithm for directed Steiner tree with delay which also runs in quasi-polynomial time. \end{thm} \subsection{Facility Location} The following result is due to Xu and Xu~\cite{DBLP:journals/jco/XuX09}. \begin{pthm}[\cite{DBLP:journals/jco/XuX09}] \label{thm:APY_FLY_XuAndXu} There exists a polynomial-time, deterministic $1.8526$-approximation for prize-collecting facility location.
\end{pthm} In this subsection we prove the following result. \begin{thm} \label{thm:APY_FLY_Competitiveness} There exists a deterministic $O(\log n)$-competitive algorithm for facility location with delay. \end{thm} As previously observed in the deadline case, the facility location problem does not conform to the $\text{\sc ND}$ structure, and thus the framework cannot be applied to facility location in a black-box fashion and still obtain $O(\log n)$ loss. In the deadline case, we showed that the framework of Section \ref{sec:FWD} could still be directly applied to facility location; the only necessary modification was in the analysis -- namely, the proof of Lemma \ref{lem:FWD_ChargeLowerBoundsOPT}. In facility location with delay, however, this is not the case -- a minor modification to the framework itself is required. The modification is simply to ensure that during any ongoing service, the investment counter of a pending request never surpasses the cost of connecting that request to an open facility. The modification consists of replacing the \textbf{foreach} loop of Line \ref{line:FWY_MakeInvestment} with the modification in Snippet \ref{alg:APY_FLY_Algorithm}. \begin{algorithm} \renewcommand{\algorithmcfname}{Snippet} \caption{\label{alg:APY_FLY_Algorithm} Facility Location Modification} Let $F$ be the set of facilities opened in $S$\; \ForEach{$q\in Q\backslash Q'_\lambda$}{ \eIf{$h_q + \pi_{t''}(q) \ge \min_{u\in F}\delta(u,q)$\label{Line:APY_FLY_ServiceIfCondition}}{ Set $h_q \gets \max(h_q, \min_{u\in F}\delta(u,q))$\; Set $Q'_\lambda \gets Q'_\lambda \cup \{q\}$\; Modify $S$ to also serve $q$ by connecting $q$ to $\arg \min_{u\in F} \delta(u,q)$\; }{ Set $h_q \gets h_q +\pi_{t''}(q)$\; Set $\ell_{q} \gets \ell_\lambda$\; } } \end{algorithm} As was the case in facility location with deadlines, Remark \ref{rem:APP_FLD_SolutionNature} applies to the nature of solutions in the facility location with delay algorithm.
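The guarantee provided by Snippet \ref{alg:APY_FLY_Algorithm} can be restated as the following invariant (a restatement for reference, not an additional algorithmic step): whenever the loop completes, every request $q$ that was not added to $Q'_\lambda$ satisfies
\[ h_q \le \min_{u\in F}\delta(u,q), \]
since otherwise the condition in Line \ref{Line:APY_FLY_ServiceIfCondition} would have added $q$ to $Q'_\lambda$. A property of this form is what the analysis below relies on.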
\subsubsection*{Analysis} We show that the application of the framework in Section \ref{sec:FWY}, with the modification of Snippet \ref{alg:APY_FLY_Algorithm}, to the approximation algorithm of Theorem \ref{thm:APY_FLY_XuAndXu} proves Theorem \ref{thm:APY_FLY_Competitiveness}. As in the deadline case, we would like to reprove Lemmas \ref{lem:FWY_ALGUpperBound}, \ref{lem:FWY_PrimaryLowerBoundsOPT} and \ref{lem:FWY_ChargeLowerBoundsOPT} for facility location with delay, which would prove the theorem. For Lemma \ref{lem:FWY_ALGUpperBound}, consider that the cost of serving additional requests in the snippet is bounded by the investment counters of those requests -- thus, losing a factor of $2$, we ignore this additional cost. The remaining argument is identical to the original proof of Lemma \ref{lem:FWY_ALGUpperBound}. Lemma \ref{lem:FWY_PrimaryLowerBoundsOPT} goes through without modification. It remains to prove Lemma \ref{lem:FWY_ChargeLowerBoundsOPT} for our case. As in the deadline case, the only part of the proof which needs to be modified is the local-charging proposition, which is Proposition \ref{prop:FWY_LocalCharge}. \begin{proof}[Proof of Proposition \ref{prop:FWY_LocalCharge} for facility location] We use the notation defined in the original proof of Proposition \ref{prop:FWY_LocalCharge}. The proof breaks down in the third subcase of case 2 -- that is, the case of a service $\lambda$ which forwarded past time $t^*$, such that $\ell_\lambda \ge \ell + \lceil \log |\mathcal{E}| \rceil +1$. Let $\Lambda^\gg$ be the collection of services in this subcase. We claim that \begin{equation} \label{eq:APY_FLY_SubcaseProof} \sum_{\lambda\in \Lambda^\gg} \text{\sc PCND}^*_{E_0^\lambda \gets 0}(Q_{\lambda \cap \lambda^*},\pi_{t_\lambda \to \tau_\lambda}) \le 2\cdot c^\text{\sc conn}(\lambda^*) \end{equation} where $c^\text{\sc conn}(\lambda^*) \le c(\lambda^*)$ is the connection cost incurred by the optimal solution in $\lambda^*$.
To show this, for every $\lambda \in \Lambda^\gg$ we define the following solution $\mathcal{S}$ for $\text{\sc PCND}_{E_0^\lambda \gets 0}(Q_{\lambda \cap \lambda^*},\pi_{t_\lambda \to \tau_\lambda})$: \begin{enumerate} \item Open facilities at all nodes in $E_0^\lambda$, at cost $0$. \item For every request $q \in Q_{\lambda \cap \lambda^*}$: \begin{enumerate} \item If $\lambda$ is the last service in $\Lambda^\gg$ for which $q$ is eligible, connect $q$ to the closest facility in $E_0^\lambda$. \item Otherwise, pay the penalty $\pi_{t_\lambda \to \tau_\lambda}(q)$. \end{enumerate} \end{enumerate} This solution has no opening cost, only connection and penalty costs. We now account for the costs of these solutions request by request, attributing to each request $q\in Q_{\lambda^*}$ the connection and penalty costs incurred for it by the solutions. Fix a request $q\in Q_{\lambda^*}$, and denote by $\lambda_1, \cdots, \lambda_l \in \Lambda^\gg$ the services for which $q$ was eligible, ordered by time of occurrence. For every $i \in [l]$, denote by $\mathcal{S}_i$ the solution corresponding to $\lambda_i$. Denote by $E^*$ the set of facilities opened in $\lambda^*$ and observe that, as in the original proof, for every $\lambda_i$ for $i\in [l]$ we have that $E^*\subseteq E_0^{\lambda_i}$. The total cost attributed to $q$ is bounded as follows. \textbf{Penalty:} the penalty cost $\pi_{t_{\lambda_i} \to \tau_{\lambda_i}}(q)$ is paid in $\mathcal{S}_i$ for every $i$ such that $\lambda_i$ does not serve $q$. The services $\lambda_i$ in which the solution pays the penalty for $q$ do not serve $q$; observe that in such services $h_q$ increases by $\pi_{t_{\lambda_i} \to \tau_{\lambda_i}}(q)$. After each such $\lambda_i$, we also have that $h_q \le \min_{v\in E_0^{\lambda_i}} \delta(v,q)$ -- otherwise, the \textbf{if} condition in Line \ref{Line:APY_FLY_ServiceIfCondition} in the snippet would force $q$ to be served, in contradiction. In particular, $h_q \le \min_{v\in E^*} \delta(v,q)$ after each such $\lambda_i$.
This implies that the sum of penalty costs for $q$ is at most $\min_{v\in E^*} \delta(v,q)$, which is the connection cost of $q$ in $\lambda^*$. \textbf{connection:} There exists at most one index $i\in [l]$ such that $\mathcal{S}_i$ connects $q$. Using again the fact that $E^*\subseteq E_0^{\lambda_i}$, the connection cost of request $q$ in $\mathcal{S}_i$ is at most the connection cost of $q$ in $\lambda^*$. This completes the proof of Equation \ref{eq:APY_FLY_SubcaseProof}. Thus, we have that the contribution from services $\lambda \in \Lambda^\gg$ to the left-hand side of Equation \ref{eq:FWY_ChargePerOptimalService} is at most $2\cdot c(\lambda^*)$, completing the proof of the proposition. \end{proof} \subsection{Exponential-Time Algorithms} As in the deadline case, one can use the framework of Section \ref{sec:FWY} to obtain the following information-theoretic upper bound on competitiveness. \begin{thm} There exists an $O(\log |\mathcal{E}|)$-competitive algorithm for $\text{\sc ND}$ with delay (with no guarantees on running time). In particular, there exists an $O(\log n)$-competitive algorithm for all problems considered in this paper, where $n$ is the number of nodes in the input graph. \end{thm} \section{Request-Based Regime} \label{sec:RBF} In problems with deadlines or with delay, the usual regime is that the number of requests is unbounded, and potentially much larger than the size of the underlying universe (e.g., the number of nodes in the graph). This is the regime we have addressed in this paper thus far. However, for offline network design, the opposite regime is typically considered -- i.e., the universe is large, and the number of requests is much smaller. For such a regime, it is preferable to give guarantees in terms of the number of requests $k$. In this section, we obtain the best of both worlds, namely a guarantee in terms of the minimum of the number of requests and the size of the universe. The following theorem states the result of this section.
\begin{thm} \label{thm:RBF_Competitiveness} If there exists a deterministic (randomized) $\gamma$-approximation algorithm for $\text{\sc ND}$, then there exists an $O(\gamma\log (\min\{k,|\mathcal{E}|\}))$-competitive deterministic (randomized) algorithm for $\text{\sc ND}$ with deadlines, which runs in polynomial time. \end{thm} \subsection{Proof of Theorem \ref{thm:RBF_Competitiveness}} To prove Theorem \ref{thm:RBF_Competitiveness}, we first show how to modify the framework of Section \ref{sec:FWD} to be $O(\gamma \log k)$-competitive, where $\gamma$ is the approximation ratio of the encapsulated approximation algorithm. We then describe a simple way to combine this modified framework with the original framework of Section \ref{sec:FWD} to prove Theorem \ref{thm:RBF_Competitiveness}. \subsubsection*{Modified $O(\gamma \log k)$-Competitive Framework} We describe the modification needed in the framework of Section \ref{sec:FWD} to achieve $O(\gamma \log k)$-competitiveness. For the sake of describing the framework, we assume that the number of requests $k$ is known in advance (this assumption is later relaxed using standard doubling techniques). The single modification required is in the definition of $E_0$, as defined in $\UponDeadline$. Instead of adding all cheap elements (those that cost at most $\frac{2^{\ell_\lambda}}{|\mathcal{E}|}$), we iterate over pending requests that are cheap to serve. Namely, the new framework is obtained by replacing Line \ref{line:FWD_BuyCheapEdges} with Snippet \ref{alg:RBF_E0Redefinition}, which defines $E_0$ in a different way.
\begin{algorithm} \renewcommand{\algorithmcfname}{Snippet} \caption{\label{alg:RBF_E0Redefinition} Facility Location Modification} \While{there exists a pending request $q$ which is not served by $E_0$, such that $c(S_q)\le \frac{\gamma\cdot 2^{\ell_\lambda}}{k}$}{ Set $E_0 \gets E_0 \cup S_q$ } \end{algorithm} \subsubsection*{Analysis} The following theorem states the competitiveness of the modified framework. \begin{thm} \label{thm:RBF_IntermediaryCompetitiveness} The framework of Section \ref{sec:FWD}, when modified with Snippet \ref{alg:RBF_E0Redefinition}, is $O(\gamma \log k)$-competitive. \end{thm} The proof of Theorem \ref{thm:RBF_IntermediaryCompetitiveness} is very similar to the proof of Theorem \ref{thm:FWD_Competitiveness}. Lemma \ref{lem:FWD_ALGUpperBound} goes through in an almost identical way -- it is enough to notice that the cost of $E_0$ as defined in Snippet \ref{alg:RBF_E0Redefinition} never exceeds $\gamma \cdot 2^{\ell_\lambda}$. Lemma \ref{lem:FWD_PrimaryLowerBoundsOPT} also goes through in an identical manner. It remains to prove the following analogue to Lemma \ref{lem:FWD_ChargeLowerBoundsOPT}. \begin{lem}[Analogue of Lemma \ref{lem:FWD_ChargeLowerBoundsOPT}] \label{lem:RBF_ChargeLowerBoundsOPT} $\sum_{\lambda\in \Lambda^\circ} 2^{\ell_\lambda} \le O(\log k) \cdot \text{\sc opt}$ \end{lem} To prove Lemma \ref{lem:RBF_ChargeLowerBoundsOPT}, we only need to prove the following analogue of Proposition \ref{prop:FWD_LocalCharge}. The proof of Lemma \ref{lem:RBF_ChargeLowerBoundsOPT} from this analogue is identical to the proof of Lemma \ref{lem:FWD_ChargeLowerBoundsOPT} from Proposition \ref{prop:FWD_LocalCharge}. 
\begin{prop}[Analogue of Proposition \ref{prop:FWD_LocalCharge}] There exists a constant $\beta$ such that for every optimal service $\lambda^* \in \Lambda^*$, we have that \begin{equation} \label{eq:RBF_ChargePerOptimalService} \sum_{\lambda\in \Lambda^\circ}\min\{2^{\ell_\lambda},\text{\sc ND}^*_{E_0^\lambda\gets 0}(Q_{\lambda \cap \lambda^*})\} \le \beta\log k \cdot c(\lambda^*) \end{equation} \end{prop} \begin{proof} The proof is very similar to the proof of Proposition \ref{prop:FWD_LocalCharge}. Fix an optimal service $\lambda^* \in \Lambda^*$. Denote by $\Lambda'\subseteq \Lambda^\circ$ the subset of charged services made by the algorithm in which a request from $Q_{\lambda^*}$ is served (other services, for which $Q_{\lambda \cap \lambda^*}=\emptyset$, need not be considered). Observe that $Q_{\lambda^*}$ is an intersecting set, as the optimal solution served $Q_{\lambda^*}$ at a single point in time. Lemma \ref{lem:FWD_UniqueClass} implies that for every level $j$, there exists at most one $j$-level service in $\Lambda'$. Define $\ell = \lfloor \log (c(\lambda^*)) \rfloor$. Now, consider the following cases for a service $\lambda\in \Lambda'$: \begin{enumerate} \item $\ell_\lambda \le \ell$. Each such $\lambda$ contributes at most $2^{\ell_\lambda}$ to the left-hand side of Equation \ref{eq:RBF_ChargePerOptimalService}. Summing over at most one service from each level yields a geometric sum which is at most $2^{\ell +1} \le 2\cdot c(\lambda^*)$. \item $\ell < \ell_\lambda < \ell + \lceil \log k \rceil+ 1$. For such $\lambda$, observe that $\min\{2^{\ell_\lambda}, \text{\sc ND}^*_{E_0^\lambda \gets 0}(Q_{\lambda \cap \lambda^*})\} \le \text{\sc ND}^*(Q_\lambda) \le c(\lambda^*)$. Summing over at most a single service from each level, the total contribution to the left-hand side of Equation \ref{eq:RBF_ChargePerOptimalService} from these levels is at most $\lceil \log k \rceil\cdot c(\lambda^*)$. \item $\ell_\lambda \ge \ell + \lceil \log k \rceil +1$.
Observe that $\min\{2^{\ell_\lambda}, \text{\sc ND}^*_{E_0^\lambda \gets 0}(Q_{\lambda \cap \lambda^*})\} \le \text{\sc ND}^*_{E_0^\lambda \gets 0}(Q_{\lambda \cap \lambda^*})$. We now claim that $\text{\sc ND}^*_{E_0^\lambda \gets 0}(Q_{\lambda \cap \lambda^*}) =0$, which implies that the total contribution from these levels to the left-hand side of Equation \ref{eq:RBF_ChargePerOptimalService} is $0$. Indeed, consider that $\text{\sc ND}^*(\{q\}) \le c(\lambda^*)$ for every request $q\in Q_{\lambda^*}$ (since $\lambda^*$ is itself a feasible solution). If, in addition, we have that $q\in Q_\lambda$, then $q$ was pending immediately before $\lambda$. From the approximation guarantee for $\text{\sc ND}$, we have that $c(S_q) \le \gamma \cdot \text{\sc ND}^*(\{q\}) \le \gamma \cdot c(\lambda^*) \le \gamma \cdot 2^{\ell+1}$. Thus, since $2^{\ell_\lambda} \ge 2^{\ell+1}\cdot k$, Snippet \ref{alg:RBF_E0Redefinition} guarantees that $E_0^\lambda$ serves $q$. Since this holds for every $q\in Q_{\lambda \cap \lambda^*}$, we have that $\text{\sc ND}^*_{E_0^\lambda \gets 0}(Q_{\lambda \cap \lambda^*})=0$. \end{enumerate} Summing over the contributions from each level completes the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:RBF_IntermediaryCompetitiveness}] The theorem follows immediately from Lemmas \ref{lem:FWD_ALGUpperBound}, \ref{lem:FWD_PrimaryLowerBoundsOPT} and \ref{lem:RBF_ChargeLowerBoundsOPT}. The analysis of the running time remains the same. \end{proof} \subsubsection*{Proof of Theorem \ref{thm:RBF_Competitiveness}} First, we describe the doubling technique we use to relax the assumption that $k$ is known to the algorithm. We do this by guessing a value $\hat{k}$ for the number of requests -- initially a constant -- and running the framework of Theorem \ref{thm:RBF_IntermediaryCompetitiveness} for that value.
When the number of requests exceeds $\hat{k}$, we send all new requests to a new instance of the algorithm (which is run in parallel to the previous instances), in which the guessed number of requests is $\hat{k}^2$. We then set $\hat{k}\gets \hat{k}^2$. The cost of the $i$'th instance is at most $\gamma \log \hat{k}_i \cdot \text{\sc opt}$, where $\hat{k}_i$ is the value of $\hat{k}$ used by the $i$'th instance. Consider that the final instance is that in which $\hat{k} \ge k$, and that for this instance we have $\hat{k}\le k^2$ and thus $\log \hat{k} \le 2\log k$. Since $\log \hat{k}$ grows by a factor of $2$ with each iteration, we have that the total cost of the algorithm is at most $4\gamma \log k\cdot \text{\sc opt}$, as required. To prove Theorem \ref{thm:RBF_Competitiveness}, we modify this by stopping the doubling process earlier: when $\hat{k}$ exceeds $|\mathcal{E}|$, we start a new instance of the original framework of Section \ref{sec:FWD}, and send all new requests to that instance. This is easily seen to achieve the desired competitiveness bound. \paragraph{Extension to Delay.} The modifications seen in this section for deadlines can also be applied to the delay framework of Section \ref{sec:FWY}, achieving a guarantee identical to that of Theorem \ref{thm:RBF_Competitiveness}. However, as is the case in the original delay framework, we cannot allow a pending request which is not eligible for the current service to be served by this service -- otherwise, Proposition \ref{prop:FWY_DelayBoundedByCounters} would no longer hold, as the residual delay of an ineligible request might be nonzero. This yields the following result. \begin{thm} \label{thm:RBF_DelayCompetitiveness} If there exists a deterministic (randomized) $\gamma$-approximation algorithm for $\text{\sc PCND}$, then there exists an $O(\gamma\log (\min\{k,|\mathcal{E}|\}))$-competitive deterministic (randomized) algorithm for $\text{\sc ND}$ with delay, which runs in polynomial time.
\end{thm} \subsection{Applications} We can apply this framework to the network design problems which conform to the structure of $\text{\sc ND}$. In Section \ref{sec:APP}, we chose to quote the approximation ratios of all offline approximation algorithms in terms of $n$ instead of $k$, since we were interested in a guarantee in $n$ (the reader can verify that the original guarantees of these algorithms are indeed in terms of $k$). In this section, we are interested in a guarantee in $\min\{k,n\}$. We thus replace $n$ with $\min\{n,k\}$ in the approximation ratios of all offline approximation algorithms stated in Section \ref{sec:APP}. Plugging those approximation algorithms into the framework, Theorem \ref{thm:RBF_Competitiveness} yields the following results: \begin{table}[h!] \begin{center} \caption{Framework Applications} \label{tab:RBF_ResultsTable} \begin{tabular}{l|c} Edge-weighted Steiner forest with deadlines & $O(\log \min\{k,n\})$\\ Multicut & $O(\log^2 \min\{k,n\})$ \\ Edge-weighted Steiner network & $O(\log \min\{k,n\})$ \\ Node-weighted Steiner forest &$O(\log^2 \min\{k,n\})$ \\ Directed Steiner tree & $O\left(\frac{\log^3 \min\{k,n\}}{\log \log \min\{k,n\}}\right)$ \\ \end{tabular} \end{center} \end{table} \section{Conclusions and Open Problems} This paper presented frameworks for network design problems with deadlines or delay, which encapsulate approximation algorithms for the offline network design problem, with competitiveness which is a logarithmic factor away from the approximation ratio of the underlying approximation algorithm. The running time of these frameworks has a polynomial overhead over the running time of the encapsulated approximation algorithm. In particular, in the formal online model with unbounded computation, this provides $O(\log n)$ upper bounds (with $n$ the number of vertices in the graph), when the offline problem is solved exactly. 
For some network design problems, as seen in Appendix \ref{sec:LB}, this is relatively tight -- that is, an information-theoretic lower bound of $\Omega(\sqrt{\log n})$ exists. Whether there exists an improved framework which can bridge this gap remains open. For the remaining network design problems, the gap is still large, as no non-constant lower bound is known. This raises the possibility of designing a framework which works for a restricted class of network design problems (which excludes node-weighted Steiner tree and directed Steiner tree), but yields constant competitiveness results for this restricted class. Either designing such a framework, or showing lower bounds, is an open problem. An additional open problem is to design a good approximation for prize-collecting directed Steiner tree. Applying Theorem \ref{thm:FWY_Competitiveness} to such a result would yield a competitive algorithm for directed Steiner tree with delay. \bibliographystyle{plain}
\section{Concluding Remarks} We studied the problem of cyber attack detection in water treatment networks. To this end, we proposed a structured detection framework to integrate spatio-temporal patterns, deep representation learning, and one-class detection. Specifically, we first partitioned the sensing data of WTNs into a sequence of fixed-size time segments. We then built a deep spatiotemporal representation learning approach to preserve the spatio-temporal patterns of attacks and normal behaviors. The representation learning approach includes two modules: (1) a temporal embedding module, which preserves the temporal dependencies within a time segment; we then constructed the spatiotemporal graphs by mapping the temporal embedding to the WTN as node attributes. (2) a spatial embedding module, which learns the fused spatio-temporal embedding from the spatiotemporal graphs. In addition, we developed an integrated one-class detection method with an improved pairwise kernel. The new kernel is capable of augmenting the difference between normal and attack patterns via the pairwise similarity among deep embedding vectors of system behaviors. Finally, we conducted extensive experiments to illustrate the effectiveness of our method: STOD achieves an accuracy of $91.65\%$, with average improvement ratios of $82.78\%$ and $22.96\%$ with respect to F1 and AUC, compared with the baseline methods. \section{Experimental Results} We conduct experiments to answer the following research questions: \begin{enumerate}[(1)] \item Does our proposed outlier detection framework (STOD) outperform the existing methods? \item Is the spatio-temporal representation learning component of STOD necessary for improving detection performance? \item Is the proposed pairwise kernel better than other traditional kernels for cyber attack detection? \item How much time do our method and other methods cost?
\end{enumerate} \subsection{Data Description} We used the secure water treatment system (SWAT) dataset from the Singapore University of Technology and Design for our study. The SWAT project built a water treatment system and a sensor network to monitor and track the state of the system. An attack model was then constructed to mimic real-world cyber attacks on this kind of system. The cyber attacks and the sensory data of the system were collected to form the SWAT dataset. Table \ref{con:datades} shows some important statistics of the SWAT dataset. Specifically, the SWAT dataset includes a normal set (no cyber attacks) and an attack set (with cyber attacks). The time period of the normal data is from 22 December 2015 to 28 December 2015. The time period of the attack data is from 28 December 2015 to 01 January 2016, and 01 February 2016. There is no time period overlap between the normal data and the attack data on 28 December 2015. It is difficult to identify more water treatment network datasets. In this study, we focus on validating our method using this dataset. \begin{table}[htbp] \small \centering \setlength{\abovecaptionskip}{0.cm} \caption{Statistics of the SWAT data set} \setlength{\tabcolsep}{1mm}{ \begin{tabular}{cccccc} \toprule Data Type & Sensor Count & Total Items & Attack Items & Pos/Neg \\ \midrule Normal & 51 & 496800 & 0 & - \\ Attack & 51 & 449919 & 53900 & 7:1 \\ \bottomrule \end{tabular}} \label{con:datades} \vspace{-0.3cm} \end{table} \subsection{Evaluation Metrics} We evaluate the performance of our method in terms of four metrics. Given a test set, a detection model predicts a set of binary labels (1: attack; 0: normal). Comparing the predicted labels with the ground-truth labels, we let $tp$, $tn$, $fp$, $fn$ be the sizes of the true positive, true negative, false positive, and false negative sets, respectively.
\begin{enumerate}[(1)] \item \textbf{Accuracy:} is given by: \begin{equation} Accuracy = \frac{tp+tn}{tp+tn+fp+fn} \end{equation} \item \textbf{Precision:} is given by: \begin{equation} Precision = \frac{tp}{tp+fp} \end{equation} \item \textbf{F-measure:} is the harmonic mean of precision and recall, where $Recall = \frac{tp}{tp+fn}$; it is given by: \begin{equation} \text{F-measure} = \frac{2\times Precision \times Recall}{Precision+Recall} \end{equation} \item \textbf{AUC:} is the area under the ROC curve. It shows the capability of a model to distinguish between two classes. \end{enumerate} \begin{figure*}[!thb] \setlength{\abovecaptionskip}{-8pt} \centering \subfigure{\label{fig:accuracy}\includegraphics[width=0.24\linewidth]{{image/Accuracy.pdf}}} \subfigure{\label{fig:precision}\includegraphics[width=0.24\linewidth]{{image/Precision.pdf}}} \subfigure{\label{fig:F1}\includegraphics[width=0.24\linewidth]{{image/F1.pdf}}} \subfigure{\label{fig:AUC}\includegraphics[width=0.24\linewidth]{{image/AUC.pdf}}} \caption{Comparison of different models in terms of Accuracy, Precision, F-measure, and AUC.} \label{fig:overall} \end{figure*} \begin{figure*}[!thb] \setlength{\abovecaptionskip}{-8pt} \centering \subfigure{\label{fig:stod_accuracy}\includegraphics[width=0.24\linewidth]{{image/STOD_accuracy.pdf}}} \subfigure{\label{fig:stod_precision}\includegraphics[width=0.24\linewidth]{{image/STOD_Precision.pdf}}} \subfigure{\label{fig:stod_F1}\includegraphics[width=0.24\linewidth]{{image/STOD_Fmeasure.pdf}}} \subfigure{\label{fig:stod_AUC}\includegraphics[width=0.24\linewidth]{{image/STOD_AUC.pdf}}} \caption{Comparison of different phases of the representation learning module based on Accuracy, Precision, F-measure, and AUC.} \label{fig:stod} \vspace{-0.5cm} \end{figure*} \begin{comment} \begin{table*}[!ht] \centering \caption{Comparison of different models in terms of Accuracy, Precision, Recall, F-measure and AUC .} \begin{tabular}{c|cc|cc|cc|cc|cc} \hline & Accuracy & Outperform & Precision & Outperform & Recall
& Outperform & F-measure & Outperform & AUC & Outperform\\ \hline STOD & \textbf{0.9165} & - & 0.6479 & - & 0.6790 & - & \textbf{0.6631} & - & \textbf{0.8141} & - \\ DeepSVDD & 0.8607 & $+6.5\%$ & 0.4497 & $+44.1\%$ & 0.6759 & $+0.5\%$ & 0.5401 & $+22.8\%$ & 0.7000 & $+16.3\%$ \\ GANomaly & 0.8483 & $+8.0\%$ & 0.4212 & $+53.8\%$ & 0.6787 & $+0.04\%$ & 0.5198 & $+27.6\%$ & 0.6864 & $+18.6\%$ \\ LODA & 0.8223 & $+11.5\%$ & \textbf{0.7525} & $-13.9\%$ & 0.3813 & $+78.1\%$ & 0.5061 & $+31.0\%$ & 0.6710 & $+21.3\%$ \\ Isolation-Forest & 0.7977 & $+14.9\%$ & 0.3542 & $+82.9\%$ & 0.8167 & $-16.9\%$ & 0.4942 & $+34.2\%$ & 0.8000 & $+1.8\%$ \\ LOF & 0.3876 & $+136.5\%$ & 0.1581 & $+309.8\%$ & 0.9388 & $-27.7\%$ & 0.2706 & $+145.0\%$ & 0.6300 & $+29.2\%$ \\ KNN & 0.3447 & $+165.9\%$ & 0.1524 & $+325.1\%$ & 0.9677 & $-29.8\%$ & 0.2637 & $+151.4\%$ & 0.6100 & $+33.1\%$ \\ ABOD & 0.2817 & $+225.3\%$ & 0.1420 & $+356.3\%$ & \textbf{0.9793} & $-30.7\%$ & 0.2481 & $+167.3\%$ & 0.5800 & $+40.4\%$ \\ \hline \end{tabular} \label{tab:overall} \end{table*} \begin{table*}[!ht] \centering \caption{Comparison of different phases of representation learning based on Accuracy, Precision, Recall, F-measure , and AUC.} \begin{tabular}{c|cc|cc|cc|cc|cc} \hline & Accuracy & Outperform & Precision & Outperform & Recall & Outperform & F-measure & Outperform & AUC & Outperform\\ \hline STOD & \textbf{0.9165} & - & \textbf{0.6479} & - & 0.6790 & - & \textbf{0.6631} & - & \textbf{0.8141} & - \\ STODP1 & 0.1210 & $+657.4\%$ & 0.1210 & $+435.4\%$ & \textbf{1.0000} & $-32.1\%$ & 0.2159 & $+207.1\%$ & 0.5000 & $+62.8\%$ \\ STODP2 & 0.7173 & $+27.8\%$ & 0.2568 & $+152.3\%$ & 0.7052 & $-3.7\%$ & 0.3765 & $+76.1\%$ & 0.7121 & $+14.3\%$ \\ STODP3 & 0.4431 & $+106.8\%$ & 0.0912 & $+610.4\%$ & 0.4020 & $+68.9\%$ & 0.1487 & $+345.9\%$ & 0.4254 & $+91.4\%$ \\ \hline \end{tabular} \label{tab:diff_phase} \end{table*} \end{comment} \subsection{Baseline Algorithms} We compare the performances of our method (STOD) 
against the following ten baseline algorithms. \begin{enumerate}[(1)] \item DeepSVDD \cite{ruff2018deep}: extends the classic SVDD algorithm into a deep learning version. It utilizes a neural network to find the hyper-sphere of minimum volume that wraps the normal data. If a data sample falls inside the hyper-sphere, DeepSVDD classifies the sample as normal, and as attack otherwise. In the experiments, we set the dimensions of the spatio-temporal embedding $\mathbf{z}_i$ to $28 \times 28$. \item GANomaly \cite{akcay2018ganomaly}: is based on the GAN framework. It develops a new version of the generator by using the encoder-decoder-encoder structure. The algorithm regards the difference between the embedding of the first encoder and the embedding of the second encoder as the anomaly score to distinguish normal from abnormal. In the experiments, we set the dimension of the spatio-temporal embedding vector $\mathbf{z}_i$ to $28 \times 28$. \item LODA \cite{pevny2016loda}: is an ensemble outlier detection model. It combines a series of weak anomaly detectors to produce a strong detector. In addition, the model fits real-time data streams and is resistant to missing values in the data set. In the experiments, we fed the learned representations into LODA for detection. \item Isolation-Forest \cite{liu2008isolation}. Isolation-Forest isolates observations by randomly selecting a feature and then randomly selecting a split value between the maximum and minimum values of the selected feature. In the experiments, we input the spatio-temporal embedding vector $\mathbf{z}_i$ into Isolation-Forest, and set the number of estimators = 100, max sample numbers = 256. \item LOF \cite{breunig2000lof}. The principle of LOF is to measure the local density of data samples. If a data sample has low local density, the sample is an outlier. Otherwise, the sample is a normal sample.
In the experiments, we input the spatio-temporal embedding vector $\mathbf{z}_i$ into LOF and set the number of neighbors = 20; the distance metric for finding neighbors is the Euclidean distance. \item KNN \cite{soucy2001simple}. KNN selects the $k$ nearest neighbors of a data sample based on a distance metric, and calculates the anomaly score of the data sample according to the anomaly situation of the $k$ neighbors. In the experiments, we input the spatio-temporal embedding vector $\mathbf{z}_i$ into KNN, and set the number of neighbors = 5; the adopted distance metric is the Euclidean distance. \item ABOD \cite{kriegel2008angle}. ABOD uses the angle as a more robust measure to detect outliers. If many neighbors of a sample are located in the same direction relative to the sample, it is an outlier; otherwise, it is a normal sample. In the experiments, we input the spatio-temporal embedding $\mathbf{z}_i$ into ABOD and set $k$ = 10. The angle metric is the cosine value. \item STODP1. We partition the sensing data into non-overlapping segments. The global mean pooling technique is then applied to fuse the segments of different sensors into an averaged feature vector. We feed the fused feature vector into OC-SVM for outlier detection. The kernel of OC-SVM is defined in Equation \ref{con:newk}. \item STODP2. We apply global mean pooling to the temporal embedding vectors generated by Section 3.A to obtain a global feature vector of the WTN, which is fed into OC-SVM for outlier detection. In addition, the kernel of OC-SVM is our proposed kernel function defined in Equation \ref{con:newk}. \item STODP3. In order to study the effect of Seq2Seq, we remove the Seq2Seq module from our framework pipeline. The temporal segments of different sensors are organized as a graph set, which is input into the graph embedding module to obtain the final embedding. Finally, the embedding is input into OC-SVM for outlier detection.
The kernel of the OC-SVM is defined in Equation \ref{con:newk}. \end{enumerate} In the experiments, the spatio-temporal representation learning phase of our framework is used to preserve the spatio-temporal patterns and data characteristics in feature learning. The one-class outlier detection phase of our framework is used to detect the cyber attack status of the water treatment system based on the spatio-temporal representation. We only use normal data to train our model. After the training phase, our model has the capability to detect the status of the testing data set, which contains both normal and attack data. All the evaluations are performed on an x64 machine with an Intel i9-9920X 3.50GHz CPU and 128GB RAM. The operating system is Ubuntu 18.04. \subsection{Overall Performances} We compare our method with the baseline models in terms of accuracy, precision, f-measure and AUC. Figure \ref{fig:overall} shows that the average performance of our method (STOD) is the best in terms of accuracy, f-measure and AUC; our method ranks second in terms of precision, compared with the other baseline models. A potential interpretation of this observation is that STOD captures the temporal effects (\textbf{delayed}, \textbf{continued}) and the spatial effect (\textbf{cascading}) of cyber attacks through its spatio-temporal representation learning phase in a balanced way. As STOD captures more intrinsic features of cyber attacks, the model not only finds more attack samples but also makes fewer mistakes on normal samples. Thus, the distinguishing ability of STOD is greatly improved. On a single evaluation metric, however, STOD may be worse than some baselines. Overall, STOD outperforms the baseline models with respect to Accuracy, F-measure and AUC, which signifies that our detection framework has the best attack detection ability.
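The four evaluation metrics used throughout this comparison can be computed directly from the confusion counts defined earlier. The sketch below is a minimal, self-contained illustration (not the authors' evaluation code); the AUC function uses the standard rank-statistic interpretation of the area under the ROC curve, applied to anomaly scores:

```python
def confusion_counts(y_true, y_pred):
    """Tally tp, tn, fp, fn for binary labels (1: attack, 0: normal)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn


def detection_metrics(y_true, y_pred):
    """Accuracy, Precision, and F-measure as defined in the paper."""
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return accuracy, precision, f_measure


def auc_from_scores(y_true, scores):
    """AUC as the probability that a random attack sample receives a
    higher anomaly score than a random normal sample (ties count 0.5)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))


# Example with four test samples (1: attack, 0: normal).
print(detection_metrics([1, 1, 0, 0], [1, 0, 1, 0]))  # -> (0.5, 0.5, 0.5)
```

In practice, a library routine such as scikit-learn's `roc_auc_score` would be used for AUC; the hand-rolled version above only serves to make the definition concrete.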
\begin{comment} {\color{red} A potential interpretation of such observation is that our method learns the characteristics of both normal and attack patterns in a balanced way. Thus, our method can capture more intrinsic rules of cyber attacks, which makes STOD not only find more attack samples but also make fewer mistakes on normal samples. This balanced learning way causes the STOD does not rank $1_{st}$ in terms of Precision and Recall respectively, but the comprehensive metric F-Measure is the most excellent. Other baseline models can not achieve this performance due to the limitations that they can not capture more features in data. Thus STOD model ranks $1_{st}$ with respect to Accuracy, F-measure and AUC, which signifies our framework owns the best cyber attack detection ability than other baselines. } \end{comment} \begin{figure*}[!thb] \setlength{\abovecaptionskip}{-8pt} \centering \subfigure{\label{fig:kernel_accuracy}\includegraphics[width=0.24\linewidth]{{image/kernel_acc.pdf}}} \subfigure{\label{fig:kernel_precision}\includegraphics[width=0.24\linewidth]{{image/kernel_pre.pdf}}} \subfigure{\label{fig:kernel_F1}\includegraphics[width=0.24\linewidth]{{image/kernel_F1.pdf}}} \subfigure{\label{fig:kernel_AUC}\includegraphics[width=0.24\linewidth]{{image/kernel_auc.pdf}}} \caption{Comparison of different kernels with respect to Accuracy, Precision, F-measure, and AUC.} \label{fig:kernel_func_exp} \vspace{-0.5cm} \end{figure*} Another observation is that the performances of LOF, ABOD, and KNN are much worse than those of the other models. The possible reason is that these models exploit distance- or angle-based assessment strategies. These geometric measurements deteriorate when data are projected into high-dimensional space, due to the ``curse of dimensionality''. Thus, these models cannot achieve excellent performances.
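The deterioration of distance-based measures in high dimensions can be illustrated numerically: as dimensionality grows, the gap between the smallest and largest distances shrinks relative to their magnitude, so nearest-neighbor style detectors lose discriminating power. The following standalone sketch (an illustration of the general phenomenon, not tied to the SWAT data) measures this relative contrast on uniform random points:

```python
import math
import random


def relative_contrast(dim, n_points=200, seed=0):
    """Ratio (d_max - d_min) / d_min of Euclidean distances from the
    origin to uniform random points in [0, 1]^dim; the ratio shrinks
    toward 0 as `dim` grows (distance concentration)."""
    rng = random.Random(seed)
    dists = []
    for _ in range(n_points):
        point = [rng.random() for _ in range(dim)]
        dists.append(math.sqrt(sum(x * x for x in point)))
    return (max(dists) - min(dists)) / min(dists)


# The contrast collapses with dimension, which is why detectors built on
# raw distances or angles (LOF, KNN, ABOD) struggle on high-dimensional
# embeddings.
for d in (2, 20, 200, 2000):
    print(d, round(relative_contrast(d), 3))
```

The exact numbers depend on the random seed, but the monotone collapse of the contrast is robust.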
\subsection{Study of Representation Learning} The representation learning phase of our framework includes: (1) partitioning sensor data streams into segments; (2) modeling the temporal dependencies with Seq2Seq; (3) modeling the spatial dependencies with graph embedding. What role does each of these three steps play in our framework? We iteratively remove each of the three steps to obtain three variants, namely STODP1, STODP2, and STODP3. We then compare the three variants with our original framework to examine the importance of each removed step for detection performance. Figure \ref{fig:stod} shows the experimental results of STOD, STODP1, STODP2, and STODP3, which clearly show that STOD outperforms STODP1, STODP2, and STODP3 in terms of accuracy, precision, f-measure, and AUC by a large margin. A reasonable explanation of this phenomenon is that attack patterns are spatially and temporally structured; thus, when more hidden spatio-temporal patterns are modeled, the method becomes more discriminative. The results validate that the three steps (segmentation, temporal, spatial) of the representation learning phase are critical for attack pattern characterization. \subsection{Study of Pairwise Kernel Function} The kernel function is vital for the SVM-based algorithm family. An effective kernel function can map challenging data samples into a high-dimensional space and make these data samples more separable in the detection task. We design experiments to validate the improved performance of our pairwise kernel function by comparing it with other baseline kernel functions. Specifically, the baseline kernels are as follows: \begin{enumerate}[(1)] \item \textbf{linear}. This kernel is a linear function. The linear kernel has a limited number of parameters, so it is fast to compute. The dimension of the new feature space is similar to that of the original space. \item \textbf{poly}.
This kernel is a polynomial function. It has more parameters than the linear kernel and maps data samples into a high-dimensional space. \item \textbf{rbf}. This kernel is a Gaussian function, which is non-linear. It exhibits excellent performance in many common situations. \item \textbf{sigmoid}. This kernel is a sigmoid function. When an SVM utilizes this function to model data samples, the effect is similar to using a multi-layer perceptron. \end{enumerate} \begin{comment} \begin{figure}[htbp] \centering \setlength{\abovecaptionskip}{-0.03cm} \includegraphics[width=0.35\textwidth]{image/kernel_func.pdf} \caption{Comparison of different kernels with respect to Accuracy, Precision, F-measure, and AUC.} \label{fig:kernel_func_exp} \end{figure} \end{comment} Figure \ref{fig:kernel_func_exp} shows a comparison between our kernel and the baseline kernels with respect to all evaluation metrics. We observe that our kernel shows significant improvement over the baseline kernels in terms of Accuracy, Precision, F-measure, and AUC. This indicates that our kernel can effectively augment the attack patterns in the original data and maximize the difference between normal and attack patterns by mapping the original data samples into a high-dimensional feature space. This experiment validates the superiority of our pairwise kernel function. \subsection{Study of Time Costs} We aim to study the time costs of training and testing for different models. Specifically, we divided the dataset into six non-overlapping folds. We then used cross-validation to evaluate the time costs of different models. \begin{figure}[htbp] \setlength{\abovecaptionskip}{-0.1cm} \centering \includegraphics[width=0.4\textwidth]{image/fit_time} \caption{Comparison of different models based on training time cost} \label{fig:fit_time} \vspace{-0.4cm} \end{figure} Figure \ref{fig:fit_time} shows the comparison of training time costs among different models.
We find that the training time cost of each model is relatively stable across folds. An obvious observation is that GANomaly has the largest training time cost among all models, because its encoder-decoder-encoder network architecture is time-consuming to train. In addition, the training time of STOD is slightly longer than that of OC-SVM. This can be explained by the similarity calculation of the pairwise kernel function, which increases the time cost, since we need to compute the similarities between the representation vectors of pairs of training data samples. \begin{figure}[htbp] \setlength{\abovecaptionskip}{-0.1cm} \centering \includegraphics[width=0.4\textwidth]{image/score_time} \caption{Comparison of different models based on testing time cost.} \label{fig:score_time} \vspace{-0.3cm} \end{figure} Figure \ref{fig:score_time} shows the comparison of testing time costs among the different models. The testing time costs of the models are relatively stable as well, and all models except GANomaly complete the testing task within one second. Comparing Figure \ref{fig:fit_time} and Figure \ref{fig:score_time}, we find that the testing time of our method is shorter than its training time. This is due to a strategy of our method: once training is completed, the kernel mapping parameters are stored to save computation at test time. GANomaly again shows the largest testing time cost, because its testing phase needs to compute two representation vectors for each testing data sample; thus, its testing time is not much smaller than its training time. \subsection{Case Study: Visualization of Spatio-temporal Embeddings} The spatio-temporal representation learning phase is an important step in our structured detection framework.
An effective representation learning method should preserve the patterns of normal and attack behaviors and maximize the distance between them in the detection task. To validate the discriminative capability of our learned representations, we visualize the spatio-temporal embeddings in a 2-dimensional space. Specifically, we first select 3000 normal and 3000 attack spatio-temporal embeddings, and then apply the t-SNE manifold method to visualize them. Figure \ref{fig:emb_st_v} shows the visualization results of the normal and attack data samples. We find that our representation learning results are discriminative in the transformed 2-dimensional space: the learned normal and attack representation vectors cluster into dense areas. This observation also suggests that non-linear models are more appropriate than linear methods for distinguishing normal and attack behaviors. \begin{figure}[htbp] \setlength{\abovecaptionskip}{-0.1cm} \centering \includegraphics[width=0.3\textwidth]{image/embedding_visual.png} \caption{Visualization results for the spatio-temporal embeddings.} \label{fig:emb_st_v} \vspace{-0.3cm} \end{figure} \section{Introduction} Water Treatment Networks (WTNs) are critical infrastructures that use industrial control systems, sensors, and communication technologies to control water purification processes and improve water quality and distribution for drinking, irrigation, or industrial uses. Despite being critical infrastructures, WTNs are vulnerable to cyber attacks.
For example, the water sector reported the fourth largest number of incidents in 2016~\footnote{https://www.osti.gov/servlets/purl/1372266}. What does a cyber attack on a WTN look like? Figure~\ref{fig:wtn} shows that a water treatment procedure includes six stages ({\it i.e.}, P1-P6), each of which is monitored by sensors; a cyber attack compromises the RO Feed Pump sensor of P4 to change the levels of chemicals used to treat tap water. As a result, there is a compelling need for an effective solution to attack detection in WTNs. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{image/waternetwork} \caption{Cyber attack example: a cyber attack happens at the RO Feed Pump of P4, and its effect then spreads to other devices in P4.} \vspace{-0.4cm} \label{fig:wtn} \end{figure} In the literature, there are a number of studies on cyber attack detection in WTNs~\cite{goh2017anomaly,romano2010real,li2019mad,feng2017multi}. However, most of these studies only exploit traditional spatio-temporal data preprocessing and pattern extraction methods to distinguish attack patterns. Our preliminary explorations find that tremendous opportunities exist in teaching a machine to augment the differences between normal and attack patterns. To this end, in this paper, we aim to solve the attack detection problem effectively by augmenting the difference between normal and attack patterns in WTNs. However, it is challenging to mine the spatio-temporal graph stream data of WTNs and to identify a strategy that maximizes the pattern divergence between normal and attack behaviors.
By carefully examining the sensing data of WTNs, we identify three characteristics of cyber attacks: (1) {\it delayed effect}, meaning that many attacks do not take effect immediately, but usually exhibit symptoms after a while; (2) {\it continued effect}, meaning that the effects of attacks sustain for a while rather than disappearing rapidly; (3) {\it cascading effect}, meaning that the effects of attacks propagate to other sensors across the whole WTN. Specifically, the {\it delayed} and {\it continued} effects are both temporal, while the {\it cascading effect} is spatial. More importantly, these three effects are mutually coupled and mutually reinforcing in WTNs. A new framework is therefore required to maximize the margin between normal and attack patterns under these three coupled effects. Along this line, we propose a structured detection framework with two main phases: (1) spatio-temporal representation learning, which includes two modules that incorporate temporal and spatial effects, respectively; and (2) improved unsupervised one-class detection, which uses a newly designed pairwise kernel to make detection more accurate. Next, we briefly introduce our structured spatio-temporal detection framework, named STOD. {\bf Phase 1: Spatio-temporal representation learning.} This phase aims to learn a good spatio-temporal representation over the sensing data of WTNs with two modules. The first module incorporates temporal effects. Cyber attacks in WTNs exhibit temporally-dependent attack behaviors, sequentially-varying attack purposes over time, and delayed negative impacts. Traditional methods (e.g., AR, MA, ARMA, ARIMA, arrival density of point processes, change point detection) mostly model the patterns of data points at each timestamp. However, we find that partitioning the sensing data into a sequence of time segments can better describe the {\it delayed} and {\it continued} effects of attacks.
Therefore, we propose to segment the sensing data into a sequence of time segments. We then exploit a sequence-to-sequence (seq2seq) embedding method to characterize the temporal dependencies within each time segment. To improve the seq2seq method, we develop a new neural reconstruction structure that reconstructs not just a time segment, but also the first and second derivatives of the time segment. In this way, the improved seq2seq method is aware of the values, changing rates, and changing trends (first- and second-order derivatives) of the sensor measurements. Through this module, we obtain the temporal embedding of each time segment of each sensor. The second module incorporates spatial effects. The effects of cyber attacks in WTNs spatially diffuse and propagate to other sensors over time. Therefore, exploring the propagation structure can help model attack patterns and significantly improve detection accuracy. But how can we capture the spatial structure of this propagation? The topology of a WTN is a graph of interconnected sensors. We map the temporal embedding of one time segment of a sensor onto the graph of the WTN as node attributes. We construct Spatio-Temporal Graphs (STGs), in which a node is a sensor and its attributes are the temporal embedding of that sensor, to describe the state of the WTN. In this way, the STGs not only contain the spatial connectivity among different sensors, but also include the temporal patterns through the mapped temporal embeddings. We develop a graph embedding model to jointly learn the state representations of the WTN from the relational STGs. {\bf Phase 2: Improved Unsupervised One-Class Detection.} In reality, WTNs are mostly normal, with only a small number of cyber attack events, so attack data samples are quite rare. This trait makes the data distributions of normal and attack samples extremely imbalanced. How can we overcome the imbalanced data distribution to accurately detect cyber attacks? One-class detection fits this problem well.
In particular, one-class SVM (OC-SVM) is a popular detection model, in which a hyperplane is identified to divide normal and abnormal data samples after they are mapped to a high-dimensional space by kernel functions. While the vanilla OC-SVM achieves promising results, its kernel functions can be improved by exploiting the similarities between data samples. Specifically, we propose a new pairwise kernel function to augment the distance between normal and attack patterns by preserving similarity across different data samples. Consequently, normal data samples are grouped into a cluster, while abnormal data samples are pushed away from normal data. In our framework, we feed the learned state representations of the WTN into the improved OC-SVM, which uses the pairwise kernel to detect attacks. In summary, we develop a structured detection framework for cyber attack detection in WTNs. Our contributions are as follows: (1) we investigate an important problem of defending critical graph-structured infrastructures via cyber attack detection, which is important for building resilient and safe communities; (2) we develop a structured detection framework to maximize the margin between normal and attack patterns by integrating spatio-temporal knowledge, deep representation learning, and pairwise one-class detection; (3) we implement and test our proposed framework on real-world water treatment network data and demonstrate the enhanced performance of our method. Specifically, our detection method achieves an accuracy of $91.65\%$, with average improvement ratios of $82.78\%$ and $22.96\%$ with respect to F1 and AUC, compared with baseline methods. \section{Proposed Method} We first introduce time segment embedding, then illustrate the construction of spatio-temporal graphs using temporal embeddings and the sensor network, present spatio-temporal graph-based representation learning, and, finally, discuss the integration with one-class detection. \subsection{Embedding Sequential Patterns of Time Segments} \label{sec:temporal_embedding} We first model the sequential patterns of time segments. The sequential patterns of a WTN involve two essential measurements: (1) the changing rate and (2) the trend of the changing rate, which correspond to the first and second derivatives, respectively. Therefore, in addition to the raw data points in a segment, we introduce both the first and second derivatives to quantify the sequential patterns, resulting in an augmented segment. Formally, below we define the augmented segment. \begin{definition} {\it Augmented Segment.} Given a sensory data segment denoted by $\mathbf{S}_i=[\mathbf{v}_i^1, \mathbf{v}_i^2, \cdots, \mathbf{v}_i^k, \cdots, \mathbf{v}_i^K]$, where $\mathbf{v}_i^k \in \mathbb{R}^{N\times 1}$ denotes the sensory measurements of all the sensors of the $i$-th segment at the $k$-th record, the corresponding first-order derivative segment is $\mathbf{S}_i^{'}=[\frac{\partial \mathbf{S}_i}{\partial \mathbf{v}_i^2}, \frac{\partial \mathbf{S}_i}{\partial \mathbf{v}_i^3}, \cdots, \frac{\partial \mathbf{S}_i}{\partial \mathbf{v}_i^K}]$, and the corresponding second-order derivative segment is $\mathbf{S}_i^{''}=[\frac{\partial \mathbf{S}_i^{'}}{\partial \mathbf{v}_i^3}, \frac{\partial \mathbf{S}_i^{'}}{\partial \mathbf{v}_i^4}, \cdots, \frac{\partial \mathbf{S}_i^{'}}{\partial \mathbf{v}_i^K}]$. The augmented segment $\tilde{\mathbf{S}}_i$ is then defined as the concatenation of the raw segment, the first-order derivative segment, and the second-order derivative segment: $\tilde{\mathbf{S}}_i = [\mathbf{S}_i, \mathbf{S}_i^{'}, \mathbf{S}_i^{''}]$.
For convenience, $\tilde{\mathbf{S}}_i$ can also be written as $\tilde{\mathbf{S}}_i=[\mathbf{r}_i^1, \mathbf{r}_i^2,\ldots,\mathbf{r}_i^{3K-3}]$, where the elements $[\mathbf{r}_i^1, \mathbf{r}_i^2, \cdots, \mathbf{r}_i^{K}]$ correspond to the elements of $\mathbf{S}_i$, the elements $[\mathbf{r}_i^{K+1}, \mathbf{r}_i^{K+2}, \cdots, \mathbf{r}_i^{2K-1}]$ correspond to the elements of $\mathbf{S}^{'}_i$, and the elements $[\mathbf{r}_i^{2K}, \mathbf{r}_i^{2K+1}, \cdots, \mathbf{r}_i^{3K-3}]$ correspond to the elements of $\mathbf{S}^{''}_i$. \end{definition} We provide an example of constructing an augmented segment. Suppose there are two sensors in the WTN and three measurement records in each time segment, i.e., $N=2$ and $K=3$. Consider the $i$-th segment $\mathbf{S}_i = [[1,3,4],[2,8,5]]$, whose size is $2\times3$. We first calculate $\mathbf{S}^{'}_i$ by row: $\mathbf{S}^{'}_i = [[2,1],[6,-3]]$. Afterward, $\mathbf{S}^{''}_i=[[-1],[-9]]$. Finally, we concatenate the three segments by row: $\tilde{\mathbf{S}}_i = [[1,3,4,2,1,-1],[2,8,5,6,-3,-9]]$. Figure \ref{fig:temporal_embedding} shows the process of temporal embedding. The temporal embedding module is a seq2seq model based on the encoder-decoder paradigm that takes a non-augmented segment as input and reconstructs the corresponding augmented segment. The objective is to minimize the loss between the original augmented segment and the reconstructed one. Next, we illustrate how our model operates on the $i$-th segment $\mathbf{S}_i$. The encoding step feeds the segment $\mathbf{S}_i$ into a seq2seq encoder and outputs the latent representation of the segment, $\mathbf{U}_i$. Formally, as shown in Equation~\ref{equ:seq_enc}, given the segment data $\mathbf{S}_i=[\mathbf{v}_i^1, \mathbf{v}_i^2, \ldots, \mathbf{v}_i^K]$, the first hidden state $\mathbf{h}^1$ is calculated from the value of the first time step.
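The augmented-segment construction above can be sketched in a few lines of Python with numpy (the array values reproduce the worked example; this is an illustration, not our full pipeline):

```python
import numpy as np

def augment_segment(S):
    """Build the augmented segment [S, S', S''] by concatenating the raw
    segment with its first- and second-order discrete derivatives,
    computed row-wise (i.e., per sensor)."""
    d1 = np.diff(S, n=1, axis=1)   # first-order derivative, shape (N, K-1)
    d2 = np.diff(S, n=2, axis=1)   # second-order derivative, shape (N, K-2)
    return np.concatenate([S, d1, d2], axis=1)  # shape (N, 3K-3)

S_i = np.array([[1, 3, 4],
                [2, 8, 5]])        # N = 2 sensors, K = 3 records
print(augment_segment(S_i))
# [[ 1  3  4  2  1 -1]
#  [ 2  8  5  6 -3 -9]]
```

The resulting $2 \times 6$ array matches the worked example $\tilde{\mathbf{S}}_i$ above.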
Then, recursively, the hidden state of the previous time step $\mathbf{h}^{t-1}$ and the value of the current time step $\mathbf{v}_i^t$ are fed into an LSTM to produce the hidden state of the current time step, $\mathbf{h}^t$. Finally, we concatenate all of the hidden states by row (sensor) to obtain the latent feature matrix $\mathbf{U}_{i}$. \begin{equation} \left\{ \begin{array}{lr} \mathbf{h}^1 =\sigma (\mathbf{W}_e \mathbf{v}_i^1 + \mathbf{b}_e), & \\ \mathbf{h}^t=LSTM([\mathbf{v}_i^t, \mathbf{h}^{t-1}]), \forall t \in \{2,\dots,K\}, \\ \mathbf{U}_{i} = CONCAT(\mathbf{h}^1, \mathbf{h}^2, \ldots , \mathbf{h}^K), & \end{array} \right. \label{equ:seq_enc} \end{equation} where $\mathbf{W}_e$ and $\mathbf{b}_e$ are the weight and bias of the encoding step, respectively. In the decoding step, the decoder takes $\mathbf{U}_i$ as input and generates a reconstructed augmented segment: $[\mathbf{\hat{r}}_i^1,\mathbf{\hat{r}}_i^2, \ldots, \mathbf{\hat{r}}_i^{3K-3}]$. Formally, as shown in Equation~\ref{equ:decoder}, the first hidden state of the decoder, $\mathbf{\hat{h}}^1$, is copied from the last hidden state of the encoder, $\mathbf{h}^K$. Then the hidden state of the previous time step $\mathbf{\hat{h}}^{t-1}$, the element of the previous time step $\mathbf{\hat{r}}_i^{t-1}$, and the latent feature matrix $\mathbf{U}_i$ are fed into the LSTM to produce the hidden state of the current time step, $\mathbf{\hat{h}}^t$. Finally, the reconstructed value of the current time step, $\mathbf{\hat{r}}_i^{t}$, is produced from the current hidden state $\mathbf{\hat{h}}^t$, activated by the sigmoid function $\sigma$.
\begin{equation} \left\{ \begin{array}{lr} \hat{\mathbf{h}}^1 = \mathbf{h}^K, \\ \hat{\mathbf{r}}_i^{1} = \sigma(\mathbf{W}_d \hat{\mathbf{h}}^{1} + \mathbf{b}_d), \\ \hat{\mathbf{h}}^{t} = LSTM([ \hat{\mathbf{r}}^{t-1}_i, \hat{\mathbf{h}}^{t-1}, \mathbf{U}_i ]), \forall t \in \{2,\dots,K\}, \\ \hat{\mathbf{r}}_i^{t} = \sigma(\mathbf{W}_d \hat{\mathbf{h}}^{t} + \mathbf{b}_d), \forall t \in \{2,\dots,K\}, \\ \end{array} \right. \label{equ:decoder} \end{equation} where $\mathbf{W}_d$ and $\mathbf{b}_d$ are the weight and bias of the decoding step, respectively. After the decoding step, we obtain the reconstructed augmented segment sequence $[\hat{\mathbf{r}}_i^1, \hat{\mathbf{r}}_i^2, \ldots, \hat{\mathbf{r}}_i^{3K-3}]$. The objective is to minimize the reconstruction loss between the original and reconstructed augmented segment sequences. The overall loss is \begin{equation} \min \sum \limits_{i=1}^{m} \sum \limits^{3K-3}_{k=1} || \mathbf{r}_i^k - \hat{\mathbf{r}}_i^{k} ||^2. \end{equation} In this way, we obtain the latent temporal embedding of the $i$-th time segment, denoted by $\mathbf{U}_i$. \subsection{Temporal Embedding as Node Attributes: Constructing Spatio-temporal Graphs} \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{image/graph_map.pdf} \caption{The illustration of constructing spatio-temporal graphs.} \label{fig:graph_map} \vspace{-0.3cm} \end{figure} The temporal embedding obtained in Section \ref{sec:temporal_embedding} models the temporal effects of cyber attacks. To further incorporate the spatial effects of the WTN, we map the temporal embedding onto the WTN as node attributes. Take the temporal embedding of the $i$-th segment, $\mathbf{U}_i$, as an example.
Since each row of $\mathbf{U}_i$ is the temporal embedding of a segment of one sensor (node), we map each row of $\mathbf{U}_i$ to the corresponding node (sensor) as its attributes, resulting in an attributed WTN graph $G_i$, which we call a {\it Spatio-temporal Graph} (STG); it preserves both the spatial and temporal effects. \subsection{Learning Representations of STGs} As Figure \ref{fig:collective_embedding} shows, we develop a spatio-temporal graph representation learning framework to preserve not just temporal patterns, but also spatial patterns, in a latent embedding space. We take the STG of the $i$-th time segment, $G_i$, as an example to explain the representation process. Formally, we denote $G_i = (\mathbf{U}_i,\mathbf{A}_i)$, where $\mathbf{A}_i$ is an adjacency matrix that describes the connectivity among the sensors, and $\mathbf{U}_i$ is the feature matrix formed by the temporal embeddings of all the sensors at the $i$-th time segment. The representation learning process is formulated as follows: given the $i$-th STG $G_i$, the objective is to minimize the reconstruction loss between the input $G_i$ and the reconstructed graph $\hat{G}_i$ through an encoding-decoding framework, in order to learn a latent embedding $\mathbf{z}_i$. The encoder consists of two Graph Convolutional Network (GCN) layers. The first GCN layer takes $\mathbf{A}_i$ and $\mathbf{U}_i$ as inputs and outputs the lower-dimensional feature matrix $\mathbf{\hat{U}}_i$. Specifically, the encoding process of the first GCN layer is given by: \begin{align} \begin{split} \mathbf{\hat{U}}_i &= RELU(GCN(\mathbf{U}_i,\mathbf{A}_i)) \\ &=RELU(\hat{\mathbf{D}}_i^{-\frac{1}{2}} \mathbf{A}_i\hat{\mathbf{D}}_i^{-\frac{1}{2}}\mathbf{U}_i\mathbf{W}_0) \end{split} \end{align} where $\hat{\mathbf{D}}_i$ is the diagonal degree matrix of $G_i$, and $\mathbf{W}_0$ is the weight matrix of the first GCN layer.
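A minimal numpy sketch of the GCN propagation rule above, on toy inputs (the sizes and random data are illustrative; a real implementation would typically use a GNN library and a renormalized adjacency):

```python
import numpy as np

def gcn_layer(A, U, W):
    """One GCN propagation step: ReLU(D^{-1/2} A D^{-1/2} U W),
    where D is the diagonal degree matrix of A."""
    deg = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    A_norm = D_inv_sqrt @ A @ D_inv_sqrt
    return np.maximum(A_norm @ U @ W, 0.0)   # ReLU

# Toy STG: 4 sensors, 8-dimensional temporal embeddings, 5 hidden units.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # sensor connectivity
U = rng.normal(size=(4, 8))                 # node attributes (temporal embeddings)
W0 = rng.normal(size=(8, 5))                # first-layer weights
U_hat = gcn_layer(A, U, W0)
print(U_hat.shape)                          # (4, 5): one row per sensor
```

Each output row mixes a sensor's own temporal embedding with those of its neighbors, which is how the spatial (cascading) effect enters the representation.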
Since the latent embedding $\mathbf{z}_i$ of the graph is sampled from a prior normal distribution, the purpose of the second GCN layer is to estimate the parameters of that distribution. This layer takes $\mathbf{A}_i$ and $\mathbf{\hat{U}}_i$ as the input, and produces the mean $\bm{\mu}$ and the variance $\bm{\delta}^2$ of the prior normal distribution as the output. Thus the encoding process of the second GCN layer can be formulated as \begin{equation}\label{eqn:secondlayer} \bm{\mu},\log(\bm{\delta}^2) = \mathrm{GCN}( \hat{\mathbf{U}}_i,\mathbf{A}_i) = \hat{\mathbf{D}}_i^{-\frac{1}{2}}\mathbf{A}_i\hat{\mathbf{D}}_i^{-\frac{1}{2}} \hat{\mathbf{U}}_i\mathbf{W}_1, \end{equation} where $\mathbf{W}_1$ is the weight matrix of the second GCN layer. Then we utilize the reparameterization trick to mimic the sampling operation and construct the latent representation $\mathbf{z}_i$. The process is formulated as \begin{equation} \mathbf{z}_i=\bm{\mu}+\bm{\delta} \times \epsilon, \end{equation} where $\epsilon \sim \mathcal{N}(0,1)$. The decoding step takes the latent representation $\mathbf{z}_i$ as the input and outputs the reconstructed adjacency matrix $\mathbf{\hat{A}}_i$. The decoding process is denoted as \begin{equation} \mathbf{\hat{A}}_i = \sigma(\mathbf{z}_i\mathbf{z}_i^T). \end{equation} In addition, the core calculation of the decoding step can be written as $\mathbf{z}_i\mathbf{z}_i^T = \left \|\mathbf{z}_i\right \| \left \|\mathbf{z}_i^T\right\| \cos\theta$. Since $\mathbf{z}_i$ is a node-level representation, the inner-product calculation helps capture the correlations among different sensors. We minimize the joint loss function $\mathcal{L}_{g}$ during the training phase, which is formulated as Equation \ref{equ:loss}. $\mathcal{L}_{g}$ includes two parts. The first part is the Kullback-Leibler divergence between the distribution of $\mathbf{z}_i$ and the prior standard normal distribution $\mathcal{N}(0,1)$.
The second part is the squared error between $\mathbf{A}_i$ and $\mathbf{\hat{A}}_i$. The training purpose is to make $\mathbf{\hat{A}}_i$ as similar as possible to $\mathbf{A}_i$, and to make the distribution of $\mathbf{z}_i$ as close as possible to $\mathcal{N}(0,1)$. The total loss is denoted as \begin{equation} \mathcal{L}_{g} = \sum \limits_{i=1}^{m} \underbrace{ KL[q(\mathbf{z}_i|\mathbf{U}_i,\mathbf{A}_i) || p(\mathbf{z}_i)] }_{\text{KL Divergence between $q(.)$ and $p(.)$}} + \overbrace{ \left \| \mathbf{A}_i-\hat{\mathbf{A}}_i \right \|^2 }^{\text{Loss between $\mathbf{A}_i$ and $\mathbf{\hat{A}}_i$}} \label{equ:loss} \end{equation} When the model converges, we apply global average aggregation to $\mathbf{z}_i$. Then $\mathbf{z}_i$ becomes the graph-level representation of the WTN, which contains the spatio-temporal information of the whole system at the $i$-th time segment. \subsection{One-Class Detection with Data Similarity Awareness} In reality, most sensor data are normal, and attack-related data are scarce and expensive. This results in the problem of unbalanced training data. How can we solve this problem? Can we develop a solution that only uses normal data for attack detection? This is the key algorithmic challenge for this phase. One-class classification is a promising solution that aims to find a hyperplane to distinguish normal and attack patterns using only normal data. Specifically, OC-SVM is a classical one-class classification model. OC-SVM includes two steps: (1) mapping low-dimensional data into a high-dimensional feature space by a kernel function; (2) learning the parameters of the hyperplane that divides normal and abnormal data via optimization. Intuitively, in the feature space provided by OC-SVM, the normal (or abnormal) data are expected to be closer to each other, while there should be a large distance between normal and abnormal data. In other words, similar data points should be closer to each other than dissimilar ones.
However, traditional kernel functions ({\it e.g.}, linear, polynomial, radial basis function (RBF), sigmoid) cannot preserve such a characteristic well. How can we make data samples well-separated so as to achieve it? To address this question, we propose a new pairwise kernel function that reduces the distances between similar data points, while enlarging the distances between dissimilar ones. Formally, given the representation matrix $\mathbf{Z} = [\mathbf{z}_1, \cdots, \mathbf{z}_i, \cdots, \mathbf{z}_m]$, the pairwise kernel function is given by: \begin{equation} Kernel = \tanh \left(\frac{1}{\mathcal{D}( \mathbf{Z})}\mathbf{Z}\mathbf{Z}^{T}+sim(\mathbf{Z}, \mathbf{Z}^{T})+\mathbf{c}\right) \label{con:newk} \end{equation} where $\mathbf{Z}^{T}$ is the transpose of $\mathbf{Z}$, $\mathcal{D}(\mathbf{Z})$ is the covariance matrix of $\mathbf{Z}$, and $sim(\mathbf{Z}, \mathbf{Z}^T) \in \mathbb{R}^{m \times m}$ is the pairwise similarity matrix between segments. Compared with the vanilla sigmoid kernel function, we add the term $sim(\mathbf{Z}, \mathbf{Z}^T)$, whose entries lie in $[-1,1]$. If two segments are more similar, the corresponding value in $sim(\mathbf{Z}, \mathbf{Z}^T)$ is closer to $1$; otherwise, the value is closer to $-1$. Therefore, when two segments are similar (e.g., both are normal or both are abnormal samples), the proposed pairwise kernel function pushes these two segments closer; otherwise, the two segments are pushed away from each other. \begin{figure}[htbp] \centering \includegraphics[width=0.35\textwidth]{image/svm_kernel} \setlength{\abovecaptionskip}{-0.01cm} \caption{The illustration of the pairwise kernel: given normal data $x_1$, since $x_2$ is normal and $x_3$ is an attack, $sim(x_1,x_2) \textgreater sim(x_1,x_3)$ and the signs of $sim(x_1,x_2)$ and $sim(x_1,x_3)$ are opposite.
The pairwise kernel increases the distance between $x_2$ and $x_3$.} \label{fig:ker_func} \vspace{-0.2cm} \end{figure} The pairwise kernel is able to enlarge the distance among samples of different categories in the feature space, which makes the OC-SVM converge more easily and detect cyber attacks more accurately. Figure \ref{fig:outlier_detection} shows the detection process of cyber attacks. The spatio-temporal embedding $\mathbf{z}_i$ is fed into the integrated OC-SVM, which utilizes the pairwise kernel function to detect cyber attacks and outputs the corresponding status labels of the WTN, indicating whether a cyber attack happens at the $i$-th time segment. \subsection{Comparison with Related Work} Recently, many attempts have been made to detect cyber attacks in WTNs. For instance, Lin \textit{et al.} utilized a probabilistic graphical model to preserve the spatial dependency among sensors in WTNs and a one-class classifier to detect cyber attacks \cite{lin2018tabor}. Li \textit{et al.} adopted LSTM-based recurrent networks as the base models of a GAN framework to develop an anomaly detection algorithm for WTNs \cite{li2019mad}. Raciti \textit{et al.} constructed a real-time anomaly detection system based on a clustering model \cite{raciti2012anomaly}. However, these models exhibit several limitations when detecting cyber attacks: (i) the changing trend of sensing data within a time segment is not preserved; (ii) the spatial patterns among sensors are only partially captured; (iii) the similarity between different data samples is not fully utilized. To overcome these limitations, we propose a new spatio-temporal graph (STG) to preserve and fuse the spatio-temporal effects of WTNs simultaneously. Moreover, a new pairwise kernel that utilizes data similarity to enlarge the distance among different patterns is proposed to improve the accuracy of cyber attack detection.
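A minimal numpy sketch of the pairwise kernel in Eq.~\eqref{con:newk} is given below. Since the text does not fix them, we take the scaling $1/\mathcal{D}(\mathbf{Z})$ to be the scalar $1/\mathrm{tr}(\mathrm{cov}(\mathbf{Z}))$ and $sim(\cdot,\cdot)$ to be cosine similarity; both choices are our assumptions, the latter lying in $[-1,1]$ as required.

```python
import numpy as np

def pairwise_kernel(Z, c=1.0):
    """Sketch of K = tanh(Z Z^T / s + sim(Z, Z^T) + c) over m segment embeddings.

    Assumptions (not fixed by the text): the covariance-based scaling
    1/D(Z) is taken as the scalar 1/trace(cov(Z)), and sim is cosine
    similarity, whose entries lie in [-1, 1].
    """
    s = np.trace(np.cov(Z, rowvar=True)) + 1e-12        # scalar normalisation
    norms = np.linalg.norm(Z, axis=1, keepdims=True) + 1e-12
    sim = (Z / norms) @ (Z / norms).T                   # cosine similarity
    return np.tanh(Z @ Z.T / s + sim + c)
```

The resulting $m \times m$ Gram matrix can then be passed to a one-class SVM as a precomputed kernel.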
\section{Problem Statement and Framework Overview} We first introduce the statement of the problem, and then present an overview of the framework. \begin{figure*}[htbp] \setlength{\abovecaptionskip}{-0.01cm} \centering \subfigure[\textbf{P1:} Embedding time-segment sequential patterns.]{ \begin{minipage}[t]{0.33\linewidth} \centering \includegraphics[width=1\linewidth]{image/seq2seq} \label{fig:temporal_embedding} \end{minipage}% }% \subfigure[\textbf{P1:} Modeling spatio-temporal patterns over STG.]{ \hspace{0.5cm} \begin{minipage}[t]{0.33\linewidth} \centering \includegraphics[width=1\linewidth]{image/vgae} \label{fig:collective_embedding} \end{minipage}% }% \subfigure[\textbf{P2:} Anomaly detection with data similarity.]{ \hspace{-0.6cm} \begin{minipage}[t]{0.33\linewidth} \centering \includegraphics[width=0.75\linewidth]{image/detect} \vspace{0.39cm} \label{fig:outlier_detection} \end{minipage} }% \centering \caption{An overview of the cyber attack detection framework for water treatment networks.} \label{fig:framework} \vspace{-0.6cm} \end{figure*} \subsection{Problem Statement} We aim to study the problem of cyber attack detection using the sensing data of WTNs. We observe that cyber attacks in WTNs exhibit not just spatial diffusion patterns, but also two temporal effects (i.e., delayed and continued effects). As a result, we partition the sensing data streams into non-overlapping time segments and investigate detection at the time-segment level. \begin{definition} {\it The WTN Attack Detection Problem. } Formally, assuming a WTN consists of $N$ sensors, given the sensing data streams of the WTN, we evenly divide the streams into $m$ non-overlapping segments of $K$ sensory records each. Consequently, we obtain a segment sequence $\mathbf{S}=[\mathbf{S}_1, \mathbf{S}_2, \cdots, \mathbf{S}_i, \cdots, \mathbf{S}_m]$, where the matrix $\mathbf{S}_i \in \mathbb{R}^{N \times K}$ is the $i$-th segment.
Each segment is associated with a cyber attack status label: if a cyber attack happens within $\mathbf{S}_i$, the label of this segment is marked as $y_i = 1$; otherwise, $y_i = 0$. The objective is to develop a framework that takes the segment sequence $\mathbf{S}$ as input, and outputs the corresponding cyber attack label for each segment so as to maximize detection accuracy. \end{definition} \subsection{Framework Overview} Figure \ref{fig:framework} shows that our framework includes two phases: (1) Spatio-temporal representation learning (\textbf{P1}); (2) Improving unsupervised one-class detection (\textbf{P2}). Specifically, there are two modules in Phase 1: (a) Embedding time-segment sequential patterns, in which a seq2seq model is leveraged to capture the temporal dependencies within a segment. The learned representations are then attached to each node (sensor) of the WTN as node attributes to construct STGs. Note that temporal patterns are integrated by attaching temporal embeddings as attributes, while spatial connectivity is integrated via a graph structure over sensors, which is introduced next. (b) Modeling spatio-temporal patterns over STGs, in which the fused embedding is learned through an encoder-decoder paradigm integrated with a Graph Convolutional Network (GCN). The fused embedding summarizes the information of STGs to profile the spatio-temporal characteristics of WTNs. Finally, Phase 2 exploits the fused embedding as input to detect attacks. Phase 2 has one module, namely anomaly detection with pairwise segment similarity awareness: the fused embedding is fed into an improved one-class anomaly detector, where the similarities between different segment embedding vectors are introduced into the pairwise kernel of the detector to enlarge the distance between normal and attack patterns. \section{Related Work} \textbf{Representation Learning}.
Representation learning aims to learn a low-dimensional vector to represent the given data of an object. Representation learning approaches are three-fold: (1) probabilistic graphical models; (2) manifold learning approaches; (3) auto-encoders and their variants. The main idea of probabilistic graphical models is to learn an uncertain knowledge representation via a Bayesian network~\cite{friedman2004inferring,johnson2016composing}. The key challenge of such methods is to find the topological relationships among nodes in the probabilistic graphical model. Manifold learning methods utilize non-parametric approaches to find a manifold and embedding vectors in a low-dimensional space based on neighborhood information~\cite{zhu2018image,wang2018flexible}. The discriminative ability of such methods is high in many applications, but manifold learning is usually time-consuming. Recently, deep learning models have been introduced to conduct representation learning. The auto-encoder is a classical neural network framework, which embeds non-linear relationships in the feature space by minimizing the reconstruction loss between the original and reconstructed data~\cite{wang2016auto,suk2015latent,calvo2019selectional}. When representation learning meets spatial data, auto-encoders can integrate spatio-temporal statistical correlations to learn more effective embedding vectors~\cite{cedolin2010spatiotemporal,ma2019ts,pan2019urban}. For instance, Singh et al. used the auto-encoder framework to learn spatio-temporal representations of traffic videos to help detect road accidents~\cite{singh2018deep}. Wang et al. utilized spatio-temporal representation learning to capture the intrinsic features of GPS trajectory data for driving behavior analysis~\cite{wang2019spatiotemporal}. \textbf{Deep Outlier Detection}. Outlier detection is a classical problem with important applications, such as fraud detection and cyber attack detection.
Recently, deep learning has been introduced into outlier detection. According to the availability of outlier labels, deep anomaly detection can be classified into three categories: (1) supervised deep outlier detection; (2) semi-supervised deep outlier detection; (3) unsupervised deep outlier detection. First, supervised deep outlier detection models usually train a deep classification model to distinguish whether a data sample is normal or not~\cite{yamanaka2019autoencoding,kawachi2019two}. These models are rarely applicable in practice, because it is difficult to obtain outlier labels; meanwhile, data imbalance is a serious issue that degrades the performance of supervised models. Second, semi-supervised outlier detection methods usually train a deep auto-encoder model to learn the latent embedding of normal data \cite{pmlr-v80-ruff18a,chalapathy2018anomaly,zhao2016classification}; the learned embedding vectors are then used to accomplish the outlier detection task. In deep semi-supervised outlier detection, one-class classification is an important research direction. For instance, Liu et al. proposed to detect anomalies in uncertain data with the SVDD algorithm \cite{liu2013svdd}. Many experiments have shown the adaptability of the one-class SVM. Third, unsupervised outlier detection models do not need any label information; they detect outliers based on intrinsic properties (e.g., scores, distances, similarities) of the data~\cite{liu2019generative,wang2019effective,lu2017unsupervised}. Such methods are appropriate for scenarios in which it is hard to collect label information. \textbf{Cyber Attack Detection in Water Treatment Networks}. Water purification plants are critical infrastructures in our local communities. Such infrastructures are usually vulnerable to cyber attacks. Early detection of cyber attacks in water treatment networks is thus significant for defending infrastructure safety and public health.
There are many existing studies on outlier detection in water treatment networks~\cite{feng2017multi,romano2010real,lin2018tabor,ramotsoela2019attack,adepu2016using}. For instance, Adepu et al. studied the impact of cyber attacks on water distribution systems~\cite{adepu2019investigation}. Goh et al. designed an unsupervised learning approach that uses recurrent neural networks as temporal predictors to detect attacks \cite{goh2017anomaly}. Inoue et al. compared the performance of deep neural networks and the OC-SVM for outlier detection in water treatment networks~\cite{inoue2017anomaly}. Raciti et al. developed a real-time outlier detection system based on clustering algorithms and deployed the system in a water treatment network~\cite{raciti2012anomaly}. However, there are few studies that integrate deep graph representation learning, spatio-temporal patterns, and one-class detection to address cyber attack detection more effectively.
\section{Introduction} In the literature, \emph{coverage control} has been gaining particular attention by the scientific community in recent years. Typical approaches follow the path of the seminal works in \mbox{\cite{cortes2003geometric,cortes2005coordination},} where agents interact in order to achieve two main objectives: (1) to partition the area of interest into zones for which a single agent is responsible for the covering (e.g., Voronoi regions); (2) to reach locations that minimize a function of the distance from each point in the region assigned to them (e.g., navigate to a weighted centroid of the assigned Voronoi region). Over the years, the importance of taking into account agents' limited sensing and communication capabilities has become paramount \cite{laventall2009coverage,kantaros2016distributed}. Recent innovations in the field include, among others, time varying methodologies \cite{lee2014multi}, dynamic approaches \cite{palacios2016distributed}, and adaptive management of agents' trust \cite{pierson2016adaptive}. Note that most of the existing literature focuses on continuous space coverage, while in several practical situations it might be required to cover just a discrete set of \emph{ points of interest} (PoI). In fact, a discrete space approach has lower computational demands with respect to continuous space ones and allows the modeling of specific targets of interest. To the best of our knowledge, work at the state of the art focused on the coverage of discrete points of interest is quite sparse. In a recent work \cite{Jiang:2017}, the authors use $k$-means to solve a discrete coverage problem where multiple agents are in charge of a set of PoIs. A related work, even though not specifically focused on coverage, is \cite{montijano2014efficient} where roles are assigned to the agents via constrained discrete optimization. In both cases, however, the agents are not constrained to have a limited sensing radius. 
In this paper, we consider a distributed problem where there is a finite and discrete set of $n$ PoIs in fixed locations in space, and a set of $r<n$ mobile agents aims at achieving the following two objectives: (1) to partition the PoIs in $r$ groups such that PoIs in the same group are spatially close while PoIs in different groups are far apart; (2) to associate each agent to a group and assign a location to such agent in order to maximize the coverage. Our distributed approach is inspired by the $C$-means \cite{Bezdek:1981}, a {\em non-exclusive} and unsupervised data clustering algorithm, where a set of points (e.g., the PoIs) are grouped so that each point can be associated, at the same time, to different clusters (i.e., to different agents) with different intensities. From a mathematical standpoint, we extend the classical \mbox{$C$-means} by introducing {\em proximity constraints}, which account for the limited sensing range of the agents. As a matter of fact, results dealing with the distribution of clustering algorithms, including the C-means, can be found in the literature~\cite{Patel:2013,TMC:2014,cinesi,CDC2016Oliva}. Compared to these works, where the sensors collaborate in order to compute the centroids and their associations in a distributed fashion, in our work we reverse the perspective. That is, the agents play the role of the centroids and collaborate in order to compute their locations and the associations to the PoIs; the latter play the role of the sensors.
The outline of the paper is as follows: Section \ref{sec:prelim} provides some preliminary definitions, while Section \ref{sec:problemstatement} introduces the problem addressed in this paper; Sections \ref{sec:assignment} and \ref{sec:refinement} solve two sub-problems that are the basis for our algorithm, while Section \ref{sec:algorithm} provides the proposed algorithm along with simulation results; the final Section \ref{sec:conclusions} collects some conclusive remarks and future work directions. \section{Preliminaries} \label{sec:prelim} In the following, we consider the Euclidean space $\mathbb{R}^d$, equipped with the usual norm. Let us define the {\em metric projection} onto a convex set as follows: \begin{definition}[Metric Projection] Let $X\subset \mathbb{R}^d$ be a convex set. The metric projection onto $X$ is a function $\proj{X}:\mathbb{R}^d\rightarrow X$ such that $ \project{X}{{\bf v}} = \argmin_{ {\bf z} \in X} \left \| {\bf v} - {\bf z} \right \|. $ \end{definition} In the following, we refer to a metric projection simply as projection. To serve the scope of this paper, we now specialize the projection onto a closed ball. \begin{definition}[Closed Ball] A closed ball $\mathbb{B}_{i,\rho}\subset\mathbb{R}^d$ with radius $\rho$ and center ${\bf p}_i\in \mathbb{R}^d$ is given by \mbox{$ \mathbb{B}_{i,\rho} =\{{\bf z}\in \mathbb{R}^d,\,:\, \|{\bf p}_i-{\bf z}\|\leq \rho\} $}. \end{definition} \begin{definition}[Projection onto a Closed Ball] Let \mbox{$\mathbb{B}_{i,\rho}\subset \mathbb{R}^d$} be the closed ball of radius $\rho>0$ centered at ${\bf p}_i\in \mathbb{R}^d$. The projection onto $\mathbb{B}_{i,\rho}$ is the function \mbox{$ \project{\mathbb{B}_{i,\rho}}{{\bf v}} = \alpha {\bf v} + (1-\alpha) {\bf p}_i $}, where $\alpha=\rho/\| {\bf v}-{\bf p}_i\|$ if $\| {\bf v}-{\bf p}_i\|>\rho$ and $\alpha=1$ otherwise. \end{definition} We now review the basics of convex constrained optimization. 
The reader is referred to \cite{Zangwill} for a comprehensive overview of the topic. \begin{definition}[Convex Constrained Optimization Problem] A convex constrained optimization problem with $s$ constraints has the following structure: \begin{equation} \label{prob:convexproblem} \begin{matrix} \min_{{\bf x}\in \mathbb{R}^d} f({\bf x})\\[2pt] \mbox{Subject to}\\ \begin{cases} g_i({\bf x})\leq 0,&i=1,\ldots, q\\ h_i({\bf x})=0,&i=q+1,\ldots, s. \end{cases} \end{matrix} \end{equation} where $f$ and all $g_i$ and $h_i$ are convex functions. The set of feasible solutions for the above problem is given by $$\Omega=\{{\bf x}\in \mathbb{R}^d\,|\,g_i({\bf x})\leq 0,\,\, 1\leq i\leq q;\, h_i({\bf x})= 0,\,\,q+1\leq i\leq s\},$$ which is a convex set since all $g_i$ and $h_i$ are convex. \end{definition} We now review conditions that are fundamental in order to solve convex constrained optimization problems. \begin{definition}[Slater's Conditions] The convex constrained optimization problem in Eq.~\eqref{prob:convexproblem} satisfies the {\em Slater's Condition} if there is an ${\bf x}\in \Omega$ such that $g_i({\bf x})<0$ for all $i=1, \ldots, q$ and $h_i({\bf x})=0$ for all $i=q+1, \ldots, s$. Moreover, if there is an ${\bf x}\in \Omega$ that satisfies the Slater's Condition and is such that all nonlinear $g_i({\bf x})$ are negative, while all linear $g_i({\bf x})$ are non-positive, then the problem is said to satisfy the {\em restricted Slater's Condition}. \end{definition} We now review Karush-Kuhn-Tucker (KKT) Theorem (see \cite{Zangwill} and references therein). 
\begin{theorem}[KKT Theorem] \label{theoKKT} Consider a convex constrained optimization problem as in Eq.~\eqref{prob:convexproblem} that satisfies the restricted Slater's Condition, and let the {\em Lagrangian function} be the function \mbox{$ f({\bf x},{\bm \lambda})=f({\bf x})+ \sum_{i=1}^q \lambda_i g_i({\bf x})+ \sum_{i=q+1}^s \lambda_i h_i({\bf x}), $} where ${\bm \lambda}=[\lambda_1,\ldots, \lambda_s]^T$. The point ${\bf x}^*\in \mathbb{R}^d$ is a {\em global minimizer} for the problem if and only if the following conditions hold: 1)~\mbox{$\partial f({\bf x},{\bm \lambda})/\partial {{\bf x}} |_{{\bf x}^*,{\bm \lambda}^*}=0$}; 2)~$\lambda^*_i g_i({\bf x}^*)=0$, for all $i=1,\ldots,q$; 3)~$g_i({\bf x}^*)\leq 0,\, 1\leq i\leq q$ and $h_i({\bf x}^*)=0,\, q+1\leq i \leq s$; 4)~\mbox{$\lambda^*_{i}\geq 0$}, for all $i=1,\ldots,s$. \end{theorem} \section{Problem Statement} \label{sec:problemstatement} Let us consider a set of $n$ PoIs and \mbox{$r< n$} agents deployed in a bounded region in $\mathbb{R}^d$, with \mbox{$d \in \{2, \, 3\}$}; we assume that each PoI $i$ is in a fixed location ${\bf p}_i\in\mathbb{R}^d$, while the $j$-th agent is initially in location ${\bf x}_j(0)\in \mathbb{R}^d$. The following set of assumptions is now in order: a)~ an agent $j$ is able to sense all PoIs for which the Euclidean distance from $j$ is less than or equal to~$\rho$; b)~an agent $j$ is able to communicate with all other agents for which the Euclidean distance from $j$ is less than or equal to a threshold $\theta\geq 2\rho$; c)~the agents are deployed initially in a way such that each PoI is sensed by at least one agent, and each agent senses at least one PoI; d)~the set $\{{\bf p}_1,\ldots, {\bf p}_n\}$ contains at least $n>r$ distinct points. Note that the last two assumptions, i.e., c) and d), are inherited from the standard C-means \cite{Bezdek:1981}. In particular, Assumption c) ensures that no PoI or agent is neglected by the optimization process. 
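As an illustration, Assumption c) can be checked numerically from the distance matrix $\delta_{ij}=\|{\bf p}_i-{\bf x}_j\|$; the following numpy sketch (function name ours) verifies that every PoI is sensed by at least one agent and every agent senses at least one PoI.

```python
import numpy as np

def deployment_is_admissible(P, X, rho):
    """Check Assumption c): agent j senses PoI i iff ||p_i - x_j|| <= rho,
    every PoI must be sensed by >= 1 agent, and every agent must sense >= 1 PoI.

    P : (n, d) PoI locations;  X : (r, d) agent locations.
    """
    # delta[i, j] = Euclidean distance between PoI i and agent j
    delta = np.linalg.norm(P[:, None, :] - X[None, :, :], axis=2)
    sensed = delta <= rho
    return bool(sensed.any(axis=1).all() and sensed.any(axis=0).all())
```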
Let ${\bf x}=[{\bf x}_1^T,\ldots, {\bf x}_r^T]^T\in \mathbb{R}^{rd}$ collect the locations of all agents and let $U$ be an $n\times r$ matrix encoding the associations among the agents and the PoIs, having $u_{ij}$ as its $(i,j)$-th entry. Moreover, let $\delta_{ij}=\|{\bf p}_i-{\bf x}_j\|$. The problem can be stated as follows. \begin{problem} \label{prob:ourproblem} Find \mbox{${\bf x}^*\in\mathbb{R}^{rd}$} and $U^*\in\mathbb{R}^{n\times r}$ such that $$ \begin{matrix} J({\bf x}^*,U^*)=\min_{{\bf x},U} \sum_{i=1}^n \sum_{j=1}^r u_{ij}^m\delta^2_{ij}\\ \mbox{Subject to}\\ \begin{cases} \begin{matrix} \sum_{j=1}^r u_{ij} =1,& \forall i& (I)\\ \sum_{i=1}^n u_{ij}>0,& \forall j& (II)\\ u_{ij}\in [0,1],& \forall i,j& (III)\\ u_{ij}(\delta_{ij}^2-\rho^2)\leq0,& \forall i,j& (IV) \end{matrix} \end{cases} \end{matrix} $$ where $m$ is a finite positive integer that is greater than one. \end{problem} Problem~\ref{prob:ourproblem} extends the standard C-means problem \cite{Bezdek:1981} through the \emph{proximity constraints} (IV). Note that constraint (I) ensures the complete coverage of each PoI by the agents, whereas constraint (II) ensures that each agent covers at least one PoI. We point out that the larger the parameter $m$ is, the less the associations $u_{ij}$ influence the objective function; as in the case of C-means, such a parameter accounts for the degree of overlapping among the different clusters\footnote{When there is no a priori knowledge, a popular choice is $m=2$ (see \cite{nayak2015fuzzy} and references therein).}. Note also that constraint (III) requires \mbox{$u_{ij}\in[0,1]$}; as a consequence, the combination of constraint (III) and the proximity constraints (IV) implies that $u_{ij}=0$ whenever $\delta_{ij}>\rho$. Therefore, these constraints ensure that an agent~$j$ does not cover those PoIs that are out of reach for it.
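For reference, the objective of Problem~\ref{prob:ourproblem} can be evaluated directly from the locations and associations; a minimal numpy sketch (function name ours) follows.

```python
import numpy as np

def coverage_cost(P, X, U, m=2):
    """Objective of the coverage problem: J = sum_i sum_j u_ij^m * ||p_i - x_j||^2.

    P : (n, d) PoI locations;  X : (r, d) agent locations;
    U : (n, r) association matrix;  m : fuzzifier, integer > 1.
    """
    delta2 = ((P[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)  # squared distances
    return float((U ** m * delta2).sum())
```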
In the following, we develop a distributed and iterative algorithm to solve Problem \ref{prob:ourproblem}; specifically, at each iteration $k$, our algorithm alternates between an {\em assignment phase}, where we find the optimal associations $u_{ij}(k)$ given the current locations ${\bf x}(k)$, and a {\em refinement phase}, where we assume the associations $u_{ij}(k)$ are fixed and find the optimal locations ${\bf x}(k+1)$ for the agents\footnote{We abstract away from the actual kinematics of the agents and we neglect the actual planning problem, assuming that a control loop to guide the agent toward the desired point exists.}. The two steps are iterated until a stopping condition is met. We next characterize (in Sections \ref{sec:assignment} and \ref{sec:refinement}) the optimality conditions for each sub-problem, respectively. Future work will be focused on studying the optimality of the overall algorithm. \section{Assignment Phase} \label{sec:assignment} In this section we discuss how, given fixed agents' locations, the optimal associations $U$ can be computed. Specifically, the optimal choice for the $i$-th row of $U$ (i.e., the associations of the $i$-th PoI to all the agents) is as follows.
If there is at least one agent $j$ such that ${\bf x}_j = {\bf p}_i$ (i.e., the $j$-th agent lies in the same location as the $i$-th PoI), the terms $u_{i1}, \ldots, u_{i,r}$ must satisfy\footnote{When two or more agents are at the same position as a PoI ${\bf p}_i$, there is an infinity of choices for the associations $u_{ij}$ that satisfy Eq.~\eqref{calcolamembership2}.} \begin{equation} \label{calcolamembership2} \sum_{j=1}^r u_{ij}(k)=1,\quad u_{ij}(k)=0 \mbox{ if } \delta_{ij}\neq 0; \end{equation} if, conversely, ${\bf x}_j \neq {\bf p}_i$ for all agents $j$ (i.e., no agent is at the same location as the $i$-th PoI), the terms $u_{ij}$ are as follows: \begin{equation} \label{calcolamembershipsparse} u_{ij}=\begin{cases}\left({\underset{h|\delta_{ih}\leq\rho}{\sum}\Big ( \frac{\delta_{ij}}{\delta_{ih}}\Big ) ^{\frac{2}{m-1}}}\right)^{-1},& \mbox{ if } \delta_{ij}\leq\rho,\\ 0,& \mbox{ otherwise}. \end{cases} \end{equation} We now prove the optimality of such a strategy. \begin{theorem} \label{prop:propositionassignment} For given agents' locations ${\bf x}$ that satisfy Assumption c), a solution $\{{\bf x}, U\}$ where $U$ satisfies Equations \eqref{calcolamembership2} and \eqref{calcolamembershipsparse} is a global minimizer for Problem~\ref{prob:ourproblem}. \end{theorem} \begin{proof} Consider a given ${\bf x}$ that satisfies Assumption c). Problem~\ref{prob:ourproblem} reduces to finding $U$ that minimizes \mbox{$J({\bf x}, U)=J(U)$} subject to constraints (I)--(III); as for the proximity constraints (IV) we have that if $\delta_{ij}\leq \rho$ then the corresponding proximity constraint is not active, while for $\delta_{ij}> \rho$, since $u_{ij}\in[0,1]$, the constraint reduces to $u_{ij}=0$. As a consequence, the objective function becomes $$ J(U)= \sum_{i=1}^n \sum_{j|\delta_{ij}\leq\rho} u_{ij}^m\delta^2_{ij}.
$$ Let $I_1$ be the index set of the PoIs that are not overlapped by any agent and $I_2$ be the complementary set, i.e., $I_1=\{i\in\{1,\ldots,n\}\,|\,\delta_{ij}>0 \mbox{ for all } j\in\{1,\ldots,r\}\}$ and \mbox{$I_2=\{1,\ldots,n\}\setminus I_1$}. We can split $J(U)$ in two terms \begin{equation} \label{eq:thetasplit} J(U)= \underbrace{\sum_{i\in I_1} \sum_{j| \delta_{ij}\leq\rho} u_{ij}^m\delta^2_{ij}}_{J_1(U)}+ \underbrace{\sum_{i\in I_2} \sum_{j| \delta_{ij}\leq\rho} u_{ij}^m\delta^2_{ij}}_{J_2(U)}. \end{equation} From Eq.~\eqref{calcolamembership2}, it follows that all terms $\delta_{ij}$ appearing in $J_2(U)$ are null, hence $J_2(U)=0$. This implies that the optimality of $U$ only depends on the entries $u_{ij}$ appearing in $J_1(U)$. Since ${\bf x}$ satisfies Assumption c) by construction, using Eq.~\eqref{calcolamembershipsparse} it follows that constraints (II) and (III) are always satisfied. Furthermore, it is convenient to rewrite the variables $u_{ij}$ as the square of new variables $w_{ij}$, i.e., \mbox{$u_{ij}=w^2_{ij}$.} By doing this, and since constraint (IV) implies that $u_{ij}=0$ when $\delta_{ij}>\rho$, we can rewrite constraint (I) as \mbox{$ \sum_{j| \delta_{ij}\leq\rho} w^2_{ij}=1 $}. As a result, our problem becomes that of finding $W^*\in \mathbb{R}^{n\times r}$ that solves $$ \begin{matrix} J(W^*)=\min_{W} \sum_{i\in I_1} \sum_{j| \delta_{ij}\leq\rho} w_{ij}^{2m}\delta^2_{ij}\\ \mbox{Subject to}\\ \begin{cases} \begin{matrix} \sum_{j| \delta_{ij}\leq\rho} w^2_{ij}=1,& \forall i\in I_1.& \end{matrix} \end{cases} \end{matrix} $$ Note that the above problem is convex and has just equality constraints, hence the Slater's Condition is trivially satisfied and Theorem~\ref{theoKKT} applies.
Let ${\bm \zeta}=[\zeta_1,\ldots, \zeta_q]^T$ be the Lagrangian multipliers associated to the constraints; the corresponding Lagrangian function is $$ J(W,{\bm \zeta})=\sum_{i\in I_1} \sum_{j| \delta_{ij}\leq\rho} w^{2m}_{ij}\delta^2_{ij} + \sum_{i\in I_1} \zeta_i ( \sum_{j| \delta_{ij}\leq\rho} w^2_{ij} -1). $$ By Theorem~\ref{theoKKT}, $W^*, {\bm \zeta}^*$ is a global optimal solution to the above problem if and only if it satisfies \begin{equation} \label{eq:partial2} \frac{\partial J(W,{\bm \zeta})}{\partial w_{ij}}\Big|_{W^*,{\bm \zeta}^*}= 2m(w_{ij}^*)^{2m-1}\delta^2_{ij}+2w^*_{ij}\zeta^*_i=0, \end{equation} for all $i\in I_1$ and $j\in\{1,\ldots, r\}$ and \begin{equation} \label{eq:partial1} \sum_{j| \delta_{ij}\leq\rho} (w^*_{ij})^2 =1,\quad \forall i\in I_1. \end{equation} From Eq. \eqref{eq:partial2} it follows that \begin{equation} \label{eq:wijsquareopt} (w^*_{ij})^2=u_{ij}^*=\left( {-\zeta^*_i}/{m\delta_{ij}^2} \right)^{\frac{1}{(m-1)}}. \end{equation} Summing over all $h$ such that $\delta_{ih}\leq\rho$ and applying Eq. \eqref{eq:partial1} we get \begin{equation} \label{eq:wijsquareopt2} (-\zeta_i^*)^{\frac{1}{(m-1)}}=\frac{1}{\sum_{h| \delta_{ih}\leq\rho}( {1}/{m\delta_{ih}^2} )^{\frac{1}{(m-1)}}}. \end{equation} By plugging Eq.~\eqref{eq:wijsquareopt2} in Eq.~\eqref{eq:wijsquareopt}, it follows that $$ (w^*_{ij})^2= \frac{1}{(m\delta_{ij}^2)^{\frac{1}{(m-1)}}\sum_{h| \delta_{ih}\leq\rho}( \frac{1}{m\delta_{ih}^2} )^{\frac{1}{(m-1)}}}= \frac{1}{\sum_{h| \delta_{ih}\leq\rho}( \frac{\delta_{ij}}{\delta_{ih}} )^{\frac{2}{(m-1)}}}. $$ Therefore, since $u_{ij}^*=(w^*_{ij})^2$, we conclude that Eq. \eqref{calcolamembershipsparse} corresponds to the global optimal solution. \end{proof} \section{Refinement Phase} \label{sec:refinement} Let us now discuss the refinement phase within the proposed algorithm, i.e., a way to find the optimal location of the agents, for fixed associations.
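Before turning to the refinement step, the assignment rule of Eqs.~\eqref{calcolamembership2} and \eqref{calcolamembershipsparse} can be illustrated with a short numerical sketch (Python/NumPy; the function name and the tie-breaking rule for overlapping agents are our own illustrative choices, not prescribed by the paper):

```python
import numpy as np

def assign_memberships(P, X, rho, m=2.0, eps=1e-12):
    """Sparse fuzzy associations: P is the (n, d) array of PoIs, X the (r, d) agents."""
    n, r = P.shape[0], X.shape[0]
    U = np.zeros((n, r))
    # delta[i, j] = distance between the i-th PoI and the j-th agent
    delta = np.linalg.norm(P[:, None, :] - X[None, :, :], axis=2)
    for i in range(n):
        overlap = np.where(delta[i] < eps)[0]
        if overlap.size > 0:
            # An agent sits on the PoI: give it the whole unit membership
            U[i, overlap[0]] = 1.0
            continue
        near = np.where(delta[i] <= rho)[0]  # proximity constraint: truncate beyond rho
        for j in near:
            U[i, j] = 1.0 / np.sum((delta[i, j] / delta[i, near]) ** (2.0 / (m - 1.0)))
    return U
```

By construction, each row of $U$ sums to one whenever some agent lies within distance $\rho$ of the corresponding PoI, and all entries beyond the sensing radius are exactly zero.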
Note that, when the fixed associations are admissible for Problem~\ref{prob:ourproblem}, our problem is equivalent to solving a collection of independent sub-problems (i.e., one for each agent) having the following structure. \begin{problem} \label{prob:ourproblem2} Find ${\bf x}^*_j\in \mathbb{R}^{d}$ that solves \begin{equation} \label{prob:singleagent} \begin{matrix} J({\bf x}^*_j)=\min_{{\bf x}_j\in \mathbb{R}^{d}} \underset{i| \delta_{ij}\leq\rho}{\sum} u_{ij}^m \delta_{ij}^2\\ \mbox{Subject to}\\ \begin{cases} \delta_{ij}^2\leq \rho^2,\quad \forall i \mbox{ s.t. } u_{ij}>0. \end{cases} \end{matrix} \end{equation} \end{problem} We now characterize the optimal solution of Problem~\ref{prob:ourproblem2}. To this end, we first define the set of admissible solutions. \begin{definition}[Admissible Solutions to Problem~\ref{prob:ourproblem2}] \label{def:feasiblelocations} The set of admissible solutions to Problem~\ref{prob:ourproblem2} is \mbox{$ \mathbb{B}^*_j=\underset{i| u_{ij}>0}{\cap} \mathbb{B}_{i,\rho} $}, where $\mathbb{B}_{i,\rho}$ is the ball of radius $\rho$ centered at the location ${\bf p}_i$ of the $i$-th PoI. \end{definition} \begin{remark} \label{rem:isconvex} Problem~\ref{prob:ourproblem2} is convex, since for fixed terms~$u_{ij}$ the objective function is a non-negative linear combination of convex functions, and similarly the constraints are convex functions. \end{remark} We now establish a condition for Problem~\ref{prob:ourproblem2} to satisfy Slater's Condition. \begin{proposition} \label{prop:isslater} Problem~\ref{prob:ourproblem2} satisfies Slater's Condition if and only if $\mathbb{B}^*_j$ is not a singleton. \end{proposition} \begin{proof} If $\mathbb{B}^*_j$ is a singleton, then at least two constraints are satisfied with equality (i.e., at least two balls are tangent), so no point strictly satisfies all the constraints and Slater's Condition fails. Conversely, if $\mathbb{B}^*_j$ contains two distinct points, then, by the strict convexity of the balls, their midpoint lies in the interior of every ball $\mathbb{B}_{i,\rho}$ with $u_{ij}>0$, hence it strictly satisfies all the constraints and Slater's Condition holds. \end{proof} We now provide an optimality condition to move an agent~$j$ when $U$ is fixed.
\begin{theorem} \label{theo:optimalx} Let $U$ be admissible to Problem~\ref{prob:ourproblem}, and suppose $\mathbb{B}^*_j$ is not a singleton. Then \begin{equation} \label{eq:centrilambda} {\bf x}^*_j= {\underset{i|\delta_{ij}\leq\rho}{\sum} (u_{ij}^m+\lambda^*_{i}){\bf p}_i}\,\,/\,{\underset{i|\delta_{ij}\leq\rho}{\sum} (u_{ij}^m+\lambda^*_{i}) } \end{equation} is a global minimizer for Problem~\ref{prob:ourproblem2} if and only if there exists ${\bm \lambda}^*=[\lambda^*_1,\ldots,\lambda^*_n]^T\in \mathbb{R}^n$ such that: (a)~${\bf x}^*_j\in \mathbb{B}^*_j$; (b)~$\lambda^*_i\geq 0$ for all $i$; (c)~$\lambda^*_{i}\left(\|{\bf p}_i-{\bf x}^*_j\|^2-\rho^2\right)=0$ for all $i$. \end{theorem} \begin{proof} As discussed in Remark~\ref{rem:isconvex}, Problem~\ref{prob:ourproblem2} is convex. Moreover, as shown in Proposition~\ref{prop:isslater}, it satisfies Slater's Condition and therefore it also satisfies the restricted Slater's Condition. Therefore, the KKT Theorem (Theorem~\ref{theoKKT}) applies to Problem~\ref{prob:ourproblem2}. Let us consider the Lagrangian function associated to Problem~\ref{prob:ourproblem2}, i.e., \begin{equation} J({\bf x}_j,{\bm \lambda})=\underset{i| \delta_{ij}\leq\rho}{\sum} u_{ij}^m \delta_{ij}^2+\underset{i| \delta_{ij}\leq\rho}{\sum}\lambda_{i}\left(\delta_{ij}^2-\rho^2\right). \end{equation} By Theorem~\ref{theoKKT}, a point ${\bf x}^*_j$ is a global minimizer for Problem~\ref{prob:ourproblem2} if and only if: 1)~$\partial J({\bf x}_j,{\bm \lambda})/ \partial {{\bf x}_j} |_{{\bf x}_j^*, {\bm \lambda}^*}=0$; 2)~$\delta_{ij}\leq \rho$, for all $i\in\{1,\ldots,n\}$ such that $u_{ij}>0$; 3)~$\lambda^*_{i}\geq0$, for all $i\in\{1,\ldots,n\}$; 4)~$\lambda^*_{i}\left(\|{\bf p}_i-{\bf x}^*_j\|^2-\rho^2\right)=0$, for all $i\in\{1,\ldots,n\}$. Clearly, conditions (a)--(c) are equivalent to the above conditions (2)--(4).
To prove Theorem~\ref{theo:optimalx}, therefore, we must show that any solution satisfying condition (1) has the structure of Eq.~\eqref{eq:centrilambda}; to establish this, we show that, for any ${\bm \lambda}$, $\partial J({\bf x}_j,{\bm \lambda})/ \partial {{\bf x}_j}$ vanishes at ${\bf x}^*_j$ along any arbitrary direction ${\bf y}\in \mathbb{R}^d$ with $\|{\bf y}\|\neq0$. Let $t\in \mathbb{R}$, let ${\bf x}^*_j$ be the optimal solution for the problem in Eq. \eqref{prob:singleagent}, and let us define $J(t)=J({\bf x}^*_j+t{\bf y}, {\bm \lambda})$. It holds \mbox{$ J(t)=\underset{i| \delta_{ij}\leq\rho}{\sum} (u_{ij}^m+\lambda_{i}) \|{\bf p}_i-({\bf x}^*_j+t{\bf y})\|^2-\rho^2\underset{i| \delta_{ij}\leq\rho}{\sum} \lambda_{i} $}. We can rearrange $J(t)$ as \begin{equation*} J(t)=\underset{i| \delta_{ij}\leq\rho}{\sum} (u_{ij}^m+\lambda_{i})({\bf p}_i-{\bf x}^*_j-t{\bf y})^T({\bf p}_i-{\bf x}^*_j-t{\bf y})-\rho^2\underset{i| \delta_{ij}\leq\rho}{\sum} \lambda_{i}, \end{equation*} so that $ d J(t) / d t=-2\underset{i| \delta_{ij}\leq\rho}{\sum} (u_{ij}^m+\lambda_{i}){\bf y}^T ({\bf p}_i-{\bf x}^*_j-t{\bf y}) $ and the KKT condition~(1) is satisfied if \begin{equation} \label{eq:dvJ} \begin{aligned} \frac{d J(t)}{d t}\Big|_{t=0} &=-2\underset{i| \delta_{ij}\leq\rho}{\sum} (u_{ij}^m+\lambda_{i}) {\bf y}^T ({\bf p}_i-{\bf x}^*_j)=\\ &=-2 {\bf y}^T \underset{i| \delta_{ij}\leq\rho}{\sum} (u_{ij}^m+\lambda_{i})({\bf p}_i-{\bf x}^*_j)=0. \end{aligned} \end{equation} For arbitrary nonzero ${\bf y}$, Eq.~\eqref{eq:dvJ} holds if and only if $$\underset{i| \delta_{ij}\leq\rho}{\sum} (u_{ij}^m+\lambda_{i})({\bf p}_i-{\bf x}^*_j)=0,$$ which, for ${\bm \lambda}={\bm \lambda}^*$, is equivalent to Eq. \eqref{eq:centrilambda}. This concludes the proof. \end{proof} We now present some technical results that are fundamental in order to construct a global minimizer for Problem~\ref{prob:ourproblem2}.
\begin{lemma} \label{lem:projectioninhull} Given a nonempty, non-singleton set $\mathbb{B}^*$, obtained as the intersection of a collection of $n$ balls $\{\mathbb{B}_{1,\rho},\ldots,\mathbb{B}_{n,\rho}\}$, each centered at ${\bf v}_i \in \mathbb{R}^d$ and with radius $\rho$, i.e., \mbox{$ \mathbb{B}^*=\cap_{i=1}^n \mathbb{B}_{i,\rho} $}, for every point ${\bf v}\in \mathbb{R}^d$ there exist non-negative $\mu_1,\ldots, \mu_n$ and a positive $\overline \mu$ with \mbox{$\overline \mu+\sum_{i=1}^n \mu_i =1$}, such that the projection of ${\bf v}$ onto $\mathbb{B}^*$ is given by \mbox{$ \project{\mathbb{B}^*}{{\bf v}}= \overline \mu {\bf v} + \sum_{i=1}^n \mu_i {\bf v}_i $}. \end{lemma} \begin{proof} To prove our statement, we show that $\project{\mathbb{B}^*}{{\bf v}}$ is the solution of a convex optimization problem, which can be solved by resorting to the KKT Theorem. Let us recall that the projection $\project{\mathbb{B}^*}{{\bf v}}$ is the point of minimum distance from ${\bf v}$ among those in $\mathbb{B}^*$. In other words, $\project{\mathbb{B}^*}{{\bf v}}$ is the solution of the following problem $$ \begin{matrix} \project{\mathbb{B}^*}{{\bf v}}=\min_{{\bf z}\in \mathbb{R}^d} \| {\bf z}-{\bf v}\|^2\\ \mbox{Subject to}\\ \begin{cases} \begin{matrix} \| {\bf z}-{\bf v}_i\|^2\leq \rho^2,& \forall i=1,\ldots,n. \end{matrix} \end{cases} \end{matrix} $$ The above problem is convex, since $\| {\bf z}-{\bf v}\|^2$ and $\| {\bf z}-{\bf v}_i\|^2$ are convex functions. Moreover, since we assumed $\mathbb{B}^*$ is neither empty nor a singleton, the above problem satisfies Slater's Condition, so the KKT Theorem~\ref{theoKKT} applies. Let $\gamma_i$ be the Lagrangian multiplier associated to the \mbox{$i$-th} constraint, and let ${\bm \gamma}$ be the stack vector of all $\gamma_i$.
The Lagrangian function $L({\bf z},{\bm \gamma})$ for the above problem is \mbox{$ L({\bf z},{\bm \gamma})= \| {\bf z}-{\bf v}\|^2 + \sum_{i=1}^n \gamma_i (\| {\bf z}-{\bf v}_i\|^2- \rho^2) $}; hence, \mbox{${\partial L({\bf z},{\bm \gamma})}/{\partial {\bf z}}|_{{\bf z}^*, {\bm \gamma}^*}= 2{\bf z}^*-2{\bf v} + 2\sum_{i=1}^n \gamma^*_i ({\bf z}^*-{\bf v}_i)=0$}, which implies that the optimal solution has the form \begin{equation} \label{eq:gamma} {\bf z}^*=\frac{{\bf v}+\sum_{i=1}^n \gamma^*_i {\bf v}_i }{1+\sum_{i=1}^n \gamma^*_i}, \end{equation} where the Lagrangian multipliers $\gamma^*_i$ must satisfy $\gamma^*_i\geq 0$ for all $i=1,\ldots, n$ and $\gamma^*_i (\| {\bf z}^*-{\bf v}_i\|^2- \rho^2)=0$ for all $i=1,\ldots, n$. Under the above constraints on $\gamma^*_i$, our statement is verified for \mbox{$\overline \mu = {1}/{(1+\sum_{h=1}^n \gamma^*_h)}>0$} and \mbox{$\mu_i = {\gamma^*_i}/{(1+\sum_{h=1}^n \gamma^*_h)}\geq 0$}, for all $i=1,\ldots, n$. This completes our proof. \end{proof} \begin{corollary} \label{coroll:good} If ${\bf v}\not \in \mathbb{B}^*$ then $\project{\mathbb{B}^*}{{\bf v}}$ lies on the intersection of the boundaries of the balls corresponding to positive coefficients $\gamma_i$. \end{corollary} \begin{proof} From Lemma~\ref{lem:projectioninhull} the coefficients $\gamma_i$ are non-negative and must satisfy \mbox{$ \gamma_i (\| {\bf z}^*-{\bf v}_i\|^2- \rho^2)=0$}, for all \mbox{$i=1,\ldots, n$}. Hence, $\project{\mathbb{B}^*}{{\bf v}}$ must belong to the intersection of the boundaries of the balls associated to positive $\gamma_i$. The proof is completed by noting that, when \mbox{${\bf v}\not \in \mathbb{B}^*$}, Eq.~\eqref{eq:gamma} implies that there must be at least one positive $\gamma_i$. \end{proof} We now provide a technical result which will be used later in the main theorem of this section.
\begin{lemma} \label{lem:sherman} Let $V=\{{\bf v}_1,\ldots, {\bf v}_n\}\subset \mathbb{R}^d$ with $n>1$ and consider given $\overline \mu>0$ and $\mu_i \in (0,1)$ for all $i\in\{1,\ldots,n\}$ such that \mbox{$\overline \mu+\sum_{i=1}^n \mu_i=1$}. For any choice of $\alpha>0$ there is a choice of $\lambda_i$ for all $i\in\{1,\ldots,n\}$ satisfying \begin{equation} \label{eq:choiceoflambda1} \mu_i={\lambda_i}/{(\alpha + \sum_{h=1}^n \lambda_h)},\quad \lambda_i\geq 0. \end{equation} \end{lemma} \begin{proof} For the sake of the proof, it is convenient to rearrange Eq.~\eqref{eq:choiceoflambda1} as \mbox{$\lambda_i=\mu_i(\alpha + \sum_{h=1}^n \lambda_h)$}, $\lambda_i\geq 0$. Stacking for all $\lambda_i$ we get \begin{equation} \label{eq:choiceoflambda2} {\bm \lambda}=\alpha {\bm \mu}+ {\bm \mu} {\bf 1}^T {\bm \lambda}, \end{equation} where ${\bm \lambda}$ and ${\bm \mu}$ are the stack vectors collecting all $\lambda_i$ and $\mu_i$, respectively. Since by assumption $\overline \mu>0$, it holds \mbox{$1-{\bf 1}^T{\bm \mu}=\overline \mu>0$}; hence, from the Sherman-Morrison Theorem \cite{sherman1950adjustment} it follows that the matrix \mbox{$(I-{\bm \mu}{\bf 1}^T)^{-1}$} exists and has the following structure \begin{equation} \label{eq:intermediatesherman2a} (I-{\bm \mu}{\bf 1}^T)^{-1} = I+{{\bm \mu}{\bf 1}^T}/{\overline \mu}, \end{equation} where we notice that all entries are non-negative by construction. At this point, by plugging Eq.~\eqref{eq:intermediatesherman2a} in Eq.~\eqref{eq:choiceoflambda2} we obtain \mbox{${\bm \lambda}=(I-{\bm \mu}{\bf 1}^T)^{-1}\alpha{\bm \mu}=(I+{{\bm \mu}{\bf 1}^T}/{\overline \mu})\alpha{\bm \mu}$}. Since $\alpha>0$, $\mu_i\geq 0$ and the matrix in Eq.~\eqref{eq:intermediatesherman2a} has non-negative entries, we conclude that ${\bm \lambda}\geq0$. \end{proof} We now provide a constructive method to obtain a solution to Problem~\ref{prob:ourproblem2}.
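Before doing so, the closed-form choice of the multipliers in Lemma~\ref{lem:sherman} can be checked numerically; the following short Python snippet uses arbitrary test values (our own) for ${\bm \mu}$ and $\alpha$:

```python
import numpy as np

# Arbitrary test data: mu_i in (0,1) with bar_mu = 1 - sum(mu) > 0, and alpha > 0
mu = np.array([0.2, 0.3, 0.1])
bar_mu = 1.0 - mu.sum()            # = 0.4 > 0
alpha = 2.0

# lambda = (I - mu 1^T)^{-1} alpha mu = (I + mu 1^T / bar_mu) alpha mu  (Sherman--Morrison)
lam = alpha * (mu + mu * mu.sum() / bar_mu)

# The defining relation of the lemma: mu_i = lambda_i / (alpha + sum_h lambda_h)
recovered = lam / (alpha + lam.sum())
```

Here `recovered` coincides with `mu` entrywise and `lam` is non-negative, as the lemma guarantees.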
\begin{theorem} \label{prop:propositionrefinement} Let $U$ be admissible to Problem~\ref{prob:ourproblem} and assume that ${\bf x} \in \mathbb{R}^{rd}$ satisfies Assumption c). Then \begin{equation} \label{fuzzycenterssparsefinal} {\bf x}^*_j=\projectbig{\mathbb{B}^*_j}{{\underset{i|\delta_{ij}\leq\rho}{\sum} u^m_{ij} {\bf p}_i}\,/{\underset{i|\delta_{ij}\leq\rho}{\sum} u^m_{ij} }} \end{equation} is a global minimizer for Problem~\ref{prob:ourproblem2}. \end{theorem} \begin{proof} Let us define the set $I_{j}=\{i\in\{1,\ldots,n\}\,|\, u_{ij}>0\}$ and let \begin{equation} \label{eq:hatxhatudef} \hat {\bf x}_j= {\sum_{i=1}^n u_{ij}^m {\bf p}_i }\,/\,{\sum_{i=1}^n u_{ij}^m }, \quad \hat u=\sum_{i=1}^n u_{ij}^m. \end{equation} We first handle two special cases. Note that, if $\hat {\bf x}_j \in \mathbb{B}^*_j$ then $\project{\mathbb{B}^*_j}{\hat {\bf x}_j}=\hat {\bf x}_j$ and $\hat {\bf x}_j$ itself satisfies Theorem~\ref{theo:optimalx} with all $\lambda^*_i=0$. Moreover, if $\mathbb{B}^*_j$ is a singleton, since ${\bf x}$ satisfies Assumption c) it must hold $\mathbb{B}^*_j=\{{\bf x}_j\}$. In this case, Problem~\ref{prob:ourproblem2} fails to satisfy Slater's Condition and Theorem \ref{theo:optimalx} does not apply; however, Theorem \ref{theo:optimalx} is no longer required if $\mathbb{B}^*_j=\{{\bf x}_j\}$, as $\project{\mathbb{B}^*_j}{\hat {\bf x}_j}={\bf x}_j$ by construction. In other words, in this case agent~$j$ does not move. We now focus on the case $\hat {\bf x}_j \not\in \mathbb{B}^*_j$ and $\mathbb{B}^*_j\neq \{{\bf x}_j\}$, that is, $\hat {\bf x}_j$ does not belong to $\mathbb{B}^*_j$ and $\mathbb{B}^*_j$ is not a singleton. In this setting, our goal is to show that $\project{\mathbb{B}^*_j}{\hat {\bf x}_j}$ is a solution having the form of Eq.~\eqref{eq:centrilambda}, for which a closed form of the Lagrangian multipliers $\lambda^*_i$ that satisfies conditions (a)--(c) in Theorem~\ref{theo:optimalx} is given.
If $\hat {\bf x}_j\not\in \mathbb{B}^*_j$, then $\project{\mathbb{B}^*_j}{\hat {\bf x}_j}\in \mathbb{B}^*_j$ lies on the boundary of $\mathbb{B}^*_j$, hence Condition (a) in Theorem~\ref{theo:optimalx} is satisfied. By Lemma~\ref{lem:projectioninhull} and Corollary~\ref{coroll:good}, there is an $I^*_j\subseteq I_j$ such that \begin{equation} \label{eq:bcomb1} \project{\mathbb{B}^*_j}{\hat {\bf x}_j}= \hat \mu \hat {\bf x}_j + \sum_{i\in I^*_j} \mu_i {\bf p}_i, \end{equation} with $\hat \mu\in(0,1)$, $\mu_i\in(0,1)$ for all $i\in I^*_j$ and \mbox{$\hat \mu+ \sum_{i\in I^*_j} \mu_i=1$.} In other words, the projection is a convex combination of $\hat {\bf x}_j$ and the locations of the PoIs ${\bf p}_i$ for $i\in I^*_j$, where the coefficient $\hat \mu$ for $\hat {\bf x}_j$ is strictly positive by construction. Let ${\bm \lambda}^*$ and ${\bm \mu}$ be the stack vectors of the Lagrangian multipliers $\lambda^*_i$ and the coefficients $\mu_i$ for all $i\in I^*_j$, respectively. By Lemma~\ref{lem:sherman}, choosing \mbox{${\bm \lambda}^*=\hat u(I+{{\bm \mu}{\bf 1}^T}/{\hat \mu}){\bm \mu}\geq0$} implies \begin{equation} \label{eq:choiceoflambda} \mu_i ={\lambda^*_i}/{(\hat u + \sum_{h\in I^*_j } \lambda^*_h)}, \quad \forall i\in I^*_j, \end{equation} and therefore \begin{equation} \label{eq:choiceoflambdaa} \hat \mu={\hat u}/{(\hat u + \sum_{h\in I^*_j } \lambda^*_h)}. \end{equation} Plugging Eq.~\eqref{eq:choiceoflambda} and Eq.~\eqref{eq:choiceoflambdaa} in Eq.~\eqref{eq:bcomb1}, and choosing $\lambda^*_i=0$ for all $i\in \{1,\ldots,n\}\setminus I^*_j$ we get \mbox{$ \project{\mathbb{B}^*_j}{\hat {\bf x}_j}= \frac{\hat u}{\hat u + \sum_{h\in I^*_j } \lambda^*_h} \hat {\bf x}_j + \sum_{i\in I^*_j} \frac{\lambda^*_i}{\hat u + \sum_{h\in I^*_j } \lambda^*_h} {\bf p}_i $}, which has the same structure as ${\bf x}^*_j$ in Eq.~\eqref{eq:centrilambda}. Note that all $\lambda^*_i\geq 0$, hence condition (b) in Theorem~\ref{theo:optimalx} is satisfied.
Moreover, as discussed above, Corollary~\ref{coroll:good} guarantees that $ \|\project{\mathbb{B}^*_j}{\hat {\bf x}_j}-{\bf p}_i\|=\rho$ for all $i\in I^*_j $, and since $\lambda^*_i=0$ for $i\not\in I^*_j$ also condition (c) in Theorem~\ref{theo:optimalx} is satisfied. This completes the proof. \end{proof} \section{Proposed Algorithm and Simulations} \label{sec:algorithm} We now discuss our distributed algorithm, based on the repeated, alternating solution of the two sub-problems addressed in the previous sections. Specifically, each agent alternates between the assignment phase, where it computes the associations with the sensed PoIs based on its current location, and the refinement phase, where it selects a new location based on the sensed PoIs and the associations. We point out that, knowing the associations, the refinement phase is executed locally by each agent, with no need for coordination among neighboring agents. Conversely, communication among neighboring agents is required during the assignment phase, to collect all the distances $\|{\bf p}_i-{\bf x}_h(k)\|$ involving the $i$-th PoI and the agents able to sense it. Note that communication is ensured by Assumption b). This allows each agent to compute the associations via Eqs.~\eqref{calcolamembership2} and~\eqref{calcolamembershipsparse}. Note also that the algorithm execution requires an implicit coordination among the agents, which can be achieved by resorting to well-known protocols such as consensus algorithms \cite{Olfati1}. Details are omitted for the sake of brevity. Finally, a stopping criterion is required: a simple choice is to let each agent stop when its new location is closer than $\epsilon$ to the current one. Let us now numerically demonstrate the effectiveness of the proposed algorithm. In Figure \ref{fig:example1}, we show an example where PoIs and agents are embedded in the unit square $[0,1]^2\subset \mathbb{R}^2$.
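The alternating scheme used in these simulations can be prototyped as follows (Python/NumPy; a minimal, centralized sketch of our own: the assignment phase implements Eqs.~\eqref{calcolamembership2} and \eqref{calcolamembershipsparse}, while the projection onto $\mathbb{B}^*_j$ required by Theorem~\ref{prop:propositionrefinement} is computed with Dykstra's alternating-projection method, a standard way to project onto an intersection of convex balls):

```python
import numpy as np

def memberships(P, X, rho, m=2.0, eps=1e-12):
    """Assignment phase: sparse fuzzy associations between PoIs P and agents X."""
    delta = np.linalg.norm(P[:, None, :] - X[None, :, :], axis=2)
    U = np.zeros_like(delta)
    for i in range(P.shape[0]):
        hit = np.where(delta[i] < eps)[0]
        if hit.size:
            U[i, hit[0]] = 1.0          # agent on top of the PoI
            continue
        near = delta[i] <= rho          # proximity truncation
        for j in np.where(near)[0]:
            U[i, j] = 1.0 / np.sum((delta[i, j] / delta[i, near]) ** (2.0 / (m - 1.0)))
    return U

def project_intersection(x, centers, rho, iters=200):
    """Dykstra's algorithm: projection of x onto the intersection of balls B(c, rho)."""
    y = x.copy()
    incr = np.zeros((len(centers), x.size))
    for _ in range(iters):
        for k, c in enumerate(centers):
            z = y + incr[k]
            d = z - c
            nrm = np.linalg.norm(d)
            proj = c + d * (rho / nrm) if nrm > rho else z  # projection onto one ball
            incr[k] = z - proj
            y = proj
    return y

def coverage(P, X, rho, m=2.0, iters=50):
    """Alternation of the assignment and refinement phases."""
    X = X.copy()
    for _ in range(iters):
        U = memberships(P, X, rho, m)
        for j in range(X.shape[0]):
            w = U[:, j] ** m
            if w.sum() == 0.0:
                continue                # agent senses no PoI: stay put
            centroid = (w[:, None] * P).sum(axis=0) / w.sum()
            # refinement: project the weighted centroid onto B*_j
            X[j] = project_intersection(centroid, P[U[:, j] > 0], rho)
    return X, memberships(P, X, rho, m)
```

Each refinement step projects the membership-weighted centroid of the sensed PoIs onto the intersection of the corresponding balls, exactly as prescribed by Eq.~\eqref{fuzzycenterssparsefinal}.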
Specifically, we consider $n=140$ PoIs (circles) and $r=4$ agents (triangles). The figure compares the result of the proposed algorithm for $\rho=0.35$ with the result of the standard C-means. In Figures \ref{fig:example1:1} and \ref{fig:example1:2}, we assign the same colors to those PoIs having their largest association with the same agent. Moreover, we report the initial and final positions of the agents within the proposed algorithm in white and blue, respectively, and, for comparison, we report in red the final positions found via standard C-means. According to the plots, in spite of sparsity, quite similar results are achieved. The plots in Figures \ref{fig:example1:3} and \ref{fig:example1:4} show the associations obtained for one cluster within the proposed algorithm (for $\rho=0.35$) and within the standard C-means. This gives an intuitive explanation of the effect of the proximity constraints on the associations, i.e., associations with a very low degree are truncated in the proposed algorithm. Figure \ref{fig:example2} reports the value of the objective function with respect to the iterations for the solution found by the \mbox{C-means} and by the proposed algorithm for different values of~$\rho$. Overall, the figure numerically demonstrates that the behavior of the proposed algorithm tends to coincide with that of the standard C-means as $\rho$ grows, i.e., as $\rho$ increases, the agents become more and more aware of the PoIs, until the proximity constraints are no longer active. Notably, this numerical simulation suggests that our algorithm can be thought of as a generalization of the standard C-means. \section{Conclusions} \label{sec:conclusions} In this paper we provide a novel distributed coverage algorithm, inspired by the standard C-means algorithm, for a set of agents aiming at covering in a non-exclusive way a discrete and finite set of points of interest.
Several directions are envisioned for future work: (1)~consider an asynchronous setting and moving PoIs; (2)~introduce the possibility for an agent to leave some of the PoIs uncovered if they are already covered by other agents; (3)~provide a formal characterization of the convergence properties of the overall proposed algorithm. \begin{figure}[ht!] \centering \subfloat[]{\label{fig:example1:1} \includegraphics[width=0.21\textwidth]{./fig1.pdf}} \hspace{1mm} \subfloat[]{\label{fig:example1:2} \includegraphics[width=0.21\textwidth]{./fig6.pdf}} \vspace{1.5mm} \subfloat[]{\label{fig:example1:3} \includegraphics[width=0.21\textwidth]{./assoc1bis.pdf}} \hspace{1mm} \subfloat[]{\label{fig:example1:4} \includegraphics[width=0.21\textwidth]{./assoc2bis.pdf}} \caption{Plots \ref{fig:example1:1} and \ref{fig:example1:2}: comparison of the proposed algorithm for $\rho=0.35$ (plot \ref{fig:example1:1}) with the standard C-means (plot \ref{fig:example1:2}), considering a particular instance. Plots \ref{fig:example1:3} and \ref{fig:example1:4}: comparison of the associations between a particular agent and all the PoIs within the proposed algorithm when $\rho=0.35$ (plot \ref{fig:example1:3}) and within the standard C-means (plot \ref{fig:example1:4}). The color of the PoI tends to red or blue as the associations approach one or zero, respectively. Black crosses indicate associations that are exactly zero.} \label{fig:example1} \end{figure} \begin{figure} \centering \includegraphics[width=.45\textwidth]{./OBJECTIVEFUNCTION.pdf} \caption{Comparison of the value of the objective function at each step of the proposed algorithm, for different choices of $\rho$, with the one obtained by the standard C-means, considering the example of Figure \ref{fig:example1}.} \label{fig:example2} \end{figure} \bibliographystyle{IEEEtran}
\section{Introduction} For several decades, the study of black hole (BH) thermodynamics has provided crucial information about the underlying structure of the spacetime. In particular, the fact that the BH entropy is proportional to the BH surface area in Planck units seems to tell us something important regarding the microstructure of spacetime. Recently, a new development in BH thermodynamics based on scaling arguments and the Smarr relation \cite{Smarr:1972kt} was suggested \cite{Kastor:2009wy} and led to the proposal that the mass of a BH in an asymptotically anti de Sitter (AdS) background should be interpreted as the enthalpy $H$ of the spacetime instead of the internal energy $U$. This total \textit{gravitational enthalpy} of the system $H=U+PV$ takes into account the possibility of representing the cosmological constant $\Lambda$ as a pressure $P = - \Lambda/(8 \pi G_N ) $ and of defining the thermodynamically conjugate variable to be the thermodynamic volume. Consequently, this proposal suggests including in the first law of BH thermodynamics the variation of physical ``constants'' \cite{Kastor:2009wy,PhysRevD.52.4569,Cvetic:2010jb} \begin{equation} dM = TdS + VdP \end{equation} where the thermodynamic BH volume is, by definition, $ V= \left.\frac{\partial H}{\partial P} \right|_{S}\,,$ and can be different from the geometrical volume. The \textit{extended thermodynamic phase space} assumed by this proposal is well motivated for at least three reasons: \begin{enumerate} \item In this extended phase space both the Smarr relation and the first law of thermodynamics hold, while in the conventional phase space only the first law is satisfied for nonzero $\Lambda$. \item The use of an extended thermodynamic phase space is compatible with considering more fundamental theories of physics that admit the variation of physical constants.
\item Comparing the physics of BHs with real-world thermodynamic systems becomes a much more reasonable possibility \cite{Kubiznak:2014zwa}. \end{enumerate} This proposal has been shown to provide a rich variety of thermodynamic behaviors for both AdS and dS black holes \cite{Kubiznak:2012wp, Kubiznak:2015bya}. For example, by introducing the pressure, it can be shown \cite{Kubiznak:2012wp} that charged BHs behave as Van der Waals fluids\footnote{Van der Waals behavior is also present in the case of a regular BH in \cite{Frassino:2015hpa}.}. It has been found that there can be triple points (e.g., in Kerr-AdS black holes \cite{Altamirano:2013uqa, Altamirano:2014tva}), where the small, medium, and large BH phases coalesce into a single phase at a particular critical value of pressure and temperature. Also, reentrant phase transitions \cite{Altamirano:2013ane} can occur, in which the system passes from large BHs to small ones and then back to large again as the temperature monotonically increases. In addition, this proposal has been the subject of attempts to formulate an extended AdS/CFT dictionary (see \cite{CJ, BPD, KRT, KR} and, for an extended review, \cite{KMT}). \section{Lovelock Gravity} Lovelock theory is a particular massless metric theory of gravity in arbitrary dimensions that, in four dimensions, becomes identical to general relativity with cosmological constant $\Lambda$. One of the main features of this model is that, although the action functional of the theory can be an arbitrary higher-order polynomial in the Riemann tensor, it leads to equations of motion that contain only second-order derivatives of the metric tensor. To define the generic Lovelock densities one can use the language of differential forms or the generalized Kronecker delta symbols, where $\delta_{B_{1} \ldots B_{n}}^{A_{1}\ldots A_{n}}=n! \delta_{\left[B_{1}\ldots B_{n}\right]}^{A_{1}\ldots A_{n}}$ is antisymmetric in any pair of upper and lower indices.
In fact, in $D$ dimensions one has that $\delta_{B_{1} \ldots B_{D}}^{A_{1}\ldots A_{D}}=\epsilon_{B_{1}\ldots B_{D}}\epsilon^{A_{1}\ldots A_{D}}$ in terms of the Levi-Civita symbol. Using this definition, the Lovelock densities can be written as the complete contraction of the above generalized delta symbol with the Riemann curvature tensor: \begin{equation} \mathcal{L}_{n}=\frac{1}{2^{2n}}\,\delta_{B_{1}\ldots B_{2n}}^{A_{1}\ldots A_{2n}}\,R_{A_{1}A_{2}}^{\quad B_{1}B_{2}}\,\ldots\,R_{A_{2n-1}A_{2n}}^{\quad \: B_{2n-1}B_{2n}}. \label{eq:LLd} \end{equation} Using Eq.~\eqref{eq:LLd}, one can check that the lowest order term $\mathcal{L}_{0}$ corresponds to a cosmological constant, while $\mathcal{L}_{1}$ is the Einstein-Hilbert term \begin{eqnarray} \mathcal{L}_{1} &=& \frac{1}{4}\left(2! \: \delta^{A_1 A_2}_{\left[ B_1 B_2 \right]} R^{\quad \: \: B_1 B_2}_{A_1 A_2}\right) = \frac{1}{2} R \label{L1} \end{eqnarray} and $\mathcal{L}_{2}$ is the Gauss-Bonnet combination: \begin{eqnarray} \mathcal{L}_{2} &=& \frac{1}{2^{4}}\left(4! \: \delta^{A_1 A_2 A_3 A_4}_{\left[ B_1 B_2 B_3 B_4 \right]} R^{\quad \: \: B_1 B_2}_{A_1 A_2} R^{\quad \: \: B_3 B_4}_{A_3 A_4}\right)\,. \end{eqnarray} Due to the antisymmetry of the generalized delta symbol, the Lovelock densities vanish identically for $2n>D$, whereas when $2n = D$ (the \textit{critical dimension} for a given Lovelock term), the Lovelock density is topological: it is a total derivative that makes no contribution to the field equations and whose value depends on the topology of the spacetime manifold.
The simplest example is the Einstein-Hilbert term in two dimensions: the Ricci scalar is a total derivative and the Einstein tensor is identically zero\footnote{The Gauss-Bonnet theorem relates the topological ``Euler number'' $\chi$ of a two-dimensional surface $\mathcal{M}_2$, defined as $ \chi \left[\mathcal{M}_2 \right]=2-2h$ where $h$ is the number of topological handles, to a differentiable geometric quantity, the scalar curvature, in this way:\begin{equation} \chi \left[\mathcal{M}_2 \right]=\frac{1}{4 \pi} \intop_{\mathcal{M}} R. \end{equation} Chern generalized the theorem to higher dimensions, finding the relevant higher-order curvature scalars. Thus the Gauss-Bonnet density, for example, is a topological invariant in four dimensions whose integral is the generalized Euler or Chern topological number.}. Thus, the Lovelock densities are a generalization of the Chern scalar densities, namely densities whose variation leads to a second-order equation of motion. Any higher-order derivatives present in the variation of Lovelock densities end up as total divergences and thus do not contribute to the field equations. For example, the six-dimensional Euler density in seven or eight dimensions (used in Figs.~\ref{Fig:Lvq7a} and \ref{Fig:Lvq7b}) will be a Lovelock density of third power in the curvature tensor (we refer to it as 3rd-order Lovelock gravity). The generic Lovelock Lagrangian is given by \begin{equation} \label{eq:Lagrangian} L=\sum_{n=0}^{k_{max}} \hat{\alpha}_{\left(n\right)} \mathcal{L}_{n} \end{equation} where $k_{max}$ is the integer part of $\left( D-1 \right)/2$ and the $\alpha$'s are the Lovelock coupling constants. In the \emph{extended thermodynamic phase space}, the Lovelock coupling constants (including the cosmological constant $\hat{\alpha}_{\left(0\right)}$) can be considered as thermodynamic variables and can vary in the first law of BH thermodynamics.
The physical meaning of these variables and their conjugates, apart from the cosmological constant and its associated volume, remains to be explored.\footnote{A similar situation was seen in Born--Infeld electrodynamics, in which the thermodynamic quantity conjugate to the Born--Infeld coupling constant was interpreted as vacuum polarization \cite{Gunasekaran:2012dq}.} \subsection{Thermodynamic Considerations}\label{sec:LCBH} Let us now consider a Lovelock BH charged under a Maxwell field, $F=dA$, with the action given by (cf. Eq.~\eqref{eq:Lagrangian}) \begin{equation} I=\frac{1}{16\pi G_{N}}\int d^{d}x\sqrt{-g}\Bigl({\sum_{k=0}^{k_{max}}}\hat{\alpha}_{\left(k\right)}\mathcal{L}_{k}-4\pi G_{N}F_{ab}F^{ab}\Bigr)\,,\label{eq:Loveaction} \end{equation} and the corresponding equations of motion \begin{equation} \sum_{k=0}^{k_{max}}\hat{\alpha}_{\left(k\right)}\mathcal{G}_{ab}^{\left(k\right)}=8\pi G_{N}\Bigl(F_{ac}{F_{b}{}^{\,c}}-\frac{1}{4}g_{ab}F_{cd}F^{cd}\Bigr)\,\label{eq:Graveq} \end{equation} where $\mathcal{G}_{ab}^{\left(k\right)}$ is the generalized Einstein tensor. The Hamiltonian formalism admits a derivation of the expression for the gravitational entropy and the corresponding first law of BH thermodynamics \cite{Jacobson:1993xs}. In \cite{Kastor:2010gq}, both the first law and the associated Smarr formula in an extended phase space were obtained by exploiting the Killing potential formalism.
For a Lovelock BH, characterized by the mass $M$, the charge $Q$, the temperature $T$ and the entropy $S$, the extended first law and the associated Smarr relation read \cite{Jacobson:1993xs,Kastor:2010gq} \begin{eqnarray} \delta M & = & T\delta S-\frac{1}{16\pi G_{N}}\sum_{k}\hat{\Psi}^{\left(k\right)}\delta\hat{\alpha}_{\left(k\right)}+\Phi\delta Q\,,\label{first}\\ \left(d-3\right)M & = & \left(d-2\right)TS+\sum_{k}2\left(k-1\right)\frac{\hat{\Psi}^{\left(k\right)}\hat{\alpha}_{\left(k\right)}}{16\pi G_{N}}+\left(d-3\right)\Phi Q\,.\label{Smarr} \end{eqnarray} The potentials $\hat{\Psi}^{\left(k\right)}$ are the thermodynamic conjugates to the $\hat{\alpha}_{(k)}$'s and are non-trivial functions of the ``bare'' cosmological constant $\Lambda=-\hat{\alpha}_{0}/2$ and of the higher-order Lovelock couplings \cite{Frassino:2014pha}. In general, in Lovelock gravity, the BH entropy is no longer given by one-quarter of the horizon area, but rather reads \begin{equation} S=\frac{1}{4G_{N}}\sum_{k}\hat{\alpha}_{k}{\cal A}^{(k)}\,,\quad{\cal A}^{(k)}=k\int_{\mathcal{H}}\sqrt{\sigma}{\mathcal{L}}_{k-1}\, \label{S} \end{equation} where $\sigma$ denotes the determinant of $\sigma_{ab}$, the induced metric on the BH horizon ${\mathcal{H}}$, and the Lovelock terms ${\mathcal{L}}_{k-1}$ are evaluated on that surface. A curious feature is that the Lovelock black brane entropy (and also the other thermodynamic expressions when considered under an appropriate rescaling) does not depend on the Lovelock coupling constants $\hat{\alpha}_{k}$ for $k \geq 2$ (see \cite{Cadoni:2016hhd}).
In what follows, we identify the (negative) cosmological constant $\Lambda=-\hat{\alpha}_{0}/2$ with the thermodynamic pressure, and the conjugate quantity $\hat{\Psi}^{\left(0\right)}$ with the thermodynamic volume $V$, in the following way \begin{eqnarray} P &=&-\frac{\Lambda}{8\pi G_{N}}=\frac{\hat{\alpha}_{0}}{16\pi G_{N}}\,,\label{P}\\ V &=&-\hat{\Psi}^{(0)}=\frac{16\pi G_{N}\Psi^{(0)}}{(d-1)(d-2)}=\frac{\Sigma_{d-2}^{(\kappa)}r_{+}^{d-1}}{d-1}\,,\label{V} \end{eqnarray} where $\Sigma_{d-2}^{(\kappa)}$ denotes the finite volume of the $(d-2)$-dimensional compact space at constant $(r,t)$, which has constant curvature $(d-2)(d-3)\kappa$; the values $\kappa = 0, +1, -1$ correspond to flat (brane), spherical and hyperbolic black hole horizon geometries, respectively. This identification allows interpreting the mass $M$ of the BH as an enthalpy rather than the internal energy of the system. Using Eq. \eqref{P} and the thermodynamic volume given by Eq. \eqref{V} in the definition of the Hawking temperature, one can obtain the Lovelock ``fluid equation of state''\footnote{In terms of the rescaled Lovelock coupling constants \begin{equation} \alpha_{0}=\frac{\hat{\alpha}_{(0)}}{\left(d-1\right)\left(d-2\right)}\,,\quad{\alpha}_{1}={\hat{\alpha}}_{(1)}\,,\quad\alpha_{k}=\hat{\alpha}_{(k)}\prod_{n=3}^{2k}\left(d-n\right){\quad\mbox{for}\quad k\geq2}\,. \end{equation}} \begin{eqnarray}\label{eq:eqofstateL} P & = & P(V,T,Q,\alpha_{1},\dots,\alpha_{k_{max}})= \nonumber \\ & = & \frac{d-2}{16\pi G_{N}}\sum_{k=1}^{k_{max}}\frac{\alpha_{k}}{r_{+}^{2}}\Bigl(\frac{\kappa}{r_{+}^{2}}\Bigr)^{k-1}\Bigl[4\pi kr_{+}T-\kappa(d-2k-1)\Bigr]+\frac{Q^{2}}{2\alpha_{1}r_{+}^{2(d-2)}}\,, \label{state}\end{eqnarray} and study the possible phase transitions based on the behavior of the Gibbs free energy in the canonical ensemble $G=M-TS=G(P,T,Q,\alpha_{1},\dots,\alpha_{k_{max}})\,$.
The equilibrium thermodynamic state corresponds to the global minimum of this quantity for fixed parameters $P,T,Q$ and $\alpha$'s. A critical point occurs when $P=P(V)$ has an inflection point, i.e., when \begin{equation} \label{cp} \frac{\partial P}{\partial V}=0\,,\quad \frac{\partial^2 P}{\partial V^2}=0\,. \end{equation} Together with the equation of state \eqref{eq:eqofstateL}, the system \eqref{cp} determines the critical values $\{P_c, V_c, T_c\}$ as functions of $Q$ and $\kappa$. To find a critical point one has to solve the (higher-order polynomial) Eqs. \eqref{cp} for $T_c, V_c$ and insert the result into the equation of state \eqref{eq:eqofstateL} to find $P_c$, subject to the restriction that $P_c, V_c, T_c$ be positive in order for the critical point to be physical. As a result of this study, we find that critical behaviour occurs in $d = 7, 8, 9, 10, 11$ dimensions, but not in $d=6$ (the Gauss--Bonnet case), though there is a cusp for $\kappa=+1$. In $d = 7$, the critical point is associated with Van der Waals behavior, whereas in $d = 8, 9, 10, 11$ we observe a reentrant phase transition. Figures \ref{Fig:Lvq7a} and \ref{Fig:Lvq7b} show the results of a numerical analysis of 3rd-order Lovelock gravity in $d=7, 8$ with $\kappa= \pm 1$ in terms of the following dimensionless variables: \begin{equation} r_{+}=v\,\alpha_{3}^{\frac{1}{4}}\,,\quad T=\frac{t\alpha_{3}^{-\frac{1}{4}}}{d-2}\,,\quad m=\frac{16\pi M}{(d-2)\Sigma_{d-2}^{(\kappa)}\alpha_{3}^{\frac{d-3}{4}}}\,,\quad Q=\frac{q}{\sqrt{2}}\alpha_{3}^{\frac{d-3}{4}}\,.\label{dimLov} \end{equation} For $\kappa=-1$, a special case occurs when the parameter $\alpha= \alpha_2/\sqrt{\alpha_3} = \sqrt{3}$. The system can then be solved analytically, and the solution exhibits a special isolated critical point.
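The two conditions in \eqref{cp} can be solved numerically once the equation of state is fixed. As a minimal illustrative sketch of the same solve-and-back-substitute procedure (using the ordinary Van der Waals equation $P = T/(v-b) - a/v^{2}$ as a stand-in for the full Lovelock state equation \eqref{eq:eqofstateL}; the function and variable names below are ours):

```python
# Illustrative sketch: locate the critical (inflection) point of a
# Van der Waals-type equation of state P(v, T) = T/(v - b) - a/v**2
# by solving dP/dv = 0 and d2P/dv2 = 0, then back-substituting for P_c.
# The Lovelock equation of state would replace the expressions below.

def critical_point(a=1.0, b=1.0):
    # dP/dv  = 0  ->  T = 2 a (v - b)**2 / v**3
    # d2P/dv2 = 0 ->  T = 3 a (v - b)**3 / v**4
    # Equating the two gives one equation in v, solved here by bisection.
    def f(v):
        return 2 * (v - b) ** 2 / v ** 3 - 3 * (v - b) ** 3 / v ** 4

    lo, hi = 1.5 * b, 10.0 * b          # bracket: f changes sign on [lo, hi]
    for _ in range(200):                 # plain bisection, no SciPy needed
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    v_c = 0.5 * (lo + hi)
    t_c = 2 * a * (v_c - b) ** 2 / v_c ** 3
    p_c = t_c / (v_c - b) - a / v_c ** 2   # back-substitute into P(v, T)
    return p_c, v_c, t_c

p_c, v_c, t_c = critical_point()
# Analytic values for Van der Waals: v_c = 3b, T_c = 8a/(27b), P_c = a/(27 b^2)
```

The physical-positivity restriction on $\{P_c, V_c, T_c\}$ then amounts to keeping only roots with all three quantities positive.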
The Gibbs free energy displays two swallowtails, both emerging from this point, and the critical exponents (obtained by series-expanding the equation of state near the critical point) are \begin{equation} \tilde{\alpha}=0,\,\,\, \tilde{\beta}=1,\,\,\, \tilde{\gamma}=2,\,\,\, \tilde{\delta}=3 \end{equation} for any dimension $d\geq 7$. Three of these critical exponents are independent because of a violation of certain scaling relations, in contrast to the two independent exponents of mean field theory. This isolated critical point can be understood as the merging of two critical points, and we find that the BH is massless ($M = 0$) \cite{Mann1} in this limit. By comparison, taking $\kappa=+1$ for the same value of $\alpha_2$ yields \begin{equation} \tilde{\alpha}=0,\,\,\, \tilde{\beta}=\frac{1}{2},\,\,\, \tilde{\gamma}=1,\,\,\, \tilde{\delta}=3, \end{equation} which are the standard swallowtail mean field theory critical exponents. \section{Conclusions} 3rd-order Lovelock gravity presents interesting and qualitatively new thermodynamic behaviour. In particular, 3rd-order uncharged Lovelock black holes with $\alpha=\sqrt{3}$ and $\kappa=-1$ are especially peculiar: in this distinctive case, we find that the equation of state has a non-standard expansion around a special critical point, suggesting a violation of the scaling relations and non-standard critical exponents. This feature of Lovelock gravity has been further discussed in \cite{Frassino:2014pha, Dolan:2014vba} and was subsequently observed in quasi-topological gravity as well \cite{Hennigar:2015esa}. In general, since the odd-order Lovelock theories (in any dimension in which they exist) always admit massless topological black holes \cite{Mann1,Mann2} as solutions, they will all exhibit the peculiar isolated critical point for an appropriate choice of coupling constants.
\begin{figure*} \centering \begin{tabular}{cc} \includegraphics[width=0.44\textwidth,height=0.28\textheight]{Figures/aq-NcriticalPointsZoomIN.eps} & \rotatebox{0}{ \includegraphics[width=0.44\textwidth,height=0.28\textheight]{Figures/PaZoomkm1.eps}}\\ \end{tabular} \caption{{\bf Critical points in $(q,\alpha)$-parameter space: $d=7$ ({\em left}) and $d=8$ ({\em right}), $\kappa=-1$ case.} The figure shows the number of critical points with positive $(P_c, V_c, T_c)$ and an appropriate normalization (see details in \cite{Frassino:2014pha}) in the $(q,\alpha)$-parameter space. Grey dots correspond to no critical points, blue to one critical point, and red to two; black solid and dashed lines highlight $\alpha=\sqrt{5/3}$ and $\alpha=\sqrt{3}$, respectively. Contrary to the $d=7$ ({\em left}) case, in $d=8$ ({\em right}) there are no critical points for $\alpha<\sqrt{5/3}$. } \label{Fig:Lvq7a} \end{figure*} \begin{figure*} \centering \begin{tabular}{cc} \includegraphics[width=0.44\textwidth,height=0.28\textheight]{Figures/ProvaLarge.eps} & \rotatebox{0}{ \includegraphics[width=0.44\textwidth,height=0.28\textheight]{Figures/PaZoomkp1.eps}}\\ \end{tabular} \caption{{\bf Critical points in $(q,\alpha)$-parameter space: $d=8, \kappa=+1$ case.} The number of critical points with positive $(P_c, V_c, T_c)$ and an appropriate normalization (see details in \cite{Frassino:2014pha}) is displayed in the $(q,\alpha)$-parameter space; grey dots correspond to no critical points, blue to one critical point, red to two, and yellow to three. The corresponding diagram for $d=7$ is trivial (it contains only the blue region with one critical point) and hence is not displayed. Although all critical points have positive $(P_c, V_c, T_c)$, some $P_c$ may exceed the maximum pressure $p_+$ and hence occur for a compact space. Note also the qualitatively different behavior for $q=0$. } \label{Fig:Lvq7b} \end{figure*} \newpage
\section{Introduction} \label{sec:intro} Thanks to the rapid development of deep learning in recent years, speech enhancement, separation and dereverberation have made remarkable progress, with consistent improvements on public datasets, e.g., WSJ0-2mix~\cite{hershey2016deep}, VoiceBank-DEMAND~\cite{valentini2016investigating} and the REVERB Challenge~\cite{kinoshita2016summary}. Various deep neural network (DNN) architectures have been proposed and reported to achieve significant improvements on speech enhancement~\cite{tan2018gated,yin2020phasen,hu2020dccrn} or separation~\cite{luo2019conv,luo2020dual,liu2019divide,zeghidour2020wavesplit,takahashi2019recursive} tasks. However, most of the models mentioned above focus on an individual enhancement or separation task and have not considered real-world environments in which overlapped speech, directional/isotropic noise and reverberation may exist together, which leads us to consider adopting one universal model to cope with speech enhancement, separation and dereverberation simultaneously. Several works have been done before on joint speech enhancement and separation. The most straightforward way is to train a separation model directly using noisy mixtures~\cite{wichern2019wham}. In general, the model can learn to recover each denoised speaker in the mixtures without any modification of the training objective and pipeline. Another popular way is to form a cascaded model or two-stage system following the \textit{enhancement-separation}~\cite{maciejewski2020whamr,ma2020two} or \textit{separation-enhancement}~\cite{wang2018integrating,yoshioka2019low} scheme. In \cite{wu2020end}, an end-to-end structure following the \textit{separation-enhancement} processing is proposed, which enables the joint optimization of the speech unmixing and extraction and yields impressive improvements in the online scenario.
Besides, the work of \cite{wu2020saddel} is proposed to improve the robustness of noisy speech separation under the recursive separation framework~\cite{takahashi2019recursive,kinoshita2018listening}, and the studies in~\cite{von2019all,kinoshita2020tackling} continue to consider the practical issues of the recursive approach in real meeting scenarios. The authors of \cite{nakatani2020dnn} investigate a DNN-supported system that integrates conventional spatial-clustering and beamforming techniques. In this paper, we focus on offline processing in the far-field multi-channel scenario and propose an all-neural multi-channel network for simultaneous speech dereverberation, enhancement and separation named \textit{DESNet}. It utilizes the DNN-based weighted prediction error (WPE)~\cite{kinoshita2017neural} as the dereverberation module, which estimates the time-varying variance of the early reverberated signal using a DNN instead of the iterative procedure of the original WPE. The design of the following \textit{separation-enhancement} processing is motivated by the E2E-UFE network~\cite{wu2020end}, which consists of three components, namely speech unmixing, attentional feature selection and speech extraction. Different from the original E2E-UFE, the outputs of the speech unmixing network, together with the weighted beams and angle features, are concatenated as the input of the speech extraction, while the weighted beams and angle features here are treated as assisted multi-channel directional features. A similar usage can be found in \cite{wang2018integrating}, where the log spectra of the enhanced signal after the multi-channel Wiener filter are adopted. We use the recently proposed deep complex convolutional recurrent network~\cite{hu2020dccrn} (DCCRN) as the structure of speech unmixing.
This advanced network achieves the best MOS in the real-time track and the second-best in the non-real-time track according to the P.808 subjective evaluation in the 2020 Deep Noise Suppression (DNS) challenge~\cite{reddy2020interspeech}. The time domain signal is synthesized via inverse short-time Fourier transform (iSTFT) using the final enhanced or separated spectra from the speech extraction module. The DNN-WPE, speech unmixing, attentional feature selection and speech extraction modules contain no non-differentiable operations, so DESNet can be optimized jointly in an end-to-end manner. The overall framework of DESNet is shown in Fig.~\ref{fig:DESNet}. We evaluate the performance of the proposed model on three tracks: speech enhancement (SE), clean speech separation (CSS) and noisy speech separation (NSS), in two categories: dereverberated and non-dereverberated. In order to improve the overall performance in each track, we introduce the \textit{staged SNR} strategy and \textit{symphonic loss} for network training. Experiments show that in the non-dereverberated scenario, the proposed DESNet outperforms DCCRN in the enhancement track and most state-of-the-art structures in the separation tracks, e.g., DPRNN~\cite{luo2020dual} and FasNet~\cite{Luo2019End}, while in the dereverberated scenario, DESNet also shows improvements over the cascaded WPE-DCCRN network in all three tracks. The rest of the paper is organized as follows. Section 2 gives a detailed introduction of DESNet. Section 3 discusses the experimental settings, training details, evaluation scheme, results and analysis. A conclusion of our work is drawn in Section 4.
\section{Proposed DESNet} \subsection{Problem Formulation} We consider the far-field signal model with $C$ speakers in the time domain as follows: \begin{equation} \mathbf{y}_m = \sum_{c} \mathbf{x}_{c,m} + \mathbf{n}_{m} = \sum_{c} \mathbf{s}_c * \mathbf{h}_{c,m} + \mathbf{n}_{m}, \end{equation} where $\mathbf{x}_{c,m}$ denotes the image of the $c$-th speaker ($0 \leqslant c < C$) at the position of the $m$-th microphone ($0 \leqslant m < M$). $\mathbf{h}_{c,m}$ is the room impulse response (RIR) between speaker $c$ and microphone $m$, $\mathbf{s}_c$ is the $c$-th source speaker signal, and $\mathbf{n}_m$ models the environment noise at microphone $m$. The corresponding frequency domain signal model is: \begin{equation} \mathbf{Y}_m = \sum_{c} \mathbf{X}_{c,m} + \mathbf{N}_{m}, \end{equation} where $\{\mathbf{Y}_m, \mathbf{X}_{c,m}, \mathbf{N}_{m}\} \in \mathbb{C}^{T \times F}$ are the short-time Fourier transforms (STFT) of $\{\mathbf{y}_m, \mathbf{x}_{c,m}, \mathbf{n}_{m}\}$. $T$ and $F$ denote the total number of time frames and frequency bins. \subsection{Dereverberation} WPE is an algorithm that effectively reduces reverberation and greatly improves the speech quality. Given a group of estimated linear filter weights $\mathbf{G}_f = [\mathbf{G}_{0,f}, \cdots, \mathbf{G}_{K-1,f}]^\top \in \mathbb{C}^{MK \times M}$, where $K$ denotes the tap number, the dereverberated signal is obtained via: \begin{align} \mathbf{y}'_{\text{drv},t,f} & = \mathbf{y}_{t,f} - \sum_{k=0}^{K -1} \mathbf{G}_{f,k}^H \mathbf{y}_{t - \Delta -k,f} \\ & = \mathbf{y}_{t,f} - \mathbf{G}_{f}^H \overline{\mathbf{y}_{t - \Delta,f}}, \end{align} where $\Delta$ denotes the prediction delay, $\mathbf{y}_{t,f} = [\mathbf{Y}_{0,t,f}, \cdots, \mathbf{Y}_{M - 1,t,f}]^T \\ \in \mathbb{C}^{M \times 1}$ and $\overline{\mathbf{y}_{t - \Delta,f}} = [\mathbf{y}_{t - \Delta,f}^T, \cdots, \mathbf{y}_{t - \Delta - K + 1,f}^T]^T \in \mathbb{C}^{MK \times 1}$.
The estimation of $\mathbf{G}_f$ is based on the time-varying variance $\mathbf{\Lambda} \in \mathbb{R}^{T \times F}$ of the early reverberated signal following the steps: \begin{align} \mathbf{R}_f &= \sum_t \frac{\overline{\mathbf{y}_{t - \Delta,f}} \overline{\mathbf{y}_{t - \Delta,f}}^H}{\mathbf{\Lambda}_{t,f}}, \\ \mathbf{r}_f &= \sum_t \frac{\overline{\mathbf{y}_{t - \Delta,f}} \mathbf{y}_{t,f}^H}{\mathbf{\Lambda}_{t,f}}, \\ \mathbf{G}_f &= \mathbf{R}_f^{-1} \mathbf{r}_f. \end{align} The conventional WPE estimates $\mathbf{\Lambda}$ in an iterative procedure, while in DNN-WPE~\cite{kinoshita2017neural}, $\mathbf{\Lambda}$ is estimated using a neural network directly: \begin{align} \mathbf{\Lambda}_m & = \text{NN}(|\mathbf{Y}_m|), \end{align} and $\mathbf{\Lambda} = \sum_m \mathbf{\Lambda}_m / M$. \subsection{Angle Feature and Fixed Beamforming} After estimating the dereverberated signal on frequency $f$ $\mathbf{Y}'_{\text{drv},f} = [\mathbf{y}'_{\text{drv},0,f}, \cdots, \mathbf{y}'_{\text{drv},T - 1,f}] \in \mathbb{C}^{M \times T}$, we calculate angle features (in subnet AF) and fixed beams (in subnet BF) in all candidate directions. The beamformed signal $\mathbf{b}_{i,f} \in \mathbb{C}^{1 \times T}$ on $i$-th ($0 \leqslant i < N_b$) direction is obtained via: \begin{equation} \mathbf{b}_{i,f} = \mathbf{w}_{i,f}^{H} \mathbf{Y}'_{\text{drv},f}, \end{equation} where $\mathbf{w}_{i,f} \in \mathbb{C}^{M \times 1}$ represents the $i$-th beam weight on frequency $f$. We denote $\mathbf{B}_i = [\mathbf{b}_{i,0}^T, \cdots, \mathbf{b}_{i,F - 1}^T]^T \in \mathbb{C}^{T \times F}$ in this paper. 
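The WPE estimation and filtering steps above can be sketched per frequency bin with NumPy. This is an illustrative sketch assuming the variance $\mathbf{\Lambda}$ is already provided (e.g., by the variance-estimation network); `wpe_filter` and its toy inputs are our own naming, not code from the paper:

```python
import numpy as np

def wpe_filter(Y, Lam, delay=3, taps=10):
    """One-frequency-bin WPE step. Y: (M, T) complex STFT frames; Lam: (T,)
    time-varying variance of the early-reverberated signal. Returns the
    dereverberated (M, T) frames following the R_f, r_f, G_f equations."""
    M, T = Y.shape
    # Stack the delayed context y_bar_{t - delay} of shape (M*taps, T),
    # zero-padding frames that fall before t = 0.
    Ybar = np.zeros((M * taps, T), dtype=complex)
    for k in range(taps):
        shift = delay + k
        Ybar[k * M:(k + 1) * M, shift:] = Y[:, :T - shift]
    W = Ybar / Lam                       # per-frame 1/Lambda weighting
    R = W @ Ybar.conj().T                # (MK, MK) weighted correlation
    r = W @ Y.conj().T                   # (MK, M) cross term
    G = np.linalg.solve(R + 1e-6 * np.eye(M * taps), r)  # small ridge for stability
    return Y - G.conj().T @ Ybar         # y'_t = y_t - G^H y_bar_t

# Toy usage: two-channel white frames corrupted by a late reflection.
rng = np.random.default_rng(0)
s = rng.standard_normal((2, 200)) + 1j * rng.standard_normal((2, 200))
Y = s.copy()
Y[:, 5:] += 0.8 * s[:, :-5]              # delayed echo at lag 5
Y_drv = wpe_filter(Y, np.ones(200), delay=3, taps=10)
```

With uniform $\mathbf{\Lambda}$ this reduces to ordinary least squares, so the residual energy of `Y_drv` never exceeds that of `Y`; in DNN-WPE the network's $\mathbf{\Lambda}$ reweights the frames instead.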
The angle feature $\mathbf{a}_{\theta,f} \in \mathbb{R}^{1 \times T}$ on frequency band $f$ is derived from: \begin{equation} \mathbf{a}_{\theta,f} = \sum_{m, n \in \psi } \cos (\mathbf{o}_{mn, f} - \mathbf{r}_{\theta,mn, f}) / P, \end{equation} where $\psi$ contains $P$ microphone pairs and $\mathbf{o}_{mn, f} = \angle \mathbf{y}'_{\text{drv},m, f} \\ - \angle \mathbf{y}'_{\text{drv},n, f}$ represents the observed inter-microphone phase difference (IPD) between microphones $m$ and $n$. The reference IPD $\mathbf{r}_{\theta, mn, f}$ can be derived from the geometry-dependent steering vector given the DoA $\theta$ and the microphone indices $m$ and $n$. Similarly to $\mathbf{B}_i$, we define $\mathbf{A}_\theta = [\mathbf{a}_{\theta,0}^T, \cdots, \mathbf{a}_{\theta, F - 1}^T]^T \in \mathbb{R}^{T \times F}$. The total number of candidate DoAs is defined as $N_{A}$. \begin{figure*}[htb] \centering \includegraphics[width=1 \textwidth]{figures/DesNet} \vspace{-0.7cm} \caption{Overall framework of proposed DESNet.} \label{fig:DESNet} \end{figure*} \subsection{Speech Unmixing by DCCRN} Compared with the original E2E-UFE, whose unmixing module consists only of stacked LSTMs and a projection layer, we believe that a better network prototype, like DCCRN~\cite{hu2020dccrn}, can benefit the following selection of the angle and beam features, as well as assist the speech extraction for a better estimation of the final masks. DCCRN follows the conventional UNet structure, but uses complex-valued convolutional encoders/decoders and real/imaginary LSTMs to model the context dependency. The architecture of DCCRN is shown in Fig.~\ref{fig:DCCRN}, where each encoder/decoder block stacks 2D complex convolution/deconvolution layers, real-valued 2D batch normalization (BN) and a leaky ReLU activation function, as shown in Fig.~\ref{fig:complex}.
Similar to the implementation in DCUNet, the complex-valued 2D convolutional operation is interpreted as two real-valued ones, following the formula: \begin{equation} \mathbf{W} \circledast \mathbf{Y} = \begin{bmatrix} \mathbf{W}_r \\ \mathbf{W}_i \end{bmatrix} \circledast \begin{bmatrix} \mathbf{Y}_r \\ \mathbf{Y}_i \end{bmatrix} = \begin{bmatrix} \mathbf{W}_r * \mathbf{Y}_r - \mathbf{W}_i * \mathbf{Y}_i \\ \mathbf{W}_r * \mathbf{Y}_i + \mathbf{W}_i * \mathbf{Y}_r \end{bmatrix}, \end{equation} where $\circledast$ and $*$ denote complex-valued and real-valued convolution, respectively. $\mathbf{W} = [\mathbf{W}_r, \mathbf{W}_i]^T$ denotes the complex-valued convolution filters and $\mathbf{Y} = [\mathbf{Y}_r, \mathbf{Y}_i]^T$ is the input of the complex convolution layer. As DCCRN achieves impressive performance on the speech enhancement task, we adopt it and extend its ability to the separation task. We use a real-valued LSTM with a linear projection layer to map the concatenation of the real and imaginary parts of the encoder output to two branches, and the decoder is responsible for generating the pre-unmixing masks for each speaker independently. The structure of the decoders is symmetric to that of the encoders, replacing 2D convolution with 2D deconvolution. The output of each encoder layer is concatenated to the output of the corresponding decoder layer to avoid gradient vanishing.
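The real-valued decomposition above can be verified directly. A minimal 1-D NumPy sketch (using `np.convolve` in place of the 2-D convolution layers; the variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
# Complex kernel W = Wr + i*Wi and complex input Y = Yr + i*Yi
W = rng.standard_normal(5) + 1j * rng.standard_normal(5)
Y = rng.standard_normal(64) + 1j * rng.standard_normal(64)

# Four real-valued convolutions, following the formula in the text:
# real part = Wr * Yr - Wi * Yi,  imaginary part = Wr * Yi + Wi * Yr
real = np.convolve(Y.real, W.real) - np.convolve(Y.imag, W.imag)
imag = np.convolve(Y.imag, W.real) + np.convolve(Y.real, W.imag)
split = real + 1j * imag

direct = np.convolve(Y, W)   # reference: native complex convolution
```

`split` and `direct` coincide, which is exactly why the complex layers can be built from standard real-valued convolution primitives.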
\begin{figure}[htb] \centering \centerline{\includegraphics[width=0.35 \textwidth]{figures/DCCRN}} \caption{Diagram of DCCRN.} \vspace{-0.5cm} \label{fig:DCCRN} \end{figure} \begin{figure}[htb] \centering \centerline{\includegraphics[width=4.5cm]{figures/dccrn-complex-module}} \vspace{-0.1cm} \caption{Diagram of complex-valued convolutional encoder/decoder.} \label{fig:complex} \end{figure} The final complex ratio mask (CRM) of speaker $c$, $\mathbf{M}_c^U = \mathbf{M}_{c,r} + i\mathbf{M}_{c,i}$, is calculated given the output of the decoders $\mathbf{H}_c = [\mathbf{H}_{c,r}, \mathbf{H}_{c,i}]^T$: \begin{align} \mathbf{M}_{c,\text{mag}} & =\tanh(\sqrt{\mathbf{H}_{c,r}^2+\mathbf{H}_{c,i}^2}), \\ \mathbf{M}_{c,\text{pha}} & =\text{arctan2}(\mathbf{H}_{c,i}, \mathbf{H}_{c,r}). \end{align} We then have $\mathbf{M}_{c,r} = \mathbf{M}_{c,\text{mag}} \odot \cos(\mathbf{M}_{c,\text{pha}})$ and $\mathbf{M}_{c,i} = \mathbf{M}_{c,\text{mag}} \odot \sin(\mathbf{M}_{c,\text{pha}})$. Thus we can get the unmixed results $\mathbf{Y}^U_c \in \mathbb{C}^{T \times F}$ using the dereverberated speech at channel 0: \begin{equation} \mathbf{Y}^U_c = \mathbf{M}^U_c \odot \mathbf{Y}'_{\text{drv}, 0}. \end{equation} \subsection{Attentional Feature Selection} After computing all $N_{A}$ candidate angle features and $N_{B}$ beams from subnets AF and BF, as well as the unmixing masks $\mathbf{M}_c^{U} \in \mathbb{C}^{T \times F}, c \in \{1,2\}$, we utilize the attentional selection mechanism proposed in \cite{wu2020end} for multi-channel feature selection. Here we take the angle feature as an example; a similar process is applied to obtain the beam feature.
First, two sets of embeddings for the unmixing masks $\mathbf{M}_c^{U}$ of each speaker and the angle features $\mathbf{A}_{\theta}$ are formed through linear mapping: \begin{align} \mathbf{V}_{c}^{U} &= |\mathbf{M}_{c}^{U}|\mathbf{W}_{p}, \\ \mathbf{V}_{\theta}^{A} &= \mathbf{A}_{\theta}\mathbf{W}_{a}, \end{align} where $\mathbf{W}_{p}, \mathbf{W}_{a} \in \mathbb{R}^{F\times D}$ are linear transform weights, resulting in the embedding matrices $\mathbf{V}_{c}^{U}, \mathbf{V}_{\theta}^{A} \in \mathbb{R}^{T \times D}$. Then a pair-wise similarity matrix is derived between each frame in $\mathbf{V}_c^U$ and $\mathbf{V}_{\theta}^A$ using the dot product distance, scaled by $(\sqrt{D})^{-1}$: \begin{align} s_{c,\theta,t} &= (\sqrt{D})^{-1} \left(\mathbf{V}^U_{c,t} \right)^T \mathbf{V}^A_{\theta,t}. \label{eq7} \end{align} The similarity matrix is then averaged along the time axis, followed by a softmax activation, to generate the weight $w_{c,\theta}$ of each direction for each source speaker: \begin{align} \hat{s}_{c,\theta} & = \left( T \right)^{-1} \sum_t s_{c,\theta,t}, \\ w_{c, \theta} &= \text{softmax}_\theta(\hat{s}_{c,\theta}). \end{align} Finally, a weighted average is performed to obtain the weighted angle feature $\hat{\mathbf{A}}_c$ for the $c$-th speaker: \begin{equation} \hat{\mathbf{A}}_{c}=\sum_{\theta} w_{c,\theta}\mathbf{A}_{\theta}. \label{eq10} \end{equation} \subsection{Speech Extraction} After generating the unmixed speech from DCCRN as well as the attentional angle and beam features from the AF Att and BF Att layers, we concatenate these three features and feed them to the following speech extraction network. The extractor consists of a stack of LSTM layers, followed by a linear projection, to estimate the final real masks $\mathbf{M}_c^{E} \in \mathbb{R}^{T \times F}$. We apply them to the real and imaginary parts of the unmixed speech of each speaker separately to generate the final separation or enhancement outputs.
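The attentional selection steps of Eqs.~\eqref{eq7}--\eqref{eq10} can be sketched with NumPy. This is an illustrative sketch: `W_p` and `W_a` below are randomly initialized stand-ins for the learned linear maps, and the function name is ours; the shapes follow the text:

```python
import numpy as np

def attentional_select(M_mag, A, D=32, seed=0):
    """Sketch of the attentional angle-feature selection for one speaker.
    M_mag: (T, F) magnitude of the unmixing mask; A: (N_A, T, F) angle
    features of all candidate directions. Returns the weighted angle
    feature (T, F) and the per-direction weights (N_A,)."""
    T, F = M_mag.shape
    rng = np.random.default_rng(seed)
    W_p = rng.standard_normal((F, D)) / np.sqrt(F)   # stand-in for learned map
    W_a = rng.standard_normal((F, D)) / np.sqrt(F)   # stand-in for learned map
    V_u = M_mag @ W_p                                # (T, D) mask embedding
    V_a = A @ W_a                                    # (N_A, T, D) embeddings
    s = np.einsum('td,ntd->nt', V_u, V_a) / np.sqrt(D)  # scaled dot product
    s_bar = s.mean(axis=1)                           # average over time
    w = np.exp(s_bar - s_bar.max())
    w /= w.sum()                                     # softmax over directions
    return np.tensordot(w, A, axes=1), w             # weighted angle feature

T, F, N_A = 50, 257, 36
rng = np.random.default_rng(2)
feat, w = attentional_select(rng.random((T, F)), rng.random((N_A, T, F)))
```

The softmax guarantees the per-direction weights form a convex combination, so the selected feature stays within the span of the candidate angle features.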
\subsection{Loss Function} The proposed model works in an end-to-end manner, directly operating on the raw waveform and generating the enhanced/separated results. The clean speech from the primary channel (channel 0 in this paper) is adopted as the training target. Scale-invariant source-to-noise ratio (SI-SNR)~\cite{le2019sdr} is used as the objective function, which has been widely used to replace the standard source-to-distortion ratio (SDR) or mean square error (MSE) loss function. SI-SNR is calculated by: \begin{equation} \text{SI-SNR}(\mathbf{s}_i, \mathbf{x}_j) = 20\log_{10}\frac{\Vert \alpha \cdot \mathbf{x}_j \Vert}{\Vert \mathbf{s}_i -\alpha \cdot \mathbf{x}_j \Vert}, \end{equation} where $\mathbf{x}_j$ is the clean reference of speaker $j$, and $\mathbf{s}_i$ refers to the separated signal of speaker $i$. $\alpha$ is an optimal scaling factor computed via $\alpha = \mathbf{s}_{i}^\text{T}\mathbf{x}_{j}/\mathbf{x}_{j}^\text{T}\mathbf{x}_{j}$. In particular, we propose a special way to compute the training loss, named symphonic loss, for the optimization of DESNet. The loss calculation for each training chunk in one mini-batch differs depending on which track it belongs to. If the current mixture chunk contains one speaker, namely in the SE track, we only optimize the first branch of the network, while for the NSS and CSS tracks, we optimize both branches of the network using the permutation invariant training (PIT) strategy, which can be written as: \begin{equation} \mathcal{L} = -\max_{\phi \in \mathcal{P}} \sum_{(i, j) \in \phi} \text{SI-SNR}(\mathbf{s}_i, \mathbf{x}_j) / N_{\mathcal{P}}, \label{pit} \end{equation} where $\mathcal{P}$ and $N_{\mathcal{P}}$ refer to all possible permutations and the number of permutations, respectively. \section{Experiments} \label{sec:pagestyle} \subsection{Datasets} In our experiments, we adopt \textit{train-clean-100} and \textit{train-clean-360} from LibriSpeech, which include 460 hours of single-channel speech, as the source data.
The noise set provided by the DNS Challenge~\cite{reddy2020interspeech}, which contains 180 hours of data, is used as the noise dataset. The multi-channel RIRs and isotropic noise are simulated in advance based on a circular array with four microphones. The radius of the circular array is 5 cm. In the SE and CSS tracks, the SNR and SDR during training are sampled randomly from [-5, 10] dB and [-5, 5] dB, respectively. In the NSS track, the SNR range is set as [5, 20] dB and the SDR range is [-5, 5] dB. The maximum number of speakers is set as 2 for the separation task. All training utterances are mixed with isotropic noise whose SNR ranges from 15 dB to 20 dB. The RT60 of the RIRs ranges from 0.1 s to 0.5 s. The angle between each speaker and the noise is set to at least $20^{\circ}$ to ensure the spatial distinctiveness of each sound source. For model evaluation, we also create three test sets named SE, CSS and NSS in the dereverberated and non-dereverberated categories. All speech data is derived from \textit{test-clean} in LibriSpeech, while the noise data is selected from the DNS noise set and has no overlap with the training data. The SDRs for the two speakers in both CSS and NSS are set to $\{-5, -2, 0\}$ dB, while the SNR of the noise in NSS is sampled randomly in the range of [5, 20] dB. The noise SNR in SE is set to $\{-5, 0, 5, 10\}$ dB. Both CSS and NSS contain 3900 utterances and SE contains 5860 utterances in total. Other settings are kept the same as the training configurations.
\begin{table}[] \centering \caption{SNR (SDR) range (dB) in each stage.} \label{table:SNR range} \begin{tabular}{ccccc} \toprule \multirow{2}{*}{Training Epoch} & \multirow{2}{*}{SE} & \multirow{2}{*}{CSS} & \multicolumn{2}{c}{NSS} \\ & & & SE & SS \\ \midrule 1 $\sim$ 5 & [5, 10] & [-2, 2] & $\times$ & $\times$ \\ 6 $\sim$ 10 & [0, 10] & $\times$ & [15, 20] & [-2, 2] \\ 11 $\sim$ 15 & [-2, 10] & $\times$ & [10, 20] & [-4, 4] \\ 16 $\sim$ 20 & [-5, 10] & $\times$ & [5, 20] & [-5, 5] \\ \bottomrule \end{tabular} \end{table} \subsection{Training Setups} The training utterances are generated on-the-fly and segmented into 4s chunks in one batch. In other words, we randomly select speech, noise and RIR and simulate the mixture utterances dynamically with one or two speakers according to SNR (SDR) ranges on each track to form 265 hours of training data in each epoch. Compared with generating all the data beforehand, our dynamic simulating strategy can improve the diversity of the training samples. Each model is trained for 20 epochs with Adam optimizer. The learning rate is set to 0.001 initially and decays by 0.5 if the validation loss goes up. Furthermore, we propose the staged SNR strategy to gradually optimize the network in order to adapt to the low SNR (SDR) scenario. The SNR (SDR) range in each training stage is shown in Table~\ref{table:SNR range}. Before epoch 6 we only optimize the network with SE and CSS data with higher SNR (SDR). And after epoch 5, we replace CSS data with NSS data and gradually expand the SNR (SDR) range. Both tracks share the same amount of data in one single epoch. \begin{table*}[!htb] \centering \caption{Results of non-dereverberated SE and SS.} \setlength{\tabcolsep}{6pt} \label{table:SE+SS} \begin{tabular}{l ccccc cccc cccc} \toprule Model & \multicolumn{5}{c}{SE (PESQ)} & \multicolumn{4}{c}{CSS (SI-SNR (dB))} & \multicolumn{4}{c}{NSS (SI-SNR (dB))} \\ SNR (SDR) & -5 & 0 & 5 & 10 & Avg. & -5 & -2 & 0 & Avg. & -5 & -2 & 0 & Avg. 
\\ \midrule Mixed & 1.51 & 1.87 & 2.22 & 2.57 & 2.04 & 0.00 & 0.00 & 0.00 & 0.00 & -1.63 & -0.88 & -0.76 & -1.09 \\ CACGMM & 2.14 & 2.40 & 2.69 & 2.88 & 2.53 & 4.50 & 6.16 & 6.48 & 5.71 & 1.72 & 4.08 & 4.46 & 3.42 \\ Proposed DESNet &\textbf{2.55} & \textbf{2.87} & \textbf{3.17} & \textbf{3.41} & \textbf{3.00} & \textbf{10.18} & \textbf{9.98} & \textbf{9.78} & \textbf{9.98} & \textbf{7.16} & \textbf{7.73} & \textbf{7.77} & \textbf{7.55} \\ $\quad$ - Staged SNR & 2.51 & \textbf{2.87} & 3.16 & 3.40 & 2.99 & 9.88 & 8.54 & 7.87 & 8.76 & \textbf{7.16} & 6.65 & 6.19 & 6.67 \\ $\quad$ - Symphonic Loss & 2.36 & 2.73 & 3.06 & 3.33 & 2.87 & 9.61 & 9.40 & 9.26 & 9.42 & 6.70 & 7.31 & 7.31 & 7.11 \\ $\quad$ - BF Feature & 2.29 & 2.65 & 2.97 & 3.23 & 2.79 & 8.77 & 8.65 & 8.44 & 8.62 & 5.84 & 6.32 & 6.31 & 6.16 \\ DCCRN & 2.25 & 2.61 & 2.94 & 3.20 & 2.75 & 7.78 & 6.04 & 5.37 & 6.40 & 5.73 & 4.62 & 4.07 & 4.81 \\ Conv-TasNet &2.00 &2.29 &2.53 &2.71 &2.38 &6.03 &6.67 &6.72 &6.47 &3.93 &5.09 &5.23 &4.75 \\ DPRNN &2.22 &2.55 &2.84 &3.09 &2.68 &9.09 &9.36 &9.32 &9.26 &6.37 &7.32 &7.42 &7.04 \\ FasNet & 2.24 & 2.58 & 2.89 & 3.14 & 2.71 & 9.42 & 9.35 & 9.02 & 9.26 & 6.91 & 7.63 & 7.41 & 7.32 \\ \bottomrule \hline \end{tabular} \vspace{-0.5cm} \end{table*} \begin{table*}[!htb] \centering \caption{Results of dereverberated SE and SS.} \setlength{\tabcolsep}{6pt} \label{table:SE+SS+DRV} \begin{tabular}{l ccccc cccc cccc} \toprule Model & \multicolumn{5}{c}{SE (PESQ)} & \multicolumn{4}{c}{CSS (SI-SNR (dB))} & \multicolumn{4}{c}{NSS (SI-SNR (dB))} \\ SNR (SDR) & -5 & 0 & 5 & 10 & Avg. & -5 & -2 & 0 & Avg. & -5 & -2 & 0 & Avg. 
\\ \midrule Mixed & 1.41 & 1.71 & 2.02 & 2.31 & 1.86 & -1.38 & -0.75 & -0.64 & -0.92 & -2.63 & -1.54 & -1.35 & -1.84\\ CACGMM & 2.09 & 2.36 & 2.63 & 2.83 & 2.48 & 3.97 & 5.54 & 5.85 & 5.12 & 1.57 & 3.90 & 4.27 & 3.25 \\ Proposed DESNet &\textbf{2.36} &\textbf{2.65} &\textbf{2.90} &\textbf{3.12} &\textbf{2.76} &\textbf{8.07} &\textbf{8.18} &\textbf{8.14} &\textbf{8.13} &\textbf{6.38} &\textbf{6.65} &\textbf{6.50} &\textbf{6.51}\\ $\quad$ - Staged SNR &2.26 &2.57 &2.84 &3.06 &2.68 &7.96 &8.14 &8.03 &8.04 &5.56 &6.36 &6.18 &6.03\\ $\quad$ - Symphonic Loss &2.32 &2.63 &2.89 &3.11 &2.74 &7.74 &7.88 &7.42 &7.68 &5.68 &6.45 &6.50 &6.21 \\ $\quad$ - DNN-WPE &2.17 &2.49 &2.77 &3.01 &2.61 &7.36 &7.66 &7.59 &7.54 &5.20 &5.68 &5.65 &5.51 \\ WPE-DCCRN &2.16 &2.49 &2.78 &3.00 &2.61 &6.64 &6.09 &5.77 &6.17 &5.16 &5.07 &4.61 &4.95 \\ \bottomrule \end{tabular} \end{table*} \subsection{Experiment Setups} For (i)STFT layer in the DESNet, we use hanning window with a FFT size of 512 and a hop size of 256. $N_A$ and $N_B$ are chosen as 36 and 18 in our experiments. The angle feature calculated in AF layer is averaged among three microphone pairs: (0, 1), (0, 2) and (1, 3). The embedding size $D$ used for feature selection is set as 257. We use 6-layer encoder and decoder in DCCRN with output channel $\{16, 32, 64, 128, 256, 256\}$ and the kernel and stride size are set to (5, 2) and (2, 1), respectively. The hidden size of the 3-layer LSTM in DCCRN is set to 512 and the dimension of the following linear layer is 1024. Speech extraction is a stack of 3-layer LSTM with hidden size of 512. \subsubsection{Non-dereverberated SE and SS} In this category, we only perform simultaneous enhancement and separation thus the DNN-WPE is removed from the DESNet. The clean reverberated speech is used as the training reference. 
CACGMM~\cite{ito2016complex}, DCCRN~\cite{hu2020dccrn}, Conv-TasNet~\cite{luo2019conv}, DPRNN~\cite{luo2020dual} and FasNet~\cite{Luo2019End} are used as the comparative systems. CACGMM\footnote{\url{https://github.com/funcwj/setk/}} is a blind mask estimation method based on spatial clustering proposed for speech separation. In this paper, the number of Gaussians is chosen as 2 for the SE and CSS tasks and 3 for the NSS task. EM steps are repeated for 50 epochs to estimate the mask of each sound source. The setups of the multi-channel Conv-TasNet\footnote{\url{{https://github.com/funcwj/conv-tasnet}}} and DPRNN follow the best configurations in \cite{luo2019conv,luo2020dual}, but with the input channel size of the encoder changed from 1 to 4. The window size is chosen as 20 and 16 samples, respectively. The structure of DCCRN is the same as the one used in the proposed DESNet. For FasNet\footnote{\url{{https://github.com/yluo42/TAC}}}, the 4-channel setup with the TAC module is used.
To better validate the proposed structure and strategies, we add three ablation experiments. The first one uses the fixed SNR range in the last line of Table~\ref{table:SNR range} to train the model directly. Secondly, we remove the fixed beamformer layer to see whether this learnt multi-channel feature is beneficial to the final performance. We also disable the proposed symphonic loss during training to verify the improvements it brings. The experimental results are shown in Table~\ref{table:SE+SS} and will be discussed in Section 3.4.
\subsubsection{Dereverberated SE and SS}
In this category, we perform simultaneous dereverberation, enhancement and separation. The early reverberation part of the clean source image is used as the training label. As the time-domain separation models used in Section 3.3.1 do not consider dereverberation, we only use CACGMM and WPE-DCCRN as the baseline systems.
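The SI-SNR numbers reported in Tables~\ref{table:SE+SS} and \ref{table:SE+SS+DRV} follow the standard scale-invariant definition from the time-domain separation literature; a minimal sketch of that definition (not taken from the authors' code):

```python
import numpy as np

def si_snr(est: np.ndarray, ref: np.ndarray, eps: float = 1e-8) -> float:
    """Scale-invariant SNR in dB (standard definition, zero-mean signals)."""
    est = est - est.mean()
    ref = ref - ref.mean()
    # project the estimate onto the reference to get the target component
    s_target = (np.dot(est, ref) / (np.dot(ref, ref) + eps)) * ref
    e_noise = est - s_target
    return 10.0 * np.log10((np.dot(s_target, s_target) + eps)
                           / (np.dot(e_noise, e_noise) + eps))
```

By construction the metric is invariant under rescaling of the estimate, which is why it is preferred over plain SNR for evaluating separation outputs.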
The DNN-WPE network is the same in both DESNet and WPE-DCCRN; it contains a 2-layer CNN whose channel and kernel sizes are set to (300, 4) and 3, respectively. Other setups are identical to the non-dereverberated category. WPE-DCCRN is also a cascaded network, where the input of the DCCRN is the STFT of the early reverberation part of the mixture signal produced by the DNN-WPE module. We also add three ablation experiments to verify the benefits of the staged SNR strategy, the symphonic loss and the DNN-WPE module, as shown in Table~\ref{table:SE+SS+DRV}.
\begin{figure}[!tbh]
\begin{minipage}[b]{0.50\linewidth}
\centering
\centerline{\includegraphics[width=5.0cm]{figures/weight_spk1_cut}}
\end{minipage}
\hfill
\begin{minipage}[b]{0.40\linewidth}
\centering
\centerline{\includegraphics[width=5.0cm]{figures/weight_spk2_cut}}
\end{minipage}
\vspace{-0.3cm}
\caption{Example of the learnt weights on the angle feature in a two-speaker mixture utterance.}
\label{fig:weight}
\end{figure}
\subsection{Results and Discussion}
The experimental results of the non-dereverberated category are shown in Table~\ref{table:SE+SS}. The proposed DESNet gives the best performance in all three tracks. In the SE track, DCCRN is second only to DESNet, showing its strong noise-reduction capability. Conv-TasNet is even worse than CACGMM, probably because its use of spatial information is limited and not as effective as that of FasNet. The staged SNR strategy has less impact on the SE task, while the beamformer feature improves PESQ by 0.21. According to the results in the CSS/NSS tracks, DPRNN and FasNet are superior to DCCRN and Conv-TasNet, achieving around 3 dB improvement in SI-SNR on both test sets. The staged SNR strategy and the beamformer feature are important for separation, bringing 1.22/1.36 dB and 0.88/1.39 dB SI-SNR improvements in the CSS and NSS tracks, respectively.
The symphonic loss brings a 0.13 gain in PESQ, as well as 0.56 and 0.44 dB gains in SI-SNR in the CSS and NSS tracks, respectively. Compared with FasNet, the best model among our baseline systems, DESNet achieves 0.72/0.23 dB improvements on the CSS/NSS tracks.
The evaluation results of the dereverberated category are shown in Table~\ref{table:SE+SS+DRV}. Removing the DNN-WPE module has a large impact on separation performance, leading to degradations of 0.59/1.00 dB on the CSS/NSS tracks in terms of SI-SNR. DESNet outperforms the WPE-DCCRN baseline, bringing 1.96 dB and 1.56 dB improvements on those two tracks. The staged SNR strategy and the symphonic loss again prove effective, bringing 0.09/0.48 dB and 0.45/0.30 dB SI-SNR improvements in the CSS and NSS tracks, respectively.
In order to see what the attentional selection layer learns in DESNet, we split the whole utterance into consecutive chunks to get the time-variant weights and visualize an example of a two-speaker mixture utterance on the angle feature in Fig.~\ref{fig:weight}. The $x$ axis denotes the chunk index and the $y$ axis denotes the direction of the speaker. The true directions of the two speakers are $102^\circ$ and $193^\circ$, and the learnt weights fit the actual speaker directions perfectly, selecting the correct angle feature that we want the network to learn from. Audio samples are available online\footnote{\url{https://felixfuyihui.github.io/DesNet_Demo/}}.
\section{Conclusions}
In this paper, we propose a multi-channel network named DESNet for simultaneous speech dereverberation, enhancement and separation. The novel DCCRN is utilized to perform speech unmixing and form the pre-unmixing masks that assist the soft selection of the angle and beam features used in the subsequent speech extraction network. The neural-network-based WPE is cascaded to produce the dereverberated signal. All modules mentioned above are jointly optimized in an end-to-end manner.
To achieve better performance in each task, we also introduce the \textit{staged SNR} strategy and the \textit{symphonic loss}. Experiments show that the proposed DESNet outperforms DCCRN in the enhancement task and FasNet in the separation task in the non-dereverberated category. In the dereverberated category, DESNet also shows improvements over the cascaded WPE-DCCRN network in the enhancement and separation tasks. In the future, we will consider optimizing speech dereverberation, enhancement and separation jointly with an acoustic model to further improve speech recognition accuracy in real-world scenarios.
\clearpage
\balance
\bibliographystyle{IEEEbib}
\section{Introduction}
The muon anomalous magnetic moment $g-2$ [$a_\mu \equiv (g-2)/2$] has been measured by the E821 experiment (Muon g-2 Collaboration) at Brookhaven National Laboratory (BNL) with an impressive accuracy of 0.72 ppm \cite{BNL06}, yielding the present world average\cite{BNL06}
\begin{equation}
a_\mu^{\rm exp} = 11 \, 659 \, 208.0(6.3) \times 10^{-10} \,
\end{equation}
with an accuracy of 0.54 ppm. New experiments\cite{ROB06,MRR06} are under design with a goal of measuring $a_\mu$ with an accuracy of at least 0.25 ppm. On the theory side, a large amount of work has been devoted to reducing the uncertainty of the Standard Model prediction. Recent updated discussions with extensive lists of references for both theoretical predictions and experimental results are Refs.~\refcite{MRR06} and \refcite{PAS05}, and a more introductory exposition can be found in the lectures by Knecht in Ref.~\refcite{Knechtlectures}. In this paper, we review the present status of the hadronic light-by-light contribution (hLBL). A somewhat shorter version is the published talk in Ref.~\refcite{kazimierz}. The uncertainty in the hLBL is expected to eventually become the largest theoretical error. This contribution is shown schematically in Fig.~\ref{lbl}. It consists of three photon legs coming from the muon line connected to the external electromagnetic field by hadronic processes.
\begin{figure}[ht]
\begin{center}
\epsfig{file=figg-2-1.ps,width=5cm}
\end{center}
\caption{The hadronic light-by-light contribution to the muon $g-2$.}
\label{lbl}
\end{figure}
Its contribution can be written as
\begin{eqnarray}
\label{Mlbl}
\displaystyle{\cal M} &=& \vert e\vert^7 V_\beta \int \frac{{\rm d}^4 p_1}{ (2\pi )^4} \int \frac{{\rm d}^4 p_2}{ (2\pi )^4} \, \frac{1}{ q^2\, p_1^2 \, p_2^2 (p_4^2-m^2) \, (p_5^2 - m^2)} \nonumber \\
&\times& \Pi^{\rho\nu\alpha\beta} (p_1,p_2,p_3) \, \bar{u}(p^\prime )\gamma_\alpha (\not{\! p}_4 +m )\gamma_\nu (\not{\!
p}_5 +m ) \gamma_\rho u(p) \,
\end{eqnarray}
where $q=p_1+p_2+p_3$. To obtain the amplitude ${\cal M}$ in (\ref{Mlbl}), the hadronic contribution to the full correlator $\Pi^{\rho\nu\alpha\beta}(p_1,p_2,p_3 \to 0)$ needs to be known for all possible four-momenta $p_1$ and $p_2$. The correlator is defined via
\begin{eqnarray}
\label{four}
\Pi^{\rho\nu\alpha\beta}(p_1,p_2,p_3)&=& i^3 \int {\rm d}^4 x \int {\rm d}^4 y \int {\rm d}^4z e^{i (p_1 \cdot x + p_2 \cdot y + p_3 \cdot z)} \times \nonumber \\
&& \langle 0 | T \left[ V^\rho(0) V^\nu(x) V^\alpha(y) V^\beta(z) \right] |0\rangle
\end{eqnarray}
with $V^\mu(x)=\left[ \overline q \hat Q \gamma^\mu q \right](x)$ and $\hat Q = {\rm diag}(2,-1,-1)/3$ the quark charges. The external magnetic field couples to the photon leg with momentum $p_3 \to 0 $. In the remainder, whenever we refer to $a_\mu$ we specifically mean only the hadronic light-by-light contribution to it.
Clearly, the correlator (\ref{four}) is a complicated object. It contains many independent Lorentz structures, each of which comes with a function of the variables $p_1^2$, $p_2^2$ and $q^2$. As a consequence, many different energy scales can be involved in the calculation of the hadronic light-by-light contribution to the muon $g-2$. This makes it difficult to obtain the full needed behavior of the correlator (\ref{four}) from known constraints. Therefore no full first-principles calculation exists at present. The needed results cannot be directly related to measurable quantities either. Lattice QCD calculations are at the exploratory stage only, see e.g. Ref.~\refcite{HBIY06}.
In fact, there has long been a confusion about hadronic exchanges\footnote{We stick here to the formulation ``exchange'' as used by us\cite{BPP96}. It is often referred to as ``pole'' contributions. We consider this misleading because the exchanged particle is used off-shell.} versus quark loop estimates.
This confusion was resolved by organizing the different contributions according to the lowest power of $1/N_c$ and the lowest order in the chiral perturbation theory (CHPT) expansion at which they start contributing\cite{EdR94}. One can distinguish four types of contributions:
\begin{itemize}
\item Goldstone boson exchange contributions are order $N_c$ and start contributing at order $p^6$ in CHPT.
\item (Constituent) quark-loop and non-Goldstone boson exchange contributions are order $N_c$ and start contributing at order $p^8$ in CHPT.
\item Goldstone boson loop contributions are order one in $1/N_c$ and start contributing at order $p^4$ in CHPT.
\item Non-Goldstone boson loop contributions are order one in $1/N_c$ and start contributing at order $p^8$ in CHPT.
\end{itemize}
The two existing {\em full} calculations\cite{BPP96,HK98} are based on this classification. The Goldstone boson exchange contribution (GBE) was shown to be numerically dominant in Refs.~\refcite{BPP96} and \refcite{HK98} after strong cancellations between the other contributions. But the other contributions, though each smaller than the GBE, were not separately negligible. Using effective field theory techniques, Ref.~\refcite{KNPR02} showed that the leading double logarithm comes from the GBE and was positive. Refs.~\refcite{KNPR02} and \refcite{KN02} found a global sign mistake in the GBE of the earlier work\cite{BPP96,HK98}, which was confirmed by the authors of those works\cite{BPP01,HK01} and by others\cite{BCM02,RW02}. In the remainder we will always correct for this sign mistake without explicitly mentioning it.
Recently, Melnikov and Vainshtein pointed out new short-distance constraints on the correlator (\ref{four}) in Ref.~\refcite{MV04}, studied and extended in Ref.~\refcite{KPPR04}.
The authors of Ref.~\refcite{MV04} constructed a model which satisfies their main new short-distance constraints in order to study its effects, and found a number significantly different from the earlier work. They approximated the full hLBL by the GBE and axial-vector exchange contributions. One of the purposes of this review is to critically compare the different contributions in the different calculations and to expand somewhat on our earlier comments\cite{kazimierz}. For the dominant GBE we also present some new results on the momentum regions which are relevant.
In earlier work, several studies were done to check which momentum regions are important. These used different methods: varying the vector meson mass\cite{HK98}, studying the cut-off dependence\cite{BPP96}, and expanding around various momentum regions in the loop integrals\cite{MV04,Melnikov}.
In Sect.~\ref{pre2002} we discuss the calculations done before 2002 and compare their results. Sect.~\ref{after2002} discusses the short-distance constraints proposed by Melnikov and Vainshtein and the numerical results presented in their paper. In Sect.~\ref{GBE} we show in detail in which momentum regions the contributions from $\pi^0$ exchange originate, and Sect.~\ref{Comparison} compares and comments on the various contributions in the different calculations. Finally, we present our conclusions as to the present best value and error for the hLBL in Sect.~\ref{conclusions}.
\section{Results obtained up to 2002}
\label{pre2002}
In this section we discuss the calculations performed in the period 1995--2001. These were organized according to the large $N_c$ and CHPT countings\cite{EdR94} discussed above. The CHPT counting is used as a classification tool; none of these calculations were actually performed at a fixed order in CHPT. We want to emphasize once more that the calculations in Refs.
\refcite{BPP96,HK98,BPP01} and \refcite{HK01} showed that the Goldstone boson exchange is numerically dominant only after several large cancellations among the rest of the contributions. In this section we concentrate on the work in Refs.~\refcite{BPP96} and \refcite{BPP01}, with some comments and results from Refs.~\refcite{HK98,KN02} and \refcite{HK01}.
\subsection{Pseudo-Scalar Exchange}
The pseudo-scalar exchange was saturated by the Goldstone boson exchange in Refs.~\refcite{BPP96,HK98,BPP01} and \refcite{HK01}. This contribution is shown in Fig.~\ref{figexchange} with $M=\pi^0,\eta,\eta^\prime$.
\begin{figure}
\begin{center}
\epsfig{file=figexchange.ps}
\end{center}
\caption{A generic meson exchange contribution to the hadronic light-by-light part of the muon $g-2$.}
\label{figexchange}
\end{figure}
Refs.~\refcite{BPP96} and \refcite{BPP01} used a variety of $\pi^0 \gamma^* \gamma^*$ form factors
\begin{equation}
{\cal F}^{\mu\nu} (p_1,p_2) \equiv {N_c}/({6 \pi}) \,({\alpha}/{f_\pi}) \, i \, \varepsilon^{\mu\nu\alpha\beta} p_{1\alpha} p_{2 \beta} \, {\cal F}(p_1^2, p_2^2)
\end{equation}
fulfilling several possible QCD constraints. A more extensive analysis of this form factor was done in Ref.~\refcite{BP01}, finding very similar numerical results. In particular, the three-point form factors ${\cal F} (p_1^2,p_2^2)$ used in Refs.~\refcite{BPP96} and \refcite{BPP01} had the correct QCD short-distance behavior\footnote{The observance of QCD short-distance constraints was implemented for this one and several other contributions in Refs.~\refcite{BPP96} and \refcite{BPP01}, contrary to the often heard wrong claim that Ref.~\refcite{MV04} is the first calculation to take such constraints into account, e.g. see Ref.~\refcite{ET06}.}
\begin{eqnarray}
\label{pi0OPE}
{\cal F} (Q^2,Q^2) &\to& {A}/{Q^2} \, ,\quad {\cal F} (Q^2,0) \to {B}/{Q^2} \, ,
\end{eqnarray}
when $Q^2$ is Euclidean.
These form factors were in agreement with available data, including the slope at the origin, as well as treating the $\pi^0$, $\eta$ and $\eta'$ mixing. All form factors converged for a cutoff scale $\Lambda \sim (2 - 4)$ GeV and produced small numerical differences when plugged into the hadronic light-by-light contribution. Somewhat different ${\cal F} (p_1^2,p_2^2)$ form factors were used in Refs.~\refcite{HK98,KN02} and \refcite{HK01}, but the results agree well. For comparison, one can find in Tab.~\ref{tab1} the results of Refs.~\refcite{BPP96,HK98,KN02,BPP01} and \refcite{HK01} for the $\pi^0$ exchange alone and after adding the $\eta$ and $\eta'$ exchange contributions to the dominant $\pi^0$ one.
\begin{table}
\begin{center}
\tbl{Results for the $\pi^0$ and $\pi^0$, $\eta$ and $\eta'$ exchange contributions. \label{tab1}}
{
\begin{tabular}{c|cc}
&\multicolumn{2}{c}{ $10^{10} \times a_\mu$}\\
& $\pi^0$ only & $\pi^0$, $\eta$ and $\eta'$\\
\hline
Bijnens, Pallante and Prades \cite{BPP96,BPP01} & 5.6 & 8.5 $\pm$ 1.3 \\
Hayakawa and Kinoshita \cite{HK98,HK01} & 5.7 & 8.3 $\pm$ 0.6 \\
Knecht and Nyffeler \cite{KN02} ($h_2=0$) & 5.8 & 8.3 $\pm$ 1.2\\
Knecht and Nyffeler \cite{KN02} ($h_2=-10$~GeV$^2$) & 6.3 & \\
Melnikov and Vainshtein \cite{MV04} & 7.65 &11.4$\pm$1.0
\end{tabular}}
\end{center}
\end{table}
\subsection{Axial-Vector Exchange}
This contribution is depicted in Fig.~\ref{figexchange} with $M=A=a_1^0,f_1$ and possibly other axial-vector resonances. For this contribution one needs the $A\gamma\gamma^*$ and $A\gamma^*\gamma^*$ form factors. Little is known about these, but there exist anomalous Ward identities which relate them to the $P\gamma\gamma^*$ and $P\gamma^*\gamma^*$ form factors. This contribution was not studied by Knecht and Nyffeler\cite{KN02}. Refs.~\refcite{BPP96,HK98,BPP01} and \refcite{HK01} used nonet symmetry, which is exact in the large $N_c$ limit, for the masses of the axial-vector resonances.
Their results are shown in Tab.~\ref{tab2} for comparison.
\begin{table}
\begin{center}
\tbl{Results for the axial-vector exchange contributions. \label{tab2}}{
\begin{tabular}{c|c}
Axial-Vector Exchange Contributions & $10^{10} \times a_\mu$\\
\hline
Bijnens, Pallante and Prades \cite{BPP96,BPP01} & 0.25 $\pm$ 0.10 \\
Hayakawa and Kinoshita \cite{HK98,HK01} & 0.17 $\pm$ 0.10\\
Melnikov and Vainshtein \cite{MV04} & 2.2$\pm$0.5
\end{tabular}}
\end{center}
\end{table}
\subsection{Scalar Exchange}
This contribution is shown in Fig.~\ref{figexchange} with $M=S=a_0,f_0$ and possibly other scalar resonances. For this contribution one needs the $S\gamma\gamma^*$ and $S\gamma^*\gamma^*$ form factors. Within the extended Nambu--Jona-Lasinio (ENJL) model used in Refs.~\refcite{BPP96} and \refcite{BPP01}, chiral Ward identities impose relations between the constituent quark loop and scalar exchanges. The needed scalar form factors are also constrained at low energies by CHPT. Refs.~\refcite{BPP96} and \refcite{BPP01} used nonet symmetry for the masses. This contribution was not included by the other groups\cite{HK98,HK01,MV04}. The leading logarithms of the scalar exchange are the same as those of the pion exchange but with opposite sign\cite{BCM02}. Refs.~\refcite{BPP96} and \refcite{BPP01} find that sign for the full scalar exchange contribution, obtaining
\begin{equation}
a_\mu ({\rm Scalar}) = - (0.7\pm0.2) \times 10^{-10} \, .
\end{equation}
\subsection{Other contributions at leading order in $1/N_c$}
This includes any contributions that are not modeled by exchanged particles. At short distances, the main one is the quark loop. At long distances they are often modeled as a constituent quark loop with form factors in the couplings to photons. This corresponds to the contribution shown in Fig.~\ref{figquarkloop}.
\begin{figure}
\begin{center}
\epsfig{file=figg-2-3.ps,width=5cm,clip}
\end{center}
\caption{Quark-loop contribution, as modeled in ENJL.}
\label{figquarkloop}
\end{figure}
Refs.~\refcite{BPP96} and \refcite{BPP01} split up the quark momentum integration into two pieces by introducing an Euclidean matching scale $\Lambda$. At energies below $\Lambda$, the ENJL model was used to compute the quark-loop contribution, while above $\Lambda$ a bare (partonic) heavy quark loop of mass $\Lambda$ was used. The latter part scales as $1/\Lambda^2$ and mimics the high-energy behavior of QCD for a massless quark with an IR cut-off around $\Lambda$ --see footnote $^b$. Adding these two contributions yields a stable result, as can be seen in Tab.~\ref{quarkL}.
\begin{table}
\begin{center}
\tbl{Sum of the short- and long-distance quark loop contributions\protect\cite{BPP96} as a function of the matching scale $\Lambda$. \label{quarkL}}{
\begin{tabular}{c|cccc}
$\Lambda$ [GeV] & 0.7 & 1.0 & 2.0 &4.0\\
\hline
\rule{0cm}{13pt} $10^{10} \times a_\mu$ & 2.2 & 2.0& 1.9& 2.0
\end{tabular}}
\end{center}
\end{table}
\subsection{NLO in $1/N_c$: Goldstone Boson Loops}
At next-to-leading order (NLO) in $1/N_c$, the leading contribution in the chiral counting to the correlator in (\ref{Mlbl}) corresponds to charged pion and Kaon loops, which can be depicted analogously to the quark loop in Fig.~\ref{figquarkloop} but with charged pions and Kaons running inside the loop instead. In general one expects loops of heavier particles to be suppressed; this contribution has only been evaluated for the pion loop and the much smaller Kaon loop. The needed form factors\footnote{Note that neither the ENJL model nor any fixed order in CHPT was used in any of the estimates of this contribution.} for the $\gamma^* P^+P^-$ and $\gamma^* \gamma^* P^+P^-$ vertices were studied extensively in Ref.~\refcite{BPP96}. In particular, it was studied which form factors are fully compatible with chiral Ward identities.
The full vector meson dominance model (VMD) is one model fulfilling the known constraints. The conclusion, unfortunately, is that there is a large ambiguity in the momentum dependence starting at order $p^6$ in CHPT. Both the full VMD model\cite{BPP96,BPP01} and the hidden gauge symmetry (HGS) model\cite{HK98,HK01} satisfy the known constraints. Unfortunately, this ambiguity cannot easily be resolved since there are no data for $\gamma^* \gamma^* \to \pi^+ \pi^-$. Adding the charged pion and Kaon loops, the results obtained in Refs.~\refcite{BPP96} and \refcite{HK98} are listed in Tab.~\ref{tab3}.
\begin{table}
\begin{center}
\tbl{Results for the charged pion and Kaon loop contributions to the hadronic light-by-light contribution to the muon $g-2$. \label{tab3}}{
\begin{tabular}{c|c}
Charged Pion and Kaon Loop Contributions & $10^{10} \times a_\mu$\\
\hline
Bijnens, Pallante and Prades (Full VMD) \cite{BPP96,BPP01} & $-$1.9 $\pm$ 0.5 \\
Hayakawa and Kinoshita (HGS) \cite{HK98,HK01} & $-$0.45 $\pm$ 0.85\\
Melnikov and Vainshtein (Full NLO in $1/N_c$ guess) \cite{MV04} & 0$\pm$1
\end{tabular}}
\end{center}
\end{table}
In view of this model dependence, the authors of Refs.~\refcite{BPP96} and \refcite{BPP01} considered that the difference between the results from Ref.~\refcite{BPP96} and Ref.~\refcite{HK98} for this contribution needs to be added {\em linearly} to the final uncertainty of the hadronic light-by-light contribution to $a_\mu$.
\section{New Short-Distance Constraints}
\label{after2002}
Melnikov and Vainshtein pointed out\cite{MV04} a new short-distance constraint on the correlator (\ref{four}).
This constraint is for
\begin{eqnarray}
\label{OPEMV}
\langle T [ V^\nu(p_1) V^\alpha(p_2) V^\rho(-q=-p_1-p_2) ]| \gamma(p_3 \to 0) \rangle
\end{eqnarray}
and follows from the OPE for two vector currents when $P_1^2\simeq P_2^2 \gg Q^2$, with $P_1^2=-p_1^2$, $P_2^2=-p_2^2$ and $Q^2=-q^2$:
\begin{eqnarray}
\label{OPEVV}
T[ V^\nu(p_1) V^\alpha(p_2) ] \sim \varepsilon^{\nu\alpha \mu\beta} \, (\hat p_\mu/\hat p^2) \, [\overline q \hat Q^2 \gamma_\beta \gamma_5 q] (p_1+p_2)
\end{eqnarray}
with $\hat p = (p_1-p_2)/2 \simeq p_1 \simeq -p_2$ and $\hat Q$ the light-quark electric charge matrix defined after (\ref{four}). This constraint was afterward generalized in Ref.~\refcite{KPPR04}. Note that the new part is the use of (\ref{OPEVV}) for the full correlator (\ref{four}); short-distance constraints were already used to obtain the first constraint in (\ref{pi0OPE}). The authors of Ref.~\refcite{MV04} saturated the full correlator by exchanges. The new OPE constraint is satisfied by making the pseudo-scalar exchange vertex on the $q,p_3$ side of Fig.~\ref{figexchange} point-like rather than including a form factor. This change strongly breaks the symmetry between the two ends of the exchanged particle. There are also OPE constraints for $P_1^2\approx P_2^2\approx Q^2$ and $P_2^2\approx Q^2 \gg P_1^2$, essentially derived from the quark-loop behavior in these regimes\cite{MV04}. Both of the latter OPE constraints on the correlator (\ref{four}) are not satisfied by the model used in Ref.~\refcite{MV04}, but its authors argued that this makes only a small numerical difference, of order $0.05 \times 10^{-10}$.
Ref.~\refcite{MV04} added an axial-vector exchange contribution to the pseudo-scalar exchange. They found this contribution to be extremely sensitive to the mixing of the resonances $f_1(1285)$ and $f_1(1420)$, as can be seen in Tab.~\ref{massmixing}, taken from the results there.
\begin{table}
\begin{center}
\tbl{Results quoted in Ref.
\protect\refcite{MV04} for the pseudo-vector exchange, depending on the $f_1(1285)$ and $f_1(1420)$ resonance mass mixing. \label{massmixing}}{
\begin{tabular}{c|c}
Mass Mixing & $10^{10} \times a_\mu$\\
\hline
No OPE and Nonet Symmetry with M=1.3 GeV& 0.3 \\
New OPE and Nonet Symmetry with M= 1.3 GeV & 0.7 \\
New OPE and Nonet Symmetry with M= M$_\rho$ & 2.8 \\
New OPE and Ideal Mixing with Experimental Masses & 2.2 $\pm$ 0.5\\
\end{tabular}}
\end{center}
\end{table}
The difference between the lines labeled ``No OPE'' and ``New OPE'' is the effect of making the $q,p_3$ vertex point-like. The authors of Ref.~\refcite{MV04} took the ideal mixing result as their final result for $a_\mu$.
\section{Momentum Regions for $\pi^0$ Exchange}
\label{GBE}
We were somewhat puzzled by the effect of saturating the new short-distance constraint by the GBE in Ref.~\refcite{MV04} and have therefore done a few studies to see whether the changes there come from large momentum regimes or are located elsewhere. This was because our total estimate of the quark loop was similar to the numerical change in the GBE of Ref.~\refcite{MV04}. In order to do this study, we have adapted the method used in Refs.~\refcite{BPP96} and \refcite{BP01} to various form factors used in earlier works. We rotate the integrals in (\ref{Mlbl}) into Euclidean space. The eight-dimensional integral can easily be reduced to a five-dimensional integral. Here one can choose as variables\footnote{In Refs.~\refcite{BPP96} and \refcite{BP01} a different set was used, not quite as suitable for the present study.} $P_1$, $P_2$ and $Q$ and two angles $\theta_1$ and $\theta_2$. These are the angles between the Euclidean $p_1$, $p_2$ and the muon momentum, while $P_1$, $P_2$ and $Q$ are the sizes of the Euclidean momenta with $P_1^2=-p_1^2$, $P_2^2=-p_2^2$ and $Q^2=-q^2$. Ref.~\refcite{KN02} performed the integrals over three of these quantities analytically, but not for the asymmetric case used by Ref.~\refcite{MV04}.
We have therefore used numerical integration. The main integration routine used by us earlier\cite{BPP96,BP01} was VEGAS. For the present study we have also performed the integration using an adaptive Gaussian multidimensional integration routine and have checked for several quantities that both agree and reproduce earlier known results.
We will show the contributions to the muon anomalous magnetic moment from $\pi^0$ exchange for several different form factors. These correspond to the point-like $\pi^0\gamma^*\gamma^*$ form factor (WZW), the full vector meson dominance model (VMD), the LMD+V form factor\cite{KN02} with $h_2=-10$~GeV$^2$ (KN), and the latter form factor but with the point-like version on the soft-photon end\cite{MV04} (MV). We will refer to these form factors as WZW, VMD, KN, and MV in the remainder of this section. We have used the values $h_1=0, h_5 = 6.93$~GeV$^2$ and the value of $h_7$ as given by Ref.~\refcite{KN02}. We picked the value of $h_2$ that was argued\cite{MV04} to better reproduce subleading OPE constraints. It raises the central value somewhat compared to $h_2=0$, as shown in Tab.~\ref{tab1}. As inputs we used $M_V=M_{V_1}=0.770$~GeV and $M_{V_2}= 1.465$~GeV, $F_\pi=92.4$~MeV and the measured $\pi^0$ and muon masses. This is the origin of the minor differences with Ref.~\refcite{KN02}. As a first indication of where the contributions to $a_\mu$ come from, we have listed in Tab.~\ref{tab5} the value of $a_\mu$ for the four cases with the constraint $Q,P_1,P_2 < \Lambda$.
\begin{table}
\tbl{$\pi^0$-exchange results for $10^{10} \times a_\mu$ with a cut-off on the three photon momenta for the four cases described in the text. The last column is the difference between the MV and KN form factors. The numerical error is at or below the last digit quoted.
\label{tab5}}
{\begin{tabular}{c|ccccc}
Cut-off $\Lambda$ (GeV) & WZW & VMD & KN & MV & MV$-$KN\\
\hline
0.5 & 4.74& 3.37 & 3.39 & 3.68 & 0.29\\
0.7 & 7.51& 4.41 & 4.47 & 5.01 & 0.54\\
1.0 & 11.3& 5.14 & 5.29 & 6.15 & 0.86\\
2.0 & 21.9& 5.60 & 5.99 & 7.34 & 1.35\\
4.0 & 33.8& 5.65 & 6.20 & 7.79 & 1.59\\
8.0 & 49.6& 5.65 & 6.24 & 7.92 & 1.69\\
16.0& 68. & 5.64 & 6.23 & 7.96 & 1.73
\end{tabular}
}
\end{table}
We have included the point-like case, which diverges as $\log^2\Lambda$, to show the size of the suppression introduced by the form factors. Note that we cannot reproduce the 7.65 of Ref.~\refcite{MV04}, but we do reproduce the results of Refs.~\refcite{BPP96,KN02} and \refcite{BP01}.
The new short-distance constraint (\ref{OPEVV}) came from the region $Q\ll P_1\approx P_2$. We have thus checked how much of the difference and of the total comes from the region with $Q < {\rm min}(P_1,P_2)$ and from the region with $Q$ larger than at least one of $P_1$, $P_2$; the numbers quoted are for $\Lambda = 16$~GeV. The numbers are $10^{10} \times a_\mu$.
\begin{equation}
\mbox{
\begin{tabular}{cccc}
$Q<\min(P_1,P_2)$: & 4.01 (\mbox{KN}) & 4.74 (\mbox{MV}) & 0.73 (\mbox{MV}$-$\mbox{KN}) \\
$Q>\min(P_1,P_2)$:& 2.24 (\mbox{KN}) & 3.23 (\mbox{MV}) & 0.99 (\mbox{MV}$-$\mbox{KN})
\end{tabular}}
\end{equation}
As one sees, in fact most of the difference comes from the region where the OPE condition is strongly violated.
The results in Tab.~\ref{tab5} give only a partial indication of which momentum regions are important. In the remaining figures we therefore show the contribution to $a_\mu$ in several ways. We always denote by $p_1,p_2$ the momenta on the $\pi^0$ side, with both photons connected to the muon line, and by $q$ the momentum on the soft-photon side.
We can thus rewrite the contribution to $a_\mu$ of (\ref{Mlbl}) in various ways:
\begin{eqnarray}
\label{defda}
a_\mu &=& \int dP_1 dP_2\,\, a_\mu^{PP}(P_1,P_2) \nonumber\\
&=& \int dl_1 dl_2\,\, a_{\mu}^{LL}(l_1,l_2) \nonumber\\
&=& \int dl_1 dl_2 dl_q\,\, a_{\mu}^{LLQ}(l_1,l_2,l_q) \, , \nonumber\\
{\rm with} \quad l_1&=& \log(P_1/\mbox{GeV}),\quad l_2 = \log(P_2/\mbox{GeV}), \quad {\rm and} \quad l_q = \log(Q/\mbox{GeV}) \, .
\end{eqnarray}
In Fig.~\ref{figmvPP} we have plotted $10^{10} \times a_\mu^{PP}(P_1,P_2)$ as a function of $P_1$ and $P_2$.
\begin{figure}
\begin{center}
\epsfig{file=figmvPP.ps,width=0.9\textwidth}
\end{center}
\caption{The quantity $a_\mu^{PP}$ of Eq.~(\ref{defda}) as a function of $P_1$ and $P_2$ for the MV choice. \label{figmvPP}}
\end{figure}
In this way of plotting it is, however, rather difficult to see why the contribution with at least one scale above 1~GeV is as large as shown in Tab.~\ref{tab5}. The quantity $a_\mu^{LL}$ is much more suitable for this. The result for $a_\mu$ after integrating this quantity is directly proportional to the volume under the surface as it is plotted in Figs.~\ref{figmvLL}, \ref{figknLL} and \ref{figvmdLL} with a logarithmic scale for $P_1$ and $P_2$. We have used the same scale for all three plots. What one finds is that the VMD form factor has much smaller contributions for $P_1$ and $P_2$ large, but both MV and KN show a significant contribution even at fairly high values of $(P_1,P_2)$. Also, the contribution at these higher values of $(P_1,P_2)$ is concentrated along the axis $P_1=P_2$. One also sees, by comparing Figs.~\ref{figmvLL} and \ref{figknLL}, that the enhancement of the MV result over the KN result comes not from a very different shape but rather from a general increase over the entire region. The parts below $0.1$~GeV are not plotted; these are very similar for all three cases.
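The change of variables in Eq.~(\ref{defda}) implies $a_\mu^{LL}(l_1,l_2) = P_1\,P_2\,a_\mu^{PP}(P_1,P_2)$, since $dP = P\,dl$; this is why the volume under the log-scale surfaces is directly the contribution to $a_\mu$. A small numerical check of this Jacobian identity on a toy density (an illustrative stand-in, not the physical $a_\mu^{PP}$):

```python
import numpy as np

def integrate_2d(f_vals, x):
    # trapezoidal weights for a (possibly nonuniform) 1-d grid x,
    # applied along both directions of the 2-d array f_vals
    w = np.empty_like(x)
    w[1:-1] = (x[2:] - x[:-2]) / 2.0
    w[0] = (x[1] - x[0]) / 2.0
    w[-1] = (x[-1] - x[-2]) / 2.0
    return float((w[:, None] * w[None, :] * f_vals).sum())

def a_pp(p1, p2):
    # toy stand-in for the density a_mu^{PP}
    return np.exp(-(p1**2 + p2**2))

P = np.linspace(1e-3, 5.0, 2000)      # momenta in GeV
L = np.log(P)                         # l = log(P / GeV)

app = a_pp(P[:, None], P[None, :])
all_ = P[:, None] * P[None, :] * app  # a^{LL} = P1 P2 a^{PP}

I_lin = integrate_2d(app, P)          # integral in dP1 dP2
I_log = integrate_2d(all_, L)         # same integral in dl1 dl2
assert abs(I_lin - I_log) < 1e-2 * abs(I_lin)
```

The two numerical integrals agree, confirming that plotting $P_1 P_2\,a_\mu^{PP}$ on logarithmic axes preserves the total.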
A plot for the WZW case simply shows a constantly growing ridge along $P_1=P_2$, which then produces the $\log^2\Lambda$ divergence. \begin{figure} \begin{center} \epsfig{file=figmvLL.ps,width=0.9\textwidth} \end{center} \caption{The quantity $a_\mu^{LL}$ of Eq.~(\ref{defda}) as a function of $P_1$ and $P_2$ for the MV choice. $a_\mu$ is directly related to the volume under the surface as plotted. \label{figmvLL}} \end{figure} \begin{figure} \begin{center} \epsfig{file=figknLL.ps,width=0.9\textwidth} \end{center} \caption{The quantity $a_\mu^{LL}$ of Eq.~(\ref{defda}) as a function of $P_1$ and $P_2$ for the KN choice. $a_\mu$ is directly related to the volume under the surface as plotted. \label{figknLL}} \end{figure} \begin{figure} \begin{center} \epsfig{file=figvmdLL.ps,width=0.9\textwidth} \end{center} \caption{The quantity $a_\mu^{LL}$ of Eq.~(\ref{defda}) as a function of $P_1$ and $P_2$ for the VMD choice. $a_\mu$ is directly related to the volume under the surface as plotted. \label{figvmdLL}} \end{figure} The figures above indicate which ranges of $(P_1,P_2)$ are important. But which values of $Q$ are relevant? This will of course depend on the values of $P_1$ and $P_2$. We show in Fig.~\ref{figmvLLQ} the value of $a_\mu^{LLQ}$ along the line $P_1=P_2$. Again, the contribution to $a_\mu$ is proportional to the volume under the surface as shown. This is shown for the MV and KN form factors in Figs.~\ref{figmvLLQ} and \ref{figknLLQ}, respectively. One surprise for us was that, while the tail towards larger values of $Q$ is somewhat larger for the MV form factor than for the KN one, the effect is much smaller than expected and only marginally visible in the plot. \begin{figure} \begin{center} \epsfig{file=figmvLLQ.ps,width=0.9\textwidth} \end{center} \caption{The quantity $a_\mu^{LLQ}$ of Eq.~(\ref{defda}) as a function of $Q$ and $P_1=P_2$ for the MV choice. $a_\mu$ is directly related to the volume under the surface as plotted.
\label{figmvLLQ}} \end{figure} \begin{figure} \begin{center} \epsfig{file=figknLLQ.ps,width=0.9\textwidth} \end{center} \caption{The quantity $a_\mu^{LLQ}$ of Eq.~(\ref{defda}) as a function of $Q$ and $P_1=P_2$ for the KN choice. $a_\mu$ is directly related to the volume under the surface as plotted. \label{figknLLQ}} \end{figure} The main conclusion from this section is that the numerical difference between MV and KN comes from relatively low values of $Q$ and moderate values of $P_1$ and $P_2$. We have provided plots and numerics so that readers can draw their own conclusions. \section{Comparison} \label{Comparison} Let us now compare the results of the three calculations of Refs. \refcite{BPP96,BPP01,HK98,HK01} and \refcite{MV04}. In Tab.~\ref{comparisontab}, the results to leading order in $1/N_c$ are shown. The quark loop is of the same order and has to be {\em added} to obtain the full hadronic light-by-light contribution, while the model used in Ref. \refcite{MV04} is saturated by exchanges alone. \begin{table} \begin{center} \tbl{Full hadronic light-by-light contribution to $a_\mu$ at ${\cal O}(N_c)$. The difference between the two results of Refs. \protect\refcite{BPP96} and \protect\refcite{BPP01} is the contribution of the scalar exchange, $-(0.7\pm0.1) \times 10^{-10}$. This contribution is not included in Refs. \protect\refcite{HK98,HK01} and \protect\refcite{MV04}. \label{comparisontab} \label{largeN}}{ \begin{tabular}{c|c} Hadronic light-by-light at ${\cal O} (N_c)$ & $10^{10} \times a_\mu$\\ \hline Nonet Symmetry + Scalar Exchange \protect\cite{BPP96,BPP01} & 10.2 $\pm$ 1.9\\ Nonet Symmetry \protect\cite{BPP96,BPP01}& 10.9 $\pm$ 1.9 \\ Nonet Symmetry \protect\cite{HK98,HK01} & 9.4 $\pm$ 1.6 \\ New OPE and Nonet Symmetry \protect\cite{MV04} & 12.1 $\pm$ 1.0 \\ New OPE and Ideal Mixing \protect\cite{MV04} & 13.6 $\pm$ 1.5 \end{tabular}} \end{center} \end{table} In the Goldstone boson exchange (GBE) the effect of the new OPE in Ref.
\refcite{MV04} is a little larger than the quark loop contributions of Refs. \refcite{BPP96} and \refcite{BPP01}, but compatible within one sigma. This contribution has been discussed in more detail in the previous section. The new OPE in Ref. \refcite{MV04} similarly increases the axial-vector exchange with nonet symmetry from 0.3 $\times 10^{-10}$ to 0.7 $\times 10^{-10}$. One thus sees a reasonable agreement in the comparison of the ${\cal O}(N_c)$ results of Refs. \refcite{BPP96,BPP01,HK98,HK01} and \refcite{MV04} when using the same mass mixing for the axial-vectors, namely, (10.9 $\pm$ 1.9, 9.4 $\pm$ 1.6, 12.1 $\pm$ 1.0). The final differences are due to the additional increase of 1.5 $\times 10^{-10}$ from the ideal mixing in the axial-vector exchange in Ref. \refcite{MV04} and the scalar exchange of $-$0.7 $\times 10^{-10}$ in Refs. \refcite{BPP96} and \refcite{BPP01}. Let us now see what the different predictions at NLO in $1/N_c$ are. In Ref. \refcite{MV04}, the authors studied the chiral expansion of the charged pion loop using the HGS model used in Refs. \refcite{HK98} and \refcite{HK01}. This model is known not to reproduce the correct QCD high-energy behavior in some two-point functions; in particular, it does not fulfill the Weinberg sum rules, see e.g. Ref. \refcite{BPP96}. Within this model, Ref. \refcite{MV04} showed that there is a large cancellation between the first three terms of an expansion of the charged pion loop contribution in powers of $(m_\pi/M_\rho)^2$. It is not clear how one should interpret this. In Ref. \refcite{BPP96} some studies of the cut-off dependence of this contribution were done, and the bulk of their final number came from fairly low energies, which should be less model dependent. However, it is clear that there is a large model dependence in the NLO in $1/N_c$ contributions. But simply taking it to be $(0\pm 1) \times 10^{-10}$ as in Ref. \refcite{MV04} is rather drastic and certainly has an underestimated error.
The argument of very large higher-order corrections when expanded in CHPT orders, which was used against this contribution in Ref. \refcite{MV04}, also applies to the $\pi^0$ exchange, as can be seen from Tab.~\ref{tab5} by comparing the WZW column with the others. Let us now compare the results for the full hadronic light-by-light contribution to $a_\mu$ when summing all contributions. The final results quoted in Refs. \refcite{BPP96,BPP01}, \refcite{HK98,HK01} and \refcite{MV04} can be found in Tab.~\ref{final}. The apparent agreement between the final numbers of Refs. \refcite{BPP96,BPP01} and \refcite{HK98,HK01} hides non-negligible differences which numerically compensate to a large extent. There are differences in the quark loop and in the charged pion and kaon loops, and Refs. \refcite{HK98,HK01} do not include the scalar exchange. \begin{table} \begin{center} \tbl{Results for the full hadronic light-by-light contribution to $a_\mu$. \label{final}}{ \begin{tabular}{c|c} Full Hadronic Light-by-Light & $10^{10} \times a_\mu$\\ \hline Bijnens, Pallante and Prades \cite{BPP96,BPP01}& 8.3 $\pm$ 3.2\\ Hayakawa and Kinoshita \cite{HK98,HK01}& 8.9 $\pm$ 1.7\\ Melnikov and Vainshtein \cite{MV04}& 13.6 $\pm$ 2.5\\ \end{tabular}} \end{center} \end{table} Comparing the results of Refs. \refcite{BPP96,BPP01} and \refcite{MV04}, we have seen several differences of order $1.5 \times 10^{-10}$, differences which are not related to the one induced by the new short-distance constraint introduced in Ref. \refcite{MV04}. These differences are numerically of the same order as, or smaller than, the uncertainty quoted in Refs. \refcite{BPP96,BPP01}, but they tend to add up, making the total difference large, as follows: the different axial-vector mass mixing accounts for $-1.5 \times 10^{-10}$, the absence of scalar exchange in Ref. \refcite{MV04} accounts for $-0.7 \times 10^{-10}$, and the absence of the NLO in $1/N_c$ charged pion and kaon loop contribution in Ref.
\refcite{MV04} accounts for $-1.9 \times 10^{-10}$. These model-dependent differences add up to $-4.1 \times 10^{-10}$ out of the final $-5.3 \times 10^{-10}$ difference between the results in Refs. \refcite{BPP96,BPP01} and \refcite{MV04}. In addition, we have shown from which regions in momentum space the main contribution originates. Clearly, the new OPE constraint found in Ref. \refcite{MV04} alone does not account for the large final numerical difference with respect to Refs. \refcite{BPP96,BPP01}, as a reading of that reference might suggest. \section{Conclusions} \label{conclusions} At present, the only possible conclusion is that the situation of the hadronic light-by-light contribution to $a_\mu$ is unsatisfactory. However, looking into the various calculations one finds a {\em numerical} agreement within roughly one sigma when comparing the ${\cal O}(N_c)$ results found in Refs. \refcite{BPP96,HK98,BPP01,HK01} and \refcite{MV04}, see Tab.~\ref{largeN}. A new full ${\cal O}(N_c)$ calculation studying the full correlator with the large-$N_c$ techniques developed in Refs. \refcite{BGLP03} and \refcite{CEEKPP06}, and references therein, seems feasible and is definitely desirable. At NLO in $1/N_c$, one needs to control both Goldstone and non-Goldstone boson loop contributions. The strong model dependence of the Goldstone boson loop is clearly visible in the different results of Refs. \refcite{BPP96,BPP01} and \refcite{HK98,HK01}, and is discussed in Refs. \refcite{BPP96} and \refcite{MV04}. For non-Goldstone boson loops, little is known on how to treat them consistently; a recent attempt in another context is Ref. \refcite{RSP04}.
In the meantime, we propose as an educated guess for the total hLBL\footnote{ This educated guess agrees with the one presented by Eduardo de Rafael \cite{MRR06} and ourselves \cite{kazimierz} at the ``Final Euridice Meeting'' in Kazimierz, August 2006 and by one of us (JB) at the ``DESY Theory Workshop'' in Hamburg, September 2005.} \begin{eqnarray} \label{finalpluserror} a_\mu= (11 \pm 4) \times 10^{-10} \, . \end{eqnarray} We believe that this number and error capture our present understanding of the hLBL contribution to $a_\mu$. This number can be reached using several different arguments: the new short-distance constraint found in Ref. \refcite{MV04} and the ideal mixing for the axial-vector exchange should lead to some increase of the results of Refs. \refcite{BPP96,BPP01} and \refcite{HK98,HK01}; the scalar exchange and the pion and kaon loops are expected to lead to some decrease of the result of Ref. \refcite{MV04}; one can also average the leading in $1/N_c$ results (the three middle results of Tab.~\ref{largeN}), which agree within one sigma. The final error remains a guess, but the error in (\ref{finalpluserror}) is chosen to include all the known uncertainties. \section*{Acknowledgments} This work is supported in part by the European Commission (EC) RTN network, Contract No. MRTN-CT-2006-035482 (FLAVIAnet), the European Community-Research Infrastructure Activity Contract No. RII3-CT-2004-506078 (HadronPhysics) (JB), the Swedish Research Council (JB), MEC (Spain) and FEDER (EC) Grant No. FPA2006-05294 (JP), and Junta de Andaluc\'{\i}a Grant Nos. P05-FQM-101 and P05-FQM-437 (JP).
\section{Introduction} For a long time, it has remained a challenging issue in nuclear physics to explore the existence limit of very heavy nuclei, i.e., the superheavy elements (SHE) with $Z \geq 104$, and the so-called stability island of superheavy nuclei (SHN). Far from being simply large clusters of nucleons, these fascinating species owe their very existence to subtle contributions to the nuclear binding energy~\cite{Block2010}. Experimentally, the discoveries of new elements up to $Z = 118$ have been reported in Refs.~\cite{Oganessian2006, Oganessian2010}. The increasing survival probabilities with increasing proton number of SHE from $Z = 114$ to $118$ seem to indicate enhanced shell effects with increasing $Z$, and therefore a proton magic shell may emerge at $Z \geq 120$~\cite{Adamian2009}. On the other hand, theoretical studies have provided a large amount of valuable information for the exploration of SHN. These studies fall into different categories: Microscopic-Macroscopic (Mic-Mac) models~\cite{Moller1995, Baran2005}, non-relativistic mean field~\cite{Rutz1997, Bender1999, Decharge2003} and covariant mean field~\cite{Rutz1997, Bender1999, Zhang2005} approaches. The extrapolation towards the superheavy region challenges the predictive power of nuclear models. The Mic-Mac methods, although generally successful in describing nuclear binding, require preconceived knowledge of the expected densities and single-particle (s.p.) potentials~\cite{Moller1995}, which fades away when stepping into new regions where stronger polarization effects and more complicated functional forms of the densities may occur~\cite{Bender1999, Decharge1999}. The stability of nuclei is mostly driven by shell effects, and therefore self-consistent mean field methods are probably the best conceptual tool to explore the superheavy region, although the Mic-Mac models still give a better quantitative description of heavy nuclides.
We are searching for doubly closed-shell systems, and we assume spherical symmetry. The shells are then essentially determined by the spin-orbit (SO) splittings and by the effective masses. Another effect which affects the shell structure is related to the pseudo-spin symmetry (PSS) and its breaking~\cite{Ginocchio2005, Long2007}. In the non-relativistic self-consistent mean field theory~\cite{Bender2003, Stone2007}, the SO splittings depend directly on an extra SO parameter in the energy density functional. In the superfluid covariant density functional (CDF) theory, like the relativistic Hartree-Bogoliubov (RHB)~\cite{Vretenara2005, Meng2006} or the relativistic Hartree-Fock-Bogoliubov (RHFB)~\cite{Long2010a} approaches, the SO splitting depends directly on the Lorentz scalar and vector mean fields without any additional term. The SO splitting is not adjusted and can be considered as a prediction of relativistic Lagrangians, even in ordinary nuclei. This might be an advantage for exploring unknown regions. Furthermore, in the more complete RHFB version of the CDF theory, the SO splittings can be affected by meson-nucleon couplings, such as the Lorentz $\rho$-tensor couplings~\cite{Long2007}, which are not present in the simpler RHB. This is one of the main motivations for undertaking the present study in the framework of the RHFB approach. In this work we investigate the superheavy nuclides covering $Z = 110-140$. In the pairing channel, the finite-range Gogny force D1S~\cite{Berger1984}, renormalized by a strength factor $f$, is adopted as the effective pairing interaction. The strength factor $f$ is introduced to compensate for level-density differences among various mean field approaches. It was indeed shown that pairing-related quantities, such as odd-even mass differences and moments of inertia, are systematically overestimated in RHFB calculations of heavy nuclei with the original Gogny pairing force~\cite{Wang2013}.
The strength factor $f=0.9$ is therefore adjusted to reproduce the odd-even mass differences of odd Pb isotopes. Concerning the relativistic Hartree-Fock (RHF) mean field, the adopted effective Lagrangians are PKA1~\cite{Long2007} and the PKOi series (i=2, 3)~\cite{Long2006a, Long2008}. To compare with approaches neglecting the Fock term (RHB), we also use the PKDD~\cite{Long2004} and DD-ME2~\cite{Lalazissis2005} Lagrangians. The integro-differential RHFB equations are solved by using a Dirac Woods-Saxon basis~\cite{Zhou2003} with a radial cutoff $R = 28$~fm. The numbers of positive and negative energy states in the basis expansion for each s.p. angular momentum ($l, j$) are chosen to be 44 and 12, respectively. \begin{figure}[t] \centering \ifpdf \includegraphics[width = 0.47\textwidth]{SO.pdf} \else \includegraphics[width = 0.47\textwidth]{SO.eps} \fi \caption{ Relative differences between the theoretical SO splittings $\delta\epsilon_{ls}^{cal.}$ and the experimental ones $\delta\epsilon_{ls}^{exp.}$~\cite{Nudat2013} in the (semi)-doubly magic nuclei indicated on the horizontal axis. Particle and hole SO partners are shown on the left while particle-hole ones are on the right. See the text for details.} \label{fig:SO} \end{figure} Let us first discuss extrapolations to SHE of mean field models which are well constrained on medium and heavy nuclei. For instance, due to the high level density in SHE, small variations in the s.p. level spacings, caused by the different SO splitting predictions of various models, can have a large effect on magicity. The SO force is therefore a crucial ingredient of nuclear structure models, especially when it comes to extrapolations to SHN. Fig.~\ref{fig:SO} shows the relative differences between calculated and experimental~\cite{Nudat2013} SO splittings for a selection of levels having well-controlled spectroscopic factors.
The relative differences are typically $\sim 20\%$ when both partners are particle or hole states, but they become larger otherwise. This is not surprising since polarization and correlation effects tend to shift unoccupied and occupied s.p. states in opposite directions~\cite{Rutz1998, Litvinova2006}. If one compares the results of Fig.~\ref{fig:SO} with those from non-relativistic mean field models such as Skyrme-Hartree-Fock (SHF)~\cite{Bender1999}, it appears that the latter give systematically larger deviations. Fig.~\ref{fig:SO} therefore provides a good motivation for predictions of SHE based on relativistic Lagrangians. SHE predictions have been carried out using relativistic mean field (RMF) models~\cite{Rutz1997} or RHB models~\cite{Zhang2005}. In such Hartree-type approaches, the contribution of the Fock term is disregarded, at variance with RHF, leading to a renormalization of the coupling constants. This approximation forbids the inclusion of the $\pi$ and the $\rho$-tensor mesons. While RMF models are as predictive as RHF ones for medium and heavy nuclei, it is preferable to base extrapolations to SHE on calculations including correctly the contribution of the Fock term. This is another motivation for the present study. Magicity in SHN might not be as well marked as in ordinary nuclei~\cite{Bender1999}. To identify the magic shells, we will employ the so-called two-nucleon gaps, $\delta_{2p}$ (proton) and $\delta_{2n}$ (neutron), i.e., the differences of two-nucleon separation energies of neighboring isotopes or isotones, which provide an efficient evaluation of the shell effects~\cite{Rutz1997, Zhang2005}, \begin{subequations}\label{nucleon gaps} \begin{eqnarray} \delta_{2p}(N,Z)&=&S_{2p}(N,Z)-S_{2p}(N,Z+2),\\ \delta_{2n}(N,Z)&=&S_{2n}(N,Z)-S_{2n}(N+2,Z).
\end{eqnarray} \end{subequations} The peak values of the two-nucleon gaps are essentially determined by the sudden jump of the two-nucleon separation energies, which can be taken as clear evidence of the occurrence of a magic shell. \begin{figure*}[t] \ifpdf \includegraphics[width = 1.00\textwidth]{D2N.pdf} \else \includegraphics[width = 1.00\textwidth]{D2N.eps} \fi \caption{Contour plots (in MeV) for the two-proton gaps $\delta_{2p}$ (left panels) and the two-neutron gaps $\delta_{2n}$ (right panels) as functions of N and Z. The two-nucleon gaps are obtained with the PKA1, PKO2 and PKO3 parametrizations for RHFB, and the PKDD and DD-ME2 parametrizations for RHB. The red solid lines represent the two-proton drip lines. Nuclei stable with respect to $\beta$-decay and fission are marked with green filled stars and blue circles, respectively. The blue long-dashed lines represent the empirical $\beta$-stability line~\cite{Marmier1971}. The red empty squares indicate the experimental SHN from NUBASE2012~\cite{Audi2013}. See text for more details.} \label{fig:DN} \end{figure*} Fig. \ref{fig:DN} presents the two-proton (left panels) and two-neutron (right panels) gaps for the $Z = 110-140$ even-even isotopes calculated with the selected effective Lagrangians. We have adopted the presentation of Ref.~\cite{Rutz1997} so that the similarities and differences with respect to the predictions of the earlier study can be more easily seen. The red solid lines stand for the two-proton drip lines, defined by the change in sign of the two-proton separation energy. Nuclei that are stable with respect to $\beta$-decay or fission are represented with filled green stars or filled blue circles, respectively. For a given $A$ (resp. $Z$), the $\beta$-stability (resp. fission-stability) line is located at the maximum of the binding energy per nucleon, and corresponds as well to the minimum of the $Q$-value for $\beta$-decay (resp. fission)~\cite{Wu1996}.
The dashed blue line represents the $\beta$-stability line given by the empirical formula $Z = A/(1.98+0.0155A^{2/3})$~\cite{Marmier1971}. Experimental data taken from the NUBASE2012 evaluation of nuclear properties~\cite{Audi2013}, including the extrapolated SHN, are located below $Z = 118$ and are shown in Fig.~\ref{fig:DN} with empty red squares. It is observed from Fig.~\ref{fig:DN} that these nuclei largely coincide with the nuclei which our models, especially PKA1, predict to be stable with respect to fission (filled blue circles). Deformation, although not included in our calculations, may play a significant role in stabilizing SHN~\cite{Lalazissis1996, Patra1999}. On the neutron-poor side, the large Coulomb barrier existing in SHN pushes the two-proton drip line further down. This effect is expected to shift the position of the drip line by a few units. In Fig.~\ref{fig:DN}, the squares are filled in proportion to the gap, which varies from 1 to 5~MeV, as shown in the grey-scale index. Structures with large gaps, between 3 and 5 MeV, appear clearly in Fig.~\ref{fig:DN}. From the comparison of the different models shown in Fig.~\ref{fig:DN}, it is clear that PKA1 is the Lagrangian which predicts the largest gaps for $Z = 120$, 126, 138 and $N = 184$, 258. These numbers are thus the magic numbers predicted in neutron-rich SHN by the PKA1-RHFB model. The other effective Lagrangians also present a fairly remarkable proton shell at $Z=120$. In addition, $Z = 132$ for PKDD-RHB and $Z = 138$ for both the RHFB (PKA1 and PKOi) and RHB (PKDD and DD-ME2) approaches are found to be possible proton magic numbers, consistent with the predictions of Ref.~\cite{Zhang2005}. Concerning the neutron shells, besides $N=184$ and 258, PKA1 also presents a fairly distinct shell structure at $N = 172$, which is also present in the predictions of the other Lagrangians.
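As a quick sanity check, the empirical $\beta$-stability formula above can be evaluated numerically; a minimal sketch (the choice of test values $A = 208$ and $A = 304$ is ours):

```python
def z_beta_stable(A):
    """Most beta-stable Z for mass number A (empirical formula of Ref. [Marmier1971])."""
    return A / (1.98 + 0.0155 * A ** (2.0 / 3.0))

# The formula recovers the valley of stability at lead: Z ~ 82 for A = 208.
print(round(z_beta_stable(208)))  # -> 82
# For A = 304 it gives Z ~ 113, so the Z = 120 candidate lies on the
# proton-rich side of the empirical beta-stability line.
print(round(z_beta_stable(304)))  # -> 113
```
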
Fairly distinct shell effects at $N = 184$ and 258 are also found with the other parametrizations, except with PKO2. Remarkable shell effects are found at $N = 228$, although less pronounced than those at $N = 184$ and $258$ predicted by PKA1. Furthermore, a neutron shell is predicted at $N = 164$ with the PKO2, PKDD and DD-ME2 models, and another is predicted at $N = 198$ with the RHB models (PKDD and DD-ME2). We have checked that the neutron and proton pairing gaps are also quenched at the same proton and neutron magic numbers as those obtained in Fig.~\ref{fig:DN} for each considered Lagrangian. Combined with the two-nucleon gaps, it is found that the proton shell $Z = 120$ is predicted by PKA1 as well as by the other Lagrangians. It is also predicted by some SHF models such as SLy6, SkI1, SkI3 and SkI4~\cite{Rutz1997}, but it must be stressed that the SHF models can give different predictions, such as $Z = 114$ or $Z = 126$; see for instance Ref.~\cite{Rutz1997}. $Z=120$ can however be considered as a fairly good candidate for a proton magic number. In Ref.~\cite{Rutz1997}, SHF forces such as SkM* or SkP predict $Z = 126$ as a magic number for neutron-poor isotopes. $Z = 126$ is also predicted as a magic number by the PKA1 model, but not by the other Lagrangians considered in this work, which predict a weak SO splitting for high-$j$ states. On the other hand, the situation for the neutrons is more complex. Although the $N = 172$ and 228 magic numbers seem to be generally predicted by the selected effective Lagrangians, the corresponding shell effects are rather weak. Except for PKO2, $N = 184$ and $258$ are also generally predicted as candidates for neutron magic numbers. Note that a large number of the SHF models considered in Ref.~\cite{Rutz1997}, as well as Gogny forces~\cite{Decharge2003}, also have a large gap at these neutron numbers.
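The two-nucleon gaps of Eq.~(\ref{nucleon gaps}) used throughout this analysis are straightforward to evaluate from calculated binding energies; a minimal sketch, where the binding-energy table $B(N,Z)$ is purely illustrative (toy values, not our RHFB output):

```python
# Hypothetical binding energies B(N, Z) in MeV, for illustration only.
B = {
    (182, 120): 2140.0, (184, 120): 2155.0, (186, 120): 2165.0,
    (184, 118): 2135.0, (184, 122): 2168.0,
}

def S2n(N, Z):
    """Two-neutron separation energy."""
    return B[(N, Z)] - B[(N - 2, Z)]

def S2p(N, Z):
    """Two-proton separation energy."""
    return B[(N, Z)] - B[(N, Z - 2)]

def delta_2n(N, Z):
    """Two-neutron gap; a pronounced peak signals a neutron shell closure at N."""
    return S2n(N, Z) - S2n(N + 2, Z)

def delta_2p(N, Z):
    """Two-proton gap; a pronounced peak signals a proton shell closure at Z."""
    return S2p(N, Z) - S2p(N, Z + 2)

# With the toy numbers above, the gaps peak at (N, Z) = (184, 120):
print(delta_2n(184, 120))  # -> 5.0 (MeV)
print(delta_2p(184, 120))  # -> 7.0 (MeV)
```

In practice $B(N,Z)$ would be taken from the RHFB total energies or from a mass evaluation, and the gaps scanned over the whole $(N,Z)$ grid to produce contour plots like Fig.~\ref{fig:DN}.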
Specifically, PKA1 provides a better description of the nuclear shell structure than the others~\cite{Long2007} and a better agreement with the fission stability of the observed SHN (see Fig.~\ref{fig:DN}), and it leads to pronounced shell effects. In fact, as indicated by SHF investigations~\cite{Kruppa2000}, $N = 184$ is also clearly favored as a spherical neutron magic number, and the $N = 184$ isotones are expected to have spherical shapes. By comparing the predictions of the various models discussed here, we conclude that $^{304}120_{184}$ is the most probable doubly magic system in the SHN region, and that $^{292}120_{172}$ might be another candidate, with less stability. \begin{table}[tb] \caption{Bulk properties of symmetric nuclear matter calculated with the effective interactions PKA1, the PKOi series, PKDD and DD-ME2: saturation density $\rho_0$ (fm$^{-3}$), binding energy per particle $E_{B}/A$ (MeV), incompressibility $K$ (MeV), asymmetry energy coefficient $J$ (MeV), scalar mass $M^\ast_S$ and non-relativistic effective mass $M^\ast_{NR}$ in units of the nucleon mass $M$.}\setlength{\tabcolsep}{5pt} \label{tab:NMP} \begin{tabular}{lcccccc} \hline\hline Force &$\rho_0$ & $E_{B}/A$ & $K$ & $J$ & $M^\ast_S$ & $M^\ast_{NR}$ \\ \hline PKA1 & 0.160 & $-15.83$ & 229.96 & 36.02 & 0.547 & 0.681 \\ PKO1 & 0.152 & $-16.00$ & 250.24 & 34.37 & 0.590 & 0.746 \\ PKO2 & 0.151 & $-16.03$ & 249.60 & 32.49 & 0.603 & 0.764 \\ PKO3 & 0.153 & $-16.04$ & 262.47 & 32.98 & 0.586 & 0.742 \\ \hline PKDD & 0.150 & $-16.27$ & 262.19 & 36.79 & 0.571 & 0.651 \\ DD-ME2 & 0.152 & $-16.11$ & 250.30 & 32.27 & 0.572 & 0.652 \\ \hline\hline \end{tabular} \end{table} Nevertheless, from Fig.~\ref{fig:DN} one can find distinct deviations among the models in predicting the magic numbers. $Z = 120$ can be considered as a reliable prediction of a proton magic number, and $Z = 138$ could be another candidate, with more model dependence.
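The effective-mass hierarchy underlying the model deviations can be read off Tab.~\ref{tab:NMP} programmatically; a small sketch with the $(M^\ast_S, M^\ast_{NR})$ pairs copied from the table:

```python
# (M*_S, M*_NR) in units of the nucleon mass, copied from the
# nuclear-matter table (tab:NMP).
masses = {
    "PKA1":   (0.547, 0.681),
    "PKO1":   (0.590, 0.746),
    "PKO2":   (0.603, 0.764),
    "PKO3":   (0.586, 0.742),
    "PKDD":   (0.571, 0.651),
    "DD-ME2": (0.572, 0.652),
}

# PKO2 has the largest scalar AND non-relativistic effective mass,
# i.e. the weakest SO couplings and the highest level density on average.
assert max(masses, key=lambda k: masses[k][0]) == "PKO2"
assert max(masses, key=lambda k: masses[k][1]) == "PKO2"

# PKA1 has the smallest scalar mass, despite a sizeable M*_NR.
assert min(masses, key=lambda k: masses[k][0]) == "PKA1"
```
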
The neutron shells $N = 172$, 184, 228 and 258 are common to several models. Other shells, e.g., $N = 198$, appear essentially model dependent. Among the present results, one may notice that the RHB calculations (PKDD and DD-ME2) predict more shell closures than RHFB, and that PKO2-RHFB predicts the fewest candidates. To interpret these distinct deviations, Table~\ref{tab:NMP} shows the bulk properties of symmetric nuclear matter determined by the present sets of Lagrangians. In general, the occurrence of superheavy magic shells is closely related to both the scalar mass $M^\ast_S$ and the effective mass $M^\ast_{NR}$~\cite{Long2006a}, which essentially determine the strength of the SO couplings and the level density, respectively. Among the present models, the effective Lagrangian PKO2 predicts the largest values of both masses, leading to relatively weak SO couplings and a high level density on average. As a result, there remains little space in the spectra for the occurrence of magic shells. On the other hand, the RHB models (PKDD and DD-ME2) predict more magic shells due to their relatively small masses. In fact, as seen from Fig.~\ref{fig:DN}, PKO2 also presents weaker shell effects than the others. For PKA1 the situation is different. Although it has a larger effective mass $M^\ast_{NR}$ than PKDD or DD-ME2, PKA1 gives a smaller scalar mass $M_S^*$ and shows stronger shell effects than the others. This may partially explain why PKA1 does not suffer from the common drawback of CDF calculations --- the so-called artificial shell closures induced by low $M_S^*$ and $M^\ast_{NR}$~\cite{Geng2006} --- and why it better preserves the PSS~\cite{Long2007, Long2009, Long2010b}. \begin{figure*}[tb]\centering \ifpdf \includegraphics[width = 0.70\textwidth]{SPS120184.pdf} \else \includegraphics[width = 0.70\textwidth]{SPS120184.eps} \fi \caption{Proton (left panel) and neutron (right panel) canonical s.p. spectra of the superheavy nuclide $^{304}120$.
The results are extracted from the RHFB calculations with the PKOi series and PKA1, and compared to the RHB ones with PKDD and DD-ME2. In all cases the pairing force is derived from the finite-range Gogny force D1S with the strength factor $f = 0.9$. See the text for details.} \label{fig:SPS} \end{figure*} Similarly to the situation in the stable region~\cite{Long2007}, the model deviations originating from the relativistic PSS can also be found in the s.p. spectra of SHN. Taking the doubly magic SHN $^{304}120_{184}$ as an example, Fig.~\ref{fig:SPS} shows the proton (left panel) and neutron (right panel) canonical s.p. spectra provided by the selected models. It is found that PKA1 provides the most evident magicity at $Z = 120$ and $N = 184$, respectively, although these shell closures are much weaker than in ordinary nuclei. The neutron shell $N = 184$ is essentially determined by the near-degeneracy of the two pseudo-spin partners $\Lrb{2h_{11/2}, 1j_{13/2}}$ and $\Lrb{4s_{1/2}, 3d_{3/2}}$, lying respectively above and below the shell. For the latter, the PSS is well preserved in all the calculations, while for the former, with high angular momentum, the symmetry is only weakly restored by PKA1 and seriously violated by the others, leading to the occurrence of the shell closure $N = 198$. In fact, a similar emergence of a shell closure can also be found in the mid-heavy and heavy regions of the nuclear chart. For instance, the proton shell $Z = 82$ in $^{208}$Pb can also be interpreted as the result of the degeneracies of the two pseudo-spin partners $\Lrb{2f_{7/2},1h_{9/2}}$ and $\Lrb{3s_{1/2},2d_{3/2}}$. In the CDF calculations (except PKA1) there is a clear gap between $2f_{7/2}$ and $1h_{9/2}$, i.e., the artificial shell closure $Z = 92$~\cite{Long2007, Geng2006}, which somewhat compresses the magic shell $Z = 82$.
A similar mechanism can also be found in the formation of the sub-shell 64, due to the degeneracy of the pseudo-spin partners $\Lrb{3s_{1/2},2d_{3/2}}$ and $\Lrb{2d_{5/2},1g_{7/2}}$~\cite{Long2007, Long2009}. In the CDF calculations (except PKA1) the sub-shell 64 is compressed by the violation of the PSS for the partners $\Lrb{2d_{5/2},1g_{7/2}}$, which induces the so-called artificial shell closure 58~\cite{Long2007, Geng2006}. Coming back to SHN, a similar phenomenon is found for $N = 184$, for which the gap is related to the splitting between the pseudo-spin partners $\Lrb{2h_{11/2}, 1j_{13/2}}$ and $\Lrb{4s_{1/2}, 3d_{3/2}}$. The restoration of the PSS in PKA1 increases the gap, while the breaking of the PSS in the pair $\Lrb{2h_{11/2},1j_{13/2}}$ decreases it (see the right panel of Fig.~\ref{fig:SPS}). From this point of view, the conservation or breaking of the PSS plays a delicate role in the occurrence of shell closures in nuclei --- ordinary as well as superheavy. This phenomenon could be another origin of the deviations between the models. For instance, it is found in general that PKA1 presents a strong SO splitting for high-$j$ states and leads to a better PSS, compared with the other effective Lagrangians. This goes along with the general finding that SO effects help to restore the PSS~\cite{Liang2013, Shen2013}. It is however difficult to predict the occurrence of the PSS, which can be broken or restored in the same nucleus for different s.p. levels. At variance with $N = 184$, which is stabilized by the restoration of the PSS, the proton shell closure $Z = 120$ emerges from the PSS violation. As shown in the left panel of Fig.~\ref{fig:SPS}, the proton shell closure $Z = 120$ is determined directly by the large splitting of the two pseudo-spin partner states $\Lrb{3p_{3/2}, 2f_{5/2}}$, whereas the spin doublet $3p$ above the shell is almost degenerate.
The shell gap at $Z = 120$ can therefore be interpreted as a manifestation of the PSS violation and a weak SO splitting. Below the shell $Z = 120$, the protons filling the high-$j$ states are driven towards the surface of the nucleus by the strong centrifugal potential and the large repulsive Coulomb field in SHN. Both effects lead to an interior depression of the proton distributions, and consequently the interior region of the mean potential is no longer flat~\cite{Decharge2003}. As a result the SO splitting is reduced, particularly for the low-$j$ states $3p$ and $2f$, which have more overlap with the interior depression. Consequently the splitting between neighbouring pseudo-spin partners (i.e., $3p_{3/2}$ and $2f_{5/2}$) is somewhat enlarged~\cite{Shen2013}. In Ref.~\cite{Afanasjev2005} it is also pointed out that the distinct central depressions of the densities lead to the spherical shell gaps at $Z = 120$ and $N = 172$ as a direct consequence of PSS breaking, whereas a flatter density profile favors the shell occurrence at $N = 184$ as well as the proton one at $Z = 126$. This can happen not only for SHN: the emergence of the new shell closures at $Z$ or $N = 16$ and $N = 32$~\cite{Ozawa2000, Kanungo2002} can also be related to the violation of PSS in light exotic nuclei. In summary, we have explored the occurrence of spherical shell closures for superheavy nuclei (SHN) and the physics therein using the relativistic Hartree-Fock-Bogoliubov (RHFB) theory with density-dependent meson-nucleon couplings, and compared the predictions with those of some relativistic Hartree-Bogoliubov (RHB) models. The shell effects are quantified in terms of the two-nucleon gaps $\delta_{2n(p)}$. To our knowledge, this is the first attempt to perform such extensive calculations within the RHFB scheme. The results indicate that the nuclide $^{304}120_{184}$ could be the next spherically doubly magic nuclide beyond $^{208}$Pb.
It is also found that the shell effects in SHN are sensitive to the values of both the scalar mass and the effective mass, which essentially determine the spin-orbit effects and the level density, respectively. In addition, the breaking or restoration of relativistic pseudo-spin symmetry (PSS) is found to modify the level structure, and therefore to contribute to the emergence or disappearance of shell closures. An experimental measurement of $Q_\alpha$ for at least one isotope of a $Z = 120$ nucleus would help us to set a proper constraint in determining the shell effects of SHN and to further test the reliability of the models. One also has to admit that a more extensive exploration needs to take the deformation effects into account. We would like to thank Haozhao Liang for enlightening discussions on PSS and its breaking. One of us (J. Li) also thanks Haifei Zhang for his help in the initial stage of this study. This work is partly supported by the National Natural Science Foundation of China under Grant Nos. 11075066 and 11375076, and the Program for New Century Excellent Talents in University under Grant No. NCET-10-0466.
{ "attr-fineweb-edu": 1.913086, "attr-cc_en_topic": 12, "domain": "arxiv" }
\section{Optical Bloch Equation for the Case of a Fixed Magnetic Field} The system in a fixed magnetic field under study is described by the optical Bloch equation (OBE) \cite{Scully2006,Liao2012a} \begin{eqnarray} & & \partial_{t}\hat{\rho}=\frac{1}{i\hbar}\left[\hat{H},\hat{\rho}\right]+\mathcal{L}\rho,\nonumber\\ & & \hat{H} = \hat{H}_i + \hat{H}_h ,\nonumber\\ & & \frac{1}{c}\partial_{t}\Omega_{R}+\partial_{y} \Omega_{R}=i\frac{6 \Gamma\alpha}{L}\left( a_{51}\rho_{51}+a_{62}\rho_{62}\right) ,\nonumber\\ & & \frac{1}{c}\partial_{t}\Omega_{H}+\partial_{y} \Omega_{H}=i\frac{6 \Gamma\alpha}{L}\left( a_{41}\rho_{41}+a_{52}\rho_{52}\right) ,\nonumber\\ & & \frac{1}{c}\partial_{t}\Omega_{L}+\partial_{y} \Omega_{L}=i\frac{6 \Gamma\alpha}{L}\left( a_{31}\rho_{31}+a_{42}\rho_{42}\right) , \end{eqnarray} where $\alpha$ is the resonant thickness, $\hat{\rho}$ is the density matrix for the state vector $\sum_{i=1}^{6}A_{i}\vert i\rangle$ of the 6-level $^{57}$Fe nuclei, and each coherence is $\rho_{ij}=A_i A_j^*$; \begin{equation} \hat{H}_{i}=-\frac{\hbar}{2}\left( \begin{array}{cccccc} 0 & 0 & a_{13}\Omega_{L}^\ast & a_{14}\Omega_{H}^\ast & a_{15}\Omega_{R}^\ast & 0\\ 0 & 0 & 0 & a_{24}\Omega_{L}^\ast & a_{25}\Omega_{H}^\ast & a_{26}\Omega_{R}^\ast\\ a_{31}\Omega_{L} & 0 & -2\Delta_L & 0 & 0 & 0\\ a_{41}\Omega_{H} & a_{42}\Omega_{L} & 0 & -2\left( \Delta_L+\Delta_H\right) & 0 & 0\\ a_{51}\Omega_{R} & a_{52}\Omega_{H} & 0 & 0 & -2\left( \Delta_H+\Delta_R\right) & 0\\ 0 & a_{62}\Omega_{R} & 0 & 0 & 0 & -2\Delta_R \end{array} \right) \end{equation} is the interaction Hamiltonian describing the nucleus--x-ray interaction, with Rabi frequency $\Omega_R$ of the right circularly polarized field, $\Omega_H$ of the horizontally polarized field, and $\Omega_L$ of the left circularly polarized field.
$a_{31}=1$, $a_{41}=\sqrt{\frac{2}{3}}$, $a_{51}=\sqrt{\frac{1}{3}}$, $a_{42}=\sqrt{\frac{1}{3}}$, $a_{52}=\sqrt{\frac{2}{3}}$ and $a_{62}=1$ are the Clebsch-Gordan coefficients of the corresponding transitions. All x-ray detunings are set to zero; here $\Delta_R$ is the detuning of the right circularly polarized field, $\Delta_H$ that of the horizontally polarized field, and $\Delta_L$ that of the left circularly polarized field; \begin{equation} \hat{H}_{h}=\hbar\left( \begin{array}{cccccc} -\delta_g & 0 & 0 & 0 & 0 & 0\\ 0 & \delta_g & 0 & 0 & 0 & 0\\ 0 & 0 & 3\delta_e & 0 & 0 & 0\\ 0 & 0 & 0 & \delta_e & 0 & 0\\ 0 & 0 & 0 & 0 & -\delta_e & 0\\ 0 & 0 & 0 & 0 & 0 & -3\delta_e \end{array} \right) \nonumber \end{equation} depicts the hyperfine splitting; \begin{equation} \mathcal{L}\rho=\frac{\Gamma}{2}\left( \begin{array}{cccccc} 2\left( a_{31}^2\rho_{33}+a_{41}^2\rho_{44}+a_{51}^2\rho_{55} \right) & 0 & -\rho_{13} & -\rho_{14} & -\rho_{15} & -\rho_{16}\\ 0 & 2\left( a_{42}^2\rho_{44}+a_{52}^2\rho_{55}+a_{62}^2\rho_{66} \right) & -\rho_{23} & -\rho_{24} & -\rho_{25} & -\rho_{26}\\ -\rho_{31} & -\rho_{32} & -2\rho_{33} & -2\rho_{34} & -2\rho_{35} & -2\rho_{36}\\ -\rho_{41} & -\rho_{42} & -2\rho_{43} & -2\rho_{44} & -2\rho_{45} & -2\rho_{46}\\ -\rho_{51} & -\rho_{52} & -2\rho_{53} & -2\rho_{54} & -2\rho_{55} & -2\rho_{56}\\ -\rho_{61} & -\rho_{62} & -2\rho_{63} & -2\rho_{64} & -2\rho_{65} & -2\rho_{66} \end{array} \right) \end{equation} describes the radioactive decay of the excited states $\left|3\right\rangle $, $\left|4\right\rangle $, $\left|5\right\rangle $ and $\left|6\right\rangle $, characterized by the decay rate $\Gamma=1/141$ GHz (corresponding to the 141 ns lifetime of the 14.4 keV level); $\alpha$ and $L$ are the resonant thickness and the length of the medium, respectively.
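The structure of $\mathcal{L}\rho$ can be checked for consistency: the population leaving each excited state must be fully accounted for by the gain terms of the two ground states, i.e. the squared Clebsch-Gordan coefficients out of each excited level must sum to unity. A minimal numerical sketch of this bookkeeping:

```python
# Trace-preservation check for the Lindblad term L(rho): the squared
# Clebsch-Gordan coefficients a_{ij}^2 give the branching ratios of the
# excited states |3>..|6> into the ground states |1> and |2>.
a31_sq, a41_sq, a51_sq = 1.0, 2.0 / 3.0, 1.0 / 3.0   # decays feeding |1>
a42_sq, a52_sq, a62_sq = 1.0 / 3.0, 2.0 / 3.0, 1.0   # decays feeding |2>

# The total branching out of each excited state must equal 1, so that the
# population gained by |1> and |2> balances the loss terms -2*(Gamma/2)*rho_ii.
branching = {
    3: a31_sq,
    4: a41_sq + a42_sq,
    5: a51_sq + a52_sq,
    6: a62_sq,
}
for state, total in branching.items():
    assert abs(total - 1.0) < 1e-12, (state, total)
print("branching ratios out of |3>..|6>:", branching)
```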
The boundary condition gives the Rabi frequency of the incident x-ray field $\Omega_R \left( t,0\right)=\Omega_L \left( t,0\right) = e^{-\frac{\Gamma}{2} t}\times\frac{1}{2}\left[ 1+\tanh\left( 4\frac{t}{\tau}\right) \right] $, where the exponential decay depicts the radioactive decay of the excited $^{57}$Fe nuclei, and the hyperbolic tangent simulates the time gating with $\tau=5$ ns. Moreover, $\Omega_H \left( t,0\right)=0$ for the forward emission. The initial conditions are that all x-ray fields vanish, the initial populations are $\rho_{11}(0,y)=\rho_{22}(0,y)=1/2$ with $\rho_{ii}(0,y)=0$ for $i>2$, and all coherences vanish, $\rho_{ij}(0,y)=0$ for $i\neq j$. \section{Optical Bloch Equation for the Case of a Switched Magnetic Field} For general cases of continuous and relatively slow magnetic switching, it is convenient to fix the quantization axis along the z-axis and then rotate $\hat{H}_{h}$ at each $\theta$ back onto the z-axis. We now derive the rotated master equation for the density matrix. The Schr\"odinger equation and its adjoint read $i \hbar \frac{\partial}{\partial t} \vert \psi\rangle = \hat{H} \vert \psi\rangle$ and $-i \hbar \frac{\partial}{\partial t} \langle \psi\vert = \langle \psi\vert\hat{H}^\dagger$. The associated master equation is then given by \begin{equation} \frac{\partial}{\partial t}\left( \vert \psi\rangle \langle \psi\vert\right) = \left( \frac{\partial}{\partial t} \vert \psi\rangle \right) \langle \psi\vert + \vert \psi\rangle \left( \frac{\partial}{\partial t}\langle \psi\vert\right).
\nonumber \end{equation} By substituting the Schr\"odinger equation on the right-hand side, one gets \begin{equation} \frac{\partial}{\partial t}\left( \vert \psi\rangle \langle \psi\vert\right) = \frac{1}{i\hbar}\left( \hat{H} \vert \psi\rangle \right) \langle \psi\vert - \frac{1}{i\hbar} \vert \psi\rangle \left( \langle \psi\vert \hat{H}^\dagger \right), \nonumber \end{equation} which then becomes the typical master equation $\partial_{t}\hat{\rho}=\frac{1}{i\hbar}\left[\hat{H},\hat{\rho}\right]$. The same derivation applies to the rotating case using the rotation operator $\hat{R}=e^{-i \frac{\hat{J}}{\hbar}\cdot\hat{n}\theta\left( t\right) }$, where $\hat{J}$ is the angular momentum operator, $\hat{n}$ the rotation axis and $\theta\left( t\right)$ the time-dependent switching angle between the z-axis and the magnetic field at time $t$. Along the z-axis as the fixed quantization axis, we have \begin{eqnarray} \frac{\partial}{\partial t}\left( \hat{R}\vert \psi\rangle \langle \psi\vert\hat{R}^{\dagger}\right) & = & \hat{R}\left( \frac{\partial}{\partial t} \vert \psi\rangle \right) \langle \psi\vert\hat{R}^{\dagger} + \hat{R}\vert \psi\rangle \left( \frac{\partial}{\partial t}\langle \psi\vert\right)\hat{R}^{\dagger} + \left( \frac{\partial}{\partial t}\hat{R}\right) \vert \psi\rangle \langle \psi\vert\hat{R}^{\dagger} + \hat{R}\vert \psi\rangle \langle \psi\vert\left( \frac{\partial}{\partial t}\hat{R}^{\dagger}\right) \nonumber\\ & = & \frac{1}{i\hbar}\hat{R} \hat{H}\hat{R}^{\dagger} \hat{R}\vert \psi\rangle \langle \psi\vert\hat{R}^{\dagger} - \frac{1}{i\hbar}\hat{R}\vert \psi\rangle \langle \psi\vert \hat{R}^{\dagger} \hat{R}\hat{H}^\dagger \hat{R}^{\dagger} + \left( \frac{\partial}{\partial t}\hat{R}\right) \vert \psi\rangle \langle \psi\vert\hat{R}^{\dagger} + \hat{R}\vert \psi\rangle \langle \psi\vert\left( \frac{\partial}{\partial t}\hat{R}^{\dagger}\right).
\nonumber \end{eqnarray} When the magnetic field is continuously switched, $\hat{R}$ is time dependent, and the master equation turns into \begin{eqnarray} \partial_{t}\hat{\rho}^z & = & \frac{1}{i\hbar}\left[\hat{H}^z,\hat{\rho}^z\right] -i \frac{\hat{J}}{\hbar}\cdot\hat{n}\frac{\partial \theta}{\partial t}\hat{R} \vert \psi\rangle \langle \psi\vert\hat{R}^{\dagger} + \frac{\partial \theta}{\partial t} \hat{R}\vert \psi\rangle \langle \psi\vert\hat{R}^{\dagger}i \frac{\hat{J}^{\dagger}}{\hbar}\cdot\hat{n} \nonumber\\ & = & \frac{1}{i\hbar}\left[\hat{H}^z,\hat{\rho}^z\right] + \frac{\hat{J}\cdot\hat{n}}{i\hbar}\frac{\partial \theta}{\partial t}\hat{\rho}^z - \frac{\partial \theta}{\partial t} \hat{\rho}^z\frac{\hat{J}^{\dagger}\cdot\hat{n}}{i\hbar} \nonumber\\ & = & \frac{1}{i\hbar}\left[\hat{H}^z + \frac{\partial \theta}{\partial t}\hat{J}\cdot\hat{n},\hat{\rho}^z\right]. \nonumber \end{eqnarray} Dropping for simplicity the z index of $\hat{\rho}^z$, the OBE of the time-dependent rotating system becomes \begin{eqnarray} & & \partial_{t}\hat{\rho}=\frac{1}{i\hbar}\left[\hat{H},\hat{\rho}\right]+\mathcal{L}\rho,\nonumber\\ & & \hat{H} = \hat{H}_i + \hat{H}^z_h +\frac{\partial\theta}{\partial t}\hat{J}_y, \end{eqnarray} where $\hat{J}_y$ is the y-component of the angular momentum operator when the magnetic field is switched about the y-axis \cite{Sakurai1994}. The $\theta$-independent $\hat{H}_i$ shows the convenience of the scheme: one can always use the same definition of the x-ray polarizations.
The rotated hyperfine Hamiltonian reads \begin{equation} \hat{H}^z_{h}=\hat{R}\left( -\theta\right)\hat{H}_{h}\hat{R}^{\dagger}\left( -\theta\right)=\hbar\left( \begin{array}{cccccc} -\delta_g\cos\theta\left( t\right) & -\delta_g\sin\theta\left( t\right) & 0 & 0 & 0 & 0\\ -\delta_g\sin\theta\left( t\right) & \delta_g\cos\theta\left( t\right) & 0 & 0 & 0 & 0\\ 0 & 0 & 3\delta_e\cos\theta\left( t\right) & \sqrt{3}\delta_e\sin\theta\left( t\right) & 0 & 0\\ 0 & 0 & \sqrt{3}\delta_e\sin\theta\left( t\right) & \delta_e\cos\theta\left( t\right) & 2\delta_e\sin\theta\left( t\right) & 0\\ 0 & 0 & 0 & 2\delta_e\sin\theta\left( t\right) & -\delta_e\cos\theta\left( t\right) & \sqrt{3}\delta_e\sin\theta\left( t\right)\\ 0 & 0 & 0 & 0 & \sqrt{3}\delta_e\sin\theta\left( t\right) & -3\delta_e\cos\theta\left( t\right) \end{array} \right), \nonumber \end{equation} where $\hat{R}\left( -\theta\right)$ is the rotation operator \cite{Sakurai1994}. Due to the weak x-ray intensity of the $^{57}$Co source, in both cases we work in the perturbative regime, namely $\vert\Omega_R\vert \ll \Gamma$, $\vert\Omega_H\vert \ll \Gamma$ and $\vert\Omega_L\vert \ll \Gamma$, such that only the terms $\rho_{i1}$ and $\rho_{j2}$, where $i > 2$ and $j > 2$, are used in the calculation. $\alpha=30$, $\delta_g=12.63\Gamma$ and $\delta_e=7.37\Gamma$ are used for all figures in the main text. Because the x-ray propagation time $L/c$ is on the order of 0.1 ps, much shorter than the nanosecond time scale of interest, the temporal derivative terms in the wave equations can be neglected, as is typical. The OBE can then be numerically solved by the fourth-order Runge-Kutta method (RK4) \cite{Press1996} with $L= 10~\mu$m and grid spacings $\Delta y =0.1~\mu$m and $\Delta t = 3\times 10^{-3}$ ns. All solutions are double checked with NDSolve of Mathematica 11 for the complete set of ODEs including the temporal derivative terms in the wave equations.
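As an aside, the block structure of $\hat{H}^z_{h}$ admits a quick numerical sanity check: since it is a unitary rotation of $\hat{H}_{h}$, its eigenvalues must be independent of the switching angle, $\pm\delta_g$ for the ground doublet and $\pm\delta_e, \pm 3\delta_e$ for the excited quadruplet. A minimal sketch (in units with $\hbar = 1$ and an arbitrarily chosen angle; the values of $\delta_{g,e}$ are those quoted above):

```python
import numpy as np

delta_g, delta_e, theta = 12.63, 7.37, 0.7  # units of Gamma; theta arbitrary
c, s = np.cos(theta), np.sin(theta)
r3 = np.sqrt(3.0)

# Ground-state (spin-1/2) and excited-state (spin-3/2) blocks of H^z_h
Hg = delta_g * np.array([[-c, -s],
                         [-s,  c]])
He = delta_e * np.array([[3 * c, r3 * s, 0.0,    0.0],
                         [r3 * s, c,     2 * s,  0.0],
                         [0.0,    2 * s, -c,     r3 * s],
                         [0.0,    0.0,   r3 * s, -3 * c]])

# A rotation is a similarity transform, so the Zeeman spectrum is
# theta-independent (eigvalsh returns eigenvalues in ascending order)
assert np.allclose(np.linalg.eigvalsh(Hg), delta_g * np.array([-1.0, 1.0]))
assert np.allclose(np.linalg.eigvalsh(He), delta_e * np.array([-3.0, -1.0, 1.0, 3.0]))
```

The same check passes for any $\theta$, which is a convenient way to catch transcription errors in the matrix elements.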
No significant deviation between the two solvers is observed, which confirms the convergence of the solution. \section{Estimation of Photon Counting Rates} We estimate the vertically polarized photon counting rate $\chi_1$ at the first crystal by the following formula in spherical coordinates \begin{eqnarray} \chi_1 &=& A \frac{\Theta}{4\pi} P_V , \\ \Theta &=& \int_0^{2\pi}\int_0^{\tan^{-1}\left(\frac{w}{2 D} \right) }\sin\vartheta d\vartheta d\phi . \end{eqnarray} Here the activity of the 200 mCi $^{57}$Co source is $A=7.4\times 10^{9}$ counts/s. Since the $\gamma$ emission happens with equal probability over the total $4\pi$ solid angle, one has to calculate the fraction of photons emitted in the forward direction within the small solid angle $\Theta$. As depicted in Fig.~\ref{figs1}, assuming the distance between the radioisotope source and the first crystal is $D=20$ cm and the width of the first $^{57}$FeBO$_3$ crystal is $w=$ 5 mm, these parameters result in a solid angle of $5 \times 10^{-4}$ sr in the forward direction. In order to calculate the probability $P_V$ of the emission of vertically polarized photons by a $^{57}$Co source, one can choose the x-axis as the quantization axis and then look at the $\Delta m = 0$ transitions. The analysis of the Clebsch-Gordan coefficients shows $P_V=\frac{a_{41}^2+a_{52}^2}{a_{41}^2+a_{52}^2+a_{51}^2+a_{62}^2+a_{31}^2+a_{42}^2}=\frac{1}{3}$. With the above parameters, $\chi_1=96343$ counts/s is obtained. The production rate $\chi_2$ of horizontally polarized photons at the second crystal can be estimated by \begin{equation}\label{eqs5} \chi_2=\chi_1 \frac{\int_0^\infty \vert \Omega_H\left( t,L\right) \vert^2 dt}{2 \int_0^\infty e^{-\Gamma t} dt}. \end{equation} The denominator represents the total temporal area of the incident VPX pulse, and the numerator that of the generated HPX pulse. The area ratio in Eq.~(\ref{eqs5}) gives the conversion efficiency from vertically to horizontally polarized x-rays.
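The estimate of $\chi_1$ is easily reproduced numerically; evaluating the integral gives the closed form $\Theta = 2\pi\big[1-\cos\tan^{-1}\big(\tfrac{w}{2D}\big)\big]$, and the sketch below, with the parameters quoted above, recovers both $\Theta \approx 5\times 10^{-4}$ sr and $\chi_1 \approx 96343$ counts/s:

```python
import math

A = 200e-3 * 3.7e10        # 200 mCi source activity in decays/s (7.4e9)
D, w = 0.20, 5e-3          # source-crystal distance and crystal width (m)
P_V = 1.0 / 3.0            # probability of vertically polarized emission

# Solid angle of the crystal: Theta = 2*pi*(1 - cos(arctan(w/(2*D))))
Theta = 2.0 * math.pi * (1.0 - math.cos(math.atan(w / (2.0 * D))))
chi_1 = A * Theta / (4.0 * math.pi) * P_V
print(f"Theta = {Theta:.2e} sr, chi_1 = {chi_1:.0f} counts/s")
```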
For the three single pulses in Fig.~3(c), the area ratios are respectively 0.0045, 0.069 and 0.019, and so the HPX photon production rates are in turn estimated to be 430/s, 6600/s and 1800/s. The area ratio of the 1.3 ns pulse in Fig.~3(c) is tripled to 0.013 by using a thicker $^{57}$FeBO$_3$ crystal with $\alpha=100$, $\delta_e=31\Gamma$ and $\delta_g=53\Gamma$. The converted horizontally polarized x-rays are demonstrated in Fig.~\ref{figs2}(b). \begin{figure}[t] \vspace{-0.4cm} \includegraphics[width=0.45\textwidth]{20171029_figS1} \caption{\label{figs1} Illustration of the estimation of the photon counting rates. The distance $D$ between the radioisotope source and the $^{57}$FeBO$_3$ crystal and the crystal width $w$ are used to calculate the solid angle. $\vartheta$ denotes the polar angle in spherical coordinates. } \end{figure} \begin{figure}[b] \vspace{-0.4cm} \includegraphics[width=0.45\textwidth]{20170809_figS2} \caption{\label{figs2} Compared to Fig.~3, the photon counting rates of horizontally polarized x-rays at the second target are tripled by using a thicker $^{57}$FeBO$_3$ crystal with $\alpha=100$, $\delta_e=31\Gamma$ and $\delta_g=53\Gamma$. } \end{figure} \end{widetext} \clearpage \bibliographystyle{apsrev}
{ "attr-fineweb-edu": 1.886719, "attr-cc_en_topic": 12, "domain": "arxiv" }
\section{Introduction} The focus of the paper, as the title suggests, is on the quantitative theory of periodic homogenization of divergence type elliptic operators. Lately, there has been much interest in effective estimates on convergence rates for homogenization problems associated with linear elliptic operators in divergence form, see for example \cite{GM-JEMS}-\cite{GM-Acta}, \cite{ASS1}-\cite{ASS3}, \cite{Kenig-Lin-Shen-L2}, \cite{Kenig-Lin-Shen-CPAM}, \cite{Ar-Shen}. A common feature of these papers is that they all establish upper bounds on the speed of convergence of homogenization; in other words, these papers, among other results, measure how fast the homogenization holds. However, results showing limitations on the speed of the process, i.e. estimating to which extent homogenization may decelerate (given that the homogenization takes place, of course), seem to be extremely scarce in the literature. The few instances of this type of set-up in the divergence setting which we are aware of are the following. It is shown in \cite{ASS2}-\cite{ASS3} that the Dirichlet boundary value homogenization in $L^p$ cannot be faster than a certain algebraic rate depending on $1\leq p<\infty$ and on the geometry of the domain (see \cite[Theorem~1.6]{ASS2} and \cite[Theorem 1.3]{ASS3}). The next one, studied in \cite{Prange}, is related to boundary layer problems set in halfspaces, and shows that depending on the position of the halfspace, convergence of the solution to its boundary layer tail can be slower than any algebraic rate (see \cite[Theorem 1.3]{Prange}). Finally, in \cite{BBMM} the existence of one-dimensional examples in almost periodic homogenization is proved, with fixed boundary and source terms and oscillating coefficients, where homogenization of solutions in $L^2$ is not faster than a polynomial rate. Here we will be interested in developing tools that address how slow the convergence can actually be.
We start with the discussion of the first problem considered in this paper, and will introduce part of the key ideas in that setting. \vspace{0.1cm} For a scalar $a\in {\mathbb R}$ and a unit vector $n\in {\mathbb R}^d$ consider the following Dirichlet problem \begin{equation}\label{bdry-layer-system-recall} \begin{cases} -\nabla \cdot A(y) \nabla v(y) =0 ,&\text{ $y \in \Omega_{n,a}:=\{y\in {\mathbb R}^d: \ y\cdot n >a \}$}, \\ v(y)=v_0(y) ,&\text{ $ y \in \partial \Omega_{n,a} $.} \end{cases} \end{equation} The main assumptions concerning \eqref{bdry-layer-system-recall} which will be in force throughout are: \begin{itemize}[label=\textbullet] \bulitem{(Periodicity)} The coefficients $A$ and boundary data $v_0$ are ${\mathbb Z}^d$-periodic, that is, for any $y\in {\mathbb R}^d$ and any $h\in {\mathbb Z}^d$ $$ A(y+h)=A(y) \qquad \text{ and } \qquad v_0(y+h) = v_0(y), $$ \bulitem{(Ellipticity)} There exists a constant $\lambda>0$ such that for any $x \in {\mathbb R}^d$ and any $y\in {\mathbb R}^d$ one has $$ \lambda |x|^2 \leq x^T A(y) x \leq \lambda^{-1} |x|^2. $$ \bulitem{(Regularity)} All elements of $A$ and boundary data $v_0$ are infinitely smooth. \end{itemize} We will refer to (\ref{bdry-layer-system-recall}) as the \emph{boundary layer} problem. Problems of this type emerge in the theory of periodic homogenization of the Dirichlet problem for divergence type elliptic operators with periodically oscillating coefficients and boundary data. Understanding the well-posedness of problems of the form (\ref{bdry-layer-system-recall}) in a suitable class of solutions, and more importantly the asymptotics of solutions far away from the boundaries, are among the key steps toward obtaining quantitative results for homogenization of the mentioned class of Dirichlet problems. For detailed discussions concerning the role of (\ref{bdry-layer-system-recall}) we refer the reader to \cite{GM-JEMS}, \cite{GM-Acta} and \cite{Prange}.
Below we will briefly review some known results concerning boundary layer problems. Interestingly, the asymptotic analysis of (\ref{bdry-layer-system-recall}) depends on certain number-theoretic properties of the normal vector $n$. \vspace{0.2cm} \textbf{Rational directions.} We say that $n\in {\mathbb R}^d$ is a \emph{rational} vector, and write $n\in {\mathbb R} {\mathbb Q}^d$, if $n$ is a scalar multiple of a vector with all components being rational numbers. One can easily see that if $n\in {\mathbb R}^d$ has length one, then $n$ is rational iff $n=\xi/|\xi| $ for some non-zero $\xi\in {\mathbb Z}^d$. In this case it is well-known (see e.g. \cite[Lemma 4.4]{AA99}) that there exists a smooth variational solution $v$ to (\ref{bdry-layer-system-recall}), which is unique given some decay conditions on the gradient, and such that there is a constant $v^{a,\infty}$ for which $v(y) \to v^{a,\infty}$ exponentially fast as $y\cdot n \to \infty$, where the convergence is uniform with respect to tangential directions. Despite these nice convergence properties, rational directions have the drawback that the constant $v^{a,\infty}$ may actually depend on $a$, i.e. translating the hyperplane in the direction of $n$ may lead to different limits at infinity. \vspace{0.2cm} \textbf{Diophantine directions.} Following \cite{GM-Acta}, for a unit vector $n\in {\mathbb S}^{d-1}$ set $P_{n^\bot} $ to be the operator of orthogonal projection on the hyperplane orthogonal to $n$. Fix $l>0$ so that $(d-1)l>1$ and for $\kappa>0$ define \begin{equation}\label{class-diophantine} \mathcal{A}_\kappa=\big\{ n\in {\mathbb S}^{d-1}: | P_{n^\bot}(\xi) | \geq \kappa |\xi|^{-l} \text{ for all } \xi \in {\mathbb Z}^d \setminus \{0\} \big\}. \end{equation} A vector $n\in {\mathbb S}^{d-1}$ is called \emph{Diophantine} if it is from $\mathcal{A}_\kappa$ for some $\kappa>0$. Clearly, elements of $\mathcal{A}_\kappa$ are non-rational directions.
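For orientation, the condition defining $\mathcal{A}_\kappa$ can be probed numerically. As an illustration (not a proof, since the search below runs over a finite set of lattice vectors), take $d=2$, $l=2$ (so that $(d-1)l>1$) and the golden-ratio direction, which is badly approximable; the quantity $|P_{n^\bot}(\xi)|\,|\xi|^{l}$ then stays bounded away from zero:

```python
import math

# Golden-ratio direction n = (1, phi)/|(1, phi)| in R^2 (badly approximable)
phi = (1.0 + math.sqrt(5.0)) / 2.0
norm = math.hypot(1.0, phi)
n = (1.0 / norm, phi / norm)
n_perp = (-n[1], n[0])

# Candidate kappa: min of |P_{n_perp}(xi)| * |xi|^l over lattice vectors,
# using that |P_{n_perp}(xi)| = |xi . n_perp| in dimension two
l = 2
kappa = min(abs(x1 * n_perp[0] + x2 * n_perp[1]) * math.hypot(x1, x2) ** l
            for x1 in range(-50, 51) for x2 in range(-50, 51)
            if (x1, x2) != (0, 0))
print(f"min of |P(xi)| |xi|^2 over the search box = {kappa:.4f}")
```

The minimum is attained at a small lattice vector and does not degrade as the search range grows, in line with the direction being Diophantine; for a rational direction the same quantity vanishes at some $\xi$.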
Also, it is not hard to verify (see \cite[Section 2]{GM-Acta}) that $\sigma(\mathbb{S}^{d-1} \setminus \mathcal{A}_\kappa)\leq C\kappa^{d-1}$, where $\sigma$ is the surface measure of the unit sphere, and $C$ is a constant depending on $l$. Thus, the last inequality shows that almost all directions are Diophantine. The behaviour of (\ref{bdry-layer-system-recall}) in the case when $n$ is Diophantine has been studied only recently in \cite{GM-JEMS}, where it was proved (Propositions 4 and 5 of \cite{GM-JEMS}) that there exists a smooth variational solution $v$ to (\ref{bdry-layer-system-recall}) which is unique, given some growth conditions, and such that for some constant $v^\infty$ one has $v(y) \to v^\infty$ as $y \cdot n \to \infty$. Here the convergence is locally uniform with respect to tangential directions, and is faster than any polynomial rate in $y\cdot n$. Moreover, the effective constant $v^\infty$ (the \emph{boundary layer tail}) depends on the direction $n$ only, and is independent of $a$, in contrast to the rational case. \vspace{0.2cm} \textbf{Non-rational directions in general.} Here we consider directions which are \emph{irrational}, i.e. from the complement of ${\mathbb R} {\mathbb Q}^d$. Observe that not all irrational directions are Diophantine; therefore the previous two cases do not cover $\mathbb{S}^{d-1}$, the set of all possible directions. In a recent work \cite{Prange}, it was proved that \eqref{bdry-layer-system-recall} has a unique smooth variational solution satisfying certain growth conditions, for which one has convergence toward its boundary layer tail far away from the boundaries (see Theorem \ref{Thm-Prange} for the precise statement). However, the result of \cite{Prange} does not provide any estimates on the rate of convergence given this generality on the normals.
It does, however, show that for irrational directions which are non-Diophantine (meaning they fail to satisfy \eqref{class-diophantine} for any choice of the parameters $\kappa $ and $l$ involved in the definition), one may have convergence slower than any power rate in $y\cdot n$. More precisely, for a smooth and ${\mathbb Z}^2$-periodic function $v_0: \mathbb{T}^2 \to {\mathbb R}$ consider the following boundary value problem \begin{equation}\label{u-on-Omega-n} \Delta v =0 \ \text{in } \Omega_n \qquad \text{ and } \qquad v= v_0 \ \text{on } \partial \Omega_n, \end{equation} where $\Omega_n = \{x\in {\mathbb R}^2: \ x\cdot n>0 \}$. Clearly, this problem is of type \eqref{bdry-layer-system-recall} with the matrix of coefficients equal to the $2\times 2$ identity matrix. Then \cite[Theorem 1.3]{Prange} shows that if $n\notin {\mathbb R} {\mathbb Q}^2$ is an arbitrary non-Diophantine direction, then for any $p>0$ and any $R>0$ there exists a smooth function $v_0: \mathbb{T}^2 \to {\mathbb R}$ and a sequence $\lambda_k \nearrow \infty$ such that if $v$ solves (\ref{u-on-Omega-n}) with boundary data $v_0$, then for each $k=1,2,...$ and all $y' \in \partial \Omega_{n} \cap B(0, R)$ one has \begin{equation}\label{slow-conv-Prange} | v(y' + \lambda_k n) - v^\infty | \geq \lambda_k^{-p}, \end{equation} where the constant $v^\infty$ is the corresponding boundary layer tail. Let us note that the left-hand side of (\ref{slow-conv-Prange}) converges to zero as $k\to \infty$, since, as we have just said, for irrational directions solutions converge to their boundary layer tails. The proof of (\ref{slow-conv-Prange}) constructs $v_0$ with Fourier spectrum supported in a subset of ${\mathbb Z}^2$ on which the normal $n$ fails to satisfy the Diophantine condition. Then choosing Fourier coefficients with an appropriate decay, combined with the special structure of the spectrum of $v_0$, immediately leads to the conclusion.
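The mechanism can be made transparent in the constant-coefficient setting: the bounded harmonic extension of a single boundary mode $e^{2\pi i \xi\cdot y}$ to $\Omega_n$ decays along the normal as $e^{-2\pi |P_{n^\bot}(\xi)|\, y\cdot n}$, so a mode whose frequency is nearly parallel to $n$ relaxes only on the scale $(2\pi|P_{n^\bot}(\xi)|)^{-1}$. The sketch below is illustrative only: the directions used, parametrized by a small $\varepsilon$, are rational stand-ins for a poorly approximated normal, and the printed quantity is the distance at which such a mode has merely decayed by half:

```python
import math

def tangential_norm(n, xi):
    """|P_{n^perp}(xi)|: length of xi minus its component along n."""
    dot = xi[0] * n[0] + xi[1] * n[1]
    return math.hypot(xi[0] - dot * n[0], xi[1] - dot * n[1])

# Directions n = (1, eps)/|(1, eps)|: the lattice mode xi = (1, 0) has a
# tangential component of size ~eps, so its decay rate 2*pi*|P(xi)| vanishes
# as eps -> 0 and the relaxation distance blows up like 1/eps.
for eps in (1e-1, 1e-3, 1e-5):
    norm = math.hypot(1.0, eps)
    n = (1.0 / norm, eps / norm)
    rate = 2.0 * math.pi * tangential_norm(n, (1.0, 0.0))
    lam_half = math.log(2.0) / rate  # distance where the mode decays to 1/2
    print(f"eps = {eps:.0e},  half-decay distance = {lam_half:.3g}")
```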
We stress that this slow convergence result of \cite{Prange} works for any irrational non-Diophantine direction; however, it leaves open the question whether one can go beyond algebraic rates of convergence. Perhaps more intriguingly, the case of the Laplace operator does not give insight into the case of variable coefficient operators, since in the Laplacian setting one has an explicit form of the solutions, which essentially determines the analysis. \vspace{0.3cm} \noindent Throughout the paper we use the following notation and conventions. \vspace{0.3cm} \begin{tabbing} \hspace{1.7cm}\=\kill \vspace{0.1cm} $\mathbb{T}^d$ \> unit torus of ${\mathbb R}^d$, i.e. the quotient space ${\mathbb R}^d/ {\mathbb Z}^d$, where ${\mathbb Z}^d$ is the integer lattice, \\ \vspace{0.1cm} $\mathbb{S}^{d-1}$ \> unit sphere of ${\mathbb R}^d$, \\ \vspace{0.1cm} ${\mathrm O}(d)$ \> group of $d\times d$ orthogonal matrices, \\ \vspace{0.1cm} $M^T$ \> transpose of a matrix $M$, \\ \vspace{0.1cm} $x\cdot y$ \> inner product of $x,y\in {\mathbb R}^d$, \\ \vspace{0.1cm} $|x|$ \> Euclidean length of $x\in {\mathbb R}^d$, \\ \vspace{0.1cm} $\Omega_{n,a}$ \> halfspace $\{ x\in {\mathbb R}^d: \ x\cdot n>a \}$, where $n\in \mathbb{S}^{d-1}$ and $a\in {\mathbb R}$, \\ \vspace{0.1cm} $\Omega_n$ \> halfspace $\Omega_{n,0}$, \\ \vspace{0.1cm} $B_r(x)$ \> or $B(x,r)$ both stand for an open ball with center at $x\in {\mathbb R}^d$ and radius $r>0$, \\ \vspace{0.1cm} $\Subset$ \> compact inclusion for sets. \end{tabbing} \vspace{0.3cm} Positive generic constants are denoted by various letters $C, C_1,c,...$, and if not specified, may vary from formula to formula. For two quantities $x,y$ we write $x\lesssim y$ for the inequality $x\leq C y$ with an absolute constant $C$, and $x\asymp y$ for the double inequality $C_1 x \leq y \leq C_2 x$ with absolute constants $C_1,C_2$. Throughout the text the word ``smooth'', unless otherwise specified, means differentiable of class $C^\infty$.
The term ``modulus of continuity'' is everywhere understood in accordance with Definition \ref{def-mod-contin}. The phrase ``boundary layer tail'' refers to the constant determined by Theorem \ref{Thm-Prange}. A \emph{domain} is an open and connected subset of Euclidean space. We also adopt the summation convention of repeated indices (Einstein summation convention). \subsection{Main results} The first class of problems we study in this article is motivated by the results discussed above, and by the importance of boundary layer problems in periodic homogenization of the Dirichlet problem. Most notably, we will show that in the case of irrational non-Diophantine normals convergence of solutions to (\ref{bdry-layer-system-recall}) towards their boundary layer tails can be arbitrarily slow. Next, in the second part of the paper, we will apply our methods developed for the analysis of (\ref{bdry-layer-system-recall}), combined with some ideas from our papers \cite{ASS1}-\cite{ASS3} written in collaboration with H. Shahgholian and P. Sj\"{o}lin, to construct a Dirichlet problem for elliptic operators in divergence form set in a bounded domain, where boundary value homogenization holds at a speed slower than any rate given in advance. We now proceed to the formulations of the main results. In order to measure the speed of convergence we consider the following class of functions. \begin{definition}\label{def-mod-contin} We say that a function $\omega$ is a \verb|modulus of continuity| if it has the following properties: \begin{itemize}[label=\textbullet] \bulitem{} $\omega :[0,\infty) \to (0,\infty)$ is continuous, \vspace{0.1cm} \bulitem{} $\omega$ is one-to-one, decreasing, and $\lim\limits_{t \to \infty} \omega(t) =0$. \end{itemize} \end{definition}
For our first result we will impose a structural restriction on coefficients $A$. Namely we assume that \begin{equation}\label{cond-on-A} \text{ there exists } \ \ 1\leq \gamma \leq d \ \ \text{ such that } \ \ \partial_{y_\alpha} A^{\gamma \alpha } \equiv 0. \end{equation} In other words we require one of the columns of $A$ to be divergence free as a vector field. This assumption is technical and is due to our proof of Theorem \ref{Thm-slow-variable}. It is used to control the contribution of boundary layer correctors in the asymptotics of boundary layer tails (see, in particular, inequality (\ref{Z15})). The following is our main result concerning the slow convergence phenomenon of boundary layers. \begin{theorem}\label{Thm-slow-variable} Let $\omega$ be any modulus of continuity and let $R>0$ be fixed. Then, there exists a unit vector $n\notin {\mathbb R}{\mathbb Q}^d$, a smooth function $v_0:\mathbb{T}^d \to {\mathbb R}$, and a sequence of positive numbers $\{\lambda_k\}_{k=1}^\infty$ growing to infinity, such that if $v$ solves (\ref{bdry-layer-system-recall}) under condition (\ref{cond-on-A}) on the operator, and with $n$ and $v_0$ as specified here, then for any $k=1,2,...$ and any $y'\in \partial \Omega_{n,0} \cap B(0,R)$ one has \begin{equation}\label{z1} | v(y' + \lambda_k n ) - v^\infty | \geq \omega(\lambda_k), \end{equation} where $v^\infty$ is the corresponding boundary layer tail. \end{theorem} \begin{remark} Observe that $v^\infty$ being the boundary layer tail, implies that the left-hand side of (\ref{z1}) decays as $k\to \infty$, and hence the lower bound is non-trivial. Next, notice that we have fixed the halfspace on the direction $n\in \mathbb{S}^{d-1}$ by setting $a=0$ in Theorem \ref{Thm-slow-variable}. This does not lessen the generality, since the case of arbitrary $a\in {\mathbb R}$ can be recovered by a change of variables. 
However, we do not know if in general one can take the sequence $\{\lambda_k\}$ and the boundary data $v_0$ independently of $a$. \end{remark} Finally, let us note that the result of Theorem \ref{Thm-slow-variable} shows that there is \emph{no} lower bound for the speed of convergence on the set of irrational directions; in other words, the convergence can in fact be arbitrarily slow. This is in sharp contrast with the case of Diophantine normals, where one has convergence faster than any power rate. \vspace{0.2cm} Our next concern is the question of Dirichlet boundary value homogenization for divergence type elliptic operators in bounded domains. Assume we are given a coefficient matrix $A=(A^{\alpha \beta}(x))_{\alpha , \beta=1}^d: X \to {\mathbb R}^{d\times d}$, defined on some domain $X\subset {\mathbb R}^d$ ($d\geq 2$) and having the following properties: \begin{itemize} \item[(A1)] for each $1\leq \alpha , \beta \leq d$ we have $A^{\alpha \beta} \in C^\infty(X)$, \vspace{0.2cm} \item[(A2)] there exist constants $0<\lambda\leq \Lambda<\infty$ such that $$ \lambda |\xi|^2 \leq A^{\alpha \beta} (x) \xi_\alpha \xi_\beta \leq \Lambda |\xi|^2, \qquad \forall x\in X, \ \forall \xi \in {\mathbb R}^d. $$ \end{itemize} For a function $g\in C^\infty( \mathbb{T}^d)$ and a bounded subdomain $D\subset X$ with $C^\infty$ boundary, consider the following problem \begin{equation}\label{Dirichlet-bdd-domains} -\nabla \cdot A(x) \nabla u_\varepsilon(x) = 0 \text{ in } D \qquad \text{ and } \qquad u_\varepsilon(x) = g (x/ \varepsilon) \text{ on } \partial D, \end{equation} where $\varepsilon>0$ is a small parameter. Along with \eqref{Dirichlet-bdd-domains} consider the corresponding homogenized problem, which reads \begin{equation}\label{Dirichlet-bdd-domains-homo} -\nabla \cdot A(x) \nabla u_0(x) = 0 \text{ in } D \qquad \text{ and } \qquad u_0(x) = \int_{\mathbb{T}^d} g(y) dy \text{ on } \partial D.
\end{equation} Let us emphasize that we do \emph{not} impose any structural restrictions on $A$, nor do we assume that $A$ is periodic. This is in view of the fact that no interior homogenization takes place in (\ref{Dirichlet-bdd-domains}). \begin{theorem}\label{Thm-slow-Dirichlet} Let $A$ be as above, defined on $X$ and satisfying (A1)-(A2), and let $\omega$ be any modulus of continuity. Then, there exist bounded, non-empty convex domains $D\subset X$ and $D'\Subset D$ with $C^\infty$ boundaries, and a real-valued function $g \in C^\infty(\mathbb{T}^d)$ such that if $u_\varepsilon$ is the solution to \eqref{Dirichlet-bdd-domains} for $\varepsilon>0$, and $u_0$ is the solution to \eqref{Dirichlet-bdd-domains-homo}, then for some sequence of positive numbers $\{\varepsilon_k\}_{k=1}^\infty$ strictly decreasing to 0, one has the following: \begin{itemize} \item[{\normalfont a)}] $ |u_{\varepsilon_k}(x) - u_0(x)| \geq \omega(1/ \varepsilon_k ), \qquad \forall x\in D', \ k=1,2,...,$ \vspace{0.1cm} \item[{\normalfont b)}] $|u_\varepsilon(x) - u_0(x)|\to 0 \ \ \text{ as } \varepsilon \to 0 , \ \ \forall x\in D$. \end{itemize} \end{theorem} \vspace{0.2cm} This result should be compared with \cite{ASS1}-\cite{ASS3}, where under certain geometric conditions on the boundary of the domain (such as \emph{strict} convexity, or flat pieces having Diophantine normals as considered in \cite{ASS3}) it is proved that periodic homogenization of boundary value problems for elliptic operators in divergence form holds pointwise, as well as in the $L^p$ norm for $1\leq p <\infty$, with an algebraic rate in $\varepsilon$. Here, however, we see that, again due to the geometry of the domain, the convergence can slow down arbitrarily. Thus, relying merely on the smoothness of the data involved in the problem, one cannot get a meaningful quantitative theory for homogenization problems with divergence structure as above.
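\begin{remark}
Note that the boundary data in the homogenized problem (\ref{Dirichlet-bdd-domains-homo}) is simply the mean value of $g$ over the unit torus. In particular, for any $g$ with $\int_{\mathbb{T}^d} g(y) dy = 0$, such as $g(y)=\sin(2\pi y_1)$, one has $u_0 \equiv 0$ by the uniqueness of solutions to (\ref{Dirichlet-bdd-domains-homo}), and the difference $u_\varepsilon - u_0 = u_\varepsilon$ then measures precisely the effect of the oscillations of the boundary data.
\end{remark}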
\section{Preliminaries on solutions to boundary layer problems}\label{sec-sol-prelim} The aim of this section is to give a precise meaning to a solution of problem (\ref{bdry-layer-system-recall}). The well-posedness of (\ref{bdry-layer-system-recall}) in the non-rational setting was first studied by G\'{e}rard-Varet and Masmoudi \cite{GM-JEMS}, later by the same authors in \cite{GM-Acta}, and by Prange in \cite{Prange}, all in connection with the homogenization of the Dirichlet problem. Here, for the exposition, we mainly follow \cite{GM-Acta} and \cite{Prange}. Let us also note that the results presented in this section hold for strictly elliptic systems; however, we will only use them for scalar equations, and thus formulate the results in the setting of a single equation only. Keeping the notation of problem \eqref{bdry-layer-system-recall}, fix some matrix $M\in {\mathrm O}(d)$ such that $M e_d= n$. Then in (\ref{bdry-layer-system-recall}) make the change of variables $y=Mz$, transforming the problem to \begin{equation}\label{Z9} \begin{cases} -\nabla_z \cdot B(Mz) \nabla_z \textbf{v}(z) =0 ,&\text{ $z_d>a $}, \\ \textbf{v} (z)= v_0( Mz ) ,&\text{ $ z_d= a $,} \end{cases} \end{equation} where $\textbf{v}(z) = v(Mz) $ and the new matrix $B$ is defined by $B = M^T A M$. From the definition of $M$ we have $M=[N| n]$, where $N$ is a matrix of size $d\times (d-1)$. The solution to (\ref{Z9}) is then sought in the form $$ \textbf{v}(z) = V(Nz', z_d), \ \text{ where } \ V(\cdot, t) \text{ is } {\mathbb Z}^d \text{-periodic for any } t\geq a, $$ and $z=(z', z_d)\in {\mathbb R}^{d-1}\times {\mathbb R}$. This leads to the following problem \begin{equation}\label{bl-system-d+1} \begin{cases} - \left( \begin{array}{c} N^T \nabla_\theta \\ \partial_t \\ \end{array} \right) \cdot B(\theta + tn) \left( \begin{array}{c} N^T \nabla_\theta \\ \partial_t \\ \end{array} \right) V(\theta, t) =0 ,&\text{ $ t > a $}, \\ V(\theta, t) = v_0 ( \theta + a n ), &\text{ $ t = a $}.
\end{cases} \end{equation} The authors of \cite{GM-Acta} then show that (\ref{bl-system-d+1}) has a smooth solution $V$ in the infinite cylinder $\mathbb{T}^d \times [a,\infty)$ satisfying certain energy estimates. Moreover, if $V$ solves (\ref{bl-system-d+1}), then one can easily see that $\textbf{v}(z) = V(Nz', z_d)$ gives a solution to \eqref{Z9}. The proof of this well-posedness result is not hard; what is rather involved is the analysis of the asymptotics of $V(\cdot , t)$ as $t \to \infty$. A proper understanding of this problem for $V$ gives the behaviour of solutions to (\ref{bdry-layer-system-recall}) far away from the boundary of the corresponding halfspace. In this regard, it was proved in \cite{GM-JEMS} that if $n$ is Diophantine in the sense of (\ref{class-diophantine}), then there exists a constant $v^\infty$, depending on $n$ and independent of $a$, such that $|v(y)-v^\infty|\leq C_\alpha (y\cdot n)^{-\alpha} $ for any $\alpha>0$ as $y\cdot n \to \infty$, and the convergence is locally uniform with respect to the tangential variables. Shortly after \cite{GM-JEMS} and \cite{GM-Acta}, a refined analysis of the well-posedness of problems of type (\ref{bdry-layer-system-recall}) was given by Prange \cite{Prange}. In particular, he established the following result. \begin{theorem}\label{Thm-Prange} {\normalfont{(Prange \cite[Theorem 1.2]{Prange})}} Assume $n \notin {\mathbb R} {\mathbb Q}^d$.
Then \begin{itemize} \item[{\normalfont{1.}}] there exists a unique solution $v\in C^\infty(\overline{\Omega_{n,a} }) \cap L^\infty (\Omega_{n,a})$ of (\ref{bdry-layer-system-recall}) such that $$ \qquad || \nabla v ||_{L^\infty(\{y \cdot n >t\} ) } \to 0, \text{ as } t \to \infty, $$ $$ \int_a^\infty || (n \cdot \nabla) v ||_{L^\infty(\{ y\cdot n -t=0 \}) }^2 dt <\infty, $$ \item[{\normalfont{2.}}] and a boundary layer tail $v^\infty \in {\mathbb R}$ independent of $a$ such that $$ v(y) \to v^\infty, \text{ as } y\cdot n \to \infty, \text{ where } y\in \Omega_{n,a}, $$ and the convergence is locally uniform with respect to tangential directions. \end{itemize} \end{theorem} Let us fix here that by \emph{boundary layer tail} we always mean the constant to which solutions of problem (\ref{bdry-layer-system-recall}) converge away from the boundary of the corresponding halfspace. \section{Arbitrarily slow convergence for the Laplacian}\label{sec-Laplace} The objective of the present section is to prove Theorem \ref{Thm-slow-variable} for the Laplacian. The reason for separating the case of the Laplace operator is twofold. First, we will introduce part of the key ideas that will be used in the general case of variable coefficient operators. Second, the setting of the Laplacian is essentially self-contained, and is more transparent in comparison with the general case, which is based on a different approach. We prove the following result. \begin{theorem}\label{Thm-slow-conv-Laplace} Theorem \ref{Thm-slow-variable} holds when the operator in (\ref{bdry-layer-system-recall}) is the Laplacian. \end{theorem} The proof of Theorem \ref{Thm-slow-conv-Laplace} is based on a series of observations and preliminary statements. We will use the connection of problem (\ref{u-on-Omega-n}) with the corresponding problem \eqref{bl-system-d+1} set on the cylinder $\mathbb{T}^d \times [0,\infty)$ (cf. \cite[Theorem 7.1]{Prange}).
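\begin{remark}
The mechanism behind the slow convergence is visible already on a single Fourier mode. Namely, if $M=[N|n] \in {\mathrm O}(d)$ with $M e_d = n$, and $v_0(\theta) = \cos( 2\pi \xi \cdot \theta )$ for some fixed $\xi \in {\mathbb Z}^d \setminus \{0\}$, then a direct computation shows that
$$
V(\theta, t) = e^{ -2\pi |N^T \xi| t } \cos( 2\pi \xi \cdot \theta )
$$
solves the corresponding problem \eqref{bl-system-d+1} for the Laplacian with $a=0$. Thus $V(\cdot, t)$ converges to its boundary layer tail $c_0(v_0)=0$ exponentially, but with exponent $2\pi |N^T \xi|$, which is small when $n$ is nearly parallel to $\xi$. The construction below exploits an entire sequence of such near-resonant frequencies.
\end{remark}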
For that, fix a matrix $M\in {\mathrm O} (d)$ such that $M e_d = n$; clearly $M$ is of the form $M=[N | n]$, where $N$ is a matrix of size $d\times (d-1)$. Then, writing (\ref{bl-system-d+1}) for the case when the original operator in (\ref{bdry-layer-system-recall}) is the Laplacian and $a=0$, we get \begin{equation}\label{V-of-u} \begin{cases} \begin{vmatrix} N^T \nabla_\theta \\ \partial_t \end{vmatrix}^2 V(\theta , t) =0 ,&\text{ $t>0, \ \theta \in \mathbb{T}^d $}, \\ V(\theta , 0) =v_0(\theta) ,&\text{ $ \theta \in \mathbb{T}^d $,} \end{cases} \end{equation} where, as before, $V(\cdot, t)$ is ${\mathbb Z}^d$-periodic for all $t\geq 0$, and the action of the operator on $V$ is understood as $(N^T \nabla_\theta, \partial_t )\cdot (N^T \nabla_\theta, \partial_t) V$. As we have discussed in Section \ref{sec-sol-prelim}, the unique solution $v$ of \eqref{u-on-Omega-n} (in the sense of Theorem \ref{Thm-Prange}) is given by \begin{equation}\label{v-in-terms-of-V} v(y) = v(Mz) = V(Nz', z_d), \text{ where } y=Mz \text{ with } y\in \Omega_n \text{ and } z\in {\mathbb R}^d_+. \end{equation} \noindent In this setting the solution $V$ of \eqref{V-of-u} can be computed explicitly, namely we have \begin{equation}\label{V-series} V(\theta, t) = \sum\limits_{\xi \in {\mathbb Z}^d} c_\xi (v_0) e^{ -2\pi |N^T \xi| t } e^{ 2\pi i \xi \cdot \theta }, \end{equation} where $c_\xi(v_0)$ is the $\xi$-th Fourier coefficient of $v_0$. In view of (\ref{V-series}) it is clear that the boundary layer tail equals $c_0(v_0)$. We will first establish a slow convergence result for $V$, from which we will then deduce Theorem \ref{Thm-slow-conv-Laplace}. Observe that by (\ref{V-series}) and Parseval's identity, for any $t\geq 0$ we have \begin{multline}\label{Parseval} || V(\theta, t) - c_0(v_0) ||_{L^\infty( \mathbb{T}^d ) }^2 \geq || V(\theta, t) - c_0(v_0) ||_{L^2( \mathbb{T}^d ) }^2 = \\ \sum\limits_{\xi \in {\mathbb Z}^d \setminus \{0\}} |c_\xi(v_0)|^2 e^{-4\pi |N^T \xi| t} =: \mathcal{S}(t; v_0).
\end{multline} \begin{prop}\label{Thm-slow-conv} For any modulus of continuity $\omega$ there exist a unit vector $n\notin {\mathbb R} {\mathbb Q}^d$, a smooth function $v_0 : \mathbb{T}^d \to {\mathbb R}$, and a sequence of positive numbers $t_k \nearrow \infty$, $k=1,2,...$, such that $$ \mathcal{S}(t_k; v_0 ) \geq \omega( t_k), \qquad k=1,2,..., $$ where $\mathcal{S}$ is defined by (\ref{Parseval}). \end{prop} As we can observe from (\ref{Parseval}), the convergence properties of $\mathcal{S}$ depend on the quantity $|N^T \xi|$, which is the subject of the next result. \begin{lem}\label{Lem-bad-normals} Given any modulus of continuity $\omega$, there exist a unit vector $n\notin {\mathbb R} {\mathbb Q}^d$ and an infinite set $\Lambda \subset {\mathbb Z}^d \setminus \{ 0\}$ such that for any matrix $M=[N |n] \in {\mathrm O}(d)$ one has $$ | N^T \xi | \leq \omega(|\xi |), \qquad \forall \xi \in \Lambda. $$ \end{lem} \vspace{0.2cm} Here, as usual, $N$ is the $d\times (d-1)$ matrix formed by the first $d-1$ columns of $M$; obviously $M e_d=n$. This lemma is one of the key statements used in the general case as well. \vspace{0.1cm} \begin{proof}[Proof of Lemma \ref{Lem-bad-normals}] Set $\xi^{(1)}: = e_1 = (1,0,...,0)\in {\mathbb R}^d $ and let $\Gamma_1 \subset {\mathbb S}^{d-1}$ be an open neighbourhood of $\xi^{(1)}$ with diameter less than $\omega^2( | \xi^{(1)} | ) / ( 10 |\xi^{(1)}|^2 ) $. Due to the density of rational directions\footnote{For any non-zero $\nu \in {\mathbb Q}^d$ the intersection of the ray starting at $0\in {\mathbb R}^d$ and passing through $\nu$, with the sphere $\mathbb{S}^{d-1}$, is a rational vector of unit length.
Hence the density of rational directions.} there exists a non-zero $\xi^{(2)} \in {\mathbb Z}^d$ such that $|\xi^{(2)}| \geq 2$ and $$ 0< \left| \frac{ \xi^{(2)} }{ |\xi^{(2)}| } - \frac{ \xi^{(1)} }{ |\xi^{(1)}| } \right| \leq \mathrm{diam}(\Gamma_1) \leq \frac{\omega^2( | \xi^{(1)} | )}{ 10 |\xi^{(1)}|^2 }. $$ Using the same idea, we then inductively construct a sequence $\{ \xi^{(k)} \} \subset {\mathbb Z}^d \setminus \{0\}$ satisfying $k \leq | \xi^{(k)} | < |\xi^{(k+1)}| $ and such that for the unit vectors $r_k = \frac{ \xi^{(k)} }{ |\xi^{(k)}| }$ we get \begin{equation}\label{xi-k} 0< \left| r_{k+1} - r_k \right| \leq \frac{\omega^2( | \xi^{(k)} | )}{ 10^k |\xi^{(k)}|^2 } \end{equation} for each $k=1,2,...$ . By (\ref{xi-k}), for $k$ large enough one has $|r_{k+1} -r_k| <10^{-k}$; therefore the sequence $\{ r_k \}$ is Cauchy, hence convergent. We denote its limit by $n$, which is obviously a unit vector. We claim that $n\notin {\mathbb R}{\mathbb Q}^d$, which is due to the fast convergence of the sequence\footnote{This is in analogy with a standard fact in Diophantine approximation theory, that the sum of a very fast converging series of rationals is irrational.} $\{r_k\}$. Indeed, assume for contradiction that $n\in {\mathbb R} {\mathbb Q}^d$. As $|n|=1$, using the rationality assumption it is easy to see that there exists a non-zero $\xi_0 \in {\mathbb Z}^d$ such that $n=\xi_0 / |\xi_0|$. By the monotonicity of $|\xi^{ (k) }|$ and $\omega$, along with (\ref{xi-k}), we get \begin{multline*} | (n\cdot e_1)^2 - (r_k \cdot e_1)^2 | \leq 2 | n\cdot e_1 - r_k \cdot e_1 | \leq 2| n - r_k | \leq 2\sum\limits_{j=k}^\infty | r_{j+1 } - r_j | \leq \\ 2 \sum\limits_{j=k}^\infty \frac{\omega^2( |\xi^{ (j) }| )}{10^j |\xi^{ (j) }|^2 } \leq \frac{20}{9} \frac{1}{10^k} \frac{\omega^2( |\xi^{ (k) }| )}{ |\xi^{ (k) }|^2 }, \ \ k=1,2,... \ .
\end{multline*} By the rationality of $n$ we have $n\cdot e_1 = \frac{p_0}{ |\xi_0| }$ with $p_0 \in {\mathbb Z} $, and for $r_k$ we have $ r_k \cdot e_1 = \frac{p_k}{ |\xi^{ (k) }| } $ with some $p_k\in {\mathbb Z}$. Hence, from the last inequality we get $$ \biggl| p_0^2 |\xi^{ (k) }|^2 - p_k^2 |\xi_0 |^2 \biggr| \leq \frac{20}{9} \frac{1}{10^k} \omega^2( |\xi^{ (k) }| ) |\xi_0 |^2. $$ Since the left-hand side is an integer and is less than 1 in absolute value for $k$ large enough, it must be 0. From here we conclude that the sequence $|r_k \cdot e_1 |$ is eventually constant. In our notation this implies \begin{equation}\label{p-k-const} \frac{p_k}{ | \xi^{(k)} | } = \pm \frac{p_{k+1}}{ | \xi^{(k+1)} | } , \ k \geq k_0 , \end{equation} where $p_k$ is an integer, representing the first coordinate of $\xi^{(k)}$, and $k_0$ is a large integer. Now, if we have equality in the last expression with the minus sign, we get $$ \frac{ 1 }{ 10^k | \xi^{(k)} |^2 } \geq | r_{k+1} - r_k | \geq | r_{k+1} \cdot e_1 - r_k \cdot e_1 | = \frac{2 |p_k|}{ | \xi^{(k)} | }, $$ which implies that $p_k=0$, and hence $p_{k+1}=0$ as well by \eqref{p-k-const}. We thus see that (\ref{p-k-const}), in either case of the sign, forces equality of the first components of $r_k$ and $r_{k+1}$. But then the same argument with $e_1$ replaced by the remaining vectors of the standard basis of ${\mathbb R}^d$ would lead to equality of all corresponding components of $r_k$ and $r_{k+1}$. The latter contradicts the fact that $r_k \neq r_{k+1}$, which we have from (\ref{xi-k}). Hence the proof that $n$ is not a rational direction is complete. We now set $\Lambda = \{\xi^{(k)}: \ k=1,2,... \}$ and proceed to the proof of the claimed estimate of the lemma.
By the orthogonality of $M$, for any $\xi \in \Lambda $ we have $$ |\xi|^2 = |M^T \xi|^2 = | N^T \xi |^2 + |n\cdot \xi|^2, $$ which, combined with the Cauchy-Schwarz inequality, implies \begin{equation}\label{N-T-square} | N^T \xi |^2 = | \xi |^2 - |n\cdot \xi|^2 = \big(| \xi | + |n\cdot \xi| \big) \big( | \xi | - |n\cdot \xi| \big) \leq 2 | \xi |^2 \left( 1- \frac{|n\cdot \xi|}{|\xi|} \right). \end{equation} Now choose $k\in {\mathbb N}$ such that $\xi=\xi^{(k)}$. We get $$ \xi \cdot n = \xi^{(k)} \cdot n = \xi^{(k)} \cdot \left[ \frac{\xi^{(k)}}{ |\xi^{(k)}| } + \sum\limits_{j=k}^\infty (r_{j+1} - r_j ) \right] = |\xi^{(k)}| + \sum\limits_{j=k}^\infty \xi^{(k)} \cdot (r_{j+1} - r_j ) . $$ Hence by (\ref{xi-k}) we have $$ \big| \xi^{(k)} \cdot n - |\xi^{(k)}| \big| \leq \sum\limits_{j=k}^\infty | \xi^{(k)}| | r_{j+1} -r_j | \leq \sum\limits_{j=k}^\infty | \xi^{(k)}| \frac{\omega^2(|\xi^{(j)}|)}{ 10^j | \xi^{(j)} |^2 } \leq \frac{1}{2}\frac{\omega^2(|\xi^{(k)}|) }{|\xi^{(k)}|}. $$ We thus get $$ \left| 1- \frac{|n\cdot \xi^{(k)}|}{|\xi^{(k)}|} \right| \leq \frac{\omega^2(|\xi^{(k)}|) }{2 |\xi^{(k)}|^2}. $$ From here, getting back to (\ref{N-T-square}), we obtain $$ | N^T \xi |^2 \leq \omega^2(|\xi|), $$ completing the proof of the lemma. \end{proof} The following remarks will be used later on. \begin{remark}\label{rem-large-gap} One may easily observe from the proof, and the density of rational directions, that given any $\tau>1$ it is possible to construct $\Lambda$ in such a way that for any $\xi,\eta \in \Lambda $ with $|\xi|<|\eta|$ one has $\tau |\xi|<|\eta|$. \end{remark} \begin{remark}\label{rem-bad-normals-dense-measure-0} The set of normal directions satisfying Lemma \ref{Lem-bad-normals} is dense on $\mathbb{S}^{d-1}$; however, it necessarily has measure 0 if $\omega$ decreases faster than any polynomial rate. The density follows from the proof of the lemma, as there one may start the construction in a neighbourhood of any rational direction on $\mathbb{S}^{d-1}$ instead of $e_1$.
The measure zero claim is due to the fact that the set of Diophantine directions, in the sense of (\ref{class-diophantine}), has full measure on $\mathbb{S}^{d-1}$, and if $\omega$ decreases sufficiently fast, then any Diophantine direction fails to satisfy Lemma \ref{Lem-bad-normals}. \end{remark} We now give a proof of Proposition \ref{Thm-slow-conv} based on the previous lemma. \bigskip \begin{proof}[Proof of Proposition \ref{Thm-slow-conv}] Define $\omega_1 : [1,\infty) \to {\mathbb R}_+$ by $$ \omega_1(t) = \frac{1}{ 4 \pi \omega^{-1} \left( \frac{1}{e} t^{-2t} \right) }, \qquad t\geq 1. $$ Here $\omega^{-1}$ stands for the inverse function of $\omega$, which exists since $\omega$ is one-to-one. Moreover, without loss of generality we will assume that $\omega_1$ is well-defined for $t\geq 1$, since otherwise we may simply replace the lower bound 1 by a sufficiently large number. It is easy to see from the definition that $\omega_1$ is continuous, one-to-one, and decreases to 0 as $t \to \infty$. We now apply Lemma \ref{Lem-bad-normals} with $\omega_1$ as the modulus of continuity, and let $n$ be the normal direction and $\Lambda$ the index set given by the lemma. We define $v_0\in C^\infty(\mathbb{T}^d)$ as follows. First, arrange the elements of $\Lambda$ in increasing order of their norms, i.e. we let $\Lambda = \{ \xi^{(k)}: \ k=1,2,... \}$, where $| \xi^{(k)}|< | \xi^{(k+1)} |$ for $k=1,2,...$ . Observe that $|\xi^{(k)}|\geq k$ for all $k\in {\mathbb N}$ due to the construction of Lemma \ref{Lem-bad-normals}. For $\xi \in {\mathbb Z}^d$ we let $c_\xi(v_0)$ be the $\xi$-th Fourier coefficient of $v_0$. If $\xi \in \Lambda$ is the $k$-th element of $\Lambda$ in the above arrangement, set $c_{ \pm \xi}(v_0)= |\xi|^{-k}$; otherwise, if neither $\xi$ nor $-\xi$ belongs to $\Lambda$, let $c_{\xi}(v_0)=0$.
The sequence of coefficients constructed in this way defines a smooth function $v_0$, since the Fourier coefficients of $v_0$ decay faster than any polynomial rate. Furthermore, as $c_{-\xi} = c_\xi \in {\mathbb R}$, $v_0$ is a real-valued function. By (\ref{Parseval}) we get $$ \mathcal{S}(t; v_0) = \sum\limits_{k=1}^\infty \frac{2}{ |\xi^{(k)}|^{2k} } e^{ -4\pi |N^T \xi^{(k)}| t }. $$ For each $k\in {\mathbb N}$, choose $t_k$ from the condition that \begin{equation}\label{g-t-k} \omega(t_k) = \frac{1}{e} |\xi^{(k)}|^{-2k}, \qquad k=1,2,... \ . \end{equation} By construction we have $t_k \nearrow \infty$. To prove that $\mathcal{S}(t_k; v_0) \geq \omega(t_k) $ it is enough to show that $ e^{ -4\pi |N^T \xi^{(k)}| t_k } |\xi^{(k)}|^{-2k} \geq \omega(t_k) $, and for this it is sufficient to prove \begin{equation}\label{inq-with-omega} e^{ -4\pi \omega_1 (|\xi^{(k)}|) t_k } |\xi^{(k)}|^{-2k} \geq \omega(t_k) , \end{equation} since $ |N^T \xi^{(k)}| \leq \omega_1(|\xi^{(k)}| )$ by Lemma \ref{Lem-bad-normals}. We now use (\ref{g-t-k}) and the definition of $\omega_1$, by which (\ref{inq-with-omega}) is equivalent to \begin{multline*} 1 \geq 4\pi \omega_1(|\xi^{(k)}|) t_k = 4\pi \omega_1(|\xi^{(k)}| ) \omega^{-1} \left( \frac{1}{e} |\xi^{(k)}|^{-2k} \right) = \\ 4\pi \frac{1}{ 4 \pi \omega^{-1} \left( \frac 1e | \xi^{(k)}|^{-2 |\xi^{(k)}| } \right) } \omega^{-1} \left( \frac{1}{e} |\xi^{(k)}|^{-2k} \right) . \end{multline*} But the last inequality is equivalent to $$ \omega^{-1} \left( \frac 1e | \xi^{(k)}|^{-2 |\xi^{(k)}| } \right) \geq \omega^{-1} \left( \frac{1}{e} |\xi^{(k)}|^{-2k} \right), $$ which holds true since $| \xi^{(k)}|\geq k $ and $\omega^{-1}$ is decreasing. The proof of the proposition is complete.
\end{proof} \begin{remark}\label{rem1} It is clear from the proof of Proposition \ref{Thm-slow-conv} that given any $\delta>0$ in advance, we may drop a finite number of initial terms from $\Lambda \subset {\mathbb Z}^d$, the Fourier spectrum of $v_0$, ensuring that $|N^T \xi| \leq \delta $ for all $\xi \in \Lambda$. \end{remark} \begin{remark} In the same way as in Remark \ref{rem-bad-normals-dense-measure-0}, we may argue that the set of normals with the property discussed in Proposition \ref{Thm-slow-conv} is dense on the unit sphere, however of measure zero if $\omega$ has a sufficiently slow decay at infinity. It is also clear that any prescribed lower bound on $|N^T \xi|$ for non-zero $\xi\in {\mathbb Z}^d$ in terms of a given function of $|\xi|$ would translate into a corresponding quantitative rate of convergence of $V$ to its tail. \end{remark} \vspace{0.2cm} \begin{proof}[Proof of Theorem \ref{Thm-slow-conv-Laplace}] Let a unit vector $n\notin {\mathbb R} {\mathbb Q}^d$, a ${\mathbb Z}^d$-periodic function $v_0$, and a sequence of positive numbers $\{t_k\}_{k=1}^\infty$ be obtained by applying Proposition \ref{Thm-slow-conv} for the modulus of continuity $\omega$ given in Theorem \ref{Thm-slow-conv-Laplace}. Also, let $V$ be defined by \eqref{V-series} for this choice of $v_0$. As we have seen in (\ref{v-in-terms-of-V}), the unique solution $v$ to problem \eqref{u-on-Omega-n} is given by $ v(y) = V(Nz', z_d)$, where as usual $z\in {\mathbb R}^d_+ $ and $Mz=y\in \Omega_n$ with $M=[N|n]\in {\mathrm O}(d)$. By the orthogonality of $M$ we have $$ y \cdot n = Mz \cdot n = z\cdot M^T n = z \cdot e_d = z_d. $$ Thus, if we let $y=y' + (y\cdot n) n$ with $y'\in \partial \Omega_n$, then $Nz' = y'$ for the tangential component. We now need to derive some bounds on the new tangential variable $z'$. Observe that $N$ has rank $d-1$, hence $d-1$ of its rows are linearly independent. Let $N'$ be the $(d-1)\times (d-1)$ matrix formed by these $d-1$ rows of $N$.
From the overdetermined linear system $Nz' = y'$ we have $N' z' = y''$, where $y''\in {\mathbb R}^{d-1}$ is the corresponding part of $y'$. Using the assumption that $|y'|\leq R$, we get the following bound \begin{equation}\label{z-est} |z'|= | (N')^{-1} y'' | \leq c_N |y'| \leq c_N R. \end{equation} Now, if $\Lambda\subset {\mathbb Z}^d$ is the Fourier spectrum of $v_0$, by Remark \ref{rem1} we may assume that $|N^T \xi| \leq 1/ (8 c_N R )$ for any $\xi \in \Lambda$, which, combined with (\ref{z-est}), gives $$ |\xi \cdot Nz' | = | N^T \xi \cdot z' | \leq \frac{1}{8 c_N R} c_N R = \frac 18. $$ The last estimate shows that $\cos ( 2\pi \xi \cdot Nz' ) \geq \sqrt{2}/2$ for all $\xi \in \Lambda$ and any $z'$ satisfying \eqref{z-est}. By the construction of Proposition \ref{Thm-slow-conv}, $\Lambda$ is symmetric with respect to the origin, $c_\xi (v_0) = c_{-\xi}(v_0)$ for any $\xi \in \Lambda$, and all non-zero Fourier coefficients of $v_0$ are positive and do not exceed 1. Hence for any $y'\in \partial \Omega_n \cap B(0,R)$ we get \begin{multline} v( y' + (y\cdot n) n) = V(Nz', z_d) = \sum\limits_{ \xi \in \Lambda } c_\xi(v_0) e^{ -2\pi |N^T \xi| z_d } e^{ 2\pi i \xi \cdot Nz' }= \\ \sum\limits_{\xi \in \Lambda} c_\xi(v_0) e^{ -2\pi |N^T \xi| z_d } \cos( 2\pi \xi \cdot Nz' ) \geq \frac{1}{\sqrt{2}} \mathcal{S}(z_d; v_0), \end{multline} where $\mathcal{S}$ is defined by \eqref{Parseval}. Finally, recall that $y \cdot n = z_d$; hence, restricting $z_d$ in the last inequality to the sequence $\{t_k\}_{k=1}^\infty$ and using the estimate of Proposition \ref{Thm-slow-conv}, we complete the proof of Theorem \ref{Thm-slow-conv-Laplace}. \end{proof} It follows from the proof of Theorem \ref{Thm-slow-conv-Laplace} that we may have local uniformity of the slow convergence with respect to tangential directions, meaning that the sequence on which the convergence is slow, once chosen for the modulus of continuity $\omega$, can be used for any $R>0$.
The only difference is that in this case one should start at a very large index (depending on $R$) in the sequence. \section{Variable coefficients}\label{sec-variable-coeff} In this section we prove Theorem \ref{Thm-slow-variable} for coefficient matrices $A$ satisfying (\ref{cond-on-A}). Recall that the previous section established the slow convergence phenomenon for the Laplace operator. The main point that makes the Laplacian special in the analysis is that one may write the solution to the reduced problem (\ref{V-of-u}) explicitly. In the variable coefficient case, however, one does not possess explicit formulas for the solutions, which necessitates a rather different approach. We start with some preliminary set-up. For a coefficient matrix $A$, denote by $A^*$ the coefficient matrix of the adjoint operator, i.e. $A^{*, \alpha \beta} = A^{\beta \alpha}$. For $1\leq \gamma \leq d$ we let $ \chi^{*, \gamma}$ be the smooth solution to the following cell-problem \begin{equation}\label{cell-problem} \begin{cases} -\nabla_y \cdot A^*(y) \nabla_y \chi^{*, \gamma }(y) = \partial_{y_\alpha} A^{*, \alpha \gamma } ,&\text{ $ y \in \mathbb{T}^d $}, \\ \int_{ \mathbb{T}^d } \chi^{*, \gamma} (y) dy = 0 . \end{cases} \end{equation} Next, by $v_n^{*,\gamma}$ we denote the solution, in the sense of Theorem \ref{Thm-Prange}, to the boundary layer problem \begin{equation}\label{bdry-layer-system-for-v-star} \begin{cases} - \nabla \cdot A(y) \nabla v_n^{*,\gamma} (y) =0 ,&\text{ $y \in \Omega_n$}, \\ v_n^{*,\gamma}(y)=-\chi^{*,\gamma}(y) ,&\text{ $ y \in \partial \Omega_n $.} \end{cases} \end{equation} Finally, we recall the notion of the Green's kernel.
For a coefficient matrix $A$ and a halfspace $\Omega \subset {\mathbb R}^d$, the Green's kernel $G=G(y,{ \widetilde y})$ corresponding to the operator $-\nabla \cdot A(y) \nabla$ in the domain $\Omega$ is a function satisfying the following elliptic equation \begin{equation}\label{Green-main-half-space} \begin{cases} -\nabla_{y} \cdot A(y) \nabla_{y} G(y, { \widetilde y}) =\delta(y - { \widetilde y}) ,&\text{ $y \in \Omega $}, \\ G(y, { \widetilde y}) =0 ,&\text{ $ y \in \partial \Omega $,} \end{cases} \end{equation} for any ${ \widetilde y}\in \Omega$, where $\delta$ is the Dirac distribution. To have a quick reference to this situation, we will say that $G$ is the Green's kernel for \emph{the pair} $(A, \Omega)$. Note that the definition of $G$ does not require $A$ to be periodic. The existence and uniqueness of the Green's kernel for divergence type elliptic systems in halfspaces is proved in \cite[Theorem 5.4]{Hofmann-Kim} for $d\geq 3$, and for the 2-dimensional case in \cite[Theorem 2.21]{Dong-Kim}. Here we will only use the case of scalar equations. It is also shown that if $G^*$ is the Green's kernel for the pair $(A^*, \Omega)$, then one has the symmetry relation \begin{equation}\label{symm-of-Green-transp} G(y,{ \widetilde y}) = G^*({ \widetilde y}, y), \qquad y,{ \widetilde y} \in \Omega \ \text{ with } y\neq { \widetilde y}. \end{equation} From here we see that the Green's kernel has zero boundary values with respect to both variables. For a unit vector $n \notin {\mathbb R} {\mathbb Q}^d$ and a smooth function $v_0$, let $v$ be the solution to (\ref{bdry-layer-system-recall}) in the sense of Theorem \ref{Thm-Prange}. Set $\lambda: = y \cdot n$ for $y\in \Omega_n$, and let $M\in {\mathrm O}(d)$ satisfy $M e_d = n$.
Then for any $0<\kappa < 1/(2d)$ we have \begin{multline}\label{bdry-layer-sol-repr1} v(y) = \int_{ \partial {\mathbb R}^d_+} \partial_{2,\alpha} G^n (n, Mz) \times \bigg[ A^{\beta \alpha} (\lambda M z ) n_\beta + \\ A^{\beta \gamma} (\lambda Mz) \left( \partial_{y_\beta} \chi^{*, \alpha} (\lambda Mz) + \partial_{y_\beta} v_n^{*,\alpha} ( \lambda M z) \right) n_\gamma \bigg] v_0(\lambda M z) d \sigma(z) + O(\lambda^{-\kappa}), \end{multline} where $G^n$ is the Green's kernel for the pair $(A^0, \Omega_n)$, and $A^0$ denotes the matrix of coefficients of the homogenized operator corresponding to $-\nabla_y \cdot A(y) \nabla_y$. Also, $\partial_{2, \alpha} $ denotes differentiation with respect to the $\alpha$-th coordinate of the second variable of $G^n$, and the error term $O(\lambda^{-\kappa})$ is locally uniform in the tangential variable $y': = y- (y\cdot n)n$, and is independent of the matrix $M$. The asymptotic formula (\ref{bdry-layer-sol-repr1}) is proved by Prange in \cite[Section 6]{Prange} for systems of equations. Here, since we are working with scalar equations, we have a slightly simpler form of it. We are now going to switch from differentiation in the $y$-variable to the $z$-variable. Since $y=Mz$ and $M$ is orthogonal, it is easy to see that \begin{equation}\label{from-y-to-z} \nabla_y = M \nabla_z. \end{equation} Set $\boldsymbol{\chi}^{*,\alpha}(z) : = \chi^{*,\alpha}(Mz)$ and $\textbf{v}_n^{*,\alpha}(z) := v_n^{*,\alpha}(Mz)$. On the boundary of $ {\mathbb R}^d_+$, for each $1\leq \alpha \leq d$ we have $\boldsymbol{\chi}^{*,\alpha} + \textbf{v}_n^{*,\alpha} = 0$ by (\ref{bdry-layer-system-for-v-star}), hence, taking into account the relation (\ref{from-y-to-z}) and the fact that $M$ has $n$ as its last column, we obtain $$ \partial_{y_\beta} ( \chi^{*,\beta} + v_n^{*,\beta} )(y) = n_\beta \partial_{z_d} (\boldsymbol{\chi}^{*,\beta} + \textbf{v}_n^{*,\beta})(z) .
$$ Since the Green's kernel has zero boundary data, using (\ref{from-y-to-z}) we infer that if $\partial \Omega_n \ni y =Mz $ with $z\in \partial {\mathbb R}^d_+$, then \begin{equation}\label{green-deriv-rel} \partial_{y_\alpha} G^n(n,y)= \partial_{2,\alpha} G^n (n, Mz) = n_\alpha \partial_{z_d} G^{0,n} (e_d, z), \end{equation} where $G^{0,n}$ is the Green's kernel for the pair $(M^T A^0 M, {\mathbb R}^d_+)$. Observe that $G^{0,n}$ depends on the matrix $M$; however, for ease of notation we suppress this dependence. For now, the dependence of $G^{0,n}$ on $M$ plays no role, but later on we will need to make a specific choice of the orthogonal matrices $M$. Applying these observations in (\ref{bdry-layer-sol-repr1}) and grouping similar terms, we obtain \begin{multline}\label{bdry-layer-sol-repr2} \ \ v(y) = \int_{{\mathbb R}^{d-1}} \partial_{2,d} G^{0,n} (e_d, (z',0) ) \times n^T A( \lambda M(z',0) ) n \times \\ \bigg[ 1+ n_\alpha \big( \partial_{z_d} \boldsymbol{\chi}^{*, \alpha} (\lambda (z',0)) + \partial_{z_d} \textbf{v}_n^{*,\alpha} ( \lambda (z',0)) \big) \bigg] \times v_0(\lambda M (z',0) ) d z' + O(\lambda^{-\kappa}). \end{multline} The next lemma concerns a particular class of integrals of the type (\ref{bdry-layer-sol-repr2}). \begin{lem}\label{lem-quasi-per-my}{\normalfont(see \cite[Lemma 2.3]{A})} Assume $H:{\mathbb R}^d \to {\mathbb R}$ is a smooth ${\mathbb Z}^d$-periodic function, $n \notin {\mathbb R}{\mathbb Q}^d$ is a unit vector, and $M\in {\mathrm O}(d)$ satisfies $M e_d = n$. Then for $h(z') := H(M(z', 0))$, where $z' \in {\mathbb R}^{d-1}$, and any $F\in L^1({\mathbb R}^{d-1})$ one has $$ \lim\limits_{\lambda \to \infty} \int_{{\mathbb R}^{d-1}} F(z') h(\lambda z') dz' = c_0(H) \int_{{\mathbb R}^{d-1}} F(x) dx, $$ where $c_0(H)=\int_{\mathbb{T}^d } H(y) dy$. \end{lem} This lemma is proved in \cite{A} for functions admitting a certain type of expansion into series of exponentials.
To obtain the current version, one can take the matrix $T$ in Lemma 2.3 of \cite{A} to be the identity. In the next statement we collect the necessary information concerning the Green's kernel involved in (\ref{bdry-layer-sol-repr2}). \begin{lem}\label{Lem-Green-props-for-slow} For any $n\in \mathbb{S}^{d-1}$ and any $M \in {\mathrm O}(d)$ satisfying $M e_d = n$ let $G^{0,n}_M (z, { \widetilde z})$ be the Green's kernel for the pair $(M^T A^0 M, {\mathbb R}^d_+)$. Set $f_{n,M} (z') := \partial_{2,d} G^{0,n}_M (e_d, (z',0))$, where $z' \in {\mathbb R}^{d-1}$. Then \vspace{0.1cm} \begin{itemize} \item[{\normalfont(i)}] $ f_{n,M} \in L^1({\mathbb R}^{d-1}) $ \ and \ $ \inf\limits_{n, M} \left| \int_{{\mathbb R}^{d-1}} f_{n,M}(z') dz' \right| > 0 $, \vspace{0.1cm} \item[{\normalfont(ii)}] $\sup\limits_{n,M } || f_{n,M} ||_{L^1({\mathbb R}^{d-1})}<\infty$ \ and \ $\sup\limits_{ n,M } \int\limits_{|z'|\geq A} |f_{n,M}(z')| dz' \to 0 \text{ as } A \to \infty$, \vspace{0.1cm} \item[{\normalfont(iii)}] $f_{n,M} \in C^1({\mathbb R}^{d-1})$ \ and \ $\sup\limits_{n,M} || \nabla' f_{n,M} ||_{L^1({\mathbb R}^{d-1})}<\infty$, where $\nabla'$ is the gradient in ${\mathbb R}^{d-1}$. \end{itemize} \end{lem} \begin{proof} Recall that $G^{n}$ is the Green's kernel for the pair $(A^0, \Omega_n)$. The following bound is proved in \cite[Lemma 2.5]{GM-Acta}: \begin{equation}\label{bound-on-Green-n} | G^{n} (y, { \widetilde y}) | \leq C \frac{ (y \cdot n) ({ \widetilde y} \cdot n)}{|y-{ \widetilde y}|^d} , \qquad y\neq { \widetilde y} \text{ in } \Omega_n, \end{equation} where the constant $C$ is independent of $n$. It is easy to observe (see e.g. \cite[Claim 3.1]{A}) that for any $M\in {\mathrm O}(d)$ with $Me_d = n$ we have $G^{0,n}_M(z, { \widetilde z}) = G^n (M z, M { \widetilde z}) $ for $z \neq { \widetilde z}$ in ${\mathbb R}^d_+$.
From here and (\ref{bound-on-Green-n}), along with the orthogonality of $M$, one has \begin{equation}\label{bound-on-Green-0-n} | G^{0,n}_M(z,{ \widetilde z}) | = |G^n (M z, M { \widetilde z}) | \leq C \frac{ (M z \cdot n) (M { \widetilde z} \cdot n) }{|M z - M { \widetilde z}|^d } = C \frac{z_d { \widetilde z}_d}{|z-{ \widetilde z}|^d}, \end{equation} for all $ z\neq { \widetilde z}$ in ${\mathbb R}^d_+$, with the constant $C$ as in (\ref{bound-on-Green-n}); in particular, $C$ is independent of $n$ and $M$. Since $G^{0,n}$ has zero data on $\partial {\mathbb R}^d_+$, from (\ref{bound-on-Green-0-n}) one easily infers that $$ | f_{n,M} (z') | \leq \frac{C}{|e_d - (z',0)|^d}, \qquad \forall z' \in {\mathbb R}^{d-1}, $$ which shows that $f_{n,M} \in L^1({\mathbb R}^{d-1})$, as well as part (ii). For the second statement of (i), let $P^{0,n}_M(z, { \widetilde z}) $ be the Poisson kernel for the pair $(M^T A^0 M, {\mathbb R}^d_+)$. Then for $z \in {\mathbb R}^d_+$ and ${ \widetilde z} \in \partial {\mathbb R}^d_+$ we have \begin{align*} P^{0,n}_M (z, { \widetilde z}) &= - e_d^T ( M^T A^0 M ) \nabla_{{ \widetilde z}} G^{0,n}_M (z, { \widetilde z}) \\ &= -( M e_d )^T A^0 (M e_d) \partial_{2,d} G^{0,n}_M (z, { \widetilde z}) \\ &= - n^T A^0 n \partial_{{ \widetilde z}_d} G^{0,n}_M (z, { \widetilde z}). \end{align*} The last expression, combined with the fact that\footnote{It is proved in \cite[Section 2.2]{GM-Acta} (see also \cite[Section 3.2]{Prange}) that the variational solution to (\ref{bdry-layer-system-recall}) has an integral representation by the Poisson kernel. Moreover, as long as the asymptotics of solutions to (\ref{bdry-layer-system-recall}) far away from the boundary hyperplane is not concerned, there are no restrictions imposed on the normal direction.
Since the constant function $1$ solves (\ref{bdry-layer-system-recall}) with boundary data identically equal to $1$, we may represent this solution by the Poisson kernel, which shows that the integral of the Poisson kernel is $1$. } $\int_{\partial {\mathbb R}^d_+} P^{0,n}_M (e_d, { \widetilde z}) d \sigma({ \widetilde z}) =1$, along with the ellipticity of $A^0$, completes the proof of the second claim of (i). Finally, for (iii) observe that since $G^{0,n}_M $ solves an elliptic equation with constant coefficients, by standard elliptic regularity we have $f_{n,M} \in C^1({\mathbb R}^{d-1})$. For the gradient estimate, by \cite[V.4.2 Satz 3]{Schulze-Wildenhain} one has $$ | \nabla' \big( \partial_{2,d} G^{0,n}_M (e_d, (z',0) ) \big) | \leq \frac{C}{|e_d - (z',0)|^d}, \qquad \forall z' \in {\mathbb R}^{d-1}, $$ where $C$ is independent of the unit vector $n$ and the orthogonal matrix $M$. The proof of the lemma is complete. \end{proof} We now turn to the discussion of the core of the averaging process in (\ref{bdry-layer-sol-repr2}). Our next result is one of the key lemmas of the current paper. \begin{lem}\label{Lem-slow-conv-family-of-F} Let $\Xi$ be any non-empty set of indices and assume we are given a family of functions $\mathcal{F}=\{F_i\}_{i\in \Xi}$ with the following properties: \vspace{0.1cm} \begin{itemize} \item[{\normalfont(a)}] $F \in L^1({\mathbb R}^{d-1})$ for any $F\in \mathcal{F}$ and $ \inf\limits_{F \in \mathcal{F}} \left| \int_{{\mathbb R}^{d-1}} F (x) dx \right| >0 $, \vspace{0.1cm} \item[{\normalfont(b)}] $\sup\limits_{F \in \mathcal{F}} || F ||_{L^1({\mathbb R}^{d-1})}<\infty$ and $$ \sup\limits_{ F \in \mathcal{F} } \int_{|x|\geq A} |F(x)| dx \to 0 \text{ as } A \to \infty. $$ \end{itemize} Let also $\omega$ be any modulus of continuity and $S_0 \subset \mathbb{S}^{d-1}$ be any open subset of the sphere.
Then there exist an irrational vector $n \in S_0$ and an unbounded, strictly increasing sequence of positive numbers $\{\lambda_k\}_{k=1}^\infty$ such that for any $F\in \mathcal{F} $ there is a function $v_0\in C^\infty(\mathbb{T}^d)$ satisfying \begin{equation}\label{est-of-lemma-slow} \left| \int_{{\mathbb R}^{d-1} } F(x) v_0(\lambda_k M(x,0)) dx - c_0(v_0) \int_{{\mathbb R}^{d-1}} F(x) dx \right| \geq \omega(\lambda_k), \ \ k=1,2,... , \end{equation} whenever $M\in {\mathrm O}(d)$ and $M e_d = n$. Assume, in addition to {\normalfont(a)} and {\normalfont(b)}, that $\mathcal{F}$ also satisfies \vspace{0.1cm} \begin{itemize} \item[{\normalfont(c)}] $F\in C^1({\mathbb R}^{d-1})$ for any $F\in \mathcal{F}$ and $$\sup\limits_{F\in \mathcal{F}} ||\nabla F||_{L^1({\mathbb R}^{d-1})} <\infty;$$ \end{itemize} then the function $v_0$, too, can be chosen independently of $F$. \end{lem} \begin{remark}\label{rem-dense-normals-family-F} Let us remark that the left-hand side of (\ref{est-of-lemma-slow}) decays as $k \to \infty$ in view of Lemma \ref{lem-quasi-per-my}, therefore the lower bound of the current lemma is non-trivial. The lemma shows that under conditions (a) and (b) alone, the direction and the sequence along which convergence is slow can be chosen uniformly for the entire family $\mathcal{F}$. Moreover, as will be seen from the proof of Lemma \ref{Lem-slow-conv-family-of-F}, for any $F$ and $G$ from $\mathcal{F}$ the corresponding functions $v_0(F)$ and $v_0(G)$ have Fourier coefficients that are equal up to sign. \end{remark} \begin{proof}[Proof of Lemma \ref{Lem-slow-conv-family-of-F}] For the sake of clarity we divide the proof into a few steps. \vspace{0.2cm} \noindent \textbf{Step 1. Construction of $n$ and $\{\lambda_k\}_{k=1}^\infty$.} We start by determining a suitable modulus of continuity with which we will apply Lemma \ref{Lem-bad-normals} to get the normal $n$.
For $F \in \mathcal{F}$ let $ \mathcal{I}_F $ be the absolute value of the integral of $F$ over ${\mathbb R}^{d-1}$, and set $\tau_0 : = \inf\limits_{F \in \mathcal{F}} \mathcal{I}_F $. By (a) and (b) we have $0<\tau_0 <\infty$. Using the fact that $\tau_0>0$ and condition (b), it is easy to see that there exists $A_0>0$ large enough such that \begin{equation}\label{est1-A-0-family} \left| \int_{|x|\leq A_0} F(x) dx \right| \geq 2 \int_{|x|\geq A_0} |F(x) | dx + \frac 12 \mathcal{I}_F, \end{equation} for any $F\in \mathcal{F}$. For this choice of $A_0$ denote $\varepsilon_F : = \frac{1}{||F||_{L^1({\mathbb R}^{d-1})}} \left| \int_{|x|\leq A_0} F(x) dx \right| $, where $F\in \mathcal{F}$. Since $\tau_0>0$, from (\ref{est1-A-0-family}) and condition (b) we have \begin{equation}\label{e-0-def} 0<\varepsilon_0:= \inf\limits_{F\in \mathcal{F}} \varepsilon_F \leq 1. \end{equation} We now fix some small constant $\delta_0>0$ such that \begin{equation}\label{choice-of-delta-0} | \cos(t) - 1 |\leq \varepsilon_0 /4 \ \text{ for any } t\in {\mathbb R} \text{ with } |t|\leq \delta_0. \end{equation} Assume that $n\in \mathbb{S}^{d-1}$ and let $M\in {\mathrm O}(d)$ be such that $M e_d = n$. We then have $M=[N|n]$, where $N$ is the matrix formed from the first ($d-1$) columns of $M$. Observe that for any $\xi\in {\mathbb Z}^d$ and any $x\in {\mathbb R}^{d-1}$ one has $\xi \cdot M(x,0) = x^T N^T \xi$. Therefore, if for some $\lambda>0$ we have $2\pi \lambda A_0 |N^T \xi| \leq \delta_0$, then (\ref{est1-A-0-family}) and (\ref{choice-of-delta-0}) imply \begin{align*} \numberthis \label{est-with-38} \left| \int_{{\mathbb R}^{d-1}} F(x) \cos [ 2\pi \lambda \xi \cdot M(x,0) ] dx \right|\geq \left| \int_{|x|\leq A_0} F(x) dx \right| &- \\ \int_{|x|\leq A_0} |F(x)| \times | \cos [2\pi \lambda x^T N^T \xi ] -1 | dx &- \int_{|x|>A_0} |F(x)| dx \geq \frac 38 \mathcal{I}_F, \end{align*} for any $F \in \mathcal{F}$.
Define $$ \omega_1(t) := \frac{\delta_0}{2\pi A_0} \frac{1}{\omega^{-1} \left( \frac 38 \tau_0 \frac{1}{t^t} \right) } , \qquad t\geq 1, $$ where $\omega^{-1}$ stands for the inverse function of $\omega$. Obviously $\omega_1$ is one-to-one, continuous, and decreases to 0 as $t\to \infty$. It is also clear that $\omega_1$ is well-defined for large enough $t$; thus, without loss of generality, we will assume that $\omega_1$ is defined for all $t\geq 1$. Applying Lemma \ref{Lem-bad-normals} with $\omega_1$ as the modulus of continuity, we obtain a set $\Lambda\subset {\mathbb Z}^d$ and a unit vector $n\notin {\mathbb R} {\mathbb Q}^d$ such that if $M\in {\mathrm O}(d)$ is any matrix satisfying $M e_d = n$, then $$ |N^T \xi |\leq \omega_1(|\xi|), \qquad \forall \xi \in \Lambda, $$ where, as is customary, the $d\times (d-1)$ matrix $N$ is formed from the first ($d-1$) columns of $M$. Following Remark \ref{rem-bad-normals-dense-measure-0} we may assume that $n\in S_0$. We arrange the elements of $\Lambda$ in increasing order of their norms, thus $\Lambda = \{ \xi^{(k)}: \ k=1,2,... \}$, where by construction we have $k\leq |\xi^{(k)}|<|\xi^{(k+1)}|$ for any $k\geq 1$. Moreover, according to Remark \ref{rem-large-gap} we may also assume that for any $k\in {\mathbb N}$ we have \begin{equation}\label{kappa-gap} |\xi^{(k)}| < \varrho |\xi^{(k+1)}| , \end{equation} where $0<\varrho<1$ is a fixed parameter satisfying \begin{equation}\label{choice-of-kappa} 2 \sup\limits_{F \in \mathcal{F}} || F||_{L^1({\mathbb R}^{d-1})} \frac{\varrho}{1-\varrho} < \frac{3}{16} \tau_0. \end{equation} Note that the supremum here is finite by (b) and is non-zero by (a). Set \begin{equation}\label{def-lambda} \lambda_k := \omega^{-1} \left( \frac 38 \tau_0 \frac{1}{|\xi^{(k)}|^k } \right), \qquad k=1,2,... \ . \end{equation} It is clear that $\lambda_k$ is unbounded and strictly increasing. Observe that $n$ and the sequence $\{\lambda_k \}$ are uniform for the entire family $\mathcal{F}$.
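For the reader's convenience, let us record how the definitions above combine; the following elementary chain uses only $|N^T \xi^{(k)}| \leq \omega_1(|\xi^{(k)}|)$, the definition of $\omega_1$, and (\ref{def-lambda}):
$$
2\pi \lambda_k A_0 |N^T \xi^{(k)}| \leq 2\pi A_0 \, \omega^{-1}\Big( \frac 38 \tau_0 \frac{1}{|\xi^{(k)}|^{k}} \Big) \cdot \frac{\delta_0}{2\pi A_0} \frac{1}{ \omega^{-1}\Big( \frac 38 \tau_0 \frac{1}{|\xi^{(k)}|^{|\xi^{(k)}|}} \Big) } \leq \delta_0,
$$
where in the last step we used that $k \leq |\xi^{(k)}|$ gives $|\xi^{(k)}|^{-k} \geq |\xi^{(k)}|^{-|\xi^{(k)}|}$, while $\omega^{-1}$ is decreasing, so the ratio of the two $\omega^{-1}$-terms is at most $1$. This is precisely the smallness condition required to apply (\ref{est-with-38}) with $\xi = \xi^{(k)}$ and $\lambda = \lambda_k$.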
\vspace{0.2cm} \noindent \textbf{Step 2. Construction of $v_0$ for fixed $F\in \mathcal{F}$.} We proceed to the construction of the function $v_0\in C^\infty(\mathbb{T}^d)$ for the given $F\in \mathcal{F}$; for this it is enough to construct the sequence of Fourier coefficients of $v_0$, which we will denote by $\{c_\xi(v_0)\}_{\xi \in {\mathbb Z}^d}$. Let $F\in \mathcal{F}$ be fixed. For $\xi \in {\mathbb Z}^d$, if $ \xi \in \Lambda$ then set $c_\xi(v_0) = c_{-\xi}(v_0) = \varepsilon_k(F) |\xi|^{-k} $, where $k\in {\mathbb N}$ is the index of $\xi$ in $\Lambda$ according to the increasing rearrangement made above, and $\varepsilon_k(F) \in \{-1,1 \}$ will be chosen below. It is important to note that this sign is the same for $c_\xi$ and $c_{-\xi}$. Otherwise, if $ \pm \xi \notin \Lambda$, we let $c_\xi(v_0)=0$. Clearly the sequence $\{c_\xi\}$ decays faster than any polynomial rate in $|\xi|$, hence $v_0 $ is smooth. Also, since $c_{\xi}(v_0)= c_{-\xi}(v_0)$ for any $\xi \in {\mathbb Z}^d$, we have that $v_0$ is real-valued. Observe that $c_0(v_0)=0$ by construction, and expanding $v_0$ into its Fourier series we get \begin{multline}\label{exp-of-F-3} \int_{{\mathbb R}^{d-1}} F(x) v_0(\lambda M(x,0)) dx = \sum\limits_{m=1}^\infty \frac{2 \varepsilon_m(F)}{ |\xi^{(m)}|^m } \int_{{\mathbb R}^{d-1}} F(x) \cos \big( 2\pi \lambda x^T N^T \xi^{(m)} \big) dx := \\ \frac{2 \varepsilon_k(F)}{ |\xi^{(k)}|^k } \mathcal{I}_k(\lambda) + \Sigma_1(\lambda) + \Sigma_2(\lambda), \end{multline} where $k\geq 1$, $\mathcal{I}_k(\lambda):= \int_{{\mathbb R}^{d-1}} F(x) \cos( 2\pi \lambda x^T N^T \xi^{(k)} ) dx$, $\Sigma_1(\lambda)$ contains the part of the sum where $m<k$, and $\Sigma_2(\lambda)$ respectively sums over the range $m>k$. In view of the construction, the sums $\Sigma_i(\lambda)$, $i=1,2$, are real-valued for any $\lambda$.
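Looking ahead, let us isolate the elementary tail bound that the gap condition (\ref{kappa-gap}) yields for series of this type. For $m>k$, iterating (\ref{kappa-gap}) gives $|\xi^{(m)}| \geq \varrho^{-(m-k)} |\xi^{(k)}|$, and since $|\xi^{(k)}| \geq k \geq 1$ and $0<\varrho<1$ we obtain
$$
\frac{1}{|\xi^{(m)}|^{m}} \leq \frac{1}{|\xi^{(m)}|^{k+1}} \leq \frac{\varrho^{(m-k)(k+1)}}{|\xi^{(k)}|^{k+1}} \leq \frac{\varrho^{m-k}}{|\xi^{(k)}|^{k}}, \qquad \text{hence} \qquad \sum\limits_{m=k+1}^\infty \frac{1}{|\xi^{(m)}|^{m}} \leq \frac{\varrho}{1-\varrho} \frac{1}{|\xi^{(k)}|^{k}}.
$$
Combined with (\ref{choice-of-kappa}), this is the computation behind the estimates of the tail sums appearing below.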
By the definition of $\lambda_k$, the fact that $|\xi^{(k)}| \geq k $, and that $\omega$ is decreasing, we easily see that $ 2\pi \lambda_k A_0 |N^T \xi^{(k)}| \leq \delta_0 $ for any $k$; hence applying (\ref{est-with-38}) we obtain \begin{equation}\label{est-with-34} \frac{2 }{ |\xi^{(k)}|^k } | \mathcal{I}_k(\lambda_k) | \geq \frac{3}{4} \mathcal{I}_F \frac{1 }{ |\xi^{(k)}|^k }. \end{equation} On the other hand, by (\ref{kappa-gap}) and (\ref{choice-of-kappa}) we easily get \begin{equation}\label{est-range2} | \Sigma_2(\lambda) | \leq 2|| F||_{L^1({\mathbb R}^{d-1})} \sum\limits_{m=k+1}^\infty \frac{1}{|\xi^{(m)}|^m} \leq \frac{3}{16} \frac{1}{|\xi^{(k)}|^k} \tau_0, \end{equation} for any $\lambda\geq 1$. We now estimate the contribution of the range $m<k$. The triangle inequality implies $$ \left| \Sigma_1(\lambda_k) + \frac{2 }{ |\xi^{(k)}|^k } \mathcal{I}_k(\lambda_k) \right| + \left| \Sigma_1(\lambda_k) - \frac{2 }{ |\xi^{(k)}|^k } \mathcal{I}_k(\lambda_k) \right| \geq \frac{4 }{ |\xi^{(k)}|^k } | \mathcal{I}_k(\lambda_k) |, $$ hence at least one of the terms on the left-hand side of the last inequality is not less than half of the right-hand side. Taking this into account, we choose the sign $\varepsilon_k $ so as to get the larger term on the left-hand side of the above inequality. This choice of $\varepsilon_k$, combined with the estimates (\ref{est-with-34}) and (\ref{est-range2}) and the definition of $\lambda_k$ given by (\ref{def-lambda}), yields \begin{equation} \left| \frac{2 \varepsilon_k(F)}{ |\xi^{(k)}|^k } \mathcal{I}_k(\lambda_k) + \Sigma_1(\lambda_k) + \Sigma_2(\lambda_k) \right| \geq \frac{3}{8} \mathcal{I}_F \frac{1 }{ |\xi^{(k)}|^k } \geq \frac{3}{8} \tau_0 \frac{1 }{ |\xi^{(k)}|^k } = \omega(\lambda_k), \end{equation} for any $k=1,2,...$ . The estimate (\ref{est-of-lemma-slow}) of the lemma obviously follows from the last inequality and (\ref{exp-of-F-3}). \vspace{0.2cm} \noindent \textbf{Step 3.
Uniform choice of $v_0$.} Lastly, we turn to the proof of the possibility of a uniform choice of $v_0$ under the additional condition (c). There is no loss of generality in assuming that \begin{equation}\label{assump-on-g} t \omega(t) \to \infty \text{ as } t\to \infty , \end{equation} since otherwise we would simply replace $\omega$ by a new modulus of continuity $\widetilde{\omega}$, where $\widetilde{\omega}(t)\geq \omega(t)$ for all $t\geq 1$ and $\widetilde{\omega}$ satisfies (\ref{assump-on-g}), thereby getting an even slower convergence. Thus we will take (\ref{assump-on-g}) for granted. For fixed $F \in \mathcal{F}$, $\xi\in {\mathbb Z}^d \setminus \{0\}$, and $\lambda>0$ set \begin{equation}\label{I-lambda-xi} I(\lambda; \xi) := \int_{{\mathbb R}^{d-1}} F(x) e^{2 \pi i \lambda x^T N^T \xi} dx. \end{equation} Let $1\leq k \leq d-1$ be such that the $k$-th component of the vector $N^T \xi $ is the largest in absolute value. This choice implies $|(N^T \xi)\cdot e_k | \geq (d-1)^{-1/2} |N^T \xi |$, where $e_k$ is the $k$-th vector of the standard basis of ${\mathbb R}^{d-1}$. Integrating by parts in $I(\lambda; \xi)$ in the direction of $e_k$ we see that \begin{equation}\label{bound-by-int-by-parts} |I(\lambda; \xi)| \leq \frac{1}{\lambda} \frac{\sqrt{d-1}}{2\pi |N^T \xi|} \sup\limits_{F\in \mathcal{F}} || \nabla F ||_{L^1({\mathbb R}^{d-1})}, \end{equation} where the supremum is finite due to assumption (c). By (\ref{def-lambda}) we have $|\xi^{(k)}|^k \omega(\lambda_k) =\frac 38 \tau_0$ for each $k\in {\mathbb N}$. Also, since $\lambda_k$ is increasing and unbounded by construction, from (\ref{assump-on-g}) we get $\lambda_k \omega(\lambda_k) \to \infty$ as $k \to \infty$. Hence \begin{equation}\label{lambda-k-vs-xi-k} \frac{\lambda_k}{ |\xi^{(k)}|^k} = : a_k \to \infty \text{ as } k \to \infty.
\end{equation} We now choose an increasing sequence of integers $(i_k)_{k=1}^\infty$, where $i_1= 1$ and, if for $k>1$ the index $i_{k-1}$ has been chosen, we use (\ref{lambda-k-vs-xi-k}) and take $i_k>i_{k-1}$ so large that \begin{equation}\label{z2} \frac{1}{a_{i_k}} \frac{\sqrt{d-1}}{\pi } \sup\limits_{F \in \mathcal{F}}|| \nabla F ||_{L^1({\mathbb R}^{d-1})} \sum\limits_{m=1}^{k-1} \frac{1}{|\xi^{(i_m)}|^{i_m} } \frac{1}{|N^T \xi^{(i_m)}|} \leq \frac{3}{16} \tau_0. \end{equation} Clearly the choice of the sequence $(i_k)$ is independent of the particular $F$, since the constants in (\ref{z2}) are uniform for the entire family $\mathcal{F}$. We define $v_0$ through its Fourier coefficients as follows. If for $\xi\in {\mathbb Z}^d$ we have $\xi = \pm \xi^{ (i_k) }$ for some $k\in {\mathbb N}$, then define $c_\xi(v_0) = c_{-\xi}(v_0) = |\xi|^{-i_k}$; otherwise, set $c_\xi(v_0)=0$. By construction, $v_0$ is uniform for all $F\in \mathcal{F}$. Observe also that $v_0$ is simply the function from Step 2, with the difference that its Fourier spectrum is now supported on the frequencies $\{\pm \xi^{ (i_k) }\}_{k=1}^\infty$ and all non-zero Fourier coefficients are positive. We now complete the proof by showing that $n$, $\{\lambda_{i_k}\}_{k=1}^\infty$ and $v_0$ defined above satisfy the conclusion of the lemma. Plugging $v_0$ into (\ref{exp-of-F-3}), for each integer $k\geq 1$ we get \begin{equation}\label{int-of-F-uniform-v0} \int_{{\mathbb R}^{d-1}} F(x) v_0( \lambda M(x,0) ) dx = \frac{2}{|\xi^{(i_k)}|^{i_k}} \mathcal{I}_{i_k} (\lambda) + \Sigma_1(\lambda) + \Sigma_2(\lambda), \end{equation} where $\mathcal{I}_{i_k}$, $\Sigma_1$ and $\Sigma_2$ are defined as in (\ref{exp-of-F-3}). We have $$ \Sigma_1(\lambda ) = \sum\limits_{m=1}^{k-1} \frac{1}{|\xi^{(i_m)} |^{i_m} } \left[ I(\lambda; \xi^{(i_m)} ) + I(\lambda; -\xi^{(i_m)} ) \right] , $$ with $I$ defined by (\ref{I-lambda-xi}).
From this we obtain \begin{multline}\label{est-range1-new} | \Sigma_1(\lambda_{i_k} )| \stackrel{ (\ref{bound-by-int-by-parts}) }{\leq} \frac{1}{\lambda_{i_k}} \frac{\sqrt{d-1}}{\pi } \sup\limits_{F \in \mathcal{F}}|| \nabla F ||_{L^1({\mathbb R}^{d-1})} \sum\limits_{m=1}^{k-1} \frac{1}{|\xi^{(i_m)}|^{i_m} } \frac{1}{|N^T \xi^{(i_m)}|} \stackrel{ (\ref{z2}) }{\leq} \\ \frac{1}{\lambda_{i_k}} \frac{3}{16} \tau_0 a_{i_k} \stackrel{ (\ref{lambda-k-vs-xi-k}) }{=} \frac{3}{16} \frac{1}{ |\xi^{(i_k)}|^{i_k} } \tau_0. \end{multline} For the subsequence $\{ \xi^{ (i_k) } \}_{k=1}^\infty$ of $\Lambda$ the analogue of estimate (\ref{est-range2}) becomes \begin{multline}\label{est-range2-new} |\Sigma_2 ( \lambda) | \leq 2|| F||_{L^1({\mathbb R}^{d-1})} \sum\limits_{m=k+1}^\infty \frac{1}{|\xi^{(i_m)}|^{i_m} } \leq 2|| F||_{L^1({\mathbb R}^{d-1})} \sum\limits_{m=i_{k+1}}^\infty \frac{1}{|\xi^{(m)}|^m} \leq \\ \frac{3}{16} \frac{1}{ |\xi^{ (i_{k+1}-1 ) }|^{ i_{k+1}-1} } \tau_0 \leq \frac{3}{16} \frac{1}{ |\xi^{ (i_k ) }|^{ i_k } } \tau_0, \end{multline} where, as before, we have used (\ref{kappa-gap}) and (\ref{choice-of-kappa}). Finally, the lower bound of the lemma for (\ref{int-of-F-uniform-v0}) follows by replacing $k$ with $i_k$ in (\ref{est-with-34}) and applying estimates (\ref{est-range1-new}) and (\ref{est-range2-new}) to (\ref{int-of-F-uniform-v0}). The proof is now complete. \end{proof} Looking ahead, let us remark that the uniformity of the choices in Lemma \ref{Lem-slow-conv-family-of-F} will prove crucial in the applications. We now include a small modification of the previous lemma allowing compactly supported functions, as well as a shift of the origin in the function $v_0$. This situation emerges in applications of Lemma \ref{Lem-slow-conv-family-of-F} to integrals arising from Poisson kernels corresponding to bounded domains.
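Before stating the lemma, let us note the elementary identity that will be used to absorb the shift $X_0$: if the Fourier coefficient at a frequency $\xi \in {\mathbb Z}^d$ is pre-multiplied by the unimodular factor $e^{-2\pi i \lambda \xi \cdot X_0}$, then
$$
e^{-2\pi i \lambda \xi \cdot X_0} \, e^{ 2\pi i \xi \cdot ( \lambda M(x,0) + \lambda X_0 ) } = e^{ 2\pi i \lambda \xi \cdot M(x,0) },
$$
i.e. evaluation along the shifted argument reproduces the unshifted harmonic, while the pre-factor, having modulus $1$, does not affect any bound obtained by taking absolute values of the terms.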
\begin{lem}\label{Lem-family-on-compact-supp} Keeping the notation of Lemma \ref{Lem-slow-conv-family-of-F}, and conditions (a) and (b) in force, assume in addition that the family $\mathcal{F}$ has the following properties: \vspace{0.1cm} \begin{itemize} \item[{\normalfont(c')}] each $F \in \mathcal{F}$ is supported in some closed ball $\overline{B}_F \subset {\mathbb R}^{d-1} $, where the set of radii of the balls $B_F$ is bounded away from zero and infinity, \vspace{0.1cm} \item[{\normalfont(d)}] $F\in C^1(B_F)$ for any $F \in \mathcal{F}$, and $\sup\limits_{F \in \mathcal{F}} ||\nabla F ||_{L^1(B_F)}<\infty$, \vspace{0.1cm} \item[{\normalfont(e)}] $\sup\limits_{F \in \mathcal{F}} || F ||_{L^\infty(B_F)}<\infty$. \end{itemize} Then there exist an irrational vector $n\in S_0$ and an unbounded, strictly increasing sequence of positive numbers $\{\lambda_k\}_{k=1}^\infty$ such that for any $X_0 \in {\mathbb R}^d$ there exists a real-valued function $v_0 \in C^\infty(\mathbb{T}^d)$ for which the estimate \begin{equation}\label{est-of-lemma-slow-with-shift} \left| \int_{{\mathbb R}^{d-1} } F(x) v_0(\lambda_k M(x,0) + \lambda_k X_0 ) dx - c_0(v_0) \int_{{\mathbb R}^{d-1}} F(x) dx \right| \geq \omega(\lambda_k) \end{equation} holds for any $F\in \mathcal{F}$ and each integer $k\geq 1$, whenever $M\in {\mathrm O}(d)$ and $M e_d = n$. \end{lem} \begin{proof} For notational convenience we extend all functions $F\in \mathcal{F}$ to ${\mathbb R}^{d-1}$ by zero outside their supports. Observe that here the finiteness of the supremum in part (b) of Lemma \ref{Lem-slow-conv-family-of-F} is automatically fulfilled. We start with the case $X_0=0$. First carry out the proof of Lemma \ref{Lem-slow-conv-family-of-F} up to the definition (\ref{I-lambda-xi}). Since the functions from $\mathcal{F} $ now have compact support, the bound in (\ref{bound-by-int-by-parts}) cannot be obtained directly from integration by parts, due to the boundary terms appearing in the integration.
To overcome this technicality we introduce smooth cut-offs. Let $F \in \mathcal{F}$ be fixed, and let the closed ball $\overline{B}=\overline{B}(x_0,r) $ be the support of $F$. Then by (c') we know that $r \geq c_0>0$ for some absolute constant $c_0$. For any $\lambda>1/c_0$ we have $B(x_0, r-\lambda^{-1}) \subset B(x_0, r)$, and we let $\varphi_\lambda:{\mathbb R}^{d-1}\to [0,1] $ be a smooth function such that $\varphi_\lambda=1$ on $B(x_0, r-\lambda^{-1})$, $\varphi_\lambda = 0 $ on ${\mathbb R}^{d-1}\setminus B(x_0, r)$, and $|\nabla \varphi_\lambda(x)| \leq c_1 \lambda$ for any $x\in {\mathbb R}^{d-1}$, where $c_1$ is some absolute constant. Denoting by $\mathbb{I}_B$ the characteristic function of the ball $B(x_0,r)$, we decompose (\ref{I-lambda-xi}) into \begin{multline*} I(\lambda; \xi ) = \int_{{\mathbb R}^{d-1}} F(x) \mathbb{I}_B (x) \varphi_\lambda (x) e^{2\pi i \lambda x^T N^T \xi} dx + \int_{{\mathbb R}^{d-1}} F(x) \mathbb{I}_B (x) (1-\varphi_\lambda (x)) e^{2\pi i \lambda x^T N^T \xi} dx \\ : = I_1(\lambda; \xi ) +I_2(\lambda; \xi ) . \end{multline*} Observe that in $I_2 (\lambda; \xi )$ the integration is over $B(x_0,r) \setminus B(x_0, r-\lambda^{-1}) $, hence by (e) \begin{equation}\label{bound-on-I-2} |I_2 (\lambda; \xi )| \leq | B(x_0,r) \setminus B(x_0, r-\lambda^{-1}) | \sup\limits_{F \in \mathcal{F}} || F ||_{L^\infty({\mathbb R}^{d-1})} \leq C \lambda^{-1}. \end{equation} For $I_1 (\lambda; \xi)$ we proceed as in (\ref{bound-by-int-by-parts}); however, here we will have an additional term coming from the partial integration, namely the one involving $F \partial_k \varphi_\lambda$.
But observe that all derivatives of $\varphi_\lambda$ are supported in $B(x_0,r) \setminus B(x_0, r-\lambda^{-1}) $, and hence, using the estimate on the gradient of $\varphi_\lambda$ along with condition (e), we get $$ \int_{{\mathbb R}^{d-1}} | F(x) \partial_k \varphi_\lambda (x)| dx \leq c_1 \lambda | B(x_0,r) \setminus B(x_0, r-\lambda^{-1}) | \sup\limits_{F \in \mathcal{F}} || F ||_{L^\infty({\mathbb R}^{d-1})} \leq C, $$ where the constants are uniform in $F$ and $\lambda$. The last bound, combined with (\ref{bound-on-I-2}), enables us to obtain the estimate (\ref{bound-by-int-by-parts}) with possibly different absolute constants. Then the proof of the current lemma for $X_0=0$ follows by exactly the same argument as in Lemma \ref{Lem-slow-conv-family-of-F}, starting from (\ref{bound-by-int-by-parts}) and continuing to the end. We now consider the case of an arbitrary $X_0\in {\mathbb R}^d$. Let $n$ be the normal, $\{\lambda_k\}$ be the sequence, and $\widetilde{v_0}$ be the function for which (\ref{est-of-lemma-slow-with-shift}) holds with $X_0=0$. By the construction of Lemma \ref{Lem-slow-conv-family-of-F} (Step 3 in particular) applied to the case $X_0=0$, there is a strictly increasing sequence of integers $\{i_m\}_{m=1}^\infty$ and a set $\widetilde{\Lambda}=\{\xi^{(m)} \}_{m=1}^\infty \subset {\mathbb Z}^d\setminus \{0\}$ satisfying $|\xi^{(m)}|<|\xi^{(m+1)}|$ for all $m \in {\mathbb N}$, such that $$ \widetilde{v_0} (\theta ) = \sum\limits_{m=1}^\infty \frac{1}{|\xi^{ (m) }|^{ i_m } } \left[ e^{ 2\pi i \xi^{(m)} \cdot \theta } + e^{ - 2\pi i \xi^{(m)} \cdot \theta } \right], \qquad \theta \in \mathbb{T}^d. $$ We now slightly adjust the coefficients of $\widetilde{v_0}$ to handle the effect of the shift.
Namely, consider the function $$ v_0 (\theta ) = \sum\limits_{m=1}^\infty \frac{1}{|\xi^{ (m) }|^{ i_m } } \left[ e^{ - 2\pi i \lambda_m \xi^{(m)} \cdot X_0 } e^{ 2\pi i \xi^{(m)} \cdot \theta } + e^{ 2\pi i \lambda_m \xi^{(m)} \cdot X_0 } e^{ - 2\pi i \xi^{(m)} \cdot \theta } \right]. $$ By definition, the Fourier coefficient $c_\xi (v_0 )$ is the complex conjugate of $c_{-\xi}(v_0)$ for any $\xi\in {\mathbb Z}^d$, hence $v_0$ is real-valued. It is also clear that $v_0\in C^\infty(\mathbb{T}^d)$ and $c_0(v_0)=0$. Following (\ref{exp-of-F-3}) and plugging $v_0$ into (\ref{est-of-lemma-slow-with-shift}), for each integer $k\geq 1 $ we get $$ \int\limits_{{\mathbb R}^{d-1} } F(x) v_0(\lambda_k M(x,0) + \lambda_k X_0 ) dx = \frac{2}{| \xi^{ (k) } |^{i_k} } \int\limits_{{\mathbb R}^{d-1}} F(x) \cos ( 2\pi \lambda_k x^T N^T \xi^{(k)} ) dx + \Sigma_1(\lambda_k) + \Sigma_2(\lambda_k), $$ where $\Sigma_1$ and $\Sigma_2$ are defined in analogy with (\ref{exp-of-F-3}). Observe that the integral on the right-hand side of the above equality is the same as for $X_0=0$, and the sums $\Sigma_1$ and $\Sigma_2$ can be estimated exactly as in the case $X_0=0$. Indeed, the only difference is that the coefficients in the sums are multiplied by complex numbers of modulus 1 (namely the exponentials involving $X_0$). This fact has no effect when taking absolute values of the terms in the sums, which is precisely what we do to bound $\Sigma_1$ and $\Sigma_2$. Since the analysis is thus reduced to the case $X_0=0$, the proof of the lemma is complete. \end{proof} \begin{proof}[Proof of Theorem \ref{Thm-slow-variable}] We will assume that \begin{equation}\label{omega-decay-in-thm} t^{\frac{1}{4d}} \omega (t) \to \infty, \ \ \text{ as } t\to \infty. \end{equation} This assumption results in no loss of generality, by an argument similar to the one for (\ref{assump-on-g}).
The reason for (\ref{omega-decay-in-thm}) is to ensure that $\omega$ decays more slowly than the error term involved in (\ref{bdry-layer-sol-repr2}) with the parameter $\kappa=1/(4d)$. Let $1\leq \gamma\leq d$ be fixed from (\ref{cond-on-A}). From (\ref{cell-problem}) and (\ref{cond-on-A}) we get $\chi^{*,\gamma} = 0$. The latter, combined with (\ref{bdry-layer-system-for-v-star}), implies that for the corresponding boundary layer corrector we have $v_n^{*,\gamma} = 0$ for any $n\in \mathbb{S}^{d-1}$. For $1\leq \alpha \leq d$ and $n\in \mathbb{S}^{d-1}$, the function $v_n^{*,\alpha}$ solves a uniformly elliptic PDE in $\Omega_n$ with periodic and smooth coefficients, and with boundary data $\chi^{*,\alpha}$; hence standard elliptic regularity implies that there is a constant $C_0$, independent of $n$, such that $|\nabla_y v_n^{*,\alpha} (y) | \leq C_0$ for any $y\in \partial \Omega_n$ and each $1\leq \alpha \leq d$. From this, and the fact that $v_n^{*,\gamma}=0$, it follows that there exists an open subset of the sphere $\mathbb{S}_\gamma \subset \mathbb{S}^{d-1}$, such that for any $n\in \mathbb{S}_\gamma $ and any $M\in {\mathrm O}(d)$ with $M e_d = n$, one has \begin{equation}\label{Z15} \bigg| 1+ n_\alpha \big[ \partial_{z_d} \boldsymbol{\chi}^{*, \alpha} (\lambda (z',0)) + \partial_{z_d} \textbf{v}_n^{*,\alpha} ( \lambda (z',0)) \big] \bigg| \geq \frac 12, \end{equation} for all $z'\in {\mathbb R}^{d-1}$ and any $\lambda>0$. Indeed, we simply choose $\mathbb{S}_\gamma$ so that each $n\in \mathbb{S}_\gamma$ has its $\gamma$-th component sufficiently close to 1. For $n\in \mathbb{S}^{d-1}$ we fix some $M_n \in {\mathrm O}(d)$ satisfying $M_n e_d = n$ and let $G^{0,n} (\cdot, \cdot )$ be the Green's kernel for the pair $(M^T_n A^0 M_n, {\mathbb R}^d_+)$. Consider the family of functions $\mathcal{F}:=\{F_n\}_{n\in \mathbb{S}^{d-1}}$, where $F_n (x):= \partial_{2,d} G^{0,n} (e_d, (x,0)) $ for $x\in {\mathbb R}^{d-1}$.
By Lemma \ref{Lem-Green-props-for-slow} the family $\mathcal{F}$ satisfies conditions (a), (b) and (c) of Lemma \ref{Lem-slow-conv-family-of-F}. Recall that solutions to boundary layer problems are constructed via the reduced boundary layer systems of the form (\ref{bl-system-d+1}), hence we have \begin{equation}\label{Z16} v_n^{*,\alpha} (y) = v_n^{*,\alpha} (M_n z) = \textbf{v}_n^{*,\alpha} (z) = V_n^\alpha (N_n z', z_d), \end{equation} where $V_n^\alpha$ solves the corresponding problem (\ref{bl-system-d+1}) and, as usual, $M_n = [N_n| n]$. In particular, $V_n^\alpha(\cdot, t)$ is ${\mathbb Z}^d$-periodic for any $t\geq 0$, and is smooth with respect to all its variables. Here one should take into account the subtlety that $V_n^\alpha$, and hence also $\textbf{v}_n^{*,\alpha}$, implicitly depend on the matrix $M_n$; but as the choice of $M_n$ is now fixed, we may ignore this dependence. For $n\in \mathbb{S}^{d-1}$ consider the function $$ \Psi_n(y) := 1+ n_\alpha \big( n\cdot \nabla_y \chi^{*,\alpha} (y) + \partial_t V_n^\alpha (y,0) \big), \qquad y \in {\mathbb R}^d, $$ where $V_n^\alpha$ is fixed from (\ref{Z16}). Clearly, $\Psi_n \in C^\infty(\mathbb{T}^d)$. From (\ref{from-y-to-z}) we see that $\partial_{z_d} = n \cdot \nabla_y$, which gives the relation between the normal derivatives. Now, if $y\in \partial \Omega_n$, we get \begin{equation}\label{V-and-normal-deriv} \partial_t V_n^\alpha( N z', 0) = \partial_{z_d} \textbf{v}_n^{*,\alpha}(z',0)= n\cdot \nabla_y v_n^{*,\alpha}(y). \end{equation} Using the definition of $\Psi_n$ and (\ref{Z15}), let us show that for any \emph{irrational} $n\in \mathbb{S}_\gamma$ one has \begin{equation}\label{psi-is-large} | \Psi_n (y)| \geq 1/2 \qquad \text{ for all } \qquad y \in {\mathbb R}^d.
\end{equation} The small nuance that (\ref{psi-is-large}) requires the normal to be irrational, in contrast with (\ref{Z15}), lies in the fact that (\ref{Z15}) holds on the boundary of $\Omega_n$, while here we need the bound in the entire space ${\mathbb R}^d$. To see (\ref{psi-is-large}), observe that for $y=Nz'$ with $z'\in {\mathbb R}^{d-1}$ the required lower bound follows from (\ref{Z15}) and (\ref{V-and-normal-deriv}). Now, if the normal $n$ is irrational, then $\{Nz' : \ z'\in {\mathbb R}^{d-1} \}$ is everywhere dense in $\mathbb{T}^d$, which is the unit cell of periodicity of $\Psi_n$; hence the continuity of $\Psi_n$ completes the proof of (\ref{psi-is-large}). We now apply Lemma \ref{Lem-slow-conv-family-of-F} to the family $\mathcal{F}$ with modulus of continuity $2\omega$, and let $\nu \in \mathbb{S}_\gamma $ be the irrational unit vector and $\{\lambda_k\}_{k=1}^\infty$ the increasing sequence given by Lemma \ref{Lem-slow-conv-family-of-F}. Thus for $\nu$ we have (\ref{psi-is-large}). Next, for the function $F_\nu$ let $\widetilde{v}_0\in C^\infty(\mathbb{T}^d)$ be the function given by Lemma \ref{Lem-slow-conv-family-of-F} for which \begin{equation}\label{Z13} \left| \int_{{\mathbb R}^{d-1}} F_\nu(x) \widetilde{v}_0(\lambda_k M_\nu (x,0)) dx - c_0(\widetilde{v}_0 ) \int_{{\mathbb R}^{d-1}} F_\nu(x) dx \right| \geq 2\omega(\lambda_k), \end{equation} where $c_0(\widetilde{v}_0 )$ is the $0$-th Fourier coefficient and $k=1,2,...$ . The ellipticity of $A$ implies that $\nu^T A(y) \nu \geq c_0 |\nu|^2=c_0 $ for any $y\in {\mathbb R}^d$, with an absolute constant $c_0>0$; hence, taking into account (\ref{psi-is-large}), we define \begin{equation}\label{Z14} v_0(y) := \frac{1}{\nu^T A(y) \nu} \frac{1}{\Psi_\nu(y)} \widetilde{v}_0 (y) , \qquad y\in {\mathbb R}^d, \end{equation} and get $v_0\in C^\infty(\mathbb{T}^d)$. Finally, we claim that $\nu$, $\{\lambda_k\}_{k=1}^\infty$, and $v_0$ defined by (\ref{Z14}) satisfy the Theorem.
Indeed by (\ref{bdry-layer-sol-repr2}) the solution to the boundary layer problem with these parameters has the form $$ v(y) = \int_{{\mathbb R}^{d-1}} \partial_{2,d} G^{0,\nu} (e_d, (z',0)) \widetilde{v}_0(\lambda M_\nu(z',0)) dz' +O(\lambda^{-\frac{1}{4d}}), $$ where the parameter $\kappa$ in (\ref{bdry-layer-sol-repr2}) is set to $1/(4d)$. The last expression combined with (\ref{Z13}) and (\ref{omega-decay-in-thm}) completes the proof of the Theorem. \end{proof} \section{Application to boundary value homogenization}\label{sec-application-to-Dirichlet} This section is devoted to the proof of Theorem \ref{Thm-slow-Dirichlet}, but before delving into details, we sketch the main idea behind the proof. By \cite{ASS1}-\cite{ASS3} we know that boundary value homogenization of the type considered in (\ref{Dirichlet-bdd-domains}) is determined by geometric properties of the boundary of the reference domain, such as non-vanishing Gaussian curvature \cite{ASS1}-\cite{ASS2}, or flat pieces with Diophantine normals \cite{ASS3}. Under these conditions one is able to deduce effective upper bounds on convergence rates for the homogenization, where the rate will be uniform with respect to the boundary data. With these in mind, a suitable candidate for the domain $D$ in Theorem \ref{Thm-slow-Dirichlet} is a $C^\infty $ domain such that part of its boundary has non-vanishing Gaussian curvature, while the rest is a piece of a hyperplane. Then, relying on the integral representation of solutions to (\ref{Dirichlet-bdd-domains}) via the Poisson kernel, one splits the integral into two parts, namely over the curved and the flat part of the boundary. The next step is to show that the contribution of the curved part has a prescribed rate of decay determined only by the embedding of $\partial D$ into ${\mathbb R}^d$, and hence is invariant under rotations of the domain. This step can be fulfilled by adapting the methods of \cite{ASS1}-\cite{ASS2}.
For the integral over the flat part one shows, using Lemma \ref{Lem-family-on-compact-supp}, that after a suitable rotation of the domain and an appropriate choice of the boundary data it can be made comparatively large. In this section we rigorously implement this idea. \subsection{Preliminary results}\label{subsec-prelim} We present some technical results which will be used for the proof of Theorem \ref{Thm-slow-Dirichlet} below. Assume we have a bounded domain $D\subset {\mathbb R}^d$ ($d\geq 2$) with smooth boundary, and a divergence form operator $\mathcal{L}:=-\nabla \cdot A(x) \nabla $, where the coefficient matrix $A$ is defined on $\overline{D}$ and is strictly elliptic and smooth. Note that we impose neither structural conditions nor periodicity assumptions on $A$. Next, we let $P(x,y):D\times \partial D \to {\mathbb R}$ be the Poisson kernel for the operator $\mathcal{L}$ in the domain $D$. Then by Lemma \ref{Lem-Poisson-dist-smooth} we have \begin{equation}\label{Poisson-dist-1} |P(x,y)|\leq C_P \frac{d(x)}{|x-y|^d},\qquad x\in D, \ y\in \partial D, \end{equation} where the constant $C_P=C_P(A, D, d)$. With this notation we have \begin{lem}\label{lem-Poisson-on-a-portion} Let $\Pi$ be an open and connected subset of $\partial D$ and $ D_0 \Subset D $ be fixed. Then $$ \inf\limits_{x\in D_0} \int_{\Pi} P(x, y)d\sigma(y) >0. $$ \end{lem} \begin{proof} As a trivial observation, before we start the proof, notice that if $\Pi$ is the entire boundary of $D$, then the integral in question is identically 1. The general case, however, requires some care. The proof is motivated by \cite[Lemma 3.1]{ASS2}. We will use the integral representation of solutions to get a more precise version of the maximum principle.
Fix a sequence of smooth functions $\{g_n\}_{n=1}^\infty$ with $g_n:{\mathbb R}^d \to [0,1]$, such that $g_n=1$ on $\Pi$ and, for any domain $\widetilde{\Pi}\subset {\mathbb R}^d$ which compactly contains $\Pi$, the sequence $\{g_n\}$ converges to 0 uniformly outside $\widetilde{\Pi}$ as $n\to \infty$. We let $u_n$ be the solution to the Dirichlet problem for $\mathcal{L}$ in the domain $D$ with boundary data $g_n \big|_{\partial D}$. Fix some $\xi\in \Pi$. Since $\Pi $ is open in $\partial D$ there exists $\delta_0>0$ such that \begin{equation}\label{delta0} \{y\in \partial D: | y-\xi |\leq \delta_0 \} \subset \Pi. \end{equation} We get \begin{align*} \numberthis \label{u-n-minus-g-n} |u_n(x) - g_n(\xi)| = \left| \int_{\partial D} P(x,y) [g_n(y) - g_n(\xi) ] d\sigma(y) \right| \leq& \\ \int_{\partial D} \big| P(x,y) [g_n(y) - g_n(\xi) ] \big| d\sigma(y) =& \\ \int_{|y-\xi|> \delta_0 }\big| P(x,y) [g_n(y) - g_n(\xi) ] \big| d\sigma(y) \leq 2C_P d(x) &||g_n ||_{L^\infty} \int_{|y-\xi|> \delta_0 } \frac{d \sigma(y)}{|x-y|^d}, \end{align*} where we have used (\ref{delta0}) and the fact that $g_n=1$ on $\Pi$ to pass from the second row of (\ref{u-n-minus-g-n}) to the first expression of the last row. We now choose $x\in D$ such that $|x-\xi|<\delta_0 /2$. The triangle inequality implies $|x-y|\geq \delta_0/2$ for all $y\in \partial D$ satisfying $|y-\xi|>\delta_0$. Hence, the last integral in (\ref{u-n-minus-g-n}) can be estimated as follows \begin{equation}\label{a1} \int_{|y-\xi|> \delta_0 } \frac{d \sigma(y)}{|x-y|^d} \leq 2^d \int_{|y-\xi|> \delta_0 } \frac{d \sigma(y)}{|y-\xi|^d} \lesssim \int_{\delta_0}^1 \frac{t^{d-2}}{t^d} dt \leq C_0 \frac{1}{\delta_0}, \end{equation} with some positive $C_0=C_0(d)$ uniform in $x$ and $\delta_0$, and we have invoked integration in spherical coordinates to estimate the surface integral. Without loss of generality we assume that the constants $C_0, C_P \geq 1$.
We now fix $x_0\in D$ such that \begin{equation}\label{a2} |x_0-\xi|\leq \frac{\delta_0}{2} \qquad \text{ and } \qquad \frac{\delta_0}{10 C_0 C_P} \leq d(x_0) \leq \frac{\delta_0}{4 C_0 C_P} . \end{equation} This is always possible, provided $\delta_0 >0$ is small enough. Denote $\widetilde{D}:= D_0 \cup \{x_0\}$. From (\ref{a1}) and (\ref{u-n-minus-g-n}) we obtain $|u_n(x_0) - g_n(\xi)| \leq 1/2 $, hence the triangle inequality implies \begin{equation}\label{a3} |u_n(x_0)|= |g_n(\xi) +u_n(x_0)-g_n(\xi)|\geq |g_n(\xi)| - |u_n(x_0)-g_n(\xi)| \geq 1-1/2 =1/2. \end{equation} From (\ref{a3}) one has $\sup_{\widetilde{D}} u_n \geq 1/2 $ for any $n\in {\mathbb N}$. From the maximum principle we infer that all $u_n$ are everywhere non-negative in $D$, thus applying Moser's version of Harnack's inequality (see e.g. \cite[Theorem 8.20]{GT}) we get \begin{equation}\label{Harnack-Mozer} 1/2\leq \sup_{\widetilde{D}} u_n \leq C_1 \inf_{ \widetilde{D} } u_n \leq C_1 \inf_{ D_0 } u_n, \qquad n=1,2,... , \end{equation} where the constant $C_1=C_1(d, A, \widetilde{D}, D)$. We have the representation $$ u_n(x) = \int_{\partial D} P(x,y) g_n(y) d\sigma(y), \qquad x\in D. $$ Using the construction of $g_n$ we pass to the limit in the last integral, getting by (\ref{Harnack-Mozer}) that $$ \int_{\Pi} P(x,y) d\sigma(y) =\lim\limits_{n\to \infty} u_n(x) \geq \frac{1}{2 C_1}, \qquad \forall x\in D_0. $$ The proof is complete. \end{proof} We will also need a version of the last lemma for a family of Poisson kernels corresponding to rotated images of a given domain. To fix the ideas, recall that the coefficient matrix of (\ref{Dirichlet-bdd-domains}) is defined on some fixed domain $X$. Let $D\Subset X$ be a bounded domain with $C^\infty$ boundary. For a matrix $M\in {\mathrm O}(d)$ define an orthogonal transformation $\mathcal{M}: {\mathbb R}^d \to {\mathbb R}^d$ by $\mathcal{M}x = Mx$, $x\in {\mathbb R}^d$ and consider the rotated domain $D_M := \mathcal{M} D $.
Obviously $D_M$ is a bounded domain; it is also clear that $\partial D_M =\mathcal{M}(\partial D) $ since $\mathcal{M}$ is a diffeomorphism. Next, the smoothness of $D_M$ follows directly from the definition of smooth boundary (see e.g. Section 6.2 of \cite{GT}). We shall take the diameter of $D$ sufficiently small so that $D_M\Subset X$ for any $M\in {\mathrm O}(d)$. Now we get the following version of the previous lemma. \begin{cor}\label{cor-Poisson-inf-rotated} Let $D\subset X$ be as above and let $\Pi \subset \partial D$ be open and connected. Fix $B_0\Subset D$ and for some small constant $c_0>0$ denote $$ \widetilde{{\mathrm O}} : = \{ M\in {\mathrm O}(d): \ B_0\subset D_M \text{ and } \mathrm{dist}(B_0, \partial D_M)\geq c_0 \}. $$ Then $$ \inf\limits_{M\in \widetilde{{\mathrm O}}, \ x\in B_0 } \int_{\Pi_M} P_M(x,y) d\sigma(y) >0, $$ where $\Pi_M = \mathcal{M}(\Pi)$ and $P_M$ is the Poisson kernel for the pair $(A, D_M)$. \end{cor} \begin{proof} For each fixed $M\in \widetilde{{\mathrm O}}$ the infimum is positive in view of Lemma \ref{lem-Poisson-on-a-portion}, and we will simply follow the dependence of the constants in the proof of Lemma \ref{lem-Poisson-on-a-portion} on the rotation introduced by $M$. First of all, by Lemma \ref{Lem-Poisson-dist-smooth} we have that the constant in (\ref{Poisson-dist-1}) is independent of $M$, and therefore (\ref{a2}) is uniform with respect to $M$. Concerning the use of the Harnack inequality in (\ref{Harnack-Mozer}), by \cite[Theorem 8.20]{GT} the constants for a ball of radius $R>0$ depend on the dimension of the space, the ellipticity bounds of the operator, and the radius $R$. Here we have the same operator for all domains $D_M$.
Finally, the uniform distance of $B_0$ from the boundary of $D_M$ for each $M\in \widetilde{{\mathrm O}}$, along with a standard covering argument extending the Harnack inequality from balls to arbitrary sets, shows that the constant $C_1$ of (\ref{Harnack-Mozer}) can be chosen independently of $M$. As the choice of all constants in the proof of Lemma \ref{lem-Poisson-on-a-portion} can be made uniform with respect to $M\in \widetilde{{\mathrm O}}$, the proof is complete. \end{proof} The next lemma is used in the localization argument of Proposition \ref{prop-main-for-Dirichlet} below. \begin{lem}\label{Lem-Hessian-one-to-one} For $r_0>0$ let $\psi \in C^3(\overline{B_{r_0}(0)})$ and assume that $\psi(0) = |\nabla \psi(0)|=0$ and $$ \mathrm{Hess} \psi(0) : = (\partial_{\alpha\beta}^2 \psi)_{\alpha, \beta =1}^d (0)= \mathrm{diag}(a_1,...,a_d)\in M_d({\mathbb R}), $$ where $0<a_1 \leq ...\leq a_d$. Then there exist positive constants $c_0=c_0(d, ||\psi ||_{C^3}) < c_1 = c_1(d, ||\psi ||_{C^3} )< r_0/a_1 $, and $K_1=K_1(d)\leq 1 \leq K_2=K_2(d, ||\psi ||_{C^2})$ such that \vspace{0.2cm} \begin{itemize} \item[\normalfont{(i)}] $ K_1 a_1 |x-y| \leq | \nabla \psi(x) - \nabla \psi(y) | \leq K_2 |x-y|, \qquad \forall x,y\in B(0,c_1 a_1), $ \vspace{0.2cm} \item[\normalfont{(ii)}] if $\delta_{\alpha \beta}$ is the Kronecker symbol, then for any $1\leq \alpha, \beta \leq d$ one has $$ \left| \partial^2_{\alpha \beta} \psi(x) - a_\alpha \delta_{\alpha \beta} \right|\leq \frac{a_1}{20 d}, \qquad \forall x\in B(0, c_1 a_1), $$ \item[\normalfont{(iii)}] $ B(0, c_0 a_1^2) \subset (\nabla \psi)( B(0,c_1 a_1)). $ \end{itemize} \end{lem} \begin{proof} We start with part (i).
For any $x, y \in B_{r_0}(0)$ by the Mean Value Theorem we have \begin{equation}\label{upper-bound-grad-of-psi-1} | \nabla \psi(x) - \nabla \psi(y) | \leq \sum\limits_{\alpha=1}^d |\partial_\alpha \psi(x ) - \partial_\alpha \psi(y)| \leq \sum\limits_{\alpha,\beta=1}^d || \partial_{\alpha \beta}^2 \psi||_{L^\infty( B_{r_0}(0))} |x-y| , \end{equation} which demonstrates the upper bound of (i). To obtain a lower bound, for $1\leq \alpha \leq d$ and $x \in B_{r_0}(0)$ set $g_\alpha (x) = |\partial_{\alpha \alpha}^2 \psi (x) | - \sum\limits_{\beta=1, \beta \neq \alpha }^d |\partial_{\alpha \beta}^2 \psi(x)| $; obviously $g_\alpha(0)= a_\alpha >0$. By the $C^3$-smoothness of $\psi$ each $g_\alpha$ has a linear modulus of continuity, hence there exists a constant $c_1=c_1(d, ||\psi ||_{C^3} ) $ such that for all $|x|\leq c_1 g_\alpha(0)$ we get $g_\alpha(x) > g_\alpha (0)/2$. From here for any $x, y \in B_{c_1 a_1} (0)$ the Mean Value Theorem yields \begin{multline}\label{lower-bound-grad-of-psi-1} | \nabla \psi(x ) - \nabla \psi(y ) | \geq c_d \sum\limits_{\alpha=1}^d | \partial_\alpha \psi(x) - \partial_\alpha \psi(y) | =c_d \sum\limits_{\alpha=1}^d | \nabla (\partial_\alpha \psi) (\tau_\alpha) \cdot (x-y) |, \end{multline} where $\tau_\alpha$ lies on the segment $[x,y]$ and $c_d$ is a constant depending on the dimension. We next fix $1\leq \alpha \leq d$ such that $|x_\alpha - y_\alpha| = \max\limits_{1\leq \beta \leq d}|x_\beta - y_\beta|$, and invoking (\ref{lower-bound-grad-of-psi-1}) we get \begin{equation}\label{lower-bound-grad-of-psi-2} | \nabla \psi(x ) - \nabla \psi(y ) | \geq c_d |\nabla (\partial_\alpha \psi)(\tau_\alpha) \cdot (x-y)| \geq c_d g_\alpha (\tau_\alpha) |x_\alpha - y_\alpha| \geq c_d a_1 |x-y| .
\end{equation} Combining (\ref{upper-bound-grad-of-psi-1}) and (\ref{lower-bound-grad-of-psi-2}) for any $x,y\in B_{c_1 a_1}(0)$ we obtain \begin{equation}\label{bi-Lip-on-grad} K_1 a_1 |x-y| \leq |\nabla \psi(x) - \nabla \psi(y) | \leq K_2 |x-y|, \end{equation} with constants $K_2 = K_2(d, || \psi||_{C^2}) \geq 1$ and $K_1=K_1(d)\leq 1$. This completes the proof of part (i). \vspace{0.2cm} Since $\partial^2_{\alpha \beta} \psi$ for each $1\leq \alpha,\beta\leq d$ has a linear modulus of continuity, the claim of (ii) follows easily by the Mean Value Theorem with $c_1$ sufficiently small. We now proceed to (iii). \vspace{0.2cm} By (\ref{bi-Lip-on-grad}) the mapping $\nabla \psi$ is invertible in a neighbourhood of the origin, and (iii) is simply an effective version of the Inverse Mapping Theorem. The desired bound follows from the estimate $|| ( \mathrm{Hess} \psi(0))^{-1} ||\asymp a_1^{-1} $, the $C^3$-smoothness of $\psi$, and \cite[Theorem 1.1]{Wang}\footnote{Observe that the function $\psi(x) = a_1x_1^2+...+a_d x_d^2$ shows that the order of the radius of the ball in (iii) is in general the best possible. Also, since $\psi\in C^3$, here one may have a direct treatment for (iii) by a well-known approach to the Inverse Function Theorem. Indeed, set $F(x) = \nabla \psi(x)$, $|x|\leq c_1 a_1$ and let $y\in {\mathbb R}^d$ be fixed. We need to determine the range of $y$ for which the equation $F(x) =y$ has a solution $x\in B(0, c_1 a_1)$. For this, one may utilize the celebrated method of Newton for finding roots of equations by studying the mapping $G(x)=x- (\nabla F(0))^{-1} (F(x)-y) $, where the Jacobian of $F$ is the Hessian of $\psi$. Clearly $F(x) =y$ iff $G(x) = x$, i.e. it is enough to figure out when $G$ is a contraction. The latter can be resolved easily relying on the $C^3$-smoothness of $\psi$, and determining the range of $y$ for which $G$ maps the closed ball $\overline{B}(0,c_0 a_1)$ into itself and has differential of norm less than 1.
The details are easy to recover and we omit them. }. \vspace{0.2cm} The proof of the lemma is complete. \end{proof} \subsection{A prototypic domain}\label{sec-prot-domain} We introduce a class of domains, call them \emph{prototypes}, which will be used in the proof of Theorem \ref{Thm-slow-Dirichlet}. Let $\mathcal{P}$ be a convex polytope, i.e. a convex bounded domain, which is an intersection of a finite number of halfspaces. We assume that $\mathcal{P}\subset \{x\in {\mathbb R}^d: \ x_d\leq 0\}$ and that $0\in {\mathbb R}^d$ is an inner point of $\partial \mathcal{P} \cap \{x_d=0\}$. We fix some $\Pi_0 \Subset \partial \mathcal{P} \cap \{x_d=0\}$, a $(d-1)$-dimensional closed ball centred at 0. Now let $D_0\subset \mathcal{P}$ be a bounded domain having the following properties: \begin{itemize} \item[{\normalfont(P1)}] $D_0$ is convex with $C^\infty$ boundary, \vspace{0.1cm} \item[{\normalfont(P2)}] $\partial D_0 \cap \{ x\in {\mathbb R}^d: \ x_d=0 \}=\Pi_0$, \vspace{0.1cm} \item[{\normalfont(P3)}] if $x\in \partial D_0$ with $x_d\neq 0$, the Gaussian curvature of $\partial D_0$ at $x$ is strictly positive, \vspace{0.1cm} \item[{\normalfont(P4)}] we fix some ball $ B_0$ lying compactly inside $D_0$. \end{itemize} \vspace{0.1cm} Typically we will embed the whole construction inside a given large domain $X$. Existence of $D_0$ satisfying (P1)-(P4) follows directly, as a special case, from an elegant construction due to M. Ghomi in connection with smoothing of convex polytopes, see \cite[Theorem 1.1]{Ghomi}. The following picture gives a schematic view of the construction. \begin{figure}[htb] \centering \def\svgwidth{300pt} \input{fig1.pdf_tex} \caption{\footnotesize{A prototypic domain $D_0$ obtained as smooth approximation of a polytope. Here $X$ is some fixed domain containing $0\in {\mathbb R}^d$ in its interior. 
Then $\mathcal{P}$ is any convex polygon sitting inside $X\cap\{x\in {\mathbb R}^d: x_d\leq 0 \}$, with non-empty interior, and with part of its flat boundary lying on the hyperplane $\{x_d=0\}$. We next take a closed flat ball $\Pi_0$, the dashed part on the boundary of $\mathcal{P}$, and invoke \cite[Theorem 1.1]{Ghomi}. Finally, a ball $B_0$ is fixed compactly inside $D_0$.}} \end{figure} The following notation will be in force throughout the section. Set $\Gamma_0 := \partial D_0$ and for $\delta>0$ denote $\Gamma_\delta = \{x\in \Gamma_0: \ \mathrm{dist}(x,\Pi_0) \geq \delta \}$, where $\Pi_0$ is the $(d-1)$-dimensional ball fixed from (P2) above. Define \begin{equation}\label{def-of-kappa} \kappa(\delta) =\min\limits_{x\in \Gamma_\delta} \min\limits_{1\leq \alpha \leq d-1} \kappa_\alpha(x), \end{equation} where $\kappa_\alpha(x)$ is the $\alpha$-th principal curvature of $\Gamma_0$ at $x$, and the minimum over $\Gamma_\delta$ exists in view of the smoothness of $\Gamma_0$ and compactness of $\Gamma_\delta$. In the sequel we assume $\delta>0$ is small enough so that $\Gamma_\delta \neq \emptyset$. Due to property (P3) we have \begin{equation} \kappa(\delta)>0 \text{ and } \kappa(\delta) \searrow 0 \text{ as } \delta \to 0+. \end{equation} The next proposition is one of the key ingredients of the proof of Theorem \ref{Thm-slow-Dirichlet}. 
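The phenomenon behind the next proposition can be illustrated numerically. The following Python sketch (with ad hoc choices of the test function and of the hypersurface; not part of the proof) integrates the mean-zero periodic function $g(y)=\cos(2\pi y_2)$ over a curved arc and over a flat segment: the curved contribution decays in $\lambda$ by stationary phase, while on a flat piece resonating with $g$ there is no decay at all.

```python
import numpy as np

# Oscillatory integral of g(lambda*y), g(y) = cos(2*pi*y_2), over the upper
# unit half-circle y = (cos th, sin th); equals pi*J_0(2*pi*lambda) ~ lambda^{-1/2}.
def curved(lam, n=200001):
    th = np.linspace(0.0, np.pi, n)
    f = np.cos(2.0 * np.pi * lam * np.sin(th))
    return float(np.sum(f[:-1] + f[1:]) * (th[1] - th[0]) / 2.0)  # trapezoid rule

# Same integrand over the flat segment y = (t, 0), t in [0,1]:
# g(lambda*(t,0)) = cos(0) = 1 identically, so the integral is 1 for every lambda.
def flat(lam, n=10001):
    t = np.linspace(0.0, 1.0, n)
    f = np.cos(2.0 * np.pi * lam * 0.0 * t)
    return float(np.sum(f[:-1] + f[1:]) * (t[1] - t[0]) / 2.0)

assert abs(curved(10.0)) > 0.15       # only a slow lambda^{-1/2} envelope
assert abs(curved(160.0)) < 0.1       # decayed, consistent with J_0 asymptotics
assert abs(flat(160.0) - 1.0) < 1e-9  # no decay on the resonant flat piece
```

This is exactly the dichotomy exploited below: the curved part of $\Gamma_0$ contributes a quantitative decay, while the flat piece $\Pi_0$ is where slow convergence can be produced.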
\begin{prop}\label{prop-main-for-Dirichlet} There exists a modulus of continuity $\omega_0$ determined by the decay rate of the function $\kappa(\delta)$ defined in (\ref{def-of-kappa}) such that for any smooth function $P: \Gamma_0 \to {\mathbb R}$, any $g\in C^\infty(\mathbb{T}^d)$ satisfying $\int_{\mathbb{T}^d} g =0$, any $y_0 \in {\mathbb R}^d$, and any $M\in {\mathrm O}(d)$ one has \begin{equation}\label{int-in-prop} \left| \int_{\Gamma_0 \setminus \Pi_0 } P(y) g(\lambda My +y_0) d\sigma(y) \right| \leq C \omega_0(\lambda) || P||_{C^1(\Gamma_0)} || g||_{C^d(\mathbb{T}^d)}, \qquad \forall \lambda \geq 1, \end{equation} with a positive constant $C$ depending only on the dimension $d$ and the embedding of $\Gamma_0$ in ${\mathbb R}^d$. \end{prop} \begin{remark} One may deduce some decay of the integral in (\ref{int-in-prop}) relying, for example, on \cite{Lee-Shah}; however, this comes without the explicit bounds that we have in the current formulation and which we need for applications. The proof of this proposition is based on an adaptation of methods from \cite{ASS1}-\cite{ASS2}, both of which work with strictly convex domains, showing that integrals similar to (\ref{int-in-prop}) and involving a singular kernel (namely, the Poisson kernel) decay with some prescribed algebraic rate as $\lambda \to \infty$. The difference of the current case from \cite{ASS1}-\cite{ASS2} is that, on the one hand, here we do not have a singularity introduced by an integration kernel, which gives extra freedom to the entire analysis. On the other hand, the strict convexity of the hypersurface deteriorates, and we integrate over a hypersurface with boundary; both of these factors introduce technical difficulties which entail a somewhat refined analysis at certain points. \end{remark} \begin{proof}[Proof of Proposition \ref{prop-main-for-Dirichlet}] The proof is partitioned into a few steps. \vspace{0.1cm} \noindent \textbf{Step 1.
Localization.} We localize the integral of (\ref{int-in-prop}) in a neighbourhood of each point $z\in \Gamma_0$ of positive curvature. Fix $\delta>0$ small. The hypersurface $\Gamma_0$ is locally a graph of a smooth function, thus there exists $r>0$ small such that for any $z\in \Gamma_{2\delta}$ there is an orthogonal transformation $\mathcal{R}: {\mathbb R}^d \to {\mathbb R}^d$ satisfying \begin{equation}\label{Gamma-is-loc-graph} (\mathcal{R}(\Gamma_0 - z) ) \cap B_r(0) = \{(x', \psi(x')): \ |x'|\leq r \}, \end{equation} where $x'=(x_1,...,x_{d-1})$, $\psi$ is smooth on $ \{x'\in {\mathbb R}^{d-1}: \ |x' | \leq 2r \}$, $\psi(0)= |\nabla \psi(0)|=0$ and for the Hessian of $\psi$ we have \begin{align} \label{hessian-is-diag} \mathrm{Hess} \psi(0) =& \mathrm{diag} (a_1,...,a_{d-1}) \in M_{d-1}({\mathbb R}), \\ \label{hessian-is-diag2} 0< \kappa(\delta) \leq& a_1\leq a_2\leq ...\leq a_{d-1} \leq C_0. \end{align} Here $a_\alpha$ is the $\alpha$-th principal curvature of $\Gamma_0$ at $z$, and the lower bound of (\ref{hessian-is-diag2}) is due to (\ref{def-of-kappa}), while the universal upper bound, as well as the bound $|\mathrm{Hess} \psi(x')|\leq C_0$, for any $|x'|\leq 2r$, are both due to the smoothness of $\Gamma_0$. Also, $r$ is independent of $z$ and $\delta$, and depends on the embedding of hypersurface $\Gamma_0$ in ${\mathbb R}^d$. 
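A concrete instance of the representation (\ref{Gamma-is-loc-graph}) to keep in mind is the unit sphere: written as a graph over the tangent plane at a point, the graph function has Hessian at the origin equal to the diagonal matrix of the principal curvatures, here the identity. A minimal numeric sanity check (Python, finite differences; not used in the proof):

```python
import numpy as np

# Unit sphere near a point, written as a graph over its tangent plane:
# psi(x') = 1 - sqrt(1 - |x'|^2), so psi(0) = 0, grad psi(0) = 0, and
# Hess psi(0) should equal the identity (all principal curvatures are 1).
def psi(x):
    return 1.0 - np.sqrt(1.0 - np.dot(x, x))

d, h = 2, 1e-4
H = np.zeros((d, d))
for a in range(d):
    for b in range(d):
        ea, eb = np.eye(d)[a], np.eye(d)[b]
        # central second difference for the (a, b) entry of the Hessian
        H[a, b] = (psi(h*ea + h*eb) - psi(h*ea - h*eb)
                   - psi(-h*ea + h*eb) + psi(-h*ea - h*eb)) / (4.0*h*h)

assert abs(psi(np.zeros(d))) < 1e-12
assert np.allclose(H, np.eye(d), atol=1e-4)
```

For a general point $z\in\Gamma_{2\delta}$ the same picture holds after the rotation $\mathcal{R}$, with the diagonal entries $a_1,\dots,a_{d-1}$ bounded below by $\kappa(\delta)$ as in (\ref{hessian-is-diag2}).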
By Lemma \ref{Lem-Hessian-one-to-one} there exist constants $K_1<K_2$ and $c_0<c_1$ controlled by the dimension $d$ and the $C^3$-norm of $\psi$, and independent of the principal curvatures of $\Gamma_0$, such that \vspace{0.1cm} \begin{itemize} \item[(a)] $ K_1 a_1 |x'| \leq |\nabla \psi(x') | \leq K_2 |x'|$ for all $|x'|\leq c_1 a_1$, \vspace{0.1cm} \item[(b)] for any $1\leq \alpha, \beta \leq d-1$ setting $\delta_{\alpha \beta}$ to be the Kronecker symbol, we get $$ \left| \partial_{\alpha \beta}^2 \psi (x') - a_\alpha \delta_{\alpha \beta} \right| \leq \frac{a_1}{10 d}, \qquad |x'|\leq c_1 a_1 , $$ \item[(c)] $|y'| < c_0 a_1^2 $ implies that there exists a unique $x'$ with $|x'| < c_1 a_1$ such that $\nabla \psi(x') =y'$. \end{itemize} \vspace{0.1cm} Denote \begin{equation}\label{def-of-L-delta} L_\delta = \frac{ c_0 }{4d K_2} a_1^2 \end{equation} and consider a family of balls $\mathcal{B} =\{ B(z,\frac 15 L_\delta): \ z\in \Gamma_{2\delta} \}$. Clearly $\mathcal{B}$ covers $\Gamma_{2\delta}$, hence by the Vitali covering lemma there exists a finite collection of disjoint balls $\mathcal{B}_0 = \{ B(z_j, \frac 15 L_\delta): \ j=1,...,M_\delta \}\subset \mathcal{B}$ such that \begin{equation}\label{gamma-delta-inclusion} \Gamma_{2\delta} \subset \bigcup\limits_{j=1}^{M_\delta} B(z_j, L_\delta) =: \widetilde{\Gamma}_{2\delta}. \end{equation} As $\Gamma_0$ is a graph in the $L_\delta$-neighbourhood of any $z\in \Gamma_{2\delta}$ we get $ \mathrm{vol}_{d-1}( B(z_j,L_\delta/5) \cap \Gamma_{2\delta} ) \asymp L_\delta^{d-1} $. From this, (\ref{def-of-L-delta}) and (\ref{hessian-is-diag2}) it easily follows that \begin{equation}\label{est-on-M-delta} M_\delta \leq C L_\delta^{-(d-1)} \leq C (\kappa(\delta))^{-2(d-1)}, \end{equation} with an absolute constant $C$. For $1\leq j \leq M_\delta $ set $B_j :=B(z_j, L_\delta)$; we now define a smooth partition of unity subordinate to these balls.
Fix a smooth function $\Psi:{\mathbb R}^d\to [0,1]$ such that $\Psi(x) = 0$ for $|x|\geq 1$ and $\Psi(x)=1$ for $|x|\leq 1/2$. Define $ \Psi_{\delta, j}(x) = \Psi(L_\delta^{-1}(x- z_j))$, then clearly $\mathrm{supp} \Psi_{\delta, j} \subset B_j $. Now let $ \varphi_{\delta, j}:=\left( \sum\limits_{i=1}^{M_\delta} \Psi_{\delta, i} \right)^{-1} \Psi_{\delta,j}$. By construction we have that $\varphi_{\delta, j}$ is supported in $B_j$, $\sum\limits_{j=1}^{M_\delta} \varphi_{\delta, j} =1$ on $\Gamma_{2\delta}$, and \begin{equation} | \nabla \varphi_{\delta, j} (x) |\leq C L_\delta^{-1}, \qquad x\in {\mathbb R}^d, \end{equation} where the constant $C$ is independent of $j$ and $\delta$. Now for $1\leq j \leq M_\delta$ define \begin{equation}\label{def-of-I-j} I_j :=\int_{\Gamma_0} P(y) g(\lambda My +y_0 ) \varphi_{\delta, j}(y) d\sigma(y). \end{equation} \vspace{0.2cm} \noindent \textbf{Step 2. Reduction to oscillatory integrals.} In (\ref{def-of-I-j}) make a change of variables by setting $y=\mathcal{R}^{-1}z +z_j$, where the orthogonal transformation $\mathcal{R}$ is fixed as in (\ref{Gamma-is-loc-graph}). We next observe that the integral in (\ref{def-of-I-j}) is over $B(z_j, L_\delta)\cap \Gamma_0$, and $\Gamma_0$ is a graph in the $L_\delta$-neighbourhood of $z_j$. With these in mind we make the change of variable in $I_j$ and then pass to volume integration, thereby obtaining \begin{equation}\label{I-j-2} I_j=\int\limits_{|z'|\leq L_\delta} (P\varphi_{\delta, j} )(z_j+{\mathcal R}^{-1}(z',\psi(z'))) g \left(\lambda M z_j + \lambda M {\mathcal R}^{-1} (z',\psi(z')) +y_0 \right) (1+|\nabla \psi(z')|^2)^{1/2} d z' . \end{equation} For $t\in {\mathbb R}$ set ${\mathrm{exp}}(t) := e^{2\pi i t}$. Next, for $\xi\in {\mathbb Z}^d$ let $c_\xi(g)$ be the $\xi$-th Fourier coefficient of $g$. According to the assumption of the proposition we have $c_0(g)=0$.
Using the smoothness of $g$ and expanding it into Fourier series we get \begin{multline}\label{exp of g} g \left( \lambda M z_j + \lambda M {\mathcal R}^{-1} (z',\psi(z')) +y_0 \right) = \\ \sum\limits_{\xi \in {\mathbb Z}^d \setminus\{0\} } c_\xi (g ) {\mathrm{exp}} \left( \lambda \xi \cdot M z_j +\xi \cdot y_0 \right) {\mathrm{exp}} \left[ \lambda {\mathcal R} M^T \xi \cdot (z',\psi(z')) \right], \end{multline} where we have also used the orthogonality of $M$ and ${\mathcal R}$. For $\xi\in {\mathbb Z}^d$ set $\eta := {\mathcal R} M^T \xi$ and write $\eta = |\eta|(n', n_d)$ with $(n', n_d)\in \mathbb{S}^{d-1}$. By the orthogonality of $M$ and ${\mathcal R}$ we have $|\eta|=|\xi|$. Next, define $$ F(z') = n' \cdot z' + n_d \psi(z'), $$ and $$ \Phi_j(z') = (P\varphi_{\delta,j} )(z_j+{\mathcal R}^{-1}(z',\psi(z'))) (1+|\nabla \psi(z')|^2)^{1/2} . $$ From the definition of the cut-off $\varphi_{\delta,j}$ we have \begin{equation}\label{C-1-norm-of-Phi} || \Phi_j||_{C^1} \leq C || P||_{C^1} L_\delta^{-1}, \end{equation} uniformly for all $1\leq j \leq M_\delta$ with an absolute constant $C$. With this notation, from (\ref{I-j-2}) and (\ref{exp of g}) we get \begin{equation}\label{I-j-as-sum} I_j = \sum_{\xi \in {\mathbb Z}^{d}\setminus \{0\} } c_\xi(g) {\mathrm{exp}} \left( \lambda \xi \cdot M z_j +\xi \cdot y_0 \right) I_{j,\xi}, \end{equation} where \begin{equation}\label{I-j-xi} I_{j,\xi} =\int_{|z'|\leq L_\delta} \Phi_j(z') {\mathrm{exp}} \left[ \lambda |\xi| F(z') \right] dz'. \end{equation} \noindent \textbf{Step 3. Decay of $I_j$.} We analyse the decay of each $I_{j,\xi}$ in two distinct cases. \indent \textbf{Case 1.} $|n'|\geq c_0 a_1^2/2$. Fix $1\leq \alpha \leq d-1$ so that $|n_\alpha| = \max_{1\leq \beta \leq d-1} |n_\beta|$; clearly $|n_\alpha|\geq |n'|/d$.
From this, the definition of $F$, (\ref{def-of-L-delta}), and assertion (a) of Step 1, on the support of $\Phi_j$ we have \begin{multline}\label{case1-est of phase from below} |\partial_\alpha F(z')| = | n_\alpha + n_d \partial_\alpha \psi (z')| \geq |n_\alpha| -|\partial_\alpha \psi (z')| \geq \\ \frac{c_0 a_1^2}{2d} - K_2 L_\delta \geq \frac{c_0 a_1^2}{2d} - K_2 \frac{c_0 a_1^2}{4d K_2} = \frac{c_0 }{4d} a_1^2. \end{multline} Integrating by parts in $I_{j,\xi}$ in the $\alpha$-th coordinate, and then employing (\ref{case1-est of phase from below}) and (\ref{C-1-norm-of-Phi}) we get $$ |I_{j,\xi}|\leq C (\lambda|\xi|)^{-1} \int_{|z'|\leq L_\delta} \left| \partial_{\alpha} \left( \frac{\Phi_j}{\partial_\alpha F } (z') \right) \right| dz' \leq C (\lambda|\xi|)^{-1} L_\delta^{d-1} a_1^{-4} || P||_{C^1} L_\delta^{-1}, $$ with an absolute constant independent of $j$. From here and (\ref{def-of-L-delta}) we have \begin{equation}\label{I-j-xi-in-case1} |I_{j,\xi}| \leq C (\lambda|\xi|)^{-1} ||P ||_{C^1} a_1^{2d-8} . \end{equation} \vspace{0.2cm} \indent \textbf{Case 2.} $|n'|<c_0 a_1^2/2$. Since $|(n',n_d)|=1$ and $c_0$ is small we have $|n_d|> 1/2$ and hence $\left| \frac{n'}{n_d} \right|< c_0 a_1^2 $. By (c) there exists a unique $x_0'\in B(0,c_1 a_1)$ such that $\nabla \psi (x_0') = -\frac{n'}{n_d}$, and hence $\nabla F(x_0')=0$. Observe that $x_0'$ is not necessarily in the support of $\Phi_j$. For $1\leq \alpha \leq d-1$ consider the cone $$ \mathcal{C}_\alpha=\left\{ z'\in {\mathbb R}^{d-1}: \ |z_\alpha |\geq \frac{1}{2 \sqrt{d-1} } |z'| \right \}, $$ clearly $\cup_{\alpha=1}^{d-1} \mathcal{C}_\alpha ={\mathbb R}^{d-1}$. Now for fixed $1\leq \alpha \leq d-1$ take $z'\in \mathcal{C}_\alpha$ such that $|x_0'+z'|<c_1 a_1$.
Using estimate (b) of Step 1, the facts that $|n_d|>1/2$ and $\nabla F(x_0')=0$, and invoking the Mean Value Theorem, for some $\tau$ on the segment $[x_0',x_0'+z'] $ we get \begin{multline}\label{case2-est-of-deriv-F} \left| \frac{\partial F}{\partial z_\alpha} (x_0' +z') \right| = \left| \frac{\partial F}{\partial z_\alpha} (x_0' +z') - \frac{\partial F}{\partial z_\alpha} (x_0' ) \right | =\left| \left(\nabla \frac{\partial F}{\partial z_\alpha} \right) (\tau)\cdot z' \right| \geq \\ |n_d| \left( |\partial^2_{\alpha \alpha}\psi(\tau) z_\alpha| - \sum_{\beta =1, \ \beta\neq \alpha}^{d-1} |\partial^2_{\beta \alpha}\psi (\tau) z_\beta| \right) \geq C a_1 |z'|, \end{multline} with an absolute constant $C>0$. Since the cones $\{\mathcal{C}_\alpha\}$ cover ${\mathbb R}^{d-1}$, for each $1\leq \alpha \leq d-1$ there exists $\omega_\alpha$ supported in $\mathcal{C}_\alpha$, smooth away from the origin and homogeneous of degree 0, such that $$ \sum\limits_{\alpha=1}^{d-1} \omega_\alpha (z') =1, \qquad \forall z'\neq 0. $$ Observe that since each $\omega_\alpha$ is homogeneous of degree 0, for all $1\leq \alpha \leq d-1$ and non-zero $z'\in {\mathbb R}^{d-1}$ near the origin we have \begin{equation}\label{deriv-of-omega} \left| \partial_\alpha \omega_\alpha (z') \right| \leq C \frac{1}{|z'|}, \end{equation} with an absolute constant $C$. Now fix a non-negative function $h\in C^{\infty}({\mathbb R}^{d-1})$ satisfying $h(x')=0$ for $|x'|\geq 2$ and $h(x')=1$ for $|x'|\leq 1$. Setting $x'=z'-x'_0$, from (\ref{I-j-xi}) we get $$ I_{j,\xi}=\int_{{\mathbb R}^{d-1}} \Phi_j(x_0'+x') \exp \left[\lambda |\xi| F(x_0'+x') \right] dx' := I_{j,\xi}^{(1)} +I_{j,\xi}^{(2)}, $$ where $$ I_{j,\xi}^{(1)} = \int_{{\mathbb R}^{d-1} } h( \lambda^{1/2} x' ) \Phi_j(x_0'+x') \exp \left[ \lambda |\xi| F(x_0'+x') \right] dx' . $$ From the definition of $h$ we have \begin{equation}\label{est-of-I-xi-1} |I_{j,\xi}^{(1)} | \leq C \lambda^{-(d-1)/2} || P||_{L^\infty}.
\end{equation} The second part we decompose as $I_{j,\xi}^{(2)}= \sum\limits_{\alpha=1}^{d-1} I_{j,\xi}^{(2),\alpha} $ where $$ I_{j,\xi}^{(2),\alpha} = \int_{{\mathbb R}^{d-1}} \omega_\alpha(x') (1-h(\lambda^{1/2} x')) \Phi_j(x_0'+x') \exp \left[ \lambda |\xi| F(x_0'+x') \right] dx' . $$ We now integrate by parts in the $\alpha$-th coordinate, and with the aid of estimates (\ref{case2-est-of-deriv-F}), (\ref{deriv-of-omega}) and (\ref{C-1-norm-of-Phi}) we get \begin{multline*} I_{j,\xi}^{(2),\alpha} \lesssim \frac{(\lambda |\xi|)^{-1} }{a_1} \int\limits_{|x_0' + x'|<L_\delta} \frac{1}{|x'|} \left[ \frac{1}{|x'|} \big( 1-h(\lambda^{1/2} x') \big) L_\delta^{-1} || P||_{C^1} + \lambda^{1/2}(\partial_{\alpha}h)(\lambda^{1/2} x') || P||_{L^\infty} \right] dx' + \\ \frac{(\lambda |\xi|)^{-1} }{a_1^2} \int\limits_{|x_0' + x'|<L_\delta} \frac{1}{|x'|^2} \big( 1-h(\lambda^{1/2} x') \big) || P||_{L^\infty} dx'. \end{multline*} We now use the definition of $h$, and that of $L_\delta$ given by (\ref{def-of-L-delta}), and employ integration in spherical coordinates, thereby arriving at \begin{equation}\label{est-of-I-xi-2} I_{j,\xi}^{(2),\alpha} \lesssim \frac{(\lambda |\xi|)^{-1} }{a_1} \lambda^{1/2} L_\delta^{-1} || P ||_{C^1} + \frac{(\lambda |\xi|)^{-1} }{a_1^2} \lambda^{1/2} || P ||_{L^\infty} \end{equation} with an absolute constant\footnote{One may get a more precise decay rate in $\lambda$ depending on the dimension, cf. \cite[p.~76]{ASS2}; however, the crude estimates we have here are enough for our purpose.} $C$. By (\ref{est-of-I-xi-1}) and (\ref{est-of-I-xi-2}), along with the definition (\ref{def-of-L-delta}) we get \begin{equation}\label{I-j-xi-in-case2} |I_{j,\xi}|\leq C \lambda^{-1/2} || P||_{C^1} a_1^{-3}. \end{equation} \vspace{0.2cm} \noindent \textbf{Step 4. Final estimates.} We now put everything together.
By \cite[Lemma 2.3]{ASS1} we have \begin{equation}\label{g-coeff-est} \sum_{\xi\in {\mathbb Z}^d\setminus\{ 0 \} } |c_\xi(g)| \leq C \left( \sum\limits_{\alpha \in {\mathbb Z}^d_+, \ |\alpha|=d} ||\nabla^\alpha g||_{L^2(\mathbb{T}^d)}^2 \right)^{1/2}, \end{equation} where $\nabla^\alpha =\partial^{\alpha_1}_1 \circ ... \circ \partial^{\alpha_d}_d $, $|\alpha|=|\alpha_1|+...+|\alpha_d|$, and the constant $C=C(d)$. Now let $I(\lambda)$ be the integral in (\ref{int-in-prop}). From (\ref{def-of-I-j}) and (\ref{gamma-delta-inclusion}) we get \begin{equation}\label{I-lambda-est1} I(\lambda)=\sum\limits_{j=1}^{M_\delta} I_j + \int_{(\Gamma_0 \setminus \Pi_0)\setminus \widetilde{\Gamma}_{2 \delta}} P(y) g(\lambda M y + y_0) d \sigma(y). \end{equation} The definition of $\widetilde{\Gamma}_{2 \delta}$ implies $\mathrm{vol}_{d-1}((\Gamma_0 \setminus \Pi_0)\setminus \widetilde{\Gamma}_{2 \delta}) \lesssim \delta$, hence by (\ref{I-lambda-est1}) it follows that \begin{equation}\label{I-lambda-est2} |I(\lambda)|\lesssim \sum_{j=1}^{M_\delta} |I_j| + \delta || P||_{L^\infty} ||g||_{L^\infty}. \end{equation} We thus get \begin{align*} |I_j| &\leq \sum\limits_{\xi\neq 0} |c_\xi(g) | |I_{j,\xi}| \ \big( \text{by } (\ref{I-j-as-sum}) \big) \\ &\lesssim \sum\limits_{\xi\neq 0} |c_\xi(g) | \bigg[ (\lambda |\xi|)^{-1} ||P||_{C^1} a_1^{2d-8} + \lambda^{-1/2}||P||_{C^1} a_1^{-3} \bigg] \big( \text{by } (\ref{I-j-xi-in-case1}) \text{ and } (\ref{I-j-xi-in-case2}) \big) \\ &\lesssim \lambda^{-1/2} a_1^{-4} || P||_{C^1} \sum\limits_{\xi \neq 0} |c_\xi (g)| \\ &\lesssim \lambda^{-1/2} (\kappa(\delta))^{-4} || P||_{C^1} ||g||_{C^d} \ \big( \text{by } (\ref{g-coeff-est}) \text{ and } (\ref{hessian-is-diag2}) \big), \end{align*} where the constants are uniform in $1\leq j\leq M_\delta$ and $\delta>0$.
Using this bound on $I_j$ along with estimate (\ref{est-on-M-delta}) on $M_\delta$, from (\ref{I-lambda-est2}) we get \begin{equation}\label{I-lambda-last-est} |I(\lambda)|\lesssim \big[\lambda^{-1/2} (\kappa(\delta))^{-2(d+1)} +\delta \big] || P||_{C^1} ||g||_{C^d}, \end{equation} for any $\delta>0 $ small and any $\lambda \geq 1$. It is left to optimize the last inequality in $\delta$, for which we consider the function $f(\delta) : = \delta (\kappa(\delta))^{2(d+1)} $ in the interval $(0,\delta_0) $ where $\delta_0>0$ is small. It follows from the definition of $\kappa(\delta)$ in (\ref{def-of-kappa}) that $f$ is continuous and strictly increasing in $(0,\delta_0)$, and converges to $0$ as $\delta \to 0+$. Hence for each $\lambda>0$ large enough there is a unique $0<\delta=\delta(\lambda)<\delta_0$ such that $\lambda^{-1/2} = f(\delta(\lambda))$. Define $\omega_0 (\lambda) := \delta(\lambda)$ and observe that $\omega_0(\lambda) = f^{-1} (\lambda^{-1/2})$, where $f^{-1}$ is the inverse of $f$. It readily follows from the mentioned properties of $f$ that $\omega_0$ is a modulus of continuity. Finally, for given $\lambda>0$ large, applying inequality (\ref{I-lambda-last-est}) with $\delta=\delta(\lambda)$ we get $|I(\lambda) | \lesssim \omega_0(\lambda) || P||_{C^1} ||g||_{C^d} $, completing the proof of the proposition. \end{proof} \begin{proof}[Proof of Theorem \ref{Thm-slow-Dirichlet}] Recall that the coefficient matrix of (\ref{Dirichlet-bdd-domains}) is defined on some fixed domain $X$. Take any $x_0$ in the interior of $X$ and consider the translated domain $X-x_0$. Since the origin lies in the interior of $X-x_0$, we fix $D_0 \Subset X-x_0$, any prototypic domain constructed in subsection \ref{sec-prot-domain}. Following the notation of this section, we let $\Gamma_0 := \partial D_0$ and by $\Pi_0 $ we denote the flat portion of $\Gamma_0$, which by (P2) is a $(d-1)$-dimensional closed ball. Finally, we fix a ball $B_0$ from property (P4) formulated above.
For a matrix $M\in {\mathrm O}(d)$ let the bounded domain $D_M$ be the rotated image of $D_0$ by $M$ as defined in subsection \ref{subsec-prelim}. Set $\Gamma_M= \partial D_M$; as we have already discussed, $\Gamma_M$ is smooth and we have $\Gamma_M =\mathcal{M}(\Gamma_0) $. Since $M$ is orthogonal, it follows that $\Pi_M: = \mathcal{M} \Pi_0 \subset \partial D_M$ is a $(d-1)$-dimensional ball lying in a hyperplane passing through the origin and having normal equal to $M e_d$. Finally, we fix a small closed neighbourhood $S_0\subset \mathbb{S}^{d-1}$ of $e_d$ with non-empty interior, such that for any $n\in S_0$ if $M\in {\mathrm O}(d)$ with $M e_d = n$, then the corresponding rotated domain $D_M$ lies in $X-x_0$ and compactly contains the ball $B_0$. Fix a function $g\in C^\infty(\mathbb{T}^d)$ with the property $\int_{\mathbb{T}^d} g =0$, and a unit vector $n\in S_0$ along with a matrix $M \in {\mathrm O}(d)$ satisfying $M e_d = n$. For $\varepsilon>0$ let $u_\varepsilon$ be the smooth solution to \begin{equation}\label{osc-prob-in-proof} -\nabla \cdot A(x) \nabla u_\varepsilon(x) = 0 \text{ in } D_M +x_0 \qquad \text{ and } \qquad u_\varepsilon(x) = g (x/ \varepsilon) \text{ on } \Gamma_M+x_0. \end{equation} Since $g$ has mean zero, by (\ref{Dirichlet-bdd-domains-homo}) we get that $u_0$, the homogenized solution corresponding to (\ref{osc-prob-in-proof}), is identically zero. Now let $P_M: (D_M+x_0) \times (\Gamma_M+x_0) \to {\mathbb R}$ be the Poisson kernel for the pair $(A, D_M+x_0)$. For $x\in D_M+x_0$ we have \begin{equation}\label{u-e-two-parts} u_\varepsilon (x) = \int_{\Gamma_M+x_0} P_M(x,y) g(y/ \varepsilon) d\sigma_M(y) = \int_{\Pi_M+x_0} + \int_{\Gamma_M \setminus \Pi_M+x_0} := I_1 + I_2.
\end{equation} Observe that $\Gamma_M \setminus \Pi_M$ is the part of the boundary of $D_M$ with non-vanishing (positive) Gaussian curvature, while $\Pi_M$ is a flat ball (notice that principal curvatures are orthogonal invariants, hence do not change after rotation, see for example \cite[Section 14.6]{GT}). The aim is to show that $I_2$ has some prescribed decay rate in $\varepsilon$ independently of $M$, while the decay of $I_1$ can be made as slow as one wishes. Setting $\lambda = 1/\varepsilon$ and then translating the origin to $x_0$ and rotating the coordinate system by $M^T$ we get $$ I_2=I_2(\lambda; x)= \int_{\Gamma_0 \setminus \Pi_0} P_M(x,M z+x_0) g( \lambda M z + \lambda x_0) d\sigma_0(z) . $$ For each fixed $x\in B_0+x_0$ by (\ref{Poisson-rep}) the function $P_M(x, \cdot +x_0 )$ is $C^\infty$ on $\Gamma_M$, hence applying Proposition \ref{prop-main-for-Dirichlet} we obtain $$ |I_2| \leq C_0 \omega_0(\lambda ) ||P_M (x, \cdot +x_0) ||_{C^1(\Gamma_M)} ||g||_{C^d(\mathbb{T}^d)}, \qquad x\in B_0+x_0, $$ with an absolute constant\footnote{Observe that the constant in Proposition \ref{prop-main-for-Dirichlet} is independent of the shift, and hence having $\lambda x_0$ in $g$ still results in a constant independent of $\lambda$.} $C_0$. By Lemma \ref{Lem-Poisson-dist-smooth} we have $ ||P_M (x, \cdot + x_0) ||_{C^1(\Gamma_M)} \leq C_0 $ uniformly for $x\in B_0 +x_0$ and matrix $M$ as above. The latter implies that \begin{equation}\label{decay-of-I-2} |I_2(\lambda; x)| \leq C_0 \omega_0(\lambda ) ||g||_{C^d (\mathbb{T}^d) }, \qquad x\in B_0+x_0. \end{equation} For the flat part of the integral we denote by $\mathbb{I}_{M}$ the characteristic function of $\Pi_M$ and extend $P_M(x, \cdot )$ by zero outside $\Pi_M+x_0$.
With this notation we get \begin{align*} \numberthis \label{I-1} I_1= I_1(\lambda; x) = \int_{\Pi_M} P_M(x,y +x_0) g(\lambda y + \lambda x_0) d\sigma_M(y) &= \\ \int_{y\cdot n =0} P_M(x,y +x_0)\mathbb{I}_{M}(y) g(\lambda y + \lambda x_0) d \sigma(y) &= \\ \int_{z_d= 0} P_M(x , Mz +x_0 ) \mathbb{I}_{M}(Mz ) g(\lambda Mz +\lambda x_0 ) d\sigma_0(z) &= \\ \int_{{\mathbb R}^{d-1}} P_M \left(x , M (z',0)+ x_0 \right) \mathbb{I}_{M} \left(M (z',0) \right) g \left(\lambda M(z',0)+ \lambda x_0 \right) d z' &. \end{align*} Consider the following 2-parameter family of functions \begin{multline} \mathcal{F}: = \{ F_{x,M}(z') : = P_M(x , M (z',0) + x_0 ) \mathbb{I}_{M} (M (z',0) ), \ z'\in {\mathbb R}^{d-1}: \text{ where } \\ \ \ x\in B_0 +x_0 , \ M\in {\mathrm O}(d) \text{ with } M e_d =n \text{ and } n\in S_0 \}. \end{multline} Let us see that the family $\mathcal{F}$ satisfies all conditions of Lemma \ref{Lem-family-on-compact-supp}. Indeed, assumption (c') of the lemma is simply due to the construction of the domain $D_0$ and the orthogonality of the matrices $M$. Next, (c') combined with the estimate (\ref{dist-Poisson-smooth}) implies assumption (b) of Lemma \ref{Lem-slow-conv-family-of-F}. Items (d) and (e) of Lemma \ref{Lem-family-on-compact-supp} are due to (\ref{deriv-Poisson-smooth}) and (\ref{dist-Poisson-smooth}) respectively. It is left to verify the assumption with the lower bound on the integrals in part (a) of Lemma \ref{Lem-slow-conv-family-of-F}. The latter is due to Corollary \ref{cor-Poisson-inf-rotated}. We now apply Lemma \ref{Lem-family-on-compact-supp} to the family $\mathcal{F}$ for the modulus of continuity $\lambda\longmapsto \omega(\lambda) + \omega_0^{1/2}(\lambda)$, where $\omega$ is the one fixed in the formulation of the theorem, and $\omega_0$ is determined from (\ref{decay-of-I-2}).
By doing so we get an irrational normal $n\in S_0$, a function $g \in C^\infty(\mathbb{T}^d)$ with the property $\int_{\mathbb{T}^d} g =0$, and a strictly increasing sequence of positive numbers $\{\lambda_k\}_{k=1}^\infty$ such that for any $M\in {\mathrm O}(d)$ satisfying $M e_d=n$ from (\ref{I-1}) we have \begin{equation}\label{I-1-on-lambda-k} |I_1(\lambda_k; x)| \geq \omega(\lambda_k) + \omega_0^{1/2}(\lambda_k), \ \ x\in B_0+x_0, \qquad k=1,2,... \ . \end{equation} Since $\omega_0$ decays at infinity, combining (\ref{I-1-on-lambda-k}) and (\ref{decay-of-I-2}) and setting $\varepsilon_k = 1/\lambda_k$, from the representation (\ref{u-e-two-parts}) for any $x\in B_0 +x_0$ we get \begin{equation}\label{u-e-k} |u_{\varepsilon_k}(x)| \geq |I_1(\lambda_k; x)|-|I_2(\lambda_k;x)| \geq \omega(\lambda_k) + \omega_0^{1/2}(\lambda_k) - C_0 \omega_0(\lambda_k) ||g||_{C^d(\mathbb{T}^d)} \geq \omega(1/\varepsilon_k) , \end{equation} if $k\geq 1$ is large enough. Finally, as the domain $D$ of the Theorem we take $D_M+x_0$ where $D_M$ is any domain having $n$ as the normal of its flat boundary, where $n$ is obtained from Lemma \ref{Lem-family-on-compact-supp} (note that $D$ is defined modulo ${\mathrm O}(d-1)$), and as $D'$ we take $B_0+x_0$, where $B_0$ is the ball fixed above. Since $\int_{\mathbb{T}^d} g =0$, the homogenized problem (\ref{Dirichlet-bdd-domains-homo}) has only a trivial solution. Thus (\ref{u-e-k}) completes the proof of part a) of the theorem. \vspace{0.2cm} We now prove part b). Observe that an argument similar to what we had for (\ref{decay-of-I-2}) shows that $I_2(\lambda; x) \to 0 $ as $\lambda \to \infty$ for any $x\in D=D_M+x_0$. Hence, taking into account decomposition (\ref{u-e-two-parts}), it is enough to show that $I_1(\lambda; x) \to 0$ as $\lambda \to \infty$ for all $x\in D$. We may assume without loss of generality, by passing to a subset of $S_0$ if necessary, that $n_d\neq 0$, where the unit vector $n$ is fixed from part a).
Now take any subset $\Pi$ of $\Pi_M+x_0$ such that the projection of $\Pi$ onto ${\mathbb R}^{d-1}\times\{0\} $ is a $(d-1)$-dimensional rectangle, which we will denote by $[a_1,b_1]\times ... \times [a_{d-1}, b_{d-1}]$. To show that $I_1(\lambda; x) $ decays it is enough to prove that the integral \begin{equation}\label{I-lambda-over-Pi} I(\lambda; x):= \int_{\Pi } P_M(x,y) g(\lambda y) d\sigma_M(y) \end{equation} converges to 0 as $\lambda \to \infty$ for any $x\in D$. The latter follows easily. Indeed, since $n_d\neq 0$, for $y\in \Pi$ we have $y_d = -\frac{1}{n_d} ( n_1 y_1 +...+n_{d-1} y_{d-1} ) $, hence passing to volume integration in (\ref{I-lambda-over-Pi}) and expanding $g$ into Fourier series, for each $x\in D$ we get \begin{equation}\label{I-lambda-over-Pi-rect} I(\lambda; x) = \sum_{\xi \neq 0} c_\xi(g) \int_{a_1}^{b_1}...\int_{a_{d-1}}^{b_{d-1}} P_M(x, y_1,..., y_d ) \prod\limits_{k=1}^{d-1} {\mathrm{exp}} \left[ \left( \xi_k - \frac{n_k}{n_d} \xi_d \right) \lambda y_k \right] d y_1... d y_{d-1}, \end{equation} where $y_d =-\frac{1}{n_d} ( n_1 y_1 +...+n_{d-1} y_{d-1} )$, and as before ${\mathrm{exp}}(t) = e^{2\pi i t}$, $t\in {\mathbb R}$. Since $n$ is irrational, it is easy to see that at least one of the exponentials in the last expression is non-trivial, i.e. for any non-zero $\xi\in {\mathbb Z}^d $ there is $1\leq k \leq d-1$ such that $ \xi_k - \frac{n_k}{n_d} \xi_d \neq 0$. Using this and the smoothness of $P_M(x, \cdot)$, which is due to Lemma \ref{Lem-Poisson-dist-smooth}, in (\ref{I-lambda-over-Pi-rect}) for each non-zero $\xi$ we integrate by parts with respect to the corresponding $k$-th coordinate, thereby obtaining that each integral in (\ref{I-lambda-over-Pi-rect}) decays as $\lambda \to \infty$. Observe as well that for each fixed $x$, the integrals in (\ref{I-lambda-over-Pi-rect}) are uniformly bounded with respect to $\lambda$ and $\xi$ in view of (\ref{dist-Poisson-smooth}).
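For the reader's convenience, the decay furnished by this integration by parts can be recorded explicitly. Writing $\mu := \xi_k - \frac{n_k}{n_d}\xi_d \neq 0$ and suppressing the remaining variables (so that $\partial_{y_k}$ below acts on the composed function $y'\mapsto P_M(x,y_1,...,y_{d-1},y_d(y'))$, the exponentials with index $j\neq k$ being independent of $y_k$), one has, schematically,

```latex
\[
  \left| \int_{a_k}^{b_k} P_M(x,y)\, e^{2\pi i \mu \lambda y_k}\, dy_k \right|
  = \frac{1}{2\pi |\mu| \lambda}
    \left| \Big[ P_M(x,y)\, e^{2\pi i \mu \lambda y_k} \Big]_{y_k=a_k}^{y_k=b_k}
      - \int_{a_k}^{b_k} \partial_{y_k} P_M(x,y)\,
        e^{2\pi i \mu \lambda y_k}\, dy_k \right|
  \leq \frac{C_x}{|\mu| \lambda},
\]
```

where $C_x$ is finite for each fixed $x$ thanks to the smoothness of $P_M(x,\cdot)$; in particular, for each fixed non-zero $\xi$ the corresponding integral in (\ref{I-lambda-over-Pi-rect}) is $O(\lambda^{-1})$.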
Finally, the series of Fourier coefficients of $g$ converges absolutely due to the smoothness of $g$. This, coupled with the uniform boundedness and decay of the integrals in (\ref{I-lambda-over-Pi-rect}), easily implies that $I(\lambda; x)\to 0 $ as $\lambda \to \infty$ for each fixed $x\in D$, completing the proof of part b) of the theorem. The theorem is proved. \end{proof} \subsection{Concluding comments} The reader may have observed that the approach we have here has the potential to work for homogenization problems whenever solutions to the underlying PDE admit an integral representation with some nice control over the representation kernels. For example, one should be able to treat the periodic homogenization of Neumann boundary data with the methods developed in this note. It also seems plausible, possibly with some more work, that one can study homogenization of almost-periodic boundary data as well using a similar analysis. The reason we cannot readily allow (periodic) oscillations in the operator of the problem (\ref{Dirichlet-bdd-domains}) is the absence of the necessary control over the Poisson kernel corresponding to the oscillating operator. In particular, we do not have uniform (with respect to $\varepsilon>0$) bounds on $C^1$-norms similar to those in Lemma \ref{Lem-Poisson-dist-smooth} at our disposal. One possible way around this obstacle is to use the results on homogenization of the Poisson kernel for the $\varepsilon$-problem considered in \cite{Kenig-Lin-Shen-CPAM}, to reduce matters to a fixed operator, where the analysis of the current paper can be utilized (cf. the proof of Theorem 1.7 of \cite{ASS2}). \section*{Appendix} We give some basic estimates on the Green's and Poisson's kernels associated with divergence type elliptic operators in bounded domains. The estimates we will need are standard and well-known, but in view of the absence of an explicit reference we include the proofs here.
Following subsection \ref{subsec-prelim}, for a matrix $M\in {\mathrm O}(d)$ and domain $D\subset {\mathbb R}^d$ by $D_M$ we denote the rotated image of $D$ by the matrix $M$. We saw already that $D_M$ is a bounded domain with $C^\infty$ boundary, moreover $\partial D_M =\mathcal{M} (\partial D)$, where the orthogonal transformation $\mathcal{M}$ is defined through $\mathcal{M}x = Mx$ for $x\in {\mathbb R}^d$. Next, we let the coefficient matrix $A$ and domain $X$ be as in the formulation of Theorem \ref{Thm-slow-Dirichlet}. Finally, we fix a bounded domain $D$ with $C^\infty $ boundary, and such that for any $M\in {\mathrm O}(d)$ the domain $D_M$ lies compactly inside $X$. With this preliminary setup we now consider the operator $\mathcal{L}:= - \nabla \cdot A(x) \nabla $, $x\in X$, and let $G_M(x,y)$ be the Green's kernel for $\mathcal{L}$ in the domain $D_M$, i.e. for each fixed $y \in D_M$, $G_M(\cdot , y)$ solves $$ -\nabla_x \cdot A(x) \nabla_x G_M(x,y) = \delta(x-y), \ \ x\in D_M \qquad \text{ and } \qquad G_M(x,y)=0 , \ x\in \partial D_M, $$ in the sense of distributions, where $\delta$ is the Dirac symbol. We have the following properties. \begin{itemize} \indentdisplays{1pt} \item[(a)] Set $G_M^*(x,y) := G_M^T(y,x)$, then $G_M^*$ is the Green's kernel for the formal adjoint to $\mathcal{L}$ in the domain $D_M$, i.e. a divergence type operator with coefficient matrix equal to $A^{\beta \alpha}$. \vspace{0.1cm} \item[(b)] For any multi-index $\alpha \in {\mathbb Z}^d_+$ we have \begin{equation}\label{sDom-Green-deriv} |\nabla^\alpha_x G_M (x,y)| \leq C_\alpha |x-y|^{2-d -|\alpha|} \text{ for } d+|\alpha|>2, \end{equation} where $C_\alpha $ is independent of $M$. \end{itemize} \vspace{0.1cm} The existence and uniqueness of the Green's kernel, as well as property (a) and the estimate in (b) for each fixed $M$, are proved in \cite{Dolz-Muller}, which is one of several papers discussing basic questions around Green's kernels for divergence type elliptic operators.
The only thing that needs some clarification is the choice of the constant in (b) independently of $M$. To see it, we proceed as follows. Assume $G$ is the Green's kernel for the pair $(A, D)$ and let $M\in {\mathrm O}(d)$ be fixed. Make a change of variables by setting $Mx = \widetilde{x}$ and $My = \widetilde{y}$ where $x,y\in D$ and $\widetilde{x}, \widetilde{y} \in D_M$. By orthogonality of $M$ we have $\nabla_x = M^T \nabla_{\widetilde{x}}$, which, combined with the uniqueness of the Green's kernel, easily implies that the function $$ \widetilde{G} (\widetilde{x}, \widetilde{y}):= G(M^T \widetilde{x}, M^T \widetilde{y}) $$ is the Green's kernel for the pair $( M A(M^T \cdot ) M^T , D_M )$. This shows that if $\widetilde{G}_M$ is the Green's kernel for the pair $(M^T A(M \cdot )M, D)$, then \begin{equation}\label{rel-between-greens} G_M (\widetilde{x}, \widetilde{y} ) := \widetilde{G}_M (M^T \widetilde{x}, M^T \widetilde{y} ) \end{equation} produces the Green's kernel for the pair $(A, D_M)$. In view of the orthogonality of $M$ the ellipticity constants, as well as the bounds on $C^k$ norms, for any $k\geq 0$, of the coefficient matrix $M^T A(M \cdot) M$, can be chosen independently of $M$; hence (\ref{rel-between-greens}) and the proof of \cite{Dolz-Muller} illustrate the uniformity of the constants in (b) with respect to $M$. Now for $x \in D_M$ and $y \in \partial D_M$ we let $P_M(x,y)$ be the Poisson kernel for the pair $(A, D_M)$. One may easily conclude using the divergence theorem that \begin{equation}\label{Poisson-rep} P_M(x,y) = - n^T(y) A^T(y) \nabla_y G_M(x,y), \ \ \text{ where } x \in D_M, \ y \in \partial D_M, \end{equation} where $n(y)$ is the unit outward normal to $\partial D_M$ at the point $y$. \begin{lem}\label{Lem-Poisson-dist-smooth} Let the domain $D$ be as above and $M\in {\mathrm O}(d)$ be arbitrary.
Keeping the above notation and assumptions in force, we have \begin{equation}\label{dist-Poisson-smooth} |P_M(x,y)| \leq C \frac{d(x)}{|x-y|^d}, \qquad \forall x\in D_M, \ \forall y\in \partial D_M, \end{equation} where $d(x)$ is the distance of $x$ from the boundary of $D_M$, and \begin{equation}\label{deriv-Poisson-smooth} |\nabla_y P_M(x,y)| \leq C |x-y|^{-d} , \qquad \forall x\in D_M, \ \forall y\in\partial D_M, \end{equation} with constants independent of $M$. \end{lem} \begin{proof} Let us first observe that the estimates on the derivatives of $P_M$ follow trivially from the representation (\ref{Poisson-rep}), estimate (\ref{sDom-Green-deriv}), and the symmetry property (a) of the Green's matrix formulated above, combined with the smoothness of the coefficients $A$ and the domain $D_M$. We now proceed to the proof of the estimate with the distance, for which we will rely on the uniformity of the constants in (\ref{sDom-Green-deriv}) with respect to $M$. First consider the case $d\geq 3$. We now show that for $x,y\in D_M$ with $x\neq y$ one has \begin{equation}\label{sDom-Green-dist1} | G_M(x,y) | \leq C \frac{d(x)}{ |x-y|^{d-1} }. \end{equation} By (\ref{sDom-Green-deriv}) we have $|G_M(x,y)|\leq C |x-y|^{2-d}$, hence (\ref{sDom-Green-dist1}) is trivial if $d(x)>\frac 13 |x-y|$, thus we will assume that $d(x)\leq \frac 13 |x-y|$. Now fix $\overline{ x } \in \partial D_M$ such that $d(x) = |x-\overline{x}|$. Since $G_M$ vanishes on the boundary of $D_M$ with respect to both variables, using the Mean-Value Theorem and estimate (\ref{sDom-Green-deriv}) for $G_M^*$ we get \begin{equation}\label{Z1} |G_M(x,y)| = |G_M(x,y) - G_M( \overline{x}, y) | \leq | \nabla_x G_M( \widetilde{x}, y) | | x-\overline{x}| \leq C \frac{d(x)}{| \widetilde{x} -y |^{d-1} }. \end{equation} Here, for estimating the derivative of $G_M$ we have used the symmetry relation (a) above.
As $d(x) \leq \frac 13 |x-y|$, and $\widetilde{x}$ is on the segment $[x, \overline{x}]$, the triangle inequality implies $$ | \widetilde{x} - y | \geq |x-y|-|x- \widetilde{x} | \geq |x-y| - |x - \overline{x}| \geq \frac 23 |x-y|, $$ which combined with (\ref{Z1}) gives (\ref{sDom-Green-dist1}). Now fix $x_0,y_0$ and let $r:=|x_0-y_0|>0$. Consider $\widetilde{G}_M(z): = G_M(x_0, rz + x_0 )$ in the scaled and shifted domain $D_{r,M}:= r^{-1}(D_M-x_0)$. We have that $\widetilde{G}_M$ is a solution to the adjoint operator in $D_{r,M} \cap (B(0, 3)\setminus B( 0, 1/3))$, hence in view of the smoothness of the coefficients, from standard elliptic regularity estimates we obtain $$ |\nabla_z \widetilde{G}_M(z)| = | r \nabla_y G_M (x_0, rz + x_0 ) | \leq C || \widetilde{G}_M ||_{L^\infty( D_{r,M} \cap( B(0,3)\setminus B(0,1/3) ))}, $$ for all $z\in D_{r,M} \cap (B(0,2)\setminus B(0,1/2)) $. Note that the regularity estimates we have used are uniform in $M$ due to (\ref{rel-between-greens}) and the orthogonality of $M$. The last inequality combined with (\ref{sDom-Green-dist1}) implies \begin{equation}\label{Z2} | \nabla_y G_M(x_0,y_0)| \leq C \frac{d(x_0)}{| x_0 -y_0 |^d}. \end{equation} Now the estimate (\ref{dist-Poisson-smooth}) follows from the last inequality and the representation (\ref{Poisson-rep}). It is left to consider the 2-dimensional case, which can be handled precisely as in \cite[Lemma 21]{AL-systems}, and thus we omit the details. The lemma is proved. \end{proof} \noindent {\footnotesize \textbf{Acknowledgements.} I thank Christophe Prange for an important conversation back in Fall of 2014 which has eventually led to the problem considered in Section \ref{sec-Laplace}. Part of the work was done during my visit to Institut Mittag-Leffler for the term ``Homogenization and Random Phenomenon". The warm hospitality and support of The Institute is acknowledged with gratitude.
This article is partially based on my PhD thesis completed at The University of Edinburgh in 2015, and I wish to thank my thesis supervisor Aram Karakhanyan for his encouragement as well as his advice. I am also grateful to the anonymous referees for their thorough reading of the manuscript and for providing valuable suggestions, as well as corrections, which have certainly helped to improve the quality of the presentation. }
\section{Introduction} \label{sec:introduction} There are many virtual assistants commercially available today, such as Apple's Siri, Google's Home, Microsoft's Cortana, and Amazon's Echo. With a well-designed dialogue system as an intelligent assistant, people can accomplish tasks easily via natural language interactions. In the research community, dialogue systems have been well studied for many decades. Recent advances in deep learning have also inspired the exploration of neural dialogue systems. However, it still remains a big challenge to build and evaluate multi-turn task-completion systems in a universal setting. On one hand, conversational data for dialogue research has been scarce, due to challenges in human data collection and privacy issues. Without standard public datasets, it has been difficult for any group to build universal dialogue models that could encourage follow-up studies to benchmark upon. On the other hand, labeled datasets that are available now, while useful for evaluating partial components of a dialogue system (such as natural language understanding, dialogue state tracking), fail at end-to-end system evaluation, as a thorough evaluation of a dialogue system requires a large number of users to interact with the system in real time. A well-adopted alternative approach is the employment of user simulators. The idiosyncratic strengths and weaknesses of simulators for dialogue systems have been a long-standing research topic. User simulators can provide an interactive environment for evaluating dialogue systems, which is more attainable and less costly than human evaluation. The use of simulators can also foster interest and encourage research efforts in exploring reinforcement learning for dialogue management.
However, the progress of dialogue research via reinforcement learning has not been as fast as expected, largely due to the lack of a common evaluation framework, on which different research groups can jointly develop new technologies and improve their systems. In addition, the dependency on simulators often limits the scope of functionality of the implemented dialogue systems, due to the inevitable discrepancy between real users and artificial simulators. Over the past few years, we have achieved some initial success in this area. This proposal aims to further develop and mature this work and release a universal experimentation and evaluation framework by working together with research teams in the community. In this proposal, we present a new Dialogue Challenge on ``End-to-End Task-Completion Dialogue System''. This differs from previous dialogue tracks, most of which have focused on component-level evaluation. In this dialogue challenge, we will release a carefully-labeled conversational dataset in multiple domains. This data can be used by participants to develop all the modules required to build task-completion dialogue systems. We will also release an experimentation platform with built-in simulators. Each domain will have its own well-trained simulator for experimentation purposes. In the rest of the proposal, Section~\ref{sec:platform_overview} will provide more details about the proposed experimentation platform. Section~\ref{sec:task_des} will describe the specific tasks defined in the challenge, as well as the corresponding datasets that will be released. Finally, Section~\ref{sec:eval} will describe the final evaluation of submitted systems.
\begin{figure*}[ht] \centering \includegraphics[width=\linewidth]{frmend2end_v2.pdf} \vspace{-8mm} \caption{Illustration of an end-to-end task-completion dialogue system.} \vspace{-2mm} \label{fig:end2end} \end{figure*} \section{Platform Overview} \label{sec:platform_overview} The proposed experimentation platform is illustrated in Figure~\ref{fig:end2end}~\cite{li2017end}. It consists of a user simulator (on the left) that mimics a human user and a dialogue system (on the right). In the user simulator, an agenda-based user modeling component~\cite{schatzmann2009hidden} works at the dialog-act level, and controls the conversation exchange conditioned on the generated user goal, to ensure that the user behaves in a consistent, goal-oriented manner. An NLU (Natural Language Understanding) module will process user's natural language input into a semantic frame. An NLG (Natural Language Generation) module is used to generate natural language sentences responding to the user's dialogue actions. Although Figure~\ref{fig:end2end} presents a neural dialogue system as an example, participants are free, and encouraged, to plug in any NLU/NLG modules, as long as their systems can complete a predefined task via multi-turn conversations with the user. In every turn of a conversation, the system needs to understand natural language input generated by the user or the simulator, track dialogue states during the conversation, interact with a task-specific dataset (described in Section~\ref{sec:task_des}), and generate an action (i.e., system response). The system action could be presented either as semantic frames (known as \emph{dialog-acts} in simulation evaluation), or as natural language utterances generated by an NLG module. 
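As an illustrative sketch (not part of the released platform code), the frame-level semantics exchanged between the modules above can be made concrete with a small parser that converts the act(slot=value; slot) notation used throughout this proposal into a semantic frame; the dictionary field names below are our own assumptions, not the released format:

```python
import re

def parse_dialog_act(act_str):
    """Parse a dialog act such as
    'request(moviename; genre=action; date=this weekend)'
    into a frame with an intent, inform slots, and request slots.
    The frame layout here is illustrative."""
    m = re.match(r"\s*(\w+)\s*\((.*)\)\s*$", act_str)
    if m is None:
        raise ValueError(f"not a dialog act: {act_str!r}")
    frame = {"intent": m.group(1), "inform_slots": {}, "request_slots": {}}
    for item in filter(None, (s.strip() for s in m.group(2).split(";"))):
        if "=" in item:                       # constraint, e.g. genre=action
            slot, value = (p.strip() for p in item.split("=", 1))
            frame["inform_slots"][slot] = value
        else:                                 # requested slot, value unknown
            frame["request_slots"][item] = "UNK"
    return frame

print(parse_dialog_act("request(moviename; genre=action; date=this weekend)"))
# {'intent': 'request',
#  'inform_slots': {'genre': 'action', 'date': 'this weekend'},
#  'request_slots': {'moviename': 'UNK'}}
```

A dialogue system can operate directly on such frames for debugging, and switch to natural language input once an NLU module is plugged in.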
\section{Task Description} \label{sec:task_des} In this dialogue challenge, we will release well-annotated datasets for three task-completion domains\footnote{All the datasets and code will be available at \url{https://github.com/xiul-msr/e2e_dialog_challenge}.}: movie-ticket booking, restaurant reservation, and taxi ordering. Table~\ref{tab:stat_tasks} shows the statistics of the three datasets. We will use movie ticket booking as an example to explain the specific task of building dialogue system in each domain. \begin{table}[h] \small \begin{tabular}{|c|c|c|c|c|} \hline Task & \#Intents & \#Slots & \#Dialogs \\ \hline Movie-Ticket Booking & 11 & 29 & 2890 \\ \hline Restaurant Reservation & 11 & 30 & 4103 \\ \hline Taxi Ordering & 11 & 19 & 3094 \\ \hline \end{tabular} \centering \vspace{1mm} \caption{The Statistics of Three Tasks.} \label{tab:stat_tasks} \end{table} \subsection{Movie-Ticket booking task} In this task, the goal is to build a dialogue system to help users find information about movies and book movie tickets. Throughout the course of the conversation, the agent gathers information about the user's requests, and in the end books the movie tickets or provides information about the movie in question. At the end of the conversation, the dialogue environment assesses a binary outcome (success or failure), based on (1) whether a movie ticket is booked and (2) whether the movie satisfies the user's constraints. \subsubsection{Dataset} The data that will be released for this task was collected via Amazon Mechanical Turk. The annotation schema contains $11$ intents (e.g., inform, request, confirm\_question, confirm\_answer, etc.), and $29$ slots (e.g., moviename, starttime, theater, numberofpeople). Most of the slots are \textit{informational} slots, which can be used to constrain the search. Others are \textit{request} slots, with which users can request information from the agent. 
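The binary outcome described above (a ticket is booked and the movie satisfies the user's constraints) can be sketched in a few lines; the field names mirror the user-goal format of this proposal, but the function itself is an illustration, not the released evaluator:

```python
from typing import Optional

def task_success(user_goal: dict, booked_movie: Optional[dict]) -> bool:
    """A dialogue succeeds iff (1) a ticket was actually booked and
    (2) the booked movie satisfies every informable constraint in the
    user goal. Field names here are illustrative assumptions."""
    if booked_movie is None:                        # condition (1) fails
        return False
    constraints = user_goal.get("inform_slots", {})
    return all(booked_movie.get(slot) == value      # condition (2)
               for slot, value in constraints.items())

goal = {"request_slots": {"ticket": "UNK"},
        "inform_slots": {"moviename": "deadpool", "numberofpeople": "2"}}
print(task_success(goal, {"moviename": "deadpool", "numberofpeople": "2",
                          "starttime": "9:00 pm"}))   # True
print(task_success(goal, None))                       # False
```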
The final dataset to release will consist of 2890 dialogue sessions, with approximately $7.5$ turns per session on average. Table \ref{tab:data_example} shows an example of annotated human-human dialogue in the movie-ticket booking task. Table \ref{tab:tcp_sample} shows one success and one failure dialogue example, generated by a rule-based agent and an RL agent interacting with the user simulator, respectively. \subsubsection{User-Goal Set} The user goals that will be released alongside the labeled data are extracted from labeled dialogues by two methods. The first one extracts all the slots (known and unknown) from the first user turn (excluding the greeting turn) in each session, under the assumption that the first turn usually contains the main request from the user. The second method aggregates all the slots that appear in the user turns into one user goal. These user goals are then stored in a user-goal database for the simulator to draw from. When triggering a dialogue, the user simulator randomly samples one user goal from this database. Figure 2 shows one example user goal for the movie-ticket booking task. \begin{figure}[htb] \begin{lstlisting}[language=XML]
New episode, user goal:
{
  "request_slots": { "ticket": "UNK" },
  "inform_slots": {
    "city": "seattle",
    "numberofpeople": "2",
    "theater": "amc pacific place 11 theater",
    "starttime": "9:00 pm",
    "date": "tomorrow",
    "moviename": "deadpool"
  }
}
\end{lstlisting} \label{fig:user_goal_example} \vspace{-2mm} \caption{An example of a user goal: the user wants to buy 2 tickets of Deadpool at 9:00 PM tomorrow at amc pacific place 11 theater, Seattle.} \end{figure} \subsubsection{Knowledge Base} The knowledge base to be released for this task was built from labeled dialogues. The data entries are organized in JSON files, which include several KBs of size 1000 records or above.
Examples of these data records are as follows: \begin{figure}[htb] \begin{lstlisting}[language=XML] #movie 1 'city': 'hamilton', 'theater': 'manville 12 plex', 'zip': '08835', 'critic_rating': 'good', 'date': 'tomorrow', 'state': 'NJ', 'starttime': '10:30am', 'genre': 'comedy', 'moviename': 'zootopia' #movie 2 'city': 'seattle', 'theater': 'regal meridian 16', 'zip': '98101', 'theater_chain': 'regal meridian', 'state': 'WA', 'starttime': '6:30pm', 'date': 'tonight', 'moviename': 'zootopia' \end{lstlisting} \vspace{-2mm} \caption{Data records in the movie knowledge base.} \label{fig:mv_kb_exmaples} \end{figure} \subsubsection{User Simulator} For the experimentation platform, we will also release a user simulator~\cite{li2016user} for this task. The user simulator supports two input formats: \begin{enumerate} \item \emph{Frame-level semantics}: A dialog act form (e.g., \textsf{request(moviename; genre=action; date=this weekend)}) that can be used for debugging purposes. \item \emph{Natural language}: Natural language text. To use this format, each participant needs to build their own NLU component to convert natural language input into frame-level semantics. \end{enumerate} \section{Evaluation} \label{sec:eval} To evaluate the quality of the submitted systems, we will conduct both simulation evaluation and human evaluation. \subsection{Simulation Evaluation} Three metrics will be used to measure the quality of the systems: \{\emph{success rate, average turns, average reward}\}. Success rate is sometimes known as \emph{task completion rate} -- the fraction of dialogues that end successfully. Average turns is the average length of the dialogue. Average reward is the average reward received during the conversation. There is a strong correlation among the three metrics: generally speaking, a good policy should have a high success rate, a high average reward, and a low average number of turns. Here, we choose \emph{success rate} as our major evaluation metric.
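The three simulation metrics above can be computed directly from a batch of simulated episodes. The sketch below is a minimal illustration; the episode record format (fields "success", "turns", and "reward") is our own assumption for exposition and does not describe the released evaluation code.

```python
def evaluate(episodes):
    """Compute {success rate, average turns, average reward} over a list of
    simulated episodes. Each episode is assumed to be a dict with keys
    'success' (0/1), 'turns' (int), and 'reward' (float)."""
    n = len(episodes)
    if n == 0:
        raise ValueError('need at least one episode')
    return {
        'success_rate': sum(e['success'] for e in episodes) / n,
        'avg_turns': sum(e['turns'] for e in episodes) / n,
        'avg_reward': sum(e['reward'] for e in episodes) / n,
    }
```

For example, over one successful episode (8 turns, reward 60) and one failed episode (12 turns, reward $-10$), the success rate is 0.5, the average number of turns is 10, and the average reward is 25.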
\subsection{Human Evaluation} We will also conduct a human evaluation for the competition. We will ask human judges to interact with the final systems submitted by participants. Besides the aforementioned measurements, at the end of each dialogue session each user will also give a rating on a scale of 1 to 5 based on the naturalness, coherence, and task-completion capability of the system. \begin{table*}[htbp!] \small \centering \caption{Annotated human-human dialog examples for the movie-ticket booking task and the restaurant reservation task. Left column shows natural language. Right column shows annotated dialog act form. (\underline{usr}: User, \underline{agt}: Agent)} \label{tab:data_example} \begin{tabular}{l|l} \hline \multicolumn{1}{c|}{\textbf{Natural language}} & \multicolumn{1}{c}{\textbf{Dialog Act}} \\ \hline \multicolumn{2}{c}{\textbf{Movie-Ticket Booking Task}} \\ \hline \underline{usr}: Find me a good action movie this weekend. & \underline{usr}: request(moviename;genre=action;date=this weekend) \\[3pt] \underline{agt}: \begin{tabular}[c]{@{}l@{}} London Has Fallen is currently the number 1 \\ action movie in America. \end{tabular} & \underline{agt}: \begin{tabular}[c]{@{}l@{}} inform(moviename=london has fallen; other=\\number 1; genre=action) \end{tabular} \\ [8pt] \underline{usr}: Oh that sounds terrific. & \underline{usr}: confirm\_answer() \\ [3pt] \underline{agt}: \begin{tabular}[c]{@{}l@{}} Would you like to purchase tickets to this movie? \\ I would need to know what city you are in. \end{tabular} & \underline{agt}: request(city) \\ [8pt] \underline{usr}: Seattle & \underline{usr}: inform(city=seattle) \\ [3pt] \underline{agt}: \begin{tabular}[c]{@{}l@{}} Which theater in Seattle should I search for \\ tickets? \end{tabular} & \underline{agt}: request(theater;city=seattle) \\ [8pt] \underline{usr}: Regency or AMC please.
& \underline{usr}:inform(theater\_chain={amc\#regency}) \\ [3pt] \underline{agt}: \begin{tabular}[c]{@{}l@{}} Around what time do you want to go, \\ and on which day? \end{tabular} & \underline{agt}: request(date) \\ [8pt] \underline{usr}: 9:30 pm any day this week. & \underline{usr}: inform(starttime=9:30 pm;date=this week) \\ [3pt] \underline{agt}: \begin{tabular}[c]{@{}l@{}} London Has Fallen is showing at 9:45pm on \\ Wednesday at AMC Southcenter 16, is that \\ showing acceptable for you? \end{tabular} & \underline{agt}: \begin{tabular}[c]{@{}l@{}} inform(moviename=london has fallen; \\ starttime= 9:45pm; date=wednesday; theater=\\ amc southcenter 16) \end{tabular} \\ [13pt] \underline{usr}: yes that would be great. & \underline{usr}: confirm\_answer() \\ [3pt] \underline{agt}: Excellent, how many tickets would you like? & \underline{agt}: request(numberofpeople) \\ [3pt] \underline{usr}: I want 2 tickets please! & \underline{usr}: inform(numberofpeople=2) \\ [3pt] \hline \multicolumn{2}{c}{\textbf{Restaurant Reservation}} \\ \hline \underline{usr}: \begin{tabular}[c]{@{}l@{}}Im looking for a martini bar in Indianapolis. \end{tabular} & \underline{usr}: \begin{tabular}[c]{@{}l@{}} request(restaurantname;food=martini bar;\\city=Indianapolis) \end{tabular} \\ [3pt] \underline{agt}: \begin{tabular}[c]{@{}l@{}} Here is the restaurant I found: High Velocity. \\ Do you want to book? \end{tabular} & \underline{agt}: \begin{tabular}[c]{@{}l@{}} request(reservation;restaurantname=High Velocity) \end{tabular} \\ [8pt] \underline{usr}: \begin{tabular}[c]{@{}l@{}} YES \end{tabular} & \underline{usr}: \begin{tabular}[c]{@{}l@{}} confirm\_answer() \end{tabular} \\ [3pt] \underline{agt}: \begin{tabular}[c]{@{}l@{}} at what date would you like to go? 
\end{tabular} & \underline{agt}: request(date) \\ [3pt] \underline{usr}: \begin{tabular}[c]{@{}l@{}} saturday night \end{tabular} & \underline{usr}: inform(date=Saturday night) \\ [3pt] \underline{agt}: \begin{tabular}[c]{@{}l@{}} at what time would you like to go? \end{tabular} & \underline{agt}: request(starttime) \\ [3pt] \underline{usr}: \begin{tabular}[c]{@{}l@{}} 8pm \end{tabular} & \underline{usr}: inform(starttime=8pm) \\ [3pt] \underline{agt}: \begin{tabular}[c]{@{}l@{}} how many people are going? \end{tabular} & \underline{agt}: request(numberofpeople) \\ [3pt] \underline{usr}: \begin{tabular}[c]{@{}l@{}} 4 \end{tabular} & \underline{usr}: inform(numberofpeople=4) \\ [3pt] \underline{agt}: \begin{tabular}[c]{@{}l@{}} Your reservation at High Velocity for 02/27/2016 \\ 08:00PM for 4 people under Joe Does has been \\ confirmed. \end{tabular} & \underline{agt}: \begin{tabular}[c]{@{}l@{}} inform(taskcomplete;restaurantname=High Velocity;\\date=02/27/2016;starttime=08:00pm;numberofpeople=4;\\personfullname=Joe Does) \end{tabular} \\ [3pt] \hline \end{tabular} \end{table*} \begin{table*}[htbp!] \small \centering \caption{Two sample dialogs generated by a rule-based agent and an RL agent with user simulator. Left column shows scenarios where both the rule-based and RL-based agents succeed. 
Right column shows scenarios where the rule-based agent fails, while the RL agent succeeds. (\underline{usr}: User, \underline{agt}: Agent)} \label{tab:tcp_sample} \begin{tabular}{l|l} \hline \begin{tabular}[c]{@{}l@{}} New episode, user goal:\\ \{\\ \-\hspace{3mm} "request\_slots": \{ \\ \-\hspace{6mm} "ticket": "UNK" \\ \-\hspace{3mm} \}, \\ \-\hspace{3mm} "diaact": "request", \\ \-\hspace{3mm} "inform\_slots": \{ \\ \-\hspace{6mm} "city": "seattle", \\ \-\hspace{6mm} "numberofpeople": "2", \\ \-\hspace{6mm} "theater": "regal meridian 16", \\ \-\hspace{6mm} "starttime": "9:25 pm", \\ \-\hspace{6mm} "date": "tomorrow", \\ \-\hspace{6mm} "moviename": "zoolander 2" \\ \-\hspace{3mm} \}\\ \}\\ \end{tabular} & \begin{tabular}[c]{@{}l@{}} New episode, user goal:\\ \{\\ \-\hspace{3mm} "request\_slots": \{ \\ \-\hspace{6mm} "ticket": "UNK" \\ \-\hspace{6mm} "theater": "UNK" \\ \-\hspace{6mm} "starttime": "UNK" \\ \-\hspace{3mm} \}, \\ \-\hspace{3mm} "diaact": "request", \\ \-\hspace{3mm} "inform\_slots": \{ \\ \-\hspace{6mm} "numberofpeople": "3", \\ \-\hspace{6mm} "date": "tomorrow", \\ \-\hspace{6mm} "moviename": "10 cloverfield lane" \\ \-\hspace{3mm} \}\\ \}\\ \-\hspace{3mm} \end{tabular} \\ \hline \multicolumn{1}{c}{\textbf{Rule-based Agent}} & \multicolumn{1}{c}{\textbf{Rule-based Agent}} \\ \hline \underline{usr}: Can I get some tickets for zoolander 2? & \underline{usr}: What is the start time for 10 cloverfield lane? \\ \underline{agt}: What movie are you interested in? & \underline{agt}: What movie are you interested in? \\ \underline{usr}: I want to watch zoolander 2. & \underline{usr}: I want to watch at 10 cloverfield lane. \\ \underline{agt}: What time would you like to see it? & \underline{agt}: What time would you like to see it? \\ \underline{usr}: I want to watch at 9:25 pm. & \underline{usr}: What time is it playing tomorrow? I need 3 tickets. \\ \underline{agt}: What city you would like? & \underline{agt}: What city you would like?
\\ \underline{usr}: I want to watch at seattle. & \underline{usr}: I do not care. \\ \underline{agt}: What date would you like to watch it? & \underline{agt}: What date would you like to watch it? \\ \underline{usr}: I want to set it up tomorrow. & \underline{usr}: I want to set it up tomorrow. \\ \underline{agt}: Which theater would you like? & \underline{agt}: Which theater would you like? \\ \underline{usr}: I want to watch at regal meridian 16. & \underline{usr}: Which theater is available? \\ \underline{agt}: How many tickets do you need? & \underline{agt}: How many tickets do you need? \\ \underline{usr}: I want 2 tickets please! & \underline{usr}: I want 3 tickets please! \\ \begin{tabular}[c]{@{}l@{}}\underline{agt}: Great - I was able to purchase 2 tickets for \\ you to see zoolander 2 tomorrow at regal meridian 16 \\ theater in seattle at 9:25 pm. \end{tabular} & \begin{tabular}[c]{@{}l@{}}\underline{agt}: Okay, I have booked 3 tickets for 10 cloverfield \\ lane tomorrow. \end{tabular} \\ \underline{usr}: Thank you. & \underline{usr}: Thank you. \\ \underline{agt}: Thank you. & \underline{agt}: Thank you. \\ \hline \multicolumn{1}{c|}{\textbf{Success}} & \begin{tabular}[c]{@{}l@{}} \textbf{Failure:} Agent does not answer all the questions \\ (starttime, theater) from user.\end{tabular} \\ \hline \multicolumn{1}{c}{\textbf{RL agent}} & \multicolumn{1}{c}{\textbf{RL Agent}} \\ \hline \underline{usr}: Can I get tickets for zoolander 2 tomorrow? & \begin{tabular}[c]{@{}l@{}}\underline{usr}: Which theater can I book 3 tickets for \\ 10 cloverfield lane? \end{tabular} \\ \underline{agt}: Which city would you like? & \underline{agt}: What time would you like to see it? \\ \underline{usr}: I want to watch at seattle. & \begin{tabular}[c]{@{}l@{}} \underline{usr}: Which theater and start time are available \\ tomorrow? \end{tabular}\\ \underline{agt}: How many tickets do you need? & \underline{agt}: 11:45am is available. 
\\ \underline{usr}: I want 2 tickets please! & \underline{usr}: Which theater is available? \\ \underline{agt}: 9:25 pm is available. & \underline{agt}: regal la live stadium 14 is available. \\ \underline{usr}: I want to watch at regal meridian 16. & \underline{usr}: Could you help me to book the tickets? \\ \begin{tabular}[c]{@{}l@{}}\underline{agt}: Great - I was able to purchase 2 tickets for \\ you to see zoolander 2 tomorrow at regal meridian 16 \\ theater in seattle at 9:25 pm. \end{tabular} & \begin{tabular}[c]{@{}l@{}} \underline{agt}: Okay - I was able to book 3 tickets for you to \\ see 10 cloverfield lane at regal la live stadium 14 at \\ 11:45am tomorrow. \end{tabular} \\ \underline{usr}: Thank you. & \underline{usr}: Thank you \\ \underline{agt}: Thank you. & \underline{agt}: Thank you \\ \hline \multicolumn{1}{c|}{\textbf{Success}} & \multicolumn{1}{c}{\textbf{Success}} \\ \hline \end{tabular} \end{table*} \bibliographystyle{IEEEbib}
\section{Introduction}\label{introduction} In this paper we study a queueing network served by a single shared server that visits the queues in a cyclic order. Customers from the outside arrive at the queues according to independent Poisson processes, and the service time and switch-over time distributions are general. After receiving service at queue $i$, a customer is either routed to queue $j$ with probability $p_{i,j}$, or leaves the system with probability $p_{i,0}$. We consider systems with mixtures of gated and exhaustive service. This model can be seen as an extension of the standard polling model (in which customers always leave the system upon completion of their service) by customer routing. Yet another view is provided by the notion that the system is a Jackson network with a dedicated server for each queue with the additional complexity that only one server can be active in the network simultaneously. The possibility of re-routing of customers further enhances the already-extensive modelling capabilities of polling models, which find applications in diverse areas such as computer systems, communication networks, logistics, flexible manufacturing systems, robotics systems, production systems and maintenance systems (see, for example, \cite{boonapplications2011,Grillo1,levy1,takagi3} for overviews). Applications of the introduced type of customer routing can be found in many of these areas. 
In this regard, we would like to mention a manufacturing system where products undergo service in a number of stages or in the context of rework \cite{Grasman1}, a ferry-based wireless local area network (FWLAN) in which nodes can communicate with each other or with the outer world via a message ferry \cite{kavitha}, a dynamic order picking system where the order picker drops off the picked items at the depot, where sorting of the items is performed \cite{gongdekoster08}, and an internal mail delivery system where a clerk continuously makes rounds within the offices to pick up, sort and deliver mail \cite{sarkar}. In the past, many papers have been published on special cases of the current network. In some of these papers distributional results are derived as well; the techniques used, however, do not allow for extension to the general setting of the current paper. Special-case configurations include standard polling systems \cite{takagi3}, tandem queues \cite{nair,taube}, multi-stage queueing models with parallel queues \cite{katayama}, feedback vacation queues \cite{boxmayechiali97, takine}, symmetric feedback polling systems \cite{takagifeedback,takine}, systems with a waiting room \cite{alineuts84,takacsfeedback77}, and many others. The present research can thus be seen as a unifying analysis of the waiting-time distribution for a wide variety of queueing models. The main contribution of this paper is the derivation of waiting-time distributions in queueing networks with a single roving server via the development of a new method. For this model we derive the Laplace-Stieltjes transform of the waiting-time distribution of an arbitrary (internally rerouted, or external) customer. Due to the intrinsic complexity of the model, studies in the past were restricted to queue lengths and \textit{mean} delay figures (see \cite{boonvdmeiwinandsRovingPER2011,sarkar,sidi1,sidi2}).
A complicating, yet interesting, factor is that the combined process of internal and external arrivals violates the classical assumption of Poisson (or even renewal) arrivals, implying that traditional methods are not applicable. The basic idea behind the new method is that we explicitly compute a priori all \emph{future} service requirements upon the arrival of a new customer. In doing so, we circumvent the prerequisites of the distributional form of Little's law. An important feature of the newly developed technique is that it can be applied to a myriad of models that until now lacked an analysis of the waiting-time distribution. One could apply the framework (possibly after some minor modifications) to obtain distributional results in all of the aforementioned special cases of the studied system \cite{alineuts84,boxmayechiali97, katayama,nair,takacsfeedback77,takagifeedback,takagi3,takine,taube} but also, for example, in a closed network \cite{altman2}, in an $M/G/1$ queue with permanent and transient customers \cite{boxmacohen91}, in a network with permanent and transient customers \cite{armonyyechiali99}, or in a polling model with arrival rates that depend on the location of the server \cite{boonsmartcustomers2010,smartcustomers}. Although we study a continuous-time cyclic system with gated or exhaustive service in each queue, we may extend all results, without complicating the analysis, to discrete time, to periodic polling, to batch arrivals, or to systems with different branching-type service disciplines such as globally gated service. The structure of the present paper is as follows. In Section \ref{modelsection}, we introduce the model and notation. Section \ref{waitingtimesectiongated} analyses the waiting-time distribution of an arbitrary customer for gated service. In Section \ref{exhaustiveservice} we study the system with mixtures of gated and exhaustive service.
In the penultimate section, we present some examples that show the wide range of applicability of the studied model. The final section of this paper contains a brief discussion. \section{Model description and notation}\label{modelsection} We consider a queueing network consisting of $N\geq1$ infinite-buffer queues $Q_1,\dots,Q_N$. External customers arrive at $Q_i$ according to a Poisson arrival process with rate $\lambda_i$, and have a generally distributed service requirement $B_i$ at $Q_i$, with mean value $b_i := \E[B_i]$. In general we denote the Laplace-Stieltjes transform (LST) or probability generating function (PGF) of a random variable $X$ by $\LST{X}(\cdot)$. The queues are served by a single server in cyclic order. Whenever the server switches from $Q_i$ to $Q_{i+1}$, a random switch-over time $R_i$ is incurred, with mean $r_i$. The cycle time $C_i$ is the time between successive moments when the server arrives at $Q_i$. The total switch-over time in a cycle is denoted by $R=\sum_{i=1}^N R_i$, and its first two moments are $r:=\E[R]$ and $r^{(2)}:=\E[R^2]$. Indices throughout the paper are modulo $N$, so $Q_{1-N}$ and $Q_{N+1}$ both refer to $Q_1$. All service times and switch-over times are mutually independent. This queueing network can be modelled as a \emph{polling system} with the specific feature that it allows for routing of the customers: upon completion of service at $Q_i$, a customer is either routed to $Q_j$ with probability $p_{i,j}$, or leaves the system with probability $p_{i,0}$. Note that $p_{i,0}$ must be positive for at least one queue, to ensure that customers can eventually leave the system. Moreover, note that $\sum_{j=0}^N p_{i,j}=1$ for all $i$, and that the transition of a customer from $Q_i$ to $Q_j$ takes no time.
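The routing probabilities $p_{i,j}$ determine the total arrival rates $\gamma_i$ through the traffic equations given in the next section, $\gamma_i = \lambda_i + \sum_j \gamma_j p_{j,i}$, which are easy to evaluate numerically. The sketch below is a minimal pure-Python illustration using fixed-point iteration; all parameter values (arrival rates, mean service times, and the routing matrix) are hypothetical and are not taken from the paper.

```python
# Illustrative 3-queue example (0-indexed); all numbers are assumptions.
N = 3
lam = [0.2, 0.3, 0.1]      # external Poisson arrival rates lambda_i
b = [0.5, 0.4, 0.6]        # mean service times b_i
p = [[0.0, 0.3, 0.1],      # routing probabilities p_{i,j}; p_{i,0} = 1 - row sum
     [0.2, 0.0, 0.2],
     [0.0, 0.1, 0.0]]

# Fixed-point iteration on gamma_i = lambda_i + sum_j gamma_j * p[j][i];
# this converges geometrically because every customer eventually leaves.
gamma = lam[:]
for _ in range(200):
    gamma = [lam[i] + sum(gamma[j] * p[j][i] for j in range(N))
             for i in range(N)]

rho_i = [gamma[i] * b[i] for i in range(N)]   # offered load per queue
rho = sum(rho_i)                              # total load; stability needs rho < 1
assert rho < 1, "system unstable"
```

Alternatively, one could solve the linear system $(I - P^{\mathsf{T}})\gamma = \lambda$ directly; the iteration above avoids any linear-algebra dependency.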
Since we consider the gated and exhaustive service disciplines, the model under consideration has a branching structure, which is discussed in more detail by Foss \cite{fossBranching} in the context of queueing models, and by Resing \cite{resing93} more specifically in the context of polling systems. The total arrival rate at $Q_i$ is denoted by $\gamma_i$, which is the unique solution of the following set of linear equations: \begin{equation*} \gamma_i = \lambda_i + \sum_{j=1}^N \gamma_j p_{j,i},\qquad\ii. \end{equation*} The offered load to $Q_i$ is $\rho_i:=\gamma_i b_i$ and the total load is $\rho:=\sum_{i=1}^N \rho_i$. We assume that the system is stable, which means that $\rho$ should be less than one (see \cite{sidi2}). \section{Gated service}\label{waitingtimesectiongated} In the present section we study the waiting-time distribution of an arbitrary customer for a system in which each queue receives gated service, which means that only those customers present at the server's arrival at $Q_i$ will be served before the server switches to the next queue. We define the waiting time $W_i$ of an arbitrary customer in $Q_i$ as the time between his arrival at this queue and the moment at which his service starts. As far as waiting times are concerned, a customer that is routed to another queue, say $Q_j$, upon his service completion is regarded as a new customer with waiting time $W_j$. The waiting-time distribution is found by conditioning on the numbers of customers present in each queue at an arrival epoch. To this end, we study the joint queue-length distribution at several embedded epochs in Section~\ref{jointQLsubsection}. In Sections~\ref{cycleTimesubsection} and~\ref{waitingTimesubsection} we use these results to successively derive the cycle-time distribution and the waiting-time distributions of internally rerouted customers and external customers. \subsection{The joint queue-length distributions}\label{jointQLsubsection} Sidi et al. 
\cite{sidi2} derive the PGFs of the joint queue-length distributions in all $N$ queues at visit beginnings, visit completions, and at arbitrary points in time. In order to keep this manuscript self-contained, we briefly recapitulate their approach, as it forms the starting point of our novel method to find the waiting-time LSTs. There is one important adaptation that we have to make, which will prove essential for finding the waiting-time LSTs. We consider not only the customers in all $N$ queues, but we distinguish between customers standing \emph{in front of} the gate and customers standing \emph{behind} the gate (meaning that they will be served in the next cycle). Hence, we introduce the $N+1$ dimensional vector $\z=(z_1, \dots, z_N, z_G)$. The element $z_i$, $i=1,\dots,N$, in this vector corresponds to customers in $Q_i$ standing in front of the gate. The element $z_G$ at position $N+1$ is only used during visit periods. During $V_j$, the visit period of $Q_j$, it corresponds to customers standing behind the gate in $Q_j$. This makes the analysis of systems with gated service slightly more involved than that of systems with exhaustive service (discussed in the next section). Before studying the joint queue-length distributions, we briefly introduce some convenient notation: \begin{align*} \Sigma(\z)&=\sum_{j=1}^N\lambda_j(1-z_j),\\ \Sigma_i(\z)&=\lambda_i(1-z_G)+\sum_{j\neq i}\lambda_j(1-z_j),\\ P_i(\z) &= p_{i,0}+p_{i,i}z_G+\sum_{j\neq i}p_{i,j}z_j. \end{align*} \paragraph{Visit beginnings and completions.} A cycle consists of $N$ visit periods, $V_i$, each of which is followed by a switch-over time $R_i$, for $i=1,\dots,N$. A cycle $C_i$ starts with a visit to $Q_i$ and consists of the periods $V_i, R_i, V_{i+1},\dots,V_{i+N-1}$, $R_{i+N-1}$. Let $P$ denote any of these periods. We denote the joint queue-length PGF at the \emph{beginning} of $P$ by $\LB^{(P)}(\z)$. The equivalent at the \emph{completion} of period $P$ is denoted by $\LC^{(P)}(\z)$.
Since the gated service discipline is a so-called \emph{branching-type} service discipline (see \cite{resing93}), we can express each of these functions in terms of $\LB^{(V_i)}(\z)$, for any $i=1,\dots,N$. These relations, which are sometimes called \emph{laws of motion}, are given below. \begin{subequations} \begin{align} \LC^{(V_i)}(\z) &= \LB^{(V_i)}\Big(z_1, \dots, z_{i-1},\B_i\big(\Sigma_i(\z)\big)P_i(\z), z_{i+1}, \dots, z_N, z_G\Big),\label{lawsofmotion1}\\ \LB^{(R_i)}(\z) &= \LC^{(V_i)}(z_1, \dots, z_N, z_i),\\ \LC^{(R_i)}(\z) &= \LB^{(R_i)}(\z)\R_i\Big(\Sigma(\z)\Big),\\ \LB^{(V_{i+1})}(\z) &= \LC^{(R_i)}(\z),\\ &\vdots\nonumber\\ \LB^{(V_{i+N})}(\z) &= \LC^{(R_{i+N-1})}(\z).\label{lawsofmotionN} \end{align} \end{subequations} Note the subtle difference between $\LC^{(V_i)}(\z)$ and $\LB^{(R_i)}(\z)$, due to the fact that the gate in $Q_i$ is removed after the completion of $V_i$, causing type $G$ customers to become type $i$ customers. In steady-state we have that $\LB^{(V_{i+N})}(\z) = \LB^{(V_{i})}(\z)$, implying that we have obtained a recursive relation for $\LB^{(V_{i})}(\z)$. Resing \cite{resing93} shows how a clever definition of immigration and offspring generating functions can be used to find an explicit expression for $\LB^{(V_{i})}(\z)$. For reasons of compactness we refrain from doing so in the present paper. Instead we want to point out that the recursive relation obtained from \eqref{lawsofmotion1}-\eqref{lawsofmotionN} can be differentiated with respect to the variables $z_1, \dots, z_N, z_G$. The resulting set of equations, which are called the \emph{buffer occupancy equations} in the polling literature, can be used to compute the moments of the queue-length distributions at all visit beginnings and completions. \paragraph{Service beginnings and completions.} We denote the joint queue length PGF at \emph{service} beginnings and completions in $Q_j$ by respectively $\LB^{(B_j)}(\z)$ and $\LC^{(B_j)}(\z)$. 
Since a customer may be routed to another queue upon his service completion, we define $\LC^{(B_j)}(\z)$ as the PGF of the joint queue-length distribution right \emph{after} the tagged customer in $Q_j$ has received service (implying that he is no longer present in $Q_j$), but \emph{before} the moment that he may join another queue (even though these two epochs take place in a time span of length zero). Eisenberg \cite{eisenberg72} observed the following relation, albeit in a slightly different model: \begin{equation} \LB^{(V_i)}(\z) + \gamma_i\E[C]\LC^{(B_i)}(\z)P_i(\z) = \LC^{(V_i)}(\z) + \gamma_i\E[C]\LB^{(B_i)}(\z). \label{eisenberg} \end{equation} Equation \eqref{eisenberg} is based on the observation that each visit beginning coincides with either a service beginning or a visit completion (if no customer was present). Similarly, each visit completion coincides with either a visit beginning or a service completion. The long-run ratio between the number of service beginnings/completions and visit beginnings/completions in $Q_i$ is $\gamma_i\E[C]$, with $\E[C]=\E[C_i]=r/(1-\rho)$. The distribution of the cycle time is given in the next subsection. Furthermore, Eisenberg observes the following simple relation between the joint queue-length distribution at service beginnings and completions: \begin{equation} \LC^{(B_i)}(\z) = \LB^{(B_i)}(\z)\B_i\big(\Sigma_i(\z)\big)/z_i.\label{servicebeginningscompletions} \end{equation} Substitution of \eqref{servicebeginningscompletions} in \eqref{eisenberg} gives an equation that can be solved to express $\LB^{(B_i)}(\z)$ in terms of $\LB^{(V_i)}(\z)$ and $\LC^{(V_i)}(\z)$. \paragraph{Arbitrary moments.} The PGF of the joint queue-length distribution at arbitrary moments, denoted by $\L(\z)$, is found by conditioning on the period in the cycle during which the system is observed $(V_1, R_1, \dots, V_N, R_N)$.
\begin{equation} \L(\z) = \frac{1}{\E[C]}\sum_{j=1}^N\left(\E[V_j]\L^{(V_j)}(\z)+r_j\L^{(R_j)}(\z)\right),\label{Lz} \end{equation} with $\E[V_j] = \rho_j\E[C]$. In \eqref{Lz} the functions $\L^{(V_j)}(\z)$ and $\L^{(R_j)}(\z)$ denote the PGFs of the joint queue-length distributions at an arbitrary moment during $V_j$ and $R_j$, respectively: \begin{align} \L^{(V_j)}(\z) &= \LB^{(B_j)}(\z)\frac{1-\B_j\big(\Sigma_j(\z)\big)}{b_j\Sigma_j(\z)}, \label{jointQLduringVj}\\ \L^{(R_j)}(\z) &= \LB^{(R_j)}(\z)\frac{1-\R_j\big(\Sigma(\z)\big)}{r_j\Sigma(\z)}. \label{jointQLduringSj} \end{align} The interpretation of \eqref{jointQLduringVj} and \eqref{jointQLduringSj} is that the queue-length vector at an arbitrary time point in $V_j$ or $R_j$ is the sum of the vector of customers that were present at the beginning of that service/switch-over time and the vector of customers that have arrived during the elapsed part of the service/switch-over time. For more details about the joint queue length and workload distributions for general branching-type service disciplines (in the context of polling systems, but also applicable to our model) we refer to Boxma et al. \cite{boxmakellakosinski2011}. \subsection{Cycle-time distributions}\label{cycleTimesubsection} In the remainder of this paper we present new results for the model introduced in Section \ref{modelsection}. We start by analysing the distributions of the cycle times $C_i$, $i=1,\dots,N$. The idea behind the following analysis is to condition on the number of customers present in each queue at the beginning of $C_i$ (and, hence, of $V_i$). The cycle will consist of the service of all of these customers, plus all switch-over times $R_i, \dots, R_{i+N-1}$, plus the services of all customers that enter during these services and switch-over times \emph{and} will be served \emph{before} the next visit beginning to $Q_i$. The cycle time for polling systems without customer routing is discussed in Boxma et al. \cite{boxmafralixbruin08}.
However, as it turns out, the analysis is severely complicated by the fact that customers may be routed to another queue and be served again (even multiple times) during the same cycle. From branching theory we adopt the term \emph{descendants} of a certain (tagged) customer to denote all customers that arrive (in all queues) during the service of this tagged customer, plus the customers arriving during their service times, and so on. If, upon his service completion, a customer is routed to another queue, we also consider him as his own descendant. We define $B^*_{k,i}$, $i=1,\dots,N; k=0, \dots, N$, as the service time of a type $i-k$ (which is understood as $N+i-k$ if $i \leq k$) customer at $Q_{i-k}$, plus the service times of all of his descendants that will be served before or during the next visit of the server to $Q_i$. The special case $B^*_{0,i}$ is simply the service time of a type $i$ customer, $i=1,\dots,N$. A formal definition in terms of LSTs is given below: \begin{align} \Btilde_{k,i}(\omega) &= \B_{i-k}\Big(\omega + \sum_{j=0}^{k-1}\lambda_{i-j}\big(1-\Btilde_{j,i}(\omega)\big)\Big)\Ptilde_{k,i}(\omega), &&k=0,1,\dots,N; i=1,\dots,N, \label{bkiomega}\\ \intertext{where} \Ptilde_{k,i}(\omega) &= 1-\sum_{j=0}^{k-1}p_{i-k,i-j}\big(1-\Btilde_{j,i}(\omega)\big), &&k=0,1,\dots,N; i=1,\dots,N. \label{pkiomega} \end{align} For a type $i-k$ customer, $P^*_{k,i}$ accounts for the service times of his descendants that are caused by the fact that he may be routed to another queue upon his service completion. A similar function should be defined for the switch-over times: \begin{equation} \Rtilde_{k,i}(\omega) = \R_{i-k}\Big(\omega + \sum_{j=0}^{k-1}\lambda_{i-j}\big(1-\Btilde_{j,i}(\omega)\big)\Big), \qquad\ \qquad k=0,1,\dots,N; i=1,\dots,N. \label{rkiomega} \end{equation} Note that, compared to \eqref{bkiomega}, no term $\Ptilde_{k,i}(\omega)$ is required because no routing takes place at the end of a switch-over time. 
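Once the service and switch-over time LSTs are specified, the recursions \eqref{bkiomega}--\eqref{rkiomega} are straightforward to evaluate numerically. The sketch below is a minimal illustration assuming exponential service and switch-over times (so that $\B_i(s)=\mu_i/(\mu_i+s)$); all parameter values are hypothetical and queues are 0-indexed, with indices taken modulo $N$ as in the paper.

```python
# Hypothetical 3-queue example; all parameter values are assumptions.
N = 3
lam = [0.2, 0.3, 0.1]      # external arrival rates lambda_i
mu = [2.0, 3.0, 2.5]       # exponential service rates: B_i*(s) = mu_i / (mu_i + s)
theta = [4.0, 4.0, 4.0]    # exponential switch-over rates
p = [[0.0, 0.3, 0.1],      # routing probabilities p_{i,j} (0-indexed)
     [0.2, 0.0, 0.2],
     [0.0, 0.1, 0.0]]

def B(i, s):               # LST of the service time at Q_i
    return mu[i] / (mu[i] + s)

def Rsw(i, s):             # LST of the switch-over time R_i
    return theta[i] / (theta[i] + s)

def Btilde(k, i, w, memo=None):
    """Extended service time LST B~_{k,i}(w), following eqs. (8)-(9)."""
    memo = {} if memo is None else memo   # memoise per fixed w
    if (k, i) not in memo:
        arg = w + sum(lam[(i - j) % N] * (1 - Btilde(j, i, w, memo))
                      for j in range(k))
        Pt = 1 - sum(p[(i - k) % N][(i - j) % N] * (1 - Btilde(j, i, w, memo))
                     for j in range(k))
        memo[(k, i)] = B((i - k) % N, arg) * Pt
    return memo[(k, i)]

def Rtilde(k, i, w):
    """Extended switch-over time LST R~_{k,i}(w), following eq. (10)."""
    arg = w + sum(lam[(i - j) % N] * (1 - Btilde(j, i, w)) for j in range(k))
    return Rsw((i - k) % N, arg)
```

As a sanity check, every extended LST equals $1$ at $\omega=0$ and lies strictly between $0$ and $1$ for $\omega>0$, and $\Btilde_{0,i}(\omega)$ reduces to the plain service time LST $\B_i(\omega)$.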
Finally, we define the following $N+1$ dimensional vectors: \begin{align} \ub{k,i} &= \big(1, \dots, 1, \Btilde_{k,i}(\omega), 1, \dots, 1\big), && k=0,1,\dots,N-1; i=1,\dots,N, \label{ubki}\\ \ub{N,i} &= \big(1, \dots, 1, \Btilde_{0,i}(\omega)\big), && i=1,\dots,N,\label{ubNi} \end{align} with $\Btilde_{k,i}(\omega)$ at position $i-k$ in \eqref{ubki} (or position $N+i-k$ if $k\geq i$), and $\Btilde_{0,i}(\omega)$ at position $N+1$ in \eqref{ubNi}. We use $\bigotimes$ to denote the element-wise multiplication of vectors. \begin{proposition}\label{cycletimethm} The LST of the distribution of the cycle time $C_i$ is given by \begin{equation} \C_i(\omega) = \LB^{(V_i)}\big( \bigotimes_{k=0}^{N-1} \ub{k,i-1}\big)\prod_{k=0}^{N-1}\Rtilde_{k,i-1}(\omega), \qquad i=1,\dots,N.\label{cycleTimeLST} \end{equation} The interpretation of \eqref{cycleTimeLST} is that the length of a cycle starting with a visit to $Q_i$ is the sum of the \emph{extended} service times of all customers present at the beginning of the cycle, and the sum of all \emph{extended} switch-over times during the cycle. By extended service time (switch-over time) we refer to a service time (switch-over time) plus the service times of all customers that arrive during this service time (switch-over time) in one of the queues that are yet to be served during the remainder of the cycle, and all of their descendants that will be served before the end of the cycle. \begin{proof} To prove Proposition \ref{cycletimethm} we keep track of all the customers that will be served during one cycle. We condition on the numbers of customers present in each queue at the beginning of $C_i$, denoted by $n_1, \dots, n_N$. Note that there are no gated customers present at this moment, because the gate has been removed at the beginning of the last switch-over time of the previous cycle. 
A cycle $C_i$ consists of: \begin{enumerate} \item the service of all customers present at the beginning of the cycle, \item all of their descendants that will be served before the start of the next cycle (i.e., before the next visit to $Q_i$), \item the switch-over times $R_1, \dots, R_N$, \item all customers arriving during these switch-over times that will be served before the start of the next cycle, \item all of their descendants that will be served before the start of the next cycle. \end{enumerate} We define $S_j$ for $j=1,\dots, N$, as the service time of a type $j$ customer plus the service times of all of his descendants that will be served during (the remaining part of) $C_i$. Since the service discipline is gated at all queues, we have: \begin{equation} S_j = B_j + \sum_{k=j+1}^{i-1}\sum_{l=1}^{N_k(B_j)} S_{k_l} + \begin{cases} S_m & \qquad\text{for $m=j+1,\dots,i-1$, w.p. $p_{j,m}$},\\ 0 & \qquad\text{w.p. }1-\sum_{m=j+1}^{i-1}p_{j,m}, \end{cases} \label{Sj} \end{equation} where $N_k(T)$ denotes the number of arrivals in $Q_k$ during a (possibly random) period of time $T$, and $S_{k_l}$ is a sequence of (independent) extended service times $S_k$. Note that $S_j$ depends on $i$, although we suppress this dependence for presentational reasons. The gated service discipline is reflected in the fact that only customers arriving in (or rerouted to) $Q_{j+1},\dots,Q_{i-1}$ are served during the residual part of $C_i$. It can easily be shown that the LST of $S_{i-k}$ is $\Btilde_{k-1,i-1}(\omega)$ for $k=1, \dots, N$. Note that the first summation in \eqref{Sj} is cyclic, which may sometimes cause confusion (for example when $j=i-1$, in which case it is an empty summation). Avoiding this (possible) confusion is the main reason that we have chosen to define $\Btilde_{k,i}(\omega)$, $\Ptilde_{k,i}(\omega)$ and $\Rtilde_{k,i}(\omega)$ relative to queue $i$ ($k$ steps backward in time).
Using this branching way of looking at the cycle time, we can express $C_i$ in terms of $R_1, \dots, R_N$ and $S_1, \dots, S_N$. First, however, we derive the following intermediate result: \begin{align*} \E\left[\ee^{-\omega R_{i-k}}\prod_{j=i-k+1}^{i-1}\prod_{l=1}^{N_j(R_{i-k})}\ee^{-\omega S_{j_l}}\right]& =\R_{i-k}\big(\omega + \sum_{j=i-k+1}^{i-1}\lambda_j(1- \E[\ee^{-\omega S_j}])\big)\\ &=\Rtilde_{k-1,i-1}(\omega). \end{align*} Now, introducing the shorthand notation $n_1,\dots,n_N$ for the event that the numbers of customers at the beginning of $C_i$ in queues $1,\dots,N$ are respectively $n_1,\dots,n_N$, we can find the cycle-time LST conditional on this event: \begin{align*} \E\left[\ee^{-\omega C_i}\,|\,n_1,\dots,n_N\right] &= \E\left[\exp\Big(-\omega \sum_{j=i-N}^{i-1} \big(\sum_{l=1}^{n_j} S_{j_l} + R_j + \sum_{k=j+1}^{i-1}\sum_{l=1}^{N_k(R_j)}S_{k_l} \big) \Big)\right]\\ &=\E\left[\prod_{j=i-N}^{i-1} \left(\prod_{l=1}^{n_j} \ee^{-\omega S_{j_l}}\right) \ee^{-\omega R_j}\prod_{k=j+1}^{i-1}\prod_{l=1}^{N_k(R_j)}\ee^{-\omega S_{k_l}}\right]\\ &=\prod_{j=i-N}^{i-1}\left(\prod_{l=1}^{n_j}\E\left[\ee^{-\omega S_{j_l}}\right]\right)\prod_{j=i-N}^{i-1}\E\left[\ee^{-\omega R_j}\prod_{k=j+1}^{i-1}\prod_{l=1}^{N_k(R_j)}\Big(\ee^{-\omega S_{k_l}}\Big)\right]\\ &=\left(\prod_{k=1}^N\Btilde_{k-1, i-1}(\omega)^{n_{i-k}}\right) \prod_{k=1}^N \Rtilde_{k-1,i-1}(\omega). \end{align*} Equation \eqref{cycleTimeLST} follows after deconditioning. \end{proof} \end{proposition} \begin{remark}\label{alternativeC} Because of our main interest in the waiting-time distributions, we have followed quite an elaborate path to find the LST of the cycle-time distribution. However, if one is merely interested in $\C_i(\omega)$ itself, a more efficient approach is available: distinguish between customers that arrive from outside the network (external customers) and internally rerouted customers (internal customers).
One can straightforwardly adapt the laws of motion \eqref{lawsofmotion1}-\eqref{lawsofmotionN} to find an expression for $\LB^{(V_i)'}(z_1^E, z_1^I, \dots, z_N^E, z_N^I)$. Just like $\LB^{(V_i)}(z_1, \dots, z_N, z_G)$, $\LB^{(V_i)'}(z_1^E, z_1^I, \dots, z_N^E, z_N^I)$ stands for the PGF of the joint queue length at the beginning of $V_i$, but now we distinguish between external and internal customers in each queue (indicated by $z_j^E$ and $z_j^I$). Since external customers arrive in $Q_i$ according to a Poisson process with intensity $\lambda_i$, one can apply the distributional form of Little's Law (see, for example, Keilson and Servi \cite{keilsonservi90}) to the \emph{external} customers in $Q_i$: \[ \C_i(\omega) = \LB^{(V_i)'}(1, \dots, 1, 1-\omega/\lambda_i, 1, \dots, 1), \qquad i=1,\dots, N. \] \end{remark} \subsection{Waiting-time distributions}\label{waitingTimesubsection} In this subsection we find the LSTs of $W_i^{E}$ and $W_i^I$, the waiting-time distributions of arbitrary external and internal customers in $Q_i$, and use them to obtain the LST of $W_i$, the waiting time of an arbitrary customer. Recall that the waiting time $W_i$ of an arbitrary customer in $Q_i$ is the time between his arrival at this queue and the moment at which his service starts. Hence, even if a customer is routed to the same queue multiple times, each visit to this queue invokes a new waiting time. We stress that common methods used in the polling literature to find waiting time LSTs cannot be applied in our queueing network, because they rely heavily on the assumption that \emph{every} customer in the system has arrived according to a Poisson process. Since this assumption is violated in our model, we have developed a novel approach to find the waiting time LST of an arbitrary customer in our network. The joint queue-length distributions at various epochs, as discussed in Subsection \ref{jointQLsubsection}, play an essential role in the analysis. 
We first focus on the waiting times of internal customers, and then discuss the waiting times of external customers. \paragraph{Internal customers.} The arrival epoch of an internal customer always coincides with a service completion. Hence, we condition on the joint queue length and the arrival epoch of an internal customer to find his waiting time LST. The waiting time of an internal customer, \emph{given that} he arrives in $Q_i$ after a service completion at $Q_{i-k}$, is denoted by $\textit{WC}_{i}^{(B_{i-k})}$ ($i,k = 1,\dots,N$). Besides these conditional waiting times, we only have to compute the probability that an arbitrary internal customer in $Q_i$ arrives after a service completion at $Q_{i-k}$. The mean number of customers (internal plus external) present at the beginning of $V_{i-k}$ at $Q_{i-k}$ is $\gamma_{i-k}\E[C]$. Each of these customers joins $Q_i$ upon his service completion with probability $p_{{i-k},i}$. This observation, combined with the fact that the mean number of \emph{internal} customers arriving at $Q_i$ during the course of one cycle is $(\gamma_i-\lambda_i)\E[C]$, leads to the following result: \begin{equation} \W_i^I(\omega) = \sum_{k=1}^N \frac{\gamma_{i-k}p_{{i-k},i}}{\gamma_i-\lambda_i}\WC_{i}^{(B_{i-k})}(\omega), \qquad i=1,\dots,N. \label{WiI} \end{equation} As a consequence, the problem of finding $\W_i^I(\cdot)$ is reduced to finding $\WC_{i}^{(B_{i-k})}(\omega)$ for all $i,k=1,\dots,N$. For notational reasons we first introduce the following $N+1$ dimensional vectors, which will appear several times in this section: \begin{align*} \ubG{k,i} &= \begin{cases} \displaystyle\ub{0,i} & \qquad\textrm{ if }k=0,\\ \displaystyle\ub{0,i}\bigotimes_{j=0}^{k-1}\ub{j,i-1} & \qquad\textrm{ if }k=1,\dots,N-1,\\ \displaystyle\ub{N,i}\bigotimes_{j=0}^{N-1}\ub{j,i-1} & \qquad\textrm{ if }k=N, \end{cases} \end{align*} for $i=1,\dots,N$. Again, we use $\bigotimes$ to denote the element-wise multiplication of vectors.
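The total arrival rates $\gamma_i$ appearing in \eqref{WiI} satisfy the usual traffic equations $\gamma_i = \lambda_i + \sum_j \gamma_j p_{j,i}$, so that for queues with internal arrivals the weights $\gamma_{i-k}p_{i-k,i}/(\gamma_i-\lambda_i)$ in \eqref{WiI} sum to one. A minimal numeric sketch (a made-up three-queue instance of our own, not data from the text):

```python
N = 3
lam = [0.2, 0.3, 0.1]
p = [[0.0, 0.5, 0.0],      # p[j][i]: routing probability from Q_{j+1} to Q_{i+1}
     [0.0, 0.0, 0.5],
     [0.0, 0.0, 0.0]]

# Solve the traffic equations gamma_i = lam_i + sum_j gamma_j p[j][i]
# by fixed-point iteration (the routing matrix is substochastic, so this converges).
gamma = lam[:]
for _ in range(100):
    gamma = [lam[i] + sum(gamma[j] * p[j][i] for j in range(N)) for i in range(N)]
```

For every queue with $\gamma_i>\lambda_i$ one can verify that $\sum_j \gamma_j p_{j,i} = \gamma_i - \lambda_i$, so the mixture in \eqref{WiI} is indeed a proper average.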
\begin{proposition}\label{LSTWexternalthm} We have \begin{align} \WC_{i}^{(B_{i-k})}(\omega) &= \LC^{(B_{i-k})}\big( \ubG{k,i}\big)\prod_{j=0}^{k-1}\Rtilde_{j,i-1}(\omega), \label{WkiI} \end{align} for $i,k=1,\dots,N$. \begin{proof} The key observation in the proof of Proposition \ref{LSTWexternalthm} is that an arrival of an internally rerouted customer always coincides with some service completion. For this reason, we consider the system right after the service completion at, say, $Q_j$ ($j=1,\dots,N$). We compute the waiting time LST of a customer routed to $Q_i$ after being served in $Q_j$, conditional on the numbers of customers of each type (now \emph{including} gated customers) present at the arrival epoch (\emph{not} including the arriving customer himself). We denote by $n_1, \dots, n_N, n_G$ the event that the numbers of customers of all types are respectively $n_1, \dots, n_N, n_G$. Let $n_{iG}:=n_i$ if $i\neq j$, and $n_{iG}:=n_G$ if $i= j$. Note that the type $G$ customers are located behind the gate in $Q_j$, and that the customer routed to $Q_i$ only has to wait for these customers in case $i=j$. The waiting time of the tagged customer consists of: \begin{enumerate} \item the service of all $n_j$ customers in front of the gate in $Q_j$ at the arrival epoch,\label{enum1} \item the service of all $n_{j+1},\dots,n_{i-1}$ customers present in $Q_{j+1},\dots,Q_{i-1}$ at the arrival epoch,\label{enum2} \item all of the descendants of the previously mentioned customers that will be served before the next visit to $Q_i$, \item if $i\neq j$, the service of all $n_{iG}$ customers present in $Q_i$ at the arrival epoch; if $i=j$, the service of all $n_{iG}$ gated customers present in $Q_i$ at the arrival epoch, \item the switch-over times $R_j, \dots, R_{i-1}$, \item all customers arriving during these switch-over times that will be served before the next visit to $Q_i$, \item all of their descendants that will be served before the next visit to $Q_i$. 
\end{enumerate} We denote the waiting time of an internal customer conditional on the event that he arrives in $Q_i$ after being served in $Q_j$, \emph{and conditional on the event that the numbers of customers of all types at the arrival epoch are respectively} $n_1, \dots, n_N, n_G$, by $\textit{WC}_i^{(B_j)'}$. Just like in the proof of Proposition \ref{cycletimethm}, we can express $\textit{WC}_i^{(B_j)'}$ in terms of $R_1, \dots, R_N$ and $S_1, \dots, S_N$: \begin{align} \textit{WC}_i^{(B_j)'} &= \sum_{k=j}^{i-1}\left[\sum_{l=1}^{n_k}S_{k_l} + R_k+\sum_{l=k+1}^{i-1}\sum_{m=1}^{N_l(R_k)}S_{l_m}\right] + \sum_{l=1}^{n_{iG}}B_{i_l}.\label{WCiBj} \end{align} Taking the LST of \eqref{WCiBj} leads to \eqref{WkiI} after deconditioning. The derivation proceeds along exactly the same lines as in the proof of Proposition \ref{cycletimethm}, and is therefore omitted. \end{proof} \end{proposition} \paragraph{External customers.} External customers arrive in $Q_i$ according to a Poisson process with intensity $\lambda_i$. We distinguish between customers arriving during a switch-over time and customers arriving during a visit time. The waiting time of an external customer in $Q_i$, \emph{given that} he arrives during $R_{i-k}$, is denoted by $W_{i}^{(R_{i-k})}$ ($i,k = 1,\dots,N$). Similarly, we use $W_{i}^{(V_{i-k})}$ to denote the waiting time of an external customer arriving in $Q_i$ during $V_{i-k}$. The waiting time LST of an arbitrary external customer can be expressed in terms of $\W_{i}^{(R_{i-k})}(\cdot)$ and $\W_{i}^{(V_{i-k})}(\cdot)$: \begin{equation} \W_i^E(\omega) = \frac{1}{\E[C]}\sum_{k=1}^N \left(\E[V_{i-k}]\W_{i}^{(V_{i-k})}(\omega)+r_{i-k}\W_{i}^{(R_{i-k})}(\omega)\right), \qquad i=1,\dots,N. \label{WiE} \end{equation} We first focus on the waiting time of customers arriving during a switch-over time. Consider a tagged customer arriving in $Q_i$ during $R_{i-k}$, $i,k=1,\dots,N$.
Since the remaining part of the switch-over time is part of the waiting time of the arriving customer, it will turn out that we need the \emph{joint} distribution of all customers present at the arrival epoch \emph{and} the residual part of $R_{i-k}$, denoted by $R_{i-k}^R$. The PGF of the joint queue-length distribution at the arrival epoch is given by \eqref{jointQLduringSj}. Equation \eqref{jointQLduringSj} is based on the observation that the number of customers in each queue at an arbitrary moment during $R_{i-k}$ is simply the sum of the number of customers present at the beginning of $R_{i-k}$ and the number of customers that have arrived during the elapsed (past) part of $R_{i-k}$, denoted by $R_{i-k}^P$. These random variables are independent. Hence, it is straightforward to adapt \eqref{jointQLduringSj} to find the joint distribution of the queue lengths \emph{and} residual part of $R_{i-k}$, using the following result from elementary renewal theory: \[ \R^{PR}_j(\omega_P, \omega_R) = \frac{\R_j(\omega_P)-\R_j(\omega_R)}{(\omega_R-\omega_P)r_j},\qquad j=1,\dots,N, \] with $\R^{PR}_j(\omega_P, \omega_R)$ denoting the LST of the joint distribution of past and residual switch-over time $R_j$. Hence, \begin{equation} \L^{(R_j)}(\z, \omega)= \LB^{(R_j)}(\z)\R^{PR}_j(\Sigma(\z), \omega), \label{jointQLduringSjandRjres} \end{equation} where $\L^{(R_j)}(\z, \omega)$ denotes the PGF-LST of the joint distribution of the number of customers of each type at an arbitrary moment during $R_j$ and the residual part of $R_j$. Obviously, there are no gated customers present during a switch-over time. Consequently, and also using PASTA, we can find the waiting-time distribution by conditioning on the number of customers present at an arbitrary moment during $R_{i-k}$ and on the residual switch-over time. 
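As a sanity check on the past--residual decomposition above: for an exponential switch-over time, the memoryless property makes the past and residual parts independent, so $\R^{PR}_j(\omega_P,\omega_R)$ must factorise into $\R_j(\omega_P)\R_j(\omega_R)$. A small sketch (the rate $\nu$ is an assumption of ours):

```python
nu = 1.5                      # assumed exponential switch-over rate
r = 1.0 / nu                  # mean switch-over time

def R(w):
    """LST of an exp(nu) switch-over time."""
    return nu / (nu + w)

def R_PR(w_p, w_r):
    """Joint LST of past and residual switch-over time (formula in the text);
    arguments assumed distinct to avoid the removable 0/0 on the diagonal."""
    return (R(w_p) - R(w_r)) / ((w_r - w_p) * r)
```

For non-exponential distributions this product form fails, and the joint LST genuinely couples its two arguments, which is what \eqref{jointQLduringSjandRjres} exploits.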
\begin{proposition} We have \begin{align} \W_{i}^{(R_{i-k})}(\omega) &= \R^{PR}_{i-k}\Big(\sum_{j=1}^{k-1}\lambda_{i-j}\big(1-\Btilde_{j-1,i-1}(\omega)\big) + \lambda_{i}\big(1-\B_i(\omega)\big), \omega+\sum_{j=1}^{k-1}\lambda_{i-j}\big(1-\Btilde_{j-1,i-1}(\omega)\big)\Big)\nonumber\\ &\times \LB^{(R_{i-k})}\big( \ubG{k-1,i}\big) \prod_{j=0}^{k-2}\Rtilde_{j,i-1}(\omega), \qquad i,k=1,\dots,N, \label{WkiR} \end{align} \begin{proof} We consider an arbitrary customer arriving in $Q_i$ during $R_j$. Similar to the proofs of the preceding propositions in this section, we condition on the number of customers present in all queues at the arrival epoch, denoted by $n_1, \dots, n_N$. As mentioned before, no gated customers are present during a switch-over time. However, we also condition on the residual length of $R_j$, denoted by $t_R$. The waiting time of the tagged customer consists of: \begin{enumerate} \item the service of all $n_{j+1},\dots,n_{i-1}$ customers present at the arrival epoch in $Q_{j+1},\dots,Q_{i-1}$, \item the service of all their descendants that will be served before the start of the next visit to $Q_i$, \item the service of all $n_i$ customers present at the arrival epoch in $Q_i$, \item the residual switch-over time $t_R$, \item the switch-over times $R_{j+1}, \dots, R_{i-1}$, \item the service of all customers arriving during $t_R, R_{j+1}, \dots, R_{i-1}$ that will be served before the start of the next visit to $Q_i$, \item the service of all descendants of these customers that will be served before the start of the next visit to $Q_i$. 
\end{enumerate} If we denote the waiting time of a type $i$ customer arriving during $R_j$, \emph{conditional on} $n_1, \dots, n_N$ and $t_R$, by $W_i^{(R_j)'}$, we can summarise these items in the following formula: \begin{align} W_i^{(R_j)'} &= \sum_{k=j+1}^{i-1}\left[\sum_{l=1}^{n_k}S_{k_l} + R_k + \sum_{l=k+1}^{i-1}\sum_{m=1}^{N_l(R_k)}S_{l_m}\right]+\sum_{l=1}^{n_i}B_{i_l} +t_R + \sum_{l=j+1}^{i-1}\sum_{m=1}^{N_l(t_R)}S_{l_m}.\label{WiRj} \end{align} Taking the LST of \eqref{WiRj} and using \eqref{jointQLduringSjandRjres} leads to \eqref{WkiR} after deconditioning. The derivation is not completely straightforward, but rather than providing it here, we refer to the proof of Proposition \ref{WkiVthm}, which contains a similar derivation of a more complicated equation. \end{proof} \end{proposition} Now we only need to determine $\W_{i}^{(V_{i-k})}(\cdot)$. Focussing on a tagged customer arriving in $Q_i$ during the service of a customer in $Q_{i-k}$, for $i,k=1,\dots,N$, we can find $\W_{i}^{(V_{i-k})}(\cdot)$ by conditioning on the number of customers in each queue at the arrival epoch and the residual service time. Similar to $\R^{PR}_j(\cdot)$, we define the LST of the joint distribution of past and residual service time $B_j$ as \begin{equation} \B^{PR}_j(\omega_P, \omega_R) = \frac{\B_j(\omega_P)-\B_j(\omega_R)}{(\omega_R-\omega_P)b_j},\qquad j=1,\dots,N. \label{BPRj} \end{equation} We can now use Equations \eqref{jointQLduringVj} and \eqref{BPRj} to find the PGF-LST of the joint distribution of the number of customers of each type present at an arbitrary moment during $V_j$ and the residual service time of the customer that is being served at that moment: \begin{equation} \L^{(V_j)}(\z, \omega)= \LB^{(B_j)}(\z)\B^{PR}_j(\Sigma_j(\z), \omega). \end{equation} Note that the customers arriving in $Q_j$ during the elapsed part of $B_j$ are gated customers. 
\begin{proposition}\label{WkiVthm} We have \begin{align} \W_{i}^{(V_{i-k})}(\omega) &= \B^{PR}_{i-k}\Big(\sum_{j=1}^{k-1}\lambda_{i-j}\big(1-\Btilde_{j-1,i-1}(\omega)\big) + \lambda_{i}\big(1-\B_i(\omega)\big), \omega+\sum_{j=1}^{k-1}\lambda_{i-j}\big(1-\Btilde_{j-1,i-1}(\omega)\big)\Big)\nonumber\\ &\times \LB^{(B_{i-k})}\big( \ubG{k,i}\big) \prod_{j=0}^{k-1}\Rtilde_{j,i-1}(\omega) \times\frac{\Ptilde_{k-1,i-1}(\omega)}{\Btilde_{k-1,i-1}(\omega)}, \label{WkiV1} \end{align} for $i,k=1,\dots,N$. \begin{proof} We denote by $n_1, \dots, n_N, n_G$ the numbers of customers of all types present at the arrival epoch of the tagged customer. The residual part of the service time of the customer being served at this arrival epoch is denoted by $t_R$. Let $j:=i-k$ (understood as $N+i-k$ if $k\geq i$) denote the queue being visited at the arrival epoch, and let $n_{iG}:=n_i$ if $i\neq j$, and $n_{iG}:=n_G$ if $i= j$. The waiting time of a type $i$ customer arriving during $V_j$, conditional on $n_1, \dots, n_N, n_G$ and the residual service time, consists of the following components: \begin{enumerate} \item the service of the $n_j-1$ customers in front of the gate in $Q_j$ (excluding the customer being served at the arrival epoch), \item the service of all $n_{j+1},\dots,n_{i-1}$ customers present in $Q_{j+1},\dots,Q_{i-1}$, \item all of the descendants of the previously mentioned customers that will be served before the next visit to $Q_i$, \item if $i\neq j$, the service of all $n_{iG}$ customers present in $Q_i$ at the arrival epoch; if $i=j$, the service of all $n_{iG}$ gated customers present in $Q_i$, \item the switch-over times $R_j, \dots, R_{i-1}$, \item the residual service time $t_R$, \item all customers arriving during $t_R$ and $R_j, \dots, R_{i-1}$ that will be served before the next visit to $Q_i$, \item all of their descendants that will be served before the next visit to $Q_i$, \item the (possible) future service of the customer being served at the arrival epoch, due to the fact that he may be routed to another queue that will be served before the next visit to $Q_i$,
\item the service of all descendants of this rerouted customer (note that if he is rerouted and served again, he counts as his own descendant). \end{enumerate} More formally: \begin{equation} \begin{aligned} W_i^{(V_j)'} &= \sum_{l=1}^{n_j-1}S_{j_l} + \sum_{k=j+1}^{i-1}\sum_{l=1}^{n_k}S_{k_l}+\sum_{l=1}^{n_{iG}}B_{i_l} +\sum_{k=j}^{i-1}\left[R_k + \sum_{l=k+1}^{i-1}\sum_{m=1}^{N_l(R_k)}S_{l_m}\right] \\ &+t_R + \sum_{l=j+1}^{i-1}\sum_{m=1}^{N_l(t_R)}S_{l_m}+\begin{cases} S_l & \qquad\text{for $l=j+1,\dots,i-1$, w.p. $p_{j,l}$},\\ 0 & \qquad\text{w.p. }1-\sum_{l=j+1}^{i-1}p_{j,l}. \end{cases} \end{aligned} \label{WiVj} \end{equation} We now show that Equation \eqref{WkiV1} follows from taking the LST: \begin{align*} &\E\left[\ee^{-\omega W_i^{(V_j)'}}\right] \\ &=\E\left[\prod_{l=1}^{n_j-1}\ee^{-\omega S_{j_l}}\prod_{m=j+1}^{i-1}\prod_{l=1}^{n_m}\ee^{-\omega S_{m_l}}\right]\E\left[\prod_{l=1}^{n_{iG}}\ee^{-\omega B_{i_l}}\right]\E\left[\prod_{m=j}^{i-1}\ee^{-\omega\left(R_m + \sum_{l=m+1}^{i-1}\sum_{q=1}^{N_l(R_m)}S_{l_q} \right)}\right]\\ &\times \ee^{-\omega t_R} \E\left[\prod_{l=j+1}^{i-1}\prod_{m=1}^{N_l(t_R)}\ee^{-\omega S_{l_m}} \right] \left(\sum_{l=j+1}^{i-1}p_{j,l}\E\left[\ee^{-\omega S_l}\right]+1-\sum_{l=j+1}^{i-1}p_{j,l}\right)\\ &= \E\left[\ee^{-\omega S_{j}}\right]^{n_j-1}\prod_{m=j+1}^{i-1}\E\left[\ee^{-\omega S_{m}}\right]^{n_m}\E\left[\ee^{-\omega B_{i}}\right]^{n_{iG}} \prod_{m=j}^{i-1}\R_m\Big( \omega + \sum_{l=m+1}^{i-1}\lambda_l(1-\E[\ee^{-\omega S_l}])\Big)\\ &\times \ee^{-\omega t_R} \prod_{l=j+1}^{i-1}\sum_{m=0}^{\infty}\E[\ee^{-\omega S_{l}}]^{m}\P[N_l(t_R)=m] \left(1-\sum_{l=j+1}^{i-1}p_{j,l}\Big(1-\E\left[\ee^{-\omega S_l}\right]\Big)\right)\\ &= \Btilde_{k-1,i-1}(\omega)^{n_{i-k}-1}\prod_{l=1}^{k-1}\Btilde_{l-1,i-1}(\omega)^{n_{i-l}}\B_{i}(\omega)^{n_{iG}} \prod_{l=1}^{k}\Rtilde_{l-1,i-1}(\omega)\\ &\times \exp\left[-\Big(\omega + \sum_{l=j+1}^{i-1}\lambda_l(1-\E[\ee^{-\omega S_{l}}])\Big)t_R\right] \Ptilde_{k-1,i-1}(\omega)\\ &= \Btilde_{k-1,i-1}(\omega)^{n_{i-k}}\prod_{l=1}^{k-1}\Btilde_{l-1,i-1}(\omega)^{n_{i-l}}\B_{i}(\omega)^{n_{iG}} \prod_{l=1}^{k}\Rtilde_{l-1,i-1}(\omega)\\ &\times \exp\left[-\Big(\omega + \sum_{l=1}^{k-1}\lambda_{i-l}(1-\Btilde_{l-1,i-1}(\omega))\Big)t_R\right] \frac{\Ptilde_{k-1,i-1}(\omega)}{\Btilde_{k-1,i-1}(\omega)}, \end{align*} where $k=i-j$ (or $k=N+i-j$ if $j\geq i$). Deconditioning of this expression leads to \eqref{WkiV1}. \end{proof} \end{proposition} \paragraph{Arbitrary customers.} Finally, we present the main result of this section: the LST of the waiting-time distribution of an arbitrary customer in $Q_i$. \begin{theorem} The LST of the waiting-time distribution of an arbitrary customer in $Q_i$, if this queue receives gated service, is given by: \begin{equation} \W_i(\omega) = \frac{\gamma_i-\lambda_i}{\gamma_i}\W_i^I(\omega) + \frac{\lambda_i}{\gamma_i}\W_i^E(\omega), \qquad i=1,\dots,N,\label{LSTW} \end{equation} where $\W_i^I(\omega)$ and $\W_i^E(\omega)$ are given by \eqref{WiI} and \eqref{WiE}, respectively. \begin{proof} The result follows immediately after conditioning on the event that an arbitrary customer is an internal or external customer. \end{proof} \end{theorem} \section{Exhaustive service}\label{exhaustiveservice} In this section we study systems with mixtures of gated and exhaustive service; that is, some queues are served exhaustively whereas other queues receive gated service. We restrict ourselves to presenting the results; for reasons of compactness we omit all proofs, as they can be constructed similarly to the proofs in the previous section. Throughout we use the index $e\in\{1,\dots,N\}$ to refer to an arbitrary queue with exhaustive service, which means that customers are served until the queue is empty. This means that, in contrast to gated service, customers arriving in $Q_e$ \emph{during} $V_e$ will be served during that same visit period.
This is true, even if the customer has just received service in $Q_e$ and was routed back to $Q_e$ again. To deal with this issue, we define an extended service time $B_e^\textit{exh}$ which is the total amount of service that a customer receives during a visit period $V_e$ before being routed to another queue (or leaving the system), cf. \cite{sidi2}. As stated in \cite{sidi2}, $B_e^\textit{exh}$ is the geometric sum, with parameter $p_{e,e}$, of independent random variables with the same distribution as $B_e$. The LST of $B_e^\textit{exh}$ is given by \[ \Bexh_e(\omega) = \frac{(1-p_{e,e})\B_e(\omega)}{1-p_{e,e}\B_e(\omega)}. \] We denote a busy period of type $e$ customers by $\textit{BP}_e$. The PGF-LST of the joint distribution of a busy period and the number of customers served during this busy period satisfies the following equation: \[ \BP_e(z, \omega) = z\Bexh_e\big(\omega + \lambda_e(1-\BP_e(z, \omega))\big). \] \subsection{The joint queue-length distributions} \paragraph{Visit beginnings and completions.} The laws of motion \eqref{lawsofmotion1}-\eqref{lawsofmotionN} have to be adapted if a queue receives exhaustive service. First we need to redefine $\Sigma_i(\z)$ and $P_i(\z)$ if $Q_i$ is served exhaustively, and introduce $\Pexh_i(\z)$: \begin{align*} \Sigma_e(\z)&=\sum_{j\neq e}\lambda_j(1-z_j),\\ P_e(\z) &= p_{e,0}+\sum_{j=1}^Np_{e,j}z_j,\\ \Pexh_e(\z) &= \frac{p_{e,0}}{1-p_{e,e}}+\sum_{j\neq e}\frac{p_{e,j}}{1-p_{e,e}}z_j, \end{align*} for all $e\in\{1,\dots,N\}$ corresponding to queues with exhaustive service. The laws of motion now change accordingly: \begin{align*} \LC^{(V_e)}(\z) &= \LB^{(V_e)}\Big(z_1, \dots, z_{e-1},\BP_e\big(\Pexh_e(\z), \Sigma_e(\z)\big), z_{e+1}, \dots, z_N, 1\Big),\\ \LB^{(R_e)}(\z) &= \LC^{(V_e)}(\z), \end{align*} for any exhaustively served $Q_e$. \paragraph{Service beginnings and completions.} Eisenberg's relation \eqref{eisenberg} remains valid for queues with exhaustive service. 
Note that $P_e(\z)$ should \emph{not} be replaced by $\Pexh_e(\z)$ for exhaustive queues in \eqref{eisenberg}! Relation \eqref{servicebeginningscompletions} should be slightly changed for queues with exhaustive service, since customers are not placed behind a gate: \begin{equation*} \LC^{(B_e)}(\z) = \LB^{(B_e)}(\z)\B_e\big(\Sigma(\z)\big)/z_e. \end{equation*} \paragraph{Arbitrary moments.} Equation \eqref{Lz} for the PGF of the joint queue-length distribution at arbitrary moments remains valid if some of the queues have exhaustive service. However, $\L^{(V_j)}(\z)$ should be adapted for queues with exhaustive service by replacing gated customers with ``ordinary'' type $e$ customers: \[ \L^{(V_e)}(\z) = \LB^{(B_e)}(\z)\frac{1-\B_e\big(\Sigma(\z)\big)}{b_e\Sigma(\z)}. \] \subsection{Cycle-time distributions} The fact that customers arriving in an exhaustively served queue, say $Q_{i-k}$, during $V_{i-k}$ are served before the end of this visit period, requires changes in the definition of $\Btilde_{k,i}(\omega)$. \begin{align} \Btilde_{k,i}(\omega) &= \BP_{i-k}\Big(\Ptilde_{k,i}(\omega), \omega + \sum_{j=0}^{k-1}\lambda_{i-j}\big(1-\Btilde_{j,i}(\omega)\big)\Big), &&k=0,1,\dots,N; i=1,\dots,N,\\ \intertext{where} \Ptilde_{k,i}(\omega) &= 1-\sum_{j=0}^{k-1}\frac{p_{i-k,i-j}}{1-p_{i-k,i-k}}\big(1-\Btilde_{j,i}(\omega)\big), &&k=0,1,\dots,N; i=1,\dots,N. \end{align} Given this modified definition of $\Btilde_{k,i}(\omega)$, the function $\Rtilde_{k,i}(\omega)$ remains unchanged. The expression for the LST of the cycle time $C_i$, given by \eqref{cycleTimeLST}, also remains valid for systems containing exhaustively served queues. \subsection{Waiting-time distributions} \paragraph{Internal customers.} The waiting time LST of internal customers \eqref{WiI} is determined by conditioning on the event that an arrival in $Q_i$ follows a service completion in some $Q_{i-k}$. 
As stated before, for queues with exhaustive service we need to take into account that customers that are routed back to the same queue will be served during the same visit period. For an arbitrary exhaustively served queue $Q_e$, this results in \begin{equation} \W_e^I(\omega) = \sum_{k=0}^{N-1} \frac{\gamma_{e-k}p_{{e-k},e}}{\gamma_e-\lambda_e}\WC_{e}^{(B_{e-k})}(\omega).\label{WeI} \end{equation} Compared to \eqref{WiI}, the summation starts at $k=0$ and runs up to $k=N-1$. We now introduce \[ \ubexh{0,i}=\big(1, \dots, 1, \B_{i}(\omega), 1, \dots, 1\big),\qquad i=1,\dots,N, \] with $\B_{i}(\omega)$ at the position corresponding to customers in $Q_i$. If $Q_i$ has exhaustive service, there is a subtle difference with $\ub{0,i}$ which has $\BP_{i}(1,\omega)$ at position $i$. We can now determine $\WC_{e}^{(B_{e-k})}(\omega)$ for any $Q_e$ that receives exhaustive service: \begin{align*} \WC_{e}^{(B_{e-k})}(\omega) &= \LC^{(B_{e-k})}\big( \ubexh{0,e}\bigotimes_{j=0}^{k-1}\ub{j,e-1} \big)\prod_{j=0}^{k-1}\Rtilde_{j,e-1}(\omega), \qquad k=1,\dots,N-1, \\ \WC_{e}^{(B_{e})}(\omega) &= \LC^{(B_{e})}\big( \ubexh{0,e}\big). \end{align*} For each $Q_i$ that receives gated service, we can still use \eqref{WiI} with the modified definition of $\Btilde_{k,i}(\omega)$ for each $Q_{i-k}$ which receives exhaustive service. \paragraph{External customers.} The waiting time LST of external customers \eqref{WiE} is determined by conditioning on the event that an arrival in $Q_i$ takes place during $V_{i-1},\dots,V_{i-N}$ or during $R_{i-1},\dots,R_{i-N}$. Before discussing the waiting times of external customers arriving in an exhaustively served queue, it is important to realise that allowing some queues to have exhaustive service will now also require some changes to waiting times of customers arriving in a queue with gated service. 
This means that \eqref{WkiV1} should now become \begin{align} \W_{i}^{(V_{i-k})}(\omega) &= \B^{PR}_{i-k}\Big(\sum_{j=1}^{k-1}\lambda_{i-j}\big(1-\Btilde_{j-1,i-1}(\omega)\big) + \lambda_{i}\big(1-\B_i(\omega)\big)+\lambda_{i-k}\big(1-\Btilde_{k-1,i-1}(\omega)\big), \nonumber\\ &\qquad\qquad\omega+\sum_{j=1}^{k-1}\lambda_{i-j}\big(1-\Btilde_{j-1,i-1}(\omega)\big)+\lambda_{i-k}\big(1-\Btilde_{k-1,i-1}(\omega)\big)\Big)\nonumber\\ &\times \LB^{(B_{i-k})}\big( \ub{0,i}\bigotimes_{j=0}^{k-1}\ub{j,i-1} \big) \prod_{j=0}^{k-1}\Rtilde_{j,i-1}(\omega) \times\frac{1-\sum_{m=0}^{k-1}p_{i-k,i-m-1}\big(1-\Btilde_{m,i-1}(\omega)\big)}{\Btilde_{k-1,i-1}(\omega)}, \label{WkiV1exh} \end{align} if $Q_{i-k}$ receives exhaustive service (and $Q_i$ receives gated service). Compared to \eqref{WkiV1}, we can see that there are two additional terms $\lambda_{i-k}\big(1-\Btilde_{k-1,i-1}(\omega)\big)$, which take into account that customers arriving in $Q_{i-k}$ during the elapsed \emph{and} during the residual part of the present service time $B_{i-k}$ will be served during the present visit period. Furthermore, we can see that $\Ptilde_{k-1,i-1}(\omega)$ has been replaced by $1-\sum_{m=0}^{k-1}p_{i-k,i-m-1}\big(1-\Btilde_{m,i-1}(\omega)\big)$, which is required because the customer being served should be allowed to return to $Q_{i-k}$ upon his service completion. If $Q_e$ receives exhaustive service, we have to make some additional changes. We have \begin{equation} \W_e^E(\omega) = \frac{1}{\E[C]}\sum_{k=1}^N \left(\E[V_{e-k+1}]\W_{e}^{(V_{e-k+1})}(\omega)+r_{e-k}\W_{e}^{(R_{e-k})}(\omega)\right),\label{WeE} \end{equation} where we have chosen to denote the waiting time LST of customers arriving in $Q_e$ during $V_e$ by $\W_{e}^{(V_{e})}(\omega)$ rather than $\W_{e}^{(V_{e-N})}(\omega)$, to illustrate the fact that they will be served during the same visit period.
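The exhaustive-service building blocks can also be checked numerically. The sketch below (exponential service, with all parameter values assumed by us) evaluates $\Bexh_e(\omega)$ and iterates the busy-period fixed point $\BP_e(z,\omega)=z\Bexh_e\big(\omega+\lambda_e(1-\BP_e(z,\omega))\big)$; the means $\E[B_e^\textit{exh}]=b_e/(1-p_{e,e})$ and $\E[\textit{BP}_e]=\E[B_e^\textit{exh}]/(1-\lambda_e\E[B_e^\textit{exh}])$ then serve as consistency checks.

```python
lam_e, mu_e, p_ee = 0.4, 2.0, 0.25   # assumed: exp(mu_e) services, self-routing prob p_ee

def B_e(w):
    """LST of a single service time at Q_e."""
    return mu_e / (mu_e + w)

def B_exh(w):
    """LST of B_e^exh: geometric (p_ee) sum of services before leaving Q_e."""
    return (1.0 - p_ee) * B_e(w) / (1.0 - p_ee * B_e(w))

def BP(z, w, iters=200):
    """Busy-period fixed point BP(z, w) = z * B_exh(w + lam_e * (1 - BP(z, w))),
    solved by successive substitution (a contraction for stable parameters)."""
    x = 1.0
    for _ in range(iters):
        x = z * B_exh(w + lam_e * (1.0 - x))
    return x
```

Numeric differentiation at $\omega=0$ reproduces both means, which gives some confidence before these LSTs are substituted into the modified $\Btilde_{k,i}(\omega)$ above.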
The expression for $\W_{e}^{(R_{e-k})}(\omega)$, given by \eqref{WkiR}, should be slightly modified if $Q_e$ receives exhaustive service. However, since the only required modification is that $\ub{0,i}$ should be replaced by $\ubexh{0,i}$, we refrain from giving the complete expression. If $k>0$, the expression for $\W_{e}^{(V_{e-k})}(\omega)$ remains almost the same as \eqref{WkiV1} if $Q_{e-k}$ receives gated service, or \eqref{WkiV1exh} if $Q_{e-k}$ receives exhaustive service. The only change is, once again, that $\ub{0,i}$ should be replaced by $\ubexh{0,i}$. The case $k=0$ results in a much simpler expression, since we only have to wait for the service times of the customers that were present at the beginning of the present service (excluding the customer in service) plus the service times of the customers that have arrived in $Q_e$ during the elapsed part of the present service, plus the residual service time: \[ \W_{e}^{(V_{e})}(\omega) = \B^{PR}_{e}\Big(\lambda_{e}\big(1-\B_e(\omega)\big), \omega\Big)\frac{\LB^{(B_{e})}\big( \ubexh{0,e}\big)}{\B_e(\omega)}. \] \paragraph{Arbitrary customers.} The LST of the waiting-time distribution of an arbitrary customer in an exhaustively served queue immediately follows after conditioning on the event that an arbitrary customer is either an internal or an external customer, similar to the derivation of~\eqref{LSTW}. The result is presented in the theorem below. \begin{theorem} The LST of the waiting-time distribution of an arbitrary customer in $Q_i$, if this queue receives exhaustive service, is given by: \begin{equation} \W_i(\omega) = \frac{\gamma_i-\lambda_i}{\gamma_i}\W_i^I(\omega) + \frac{\lambda_i}{\gamma_i}\W_i^E(\omega), \qquad i=1,\dots,N,\label{LSTWexhaustive} \end{equation} where $\W_i^I(\omega)$ and $\W_i^E(\omega)$ are defined in \eqref{WeI} and \eqref{WeE}. 
\end{theorem} \section{Applicability of the model} In this section we give some numerical examples that indicate the versatility of the model that we have discussed. To this end, we use some examples that can be found in the existing literature, and show how our model can be used to describe the various systems and find the relevant performance measures. Hence, most of the results presented in this section are not novel, but the way of deriving them is new. \paragraph{Example 1: tandem queues with parallel queues in the first stage.} \begin{figure}[ht] \begin{center} \includegraphics[width=0.7\linewidth]{example2} \end{center} \caption{Tandem queues with parallel queues in the first stage, as discussed in Example 1.} \label{figureExample1} \end{figure} We first use an example that was introduced by Katayama \cite{katayama}, who studies a network consisting of three queues. Customers arrive at $Q_1$ and $Q_2$, and are routed to $Q_3$ after being served (see Figure \ref{figureExample1}). This model, which is referred to as a tandem queueing model with parallel queues in the first stage, is a special case of the model discussed in the present paper. We simply put $p_{1,3}=p_{2,3}=p_{3,0}=1$ and all other $p_{i,j}$ are zero. We use the same values as in \cite{katayama}: $\lambda_1=\lambda_2/10$, service times are deterministic with $b_1=b_2=1$, and $b_3=5$. The server serves the queues exhaustively, in cyclic order: 1, 2, 3, 1, \dots. The only difference with the model discussed in \cite{katayama} is that we introduce (deterministic) switch-over times $R_2=R_3=2$. We assume that no time is required to switch between the two queues in the first stage, so $r_1=0$. In Figure~\ref{numericalresults} we show the means and standard deviations of the waiting times of customers at the three queues. 
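As a quick numerical companion to this setup, the effective arrival rates $\gamma_i$ and the total load $\rho$ follow from the traffic equations $\gamma_i=\lambda_i+\sum_j \gamma_j p_{j,i}$; a minimal sketch (ours; the value of $\lambda_2$ is illustrative, the routing and service parameters are those of Example 1):

```python
import numpy as np

# Routing matrix of Example 1: customers at Q1 and Q2 move to Q3,
# customers at Q3 leave the system (p_{1,3} = p_{2,3} = p_{3,0} = 1).
P = np.array([[0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])

lam = np.array([0.01, 0.10, 0.0])   # lambda_1 = lambda_2 / 10 (lambda_2 illustrative)
b = np.array([1.0, 1.0, 5.0])       # deterministic service times b_1, b_2, b_3

# Traffic equations: gamma = lambda + P^T gamma  =>  gamma = (I - P^T)^{-1} lambda
gamma = np.linalg.solve(np.eye(3) - P.T, lam)
rho = float(gamma @ b)              # total offered load

print(gamma)   # approximately [0.01, 0.1, 0.11]
print(rho)     # approximately 0.66
```

Sweeping $\lambda_2$ (and hence $\rho$) in this way reproduces the horizontal axis of Figure~\ref{numericalresults}.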
These plots reveal that in the heavy-traffic regime, as $\rho\uparrow1$, the \emph{mean} waiting times of customers in $Q_3$ are close to those in $Q_1$, but the \emph{standard deviations} of the waiting times in $Q_3$ are closer to those in $Q_2$. Further inspection of the exact results, obtained by differentiating the LSTs, confirms that in both cases the limits are very close, but not exactly the same. It is also interesting to study the light-traffic behaviour of the system, i.e., as $\rho\downarrow0$. From the plots in Figure~\ref{numericalresults} we can see that, as $\rho\downarrow0$, the \emph{mean} waiting times are all equal, but the \emph{standard deviation} of the waiting times in $Q_1$ and $Q_2$ differs from that in $Q_3$. From the LSTs of the waiting-time distributions we can obtain the exact expressions when $\rho\downarrow0$, by taking the Taylor expansion in $\rho$ at $\rho=0$ and subsequently ignoring all $\O(\rho)$ terms. This, combined with the fact that $R_1=0$ and all of the routing probabilities are either 0 or 1, considerably simplifies all expressions from the previous section: \begin{align*} \W_1(\omega) &= \W_1^E(\omega) \rightarrow \frac{r_2}{r}\W_{1}^{(R_{2})}(\omega)+\frac{r_3}{r}\W_{1}^{(R_{3})}(\omega),\\ \W_2(\omega) &= \W_2^E(\omega) \rightarrow \frac{r_2}{r}\W_{2}^{(R_{2})}(\omega)+\frac{r_3}{r}\W_{2}^{(R_{3})}(\omega),\\ \W_3(\omega) &= \W_3^I(\omega) \rightarrow \frac{\lambda_1}{\lambda_1+\lambda_2}\WC_{3}^{(B_{1})}(\omega)+\frac{\lambda_2}{\lambda_1+\lambda_2}\WC_{3}^{(B_{2})}(\omega).
\end{align*} Since we are considering the case $\rho\downarrow0$, these expressions can be simplified even further to closed-form expressions, because ignoring all $\O(\rho)$ terms is equivalent to regarding the system as being empty all the time: \begin{align*} \W_1(\omega) &\rightarrow \frac{r_2}{r}\R^{PR}_2(0, \omega)\R_3(\omega)+\frac{r_3}{r}\R^{PR}_3(0, \omega),\\ \W_2(\omega) &\rightarrow \frac{r_2}{r}\R^{PR}_2(0, \omega)\R_3(\omega)+\frac{r_3}{r}\R^{PR}_3(0, \omega),\\ \W_3(\omega) &\rightarrow \frac{\lambda_1}{\lambda_1+\lambda_2}\R_1(\omega)\R_2(\omega)+\frac{\lambda_2}{\lambda_1+\lambda_2}\R_2(\omega). \end{align*} These expressions reveal the true behaviour of the system in light traffic. The waiting times in $Q_1$ and $Q_2$ are simply the total residual switch-over time, with mean $r^{(2)}/2r = 2$ and second moment $r^{(3)}/3r = 16/3$. For queue $Q_3$ the situation is different, because this queue only contains internally rerouted customers. Customers being rerouted from $Q_1$ have to wait for the switch-over times $R_1+R_2$, whereas customers arriving from $Q_2$ have to wait only for $R_2$. Since $R_1=0$, the waiting time only consists of $R_2=2$ in both cases. Substituting all parameter values results in the following light-traffic limits of the waiting-time LSTs: \[ \W_1(\omega) \rightarrow \frac{1-\ee^{-4\omega}}{4\omega}, \qquad \W_2(\omega) \rightarrow \frac{1-\ee^{-4\omega}}{4\omega}, \qquad \W_3(\omega) \rightarrow \ee^{-2\omega}\qquad (\rho\downarrow0). \] Differentiating the LSTs gives the following results as $\rho\downarrow0$: \begin{align*} \E[W_1] &\rightarrow 2,& \E[W_2] &\rightarrow 2, &\E[W_3] &\rightarrow 2, \\ \sd[W_1] &\rightarrow \sqrt{4/3}, &\sd[W_2] &\rightarrow \sqrt{4/3}, &\sd[W_3] &\rightarrow 0.
\end{align*} \begin{figure}[h] \includegraphics[width=0.49\textwidth]{example1exhaustiveEWrevision} \hfill \includegraphics[width=0.49\textwidth]{example1exhaustivesdWrevision} \caption{Means and standard deviations of the waiting times in the first numerical example.} \label{numericalresults} \end{figure} \paragraph{Example 2: a two-stage queueing model with customer feedback.} This second example is introduced by Tak\'acs \cite{takacsfeedback77}, and extended by Ali and Neuts \cite{alineuts84}. The queueing system under consideration consists of a waiting room, in which customers arrive according to a Poisson process with intensity $\lambda$, and a service room. The customers are all transferred simultaneously to the service room where they receive service in order of arrival. However, at the moment of the transfer to this service room $M$ additional ``overhead customers'' are added to the front of this queue. (In \cite{takacsfeedback77} $M$ is a constant, in \cite{alineuts84} it is a random variable.) Upon service completion, each customer leaves the system with probability $q$, and returns to the waiting room with probability $1-q$. Overhead customers leave the system with probability one after being served. As soon as the last customer in the service room finishes service (and either leaves the system, or returns to the waiting room) all customers present in the waiting room are transferred to the service room, where they will receive service after a new batch of overhead customers has been served, and so on. A schematic representation of this model is depicted in Figure \ref{figureExample2}. 
\begin{figure}[ht] \begin{center} \includegraphics[width=0.7\linewidth]{example1} \end{center} \caption{The two-stage queueing model with customer feedback, as discussed in Example 2.} \label{figureExample2} \end{figure} We use the same input parameters as Tak\'acs \cite{takacsfeedback77}: $q=2/3$ and $\lambda/\mu=1/6$, where $1/\mu$ is the mean service time in the service room. This service time is exponentially distributed. The number of overhead customers that are added to the front of the queue is a constant with value $M$. We can model this system in terms of our network with a single, shared server by defining arrival intensities $\lambda_1=\lambda$ and $\lambda_2 = 0$. The service times in stations 1 and 2 are respectively $0$ and exponentially distributed with mean $b_2=1/\mu$. The routing probabilities are $p_{1,2}=1$ and $p_{2,1}=1/3$, the other $p_{i,j}$ are zero. The service times of the overhead customers are also exponentially distributed with parameter $\mu$. Hence, we can model the addition of $M$ overhead customers as a switch-over time which is Erlang-$M$ distributed with parameter $\mu$. The switch-over time between $Q_2$ and $Q_1$ is zero. Note that, since $b_1=0$, there is no difference between gated and exhaustive service. By differentiation of the waiting time LSTs \eqref{LSTW}, we can obtain explicit expressions for all moments of the waiting-time distributions for this example. The first three moments of the waiting times are given below. \begin{align*} &\E[W_1] = \frac{1+M}{2\mu}, &&\E[W_1^2] = \frac{(M+1) (11 M+25)}{27 \mu ^2}, &&\E[W_1^3] =\frac{(M+1) (M (43 M+223)+310)}{108 \mu ^3},\\ &\E[W_2]=\frac{1+7M}{6\mu}, &&\E[W_2^2]=\frac{(M+1) (37 M+11)}{27 \mu ^2},&&\E[W_2^3]=\frac{(M+1) (M+2) (175 M+81)}{108 \mu ^3}. 
\end{align*} The results are slightly different from those presented in \cite{takacsfeedback77}, because Tak\'acs also considers the overhead customers in the computations of the waiting times and allows them to return to the waiting room after their service is completed. Modelling this situation would require one minor adaptation in the laws of motion (adding the overhead customers at the beginning of $V_2$) and another adaptation in the waiting time LST (conditioning on the event that a new customer is an overhead customer). These changes are not too difficult but beyond the scope of this paper. \section{Discussion and further research}\label{discussion} In this section, we not only elaborate on the developed method and its applicability, but we also discuss possible ways of extending the present study. \paragraph{Method.} As mentioned in the introduction, the main complicating factor of the model under consideration is caused by the rerouting of internal customers. This implies that the \emph{total} arrival process at each queue is not Poisson, and not even renewal. Traditional methods to determine waiting-time distributions in each queue are based on the distributional form of Little's Law, which relies on the assumption of Poisson arrivals. Contrary to the distributional form of Little's Law, we explicitly make use of the branching structure to find waiting-time distributions. The main idea is that upon the arrival of a tagged customer $Y$ at time $t$ at $Q_i$ we compute \emph{a priori} the total future service times at each of the queues, for \emph{all} the other customers present in the system at time $t$ that will be served before customer $Y$ enters service at $Q_i$ (see \eqref{bkiomega}). Additionally, we add the total future service requirements of all external arrivals (and their descendants) that will be served before customer $Y$ enters service (see \eqref{rkiomega}). 
The advantage of this method is that a system no longer needs to satisfy all of the prerequisites required to apply the distributional form of Little's Law (see \cite{keilsonservi90}). \paragraph{Applicability.} The novel approach of this paper for finding the LST of the waiting-time distribution can also be applied to other types of models with a single server serving multiple queues. Obviously, one can apply it to standard polling models (without customer routing) by simply taking $p_{i,0} = 1$ and $p_{i,j} = 0$ for $j > 0$. However, the developed methodology carries over almost directly to tandem queues \cite{nair,taube}, multi-stage queueing models with parallel queues \cite{katayama}, feedback vacation queues \cite{boxmayechiali97, takine}, symmetric feedback polling systems \cite{takagifeedback,takine}, systems with a waiting room \cite{alineuts84,takacsfeedback77}, closed networks \cite{altman2}, $M/G/1$ queues with permanent and transient customers \cite{boxmacohen91}, networks with permanent and transient customers \cite{armonyyechiali99}, or polling models with arrival rates that depend on the location of the server \cite{boonsmartcustomers2010,smartcustomers}. \paragraph{Further research.} Since the model can be described as a multi-type branching process, \emph{explicit closed-form} expressions can be obtained for the waiting-time distributions under heavy-traffic (HT) assumptions. Such expressions are appealing because they give fundamental insight into how the system performance depends on the system parameters, and in particular on the routing probabilities $p_{i,j}$. HT asymptotics can be obtained by combining insights from multi-type branching processes \cite{RvdM_QUESTA}, fluid analyses \cite{olsenvdmei03,olsenvdmei05}, and the heavy-traffic averaging principle by Coffman et al. \cite{coffman95,coffman98}. The HT analysis is relevant because in practice the proper operation of the system is particularly important when the system is heavily loaded.
The HT asymptotics form an excellent basis for the development of approximations for the waiting-time distributions for \emph{arbitrary} loads. For the \emph{mean} waiting times, preliminary results are obtained in \cite{boonvdmeiwinandsRovingPER2011}. From a practical perspective, motivated by applications in production systems \cite{boonapplications2011}, an important extension of the model under consideration is a model where customers visit a predetermined, class-specific sequence of queues in a fixed order. In our model one would have to define multiple customer classes, each having their own fixed visit order through the system. The method presented in this paper forms a good basis for this extension. \section*{Acknowledgements} The authors are grateful to Ivo Adan and Onno Boxma for providing valuable comments on earlier drafts of the present paper. \expandafter\ifx\csname urlstyle\endcsname\relax \providecommand{\doi}[1]{DOI: #1}\else \providecommand{\doi}{DOI: \begingroup \urlstyle{rm}\Url}\fi \bibliographystyle{abbrvnat}
\section{Introduction and Main Results}\label{sec:intro} The multi-species Sherrington-Kirkpatrick (MSK) model, a generalization of the classical SK model~\cites{Tal11a,Tal11b,Pan13}, was introduced by Barra~\emph{et~al.}~in~\cite{BCMT15}. In the MSK model, the spins are divided into finitely many species, and the density of each species is asymptotically fixed with increasing system volume. The interaction parameter between different spins depends only on the species structure. Thus, one can consider the MSK model as an inhomogeneous generalization of the classical SK model. The detailed definition is given in Section~\ref{s1sec:msk}. One important question in spin glasses is to understand the behavior of the limiting free energy. In the classical SK model, Parisi~\cites{Par83, Par80} proposed his celebrated variational formula, known as the Parisi formula, which gives the limiting free energy at all temperatures. Talagrand~\cite{Tal06} proved this in the setting of mixed $p$-spin models for even $p$. Later Panchenko~\cite{Pan14}, using a different approach, proved the Parisi formula of mixed $p$-spin models for all $p$. The authors in~\cite{BCMT15} proposed a Parisi-type formula for the limiting free energy in the MSK model. They also proved the upper bound by adapting Guerra's interpolation method under a \emph{positive-definite}~condition on the interaction matrix $\gD^2$. Later, Panchenko~\cite{Pan15} completed the proof by proving a multi-species version of the Ghirlanda-Guerra identities to match the lower bound. The \emph{positive-definite}~condition on $\gD^{2}$ was used only in the proof of the upper bound. In~\cite{Pan15}, Panchenko also proposed several open questions, such as understanding the behavior of the model when the interaction matrix $\gD^2$ is \emph{indefinite}, and deriving an analog of the AT line condition in the MSK model, among others.
The authors in~\cite{BSS19} derived the AT line condition in the two-species \emph{positive-definite}~case and proved that above the AT line, the 2-species SK model is RSB. The essential tool of their argument is a perturbation of the Parisi formula, which is only known for \emph{positive-definite}~$\gD^2$. The idea fails in the general species case because of algebraic difficulties. For the interesting question about what happens when $\gD^2$ is \emph{indefinite}, to the best of our knowledge, only some partial results on a few particular models have been obtained very recently. In~\cites{BGG11, BGGPT14}, there are some min-max-type formulas conjectured by physicists for the bipartite SK model, which is a special case of the \emph{indefinite}~MSK model. Another particular case of the \emph{indefinite}~MSK model, known as the deep Boltzmann machine (DBM), was investigated in \cites{ABCM20, ACCM21} in the replica symmetry regime, and a complete solution was obtained in \cite{ACCM20} under assumptions on the Nishimori line. Besides that, a min-max formula for the replica symmetric solution in the DBM model is proved in~\cite{Gen20}. Recently, Mourrat~\cites{MD20, Mou20A, Mou20B, Mou20C, Mou18, Mou19} has reinterpreted the Parisi formula as the solution to an infinite-dimensional Hamilton-Jacobi equation in the Wasserstein space of probability measures on the positive half-line. Particularly in~\cite{Mou20B}, he studied the bipartite SK model and questioned the possibility of representing the limiting free energy as a variational formula. In the setting of the Hamilton-Jacobi equation, non-convex $\gD^2$ breaks down the application of the Hopf-Lax formula while solving the PDE. For the spherical case, Auffinger and Chen~\cite{AC14} studied the bipartite spherical SK (BSSK) model and proved a variational formula for the limiting free energy at sufficiently high temperature.
Later, Baik and Lee~\cite{BL17} computed the limiting free energy and its fluctuations at all non-critical temperatures in the BSSK model using tools from random matrix theory. However, computing the limiting free energy for the general \emph{indefinite}~model remains open. Specifically for Ising spins, we do not even know the limiting free energy at any temperature, other than the lower bound by Panchenko~\cite{Pan15}. In this paper, we do a high-temperature analysis of the Ising MSK model, and our main contributions are: \begin{enumerate} \item For external field $h \ge 0$, by extending Latala's argument, we prove an RS regime in the MSK model, where the overlap has exponential concentration. Note that our approach is unified for both \emph{positive-definite}~and \emph{indefinite}~MSK models, but the RS regimes we obtain take different forms; see Theorem~\ref{thm1}. For $h=0$, by using a quadratic coupling argument, we prove the concentration of the overlap up to $\gb_c=\rho(\gD^2\gL)^{-\half}$, where $\gD^{2}$ is the interaction matrix and $\gL$ is the diagonal matrix containing the species size ratios. This result is true for \emph{indefinite}~and \emph{positive-definite}~$\gD^2$, \ie~we prove the whole RS regime for the general MSK model, given by $\gb < \gb_c$; see Theorem~\ref{ovp no h}. The above overlap concentrations also enable us to prove the law of large numbers and the central limit theorem for the free energy in the corresponding RS regime; see Theorem~\ref{RS solution}. \item By developing a different species-wise cavity method, we derive a linear system for the Gibbs averages of quadratic forms of the overlap vectors. The system is solved using linear algebraic methods, enabling us to compute the variance-covariance structure of the overlap vectors. The computation also suggests the AT-line condition in the MSK model from the replica symmetry side.
Note that our species-wise cavity method does not require \emph{positive-definite}~$\gD^2$, but the AT line condition for \emph{indefinite}~$\gD^2$ seems to be more complicated; see the discussion below Theorem~\ref{thm:varoverlap}. \item In the case of \emph{positive-definite}~$\gD^2$, we prove the AT line condition from the RSB side under some natural assumption. The key is still the perturbation idea of the Parisi formula. This generalizes the result for the $2$-species SK model in~\cite{BSS19}. For the \emph{indefinite}~$\gD^2$ case, we use our species-wise cavity approach to give a conjectured form of the AT line condition. Obtaining a rigorous proof of the AT line in this case is challenging: first, the Parisi formula for the \emph{indefinite}~MSK model is still unknown, so the classical perturbation technique cannot be used; second, proving the uniqueness of the stationary point is hard. We prove a uniqueness result in the 2-species case using an elementary approach; see Proposition~\ref{prop:indf-uniq}. \end{enumerate} The definition of the MSK model is given in Section~\ref{s1sec:msk}, after which we review the main results of the $2$-species case in~\cite{BSS19} and conclude Section~\ref{sec:intro} with the statement of our results. The notation can always be checked in Section~\ref{notation}. A road map is given in Section~\ref{roadmap}. \subsection{MSK model}\label{s1sec:msk} Fix $N \ge 1$. For a spin configuration on the $N$-dimensional hyper-cube, $\mvgs =( \gs_1, \gs_2, \ldots, \gs_N) \in \gS_N := \{-1,+1\}^N$, we consider the Hamiltonian given by \begin{align}\label{hamilton} H_N(\mvgs) := \frac{\gb}{\sqrt{N}}\sum_{1\le i<j\le N} g_{ij} \gs_i \gs_j+ h \sum_{i=1}^N \gs_i \end{align} where the disorder parameters $g_{ij}$ are independent centered Gaussian random variables, $\gb>0$ is the inverse temperature, and $h \ge 0$ is the external field.
In the classical SK model, the variance structure of disorder $g_{ij}$ is homogeneous, usually taken as $g_{ij} \sim \N(0,1)$. While in the MSK model, the variance of $g_{ij}$ depends on the structure of species among $N$ spins. Assume that, there are $m \ge 2$ species. When $m=1$ the MSK model reduces to the classical SK model. We partition the set of spins into $m$ disjoint sets, namely, \[ I = \bigcup_{s=1}^m I_s = \{1,2,\ldots, N\},\qquad I_{s}\cap I_{t}=\emptyset\text{ for } s\neq t. \] For $i \in I_s, j \in I_t$, we assume \[ \E g_{ij}^2 = \gD_{st}^2 ,\] \ie~the inhomogeneity of MSK model comes from the interaction among different species. While proving MSK Parisi formula in~\cites{Pan15,BCMT15}, the assumption that $\gD^2 = (\!(\gD_{st}^2)\!)_{s,t=1}^m$ is symmetric and \emph{positive-definite}\ was used. But in this paper, we do not require the positive-definiteness condition for most of the results. Besides that, we assume that the ratio of spins in each species is fixed asymptotically, \ie~for $s=1,2,\ldots,m$ and \[ \gl_{s,N}:={|I_s|}/{N},\] we have \[ \lim_{N\to \infty} \gl_{s,N} = \gl_s \in (0,1). \] Since $\gl_{s,N}$ and $\gl_s$ are asymptotically the same, for the rest of the article we will use $\gl_s$ instead of $\gl_{s,N}$ for convenience. We denote $\gL:=\diag(\gl_1,\gl_2,\ldots, \gl_m)$. The overlap vector between two replicas $\mvgs^1, \mvgs^2 \in \gS_N$ is given by \[ \vR_{12} = (R_{12}^{(1)},R_{12}^{(2)},\ldots,R_{12}^{(m)})^{\T} ,\] where $$R_{12}^{(s)} := \frac{1}{|I_s|} \sum_{i \in I_s} \gs_i^1 \gs_i^2.$$ is the overlap restricted to species $s \in \{1, \ldots, m\}$. In some cases we write it as $R^{(s)}$ for short if there are only 2 replicas involved. All vectors will be considered as a column vector in the rest of the article. 
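To make this setup concrete, the following sketch (ours; all numerical values are illustrative) draws the disorder with $\E g_{ij}^2=\gD_{st}^2$, evaluates the Hamiltonian~\eqref{hamilton}, and computes the species-wise overlap vector for two spin configurations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two species of equal size: I_1 = {0,...,49}, I_2 = {50,...,99}.
N = 100
species = np.array([0] * 50 + [1] * 50)
Delta2 = np.array([[1.0, 2.0],
                   [2.0, 0.5]])   # illustrative Delta^2_{st}; indefinite (det < 0)
beta, h = 0.5, 0.1

# Disorder: g_ij ~ N(0, Delta^2_{s(i), t(j)}); only the upper triangle i < j is used.
sd = np.sqrt(Delta2[species[:, None], species[None, :]])
g = rng.standard_normal((N, N)) * sd

def hamiltonian(sigma):
    # H_N(sigma) = (beta / sqrt(N)) * sum_{i<j} g_ij sigma_i sigma_j + h * sum_i sigma_i
    iu = np.triu_indices(N, k=1)
    return (beta / np.sqrt(N)) * np.sum(g[iu] * sigma[iu[0]] * sigma[iu[1]]) \
        + h * np.sum(sigma)

def overlap(s1, s2):
    # Species-wise overlap: R^{(s)} = |I_s|^{-1} sum_{i in I_s} s1_i s2_i
    return np.array([np.mean((s1 * s2)[species == s]) for s in (0, 1)])

sigma1 = rng.choice([-1, 1], size=N)
sigma2 = rng.choice([-1, 1], size=N)
print(hamiltonian(sigma1), overlap(sigma1, sigma2))
```

The chosen $\gD^2$ has negative determinant, so this instance falls in the \emph{indefinite}~class discussed above.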
A central question in spin glasses is to understand the free energy \begin{align} F_N := \frac{1}{N} \log Z_N, \quad \text{where} \quad Z_N := \sum_{\mvgs \in \gS_N} \exp(H_N(\mvgs)) \end{align} is the partition function, and the Gibbs measure is given by \begin{align} G_{N}(\mvgs) = {\exp(H_N(\mvgs))}/{Z_N},\qquad \mvgs\in \gS_{N}. \end{align} Later we will also use $\la \cdot \ra$ to denote the Gibbs average just for convenience; check Section~\ref{notation}. When $\gD^{2}$ is \emph{positive-definite}, Parisi formula gives a variational representation of the limiting free energy for all $\gb >0, h\in \dR$, which was set up in~\cites{BCMT15,Pan15} for MSK model. This paper is mainly about the phase transitions and fluctuation of free energy in the RS regime. Before stating the main results, let us recall some related results in MSK model in~\cites{BSS19,Pan15,BCMT15}. We will only use the Parisi formula in the proof of Theorem~\ref{AT line}. \subsection{Related results in MSK model} \label{s1sec:2sk} Consider a sequence of real numbers \begin{align}\label{seq1} 0=\gz_0 < \gz_1 < \cdots < \gz_k < \gz_{k+1}=1, \end{align} and for each $s \in \{1,2, \ldots, m\}$, the sequences \begin{align}\label{seq2} 0=q_0^s\le q_1^s \le \cdots \le q_{k+1}^s \le q_{k+2}^s = 1. \end{align} For $0 \le l \le k+2$, define $\vq_\ell = (q_\ell^s)_{s=1}^{m}$, \begin{align}\label{Q sequen} Q_\ell := \frac 1 2 {\vq_\ell}^{\T} \gL \gD^2 \gL \vq_\ell, \quad Q_\ell^s :=(\gD^2 \gL \vq_\ell)_s \quad \text{for} \ 1 \le s\le m. \end{align} Given these sequences, consider i.i.d.~standard Gaussian random variables $(\eta_\ell)_{1\le \ell \le k+2}$. Define \[ X_{k+2}^s := \log \cosh\bigl(h+\gb \sum_{0 \le \ell \le k+1} \eta_{\ell+1}(Q_{\ell+1}^s-Q_{\ell}^s)^{\half}\bigr) ,\] then recursively define for $0 \le \ell \le k+1$, \[ X_\ell^s := \frac{1}{\gz_\ell} \log \E_{\ell+1} \exp(\gz_\ell X_{\ell+1}^s) .\] where $\E_{\ell+1}$ denotes expectation w.r.t.~$\eta_{\ell+1}$. 
The following theorem gives the Parisi formula in MSK model. \begin{thm} [{\cite{Pan15}*{Theorem 1}}] For the MSK model with positive-definite $\gD^2$, the limiting free energy is given by \begin{align}\label{par} \lim_{N \to \infty} F_N = \inf_{\mvgz,\vq}\mathscr{P}(\mvgz,\vq), \end{align} where \[ \sP(\mvgz, \vq) = \log 2+\sum_{s=1}^{m}\gl_s X_0^s - \frac{\gb^2}{2} \sum_{\ell=1}^{k+1} \gz_{\ell}(Q_{\ell+1}-Q_\ell) .\] is the Parisi functional and the $\inf$ in~\eqref{par} is taken over all the sequences in~\eqref{seq1} and~\eqref{seq2}. \end{thm} In the above variational formula~\eqref{par}, let $k=0$ and $q_1^s = q_s\in [0,1]$, then the Parisi functional on the RS regime simplifies to \begin{align*} \sP_{\RS}(\vq) & := \log 2 + \sum_{s=1}^m \gl_s \E\log \cosh(\gb \eta\sqrt{(\gD^{2}\gL\vq)_s}+h)+\frac{\gb^2}{4} (\vone-\vq)^{\T}\gL\gD^{2}\gL(\vone-\vq) \end{align*} where $\vone$ is the vector of all $1$'s and $\vq=(q_{1},q_{2},\ldots,q_{m})^{\T}$. Taking derivative w.r.t.~$q_t$ for $t=1, \ldots, m$ \begin{align}\label{sys-crit} \frac{\partial \sP_{\RS}}{\partial q_t} = \frac{\gb^2}{2}\gl_t \sum_{s=1}^m \gD_{s,t}^2\gl_s\left[q_s - \E \tanh^2(\gb \eta \sqrt{(\gD^{2}\gL\vq)_s} +h)\right] . \end{align} and setting it 0, we get the set of critical points (when $\gD^2$ is invertible) to be \begin{align}\label{crit set} \cC(\gb,h) = \left\{ \vq \in [0,1]^m \ \bigl|\ \E \tanh^2 (\gb\eta \sqrt{(\gD^{2}\gL\vq)_s}+h) = q_s \ \text{for} \ s= 1,2, \ldots m\right\}. \end{align} For the MSK model with positive-definite $\gD^2$, we define the Replica Symmetric solution as \begin{align}\label{RS} \RS(\gb,h):=\inf_{\vq \in [0,1]^m} \sP_{\RS}(\vq). \end{align} Note that $\sP_{\RS}$ is well-defined for \emph{indefinite}~$\gD^2$. However, one expects the replica symmetric solution to be achieved at a saddle point of $\sP_{\RS}$ instead of a minimizer, see~\cite{BGG11}*{Theorem~4}. The $\sP_{\RS}$ functional in the \emph{indefinite}~case seems to be not convex. 
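The functional $\sP_{\RS}$ and the critical-point equations defining $\cC(\gb,h)$ can be evaluated numerically, with the expectation over $\eta$ computed by Gauss--Hermite quadrature; a minimal sketch for $m=2$ (all parameter values are illustrative choices of ours):

```python
import numpy as np

# Gauss-Hermite quadrature for E f(eta), eta ~ N(0,1).
x, w = np.polynomial.hermite.hermgauss(60)

def gauss_mean(f):
    return np.sum(w * f(np.sqrt(2.0) * x)) / np.sqrt(np.pi)

Lam = np.diag([0.5, 0.5])                    # Lambda (species ratios), illustrative
Delta2 = np.array([[1.5, 1.0],
                   [1.0, 1.2]])              # a positive-definite Delta^2, illustrative
beta, h = 0.3, 0.2

def P_RS(q):
    # P_RS(q) = log 2 + sum_s lam_s E log cosh(beta*eta*sqrt((Delta^2 Lam q)_s) + h)
    #           + (beta^2/4) (1-q)^T Lam Delta^2 Lam (1-q)
    q = np.asarray(q, dtype=float)
    d = Delta2 @ Lam @ q
    ent = sum(Lam[s, s] * gauss_mean(
        lambda e, s=s: np.log(np.cosh(beta * e * np.sqrt(d[s]) + h)))
        for s in range(2))
    one = np.ones(2)
    return np.log(2) + ent + beta ** 2 / 4 * (one - q) @ Lam @ Delta2 @ Lam @ (one - q)

def crit_residual(q):
    # q_s - E tanh^2(beta*eta*sqrt((Delta^2 Lam q)_s) + h); vanishes on C(beta, h)
    q = np.asarray(q, dtype=float)
    d = Delta2 @ Lam @ q
    return np.array([q[s] - gauss_mean(
        lambda e, s=s: np.tanh(beta * e * np.sqrt(d[s]) + h) ** 2)
        for s in range(2)])
```

Minimizing `P_RS` over $[0,1]^m$ then gives a numerical value for $\RS(\gb,h)$ under these assumed parameters.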
We only use the $\RS(\gb,h)$ expression for proving the replica symmetry breaking in Theorem~\ref{AT line} for \emph{positive-definite}~case. Actually, in the RS region for \emph{positive-definite}~ $\gD^2$, the infimum should be achieved at the critical point in $\cC(\gb,h)$ instead of some boundary point. This fact can easily be partially checked by comparing $\RS(\gb,h)$ with the replica symmetric solution we obtained from the overlap concentration results in Theorem~\ref{RS solution}. However, to rigorously prove this for the whole region can be a nasty calculus problem. The other way is to prove the convexity of the $\sP_{\RS}$ functional, which is beyond the scope of the current paper. Therefore, we will assume that the infimum in~\eqref{RS} is not achieved at the boundary for Theorem~\ref{AT line}. The following results were proved in~\cite{BSS19} for $m=2$. \begin{thm} [{\cite{BSS19}*{Theorem 1.1--1.2}}]\label{2spec-res} Restricted to $m=2$ species model, under the assumption \begin{align}\label{lamda} \gD_{12}^2 =1, \gD_{11}^2\gD_{22}^2 >1 \ \text{and} \ \gl_1 \gD_{11}^2 \ge \gl_2 \gD_{22}^2 , \end{align} \begin{enumerate} [label=(\roman*)] \item If either $h>0$ or \begin{align}\label{con1} \gb^2 <\frac{2}{\gl_1 \gD_{11}^2 + \gl_2 \gD_{22}^2 + \sqrt{(\gl_1\gD_{11}^2-\gl_2\gD_{22}^2)^2+4\gl_1 \gl_2}} \end{align} then $\cC(\gb,h)=\{\vq_{\star}\}$ is a singleton. \item Assume $h>0$, let $\vq \in \cC(\gb,h)$, and $ \gc_s:= \E \sech^4(\gb \eta\sqrt{(\gD^{2}\gL\vq)_s}+h)$ for $s=1,2$. 
If \begin{align}\label{con2} \gb^2 > \frac{2}{\gl_1\gc_1 \gD_{11}^2 + \gl_2\gc_2 \gD_{22}^2 + \sqrt{(\gl_1\gc_1\gD_{11}^2-\gl_2\gc_2\gD_{22}^2)^2+4\gl_1\gl_2\gc_1 \gc_2}} \end{align} then \[\lim_{N\to \infty}F_N < \RS(\gb,h).\] \end{enumerate} \end{thm} Since the RHS of~\eqref{con1} reduces to the critical $\gb$ for $h=0$ in the SK model when $m=1$, and the RHS of~\eqref{con1} and~\eqref{con2} have a similar form, one can reasonably guess that~\eqref{con1} gives an RS regime of the MSK model and that~\eqref{con2} is the AT line condition. The authors in~\cite{BSS19} proved that~\eqref{con2} indeed gives an RSB phase for $m=2$ using the idea in~\cite{Tal11b}*{Chapter~13} by a 1-RSB perturbation of the Parisi formula, and the uniqueness of $\vq_{\star}$ is essential in the proof of Theorem~\ref{2spec-res} part (ii); without it, it is hard to give an accurate description of the AT line. In this paper, we prove the uniqueness of $\vq_{\star}$ in the general species case under a condition analogous to~\eqref{con1}. To fully generalize Theorem~\ref{2spec-res} to the $m>2$ case (\ie~incorporating the case $h>0$), one needs to analyze the uniqueness of the solution to a nonlinear system. Unfortunately, we are not able to prove that. However, assuming $\gD^2$ is \emph{positive-definite}, $\vq_{\star}$ is unique when $h>0$, and the infimum in~\eqref{RS} is not achieved at the boundary, we can prove the AT line condition for general $m$. The idea is still based on the perturbation technique of the Parisi formula. On the other hand, our variance analysis in Section~\ref{sec:cavity} and Section~\ref{sec:varoverlap} suggests the AT line condition should be true from the RS side when $\gD^2$ is \emph{positive-definite}. \subsection{Statement of the main results}\label{s1sec:main} First, by adapting Latala's argument, we prove that the MSK model is in the RS regime when $\gb < \gb_0$.
As noted in Section~\ref{sec:intro}, our argument holds for \emph{indefinite}~$\gD^2$ and proves the asymptotics of the free energy in the MSK model with \emph{indefinite}~$\gD^2$. To the best of our knowledge, this is the first result dealing with general \emph{indefinite}~$\gD^2$ in the Ising MSK model. For the rest of the article, we will assume the following: \begin{ass}\label{ass:0} Assume that $\gD^{2}$ is a symmetric and invertible $m\times m$ nonzero matrix with non-negative entries. However, it need not be positive-definite. \end{ass} Before stating the RS phase diagram results, we need to generalize the fact that the set $\cC(\gb,h)$ is a singleton from the $2$-species case~\cite{BSS19} to the general $m$-species case. We define \begin{align}\label{betac} \gb_{c}:=\rho(\gD^2 \gL)^{-\half}. \end{align} where $\rho(A)$ is the \emph{spectral radius}, \ie~the largest absolute value of the eigenvalues of $A$. In general, one has $\rho(A) \le \norm{A}$, but it is easy to check that $\rho(A)=\norm{A}$ for symmetric $A$. \begin{thm}[Uniqueness of solution]\label{thm0} Assume that $\gb < \gb_{c}$. Then $\cC(\gb,h)$ is a singleton set, \ie~the system \begin{align}\label{syseq} q_s = \E \tanh^2(\gb \eta \sqrt{(\gD^2\gL \vq)_s}+h),\qquad s=1,2,\ldots, m \end{align} has a unique solution, where $\eta\sim \N(0,1)$. \end{thm} \begin{rem} If we remove the invertibility of $\gD^2$ in Assumption~\ref{ass:0}, then one has to replace the equations~\eqref{syseq} by setting~\eqref{sys-crit} to 0, and uniqueness holds for the vector $\gD^{2}\gL\vq$. \end{rem} The proof is similar to that in the SK model, because the map on the right-hand side of~\eqref{syseq} is a contraction for small $\gb$. For large $\gb$, the contraction is no longer true. In the SK model with $h>0$, the Latala--Guerra lemma~\cite{Tal11a}*{Proposition 1.3.8} tells us that $\vq$ is still unique. The proof is based on the monotone property of a nonlinear function and the intermediate value theorem.
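The contraction property behind Theorem~\ref{thm0} can be observed numerically: for $\gb<\gb_c$, iterating the map defined by the right-hand side of~\eqref{syseq} converges to the same fixed point from different starting points. A minimal sketch for $m=2$ (parameter values are ours, chosen only for illustration):

```python
import numpy as np

x, w = np.polynomial.hermite.hermgauss(60)

def gauss_mean(f):
    # E f(eta) for eta ~ N(0,1), via Gauss-Hermite quadrature
    return np.sum(w * f(np.sqrt(2.0) * x)) / np.sqrt(np.pi)

Lam = np.diag([0.4, 0.6])                      # illustrative species ratios
Delta2 = np.array([[1.0, 0.8],
                   [0.8, 1.3]])                # illustrative interaction matrix
# beta_c = rho(Delta^2 Lambda)^{-1/2}, rho = spectral radius
beta_c = np.max(np.abs(np.linalg.eigvals(Delta2 @ Lam))) ** (-0.5)
beta, h = 0.6 * beta_c, 0.1                    # strictly inside beta < beta_c

def T(q):
    # The map q_s -> E tanh^2(beta * eta * sqrt((Delta^2 Lambda q)_s) + h)
    d = Delta2 @ Lam @ q
    return np.array([gauss_mean(
        lambda e, s=s: np.tanh(beta * e * np.sqrt(d[s]) + h) ** 2)
        for s in range(2)])

def fixed_point(q0, iters=200):
    q = np.array(q0, dtype=float)
    for _ in range(iters):
        q = T(q)
    return q

qa = fixed_point([0.0, 0.0])
qb = fixed_point([1.0, 1.0])
print(qa, np.max(np.abs(qa - qb)))   # both starting points reach the same q
```

The agreement of the two iterates is consistent with the uniqueness claim; it is of course only a numerical check for one parameter choice, not a proof.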
In the MSK model, the analog of the Latala--Guerra lemma is not obvious since we are dealing with a system of nonlinear equations~\eqref{syseq}. The authors in~\cite{BSS19} give a proof for the $m=2$ case using an elementary approach, but the idea is hard to generalize to $m \ge 3$, and their proof holds only in the \emph{positive-definite}~$\gD^2$ case. For a particular \emph{indefinite}~MSK model, the deep Boltzmann machine, the Latala--Guerra lemma has been extended to arbitrary depth with a Gaussian random field $h$~\cite{ACM21}. For \emph{indefinite}~$\gD^2$, the following Proposition~\ref{prop:indf-uniq} gives the uniqueness of $\vq_{\star}$ when $m=2$. \begin{prop}\label{prop:indf-uniq} For $m=2$, assume $\gD^2 = \begin{pmatrix}a & b \\c & d \end{pmatrix}$ is \emph{indefinite}. If $h>0$, then the system~\eqref{syseq} has a unique solution for all $\gb>0$. \end{prop} This proposition will be used to discuss the AT line for some bipartite SK models in Example~\ref{ex:RSB}. The proofs of Proposition~\ref{prop:indf-uniq} and Theorem~\ref{thm0} will be given in Section~\ref{sec:RSB}. Now we define the symmetric \emph{positive-definite}~matrix \begin{align}\label{V} \cV:=|\gL^{\sfrac12}\gD^2\gL^{\half}|, \end{align} obtained by taking absolute values of all the eigenvalues in the spectral decomposition of the symmetric matrix $\gL^{\sfrac12}\gD^2\gL^{\half}$. We also denote by \begin{align} \ovR_{12} := \vR_{12} -\vq \end{align} the centered overlap vector, where $\vq$ is the unique solution to~\eqref{syseq}. The following two theorems are about the overlap concentration. Here we use $\nu(\cdot) = \E \la \cdot \ra$ to denote expectation w.r.t.~the disorder and the Gibbs measure; the notation is collected in Section~\ref{notation}. \begin{thm}[Overlap concentration for general $h$]\label{thm1} Assume that $\gb < \gb_0:={\gb_c}/{\sqrt{4\ga}}$, where $\ga =\ga (\gD^{2}):= 1+\ind\{\gD^2 \text{ is indefinite}\}$.
For $2\gh<\gb_c^2-4\ga \gb^2 $, we have \begin{align*} \gn(\exp(\gh N\cP(\ovR_{12})))\le \det(I- (2\gh+4\ga \gb^2 ) \cV )^{-1/2} \end{align*} where \begin{align*} \cP(\vx):= \vx^{\T}\gL^{\half}\cV\gL^{\half}\vx. \end{align*} \end{thm} Theorem~\ref{thm1} says that the overlap vector $\vR_{12}$ concentrates around $\vq$ when $\gb< \gb_0$ for all $h \ge 0$; its proof is given in Section~\ref{sec2: latala} by adapting Latala's argument. Depending on the definiteness of $\gD^2$, we obtain two different RS regimes. The RS regime for \emph{indefinite}~$\gD^2$ has an extra factor because our control of the derivative of the interpolated Gibbs average in the proof is somewhat crude. Obtaining a sharper bound would require substantially more work; such an improvement is unknown even in the SK model. However, the next theorem tells us that if $h=0$, one can prove the concentration of the overlap up to $\gb_c$, even for \emph{indefinite}~$\gD^2$. \begin{thm}[Overlap Concentration for $h=0$]\label{ovp no h} If $h=0, \gb < \gb_c$, we have \begin{align} \nu\left(\exp\bigl(\frac18 (\gb_c^2-\gb^2)N \cP(\vR_{12})\bigr)\right) \le K \end{align} for some constant $K<\infty$ that does not depend on $N$. \end{thm} The proof of Theorem~\ref{ovp no h} is given in Section~\ref{concen-h0} and is based on a quadratic coupling argument~\cites{GT02b,Tal11b}. This theorem is expected to give the whole RS regime when $h=0$ (by comparison with the classical SK model), even for \emph{indefinite}~$\gD^2$. With the control of the overlap, we also prove the following. \begin{thm}[LLN and CLT for free energy]\label{RS solution} If $\gb < \gb_0, h\ge 0$, then \begin{align}\label{sol} \abs{F_N - \RS(\gb,h)} \le \frac K N , \end{align} where $K$ is a constant that does not depend on $N$.
Moreover, for $h>0$, we have \begin{align}\label{clt} N^{\half}\left( F_N - \RS(\gb,h) \right) \Rightarrow \N(0,b(\gb,h)) \quad \text{as} \quad N \to \infty, \end{align} where \begin{align*} \RS(\gb,h) & = \log 2 + \sum_{s=1}^m\gl_s \E \log \cosh(\gb \eta \sqrt{(\gD^2\gL \vq)_s} +h) + \frac{\gb^2}{4}\cQ(\vone -\vq), \\ b(\gb,h) & :=\sum_{s=1}^m \gl_{s}\var(\log\cosh(\gb \eta \sqrt{(\gD^2\gL \vq)_s}+h)) - \frac{\gb^2}2 \cQ(\vq), \end{align*} and $\cQ(\vx) := \vx^{\T}\gL\gD^2\gL \vx$ denotes the quadratic form of $\gL \gD^2\gL$. \end{thm} The first part of Theorem~\ref{RS solution} gives the RS solution of the MSK model for general $\gD^2$. It is easy to check that $\text{RS}(\gb,h)$ here has the same form as $\text{RS}(\gb,h)$ in~\eqref{RS} derived from the Parisi formula in the \emph{positive-definite}\ $\gD^{2}$ case. However, in the bipartite case the critical points in $\cC(\gb,h)$ are saddle points, so the formula in~\eqref{par} is no longer true. Nevertheless, a modified Parisi formula is likely to hold for \emph{indefinite}~$\gD^2$, at least in the RS regime (see Example~\ref{ex:RSB} for further discussion). The second part is a Central Limit Theorem for the free energy, which holds in the corresponding RS regime depending on $\gD^2$. The proof of Theorem~\ref{RS solution} is in Section~\ref{clt-free-engy}. For the CLT of the free energy in the classical SK model, see~\cites{GT02,ALR87,Tin05,CN95} and references therein. \begin{rem} The proof of the CLT for the free energy in the $\gb<\gb_c, h=0$ case is similar to that in the classical SK model; see~\cite{ALR87} and~\cite{Tal11b}*{Chapter 11}. In~\cite{ALR87}, the authors proved the CLT using a cluster expansion approach, and later~\cite{Tal11b} reproved it using the moment method. Both the cluster expansion approach and the moment method are highly nontrivial for the MSK model. However, the characteristic function approach used in our paper can be generalized to the MSK model with $\gb<\gb_{c}, h=0$.
Since we already have good control of the overlap in Theorem~\ref{ovp no h}, the next question is to determine the asymptotic mean and variance of the free energy. By the species-wise cavity approach developed in Sections~\ref{sec:cavity} and~\ref{sec:varoverlap}, we evaluate \begin{align} c_N(\gb) & := N\left(\log 2 +\frac 1 4 \gb^2 \cQ(\vone)\right) + \frac{1}{4} \log \det(I-\gb^2\gD^2\gL), \\ b(\gb) & :=\frac 1 2 \left(-\log \det(I-\gb^2\gD^2\gL)-\gb^2\tr(\gD^2\gL)\right). \end{align} The proof that $\log Z_N(\gb) - c_N(\gb) \Rightarrow \N(0,b(\gb))$ as $N \to \infty$ can then be carried over similarly as in~\cite{Tal11b}*{Chapter 11}, but we do not pursue that here. \end{rem} Next, we generalize the classical cavity approach to a species-wise cavity method to analyze the fluctuation of the overlap vector. We first choose a species $s \in \{1,2,\ldots,m\}$ with probability ${\gl_s}$, then select a spin uniformly in that species to be decoupled, and finally compare this decoupled system with the original one. The complete description is given in Section~\ref{sec:cavity}. In~\cite{Pan15}, Panchenko introduced another cavity idea when using the Aizenman--Sims--Starr scheme to prove the lower bound of the Parisi formula. It adds/decouples $k>1$ spins simultaneously, and those $k$ spins are distributed into the $m$ species according to the ratios stored in $\gL$. It might be possible to do the second-moment computation along with this cavity idea, but it would be more technically challenging since one needs to control $k>1$ spins. For $(\gb,h)$ in the RS region established in Theorems~\ref{thm1} and~\ref{ovp no h}, with $\vq$ being the unique solution to~\eqref{syseq}, define \[ \hat{q}_{s}:=\E\tanh^{4}(\gb\eta\sqrt{(\gD^{2}\gL\vq)_{s}}+h),\qquad s=1,2,\ldots,m.
\] Define the $m\times m$ diagonal matrices $\gC, \gC', \gC''$, whose $s$-th diagonal entries are respectively given by \begin{align*} \gc_s = 1-2q_{s}+\hat{q}_s & =\E\sech^{4}(\gb\eta\sqrt{(\gD^{2}\gL\vq)_{s}}+h), \\ \gc'_s = 1 -4q_{s}+3\hat{q}_s\qquad & \text{and} \qquad \gc''_s = 2q_s +q_s^2 -3\hat{q}_s \qquad \text{ for } s=1,2,\ldots,m. \end{align*} Note that $-1/3\le \gc_{s}'\le \gc_{s}\le 1$, and $\gc'_{s}$ can be negative. For large $h$, one can easily check that $\gc_{s}' < -\gc_{s}$. This fact will be used when we discuss the AT line condition in Example~\ref{ex:RSB}. Let \begin{align}\label{Ui} U(0):=\nu(\ovR_{12}\ovR_{34}^{\T}),\quad U(1):=\nu(\ovR_{12}\ovR_{13}^{\T})\text{ and }\quad U(2):=\nu(\ovR_{12}\ovR_{12}^{\T}) \end{align} denote the variance-covariance matrices of the overlap vectors. Note that all the matrices $U(i)$ are \emph{positive-definite}. Using the generalized cavity method we prove the following. \begin{thm}\label{thm:lyap} For $\gb< \gb_{0}$ or $\{\gb<\gb_{c}, h=0\}$, the matrices \begin{align*} \hat{U}(0) & := -3U(0)+2U(1), \\ \hat{U}(1) & := 3U(0)-4U(1)+U(2), \\ \hat{U}(2) & := U(0)-2U(1)+U(2) \end{align*} satisfy the following equations \begin{align}\label{lyap} \begin{split} \sym\bigl( \bigl(I-\gb^2\gC\gD^{2}\gL\bigr) \cdot \hat{U}(2)\bigr) &= \frac1N\cdot \gC\gL^{-1} + \mfR,\\ \sym\bigl( \bigl(I-\gb^2\gC'\gD^{2}\gL\bigr) \cdot \hat{U}(1)\bigr) &= \frac1N\cdot \gC'\gL^{-1} + \mfR,\\ \text{ and } \sym\bigl( \bigl(I-\gb^2\gC'\gD^{2}\gL\bigr) \cdot \hat{U}(0)\bigr) &= \gb^{2} \sym\bigl( \gC''\gD^{2}\gL\cdot \hat{U}(1)\bigr) + \frac1N\cdot \gC''\gL^{-1} + \mfR, \end{split} \end{align} where $\sym(A)=(A+A^{\T})/2$ and $\mfR$ is some $m\times m$ symmetric matrix with \[ \max_{1\le p,q\le m}|\mfR_{p,q}|\le K\biggl(N^{-3/2} + \sum_{s=1}^{m}\nu(\abs{R_{12}^s-q_s}^3)\biggr)\] for some constant $K$ independent of $N$.
\end{thm} Unfortunately, we do not have a nice interpretation of the coefficients appearing in the system of linear equations in Theorem~\ref{thm:lyap}. This is actually related to an open problem of Talagrand, who asked to identify the underlying algebraic structure (see~\cite[Research Problem 1.8.3]{Tal11a}). Before discussing Theorem~\ref{thm:lyap}, let us recall the definition of stable matrices. \begin{defn}\label{def:stable} A square matrix $A$ is \emph{stable} if all the eigenvalues of $A$ have strictly negative real part. \end{defn} Note that the linear equations in~\eqref{lyap} are examples of the continuous Lyapunov equation $AX+XA^{\T}+Q=0$, where $Q$ is a symmetric matrix. Such equations also appear in control theory (see \eg~\cite{AM07book}), where the existence and uniqueness of the solution are equivalent to the matrix $A$ being stable. It is an interesting question to connect the equations~\eqref{lyap} with an appropriate control problem. In our case, solving the system of equations given in~\eqref{lyap}, we get the asymptotic variance of the overlap. \begin{thm}[Asymptotic variance of overlap vector]\label{thm:varoverlap} For $\gb< \gb_{0}$ or $\{\gb<\gb_{c}, h=0\}$, we have \begin{align*} N \cdot U(i) \to \gS(i) \quad \text{as} \quad N \to \infty \end{align*} where $\gS(i), i=0,1,2$ satisfy the following \begin{align}\label{eq:solve-lya} \begin{split} \gS(0)-2\gS(1)+\gS(2) &= \gC(I - \gb^2\gD^{2}\gL\gC)^{-1}\gL^{-1},\\ 3\gS(0)-4\gS(1)+\gS(2) &= \gC'(I - \gb^2\gD^{2}\gL\gC')^{-1}\gL^{-1}. \end{split} \end{align} \end{thm} \begin{rem} $\gS(i), i=0,1,2$ also solve a third equation~\eqref{eq:var-overlap-3} besides~\eqref{eq:solve-lya}, but it does not possess a simple form; see the details in Section~\ref{sec:varoverlap}. \end{rem} The asymptotic variance of the overlap is thus given as the solution to the linear system in~\eqref{lyap}. Recall the definition of \emph{stable} matrices from Definition~\ref{def:stable}.
To get a unique solution from the system~\eqref{lyap}, one needs the matrices $\gb^2\gC\gD^{2}\gL-I, \gb^2\gC'\gD^{2}\gL-I$ to be stable, \ie\ \begin{align}\label{cond1:solve sys} \gb^2\max_{\gl \in \text{spec}(\gC\gD^2\gL)} \Re(\gl)<1 \quad \text{and} \quad \gb^2\max_{\gl \in \text{spec}(\gC'\gD^2\gL)} \Re(\gl) <1. \end{align} Note that $\gc_s >0 $ for all $s$ and $\gC\gD^2\gL$ is similar to a symmetric matrix. Thus all eigenvalues of $\gC\gD^2\gL$ are real, and by the Perron--Frobenius theorem \[ \max_{\gl \in \text{spec}(\gC\gD^2\gL)} \Re(\gl)=\rho(\gC\gD^2\gL), \] so the condition~\eqref{cond1:solve sys} is equivalent to \begin{align}\label{cond2:solve sys} \gb^2\max\{\rho(\gC\gD^2\gL),\max_{\gl\in \text{spec}(\gC'\gD^2\gL)} \Re(\gl)\}<1. \end{align} For \emph{positive-definite}~$\gD^2$, the matrix $\gC'\gD^2\gL$ is also similar to a symmetric matrix and has real eigenvalues. Moreover, using $\gc'_s \le \gc_s $, one gets \begin{align}\label{cond3:psd solving} \max_{\gl\in \text{spec}(\gC'\gD^2\gL)} \Re(\gl) \le \rho(\gC\gD^2\gL). \end{align} Thus the results in Theorem~\ref{thm:varoverlap} cannot hold unless $\gb^2<\rho(\gC\gD^2\gL)^{-1}$. This indicates that $\gb^2 \rho(\gC\gD^2\gL)=1$ is the AT line condition from the RS side for \emph{positive-definite}~$\gD^{2}$. Rigorously proving that $\gb^2< \rho(\gC\gD^2\gL)^{-1}$ gives the RS phase is still open even in the classical SK model. In~\cite{JT17}, the authors proved it except for a bounded region close to the phase boundary by using the Parisi formula. Recently, Bolthausen~\cites{Bol14,Bol19} developed an iterative TAP approach to the SK model, where he also discussed some possible ideas to prove $\lim_{N\to \infty} F_N = \text{RS}(\gb,h)$ below the AT line. For $h=0$, when $\gb^2< \gb_{c}^{2}=\rho(\gD^2\gL)^{-1}$ and $\gD^{2}$ is general, it is easy to check that condition~\eqref{cond1:solve sys} is satisfied; more details can be found in Section~\ref{sec:varoverlap}.
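The closed forms in Theorem~\ref{thm:varoverlap} can be sanity-checked numerically: since $(I-\gb^2\gC\gD^2\gL)\gC = \gC(I-\gb^2\gD^2\gL\gC)$, the candidate $X=\gC(I-\gb^2\gD^2\gL\gC)^{-1}\gL^{-1}$ satisfies $(I-\gb^2\gC\gD^2\gL)X=\gC\gL^{-1}$ exactly, so it solves the $N\to\infty$ version of the first equation in~\eqref{lyap}. A minimal sketch, with all matrices and parameters hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 3  # hypothetical number of species

# Hypothetical parameters: Delta2 symmetric with non-negative entries,
# Lam the species proportions, Gamma diagonal with entries in (0, 1],
# and beta small enough that I - beta^2 Gamma Delta2 Lam is invertible.
raw = rng.uniform(0.0, 1.0, size=(m, m))
Delta2 = (raw + raw.T) / 2
Lam = np.diag(rng.dirichlet(np.ones(m)))
Gamma = np.diag(rng.uniform(0.2, 1.0, size=m))
beta = 0.3

A = np.eye(m) - beta**2 * Gamma @ Delta2 @ Lam
# Candidate limit of N * (U(0) - 2U(1) + U(2)) from the theorem:
X = Gamma @ np.linalg.inv(np.eye(m) - beta**2 * Delta2 @ Lam @ Gamma) @ np.linalg.inv(Lam)

# N -> infinity version of the first equation in (lyap):
# sym(A X) should equal Gamma Lam^{-1}.
lhs = (A @ X + (A @ X).T) / 2
rhs = Gamma @ np.linalg.inv(Lam)
print(np.max(np.abs(lhs - rhs)))  # ~ 0 up to rounding
```

The same check applies to the second equation with $\gC$ replaced by $\gC'$, provided $\gb^2\gC'\gD^2\gL-I$ is stable.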
The following Theorem~\ref{AT line} states the RSB condition when $\gD^2$ is \emph{positive-definite}, under the assumption that $\cC(\gb,h)$ is a singleton for $h>0$; without this, it is not easy to give an accurate description of the AT line. We assume $\gD^2$ is \emph{positive-definite}, as the proof is based on a perturbation of the Parisi formula, which is known only in that case~\cite{Pan15}. Recall that we also need to assume that the infimum in~\eqref{RS} can only be achieved in $\cC(\gb,h)$ and not on the boundary. \begin{thm}[RSB condition]\label{AT line} Assume that $\gD^2$ is \emph{positive-definite}~and for some $\gb,h>0$, $\cC(\gb,h)$ is a singleton. We further assume that the infimum in~\eqref{RS} is not achieved on the boundary. Let $\vq$ be the unique critical point in $\cC(\gb,h)$. Define $ \gC := \diag(\gc_1, \gc_2, \ldots, \gc_m) $ where $ \gc_s := \E \sech^4(\gb \eta\sqrt{Q^s}+h) \ \text{for} \ s=1, 2,\ldots, m. $ If $ \gb>\rho(\gC\gD^2\gL)^{-\half} $, then \[ \lim_{N\to \infty}F_N < \RS(\gb,h). \] \end{thm} Note that \begin{align}\label{ATline} \gb_{\textsc{AT}}(h):=\rho(\gC\gD^2\gL)^{-\half} \end{align} appears to be the AT line in the MSK model for \emph{positive-definite}~$\gD^{2}$, and when $h=0$, $\gb_{\textsc{AT}}(0) = \gb_c$ recovers the critical temperature defined in~\eqref{betac}; see the left phase diagram of Figure~\ref{fig:phase}. When $h=0, \gb>\gb_c$, there seems to be at least one non-zero solution in $\cC(\gb,0)$; however, proving RSB by the perturbation argument is still technically challenging. For the general MSK model, here is a class of examples with a non-zero solution for $h=0,\gb>\gb_{c}$. In the classical SK model, one can easily check that for $h=0,\gb>1$, there exists a non-zero solution $\hat{q}(\gb)\in (0,1)$ to the equation $q=\E \tanh^2(\gb\eta \sqrt{q})$. Take $\gD^2, \gL$ such that $\vone$ is an eigenvector of $\gD^{2}\gL$ with associated eigenvalue $\theta$.
Then $\gb_c = \rho(\gD^2\gL)^{-\half} = \theta^{-\half}$, and for $\gb>\gb_c$ we have $ \gb \sqrt{\theta}>1$. It follows that $\vq=\hat{q}(\gb \sqrt{\theta})\vone$ is a non-zero solution to the equations \begin{align*} q_s= \E \tanh^2(\gb\eta \sqrt{(\gD^{2}\gL\vq)_s}) \text{ for } s=1,2,\ldots, m. \end{align*} Finally, we point out that the AT line condition for \emph{indefinite}~$\gD^2$ seems to be more complicated than in the \emph{positive-definite}~case, because for \emph{indefinite}~$\gD^2$ the inequality~\eqref{cond3:psd solving} can fail depending on the values of $(\gb,h)$. For fixed $\gb>\gb_c$ and large $h$, we observe $\max_{\gl\in \text{spec}(\gC'\gD^2\gL)} \Re(\gl) \ge \rho(\gC\gD^2\gL)$ in some examples. However, it is not obvious whether in the \emph{indefinite}~case the AT line is given by $\gb^2\rho(\gC\gD^2\gL)=1$ (see the right picture in Figure~\ref{fig:phase}). \begin{figure}[ht] \centering \includegraphics[scale=0.75]{combine_phase_v1-3.pdf} \caption{Phase diagram of the MSK model: the left and right panels show the expected diagrams for \emph{positive-definite}~and \emph{indefinite}~$\gD^2$, respectively. The green shaded region corresponds to the RS regime proved in Theorems~\ref{thm1} and~\ref{ovp no h}. The expected RSB phase is labelled in both cases. In both pictures, the red curve and blue curve correspond to $\gb^2 \rho(\gC\gD^2\gL)=1$ and $\gb^2\max_{\gl\in \text{spec}(\gC'\gD^2\gL)} \Re(\gl)=1$ respectively. The right picture for \emph{indefinite}~$\gD^2$ is based on evidence from some examples; see Example~\ref{ex:RSB}.} \label{fig:phase} \end{figure} \begin{ex}\label{ex:RSB} Take $\gL = \begin{pmatrix} 1/2 & 0 \\ 0 & 1/2 \end{pmatrix} $ and $\gD^2 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$. This is the bipartite SK model, an example of an \emph{indefinite}~MSK model.
For $h>0$, by Proposition~\ref{prop:indf-uniq}, the system \[ \begin{cases} q_1 = \E \tanh^2(\gb \eta \sqrt{q_2/2} +h) \\ q_2 = \E \tanh^2(\gb \eta \sqrt{q_1/2} +h) \end{cases} \] has a unique solution $q_1=q_2 = q$, where $q= \E \tanh^2(\gb \eta \sqrt{\frac{q}{2}}+h)$. Similarly, $\gc_1 = \gc_2 = \gamma$ and $\gc_1' = \gc_2' =\gamma'$. Then we have \begin{align*} & \rho(\gC\gD^2\gL) = \frac{\gamma}{2} \quad \text{and} \quad \max_{\gl\in \text{spec}(\gC'\gD^2\gL)} \Re(\gl)= \rho(\gC'\gD^2\gL) =\frac{\abs{\gamma'}}{2}. \end{align*} We know that $\gc \ge \gc'$ and $\gc>0$ always hold, but $\abs{\gamma'} \le \gamma$ is not always true: it depends on the strength of the external field $h$, and it fails for large $h$. The suggested AT line condition is $\gb^2 \max\{\gc, |\gc'|\}/2<1$. For the functional $\sP_{\RS}(\vq)$, the Hessian w.r.t.~$\vq$ is a positive multiple of $$H= \gD^2 - \gb^2 \gL\gC' = \begin{pmatrix} - \gb^2 \gc'/2 & 1 \\ 1 & - \gb^2\gc'/2 \end{pmatrix}.$$ $H$ is an indefinite matrix in the replica symmetric region, as the eigenvalues of $H$ are given by $ - \gb^2 \gc'/2 \pm 1 $ and the RS region seems to be $ \beta^2 \max( \gamma, |\gamma'| ) /2 < 1 $. So, although the Parisi formula evaluated at the stationary point gives the limit of the free energy (proved for small $\beta$), the stationary point is not a minimizer of the Parisi functional, in contrast to the \emph{positive-definite}~case. \end{ex} In general, we conjecture the RSB condition for \emph{indefinite}~$\gD^2$ as follows. \begin{conj}\label{AT conj} For general indefinite $\gD^2$, if $$\gb^2\max\{\rho(\gC\gD^2\gL),\max_{\gl\in \text{spec}(\gC'\gD^2\gL)} \Re(\gl)\}>1,$$ the system is in the RSB phase. \end{conj} The above conjecture seems far from resolution, since the classical approach is based on a perturbation of the Parisi formula, and the Parisi formula in the general \emph{indefinite}~case is still a mystery. Besides that, we also need to understand the solution to~\eqref{syseq} for \emph{indefinite}~$\gD^2$.
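The claim that $\abs{\gamma'}\le\gamma$ fails for large $h$ in this bipartite example can be checked numerically. The sketch below (the parameters $\gb=1$, $h=5$ are hypothetical choices) solves $q=\E\tanh^2(\gb\eta\sqrt{q/2}+h)$ by quadrature and fixed-point iteration, computes $\gamma,\gamma'$, and verifies $\rho(\gC\gD^2\gL)=\gamma/2$:

```python
import numpy as np

# Bipartite SK example (Example ex:RSB): Delta2 = [[0,1],[1,0]],
# Lam = diag(1/2, 1/2).  Parameters below are hypothetical, chosen to
# illustrate the large-h regime where |gamma'| > gamma.
x, w = np.polynomial.hermite.hermgauss(80)
nodes, weights = np.sqrt(2.0) * x, w / np.sqrt(np.pi)

def E(f):
    """E f(eta) for eta ~ N(0,1), via Gauss-Hermite quadrature."""
    return weights @ f(nodes)

beta, h = 1.0, 5.0  # strong external field

# Solve q = E tanh^2(beta*eta*sqrt(q/2) + h) by fixed-point iteration.
q = 0.5
for _ in range(500):
    q = E(lambda t: np.tanh(beta * t * np.sqrt(q / 2) + h) ** 2)

qhat = E(lambda t: np.tanh(beta * t * np.sqrt(q / 2) + h) ** 4)
gamma = 1 - 2 * q + qhat        # gamma_s = E sech^4(...)
gammap = 1 - 4 * q + 3 * qhat   # gamma'_s, negative for large h

# Spectral check: rho(Gamma Delta2 Lam) = gamma/2 for this model.
Delta2 = np.array([[0.0, 1.0], [1.0, 0.0]])
Lam = np.diag([0.5, 0.5])
rho = np.max(np.abs(np.linalg.eigvals(gamma * np.eye(2) @ Delta2 @ Lam)))

print(gamma, gammap, rho)  # here gamma' < 0 and |gamma'| > gamma
```

At this hypothetical $(\gb,h)$ the threshold in the suggested AT condition is driven by $\abs{\gamma'}$ rather than $\gamma$, in line with the discussion above.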
To prove the AT line condition from the RS side for \emph{indefinite}~$\gD^2$ is even more challenging. To illustrate the generality and consistency of our results, we now recover some models studied in other papers as particular cases of our setting. \begin{ex} Take $\gL = \begin{pmatrix} \gl_1 & 0 \\ 0 & \gl_2 \end{pmatrix} $ and $\gD^2$ as in~\eqref{lamda}, then $$ \gb_c^{-2}= \rho(\gD^2 \gL) =\frac12\left(\gl_1 \gD_{11}^2 + \gl_2 \gD_{22}^2 + \sqrt{(\gl_1\gD_{11}^2-\gl_2\gD_{22}^2)^2+4\gl_1 \gl_2}\right). $$ Similarly, one can check $$ \gb_{\textsc{AT}}(h)^{-2} = \rho(\gC\gD^2\gL)= \frac12\left(\gl_1\gc_1 \gD_{11}^2 + \gl_2\gc_2 \gD_{22}^2 + \sqrt{(\gl_1\gc_1\gD_{11}^2-\gl_2\gc_2\gD_{22}^2)^2+4\gl_1\gl_2\gc_1 \gc_2}\right). $$ Theorems~\ref{thm0} and~\ref{AT line} thus extend the critical temperature~\eqref{con1} and the AT line condition~\eqref{con2} to the general MSK model. \end{ex} \begin{ex} Take $\gL = \begin{pmatrix} \gl_1 & 0 \\ 0 & \gl_2 \end{pmatrix} $ and $\gD^2 = \begin{pmatrix} 0 & 1 \\ \theta^{2} & 0 \end{pmatrix}$. For $\theta=1$, this corresponds to the bipartite SK model. In~\cite{BL17} the authors study the bipartite spherical SK (BSSK) model with $h=0$ and compute $\gb_c = (\gl_1\gl_2)^{-1/4}$. In our case, we have $\gb_c = \rho(\gD^2\gL)^{-\half}$ from~\eqref{betac}, and a simple computation gives $\rho(\gD^2\gL)^{-\half}=(\gl_1\gl_2\theta^{2})^{-1/4}$, which agrees with $\gb_c$ in~\cite{BL17}. \end{ex} \subsection{Notations} \label{notation} \begin{enumerate} \item $\rho(A)$ is the \emph{spectral radius}, \ie~the largest absolute value of the eigenvalues of $A$. \item $\norm{A}$ is the operator norm of $A$. \item $\gD^2 = (\!(\gD_{s,t}^2)\!)_{s,t=1}^m$ is the covariance matrix of the Gaussian disorder. \item $\gL:=\diag(\gl_1,\gl_2,\ldots, \gl_m)$ is the diagonal matrix whose diagonal entries are the proportions of spins in each species.
\item $\cV:=|\gL^{\sfrac12}\gD^2\gL^{\half}|$, obtained by taking absolute values of all the eigenvalues in the spectral decomposition of the matrix $\gL^{\sfrac12}\gD^2\gL^{\half}$. \item $\vR_{12} = (R^{(1)}_{12}, R^{(2)}_{12}, \ldots, R^{(m)}_{12})$ denotes the overlap vector. \item $\ovR_{12} := \vR_{12} -\vq$ is the centered overlap vector, and $\vq$ is defined through~\eqref{syseq}. \item $Q^s :=(\gD^2\gL\vq)_s \text{ for } s=1,2, \ldots, m$. \item $ \gC := \diag(\gc_1,\gc_{2}, \ldots, \gc_m)$ where $\gc_s := \E \sech^4(\gb \eta\sqrt{Q^s}+h) \text{ for } s=1,2, \ldots, m$. \item $\la f(\mvgs) \ra := \sum_{\mvgs\in \gS_N} f(\mvgs ) \cdot {\exp(H_N(\mvgs))}/{Z_N} $ denotes the Gibbs average of a function $f$ on $\gS_N$. \item $\gn_t(\cdot)$ denotes the expectation w.r.t.~the Gibbs randomness with the interpolated Hamiltonian and the disorder. \item $\cQ(\vx) = \vx^{\T}\gL\gD^2\gL \vx$ denotes the quadratic form associated with the symmetric matrix $\gL\gD^2\gL$. For a general symmetric matrix $A$, we will use $\cQ(\vx,A)$ for $\vx^\T A \vx$. We also write $\cP(\vx)=\cQ(\vx, \gL^{\half}\cV\gL^{\half})$. \end{enumerate} \subsection{Roadmap}\label{roadmap} The paper is structured as follows. Section~\ref{sec2: latala} mainly concerns the proof of Theorem~\ref{thm1}, where a smart path interpolation argument is applied to the MSK model to prove the concentration of the overlap at high temperature. The argument also holds for indefinite $\gD^2$, though in that case the concentration holds in a different regime. Section~\ref{clt-free-engy} mainly concerns the proof of Theorem~\ref{RS solution}, where we derive the RS solution of the MSK model and a Central Limit Theorem for the free energy with a non-zero external field. In Section~\ref{concen-h0}, we study the MSK model without an external field, where the proof of Theorem~\ref{ovp no h} is presented.
A Central Limit Theorem for the free energy with zero external field can also be proved in a similar way as in~\cite{ALR87}, but we do not pursue this direction here. In Section~\ref{sec:cavity}, we develop a generalized cavity method to study the MSK model. Using second-moment computations, we derive a linear system for the overlap functions and solve it by linear algebraic methods to get the variance-covariance structure of the overlap vectors in Section~\ref{sec:varoverlap}. In short, Sections~\ref{sec:cavity} and~\ref{sec:varoverlap} contain the proofs of Theorems~\ref{thm:lyap} and~\ref{thm:varoverlap}, respectively. In Section~\ref{sec:RSB}, for the \emph{positive-definite}~MSK model, we give the proof of Theorem~\ref{AT line} using the perturbation argument for the Parisi formula. Finally, Section~\ref{sec:openqn} contains a discussion and some further questions. \section{Concentration of overlap in MSK}\label{sec2: latala} Latala's argument~\cite{La02} was originally used in the SK model to prove the concentration of the overlap in part of the RS regime. The idea is based on Guerra's interpolation between two different spin glass models: one is the fully decoupled model, \ie~the associated Gibbs measure is a product measure, while the other is the SK model of interest. Given the concentration of the overlap in the decoupled model, one obtains the concentration of the overlap in the SK model by controlling the derivative along the interpolation. For the details of this classical story, see Sections 1.3 and 1.4 in~\cite{Tal11a}. This section employs a similar idea in the MSK model to prove the concentration of the overlap vectors. With this concentration result, we also prove a part of the RS phase diagram.
Following Guerra's interpolation, given two independent centered Gaussian vectors $\mvu=(u_{\mvgs}), \mvv=(v_{\mvgs})$ indexed by $\mvgs\in\gS_{N}$, consider the interpolation $u_{\mvgs}(t) = \sqrt{t}\cdot u_{\mvgs}+\sqrt{1-t}\cdot v_{\mvgs}$ for $t \in [0,1]$. Suppose $F$ is a twice differentiable function for which Gaussian integration by parts is valid. Taking $\phi(t) := \E F(\mvu(t))$ and using Gaussian integration by parts, we get the following. \begin{prop} For $\mvgs, \mvgt \in \gS_N$, \[ \phi^{\prime}(t) = \sum_{\mvgs,\mvgt}U(\mvgs,\mvgt) \E\left( \frac{\partial^2F}{\partial x_{\mvgs}\partial x_{\mvgt}}(\mvu(t))\right) \] where $U(\mvgs,\mvgt) := \frac{1}{2}(\E u_{\mvgs} u_{\mvgt} -\E v_{\mvgs} v_{\mvgt} ) $. \end{prop} Given $(\eta_{i})_{i=1,2,\ldots,N}$ i.i.d.~standard Gaussian random variables independent of $(g_{ij})_{1\le i<j\le N}$, we take \[ u_{\mvgs} = \frac{\beta}{\sqrt{N}} \sum_{i<j}g_{ij}\gs_i\gs_j, \quad v_{\mvgs} =\beta \sum_{s=1}^{m}\sqrt{(\gD^{2}\gL\vq)_{s}}\cdot \sum_{i \in I_s}\eta_{i} \gs_i, \quad w_{\mvgs} = \exp\bigl(h\sum_{i}\gs_i\bigr) \] and $F(\mvx) = \frac1N \log Z_N$ to be the free energy, where $Z_N=\sum_{\mvgs}w_{\mvgs} e^{x_{\mvgs}}$. In this case, the interpolated Hamiltonian is \begin{align}\label{inter-Hamilt} H_{N,t}(\mvgs) := u_t(\mvgs)= \sqrt{t}u_{\mvgs} + \sqrt{1-t}v_{\mvgs}. \end{align} Moreover, we have \begin{align*} \E u_{\mvgs} u_{\mvgt} & = \frac{N\beta^2}{2}\left(\sum_{s,t}\lambda_s R^s\gD_{s,t}^2\lambda_tR^t-\frac{1}{N}\sum_s\gl_s\gD_{s,s}^2\right), \\ \E v_{\mvgs} v_{\mvgt} & = N\beta^2\sum_s \gl_s R^s Q^s = N\beta^2\sum_{s,t} \gl_s R^s \gD_{s,t}^2 \gl_{t}q_{t}. \end{align*} In particular, \begin{align*} U(\mvgs,\mvgt) & = \frac{1}{2}(\E u_{\mvgs} u_{\mvgt} -\E v_{\mvgs} v_{\mvgt}) = \frac {N\beta^2} 4 \left(\cQ(\ovR)- \cQ(\vq) -\frac{1}{N}\sum_s \gl_s\gD_{s,s}^2\right) \\ \text{ and } U(\mvgs, \mvgs) & =\frac {N\beta^2}4 \left(\cQ(\vone-\vq)- \cQ(\vq) -\frac{1}{N}\sum_s \gl_s\gD_{s,s}^2\right).
\end{align*} By a simple computation, we have \[ \frac{\partial F}{\partial x_{\mvgs}} = \frac1N\cdot w_{\mvgs} e^{x_{\mvgs}}/Z_N, \quad \frac{\partial^2 F}{\partial x_{\mvgs} \partial x_{\mvgt}} = \frac1N\cdot (\ind_{\mvgs=\mvgt} - w_{\mvgt} e^{x_{\mvgt}}/Z_N)\cdot w_{\mvgs} e^{x_{\mvgs}}/Z_N \] and we use this to rewrite the formula for $\phi^{\prime}(t)$: \begin{align}\label{free-ener deriv} \phi^{\prime}(t) & = \frac{1}{N} \E(\la U(\mvgs,\mvgs) \ra_t - \la U(\mvgs,\mvgt) \ra_t) = \frac{\gb^2}{4}\left(\cQ(\vone-\vq) - \gn_t(\cQ(\ovR_{12}))\right). \end{align} In the following, we will prove that $\vR_{12}$ concentrates around $\vq$ in some high temperature regime, \ie\ the term $\gn_t(\cQ(\ovR_{12}))$ is very small; this concentration property will enable us to derive the RS solution of the MSK model. First, we prove that the quantity $\gn_t(\cQ(\ovR_{12}))$ is small. The basic idea is to show that a certain exponential moment is non-increasing in $t$; then, by controlling $\gn_0(\cQ(\ovR_{12}))$, we can control $\gn(\cQ(\ovR_{12}))$ at the other end of the interpolation path. To prove Theorem~\ref{thm1}, we need to study the properties of $\gn_t^{\prime}$, \ie~the derivative of $\gn_t$ with respect to $t$. The following lemma gives an expression for $\gn_t^{\prime}$. \begin{lem} [{~\cite{Tal11a}*{Page~33}}] If $f$ is a function defined on $\gS_N^n$, then \begin{align*} \gn_t^{\prime}(f) & = \sum_{1 \le l,l^\prime \le n} \gn_t(U(\mvgs^l,\mvgs^{l^{\prime}})f) + n(n+1)\gn_t(U(\mvgs^{n+1},\mvgs^{n+2})f) \\ & \qquad\qquad\qquad-2n \sum_{l \le n} \gn_t(U(\mvgs^l,\mvgs^{n+1})f) - n\gn_t(U(\mvgs^{n+1},\mvgs^{n+1})f).
\end{align*} \end{lem} Note that $\gn_t(U(\mvgs^l,\mvgs^{l})f) =\gn_t(U(\mvgs^{n+1},\mvgs^{n+1})f) $, so the above formula can be simplified one step further as follows: \begin{align}\label{eq:gen-derivative} \gn_t^{\prime}(f) & = \sum_{1 \le l <l^\prime \le n} 2\gn_t(U(\mvgs^l,\mvgs^{l^{\prime}})f) + n(n+1)\gn_t(U(\mvgs^{n+1},\mvgs^{n+2})f) -2n \sum_{l \le n} \gn_t(U(\mvgs^l,\mvgs^{n+1})f). \end{align} When $n=2$, this simplifies to \begin{align}\label{derivative} \gn_t^{\prime}(f) & = \frac{N\gb^2}{2}\bigl(\gn_t(\cQ(\ovR_{12})f) - 2\sum_{l \le 2}\gn_t(\cQ(\ovR_{l,3})f) + 3 \gn_t(\cQ(\ovR_{34})f)\bigr). \end{align} Next, we prove a useful lemma comparing the quadratic forms of the overlaps under $\gn_t(\cdot)$. \begin{lem}\label{holder lem} For $\gr >0$ and $\ell, \ell' \in \bN$, we have \begin{align*} \abs{ \gn_t (\cQ(\ovR_{\ell\ell'})\exp(\gr N\cP( \ovR_{12}))) } & \le \gn_t(\cP(\ovR_{12})\exp(\gr N\cP( \ovR_{12}))) \end{align*} where $\cP(\vx) =\vx^{\T}\gL^{\half}\cV\gL^{\half}\vx$. \end{lem} \begin{proof} [Proof of Lemma~\ref{holder lem}.] Let $i:= \abs{\{\ell,\ell'\} \cap \{1,2\}}$; we prove the above inequality case by case. For $i=2$, we need to prove \[ \gn_t(\cQ(\ovR_{12})\exp(\gr N\cP( \ovR_{12}))) \le \gn_t(\cP(\ovR_{12})\exp(\gr N\cP( \ovR_{12}))), \] which follows immediately from the fact that $|\cQ(\vx)|\le \cP(\vx)$. The cases $i=1$ and $i=0$ can be proved by H\"older inequalities; we provide a proof for the case $i=1$.
First, the general form of the H\"older inequality w.r.t.~$\gn_t$ is \[ \gn_t(f_1f_2) \le \gn_t(f_1^{\gt_1})^{1/\gt_1}\gn_t(f_2^{\gt_2})^{1/\gt_2} \quad \text{for} \quad f_1, f_2 \ge 0, \quad \frac{1}{\gt_1}+ \frac{1}{\gt_2} =1. \] Without loss of generality, we assume $\ell=1$ and $\ell'=3$; then one needs to prove \[ \abs{\gn_t(\cQ(\ovR_{13})\exp(\gr N\cP( \ovR_{12})))} \le \gn_t(\cP(\ovR_{12})\exp(\gr N\cP( \ovR_{12}))). \] Note that \begin{align}\label{exp expans} \abs{\gn_t(\cQ(\ovR_{13})\exp(\gr N\cP( \ovR_{12}))) } & \le \gn_t(\cP(\ovR_{13})\exp(\gr N\cP( \ovR_{12})))\notag \\ & = \sum_k \frac{(\gr N)^k}{k!}\gn_t(\cP(\ovR_{13})\cP( \ovR_{12})^k). \end{align} Applying the H\"{o}lder inequality with $\gt_1 = k+1, \gt_2 = (k+1)/k$, we get \begin{align*} \gn_t(\cP(\ovR_{13})\cdot \cP( \ovR_{12})^k) & \le \gn_t(\cP(\ovR_{13})^{k+1})^{1/(k+1)} \cdot \gn_t(\cP( \ovR_{12})^{k+1})^{k/(k+1)} = \gn_t(\cP(\ovR_{12})^{k+1}), \end{align*} where the last equality is by the symmetry among replicas. Combining this with the exponential expansion~\eqref{exp expans}, we have proved the case $i=1$. The case $i=0$ follows in a similar fashion. \end{proof} \begin{rem}\label{non-neg remark} In Lemma~\ref{holder lem}, if $\gD^2$ is non-negative definite, then $\cP(\ovR_{12}) =\cQ(\ovR_{12})$; thus the analogous inequalities of Lemma~\ref{holder lem} also hold with $\cP(\ovR_{12})$ replaced by $\cQ(\ovR_{12})$, and the proof is similar but simpler. \end{rem} \begin{cor}\label{derivat bound} If $\gk >0 $, then \begin{align*} \gn_t^{\prime}(\exp(\gk N \cP(\ovR_{12}))) \le 2 \ga N\gb^2\gn_t(\cP(\ovR_{12})\exp(\gk N \cP(\ovR_{12}))), \end{align*} where $\ga = 1+\ind\{\gD^2 \text{ is indefinite}\} $ as in Theorem~\ref{thm1}. \end{cor} \begin{proof} [Proof of Corollary~\ref{derivat bound}.] Combining Lemma~\ref{holder lem} with the expression~\eqref{derivative} for $\gn_t^{\prime}(f)$, where $f= \exp(\gk N \cP(\ovR_{12}))$, and using the symmetry among replicas, we argue as follows.
When $\gD^2$ is non-negative definite, as in Remark~\ref{non-neg remark}, $\cP(\ovR_{12}) =\cQ(\ovR_{12})$. In this case, by simply dropping the terms \[- 2\sum_{l \le 2}\gn_t(\cQ(\ovR_{l,3})f) \] in~\eqref{derivative} and applying the inequalities in Lemma~\ref{holder lem}, we get the upper bound $2N\gb^2\gn_t(\cP(\ovR_{12})\exp(\gk N \cP(\ovR_{12})))$ on $\gn_t^{\prime}(f)$. When $\gD^2$ is indefinite, we instead apply all the inequalities in Lemma~\ref{holder lem} to get the bound $ 4N\gb^2\gn_t(\cP(\ovR_{12})\exp(\gk N \cP(\ovR_{12})))$. \end{proof} \begin{cor}\label{monotone} For $t <\gr/(2\ga \gb^2)$, the function \[ t \mapsto \gn_t(\exp((\gr-2\ga t\gb^2) N \cP(\ovR_{12}))) \] is non-increasing. \end{cor} \begin{proof} [Proof of Corollary~\ref{monotone}.] Taking the derivative of $\gn_t(\exp((\gr-2\ga t\gb^2) N \cP(\ovR_{12})))$ w.r.t.~$t$, we get \begin{align*} \frac{d}{dt} & (\gn_t(\exp((\gr-2\ga t\gb^2) N \cP(\ovR_{12})))) \\ & =\gn_t^\prime(\exp((\gr-2\ga t\gb^2) N \cP(\ovR_{12}))) - 2\ga N\gb^2\gn_t(\cP(\ovR_{12})\exp((\gr-2\ga t\gb^2) N \cP(\ovR_{12}))) \le 0. \end{align*} The last step is due to Corollary~\ref{derivat bound}. \end{proof} Our goal is to control $\gn(\exp(\gk N \cP(\ovR_{12})))$. By the interpolation idea, we only need to obtain a bound for $\gn_0(\exp(\gk N \cP(\ovR_{12})))$, since the moment is non-increasing along the interpolation path by Corollary~\ref{monotone}. \begin{lem}\label{concen of decouple} For $\gk<1/(2\norm{\cV})$, we have \begin{align} \gn_0(\exp(\gk N \cP(\ovR_{12}))) \le \det(I-2\gk \cV)^{-1/2}. \end{align} \end{lem} \begin{proof} [Proof of Lemma~\ref{concen of decouple}.]
First, take a Gaussian vector $\mvu = \sqrt{2\gk} \gL^{\half} \cV^{\half} \mveta$, where $\mveta \sim \N(0,I_{m})$ is independent of the disorder and the Gibbs randomness; then \begin{align*} \gn_0(\exp(\gk N\cP(\ovR_{12}))) =\E \gn_0(\exp(\sqrt{N}\mvu^{\T} \ovR_{12})) & =\E\gn_0(\prod_{s=1}^m \exp(\sqrt{N}u_s(R^{s}-q_{s}))) \\ & = \E\gn_0\left(\prod_{s=1}^m \prod_{i\in I_s} \exp\bigl(\sqrt{N}(\gs_{i}^1\gs_{i}^2-q_{s}){u_s}/{|I_s|}\bigr)\right) \\ & \le \E \prod_{s=1}^m \exp(Nu_s^2/2|I_s|) = \E \exp(\mvu^{\T}\gL^{-1}\mvu/2), \end{align*} where the last inequality follows from the fact that (see~\cite{Tal11a}*{Page 39} for a proof) \[\gn_0(\exp u(\gs_i^1\gs_i^2-q)) = \exp(-qu)(\cosh(u)+q\sinh(u)) \le \exp(u^2/2).\] Finally we use the fact that $\E \exp(\mvu^{\T}\gL^{-1}\mvu/2) = \E\exp(\gk \mveta^{\T} \cV \mveta) = \det(I-2\gk\cV)^{-\half}$, and this completes the proof. \end{proof} Now we are ready to prove Theorem~\ref{thm1}. \subsection{Proof of Theorem~\ref{thm1}} Take $\gr = \gh +2\ga \gb^2<1/(2\norm{\cV})$; then \begin{align*} \gn_t(\exp((\gh+2\ga(1-t)\gb^2) N \cP(\ovR_{12}))) =\gn_t(\exp((\gr-2\ga t\gb^2) N \cP(\ovR_{12}))). \end{align*} By the non-increasing property of $\gn_t$ in Corollary~\ref{monotone}, we have \[ \gn_t(\exp((\gr-2\ga t\gb^2) N \cP(\ovR_{12}))) \le \gn_0(\exp(\gr N \cP(\ovR_{12}))) \quad \text{for} \quad t \in [0,1]. \] By Lemma~\ref{concen of decouple}, \[ \gn_0(\exp(\gr N \cP(\ovR_{12}))) \le \det(I-2\gr \cV)^{-1/2} =\det(I-2(\gh +2\ga \gb^2) \cV)^{-1/2}. \] Thus, \[ \gn_1(\exp(\gh N \cP(\ovR_{12}))) \le \det(I-2(\gh +2\ga \gb^2) \cV)^{-1/2}. \] This proves Theorem~\ref{thm1}. \hfill\qed \section{Asymptotic for the free energy}\label{clt-free-engy} In this section, we give the proof of Theorem~\ref{RS solution}. \subsection{Proof of Theorem~\ref{RS solution}} First, let us prove the RS solution~\eqref{sol}.
Recall the formula~\eqref{free-ener deriv} in Section~\ref{sec2: latala}, \[ \phi^{\prime}(t) = \frac{\gb^2}{4}\bigl(\cQ(\vone-\vq) - \gn_t(\cQ(\ovR_{12}))\bigr), \] which gives the derivative of the free energy associated with $H_{N,t}(\mvgs)$. By Theorem~\ref{thm1}, when $\gb^2 < \gb_0^2$, we have $\gn_t(\cQ(\ovR_{12})) \le \frac{K}{N}$ by Jensen's inequality, which implies that $\abs{\phi'(t) - \frac{\gb^2}{4}\cQ(\vone-\vq)} \le \frac{K}{N}$. Combining this with the fact that $\phi(1)- \phi(0) = \int_0^1 \phi'(t)\, dt$, we get \begin{align*} & \abs{\phi(1)-\phi(0) - \frac{\gb^2}{4}\cQ(\vone-\vq)} \le \frac{K}{N}, \end{align*} where $\phi(1) = F_N(\gb,h)$ and $\phi(0)$ is the free energy with Hamiltonian $H_{N,0}(\mvgs)$. A simple calculation based on the definition of the free energy gives $\phi(0) = \log 2+ \sum_{r=1}^m \gl_r \E\log \cosh(\gb \eta \sqrt{Q^{r}}+h)$, where $Q^{r}=(\gD^{2}\gL\vq)_r$ and the expectation is w.r.t.~$\eta \sim \N(0,1)$. Next, we prove the second part of Theorem~\ref{RS solution}, a quantitative Central Limit Theorem for the free energy with $h \neq 0$. Let \[ X_t = \frac{1}{\sqrt{N}} \bigl(\log Z_{N,t}- \E \log Z_{N,0} - \frac14N \gb^2 t\cdot \cQ(\vone-\vq)\bigr), \] where $Z_{N,t}$ is the partition function associated with the interpolated Hamiltonian $H_{N,t}(\mvgs)$ introduced in Section~\ref{sec2: latala}; specifically, \[ Z_{N,t} = \sum_{\mvgs} \exp \biggl( \frac{\gb}{\sqrt{N}}\sum_{i<j}g_{ij}\gs_i \gs_j \cdot \sqrt{t} + \gb \sum_{r=1}^m \sqrt{Q^r} \sum_{i \in I_r} \eta_i \gs_i \cdot \sqrt{1-t} + h \sum_i \gs_i\biggr). \] Our goal is to prove a Central Limit Theorem for $t=1$. Let us first introduce a lemma collecting the technical details of the computation. \begin{lem}\label{asy-lem1} Let $G_{t}(\mvgs):= \frac{\exp(H_{N,t}(\mvgs))}{Z_{N,t}}$, $\mvgs\in\{1,-1\}^{N}$, be the interpolated Gibbs measure.
For any bounded twice differentiable function $g \in C_b^2(\bR)$ and $t \in [0,1]$, we have \begin{align}\label{asy-eq0} \frac{d}{dt} \E g(X_t) = \frac{\gb^2}{4} \E \left( g''(X_t)\bigl(\la \cQ(\ovR_{12})\ra_t - \cQ(\vq)-N^{-1}\tr(\gD^{2}\gL)\bigr) - \sqrt{N} g'(X_t)\la \cQ(\ovR_{12}) \ra_t\right). \end{align} In particular, for $m(t,\theta):= \E e^{\i \theta X_t}$ the characteristic function of $X_t$, we have \begin{align}\label{sec3:eq0} \abs{\frac{\partial}{\partial t}m(t,\theta) - v^2\theta^2 m(t,\theta) } \le \frac{\gb^2 C}{4}\left(\frac{|\theta|}{\sqrt{N}} + \frac{2\theta^2}{N}\right) \text{ for all }\theta\in\dR, \end{align} where $C$ is an upper bound of $ \sup_{0\le t\le 1} N \nu_t(|\cQ(\ovR_{12})|)$ and $\tr(\gD^{2}\gL) $, and $v^2 :=\gb^2\cQ(\vq)/4$. \end{lem} \begin{proof} [Proof of Lemma~\ref{asy-lem1}.] For $g \in C_b^{2}(\bR)$, we evaluate the LHS directly, \begin{align*} & \frac{d}{dt} \E g(X_t) \\ & =\frac{1}{\sqrt{N}} \E g'(X_t) \biggl( \frac{\gb}{2 }\sum_{\mvgs} G_t(\mvgs) \biggl(\frac{1}{\sqrt{tN}}\sum_{i<j} g_{ij}\gs_i \gs_j - \frac{1}{\sqrt{1-t}} \sum_{r=1}^{m}\sqrt{Q^r} \sum_{i \in I_r}\eta_i \gs_i \biggr)- \frac{N\gb^2}{4} \cQ(\vone-\vq)\biggr). \end{align*} Notice that, for each fixed $\mvgs$, the quantity $\frac{1}{\sqrt{tN}}\sum_{i<j} g_{ij}\gs_i \gs_j - \frac{1}{\sqrt{1-t}} \sum_{r=1}^{m}\sqrt{Q^r} \sum_{i \in I_r}\eta_i \gs_i$ is a centered Gaussian random variable, and $X_t$ is a smooth function of the Gaussian disorder.
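For the reader's convenience, we recall the Gaussian integration by parts formula used in the next step: if $\vz=(z_1,\ldots,z_M)$ is a centered Gaussian vector and $F:\bR^M\to\bR$ is differentiable with moderate growth, then \[ \E[z_i F(\vz)] = \sum_{j=1}^M \E[z_i z_j]\, \E[\partial_{z_j} F(\vz)]. \] Here it is applied with $\vz$ the collection of disorder variables $(g_{ij})_{i<j}$ and $(\eta_i)_{i\le N}$.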
Applying Gaussian integration by parts, we get that the RHS is equal to \begin{align*} & \frac{\gb^2}{2N} \E \Bigg[ g''(X_t)\sum_{\mvgs,\mvgt}G_t(\mvgs) G_t(\mvgt) \left( \frac12 (N\cQ(\vR_{12})-\sum_{r=1}^m \gl_r \gD_{r,r}^2) - N \sum_{r,s}\gl_rR_{12}^r\gD_{r,s}^2\gl_sq_s \right) \Bigg] \\ & \qquad\qquad + \frac{\gb^2}{2\sqrt{N}} \E \Bigg[ g'(X_t)\Bigg(\sum_{\mvgs} G_t(\mvgs) \Big( \frac12 (N\cQ(\vone)-\sum_{r=1}^m \gl_r \gD_{r,r}^2) - N \sum_{r,s}\gl_r\gD_{r,s}^2\gl_sq_s \\ & \qquad\qquad -\sum_{\mvgt}G_t (\mvgt)\Big(\frac12 (N\cQ(\vR_{12})-\sum_{r=1}^m \gl_r \gD_{r,r}^2)- N \sum_{r,s}\gl_rR_{12}^r\gD_{r,s}^2\gl_sq_s \Big)\Big)- \frac{N}{2}\cQ(\vone-\vq)\Bigg)\Bigg] \\ & = \frac{\gb^2}{4} \E \left[ g''(X_t) \left( \la \cQ(\ovR_{12}) \ra_t - \cQ(\vq) - N^{-1}\tr(\gD^{2}\gL)\right) - \sqrt{N} g'(X_t)\la \cQ(\ovR_{12}) \ra_t \right]. \end{align*} For the second part, taking $g(x)=e^{\i x\theta}$ for fixed $\theta\in\dR$, equation~\eqref{asy-eq0} can be rewritten as \begin{align*} \frac{\partial}{\partial t}m(t,\theta) & = \frac{\gb^2\theta^2}{4} \cQ(\vq) m(t,\theta) \\ & \quad+ \frac{\gb^2}{4}\E \left(\Bigl( -\frac1N\theta^2\left( N\la \cQ(\ovR_{12}) \ra_t - \tr(\gD^{2}\gL)\right) - \i \sqrt{N} \theta\la \cQ(\ovR_{12}) \ra_t \Bigr) e^{\i \theta X_t}\right). \end{align*} From Section~\ref{sec2: latala}, we know that $N \nu_{t}(|\cQ(\ovR_{12})|)\le N \nu_{t}(\cP(\ovR_{12}))$ is bounded; hence \begin{align*} \abs{\frac{\partial}{\partial t}m(t,\theta) - v^2\theta^2 m(t,\theta) } \le \frac{\gb^2 C}{4}\left(\frac{|\theta|}{\sqrt{N}} + \frac{2\theta^2}{N}\right) \text{ for all }t\in[0,1], \theta\in\dR, \end{align*} where $C \ge \max \big\{ \sup_{0\le t\le 1} N\nu_t(|\cQ(\ovR_{12})|),\, \tr(\gD^2\gL) \big\}.$ \end{proof} Now we prove the Central Limit Theorem for the free energy.
\begin{proof} [Proof of Theorem~\ref{RS solution} part~(ii)] Notice that \begin{align}\label{sec3:eq1} \abs{e^{-v^2\theta^2}m(1,\theta) - m(0,\theta)} & = \abs{\int_0^1 e^{-v^2\theta^2 t}\left( \partial_t m(t,\theta) - v^2\theta^2 m(t,\theta)\right) dt} \notag \\ & \le \frac{\gb^2 C}{4} \left(\frac{|\theta|}{\sqrt{N}} + \frac{2\theta^2}{N}\right) \int_0^1 e^{-v^2\theta^2 t} dt \le \frac{\gb^2 C}{4} \left(\frac{|\theta|}{\sqrt{N}} + \frac{2\theta^2}{N}\right). \end{align} Thus \begin{align*} \abs{m(1,\theta)- m(0,\theta)e^{v^2\theta^2}} \le \frac{\gb^2 C}{4} \left(\frac{|\theta|}{\sqrt{N}} + \frac{2\theta^2}{N}\right)e^{v^2\theta^2}. \end{align*} For fixed $\theta$, the RHS of the last display goes to $0$ as $N \to \infty$. Moreover, as $N\to\infty$, we have by the classical CLT that $m(0,\theta) \to e^{-c^2\theta^2/2}$, where $$c^2=\lim_{N\to\infty}N^{-1}\var(\log Z_{N,0})=\sum_{s=1}^{m}\gl_{s}\var(\log\cosh(\gb\eta\sqrt{Q^{s}}+h)).$$ This completes the proof. \end{proof} \section{Concentration of overlap with zero external field}\label{concen-h0} In this section, we study the MSK model without an external field. We prove that when $\gb<\gb_c$, the MSK model is in the replica symmetric phase. Note that our argument holds when $\gD^2$ is indefinite; in that case, the RS regime is still given by $\gb<\gb_c$, as in the \emph{positive-definite} case. The proof of Theorem~\ref{ovp no h} is based on a control of the free energy. Here $\vq=0$ is the unique solution to~\eqref{syseq}, and the averaged Gibbs measure under the decoupled Hamiltonian is the product of i.i.d.~Bernoulli($\half$) measures and thus is non-random. Before giving the proof, we introduce several useful lemmas following the proof for the classical case. Recall that $\cP(\vx)=\vx^{\T}\gL^{\half}\cV\gL^{\half}\vx$ and $\cV=|\gL^{\half}\gD^{2}\gL^{\half}|$.
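In particular, writing the spectral decomposition $\gL^{\half}\gD^{2}\gL^{\half} = \sum_i \mu_i w_iw_i^{\T}$, so that $\cV = \sum_i |\mu_i| w_iw_i^{\T}$, one can check that $\cP$ dominates $\cQ$ in absolute value: \[ \abs{\cQ(\vx)} = \Bigl|\sum_i \mu_i (w_i^{\T}\gL^{\half}\vx)^2\Bigr| \le \sum_i |\mu_i| (w_i^{\T}\gL^{\half}\vx)^2 = \cP(\vx), \] a fact used repeatedly below.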
\begin{lem}\label{use-lem} If $\gamma+\gb^2 < \gb_c^2$, then \begin{align} \E \sum_{\mvgs_1, \mvgs_2} \exp\left( H_N(\mvgs_1)+ H_N(\mvgs_2) + \frac{\gamma N}{2}\cP(\vR_{12})\right) \le (\E Z_N)^2\cdot \det(I-(\gb^2+\gamma)\cV)^{-\half}. \end{align} \end{lem} \begin{proof} [Proof of Lemma~\ref{use-lem}] Since the Hamiltonian $H_N{(\mvgs)}$ is a \emph{centered} Gaussian field, we know that \begin{align}\label{eq4-h0} \begin{split} \E \sum_{\mvgs_1, \mvgs_2} & \exp\left( H_N(\mvgs_1)+ H_N(\mvgs_2)+\frac{\gamma N}{2}\cP(\vR_{12})\right)\\ & =\sum_{\mvgs_1, \mvgs_2}\exp\left(\frac{1}{2}\E(H_N(\mvgs_1)+ H_N(\mvgs_2))^2 + \frac{\gamma N}{2}\cP(\vR_{12})\right) \\ & \le \sum_{\mvgs_1, \mvgs_2}\exp \left( \frac{N\beta^2}{2}\left(\sum_{s,t}\lambda_s \gD_{s,t}^2\lambda_t-\frac{2}{N}\sum_s\gl_s\gD_{s,s}^2\right)+\frac{(\gb^2+\gamma)N \cP(\vR_{12})}{2}\right) \\ & \le (\E Z_N)^2 \cdot 2^{-2N} \sum_{\mvgs_1, \mvgs_2} \exp\left(\frac{\gd N}{2} \cP(\vR_{12})\right) \end{split} \end{align} where $\gd =\gb^2+\gamma$, and the last two inequalities are based on the fact that $$ \E Z_N = 2^N\exp\left(\frac{N\beta^2}{4}\left(\sum_{s,t}\lambda_s \gD_{s,t}^2\lambda_t-\frac{1}{N}\sum_s\gl_s\gD_{s,s}^2\right)\right), $$ and \begin{align*} \E H_N(\mvgs)^2 & = \frac{N\beta^2}{2}\left(\sum_{s,t}\lambda_s \gD_{s,t}^2\lambda_t-\frac{1}{N}\sum_s\gl_s\gD_{s,s}^2\right) \\ \E H_N(\mvgs_1)H_N(\mvgs_2) & =\frac{N\beta^2}{2}\left(\sum_{s,t}\lambda_s R^s\gD_{s,t}^2\lambda_tR^t-\frac{1}{N}\sum_s\gl_s\gD_{s,s}^2\right). \end{align*} Next we prove that, for fixed $\mvgs_2$, \begin{align}\label{sech0:eq1} 2^{-N} \sum_{\mvgs_1} \exp \left( \frac{\gd N}{2} \cP(\vR_{12})\right) = 2^{-N} \sum_{\mvgs} \exp \left( \frac{\gd N}{2} \cP(\vv)\right) \end{align} where $\vv = (\frac{1}{|I_1|} \sum_{i \in I_1} \gs_i, \frac{1}{|I_2|} \sum_{i \in I_2} \gs_i, \ldots, \frac{1}{|I_m|} \sum_{i \in I_m} \gs_i)$. 
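Identity~\eqref{sech0:eq1} follows from the change of variables \[ \gs_i := \gs_i^1 \gs_i^2, \qquad 1\le i \le N, \] which, for fixed $\mvgs_2$, is a bijection of $\{-1,1\}^{N}$ mapping each coordinate $\frac{1}{|I_s|}\sum_{i\in I_s}\gs_i^1\gs_i^2$ of $\vR_{12}$ to the corresponding coordinate of $\vv$.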
Note that for $\mvxi = (\xi_1, \xi_2, \ldots, \xi_m)$, where $\xi_s = \frac{1}{|I_s|} \sum_{i\in I_s}\eta_i$ and the $\eta_i$ are i.i.d.~uniform on $\{-1,1\}$, we have \begin{align*} 2^{-N} \sum_{\mvgs} \exp \left( \frac{\gd N}{2} \cP(\vv)\right) & =\E \exp \left(\frac{\gd N}{2}\cP(\mvxi) \right) \\ & = \E \exp\left(\frac{\gd N}{2} \cQ(\mvxi,\gL^{\half}\cV\gL^{\half}) \right) = \E\E_{\vu}\exp (\sqrt{N} \vu^{\T} \mvxi ), \end{align*} where $\vu \sim \N(0,\gd \gL^{\half} \cV\gL^{\half})$. Arguing as in the proof of Lemma~\ref{concen of decouple}, we have \begin{align*} \E\E_{\vu}\exp (\sqrt{N} \vu^{\T} \mvxi ) & = \E_{\vu} \E\prod_{s=1}^m \prod_{i \in I_s}\exp(\sqrt{N}u_s \eta_i/|I_s|) \\ & \le \E_{\vu} \prod_{s=1}^m \exp(N u_s^2/2|I_s|) = \E_{\vu} \exp(\vu^{\T}\gL^{-1}\vu /2) = \det(I-\gd\cV)^{-\half}. \end{align*} Going back to~\eqref{eq4-h0} and using $\gd=\gb^{2}+\gamma$, we finish the proof. \end{proof} \begin{thm}\label{lower devat} If $\gb < \gb_c$, then there exists a constant $K$ such that for each $N>0$ and each $t>0$, \begin{align} \pr\left( \log Z_N \le N(\log 2+ {\gb^2}/{4}\cdot \cQ(\vone)) -t\right) \le K \exp(-t^2/K). \end{align} \end{thm} The following lemma tells us how to choose the constant $K$ appropriately. \begin{lem}\label{lemm1-h0} If $\gb< \gb_c$, we have \begin{align} \pr \left(Z_N \ge \E Z_N/2, N\la \cQ(\vR_{12})\ra \le K \right)\ge \frac{1}{K}. \end{align} \end{lem} \begin{proof} [Proof of Lemma~\ref{lemm1-h0}] Notice that \[Z_N^2\la \exp(\frac{\gamma N}{2} \cQ(\vR_{12}))\ra = \sum_{\mvgs_1, \mvgs_2} \exp\left( H_N(\mvgs_1)+ H_N(\mvgs_2) + \frac{\gamma N}{2}\cQ(\vR_{12})\right).\] Thus by Lemma~\ref{use-lem}, we have \begin{align*} \E\left( Z_N^2\la \exp(\frac{\gamma N}{2} \cQ(\vR_{12}))\ra \right) \le (\E Z_N)^2 \det(I-(\gb^2+\gamma)\cV)^{-\half} \end{align*} and, by the Paley--Zygmund inequality, \[ \pr(Z_{N}\ge \E Z_{N}/2) \ge (\E Z_N)^{2}/(4\E Z_{N}^{2})\ge \det(I-\gb^2\cV)^{\half}/4.
\] By Markov's inequality, \begin{align*} \pr \left( Z_N^2\la \exp(\frac{\gamma N}{2} \cQ(\vR_{12}))\ra \le t (\E Z_N)^2 \right) \ge 1- \frac1t \det(I-(\gb^2+\gamma)\cV)^{-\half} \end{align*} and \begin{align*} \pr \bigl( Z_N^2 & \la \exp(\frac{\gamma N}{2} \cQ(\vR_{12}))\ra \le t (\E Z_N)^2, Z_N \ge \E Z_N/2\bigr) \\ & \ge \pr \left( Z_N^2\la \exp(\frac{\gamma N}{2} \cQ(\vR_{12}))\ra \le t (\E Z_N)^2 \right)+\pr \left( Z_N \ge \E Z_N/{2}\right)-1 \\ & \ge \pr \left( Z_N \ge \E Z_N/{2}\right)- \frac1t\det(I-(\gb^2+\gamma)\cV)^{-\half}. \end{align*} Based on this inequality, we can choose some large $K$ such that \begin{align*} \pr \left( Z_N^2\la \exp(\frac{\gamma N}{2} \cQ(\vR_{12}))\ra \le K (\E Z_N)^2, Z_N \ge \E Z_N/2\right) \ge \frac{1}{K}. \end{align*} When $Z_N \ge \E Z_N/{2}$, the bound $Z_N^2\la \exp(\frac{\gamma N}{2} \cQ(\vR_{12}))\ra \le K (\E Z_N)^2 $ implies that \[\left\la \exp(\frac{\gamma N}{2} \cQ(\vR_{12}))\right\ra \le 4K, \] and hence, by Jensen's inequality, $N\la \cQ(\vR_{12})\ra \le K$. The constants $K$ in different expressions are different; we abuse the notation a bit. \end{proof} \begin{lem}\label{lemm2-h0} The following identity holds: \begin{align} N^2 \la \cQ(\vR_{12}) \ra =\sum_{s,t}\sum_{i\in I_s} \sum_{j\in I_t}\gD_{s,t}^2 \la\gs_i \gs_j \ra^2\ge 0. \end{align}
\end{lem} \begin{proof} [Proof of Lemma~\ref{lemm2-h0}] Recall that $\cQ(\vR_{12})$ is the quadratic form of $\vR_{12}$ with matrix $\gL\gD^2\gL$; then we have \begin{align*} N^2 \cQ(\vR_{12}) & = N^2\sum_{s,t} R_{12}^s \gl_s \gD_{s,t}^2 \gl_t R_{12}^t \\ & = N^2\sum_{s,t} \frac{1}{|I_s|} \sum_{i \in I_s} \gs_i^1 \gs_i^2 \gl_s \gD_{s,t}^2 \gl_t \frac{1}{|I_t|} \sum_{j \in I_t} \gs_j^1 \gs_j^2 = \sum_{s,t} \sum_{i \in I_s} \gs_i^1 \gs_i^2 \gD_{s,t}^2 \sum_{j \in I_t} \gs_j^1 \gs_j^2. \end{align*} Applying the Gibbs average $\la \cdot \ra$ to both sides and using $\la \gs_i^1 \gs_j^1 \gD_{s,t}^2\gs_i^2 \gs_j^2\ra = \gD_{s,t}^2 \la \gs_i \gs_j \ra^2$ on the RHS, we get $N^2 \la \cQ(\vR_{12}) \ra = \sum_{s,t}\sum_{i\in I_s} \sum_{j\in I_t}\gD_{s,t}^2 \la\gs_i \gs_j \ra^2$. \end{proof} \begin{lem} [{\cite{Tal03}*{Lemma~2.2.11}}]\label{lemm-talagrand} Consider a closed subset $B$ of $\bR^M$, and set \[ d(\vx,B) = \inf\{d(\vx,\vy);\vy \in B\} \] as the Euclidean distance from $\vx$ to $B$. Then for $t>0$, we have \begin{align}\label{eq0-h0} \pr\left(d(\mveta,B) \ge t+ 2 \sqrt{\log\frac{2}{\pr(\mveta\in B)}} \right) \le 2 \exp(-t^2/4), \end{align} where $\mveta = (\eta_1,\eta_2,\ldots, \eta_M)$ and $\eta_i \sim \N(0,1)$ are i.i.d. \end{lem} Now we are ready to prove Theorem~\ref{lower devat}. \begin{proof} [Proof of Theorem~\ref{lower devat}] Recall that the Hamiltonian with $h=0$ is \[ H_N(\mvgs) = \frac{\gb}{\sqrt{N}} \sum_{i<j} g_{ij}\gs_i\gs_j, \] where $\E g_{ij}^2 = \gD^2_{s,t}$ for $i\in I_s, j\in I_t$. In what follows, we let $g_{ij} = \gD_{s,t} \eta_{ij}$, where $\eta_{ij} \sim \N(0,1)$. Let $M=N(N-1)/2$, and consider the Gaussian vector $\mveta = (\eta_{ij})_{i<j}$; in this case, $\mveta \in \bR^M$. We understand $H_N(\mvgs), Z_N$ as functions of $\mveta$. By Lemma~\ref{lemm1-h0}, for some suitably large $K=K_1$, there is a subset $B \subset \bR^M$ with \[ \{\mveta \in B \} = \{ Z_N(\mveta) \ge \E Z_N/2; N\la \cQ(\vR_{12})\ra \le K_1 \}, \] i.e.
the set $B$ characterizes the event on the RHS, and also $\pr(\mveta \in B) \ge \frac 1 {K_1}$. Next we will prove \begin{align}\label{eq1-h0} \log Z_N(\mveta) \ge N\left(\log 2 + \frac{\gb^2}{4}\cQ(\vone)\right)- K_1(1+d(\mveta,B)). \end{align} For $\mveta' \in B$, we have \[ \log Z_N(\mveta') \ge \log \frac 1 2 \E Z_N = \frac{\gb^2}{4}\left(N\cQ(\mathbf{1})- \sum_{s=1}^m\gl_s\gD_{s,s}^2\right)+(N-1)\log 2. \] To prove~\eqref{eq1-h0}, it is enough to show that \begin{align}\label{eq2-h0} \log Z_N(\mveta) \ge \log Z_N(\mveta') - K_1 d(\mveta,\mveta') \end{align} for all $\mveta'\in B$. Here $K_{1}$ must be bigger than $\log 2 +\frac{\beta^2}{4}\sum_{s=1}^m\gl_s\gD_{s,s}^2$. Notice that \begin{align*} Z_N(\mveta) = Z_N(\mveta') \left\la \exp \biggl( \frac{\gb}{\sqrt{N}} \sum_{s,t}\sum_{\substack{i \in I_s, j\in I_t, \\ i<j}}\gD_{s,t}(\mveta_{ij}-\mveta_{ij}')\gs_i\gs_j \biggr) \right \ra', \end{align*} where $\la \cdot \ra'$ denotes the Gibbs average with disorder $\mveta'$, and where we used $g_{ij}=\gD_{s,t}\eta_{ij}$ to rewrite the Hamiltonian. Since there is an exponential inside the bracket, it is natural to apply Jensen's inequality, \[ \left\la \exp \biggl( \frac{\gb}{\sqrt{N}} \sum_{s,t}\sum_{\substack{i \in I_s, j\in I_t,\\ i<j}}\gD_{s,t}(\mveta_{ij}-\mveta_{ij}')\gs_i\gs_j \biggr) \right \ra' \ge \exp \biggl( \frac{\gb}{\sqrt{N}} \sum_{s,t}\sum_{\substack{i \in I_s, j\in I_t,\\ i<j}}\gD_{s,t}(\mveta_{ij}-\mveta_{ij}')\la \gs_i\gs_j \ra' \biggr). \] Applying the Cauchy--Schwarz inequality and using Lemma~\ref{lemm2-h0}, we have \[ \sum_{s,t}\sum_{\substack{i \in I_s, j\in I_t,\\ i<j}}\gD_{s,t}(\mveta_{ij}-\mveta_{ij}')\la \gs_i\gs_j \ra' \ge -d(\mveta,\mveta')(N^2\la\cQ(\vR_{12})\ra')^{\half} \ge -K_1\sqrt{N}d(\mveta,\mveta'), \] where $d(\mveta,\mveta') = \bigl(\sum_{i<j}(\eta_{ij}-\eta_{ij}')^2\bigr)^{1/2}$ and the last inequality uses $\mveta' \in B$. This proves~\eqref{eq2-h0}, and hence~\eqref{eq1-h0}.
From~\eqref{eq1-h0}, we have \[ \log Z_N(\mveta) \le N\left(\frac{\gb^2}{4}\cQ(\mathbf{1}) + \log 2\right) -t \Rightarrow d(\mveta,B) \ge \frac{t-K_1}{K_1} \ge 2 \sqrt{\log \frac{2}{\pr(\mveta \in B)}} +\frac{t-K_2}{K_1}, \] where $K_2 := K_1\bigl(1+2\sqrt{\log (2K_1)}\bigr)$; the last inequality uses $\pr(\mveta\in B)\ge 1/K_1$. Then by~\eqref{eq0-h0} in Lemma~\ref{lemm-talagrand}, it follows that \begin{align*} \pr \left( \log Z_N(\mveta) \le N \left( \frac{\gb^2}{4}\cQ(\mathbf{1}) +\log 2\right)-t\right) \le 2 \exp \left( -\frac{(t-K_2)^2}{4K_1^2} \right) \end{align*} for $t \ge K_2$. Then for the RHS, if we take $K$ large enough, when $t\ge K_2$, \[ 2 \exp \left( -\frac{(t-K_2)^2}{4K_1^2} \right) \le K \exp \left(-\frac{t^2}{K} \right). \] On the other hand, when $0 \le t \le K_2$, we have $ K \exp \left(-\frac{t^2}{K} \right) \ge 1$. Therefore, in any case, we proved \[ \pr \left( \log Z_N(\mveta) \le N \left( \frac{\gb^2}{4}\cQ(\mathbf{1}) +\log 2\right)-t\right) \le K \exp \left(-\frac{t^2}{K} \right). \] \end{proof} Now we collect all the above to prove the main theorem. \subsection{Proof of Theorem~\ref{ovp no h}} By the Cauchy--Schwarz inequality w.r.t.~the measure $\la \cdot \ra$, \begin{align*} \left \la \exp\left(\frac{\gb_c^2-\gb^2}{8}N \cP(\vR_{12})\right) \right \ra & \le \left\la \exp\left(\frac{\gb_c^2-\gb^2}{4}N \cP(\vR_{12})\right) \right \ra^{\half} \\ & = \frac{1}{Z_N}\left(Z_N^2 \left \la\exp\left(\frac{\gb_c^2-\gb^2}{4}N \cP(\vR_{12})\right) \right \ra \right)^{\half}; \end{align*} then apply Cauchy--Schwarz again for $\E$, \begin{align*} \E \left \la \exp\left(\frac{\gb_c^2-\gb^2}{8}N \cP(\vR_{12})\right)\right \ra & \le \left(\E \frac{1}{Z_N^2} \right)^{\half} \left[\E \left( Z_N^2 \left \la \exp\left(\frac{\gb_c^2-\gb^2}{4}N \cP(\vR_{12})\right)\right \ra \right)\right]^{\half} \\ & \le \left(\E \frac{1}{Z_N^2} \right)^{\half} (K(\E Z_N)^2)^{\half} = \sqrt{K} \left(\E \frac{(\E Z_N)^2}{Z_N^2}\right)^{\half}, \end{align*} where the second inequality is due to Lemma~\ref{use-lem}.
By Theorem~\ref{lower devat}, \begin{align} \pr \left(\frac{\E Z_N}{Z_N} >t\right) \le K \exp(-(\log t)^2/K), \end{align} and hence $\E \frac{(\E Z_N)^2}{Z_N^2} <K$ by standard tail estimates, since $\int_1^{\infty} t\exp(-(\log t)^2/K)\, dt < \infty$. \hfill\qed \section{MSK cavity solution}\label{sec:cavity} For the classical SK model, the cavity method is an induction on the number of spins. In this section, we generalize the cavity method to the MSK model and derive a linear system for the variance-covariance matrices of the overlap vectors. More specifically, to compare the system with $N$ spins and the one with $N-1$ spins, we need to choose a spin to be decoupled from the others. Since the classical SK model is the one-species case of the MSK model, there the spin can be chosen uniformly. For the multi-species case, decoupling a spin now depends on the structure of the species. However, we can first choose a species $s \in \{1,2,\ldots,m\}$ with probability ${\gl_s}$, then select a spin uniformly inside that species to be decoupled, and then compare this decoupled system with the original one. For convenience, we denote the released spin by $\eps := \gs_{s_{\star}}$, where $s_{\star}$ is the index of the last spin in species $s$ for a configuration $\mvgs$. Once the species is determined, we take the convention of decoupling the last spin in the chosen species, by symmetry inside a particular species. Note that this is equivalent to choosing a spin u.a.r.~from all $N$ spins. Let $\gd$ represent the species to be chosen, which is a random variable taking values in $\{1,2,\ldots,m\}$ with probability $\mathbb{P}(\gd = s) = \gl_s$.
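For later use, we write $\E^{\gd}$ for the expectation over this random choice of species; thus, for any function $f$ of the species index, \[ \E^{\gd} f(\gd) = \sum_{s=1}^{m} \gl_s f(s). \] This is the averaging that appears when the per-species cavity computations are combined.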
Given $\gd=\tilde s$, we decouple the last spin $\gs_{\tilde s_{\star}}$ in that species; then the interpolated Hamiltonian between the original system and the decoupled system is \begin{align}\label{cavity-H} H_t^{\tilde s}(\mvgs) = H_{\tilde s_{\star}^-}(\mvrho)+ \gs_{\tilde s_{\star}} \biggl( \sqrt{t} \cdot \frac{\gb}{\sqrt{N}}\sum_{i \neq \tilde s_{\star}}g_{i \tilde s_{\star}}\gs_i + \sqrt{1-t}\cdot \gb \eta \sqrt{Q^{\tilde s}} + h\biggr), \end{align} where $H_{\tilde s_{\star}^-}(\mvrho)= \frac{\gb}{\sqrt{N}}\sum_{\substack{i<j \\ i,j\neq \tilde s_{\star}}}g_{ij}\rho_i\rho_j + h\sum_{i\neq \tilde s_{\star}}\rho_i$, and $\mvrho$ is an $(N-1)$-dimensional vector consisting of the spins in $\mvgs$ without $\gs_{\tilde s_{\star}}$. Recall that the overlap vector between two replicas $\gs^l,\gs^{l^{\prime}}$ in the MSK model is defined as \begin{align} \vR_{ll^{\prime}} = (R_{ll^{\prime}}^{(1)}, R_{ll^{\prime}}^{(2)}, \ldots, R_{ll^{\prime}}^{(m)})^{\T}, \end{align} where each coordinate in the above vector represents the marginal overlap in the corresponding species. Similarly, \begin{align} \vR_{ll^{\prime}}^{(\tilde s-)} = (R_{ll^{\prime}}^{(1)}, R_{ll^{\prime}}^{(2)}, \ldots, R_{ll^{\prime}}^{(\tilde s-)},\ldots, R_{ll^{\prime}}^{(m)})^{\T}, \end{align} where $R_{ll^{\prime}}^{(\tilde s-)} = R_{ll^{\prime}}^{(\tilde s)} - \frac{1}{|I_{\tilde s}|}\eps_l\eps_{l^{\prime}}$, and $\eps_l=\gs^l_{\tilde s_{\star}}$ denotes the released spin in $\mvgs^l$. In the following, $\gn_{t,\tilde s}(\cdot)$ denotes the expectation over the Gibbs measure and disorder $g_{ij}$ associated with the Hamiltonian~\eqref{cavity-H}. In particular, $\gn_{1,\tilde s}(\cdot)$ corresponds to the original system and does not depend on $\tilde s$, but $\gn_{0,\tilde s}$ and $\gn_{t,\tilde s}$ depend on $\tilde s$ since they both involve the decoupling procedure. In the rest of this section, we state the results for each fixed $\gd=\tilde s$, unless stated otherwise.
\begin{lem}\label{prod property} For any $f^{-}$ on $\gS_{N-1}^n$, where $\gS_{N-1}$ does not contain the spin $\gs_{\tilde{s}_{\star}}$, and any $I \subset \{1,2,\cdots,n\}$, we have \begin{align} \gn_{0,\tilde s}\biggl(f^{-} \prod_{i\in I} \eps_i\biggr) = \gn_{0,\tilde s}(f^{-})\cdot \E(\tanh Y^{\tilde s})^{|I|}, \end{align} where $Y^{\tilde s} = \gb \eta \sqrt{Q^{\tilde s}}+h$, and $|I|$ is the cardinality of the set $I$. \end{lem} The proof of this lemma is the same as in the classical case; see Section 1.4 in~\cite{Tal11a}. Next we turn to the computation of $\gn_{t,\tilde s}'(f):=\frac{d}{dt}\gn_{t,\tilde s}(f)$ in the cavity method. Let \begin{align*} u_{\mvgs} = \frac{\gb}{\sqrt{N}} \gs_{\tilde s_{\star}} \sum_{i \neq \tilde s_{\star}} g_{i\tilde s_{\star}} \gs_i,\qquad v_{\mvgs} = \gb \eta \gs_{\tilde s_{\star}}\sqrt{Q^{\tilde s}},\qquad w_{\mvgs} = \exp(H_{\tilde s_{\star}^-}(\mvgr)+ h\gs_{\tilde s_{\star}}) \end{align*} in~\eqref{cavity-H}. Recall that in Section~\ref{sec2: latala}, the derivative $\gn_t'(f)$ was computed in~\eqref{eq:gen-derivative} for some $f$ on $\gS_N^n$: \begin{align*} \gn_t'(f) = 2\biggl(\sum_{1 \le l < l^{\prime} \le n} \gn_t(U(\mvgs^l,\mvgs^{l^{\prime}})f) - n \sum_{l \le n}\gn_t(U(\mvgs^l,\mvgs^{n+1})f) + \frac{n(n+1)}{2} \gn_t(U(\mvgs^{n+1},\mvgs^{n+2})f)\biggr), \end{align*} where $U(\mvgs^l, \mvgs^{l^{\prime}}) = \frac{1}{2}(\E u_{\mvgs^l}u_{\mvgs^{l^{\prime}}} -\E v_{\mvgs^l}v_{\mvgs^{l^{\prime}}}) $.
In the setting of the cavity method, \begin{align*} \E u_{\mvgs^l}u_{\mvgs^{l^{\prime}}} & = \gb^2 \eps_l \eps_{l^{\prime}}\biggl(\sum_{s \neq \tilde s} \gD_{s,\tilde s}^2\gl_s R_{ll^{\prime}}^{(s)} + \gD_{\tilde s,\tilde s}^2 \gl_{\tilde s}R_{ll^{\prime}}^{(\tilde s-)}\biggr), \\ \E v_{\mvgs^l}v_{\mvgs^{l^{\prime}}} & = \gb^2 \eps_l \eps_{l^{\prime}} Q^{\tilde s}, \end{align*} and hence \begin{align*} U(\mvgs^l, \mvgs^{l^{\prime}}) & = \frac{1}{2}\gb^2 \eps_l \eps_{l^{\prime}}\biggl(\sum_{s \neq \tilde s} \gD_{s, \tilde s}^2\gl_s R_{ll^{\prime}}^{(s)} + \gD_{\tilde s,\tilde s}^2\gl_{\tilde s}R_{ll^{\prime}}^{(\tilde s-)} - Q^{\tilde s}\biggr) = \frac{1}{2}\gb^2 \eps_l \eps_{l^{\prime}}\cdot \gD_{\tilde s}^2 \gL \ovR_{ll'}^{(\tilde s-)}. \end{align*} In the last expression, $\gD_{\tilde s}^2 \gL \ovR_{ll'}^{(\tilde s-)}$ is understood as an inner product of vectors, and we keep the dependence of $U(\mvgs^l, \mvgs^{l^{\prime}})$ on $\tilde s$ implicit. We use $\gD_{s}^2:= \gD^2_{s,\cdot}$ to denote the $s$-th row vector of $\gD^2$, and we keep using this notation for other symmetric matrices $A$ in the rest of this section. Thus, in the cavity computation, $\gn'_{t,\tilde s}(f)$ can be written in the following way: \begin{thm}\label{cav_deriv} For $f$ on $\gS_N^n$, we have \begin{align}\label{der formula} \begin{split} \gn'_{t,\tilde s}(f) & = \gb^2\biggl( \sum_{1 \le l < l^{\prime} \le n} \gn_{t,\tilde s}(\eps_l \eps_{l^{\prime}}f\cdot \gD_{\tilde s}^2 \gL \ovR_{ll'}^{(\tilde s-)}) - n \sum_{l \le n}\gn_{t,\tilde s}(\eps_l \eps_{n+1}f\cdot \gD_{\tilde s}^2 \gL \ovR_{l,n+1}^{(\tilde s-)})\\ &\qquad\qquad\qquad + \frac{n(n+1)}{2} \gn_{t,\tilde s}(\eps_{n+1} \eps_{n+2}f\cdot \gD_{\tilde s}^2 \gL \ovR_{n+1,n+2}^{(\tilde s-)}) \biggr). \end{split} \end{align} \end{thm} The next proposition gives some H\"older-type inequalities.
\begin{prop}\label{Holder prop} For $f$ on $\gS_N^n$, and $\gt_1>0, \gt_2>0$, $1/\gt_1+1/\gt_2 = 1$, we have \begin{align} \abs{\gn(f)- \gn_{0,\tilde s}(f)} & \le K(n,\gb,\tilde s) \gn(\abs{f}^{\gt_1})^{1/\gt_1} \gn\left(\abs{\gD_{\tilde s}^2 \gL (\vR^{(\tilde s-)}_{12}-\vq)}^{\gt_2}\right)^{1/\gt_2}, \label{h1} \\ \abs{\gn(f)- \gn'_{0,\tilde s}(f)- \gn_{0,\tilde s}(f)} & \le K(n,\gb,\tilde s) \gn(\abs{f}^{\gt_1})^{1/\gt_1} \gn\left(\abs{\gD_{\tilde s}^2 \gL (\vR^{(\tilde s-)}_{12}-\vq)}^{2\gt_2}\right)^{1/\gt_2} . \label{h2}\end{align} \end{prop} \begin{rem} The proof of this proposition is similar to the original proof in Talagrand's book. For the inequality~\eqref{h1}, we just need to control $\gn'_{t,\tilde s}(f)$, and by Theorem~\ref{cav_deriv}, it suffices to control the terms $\gn_{t,\tilde s}(\eps_l \eps_{l^{\prime}}f(\gD_{\tilde s}^2 \gL \ovR_{ll'}^{(\tilde s-)})) $. We claim that $\gn_{t,\tilde s}(\abs{f}) \le \exp(4C_{\tilde s}n^2\gb^2)\gn(\abs{f})$, where $C_{\tilde s} :=\max\{\gD_{\tilde s}^2\gL\}$ is the maximal entry of the vector $\gD_{\tilde s}^2\gL$. Indeed, in Theorem~\ref{cav_deriv}, the bound $\abs{R_{12}^s-q_s}\le 2$ implies \[ \abs{\gn'_{t,\tilde s}(g)} \le 4 C_{\tilde s}n^2\gb^2\gn_{t,\tilde s}(\abs{g}) \] for any $g$ on $\gS_N^n$; applying this with $g = \abs{f}$ and integrating the resulting differential inequality proves the claim. Finally, applying the H\"older inequality proves~\eqref{h1}. For~\eqref{h2}, we just control the second-order derivative $\gn''_{t,\tilde s}$ in a similar way. \end{rem} Recall the notation $\ov{\vR}_{12} = \vR_{12} -\vq$. Using Proposition~\ref{Holder prop}, one can estimate the quantities $\gn(\ov \vR_{12}^{\T} A\ov \vR_{12})$, $\gn(\ov \vR_{13}^{\T} A\ov \vR_{12})$ and $\gn(\ov \vR_{34}^{\T} A\ov \vR_{12})$, where $A=(\!(a_{s,t})\!)_{s,t=1}^m$ is a symmetric $m \times m$ matrix. We start with $\gn(\ov \vR_{12}^{\T}A \ov \vR_{12})$, and then use the cavity method to carry out second-moment computations and derive a linear system.
We incorporate the randomness of $\gd$ in the following to get \begin{align}\label{eq:s3a} \begin{split} \gn\big(\ov{\vR}^{\T}_{12} \gL A \gL \ov{\vR}_{12}\big) &= \gn\big(\sum_{s=1}^m\gl_s \oR_{12}^{(s)}\sum_{t=1}^m\gl_ta_{s,t}\oR_{12}^{(t)}\big) \\ &= \gn\big(\E^{\gd}\big( \oR_{12}^{(\gd)}\sum_{t=1}^m\gl_t a_{\gd,t} \oR_{12}^{(t)}\big)\big) = \hat{\gn}\big(\oR_{12}^{(\gd)}\sum_{t=1}^m\gl_t a_{\gd,t} \oR_{12}^{(t)}\big), \end{split} \end{align} where $\hat{\gn} := \gn \otimes \E^{\gd}$, and \[ \oR_{12}^{(\gd)} = \frac{1}{|I_{\gd}|} \sum_{i \in I_\gd}(\gs_i^1\gs_i^2-q_{\gd}) = \frac{1}{|I_\gd|} \sum_{\substack{i \in I_\gd, \\ i \neq \gd_{\star}}} \gs_i^1 \gs_i^2 + \frac{1}{|I_\gd|}\eps_1\eps_2 - q_{\gd}. \] By using the symmetry among sites inside the species $\gd$, we continue the above expression~\eqref{eq:s3a}: \begin{align}\label{eq:s3b} \begin{split} \hat{\gn}\biggl((\eps_1\eps_2-q_{\gd})\sum_{t=1}^m\gl_t a_{\gd,t} \oR_{12}^{(t)}\biggr) &= \hat{\gn}\biggl((\eps_1\eps_2-q_{\gd})\bigl(\sum_{t \neq \gd} \gl_t a_{\gd,t} \oR_{12}^{(t)}+\gl_{\gd}a_{\gd,\gd} \oR_{12}^{(\gd)}\bigr)\biggr)\\ & = \hat{\gn}((\eps_1\eps_2-q_{\gd})A_{\gd}\gL\ov{\vR}_{12}^{(\gd-)}) + \frac1N\hat{\gn}((\eps_1\eps_2-q_{\gd}) a_{\gd,\gd}\eps_1\eps_2). \end{split} \end{align} Note that in the above expression~\eqref{eq:s3b} there is randomness in $\eps_1, \eps_2$, and $\gd$; the latter encodes the choice of the species in which the spin is decoupled. The computation proceeds in a quenched way, because all of this extra randomness comes only from $\gd$, which takes finitely many values. In the following subsections, we deal with the two terms in~\eqref{eq:s3b} separately. \subsection{Estimation of $\hat{\gn}((\eps_1\eps_2-q_{\gd})A_{\gd}\gL\ov{\vR}_{12}^{(\gd-)})$ } Recall the definition of $\hat{\gn}$ below~\eqref{eq:s3a} and notice that \begin{align}\label{eq:s3bb} \hat{\gn}((\eps_1\eps_2-q_{\gd})A_{\gd}\gL\ov{\vR}_{12}^{(\gd-)}) = \sum_{s=1}^m \gl_{s}\cdot \gn((\eps_1\eps_2-q_{s})A_{s}\gL\ov{\vR}_{12}^{(s-)}).
\end{align} Therefore it suffices to compute $ \gn((\eps_1\eps_2-q_{s})A_{s}\gL\ov{\vR}_{12}^{(s-)})$. First, take \[\tilde{f} =(\eps_1\eps_2-q_{s})A_{s}\gL\ov{\vR}^{(s-)}_{12}.\] By the second H\"older inequality~\eqref{h2} with $\gt_1 = 3, \gt_2 = 3/2$ in Proposition~\ref{Holder prop}, and because $\gn_{0,s}(\tilde{f}) = 0 $ by Lemma~\ref{prod property}, we have \[ \gn(\tilde{f}) =\gn_{0,s}'(\tilde{f})+ \mathfrak{R}, \] where $\mathfrak{R}$ represents the higher-order terms, with \begin{align}\label{error} \abs{\fR}\le K\left(N^{-3/2} + \sum_{t=1}^{m}\nu(|R_{12}^{(t-)}-q_t|^{3})\right)\max_{t,t'}|a_{t,t'}|. \end{align} For the derivative term, by Theorem~\ref{cav_deriv}, $\gn_{0,s}'(\tilde{f})$ is a sum of terms of the form \[ \gb^2\gn_{0,s}(\eps_l\eps_{l^{\prime}}(\eps_1\eps_2-q_{s})\cdot A_{s}\gL\ovR_{12}^{(s-)}\cdot \gD_{s}^2 \gL \ovR_{ll'}^{(s-)}). \] In particular, we introduce a general formula, as a corollary of Theorem~\ref{cav_deriv}, to compute $\gn_{0,s}'(\tilde{f})$. \begin{cor}\label{cor:4} Consider a function $f^-$ on $\gS_{N-1}^n$ and two integers $x \neq y \le n$.
Then \begin{align}\label{cavity:1} \begin{split} \gn_{0,s}'\left((\eps_x\eps_y-q_{s})f^-\right) & = \sum_{1 \le l < l^{\prime} \le n} b_{s}(l,l^{\prime};x,y) \gn_{0,s}(f^-\cdot \gD_{s}^2 \gL \ovR_{ll'}^{(s-)}) \\ &\qquad\qquad - n \sum_{l \le n}b_{s}(l,n+1;x,y)\gn_{0,s}(f^-\cdot \gD_{s}^2 \gL \ovR_{l,n+1}^{(s-)})\\ &\qquad\qquad\qquad\qquad + \frac{n(n+1)}{2} b_{s}(0)\gn_{0,s}(f^-\cdot \gD_{s}^2 \gL \ovR_{n+1,n+2}^{(s-)}), \end{split} \end{align} where $b_{s}(l,l^{\prime};x,y) = b_{s}(\abs{\{l,l^{\prime}\} \cap \{x,y\}})$ and \begin{align}\label{def of b} \begin{split} b_{s}(2) & = \gb^2 \gn_{0,s}(\eps_1\eps_2(\eps_1\eps_2-q_{s})) =\gb^2 \gn_{0,s}(1-\eps_1\eps_2q_s) = \gb^2(1-q_s^2), \\ b_{s}(1) & = \gb^2 \gn_{0,s}(\eps_1\eps_3(\eps_1\eps_2-q_{s})) =\gb^2 \gn_{0,s}(\eps_2\eps_3-\eps_1\eps_3q_{s}) =\gb^2 q_s(1-q_s), \\ b_{s}(0) & = \gb^2 \gn_{0,s}(\eps_3\eps_4(\eps_1\eps_2-q_{s})) =\gb^2 \gn_{0,s}(\eps_1\eps_2\eps_3\eps_4-\eps_3\eps_4q_s) =\gb^2 (\hat{q}_{s}-q_s^2). \end{split} \end{align} \end{cor} In the above corollary, $$q_{s}=\E\tanh^2(\gb \eta\sqrt{(\gD^{2}\gL\vq)_s}+h), \qquad \hat{q}_{s} :=\E\tanh^{4}(\gb \eta \sqrt{(\gD^{2}\gL\vq)_s}+h), $$ and we used Lemma~\ref{prod property} to get \begin{align*} & \gn_{0,s}(\eps_l\eps_{l^{\prime}}(\eps_1\eps_2-q_{s})\cdot A_{s}\gL\ovR_{12}^{(s-)}\cdot \gD_{s}^2 \gL \ovR_{ll'}^{(s-)}) \\ & \qquad = \gn_{0,s}(\eps_l\eps_{l^{\prime}}(\eps_1\eps_2-q_{s}))\cdot \gn_{0,s}(A_{s}\gL\ovR_{12}^{(s-)}\cdot \gD_{s}^2 \gL \ovR_{ll'}^{(s-)}).
\end{align*} Using Corollary~\ref{cor:4} with $n=2,x=1,y=2, f^- = A_{s}\gL\ovR_{12}^{(s-)}$, we have \begin{align}\label{deriva 0} \begin{split} &\gn'_{0,s}((\eps_x\eps_y-q_{s})f^-)\\ & = \gn_{0,s}'\bigl((\eps_1\eps_2-q_{s})\cdot A_{s}\gL\ovR_{12}^{(s-)}\bigr)\\ &= b_{s}(2)\cdot \gn_{0,s}(A_{s}\gL\ovR_{12}^{(s-)}\cdot \gD_{s}^2 \gL \ovR_{12}^{(s-)}) - 2 b_{s}(1)\cdot \gn_{0,s}(A_{s}\gL\ovR_{12}^{(s-)}\cdot \gD_{s}^2 \gL \ovR_{13}^{(s-)})\\ &\qquad - 2 b_{s}(1)\cdot \gn_{0,s}(A_{s}\gL\ovR_{12}^{(s-)}\cdot \gD_{s}^2 \gL \ovR_{23}^{(s-)}) + 3 b_{s}(0)\cdot \gn_{0,s}(A_{s}\gL\ovR_{12}^{(s-)}\cdot \gD_{s}^2 \gL \ovR_{34}^{(s-)}). \end{split} \end{align} For terms like $\gn_{0,s}(\gL A_{s}\ovR_{12}^{(s-)}\cdot \gD_{s}^2\gL \ovR_{\ell\ell'}^{(s-)})$ in the above expression, we apply the H\"older inequality~\eqref{h1} with $\gt_1 = \frac 3 2, \gt_2= 3$ to obtain \[ \gn_{0,s}(\gL A_{s}\ovR_{12}^{(s-)}\cdot \gD_{s}^2\gL \ovR_{\ell\ell'}^{(s-)}) = \gn(\gL A_{s}\ovR_{12}^{(s-)}\cdot \gD_{s}^2\gL \ovR_{\ell\ell'}^{(s-)}) + \fR \] and changing from $ \gn(\gL A_{s}\ovR_{12}^{(s-)}\cdot \gD_{s}^2\gL \ovR_{\ell\ell'}^{(s-)})$ to $ \gn(\gL A_{s}\ovR_{12}\cdot \gD_{s}^2\gL \ovR_{\ell\ell'})$ will result in an error term $\fR$, \ie \[ \gn(\gL A_{s}\ovR_{12}^{(s-)}\cdot \gD_{s}^2\gL \ovR_{\ell\ell'}^{(s-)}) = \gn(\gL A_{s}\ovR_{12}\cdot \gD_{s}^2\gL \ovR_{\ell\ell'})+\fR. \] This is because $R_{ll^{\prime}}^{(s-)} = R_{ll^{\prime}}^{(s)} - \frac{1}{|I_s|}\eps_l\eps_{l^{\prime}}$, together with the fact that $\frac 1 N \gn(\gL A_{s}\ovR_{12}) = \fR$ as $xy \le x^{\frac 3 2}+y^3$ for $x,y \ge 0$.
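The moment computations behind the coefficients $b_s(i)$ in~\eqref{def of b} can be checked mechanically. The following is a minimal symbolic sketch (SymPy), assuming only what the text uses: under $\gn_{0,s}$ the replicas are conditionally i.i.d., $\eps_l^2=1$, and the second and fourth moments of the conditional magnetization are $q_s$ and $\hat q_s$.

```python
from collections import Counter
import sympy as sp

q, qhat, beta = sp.symbols('q qhat beta', positive=True)
moment = {0: sp.Integer(1), 2: q, 4: qhat}  # E m^2 = q, E m^4 = qhat

def E0(*replicas):
    """Expectation of a product of spins eps_l over the listed replica labels:
    replicas are conditionally i.i.d. with mean m, and eps_l^2 = 1."""
    odd = sum(1 for v in Counter(replicas).values() if v % 2 == 1)
    return moment[odd]

# the three coefficients from (def of b)
b2 = beta**2 * (E0(1, 2, 1, 2) - q * E0(1, 2))   # eps1 eps2 (eps1 eps2 - q)
b1 = beta**2 * (E0(1, 3, 1, 2) - q * E0(1, 3))   # eps1 eps3 (eps1 eps2 - q)
b0 = beta**2 * (E0(3, 4, 1, 2) - q * E0(3, 4))   # eps3 eps4 (eps1 eps2 - q)

assert sp.simplify(b2 - beta**2 * (1 - q**2)) == 0
assert sp.simplify(b1 - beta**2 * q * (1 - q)) == 0
assert sp.simplify(b0 - beta**2 * (qhat - q**2)) == 0
```

The bookkeeping in `E0` is exactly the reduction used in~\eqref{def of b}: squares of spins drop out, and the expectation depends only on how many distinct replicas survive with odd multiplicity.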
Combining~\eqref{deriva 0} and~\eqref{eq:s3bb}, we have \begin{align} \begin{split} \hat{\gn}\big((\eps_1\eps_2-q_{\gd})A_{\gd}\gL\ov{\vR}_{12}^{(\gd-)}\big) &= \gn\big(\ovR_{12}^{\T}S_{A} B(2)\gD^2\gL\ovR_{12}\big) - 4\gn\big(\ovR_{12}^{\T}S_{A} B(1)\gD^2\gL\ovR_{13}\big)\\ &\qquad + 3\gn\big(\ovR_{12}^{\T}S_{A} B(0)\gD^2\gL\ovR_{34}\big) + \fR \end{split} \end{align} where $S_{A}:=\gL A\gL$, \begin{align}\label{def of B} B(i) = \diag(b_1(i), b_2(i),\ldots, b_m(i)), \quad \text{for} \quad i=0,1,2, \end{align} with $b_{s}(i)$ as given in~\eqref{def of b} and $\fR$ represents the higher order terms as defined in~\eqref{error}. \subsection{Estimation of $\hat{\gn}((\eps_1\eps_2-q_{\gd})a_{\gd,\gd}\eps_1\eps_2)$} For the second term in the last step of~\eqref{eq:s3b}, we get \begin{align}\label{eq:s3c} \begin{split} \hat{\gn}((\eps_1\eps_2-q_{\gd})a_{\gd,\gd}\eps_1\eps_2) & = \hat{\gn}_{0,\gd}(a_{\gd,\gd}(1-\eps_1\eps_2q_{\gd})) +\fR\\ & = \E^{\gd}(a_{\gd,\gd}(1-q_{\gd}^2))+\fR = \sum_{s=1}^m \gl_s a_{s,s}(1-q_s^2)+\fR. \end{split} \end{align} In the above computation, the first equality is due to the H\"older inequality~\eqref{h1} with $\gt_1=\infty, \gt_2 = 1$ in Proposition~\ref{Holder prop}, and the second equality is due to the property of $\gn_{0,s}$ as in Lemma~\ref{prod property}. By collecting all the terms, we have \begin{align}\label{U2} \begin{split} \gn(\ov{\vR}^{\T}_{12} S_{A} \ov{\vR}_{12}) & = \frac1N \sum_{s=1}^m \gl_s a_{s,s}(1-q_s^2)+ \gn\big(\ovR_{12}^{\T}S_{A} B(2)\gD^2\gL\ovR_{12}\big) \\ &\qquad - 4\gn\big(\ovR_{12}^{\T}S_{A} B(1)\gD^2\gL\ovR_{13}\big) + 3\gn\big(\ovR_{12}^{\T}S_{A} B(0)\gD^2\gL\ovR_{34}\big) + \fR.
\end{split} \end{align} By a similar argument, we get \begin{align}\label{U1} \begin{split} \gn(\ov{\vR}^{\T}_{12} S_{A} \ov{\vR}_{13}) & = \frac1N\sum_{s=1}^m \gl_s a_{s,s}(q_s-q_s^2)+\gn(\ovR_{12}^{\T}S_{A} B(1)\gD^2\gL\ovR_{12}) \\ &\qquad + \gn(\ovR_{12}^{\T}S_{A} (B(2)-2B(1)-3B(0))\gD^2\gL\ovR_{13})\\ &\qquad\qquad + \gn(\ovR_{12}^{\T}S_{A} (6B(0)-3B(1))\gD^2\gL\ovR_{34}) + \fR, \end{split} \end{align} and \begin{align}\label{U0} \begin{split} \gn(\ov{\vR}^{\T}_{12} S_{A} \ov{\vR}_{34}) & = \frac1N\sum_{s=1}^m \gl_s a_{s,s}(\hat{q}_s-q_s^2)+\gn(\ovR_{12}^{\T}S_{A} B(0)\gD^2\gL\ovR_{12}) \\ &\qquad + \gn(\ovR_{12}^{\T}S_{A} (4B(1)-8B(0))\gD^2\gL\ovR_{13})\\ &\qquad\qquad + \gn(\ovR_{12}^{\T}S_{A} (B(2)-8B(1)+10B(0))\gD^2\gL\ovR_{34}) + \fR. \end{split} \end{align} We define the $m\times m$ diagonal matrices \begin{align}\label{Theta} \Theta(i) :=\gb^{-2}B(i)\gL^{-1},\qquad \text{ for } i=0,1,2, \end{align} so that \begin{align*} \sum_{s=1}^m \gl_s a_{s,s}(1-q_s^2) = \tr(S_{A}\Theta(2)),\quad & \quad \sum_{s=1}^m \gl_s a_{s,s}(q_s-q_s^2) = \tr(S_{A}\Theta(1)), \\ \text{ and } \sum_{s=1}^m \gl_s a_{s,s}(\hat{q}_s-q_s^2) & = \tr(S_{A}\Theta(0)). 
\end{align*} We also define the $m\times m$ matrices (not necessarily symmetric) \begin{align}\label{Bhat} \hat{B}(i):=B(i)\gD^2\gL, \qquad \text{ for } i=0,1,2, \end{align} Thus, from equations~\eqref{U2},~\eqref{U1} and~\eqref{U0}, for any $m\times m$ symmetric matrix $S$ we have \begin{align} \begin{split} \nu( \ovR_{12}^{\T}S \ovR_{12}) & = \nu( \ovR_{12}^{\T}S \hat{B}(2) \ovR_{12}) - 4 \nu(\ovR_{12}^{\T}S \hat{B}(1)\ovR_{13}) \\ &\qquad + 3\nu(\ovR_{12}^{\T}S \hat{B}(0)\ovR_{34}) + \frac1N\tr(S \Theta(2)) + \fR,\\ \nu(\ovR_{12}^{\T}S\ovR_{13})& = \nu(\ovR_{12}^{\T}S \hat{B}(1)\ovR_{12}) + \nu(\ovR_{12}^{\T}S (\hat{B}(2)-2\hat{B}(1)-3\hat{B}(0))\ovR_{13})\\ &\qquad + \nu(\ovR_{12}^{\T}S (6\hat{B}(0)-3\hat{B}(1))\ovR_{34}) +\frac1N\tr(S \Theta(1)) + \fR,\\ \nu(\ovR_{12}^{\T}S\ovR_{34})&= \nu(\ovR_{12}^{\T}S \hat{B}(0)\ovR_{12}) + \nu(\ovR_{12}^{\T}S (4\hat{B}(1)-8\hat{B}(0))\ovR_{13})\\ &\qquad + \nu(\ovR_{12}^{\T}S (\hat{B}(2)-8\hat{B}(1)+10\hat{B}(0))\ovR_{34}) +\frac1N\tr(S \Theta(0))+ \fR. \end{split} \label{eq:1}\end{align} The system of equations~\eqref{eq:1} can be written as a system of linear equations in the variables $\nu((R_{12}^{(s)}-q_{s})(R^{(t)} _{\ell\ell'}- q_{t})), s\le t, \ell<\ell'$ as follows. We define the three $m\times m$ matrices \begin{align*} U(0):=\nu(\ovR_{12}\ovR_{34}^{\T}),\ U(1):=\nu(\ovR_{12}\ovR_{13}^{\T}),\ U(2):=\nu(\ovR_{12}\ovR_{12}^{\T}). \end{align*} Thus equation~\eqref{eq:1} can be restated in the following way: for any symmetric $m\times m$ matrix $S$ \begin{align}\label{eq:2} \tr(SU(i)) = \sum_{j=0}^2\sum_{k=0}^2 C_{ij}(k)\tr(S\hat{B}(k)U(j)) + \frac1N\tr(S \Theta(i)) + \fR,\qquad i=0,1,2, \end{align} where \begin{align*} C(0):= \begin{pmatrix} 10 & -8 & 1 \\6 &-3&0\\3 &0&0 \end{pmatrix} , \quad C(1):= \begin{pmatrix} -8 & 4 & 0 \\-3&-2&1\\0 &-4&0 \end{pmatrix} \text{ and } C(2):= \begin{pmatrix} 1 & 0 & 0 \\0&1&0\\0&0&1 \end{pmatrix} \end{align*} (rows and columns are indexed by $0,1,2$). 
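The structure of the coefficient matrices above can be verified directly. The following NumPy sketch checks that $C(0), C(1), C(2)$ pairwise commute, and that conjugation by the matrix $V$ used in Section~\ref{sec:varoverlap} simultaneously upper-triangularizes them into the matrices $T(k)$ given there.

```python
import numpy as np

C = {0: np.array([[10, -8, 1], [6, -3, 0], [3, 0, 0]]),
     1: np.array([[-8, 4, 0], [-3, -2, 1], [0, -4, 0]]),
     2: np.eye(3, dtype=int)}

# pairwise commutativity of C(0), C(1), C(2)
for i in range(3):
    for j in range(3):
        assert np.array_equal(C[i] @ C[j], C[j] @ C[i])

# simultaneous upper-triangularization by the matrix V of Section 5
V = np.array([[-3, 2, 0], [3, -4, 1], [1, -2, 1]])
Vinv = np.linalg.inv(V)
for k in range(3):
    T = V @ C[k] @ Vinv
    assert np.allclose(np.tril(T, -1), 0)    # strictly lower part vanishes

assert np.allclose(V @ C[0] @ Vinv, [[3, -3, 0], [0, 3, 0], [0, 0, 1]])
assert np.allclose(V @ C[1] @ Vinv, [[-4, 2, 0], [0, -4, 0], [0, 0, -2]])
```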
The matrices $C(0), C(1), C(2)$ commute with each other and thus are simultaneously ``upper-triangularizable''. This fact is crucially used in the variance computation for the overlap in Section~\ref{sec:varoverlap}. Explaining the commutativity property of the $C(\cdot)$ matrices involves understanding the underlying algebraic structure of the Gibbs measure and is an interesting open question. Note that this phenomenon appears in both SK and MSK models. One plausible explanation is some high-level symmetry among replicas in the RS regime. Also, in the MSK model, it is likely to be connected to the synchronization property of overlap discovered by Panchenko (see~\cite{Pan15}). Note that the upper bound on $\fR$ depends on $S$ only through $\max_{1\le r,t\le m}|S_{r,t}|$. We denote by $\ve_{i}$ the column vector with $1$ in the $i$-th coordinate and zero elsewhere. Taking $S=(\ve_{l}\ve_{k}^{\T}+\ve_{k}\ve_{l}^{\T})/2$ for all possible choices of $1\le k\le l \le m$, from equation~\eqref{eq:2}, we get \begin{align}\label{eq:3} U(i) = \sum_{j=0}^2\sum_{k=0}^2 C_{ij}(k)\cdot\sym(\hat{B}(k)U(j)) + \frac1N\Theta(i) + \mfR, \quad \text{for} \quad i=0,1,2, \end{align} where $\sym(A):=(A+A^{\T})/2$ for a square matrix $A$ and $\mfR$ is an $m\times m$ matrix with \[ \max_{p,q}|\mfR_{p,q}|\le K\left(N^{-3/2} + \sum_{s=1}^{m}\nu(|R_{12}^{(s-)}-q_s|^{3})\right). \] \begin{rem} The above argument does not require the positive semi-definiteness of $\gD^2$. It seems that at least in a high-temperature region, the RS solution given by the Parisi formula~\cite{Pan15} is still valid even for indefinite $\gD^2$. \end{rem} \section{Variance of overlap} \label{sec:varoverlap} In Section~\ref{sec:cavity}, we introduced a species-wise cavity method for the MSK model to derive a linear system involving the quadratic form of overlap. In this section, by studying the linear system, we solve the overlap vectors' variance-covariance structure.
Along the way, we obtain the AT-line condition in the MSK model. \subsection{Proof of Theorem~\ref{thm:lyap}} We recall the notations \begin{align*} b_{s}(0) & =\gb^2 (\hat{q}_{s}-q_s^2),\quad b_{s}(1) =\gb^2 q_s(1-q_s),\quad b_{s}(2) = \gb^2(1-q_s^2) \end{align*} for $s=1,2,\ldots,m$; and \begin{align*} B(i) = \diag(b_1(i), b_2(i),\ldots, b_m(i)),\quad \Theta(i) =\gb^{-2}B(i)\gL^{-1},\quad \hat{B}(i) =B(i)\gD^2\gL \end{align*} for $i=0,1,2.$ Then with \begin{align*} C(0):= \begin{pmatrix} 10 & -8 & 1 \\6 &-3&0\\3 &0&0 \end{pmatrix} , \quad C(1):= \begin{pmatrix} -8 & 4 & 0 \\-3&-2&1\\0 &-4&0 \end{pmatrix} \text{ and } C(2):= \begin{pmatrix} 1 & 0 & 0 \\0&1&0\\0&0&1 \end{pmatrix} \end{align*} (row and column indexed by $0,1,2$), we have from~\eqref{eq:3} \begin{align*} U(i) = \sum_{j=0}^2\sum_{k=0}^2 C_{ij}(k)\cdot\sym(\hat{B}(k)U(j)) + \frac1N\Theta(i) + \mfR,\qquad i=0,1,2, \end{align*} where $\sym(A):=(A+A^{\T})/2$. The matrices, $C(0),C(1),C(2)$, commute with each other and thus are simultaneously ``upper-triangularizable''. In particular, with \[ V= \begin{pmatrix} -3 & 2 & 0 \\3 &-4&1\\1&-2&1 \end{pmatrix} \] and defining $T(k):=VC(k)V^{-1}, k=0,1,2$ we have \begin{align*} T(0) = \begin{pmatrix} 3 & -3 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 1 \end{pmatrix} , T(1) = \begin{pmatrix} -4 & 2 & 0 \\ 0 & -4 & 0 \\ 0 & 0 & -2 \end{pmatrix} , T(2)= \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}. \end{align*} Define $\hat{U}(l):=\sum_{i=0}^{2}V_{li}U(i)$ for $l=0,1,2$, \ie\ \begin{align*} \hat{U}(0) & := -3U(0)+2U(1), \\ \hat{U}(1) & := 3U(0)-4U(1)+U(2), \\ \hat{U}(2) & := U(0)-2U(1)+U(2). \end{align*} Similarly, we define \[ \hat{\Theta}(l):=\sum_{i=0}^{2}V_{li}\Theta(i) \text{ for } l=0,1,2. 
\] We have for $l=0,1,2$, \begin{align*} \hat{U}(l) & =\sum_{i=0}^{2}V_{li} \sum_{j=0}^2\sum_{k=0}^2 C_{ij}(k)\cdot\sym(\hat{B}(k)U(j)) + \frac1N \sum_{i=0}^{2}V_{li}\Theta(i) + \mfR \\ & =\sum_{j=0}^2 \sum_{k=0}^2(VC(k))_{lj}\cdot\sym(\hat{B}(k)U(j)) + \frac1N \hat{\Theta}(l) + \mfR \\ & =\sum_{k=0}^2 \sym\bigl(\hat{B}(k)\cdot \sum_{j=0}^2(T(k)V)_{lj} U(j) \bigr) + \frac1N \hat{\Theta}(l) + \mfR. \end{align*} Simplifying, we get \begin{align}\label{Lyapunov} \begin{split} \sym\biggl( \bigl(I-\sum_{i=0}^{2}V_{li}\hat{B}(i)\bigr) \cdot \hat{U}(l)\biggr) &= \frac1N \hat\Theta(l) + \mfR,\qquad l=1,2,\\ \text{ and } \sym\biggl( \bigl(I-\sum_{i=0}^{2}V_{1i}\hat{B}(i)\bigr) \cdot \hat{U}(0)\biggr) &= \sym\biggl( \sum_{i=0}^{2}V_{0i}\hat{B}(i) \cdot \hat{U}(1)\biggr) + \frac1N \hat\Theta(0) + \mfR. \end{split} \end{align} Now consider the $m\times m$ diagonal matrices $\gC, \gC', \gC''$, whose $s$-th diagonal entries are respectively given by \begin{align*} \gc_s & = \gb^{-2}(b_s(0)-2b_s(1)+b_s(2)) = 1-2q_{s}+\hat{q}_s, \\ \gc'_s & = \gb^{-2}(3 b_s(0)-4b_s(1)+b_s(2))= 1 -4q_{s}+3\hat{q}_s, \\ \gc''_s & = \gb^{-2}(-3b_s(0)+2b_s(1)) =2q_s +q_s^2 -3\hat{q}_s. \end{align*} Then, \begin{align*} \sum_{i=0}^{2}V_{2i}\hat{B}(i) & = \gb^{2}\gC\gD^{2}\gL,\quad \sum_{i=0}^{2}V_{1i}\hat{B}(i) = \gb^{2}\gC'\gD^{2}\gL,\quad \sum_{i=0}^{2}V_{0i}\hat{B}(i) = \gb^{2}\gC''\gD^{2}\gL, \\ \hat\Theta(2) & = \gC\gL^{-1},\quad \hat\Theta(1) = \gC'\gL^{-1},\quad \hat\Theta(0) = \gC''\gL^{-1}. \end{align*} Combining with~\eqref{Lyapunov} we have the result.\hfill\qed Before going to the proof of Theorem~\ref{thm:varoverlap}, we will prove the following lemma, which essentially solves the continuous Lyapunov equation. Recall that a square matrix $A$ is \emph{stable} if all the eigenvalues of $A$ have a strictly negative real part. \begin{lem}\label{lem:cle} Let $A$ be a $m\times m$ stable matrix, and $C$ be a symmetric matrix. 
Suppose that the symmetric matrix $X$ satisfies the equation \[ \sym(AX) = - C. \] Then we have \[ X=\int_{0}^{\infty}e^{\frac{t}{2}A}Ce^{\frac{t}{2}A^{\T}}dt. \] \end{lem} \begin{proof}[Proof of Lemma~\ref{lem:cle}] First we consider the case when $A$ is stable and similar to a diagonal matrix, \ie\ $A=SDS^{-1}$ for a diagonal matrix $D$ with negative diagonal entries and an invertible matrix $S$. We can write $\sym(AX)= S\sym(DS^{-1} XS^{-\T})S^{\T}$ and solve for $\sym(DY) = - S^{-1}CS^{-\T}$ where $Y=S^{-1} XS^{-\T}$. Furthermore, \[ \int_{0}^{\infty} Se^{\frac{t}{2}D}S^{-1}CS^{-\T}e^{\frac{t}{2}D}S^{\T}dt = \int_{0}^{\infty}e^{\frac{t}{2}A}Ce^{\frac{t}{2}A^{\T}}dt. \] Thus, w.l.o.g.~$A$ can be taken as a diagonal matrix. We write $\vect(A)$ as the $m^{2}\times 1$ vector formed by stacking the columns of $A$, \ie\ $\vect(A) = \sum_{s=1}^{m}e_s\otimes Ae_s=\sum_{s=1}^{m}A^{\T}e_s\otimes e_s$, where $e_s$ is the canonical basis vector with 1 at the $s$-th entry and 0 elsewhere, and $\otimes$ is the Kronecker product of matrices. We define a linear map $\phi$ from $m \times m$ matrices to $m^2 \times m^2$ matrices: \[ \phi(A) := \frac 12(A\otimes I + I\otimes A). \] One can easily check that $\vect(AXB^{\T})=(B\otimes A)\vect(X)$ for any matrix $X$, and thus we have \[ \vect(\sym(AX)) = \phi(A)\vect(X). \] From $\sym(AX)=-C$, we get \begin{align}\label{eq:3a} \phi(A)\vect(X) = -\vect(C). \end{align} Since $A$ is a negative definite diagonal matrix, $\phi(A)$ is also a negative definite diagonal matrix and hence invertible with inverse given by \[ (\phi(A))^{-1}=-\int_{0}^{\infty}e^{t\phi(A)}dt=-\int_{0}^{\infty}e^{\frac{t}{2}A}\otimes e^{\frac{t}{2}A}dt. \] Thus, we have \[ \vect(X)=-(\phi(A))^{-1}\vect(C) = \int_{0}^{\infty} (e^{\frac{t}{2}A}\otimes e^{\frac{t}{2}A}) \vect(C) dt = \int_{0}^{\infty} \vect(e^{\frac{t}{2}A} Ce^{\frac{t}{2}A} ) dt \] and $$ X= \int_{0}^{\infty}e^{\frac{t}{2}A} Ce^{\frac{t}{2}A}dt.
$$ In the general case, by Jordan decomposition, $A$ is similar to an upper triangular matrix with diagonal entries having negative real parts. The exact proof goes through. \end{proof} \subsection{Proof of Theorem~\ref{thm:varoverlap}} First we note that $0\le \gc_{s}\le 1, -1< \gc_{s}'\le \gc_{s}\le 1$ and thus $\rho(\gC\gD^2\gL)\le \rho(\gD^2\gL), \rho(\gC'\gD^2\gL)\le \rho(\gD^2\gL)$. If $\gb^{2} <\gb_c^2=\rho(\gD^2\gL)^{-1}$, it is easy to see that the matrices $\gb^2\gC\gD^{2}\gL-I, \gb^2\gC'\gD^{2}\gL-I$ are stable. Furthermore, we have \begin{align*} \hat{U}(2) & = \frac1N\cdot \int_{0}^{\infty} \exp\left(-\frac{t}{2}(I-\gb^2\gC\gD^{2}\gL)\right)\gC\gL^{-1}\exp\left(-\frac{t}{2}(I-\gb^2\gL\gD^{2}\gC)\right)dt + \mfR \\ & = \frac1N\cdot \int_{0}^{\infty} \gC^{\half}\gL^{-\half} \exp\left(-t(I-\gb^2\gC^{\half}\gL^{\half}\gD^{2}\gL^{\half}\gC^{\half})\right) \gC^{\half}\gL^{-\half}dt + \mfR \\ & = \frac1N\cdot \gC^{\half}\gL^{-\half} (I-\gb^2\gC^{\half}\gL^{\half}\gD^{2}\gL^{\half}\gC^{\half})^{-1} \gC^{\half}\gL^{-\half}+ \mfR \\ & = \frac1N\cdot \gC (I-\gb^2\gD^{2}\gL\gC)^{-1} \gL^{-1}+ \mfR. \end{align*} In the second equality, we used the fact that $ABA^T = B^{1/2}(B^{-1/2}AB^{1/2})(B^{-1/2}AB^{1/2})^T B^{1/2}$ with $A=\exp\left(-\frac{t}{2}(I-\gb^2\gC\gD^{2}\gL)\right)$ and $B:=\Gamma \gL^{-1}$, a diagonal matrix with positive entries. Similarly, we get \begin{align*} \hat{U}(1) & = \frac1N\cdot \gC' (I-\gb^2\gD^{2}\gL\gC')^{-1} \gL^{-1}+ \mfR \end{align*} and \begin{align}\label{eq:var-overlap-3} \hat{U}(0) & = \int_{0}^{\infty} \exp\left(-\frac{t}{2}(I-\gb^2\gC'\gD^{2}\gL)\right) \sym\bigl( \gb^{2}\gC''\gD^{2}\gL\cdot \hat{U}(1)\bigr) \exp\left(-\frac{t}{2}(I-\gb^2\gL\gD^{2}\gC')\right)dt \\ & +\frac1N\cdot \int_{0}^{\infty} \exp\left(-\frac{t}{2}(I-\gb^2\gC'\gD^{2}\gL)\right) \gC''\gL^{-1} \exp\left(-\frac{t}{2}(I-\gb^2\gL\gD^{2}\gC')\right)dt +\mfR. 
\end{align} Finally, we use the fact that $N\mfR$ converges to $0$ entrywise as $N\to\infty$.\hfill\qed \section{Uniqueness of $\vq_{\star}$ and Replica Symmetry Breaking }\label{sec:RSB} In this section, for \emph{positive-definite}~$\gD^2$, we prove the AT line condition of the MSK model when $h>0$, beyond which the MSK model is in the replica symmetry breaking phase. From the classical literature on the SK model, we know that the uniqueness of $\vq_{\star}$ is essential to characterize the AT line condition. In the SK model, the uniqueness was proved using a contraction argument for $\gb$ small, and the Latala-Guerra Lemma~\cite{Tal11a}*{Proposition 1.3.8} gives the uniqueness result for $h>0$. In the MSK model, we prove the uniqueness of $\vq_{\star}$ for small $\beta$ by extending the contraction argument, but extending the Latala-Guerra Lemma is more challenging, as it requires analyzing the complicated nonlinear system~\eqref{syseq}. In~\cite{BSS19}, an elementary approach was used to prove this for the \emph{positive-definite}~$2$-species model, but the idea seems difficult to generalize. We first prove the uniqueness result for small $\gb$ in Theorem~\ref{thm0}, and then prove Proposition~\ref{prop:indf-uniq}, which gives the uniqueness result for \emph{indefinite}~2-species models when $h>0$. \subsection{Proof of Theorem~\ref{thm0}} We need to prove that the following system \begin{align}\label{sys1} q_s = \E \tanh^2(\gb \eta \sqrt{(\gD^2\gL \vq)_{s}}+h),\qquad s=1,2,\ldots,m \end{align} has a unique solution where $\vq=(q_1,q_2,\ldots,q_m)^\T$. We rewrite the equations in terms of $\vx = (x_1,x_2,\ldots,x_m)^\T:=\gL^{\half} \vq$. Let $f(y) := \tanh^2(y)$. Then the system of equations~\eqref{sys1} is equivalent to \begin{align} x_s = {\psi}_s,\qquad s=1,2,\ldots,m \end{align} where $$ \psi_s := {\gl^{\half}_s}\E f(\gb \eta \sqrt{(\gD^2\gL^{\half}\vx)_s} + h). $$ We define ${\Psi}(\vx) = (\psi_1,\psi_{2}, \ldots, \psi_m )$.
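As a numerical aside, the system~\eqref{sys1} can be solved by straightforward fixed-point iteration in the contraction regime $\gb^2\rho(\gD^2\gL)<1$. The sketch below uses hypothetical two-species parameters and Gauss-Hermite quadrature for the Gaussian expectation; it iterates the system directly in $\vq$, which is equivalent to iterating $\Psi$ up to the change of variables $\vx = \gL^{\half}\vq$.

```python
import numpy as np

# hypothetical 2-species data
Delta2 = np.array([[1.0, 0.5], [0.5, 1.0]])   # Delta^2
Lam = np.diag([0.6, 0.4])                     # Lambda
beta, h = 0.5, 0.3

rho = max(abs(np.linalg.eigvals(Delta2 @ Lam)))
assert beta**2 * rho < 1                      # contraction regime

# E_eta f(eta) for eta ~ N(0,1), via Gauss-Hermite quadrature
nodes, weights = np.polynomial.hermite.hermgauss(60)
def gauss_mean(f):
    return weights @ f(np.sqrt(2.0) * nodes) / np.sqrt(np.pi)

def Phi(q):  # q -> (E tanh^2(beta*eta*sqrt((Delta^2 Lambda q)_s) + h))_s
    a = Delta2 @ Lam @ q
    return np.array([gauss_mean(lambda e, t=t: np.tanh(beta * e * np.sqrt(t) + h)**2)
                     for t in a])

q = np.zeros(2)
for _ in range(100):
    q = Phi(q)
assert np.allclose(q, Phi(q), atol=1e-12)     # fixed point of (sys1)
```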
It is easy to compute the Jacobian matrix of the map $\Psi$, \[ J({\Psi})= \left(\!\!\!\left(\frac{\partial \psi_s}{\partial x_t} \right)\!\!\!\right)_{s,t=1}^{m} \] where $$ \frac{\partial \psi_s}{\partial x_t} =\frac{\gb {\gl^{\half}_s}\gD_{s,t}^2{\gl^{\half}_t}}{2\sqrt{(\gD^2\gL^{\half}\vx)_s}} \E \eta f^{\prime}(\gb \eta \sqrt{(\gD^2\gL^{\half}\vx)_s}+h) = \frac{\gb^2}{2} {\gl^{\half}_s} \gD_{st}^2 {\gl^{\half}_t} \E f^{\prime \prime}(\gb \eta \sqrt{(\gD^2\gL^{\half}\vx)_s}+h) $$ and the last equality follows by Gaussian integration by parts. Now, note that \[ f^{\prime}(y) = 2\cdot \frac{\tanh (y)}{\cosh^2 (y)}, \ f^{\prime \prime}(y) = 2\cdot \frac{1- 2 \sinh^2(y)}{\cosh^4(y)} \in [-2,2]. \] In particular, we have \begin{align*} J({\Psi}) = \gb^2 A \gL^{\half} \gD^2 \gL^{\half} \end{align*} where $A=\diag(a_{1},a_{2},\ldots,a_{m})$ and $ a_s := \frac12\E f^{\prime \prime}(\gb \eta \sqrt{(\gD^2\gL^{\half}\vx)_s}+h)\in [-1,1] $ for $s=1,2,\ldots,m$. Thus \[ \norm{J(\Psi)} \le \gb^2 \norm{\gL^{\half}\gD^2 \gL^{\half}}= \gb^2 \rho(\gL^{\half}\gD^2 \gL^{\half})=\gb^2 \rho(\gD^2 \gL). \] In particular, $\gb^2 < \frac{1}{\rho(\gD^2 \gL)}$ implies that $\norm{J({\Psi})} <1$, \ie\ ${\Psi}$ is a contraction and the system~\eqref{sys1} has a unique solution. \hfill\qed Next, we present an elementary approach to prove Proposition~\ref{prop:indf-uniq}, which concerns the uniqueness of $\vq_{\star}$ for \emph{indefinite}~$\gD^2$ with $m=2$. \subsection{Proof of Proposition~\ref{prop:indf-uniq}} Recall $\vQ = A \vq$, where $A=\gD^2\gL$. For \emph{indefinite}~$\gD^2$, we have $\det(A) <0$, so that \[A^{-1} = \begin{pmatrix} -a & b \\ c & -d \end{pmatrix} \quad \text{where} \quad a,b,c,d >0. \] Then we can rewrite the system \[ \begin{cases} q_1 = \E \tanh^2(\gb \eta \sqrt{Q^1}+h), \\ q_2 = \E \tanh^2(\gb \eta \sqrt{Q^2}+h).
\end{cases} \] as \begin{align}\label{eq:trans} \begin{cases} -aQ^1+bQ^2 = \E \tanh^2(\gb \eta \sqrt{Q^1}+h), \\ cQ^1-dQ^2 = \E \tanh^2(\gb \eta \sqrt{Q^2}+h). \end{cases} \end{align} Let $g_1(x) := \frac{1}{b}(\E \tanh^2(\gb \eta \sqrt{x}+h)/x+a)$ and $g_2(x) := \frac{1}{c}(\E \tanh^2(\gb \eta \sqrt{x}+h)/x+d)$. By the classical Latala-Guerra lemma~\cite{Tal11a}*{Appendix A.14}, $g_1(x),g_2(x)$ are both strictly decreasing on $\mathbb{R}^+$. Equation~\eqref{eq:trans} can be rewritten further: \begin{align}\label{eq:trans2} \begin{cases} Q^1 = Q^2g_2(Q^2), \\ Q^2 = Q^1g_1(Q^1). \end{cases} \end{align} From this, we get \[ g_1(Q^1) g_2(Q^2) =1, \] so that $Q^2=g_2^{-1}\left(\frac{1}{g_1(Q^1)}\right) $ is a decreasing function of $Q^1$. On the other hand, cross-multiplying the two equations in~\eqref{eq:trans2}, we have \begin{align}\label{eq:trans3} (Q^2)^2 g_2(Q^2) =( Q^1)^2g_1(Q^1). \end{align} Let $h_i(x) = x^2 g_i(x), i=1,2.$ Next we show $h_1,h_2$ are both increasing functions on $\mathbb{R}^+$. Indeed, \begin{align*} h_1(x) = x^2g_1(x) & = \frac{x\E \tanh^2(\gb \eta \sqrt{x}+h)+ax^2}{b} \\ & = \int_{-\infty}^{\infty} \frac{\sqrt{x}}{b} \tanh^2(y) \frac{1}{\sqrt{2\pi \gb^2}} e^{-\frac{(y-h)^2}{2\gb^2x}} dy+ \frac{ax^2}{b}, \end{align*} where we expanded the expectation as a Gaussian integral. From this integral form, it is easy to check that $h_1(x)$ is a strictly increasing function of $x \in \mathbb{R}^+$. Similarly, we can prove $h_2$ is also strictly increasing. From~\eqref{eq:trans3}, we have \[ h_1(Q^1) = h_2(Q^2). \] Thus $Q^2 = h_2^{-1}\left(h_1(Q^1) \right)$ is strictly increasing as a function of $Q^1$. Recall $Q^2 = g_2^{-1}\left(\frac{1}{g_1(Q^1)}\right)$ is a decreasing function of $Q^1$. Therefore, we conclude the uniqueness of $Q^1$, and similarly of $Q^2$. The proof is complete. \hfill \qed In the following part, we will prove Theorem~\ref{AT line}.
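The two monotonicity facts driving the proof just given — $g_i$ strictly decreasing and $h_i(x)=x^2g_i(x)$ strictly increasing on $\mathbb{R}^+$ — are easy to observe numerically. Here is a small sketch with hypothetical positive constants $a,b$ standing for entries of $A^{-1}$, using Gauss-Hermite quadrature for the Gaussian expectation.

```python
import numpy as np

beta, h = 0.8, 0.4
a, b = 1.2, 0.7        # hypothetical positive entries of A^{-1}

nodes, weights = np.polynomial.hermite.hermgauss(80)
def t(x):              # E tanh^2(beta*eta*sqrt(x) + h), eta ~ N(0,1)
    return weights @ np.tanh(beta * np.sqrt(2.0 * x) * nodes + h)**2 / np.sqrt(np.pi)

xs = np.linspace(0.05, 5.0, 200)
g1 = np.array([(t(x) / x + a) / b for x in xs])   # claimed strictly decreasing
h1 = xs**2 * g1                                   # claimed strictly increasing

assert np.all(np.diff(g1) < 0)
assert np.all(np.diff(h1) > 0)
```

The decrease of $t(x)/x$ is the content of the Latala-Guerra lemma; the growth of $h_1$ matches the Gaussian-integral argument above.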
As we mentioned before, we assume $\gD^2$ is \emph{positive-definite}~because the Parisi formula is only known in that case. We further assume the uniqueness of $\vq_{\star}$ when $h>0$, in order to characterize the AT line condition accurately. Consider $\vp \in [0,1]^m$ with $p^s\ge q_{\star}^s$ for all $s=1,2,\ldots, m$, and define \begin{align}\label{RSB-V} V(\vp) = \frac{\partial \sP_{\text{1RSB}}(\vq_{\star},\vp,\zeta)}{\partial \zeta} \bigg |_{\zeta=1} , \end{align} where $\sP_{\text{1RSB}}$ is the Parisi functional of 1-step replica symmetry breaking. The explicit expression and related computations can be found in the appendix of~\cite{BSS19}. The following lemma gives some useful properties of $V(\vp)$. \begin{lem} [{\cite{BSS19}*{Lemma~3.1}}] Fix any $\vq=\vq_{\star} \in \cC(\gb,h)$. The following identities hold: \begin{enumerate} \item $V(\vq_{\star}) =0$. \item $\nabla V(\vq_{\star}) =0$. \item $HV(\vq_{\star}) = \gb^2\gL(\gb^2\gD^2\gL\Gamma\gD^2-\gD^2)\gL$, where $\gL, \Gamma$ are diagonal matrices defined in Theorem~\ref{AT line}. \end{enumerate} \end{lem} The following corollary characterizes the RSB condition using $HV(\vq_{\star})$ in the above lemma. \begin{cor} [{\cite{BSS19}*{Corollary 3.2}}]\label{RSB-cor} Assume the Parisi formula~\eqref{par}, and that $\text{RS}(\gb,h) = \sP_{\RS}(\vq_{\star})$ for some $\vq_{\star} \in \cC(\gb,h)$. If there exists a vector $\vx \in \bR^m$ with all nonnegative entries such that $\vx^{\T}HV(\vq_{\star})\vx>0$, then \[ \lim_{N \to \infty}F_N < \RS(\gb,h) . \] \end{cor} Now we use the characterization of RSB in Corollary~\ref{RSB-cor} to prove Theorem~\ref{AT line}.
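The way Corollary~\ref{RSB-cor} gets applied can be previewed numerically: with $HV(\vq_{\star}) = \gb^2\gL(\gb^2\gD^2\gL\Gamma\gD^2-\gD^2)\gL$, the Perron eigenvector of $\Gamma^{1/2}\gL^{1/2}\gD^2\gL^{1/2}\Gamma^{1/2}$ yields a nonnegative direction whose quadratic form changes sign exactly at $\gb^2\rho(\Gamma\gD^2\gL)=1$. A sketch with hypothetical two-species data:

```python
import numpy as np

# hypothetical 2-species data: Delta^2 positive definite, Lambda, Gamma diagonal
Delta2 = np.array([[1.0, 0.4], [0.4, 0.8]])
Lam = np.diag([0.7, 0.3])
Gam = np.diag([0.5, 0.6])

S = np.sqrt(Lam @ Gam)                       # (Lambda Gamma)^{1/2}, diagonal
M = S @ Delta2 @ S                           # symmetric, entrywise positive
evals, evecs = np.linalg.eigh(M)
rho, u = evals[-1], np.abs(evecs[:, -1])     # Perron eigenvalue and unit eigenvector
x = np.linalg.inv(np.sqrt(Lam)) @ np.sqrt(Gam) @ u   # x has positive entries

for beta2 in (0.5 / rho, 2.0 / rho):         # below and above the AT threshold
    HV = beta2 * Lam @ (beta2 * Delta2 @ Lam @ Gam @ Delta2 - Delta2) @ Lam
    val = x @ HV @ x
    # identity from the proof: x^T HV x = beta^2 * rho * (beta^2 * rho - 1)
    assert np.isclose(val, beta2 * rho * (beta2 * rho - 1.0))
    assert (val > 0) == (beta2 * rho > 1.0)
```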
\subsection{Proof of Theorem~\ref{AT line}} By its definition, the matrix $\Gamma^{\half}\gL^{\half}\gD^2\gL^{\half}\Gamma^{\half}$ has all entries positive. By the Perron-Frobenius theorem, there exists an eigenvector $\vu=(u_1,u_2,\ldots, u_m) \in \bR^m $ associated to the largest eigenvalue $\rho(\gC\gD^2\gL)>0$ such that $u_s >0 $ for $s=1,2,\ldots,m$. Without loss of generality, we assume $\vu$ is a unit vector, \ie\ $\vu^{\T}\vu=1$. Take $\vx=\gL^{-\half}\Gamma^{\half}\vu$, which has positive entries. Then \begin{align*} \vx^{\T}\left(\gL(\gb^2\gD^2\gL\Gamma\gD^2-\gD^2)\gL\right)\vx & = \vu^{\T}\left[\Gamma^{\half}\gL^{-\half}\gL(\gb^2 \gD^2\gL\Gamma\gD^2-\gD^2)\gL\gL^{-\half}\Gamma^{\half}\right]\vu \\ & = \vu^{\T}\left(\gb^2 [(\gL\Gamma)^{\half}\gD^2(\gL\Gamma)^{\half}]^2\vu-[(\gL\Gamma)^{\half}\gD^2(\gL\Gamma)^{\half}]\vu \right) \\ & = \rho(\gC\gD^2\gL)\left({\gb^2\rho(\gC\gD^2 \gL)-1}\right), \end{align*} where the last step uses the fact that $\vu$ is an eigenvector associated with $\rho(\gC\gD^2\gL)$ and $\vu^{\T}\vu =1$. If $\gb^2 > \frac{1}{\rho(\gC\gD^2\gL)}$, we have \[ \vx^{\T}\left(\gL(\gb^2\gD^2\gL\Gamma\gD^2-\gD^2)\gL\right)\vx >0 ,\] \ie\ $\vx^{\T}HV(\vq_{\star})\vx > 0 $. By Corollary~\ref{RSB-cor}, we have \[ \lim_{N \to \infty} F_N < \RS(\gb,h) ,\] and this completes the proof. \hfill\qed \section{Discussions and Further Questions}\label{sec:openqn} In this section, we discuss some open questions raised in the previous sections. \begin{itemize} \item In Theorem~\ref{RS solution}, the RS solution in the \emph{indefinite}~$\gD^2$ case, obtained using Guerra's interpolation, has the same expression as evaluating the Parisi functional $\sP_{\text{RS}}$ at $\vq_{\star}$, while $\vq_{\star}$ is no longer a minimizer of $\sP_{\text{RS}}$, unlike in the \emph{positive-definite}~case. This fact suggests that a modified Parisi formula as conjectured in~\cite{BGG11} should be true at least in the RS regime.
Our argument verifies this in part of the RS regime. The question is whether one can prove it in the whole RS regime by adapting Guerra's interpolation. \item For \emph{positive-definite}~$\gD^2$, when $h>0$, analyze the uniqueness of the solution to the nonlinear system: \begin{align*} q_s = \E \tanh^2(\gb \eta \sqrt{(\gD^2\gL\vq)_s}+h), \quad \text{for} \quad s=1,2,\ldots, m. \end{align*} As we mentioned, the uniqueness condition is essential to characterize the AT line in Theorem~\ref{AT line}. In~\cite{BSS19}, it was proved in the 2-species case, where one needs to identify the signs of matrix entries, but this is nearly impossible for general $m>2$. Similarly for the \emph{indefinite}~case with $m>2$. \item In Theorem~\ref{AT line}, we proved the RSB condition when $h>0$. In the case $h=0$, proving RSB when $\gb>\gb_c$ is still challenging. In that case $\cC(\gb,0)=\{0,\vq_{\diamond}\}$ and $\RS(\gb,0):=\sP_{\RS}(\vq_{\diamond})$ achieves the minimum at $\vq_{\diamond}$; how to prove $\lim_{N \to \infty} F_N(\gb)< \RS(\gb,0)$ is still unclear and depends on a good understanding of $\vq_{\diamond}$. \item Can one prove or disprove Conjecture~\ref{AT conj}? In this case, one also needs to understand the solution to~\eqref{syseq} for \emph{indefinite}~$\gD^2$. \item In Theorem~\ref{thm:varoverlap}, we proved the asymptotic variance of the overlap; the next question is a central limit theorem. In the classical SK model, Talagrand~\cite{Tal11a} proved it using the moment method, but adapting the idea for the MSK model seems hopeless because we now have overlap vectors and many computations involve matrices. One has to find a different method to prove it. \item Note that the function \[ \vX(t):=\int_{0}^{-\ln(1-t)} e^{-\frac{s}{2}(I-\gb^2\gC\gD^2\gL)} \gC e^{-\frac{s}{2}(I-\gb^2\gL\gD^2\gC)}ds,\ t\in[0,1] \] is the unique solution of \begin{align*} (1-t)\vX''(t) = \gb^2\cdot \sym(\vX'(0)\gD^2\gL\vX'(t)) \text{ with } \vX(0)=0, \vX'(0) = \gC .
\end{align*} when $I-\gb^2\gC\gD^2\gL$ has all eigenvalues strictly positive. Is it possible to connect this equation with Guerra's interpolation? \item Finally, the continuous Lyapunov equation arises in studying the variance-covariance matrices for OU-type SDEs of the form $dX_{t}=AX_{t}dt+CdB_{t}$, where $A$ is a stable matrix and $\lim_{t\to\infty}\var(X_{t})=\int_{0}^{\infty}e^{tA}CC^{\T}e^{tA^{\T}}dt$. It will be interesting to connect the appearance of the continuous Lyapunov equation in the replica symmetric MSK model with an appropriate SDE. \end{itemize} \noindent{\bf Acknowledgments.} The authors would like to thank Erik Bates, Jean-Christophe Mourrat, and Dmitry Panchenko for reading the manuscript and insightful comments, and the two anonymous referees for providing helpful suggestions and additional references that improved the clarity and presentation of the paper.
\section{Introduction} After Buchberger initiated his celebrated algorithm in his remarkable PhD thesis \cite{Buc65}, the theory of Gr\"obner\ bases has been established as a standard tool in algebraic geometry and computer algebra, yielding algorithmic solutions to many significant problems in mathematics, science and engineering \cite{BW98}. As a result, there have been many excellent textbooks on the subject such as \cite{AL94} \cite{BW93} \cite{CLO15} \cite{KR00} \cite{DL06} \cite{GP08} \cite{EH12} \cite{GG13}. Nonetheless the computational complexity of Gr\"obner\ bases often demands an enormous amount of computing time and storage space even for problems of moderate sizes, which severely impedes its practicality and dissemination. A striking phenomenon is in the computation of Gr\"obner\ bases over the rational field with respect to the \lex\ ordering, when the coefficients of the final basis elements swell to extremely complicated rational numerals even though the coefficients of the original ideal generators are quite moderate. Example \ref{Expl:FullModularAlgo} in this paper should be impressive enough to illustrate such a phenomenon. An even more dramatic phenomenon is the ``intermediate expression swell'' referring to the generation of a huge number of intermediate polynomials with much more gigantic coefficients and larger exponents than those of the final basis elements during the implementation of the classical algorithm. In \cite[P116]{CLO15} and \cite[P616, 21.7]{GG13} there are some brief reviews on the complexity issues associated with the classical algorithm. These challenges have stimulated decades of ardent endeavors in improving the efficiency of the classical algorithm.
The modular and $p$-adic techniques based on ``lucky primes'' and Hensel lifting have been adopted to control the rampant growth of the intermediate coefficients albeit being limited to numeral coefficients only \cite{Ebe83} \cite{Win87} \cite{ST89} \cite{Tra89} \cite{Pau92} \cite{Gra93} \cite{Arn03}. There are also Gr\"obner\ basis conversion methods such as the FGLM algorithm \cite{FGL93} and Gr\"obner\ Walk \cite{CKM97}, a detailed description of which can be found in \cite[P49, \S 3; P436, \S 5]{CLO05} and \cite{Stu95}. The idea of these methods is to compute another Gr\"obner\ basis with respect to a different but favorable monomial ordering before converting it to the desired Gr\"obner\ basis. Despite all these endeavors over the decades, the high-level complexity associated with the Gr\"obner\ basis computations remains a conundrum. Another train of thoughts over the past decades is to generalize Gr\"obner\ bases from over fields to over rings. Among the copious and disparate coefficient rings that we shall not enumerate here, the Gr\"obner\ bases over principal ideal rings are pertinent to the new type of bases in this paper. There is an excellent exposition on Gr\"obner\ bases over rings and especially over PIDs in \cite[Chapter 4]{AL94}. However the focal point of the exposition is on the strong Gr\"obner\ bases that resemble the Gr\"obner\ bases over fields and hence are still plagued with complexity problems. In this paper we take a novel approach by defining a new type of bases over principal quotient rings instead of over numeral fields like Gr\"obner\ bases. It is a natural approach since for a zero-dimensional ideal, the final eliminant is always a univariate polynomial after eliminating all the other variables. With the principal ideals generated by the eliminant factors serving as moduli, we obtain an elegant decomposition of the original ideal into pairwise relatively prime ideals.
We also use pseudo-divisions and multipliers to enhance computational efficiency. In the exemplary computations in Section \ref{Sec:Examples}, it is conspicuous that this new approach scales down both the high-level complexity and the gigantic numerical coefficients of Gr\"obner\ bases over the rational field. In practice Wu's method \cite{Wu83} is more commonly used than the Gr\"obner\ basis method since it is based on pseudo-divisions and is thus more efficient. However the pseudo-divisions adopted by Wu's method usually lose too much algebraic information about the original ideals. In Section \ref{Sec:DivisionAlgm} we recall some rudimentary facts on monomial orderings and then define pseudo-divisions over PIDs. The multipliers for the pseudo-divisions in this paper are always univariate polynomials so as to avoid losing too much algebraic information about the original ideals. The pseudo-divisions of this ilk also dispose of the solvability condition for the linear equations of leading coefficients imposed by the classical division algorithm over rings. Please refer to Remark \ref{Rmk:LinearEqs} for details. Algorithm \ref{Algo:PseudoEliminant} is one of the pivotal algorithms in the paper. It computes the pseudo-eliminant and pseudo-basis as per the elimination ordering in Definition \ref{Def:EliminationOrdering}. The purpose of Corollary \ref{Cor:CoprimePair} and Lemma \ref{Lemma:TriangleIdentity} is to trim down the number of $S$-polynomials to be pseudo-reduced. They are highly effective in this respect as illustrated by the exemplary computations in Section \ref{Sec:Examples}. The pseudo-eliminant might contain factors that are not the bona fide ones of the eliminant. The discrimination among these factors for authenticity is based on a crucial methodology: the pseudo-eliminant should be compared with the multipliers of the pseudo-divisions instead of the leading coefficients of the basis elements.
Example \ref{Expl:MultipliersCount} shows that the multipliers of the pseudo-divisions are more reliable than the leading coefficients of the basis elements. The multiplier methodology is incorporated into Theorem \ref{Thm:CompatiblePart}, which establishes that the compatible part of the pseudo-eliminant constitutes a bona fide factor of the eliminant. This is one of the primary conclusions of the paper. The multiplier and its property in Lemma \ref{Lemma:SyzygyTransform} generalize the syzygy theory over fields and PIDs for Gr\"obner\ bases and are another substantiation of the multiplier methodology. Please refer to Remark \ref{Rmk:MultiplierSyzygy} for a detailed comment. The compatible and incompatible parts of the pseudo-eliminant are defined in Definition \ref{Def:CompatibleDivisors} and computed via Algorithm \ref{Algo:CompatiblePartPseudoEliminant}. In particular, we obtain a squarefree decomposition of the incompatible part $\Ip (\pel)$ by Algorithm \ref{Algo:CompatiblePartPseudoEliminant} via a univariate squarefree factorization of the pseudo-eliminant $\pel$ by Algorithm \ref{Algo:Squarefree}. We avoid a complete univariate factorization of the pseudo-eliminant $\pel$ due to concerns about computational complexity. We conduct a complete analysis of the incompatible part $\Ip (\pel)$ of the pseudo-eliminant $\pel$ in Section \ref{Sec:IncompatibleModular} based on modular algorithms with the composite divisors obtained in Algorithm \ref{Algo:CompatiblePartPseudoEliminant} as the moduli. The advantages are that we have one fewer variable than the classical algorithm and the composite divisors are usually small polynomial factors of the pseudo-eliminant $\pel$. However the disadvantage is that the computations are over principal quotient rings (PQRs) that might contain zero divisors.
As a result, we redefine $S$-polynomials in Definition \ref{Def:SpolynPQR} carefully in order to obviate the zero multipliers incurred by the least common multiple of leading coefficients. Algorithm \ref{Algo:ProperEliminant} is pivotal in procuring the proper eliminants and proper bases by proper divisions as in Theorem \ref{Thm:ProperReduction}. We prove rigorously in Theorem \ref{Thm:IncompatiblePart} that the nontrivial proper eliminants obtained in Algorithm \ref{Algo:ProperEliminant} are de facto bona fide factors of the eliminant of the original ideal. The meticulous arguments for this primary conclusion ensure that our reasoning is legitimate within the algebra $\rx$, which contains zero divisors. Similar to Lemma \ref{Lemma:SyzygyTransform}, we generalize the classical syzygy theory over fields and PIDs for Gr\"obner\ bases to the one in Lemma \ref{Lemma:nPQRSyzygy}. Further, we also have Corollary \ref{Cor:PQRCoprime} and Lemma \ref{Lemma:TriangleIdentityPQR} to trim down the number of $S$-polynomials for proper reductions. We render two equivalent characterizations of the pseudo-bases $\pbs$ obtained in Algorithm \ref{Algo:PseudoEliminant}. The first characterization is the identity \eqref{LeadTermMod} in terms of leading terms whereas the second one is Theorem \ref{Thm:MemberChar} via \Gd-reductions as defined in Theorem \ref{Thm:GcdReduction}. We have the same kind of characterizations in Theorem \ref{Thm:MemberCharModulo} for the proper bases $\prb$ and $\Bn$ obtained in Algorithm \ref{Algo:ProperEliminant}. These bases as in \eqref{NewBases} correspond to a decomposition of the original ideal in \eqref{IdealDecomposition} whose modular version is in \eqref{ChineseModuleTheorem}. We can define a unique normal form of a polynomial in $\rx$ with respect to the original ideal by the Chinese Remainder Theorem as in Lemma \ref{Lemma:CRT} since the ideal decomposition in \eqref{IdealDecomposition} is pairwise relatively prime.
In the remaining part of this section we define irredundant, minimal and reduced bases that possess different levels of uniqueness. In Section \ref{Sec:MinorImprovements} we make some further improvements on the algorithms by Principle \ref{Principle:PseudoRedPrinciple}. The highlight of Section \ref{Sec:ComplexityComparison} is Lemma \ref{Lemma:OldGbasis} in which we contrive a special scenario consisting of two basis elements, a detailed analysis of which reveals that the classical algorithm contains the Euclidean algorithm for computing the greatest common divisor of the leading coefficients. Moreover, the results in \eqref{CancelLeadTerm} and \eqref{AlsoCancelLeadTerm} contain the B\'ezout coefficients\ $u$ and $v$ that might swell to an enormous size as in Example \ref{Expl:BezoutCoeffs}. By contrast the computation of our new type of $S$-polynomial as in \eqref{NewSPolynComput} yields the above results in one step without the B\'ezout coefficients. This might help to unveil the mystery of the intermediate coefficient swell as well as the high-level complexity associated with Gr\"obner\ basis computations. We make two exemplary computations in Section \ref{Sec:Examples}, with the second one more sophisticated than the first. It contains a paradigmatic computation of proper eliminants and proper bases over principal quotient rings with zero divisors as in Algorithm \ref{Algo:ProperEliminant}. We provide a detailed explanation for each step of the computation to elucidate the ideas of this new type of bases. As usual, we denote the sets of complex, real, rational, integer and natural numbers as $\mathbb{C}$, $\mathbb{R}$, $\bQ$, $\mathbb{Z}$ and $\mathbb{N}$ respectively. In this paper, we use the following notations for a ring $R$: $R^\ast:=R\setminus\{0\}$; $R^\times$ denotes the set of units in $R$.
With $\bm{x}=(\Lst x1n)$ and $\alpha=(\Lst\alpha 1n)$, we denote a \emph{monomial} $x_1^{\alpha_1}\cdots x_n^{\alpha_n}$ as $\bm{x}^\alpha$ and a \emph{term} as $c\bm{x}^\alpha$ with the \emph{coefficient} $c\in R^\ast$. We also use the boldface $\bm{x}$ to abbreviate the algebra $R[\Lst x1n]$ over a ring $R$ as $R[\bm{x}]$. The notation $\langle A\rangle$ denotes an ideal generated by a nonempty subset $A\subset R[\bm{x}]$. Further, $K$ usually denotes a perfect field that is not necessarily algebraically closed unless specified. In most cases we treat the algebra $K[\bm{x}]$ as $\knx$ over the ring $R=\kxn$ with the variables $\tilde{\bx}:=(\Lst x2n)$. \section{A Pseudo-division Algorithm over Principal Ideal Domains}\label{Sec:DivisionAlgm} In this article we adopt a pseudo-division of polynomials over a principal ideal domain as in Theorem \ref{Thm:PseudoReduction}. We shall abbreviate a principal ideal domain as a PID and denote it as $R$ henceforth. Let $R[\bm{x}]$ denote the polynomial algebra $R[\Lst x1n]$ over $R$. Let us denote the set of monomials in $\bm{x}=(\Lst x1n)$ as $\bM:=\{\bm{x}^\alpha\colon\alpha\in{\mathbb{N}^n}\}$. A nonzero ideal $I\subset R[\bm{x}]$ is called a \emph{monomial} ideal if $I$ is generated by monomials in $\bM$. By Hilbert Basis Theorem we can infer that every monomial ideal in $R[\bm{x}]$ is finitely generated since a PID $R$ is Noetherian. \begin{lemma}\label{Lemma:MonomialIdealProperty} Let $R$ be a PID. Consider a monomial ideal $I=\langle\bm{x}^\alpha\colon\alpha\in E\rangle$ in $R[\bm{x}]$ with $E\subset{\mathbb{N}^n}\setminus\{\bm{0}\}$. We have the following conclusions: \begin{enumerate}[(i)] \item\label{item:MonomialIdealDivisionCond} A term $c\bm{x}^\beta\in I$ for $c\in R^\ast$ if and only if there exists an $\alpha\in E$ such that $\bm{x}^\beta$ is divisible by $\bm{x}^\alpha$; \item\label{item:MonomialIdealCond} A polynomial $f\in I$ if and only if every term of $f$ lies in $I$.
\end{enumerate} \end{lemma} \begin{proof} It suffices to prove the necessity of the two conclusions. Suppose that $g=\sum_{j=1}^s\qr_j\bm{x}^{\alpha_j}$ with $g$ representing the term $c\bm{x}^\beta\in I$ as in \eqref{item:MonomialIdealDivisionCond}, or $f\in I$ as in \eqref{item:MonomialIdealCond}. Here $\qr_j\in R[\bm{x}]$ and $\alpha_j\in E$ for $1\le j\le s$. We expand each $\qr_j$ into individual terms and compare those with the same multi-degrees on both sides of the equality. The conclusion readily follows since every term on the right hand side of the equality is divisible by some $\bm{x}^{\alpha_j}$ with $\alpha_j\in E$. \end{proof} A \emph{total ordering} $\succ$ on the monomial set $\bM$ satisfies that for every pair $\bm{x}^\alpha,\bm{x}^\beta\in\bM$, exactly one of the following relations holds: $\bm{x}^\alpha\succ\bm{x}^\beta$, $\bm{x}^\alpha=\bm{x}^\beta$, or $\bm{x}^\alpha\prec\bm{x}^\beta$. Moreover, $\bm{x}^\alpha\succeq\bm{x}^\beta$ means either $\bm{x}^\alpha\succ\bm{x}^\beta$ or $\bm{x}^\alpha=\bm{x}^\beta$. A \emph{well-ordering} $\succ$ on $\bM$ satisfies that every nonempty subset $A\subset\bM$ has a minimal element. That is, there exists $\bm{x}^\alpha\in A$ such that $\bm{x}^\beta\succeq\bm{x}^\alpha$ for every $\bm{x}^\beta\in A$. A well-ordered set is always a totally ordered set since every subset consisting of two elements has a minimal element. It is evident that under a well-ordering $\succ$ on $\bM$, there is no infinite strictly decreasing sequence $\bm{x}^{\alpha_1}\succ\bm{x}^{\alpha_2}\succ\bm{x}^{\alpha_3}\succ\cdots$ in $\bM$ (or every strictly decreasing sequence in $\bM$ terminates). Nonetheless we have a much easier description as follows under the Noetherian condition. \begin{proposition} Let $\succ$ be a total ordering on $\bM$ such that $\bm{x}^\alpha\cdot\bm{x}^\gamma\succ\bm{x}^\beta\cdot\bm{x}^\gamma$ when $\bm{x}^\alpha\succ\bm{x}^\beta$ for all $\bm{x}^\alpha,\bm{x}^\beta,\bm{x}^\gamma\in\bM$. 
Then $\succ$ is a well-ordering on $\bM$ if and only if $\bm{x}^\gamma\succeq 1$ for all $\gamma\in{\mathbb{N}^n}$. \end{proposition} \begin{proof} Suppose that $\succ$ is a well-ordering. Then $\bM$ has the smallest element which we denote as $\bm{x}^{\beta_0}$. If $1\succ\bm{x}^{\beta_0}$, then $\bm{x}^{\beta_0}\succ\bm{x}^{2\beta_0}$, contradicting the minimality of $\bm{x}^{\beta_0}$. Hence follows the necessity of the conclusion. Suppose that $A\subset\bM$ is nonempty. To prove the sufficiency, it suffices to prove that $A$ has a minimal element in terms of the ordering $\succ$. Let $K$ be a nontrivial field and $\langle A\rangle:=\langle\{\bm{x}^\alpha\colon\alpha\in A\}\rangle$ the monomial ideal generated by $A$ in $K[\bm{x}]$. As per Hilbert Basis Theorem, $\langle A\rangle$ has a finite basis as $\langle A\rangle=\langle\bm{x}^{\alpha_1},\bm{x}^{\alpha_2},\dotsc,\bm{x}^{\alpha_s}\rangle$. Since $\succ$ is a total ordering, we relabel the subscripts such that $\bm{x}^{\alpha_s}\succ\cdots\succ\bm{x}^{\alpha_1}$. Now $\bm{x}^{\alpha_1}$ is the minimal element of $A$. In fact, for every $\bm{x}^\alpha\in\langle A\rangle$, according to Lemma \ref{Lemma:MonomialIdealProperty}, $\bm{x}^\alpha$ is divisible by one of $\{\bm{x}^{\alpha_j}\}$ for $1\le j\le s$. Assume that $\bm{x}^\alpha$ is divisible by $\bm{x}^{\alpha_{j_0}}$, i.e., $\bm{x}^\alpha=\bm{x}^{\alpha_{j_0}}\cdot\bm{x}^\gamma$ for some $\gamma\in{\mathbb{N}^n}$. Then $\bm{x}^\gamma\succeq 1$ indicates that $\bm{x}^\alpha\succeq\bm{x}^{\alpha_{j_0}}\succeq\bm{x}^{\alpha_1}$. \end{proof} \begin{definition}\label{Def:MonomialOrdering} A \emph{monomial ordering} on $\bM$ is a well-ordering on $\bM$ such that $\bm{x}^\alpha\cdot\bm{x}^\gamma\succ\bm{x}^\beta\cdot\bm{x}^\gamma$ when $\bm{x}^\alpha\succ\bm{x}^\beta$ for all $\bm{x}^\alpha,\bm{x}^\beta,\bm{x}^\gamma\in\bM$. In particular, we have $\bm{x}^\gamma\succeq 1$ for all $\gamma\in{\mathbb{N}^n}$.
\end{definition} \begin{notation}\label{Notation:LeadingEntities} Let $R$ be a PID and $f=\sum_\alpha c_\alpha\bm{x}^\alpha$ a polynomial in $R[\bm{x}]$. Let $\succ$ be a monomial ordering. We denote the \emph{support} of $f$ as $\supp f:=\{\bm{x}^\alpha\in\bM\colon c_\alpha\ne 0\}\subset\bM$. In particular, we define $\supp f:=\{1\}$ when $f\in R^\ast$ and $\supp f:=\emptyset$ when $f=0$. If $f$ has a term $c_\beta\bm{x}^\beta$ that satisfies $\bm{x}^\beta:=\max_{\succ}\{\bm{x}^\alpha\in\supp f\}$, then we use the following terminologies hereafter. The \emph{leading term} of $f$ is denoted as $\ltc (f):=c_\beta\bm{x}^\beta$; The \emph{leading monomial}\footnote{It is also called the ``leading power product" in the literature. Here we adopt the convention that is consistent with the terminology of ``\emph{monomial} ideals".} of $f$ is denoted as $\lmc (f):=\bm{x}^\beta$; The \emph{leading coefficient} of $f$ is denoted as $\lcc (f):=c_\beta\in R^\ast$. Let $\pb=\{\ibr_j\colon 1\le j\le s\}$ be a polynomial set in $R[\bm{x}]\setminus\{0\}$. We denote the leading monomial set $\{\lmc (\ibr_j)\colon 1\le j\le s\}$ as $\lmc (\pb)$. Let us also denote the monomial ideal generated by $\lmc (\pb)$ in $R[\bm{x}]$ as $\langle\lmc (\pb)\rangle$. In what follows we use $\gcd (a,b)$ and $\lcm (a,b)$ to denote the greatest common divisor and least common multiple of $a,b\in R^\ast$ respectively over a PID $R$. \end{notation} \begin{definition}[Term pseudo-reduction in \text{$R[\bm{x}]$} over a PID $R$]\label{Def:TermReduction} \hfill Let $R$ be a PID and $\succ$ a monomial ordering on $\bM$. For $f\in\rd$ and $g\in R[\bm{x}]\setminus\{0\}$, suppose that $f$ has a term $c_\alpha\bm{x}^\alpha$ such that $\bm{x}^\alpha\in\supp f\cap\langle\lmc (g)\rangle$. Then we can make a \emph{pseudo-reduction} of the term $c_\alpha\bm{x}^\alpha$ of $f$ by $g$ as follows. 
\begin{equation}\label{TermReduction} h=\iur f-\frac{\lmr\bm{x}^\alpha}{\ltc (g)}g \end{equation} with the multipliers $\lmr:=\lcm (c_\alpha,\lcc (g))$ and $\iur:=\lmr/c_\alpha\in R^\ast$. We call $h$ the \emph{remainder} of the pseudo-reduction and $\iur$ the \emph{interim multiplier} on $f$ with respect to $g$. \end{definition} \begin{definition}[Pseudo-reduced polynomial]\label{Def:PseudoReduced} \hfill Let $R$ be a PID and $\succ$ a monomial ordering on $\bM$. A polynomial $\dr\in R[\bm{x}]$ is \emph{pseudo-reduced} with respect to a polynomial set $\pb=\{\ibr_j\colon 1\le j\le s\}\subset\rd$ if $\supp\dr\cap\langle\lmc (\pb)\rangle=\emptyset$. In particular, this includes the special case when $\dr=0$ and hence $\supp\dr=\emptyset$. We also say that $\dr$ is \emph{pseudo-reducible} with respect to $\pb$ if it is not pseudo-reduced with respect to $\pb$, i.e., $\supp\dr\cap\langle\lmc (\pb)\rangle\ne\emptyset$. \end{definition} \begin{theorem}[Pseudo-division in \text{$R[\bm{x}]$} over a PID $R$]\label{Thm:PseudoReduction} \hfill Let $R$ be a PID and $\succ$ a monomial ordering on $\bM$. Suppose that $\pb=\{\ibr_j\colon 1\le j\le s\}\subset\rd$ is a polynomial set. For every $f\in R[\bm{x}]$, there exist a multiplier $\mr\in R^\ast$ as well as a remainder $\dr\in R[\bm{x}]$ and quotients $\qr_j\in R[\bm{x}]$ for $1\le j\le s$ such that \begin{equation}\label{PseudoDivisionExpression} \mr f=\sum_{j=1}^s\qr_j\ibr_j+\dr, \end{equation} where $\dr$ is pseudo-reduced with respect to $\pb$. Moreover, the polynomials in \eqref{PseudoDivisionExpression} satisfy the following condition: \begin{equation}\label{DivisionCond} \lmc (f)=\max\bigl\{\max_{1\le j\le s}\{\lmc (\qr_j\ibr_j)\},\lmc (\dr)\bigr\}. \end{equation} \end{theorem} \begin{proof} If $f$ is already pseudo-reduced with respect to $\pb$, we just take $\dr=f$ and $\qr_j=0$ for $1\le j\le s$. Otherwise we define $\bm{x}^\alpha:=\max_{\succ}\{\supp f\cap\langle\lmc (\pb)\rangle\}$.
There exists some $j$ such that $\bm{x}^\alpha$ is divisible by $\lmc (\ibr_j)$ as per Lemma \ref{Lemma:MonomialIdealProperty} \eqref{item:MonomialIdealDivisionCond}. We make a pseudo-reduction of the term $c_\alpha\bm{x}^\alpha$ of $f$ by $\ibr_j$ in the same way as the term pseudo-reduction in \eqref{TermReduction}. We denote the remainder also as $h$ and $\bm{x}^\beta:=\max_{\succ}\{\supp h\cap\langle\lmc (\pb)\rangle\}$ if $h$ is not pseudo-reduced with respect to $\pb$. It is easy to see that $\bm{x}^\alpha\succ\bm{x}^\beta$ after the above term pseudo-reduction. We repeat such term pseudo-reductions until the remainder $h$ is pseudo-reduced with respect to $\pb$. Since the monomial ordering $\succ$ is a well-ordering by Definition \ref{Def:MonomialOrdering}, the term pseudo-reductions terminate in finite steps. Hence follows the representation \eqref{PseudoDivisionExpression} in which the multiplier $\mr\in R^\ast$ is a product of such interim multipliers $\iur$ as in \eqref{TermReduction}. To prove the equality in \eqref{DivisionCond}, it suffices to prove it for the term pseudo-reduction in \eqref{TermReduction}. In fact, the pseudo-division in \eqref{PseudoDivisionExpression} is just a composition of the term pseudo-reductions in \eqref{TermReduction} and the remainder $h$ in \eqref{TermReduction} shall eventually become the remainder $\dr$ in \eqref{PseudoDivisionExpression}. In \eqref{TermReduction} the leading monomial of $\lmr\bm{x}^\alpha g/\ltc (g)$ is $\bm{x}^\alpha$. Hence either $\lmc (f)=\bm{x}^\alpha$, or $\lmc (f)\succ\bm{x}^\alpha$ in which case $\lmc (f)=\lmc (h)$ in \eqref{TermReduction}. Thus follows the equality in \eqref{DivisionCond}. \end{proof} \begin{definition}\label{Def:Reduction} Let $R$ be a PID and $f\in R[\bm{x}]$. Suppose that $\pb=\{\ibr_j\colon 1\le j\le s\}\subset\rd$ is a polynomial set over $R$. We call the expression in \eqref{PseudoDivisionExpression} a \emph{pseudo-division} of $f$ by $\pb$. 
More specifically, we name the polynomial $r$ in \eqref{PseudoDivisionExpression} as a \emph{remainder} of $f$ on pseudo-division by $\pb$ and $\mr\in R^\ast$ in \eqref{PseudoDivisionExpression} a \emph{multiplier} of the pseudo-division. We say that $f$ \emph{pseudo-reduces} to the \emph{remainder} $r$ via the \emph{multiplier} $\mr\in R^\ast$ \emph{modulo} $\pb$. We also call it a \emph{pseudo-reduction} of $f$ by $\pb$. \end{definition} The proof of Theorem \ref{Thm:PseudoReduction} shows that the multiplier $\mr$ in \eqref{PseudoDivisionExpression} is a finite product of the interim multipliers $\iur$ as in \eqref{TermReduction}. Based on the proof of Theorem \ref{Thm:PseudoReduction} we can easily contrive a pseudo-division algorithm. We do not elaborate on it here since it is quite straightforward. \begin{remark}\label{Rmk:LinearEqs} There is a difference between the above pseudo-division algorithm and the traditional division algorithm over a PID. In the traditional one as in \cite[P207, Algorithm 4.1.1]{AL94}, it is required that the linear equation $\lcc (f)=\sum_{j=1}^s\ibr_j\cdot\lcc (f_j)$ as in \cite[P204, (4.1.1)]{AL94} be solvable for $\ibr_j$'s over $R$. The pseudo-division algorithm does not have this extra requirement. Their major difference is the multiplier $\mr\in R^\ast$ in \eqref{PseudoDivisionExpression}. \end{remark} \section{Pseudo-eliminants of Zero-dimensional Ideals}\label{Sec:PseudoGroebner} Let $K$ be a field and $\bm{x}$ denote variables $(\Lst x1n)$ as before. In this section let us consider the case when the PID $R$ in Section \ref{Sec:DivisionAlgm} bears the particular form $R=\kxn$ with $x_1$ being the first variable of $\bm{x}$. In this case the polynomials in the algebra $K[\bm{x}]$ over $K$ can be viewed as those in $\knx$ over $\kxn$ with the variables $\tilde{\bx}=(\Lst x2n)$ and coefficients in $\kxn$. 
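To make the pseudo-division of Theorem \ref{Thm:PseudoReduction} concrete before specializing to $R=\kxn$, the following minimal sketch (an illustration only, not part of the formal development) carries out the term pseudo-reductions of \eqref{TermReduction} over the PID $R=\mathbb{Z}$ in two variables under the lex ordering; the dictionary encoding of polynomials and all function names here are ad hoc choices of ours.

```python
from math import gcd

def lcm(a, b):
    return abs(a * b) // gcd(a, b)

# A polynomial in Z[x, y] is encoded as {(i, j): c} for its terms c*x^i*y^j;
# the ring of integers Z plays the role of the PID R.
def lm(p):
    return max(p)                      # tuple comparison = lex ordering with x > y

def divides(m, a):                     # does the monomial x^m divide x^a?
    return all(mi <= ai for mi, ai in zip(m, a))

def sub_scaled(f, c, shift, g):
    # return f - c * x^shift * g, dropping zero coefficients
    out = dict(f)
    for m, cg in g.items():
        key = tuple(si + mi for si, mi in zip(shift, m))
        out[key] = out.get(key, 0) - c * cg
    return {m: c2 for m, c2 in out.items() if c2 != 0}

def pseudo_divide(f, G):
    """Pseudo-division of f by the set G: returns (mu, r) with
    mu * f = (quotients times elements of G) + r, where no term of r
    is divisible by a leading monomial of G."""
    h, mu = {m: c for m, c in f.items() if c != 0}, 1
    while True:
        reducible = [m for m in h if any(divides(lm(g), m) for g in G)]
        if not reducible:
            return mu, h
        a = max(reducible)             # the largest reducible monomial
        g = next(g for g in G if divides(lm(g), a))
        lam = lcm(h[a], g[lm(g)])      # lambda = lcm of the two coefficients
        iota = lam // h[a]             # interim multiplier
        shift = tuple(ai - mi for ai, mi in zip(a, lm(g)))
        h = sub_scaled({m: iota * c for m, c in h.items()},
                       lam // g[lm(g)], shift, g)
        mu *= iota
```

For instance, pseudo-dividing $f=6x^2y+x$ by $g=4xy+1$ reduces the term $6x^2y$ with $\lmr=12$ and interim multiplier $\iur=2$, which gives $2f=3x\cdot g-x$: the multiplier is $\mr=2$ and the remainder is $-x$.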
It is evident that the pseudo-division of polynomials in Theorem \ref{Thm:PseudoReduction} applies here without any essential change. Unless specified, in what follows we shall always treat the algebra $K[\bm{x}]$ over $K$ as the algebra $\knx$ over $\kxn$. Hence for $f\in\knx$ in this section, its leading coefficient $\lcc (f)$ and leading monomial $\lmc (f)$ in Notation \ref{Notation:LeadingEntities} now satisfy $\lcc (f)\in (\kxn)^*$ and $\lmc (f)\in\tM$ respectively. Here $\tM$ denotes the set of nonzero monomials in the variables $\tilde{\bx}=(\Lst x2n)$. Moreover, we use $\pid f$ to denote a principal ideal in $\kxn$ generated by $f\in\kxn$. Recall that $\langle f\rangle$ denotes a principal ideal in $\knx=K[\bm{x}]$ generated by either $f\in\kxn$ or $f\in\knx$. \begin{definition}[Elimination ordering\footnote{Please also refer to \cite[P69, Definition 2.3.1]{AL94} and \cite[P33, Definition 3.1]{EH12}.} on $\knx$]\label{Def:EliminationOrdering} \hfill An \emph{elimination ordering} on $\knx$ is a monomial ordering on $\bM$ such that the $\tilde{\bx}$ variables are always larger than the $x_1$ variable. That is, $x_1^\alpha\tilde{\bx}^\gamma\succ x_1^\beta\tilde{\bx}^\delta$ if and only if $\tilde{\bx}^\gamma\succ\tilde{\bx}^\delta$ or, $\tilde{\bx}^\gamma=\tilde{\bx}^\delta$ and $\alpha>\beta$. \end{definition} In what follows let us suppose that $I$ is a zero-dimensional ideal\footnote{Please note that a zero-dimensional ideal $I$ is always a proper ideal such that $I\ne K[\bm{x}]$.} of $K[\bm{x}]=\knx$. We have the following well-known conclusion. \begin{proposition} For a zero-dimensional ideal $I\subset\knx$, we always have $\Il\ne\{0\}$. \end{proposition} \begin{proof} Please refer to \cite[P272, Lemma 6.50]{BW93} or \cite[P243, Proposition 3.7.1(c)]{KR00}. \end{proof} \begin{definition}[Eliminant]\label{Def:Eliminant} \hfill For a zero-dimensional ideal $I\subset\knx$, the principal ideal $\Il$ in $\kxn$ is called the \emph{elimination ideal} of $I$. 
Let us denote its generator as $\el$, that is, $\Il=\pid\el$ as a principal ideal in $\kxn$. We call $\el$ the \emph{eliminant} of the zero-dimensional ideal $I$ henceforth. \end{definition} In what follows let us elaborate on a revised version of Buchberger's algorithm. The purpose is to compute not only a pseudo-basis but also a pseudo-eliminant of the elimination ideal $\Il$. Let us first recall the $S$-polynomial over a PID as in the following definition\footnote{Please also refer to \cite[P249, (4.5.1)]{AL94} or \cite[P457, Definition 10.9]{BW93}.}. \begin{definition}[$S$-polynomial]\label{Def:SPolynomial} \hfill Suppose that $f,g\in\Rd$. Let us denote $\lmr:=\lcm (\lcc (f),\lcc (g))\in\kxn$ and $\clm:=\lcm (\lmc (f),\lmc (g))\in\tM$. Then the polynomial \begin{equation}\label{SPolynomialDef} S(f,g):=\frac{\lmr\clm}{\ltc (f)}f-\frac{\lmr\clm}{\ltc (g)}g \end{equation} is called the \emph{$S$-polynomial} of $f$ and $g$. \end{definition} It is easy to verify that the $S$-polynomial satisfies the following inequality due to the cancellation of leading terms in \eqref{SPolynomialDef}: \begin{equation}\label{LeadingSMonomials} \lmc (S(f,g))\prec\clm=\lcm (\lmc (f),\lmc (g)). \end{equation} When $g\in (\kxn)^*$ and $f\in\Rd$, we take $\lmc (g)=1$ and $\lmr=\lcm (\lcc (f),g)$. The $S$-polynomial in \eqref{SPolynomialDef} becomes: \begin{equation}\label{SpecialSPoly} S(f,g):=\frac\lmr{\lcc (f)}f-\lmr\cdot\lmc (f). \end{equation} \begin{lemma}\label{Lemma:UnnecessaryConst} When $g\in (\kxn)^*$ and $f\in\Rd$, the $S$-polynomial in \eqref{SpecialSPoly} satisfies: \begin{equation}\label{SpecialSPolyEssence} \idr S(f,g)=(f-\ltc (f))\cdot g:=f_1g \end{equation} with $\idr:=\gcd (\lcc (f),g)\in\kxn$ and $f_1:=f-\ltc (f)$. \end{lemma} \begin{proof} Let us denote $l_f:=\lcc (f)$.
Based on the equality $\lmr/l_f=g/\idr$, the $S$-polynomial in \eqref{SpecialSPoly} becomes: \begin{equation*} S(f,g)=\frac{gf}{\idr}-\frac{gl_f}\idr\cdot\lmc (f)=\frac g\idr (f-\ltc (f))=\frac{f_1g}\idr. \qedhere \end{equation*} \end{proof} Lemma \ref{Lemma:UnnecessaryConst} can be generalized to the following conclusion: \begin{lemma}\label{Lemma:RelativePrimePairs} For $f,g\in\Rd$, suppose that $\lmc (f)$ and $\lmc (g)$ are relatively prime. Let us denote $\idr:=\gcd (\lcc (f),\lcc (g))$. Then their $S$-polynomial in \eqref{SPolynomialDef} satisfies: \begin{equation}\label{CoprimeReduction} \idr S(f,g)=f_1g-g_1f \end{equation} with $f_1:=f-\ltc (f)$ and $g_1:=g-\ltc (g)$. Moreover, we have: \begin{equation}\label{LeadTwoMonomial} \lmc (S(f,g))=\max\{\lmc (f_1g),\lmc (g_1f)\}. \end{equation} \end{lemma} \begin{proof} If $\lmc (f)$ and $\lmc (g)$ are relatively prime, then we have an identity $\clm=\lmc (f)\cdot\lmc (g)$ in \eqref{SPolynomialDef}. For convenience, let us denote $l_f:=\lcc (f)$, $l_g:=\lcc (g)$ and $\idr:=\gcd (l_f,l_g)$. Then we have the identities $\lmr/l_f=l_g/\idr$ and $\lmr/l_g=l_f/\idr$ with $\lmr=\lcm (l_f,l_g)$ as in \eqref{SPolynomialDef}. We substitute these identities into \eqref{SPolynomialDef} to obtain: \begin{align} S(f,g)&:=\frac{l_g\cdot\lmc (g)}\idr f-\frac{l_f\cdot\lmc (f)}\idr g=\frac 1\idr (\ltc (g)\cdot f-\ltc (f)\cdot g)\notag\\ &=\frac 1\idr ((g-g_1)f-(f-f_1)g)=\frac 1\idr (f_1g-g_1f)\label{CoprimeReductProof}\\ &=\frac 1\idr (f_1(g_1+\ltc (g))-g_1(f_1+\ltc (f)))=\frac 1\idr (f_1\cdot\ltc (g)-g_1\cdot\ltc (f)).\label{ReductionLeadTerms} \end{align} The identity \eqref{CoprimeReduction} follows from \eqref{CoprimeReductProof}. Now we show that the conclusion \eqref{LeadTwoMonomial} is a consequence of the expression \eqref{ReductionLeadTerms}. 
In fact, for every term $c_\alpha\tilde{\bx}^\alpha$ of $f_1$ and every term $c_\beta\tilde{\bx}^\beta$ of $g_1$, we have $\tilde{\bx}^\alpha\cdot\lmc (g)\ne\tilde{\bx}^\beta\cdot\lmc (f)$ since $\lmc (g)$ and $\lmc (f)$ are relatively prime and moreover, we have $\tilde{\bx}^\alpha\prec\lmc (f)$ and $\tilde{\bx}^\beta\prec\lmc (g)$. As a result, no term of $f_1\cdot\ltc (g)$ cancels any term of $g_1\cdot\ltc (f)$ in \eqref{ReductionLeadTerms}. Thus follows the equality \eqref{LeadTwoMonomial}. \end{proof} \begin{corollary}\label{Cor:CoprimePair} Suppose that $\lmc (f)$ and $\lmc (g)$ are relatively prime for $f,g\in\Rd$. Then their $S$-polynomial $S(f,g)$ as in \eqref{SPolynomialDef} can be pseudo-reduced to $0$ by $f$ and $g$ as in Theorem \ref{Thm:PseudoReduction} with the multiplier $\mr=\idr$ and quotients $f_1$ and $g_1$ as in \eqref{CoprimeReduction}. In particular, for $g\in (\kxn)^*$ and $f\in\Rd$, their $S$-polynomial $S(f,g)$ in \eqref{SpecialSPoly} can be pseudo-reduced to $0$ by $g$ with the multiplier $\mr=\idr$ and quotient $f_1$ as in \eqref{SpecialSPolyEssence}. \end{corollary} \begin{proof} The conclusions readily follow from Lemma \ref{Lemma:RelativePrimePairs} and Lemma \ref{Lemma:UnnecessaryConst} based on Theorem \ref{Thm:PseudoReduction}. \end{proof} For two terms $c_\alpha\tilde{\bx}^\alpha,c_\beta\tilde{\bx}^\beta\in\knx$ with $c_\alpha,c_\beta\in\kxn$, let us denote $\lcm (c_\alpha\tilde{\bx}^\alpha,c_\beta\tilde{\bx}^\beta):=\lcm (c_\alpha,c_\beta)\cdot\lcm (\tilde{\bx}^\alpha,\tilde{\bx}^\beta)$. Then we have the following notation: \begin{equation*} \lcm (\ltc (f),\ltc (g)):=\lcm (\lcc (f),\lcc (g))\cdot\lcm (\lmc (f),\lmc (g)).
\end{equation*} \begin{lemma}\label{Lemma:TriangleIdentity} For $f,g,h\in\Rd$, if $\lcm (\lmc (f),\lmc (g))\in\langle\lmc (h)\rangle$, then we have the following triangular relationship among their $S$-polynomials: \begin{equation}\label{TriangleIdentity} \mr S(f,g)=\frac{\mr\cdot\lcm (\ltc (f),\ltc (g))}{\lcm (\ltc (f),\ltc (h))}S(f,h)-\frac{\mr\cdot\lcm (\ltc (f),\ltc (g))}{\lcm (\ltc (h),\ltc (g))}S(g,h), \end{equation} where the multiplier $\mr:=\lcc (h)/\idr$ with $\idr:=\gcd (\lcm (\lcc (f),\lcc (g)),\lcc (h))\in\kxn$. Henceforth we also call the identity \eqref{TriangleIdentity} the \emph{triangular identity} of $S(f,g)$ with respect to $h$. \end{lemma} \begin{proof} From $\lcm (\lmc (f),\lmc (g))\in\langle\lmc (h)\rangle$ we can easily deduce that: \begin{equation*} \lcm (\lmc (f),\lmc (g))\in\langle\lcm (\lmc (f),\lmc (h))\rangle\cap\langle\lcm (\lmc (h),\lmc (g))\rangle. \end{equation*} In order to corroborate that the multiplier $\mr=\lcc (h)/\idr$ suffices to make the two fractions in \eqref{TriangleIdentity} terms in $\knx$, we only need to consider the case when $\mlt_\ipr (\lcc (h))>\gamma:=\max\{\mlt_\ipr (\lcc (f)),\mlt_\ipr (\lcc (g))\}$ for every irreducible factor $\ipr$ of $\lcc (h)$. In this case we have $\mlt_\ipr (\lcm (\lcc (f),\lcc (h)))=\mlt_\ipr (\lcc (h))=\mlt_\ipr (\lcm (\lcc (h),\lcc (g)))$ in the denominators of \eqref{TriangleIdentity}. Hence in the numerators of \eqref{TriangleIdentity} we can take $\mlt_\ipr (\mr)=\mlt_\ipr (\lcc (h))-\gamma=\mlt_\ipr (\lcc (h))-\mlt_\ipr (\idr)$. Now let us write the definition of $S$-polynomial in \eqref{SPolynomialDef} into the following form: \begin{equation}\label{OtherFormSPoly} S(f,g)=\frac{\lcm (\ltc (f),\ltc (g))}{\ltc (f)}f-\frac{\lcm (\ltc (f),\ltc (g))}{\ltc (g)}g. \end{equation} Then the identity \eqref{TriangleIdentity} readily follows if we also write $S(f,h)$ and $S(g,h)$ into the form of \eqref{OtherFormSPoly}. 
\end{proof} It is easy to verify that the identity \eqref{TriangleIdentity} is consistent with the inequality \eqref{LeadingSMonomials} for $S$-polynomials since from \eqref{TriangleIdentity} we can deduce that $\lmc (S(f,g))$ is dominated by one of the following leading monomials: \begin{equation*} \frac{\lcm (\lmc (f),\lmc (g))}{\lcm (\lmc (f),\lmc (h))}\lmc (S(f,h)),\text{~\,or~\,}\frac{\lcm (\lmc (f),\lmc (g))}{\lcm (\lmc (h),\lmc (g))}\lmc (S(g,h)). \end{equation*} For $f\in\Rd$ and $g\in (\kxn)^*$, we shall use the following relation between \eqref{SPolynomialDef} and \eqref{SpecialSPoly}: \begin{equation}\label{DisappearingMonomial} S(f,g\cdot\lmc (f))=S(f,g). \end{equation} Moreover, the $S$-polynomial in \eqref{SpecialSPoly} coincides with the term pseudo-reduction in \eqref{TermReduction} for $g\in R^\ast=(\kxn)^*$. \begin{algorithm}[Pseudo-eliminant of a zero-dimensional ideal over a PID]\label{Algo:PseudoEliminant} \hfil Input: A finite polynomial set $\tas\subset\knx\setminus K$. Output: A pseudo-eliminant $\pel\in (\kxn)^*$, pseudo-basis $\pbs\subset\langle\tas\rangle\setminus\kxn$ and multiplier set $\mrs\subset\nonk$. Initialization: A temporary basis set $\tbs:=\tas\setminus\kxn$; a multiplier set $\mrs:=\emptyset$ in $\kxn$; a temporary set $\mathfrak{S}:=\emptyset$ in $\knx\setminus K$ for $S$-polynomials. If $\tas\cap\kxn\ne\emptyset$, we initialize $\lel:=\gcd (\tas\cap\kxn)$; otherwise we initialize $\lel:=0$. For each pair $\tf,\tg\in\tbs$ with $\tf\ne\tg$, we invoke \Po Q as follows to compute their $S$-polynomial $S(\tf,\tg)$. \Po Q: \leftskip=5mm \begin{itshape} Input: $\tf,\tg\in\Rd$. If $\lmc (\tf)$ and $\lmc (\tg)$ are relatively prime, we define $\idr:=\gcd (\lcc (\tf),\lcc (\tg))$ as in \eqref{CoprimeReduction}. If $\idr\in\nonk$, we add $\idr$ into the multiplier set $\mrs$. Then we disregard the $S$-polynomial $S(\tf,\tg)$.
If there exists an $h\in\tbs\setminus\{\tf,\tg\}$ such that $\lcm (\lmc (\tf),\lmc (\tg))\in\langle\lmc (h)\rangle$, and the triangular identity \eqref{TriangleIdentity} has never been applied to the same triplet $\{\tf,\tg,h\}$, we compute the multiplier $\mr$ as in \eqref{TriangleIdentity}. If $\mr\in\nonk$, we add $\mr$ into the multiplier set $\mrs$. Then we disregard the $S$-polynomial $S(\tf,\tg)$. If neither of the above two cases applies, we compute their $S$-polynomial $S(\tf,\tg)$ as in \eqref{SPolynomialDef}. Then we add $S(\tf,\tg)$ into the set $\mathfrak{S}$. \end{itshape} End of $\mathcal{Q}$ \leftskip=0mm We recursively repeat \Po P as follows for the pseudo-reductions of all the $S$-polynomials in the set $\mathfrak{S}$. \Po P: \leftskip=5mm \begin{itshape} For an $S\in\mathfrak{S}$, we invoke Theorem \ref{Thm:PseudoReduction} to make a pseudo-reduction of $S$ by the temporary basis set $\tbs$. If the multiplier $\mr\in\nonk$ in \eqref{PseudoDivisionExpression}, we add $\mr$ into the multiplier set $\mrs$. If the remainder $\dr\in\Rd$, we add $\dr$ into $\tbs$. For every $\tf\in\tbs\setminus\{\dr\}$, we invoke \Po Q to compute the $S$-polynomial $S(\tf,\dr)$. If the remainder $\dr\in\nonk$, we redefine $\lel:=\gcd (\dr,\lel)$. If the remainder $\dr\in K^\ast$, we halt the algorithm and output $\tbs=\{1\}$. Then we delete $S$ from the set $\mathfrak{S}$. \end{itshape} End of $\mathcal{P}$ \leftskip=0mm Finally we define $\pel:=\lel$ and $\pbs:=\tbs$ respectively. \Po R: \leftskip=5mm \begin{itshape} For every $\tf\in\pbs$, if $\idr:=\gcd (\lcc (\tf),\pel)\in\nonk$, we add $\idr$ into the multiplier set $\mrs$. \end{itshape} End of $\mathcal{R}$ \leftskip=0mm We output $\pel$, $\pbs$ and $\mrs$. \end{algorithm} \begin{remark} In Algorithm \ref{Algo:PseudoEliminant} we compute both $\idr:=\gcd (\lcc (\tf),\lcc (\tg))$ when $\lmc (\tf)$ and $\lmc (\tg)$ are relatively prime in \Po Q and $\idr:=\gcd (\lcc (\tf),\pel)$ for every $\tf\in\pbs$ in \Po R.
The purpose of these computations is to procure the multipliers $\idr$ in \eqref{CoprimeReduction} of Lemma \ref{Lemma:RelativePrimePairs} and \eqref{SpecialSPolyEssence} of Lemma \ref{Lemma:UnnecessaryConst} respectively for the pseudo-reductions of the $S$-polynomials. This is the reason why we add $\idr$ into the multiplier set $\mrs$ when $\idr\in\nonk$. Moreover, in \Po Q the condition that there exists an $h\in\tbs\setminus\{\tf,\tg\}$ such that $\lcm (\lmc (\tf),\lmc (\tg))\in\langle\lmc (h)\rangle$ is exactly the condition for applying Lemma \ref{Lemma:TriangleIdentity}. \end{remark} \begin{definition}[Pseudo-eliminant; pseudo-basis; multiplier set]\label{Def:PseudoBasis} \hfill We call the univariate polynomial $\pel$ obtained via Algorithm \ref{Algo:PseudoEliminant} a \emph{pseudo-eliminant} of the zero-dimensional ideal $I$. We also call the polynomial set $\pbs$ a \emph{pseudo-basis} of $I$ and $\mrs$ its \emph{multiplier set}. \end{definition} Please note that contrary to the convention, we do not include the pseudo-eliminant $\pel$ in the pseudo-basis $\pbs$ since we shall contrive modular algorithms to compute modular bases with the factors of $\pel$ as moduli in Section \ref{Sec:IncompatibleModular}. \begin{lemma}\label{Lemma:PseudoEliminantTerminate} \begin{inparaenum}[(i)] \item\label{item:EliminantDivisibility} A pseudo-eliminant $\pel$ of a zero-dimensional ideal $I$ is divisible by its eliminant $\el$. \item\label{item:AllSPolynRemainder} For each pair $f\ne g$ in the union set of pseudo-basis and pseudo-eliminant $\pbs\cup\{\pel\}$, the pseudo-reduction of their $S$-polynomial $S(f,g)$ by $\pbs$ yields a remainder $r\in\pid\pel$ in $\kxn$. In particular, this includes the case when $r=0$. \item\label{item:PseudoEliminantTerminate} Algorithm \ref{Algo:PseudoEliminant} terminates in a finite number of steps. \end{inparaenum} \end{lemma} \begin{proof}\begin{inparaenum}[(i)] \item The conclusion readily follows from the fact that $\pel\in\Il=\pid\el$.
\item According to \Po P in Algorithm \ref{Algo:PseudoEliminant}, if the remainder $r$ of the pseudo-reduction by an intermediate polynomial set $\tbs$ satisfies $r\in\Rd$, we add it into $\tbs$. That is, $r$ eventually becomes an element of the pseudo-basis $\pbs$. It is evident that a pseudo-reduction of $r$ by itself yields the zero remainder. On the other hand, if $r\in\nonk$, then as per $\lel:=\gcd (\dr,\lel)$ in \Po P of Algorithm \ref{Algo:PseudoEliminant}, $r$ is divisible by $\lel$ and hence by the pseudo-eliminant $\pel$, i.e., $r\in\pid\pel$. \item The termination of the algorithm follows from the ring $\knx=K[\bm{x}]$ being Noetherian. In fact, whenever the remainder $r\in\Rd$ in \Po P of the algorithm, we add it to the intermediate polynomial set $\tbs$. As a result, the monomial ideal $\langle\lmc (\tbs)\rangle$ is strictly enlarged since $r$ is pseudo-reduced with respect to $\tbs\setminus\{r\}$. Hence the ascending chain condition on the chain of monomial ideals $\langle\lmc (\tbs)\rangle$ ensures the termination of the algorithm. \end{inparaenum} \end{proof} Based on Lemma \ref{Lemma:PseudoEliminantTerminate} \eqref{item:EliminantDivisibility}, the following conclusion is immediate: \begin{corollary} If a pseudo-eliminant $\pel$ of a zero-dimensional ideal $I$ in $K[\bm{x}]$ satisfies $\pel\in K^\ast$, then the reduced Gr\"obner\ basis\footnote{Please refer to \cite[P48, Definition 1.8.5]{AL94} or \cite[P93, Definition 4]{CLO15} for a definition.} of $I$ is $\{1\}$. \end{corollary} In what follows we assume that the ideal $I$ is a proper ideal of $K[\bm{x}]$, that is, $I\ne K[\bm{x}]$. Thus let us exclude the trivial case of $\pel\in K^\ast$ hereafter. \section{Pseudo-eliminant Divisors and Compatibility}\label{Section:PseudoEliminantDivisors} Suppose that $K$ is a perfect field whose characteristic is denoted as $\ch K$. Recall that finite fields and fields of characteristic zero are perfect fields.
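As a small self-contained illustration of this fact (not part of the algorithms in this paper), the following Python snippet checks that the Frobenius map $a\mapsto a^p$ permutes the prime field $\mathbb{F}_p$; surjectivity of the Frobenius map is precisely the perfectness of a field of characteristic $p>0$. The function name \texttt{frobenius\_is\_bijective} is ours.

```python
def frobenius_is_bijective(p):
    """Return True if a -> a^p (mod p) permutes the prime field F_p.

    Surjectivity of the Frobenius map means that every element has a
    p-th root, which is the perfectness of a field of characteristic p.
    """
    image = {pow(a, p, p) for a in range(p)}
    return image == set(range(p))

# By Fermat's little theorem a^p = a holds in F_p, so the Frobenius
# map is the identity on the prime field and trivially bijective.
assert all(frobenius_is_bijective(p) for p in (2, 3, 5, 7, 11, 13))
```

For a general finite field $\mathbb{F}_{p^k}$ the same check requires extension-field arithmetic, but the conclusion is identical: the Frobenius map is an automorphism, so every finite field is perfect.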
In this section we begin to contrive an algorithm that retrieves the eliminant $\el$ of a zero-dimensional ideal $I$ from its pseudo-eliminant $\pel$ obtained in Algorithm \ref{Algo:PseudoEliminant}. We first make a factorization of the pseudo-eliminant $\pel$ into \emph{compatible} and \emph{incompatible} divisors. We prove that the compatible divisors of $\pel$ are the authentic factors of $\el$. This shows that Algorithm \ref{Algo:PseudoEliminant} generates the eliminant $\el$ when the pseudo-eliminant $\pel$ is compatible. We compute the factors of $\el$ that correspond to the incompatible divisors of $\pel$ in Section \ref{Sec:IncompatibleModular}. \begin{definition}[Squarefree factorization of univariate polynomials]\label{Def:SquarefreePart} \hfill A univariate polynomial $f\in\nok$ is \emph{squarefree} if it has no quadratic factors in $\nok$. That is, for every irreducible polynomial $g\in\nok$, $f$ is not divisible by $g^2$. The \emph{squarefree factorization} of a univariate polynomial $f\in\nok$ refers to a product $f=\prod_{i=1}^s g_i^i$ with $g_s\in\nok$ such that the nonconstant $g_i$'s are squarefree and pairwise relatively prime. Moreover, the \emph{squarefree part} of $f$ is defined as $\prod_{i=1}^s g_i$. \end{definition} The squarefree factorization is unique up to multiplications by constants in $K^\ast$. Its existence and uniqueness follow from the fact that $K[x]$ is a PID and hence a unique factorization domain. There are various algorithms for squarefree factorization depending on the field $K$ being finite or of characteristic zero. The following Algorithm \ref{Algo:Squarefree} amalgamates these two cases of field characteristic. We improve the algorithm in \cite[P539, Algorithm B.1.6]{GP08} over a finite field and then apply it to the squarefree factorization over a field of characteristic zero. Consider the integer set $J:=\Np$ when $\ch K=0$, and $J:=\mathbb{N}\setminus p\mathbb{N}$ when $\ch K=p>0$.
That is, $J$ stands for the set of positive integers that are not a multiple of $p$ when $\ch K=p>0$. Let us enumerate the positive integers in $J$ by the bijective enumeration map $\rho\colon\Np\rightarrow J$ such that $\rho (i)<\rho (j)$ whenever $i<j$. We have the evident inequality $\rho (i)\ge i$ when $\ch K=p>0$. When $\ch K=0$, the enumeration map $\rho$ is the identity map. In the algorithm below, for every $i\in\Np$ we simply denote its image $\rho (i)\in J$ as $[i]$, i.e., $\rho (i)=[i]$. \begin{algorithm}[Squarefree factorization of a univariate polynomial]\label{Algo:Squarefree} \hfill Input: A univariate polynomial $f\in\nok$. Output: The squarefree decomposition $\{g_1,\dotsc,g_s\}$ of $f$. \Po P: \leftskip=5mm \begin{itshape} If $f'\ne 0$, we compute the greatest common divisor $f_\ro 1:=\gcd (f,f')$ first. The squarefree part of $f$ is defined as $h_\ro 1:=f/f_\ro 1$. We repeat the following procedure\footnote{If $\ch K>0$, the exponent $\ro {i+1}-\ro i$ of $h_\ro {i+1}$ in the following definition of $f_\ro {i+1}$ is the improvement on \cite[P539, Algorithm B.1.6]{GP08}.} starting with $i=1$ until $i=s$ such that $f'_\ro s=0$: \begin{equation*} \begin{aligned} h_\ro{i+1}&:=\gcd (f_\ro i,h_\ro i);\\ g_\ro i&:=h_\ro i/h_\ro {i+1}. \end{aligned}\quad \biggl\{\begin{aligned} f_\ro {i+1}&:=f_\ro i/h_\ro {i+1}^{\ro {i+1}-\ro i}\\ f_\ro {i+1}&:=f_\ro i/h_\ro {i+1}. \end{aligned}\biggr.~ \begin{aligned} &\text{if $\ch K>0$};\\ &\text{if $\ch K=0$}. \end{aligned} \end{equation*} If $f_\ro s\in K$, we define $g_\ro s:=h_\ro s$ to obtain the squarefree factorization $f=\prod_{i=1}^s g_\ro i^\ro i$. If $\ch K=p>0$ and $f_\ro s\in\nok$, we invoke \Po Q on $f_\ro s$. 
\end{itshape} End of $\mathcal{P}$ \leftskip=0mm \Po Q: \leftskip=5mm \begin{itshape} If $\ch K=p>0$ and $f\in\nok$ satisfies $f'=0$, we repeat the following procedure starting with $x_1:=x$ and $\psi_1:=f$ until $i=t$ such that $\psi'_t\ne 0$: \begin{equation*} x_{i+1}:=x_i^p;\qquad\psi_{i+1}(x_{i+1}):=\psi_i(x_i). \end{equation*} We treat $\psi_t$ as $f$ and invoke \Po P on $\psi_t$. \end{itshape} End of $\mathcal{Q}$ \leftskip=0mm \end{algorithm} \Po Q in Algorithm \ref{Algo:Squarefree} amounts to a composition of Frobenius automorphisms. \begin{proposition} We can procure a squarefree factorization of $f$ in finite steps via Algorithm \ref{Algo:Squarefree}. \end{proposition} \begin{proof} The termination of the algorithm in finite steps readily follows from the fact that $\deg f_\ro{i+1}<\deg f_\ro i$ in \Po P as well as $\deg\psi_{i+1}<\deg\psi_i$ in \Po Q. In the case of $\ch K=0$, the bijective map $\rho\colon\Np\rightarrow J$ is the identity map such that $\rho (i)=[i]=i$. To illustrate the procedure of the algorithm, suppose that $f=\prod_{i=1}^s g_i^i$ is a squarefree factorization of $f$. Then $h_1=\prod_{i=1}^s g_i$ is the squarefree part of $f$ obtained at the beginning of \Po P. Moreover, the $h_i$ in \Po P equals $\prod_{j=i}^s g_j$ for $1\le i\le s$. Hence we have $g_i=h_i/h_{i+1}$. Further, $f_i$ equals $\prod_{j=i+1}^s g_j^{j-i}$ for $1\le i<s$. Finally $f_s=1$ and \Po P terminates since $f'_s=0$. Thus follows the squarefree factorization. In the case when $\ch K=p>0$, suppose that $f=\prod_{k\in p\mathbb{N}}g_k^k\cdot\prod_{i=1}^s g_\ro i^\ro i$. Let us denote $\varphi_\pp p:=\prod_{k\in p\mathbb{N}}g_k^k$ and $\varphi_q:=\prod_{i=1}^s g_\ro i^\ro i$ such that $f=\varphi_\pp p\varphi_q$. Then the $f_\ro 1$ in \Po P equals $\varphi_\pp p\cdot\prod_{i=2}^s g_\ro i^{\ro i-1}$. Hence $h_\ro 1=\prod_{i=1}^s g_\ro i$ is the squarefree part of $\varphi_q$. And the $h_\ro i$ equals $\prod_{j=i}^s g_\ro j$ for $1\le i\le s$.
Hence we have $g_\ro i=h_\ro i/h_\ro{i+1}$. Moreover, $f_\ro i$ equals $\varphi_\pp p\cdot\prod_{j=i+1}^s g_\ro j^{\ro j-\ro i}$ for $1\le i<s$. Finally, $f_\ro s=\varphi_\pp p$ and thus $f'_\ro s=0$. Now we have a squarefree factorization of $\varphi_q$ as $\prod_{i=1}^s g_\ro i^\ro i$. \Po Q amounts to a variable substitution $x_t=x^{p^t}$ such that $f_\ro s(x)=\psi_t (x_t)$. Since $\psi'_t\ne 0$, we treat $\psi_t$ as $f$ and assume that $\psi_t=\varphi_\pp p\varphi_q$ with $\varphi_\pp p$ and $\varphi_q$ being defined as above. We repeat \Po P on $\psi_t$ to obtain a squarefree factorization $\varphi_q=\prod_{i=1}^s g_\ro i^\ro i$. Nonetheless here $\varphi_q$ is in the variable $x_t$. Since the field $K$ is perfect, we have $\varphi_q(x_t)=(\varphi_q(x))^{p^t}=\prod_{i=1}^sg_\ro i^{\ro ip^t}$. \end{proof} \begin{remark} A remarkable feature of Algorithm \ref{Algo:Squarefree} is that, as long as the field $K$ is perfect, its computations are independent of $K$ and of any of its field extensions. In fact, all the computations in \Po P are based on $f_\ro 1=\gcd (f,f')$ and $h_\ro{i+1}:=\gcd (f_\ro i,h_\ro i)$, which are independent of the field extensions of $K$. We do not attempt to make a complete factorization of a univariate polynomial due to the complexity of distinct-degree factorizations. There is a discussion on the various stages of univariate polynomial factorizations, including both squarefree and distinct-degree factorizations, in \cite[P379]{GG13}. \end{remark} \begin{definition}[Compatible and incompatible divisors and parts]\label{Def:CompatibleDivisors} \hfill For a zero-dimensional ideal $I$ over a perfect field $K$, let $\pel$ be a pseudo-eliminant of $I$. Assume that $\mrs$ is the multiplier set for the pseudo-reductions of all the $S$-polynomials as in Algorithm \ref{Algo:PseudoEliminant}.
For an irreducible factor $\ipr$ of $\pel$ with multiplicity $i$, if there exists a multiplier $\mr\in\mrs$ such that $\mr$ is divisible by $\ipr$, then $\ipr^i$ is called an \emph{incompatible divisor} of $\pel$. If $\ipr$ is relatively prime to every multiplier $\mr$ in $\mrs$, then $\ipr^i$ is called a \emph{compatible divisor} of $\pel$. We name the product of all the compatible divisors of $\pel$ as the \emph{compatible part} of $\pel$ and denote it as $\Cp (\pel)$. The \emph{incompatible part} of $\pel$ is defined as the product of all the incompatible divisors of $\pel$ and denoted as $\Ip (\pel)$. In particular, we say that a pseudo-eliminant $\pel$ per se is \emph{compatible} if it has no incompatible divisors. \end{definition} From the above Definition \ref{Def:CompatibleDivisors} it is evident that $\pel=\Cp (\pel)\cdot\Ip (\pel)$. In the following Algorithm \ref{Algo:CompatiblePartPseudoEliminant}, we compute the compatible part $\Cp (\pel)$ and make a squarefree decomposition of the incompatible part $\Ip (\pel)$ simultaneously. \begin{algorithm}[Compatible part $\Cp (\pel)$ of a pseudo-eliminant $\pel$ and squarefree decomposition of its incompatible part $\Ip (\pel)$]\label{Algo:CompatiblePartPseudoEliminant} \hfil Input: A pseudo-eliminant $\pel\in\kxn$ and multiplier set $\mrs\subset\kxn$ that are obtained from Algorithm \ref{Algo:PseudoEliminant}. Output: Compatible part $\Cp (\pel)$ and squarefree decomposition $\{\ids_i\colon 1\le i\le s\}$ of the incompatible part $\Ip (\pel)$ with $\ids_i\subset\kxn$. We invoke Algorithm \ref{Algo:Squarefree} to make a squarefree factorization $\pel=\prod_{i=1}^s q_i^i$. For each multiplicity $i$ satisfying $1\le i\le s$, we construct a polynomial set $\ids_i\subset\kxn$ whose elements are pairwise relatively prime as follows: For every $\mr\in\mrs$, we compute $\idr_{\mr i}:=\gcd (\mr,q_i)$. 
If $\idr_{\mr i}\in\nonk$, we check whether $\idr_{\mr i}$ is relatively prime to every element $\sfr$ that is already in $\ids_i$. If not, we replace $\idr_{\mr i}$ with $\idr_{\mr i}/\gcd (\idr_{\mr i},\sfr)$, and we replace the $\sfr$ in $\ids_i$ with both $\gcd (\idr_{\mr i},\sfr)$ and $\sfr/\gcd (\idr_{\mr i},\sfr)$. We repeat the process until either $\idr_{\mr i}\in K$, or $\idr_{\mr i}$ is relatively prime to every element in $\ids_i$. We add $\idr_{\mr i}$ into $\ids_i$ if $\idr_{\mr i}\in\nonk$. Finally, for each multiplicity $i$ satisfying $1\le i\le s$, we compute the product $\sfr_i:=\prod_{\sfr\in\ids_i}\sfr$. Then we output $\pel/\prod_{i=1}^s\sfr_i^i$ as the compatible part $\Cp (\pel)$. We also output $\{\ids_i\colon 1\le i\le s\}$ as a squarefree decomposition of the incompatible part $\Ip (\pel)$. \end{algorithm} \begin{definition}[Composite divisor set $\ids_i$; composite divisor $\sfr^i$]\label{Def:CompositeDivisor} \hfill We call the univariate polynomial set $\ids_i$ for $1\le i\le s$ obtained in Algorithm \ref{Algo:CompatiblePartPseudoEliminant} a \emph{composite divisor set} of the incompatible part $\Ip (\pel)$ of the pseudo-eliminant $\pel$. For an element $\sfr$ of $\ids_i$, we call its $i$-th power $\sfr^i$ a \emph{composite divisor} of the incompatible part $\Ip (\pel)$. \end{definition} A composite divisor $\sfr^i$ is a product of the incompatible divisors $\ipr^i$ in Definition \ref{Def:CompatibleDivisors}. The incompatible part $\Ip (\pel)$ is the product of all the composite divisors $\sfr^i$ according to the final step of Algorithm \ref{Algo:CompatiblePartPseudoEliminant}, that is: \begin{equation}\label{IPCompositeDivisor} \Ip (\pel)=\prod_{i=1}^s\prod_{\sfr\in\ids_i}\sfr^i. \end{equation} The above composite divisors $\sfr^i$ are pairwise relatively prime by the construction of the composite divisor set $\ids_i$ in Algorithm \ref{Algo:CompatiblePartPseudoEliminant}.
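The refinement loop in Algorithm \ref{Algo:CompatiblePartPseudoEliminant} that keeps the elements of each $\ids_i$ pairwise relatively prime admits a short sketch in Python. This is a simplified illustration under stated assumptions rather than the paper's implementation: squarefree integers stand in for the squarefree univariate polynomials in $\kxn$, \texttt{math.gcd} replaces the polynomial gcd, and the function name \texttt{refine\_coprime} is ours.

```python
from math import gcd

def refine_coprime(parts, d):
    """Insert a new squarefree divisor d into a list of pairwise
    relatively prime squarefree factors, splitting entries by gcd's
    until the whole list is pairwise relatively prime again.

    Squarefree integers stand in for squarefree polynomials in K[x];
    units (here 1) are discarded, as constants are in the algorithm.
    """
    out = list(parts)
    i = 0
    while d != 1 and i < len(out):
        g = gcd(d, out[i])
        if g == 1:
            i += 1
            continue
        s = out[i]
        # replace s by gcd(d, s) and s/gcd(d, s); strip g from d
        out[i:i + 1] = [x for x in (g, s // g) if x != 1]
        d //= g
        i = 0  # rescan against the refined list
    if d != 1:
        out.append(d)
    return out

# Adding 42 to {30}: 30 splits into 6 and 5, and the leftover 7 is appended.
assert refine_coprime([30], 42) == [6, 5, 7]
```

Squarefreeness is what makes this terminate with a pairwise coprime list: for a squarefree $\sfr$, each split $\{\gcd (\idr_{\mr i},\sfr),\,\sfr/\gcd (\idr_{\mr i},\sfr)\}$ is itself a pair of relatively prime factors.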
It is evident that Algorithm \ref{Algo:CompatiblePartPseudoEliminant} terminates in finite steps since the multiplier set $\mrs$ in Algorithm \ref{Algo:CompatiblePartPseudoEliminant} is a finite set. \begin{lemma}\label{Lemma:SyzygyTransform} Suppose that $\tas=\{f_j\colon 1\le j\le s\}\subset\Rd$ is a polynomial set. Moreover, each $f_j$ has the same leading monomial $\lmc (f_j)=\tilde{\bx}^\alpha\in\tM$ for $1\le j\le s$. \begin{enumerate}[(i)] \item\label{item:SPolynomialExpansion} If $f=\sum_{j=1}^s f_j$ satisfies $\lmc (f)\prec\tilde{\bx}^\alpha$, then there exist multipliers $\ibr,\ibr_j\in (\kxn)^*$ for $1\le j<s$ such that \begin{equation}\label{SPolynomialExpansion} \ibr f=\sum_{1\le j<s}\ibr_jS(f_j,f_s) \end{equation} with the $S$-polynomial $S(f_j,f_s)$ being defined as in \eqref{SPolynomialDef}. \item\label{item:Nondivisibility} For each irreducible polynomial $\ipr\in\nonk$, we can always relabel the subscripts of the polynomial set $\tas=\{f_j\colon 1\le j\le s\}$ such that the multiplier $\ibr\in (\kxn)^*$ of $f$ in \eqref{SPolynomialExpansion} is not divisible by $\ipr$. \end{enumerate} \end{lemma} \begin{proof} \begin{inparaenum}[(i)] \item Let us denote $l_j:=\lcc (f_j)$ for $1\le j\le s$ and the least common multiple $\lmr_j:=\lcm (l_j,l_s)$ for $1\le j<s$. From $\lmc (f)\prec\tilde{\bx}^\alpha$ we can deduce the following condition on the leading coefficients: \begin{equation}\label{LeadingCoeffsSygyzy} \sum_{j=1}^s l_j=0. \end{equation} Now we define the multipliers in $\kxn$ as follows: \begin{equation}\label{SyzygyMultipliers} \ibr:=\lcm_{1\le j<s}\Bigl(\frac{\lmr_j}{l_j}\Bigr);\qquad\ibr_j:=\frac{\ibr l_j}{\lmr_j}\quad (1\le j<s) \end{equation} and prove that they satisfy the identity in \eqref{SPolynomialExpansion}. 
In fact, as per the definition of $S$-polynomials in \eqref{SPolynomialDef} and the above definition of $\ibr_j$ for $1\le j<s$, we have: \begin{align}\label{SPolynIdentity} \sum_{1\le j<s}\ibr_jS(f_j,f_s) &=\sum_{1\le j<s}\ibr_j\Bigl(\frac{\lmr_jf_j}{l_j}-\frac{\lmr_jf_s}{l_s}\Bigr) =\ibr\sum_{1\le j<s}f_j-f_s\sum_{1\le j<s}\frac{\ibr_j\lmr_j}{l_s}\\\label{IdentityLastSum} &=\ibr\sum_{1\le j<s}f_j-f_s\sum_{1\le j<s}\frac{\ibr l_j}{l_s}=\ibr\sum_{1\le j\le s}f_j \end{align} as per the condition \eqref{LeadingCoeffsSygyzy}. This proves the identity \eqref{SPolynomialExpansion}. Moreover, we should ensure that all our manipulations are over the PID $\kxn$. In fact, $\ibr l_j/l_s$ in \eqref{IdentityLastSum} is in $\kxn$ because it equals $\ibr_j\lmr_j/l_s$ such that both $\ibr_j$ and $\lmr_j/l_s$ are in $\kxn$. \item If none of $\{l_j\colon 1\le j\le s\}$ is a multiple of the irreducible polynomial $\ipr$, the conclusion \eqref{item:Nondivisibility} readily follows from the definition of the multiplier $\ibr$ in \eqref{SyzygyMultipliers}. For $1\le j\le s$, we denote the multiplicity of $\ipr$ in $l_j$ as $\mlt_\ipr (l_j)\ge 0$. Let us relabel the subscripts of $f_j$ and $l_j$ for $1\le j\le s$ such that $\mlt_\ipr (l_s)=\min_{1\le j\le s}\{\mlt_\ipr (l_j)\}$. As a result, we have $\mlt_\ipr (\gcd (l_j,l_s))=\mlt_\ipr (l_s)$ for $1\le j<s$. Hence $\mlt_\ipr (l_s/\gcd (l_j,l_s))=0$ for $1\le j<s$. Then $\mlt_\ipr (\lmr_j/l_j)=0$ since $\lmr_j/l_j=l_s/\gcd (l_j,l_s)$ for $1\le j<s$. Thus $\ibr=\lcm_{1\le j<s}(\lmr_j/l_j)$ is not divisible by $\ipr$. \end{inparaenum} \end{proof} \begin{remark}\label{Rmk:MultiplierSyzygy} The multiplier $\ibr$ in \eqref{SPolynomialExpansion} and its associated property in Lemma \ref{Lemma:SyzygyTransform} \eqref{item:Nondivisibility} amount to a generalization of the syzygy theory for Gr\"obner\ bases over fields \cite[P119, Prop. 3.2.3]{AL94} and over PIDs \cite[P247, Prop. 4.5.3]{AL94}. 
In particular, Lemma \ref{Lemma:SyzygyTransform} \eqref{item:Nondivisibility} proves to be a vital property of the multiplier $\ibr$ for our subsequent line of reasoning in Theorem \ref{Thm:CompatiblePart}. For simplicity we do not use the language of syzygy modules in Lemma \ref{Lemma:SyzygyTransform}. \end{remark} \begin{theorem}\label{Thm:CompatiblePart} Suppose that $\el$ is the eliminant of a zero-dimensional ideal $I$ in $\knx$ over a perfect field $K$ and $\pel$ a pseudo-eliminant of $I$. Then $\el$ is divisible by the compatible divisors of $\pel$, that is, for every compatible divisor $\ipr^i$ of $\pel$, we have $\mlt_\ipr (\pel)=\mlt_\ipr (\el)=i$. Hence $\el$ is divisible by the compatible part $\Cp (\pel)$ of $\pel$. In particular, $\el=\pel$ if $\pel$ per se is compatible. \end{theorem} \begin{proof} With $\ipr\in\nonk$ being an irreducible polynomial and $i\in\Np$, let $\ipr^i$ be a compatible divisor of the pseudo-eliminant $\pel$ as in Definition \ref{Def:CompatibleDivisors}. We shall prove that the eliminant $\el$ is also divisible by $\ipr^i$. That is, $\mlt_\ipr (\Cp (\pel))=i\le\mlt_\ipr (\el)$. Thus as per Lemma \ref{Lemma:PseudoEliminantTerminate} \eqref{item:EliminantDivisibility} we have: \begin{equation}\label{DivisorMultiplicityIdentity} \mlt_\ipr (\pel)=\mlt_\ipr (\Cp (\pel))=\mlt_\ipr (\el)=i. \end{equation} Let $\tbs\cup\{\lel\}:=\{f_j\colon 0\le j\le s\}\subset\knx\setminus K$ be the basis of the ideal $I$ after the Initialization in Algorithm \ref{Algo:PseudoEliminant} with $\lel\in\nona$. In the generic case when $\lel\ne 0$, i.e., $\lel\in\nonk$, the definition of pseudo-eliminant $\pel$ in Algorithm \ref{Algo:PseudoEliminant} shows that $\lel\in\pid\pel\subset\kxn$. That is, there exists $\rhor\in (\kxn)^\ast$ such that $\lel=\rhor\pel$. 
The eliminant $\el\in I\cap\kxn$ can be written as: \begin{equation}\label{InitialRepresentation} \el=\sum_{j=0}^s h_jf_j=\sum_{j=1}^s h_jf_j+\rhor h_0\pel \end{equation} with $h_j\in\knx$ for $0\le j\le s$. Let us abuse the notation a bit and denote $\tas:=\tbs\cup\{\lel\}=\{f_j\colon 0\le j\le s\}$. Suppose that $\max_{0\le j\le s}\{\lmc (h_jf_j)\}=\tilde{\bx}^\beta$. Let us collect and rename the elements in the set $\{f_j\in\tas\colon\lmc (h_jf_j)=\tilde{\bx}^\beta,0\le j\le s\}$ into a new set $\pb_t:=\{g_j\colon 1\le j\le t\}$. The subscripts of the polynomials $h_j$ are adjusted accordingly. In this way \eqref{InitialRepresentation} can be written as follows: \begin{equation}\label{RevisedRepresentation} \el=\sum_{j=1}^th_jg_j+\sum_{f_j\in\tas\setminus\pb_t}h_jf_j, \end{equation} where the products $h_jf_j$ are those in \eqref{InitialRepresentation} with $f_j\in\tas\setminus\pb_t$ for $0\le j\le s$. If we denote $\ltc (h_j):=c_j\tilde{\bx}^{\alpha_j}$ with $c_j\in (\kxn)^\ast$ for $1\le j\le t$ in \eqref{RevisedRepresentation}, then it is evident that the following polynomial: \begin{equation}\label{InitialTermReps} \tg:=\sum_{j=1}^t\ltc (h_j)\cdot g_j=\sum_{j=1}^tc_j\tilde{\bx}^{\alpha_j}g_j \end{equation} is a summand of \eqref{RevisedRepresentation} and satisfies $\lmc (\tg)\prec\tilde{\bx}^\beta=\lmc (\tilde{\bx}^{\alpha_j}g_j)$ for $1\le j\le t$ since $\lmc (\el)\prec\tilde{\bx}^\beta$ in \eqref{RevisedRepresentation}. According to Lemma \ref{Lemma:SyzygyTransform} \eqref{item:SPolynomialExpansion}, there exist multipliers $\ibr,\ibr_j\in (\kxn)^\ast$ for $1\le j<t$ that satisfy the following identity: \begin{equation}\label{LeadingMonomialSyzygy} \ibr\tg=\sum_{1\le j<t}\ibr_jS(c_j\tilde{\bx}^{\alpha_j}g_j,c_t\tilde{\bx}^{\alpha_t}g_t).
\end{equation} Moreover, by Lemma \ref{Lemma:SyzygyTransform} \eqref{item:Nondivisibility}, we can relabel the subscript set $\{1\le j\le t\}$ such that the multiplier $\ibr$ is not divisible by the irreducible polynomial $\ipr\in\nonk$ in \eqref{DivisorMultiplicityIdentity}, i.e., $\mlt_\ipr (\ibr)=0$. When $\pb_t\subset\Rd$, if we define $\tilde{\bx}^{\gamma_j}:=\lcm (\lmc (g_j),\lmc (g_t))$, we can simplify the $S$-polynomials in \eqref{LeadingMonomialSyzygy} based on \eqref{SPolynomialDef}: \begin{equation}\label{SPolynomialRelation} S(c_j\tilde{\bx}^{\alpha_j}g_j,c_t\tilde{\bx}^{\alpha_t}g_t)=\lmr_j\tilde{\bx}^{\beta-\gamma_j}S(g_j,g_t) \end{equation} with $\lmr_j:=\lcm (c_j\cdot\lcc (g_j),c_t\cdot\lcc (g_t))/\lcm (\lcc (g_j),\lcc (g_t))$ for $1\le j<t$. In particular, when $\lel=\rhor\pel$ as in \eqref{InitialRepresentation} satisfies $\lel\in\pb_t$, we can deduce from $\lmc (h_0\lel)=\tilde{\bx}^\beta$ that $\ltc (h_0)$ bears the form $\icr\tilde{\bx}^\beta$ with $\icr\in (\kxn)^\ast$. Then the $S$-polynomials in \eqref{LeadingMonomialSyzygy} involving $\icr\tilde{\bx}^\beta\lel$ bear the form $S(c_j\tilde{\bx}^{\alpha_j}g_j,\icr\rhor\tilde{\bx}^\beta\pel)$ when $g_j\ne\lel$ for $1\le j<t$ and $g_t=\lel$, or $S(\icr\rhor\tilde{\bx}^\beta\pel,c_t\tilde{\bx}^{\alpha_t}g_t)$ when $g_t\ne\lel$. Let us uniformly denote these as $S(c_j\tilde{\bx}^{\alpha_j}g_j,\icr\rhor\tilde{\bx}^\beta\pel)$ with $g_j\ne\lel$ and $1\le j\le t$. Thus by \eqref{DisappearingMonomial} and \eqref{SpecialSPoly}, the simplification parallel to \eqref{SPolynomialRelation} now becomes: \begin{equation}\label{SpecialSPolyRelation} S(c_j\tilde{\bx}^{\alpha_j}g_j,\icr\rhor\tilde{\bx}^\beta\pel)=S(c_j\tilde{\bx}^{\alpha_j}g_j,\icr\rhor\pel) =\lnr_j\tilde{\bx}^{\alpha_j}S(g_j,\pel) \end{equation} with $\lnr_j:=\lcm (c_j\cdot\lcc (g_j),\icr\rhor\pel)/\lcm (\lcc (g_j),\pel)$. From the definition of the set $\pb_t$ it follows that $\tilde{\bx}^{\alpha_j}\cdot\lmc (g_j)=\tilde{\bx}^\beta$.
Let $\pbs=\{\tg_k\colon 1\le k\le\tau\}\subset\Rd$ be the pseudo-basis of the ideal $I$ obtained in Algorithm \ref{Algo:PseudoEliminant} such that the polynomial set $\pb_t$ as in \eqref{RevisedRepresentation} is a subset of $\pbs\cup\{\pel\}$. In Algorithm \ref{Algo:PseudoEliminant} we have pseudo-reduced every $S$-polynomial $S(g_j,g_t)$ in \eqref{SPolynomialRelation} by the pseudo-basis $\pbs$, either directly or indirectly as in Lemma \ref{Lemma:TriangleIdentity}. More specifically, according to Theorem \ref{Thm:PseudoReduction}, there exist a multiplier $\mr_j\in (\kxn)^*$, a remainder $\dr_j\in\nona$, and $q_{jk}\in\knx$ for $1\le k\le\tau$ such that the following pseudo-reduction of $S(g_j,g_t)$ by the pseudo-basis $\pbs$ holds for $1\le j<t$: \begin{equation}\label{SPolynomialReduction} \mr_jS(g_j,g_t)=\sum_{k=1}^\tau q_{jk}\tg_k+\dr_j=\sum_{k=1}^\tau q_{jk}\tg_k+\rhor_j\pel. \end{equation} Please note that $S(g_j,g_t)$ is an $S$-polynomial between two elements $g_j$ and $g_t$ in $\pb_t$ because we abuse the notation for the subscripts of the elements in $\pbs$ and $\pb_t$ in \eqref{SPolynomialReduction}. The remainder $\dr_j$ in \eqref{SPolynomialReduction} is a univariate polynomial in $\pid\pel\subset\nona$ according to Lemma \ref{Lemma:PseudoEliminantTerminate} \eqref{item:AllSPolynRemainder}. Hence in \eqref{SPolynomialReduction} we denote $\dr_j:=\rhor_j\pel$ with $\rhor_j\in\kxn$. Moreover, $\mr_j$ is relatively prime to the compatible divisor $\ipr^i$ of the pseudo-eliminant $\pel$ as in \eqref{DivisorMultiplicityIdentity}, i.e., $\mlt_\ipr (\mr_j)=0$ for $1\le j<t$. As per \eqref{DivisionCond}, we can deduce that $\lmc (S(g_j,g_t))=\max_{1\le k\le\tau}\{\lmc (q_{jk}\tg_k)\}$ holds in \eqref{SPolynomialReduction} for $1\le j<t$. We further have $\lmc (S(g_j,g_t))\prec\tilde{\bx}^{\gamma_j}$ by \eqref{LeadingSMonomials} with $\tilde{\bx}^{\gamma_j}=\lcm (\lmc (g_j),\lmc (g_t))$ as in \eqref{SPolynomialRelation}.
Hence it readily follows that for $1\le j<t$: \begin{equation}\label{OrderSPolynomialReduction} \max_{1\le k\le\tau}\{\lmc (q_{jk}\tg_k)\}=\lmc (S(g_j,g_t))\prec\tilde{\bx}^{\gamma_j}. \end{equation} Based on \eqref{SPolynomialRelation} and \eqref{SPolynomialReduction}, it is straightforward to obtain a pseudo-reduction of the $S$-polynomial $S(c_j\tilde{\bx}^{\alpha_j}g_j,c_t\tilde{\bx}^{\alpha_t}g_t)$ in \eqref{LeadingMonomialSyzygy} by the pseudo-basis $\pbs$ as follows when $g_j,g_t\in\Rd$ for $1\le j<t$. \begin{equation}\label{RealSPolyReduction} \frac{\mr_j}{\idr_j}S(c_j\tilde{\bx}^{\alpha_j}g_j,c_t\tilde{\bx}^{\alpha_t}g_t) =\frac{\lmr_j}{\idr_j}\tilde{\bx}^{\beta-\gamma_j}\Bigl(\sum_{k=1}^\tau q_{jk}\tg_k+\rhor_j\pel\Bigr). \end{equation} In fact, with $\idr_j:=\gcd (\mr_j,\lmr_j)$, it suffices to take $\mr_j/\idr_j$ as the multipliers for the above pseudo-reductions for $1\le j<t$. It is evident that $\mlt_\ipr (\mr_j/\idr_j)=0$ if $\mlt_\ipr (\mr_j)=0$, i.e., the multipliers $\mr_j/\idr_j$ are still relatively prime to the compatible divisor $\ipr^i$ for $1\le j<t$. Moreover, a combination of \eqref{OrderSPolynomialReduction} and \eqref{RealSPolyReduction} leads to the inequalities: \begin{equation}\label{OrderRealSPolyReduction} \lmc (\tilde{\bx}^{\beta-\gamma_j}q_{jk}\tg_k)\prec\tilde{\bx}^\beta,\quad 1\le j<t,~1\le k\le\tau. \end{equation} We can make a pseudo-reduction of the $S$-polynomial $S(g_j,\pel)$ in \eqref{SpecialSPolyRelation} by the pseudo-eliminant $\pel$ as in \eqref{SpecialSPolyEssence} of Lemma \ref{Lemma:UnnecessaryConst}. The multiplier $\mr_j:=\gcd (\lcc (g_j),\pel)$ of the pseudo-reduction satisfies $\mlt_\ipr (\mr_j)=0$ since for every $\tf\in\pbs$, we added $\gcd (\lcc (\tf),\pel)\in\nonk$ into the multiplier set $\mrs$ in \Po R of Algorithm \ref{Algo:PseudoEliminant}.
Then based on the relationship in \eqref{SpecialSPolyRelation}, we can make pseudo-reductions of the $S$-polynomials $S(c_j\tilde{\bx}^{\alpha_j}g_j,\icr\rhor\pel)$ in \eqref{SpecialSPolyRelation} via multipliers $\mr_j/\idr_j$ with $\idr_j:=\gcd (\mr_j,\lnr_j)$ as follows. \begin{equation}\label{RealSpecialReduction} \frac{\mr_j}{\idr_j}S(c_j\tilde{\bx}^{\alpha_j}g_j,\icr\rhor\pel) =\frac{\lnr_j}{\idr_j}\tilde{\bx}^{\alpha_j}\ur_j\pel \end{equation} with $\ur_j:=g_j-\ltc (g_j)$ and $g_j\ne\lel$ for $1\le j\le t$. Evidently we also have $\mlt_\ipr (\mr_j/\idr_j)=0$ as well as $\lmc (\tilde{\bx}^{\alpha_j}\ur_j)\prec\tilde{\bx}^{\alpha_j}\cdot\lmc (g_j)=\tilde{\bx}^\beta$ by \eqref{SpecialSPolyRelation}. In \eqref{LeadingMonomialSyzygy} there appear two kinds of $S$-polynomials as $S(c_j\tilde{\bx}^{\alpha_j}g_j,c_t\tilde{\bx}^{\alpha_t}g_t)$ in \eqref{RealSPolyReduction} and $S(c_j\tilde{\bx}^{\alpha_j}g_j,\icr\rhor\pel)$ in \eqref{RealSpecialReduction}. Let $\mr$ denote the product of the multipliers $\mr_j/\idr_j$ in \eqref{RealSPolyReduction} and \eqref{RealSpecialReduction} for the pseudo-reductions of the $S$-polynomials in \eqref{LeadingMonomialSyzygy}. It is evident that $\mr$ is still relatively prime to the compatible divisor $\ipr^i$. Based on \eqref{LeadingMonomialSyzygy} and the pseudo-reductions of the $S$-polynomials in \eqref{RealSPolyReduction} and \eqref{RealSpecialReduction}, we obtain the following representation: \begin{equation}\label{TempRepresent} \ibr\mr\tg=\sum_{k=1}^\tau q_k\tg_k+\eta\pel. \end{equation} With $1\le k\le\tau$ here, $q_k\in\knx$ is a linear combination of the factors $\tilde{\bx}^{\beta-\gamma_j}q_{jk}$ in \eqref{RealSPolyReduction} for $1\le j<t$ with coefficients $\ibr_j\mr\lmr_j/\mr_j\in\kxn$. 
The polynomial $\eta\in\knx$ is a linear combination of the factors $\tilde{\bx}^{\beta-\gamma_j}\rhor_j$ in \eqref{RealSPolyReduction} for $1\le j<t$ and the factors $\tilde{\bx}^{\alpha_j}\ur_j$ in \eqref{RealSpecialReduction} for $1\le j\le t$ with coefficients $\ibr_j\mr\lmr_j/\mr_j$ and $\ibr_j\mr\lnr_j/\mr_j$ in $\kxn$ respectively. Thus from \eqref{OrderRealSPolyReduction} and $\lmc (\tilde{\bx}^{\alpha_j}\ur_j)\prec\tilde{\bx}^\beta$ in \eqref{RealSpecialReduction}, we have the following inequality for \eqref{TempRepresent}: \begin{equation}\label{DecreaseLeadingMonomial} \max\bigl\{\max_{1\le k\le\tau}\{\lmc (q_k\tg_k)\},\lmc (\eta\pel)\bigr\}\prec\tilde{\bx}^\beta. \end{equation} The multiplier $\ibr\mr$ as in \eqref{TempRepresent} is relatively prime to the compatible divisor $\ipr^i$ since both $\ibr$ and $\mr$ are. We also know that the polynomial $\tg$ in \eqref{InitialTermReps} is a summand of the representation of the eliminant $\el$ in \eqref{RevisedRepresentation}. We multiply both the polynomial $\tg$ in \eqref{InitialTermReps} and the eliminant $\el$ in \eqref{RevisedRepresentation} by the multiplier $\ibr\mr$. Then we replace the summand $\ibr\mr\tg$ by its new representation in \eqref{TempRepresent}. In this way we obtain a new representation of $\ibr\mr\el$ as follows. \begin{equation}\label{NewEliminantRep} \ibr\mr\el=\sum_{k=1}^\tau q_k\tg_k+\eta\pel+\ibr\mr\sum_{j=1}^t(h_j-\ltc (h_j))g_j+\ibr\mr\sum_{f_j\in\tas\setminus\pb_t}h_jf_j. \end{equation} The representation \eqref{NewEliminantRep} also applies to the case when the univariate polynomial $\lel$ in \eqref{InitialRepresentation} satisfies $\lel\in\tas\setminus\pb_t$, that is, $\lmc (h_0\lel)\prec\tilde{\bx}^\beta$ as per the definition of the set $\pb_t$ in \eqref{RevisedRepresentation}. In fact, in this case we can rewrite the last summation in \eqref{NewEliminantRep} as follows.
\begin{equation}\label{TheLastSummand} \ibr\mr\sum_{f_j\in\tas\setminus\pb_t}h_jf_j=\ibr\mr h_0\lel+\ibr\mr\sum_{f_j\in\tbs\setminus\pb_t}h_jf_j, \end{equation} where $\tbs=\tas\setminus\kxn$ is defined as in Algorithm \ref{Algo:PseudoEliminant}. We can treat the above summand $\ibr\mr h_0\lel$ as the summand $\eta\pel$ in \eqref{NewEliminantRep} since $\lel=\rhor\pel$ as in \eqref{InitialRepresentation}. Let $\pbs=\{\tg_k\colon 1\le k\le\tau\}$ be the pseudo-basis obtained in Algorithm \ref{Algo:PseudoEliminant}. Since $\pb_t\subset\tas=\tbs\cup\{\lel\}$ and $\tbs\subset\pbs$ in \eqref{NewEliminantRep} and \eqref{TheLastSummand}, we can rewrite the representation in \eqref{NewEliminantRep}, including the case of \eqref{TheLastSummand}, into a new representation in terms of $\pbs$ and $\pel$ as follows. \begin{equation}\label{FinalRepresentation} \ibr\mr\el=\sum_{k=1}^\tau \iur_k\tg_k+\iur_0\pel \end{equation} with $\iur_k\in\knx$ for $0\le k\le\tau$. The leading monomials in \eqref{FinalRepresentation} satisfy \begin{equation}\label{RepresentationLeadingMonomial} \max\bigl\{\max_{1\le k\le\tau}\{\lmc (\iur_k\tg_k)\},\lmc (\iur_0)\bigr\}\prec\tilde{\bx}^\beta \end{equation} according to \eqref{DecreaseLeadingMonomial} and the representations in \eqref{NewEliminantRep} and \eqref{TheLastSummand}. To summarize, the leading monomials in the representation \eqref{FinalRepresentation} strictly decrease from those in the representations \eqref{InitialRepresentation} and \eqref{RevisedRepresentation}, up to the multiplier $\ibr\mr$ that satisfies $\mlt_\ipr (\ibr\mr)=0$. Now we treat the representation in \eqref{FinalRepresentation} as the one in \eqref{InitialRepresentation} and repeat our discussions from \eqref{RevisedRepresentation} through \eqref{FinalRepresentation}. In this way we obtain a new representation of $\ibr\mr\el$ with a new multiplier. The leading monomials of the new representation are strictly less than those in \eqref{FinalRepresentation}. 
This is similar to the strict inequality in \eqref{RepresentationLeadingMonomial} comparing the leading monomials of the representation in \eqref{FinalRepresentation} with those in \eqref{InitialRepresentation}. Moreover, the new multiplier for the new representation is still relatively prime to the compatible divisor $\ipr^i$ of the pseudo-eliminant $\pel$. We repeat the above procedure of rewriting the representations of the eliminant $\el$ so as to strictly decrease their leading monomials. Furthermore, the multipliers for the representations are always relatively prime to the compatible divisor $\ipr^i$ of the pseudo-eliminant $\pel$. Since the elimination ordering on $\knx$ as in Definition \ref{Def:EliminationOrdering} and Definition \ref{Def:MonomialOrdering} is a well-ordering, the above procedures halt after a finite number of repetitions. In this way we shall reach a representation bearing the following form: \begin{equation}\label{Finally} \fir\el=h\pel. \end{equation} The multiplier $\fir\in (\kxn)^\ast$ in \eqref{Finally} is relatively prime to the compatible divisor $\ipr^i$ of the pseudo-eliminant $\pel$, and the quotient satisfies $h\in (\kxn)^\ast$. Hence the eliminant $\el$ is divisible by $\ipr^i$ since $\fir$ is relatively prime to $\ipr^i$ whereas $\pel$ is divisible by its compatible divisor $\ipr^i$ according to the definition of compatible divisors in Definition \ref{Def:CompatibleDivisors}. Thus follows the conclusion of Theorem \ref{Thm:CompatiblePart}. \end{proof} In Definition \ref{Def:CompatibleDivisors} we defined the compatible part $\Cp (\pel)$ and incompatible part $\Ip (\pel)$ of a pseudo-eliminant $\pel$. The definition is based on the multiplier set $\mrs$ for the pseudo-reductions of $S$-polynomials by the pseudo-basis $\pbs$ in obtaining $\pel$.
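The multiplier criterion lends itself to a direct computation. The following Python sketch (using sympy, purely for illustration; the helper name `compatible_part` is ours and not part of any algorithm in this paper) keeps exactly those primary factors $\ipr^i$ of a pseudo-eliminant whose bases are relatively prime to every multiplier in $\mrs$, with $\bQ[y]$ playing the role of $K[x_1]$.

```python
# Hedged sketch (not the paper's code): split a pseudo-eliminant into its
# compatible part Cp(pel) by the multiplier criterion, over Q[y].
import sympy as sp

y = sp.symbols('y')

def compatible_part(pel, multipliers):
    """Keep each primary factor p**i of pel whose base p is relatively
    prime to every multiplier used in the pseudo-reductions."""
    cp = sp.Integer(1)
    for p, i in sp.factor_list(pel)[1]:
        if all(sp.gcd(p, m) == 1 for m in multipliers):
            cp *= p**i
    return sp.expand(cp)

# Toy data: pel = y*(y+1)**2 with multiplier set {2, y+1};
# only the primary factor y is compatible.
pel = sp.expand(y*(y + 1)**2)
assert compatible_part(pel, [sp.Integer(2), y + 1]) == y
```

The incompatible part $\Ip (\pel)$ is then recovered as the cofactor of the returned product in $\pel$.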
As the following Corollary \ref{Cor:CoefficientCriterion} shows, sometimes it is more convenient to determine the compatibility by the leading coefficients of the pseudo-basis $\pbs$ than by the multiplier set $\mrs$ as above. \begin{corollary}\label{Cor:CoefficientCriterion} For a zero-dimensional ideal $I$ over a perfect field $K$, let $\pel$ be a pseudo-eliminant of $I$ and $\pbs$ its pseudo-basis. Suppose that $\ipr$ is an irreducible factor of $\pel$ with multiplicity $i$. If $\ipr$ is relatively prime to every leading coefficient in $\lcc (\pbs):=\{\lcc (\ibr)\in\nonk\colon \ibr\in\pbs\}$, then $\ipr^i$ is a compatible divisor of $\pel$. In particular, if $\pel$ is relatively prime to every leading coefficient in $\lcc (\pbs)$, then $\pel$ is compatible and $\el=\pel$. \end{corollary} \begin{proof} Let us prove that the divisor $\ipr^i$ of $\pel$ satisfies Definition \ref{Def:CompatibleDivisors} and hence is compatible when $\ipr$ is relatively prime to every leading coefficient in $\lcc (\pbs)$. That is, $\ipr^i$ is relatively prime to the multiplier set $\mrs$ for the pseudo-reductions of $S$-polynomials by the pseudo-basis $\pbs$ in obtaining $\pel$. In what follows let us write an $S$-polynomial as $S$ and the pseudo-basis as $\pbs=\{\Lst \ibr1s\}$. Then the multiplier $\mr\in (\kxn)^*$ in \eqref{PseudoDivisionExpression} for the pseudo-reduction of $S$ by $\pbs$ is just a finite product of the interim multipliers $\iur:=\lmr/c_\alpha\in (\kxn)^*$ in \eqref{TermReduction} with $\lmr=\lcm (c_\alpha,\lcc (\ibr_j))$. It is evident that if an irreducible factor $\ipr$ of $\pel$ is relatively prime to $\lcc (\ibr_j)$, then it is also relatively prime to the interim multiplier $\iur=\lmr/c_\alpha$ since $\lmr/c_\alpha=\lcc (\ibr_j)/\gcd (c_\alpha,\lcc (\ibr_j))$. As a result, $\ipr$ is relatively prime to the multiplier $\mr$ for the pseudo-reduction of $S$ by $\pbs$. Thus $\ipr^i$ is a compatible divisor of $\pel$. 
\end{proof} Definition \ref{Def:CompatibleDivisors} provides a criterion for the compatibility of a pseudo-eliminant's divisors based on the multiplier set $\mrs$, whereas the criterion in Corollary \ref{Cor:CoefficientCriterion} is based on the leading coefficients of the pseudo-basis $\pbs$. To distinguish them, we call these the \emph{multiplier criterion} and the \emph{coefficient criterion} respectively. The following example shows that a divisor of a pseudo-eliminant $\pel$ satisfying the multiplier criterion does not necessarily satisfy the coefficient criterion. \begin{example}\label{Expl:MultipliersCount} In the algebra $(\bQ[y])[x]$ with elimination ordering $x\succ y$, consider an ideal $I$ generated by $f(x,y)=y(x^2+1)$ and $g(x,y)=(y+1)(2x+1)$. As per \eqref{SPolynomialDef}, their $S$-polynomial is as follows with $\lcc (f)=y$ and $\lcc (g)=2(y+1)$. \begin{equation*} S(f,g)=2(y+1)f-yxg=-y(y+1)(x-2). \end{equation*} The pseudo-reduction of $S(f,g)$ is $2S(f,g)+yg=5y(y+1)$ with the multiplier $\mr=2\in\bQ$. The pseudo-eliminant is $\pel=5y(y+1)$ and the pseudo-basis is $\tas=\{f,g\}$. Both the irreducible factors $y$ and $y+1$ of $\pel$ satisfy the multiplier criterion and hence are compatible divisors. Hence $\pel=5y(y+1)$ is compatible and the eliminant $\el=\pel$ up to the unit coefficient $5\in\bQ$. Nevertheless $\pel$ does not satisfy the coefficient criterion. \end{example} \section{Analysis of Incompatible Divisors via Modular Method} \label{Sec:IncompatibleModular} For a pseudo-eliminant $\pel$ of a zero-dimensional ideal $I$ in $\knx$ over a perfect field $K$, we made a squarefree decomposition of its incompatible part $\Ip (\pel)$ in Algorithm \ref{Algo:CompatiblePartPseudoEliminant}. In order to determine the eliminant $\el$ of $I$, we perform a complete analysis of $\Ip (\pel)$ in this section.
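Small verifications of this kind are easy to automate. The following Python sketch (with sympy, purely illustrative) recomputes the $S$-polynomial and the pseudo-reduction of Example \ref{Expl:MultipliersCount} above.

```python
# Recompute Example: S-polynomial and pseudo-reduction in (Q[y])[x].
import sympy as sp

x, y = sp.symbols('x y')
f = sp.expand(y*(x**2 + 1))        # lc(f) = y,      lm(f) = x^2
g = sp.expand((y + 1)*(2*x + 1))   # lc(g) = 2(y+1), lm(g) = x

# S(f, g) = 2(y+1)*f - y*x*g, cancelling the lcm of the leading terms
S = sp.expand(2*(y + 1)*f - y*x*g)
assert S == sp.expand(-y*(y + 1)*(x - 2))

# Pseudo-reduction of S(f, g) by g with multiplier 2 in Q
pel = sp.expand(2*S + y*g)
assert pel == sp.expand(5*y*(y + 1))   # the pseudo-eliminant 5y(y+1)
```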
For the squarefree decomposition $\{\ids_i\}$ of $\Ip (\pel)$ obtained in Algorithm \ref{Algo:CompatiblePartPseudoEliminant}, the elements in $\ids_i$ are pairwise relatively prime and usually have small exponents due to the way they are constructed in Algorithm \ref{Algo:CompatiblePartPseudoEliminant}. Accordingly it is natural to contrive a modular algorithm modulo the elements in $\ids_i$ so as to reduce the complexity of our algorithm. However, this requires unorthodox computations in principal ideal rings with zero divisors. \begin{definition}[Principal Ideal Quotient Ring$\colon$PQR]\label{Def:PQR} \hfill Let $R$ be a PID whose set of units is denoted as $R^\times$. With the principal ideal $\pid\qr$ generated by an element $\qr\in R^\ast\setminus R^\times$, the quotient ring $\bar R:=R/\pid\qr$ is called a \emph{principal ideal quotient ring} and abbreviated as PQR henceforth. \end{definition} \begin{notation}\label{Notation:Factorials} Suppose that $\qr\in R^\ast\setminus R^\times$ in Definition \ref{Def:PQR} has a unique factorization $\qr=u\prod_{i=1}^sp_i^{\alpha_i}$ with $s,\alpha_i\in\Np$ for $1\le i\le s$ and $u\in R^\times$. Here the factors $\{p_i\colon 1\le i\le s\}\subset R^\ast\setminus R^\times$ are irreducible and pairwise relatively prime when $s>1$. \end{notation} When $\qr\in R^\ast\setminus R^\times$ is irreducible in Definition \ref{Def:PQR}, i.e., when $s=\alpha_1=1$ in Notation \ref{Notation:Factorials}, the PQR $\bar R$ becomes a field since $\qr$ is prime in $R$. However, when $\qr\in R^\ast\setminus R^\times$ is not irreducible in Definition \ref{Def:PQR}, i.e., in the case of either $s>1$ or $\alpha_i>1$ for an $i$ satisfying $1\le i\le s$, the PQR $\bar R$ has zero divisors and is not an integral domain. In this case $\bar R$ is no longer a factorial ring. Nonetheless $\bar R$ still has nice properties to which we can resort in our computations.
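As a concrete illustration of such computations (a minimal Python sketch with sympy; the helper names `eps` and `standard_factor` are ours), take $R=\bQ[y]$ and $\qr=y^2(y+1)$: every residue is represented by its remainder of degree less than $\deg (\qr)$, and gcds with $\qr$ detect units and associates.

```python
# Illustration (our helper names, not the paper's): arithmetic in the
# PQR Q[y]/<q> with q = y^2*(y+1), via remainders of degree < deg(q).
import sympy as sp

y = sp.symbols('y')
q = sp.expand(y**2*(y + 1))   # q = y^3 + y^2

def eps(a):
    """Projection R -> R/<q>: the remainder of a modulo q."""
    return sp.rem(sp.expand(a), q, y)

def standard_factor(a):
    """The standard factor a_st, computed as gcd(iota(a), q)."""
    return sp.gcd(eps(a), q)

# y^3 = -y^2 mod q, so y^3 and y^2 are associates (unit factor -1):
assert eps(y**3) == -y**2
assert standard_factor(y**3) == y**2
# An element relatively prime to q is a unit; its standard factor is 1:
assert standard_factor(y + 2) == 1
```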
\begin{lemma}\label{Lemma:PQRProperties} Let $\bar R=R/\pid\qr$ be a PQR as in Definition \ref{Def:PQR} and $\varphi\colon R\rightarrow\bar R$ the canonical ring homomorphism. Suppose that $\qr\in R^\ast\setminus R^\times$ has a unique factorization $\qr=u\prod_{i=1}^sp_i^{\alpha_i}$ as in Notation \ref{Notation:Factorials}. In what follows we also use the notation $\bar a:=\varphi (a)$ for every $a\in R$. \begin{enumerate}[(i)] \item\label{item:SimpleCoprime} An element $\dr\in R$ is relatively prime to $\qr$, i.e., $\gcd (\dr,\qr)=1$, if and only if $\varphi (\dr)$ is a unit in $\bar R$, that is, $\varphi (\dr)\in\bru R$. \item\label{item:ExpoLimits} For $1\le i\le s$ and each $l\in\mathbb{N}$ satisfying $l\ge\alpha_i$, we have $\bar p_i^l\sim\bar p_i^{\alpha_i}$. Here the notation $\bar a\sim\bar b$ in $\bar R$ means that $\bar a$ is an associate of $\bar b$ in $\bar R$, i.e., there is a unit $\bar u\in\bru R$ such that $\bar b=\bar u\bar a$. \item\label{item:StandardRep} For every $\bar a\in\bra R$, we have a unique representation $\bar a\sim\prod_{i=1}^s\bar p_i^{\beta_i}$ that satisfies $0\le\beta_i\le\alpha_i$ for $1\le i\le s$. We call such a representation a \emph{standard} representation of $\bar a$ in the PQR $\bar R$ and denote it as $\bar a_\st$. In particular, we define $\bar a_\st:=1$ for $\bar a\in\bru R$. \end{enumerate} \end{lemma} \begin{proof} The conclusion \eqref{item:SimpleCoprime} readily follows from the fact that $R$ is a PID. In fact, $\gcd (\dr,\qr)=1$ means that there exist $u,v\in R$ such that $u\dr+v\qr=1$, from which we can deduce that $\varphi (u)\varphi (\dr)=1$. Conversely, if $\varphi (\dr)\in\bru R$, then there exists $u\in R$ such that $\varphi (u\dr)=1$, which implies that there exists $v\in R$ such that $u\dr-1=v\qr$. Hence follows the conclusion. When $s=1$ both the conclusions \eqref{item:ExpoLimits} and \eqref{item:StandardRep} are evident since $\bar p_1^l=0$ for $l\ge\alpha_1$. In particular, $\bra R=\bru R$ when $\alpha_1=s=1$.
In this case every $\bar a\in\bra R$ has a standard representation $\bar a\sim 1:=\bar a_\st$. So in what follows let us suppose that $s>1$. Without loss of generality, let us prove \eqref{item:ExpoLimits} in the case when $i=s$. For $l>\alpha_s$ (the case $l=\alpha_s$ being trivial), we have $p_s^l\equiv p_s^l+\qr\mod \qr$. Moreover, we have the identity: $p_s^l+\qr=p_s^{\alpha_s}(p_s^{l-\alpha_s}+u\prod_{i=1}^{s-1}p_i^{\alpha_i})$. Here $\varphi (p_s^{l-\alpha_s}+u\prod_{i=1}^{s-1}p_i^{\alpha_i})$ is a unit in $\bar R$ by \eqref{item:SimpleCoprime} since $p_s^{l-\alpha_s}+u\prod_{i=1}^{s-1}p_i^{\alpha_i}$ is relatively prime to $\qr$ in $R$. Thus $\bar p_s^l=\varphi (p_s^l+\qr)\sim\bar p_s^{\alpha_s}$. The existence of the standard representation in \eqref{item:StandardRep} readily follows from the fact that $R$ is factorial and the canonical homomorphism $\varphi$ is an epimorphism. If $\ipr$ is irreducible and relatively prime to $\qr$, then $\bar\ipr=\varphi (\ipr)\in\bru R$ is a unit. Hence for every $\bar a\in\bra R$, its standard representation can only bear the form $\bar a\sim\prod_{i=1}^s\bar p_i^{\beta_i}$ with $0\le\beta_i\le\alpha_i$ for $1\le i\le s$. Now suppose that $\bar a$ has another standard representation $\bar a\sim\prod_{i=1}^s\bar p_i^{\gamma_i}$ with $0\le\gamma_i\le\alpha_i$ for $1\le i\le s$. Then there exists $h\in R$ such that $\prod_{i=1}^sp_i^{\beta_i}=u\cdot\prod_{i=1}^sp_i^{\gamma_i}+h\qr$ with $u\in R$ being relatively prime to $\qr$, from which we can easily deduce that $\beta_i=\gamma_i$ for $1\le i\le s$. \end{proof} Let $K$ be a perfect field and $\qr\in\nonk$. It is easy to see that $\kxn/\pid\qr$ is a PQR as defined in Definition \ref{Def:PQR}. Hereafter we use $R$ and $\bar R$ to denote $\kxn$ and $\kxn/\pid\qr$ respectively. Let us consider the following set: \begin{equation}\label{ResidueRing} \rr:=\{r\in\kxn\colon\deg (r)<\deg (\qr)\} \end{equation} with $\deg (r)=0$ for every $r\in K$ including $r=0$.
Let us regard the canonical ring homomorphism $\varphi\colon R\rightarrow\bar R$ merely as a map of sets. We restrict it to $\rr$ and denote the restriction by $\varphi_\qr$. It is evident that $\varphi_\qr\colon\rr\rightarrow\bar R$ is a bijective map with $\varphi_\qr (0)=0$. We redefine the two binary operations on $\rr$, the addition and multiplication, as follows. \begin{equation}\label{BinaryOperations} a+b:=\varphi_\qr^{-1}(\bar a+\bar b);\quad a\cdot b:=\varphi_\qr^{-1}(\bar a\cdot\bar b). \end{equation} In this way the set $\rr$ in \eqref{ResidueRing} becomes a ring, which we still denote as $\rr$. It is easy to verify that $\varphi_\qr$ is a ring isomorphism between $\rr$ and $\bar R$. As a result, the conclusions in Lemma \ref{Lemma:PQRProperties} apply to the ring $\rr$ as well. \begin{definition}[Normal PQR $\rr$]\label{Def:nPQR} \hfill We call the ring $\rr$ constructed as in \eqref{ResidueRing} and \eqref{BinaryOperations} a \emph{normal} PQR henceforth. \end{definition} Let a normal PQR $\rr$ be defined as in Definition \ref{Def:nPQR} for $\qr\in\nonk$. For every $f\in R=\kxn$, there exist a quotient $h\in\kxn$ and a unique remainder $r\in\kxn$ satisfying \begin{equation}\label{DivisionPQR} f=h\qr+r;\qquad\deg (r)<\deg (\qr). \end{equation} Hence by \eqref{ResidueRing} we can define an epimorphism directly as follows. \begin{equation}\label{ProjectionPQR} \mo\colon R\rightarrow\rr\colon\quad\mo (f):=r. \end{equation} It is easy to verify that the epimorphism $\mo$ is the composition of the canonical ring homomorphism $\varphi\colon R\rightarrow\bar R=\kxn/\pid\qr$ and the isomorphism $\varphi_\qr^{-1}\colon\bar R\rightarrow\rr$ in \eqref{BinaryOperations}. Since a normal PQR $\rr$ is a subset of $R=\kxn$, for every $r\in\rr$, we can define an injection as follows. \begin{equation}\label{PQREmbedding} \io\colon\rr\hookrightarrow R\colon\quad\io (r):=r.
\end{equation} Please note that $\io$ is not a ring homomorphism since the binary operations on the ring $\rr$ are different from those on $R$. Nonetheless $\mo\circ\io$ is the identity map on $\rr$. For each pair $a,b\in\rr$, we define the binary operations between $\io (a)$ and $\io (b)$ as those defined on $R$. Suppose that $\bar a\in\bra R$ has a standard representation $\bar a\sim\bar a_\st=\prod_{i=1}^s\bar p_i^{\beta_i}$ with $0\le\beta_i\le\alpha_i$ as in Lemma \ref{Lemma:PQRProperties} \eqref{item:StandardRep}. We can substitute $p_i=\varphi_\qr^{-1}(\bar p_i)\in\rat$ as in \eqref{BinaryOperations} for $\bar p_i\in\bra R\setminus\bru R$ in this representation. In this way we obtain a \emph{standard} representation of $a:=\varphi_\qr^{-1}(\bar a)\in\rr^\ast$ in the normal PQR $\rr$ as follows: \begin{equation}\label{StdRepsPQR} a\sim a_\st:=\prod_{i=1}^s p_i^{\beta_i},\quad 0\le\beta_i\le\alpha_i; \qquad a=a^\times\cdot a_\st, \end{equation} where $\{\alpha_i\colon 1\le i\le s\}$ are the exponents for the unique factorization of the modulus $\qr$ as in Lemma \ref{Lemma:PQRProperties}. For convenience we use $a^\times\in\rr^\times$ to denote the unit factor of $a$ with respect to $a_\st$. We also call $a_\st$ the \emph{standard factor} of $a$ henceforth. In particular, we define $a_\st:=1$ for $a\in\rr^\times$. We can derive the existence and uniqueness of the standard representation $a_\st$ in \eqref{StdRepsPQR} from Lemma \ref{Lemma:PQRProperties} \eqref{item:StandardRep} since the normal PQR $\rr$ is isomorphic to the PQR $\bar R$ under $\varphi_\qr$. \begin{remark} It is unnecessary to procure a complete factorization of $a_\st$ as in \eqref{StdRepsPQR} in our computations. In fact, it suffices to make a factorization $a=a^\times\cdot a_\st$. This can be easily attained by the computation $a_\st=\gcd (\io (a),\qr)$ with $\io$ being defined as in \eqref{PQREmbedding}.
The soundness of the computation readily follows from Lemma \ref{Lemma:PQRProperties} and \eqref{StdRepsPQR}. \end{remark} An apparent difference between the PQR $\bar R$ and normal PQR $\rr$ is that the degree function $\deg$ is well defined on $\rr$, which is indispensable for polynomial divisions. More specifically, for all $a,b\in\rr^\ast$ with $\deg (b)>0$, there exist a quotient $h\in\rr$ and a unique remainder $r\in\rr$ satisfying the following equality: \begin{equation}\label{SimpleDivisionIdentity} a=hb+r\text{\quad such that~}\deg (r)<\deg (b). \end{equation} Since all the polynomials involved in \eqref{SimpleDivisionIdentity}, including the product $hb$, have degrees strictly less than $\deg (\qr)$, no product vanishes due to zero divisors during the division. This includes the case when $\deg (a)<\deg (b)$ and hence $h=0$. That is, we make polynomial divisions on the normal PQR $\rr$ in the same way as on $R$. For $a,b\in\rr^\ast$ and their standard representations $a\sim a_\st=\prod_{i=1}^s p_i^{\beta_i}$ and $b\sim b_\st=\prod_{i=1}^s p_i^{\gamma_i}$ as in \eqref{StdRepsPQR}, let us define: \begin{equation}\label{GcdPQRStd} \gcd\nolimits_\st (a,b):=\gcd (a_\st,b_\st)=\prod_{i=1}^s p_i^{n_i};~ \lcm\nolimits_\st (a,b):=\lcm (a_\st,b_\st)=\prod_{i=1}^s p_i^{m_i} \end{equation} with $n_i:=\min\{\beta_i,\gamma_i\}$ and $m_i:=\max\{\beta_i,\gamma_i\}$. It is evident that we might have $\lcm_\st (a,b)=0$ for $a,b\in\rr^\ast$ due to the possible existence of zero divisors in $\rr^\ast$. \begin{remark}\label{Rmk:GcdComput} The definition of $\gcd_\st (a,b)$ and $\lcm_\st (a,b)$ in \eqref{GcdPQRStd} is based upon a complete factorization of $a$ and $b$ as in \eqref{StdRepsPQR}. In practice, in order to minimize the complexity of our algorithm, we resort to the Euclidean algorithm to compute $\gcd (a,b)$. The normal PQR $\rr$ might have zero divisors and hence might not be a Euclidean domain.
However, from our discussion on polynomial division in \eqref{SimpleDivisionIdentity}, we know that the polynomial division on $\rr$ is the same as that on $R$. Moreover, for the irreducible factor $p_i\in\rat$ in Notation \ref{Notation:Factorials} and $1\le e\le\alpha_i$, if both $a$ and $b$ are divisible by $p_i^e$ in \eqref{SimpleDivisionIdentity}, then so is the remainder $r$. Similarly if both $b$ and $r$ are divisible by $p_i^e$ in \eqref{SimpleDivisionIdentity}, then so is $a$. Thus the computation of $\gcd (a,b)$ for $a,b\in\rr^\ast$ by the Euclidean algorithm on $\rr$ is sound and feasible. It differs from $\gcd_\st (a,b)$ only by a unit factor. \end{remark} Let $\lcm (\io (a),\io (b))$ be the least common multiple of $\io (a)$ and $\io (b)$ on $R$. The same applies to $\gcd (\io (a),\io (b))$. For each pair $a,b\in\rr$, let us define: \begin{equation}\label{GcdPQRqr} \gcd\nolimits_\qr (a,b):=\mo(\gcd (\io (a),\io (b)));\quad \lcm\nolimits_\qr (a,b):=\mo(\lcm (\io (a),\io (b))) \end{equation} with the epimorphism $\mo$ and injection $\io$ defined as in \eqref{ProjectionPQR} and \eqref{PQREmbedding} respectively. By Lemma \ref{Lemma:PQRProperties} and \eqref{StdRepsPQR}, it is easy to verify the following relationship between the two definitions in \eqref{GcdPQRStd} and \eqref{GcdPQRqr}: \begin{equation}\label{UniqueGcds} \gcd\nolimits_\qr (a,b)\sim\gcd\nolimits_\st (a,b)\quad\text{and}\quad\lcm\nolimits_\qr (a,b)\sim\lcm\nolimits_\st (a,b). \end{equation} In the identity \eqref{DivisionPQR}, $\mlt_\ipr (r)$ is well-defined since $\kxn$ is a factorial domain and $r\in\kxn$. Therefore we can deduce that for every irreducible polynomial $\ipr\in\kxn$, if $\max\{\mlt_\ipr (f),\mlt_\ipr (r)\}\le\mlt_\ipr (\qr)$, then we have: \begin{equation}\label{InvariantMultiplicity} \mlt_\ipr (f)=\mlt_\ipr (r).
\end{equation} \begin{definition}[Elimination ordering on \text{$\rx$}]\label{Def:EliminationOrderingPQR} \hfill If the variable $x_1\in\rr^\ast$, the \emph{elimination ordering} on $\rx$ is the monomial ordering such that the $\tilde{\bx}$ variables are always larger than the variable $x_1\in\rr^\ast$. That is, $x_1^\alpha\tilde{\bx}^\gamma\succ x_1^\beta\tilde{\bx}^\delta$ if and only if $\tilde{\bx}^\gamma\succ\tilde{\bx}^\delta$ or $\tilde{\bx}^\gamma=\tilde{\bx}^\delta$ and $\alpha>\beta$. We also say that the elimination ordering on $\rx$ is \emph{induced} from the one on $\knx$ in Definition \ref{Def:EliminationOrdering}. \end{definition} \begin{definition}[Term reduction in \text{$\rx$}]\label{Def:TermReductionPQR} \hfill Let $\rr$ be a normal PQR as in Definition \ref{Def:nPQR} and $\succ$ the elimination ordering on $\rx$ as in Definition \ref{Def:EliminationOrderingPQR}. Let the epimorphism $\mo\colon R\rightarrow\rr$ and injection $\io\colon\rr\rightarrow R$ be defined as in \eqref{ProjectionPQR} and \eqref{PQREmbedding} respectively. For $f\in\rqd$ and $g\in\rnd$ with $\lcc (g)\in\rr^\ast$, suppose that $f$ has a term $\icr_\alpha\tilde{\bx}^\alpha$ with $\tilde{\bx}^\alpha\in\supp f\cap\langle\lmc (g)\rangle$. We also define the multipliers $\iur:=\mo (\lcm (l_\alpha,l_g)/l_\alpha)$ and $\lmr:=\mo (\lcm (l_\alpha,l_g)/l_g)$ with $l_\alpha:=\io (\icr_\alpha)$ and $l_g:=\io (\lcc (g))$. We can make a \emph{reduction} of the term $\icr_\alpha\tilde{\bx}^\alpha$ of $f$ by $g$ as follows. \begin{equation}\label{TermReductionPQR} h=\iur f-\frac{\lmr\tilde{\bx}^\alpha}{\lmc (g)}g. \end{equation} We call $h$ the \emph{remainder} of the reduction and $\iur$ the \emph{interim multiplier} on $f$ with respect to $g$. \end{definition} In Definition \ref{Def:TermReductionPQR} we might have $\lcm_\st (\icr_\alpha,\lcc (g))=0$ for $\icr_\alpha,\lcc (g)\in\rr^\ast$ due to the possible existence of zero divisors in $\rr^\ast$.
We postpone addressing this issue until Lemma \ref{Lemma:SPolynRational} \eqref{item:NonzeroMultipliers} after the definition of $S$-polynomials over a normal PQR because in what follows we only consider a special kind of term reduction whose interim multiplier $\iur$ in \eqref{TermReductionPQR} satisfies $\iur\in\rr^\times$. \begin{definition}[Properly reduced polynomial in $\rx$]\label{Def:ProperlyReduced} \hfill Let $\rr$ be a normal PQR as in Definition \ref{Def:nPQR} and $\succ$ the elimination ordering on $\rx$ as in Definition \ref{Def:EliminationOrderingPQR}. A term $\icr_\alpha\tilde{\bx}^\alpha\in\rx$ with the coefficient $\icr_\alpha\in\rr^\ast$ is said to be \emph{properly reducible} with respect to $\tas=\{\Lst f1s\}\subset\rqd$ if there exists an $f_j\in\tas$ such that $\tilde{\bx}^\alpha\in\langle\lmc (f_j)\rangle$ and the interim multiplier $\iur$ with respect to $f_j$ as in \eqref{TermReductionPQR} satisfies $\iur\in\rr^\times$. We say that a polynomial $f\in\rx$ is \emph{properly reduced} with respect to $\tas$ if none of its terms is properly reducible with respect to $\tas$. \end{definition} The condition $\iur\in\rr^\times$ for the properness in Definition \ref{Def:ProperlyReduced} indicates that $\iur=\mo (\lcm (l_\alpha,l_j)/l_\alpha)\in\rr^\times$. Here $l_\alpha:=\io (\icr_\alpha)$ and $l_j:=\io (\icr_j)$ with $\icr_j:=\lcc (f_j)$. Hence we can deduce that $\icr_\alpha\in\pid{\icr_j}\subset\rr$. Combined with the condition $\tilde{\bx}^\alpha\in\langle\lmc (f_j)\rangle$, the properness condition $\iur\in\rr^\times$ is equivalent to the following term divisibility condition: \begin{equation}\label{DivisibleCondProper} \icr_\alpha\tilde{\bx}^\alpha\in\langle\icr_j\cdot\lmc (f_j)\rangle=\langle\ltc (f_j)\rangle\subset\rx.
\end{equation} \begin{theorem}[Proper division in \text{$\rx$}]\label{Thm:ProperReduction} \hfill Let $\rr$ be a normal PQR as in Definition \ref{Def:nPQR} and $\succ$ the elimination ordering on $\rx$ as in Definition \ref{Def:EliminationOrderingPQR}. Suppose that $\tas=\{\Lst f1s\}$ are polynomials in $\rqd$. For every $f\in\rx$, there exist a multiplier $\mr\in\rr^\times$ as well as a remainder $\dr\in\rx$ and quotients $q_j\in\rx$ for $1\le j\le s$ such that: \begin{equation}\label{ProperDivisionExpression} \mr f=\sum_{j=1}^s q_jf_j+\dr, \end{equation} where $\dr$ is properly reduced with respect to $\tas$. Moreover, the polynomials in \eqref{ProperDivisionExpression} satisfy the following condition: \begin{equation}\label{ProperDivisionCond} \lmc (f)=\max\{\max_{1\le j\le s}\{\lmc (q_j)\cdot\lmc (f_j)\},\lmc (\dr)\}. \end{equation} \end{theorem} \begin{proof} The proof amounts to a verbatim repetition of that for Theorem \ref{Thm:PseudoReduction} if we substitute the criterion of being properly reduced for that of being pseudo-reduced. In fact, the polynomial division on a normal PQR $\rr$ as in \eqref{SimpleDivisionIdentity} is the same as that on $R=\kxn$. Moreover, a normal PQR $\rr$ as in Definition \ref{Def:nPQR} is also a Noetherian ring since it is isomorphic to the PQR $\bar R$ in Definition \ref{Def:PQR} that is Noetherian. \end{proof} Please note that the product $\lmc (q_j)\cdot\lmc (f_j)$ in \eqref{ProperDivisionCond} is based upon the term divisibility condition \eqref{DivisibleCondProper}. We also call the proper division in Theorem \ref{Thm:ProperReduction} a \emph{proper reduction} henceforth. We can easily contrive a proper division algorithm based on Theorem \ref{Thm:ProperReduction}. For $f,g\in\rnd$, suppose that $\lcm_\st (\lcc (f),\lcc (g))=0$ due to the existence of zero divisors in $\rr^\ast$.
In this case, if we employed Definition \ref{Def:SPolynomial} for $S$-polynomials directly, and in particular the multiplier $\lmr=\lcm_\qr (\lcc (f),\lcc (g))$, then their $S$-polynomial $S(f,g)$ would equal $0$ since $\lmr=\lcm_\qr (\lcc (f),\lcc (g))=0$ in \eqref{SPolynomialDef} as per \eqref{UniqueGcds}. Hence we need to revise Definition \ref{Def:SPolynomial} as follows. \begin{definition}[$S$-polynomial over a normal PQR $\rr$]\label{Def:SpolynPQR} \hfill Let $\rr$ be a normal PQR as in Definition \ref{Def:nPQR}. Suppose that $f\in\rqd$ and $g\in\rnd$. Let us use $c_f$ and $c_g$ to denote $\lcc (f)$ and $\lcc (g)$ in $\rr^\ast$ respectively. With the epimorphism $\mo\colon R\rightarrow\rr$ and injection $\io\colon\rr\rightarrow R$ defined as in \eqref{ProjectionPQR} and \eqref{PQREmbedding} respectively, we denote $l_f:=\io(c_f)$ and $l_g:=\io(c_g)$. We also define multipliers $\lmr_f:=\mo (\lcm (l_f,l_g)/l_f)$ and $\lmr_g:=\mo (\lcm (l_f,l_g)/l_g)$ as well as the monomial $\clm:=\lcm (\lmc (f),\lmc (g))\in\tM$. Then the following polynomial: \begin{equation}\label{SPolyPQR} S(f,g):=\frac{\lmr_f\clm}{\lmc (f)}f-\frac{\lmr_g\clm}{\lmc (g)}g \end{equation} is called the \emph{$S$-polynomial} of $f$ and $g$ in $\rx$. \end{definition} In particular, when $f\in\rqd$ and $\tg\in\rat$, we can take $\lmc (\tg)=1$ and $c_g=\lcc (\tg)=\tg$ in Definition \ref{Def:SpolynPQR}. If we define $l_\tg:=\io (\tg)$, the definitions for $\lmr_f$ and $\lmr_\tg$ in \eqref{SPolyPQR} are unaltered. Now $\clm=\lmc (f)$ and the $S$-polynomial in \eqref{SPolyPQR} becomes: \begin{equation}\label{SpecialSPolyPQR} S(f,\tg):=\lmr_ff-\lmr_\tg\tg\cdot\lmc (f). \end{equation} \begin{lemma}\label{Lemma:IdentitySpecialSPoly} For $f\in\rqd$ and $\tg\in\rat$, and with the same notations as in \eqref{SpecialSPolyPQR}, let us further denote $\idr:=\gcd (l_f,l_\tg)$.
Then the $S$-polynomial $S(f,\tg)$ in \eqref{SpecialSPolyPQR} satisfies the following identity: \begin{equation}\label{IdentitySpecialSPoly} S(f,\tg)=\mo\Bigl(\frac {l_\tg}\idr\Bigr)(f-\ltc (f))=\lmr_f (f-\ltc (f)) \end{equation} with $\lmr_f=\mo (\lcm (l_f,l_\tg)/l_f)$ being defined as in \eqref{SPolyPQR}. \end{lemma} \begin{proof} It is evident that $\lmr_f=\mo (l_\tg/\idr)$ and $\lmr_\tg=\mo (l_f/\idr)$. Hence from \eqref{SpecialSPolyPQR} follows directly: \begin{equation*} S(f,\tg)=\mo\Bigl(\frac {l_\tg}\idr\Bigr)f-\mo\Bigl(\frac {l_f}\idr\Bigr)\mo (l_\tg)\cdot\lmc (f)=\mo\Bigl(\frac {l_\tg}\idr\Bigr)(f-\ltc (f)) \end{equation*} since $\mo$ is a ring homomorphism and $\mo\circ\io$ is the identity map on $\rr$, so that $\mo (l_f/\idr)\cdot\mo (l_\tg)=\mo (l_\tg/\idr)\cdot\mo (l_f)=\mo (l_\tg/\idr)\cdot c_f$. \end{proof} There is a special kind of $S$-polynomial for $f\in\rqd$ when $c_f=\lcc (f)\in\rat$. \begin{equation}\label{BeheadSPolyPQR} S(f,\qr):=\lnr_ff=\lnr_f (f-\ltc (f)) \end{equation} with $\lnr_f:=\mo (\lcm (l_f,\qr)/l_f)=\mo (\qr/\gcd (l_f,\qr))$. Here $l_f:=\io (c_f)$ as in \eqref{SPolyPQR}. \begin{lemma}\label{Lemma:RelativePrimePQR} For $f,g\in\rqd$, suppose that $\lmc (f)$ and $\lmc (g)$ are relatively prime. With the same notations as in Definition \ref{Def:SpolynPQR}, let us also denote $\idr:=\gcd (l_f,l_g)$. Then their $S$-polynomial in \eqref{SPolyPQR} satisfies: \begin{equation}\label{CoprimeReductionPQR} S(f,g)=\frac{f_1\cdot\ltc (g)-g_1\cdot\ltc (f)}{\mo (\idr)}=\frac{f_1g-g_1f}{\gcd_\qr (\lcc (f),\lcc (g))} \end{equation} with $f_1:=f-\ltc (f)$ and $g_1:=g-\ltc (g)$. Moreover, we have: \begin{equation}\label{LeadTwoMonomialPQR} \max\{\lmc (f_1)\cdot\lmc (g),\lmc (g_1)\cdot\lmc (f)\}\prec\lmc (f)\cdot\lmc (g). \end{equation} \end{lemma} \begin{proof} In the definition \eqref{SPolyPQR} we have $\clm=\lmc (f)\cdot\lmc (g)$. Furthermore, $\lmr_f=\mo (l_g/\idr)$ and $\lmr_g=\mo (l_f/\idr)$. Thus the first equality in \eqref{CoprimeReductionPQR} follows from \eqref{SPolyPQR} as follows.
\begin{equation*} S(f,g)=\mo\Bigl(\frac{l_g}{\idr}\Bigr)\cdot\lmc (g)(f_1+\ltc (f))- \mo\Bigl(\frac{l_f}{\idr}\Bigr)\cdot\lmc (f)(g_1+\ltc (g)), \end{equation*} where we can write $\mo (l_g/\idr)$ as $\mo (l_g)/\mo (\idr)=c_g/\mo (\idr)$, and similarly for $\mo (l_f/\idr)$. The second equality in \eqref{CoprimeReductionPQR} follows from the first one by substituting $g-g_1$ and $f-f_1$ for $\ltc (g)$ and $\ltc (f)$ respectively. The denominator $\mo (\idr)=\gcd_\qr (\lcc (f),\lcc (g))$ is defined as in \eqref{GcdPQRqr}. The inequality \eqref{LeadTwoMonomialPQR} is evident since $\lmc (f_1)\prec\lmc (f)$ and $\lmc (g_1)\prec\lmc (g)$. \end{proof} From Lemma \ref{Lemma:IdentitySpecialSPoly} and Lemma \ref{Lemma:RelativePrimePQR} and based on Theorem \ref{Thm:ProperReduction}, we can easily deduce the following corollary for algorithmic simplifications. \begin{corollary}\label{Cor:PQRCoprime} For $f,g\in\rqd$, suppose that $\lmc (f)$ and $\lmc (g)$ are relatively prime. With the same notations as in Lemma \ref{Lemma:RelativePrimePQR}, if $\mo (\idr)\in\rr^\times$, then their $S$-polynomial $S(f,g)$ as in \eqref{CoprimeReductionPQR} can be properly reduced to $0$ by $f$ and $g$ as in Theorem \ref{Thm:ProperReduction} with the multiplier $\mo (\idr)$ and quotients $f_1$ and $-g_1$. In particular, for $f\in\rqd$ and $\tg\in\rat$, with the same notations as in Lemma \ref{Lemma:IdentitySpecialSPoly}, if $\mo (\idr)=\gcd_\qr (\lcc (f),\tg)\in\rr^\times$, then their $S$-polynomial $S(f,\tg)$ as in \eqref{IdentitySpecialSPoly} can be properly reduced to $0$ by $\tg$ with the multiplier $\mo (\idr)\in\rr^\times$ and quotient $f-\ltc (f)$.
\end{corollary} \begin{definition}[\Lcm\ representation\footnote{Please refer to \cite[P107, Definition 5]{CLO15} for its definition over a field instead.}]\label{Def:LcmRep} \hfill For $\tas=\{\Lst f1s\}\subset\rnd$, we say that an $S$-polynomial $S(f,g)$ has an \emph{\Lcm\ representation} with respect to $\tas$ if there exist $\{\Lst h1s\}\subset\rx$ satisfying: \begin{equation*} S(f,g)=\sum_{j=1}^sh_jf_j \end{equation*} such that the following condition holds: \begin{equation}\label{LcmRepCond} \max_{1\le j\le s}\{\lmc (h_j)\cdot\lmc (f_j)\}\prec\lcm (\lmc (f),\lmc (g)). \end{equation} \end{definition} \begin{remark}\label{Rmk:LCMRep} The \Lcm\ representation is especially suitable for representing $S$-polynomials over rings with zero divisors such as the PQR $\rr$. In particular, the condition \eqref{LeadTwoMonomialPQR} in Lemma \ref{Lemma:RelativePrimePQR} amounts to the condition \eqref{LcmRepCond} for the \Lcm\ representation with respect to $\{f,g\}$ when the multiplier $\mo (\idr)\in\rr^\times$ in \eqref{CoprimeReductionPQR} as in Corollary \ref{Cor:PQRCoprime}. Similarly the identity \eqref{IdentitySpecialSPoly} is also an \Lcm\ representation of $S(f,\tg)$ with respect to $\tg\in\rat$ when the multiplier $\mo (\idr)\in\rr^\times$. \end{remark} For $\tg\in\rat$ and $f\in\rqd$, we shall also use the following relation between \eqref{SPolyPQR} and \eqref{SpecialSPolyPQR}: \begin{equation}\label{DisappearingMonomialPQR} S(f,\tg\cdot\lmc (f))=S(f,\tg). \end{equation} \begin{lemma}\label{Lemma:SPolynRational} \begin{inparaenum}[(i)] \item\label{item:SPolyRational} The $S$-polynomial in \eqref{SPolyPQR} satisfies $\lmc (S(f,g))\prec\clm$. The $S$-polynomials in \eqref{SpecialSPolyPQR} and \eqref{BeheadSPolyPQR} satisfy $\lmc (S(f,\tg))\prec\lmc (f)$ and $\lmc (S(f,\qr))\prec\lmc (f)$ respectively.
\item\label{item:NonzeroMultipliers} The two multipliers $\lmr_f$ and $\lmr_g$ in \eqref{SPolyPQR} and \eqref{SpecialSPolyPQR} are not zero, that is, we always have $\lmr_f,\lmr_g\in\rr^\ast$ even if $\lcm_\qr (\lcc (f),\lcc (g))=0$ with $\lcm_\qr$ defined as in \eqref{GcdPQRqr}. \end{inparaenum} \end{lemma} \begin{proof} To prove the conclusion \eqref{item:SPolyRational}, it suffices to prove that $\lmr_fc_f=\lmr_gc_g$. In particular, we have $c_g=\tg$ in the case of \eqref{SpecialSPolyPQR}. The composition $\mo\circ\io$ of the epimorphism $\mo$ in \eqref{ProjectionPQR} and injection $\io$ in \eqref{PQREmbedding} is the identity map on $\rr$. Hence $c_f=\mo (\io (c_f))=\mo (l_f)$ and we have the following verification: \begin{equation*} \lmr_fc_f=\mo\Bigl(\frac{\lcm (l_f,l_g)}{l_f}\Bigr)\cdot\mo (l_f)=\mo (\lcm (l_f,l_g)). \end{equation*} Similarly we have $\lmr_gc_g=\mo (\lcm (l_f,l_g))$ and thus the conclusion follows. The conclusion for \eqref{BeheadSPolyPQR} is easy to corroborate. The conclusion \eqref{item:NonzeroMultipliers} follows from the identity: \begin{equation}\label{NonzeroMultipliers} \lmr_f=\mo\Bigl(\frac{\lcm (l_f,l_g)}{l_f}\Bigr)=\mo\Bigl(\frac{l_g}{\gcd (l_f,l_g)}\Bigr). \end{equation} In fact, according to the definition of leading coefficients in Notation \ref{Notation:LeadingEntities}, $c_g=\lcc (g)\in\rr^\ast$. Hence $l_g=\io (c_g)\in (\kxn)^\ast$ and $\deg (l_g)<\deg (\qr)$ as in \eqref{ResidueRing}. Thus the multiplier $\lmr_f\in\rr^\ast$ by \eqref{NonzeroMultipliers}. The same holds for $\lmr_g$. \end{proof} A conspicuous difference between the $S$-polynomials in Definition \ref{Def:SpolynPQR} over a normal PQR and those in Definition \ref{Def:SPolynomial} over a PID is that the leading coefficients $\lmr_f\cdot\lcc (f)=\lmr_fc_f$ and $\lmr_g\cdot\lcc (g)=\lmr_gc_g$ in \eqref{SPolyPQR} and \eqref{SpecialSPolyPQR} might be zero due to the possible existence of zero divisors in $\rr^\ast$.
We shall prove that this poses no obstacle to our computations. For $S$-polynomials over a normal PQR $\rr$, Lemma \ref{Lemma:SPolynRational} \eqref{item:SPolyRational} is equivalent to the inequality \eqref{LeadingSMonomials}. For $f,g\in\rnd$, let us define: \begin{equation}\label{LCMstTerm} \lcm\nolimits_\qr (\ltc (f),\ltc (g)):=\lcm\nolimits_\qr (\lcc (f),\lcc (g))\cdot\lcm (\lmc (f),\lmc (g)) \end{equation} with $\lcm_\qr$ being defined as in \eqref{GcdPQRqr}. Let us use the same notations as in Definition \ref{Def:SpolynPQR} for $S$-polynomials. For $f,g\in\rnd$ that are not both in $\rat$, we define the \emph{\Lcm\ multiplier} of $f$ and $g$ as: \begin{equation}\label{CMR} \cmr (g\vert f):=\mo\Bigl(\frac{\lcm (l_f,l_g)}{l_f}\Bigr)\frac{\lcm (\lmc (f),\lmc (g))}{\lmc (f)}=\frac{\lmr_f\clm}{\lmc (f)}. \end{equation} Then the definition for the $S$-polynomial $S(f,g)$ in \eqref{SPolyPQR} can be written as: \begin{equation}\label{LMRRepSPoly} S(f,g)=\cmr (g\vert f)\cdot f-\cmr (f\vert g)\cdot g. \end{equation} \begin{lemma}\label{Lemma:TriangleIdentityPQR} For $f,g,h\in\rnd$ with at most one of them in $\rat$, if $\lcm (\lmc (f),\lmc (g))\in\langle\lmc (h)\rangle$, then we have the following relationship between their $S$-polynomials: \begin{equation}\label{TriangleIdentityPQR} \mr S(f,g)=\frac{\mr\cdot\cmr (g\vert f)}{\cmr (h\vert f)}S(f,h)-\frac{\mr\cdot\cmr (f\vert g)}{\cmr (h\vert g)}S(g,h) \end{equation} with the \Lcm\ multiplier $\cmr$ being defined as in \eqref{CMR}. Here the multiplier $\mr:=\mo (l_h/\idr)\in\rr^\ast$ with $l_h:=\io (\lcc (h))$ and $\idr:=\gcd (\lcm (l_f,l_g),l_h)$. \end{lemma} \begin{proof} As in the conclusion of Lemma \ref{Lemma:SPolynRational} \eqref{item:NonzeroMultipliers}, the denominators $\cmr (h\vert f)$ and $\cmr (h\vert g)$ in \eqref{TriangleIdentityPQR} are nonzero, which is why we use the \Lcm\ multiplier $\cmr$ as in \eqref{CMR} instead of the $\lcm_\qr$ as in \eqref{LCMstTerm}.
The multiplier $\mr:=\mo (l_h/\idr)\in\rr^\ast$ indeed renders the two fractions in \eqref{TriangleIdentityPQR} into terms in $\rx$. The proof is almost identical to that for the multiplier $\mr$ in the identity \eqref{TriangleIdentity} in Lemma \ref{Lemma:TriangleIdentity}. We can corroborate the identity in \eqref{TriangleIdentityPQR} directly from the form of $S$-polynomials in \eqref{LMRRepSPoly} as well as the definition of \Lcm\ multipliers in \eqref{CMR}. \end{proof} Based on the above discussions we now analyze the incompatible part $\Ip (\pel)$ of the pseudo-eliminant $\pel$. Our goal is to determine the corresponding factors of the eliminant $\el$ of the original ideal $I$. Let $\{\ids_i\colon 1\le i\le s\}$ be the composite divisor sets of the incompatible part $\Ip (\pel)$ as in Definition \ref{Def:CompositeDivisor}. For an index $i$ satisfying $1\le i\le s$ and a composite divisor $\sfr^i$ with $\sfr\in\ids_i\subset\kxn$, let us denote $\sfr^i$ as $\qr$ and consider the normal PQR $\rr$ that is isomorphic to the PQR $\bar R=\kxn/\pid\qr=\kxn/\pid{\sfr^i}$ as in Definition \ref{Def:nPQR}. In short, from now on our discussions and computations are over the normal PQR $\rr$ as follows. \begin{equation}\label{kxnPQR} \rr\cong\kxn/\pid\qr,\quad\qr=\sfr^i,\quad\sfr\in\ids_i\subset\kxn. \end{equation} We shall follow the pseudo-eliminant computation in Algorithm \ref{Algo:PseudoEliminant} to compute the eliminant of the ideal $I+\pid\qr=I+\pid{\sfr^i}$, except that we shall compute it over the normal PQR $\rr$. If we extend the ring epimorphism $\mo$ in \eqref{ProjectionPQR} such that it is the identity map on the variables $\tilde{\bx}$, then $\mo$ induces a ring epimorphism from $\knx$ to $\rx$ which we still denote as $\mo$ as follows. \begin{equation}\label{ExtendedProjectionPQR} \mo\colon\knx\rightarrow\rx\colon\quad\mo\Bigl(\sum_{j=1}^s c_j\tilde{\bx}^{\alpha_j}\Bigr):=\sum_{j=1}^s\mo(c_j)\tilde{\bx}^{\alpha_j}.
\end{equation} Please note that when the composite divisor $\qr$ bears the form $x_1-a$ with $a\in K$, the epimorphism $\mo$ in \eqref{ProjectionPQR} becomes $\mo (f)=f(a)\in K$ for $f\in\kxn$. In this case the coefficients $\mo(c_j)$ in \eqref{ExtendedProjectionPQR} become $c_j(a)\in K$. We call the induced epimorphism $\mo$ in \eqref{ExtendedProjectionPQR} a \emph{specialization} associated with $a\in K$. Similarly we can extend the injection $\io$ in \eqref{PQREmbedding} to an injection of $\rx$ into $\knx$ in the way that it is the identity map on the variables $\tilde{\bx}$ as follows. \begin{equation}\label{ExtendedEmbedding} \io\colon\rx\rightarrow\knx\colon\quad\io\Bigl(\sum_{j=1}^s c_j\tilde{\bx}^{\alpha_j}\Bigr):=\sum_{j=1}^s\io(c_j)\tilde{\bx}^{\alpha_j}. \end{equation} Further, it is evident that $\mo\circ\io$ is the identity map on $\rx$. \begin{lemma}\label{Lemma:InitialBasisPQR} Let $\succ$ be an elimination ordering on $\bM$ as in Definition \ref{Def:EliminationOrdering} and $\tas:=\{f_j\colon 1\le j\le s\}\subset\knx\setminus K$ a basis of a zero-dimensional ideal $I$. Suppose that $\qr=\sfr^i$ is a composite divisor of the incompatible part $\Ip (\pel)$ as in Definition \ref{Def:CompositeDivisor} and $\rr$ a normal PQR as in Definition \ref{Def:nPQR}. Then for $\tas\cap\kxn$ and $\tbs:=\tas\setminus\kxn$ as in the Initialization of Algorithm \ref{Algo:PseudoEliminant}, we have $\mo (\tas\cap\kxn)=\{0\}$ and $\mo (\tbs)$ is a basis of $\Iq:=\mo (I)$ under the epimorphism $\mo$ in \eqref{ExtendedProjectionPQR}. \end{lemma} \begin{proof} The construction of the composite divisor set $\ids_i$ in Algorithm \ref{Algo:CompatiblePartPseudoEliminant} indicates that the pseudo-eliminant $\pel$ is divisible by the composite divisor $\qr=\sfr^i$. The computation of $\pel$ in Algorithm \ref{Algo:PseudoEliminant} shows that every element of $\tas\cap\kxn$ is divisible by $\lel$ in the Initialization of Algorithm \ref{Algo:PseudoEliminant} and thus by $\pel$.
Hence $\tas\cap\kxn\subset\pid{\sfr^i}=\pid\qr\subset\kxn$, which yields $\mo (\tas\cap\kxn)=\{0\}$. The conclusion $\Iq=\langle\mo (\tbs)\rangle$ with $\mo (\tbs)\subset\rqd$ then follows readily. \end{proof} In the following Algorithm \ref{Algo:ProperEliminant} that is parallel to Algorithm \ref{Algo:PseudoEliminant}, please note that all the binary operations over the ring $\rr$ in \eqref{kxnPQR}, i.e., the additions and multiplications over $\rr$, are performed according to those defined in \eqref{BinaryOperations}. Based on Lemma \ref{Lemma:InitialBasisPQR}, in what follows let us abuse notation a bit and simply denote $\mo (\tbs)$ as $\tas=\{f_j\colon 1\le j\le s\}\subset\rqd$. \begin{algorithm}[Proper eliminant and proper basis over a normal PQR $\rr$]\label{Algo:ProperEliminant} \hfil Input: A finite polynomial set $\tas\subset\rqd$. Output: A proper eliminant $\ee\in\rr$ and proper basis $\prb\subset\rqd$. Initialization: A temporary set $\mathfrak{S}:=\emptyset$ in $\rx\setminus\rr$ for $S$-polynomials; a temporary $\et\in\rr$ as $\et:=0$. For each pair $\tf,\tg\in\tas$ with $\tf\ne\tg$, we invoke \Po R as follows to compute their $S$-polynomial $S(\tf,\tg)$. \Po R: \leftskip=5mm \begin{itshape} If $\lmc (\tf)$ and $\lmc (\tg)$ are relatively prime, we compute the multiplier $\mo (\idr):=\gcd_\qr (\lcc (\tf),\lcc (\tg))$ as in \eqref{CoprimeReductionPQR}. If $\mo (\idr)\in\rat$, we compute and then add the $S$-polynomial $S(\tf,\tg)$ into the set $\mathfrak{S}$. If $\mo (\idr)\in\rr^\times$ as in Corollary \ref{Cor:PQRCoprime}, we disregard $S(\tf,\tg)$. If there exists an $h\in\tas\setminus\{\tf,\tg\}$ such that $\lcm (\lmc (\tf),\lmc (\tg))\in\langle\lmc (h)\rangle$, and the triangular identity \eqref{TriangleIdentityPQR} has not been applied to the same triplet $\{\tf,\tg,h\}$ before, we compute the multiplier $\mr$ as in \eqref{TriangleIdentityPQR}.
If $\mr\in\rat$, we compute and then add the $S$-polynomial $S(\tf,\tg)$ into the set $\mathfrak{S}$. If $\mr\in\rr^\times$, we disregard $S(\tf,\tg)$. If neither of the above two cases is true, we compute their $S$-polynomial $S(\tf,\tg)$ as in \eqref{SPolyPQR}. Then we add $S(\tf,\tg)$ into the set $\mathfrak{S}$. \end{itshape} End of $\mathcal{R}$ \leftskip=0mm We recursively repeat \Po P as follows for proper reductions of all the $S$-polynomials in $\mathfrak{S}$. \Po P: \leftskip=5mm \begin{itshape} For an $S\in\mathfrak{S}$, we invoke Theorem \ref{Thm:ProperReduction} to make a proper reduction of $S$ by $\tas$. If the remainder $\dr\in\rqd$, we add $\dr$ into $\tas$. For every $\tf\in\tas\setminus\{\dr\}$, we invoke \Po R to compute the $S$-polynomial $S(\tf,\dr)$. If the remainder $\dr\in\rat$ and $\et=0$, we redefine $\et:=\mo(\gcd (\io (\dr),\qr))$ with $\mo$ and $\io$ as in \eqref{ProjectionPQR} and \eqref{PQREmbedding} respectively. If the remainder $\dr\in\rat$ and $\et\in\rr^\ast$, we compute $\idr=\gcd_\qr (\dr,\et)$ as in \eqref{GcdPQRqr}. If $\idr$ is not an associate of $\et$, we redefine $\et:=\idr$. If the remainder $\dr\in\rr^\times$, we halt the algorithm and output $\ee=1$. Then we delete $S$ from $\mathfrak{S}$. \end{itshape} End of $\mathcal{P}$ \leftskip=0mm Next we recursively repeat \Po Q as follows for proper reductions of the special kinds of $S$-polynomials in \eqref{SpecialSPolyPQR} and \eqref{BeheadSPolyPQR}. \Po Q: \leftskip=5mm \begin{itshape} If $\mathfrak{S}=\emptyset$ and $\et=0$, then for every $\tf\in\tas$ with $\lcc (\tf)\in\rat$, we compute the $S$-polynomial $S(\tf,\qr)$ as in \eqref{BeheadSPolyPQR} and add it into $\mathfrak{S}$ if this has not been done for $\tf$ in a previous step. Then we recursively repeat \Po P.
If $\mathfrak{S}=\emptyset$ and $\et\in\rr^\ast$, then for every $\tf\in\tas$ with $\lcc (\tf)\in\rat$, if $\mo (\idr):=\gcd_\qr (\lcc (\tf),\et)\in\rat$ as in Corollary \ref{Cor:PQRCoprime}, we compute the $S$-polynomial $S(\tf,\et)$ as in \eqref{IdentitySpecialSPoly} and add it into $\mathfrak{S}$ unless an $S$-polynomial equal to $uS(\tf,\et)$ with $u\in\rr^\times$ has been added into $\mathfrak{S}$ in a previous step. Then we recursively repeat \Po P. \end{itshape} End of $\mathcal{Q}$ \leftskip=0mm Finally we define $\ee:=\et$ and $\prb:=\tas$ respectively. We output $\ee$ and $\prb$. \end{algorithm} \begin{remark} In \Po Q of Algorithm \ref{Algo:ProperEliminant}, the condition $\lcc (\tf)\in\rat$ in the case of $\et\in\rr^\ast$ is necessary because if $\lcc (\tf)\in\rr^\times$, we would have $\idr:=\gcd_\qr (\lcc (\tf),\et)\in\rr^\times$ as in Corollary \ref{Cor:PQRCoprime}. \end{remark} \begin{definition}[Proper eliminant $\ee$; proper basis $\prb$; modular eliminant $\mel$]\label{Def:ProperEliminant} \hfill We call the standard representation $\ee^\st$ as in \eqref{StdRepsPQR} of the univariate polynomial $\ee\in\Iq\cap\rr$ obtained in Algorithm \ref{Algo:ProperEliminant}, whether it is zero or not, a \emph{proper} eliminant of the ideal $\Iq$. In what follows we shall simply denote $\ee:=\ee^\st$ unless a distinction is necessary in the context. We also call the final polynomial set $\prb$ obtained in Algorithm \ref{Algo:ProperEliminant} a \emph{proper} basis of $\Iq$. Let $\el$ be the eliminant of a zero-dimensional ideal $I$ and $\qr$ a composite divisor as in \eqref{kxnPQR}. Suppose that $\mo$ is the epimorphism as in \eqref{ExtendedProjectionPQR}. Then we define $\mel:=\mo (\el)$ as the \emph{modular} eliminant of $\Iq=\mo (I)$. \end{definition} \begin{lemma}\label{Lemma:ProperEliminant} Let $\mel$ and $\ee$ be the modular and proper eliminants of the ideal $\Iq$ respectively as in Definition \ref{Def:ProperEliminant}.
Then the following conclusions hold. \begin{enumerate}[(a)] \item\label{item:EliminantAndItsPart} If the modular eliminant $\mel\in\rr^\ast$, then the eliminant $\el$ is divisible by $\io (\mel^\st)$ with $\io$ being the injection as in \eqref{PQREmbedding} and $\mel^\st$ the standard representation of $\mel$ in $\rr$ as in \eqref{StdRepsPQR}. And we have $\mlt_\ipr (\el)=\mlt_\ipr (\mel^\st)$ for every irreducible factor $\ipr$ of the composite divisor $\qr=\sfr^i$. If $\mel=0$, then $\el$ is divisible by $\qr$ and we have $\mlt_\ipr (\el)=\mlt_\ipr (\qr)$ for every irreducible factor $\ipr$ of $\qr$. \item\label{item:ProperEliminant} The epimorphism $\mo$ as in \eqref{ExtendedProjectionPQR} is also an epimorphism from $I\cap\kxn$ to $\Iq\cap\rr$. Moreover, the proper and modular eliminants $\ee$ and $\mel$ of $\Iq$ satisfy $\ee\in\pid\mel=\Iq\cap\rr$. In particular, we have $\ee=0$ if $\mel=0$. \item\label{item:SPolynReductionPQR} For each pair $f\ne g$ in the polynomial set $\prb\cup\{\ee\}$ with $\ee\in\rr^\ast$ and $\prb$ being the proper basis of $\Iq$, the proper reduction of their $S$-polynomial $S(f,g)$ by $\prb$ yields a remainder $\dr\in\pid\ee\subset\pid\mel\subset\rr$ including the special case of $\dr=0$. The same holds for the polynomial set $\prb\cup\{\qr\}$ when $\ee=0$. \item\label{item:ProperEliminantTermination} Algorithm \ref{Algo:ProperEliminant} terminates in finite steps. \end{enumerate} \end{lemma} \begin{proof} Let us first prove \eqref{item:EliminantAndItsPart}. When $\mel\in\rr^\ast$, by Lemma \ref{Lemma:PQRProperties} as well as the definition of the standard representation $\mel^\st$ of $\mel$ as in \eqref{StdRepsPQR}, we know that $\mel=u\mel^\st$ with $u\in\rr^\times$ being a unit. Moreover, for every irreducible factor $\ipr$ of $\qr$, we have $0\le\mlt_\ipr (\mel^\st)\le\mlt_\ipr (\qr)$ as per Lemma \ref{Lemma:PQRProperties} \eqref{item:StandardRep} and \eqref{StdRepsPQR}. 
As per Lemma \ref{Lemma:PseudoEliminantTerminate} \eqref{item:EliminantDivisibility}, the pseudo-eliminant $\pel$ is divisible by $\el$. By Definition \ref{Def:CompositeDivisor}, the composite divisor $\qr=\sfr^i$ satisfies $\mlt_\ipr (\qr)=i=\mlt_\ipr (\pel)$ for every irreducible factor $\ipr$ of $\qr$. Hence we obtain the following inequality: \begin{equation}\label{MinimumEliMultiplicity} 0<\mlt_\ipr (\el)\le\mlt_\ipr (\qr)=i \end{equation} for every irreducible factor $\ipr$ of $\qr$. Based on the division identity $\el=h\qr+\io (\mel)=h\qr+\io (u\mel^\st)$ that is parallel to \eqref{DivisionPQR}, we can deduce that $\mlt_\ipr (\mel^\st)=\mlt_\ipr (\io (\mel^\st))=\mlt_\ipr (\el)$ for every irreducible factor $\ipr$ of $\qr$, which is similar to the deduction of \eqref{InvariantMultiplicity}. Thus the eliminant $\el$ is divisible by $\io (\mel^\st)$ as in the conclusion \eqref{item:EliminantAndItsPart} due to the arbitrariness of the irreducible factor $\ipr$ of $\qr$. When the modular eliminant $\mel=0$, the divisibility of $\el$ by $\qr$ can be readily deduced from the definition of the epimorphism $\mo$ in \eqref{ProjectionPQR}. Then the equality $\mlt_\ipr (\el)=\mlt_\ipr (\qr)$ for every irreducible factor $\ipr$ of $\qr$ can be deduced from \eqref{MinimumEliMultiplicity}. Next let us prove \eqref{item:ProperEliminant}. For every $r\in\Iq\cap\rr$, there exists $f\in I$ such that $\mo (f)=r$. Then $f$ can be written as $f=g\qr+\io (r)$ with $\io (r)\in\kxn$ and $g\in\knx$. Let us denote $\idr:=\gcd (\el,\qr)\in\kxn$. It is evident that we have $g\qr\el/\idr\in\langle\el\rangle\subset I$ and hence $(f-g\qr)\el/\idr=\el\cdot\io (r)/\idr \in\Il$. Moreover, $\mo (\el\cdot\io (r)/\idr)=r\mo (\el/\idr)$ with $\mo (\el/\idr)\in\rr^\times$ by Lemma \ref{Lemma:PQRProperties} \eqref{item:SimpleCoprime} since $\el/\idr$ is relatively prime to $\qr$. Thus $\mo\colon\Il\longrightarrow\Iq\cap\rr$ is an epimorphism.
As a result, we have $\Iq\cap\rr=\pid\mel$ based on $\Il=\pid\el$ as per Definition \ref{Def:Eliminant}. Then the conclusion \eqref{item:ProperEliminant} readily follows from the fact that $\ee\in\Iq\cap\rr$. The proofs for the conclusions \eqref{item:SPolynReductionPQR} and \eqref{item:ProperEliminantTermination} are almost verbatim repetitions of those for Lemma \ref{Lemma:PseudoEliminantTerminate} \eqref{item:AllSPolynRemainder} and \eqref{item:PseudoEliminantTerminate}. In particular, the argument for \eqref{item:ProperEliminantTermination} is based on the fact that $\rx$ is also a Noetherian ring. In fact, the normal PQR $\rr$ in Definition \ref{Def:nPQR} is isomorphic to the Noetherian PQR $\bar R$ in Definition \ref{Def:PQR}. \end{proof} \begin{corollary} If the proper eliminant $\ee\in\rr^\ast$, then the eliminant $\el$ is not divisible by the composite divisor $\qr=\sfr^i$ of the incompatible part $\Ip (\pel)$. Moreover, if the proper eliminant $\ee\in\rr^\times$, then the eliminant $\el$ is relatively prime to the composite divisor $\qr=\sfr^i$. \end{corollary} \begin{proof} If the eliminant $\el$ is divisible by the composite divisor $\qr$, then the modular eliminant $\mel=\mo (\el)=0$. By Lemma \ref{Lemma:ProperEliminant} \eqref{item:ProperEliminant} we can deduce that $\ee=0$, contradicting $\ee\in\rr^\ast$. If the proper eliminant $\ee\in\rr^\times$, there exists $b\in\rr^\times$ such that $1=b\ee\in\pid\mel$ since we have $\ee\in\pid\mel$ by Lemma \ref{Lemma:ProperEliminant} \eqref{item:ProperEliminant}. Hence the modular eliminant $\mel=\mo (\el)\in\rr^\times$, from which we can deduce that the eliminant $\el$ is relatively prime to the composite divisor $\qr=\sfr^i$ by Lemma \ref{Lemma:PQRProperties} \eqref{item:SimpleCoprime}. \end{proof} In what follows let us exclude the trivial case when the proper eliminant $\ee\in\rr^\times$. That is, let us assume that the eliminant $\el$ is not relatively prime to the composite divisor $\qr=\sfr^i$. 
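To make the multipliers of Definition \ref{Def:SpolynPQR} concrete, the following minimal computational sketch (ours, not part of the text; it uses the sympy library with ad hoc data) works over a normal PQR $\rr\cong K[x_1]/\pid\qr$ in one variable. The coefficients are chosen so that $\lcm_\qr (\lcc (f),\lcc (g))=0$, while both multipliers $\lmr_f=\mo (\lcm (l_f,l_g)/l_f)$ and $\lmr_g=\mo (\lcm (l_f,l_g)/l_g)$ remain nonzero, as asserted in Lemma \ref{Lemma:SPolynRational} \eqref{item:NonzeroMultipliers}.

```python
# Illustrative sketch (hypothetical data, not the paper's implementation).
# We take the squarefree divisor r = x(x+1) and the composite divisor
# q = r^2, so R ~ Q[x]/<q> is a normal PQR with zero divisors.
from sympy import symbols, Poly, lcm, rem

x = symbols('x')
r = Poly(x*(x + 1), x)           # a squarefree polynomial r
q = r**2                         # composite divisor q = r^i with i = 2
mo = lambda h: rem(h, q)         # the epimorphism pi: K[x] -> K[x]/<q>

l_f = Poly(x**2*(x + 1), x)      # l_f = iota(c_f), with deg l_f < deg q
l_g = Poly(x*(x + 1)**2, x)      # l_g = iota(c_g), with deg l_g < deg q

L = lcm(l_f, l_g)                # here lcm(l_f, l_g) = q
lam_f = mo(L.exquo(l_f))         # lambda_f = pi(lcm/l_f) = x + 1, nonzero
lam_g = mo(L.exquo(l_g))         # lambda_g = pi(lcm/l_g) = x, nonzero

# lcm_q(c_f, c_g) = pi(lcm(l_f, l_g)) vanishes, yet lambda_f, lambda_g do not:
print(mo(L).is_zero, lam_f.is_zero, lam_g.is_zero)
# The products lambda_f*c_f and lambda_g*c_g both equal pi(lcm(l_f, l_g)) = 0:
print(mo(lam_f * mo(l_f)), mo(lam_g * mo(l_g)))
```

In accordance with the proof of Lemma \ref{Lemma:SPolynRational}, both products $\lmr_fc_f$ and $\lmr_gc_g$ equal $\mo (\lcm (l_f,l_g))$, which vanishes here; this is precisely the situation where the leading coefficients of the $S$-polynomial are zero divisors without hindering the computation.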
\begin{lemma}\label{Lemma:nPQRSyzygy} Let $\tas=\{f_j\colon 1\le j\le s\}\subset\rqd$ be a polynomial set over a normal PQR $\rr$ as in \eqref{kxnPQR}. Suppose that for $1\le j\le s$, each $f_j$ has the same leading monomial $\lmc (f_j)=\tilde{\bx}^\alpha\in\tM$. \begin{inparaenum}[(a)] \item\label{item:SPolynomialExpansionPQR} If $f=\sum_{j=1}^s f_j$ satisfies $\lmc (f)\prec\tilde{\bx}^\alpha$, then there exist multipliers $\ibr,\ibr_j\in\rr^\ast$ for $1\le j<s$ such that \begin{equation}\label{SPolynomialExpansionPQR} \ibr f=\sum_{1\le j<s}\ibr_jS(f_j,f_s) \end{equation} with the $S$-polynomial $S(f_j,f_s)$ being defined as in \eqref{SPolyPQR}. \item\label{item:NondivisibilityPQR} For every irreducible factor $\ipr$ of the composite divisor $\qr$, we can always relabel the subscripts of the polynomial set $\tas=\{f_j\colon 1\le j\le s\}$ such that the multiplier $\ibr\in\rr^\ast$ in \eqref{SPolynomialExpansionPQR} is not divisible by $\ipr$. \end{inparaenum} \end{lemma} \begin{proof} The following meticulous discussions are to ensure that our computations and manipulations are in the algebra $\rx$. \begin{inparaenum}[(a)] \item Let us denote $\icr_j:=\lcc (f_j)\in\rr^\ast$ and $l_j:=\io (\icr_j)$ for $1\le j\le s$. We also define multipliers $\lmr_j:=\lcm (l_j,l_s)/l_j$ and $\lnr_j:=\lcm (l_s,l_j)/l_s$ in $R=\kxn$ for $1\le j<s$. As in the proof of Lemma \ref{Lemma:SPolynRational} \eqref{item:NonzeroMultipliers}, we can substitute $\lmr_j$ by $l_s/\gcd (l_j,l_s)$ for $1\le j<s$, which leads to the following identities since $\mo$ is a homomorphism: \begin{equation*} \mo (\lmr_j)\cdot\mo (\gcd (l_j,l_s))=\mo (l_s/\gcd (l_j,l_s))\cdot\mo (\gcd (l_j,l_s))=\mo (l_s)=\icr_s\in\rr^\ast. \end{equation*} Hence we have $\mo (\lmr_j)\in\rr^\ast$ for $1\le j<s$. Similarly we also have $\mo (\lnr_j)\in\rr^\ast$ for $1\le j<s$. Let us define a multiplier $\ar:=\lcm_{1\le j<s}(\lmr_j)\in\kxn$ and $\ibr:=\mo (\ar)\in\rr$.
Consider the following identities of univariate polynomials that are not difficult to verify with $\dr:=\gcd_{1\le j\le s}(l_j)$: \begin{equation}\label{ExerciseIdentity} \ar=\lcm\limits_{1\le j<s}(\lmr_j) =\lcm\limits_{1\le j<s}\Bigl(\frac{l_s}{\gcd (l_j,l_s)}\Bigr) =\frac{l_s}{\gcd_{1\le j<s}(\gcd (l_j,l_s))} =\frac{l_s}\dr. \end{equation} Based on \eqref{ExerciseIdentity}, we can infer that $\mo (\dr)\ibr=\mo (\dr)\mo (l_s/\dr)=\mo (l_s)=\icr_s\in\rr^\ast$. Hence we have $\ibr\in\rr^\ast$. Similarly to the above, with $\ar_j:=\ar/\lmr_j\in\kxn$ and $\ibr_j:=\mo (\ar_j)$ for $1\le j<s$, we can deduce that $\ibr_j\in\rr^\ast$ from $\ibr_j\cdot\mo (\lmr_j)=\ibr\in\rr^\ast$. With the $S$-polynomials $S(f_j,f_s)$ defined in \eqref{SPolyPQR}, we have the following equalities by denoting $\iur_j:=\mo (\lmr_j)\in\rr^\ast$ and $\ivr_j:=\mo (\lnr_j)\in\rr^\ast$ for $1\le j<s$: \begin{align}\notag \sum_{1\le j<s}\ibr_jS(f_j,f_s) &=\sum_{1\le j<s}\ibr_j(\iur_jf_j-\ivr_jf_s)=\ibr\sum_{1\le j<s}f_j-f_s\sum_{1\le j<s}\mo\Bigl(\frac{\ar\lnr_j}{\lmr_j}\Bigr)\\\label{PQRIdentityLastSum} &=\ibr\sum_{1\le j<s}f_j-f_s\sum_{1\le j<s}\mo\Bigl(\frac{l_j}{\dr}\Bigr), \end{align} where the final equality \eqref{PQRIdentityLastSum} is based on the identity $\lnr_j/\lmr_j=l_j/l_s$ for $1\le j<s$ as well as the identity \eqref{ExerciseIdentity}. Moreover, it is easy to verify that $\mo (l_j/\dr)=\mo (l_j)/\mo (\dr)=\icr_j/\mo (\dr)\in\rr^\ast$ in \eqref{PQRIdentityLastSum}. Hence the equality \eqref{PQRIdentityLastSum} now becomes: \begin{equation}\label{MiddleEquality} \sum_{1\le j<s}\ibr_jS(f_j,f_s)=\ibr\sum_{1\le j<s}f_j-\frac{f_s}{\mo (\dr)}\sum_{1\le j<s}\icr_j. \end{equation} The given condition $\lmc (f)\prec\tilde{\bx}^\alpha$ amounts to $0=\sum_{j=1}^s\lcc (f_j)=\sum_{j=1}^s\icr_j$, from which we can deduce that $\sum_{1\le j<s}\icr_j=-\icr_s$. 
We plug this into \eqref{MiddleEquality} to obtain: \begin{equation}\label{SyzygyCoeffCond} \sum_{1\le j<s}\ibr_jS(f_j,f_s)=\ibr\sum_{1\le j<s}f_j+\frac{\icr_sf_s}{\mo (\dr)}. \end{equation} Finally as per \eqref{ExerciseIdentity} we can easily deduce that $\mo (\dr)=\mo (l_s)/\mo (\ar)=\icr_s/\ibr\in\rr^\ast$. This combined with \eqref{SyzygyCoeffCond} yields the conclusion \eqref{SPolynomialExpansionPQR}. \item The proof depends on a substitution of $\lmr_j=\lcm (l_j,l_s)/l_j$ by $l_s/\gcd (l_j,l_s)$ for $1\le j<s$. More specifically, given an irreducible factor $\ipr$ of the composite divisor $\qr$, we can always change the order of the elements in the polynomial set $\tas=\{f_j\colon 1\le j\le s\}$ so that $\mlt_\ipr (l_s)=\min_{1\le j\le s}\{\mlt_\ipr (l_j)\}$. Hence $\mlt_\ipr (\lmr_j)=\mlt_\ipr (l_s/\gcd (l_j,l_s))=0$ for $1\le j<s$. And $\mlt_\ipr (\ar)=0$ since $\ar:=\lcm_{1\le j<s}(\lmr_j)$. Similar to the deduction of \eqref{InvariantMultiplicity}, from the division of $\ar$ by $\qr$ as $\ar=h\qr+\mo (\ar)=h\qr+\ibr$ we can deduce that $\mlt_\ipr (\ibr)=\mlt_\ipr (\ar)=0$. \end{inparaenum} \end{proof} The proof of the following theorem is similar to that of Theorem \ref{Thm:CompatiblePart}. However, in the proof we encounter a new phenomenon: the leading coefficients of the polynomials in $\rx$ might be zero divisors in $\rr^\ast$. A meticulous elaboration of the proof remedies the situation, albeit at the price of some repetition. \begin{theorem}\label{Thm:IncompatiblePart} Suppose that $\el$ is the eliminant of a zero-dimensional ideal $I$ in $\knx$ over a perfect field $K$ and $\pel$ a pseudo-eliminant of $I$. Let $\qr=\sfr^i$ be a composite divisor of the incompatible part $\Ip (\pel)$ and $\rr$ the normal PQR as in \eqref{kxnPQR}. Let $\ee$ and $\mel$ in $\rr$ denote the proper and modular eliminants respectively as in Definition \ref{Def:ProperEliminant}.
\begin{inparaenum}[(a)] \item\label{item:ProperZeroRemainder} If the proper eliminant $\ee=0$, then the eliminant $\el$ is divisible by the composite divisor $\qr=\sfr^i$ of the incompatible part $\Ip (\pel)$ and hence the modular eliminant $\mel=0$. For every irreducible factor $\ipr$ of the composite divisor $\qr$, we have $\mlt_\ipr (\el)=i$. \item\label{item:ProperNonZeroRemainder} If the proper eliminant $\ee\in\rr^\ast$, then the eliminant $\el$ is divisible by $\io (\ee)$ with $\io$ being the injection as in \eqref{PQREmbedding} and $\ee=\ee^\st$ as in Definition \ref{Def:ProperEliminant}. Hence the modular eliminant $\mel^\st=\ee$. For every irreducible factor $\ipr$ of the composite divisor $\qr$, we have $\mlt_\ipr (\el)=\mlt_\ipr (\ee)$. \end{inparaenum} \end{theorem} \begin{proof} Let us fix an irreducible factor $\ipr$ of the composite divisor $\qr=\sfr^i$. If $\tas$ is the originally given basis of the ideal $I$ in $\knx$, then $\mo (\tas)$ is a basis of the ideal $\Iq=\mo (I)$ in $\rr$ under the epimorphism $\mo$ in \eqref{ExtendedProjectionPQR} according to Lemma \ref{Lemma:InitialBasisPQR}. For simplicity let us abuse notation a bit and still denote $\mo (\tas)$ as $\tas=\{f_j\colon 1\le j\le s\}\subset\rqd$. Then there exist $h_j\in\rx$ for $1\le j\le s$ such that the modular eliminant $\mel=\mo (\el)\in\rr$ can be written as: \begin{equation}\label{InitialRepresentationPQR} \mel=\sum_{j=1}^s h_jf_j. \end{equation} Suppose that $\max_{1\le j\le s}\{\lmc (h_j)\cdot\lmc (f_j)\}=\tilde{\bx}^\beta$. Similar to \eqref{RevisedRepresentation}, we collect and rename the set $\{f_j\colon\lmc (h_jf_j)=\tilde{\bx}^\beta,1\le j\le s\}$ as a new set $\pb_t:=\{g_j\colon 1\le j\le t\}$. Let us first assume that $\pb_t\ne\emptyset$. We shall address the special case of $\pb_t=\emptyset$ shortly afterwards. Of course the subscripts of the functions $\{h_j\}$ are relabelled accordingly.
It is easy to see that for $g_j\in\pb_t$ with $1\le j\le t$, we have $\lcc (h_j)\cdot\lcc (g_j)\in\rr^\ast$. In this way \eqref{InitialRepresentationPQR} can be written as: \begin{equation}\label{RevisedRepresentationPQR} \mel=\sum_{j=1}^th_jg_j+\sum_{f_j\in\tas\setminus\pb_t}h_jf_j, \end{equation} where the products $h_jf_j$ are those in \eqref{InitialRepresentationPQR} with $f_j\in\tas\setminus\pb_t$ for $1\le j\le s$. With $\ltc (h_j):=\icr_j\tilde{\bx}^{\alpha_j}$ and $\lcc (h_j)=\icr_j\in\rr^\ast$ for $1\le j\le t$, it suffices to study the following polynomial that is a summand of \eqref{RevisedRepresentationPQR}: \begin{equation}\label{InitialTermRepsPQR} \tg:=\sum_{j=1}^t\ltc (h_j)\cdot g_j=\sum_{j=1}^t\icr_j\tilde{\bx}^{\alpha_j}g_j. \end{equation} From $\lmc (\mel)=1\prec\tilde{\bx}^\beta$ in \eqref{RevisedRepresentationPQR} we can deduce that $\lmc (\tg)\prec\tilde{\bx}^\beta$ in \eqref{InitialTermRepsPQR}. As per Lemma \ref{Lemma:nPQRSyzygy} \eqref{item:SPolynomialExpansionPQR}, there exist multipliers $\ibr,\ibr_j\in\rr^\ast$ for $1\le j<t$ such that: \begin{equation}\label{LeadingMonomialSyzygyPQR} \ibr\tg=\sum_{1\le j<t}\ibr_jS(\icr_j\tilde{\bx}^{\alpha_j}g_j,\icr_t\tilde{\bx}^{\alpha_t}g_t) \end{equation} with $g_j\in\pb_t$ for $1\le j\le t$. Recall that we have fixed an irreducible factor $\ipr$ of the composite divisor $\qr=\sfr^i$ at the beginning of the proof. According to Lemma \ref{Lemma:nPQRSyzygy} \eqref{item:NondivisibilityPQR}, we can reorder the elements in $\pb_t$ such that the multiplier $\ibr$ in \eqref{LeadingMonomialSyzygyPQR} is not divisible by the irreducible factor $\ipr$, i.e., $\mlt_\ipr (\ibr)=0$. In what follows let us elaborate on the simplifications of the $S$-polynomials in \eqref{LeadingMonomialSyzygyPQR} that are similar to \eqref{SPolynomialRelation} and \eqref{SpecialSPolyRelation}. The purpose of the following meticulous discussions is to ensure that all our manipulations in the proof are in the ring $\rx$.
For $1\le j\le t$ we have $\icr_j\cdot\lcc (g_j)=\lcc (h_j)\cdot\lcc (g_j)\in\rr^\ast$ by the definition of $\pb_t$ in \eqref{RevisedRepresentationPQR}. Let us denote $\plc_j:=\icr_j\cdot\lcc (g_j)\in\rr^\ast$ and $L_j:=\io (\plc_j)\in (\kxn)^\ast$ for $1\le j\le t$. Let us also define multipliers $\iur_j:=\mo (\lcm (L_j,L_t)/L_j)$ and $\ivr_j:=\mo (\lcm (L_t,L_j)/L_t)$ in $\rr$ for $1\le j<t$. By the definition in \eqref{SPolyPQR}, we have for $1\le j<t$: \begin{equation}\label{SyntheticSPolynomial} S(\icr_j\tilde{\bx}^{\alpha_j}g_j,\icr_t\tilde{\bx}^{\alpha_t}g_t) =\frac{\iur_j\icr_j\tilde{\bx}^\beta}{\lmc (g_j)}g_j-\frac{\ivr_j\icr_t\tilde{\bx}^\beta}{\lmc (g_t)}g_t. \end{equation} By Lemma \ref{Lemma:SPolynRational} \eqref{item:NonzeroMultipliers}, we have $\iur_j,\ivr_j\in\rr^\ast$ for $1\le j<t$. For $1\le j\le t$ let us denote $\ar_j:=\lcc (g_j)\in\rr^\ast$ and $l_j:=\io (\ar_j)\in (\kxn)^\ast$. The multipliers $\lmr_j:=\mo (\lcm (l_j,l_t)/l_j)$ and $\lnr_j:=\mo (\lcm (l_t,l_j)/l_t)$ are in $\rr^\ast$ for $1\le j<t$ by Lemma \ref{Lemma:SPolynRational} \eqref{item:NonzeroMultipliers}. If we define $\tilde{\bx}^{\gamma_j}:=\lcm (\lmc (g_j),\lmc (g_t))$, then by the definition in \eqref{SPolyPQR} we have for $1\le j<t$: \begin{equation}\label{SimpleSPolynomial} S(g_j,g_t) =\frac{\lmr_j\tilde{\bx}^{\gamma_j}}{\lmc (g_j)}g_j-\frac{\lnr_j\tilde{\bx}^{\gamma_j}}{\lmc (g_t)}g_t. \end{equation} Now let us define the following multipliers for $1\le j<t$: \begin{equation}\label{ConnectMultiplier} \iwr_j:=\mo\Bigl(\frac{\lcm (L_j,L_t)}{\lcm (l_j,l_t)}\Bigr). \end{equation} From our assumption $\plc_j\in\rr^\ast$ we know that $\mlt_\varrho (l_j)\le\mlt_\varrho (L_j)$ for every irreducible factor $\varrho$ of the composite divisor $\qr$. Hence $\lcm (L_j,L_t)$ is divisible by $\lcm (l_j,l_t)$ and as a result, $\iwr_j\in\rr$.
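To fix ideas, here is a purely illustrative univariate instance of the divisibility argument above, with hypothetical data that are not part of the proof: take the composite divisor $\qr=\sfr^2$ with $\sfr$ irreducible, and suppose that $l_j=1$ and $L_j=l_t=L_t=\sfr$, so that $\plc_j=\mo (\sfr)\in\rr^\ast$ and $\mlt_\sfr (l_j)=0\le 1=\mlt_\sfr (L_j)$. Then
\[
\iwr_j=\mo\Bigl(\frac{\lcm (L_j,L_t)}{\lcm (l_j,l_t)}\Bigr)=\mo\Bigl(\frac{\sfr}{\sfr}\Bigr)=1\in\rr.
\]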
We have the following representations of the $S$-polynomials in \eqref{SyntheticSPolynomial} by those in \eqref{SimpleSPolynomial}: \begin{equation}\label{SPolynomialRelationPQR} S(\icr_j\tilde{\bx}^{\alpha_j}g_j,\icr_t\tilde{\bx}^{\alpha_t}g_t)=\iwr_j\tilde{\bx}^{\beta-\gamma_j}S(g_j,g_t) \end{equation} with $g_j\in\pb_t$ for $1\le j\le t$ and $\pb_t$ being the polynomial set as in \eqref{RevisedRepresentationPQR}. In fact, by $\plc_j=\icr_j\cdot\lcc (g_j)=\icr_j\ar_j\in\rr^\ast$ as above and $\mo\circ\io$ being the identity map on $\rr$, we have the following identities between the multipliers in \eqref{SyntheticSPolynomial} and those in \eqref{SimpleSPolynomial}: \begin{equation}\label{NonzeroRelation} \begin{aligned} \iur_j\icr_j&=\frac{\iur_j\plc_j}{\ar_j} =\mo\Bigl(\frac{\lcm (L_j,L_t)}{L_j}\Bigr)\cdot\frac{\mo (L_j)}{\ar_j} =\frac{\mo (\lcm (L_j,L_t))}{\ar_j};\\ \iwr_j\lmr_j&=\mo\Bigl(\frac{\lcm (L_j,L_t)}{\lcm (l_j,l_t)}\Bigr)\cdot\mo\Bigl(\frac{\lcm (l_j,l_t)}{l_j}\Bigr)\cdot\frac{\mo (l_j)}{\ar_j}=\frac{\mo (\lcm (L_j,L_t))}{\ar_j}. \end{aligned} \end{equation} Thus by \eqref{NonzeroRelation} we have $\iwr_j\lmr_j=\iur_j\icr_j$. Similarly we can prove that $\iwr_j\lnr_j=\ivr_j\icr_t$. Hence follows the relationship \eqref{SPolynomialRelationPQR} between the $S$-polynomials. Moreover, this shows that $\iwr_j\in\rr^\ast$ whenever either $\iur_j\icr_j\in\rr^\ast$ or $\ivr_j\icr_t\in\rr^\ast$ in \eqref{SyntheticSPolynomial}. Let $\prb=\{\tg_k\colon 1\le k\le\tau\}\subset\rqd$ be the proper basis of the ideal $\Iq$ obtained in Algorithm \ref{Algo:ProperEliminant} such that the polynomial set $\pb_t$ is a subset of $\prb$. In Algorithm \ref{Algo:ProperEliminant} we have properly reduced every $S$-polynomial $S(g_j,g_t)$ in \eqref{SimpleSPolynomial} by the proper basis $\prb$ for $1\le j<t$, either directly or indirectly by the triangular identity \eqref{TriangleIdentityPQR} in Lemma \ref{Lemma:TriangleIdentityPQR}.
More specifically, according to Theorem \ref{Thm:ProperReduction}, for $1\le j<t$, there exist a multiplier $\mr_j\in\rr^\times$ as well as a remainder $\dr_j\in\rr$ and quotients $q_{jk}\in\rx$ for $1\le k\le\tau$ such that: \begin{equation}\label{SPolynomialReductionPQR} \mr_jS(g_j,g_t)=\sum_{k=1}^\tau q_{jk}\tg_k+\dr_j =\sum_{k=1}^\tau q_{jk}\tg_k+\rhor_j\ee. \end{equation} Here we abuse the notation a bit: $g_j$ denotes an element in $\pb_t$ with $1\le j\le t$ whereas $\tg_k$ denotes a proper basis element in $\prb\supset\pb_t$ with $1\le k\le\tau$. The remainder $\dr_j$ in \eqref{SPolynomialReductionPQR} is a univariate polynomial in $\pid\ee\subset\rr$ according to Lemma \ref{Lemma:ProperEliminant} \eqref{item:SPolynReductionPQR}. Hence in \eqref{SPolynomialReductionPQR} we simply denote $\dr_j:=\rhor_j\ee$ with $\rhor_j\in\rr$. Based on \eqref{SPolynomialRelationPQR} and \eqref{SPolynomialReductionPQR}, we obtain a pseudo-reduction of the $S$-polynomial $S(\icr_j\tilde{\bx}^{\alpha_j}g_j,\icr_t\tilde{\bx}^{\alpha_t}g_t)$ in \eqref{LeadingMonomialSyzygyPQR} as follows for $1\le j<t$. \begin{equation}\label{RealSPolyReductionPQR} \mr_jS(\icr_j\tilde{\bx}^{\alpha_j}g_j,\icr_t\tilde{\bx}^{\alpha_t}g_t) =\iwr_j\tilde{\bx}^{\beta-\gamma_j}\Bigl(\sum_{k=1}^\tau q_{jk}\tg_k+\rhor_j\ee\Bigr). \end{equation} Here we still use the multiplier $\mr_j\in\rr^\times$ in \eqref{SPolynomialReductionPQR} for the pseudo-reduction in \eqref{RealSPolyReductionPQR}. Let $\mr$ denote the product of all the multipliers $\mr_j\in\rr^\times$ in \eqref{RealSPolyReductionPQR} for $1\le j<t$. It is evident that we still have $\mr\in\rr^\times$.
Based on \eqref{LeadingMonomialSyzygyPQR} and the pseudo-reductions of $S$-polynomials in \eqref{RealSPolyReductionPQR}, we obtain the following representation: \begin{equation}\label{TempRepresentPQR} \ibr\mr\tg=\sum_{k=1}^\tau q_k\tg_k+\rhor\ee \end{equation} with $q_k:=\sum_{1\le j<t}\idr_j\tilde{\bx}^{\beta-\gamma_j}q_{jk}$ and $\rhor:=\sum_{1\le j<t}\idr_j\tilde{\bx}^{\beta-\gamma_j}\rhor_j$ if we denote $\idr_j:=\mr\ibr_j\iwr_j/\mr_j\in\rr$ for $1\le j<t$. Moreover, the multiplier $\ibr\mr$ in \eqref{TempRepresentPQR} satisfies $\mlt_\ipr (\ibr\mr)=0$ since $\mlt_\ipr (\ibr)=0$ in \eqref{LeadingMonomialSyzygyPQR} for the fixed irreducible factor $\ipr$ of the composite divisor $\qr$ and $\mr\in\rr^\times$ in \eqref{TempRepresentPQR}. According to \eqref{ProperDivisionCond}, we can deduce that $\lmc (S(g_j,g_t))=\max_{1\le k\le\tau}\{\lmc (q_{jk})\cdot\lmc (\tg_k)\}$ holds for $1\le j<t$ in \eqref{SPolynomialReductionPQR}. The $S$-polynomial in \eqref{SimpleSPolynomial} satisfies \begin{equation}\label{SPolyExpoReduce} \lmc (S(g_j,g_t))\prec\tilde{\bx}^{\gamma_j} \end{equation} for $1\le j<t$ according to Lemma \ref{Lemma:SPolynRational} \eqref{item:SPolyRational}. Hence for $1\le j<t$ and $1\le k\le\tau$ we have the following estimates in \eqref{RealSPolyReductionPQR}: \begin{equation}\label{StrictDecreaseMonoPQR} \tilde{\bx}^{\beta-\gamma_j}\cdot\lmc (q_{jk})\cdot\lmc (\tg_k) \preceq\tilde{\bx}^{\beta-\gamma_j}\cdot\lmc (S(g_j,g_t))\prec\tilde{\bx}^\beta. \end{equation} From the above we can deduce the following inequality in \eqref{TempRepresentPQR}: \begin{equation}\label{DecreaseLeadingMonomialPQR} \max\bigl\{\max_{1\le k\le\tau}\{\lmc (q_k)\cdot\lmc (\tg_k)\},\lmc (\rhor)\bigr\}\prec\tilde{\bx}^\beta.
\end{equation} There is a special kind of $S$-polynomials whose proper reductions as in \eqref{SPolynomialReductionPQR} are performed in \eqref{CoprimeReductionPQR} of Lemma \ref{Lemma:RelativePrimePQR} instead of Algorithm \ref{Algo:ProperEliminant}. This is the case when $\lmc (\tg_j)$ and $\lmc (\tg_t)$ are relatively prime and $\mo (\idr)\in\rr^\times$ as in Corollary \ref{Cor:PQRCoprime}. In this case \eqref{CoprimeReductionPQR} is an \Lcm\ representation, which is defined in Definition \ref{Def:LcmRep} and Remark \ref{Rmk:LCMRep}. The condition \eqref{LeadTwoMonomialPQR} for the \Lcm\ representation amounts to the condition \eqref{SPolyExpoReduce} so that the inequality \eqref{DecreaseLeadingMonomialPQR} is still sound in this special case. With an almost verbatim repetition of the arguments in \eqref{NewEliminantRep} and \eqref{FinalRepresentation}, we can substitute the representation of $\ibr\mr\tg$ in \eqref{TempRepresentPQR} into that of the modular eliminant $\mel$ in \eqref{RevisedRepresentationPQR} with the multiplier $\ibr\mr$ so as to obtain a new representation of $\ibr\mr\mel$ as follows. \begin{equation}\label{FinalRepresentationPQR} \ibr\mr\mel=\sum_{k=1}^\tau\vr_k\tg_k+\vr_0\ee \end{equation} with $\vr_k\in\rx$ for $0\le k\le\tau$. Similar to \eqref{RepresentationLeadingMonomial} and according to \eqref{DecreaseLeadingMonomialPQR} as well as a representation similar to \eqref{NewEliminantRep}, the leading monomials in \eqref{FinalRepresentationPQR} satisfy: \begin{equation}\label{RepLeadingMonomialPQR} \max\bigl\{\max_{1\le k\le\tau}\{\lmc (\vr_k)\cdot\lmc (\tg_k)\},\lmc (\vr_0)\bigr\}\prec\tilde{\bx}^\beta. \end{equation} In summary, the leading monomials in the representation \eqref{FinalRepresentationPQR} strictly decrease from those in the representation \eqref{InitialRepresentationPQR}, up to the same multiplier $\ibr\mr$ as in \eqref{TempRepresentPQR} that satisfies $\mlt_\ipr (\ibr\mr)=0$. 
Now let us consider the case of $\lmc (h_jf_j)\prec\lmc (h_j)\cdot\lmc (f_j)$ in \eqref{InitialRepresentationPQR}, i.e., the case when $\lcc (h_j)\cdot\lcc (f_j)=0$ for every $j$ with $\lmc (h_j)\cdot\lmc (f_j)=\tilde{\bx}^\beta$. In this case the set $\pb_t=\emptyset$ in \eqref{RevisedRepresentationPQR} and the above discussions no longer apply. Instead we reorganize $h_jf_j$ in the way of \eqref{InitialTermRepsPQR} with $\ltc (h_j):=c_j\tilde{\bx}^{\alpha_j}$ and $c_j\in\rr^\ast$ as follows: \begin{equation}\label{ReorganizeLeadTerm} h_jf_j=\ltc (h_j)\cdot f_j+(h_j-\ltc (h_j))\cdot f_j:=c_j\tilde{\bx}^{\alpha_j}f_j+(h_j-\ltc (h_j))\cdot f_j \end{equation} such that $c_j\cdot\lcc (f_j)=0$. With $c_j\cdot\lcc (f_j)=0$ as in \eqref{ReorganizeLeadTerm}, let us first consider the case when the proper eliminant $\ee\in\rr^\ast$. With $\ar_j:=\lcc (f_j)\in\rat$ and $\io$ the injection defined in \eqref{PQREmbedding}, let us denote $l_f:=\io(\ar_j)$ and $l_e:=\io (\ee)$. The $S$-polynomial $S(f_j,\ee)$ defined as in \eqref{SpecialSPolyPQR} satisfies the identity in \eqref{IdentitySpecialSPoly}: \begin{equation}\label{NoHeadSPolyn} S(f_j,\ee)=\mo\Bigl(\frac {l_e}\idr\Bigr)(f_j-\ltc (f_j)) \end{equation} with $\idr:=\gcd (l_f,l_e)$. Let us also denote $l_c:=\io (c_j)$ and $\upsilon:=l_c\idr/l_e$. Then we have $\upsilon\in\kxn$. In fact, we have $\upsilon=l_c\idr/l_e=l_cl_f/\lcm (l_f,l_e)$. From $c_j\cdot\lcc (f_j)=0$ we can infer that $l_cl_f\in\pid\qr\subset\kxn$. Since $\ee=\ee^\st$ as in Definition \ref{Def:ProperEliminant} and the composite divisor $\qr$ is divisible by the proper eliminant $l_e=\io (\ee)$ as per Lemma \ref{Lemma:PQRProperties} \eqref{item:StandardRep}, $l_cl_f$ is divisible by $l_e$ and hence divisible by $\lcm (l_f,l_e)$. Thus $\upsilon\in\kxn$. Moreover, $\mo (\upsilon)\in\rr^\ast$ since $\mo (\upsilon)\cdot\mo (l_e/\idr)=\mo (l_c)=c_j\in\rr^\ast$.
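To fix ideas, here is a purely illustrative univariate instance of the multiplier $\upsilon$, with hypothetical data that are not part of the proof: take the composite divisor $\qr=\sfr^2$ with $\sfr$ irreducible, and suppose that $l_c=l_f=l_e=\sfr$, so that $c_j\cdot\lcc (f_j)=\mo (\sfr^2)=0$ and $\idr=\gcd (\sfr,\sfr)=\sfr$. Then
\[
\upsilon=\frac{l_c\idr}{l_e}=\sfr\in\kxn\quad\text{and}\quad
\mo (\upsilon)\cdot\mo\Bigl(\frac{l_e}{\idr}\Bigr)=\mo (\sfr)\cdot 1=c_j\in\rr^\ast.
\]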
A multiplication of $\mo (\upsilon)\tilde{\bx}^{\alpha_j}$ on both sides of \eqref{NoHeadSPolyn} yields the following intriguing relationship between the polynomial $c_j\tilde{\bx}^{\alpha_j}f_j$ in \eqref{ReorganizeLeadTerm} and the $S$-polynomial $S(f_j,\ee)$ in \eqref{NoHeadSPolyn}: \begin{equation}\label{IntriguingEquality} c_j\tilde{\bx}^{\alpha_j}f_j=c_j\tilde{\bx}^{\alpha_j}(f_j-\ltc (f_j))=\mo (\upsilon)\tilde{\bx}^{\alpha_j}S(f_j,\ee). \end{equation} With $c_j\cdot\lcc (f_j)=0$ as in \eqref{ReorganizeLeadTerm}, if the proper eliminant $\ee=0$, we computed the $S$-polynomial $S(f_j,\qr)$ in \Po Q of Algorithm \ref{Algo:ProperEliminant}. In this case we have $\ar_j=\lcc (f_j)\in\rat$, which is the condition for the definition of the $S$-polynomial $S(f_j,\qr)$ in \eqref{BeheadSPolyPQR}. Thus we have the following relationship instead: \begin{equation}\label{BeheadEquality} c_j\tilde{\bx}^{\alpha_j}f_j=\mo (\eta)\cdot\tilde{\bx}^{\alpha_j}S(f_j,\qr), \end{equation} where the $S$-polynomial $S(f_j,\qr):=\lnr_ff_j$ with $\lnr_f:=\mo (\lcm (l_f,\qr)/l_f)$ as in \eqref{BeheadSPolyPQR}. Here $\eta:=l_cl_f/\lcm (l_f,\qr)$ with the same notations $l_f=\io(\ar_j)$ and $l_c:=\io (c_j)$ as in \eqref{NoHeadSPolyn}. From $c_j\cdot\lcc (f_j)=0$ we can infer that $l_cl_f\in\pid\qr\subset\kxn$ and hence $\eta\in\kxn$. Moreover, $\mo (\eta)\in\rr^\ast$ since $\mo (\eta)\cdot\lnr_f=c_j\in\rr^\ast$. If we have \begin{equation}\label{UnitMultiplierCond} \mo (\idr)=\mo (\gcd (l_f,l_e))=\gcd\nolimits_\qr (\lcc (f_j),\ee)\in\rr^\times \end{equation} in \eqref{NoHeadSPolyn} as in Corollary \ref{Cor:PQRCoprime}, then we already have an \Lcm\ representation in terms of $\ee$ in \eqref{NoHeadSPolyn}. Otherwise we made proper reductions of the $S$-polynomials $S(f_j,\ee)$ in \eqref{IntriguingEquality} in \Po Q of Algorithm \ref{Algo:ProperEliminant} by the proper basis $\prb$. In \Po Q we also made proper reductions of the $S$-polynomials $S(f_j,\qr)$ in \eqref{BeheadEquality} by $\prb$.
They bear the same form as the one in \eqref{SPolynomialReductionPQR} and also lead to strictly decreasing leading monomials as in \eqref{DecreaseLeadingMonomialPQR}. Moreover, each such reduction has a multiplier $\mr_j\in\rr^\times$ like in \eqref{SPolynomialReductionPQR}. Thus these new kinds of $S$-polynomials in \eqref{IntriguingEquality} and \eqref{BeheadEquality} affect neither the soundness of our arguments nor our conclusion. If we define $\tg_0:=\ee$ in \eqref{FinalRepresentationPQR}, then the representation of the modular eliminant $\ibr\mr\mel$ in \eqref{FinalRepresentationPQR} resembles the one in \eqref{InitialRepresentationPQR} except that we have the extra multiplier $\ibr\mr$ that is not divisible by the irreducible factor $\ipr$ of the composite divisor $\qr$. In particular, when the following holds in \eqref{FinalRepresentationPQR}: \[ \lmc (\vr_0\ee)=\max_{1\le k\le\tau}\{\lmc (\vr_k)\cdot\lmc (\tg_k)\}=\max_{1\le k\le\tau}\{\lmc (\vr_k\tg_k)\}:=\tilde{\bx}^\gamma, \] we have $\tg_0=\ee\in\pb_t$ with $\pb_t$ being defined similarly to that in \eqref{RevisedRepresentationPQR} but with $\tilde{\bx}^\beta$ substituted by $\tilde{\bx}^\gamma$ here. Thus in this case there is a new kind of $S$-polynomials as follows besides those in \eqref{LeadingMonomialSyzygyPQR} and \eqref{SPolynomialRelationPQR}. \begin{equation}\label{NewIlkSPolynomial} S(c_k\tilde{\bx}^{\alpha_k}\tg_k,\icr\tilde{\bx}^\gamma\ee) =S(c_k\tilde{\bx}^{\alpha_k}\tg_k,\icr\ee)=\iwr_k\tilde{\bx}^{\alpha_k}S(\tg_k,\ee). \end{equation} Here $c_k\tilde{\bx}^{\alpha_k}=\ltc (\vr_k)$ and $\icr\tilde{\bx}^\gamma=\ltc (\vr_0)$. We have $\tilde{\bx}^{\alpha_k}\cdot\lmc (\tg_k)=\tilde{\bx}^\gamma$. The $S$-polynomials $S(c_k\tilde{\bx}^{\alpha_k}\tg_k,\icr\ee)$ and $S(\tg_k,\ee)$ in \eqref{NewIlkSPolynomial} are defined according to \eqref{SpecialSPolyPQR} and \eqref{IdentitySpecialSPoly}.
Similar to that in \eqref{ConnectMultiplier}, we define the multiplier $\iwr_k:=\mo (\lcm (L_k,L_e)/\lcm (l_k,l_e))$ with $L_k:=\io (C_k)$ and $L_e:=\io (C_e)$. Here $C_k:=c_k\cdot\lcc (\tg_k)\in\rr^\ast$ and $C_e:=\icr\ee\in\rr^\ast$. Moreover, $l_k:=\io (\ar_k)$ and $l_e:=\io (\ee)$ with $\ar_k:=\lcc (\tg_k)$. The first equality in \eqref{NewIlkSPolynomial} is due to \eqref{DisappearingMonomialPQR} and the second one is based on a deduction similar to \eqref{NonzeroRelation}. As we discussed in the paragraph of \eqref{UnitMultiplierCond}, this new kind of $S$-polynomials $S(\tg_k,\ee)$ in \eqref{NewIlkSPolynomial} has no impact on our conclusion. Hence we can just treat the representation \eqref{FinalRepresentationPQR} as the one in \eqref{InitialRepresentationPQR} and repeat our discussions from \eqref{RevisedRepresentationPQR} through \eqref{FinalRepresentationPQR}. With a new multiplier not divisible by the irreducible factor $\ipr$ of the composite divisor $\qr$, we obtain a new representation of $\ibr\mr\mel$ whose leading monomials are strictly less than those in \eqref{FinalRepresentationPQR}. We continue to repeat the procedure in \eqref{FinalRepresentationPQR} of rewriting the representations of the modular eliminant $\mel=\mo (\el)$ so as to strictly reduce the orderings of their leading monomials like in \eqref{RepLeadingMonomialPQR}. Moreover, the multipliers for the representations are never divisible by the irreducible factor $\ipr$ of the composite divisor $\qr$. Since the elimination ordering on $\rx$ as in Definition \ref{Def:EliminationOrderingPQR} is a well-ordering, the above rewriting procedures halt after a finite number of repetitions. In this way we shall arrive at a representation of the modular eliminant $\mel$ bearing the following form: \begin{equation}\label{FinallyPQR} \fir\mel=h\ee \end{equation} with the multiplier $\fir$ not divisible by the irreducible factor $\ipr$ of the composite divisor $\qr$ and hence $\fir\in\rr^\ast$.
The other multiplier in \eqref{FinallyPQR} is $h\in\rr$. The following argument shows that it has no influence on our conclusion whether $h=0$ or not. In fact, if the proper eliminant $\ee=0$ in \eqref{FinallyPQR}, then $\fir\mel=0$ in $\rr$. The modular eliminant $\mel$ is divisible by the incompatible divisor $\ipr^i$ since the multiplier $\fir$ is not divisible by $\ipr$ but the composite divisor $\qr=\sfr^i$ satisfies $\mlt_\ipr (\qr)=i$. Since the incompatible divisor $\ipr^i$ of the composite divisor $\qr=\sfr^i$ is arbitrary, the modular eliminant $\mel$ is divisible by $\qr=\sfr^i$ and hence $\mel=0$. Then Theorem \ref{Thm:IncompatiblePart} \eqref{item:ProperZeroRemainder} follows from Lemma \ref{Lemma:ProperEliminant} \eqref{item:EliminantAndItsPart}, namely that the eliminant $\el$ is divisible by the composite divisor $\qr=\sfr^i$. Hence for every incompatible divisor $\ipr^i$ of $\qr=\sfr^i$, we have $\mlt_\ipr (\qr)=i\le\mlt_\ipr (\el)$. Finally by \eqref{MinimumEliMultiplicity} we can infer that $\mlt_\ipr (\el)=i$. Let us consider the case when the proper eliminant $\ee\in\rr^\ast$ in Theorem \ref{Thm:IncompatiblePart} \eqref{item:ProperNonZeroRemainder}. In the case of $h\ee\in\rr^\ast$ in \eqref{FinallyPQR}, from $\mlt_\ipr (\fir)=0$ we can deduce that $\mlt_\ipr (\ee)+\mlt_\ipr (h)=\mlt_\ipr (\mel)$. Hence $\mlt_\ipr (\ee)\le\mlt_\ipr (\mel)$. In the case of $h\ee=0$ in \eqref{FinallyPQR}, which includes the case when $h=0$, we have $\mlt_\ipr (\mel)=\mlt_\ipr (\qr)$ since $\mlt_\ipr (\fir)=0$. Thus $\mlt_\ipr (\ee)\le\mlt_\ipr (\qr)=\mlt_\ipr (\mel)$ as per Lemma \ref{Lemma:PQRProperties} \eqref{item:ExpoLimits}. So from \eqref{FinallyPQR} we can infer that $\mlt_\ipr (\ee)\le\mlt_\ipr (\mel)$ always holds. On the other hand, by Lemma \ref{Lemma:ProperEliminant} \eqref{item:ProperEliminant} we have $\mlt_\ipr (\ee)\ge\mlt_\ipr (\mel)$.
Thus follows the equality $\mlt_\ipr (\ee)=\mlt_\ipr (\mel)$ for every irreducible factor $\ipr$ of the composite divisor $\qr$. We can infer that they have equal standard representations $\mel^\st=\ee$ based on the definition in \eqref{StdRepsPQR} and $\ee=\ee^\st$ in Definition \ref{Def:ProperEliminant}. Hence the eliminant $\el$ is divisible by $\io (\mel^\st)=\io (\ee)$ according to Lemma \ref{Lemma:ProperEliminant} \eqref{item:EliminantAndItsPart}. Moreover, Lemma \ref{Lemma:ProperEliminant} \eqref{item:EliminantAndItsPart} also indicates that $\mlt_\ipr (\el)=\mlt_\ipr (\mel^\st)=\mlt_\ipr (\ee)$ for every irreducible factor $\ipr$ of $\qr$. This completes the proof of Theorem \ref{Thm:IncompatiblePart} \eqref{item:ProperNonZeroRemainder}. \end{proof} \section{A New Type of Bases for Zero-dimensional Ideals}\label{Sec:InductiveGroebner} In this section we derive the exact form of the eliminant $\el$ of a zero-dimensional ideal $I$. This is based on our former analyses of the pseudo-eliminant $\pel$ of $I$, i.e., of its compatible part $\Cp (\pel)$ in Theorem \ref{Thm:CompatiblePart} and its incompatible part $\Ip (\pel)$ in Theorem \ref{Thm:IncompatiblePart} respectively. We also formulate a decomposition of $I$ according to $\Cp (\pel)$ and the composite divisors of $\Ip (\pel)$. In this way we acquire a new type of bases for $I$ based on the exact form of $\el$, the pseudo-basis $\pbs$ obtained in Algorithm \ref{Algo:PseudoEliminant}, and the proper bases $\prb$ obtained in Algorithm \ref{Algo:ProperEliminant}. Moreover, we address the ideal membership problem for this new type of bases and characterize the new type of bases in terms of their leading terms. \begin{definition}[Proper divisors $\qel$ and proper factor $\pf$]\label{Def:ProperFactor} \hfill For every composite divisor $\qr$ of the incompatible part $\Ip (\pel)$ as in Definition \ref{Def:CompositeDivisor}, there corresponds a proper eliminant $\ee$ as in Definition \ref{Def:ProperEliminant}.
We define a \emph{proper divisor} $\qel\in\kxn$ in accordance with $\ee$ as follows. If $\ee\in\rr^\times$, we define $\qel:=1$; if $\ee=0$, we define $\qel:=\qr$; if $\ee\in\rat$, we define $\qel:=\io (\ee)$ with $\io$ being the injection as in \eqref{PQREmbedding} and $\ee=\ee^\st$ as in Definition \ref{Def:ProperEliminant}. We say that a proper divisor $\qel\in\kxn$ is \emph{nontrivial} if $\qel\ne 1$. We define the \emph{proper factor} $\pf$ as the product of all the proper divisors $\qel$. \end{definition} \begin{theorem}\label{Thm:Eliminant} The eliminant $\el$ of a zero-dimensional ideal $I$ is the product of the compatible part $\Cp (\pel)$ of the pseudo-eliminant $\pel$ and the proper factor $\pf$ as in Definition \ref{Def:ProperFactor}. That is, $\el=\Cp (\pel)\cdot\pf$. Moreover, the compatible part $\Cp (\pel)$ is relatively prime to the proper factor $\pf$. \end{theorem} \begin{proof} According to Lemma \ref{Lemma:PseudoEliminantTerminate} \eqref{item:EliminantDivisibility}, every irreducible factor $\ipr$ of the eliminant $\el$ is also a factor of either the compatible part $\Cp (\pel)$ or a composite divisor $\qr=\sfr^i$ of the incompatible part $\Ip (\pel)$ as per \eqref{IPCompositeDivisor}. Moreover, the proper factor $\pf$ is defined as the product of the proper divisors $\qel$ for all the composite divisors $\qr$ of the incompatible part $\Ip (\pel)$. Hence we can easily deduce the conclusion from Theorem \ref{Thm:CompatiblePart} and Theorem \ref{Thm:IncompatiblePart} as well as the definition of proper divisors $\qel$ in Definition \ref{Def:ProperFactor}. Finally, $\Cp (\pel)$ and $\pf$ are relatively prime since $\Cp (\pel)$ and $\Ip (\pel)$ are relatively prime. \end{proof} The nontrivial proper divisors $\qel$ in Definition \ref{Def:ProperFactor} as well as the compatible part $\Cp (\pel)$ of the pseudo-eliminant $\pel$ are pairwise relatively prime.
In fact, the composite divisor $\qr$ is divisible by the proper divisor $\qel=\io (\ee^\st)$ when the proper eliminant $\ee\in\rat$ in Definition \ref{Def:ProperFactor}. And $\qel=\qr$ when $\ee=0$. The composite divisors $\qr$ are pairwise relatively prime as in \eqref{IPCompositeDivisor}. In this way the nontrivial proper divisors $\qel$ and the compatible part $\Cp (\pel)$ constitute a factorization of the eliminant $\el$ according to Theorem \ref{Thm:Eliminant}. In the following lemma we make a decomposition of the zero-dimensional ideal $I$ in accordance with the factorization of the eliminant $\el$.\footnote{Please also refer to \cite[P337, Lemma 8.5]{BW93} for a similar decomposition.} \begin{lemma}\label{Lemma:IdealDecomposition} Let $\{\ibr_j\colon 1\le j\le s\}\subset\nonk$ be pairwise relatively prime and $\ibr:=\prod_{j=1}^s\ibr_j$. Then for an arbitrary ideal $J\subset\knx$, we have: \begin{equation}\label{NewModularLaw} J+\langle\ibr\rangle=\bigcap_{j=1}^s(J+\langle\ibr_j\rangle), \end{equation} where $\langle\ibr\rangle$ and $\langle\ibr_j\rangle$ denote the principal ideals in $\knx$ generated by $\ibr$ and $\ibr_j$ respectively for $1\le j\le s$. \end{lemma} \begin{proof} It is evident that the inclusion ``$\subset$'' holds. The proof of the reverse inclusion is as follows. For every $f\in\bigcap_{j=1}^s(J+\langle\ibr_j\rangle)$, there exist $g_j\in J$ and $h_j\in\knx$ such that $f=g_j+h_j\ibr_j$ for $1\le j\le s$. Moreover, $\{\ibr/\ibr_j\colon 1\le j\le s\}$ have no nontrivial common factors. Hence there exist $a_j\in\kxn$ for $1\le j\le s$ such that $\sum_{j=1}^s a_j\ibr/\ibr_j=1$. Now we have: \begin{equation*} f=\sum_{j=1}^s fa_j\ibr/\ibr_j=\sum_{j=1}^s(g_j+h_j\ibr_j)a_j\ibr/\ibr_j =\sum_{j=1}^s\frac{a_j\ibr}{\ibr_j}g_j+\ibr\sum_{j=1}^s h_ja_j\in J+\langle\ibr\rangle.
\qedhere \end{equation*} \end{proof} Let $\ntd:=\{\qr_j\colon 1\le j\le t\}$ be the set of composite divisors of the incompatible part $\Ip (\pel)$ such that their corresponding proper divisors $\qer j$ as in Definition \ref{Def:ProperFactor} are not trivial, i.e., $\qer j\ne 1$ for $1\le j\le t$. We have the following decomposition of the zero-dimensional ideal $I=I+\langle\el\rangle$ according to Theorem \ref{Thm:Eliminant} and Lemma \ref{Lemma:IdealDecomposition}. \begin{equation}\label{IdealDecomposition} I=(I+\langle\Cp (\pel)\rangle)\cap\bigcap_{\qr\in\ntd}(I+\langle\qel\rangle). \end{equation} \begin{lemma}\label{Lemma:PseudoReps} Suppose that $I\subset\knx$ is a zero-dimensional ideal with an elimination ordering on $\bM$ as in Definition \ref{Def:EliminationOrdering}. Let $\pbs=\{\tg_k\colon 1\le k\le\tau\}$ be a pseudo-basis of $I$ and $\er=\Cp (\pel)$ the compatible part of the pseudo-eliminant $\pel$ associated with $\pbs$. For every $f\in I$, there exist $\{\vr_k\colon 0\le k\le\tau\}\subset\knx$ and a multiplier $\mr$ relatively prime to $\er$ such that: \begin{equation}\label{PseudoReps} \mr f=\sum_{k=1}^\tau \vr_k\tg_k+\vr_0\pel. \end{equation} Moreover, the polynomials in \eqref{PseudoReps} satisfy the following condition: \begin{equation}\label{PseudoRepCond} \lmc (f)=\max\bigl\{\max_{1\le k\le\tau}\{\lmc (\vr_k\tg_k)\},\lmc (\vr_0\pel)\bigr\}. \end{equation} \end{lemma} \begin{proof} The proof for the conclusion is almost a verbatim repetition of that for Theorem \ref{Thm:CompatiblePart}. More specifically, suppose that $f\in I$ can be written as \begin{equation}\label{OriginBasisRep} f=\sum_{j=0}^s h_jf_j \end{equation} with $\{f_j\colon 0\le j\le s\}\subset\knx\setminus K$ being the basis $\tbs\cup\{\lel\}$ of the ideal $I$ after the Initialization in Algorithm \ref{Algo:PseudoEliminant} and $\{h_j\colon 0\le j\le s\}\subset\knx$. In particular, $\lel\in\pid\pel\subset\pid\er\subset\nona$.
It is evident that the conclusion holds when $\lmc (f)=\max_{0\le j\le s}\{\lmc (h_jf_j)\}$. So in what follows let us suppose that $\lmc (f)\prec\max_{0\le j\le s}\{\lmc (h_jf_j)\}$. In this case we treat $f$ as the eliminant $\el$ in \eqref{InitialRepresentation}. Let us fix an irreducible factor $\ipr$ of the compatible part $\er=\Cp (\pel)$. We repeat the arguments verbatim from \eqref{RevisedRepresentation} through \eqref{FinalRepresentation} to obtain a new representation like \eqref{FinalRepresentation} as follows. \begin{equation}\label{BasisRepFinal} \ibr\mr f=\sum_{k=1}^\tau \iur_k\tg_k+\iur_0\pel \end{equation} such that the multiplier $\ibr\mr$ is relatively prime to $\ipr$, i.e., $\mlt_\ipr (\ibr\mr)=0$. Here $\{\tg_k\colon 1\le k\le\tau\}=\pbs$ is the pseudo-basis obtained in Algorithm \ref{Algo:PseudoEliminant} and $\iur_k\in\knx$ for $0\le k\le\tau$. The leading monomials of the representation in \eqref{BasisRepFinal} are strictly less than those in \eqref{OriginBasisRep}, which is similar to \eqref{RepresentationLeadingMonomial}. We repeat this procedure of rewriting the representations of $\ibr\mr f$ so that their leading monomials strictly decrease. Moreover, the multipliers for the representations are always relatively prime to the irreducible factor $\ipr$ of the compatible part $\er$. Since the elimination ordering on $\knx$ is a well-ordering and the leading monomials of a representation of $f$ cannot be strictly less than $\lmc (f)$, after a finite number of repetitions we obtain a representation bearing the following form: \begin{equation}\label{FinalEqualLeadingMonomials} \fir f=\sum_{k=1}^\tau w_k\tg_k+w_0\pel \end{equation} with $w_k\in\knx$ for $0\le k\le\tau$ such that \begin{equation}\label{LeadingMonomialIdentity} \max\bigl\{\max_{1\le k\le\tau}\{\lmc (w_k\tg_k)\},\lmc (w_0\pel)\bigr\}=\lmc (f). 
\end{equation} Moreover, the multiplier $\fir\in (\kxn)^\ast$ in \eqref{FinalEqualLeadingMonomials} is relatively prime to the irreducible factor $\ipr$ of the compatible part $\er=\Cp (\pel)$. Suppose that the compatible part $\er$ has a factorization into a product of compatible divisors that are pairwise relatively prime as in Definition \ref{Def:CompatibleDivisors}, i.e., $\er=\prod_{l=1}^t\ipr_l^{n_l}$ with $n_l\in\Np$. For each irreducible factor $\ipr_l$ of $\er$, there corresponds a representation of $f$ in \eqref{FinalEqualLeadingMonomials} that can be indexed by the subscript $l$ of $\ipr_l$ with $1\le l\le t$ as follows. \begin{equation}\label{SingleIrreducibleFactor} \fir_lf=\sum_{k=1}^\tau w_k^{(l)}\tg_k+w_0^{(l)}\pel, \end{equation} where the multiplier $\fir_l\in (\kxn)^\ast$ in \eqref{SingleIrreducibleFactor} is relatively prime to the irreducible factor $\ipr_l$ of the compatible part $\er$. Moreover, the leading monomial identity \eqref{LeadingMonomialIdentity} still holds for $w_k^{(l)}$ and $w_0^{(l)}$ with $1\le l\le t$ in \eqref{SingleIrreducibleFactor}, i.e., \begin{equation}\label{LeadingMonomialIds} \max\bigl\{\max_{1\le k\le\tau}\{\lmc (w_k^{(l)}\tg_k)\},\lmc (w_0^{(l)}\pel)\bigr\}=\lmc (f). \end{equation} Let us denote $\mr:=\gcd_{1\le l\le t}\{\fir_l\}\in (\kxn)^\ast$. Then $\mr$ is relatively prime to the compatible part $\er$. There exist $\{\ur_l\in\kxn\colon 1\le l\le t\}$ such that $\mr=\sum_{l=1}^t\ur_l\fir_l$. Hence we can obtain a representation of $f$ as follows. \begin{equation}\label{FinalMemberRep} \mr f=\sum_{l=1}^t\ur_l\fir_lf=\sum_{k=1}^\tau\tg_k\sum_{l=1}^t\ur_lw_k^{(l)} +\pel\sum_{l=1}^t\ur_lw_0^{(l)}:=\sum_{k=1}^\tau \vr_k\tg_k+\vr_0\pel \end{equation} with $\vr_k:=\sum_{l=1}^t\ur_lw_k^{(l)}$ for $0\le k\le\tau$ in $\knx$. This proves \eqref{PseudoReps}.
Based on the identities $\vr_k=\sum_{l=1}^t\ur_lw_k^{(l)}$ in \eqref{FinalMemberRep} for $0\le k\le\tau$, we can infer the following inequalities between their leading monomials: \begin{equation}\label{DetailedOrdering} \lmc (\vr_0\pel)\preceq\max_{1\le l\le t}\{\lmc (w_0^{(l)}\pel)\};\quad \lmc (\vr_k\tg_k)\preceq\max_{1\le l\le t}\{\lmc (w_k^{(l)}\tg_k)\} \end{equation} for $1\le k\le\tau$ since $\ur_l\in\kxn$ for $1\le l\le t$. A combination of \eqref{DetailedOrdering} and \eqref{LeadingMonomialIds} leads to: \begin{equation}\label{ReverseIneq} \max\bigl\{\max_{1\le k\le\tau}\{\lmc (\vr_k\tg_k)\},\lmc (\vr_0\pel)\bigr\}\preceq\lmc (f). \end{equation} We can also infer the reverse inequality of \eqref{ReverseIneq} from \eqref{FinalMemberRep}. Thus follows the equality \eqref{PseudoRepCond}. \end{proof} The following intriguing observation is crucial for its ensuing conclusions. \begin{lemma}\label{Lemma:EpiLeadingCoeff} Suppose that $I\subset\knx$ is a zero-dimensional ideal with an elimination ordering on $\bM$ as in Definition \ref{Def:EliminationOrdering}. Let $\er=\Cp (\pel)$ be the compatible part of a pseudo-eliminant $\pel$ of $I$. Consider the epimorphism $\md\colon\knx\rightarrow\rc [\tilde{\bx}]$ as in \eqref{ExtendedProjectionPQR} such that $I_\er:=\md (I)$. If we define $I_\ast:=\{f\in I\colon\md (\lcc (f))\in\rc^\ast\}$, then $\md (I_\ast)=I_\er$. \end{lemma} \begin{proof} Let $\pf$ be the proper factor as in Definition \ref{Def:ProperFactor} such that $\er\cdot\pf=\el$ with $\el$ being the eliminant of $I$ as per Theorem \ref{Thm:Eliminant}. For every $h\in I_\er$ with $\lcc (h)\in\rc^\ast$, suppose that $\md (g)=h$ with $g\in I$. Then $g-\ie (h)\in\langle\er\rangle$ with the injection $\ie$ defined like in \eqref{ExtendedEmbedding}. Hence $\pf g-\pf\cdot\ie (h)\in\langle\el\rangle\subset I$. Thus $\pf\cdot\ie (h)\in I$ and $\md (\pf\cdot\ie (h))=\md (\pf)\cdot\md (\ie (h))=\md (\pf)\cdot h$ since $\md\circ\ie$ is the identity map. 
We can further infer that $\md (\pf)$ is a unit in $\rc$ by Lemma \ref{Lemma:PQRProperties} \eqref{item:SimpleCoprime} since $\pf$ is relatively prime to $\er$ as per Theorem \ref{Thm:Eliminant}. Hence $\md (\pf)\cdot\lcc (h)\in\rc^\ast$, from which we can deduce that $\pf\cdot\ie (h)\in I_\ast$. From $\md (\pf)\in\rc^\times$ and $\md (\pf)\cdot h=\md (\pf\cdot\ie (h))$, we can also deduce that $I_\er=\md (\pf)\cdot I_\er:=\{\md (\pf)\cdot h\colon h\in I_\er\}\subset\md (I_\ast)$. The inclusion $I_\er=\md (I)\supset\md (I_\ast)$ is evident. \end{proof} For the ideal $I+\langle\er\rangle=I+\langle\Cp (\pel)\rangle$ in \eqref{IdealDecomposition}, we provide a characterization of the basis $\pbs\cup\{\er\}$ in the following conclusions with $\pbs$ being a pseudo-basis of $I$ as in Definition \ref{Def:PseudoBasis}. In particular, we address the ideal membership problem for the ideal $I+\langle\er\rangle$. \begin{lemma}\label{Lemma:IdealMembership} Suppose that $I\subset\knx$ is a zero-dimensional ideal with an elimination ordering on $\bM$ as in Definition \ref{Def:EliminationOrdering}. Let $\pbs=\{g_k\colon 1\le k\le s\}$ be a pseudo-basis of $I$ and $\er=\Cp (\pel)$ the compatible part of the pseudo-eliminant $\pel$ associated with $\pbs$. Consider the epimorphism $\md$ as follows. \begin{equation}\label{ModuloCompatiblePart} \md\colon\knx\longrightarrow\rc [\tilde{\bx}] \end{equation} is defined like in \eqref{ExtendedProjectionPQR} such that $I_\er:=\md (I)$ and $\pb_\er:=\md (\pbs)$. Then with an elimination ordering on $\rc [\tilde{\bx}]$ like in Definition \ref{Def:EliminationOrderingPQR}, we have the following ideal identity in $\rc [\tilde{\bx}]$: \begin{equation}\label{LeadTermMod} \langle\ltc (I_\er)\rangle=\langle\ltc (\pb_\er)\rangle. \end{equation} \end{lemma} \begin{proof} For every $\tg\in I_\er$, let $\tf\in I_\ast$ as in Lemma \ref{Lemma:EpiLeadingCoeff} such that $\md (\tf)=\tg$ and in particular, $\md (\lcc (\tf ))=\lcc (\tg)\in\rc^\ast$. 
According to Lemma \ref{Lemma:PseudoReps}, there exist $\{\vr_k\colon 0\le k\le s\}\subset\knx$ as well as a multiplier $\mr\in\kxn$ that is relatively prime to $\er=\Cp (\pel)$ such that both \eqref{PseudoReps} and \eqref{PseudoRepCond} hold for this $\tf\in I_\ast$. We apply the epimorphism $\md$ as in \eqref{ExtendedProjectionPQR} to the identity \eqref{PseudoReps}. Then $\md (\mr)\in\rc^\times$ by Lemma \ref{Lemma:PQRProperties} \eqref{item:SimpleCoprime} since $\mr$ is relatively prime to $\er$. We have $\md (\ltc (\mr\tf))=\md (\mr)\cdot\ltc (\tg)\ne 0$ since $\md (\lcc (\tf))=\lcc (\tg)\in\rc^\ast$ whereas $\md (\ltc (\vr_0\pel))=\md (\pel\cdot\ltc (\vr_0))=0$ since $\pel\in\pid\er\subset\kxn$. For $1\le k\le s$, we collect the subscript $k$ into a set $\dss$ if $\lmc (\vr_k)\cdot\lmc (g_k)=\lmc (\tg)=\lmc (\tf)$ and $\md (\lcc (\vr_kg_k))=\md (\lcc (\vr_k)\cdot\lcc (g_k))\in\rc^\ast$. Then based on \eqref{PseudoRepCond} we have $\dss\ne\emptyset$ since $\md (\ltc (\mr\tf))\ne 0$ on the left hand side of \eqref{PseudoReps}. Since $\md (\lcc (g_k))\in\rc^\ast$ for $k\in\dss$, we also have $\md (\ltc (g_k))=\ltc (\md (g_k))$. Hence the following identity: \begin{equation}\label{LeadTermExpansion} \ltc (\tg)=\md (\mr)^{-1}\cdot\sum_{k\in\dss}\md (\ltc (\vr_k))\cdot\ltc (\md (g_k))\in\langle\ltc (\pb_\er)\rangle \end{equation} indicates the ideal identity \eqref{LeadTermMod}. \end{proof} \begin{lemma}\label{Lemma:LinearEqGcd} Let $\rr$ be a normal PQR as in Definition \ref{Def:nPQR}. Suppose that $\{c_j\colon 0\le j\le s\}\subset\rr^\ast$. There exist $\{\ibr_j\colon 1\le j\le s\}\subset\rr$ such that $c_0=\sum_{j=1}^s\ibr_jc_j$ if and only if $c_0\in\pid\icr\subset\rr$ with $\icr:=\gcd(\{c_j\colon 1\le j\le s\})$ as in \eqref{GcdPQRStd} or \eqref{GcdPQRqr}. In particular, for every proper subset $\dss\subset\{1\le j\le s\}$, the identity $c_0=\sum_{j\in\dss}\idr_jc_j$ holds with $\{\idr_j\colon j\in\dss\}\subset\rr$ only if $c_0\in\pid\icr$. 
\end{lemma} \begin{proof} First of all, there exist $\{\ar_j\colon 1\le j\le s\}\subset\rr$ such that \begin{equation}\label{GcdInPQR} \icr=\sum_{j=1}^s\ar_jc_j. \end{equation} In fact, let $\io$ be the injection defined in \eqref{PQREmbedding}. If $\idr:=\gcd(\{\io (c_j)\colon 1\le j\le s\})$, then there exist $\{\idr_j\colon 1\le j\le s\}\subset\kxn$ such that $\idr=\sum_{j=1}^s\idr_j\cdot\io (c_j)$. Applying $\mo$ to this identity, we have \begin{equation}\label{TempIdentity} \mo (\idr)=\sum_{j=1}^s\mo (\idr_j)\cdot c_j \end{equation} since $\mo\circ\io$ is the identity map. Hence $\mo (\idr)\in\pid\icr$ since $c_j\in\pid\icr$ for $1\le j\le s$. On the other hand, $\io (c_j)\in\pid\idr$ for $1\le j\le s$. Thus $c_j\in\pid{\mo (\idr)}$ for $1\le j\le s$ and hence their greatest common divisor satisfies $\icr\in\pid{\mo (\idr)}$. By the standard representations of $\icr$ and $\mo (\idr)$ as in \eqref{StdRepsPQR}, we have $\icr=\ur\cdot\mo (\idr)$ with $\ur\in\rr^\times$. As a result, it suffices to take $\ar_j:=\ur\cdot\mo (\idr_j)$ for $1\le j\le s$ in \eqref{TempIdentity} in order to deduce \eqref{GcdInPQR}. Now the necessity of the conclusion is easy to verify, and the sufficiency readily follows from \eqref{GcdInPQR}. \end{proof} \begin{notation}\label{Notation:DivisorSet} Suppose that $\pb\subset\rqd$ is a finite polynomial set and $\tilde{\bx}^\alpha\in\tM$. We denote $\db:=\{\ibr\in\pb\colon\tilde{\bx}^\alpha\in\langle\lmc (\ibr)\rangle\}$. Hereafter we shall simply use $\gcd$ to denote the $\gcd_\st$ in \eqref{GcdPQRStd} or $\gcd_\qr$ in \eqref{GcdPQRqr}. The two definitions only differ by a unit as in \eqref{UniqueGcds}, which has no impact on our conclusions. \end{notation} It is evident that $\db\ne\emptyset$ is equivalent to $\tilde{\bx}^\alpha\in\langle\lmc (\pb)\rangle$ since Lemma \ref{Lemma:MonomialIdealProperty} applies to PQR as well. 
Please note that this is also the condition for a nonzero term $c_\alpha\tilde{\bx}^\alpha$ to be pseudo-reducible with respect to $\pb$ in Definition \ref{Def:PseudoReduced}. \begin{definition}[\Gd-reducible terms and polynomials in $\rx$]\label{Def:RelativeReducible} \hfill Let $\rhor\in\rr^\ast$ and $\pb\subset\rqd$ be a finite polynomial set. We say that a nonzero term $c_\alpha\tilde{\bx}^\alpha$ is \emph{\Gd-reducible} with respect to $\pb$ if $\db\ne\emptyset$ and $c_\alpha\in\pid\idr\subset\rr$ with $\idr:=\gcd(\{\lcc (\ibr)\colon\ibr\in\db\})$ as in Notation \ref{Notation:DivisorSet}. We also say that $c_\alpha\tilde{\bx}^\alpha$ is \emph{\Gd-reduced} with respect to $\pb$ if it is not \Gd-reducible with respect to $\pb$. A polynomial $\tf\in\rqd$ is said to be \emph{\Gd-reducible} with respect to $\pb$ if $\tf$ has a \Gd-reducible term. Otherwise $\tf$ is said to be \emph{\Gd-reduced} with respect to $\pb$. \end{definition} \begin{lemma}\label{Lemma:LeadTermCond} Let $\pb\subset\rqd$ be a finite polynomial set. Then a nonzero term $c_\alpha\tilde{\bx}^\alpha\in\langle\ltc (\pb)\rangle$ if and only if $c_\alpha\tilde{\bx}^\alpha$ is \Gd-reducible with respect to $\pb$. \end{lemma} \begin{proof} We only prove the necessity since the sufficiency is evident by Lemma \ref{Lemma:LinearEqGcd}. Suppose that $\ltc (\pb)=\{a_j\tilde{\bx}^{\alpha_j}\colon 1\le j\le s\}$ and $c_\alpha\tilde{\bx}^\alpha=\sum_{j=1}^sa_jh_j\tilde{\bx}^{\alpha_j}$ with $h_j\in\rx$ for $1\le j\le s$. We expand each $h_j$ into individual terms and compare the terms with the same monomial $\tilde{\bx}^\alpha$ on both sides of the equality. In this way we obtain an equality $c_\alpha\tilde{\bx}^\alpha=\sum_{j=1}^sc_{\beta_j}a_j\tilde{\bx}^{\alpha_j}\tilde{\bx}^{\beta_j}$ instead with $c_{\beta_j}\tilde{\bx}^{\beta_j}$ being a term of $h_j$ such that $\alpha_j+\beta_j=\alpha$ for $1\le j\le s$. 
Now it is evident that $\db\ne\emptyset$ and $c_\alpha=\sum_{j=1}^sa_jc_{\beta_j}$. Then the conclusion readily follows from Lemma \ref{Lemma:LinearEqGcd}. \end{proof} \begin{definition}[\Gd-term reduction in $\rx$]\label{Def:GcdTermReduction} \hfill Suppose that $f\in\rqd$ has a term $c_\alpha\tilde{\bx}^\alpha$ that is \Gd-reducible with respect to a finite set $\pb\subset\rqd$. Let us denote $\idr:=\gcd(\{\lcc (\ibr)\colon\ibr\in\db\})$ as in Notation \ref{Notation:DivisorSet}. With $l_\alpha:=\io (c_\alpha)$ and $l_\idr:=\io (\idr)$, let us define the multipliers $\iur:=\mo (\lcm (l_\alpha,l_\idr)/l_\alpha)$ and $\lmr:=\mo (\lcm (l_\alpha,l_\idr)/l_\idr)$. Then we can make a \emph{\Gd-term reduction} of $f$ by $\pb$ as follows. \begin{equation}\label{GcdTermReduction} h=\iur f-\sum_{\ibr\in\db}\frac{\lmr c_\ibr\tilde{\bx}^\alpha}{\lmc (\ibr)}\ibr, \end{equation} where $\idr=\sum_{\ibr\in\db}c_\ibr\cdot\lcc (\ibr)$ with $c_\ibr\in\rr$. We call $h$ the \emph{remainder} of the reduction and $\iur$ the \emph{interim multiplier} on $f$ with respect to $\pb$. \end{definition} \begin{theorem}[\Gd-division in $\rx$]\label{Thm:GcdReduction} \hfill With the elimination ordering on $\rx$ as in Definition \ref{Def:EliminationOrderingPQR}, suppose that $\pb=\{\tg_j\colon 1\le j\le s\}\subset\rqd$ is a polynomial set. For every $f\in\rx$, there exist a multiplier $\mr\in\rr^\times$ as well as a remainder $\dr\in\rx$ and quotients $h_j\in\rx$ for $1\le j\le s$ such that \begin{equation}\label{GcdDivisionExpression} \mr f=\sum_{j=1}^sh_j\tg_j+\dr, \end{equation} where $\dr$ is \Gd-reduced with respect to $\pb$. Moreover, the polynomials in \eqref{GcdDivisionExpression} satisfy the following condition: \begin{equation}\label{GcdDivisionCond} \lmc (f)=\max\bigl\{\max_{1\le j\le s}\{\lmc (h_j)\cdot\lmc (\tg_j)\},\lmc (\dr)\bigr\}. 
\end{equation} \end{theorem} \begin{proof} Similar to the proof of Theorem \ref{Thm:ProperReduction} for proper division, the proof is almost a verbatim repetition of that for Theorem \ref{Thm:PseudoReduction} if we substitute $\rx$ for $\knx$. In fact, the maximal term of $h$ that is \Gd-reducible with respect to $\pb$ is strictly less than that of $f$ after we make a \Gd-term reduction as in \eqref{GcdTermReduction}. Moreover, it suffices to prove that the condition \eqref{GcdDivisionCond} applies to the \Gd-term reduction in \eqref{GcdTermReduction}, same as in the proof of Theorem \ref{Thm:PseudoReduction}. \end{proof} We also call the \Gd-division with respect to $\pb$ a \emph{\Gd-reduction} with respect to $\pb$ henceforth. \begin{theorem}\label{Thm:MemberChar} Suppose that $I\subset\knx$ is a zero-dimensional ideal with an elimination ordering on $\bM$ as in Definition \ref{Def:EliminationOrdering}. Let $\pbs=\{g_k\colon 1\le k\le s\}$ be a pseudo-basis of $I$ and $\er=\Cp (\pel)$ the compatible part of the pseudo-eliminant $\pel$ associated with $\pbs$. Let $\md\colon\knx\rightarrow\rcx$ be the epimorphism as in \eqref{ExtendedProjectionPQR} such that $I_\er:=\md (I)$ and $\pb_\er:=\md (\pbs)$. Then the identity \eqref{LeadTermMod} in Lemma \ref{Lemma:IdealMembership} is equivalent to a characterization of $\pb_\er$ as follows: For every $f\in\rcx$, we have $f\in I_\er$ if and only if we can make a \Gd-reduction of $f$ with respect to $\pb_\er$ as in \eqref{GcdDivisionExpression} and \eqref{GcdDivisionCond} such that the remainder $\dr=0$. \end{theorem} \begin{proof} Suppose that the identity \eqref{LeadTermMod} holds. For every $f\in I_\er$, we make a \Gd-reduction of $f$ with respect to $\pb_\er$ as in \eqref{GcdDivisionExpression} and \eqref{GcdDivisionCond} such that the remainder $\dr$ is \Gd-reduced with respect to $\pb_\er$ as in Definition \ref{Def:RelativeReducible}. 
On the other hand, the remainder $\dr\in I_\er$ as per the expression in \eqref{GcdDivisionExpression}. If $\dr\ne 0$, there exist $\{h_k\colon 1\le k\le s\}\subset\rcx$ such that $\ltc (\dr)=\sum_{k=1}^sh_k\cdot\ltc (\md (g_k))$ according to \eqref{LeadTermMod}. For $1\le k\le s$, we collect the subscript $k$ into a set $\dss$ if $h_k$ has a term denoted as $c_k\tilde{\bx}^{\alpha_k}$ that satisfies $\tilde{\bx}^{\alpha_k}\cdot\lmc (\md (g_k))=\lmc (\dr)$ and $c_k\cdot\lcc (\md (g_k))\in\rc^\ast$. Then we have $\dss\ne\emptyset$ since $\lcc (\dr)\in\rc^\ast$. Moreover, $\lcc (\dr)=\sum_{k\in\dss}c_k\cdot\lcc (\md (g_k))$ and hence $\lcc (\dr)\in\pid\ar$ by Lemma \ref{Lemma:LinearEqGcd} with $\ar:=\gcd(\{\lcc (\md (g_k))\colon k\in\dss\})$. According to Definition \ref{Def:RelativeReducible}, $\dr$ is \Gd-reducible with respect to $\pb_\er$, which constitutes a contradiction. This proves the necessity of the conclusion. The sufficiency of the conclusion is evident since $\pb_\er\subset I_\er$. Next let us prove the identity \eqref{LeadTermMod} under the assumption that every $f\in I_\er$ can be \Gd-reduced to $\dr=0$ by $\pb_\er$. In fact, $f=\sum_{k=1}^s\qr_k\cdot\md (g_k)$ with $\qr_k\in\rcx$ such that $\lmc (f)=\max_{1\le k\le s}\{\lmc (\qr_k)\cdot\lmc (\md (g_k))\}$ according to \eqref{GcdDivisionCond}. For $1\le k\le s$, we collect the subscript $k$ into a set $\dss$ if $\lmc (\qr_k)\cdot\lmc (\md (g_k))=\lmc (f)$. Then $\ltc (f)=\sum_{k\in\dss}\ltc (\qr_k)\cdot\ltc (\md (g_k))$. Thus $\ltc (I_\er)\subset\langle\ltc (\pb_\er)\rangle$. The other direction of \eqref{LeadTermMod} is trivial to prove. \end{proof} In what follows we provide a modular characterization of the ideal $I+\langle\qel\rangle$ in \eqref{IdealDecomposition} over the normal PQR $\rr$. We shall use a modular argument for the characterization resorting to the proper basis $\prb$ and proper eliminant $\ee$ obtained in Algorithm \ref{Algo:ProperEliminant}. 
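Before the modular characterization, a schematic example may help to fix ideas; the data below are ad hoc and are not produced by Algorithm \ref{Algo:ProperEliminant}. Take $K=\bQ$ and the composite divisor $\qr=x_n^3$, so that $\rr=\bQ [x_n]/\langle x_n^3\rangle$ is the normal PQR as in \eqref{kxnPQR}. A residue class $\mo (f)$ is a unit in $\rr$ exactly when $f$ is relatively prime to the irreducible factor $x_n$, i.e., when $f(0)\ne 0$. For instance, \begin{equation*} \mo (1-x_n)^{-1}=\mo (1+x_n+x_n^2) \end{equation*} since $(1-x_n)(1+x_n+x_n^2)=1-x_n^3$. The nonzero elements of $\pid{\mo (x_n)}$ are precisely the zero divisors of $\rr$. This illustrates the role of the multipliers in the following representations: being relatively prime to every irreducible factor of $\qr$, they are projected by $\mo$ to units of $\rr$ and hence can be inverted.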
\begin{lemma}\label{Lemma:ProperReps} Suppose that $\pel$ is a pseudo-eliminant of a zero-dimensional ideal $I$ with $\qr$ being a composite divisor of its incompatible part $\Ip (\pel)$. Let $\ee$ and $\prb=\{\tg_k\colon 1\le k\le\tau\}$ be the proper eliminant and proper basis of $\Iq=\mo (I)$ respectively with $\mo$ being the epimorphism as in \eqref{ExtendedProjectionPQR}. For every $f\in\Iq$, there exist a multiplier $\mr\in\rr^\times$ and $\{\vr_k\colon 0\le k\le\tau\}\subset\rx$ such that: \begin{equation}\label{ProperReps} \mr f=\sum_{k=1}^\tau \vr_k\tg_k+\vr_0\ee. \end{equation} Moreover, the polynomials in \eqref{ProperReps} satisfy the following condition: \begin{equation}\label{ProperRepCond} \lmc (f)=\max\bigl\{\max_{1\le k\le\tau}\{\lmc (\vr_k)\cdot\lmc (\tg_k)\},\lmc (\vr_0)\bigr\}. \end{equation} In particular, the above conclusions are still sound when the proper eliminant $\ee=0$. \end{lemma} \begin{proof} The proof is an almost verbatim repetition of that for Theorem \ref{Thm:IncompatiblePart}, which is similar to the proof for Lemma \ref{Lemma:PseudoReps}. In fact, suppose that $\tas$ is the originally given basis of the ideal $I$ in $\knx$ such that $\mo (\tas)=\{f_j\colon 1\le j\le s\}\subset\rqd$ is a basis of the ideal $\Iq=\mo (I)$. Then for every $f\in\Iq$, there exist $h_j\in\rx$ for $1\le j\le s$ such that $f$ can be written as: \begin{equation}\label{OriginalRepPQR} f=\sum_{j=1}^s h_jf_j. \end{equation} Thus the conclusion readily follows when $\lmc (f)=\max_{1\le j\le s}\{\lmc (h_j)\cdot\lmc (f_j)\}$ since $\mo (\tas)\subset\prb$. Now let us suppose that $\lmc (f)\prec\max_{1\le j\le s}\{\lmc (h_j)\cdot\lmc (f_j)\}$. In this case we treat $f$ as the modular eliminant $\mel$ in \eqref{InitialRepresentationPQR}. Let us fix an irreducible factor $\ipr$ of the composite divisor $\qr$. 
We repeat the arguments verbatim from \eqref{RevisedRepresentationPQR} through \eqref{FinalRepresentationPQR} to obtain a new representation like in \eqref{FinalRepresentationPQR} as follows. \begin{equation}\label{FinalRepPQR} \ibr\mr f=\sum_{k=1}^\tau \iur_k\tg_k+\iur_0\ee \end{equation} with $\iur_k\in\rx$ for $0\le k\le\tau$. The leading monomials of the representation in \eqref{FinalRepPQR} are strictly less than those in \eqref{OriginalRepPQR}, which resembles \eqref{RepLeadingMonomialPQR} closely. Moreover, the multiplier $\ibr\mr$ in \eqref{FinalRepPQR} is relatively prime to the irreducible factor $\ipr$ of the composite divisor $\qr$, i.e., $\mlt_\ipr (\ibr\mr)=0$. We repeat this procedure of rewriting the representations of $\ibr\mr f$ so that their leading monomials strictly decrease. And the multipliers for the representations are always relatively prime to the irreducible factor $\ipr$ of the composite divisor $\qr$. After a finite number of repetitions we shall obtain a representation in the following form: \begin{equation}\label{FinalEqualityLeadPQR} \fir f=\sum_{k=1}^\tau w_k\tg_k+w_0\ee \end{equation} with $w_k\in\rx$ for $0\le k\le\tau$ such that \begin{equation}\label{LeadMonomialIdPQR} \max\bigl\{\max_{1\le k\le\tau}\{\lmc (w_k)\cdot\lmc (\tg_k)\},\lmc (w_0)\bigr\}=\lmc (f). \end{equation} The multiplier $\fir\in\rr^\ast$ in \eqref{FinalEqualityLeadPQR} is relatively prime to the irreducible factor $\ipr$ of the composite divisor $\qr$. Since for every irreducible factor $\ipr$ of the composite divisor $\qr$, we have \eqref{FinalEqualityLeadPQR} and \eqref{LeadMonomialIdPQR}, we can repeat the arguments almost verbatim in \eqref{SingleIrreducibleFactor} and \eqref{FinalMemberRep} to show that there exist a multiplier $\mr\in\rr^\times$ and $\{\vr_k\colon 0\le k\le\tau\}\subset\rx$ such that \eqref{ProperReps} holds. 
In the arguments we substitute the proper eliminant $\ee$ for the pseudo-eliminant $\pel$ and use \eqref{GcdInPQR} for the representation of a greatest common divisor in $\rr$. Moreover, we can corroborate \eqref{ProperRepCond} by repeating almost verbatim the arguments in \eqref{LeadingMonomialIds}, \eqref{DetailedOrdering} and \eqref{ReverseIneq}. In fact, it suffices to substitute $\lmc (w_k^{(l)})\cdot\lmc (\tg_k)$ for $\lmc (w_k^{(l)}\tg_k)$ in \eqref{LeadingMonomialIds} and \eqref{DetailedOrdering}, as well as to substitute $\lmc (\vr_k)\cdot\lmc (\tg_k)$ for $\lmc (\vr_k\tg_k)$ in \eqref{DetailedOrdering} and \eqref{ReverseIneq}, on account of the existence of zero divisors in $\rr$. \end{proof} Let $I\subset\knx$ be a zero-dimensional ideal. For a composite divisor $\qr$ of the incompatible part $\Ip (\pel)$ of a pseudo-eliminant $\pel$ of $I$, let $\rr$ be the normal PQR defined as in \eqref{kxnPQR}. Suppose that $\ee\in\rat$ is the proper eliminant obtained in Algorithm \ref{Algo:ProperEliminant}. In particular, let $\ee$ stand for the standard representation $\ee^\st$ as in Definition \ref{Def:ProperEliminant}. As per \eqref{StdRepsPQR}, we have $\qr\in\pid{\io (\ee)}\subset\kxn$ with $\io$ being the injection defined as in \eqref{PQREmbedding}. Hence similar to \eqref{ResidueRing}, let us define: \begin{equation}\label{ProperPQR} \re:=\{r\in\rr\colon\deg (r)<\deg (\ee)\}. \end{equation} Similar to \eqref{BinaryOperations}, we can redefine the binary operations on $\re$ such that $\re$ is a normal PQR satisfying $\re\cong\rr/\pid\ee$. For every $f\in\rr$, there exist a quotient $h\in\rr$ and a unique remainder $r\in\rr$ satisfying $f=h\ee+r$ such that $\deg (r)<\deg (\ee)$. As in \eqref{ProjectionPQR}, we can define an epimorphism $\mq\colon\rr\rightarrow\re$ as $\mq (f):=r$. This combined with \eqref{DivisionPQR} and \eqref{ProjectionPQR} leads to an epimorphism $\mq\circ\mo\colon\kxn\rightarrow\re$. 
For every $\tf\in\kxn$, there exist a quotient $h\in\kxn$ and unique remainder $r\in\kxn$ such that $\tf=h\cdot\io (\ee)+r$. Hence we can also define an epimorphism $\mqr\colon\kxn\rightarrow\re$ as $\mqr (\tf):=r$. Since $\qr\in\pid{\io (\ee)}$, it is evident that $\mqr=\mq\circ\mo$. For simplicity we still denote $\mqr$ as $\mq$ henceforth. Similar to \eqref{ExtendedProjectionPQR}, $\mq$ can be extended to a ring epimorphism from $\knx$ or $\rr [\tilde{\bx}]$ to $\rex$, which we still denote as $\mq$, as follows. \begin{equation}\label{FinalProjection} \mq\colon\knx\text{~or~}\rr [\tilde{\bx}]\rightarrow\rex\colon\quad\mq\Bigl(\sum_{j=1}^s c_j\tilde{\bx}^{\alpha_j}\Bigr):=\sum_{j=1}^s\mq (c_j)\tilde{\bx}^{\alpha_j}. \end{equation} Similar to \eqref{ExtendedEmbedding}, we also have an injection $\iq$ as follows. \begin{equation}\label{FinalEmbedding} \iq\colon\rex\rightarrow\knx\text{~or~}\rr [\tilde{\bx}]\colon\quad\iq\Bigl(\sum_{j=1}^s c_j\tilde{\bx}^{\alpha_j}\Bigr):=\sum_{j=1}^s\iq(c_j)\tilde{\bx}^{\alpha_j}. \end{equation} \begin{theorem}\label{Thm:MemberCharModulo} Suppose that $I\subset\knx$ is a zero-dimensional ideal over a perfect field $K$ and $\pel$ a pseudo-eliminant of $I$. Let $\qr$ be a composite divisor of the incompatible part $\Ip (\pel)$ and $\rr$ the normal PQR as in \eqref{kxnPQR}. Let $\ee\in\rr\setminus\rr^\times$ and $\prb$ denote the proper eliminant and proper basis obtained in Algorithm \ref{Algo:ProperEliminant} respectively. If $\ee=0$, then we have the following two equivalent characterizations of $\prb$: \begin{enumerate}[(1)] \item\label{item:LeadTermQr} A characterization of $\prb$ through its leading terms via an ideal identity as follows. \begin{equation}\label{QrLeadTermChar} \langle\ltc (\Iq)\rangle=\langle\ltc (\prb)\rangle. 
\end{equation} \item\label{item:MemberCharQr} A characterization of $\prb$ through \Gd-reductions: For every $f\in\rx$, we have $f\in\Iq$ if and only if we can make a \Gd-reduction of $f$ with respect to $\prb$ as in \eqref{GcdDivisionExpression} and \eqref{GcdDivisionCond} with the remainder $\dr=0$. \suspend{enumerate} If $\ee\in\rat$, we define $\Ie:=\mq (\Iq)=\mq (I)$ and $\Bn:=\mq (\prb)=\mq (\pbs)$ with $\mq$ being the epimorphism as in \eqref{FinalProjection}. Then we have the following two equivalent characterizations of $\Bn$: \resume{enumerate}[{[(1)]}] \item\label{item:LeadTermEq} A characterization of $\Bn$ through its leading terms via an ideal identity as follows. \begin{equation}\label{EqLeadTermChar} \langle\ltc (\Ie)\rangle=\langle\ltc (\Bn)\rangle. \end{equation} \item\label{item:MemberCharEq} A characterization of $\Bn$ through \Gd-reductions: For every $f\in\rex$, we have $f\in \Ie$ if and only if we can make a \Gd-reduction of $f$ with respect to $\Bn$ as in \eqref{GcdDivisionExpression} and \eqref{GcdDivisionCond} with the remainder $\dr=0$. \end{enumerate} \end{theorem} \begin{proof} The identities \eqref{QrLeadTermChar} and \eqref{EqLeadTermChar} follow directly from Lemma \ref{Lemma:ProperReps}. The proofs are similar to and even simpler than that for the identity \eqref{LeadTermMod} in Lemma \ref{Lemma:IdealMembership} since we need a conclusion like Lemma \ref{Lemma:EpiLeadingCoeff} in neither case. In fact, we can obtain \eqref{QrLeadTermChar} as an identity of leading terms from the identity \eqref{ProperReps}. We first define a subscript set $\dss$ for \eqref{ProperReps} as $\dss:=\{1\le k\le\tau\colon\lmc (\vr_k)\cdot\lmc (\tg_k)=\lmc (f),\lcc (\vr_k)\cdot\lcc (\tg_k)\in\rr^\ast\}$. Then we can obtain an identity of leading terms as follows. 
\begin{equation}\label{LeadingTermIdentity} \ltc (f)=\mr^{-1}\sum_{k\in\dss}\ltc (\vr_k)\cdot\ltc (\tg_k)\in\langle\ltc (\prb)\rangle \end{equation} since we have $\lcc (f)\in\rr^\ast$ as well as $\ee=0$ in \eqref{ProperReps} in this case. To obtain \eqref{EqLeadTermChar}, for every $f\in\Ie$, let $\iq$ be the injection defined as in \eqref{FinalEmbedding} such that $\iq (f)\in\Iq$. Now consider the identity \eqref{ProperReps} that holds for $\iq (f)$. We can apply the epimorphism $\mq$ in \eqref{FinalProjection} to the identity \eqref{ProperReps} for $\iq (f)$ to obtain an identity of leading terms that is similar to \eqref{LeadingTermIdentity} since $\lcc (f)\in\re^\ast$ and $\mq (\ee)=0$. The proofs for the equivalence between the characterizations in \eqref{item:LeadTermQr} and \eqref{item:MemberCharQr}, as well as between those in \eqref{item:LeadTermEq} and \eqref{item:MemberCharEq}, are verbatim repetitions of that for Theorem \ref{Thm:MemberChar}. In fact, it suffices to substitute $\Iq$, $\prb$ and $\mo$, as well as $\Ie$, $\Bn$ and $\mq$, for $I_\er$, $\pb_\er$ and $\md$ respectively. \end{proof} \begin{remark} We used \Gd-reductions in Theorem \ref{Thm:MemberChar} and Theorem \ref{Thm:MemberCharModulo} \eqref{item:MemberCharQr} and \eqref{item:MemberCharEq} to address the ideal membership problem for the new type of bases. However, we avoided making \Gd-reductions of the $S$-polynomials in the computations of the pseudo-bases and pseudo-eliminants in Algorithm \ref{Algo:PseudoEliminant}, as well as the proper bases and proper eliminants in Algorithm \ref{Algo:ProperEliminant}. The reason becomes clear in Section \ref{Sec:ComplexityComparison} when we show that the \Gd-computations not only incur complexity issues but also contain B\'ezout coefficients\ that tend to swell to an excruciating magnitude over the rational field $K=\bQ$. 
This is also the reason why we do not adopt the so-called \emph{strong} Gr\"obner\ bases for polynomial rings over principal ideal rings. A PQR is a special kind of principal ideal ring. Please refer to \cite[P251, Definition 4.5.6]{AL94} for the definition of strong Gr\"obner\ bases over PIDs. \end{remark} \begin{notation}[Unified notations for modular bases]\label{Notation:ModularBasis} \hfill Let us use $\Bm$ to denote the various modular bases that were defined previously as follows: \begin{inparaenum}[(i)] \item The proper basis $\prb$ with the modulus being a composite divisor $\qr$ as in Definition \ref{Def:ProperEliminant}; \item The modular basis $\pb_\er=\md (\pbs)$ with the modulus being the compatible part $\er=\Cp (\pel)$ as in Lemma \ref{Lemma:IdealMembership}; \item The modular basis $\Bn=\mq (\prb)$ with the modulus being a proper eliminant $\ee$ as in Theorem \ref{Thm:MemberCharModulo} \eqref{item:LeadTermEq}. \end{inparaenum} In accordance with the above notation of $\Bm$, we also use $\mm$ as a unified notation for the epimorphisms $\mo$ in \eqref{ExtendedProjectionPQR}, $\md$ in \eqref{ModuloCompatiblePart} and $\mq$ in \eqref{FinalProjection}. Moreover, we denote the coefficient ring $\rr$, $\rc$ or $\re$ simply as $\Rm$ and the ideal $\Iq$, $I_\er$ or $\Ie$ uniformly as $\Iq$ such that $\Bm\subset\Iq\subset\Rm [\tilde{\bx}]$ henceforth. \end{notation} The unified modular basis $\Bm$ satisfies the identities \eqref{LeadTermMod}, \eqref{QrLeadTermChar} and \eqref{EqLeadTermChar}, which can be assimilated into a unified identity as follows. \begin{equation}\label{UnifiedIdentity} \langle\ltc (\Iq)\rangle=\langle\ltc (\Bm)\rangle. \end{equation} Now we furnish the unified identity \eqref{UnifiedIdentity} with an interpretation via the ideal $I+\langle\qr\rangle$. 
Let $\pbs=\{\tg_k\colon 1\le k\le s\}$ be a pseudo-basis of $I$ and $\Bm=\mo (\pbs)=\{\ibr_j\colon 1\le j\le t\}\subset\Iq$ the unified notation for the modular bases as in Notation \ref{Notation:ModularBasis}. We can also define a unified notation for the injection $\io\colon\rx\rightarrow\knx$ similar to \eqref{ExtendedEmbedding} and \eqref{FinalEmbedding} such that $\mo\circ\io$ is the identity map. By the canonical isomorphism as follows: \begin{equation*} (I+\langle\qr\rangle)/\langle\qr\rangle\simeq I/(I\cap\langle\qr\rangle)\simeq\mo (I)=\Iq, \end{equation*} it is easy to deduce that $\io (\Bm)\subset\pbs+\langle\qr\rangle:=\{g_k+f\qr\colon 1\le k\le s,f\in\knx\}$. Then it readily follows that $(\io (\Bm)\cup\{0\})+\langle\qr\rangle=\pbs+\langle\qr\rangle$. Here $\io (\Bm)\cup\{0\}$ accounts for the possibility that $0\in\mo (\pbs)$. We can deduce that for every $f\in I+\langle\qr\rangle$, there exists $g\in\knx$ such that $f-g\qr\in\langle\io (\Bm)\rangle$. In fact, we can invoke Theorem \ref{Thm:MemberChar} and Theorem \ref{Thm:MemberCharModulo} \eqref{item:MemberCharQr} and \eqref{item:MemberCharEq} on $\mo (f)\in\Iq$. According to the \Gd-reduction by the modular basis $\Bm=\{\ibr_j\colon 1\le j\le t\}$ as in \eqref{GcdDivisionExpression}, there exist a multiplier $\mr\in\rr^\times$ and quotients $h_j\in\rx$ with $1\le j\le t$ such that $\mr\cdot\mo (f)=\sum_{j=1}^th_j\ibr_j$. Since $h_j\ibr_j=\mo (\io (h_j)\cdot\io (\ibr_j))$ for $1\le j\le t$, there exists $h\in\knx$ such that $\mr\cdot\io (\mo (f))-h\qr=\sum_{j=1}^t\io (h_j)\cdot\io (\ibr_j)\in\langle\io (\Bm)\rangle$. Moreover, we also have $f-\io (\mo (f))\in\langle\qr\rangle$. Hence $f-g\qr\in\langle\io (\Bm)\rangle$ for some $g\in\knx$. Thus we have proved the following interpretation for the identity \eqref{UnifiedIdentity} since $(\io (\Bm)\cup\{0\})+\langle\qr\rangle=\pbs+\langle\qr\rangle$. 
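The above lifting argument can be checked on a schematic example with ad hoc data. Suppose that $n\ge 2$, $\qr=x_n^2$ and the modular basis is $\Bm=\{\mo (x_1)\}$, so that $\io (\Bm)=\{x_1\}$. For $f=x_1+x_1\qr=x_1 (1+x_n^2)\in I+\langle\qr\rangle$ we have $\mo (f)=\mo (x_1)$ since $\mo (1+x_n^2)=1$, and the \Gd-reduction of $\mo (f)$ by $\Bm$ terminates with the multiplier $\mr=1$ and the remainder $\dr=0$. Lifting back to $\knx$, we obtain $f-x_1\qr=x_1\in\langle\io (\Bm)\rangle$, so that $g=x_1$ realizes the membership $f-g\qr\in\langle\io (\Bm)\rangle$.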
\begin{lemma}\label{Lemma:ModularBasisBack} If a modular basis $\Bm$ satisfies the identity \eqref{UnifiedIdentity}, then $\pbs\cup\{\qr\}$ or $\io (\Bm)\cup\{\qr\}$ constitutes a basis for $I+\langle\qr\rangle$. \end{lemma} Let $\pbs$ be a pseudo-basis and $\er=\Cp (\pel)$ the compatible part of a pseudo-eliminant $\pel$ of $I$. We still use $\qel$ to denote the nontrivial proper divisors for $\qr\in\ntd$ with $\ntd$ being the corresponding set of composite divisors as in \eqref{IdealDecomposition}. Then we have the following new type of bases in accordance with the ideal decomposition of $I$ in \eqref{IdealDecomposition}: \begin{equation}\label{NewBases} (\pbs\cup\{\er\})\cup\bigcup_{\qr\in\ntd}(B\cup\{\qel\}), \end{equation} where the basis $B$ stands for either $\pbs$ or $\io (\Bm)$. The modular version of the new type of bases specified in \eqref{NewBases} corresponds to the modular bases $\prb$, $\pb_\er$ and $\Bn$ as in Notation \ref{Notation:ModularBasis}. These modular bases are especially well suited to the Chinese Remainder Theorem. \begin{lemma}\label{Lemma:ChineseIdealLemma} Let $\ntd$ be the set of composite divisors whose proper divisors are nontrivial as in \eqref{IdealDecomposition} and \eqref{NewBases}. We use the unified notation $\Iq$ as in Notation \ref{Notation:ModularBasis} for the modular ideals $\Iq$ in \eqref{QrLeadTermChar} and $\Ie$ in \eqref{EqLeadTermChar} but we keep the notation for $I_\er$ as in \eqref{LeadTermMod} unaltered. If we denote $I_\el:=I/I\cap\langle\el\rangle$, then we have a decomposition as follows. \begin{equation}\label{ChineseModuleTheorem} I_\el\simeq I_\er\times\prod_{\qr\in\ntd}\Iq. 
\end{equation} \end{lemma} \begin{proof} The identity \eqref{ChineseModuleTheorem} amounts to proving that the following canonical homomorphism $\varphi$ is an isomorphism: \begin{equation}\label{ChineseMorphism} \varphi\colon I/I\cap\langle\el\rangle\longrightarrow (I/I\cap\langle\er\rangle)\times\prod_{\qr\in\ntd}I/I\cap\langle\qr\rangle. \end{equation} The proof is similar to that for the Chinese Remainder Theorem. In fact, it is obvious that $\varphi$ is an injection since we have the decomposition: \begin{equation}\label{DecompIdeal} (I\cap\langle\er\rangle)\cap\bigcap_{\qr\in\ntd}(I\cap\langle\qr\rangle)=I\cap\langle\el\rangle \end{equation} by the factorization of the eliminant $\el$ in Theorem \ref{Thm:Eliminant}. Let us denote $\widehat\ntd:=\ntd\cup\{\er\}$. For every $\qr\in\widehat\ntd$, let us define $J_\qr:=\bigcap_{\qr'\in\widehat\ntd\setminus\{\qr\}}(I\cap\langle\qr'\rangle)$. Fix a $\qr\in\widehat\ntd$. For every $\qr'\in\widehat\ntd\setminus\{\qr\}$, there exist $\icr_\qr,\icr_{\qr'}\in\kxn$ such that $\icr_\qr\qr+\icr_{\qr'}\qr'=1$. Hence for every $f\in I$, we have the following identity: \begin{equation*} f=f\prod_{\qr'\in\widehat\ntd\setminus\{\qr\}}(\icr_\qr\qr+\icr_{\qr'}\qr')\in (I\cap\langle\qr\rangle)+J_\qr, \end{equation*} from which we can deduce the following ideal identity for every $\qr\in\widehat\ntd$: \begin{equation}\label{ChineseIdealIdentity} I=(I\cap\langle\qr\rangle)+J_\qr. \end{equation} We can substitute \eqref{ChineseIdealIdentity} into \eqref{ChineseMorphism} to obtain the following equivalence for every $\qr\in\widehat\ntd$: \begin{equation}\label{NovelIdentity} I/I\cap\langle\qr\rangle=((I\cap\langle\qr\rangle)+J_\qr)/(I\cap\langle\qr\rangle)\simeq J_\qr/((I\cap\langle\qr\rangle)\cap J_\qr) =J_\qr/(I\cap\langle\el\rangle), \end{equation} where the last equality is based on \eqref{DecompIdeal}. 
The identity \eqref{NovelIdentity} simplifies the canonical monomorphism $\varphi$ in \eqref{ChineseMorphism} into the following form: \begin{equation} \varphi\colon I/I\cap\langle\el\rangle\longrightarrow\prod_{\qr\in\widehat\ntd}J_\qr/(I\cap\langle\el\rangle) \simeq\prod_{\qr\in\widehat\ntd}I/I\cap\langle\qr\rangle\stackrel{\pi_\qr}{\longrightarrow}I/I\cap\langle\qr\rangle, \end{equation} where $\pi_\qr$ is the canonical projection onto the components. The surjectivity of $\varphi$ follows from the fact that for $s_\qr\in J_\qr$ with $\qr\in\widehat\ntd$, we have $\pi_\qr\circ\varphi (s)=s_\qr$ with $s:=\sum_{\qr\in\widehat\ntd}s_\qr$. \end{proof} \begin{lemma}\label{Lemma:CRT} For the eliminant $\el$ of an ideal $I$, let $R_\el :=\knx/\langle\el\rangle$ and let $I_\el$ be defined as in \eqref{ChineseModuleTheorem}. Then we have the following isomorphism in accordance with the ideal decompositions in \eqref{IdealDecomposition} and \eqref{DecompIdeal}: \begin{equation}\label{ChineseAlgebras} R_\el/I_\el\simeq (R_\er/I_\er)\times\prod_{\qr\in\ntd}(R_\qr/\Iq), \end{equation} where $\ntd$ is defined as in \eqref{IdealDecomposition} and \eqref{ChineseModuleTheorem}, and $I_\er$ and $\Iq$ are defined as in \eqref{ChineseModuleTheorem}. \end{lemma} \begin{proof} By the Chinese Remainder Theorem we have the algebra decomposition: \begin{equation}\label{PrimitiveAlgebras} R/(I+\langle\el\rangle)\simeq R/(I+\langle\er\rangle)\times\prod_{\qr\in\ntd}R/(I+\langle\qr\rangle) \end{equation} with $R=\knx$. Then the decomposition \eqref{ChineseAlgebras} follows from \eqref{PrimitiveAlgebras} by the following observation: \begin{equation*} R/(I+\langle\qr\rangle)\simeq (R/\langle\qr\rangle)\big/((I+\langle\qr\rangle)/\langle\qr\rangle)\simeq R_\qr/\Iq. \end{equation*} Similarly we have $R/(I+\langle\el\rangle)\simeq R_\el/I_\el$ and $R/(I+\langle\er\rangle)\simeq R_\er/I_\er$. 
\end{proof} It is easy to show that the \Gd-reduced remainder $\dr$ obtained in the \Gd-reduction \eqref{GcdDivisionExpression} of Theorem \ref{Thm:GcdReduction} is unique. In this way we can define a unique \emph{normal form} $\dr$ for every $f\in\rx$ with respect to $\pb$. This combined with the identity \eqref{ChineseAlgebras} yields a normal form for every $f\in R_\el$ with respect to $I_\el$. It is evident that Definition \ref{Def:RelativeReducible} and Lemma \ref{Lemma:LeadTermCond} apply to all these modular bases denoted as $\Bm$. In the remaining part of this section, let us address the uniqueness of this new type of bases. The first step is to minimize the number of elements in a basis. \begin{definition}[Irredundant basis]\label{Def:IrredundantBasis} \hfill A basis $\Bm$ as in Notation \ref{Notation:ModularBasis} satisfying \eqref{UnifiedIdentity} is said to be \emph{irredundant} if $\langle\ltc (\Bm\setminus\{\ibr\})\rangle$ is a proper subset of $\langle\ltc (\Bm)\rangle$ for every element $\ibr\in\Bm$. That is, $\ltc (\ibr)$ is \Gd-reduced with respect to $\Bm\setminus\{\ibr\}$ for every $\ibr\in\Bm$ by Lemma \ref{Lemma:LeadTermCond}. \end{definition} \begin{lemma}\label{Lemma:IrredundantBases} If $\Bm$ and $\Bm'$ are irredundant bases as in Definition \ref{Def:IrredundantBasis} of the same ideal $\Iq$, then we have two equal sets of leading monomials $\lmc (\Bm)=\lmc (\Bm')$. \end{lemma} \begin{proof} From $\ltc (\ibr)\in\langle\ltc (\Bm')\rangle$ for every $\ibr\in\Bm$, we know that $\ltc (\ibr)$ is \Gd-reducible with respect to $\Bm'$ as per Lemma \ref{Lemma:LeadTermCond}. Further, for every $\ibr'\in\dmb{\Bm'}\ibr$ as in Notation \ref{Notation:DivisorSet}, $\ltc (\ibr')$ is also \Gd-reducible with respect to $\Bm$. Nonetheless we know for sure that not every element of $\ltc (\dmb{\Bm'}\ibr)$ is \Gd-reducible with respect to $\Bm\setminus\{\ibr\}$.
Otherwise $\ltc (\ibr)$ would be \Gd-reducible with respect to $\Bm\setminus\{\ibr\}$, contradicting the assumption that $\Bm$ is irredundant. As a result, there exists a $\ibr'\in\dmb{\Bm'}\ibr$ satisfying $\ibr\in\dmb\Bm{\ibr'}$. These two conditions indicate that $\lmc (\ibr)$ is divisible by $\lmc (\ibr')$ and vice versa. Hence we have $\lmc (\ibr')=\lmc (\ibr)$, from which we can deduce that $\lmc (\Bm)\subset\lmc (\Bm')$ since $\ibr\in\Bm$ is arbitrary. Similarly we have $\lmc (\Bm')\subset\lmc (\Bm)$. \end{proof} \begin{remark}\label{Rmk:BasisCardinal} The two irredundant bases $\Bm$ and $\Bm'$ in Lemma \ref{Lemma:IrredundantBases} do not have to contain the same number of elements. In fact, consider the scenario of $\ibr_j\in\Bm$ with $j=1,2$ such that $\lmc (\ibr_1)=\lmc (\ibr_2)=\min\{\lmc (\Bm)\}$. Suppose that $\lcc (\ibr_j)\in\rat$ and $\lcc (\ibr_j)/\idr\in\rat$ for $j=1,2$ with $\idr:=\gcd (\lcc (\ibr_1),\lcc (\ibr_2))$. By Lemma \ref{Lemma:LinearEqGcd} there exist $\icr_j\in\rr$ for $j=1,2$ such that $\idr=\icr_1\cdot\lcc (\ibr_1)+\icr_2\cdot\lcc (\ibr_2)$. Now we construct a new irredundant basis $\Bm'$ by substituting $\ibr:=\icr_1\ibr_1+\icr_2\ibr_2$ for both $\ibr_1$ and $\ibr_2$ in $\Bm$. Then we still have $\langle\ltc (\Bm)\rangle=\langle\ltc (\Bm')\rangle$ as in \eqref{UnifiedIdentity} and moreover, $\lmc (\Bm)=\lmc (\Bm')$. \end{remark} \begin{definition}[Minimal basis]\label{Def:MinimalBasis} \hfill An irredundant basis $\Bm$ as in Definition \ref{Def:IrredundantBasis} is called a \emph{minimal} basis if for every $\tf\in\Bm$, its leading coefficient $\lcc (\tf)=\mr\cdot\gcd (\{\lcc (\ibr)\colon\ibr\in\dmb\Bm\tf\})$ with $\mr\in\rr^\times$ being a unit. Here $\dmb\Bm\tf$ and $\gcd$ are defined as in Notation \ref{Notation:DivisorSet}. \end{definition} In the example in Remark \ref{Rmk:BasisCardinal}, the basis $\Bm$ is irredundant but not minimal.
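Over $\mathbb{Z}$, the simplest PID, the construction in Remark \ref{Rmk:BasisCardinal} can be illustrated directly. The following Python sketch is a minimal illustration (the helper names \texttt{ext\_gcd} and \texttt{combine} are ours, not notations from the text): it combines two basis elements sharing the same leading monomial into a single element whose leading coefficient is the greatest common divisor of the two leading coefficients, as required by Definition \ref{Def:MinimalBasis}.

```python
def ext_gcd(a, b):
    """Extended Euclidean algorithm over the integers:
    returns (d, u, v) with d = gcd(a, b) = u*a + v*b."""
    if b == 0:
        return (abs(a), 1 if a >= 0 else -1, 0)
    d, u, v = ext_gcd(b, a % b)
    # d = u*b + v*(a % b) = v*a + (u - (a // b)*v)*b
    return (d, v, u - (a // b) * v)

def combine(b1, b2):
    """Combine two basis elements with the same leading monomial,
    given as coefficient lists (low degree first), into c1*b1 + c2*b2
    whose leading coefficient is gcd(lc(b1), lc(b2))."""
    d, c1, c2 = ext_gcd(b1[-1], b2[-1])
    return [c1 * s + c2 * t for s, t in zip(b1, b2)]

# b1 = 4X + 2 and b2 = 6X + 5: leading coefficients 4 and 6, gcd 2.
b = combine([2, 4], [5, 6])
assert b[-1] == 2  # the combined element has leading coefficient gcd(4, 6)
```

Here the B\'ezout coefficients\ $c_1,c_2$ stay small over $\mathbb{Z}$; the point of Section \ref{Sec:ComplexityComparison} is that over $\bQ [z]$ they generally do not.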
\begin{lemma} Let $\Bm\subset\Rm [\tilde{\bx}]$ be an irredundant basis as in Definition \ref{Def:IrredundantBasis}. For every $\tf\in\Bm$, let us denote $\idr:=\gcd (\{\lcc (\ibr)\colon\ibr\in\dmb\Bm\tf\})$ as in Notation \ref{Notation:DivisorSet}. Assume that $\idr=\sum_{\ibr\in\dmb\Bm\tf}c_\ibr\cdot\lcc (\ibr)$ with $c_\ibr\in\Rm$ as in Lemma \ref{Lemma:LinearEqGcd}. For every $\tf\in\Bm$, if we substitute $\tf$ by \begin{equation}\label{MiniCoeff} \tg:=\sum_{\ibr\in\dmb\Bm\tf}\frac{c_\ibr\cdot\lmc (\tf)}{\lmc (\ibr)}\ibr, \end{equation} we shall obtain a new basis denoted as $\Bm'$. We delete every $\tg\in\Bm'$ for which there exists $\tg'\in\Bm'$ with $\tg'\ne\tg$ such that $\ltc (\tg')=\mr\cdot\ltc (\tg)$ with $\mr\in\rr^\times$. If we still denote the new basis set as $\Bm'$, then $\Bm'$ is a minimal basis. \end{lemma} \begin{proof} We prove the irredundancy of $\Bm'$ by contradiction. Assume that there exists $\tg\in\Bm'$ such that $\tg$ is \Gd-reducible with respect to $\Bm'\setminus\{\tg\}$. Let us denote $\Omega:=\dmb{(\Bm'\setminus\{\tg\})}\tg\ne\emptyset$ as in Notation \ref{Notation:DivisorSet}. That is, for every $\ibr\in\Omega$ there is $c_\ibr\in\Rm$ such that the following identity holds. \begin{equation}\label{RedundantAssumption} \lcc (\tg)=\sum_{\ibr\in\Omega}c_\ibr\cdot\lcc (\ibr). \end{equation} Moreover, for every $\ibr\in\Omega$, we have $\lmc (\ibr)\prec\lmc (\tg)$. The reason is that if $\lmc (\ibr)=\lmc (\tg)$, we would have $\ltc (\ibr)=\mr\cdot\ltc (\tg)$ with $\mr\in\rr^\times$ by the definition of $\Bm'$ based on \eqref{MiniCoeff}. Then one of $\ibr$ and $\tg$ would have been deleted from $\Bm'$. Now every $\ibr\in\Omega$ satisfies $\ltc (\ibr)\in\langle\ltc (\Bm)\rangle$ since $\Omega\subset\Bm'\subset\Iq$ and $\Bm$ satisfies the identity $\langle\ltc (\Iq)\rangle=\langle\ltc (\Bm)\rangle$.
Hence $\ltc (\ibr)$ is \Gd-reducible with respect to $\Bm$ as per Lemma \ref{Lemma:LeadTermCond}, i.e., for every $\ar\in\dmb\Bm\ibr:=\Gamma_\ibr\ne\emptyset$, there exists $h_\ar\in\Rm$ such that $\lcc (\ibr)=\sum_{\ar\in\Gamma_\ibr}h_\ar\cdot\lcc (\ar)$. This combined with \eqref{RedundantAssumption} leads to: \begin{equation}\label{FinalCombination} \lcc (\tg)=\sum_{\ibr\in\Omega}\sum_{\ar\in\Gamma_\ibr}c_\ibr h_\ar\cdot\lcc (\ar). \end{equation} From \eqref{FinalCombination} we can infer that $\ltc (\tg)$ is \Gd-reducible with respect to $\bigcup_{\ibr\in\Omega}\Gamma_\ibr$. Moreover, the construction of $\tg\in\Bm'$ in \eqref{MiniCoeff} is based on an $\tf\in\Bm$ such that $\lmc (\tf)=\lmc (\tg)$ and $\lcc (\tf)$ is divisible by $\lcc (\tg)$. Hence $\ltc (\tf)$ is \Gd-reducible with respect to $\bigcup_{\ibr\in\Omega}\Gamma_\ibr$. But we already proved that $\lmc (\ibr)\prec\lmc (\tg)=\lmc (\tf)$ for every $\ibr\in\Omega$. This indicates that $\tf\notin\bigcup_{\ibr\in\Omega}\Gamma_\ibr$ and thus we have $\bigcup_{\ibr\in\Omega}\Gamma_\ibr\subset\dmb\Bm{\tg}\setminus\{\tf\}\subset\Bm\setminus\{\tf\}$. Hence $\ltc (\tf)$ is \Gd-reducible with respect to $\Bm\setminus\{\tf\}$. This contradicts the assumption that $\Bm$ is irredundant. The minimality of $\Bm'$ readily follows from Definition \ref{Def:MinimalBasis}. \end{proof} \begin{lemma}\label{Lemma:MinimalBases} If $\Bm$ and $\Bm'$ are minimal bases of the same ideal $\Iq$ as in Definition \ref{Def:MinimalBasis}, then we have two equal sets of leading terms $\ltc (\Bm)=\ltc (\Bm')$ and moreover, $\Bm$ and $\Bm'$ have the same number of basis elements. \end{lemma} \begin{proof} Since minimal bases are irredundant, we already have $\lmc (\Bm)=\lmc (\Bm')$ as per Lemma \ref{Lemma:IrredundantBases}. For every $\ibr\in\Bm$, there exists $\ibr'\in\Bm'$ such that $\lmc (\ibr)=\lmc (\ibr')$.
Moreover, $\ltc (\ibr')$ is \Gd-reducible with respect to $\Bm$ and hence $\lcc (\ar)$ is divisible by $\lcc (\ibr')$ for every $\ar\in\dmb\Bm{\ibr'}$ due to the minimality of $\Bm'\ni\ibr'$. Now $\ibr\in\dmb\Bm{\ibr'}$ and hence $\lcc (\ibr)$ is divisible by $\lcc (\ibr')$. Similarly $\lcc (\ibr')$ is divisible by $\lcc (\ibr)$ as well. Thus we also have $\lcc (\ibr)=\mr\cdot\lcc (\ibr')$ with $\mr\in\rr^\times$. The conclusion for the number of elements is immediate from $\ltc (\Bm)=\ltc (\Bm')$ and the irredundancy of $\Bm$ and $\Bm'$. \end{proof} \begin{definition}[Reduced basis]\label{Def:ReducedBases} \hfill A minimal basis $\Bm$ as in Definition \ref{Def:MinimalBasis} is said to be a \emph{reduced} basis if every $\ibr\in\Bm$ is \Gd-reduced with respect to $\Bm\setminus\{\ibr\}$. That is, every nonzero term of $\ibr$ is \Gd-reduced with respect to $\Bm\setminus\{\ibr\}$. \end{definition} \begin{lemma}\label{Lemma:ReducedBases} If both $\Bm$ and $\Bm'$ are reduced bases of the same ideal $\Iq$ as in Definition \ref{Def:ReducedBases}, then we have two equal bases $\Bm=\Bm'$. \end{lemma} \begin{proof} According to Lemma \ref{Lemma:MinimalBases}, we have $\ltc (\Bm)=\ltc (\Bm')$ since both of them are minimal bases. Hence a term $c_\alpha\tilde{\bx}^\alpha$ is \Gd-reduced with respect to $\Bm$ if and only if it is \Gd-reduced with respect to $\Bm'$. Now for every $\ibr\in\Bm$, there is $\ibr'\in\Bm'$ such that $\ltc (\ibr)=\ltc (\ibr')$. Let us assume that $\ibr-\ibr'\ne 0$. By $\ibr-\ibr'\in\Iq$, we have $\ltc (\ibr-\ibr')\in\langle\ltc (\Bm)\rangle$ as per the identity \eqref{UnifiedIdentity}. Then we have $\ltc (\ibr-\ibr')\in\langle\ltc (\Bm\setminus\{\ibr\})\rangle$ since $\ltc (\ibr-\ibr')\prec\ltc (\ibr)$. Hence $\ltc (\ibr-\ibr')$ is \Gd-reducible with respect to $\Bm\setminus\{\ibr\}$ by Lemma \ref{Lemma:LeadTermCond}. 
On the other hand, every nonzero term of $\ibr$ and $\ibr'$ is \Gd-reduced with respect to $\Bm\setminus\{\ibr\}$ and $\Bm'\setminus\{\ibr'\}$ respectively by Definition \ref{Def:ReducedBases}. By $\ltc (\Bm\setminus\{\ibr\})=\ltc (\Bm'\setminus\{\ibr'\})$, we can infer that every nonzero term of $\ibr'$ is \Gd-reduced with respect to $\Bm\setminus\{\ibr\}$ as well. Thus we can conclude that every nonzero term of $\ibr-\ibr'$ is \Gd-reduced with respect to $\Bm\setminus\{\ibr\}$. In particular, $\ltc (\ibr-\ibr')$ is \Gd-reduced with respect to $\Bm\setminus\{\ibr\}$. This constitutes a contradiction. \end{proof} \begin{remark} Please note that a reduced basis is not necessarily a strong basis like the strong Gr\"obner\ basis. A strong Gr\"obner\ basis is defined as\footnote{Please also refer to \cite[P251, Definition 4.5.6]{AL94}.} a finite set $\tbs:=\{\tg_j\colon 1\le j\le s\}$ such that for every $\tf\in\langle\tbs\rangle$, there exists a $\tg_j\in\tbs$ such that $\ltc (\tf)$ is divisible by $\ltc (\tg_j)$. Consider the following counterexample. Let $I=\langle\tf,\tg\rangle\subset\rr [x,y]$ be an ideal with basis $\tf=(z+1)^2x+r(z)$ and $\tg=(z^2-1)y+s(z)$ such that $r,s\in\rr$. An invocation of Algorithm \ref{Algo:ProperEliminant} shows that their $S$-polynomial as in \eqref{SPolyPQR} can be properly reduced to $0$. Hence $\{\tf,\tg\}$ constitutes a proper basis of $I$ by Definition \ref{Def:ProperEliminant} and satisfies the identity \eqref{QrLeadTermChar} by Theorem \ref{Thm:MemberCharModulo}. Then it is easy to verify that $\{\tf,\tg\}$ constitutes a reduced basis by Definition \ref{Def:ReducedBases}. Nevertheless it does not constitute a strong basis of $I$. In fact, consider $h:=y\tf-x\tg\in I$. Then $\ltc (h)=2(z+1)xy$ is divisible by neither $\ltc (\tf)=(z+1)^2x$ nor $\ltc (\tg)=(z^2-1)y$. 
\end{remark} \begin{remark} The uniqueness of the reduced bases as in Lemma \ref{Lemma:ReducedBases} comes at a price: the intermediate B\'ezout coefficients\ such as the $c_\ibr$'s in \eqref{MiniCoeff} can grow to phenomenal sizes. We shall elaborate on this issue when we make a complexity comparison between the new type of bases and Gr\"obner\ bases in Section \ref{Sec:ComplexityComparison}. Example \ref{Expl:BezoutCoeffs} exemplifies this wild growth of B\'ezout coefficients. \end{remark} \section{Further Improvements on the Algorithm} \label{Sec:MinorImprovements} We already strove to simplify our algorithms via Corollary \ref{Cor:CoprimePair} and Corollary \ref{Cor:PQRCoprime} as well as the triangular identities in Lemma \ref{Lemma:TriangleIdentity} and Lemma \ref{Lemma:TriangleIdentityPQR}. In this section we make further improvements to the efficiency and complexity of Algorithm \ref{Algo:PseudoEliminant} and Algorithm \ref{Algo:ProperEliminant}. Recall that in Algorithm \ref{Algo:PseudoEliminant} and Algorithm \ref{Algo:ProperEliminant} we have temporary sets $\tbs$ and $\tas$ respectively containing the basis elements, as well as the temporary sets $\mathfrak{S}$ containing the $S$-polynomials. We also have \Po P for the pseudo-reduction and proper reduction of the $S$-polynomials in $\mathfrak{S}$ respectively. Let us supplement the following principles to improve the efficiency of Algorithm \ref{Algo:PseudoEliminant} and Algorithm \ref{Algo:ProperEliminant}. \begin{principle}[Minimal principle]\label{Principle:PseudoRedPrinciple} \begin{inparaenum}[(i)] \hfill \item\label{item:ListOrderPrinciple} We always list the elements in the temporary set $\tbs$ in Algorithm \ref{Algo:PseudoEliminant} and temporary set $\tas$ in Algorithm \ref{Algo:ProperEliminant} such that their leading terms are in increasing order with respect to the monomial ordering.
\item\label{item:ReduceOrderPrinciple} When we make a pseudo-reduction or proper reduction of an $S$-polynomial in $\mathfrak{S}$ in \Po P, we always first use the basis elements in the temporary set $\tbs$ in Algorithm \ref{Algo:PseudoEliminant} or $\tas$ in Algorithm \ref{Algo:ProperEliminant} whose leading terms are as small as possible with respect to the monomial ordering. \item\label{item:LeastSPoly} When we choose an $S$-polynomial in $\mathfrak{S}$ for pseudo-reduction or proper reduction by \Po P, we always choose the one whose leading term is as small as possible with respect to the monomial ordering. \item\label{item:OnceTriangle} For each triplet we can invoke a triangular identity as in \eqref{TriangleIdentity} or \eqref{TriangleIdentityPQR} at most once. Moreover, we choose the triangular identity such that the multiplier $\mr$ in \eqref{TriangleIdentity} or \eqref{TriangleIdentityPQR} is as simple as possible. More specifically, in the case of \eqref{TriangleIdentity} we require that the degree of the squarefree part of $\mr$ as in Definition \ref{Def:SquarefreePart} be as small as possible. In the case of \eqref{TriangleIdentityPQR}, we require that the degrees of both the unit factor $\mr^\times$ and standard factor $\mr_\st$ as in \eqref{StdRepsPQR} be as small as possible. The degree of $\mr_\st$ takes priority in the comparison. \item\label{item:SquarefreeChoice} In Algorithm \ref{Algo:ProperEliminant} when we initialize the temporary set $\tas:=\mo (\tbs)$ by applying the epimorphism $\mo$ in \eqref{ExtendedProjectionPQR} to the original basis $\tbs$, we should choose the representations in $\rr$ of the coefficients of the basis elements in $\tas$ such that their squarefree parts are as simple as possible in terms of both degrees and coefficients. The priority of the comparison is given to the degrees of the squarefree parts of the leading coefficients.
This is especially true for the case when we already have the factorizations of the coefficients. \end{inparaenum} \end{principle} Principle \ref{Principle:PseudoRedPrinciple} enhances efficiency by imposing a preference or direction for the implementation of Algorithm \ref{Algo:PseudoEliminant} to follow. In effect, whenever we make a pseudo-reduction of an $S$-polynomial in $\mathfrak{S}$, we usually obtain a remainder $r$ with the least leading term in $\tbs$. Then the remainder $r$ generates a new $S$-polynomial with the least leading term in $\mathfrak{S}$. We choose to make a pseudo-reduction of this new $S$-polynomial in $\mathfrak{S}$ according to Principle \ref{Principle:PseudoRedPrinciple} \eqref{item:LeastSPoly}. A repetition of this process with strictly decreasing leading terms inevitably leads to a temporary pseudo-eliminant in $\kxn$ before we have pseudo-reduced all the $S$-polynomials in $\mathfrak{S}$. Let us denote the temporary pseudo-eliminant as $\tel$. It is easy to see that the pseudo-eliminant $\pel$, the final output of Algorithm \ref{Algo:PseudoEliminant}, satisfies $\tel\in\pid\pel\subset\kxn$. In what follows let us use the temporary pseudo-eliminant $\tel$ to further improve Lemma \ref{Lemma:TriangleIdentity} and Corollary \ref{Cor:CoprimePair}. \begin{lemma}\label{Lemma:TriangleIdentityImprove} Let $\tel$ be a temporary pseudo-eliminant in Algorithm \ref{Algo:PseudoEliminant} such that $\tel\in\pid\pel\subset\kxn$ with $\pel$ being the pseudo-eliminant. For $f,g,h\in\Rd$, suppose that $\lcm (\lmc (f),\lmc (g))\in\langle\lmc (h)\rangle$. If the multiplier $\mr=\lcc (h)/\idr$ as in \eqref{TriangleIdentity} is relatively prime to $\tel$, then it is unnecessary to add $\mr$ into the multiplier set $\mrs$ in \Po Q of Algorithm \ref{Algo:PseudoEliminant}. We simply disregard the $S$-polynomial $S(f,g)$.
\end{lemma} \begin{proof} The multiplier $\mr$ in \eqref{TriangleIdentity} is relatively prime to the temporary pseudo-eliminant $\tel$ and hence to the pseudo-eliminant $\pel$ as well as the eliminant $\el$. The proof of Theorem \ref{Thm:CompatiblePart} demonstrates that such multipliers compromise neither the soundness of our arguments nor our conclusions since they would appear as factors of the multiplier $\fir$ in \eqref{Finally}. \end{proof} \begin{lemma}\label{Lemma:CoprimePair} Let $\tel$ be a temporary pseudo-eliminant in Algorithm \ref{Algo:PseudoEliminant} such that $\tel\in\pid\pel\subset\kxn$ with $\pel$ being the pseudo-eliminant. Suppose that $\lmc (f)$ and $\lmc (g)$ are relatively prime for $f,g\in\Rd$. If $\idr=\gcd (\lcc (f),\lcc (g))$ as in \eqref{CoprimeReduction} is relatively prime to $\tel$, then it is unnecessary to add $\mr=\idr$ into the multiplier set $\mrs$ in \Po Q of Algorithm \ref{Algo:PseudoEliminant}. We simply disregard the $S$-polynomial $S(f,g)$. \end{lemma} The reason for disregarding the multiplier $\idr$ in Lemma \ref{Lemma:CoprimePair} is the same as that for disregarding the multiplier $\mr$ in Lemma \ref{Lemma:TriangleIdentityImprove}. \begin{remark} Please note that after we obtain the temporary pseudo-eliminant $\tel$ in Algorithm \ref{Algo:PseudoEliminant}, we refrain from simplifying the leading coefficients of the basis elements by substituting $\gcd (\lcc (f),\tel)$ for $\lcc (f)$ notwithstanding the temptation of converting $f$ into a monic polynomial. The reason is that such simplifications involve the B\'ezout coefficients\ that most probably have gigantic sizes over $\bQ$. Please refer to Example \ref{Expl:BezoutCoeffs} for an example on this.
\end{remark} Conformity to Principle \ref{Principle:PseudoRedPrinciple} during the implementation of Algorithm \ref{Algo:ProperEliminant} also yields, in most cases, a temporary proper eliminant denoted as $\tee$ before the output of the proper eliminant $\ee$, such that $\tee\in\pid\ee\subset\rr$. When $\tee\in\rr^\ast$, similar to Lemma \ref{Lemma:TriangleIdentityImprove} and Lemma \ref{Lemma:CoprimePair}, we can make improvements on Lemma \ref{Lemma:TriangleIdentityPQR} and Corollary \ref{Cor:PQRCoprime} as follows. \begin{lemma}\label{Lemma:NewTriangleIdentity} Let $\tee\in\rr^\ast$ be a temporary proper eliminant in Algorithm \ref{Algo:ProperEliminant} such that $\tee\in\pid\ee\subset\rr$ with $\ee$ being the proper eliminant. For $f,g,h\in\rnd$ with at most one of them in $\rat$, suppose that $\lcm (\lmc (f),\lmc (g))\in\langle\lmc (h)\rangle$. If the multiplier $\mr=\mo (l_h/\idr)$ as in \eqref{TriangleIdentityPQR} is relatively prime to the temporary proper eliminant $\tee$, then we simply disregard the $S$-polynomial $S(f,g)$ in \Po R of Algorithm \ref{Algo:ProperEliminant}. \end{lemma} We omit the proof of Lemma \ref{Lemma:NewTriangleIdentity} since it is almost a verbatim repetition of that for Lemma \ref{Lemma:TriangleIdentityImprove}. In fact, if the multiplier $\mr$ is relatively prime to the temporary proper eliminant $\tee$, then so is it to the proper eliminant $\ee$. Such multipliers affect neither our arguments nor our conclusions in Theorem \ref{Thm:IncompatiblePart}. \begin{lemma}\label{Lemma:PQRCoprime} Let $\tee\in\rr^\ast$ be a temporary proper eliminant in Algorithm \ref{Algo:ProperEliminant} such that $\tee\in\pid\ee\subset\rr$ with $\ee$ being the proper eliminant. Suppose that $\lmc (f)$ and $\lmc (g)$ are relatively prime for $f,g\in\rqd$.
If $\idr:=\gcd_\qr (\lcc (f),\lcc (g))$ as in \eqref{CoprimeReductionPQR} is relatively prime to the temporary proper eliminant $\tee$, then we simply disregard their $S$-polynomial $S(f,g)$ in \Po R of Algorithm \ref{Algo:ProperEliminant}. \end{lemma} \begin{remark}\label{Rmk:ModuloBaseChange} After we obtain a temporary proper eliminant $\tee$ in Algorithm \ref{Algo:ProperEliminant}, we can simplify computations by implementing the remaining part of the algorithm in $\rex$ over the normal PQR $\re\simeq\rr/\pid\tee$, which are defined similarly to \eqref{ProperPQR} and \eqref{FinalProjection}. \end{remark} In Lemma \ref{Lemma:PQRCoprime} we can deduce that the multiplier $\idr$ for the proper reduction is relatively prime to the proper eliminant $\ee$ since $\tee\in\pid\ee\subset\rr$. By Lemma \ref{Lemma:PQRProperties} \eqref{item:SimpleCoprime} we have a unit multiplier $\mq (\idr)\in\re^\times$ with $\mq$ defined as in \eqref{FinalProjection}. Hence the $S$-polynomial $S(f,g)$ can be disregarded for our conclusions. \section{Complexity Comparison with Gr\"obner\ Bases} \label{Sec:ComplexityComparison} A conspicuous phenomenon in the computation of Gr\"obner\ bases over $\bQ$ in \lex\ ordering is the explosion of intermediate coefficients. The time and memory consumed in the computation of $S$-polynomials constitute another computational burden. In this section we prove two lemmas as exemplary illustrations showing that our new type of bases mitigates these two problems to a substantial extent. Let $K$ be a field and $f,g\in (K[x])^\ast$. The polynomials $u,v\in K[x]$ satisfying $uf+vg=\gcd (f,g)$ are called the \emph{B\'ezout coefficients}\ of $f$ and $g$. The following Example \ref{Expl:BezoutCoeffs} shows that although $f,g\in\bQ[x]$ have moderate integral coefficients, their B\'ezout coefficients\ can swell to quite unpalatable sizes.
\begin{example}\label{Expl:BezoutCoeffs} \begin{align*} f(x)&:=(7x^{10}-9x^8-21x^7+13x^6+29x^5-34x^4- 56x^3-14x^2+3x+1)^2;\\ g(x)&:=(6x^{10}+15x^9+x^8-16x^7-37x^6+64x^5+18x^4+ 5x^3-3x^2-4x-1)^2. \end{align*} \end{example} We refrain from printing out the B\'ezout coefficients\ in Example \ref{Expl:BezoutCoeffs} due to their unwieldy sizes; they can be calculated via any popular software for symbolic computation. For a field $K$ and polynomial $f\in K[\bm{x}]$, let us use $\lt (f)$, $\lm (f)\in\bM$ and $\lc (f)\in K^\ast$ to denote the leading term, leading monomial and leading coefficient of $f$ over the field $K$ respectively. This distinguishes them from our previous notations for the leading term $\ltc (f)$, leading monomial $\lmc (f)\in\tM$ and leading coefficient $\lcc (f)\in\kxn$ of $f$ over the PID $\kxn$ when we treat $K[\bm{x}]$ as $\knx$. For $f,g\in K[\bm{x}]\setminus\{0\}$ with $\lm (f)\in\langle\lm (g)\rangle$, recall that after the first step of polynomial division of $f$ by $g$, we have: \begin{equation}\label{OneOldDivision} h=f-\frac{\lt (f)}{\lt (g)}g. \end{equation} A continuation of the polynomial division in \eqref{OneOldDivision} yields a representation: \begin{equation}\label{OldDivision} f=qg+r \end{equation} with the quotient $q$ and remainder $r$ in $K[\bm{x}]$ such that $r$ is reduced with respect to $g$, that is, $r\notin\langle\lm (g)\rangle\setminus\{0\}$. Also recall that in the computation of Gr\"obner\ bases over the field $K$, the $S$-polynomial of $f,g\in K[\bm{x}]\setminus\{0\}$ over $K$ is defined as: \begin{equation}\label{OldSPolyn} S(f,g):=\frac{\bm{x}^\gamma}{\lt (f)}f-\frac{\bm{x}^\gamma}{\lt (g)}g \end{equation} with $\bm{x}^\gamma:=\lcm (\lm (f),\lm (g))\in\bM$.
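The swell can be observed directly by running the extended Euclidean algorithm over $\bQ [x]$ with exact rational arithmetic. The following Python sketch is a minimal illustration with our own helper names (coefficient lists are low degree first): applying \texttt{ext\_euclid} to the polynomials of Example \ref{Expl:BezoutCoeffs} exhibits the swollen B\'ezout coefficients, while the small pair below merely verifies the identity $uf+vg=\gcd (f,g)$ up to an associate.

```python
from fractions import Fraction

def trim(p):
    """Drop trailing zero coefficients (lists are low degree first)."""
    while p and p[-1] == 0:
        p = p[:-1]
    return p

def add(p, q):
    n = max(len(p), len(q))
    return trim([(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
                 for i in range(n)])

def neg(p):
    return [-c for c in p]

def mul(p, q):
    if not p or not q:
        return []
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return trim(r)

def divmod_poly(a, b):
    """Division over a field: a = q*b + r with deg r < deg b."""
    q, r = [], trim(list(a))
    while r and len(r) >= len(b):
        k, c = len(r) - len(b), r[-1] / b[-1]
        term = [Fraction(0)] * k + [c]
        q = add(q, term)
        r = add(r, neg(mul(term, b)))
    return q, r

def ext_euclid(f, g):
    """Returns (d, u, v) with u*f + v*g = d, an associate of gcd(f, g)."""
    r0, r1 = trim([Fraction(c) for c in f]), trim([Fraction(c) for c in g])
    u0, u1 = [Fraction(1)], []
    v0, v1 = [], [Fraction(1)]
    while r1:
        q, r = divmod_poly(r0, r1)
        r0, r1 = r1, r
        u0, u1 = u1, add(u0, neg(mul(q, u1)))
        v0, v1 = v1, add(v0, neg(mul(q, v1)))
    return r0, u0, v0

# f = (x + 1)^2 (x + 2) and g = (x + 1)(x + 3): gcd is an associate of x + 1.
f, g = [2, 5, 4, 1], [3, 4, 1]
d, u, v = ext_euclid(f, g)
assert add(mul(u, f), mul(v, g)) == d  # Bezout identity u*f + v*g = d
```

On inputs of the size of Example \ref{Expl:BezoutCoeffs}, the rational entries of $u$ and $v$ grow far beyond the sizes of the input coefficients, which is precisely the phenomenon discussed in this section.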
Please note that when $\lm (f)\in\langle\lm (g)\rangle$ in \eqref{OldSPolyn}, then $\bm{x}^\gamma=\lm (f)$ and we have the following relationship between the $S$-polynomial $S(f,g)$ in \eqref{OldSPolyn} and polynomial division in \eqref{OneOldDivision}: \begin{equation}\label{SPolyDivisionRelation} \lc (f)\cdot S(f,g)=h. \end{equation} \begin{lemma}\label{Lemma:SimpleBezout} For a field $K$ and elimination ordering $z\prec x$ on $K[x,z]$, consider the ideal $I:=\langle ax+c,bx+d\rangle$ with $a,b,c,d\in (K[z])^\ast$. Suppose that $I$ is a zero-dimensional ideal such that $ad-bc\ne 0$. Then the computation of Gr\"obner\ basis of $I$ contains Euclidean algorithm for the computation of $\gcd (a,b)$. In particular, the B\'ezout coefficients\ of $a$ and $b$ appear in the intermediate coefficients of the computation. \end{lemma} \begin{proof} Without loss of generality, suppose that $\lt (a)=c_\alpha z^\alpha$ and $\lt (b)=c_\beta z^\beta$ with $c_\alpha,c_\beta\in K^\ast$ and $\alpha\ge\beta$. Suppose that the first step of polynomial division of $ax+c$ by $bx+d$ as in \eqref{OneOldDivision} is $a_1x+c_1:=ax+c-(c_\alpha/c_\beta)z^{\alpha-\beta}(bx+d)$. According to the identity in \eqref{SPolyDivisionRelation}, the classical $S$-polynomial as in \eqref{OldSPolyn} is essentially $a_1x+c_1$ up to a unit multiplier $c_\alpha$, that is, $c_\alpha S(ax+c,bx+d)=a_1x+c_1$. If we have $\deg (a_1)\ge\deg (b)$, a continuation of the division of $a_1x+c_1$ by $bx+d$ as in \eqref{OldDivision} coincides with the reduction of the $S$-polynomial $c_\alpha S(ax+c,bx+d)$ by $bx+d$ in the computation of Gr\"obner\ basis for $I$: \begin{equation}\label{OldReduction} c_\alpha S(ax+c,bx+d)=a_1x+c_1=q(bx+d)+b_1x+d_1 \end{equation} with $q,b_1,d_1\in K[z]$ such that $\deg (b_1)<\deg (b)$. Let us assume that $b_1d_1\ne 0$ in \eqref{OldReduction}. In the computation of Gr\"obner\ basis, we add $b_1x+d_1$ into the temporary set of basis for $I$ in this case. 
Then we compute the $S$-polynomial $S(bx+d,b_1x+d_1)$ and further reduce it by $b_1x+d_1$. Similar to the above discussion in \eqref{OldReduction} based on the identity \eqref{SPolyDivisionRelation}, this exactly coincides with the step of Euclidean algorithm in which we make a polynomial division of $bx+d$ by $b_1x+d_1$. A repetition of the above process shows that the computation of Gr\"obner\ basis for $I$ amounts to an implementation of Euclidean algorithm for the computation of $\gcd (a,b)$, i.e., the greatest common divisor of the leading coefficients $a,b\in (K[z])^\ast$, albeit with unit multipliers like $c_\alpha=\lc (a)$ in \eqref{OldReduction}. Let us denote $\rho:=\gcd (a,b)$ with B\'ezout coefficients\ $u,v\in K[z]$ such that $\rho=ua+vb$. According to Euclidean algorithm, after the above computations we shall obtain $u(ax+c)+v(bx+d)=\rho x+uc+vd$. We can obtain the same result via the computation and reduction of $S$-polynomials if we assimilate the unit multipliers like $c_\alpha$ in \eqref{OldReduction} into the B\'ezout coefficients\ $u,v$. We add $\rho x+uc+vd$ into the temporary set of basis for $I$ and use it to eliminate the variable $x$ so as to obtain the eliminant $\el$ of $I$. Let us denote $a=m\rho$ and $b=n\rho$ with $m,n\in K[z]$. Similar to the above discussions, the process of reducing the $S$-polynomial $c_\alpha S(ax+c,\rho x+uc+vd)$ by $\rho x+uc+vd$ amounts to the elimination of the variable $x$ as follows. \begin{equation}\label{EliminateX1} ax+c-m(\rho x+uc+vd)=c-m(uc+vd)=\frac{c\rho-a(uc+vd)}{\rho}=\frac{v(bc-ad)}{\rho} \end{equation} since $\rho=ua+vb$. Similarly we have: \begin{equation}\label{EliminateX2} bx+d-n(\rho x+uc+vd)=-\frac{u(bc-ad)}{\rho}. \end{equation} From $u(a/\rho)+v(b/\rho)=1$ we can infer that the B\'ezout coefficients\ $u$ and $v$ are relatively prime to each other. 
Hence the eliminant $\el$ can be obtained from \eqref{EliminateX1} and \eqref{EliminateX2} as follows: \begin{equation}\label{OldEliminant} \el=\gcd\Bigl(\frac{v(bc-ad)}{\rho},-\frac{u(bc-ad)}{\rho}\Bigr)=\frac{bc-ad}{\rho}. \end{equation} The B\'ezout coefficients\ $u$ and $v$ appear in \eqref{EliminateX1} and \eqref{EliminateX2} but not in the eliminant $\el$ in \eqref{OldEliminant}. \end{proof} Lemma \ref{Lemma:SimpleBezout} represents a more generic scenario than the special form of the ideal $I=\langle ax+c,bx+d\rangle$ suggests. In fact, in the final steps of the computation of the eliminant of a zero-dimensional ideal, we often run into the situation described in the lemma. In the case of Lemma \ref{Lemma:SimpleBezout} over the PID $K[z]$, the intermediate coefficients in \eqref{EliminateX1} and \eqref{EliminateX2} contain the B\'ezout coefficients\ $u$ and $v$ of the leading coefficients $a$ and $b$ as their factors, which tend to swell over the rational field $\bQ$ as in Example \ref{Expl:BezoutCoeffs}. Nonetheless in terms of our new type of $S$-polynomials as in \eqref{SPolynomialDef} over the PID $K[z]$, we have a straightforward computation as follows. \begin{equation}\label{NewEliCompute} S(ax+c,bx+d)=\lambda (ax+c)-\mu (bx+d)=\lambda c-\mu d=\frac{bc-ad}{\rho}=\el, \end{equation} where the two multipliers $\lambda:=l/a=b/\rho=n$ and $\mu:=l/b=a/\rho=m$ with $l:=\lcm (a,b)$ and $\rho=\gcd (a,b)$ such that $a=m\rho$ and $b=n\rho$. The eliminant $\el$ in \eqref{OldEliminant} is obtained in one step in \eqref{NewEliCompute} without resorting to the B\'ezout coefficients\ $u$ and $v$ of the leading coefficients $a$ and $b$. Now let us generalize Lemma \ref{Lemma:SimpleBezout} to a generic scenario as follows.
\begin{lemma}\label{Lemma:OldGbasis} With a field $K$ and elimination ordering on $\bM$ as in Definition \ref{Def:EliminationOrdering}, suppose that the generators $f$ and $g$ of the ideal $I=\langle f,g\rangle\subset\knx$ satisfy $\ltc (f)=a\tilde{\bx}^\alpha$ and $\ltc (g)=b\tilde{\bx}^\beta$ with the leading coefficients $a,b\in (\kxn)^\ast$. Then the computation of Gr\"obner\ basis of $I$ contains Euclidean algorithm for the computation of $\gcd (a,b)$. In particular, the B\'ezout coefficients\ of $a$ and $b$ appear in the intermediate coefficients of the computation. \end{lemma} \begin{proof} Without loss of generality, suppose that $s=\deg (a)\ge\deg (b)=t$. Let us denote $\lc (f)=c$ and $\lc (g)=d$ in $K^\ast$ respectively. Then $\lt (f)=cx_1^s\tilde{\bx}^\alpha$ and $\lt (g)=dx_1^t\tilde{\bx}^\beta$. With $\tilde{\bx}^\gamma:=\lcm (\tilde{\bx}^\alpha,\tilde{\bx}^\beta)$, we have $\lcm (\lm (f),\lm (g))=x_1^s\tilde{\bx}^\gamma$. The $S$-polynomial in \eqref{OldSPolyn} now bears the following form: \begin{equation}\label{GenericSPolyn} cS(f,g)=\tilde{\bx}^{\gamma-\alpha}f-\frac{cx_1^{s-t}}d\tilde{\bx}^{\gamma-\beta}g=\Bigl(a-\frac{cx_1^{s-t}}db\Bigr)\tilde{\bx}^\gamma+\tilde{\bx}^{\gamma-\alpha}\Bigl(f_1-\frac{\lt (f)}{\lt (g)}g_1\Bigr) \end{equation} with $f_1:=f-\ltc (f)=f-a\tilde{\bx}^\alpha$ and $g_1:=g-\ltc (g)=g-b\tilde{\bx}^\beta$. Since we also have $c=\lc (a)$ and $d=\lc (b)$ respectively, the leading coefficient of $cS(f,g)$ in $\knx$ in \eqref{GenericSPolyn}, i.e., $\lcc (cS(f,g))=a-(c/d)x_1^{s-t}b:=a_1\in\kxn$, is exactly the first step of the polynomial division of $a$ by $b$ in $\kxn$. If $\deg (a_1)\ge\deg (b)$, then a further reduction of the $S$-polynomial $cS(f,g)$ in \eqref{GenericSPolyn} by $g$ amounts to a polynomial division of $a_1$ by $b$ as in \eqref{OldDivision}. Suppose that we have $a_1=qb+r$ with $q,r\in\kxn$ such that $\deg (r)<\deg (b)$. 
Then $cS(f,g)$ is reduced by $g$ to a polynomial $h\in\knx$ bearing the form $h=r\tilde{\bx}^\gamma+h_1$ such that $\ltc (h)=r\tilde{\bx}^\gamma$. For simplicity, let us assume that $h_1$ is already reduced with respect to $g$, that is, no term of $h_1$ is in $\langle\lm (g)\rangle\subset K[\bm{x}]$. Next in the computation of Gr\"obner\ basis, we add $h$ into the basis $\{f,g\}$ of $I$ and compute the $S$-polynomial $S(g,h)$. Similar to \eqref{GenericSPolyn}, the leading coefficient $\lcc (dS(g,h))$ of $\tilde{\bx}^\gamma$ amounts to the first step of the polynomial division of $b$ by $r$ in $\kxn$. This, together with a further reduction of $dS(g,h)$ by $h$, exactly coincides with the step of Euclidean algorithm in which we make a polynomial division of $b$ by $r$ in $\kxn$. A repetition of the above process shows that the computation of Gr\"obner\ basis for $I$ contains an implementation of Euclidean algorithm for the computation of $\gcd (a,b)$, i.e., the greatest common divisor of the leading coefficients $a=\lcc (f)$ and $b=\lcc (g)$ in $\kxn$, albeit with unit multipliers like $c$ in \eqref{GenericSPolyn}. Let us denote $\rho:=\gcd (a,b)$ with B\'ezout coefficients\ $u,v\in\kxn$ such that $\rho=ua+vb$. In essence the computation of Gr\"obner\ basis amounts to a computation of the greatest common divisor of the leading coefficients. Hence based on Euclidean algorithm we have: \begin{equation}\label{GcdLeadCoeff} u\tilde{\bx}^{\gamma-\alpha}(a\tilde{\bx}^\alpha+f_1)+v\tilde{\bx}^{\gamma-\beta}(b\tilde{\bx}^\beta+g_1) =\rho\tilde{\bx}^\gamma+u\tilde{\bx}^{\gamma-\alpha}f_1+v\tilde{\bx}^{\gamma-\beta}g_1:=\iwr. \end{equation} We add $\iwr$ in \eqref{GcdLeadCoeff} into the basis of $I$ and then compute the $S$-polynomial $S(f,\iwr)$. We make a reduction of the $S$-polynomial $S(f,\iwr)$ by $\iwr$. Let us denote $a=m\rho$ and $b=n\rho$ with $m,n\in\kxn$.
From the perspective of $\knx$, this reduction process amounts to the following elimination of the leading term $a\tilde{\bx}^\alpha$ of $f=a\tilde{\bx}^\alpha+f_1$: \begin{equation}\label{CancelLeadTerm} \begin{aligned} \tilde{\bx}^{\gamma-\alpha}f-m\iwr&=(1-mu)\tilde{\bx}^{\gamma-\alpha}f_1-mv\tilde{\bx}^{\gamma-\beta}g_1\\ &=\frac{v(b\tilde{\bx}^{\gamma-\alpha}f_1-a\tilde{\bx}^{\gamma-\beta}g_1)}\rho. \end{aligned} \end{equation} Here the second equality follows from $mu+nv=1$, which is the B\'ezout identity $\rho=ua+vb$ divided by $\rho$, so that $1-mu=nv$ with $n=b/\rho$. Similarly we have: \begin{equation}\label{AlsoCancelLeadTerm} \tilde{\bx}^{\gamma-\beta}g-n\iwr=-\frac{u(b\tilde{\bx}^{\gamma-\alpha}f_1-a\tilde{\bx}^{\gamma-\beta}g_1)}\rho. \end{equation} Thus the B\'ezout coefficients\ $u$ and $v$ appear in the computation of a Gr\"obner\ basis for $I$, and they might swell over the rational field $K=\bQ$. \end{proof} With $l:=\lcm (a,b)$, let us denote two multipliers $\lambda:=l/a=b/\rho=n$ and $\mu:=l/b=a/\rho=m$. In terms of the $S$-polynomials as in \eqref{SPolynomialDef} over the PID $\kxn$, we have a straightforward computation of the $S$-polynomial $S(f,g)$ as follows. \begin{equation}\label{NewSPolynComput} \begin{aligned} S(f,g)&=\lambda\tilde{\bx}^{\gamma-\alpha}(a\tilde{\bx}^\alpha+f_1)-\mu\tilde{\bx}^{\gamma-\beta}(b\tilde{\bx}^\beta+g_1)\\ &=\lambda\tilde{\bx}^{\gamma-\alpha}f_1-\mu\tilde{\bx}^{\gamma-\beta}g_1 =(b\tilde{\bx}^{\gamma-\alpha}f_1-a\tilde{\bx}^{\gamma-\beta}g_1)/\rho. \end{aligned} \end{equation} In one step, \eqref{NewSPolynComput} yields a simpler result than \eqref{CancelLeadTerm} and \eqref{AlsoCancelLeadTerm} while avoiding the B\'ezout coefficients\ $u$ and $v$ of the leading coefficients $a=\lcc (f)$ and $b=\lcc (g)$, which might swell to an unexpected size over the rational field $K=\bQ$ as in Example \ref{Expl:BezoutCoeffs}.
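The swell of the B\'ezout coefficients\ is easy to witness in a computer algebra system. The following sympy sketch (an illustration only; the polynomials $a$ and $b$ are arbitrary choices standing in for the leading coefficients of the lemma, not taken from this paper) runs the extended Euclidean algorithm and checks the B\'ezout identity:

```python
from sympy import symbols, gcdex, gcd, simplify

z = symbols('z')

# Two arbitrary polynomials in Q[z] standing in for the leading
# coefficients a = lcc(f) and b = lcc(g) of the lemma.
a = z**5 + 3*z - 1
b = z**4 - 2*z + 5

# Extended Euclidean algorithm: u*a + v*b == rho with rho the monic gcd.
u, v, rho = gcdex(a, b, z)

# The Bezout identity holds exactly ...
assert simplify(u*a + v*b - rho) == 0
# ... and rho agrees with the gcd of a and b.
assert simplify(rho - gcd(a, b)) == 0
# For generic integer inputs the coefficients of u and v are rationals
# with large denominators -- the swell discussed in the lemma.
```

Inspecting `u` and `v` for such generic inputs exhibits exactly the kind of rational coefficients that \eqref{NewSPolynComput} avoids.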
\section{Examples and Paradigmatic Computations} \label{Sec:Examples} I furnish this section with two examples to demonstrate the computations of the new type of bases for zero-dimensional ideals. In these examples it is conspicuous that the intermediate coefficients as well as the coefficients of the basis elements remain of moderate size and do not swell as in the case of Gr\"obner\ bases over $\bQ$. Example \ref{Expl:SimplePseudoEliminant} is an excerpt from the textbook \cite[Chapter 8, \S 4, p.~426]{CLO05} with minor modifications. In this simple example the multiplier set $\mrs$ in Algorithm \ref{Algo:PseudoEliminant} is empty except in the final step. The pseudo-eliminant $\pel$ procured in this way is exactly the eliminant $\el$. \begin{example}\label{Expl:SimplePseudoEliminant} Suppose that the ideal $I=\langle f,g,h\rangle\subset\bQ [x,y,z]$ with \begin{equation}\label{OldBasisSimple} f=-x+y+z^2-1;\quad g=-zx+y^3+2;\quad h=x^2+x-zy. \end{equation} For the purpose of comparison, we list its classical Gr\"obner\ basis with respect to the \lex\ ordering $z\prec y\prec x$ as $\{p,g_1,g_2\}$ such that: \begin{equation}\label{ClassicalOutput} \begin{aligned} p&=z^{12}-3z^{10}-2z^8+4z^7+6z^6+14z^5-15z^4- 17z^3+z^2+9z+6;\\ g_1&=38977y+1055z^{11}+515z^{10}+42z^9-3674z^8-12955z^7+ 5285z^6-\\ &\quad -1250z^5+36881z^4+7905z^3+42265z^2- 63841z-37186;\\ g_2&=38977x+1055z^{11}+515z^{10}+42z^9-3674z^8-12955z^7+ 5285z^6-\\ &\quad -1250z^5+36881z^4+7905z^3+3288z^2- 63841z+1791. \end{aligned} \end{equation} Let us denote the temporary pseudo-basis set $\tbs:=\{f,g,h\}$. As per Principle \ref{Principle:PseudoRedPrinciple} \eqref{item:ListOrderPrinciple}, we list the elements in $\tbs$ in increasing order of their leading terms with respect to the \lex\ ordering as in \eqref{OldBasisSimple}. As in \Po Q of Algorithm \ref{Algo:PseudoEliminant}, we can disregard the $S$-polynomial $S(g,h)$ as per the triangular identity \eqref{TriangleIdentity}.
In fact, $\lcm (\ltc (g),\ltc (h))=-zx^2$ is divisible by $\ltc (f)=-x$ and hence the multiplier $\mr=1$ in \eqref{TriangleIdentity} in this case. The temporary $S$-polynomial set is $\mathfrak{S}=\{S(f,g),S(f,h)\}$. According to Principle \ref{Principle:PseudoRedPrinciple} \eqref{item:LeastSPoly}, we first compute the $S$-polynomial: \begin{equation}\label{OrderThreeBasis} S(f,g)=zf- g=-y^3+zy+z^3-z-2:=e. \end{equation} We add it into $\tbs$ such that $\tbs=\{e,f,g,h\}$. We name it as the first element $e$ according to Principle \ref{Principle:PseudoRedPrinciple} \eqref{item:ListOrderPrinciple} because $\ltc (e)$ is less than every element in $\ltc (\tbs\setminus\{e\})$ and hence cannot be pseudo-reduced by $\tbs\setminus\{e\}$ as in Theorem \ref{Thm:PseudoReduction}. Then we delete $S(f,g)$ from $\mathfrak{S}$. We can disregard the $S$-polynomials $S(e,f)$, $S(e,g)$ and $S(e,h)$ as in \Po Q of Algorithm \ref{Algo:PseudoEliminant}. This is based on Corollary \ref{Cor:CoprimePair} since $\ltc (e)$ is relatively prime to every element in $\ltc (\tbs\setminus\{e\})$. We compute the $S$-polynomial \begin{equation*} S(f,h)=xf+h=xy+z^2x-zy \end{equation*} and pseudo-reduce its leading term $xy$ by $f$ like in \eqref{TermReduction} as per Principle \ref{Principle:PseudoRedPrinciple} \eqref{item:ReduceOrderPrinciple}. The remainder of the term pseudo-reduction is as follows: \[ r=S(f,h)+yf=z^2x+y^2+(z^2-z-1)y. \] We make a pseudo-reduction of the leading term $\ltc (r)=z^2x$ by $f$ like in \eqref{TermReduction} for another time. The final remainder is as follows. \begin{equation}\label{OrderTwoBasis} d:=r+z^2f=y^2+(2z^2-z-1)y+z^2(z^2-1) \end{equation} that cannot be further pseudo-reduced by $\tbs$. We add it into $\tbs$ such that $\tbs=\{d,e,f,g,h\}$. The reason for naming it as the first element $d$ of $\tbs$ is still Principle \ref{Principle:PseudoRedPrinciple} \eqref{item:ListOrderPrinciple}. Then we delete $S(f,h)$ from $\mathfrak{S}$. 
We disregard the $S$-polynomials $S(d,f)$, $S(d,g)$ and $S(d,h)$ as in \Po Q of Algorithm \ref{Algo:PseudoEliminant} based on Corollary \ref{Cor:CoprimePair}. We add $S(d,e)$ into $\mathfrak{S}$ such that $\mathfrak{S}=\{S(d,e)\}$. We compute the $S$-polynomial $S(d,e)$ as follows: \begin{equation*} S(d,e)=yd+e=(2z^2-z-1)y^2+(z^3-z+1)zy+z^3-z-2 \end{equation*} and pseudo-reduce its leading term $(2z^2-z-1)y^2$ by $d$ as in \eqref{TermReduction}. For the same reason as above, we name the remainder of the term pseudo-reduction, normalized by the unit $-1\in K^\ast$, as the first element in $\tbs$ such that: \begin{equation}\label{OrderOneBasis} \begin{aligned} c&:=(2z^2-z-1)d-S(d,e)\\ &=(3z^4-4z^3-2z^2+z+1)y+2z^6-z^5-3z^4+z^2+z+2. \end{aligned} \end{equation} Now $\tbs=\{c,d,e,f,g,h\}$. Then we delete $S(d,e)$ from $\mathfrak{S}$. We disregard the $S$-polynomials $S(c,f)$, $S(c,g)$ and $S(c,h)$ as in \Po Q of Algorithm \ref{Algo:PseudoEliminant} based on Corollary \ref{Cor:CoprimePair} due to their relatively prime leading terms. By the triangular identity \eqref{TriangleIdentity}, we also disregard the $S$-polynomial $S(c,e)$ as in \Po Q of Algorithm \ref{Algo:PseudoEliminant} since $\lcm (\ltc (c),\ltc (e))=-(3z^4-4z^3-2z^2+z+1)y^3$ is divisible by $\ltc (d)=y^2$. Here the multiplier $\mr$ as in \eqref{TriangleIdentity} satisfies $\mr=1$. We add $S(c,d)$ into $\mathfrak{S}$ such that $\mathfrak{S}=\{S(c,d)\}$. We compute the $S$-polynomial \begin{align*} S(c,d)&=-yc+(3z^4-4z^3-2z^2+z+1)d=y(4z^6-10z^5+8z^3+2z^2-\\ &\quad -3z-3)+(3z^6-4z^5-5z^4+5z^3+3z^2-z-1)z^2 \end{align*} and then pseudo-reduce it by $c$. The multiplier for the pseudo-reduction is $3z^4-4z^3-2z^2+z+1$ and we add it into a multiplier set $\mrs=\{3z^4-4z^3-2z^2+z+1\}$. The remainder of the pseudo-reduction is a temporary pseudo-eliminant \begin{equation*} \tel:=z^{12}-3z^{10}-2z^8+4z^7+6z^6+14z^5-15z^4-17z^3+z^2+9z+6.
\end{equation*} Then we delete $S(c,d)$ from $\mathfrak{S}$ such that $\mathfrak{S}=\emptyset$ and we have completed the procedure of Algorithm \ref{Algo:PseudoEliminant}. Hence the pseudo-eliminant $\pel=\tel$. Moreover, the pseudo-eliminant $\pel$ is relatively prime to the multiplier $\mr$ in $\mrs$ and hence is compatible. Thus the eliminant $\el=\pel$ according to Theorem \ref{Thm:CompatiblePart}. Now the pseudo-basis of $I$ as in Definition \ref{Def:PseudoBasis} is $\pbs=\tbs=\{c,d,e,f,g,h\}$ as in \eqref{OrderOneBasis}, \eqref{OrderTwoBasis}, \eqref{OrderThreeBasis} and \eqref{OldBasisSimple} respectively. Let us define $\qr:=\el$ and the epimorphism $\mo\colon (K[z])[x,y]\longrightarrow\rr [x,y]$ as in \eqref{ModuloCompatiblePart} over the normal PQR $\rr\simeq K[z]/\pid\qr$. Now both $\ltc (\mo (g))=-zx$ and $\ltc (\mo (h))=x^2$ are divisible by $\ltc (\mo (f))=-x$ and hence \Gd-reducible with respect to $\mo (f)$ as in Definition \ref{Def:RelativeReducible}. Thus in order to obtain an irredundant modular basis as in Definition \ref{Def:IrredundantBasis}, we can delete $\mo (g)$ and $\mo (h)$ from $\Bm=\mo (\pbs)$. Moreover, we also delete $\mo (d)$ and $\mo (e)$ from $\Bm$ since both $\ltc (\mo (d))=y^2$ and $\ltc (\mo (e))=-y^3$ are \Gd-reducible with respect to $\mo (c)$ as per Definition \ref{Def:RelativeReducible} for \Gd-reducibility. In fact, $\lcc (\mo (c))=3z^4-4z^3-2z^2+z+1\in\rr^\times$ is a unit. Hence the following basis constitutes an irredundant basis of $\Iq=\mo (I)$ under the above epimorphism $\mo$: \begin{equation}\label{PreliminaryBasis} \begin{aligned} \mo (c)&=(3z^4-4z^3-2z^2+z+1)y+2z^6-z^5-3z^4+z^2+z+2;\\ \mo (f)&=-x+y+z^2-1. \end{aligned} \end{equation} For simplicity we also call $\{c,f\}$ an irredundant basis of $I$. The irredundant basis in \eqref{PreliminaryBasis} is automatically a minimal basis as in Definition \ref{Def:MinimalBasis} but not a reduced basis as in Definition \ref{Def:ReducedBases}.
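The classical computation that we compare against can be replayed in a general-purpose system. The following sympy sketch (illustrative only; it uses Buchberger's algorithm over $\bQ$, not the algorithms of this paper) recovers the eliminant in \eqref{ClassicalOutput} and confirms that the basis element $c$ of \eqref{OrderOneBasis} indeed lies in $I$:

```python
from sympy import symbols, groebner, expand

x, y, z = symbols('x y z')

# The generators of I in (OldBasisSimple).
f = -x + y + z**2 - 1
g = -z*x + y**3 + 2
h = x**2 + x - z*y

# Reduced lex Groebner basis with z < y < x (generators ordered x, y, z).
G = groebner([f, g, h], x, y, z, order='lex')

# Its unique univariate element is the eliminant el = pel = p.
univ = [e for e in G.exprs if e.free_symbols <= {z}]
p = (z**12 - 3*z**10 - 2*z**8 + 4*z**7 + 6*z**6 + 14*z**5
     - 15*z**4 - 17*z**3 + z**2 + 9*z + 6)
assert len(univ) == 1 and expand(univ[0] - p) == 0

# The pseudo-basis element c of (OrderOneBasis) is a member of I.
c = (3*z**4 - 4*z**3 - 2*z**2 + z + 1)*y + 2*z**6 - z**5 - 3*z**4 + z**2 + z + 2
assert G.contains(c)
```

The univariate element returned here is monic with integer coefficients, while the remaining elements of the reduced basis carry the rational counterparts of the swollen integer coefficients displayed in \eqref{ClassicalOutput}.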
We can make a \Gd-term reduction of the term $y$ in $\mo (f)$ by $\mo (c)$ with multiplier $\iur=3z^4-4z^3-2z^2+z+1\in\rr^\times$ as in Definition \ref{Def:GcdTermReduction} to obtain a remainder denoted as $b$. Now $\{\mo (c),b\}$ constitutes a reduced basis for $\Iq$. For simplicity we still denote $\mo (c)$ as $c$. Then $\{b,c\}$ is our new type of basis for $\Iq$, which we still denote as $\Bm$ as follows. \begin{equation}\label{UniqueBasis} \biggl\{ \begin{aligned} c&=(3z^4-4z^3-2z^2+z+1)y+2z^6-z^5-3z^4+z^2+z+2;\\ b&=(3z^4-4z^3-2z^2+z+1)x-z^6+3z^5+2z^4-5z^3-2z^2+2z+3. \end{aligned} \biggr. \end{equation} \end{example} The reduced basis $\Bm$ of $\Iq$ in \eqref{UniqueBasis} is unique by Lemma \ref{Lemma:ReducedBases}. It satisfies the identity \eqref{LeadTermMod} in Lemma \ref{Lemma:IdealMembership} and hence the unified identity \eqref{UnifiedIdentity}. Since $b=\io (b)$ and $c=\io (c)$, the set $\{b,c,\el\}$ constitutes another form of our new type of basis for $I+\langle\el\rangle=I$. It is in a simpler form than the one in \eqref{ClassicalOutput}, with moderate coefficients and exponents in the variable $z$. The following Example \ref{Expl:FullModularAlgo} is a slight complication of Example \ref{Expl:SimplePseudoEliminant} in that the multiplier set $\mrs$ is not empty during the implementation of Algorithm \ref{Algo:PseudoEliminant}. Hence we have to invoke Algorithm \ref{Algo:ProperEliminant} over a PQR with zero divisors to procure the exact form of the eliminant and new type of bases. The coefficients of the classical Gr\"obner\ basis in Example \ref{Expl:FullModularAlgo} also cause a bit more psychological disturbance than those in Example \ref{Expl:SimplePseudoEliminant}. \begin{example}\label{Expl:FullModularAlgo} Suppose that the ideal $I=\langle f,g,h\rangle\subset\bQ [x,y,z]$ with \begin{equation}\label{OldBasisFull} f=-z^2(z+1)^3x+y;~g=z^4(z+1)^6x-y^2;~h=-x^2y+y^3+z^4(z-1)^5.
\end{equation} For the purpose of comparison, in the following we list its classical Gr\"obner\ basis $\tbs=\{p,g_1,g_2,g_3,g_4\}$ with respect to the \lex\ ordering $z\prec y\prec x$: \begin{equation}\label{GrobnerEliminant} \begin{aligned} p&=(z-1)^5z^6(z^{13}+9z^{12}+36z^{11}+84z^{10}+126z^9+ 126z^8+85z^7+\\ &\quad +31z^6+19z^5-9z^4+4z^3-4z^2-3z-1). \end{aligned} \end{equation} \begin{align*} g_1&=20253807z^2y+264174124z^{23}+1185923612z^{22}+850814520z^{21}-\\ &\quad -3776379304z^{20}-6824277548z^{19}+1862876196 z^{18}+12815317453z^{17}+\\ &\quad +3550475421z^{16}+2124010584z^{15}-35582561480z^{14}+42918431554z^{13}-\\ &\quad -41728834070z^{12}+35649844325z^{11}-17049238505z^{10}+3388659963z^9+\\ &\quad +930240431z^8-61146095z^7-518331181z^6;\\ g_2&=20253807y^2+903303104z^{23}+4102316224z^{22}+3140448384z^{21}-\\ &\quad -12683487983z^{20}-23996669428z^{19}+4804720290z^{18}+43739947868z^{17}+\\ &\quad +14906482335z^{16}+9051639768z^{15}-121400613331z^{14}+\\ &\quad +139970660534z^{13}-138071007235z^{12}+118589702914z^{11}-\\ &\quad -55199680030z^{10}+11927452134z^9+2021069107z^8-38017822z^7-\\ &\quad -1768266833z^6;\\ g_3&=2592487296z^2x+(7777461888z-2592487296)y+108083949263z^{23}+\\ &\quad +486376518055z^{22}+349557551130z^{21}-1558206505718z^{20}-\\ &\quad -2820179010211z^{19}+788268739077z^{18}+5350420983851z^{17}+\\ &\quad +1476923019345z^{16}+689330555757z^{15}-14602936038043z^{14}+\\ &\quad +17386123487861z^{13}-16350039201517z^{12}+13787524468420z^{11}-\allowdisplaybreaks[4]\\ &\quad -6235683207154z^{10}+786997920594z^9+628350552934z^8-\\ &\quad -64382649769z^7-206531133875z^6;\\ g_4&=20253807x^2y+1037047036z^{23}+4686773132z^{22}+3455561112z^{21}-\\ &\quad -14868243976z^{20}-27470438972z^{19}+6731446644z^{18}+51651585868z^{17}+\\ &\quad +16267315284z^{16}+7429467573z^{15}-141636109619z^{14}+\\ &\quad +163168836472z^{13}-155454190640z^{12}+135706468958z^{11}-\\ &\quad -62903516282z^{10}+11263865469z^9+2500312823z^8+197272975z^7-\\ &\quad 
-1682438629z^6-101269035z^5+20253807z^4. \end{align*} We define the temporary pseudo-basis set $\tbs:=\{f,g,h\}$ as in \eqref{OldBasisFull}. As per Principle \ref{Principle:PseudoRedPrinciple} \eqref{item:ListOrderPrinciple}, we list the elements in $\tbs$ in increasing order of their leading terms with respect to the \lex\ ordering as in \eqref{OldBasisFull}. We can disregard the $S$-polynomial $S(g,h)$ as in \Po Q of Algorithm \ref{Algo:PseudoEliminant} based on the triangular identity in Lemma \ref{Lemma:TriangleIdentity}. In fact, $\lcm (\ltc (g),\ltc (h))=-z^4(z+1)^6x^2y$ is divisible by $\ltc (f)=-z^2(z+1)^3x$ and hence in this case the multiplier $\mr=1$ in \eqref{TriangleIdentity}. We shall not take into account the triangular identity of $S(f,h)$ with respect to $g$ since we already invoked a triangular identity in the above on the same triplet $\{f,g,h\}$ according to Principle \ref{Principle:PseudoRedPrinciple} \eqref{item:OnceTriangle}. Hence the temporary $S$-polynomial set is $\mathfrak{S}=\{S(f,g),S(f,h)\}$. According to Principle \ref{Principle:PseudoRedPrinciple} \eqref{item:LeastSPoly}, we first compute the $S$-polynomial \begin{equation}\label{FirstAddedBasis} S(f,g)=z^2(z+1)^3f+g=-y^2+z^2(z+1)^3y:=e \end{equation} that cannot be further pseudo-reduced by $\tbs$ as in Theorem \ref{Thm:PseudoReduction}. We add $e$ into $\tbs$ such that $\tbs=\{e,f,g,h\}$. We name it as the first element $e$ in $\tbs$ because $\ltc (e)$ is less than every element in $\ltc (\tbs\setminus\{e\})$. Then we delete $S(f,g)$ from $\mathfrak{S}$ such that $\mathfrak{S}=\{S(f,h)\}$. We can disregard the $S$-polynomials $S(e,f)$ and $S(e,g)$ like in \Po Q of Algorithm \ref{Algo:PseudoEliminant} based on Corollary \ref{Cor:CoprimePair} since $\ltc (e)$ is relatively prime to $\ltc (f)$ and $\ltc (g)$. 
We can also disregard the $S$-polynomial $S(e,h)$ as in \Po Q of Algorithm \ref{Algo:PseudoEliminant} based on the triangular identity \eqref{TriangleIdentity} with respect to $f$. In fact, the leading monomials of the triplet satisfy $\lcm (\lmc (e),\lmc (h))=x^2y^2$ being divisible by $\lmc (f)=x$. The multiplier $\mr$ in the identity \eqref{TriangleIdentity} equals $\mr=\lcc (f)=-z^2(z+1)^3$. We add $\mr$ into the multiplier set $\mrs$ such that $\mrs=\{z^2(z+1)^3\}$. We compute the $S$-polynomial \begin{equation*} S(f,h)=z^2(z+1)^3h-xyf=-xy^2+z^2(z+1)^3y^3+z^6(z+1)^3(z-1)^5 \end{equation*} and pseudo-reduce it as in Theorem \ref{Thm:PseudoReduction} by $e$ and $f$ according to Principle \ref{Principle:PseudoRedPrinciple} \eqref{item:ReduceOrderPrinciple}. More specifically, we first pseudo-reduce $\ltc (S(f,h))=-xy^2$ by $e$ in \eqref{FirstAddedBasis} with interim multiplier $\iur=1$ as in \eqref{TermReduction} to obtain the following remainder: \begin{equation*} r_1=S(f,h)-xe=-z^2(z+1)^3xy+z^2(z+1)^3y^3+z^6(z+1)^3(z-1)^5. \end{equation*} Then we make a further pseudo-reduction of $\ltc (r_1)=-z^2(z+1)^3xy$ by $f$ in \eqref{OldBasisFull} as in \eqref{TermReduction} also with interim multiplier $\iur=1$. The new remainder is as follows. \begin{equation*} r_2=r_1-yf=z^2(z+1)^3y^3-y^2+z^6(z+1)^3(z-1)^5. \end{equation*} An ensuing pseudo-reduction of $\ltc (r_2)=z^2(z+1)^3y^3$ by $e$ in \eqref{FirstAddedBasis} with interim multiplier $\iur=1$ yields the following remainder: \begin{equation*} r_3=r_2+z^2(z+1)^3ye=(z^4(z+1)^6-1)y^2+z^6(z+1)^3(z-1)^5. \end{equation*} We obtain the following final remainder $d$ after a repetition of the above pseudo-reduction of $\ltc (r_3)=(z^4(z+1)^6-1)y^2$ by $e$ in \eqref{FirstAddedBasis} with interim multiplier $\iur=1$: \begin{equation}\label{FinalBasisMember} d:=r_3+(z^4(z+1)^6-1)e=z^2(z+1)^3[(z^4(z+1)^6-1)y+z^4(z-1)^5]. 
\end{equation} For the same reason as above, we name the remainder $d$ as the first element in $\tbs$ such that $\tbs=\{d,e,f,g,h\}$. Then we delete $S(f,h)$ from $\mathfrak{S}$ such that $\mathfrak{S}=\emptyset$. The leading monomial $\lmc (d)=y$ is relatively prime to $\lmc (f)=\lmc (g)=x$. But their leading coefficients satisfy $\gcd (\lcc (d),\lcc (f))=\gcd (\lcc (d),\lcc (g))=z^2(z+1)^3$, which is already in the multiplier set $\mrs$. Hence we can just disregard the $S$-polynomials $S(d,f)$ and $S(d,g)$ as in \Po Q of Algorithm \ref{Algo:PseudoEliminant}. Moreover, we also disregard the $S$-polynomial $S(d,h)$ by the triangular identity with respect to $f$ as in \eqref{TriangleIdentity} since $\lcm (\ltc (d),\ltc (h))=z^2(z+1)^3(z^4(z+1)^6-1)x^2y$ is divisible by $\ltc (f)=-z^2(z+1)^3x$. We add the $S$-polynomial $S(d,e)$ into $\mathfrak{S}$ such that $\mathfrak{S}=\{S(d,e)\}$. We compute the $S$-polynomial $S(d,e)$ as follows: \begin{align*} S(d,e)&=yd+z^2(z+1)^3(z^4(z+1)^6-1)e\\ &=z^4(z+1)^3(z^{13}+9z^{12}+36z^{11}+84z^{10}+126z^9+126z^8+85z^7+\\ &\quad +31z^6+19z^5-9z^4+4z^3-4z^2-3z-1)y. \end{align*} Then we make a pseudo-reduction of $\ltc (S(d,e))=S(d,e)$ by $d$ in \eqref{FinalBasisMember} as in \eqref{TermReduction} with the interim multiplier $\iur=z^4(z+1)^6-1$. The remainder of the term pseudo-reduction is: \begin{align*} r_4&=z^4(z+1)^3(z^{13}+9z^{12}+36z^{11}+84z^{10}+126z^9+126z^8+85z^7+31z^6+\\ &\quad +19z^5-9z^4+4z^3-4z^2-3z-1)[2(z^4(z+1)^6-1)y+z^4(z-1)^5]. \end{align*} We add the multiplier $\iur$ into the multiplier set $\mrs$ such that \begin{equation}\label{FullMultiplierSet} \mrs=\{z^2(z+1)^3,~z^4(z+1)^6-1\}. \end{equation} After a further pseudo-reduction of the above remainder $r_4$ by $d$ in \eqref{FinalBasisMember} with interim multiplier $\mu=1$, we can obtain a temporary pseudo-eliminant as follows.
\begin{equation}\label{FullPseudoEliminant} \begin{aligned} \tel&=(z-1)^5z^8(z+1)^3(z^{13}+9z^{12}+36z^{11}+84z^{10}+126z^9+126z^8+\\ &\quad +85z^7+31z^6+19z^5-9z^4+4z^3-4z^2-3z-1). \end{aligned} \end{equation} Then we delete $S(d,e)$ from $\mathfrak{S}$ such that $\mathfrak{S}=\emptyset$. Since we have exhausted all the $S$-polynomials in $\mathfrak{S}$, the pseudo-eliminant $\pel=\tel$. By a comparison as in Algorithm \ref{Algo:CompatiblePartPseudoEliminant} between the pseudo-eliminant $\pel$ in \eqref{FullPseudoEliminant} and the multiplier set $\mrs$ in \eqref{FullMultiplierSet}, we can compute the compatible part $\Cp (\pel)$ of the pseudo-eliminant $\pel$, which is defined in Definition \ref{Def:CompatibleDivisors}, as follows: \begin{equation}\label{ExplCompatiblePart} \begin{aligned} \Cp (\pel)&=(z-1)^5(z^{13}+9z^{12}+36z^{11}+84z^{10}+126z^9+126z^8+85z^7+\\ &\quad +31z^6+19z^5-9z^4+4z^3-4z^2-3z-1). \end{aligned} \end{equation} Moreover, the composite divisors of the incompatible part $\Ip (\pel)$ of the pseudo-eliminant $\pel$ in \eqref{FullPseudoEliminant} are $z^8$ and $(z+1)^3$ as per Definition \ref{Def:CompositeDivisor}. For the composite divisor $\qr=z^8$, in what follows let us invoke Algorithm \ref{Algo:ProperEliminant} to compute its corresponding proper eliminant $\ee$. The computations are based on Theorem \ref{Thm:IncompatiblePart} over the normal PQR $\rr\simeq K[z]/\pid{z^8}$. The epimorphism $\mo$ as in \eqref{ExtendedProjectionPQR} transforms the basis elements $g$ and $h$ into: \begin{align*} \mo (g)&=(20z^3+15z^2+6z+1)z^4x-y^2;\\ \mo (h)&=-x^2y+y^3+(10z^3-10z^2+5z-1)z^4. \end{align*} The squarefree part of $\lcc (g)$ in \eqref{OldBasisFull} equals $z(z+1)$, whereas that of $\lcc (\mo (g))$ equals $z(20z^3+15z^2+6z+1)$ as above. Hence we should use the representation of $g$ in \eqref{OldBasisFull} as per Principle \ref{Principle:PseudoRedPrinciple} \eqref{item:SquarefreeChoice}. The same reasoning applies to the representation of $h$ in \eqref{OldBasisFull}.
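Every pseudo-reduction in this example is ordinary polynomial arithmetic, so the chain from \eqref{OldBasisFull} to $e$ in \eqref{FirstAddedBasis} and $d$ in \eqref{FinalBasisMember}, together with the degree-$13$ factor appearing in \eqref{FullPseudoEliminant}, can be replayed step by step. A sympy sketch (for verification only, not part of the algorithms):

```python
from sympy import symbols, expand

x, y, z = symbols('x y z')
A = z**2*(z + 1)**3  # the recurring leading coefficient z^2(z+1)^3

# Generators of I in (OldBasisFull).
f = -A*x + y
g = A**2*x - y**2
h = -x**2*y + y**3 + z**4*(z - 1)**5

# S(f, g) = z^2(z+1)^3*f + g is e of (FirstAddedBasis).
e = expand(A*f + g)
assert e == expand(-y**2 + A*y)

# The chain of pseudo-reductions of S(f, h) producing d of (FinalBasisMember).
Sfh = expand(A*h - x*y*f)
r1 = expand(Sfh - x*e)
r2 = expand(r1 - y*f)
r3 = expand(r2 + A*y*e)
d = expand(r3 + (A**2 - 1)*e)
assert d == expand(A*((A**2 - 1)*y + z**4*(z - 1)**5))

# S(d, e) carries the degree-13 factor of the pseudo-eliminant.
Sde = expand(y*d + A*(A**2 - 1)*e)
q13 = (z**13 + 9*z**12 + 36*z**11 + 84*z**10 + 126*z**9 + 126*z**8 + 85*z**7
       + 31*z**6 + 19*z**5 - 9*z**4 + 4*z**3 - 4*z**2 - 3*z - 1)
assert Sde == expand(z**4*(z + 1)**3*q13*y)
```

Each assertion is exactly one of the displayed identities above, so the sketch doubles as a mechanical proofread of the computation.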
Thus although we start with the basis elements $\tas=\{\mo (f),\mo (g),\mo (h)\}$, we still denote it as $\{f,g,h\}$ in \eqref{OldBasisFull} henceforth with the understanding that $\tas\subset\rr [x,y]$ and $\rr\simeq K[z]/\pid{z^8}$. We can disregard the $S$-polynomial $S(g,h)$ as in \Po R of Algorithm \ref{Algo:ProperEliminant} by the triangular identity with respect to $f$ as in Lemma \ref{Lemma:TriangleIdentityPQR}. In fact, by the representations in \eqref{OldBasisFull}, we have $\lcm (\ltc (g),\ltc (h))=-z^4(z+1)^6x^2y$ being divisible by $\ltc (f)=-z^2(z+1)^3x$. Hence it is easy to deduce that the multiplier $\mr$ in \eqref{TriangleIdentityPQR} now becomes $\mr=1$ and thus it is redundant to compute the $S$-polynomials $S(g,h)$. The temporary $S$-polynomial set is $\mathfrak{S}=\{S(f,g),S(f,h)\}$. According to Principle \ref{Principle:PseudoRedPrinciple} \eqref{item:LeastSPoly}, we first compute the $S$-polynomial $S(f,g)$: \begin{equation}\label{SecondNewBasis} \begin{aligned} S(f,g)&=z^2(z+1)^3f+g\mod (\qr=z^8)\\ &=-y^2+z^2(z+1)^3y:=e. \end{aligned} \end{equation} The $S$-polynomial $S(f,g)$ cannot be properly reduced by $\tas=\{f,g,h\}$. We add $S(f,g)$ into $\tas$ and name it as the first element $e$ in $\tas$ according to Principle \ref{Principle:PseudoRedPrinciple} \eqref{item:ListOrderPrinciple} such that $\tas=\{e,f,g,h\}$. Then we delete $S(f,g)$ from $\mathfrak{S}$. We can disregard the $S$-polynomials $S(e,f)$ and $S(e,g)$ as in \Po R of Algorithm \ref{Algo:ProperEliminant} based on Corollary \ref{Cor:PQRCoprime} because $\ltc (e)$ is relatively prime to both $\ltc (f)$ and $\ltc (g)$. We add the $S$-polynomial $S(e,h)$ into $\mathfrak{S}$ such that $\mathfrak{S}=\{S(e,h),S(f,h)\}$. By Principle \ref{Principle:PseudoRedPrinciple} \eqref{item:LeastSPoly} we first compute the $S$-polynomial $S(f,h)$: \begin{align*} S(f,h)&=xyf-z^2(z+1)^3h\mod (\qr=z^8)\\ &=xy^2-z^2(z+1)^3y^3-(2z-1)z^6. 
\end{align*} We make a proper reduction of $S(f,h)$ as in Theorem \ref{Thm:ProperReduction} by $e$ and $f$ according to Principle \ref{Principle:PseudoRedPrinciple} \eqref{item:ReduceOrderPrinciple}. More specifically, we first properly reduce $\ltc (S(f,h))=xy^2$ by $e$ in \eqref{SecondNewBasis} with interim multiplier $\iur=1$ as in \eqref{TermReductionPQR}. The remainder $r_1$ is as follows. \begin{align*} r_1=S(f,h)+xe=z^2[(z+1)^3xy-(z+1)^3y^3+z^4-2z^5]. \end{align*} Then we make a further proper term reduction of $\ltc (r_1)=z^2(z+1)^3xy$ by $f$ also with interim multiplier $\iur=1$. The remainder of the reduction is as follows. \begin{equation*} r_2=r_1+yf=-z^2(z+1)^3y^3+y^2-2z^7+z^6. \end{equation*} An ensuing proper term reduction of $\ltc (r_2)=-z^2(z+1)^3y^3$ by $e$ with interim multiplier $\iur=1$ yields the following remainder: \begin{equation*} r_3=r_2-z^2(z+1)^3ye=(-20z^7-15z^6-6z^5-z^4+1)y^2-2z^7+z^6. \end{equation*} We obtain the following final remainder after a repetition of the above proper term reduction of $\ltc (r_3)$ by $e$ with interim multiplier $\iur=1$: \begin{equation}\label{OrderOneRemainder} d:=z^2[(-9z^5-z^4+z^3+3z^2+3z+1)y-2z^5+z^4]. \end{equation} According to Principle \ref{Principle:PseudoRedPrinciple} \eqref{item:ListOrderPrinciple}, we name the remainder $d$ as the first element in $\tas$ such that $\tas=\{d,e,f,g,h\}$. Then we delete $S(f,h)$ from $\mathfrak{S}$. We disregard the $S$-polynomials $S(d,g)$ and $S(d,h)$ like in \Po R of Algorithm \ref{Algo:ProperEliminant} by the triangular identities with respect to $f$ as in Lemma \ref{Lemma:TriangleIdentityPQR}. In fact, in the case of $S(d,g)$ the multiplier $\mr$ as in \eqref{TriangleIdentityPQR} satisfies $\mr=1$ whereas in the case of $S(d,h)$ the multiplier $\mr=(z+1)^3\in\rr^\times$. We add the $S$-polynomials $S(d,e)$ and $S(d,f)$ into $\mathfrak{S}$ such that $\mathfrak{S}=\{S(d,e),S(d,f),S(e,h)\}$. 
By Principle \ref{Principle:PseudoRedPrinciple} \eqref{item:LeastSPoly} we compute the $S$-polynomial $S(d,e)$ first: \begin{align*} S(d,e)&=yd+(-9z^5-z^4+z^3+3z^2+3z+1)z^2e\mod (\qr=z^8)\\ &=z^4(18z^3+16z^2+6z+1)y. \end{align*} We make a proper term reduction of $\ltc (S(d,e))$ by $d$ as in \eqref{TermReductionPQR} with interim multiplier $\iur=-9z^5-z^4+z^3+3z^2+3z+1\in\rr^\times$. The remainder of the reduction is $0$ over $\rr$. We delete $S(d,e)$ from $\mathfrak{S}$. By Principle \ref{Principle:PseudoRedPrinciple} \eqref{item:LeastSPoly} we then compute the $S$-polynomial $S(d,f)$: \begin{align*} S(d,f)&=(z+1)^3xd+(-9z^5-z^4+z^3+3z^2+3z+1)yf\mod (\qr=z^8)\\ &=(z+1)z^6x+(-9z^5-z^4+z^3+3z^2+3z+1)y^2. \end{align*} We make a proper term reduction of $\ltc (S(d,f))=(z+1)z^6x$ by $f$ in \eqref{OldBasisFull} with interim multiplier $\iur=(z+1)^2\in\rr^\times$ as in \eqref{TermReductionPQR}. The remainder of the term reduction is: \begin{align*} r_1=-(z+1)^2(9z^5+z^4-z^3-3z^2-3z-1)y^2+z^4y. \end{align*} Next we make a proper term reduction of $\ltc (r_1)$ by $e$ in \eqref{SecondNewBasis} with interim multiplier $\iur=1$ to obtain a new remainder: \begin{equation*} r_2=z^2(42z^5+69z^4+56z^3+29z^2+8z+1)y. \end{equation*} A further proper term reduction of $\ltc (r_2)$ by $d$ in \eqref{OrderOneRemainder} with interim multiplier $\iur=-9z^5-z^4+z^3+3z^2+3z+1\in\rr^\times$ as in \eqref{TermReductionPQR} yields a temporary proper eliminant: \begin{equation}\label{ExplProperEliminant} \tee=-z^6(6z+1). \end{equation} Then we delete $S(d,f)$ from $\mathfrak{S}$. Since $\tee$ has a standard representation $\tee^\st=z^6$, according to Remark \ref{Rmk:ModuloBaseChange}, we can simplify computations by implementing the algorithm over the normal PQR $\re\simeq\rr/\pid{z^6}$, which is similar to \eqref{FinalProjection}. Let us now compute the final $S$-polynomial $S(e,h)\in\mathfrak{S}$ in $\re [x,y]$ as follows. 
\begin{align*} S(e,h)&=x^2e-yh\mod (\qr=z^6)\\ &=z^2(z+1)^3x^2y-y^4+(z^4-5z^5)y. \end{align*} This is followed by a proper term reduction of $\ltc (S(e,h))=z^2(z+1)^3x^2y$ by $h$ as in \eqref{TermReductionPQR} with interim multiplier $\iur=1$. The remainder of the term reduction is as follows. \begin{equation*} r_1=S(e,h)+z^2(z+1)^3h=-y^4+z^2(z+1)^3y^3+(z^4-5z^5)y. \end{equation*} A further proper term reduction of $\ltc (r_1)=-y^4$ by $e$ also with interim multiplier $\iur=1$ leads to the remainder $r_2=r_1-y^2e=z^4(1-5z)y$. We make a proper term reduction of this remainder $r_2$ by $d$ as in \eqref{TermReductionPQR} with interim multiplier $\iur=-9z^5-z^4+z^3+3z^2+3z+1\in\re^\times$. The remainder of the reduction is $0$ over $\re$. We delete $S(e,h)$ from $\mathfrak{S}$ such that $\mathfrak{S}=\emptyset$. \end{example} For the \Po Q in Algorithm \ref{Algo:ProperEliminant}, now we have $\tee^\st=z^6$. Hence in \eqref{BeheadSPolyPQR} the multiplier $\lnr_f=z^4$ and the $S$-polynomial $S(f,\tee^\st)=z^4y$ over $\re$. A proper term reduction of $S(f,\tee^\st)$ by $d\in\tas$ in \eqref{OrderOneRemainder} with the multiplier $-9z^5-z^4+z^3+3z^2+3z+1\in\re^\times$ yields the remainder $0$ over $\re$. We also disregard the $S$-polynomial $S(g,\tee^\st)$. In fact, over the normal PQR $\rr=K[z]/\pid{z^8}$ as above, we have $\lcm (l_g,l_e)=z^6(z+1)^6$ is divisible by $l_f=-z^2(z+1)^3$ with $l_g:=\io (\lcc (g))=z^4(z+1)^6$, $l_e:=\io (\tee^\st)=z^6$ and $l_f:=\io (\lcc (f))=-z^2(z+1)^3$. Hence we can invoke the triangular identity \eqref{TriangleIdentityPQR} on $S(g,\tee^\st)$ with respect to $f$ to show that the $S$-polynomial $S(g,\tee^\st)$ can be disregarded. Moreover, back to the normal PQR $\re=K[z]/\pid{z^6}$, we can prove that the $S$-polynomial $S(d,\tee^\st)=0$ as in \eqref{BeheadSPolyPQR}. For the composite divisor $\qr=(z+1)^3$ as in \eqref{FullPseudoEliminant}, our computations are over the normal PQR $\rr\simeq K[z]/\pid{(z+1)^3}$. 
Under the epimorphism $\mo$ as in \eqref{ExtendedProjectionPQR}, the ideal $\Iq:=\mo (I)$ is generated by $\tas:=\mo (\tbs)\subset\rr [x,y]$ with $\tbs$ as in \eqref{OldBasisFull}: \begin{equation*} f=y;\quad g=-y^2;\quad h=-x^2y+y^3+z^4(z-1)^5. \end{equation*} Please note that here we choose the representation of $h$ in \eqref{OldBasisFull} by Principle \ref{Principle:PseudoRedPrinciple} \eqref{item:SquarefreeChoice}. We abuse notation a bit and still use $\{f,g,h\}$ to denote the elements in $\tas$. It is easy to corroborate that $\tas$ is not consistent, that is, $\Iq=\rr [x,y]$: modulo $f=y$, the generator $h$ reduces to $z^4(z-1)^5$, which is a unit in $\rr\simeq K[z]/\pid{(z+1)^3}$. Hence this composite divisor should be disregarded. Now the pseudo-basis of $I$ as in Definition \ref{Def:PseudoBasis} is $\pbs:=\tbs=\{d,e,f,g,h\}$ as in \eqref{OldBasisFull}, \eqref{FirstAddedBasis} and \eqref{FinalBasisMember}. To obtain an irredundant basis of $\Iq$ as in Definition \ref{Def:IrredundantBasis} over the normal PQR $\rr\simeq K[z]/\pid{\qr}$ with $\qr:=\Cp (\pel)$ as in \eqref{ExplCompatiblePart}, we delete $g$ from $\pbs$ since $\ltc (g)=z^4(z+1)^6x$ is divisible by $\ltc (f)=-z^2(z+1)^3x$. In fact, under the epimorphism $\mo\colon (K[z])[x,y]\rightarrow\rr [x,y]$ as in \eqref{ModuloCompatiblePart}, $\ltc (\mo (g))$ is \Gd-reducible with respect to $\mo (f)$ as per Definition \ref{Def:RelativeReducible} for \Gd-reducibility. Similarly we also delete $h$ from $\pbs$ since $\ltc (\mo (h))=-x^2y$ is \Gd-reducible with respect to $\mo (f)$ based on $\lcc (\mo (f))=-z^2(z+1)^3\in\rr^\times$. Further, we delete $e$ in \eqref{FirstAddedBasis} from $\pbs$ since $\ltc (\mo (e))=-y^2$ is \Gd-reducible with respect to $\mo (d)$ as in \eqref{FinalBasisMember} due to the fact that $\lcc (\mo (d))=(z+1)^3z^2(z^4(z+1)^6-1)\in\rr^\times$. Hence an irredundant basis of $\Iq$, which we denote as $\Bm$, is $\Bm=\{\mo (d),\mo (f)\}$ with $d$ and $f$ being defined in \eqref{FinalBasisMember} and \eqref{OldBasisFull} respectively.
This irredundant basis $\Bm$ is automatically a minimal basis as in Definition \ref{Def:MinimalBasis} but not a reduced basis as in Definition \ref{Def:ReducedBases}. We can make a proper term reduction of the term $y$ in $f$ by $d$ as in \eqref{TermReductionPQR} to obtain a remainder denoted as $c$. In this way we obtain a reduced basis that we still denote as $\Bm$. That is, $\Bm=\{c,\mo (d)\}$ with $d$ defined in \eqref{FinalBasisMember} and \begin{equation}\label{ReducedBasisMember} c:=-z^4(z+1)^6(z^4(z+1)^6-1)x-z^6(z+1)^3(z-1)^5. \end{equation} The reduced basis $\Bm=\{c,\mo (d)\}$ is unique by Lemma \ref{Lemma:ReducedBases} and satisfies the identity \eqref{LeadTermMod} in Lemma \ref{Lemma:IdealMembership} and hence the unified identity \eqref{UnifiedIdentity}. For the composite divisor $\qr=z^8$, our computations over the normal PQR $\rr\simeq K[z]/\pid{z^8}$ started with the basis $\tas$ of $\Iq$ that bears the same form as the one in \eqref{OldBasisFull} and ended with the proper eliminant $\ee=z^6$ as the standard factor of $\tee$ in \eqref{ExplProperEliminant}. Hence let us consider the modular basis of $\Ie=\mq (I)$ over the normal PQR $\re\simeq\rr/\pid{z^6}$. What we already have is a modular basis $\tas=\{d,e,f,g,h\}$ of $\Iq=\mo (I)$ over $\rr\simeq K[z]/\pid{z^8}$ that are defined in \eqref{OldBasisFull}, \eqref{SecondNewBasis} and \eqref{OrderOneRemainder} respectively. The basis elements $g$ and $h$ of $\Iq$ in \eqref{OldBasisFull} would bear the following form over $\re$: \begin{equation*} g=z^4(6z+1)x-y^2;\quad h=-x^2y+y^3+(5z-1)z^4. \end{equation*} Nonetheless by Principle \ref{Principle:PseudoRedPrinciple} \eqref{item:SquarefreeChoice}, we still use the representations of $g$ and $h$ in \eqref{OldBasisFull}. In order to have an irredundant proper basis of $\Ie$, we delete $\mq (g)$ from $\mq (\tas)$ since $\ltc (\mq (g))=z^4(z+1)^6x$ is divisible by $\ltc (\mq (f))=-z^2(z+1)^3x$. 
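The truncations modulo $z^8$ and $z^6$ that these modular computations rely on are mechanical. The following sympy sketch (with a hypothetical helper \texttt{trunc} playing the role of the epimorphisms onto the PQRs) confirms the images of the relevant polynomials:

```python
from sympy import symbols, rem, expand

y, z = symbols('y z')

def trunc(p, n):
    # Remainder of p on division by z**n, i.e., its image in K[z]/<z**n>.
    return rem(expand(p), z**n, z)

# Images modulo z^8 of the coefficients z^4(z+1)^6 and z^4(z-1)^5 of g and h.
assert expand(trunc(z**4*(z + 1)**6, 8)
              - (20*z**3 + 15*z**2 + 6*z + 1)*z**4) == 0
assert expand(trunc(z**4*(z - 1)**5, 8)
              - (10*z**3 - 10*z**2 + 5*z - 1)*z**4) == 0

# The element d of (OrderOneRemainder) collapses to z^2(z+1)^3*y modulo z^6.
d = z**2*((-9*z**5 - z**4 + z**3 + 3*z**2 + 3*z + 1)*y - 2*z**5 + z**4)
assert expand(trunc(d, 6) - z**2*(z + 1)**3*y) == 0
```

The same helper reproduces every truncation used above, so the modular bases can be audited without redoing the divisions by hand.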
The supplemented basis element $e$ in \eqref{SecondNewBasis} is invariant under the epimorphism $\mq$ whereas the supplemented basis element $d$ in \eqref{OrderOneRemainder} bears the form $\mq (d)=z^2(z+1)^3y$ over $\re$. Now it is evident that we can use $\mq (d)$ to make a term reduction of $\mq (e)$ such that it bears a reduced form denoted as $b_2:=y^2$. Moreover, we can render $\mq (h)$ reduced by $b_2$ such that $\mq (h)=-x^2y+z^4(z-1)^5$. Altogether we obtain a reduced basis of $\Ie$ denoted as $\Bn$ over $\re\simeq K[z]/\pid{z^6}$ as follows. \begin{equation}\label{FinalNewBasis} \Bn\biggl\{ \begin{aligned} b_1&:=\mq (d)=z^2(z+1)^3y;\\ b_3&:=-\mq (f)=z^2(z+1)^3x-y; \end{aligned}\quad \begin{aligned} b_2&=y^2;\\ b_4&:=-\mq (h)=x^2y-z^4(z-1)^5. \end{aligned} \biggr. \end{equation} With the compatible part $\qr=\Cp (\pel)$ of the pseudo-eliminant $\pel$ defined in \eqref{ExplCompatiblePart}, a reduced basis $\Bm$ of $\Iq$ is defined in \eqref{ReducedBasisMember} and \eqref{FinalBasisMember} respectively as follows. \begin{equation}\label{NewTypeBasis} \Bm\biggl\{ \begin{aligned} a_1&:=\mo (d)=z^2(z+1)^3[(z^4(z+1)^6-1)y+z^4(z-1)^5];\\ a_2&:=c=z^4(z+1)^3[(z+1)^3(z^4(z+1)^6-1)x+z^2(z-1)^5]. \end{aligned} \biggr. \end{equation} Let $\io$ be the injection as in \eqref{ExtendedEmbedding} associated with $\rr\simeq K[z]/\pid{z^8}$. In this example we have a unique proper divisor $\qel=\io (\tee^\st)=z^6$ and hence the proper factor $\pf=\qel=z^6$ as per Definition \ref{Def:ProperFactor}. By Theorem \ref{Thm:Eliminant} the eliminant $\el=\Cp (\pel)\cdot\pf$ with $\Cp (\pel)$ as in \eqref{ExplCompatiblePart}. This coincides with the eliminant $p$ obtained in the classical Gr\"obner\ basis as in \eqref{GrobnerEliminant}. 
Nonetheless our new type of bases in \eqref{FinalNewBasis} and \eqref{NewTypeBasis} not only have considerably more moderate coefficients than those of the classical Gr\"obner\ basis under \eqref{GrobnerEliminant} but also completely obviate the intermediate coefficient swell problem. Moreover, based on the modular bases $\Bm$ in \eqref{NewTypeBasis} and $\Bn$ in \eqref{FinalNewBasis}, we can use Lemma \ref{Lemma:ModularBasisBack} to obtain the new type of bases $\io (\Bm)\cup\{\qr\}$ and $\iq (\Bn)\cup\{z^6\}$ for $I+\langle\qr\rangle$ and $I+\langle z^6\rangle$ respectively. Here $\qr=\Cp (\pel)$ is as in \eqref{ExplCompatiblePart} and $\iq$ as in \eqref{FinalEmbedding}. According to Lemma \ref{Lemma:IdealDecomposition}, we have $I=(I+\langle\qr\rangle)\cap (I+\langle z^6\rangle)$. \section{Conclusion and Remarks} In this paper we defined a new type of bases as in \eqref{UnifiedIdentity} and \eqref{NewBases} in accordance with a decomposition of the original ideal in \eqref{ChineseModuleTheorem} and \eqref{IdealDecomposition} respectively. The characterizations of the new type of bases in Theorem \ref{Thm:MemberChar} and Theorem \ref{Thm:MemberCharModulo} are in effect solutions to the ideal membership problem. The computations and logical deductions in this paper suggest that it is much easier to study a zero-dimensional ideal over principal quotient rings (PQR) modulo the factors of its eliminant or even pseudo-eliminant than over fields. An obvious direction for future research is to generalize this new type of bases to ideals of positive dimensions. The new type of bases and their algorithms can be easily generalized to ideals $I\subset\mathbb{Z}[\bm{x}]$ with $I\cap\mathbb{Z}\ne\{0\}$. We shall address the generic case of $I\cap\mathbb{Z}=\{0\}$ in forthcoming papers.
It would be worthwhile to enhance the computational efficiency of the new type of bases via the normal selection strategies and signatures, as well as the conversions between different monomial orderings, as mentioned in the Introduction. A complexity analysis of the new type of bases, similar to those in \cite{MM82,MM84,May89,Dub90,KM96,MR13} on Gr\"obner\ bases, would also be interesting.
\section{The \textsc{Spectrum Connectivity}\xspace Problem}\label{sec:general} In this section, we study the \textsc{Spectrum Connectivity}\xspace problem from both complexity and algorithmic points of view. \subsection{NP-completeness Results} We show that the \textsc{Spectrum Connectivity}\xspace problem is NP-complete even if the number of channels is fixed. In fact we give a complete characterization of the complexity of \textsc{SpecCon}($k,\beta$) by proving the following dichotomy result: \begin{theorem}\label{thm:general} \textsc{SpecCon}($k,\beta$) is {\rm NP}\xspace-complete for any integers $k>\beta\geq 2$, and is in \P if $\beta=1$ or $k\leq \beta$. \end{theorem} The second part of the statement is easy: When $\beta=1$, each SU can only open one channel, and thus all SUs should be connected through the same channel. Therefore, the network is connectable if and only if there exists a channel that belongs to every SU's spectrum map\xspace (and of course the potential graph\xspace must be connected), which is easy to check. When $k\leq \beta$, each SU can open all channels in its spectrum map\xspace, and the problem degenerates to checking the connectivity of the potential graph\xspace. In the sequel we prove the {\rm NP}\xspace-completeness of \textsc{SpecCon}($k,\beta$) when $k>\beta\geq 2$. First consider the case $k=\beta+1$. We will reduce a special case of the Boolean Satisfiability (SAT) problem, which will be shown to be {\rm NP}\xspace-complete, to \textsc{SpecCon}($\beta+1,\beta$), thus showing the {\rm NP}\xspace-completeness of the latter. A clause is called \emph{positive} if it only contains positive literals, and is called \emph{negative} if it only contains negative literals. For example, $x_1 \lor x_3 \lor x_5$ is positive and $\overline{x_2} \lor \overline{x_4}$ is negative. A clause is called \emph{uniform} if it is positive or negative. A \emph{uniform} CNF formula is the conjunction of uniform clauses.
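Uniformity of a CNF formula is a purely syntactic condition and is trivial to test. The following Python sketch is an illustrative aside (the encoding of literals as signed integers is a hypothetical convention, not part of the paper):

```python
def is_uniform_clause(clause):
    """A clause is uniform if its literals are all positive or all negative."""
    return all(lit > 0 for lit in clause) or all(lit < 0 for lit in clause)

def is_uniform_cnf(cnf):
    """A CNF formula is uniform if every clause is uniform."""
    return all(is_uniform_clause(c) for c in cnf)

# The examples from the text: x1 v x3 v x5 is positive, ~x2 v ~x4 is negative.
print(is_uniform_cnf([[1, 3, 5], [-2, -4]]))   # prints True
print(is_uniform_cnf([[1, -2]]))               # mixed clause: prints False
```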
Define \textsc{Uniform-SAT}\xspace as the problem of deciding whether a given uniform CNF formula is satisfiable. \begin{lemma}\label{lem:unisat} \textsc{Uniform-SAT}\xspace is {\rm NP}\xspace-complete. \end{lemma} \begin{proof} Let $F$ be a CNF formula with variable set $\{x_1,x_2,\ldots,x_n\}$. For each $i$ such that $\overline{x_i}$ appears in $F$, we create a new variable $y_i$, and do the following: \begin{itemize} \item substitute $y_i$ for all occurrences of $\overline{x_i}$; \item add two clauses $x_i \lor y_i$ and $\overline{x_i} \lor \overline{y_i}$ to $F$. More formally, let $F \leftarrow F \land (x_i \lor y_i) \land (\overline{x_i} \lor \overline{y_i})$. This ensures $y_i = \overline{x_i}$ in any satisfying assignment of $F$. \end{itemize} Call the new formula $F'$. For example, if $F=(x_1\lor \overline{x_2}) \land (\overline{x_1} \lor x_3)$, then $F'=(x_1\lor y_2) \land (y_1 \lor x_3) \land (x_1 \lor y_1) \land (\overline{x_1} \lor \overline{y_1}) \land (x_2 \lor y_2) \land (\overline{x_2} \lor \overline{y_2})$. It is easy to see that $F'$ is a uniform CNF formula, and that $F$ is satisfiable if and only if $F'$ is satisfiable. This constitutes a reduction from \textsc{SAT} to \textsc{Uniform-SAT}\xspace, which concludes the proof. \qed \end{proof} \begin{theorem}\label{thm:npc} \textsc{SpecCon}($\beta+1,\beta$) is {\rm NP}\xspace-complete for any integer $\beta\geq 2$. \end{theorem} \begin{proof} The membership of \textsc{SpecCon}($\beta+1,\beta$) in {\rm NP}\xspace is clear. In what follows we reduce \textsc{Uniform-SAT}\xspace to \textsc{SpecCon}($\beta+1,\beta$), which by Lemma~\ref{lem:unisat} will prove the {\rm NP}\xspace-completeness of the latter. Let $c_1\land c_2\land \ldots \land c_m$ be an input to \textsc{Uniform-SAT}\xspace where $c_j$, $1\leq j\leq m$, is a uniform clause. Assume the variable set is $\{x_1,x_2,\ldots,x_n\}$. We construct an instance of \textsc{SpecCon}($\beta+1,\beta$) as follows. 
\begin{itemize} \item \textbf{Channels:} There are $\beta+1$ channels $\{0,1,2,\ldots,\beta\}$. \item \textbf{SUs:} \begin{itemize} \item For each variable $x_i$, there is a corresponding SU $X_i$ with spectrum map $\textsc{SpecMap}(X_i)=\{0,1,2,\ldots,\beta\}$ (which contains all possible channels); \item for each clause $c_j$, $1\leq j\leq m$, there is a corresponding SU $C_j$ with $\textsc{SpecMap}(C_j)=\{p_j\}$, where $p_j=1$ if $c_j$ is positive and $p_j=0$ if $c_j$ is negative; \item there is an SU $Y_{2}$ with $\textsc{SpecMap}(Y_2)=\{2\}$. For every $1\leq i\leq n$ and $2\leq l\leq \beta$, there is an SU $Y_{i,l}$ with $\textsc{SpecMap}(Y_{i,l})=\{l\}$; and \item all SUs have the same antenna budget\xspace $\beta$. \end{itemize} \item \textbf{Potential Graph:} For each clause $c_j$ and each variable $x_i$ that appears in $c_j$ (either as $x_i$ or $\overline{x_i}$), there is a potential edge\xspace between $X_i$ and $C_j$. For each $1\leq i\leq n$ and $2\leq l\leq \beta$, there is a potential edge\xspace between $X_i$ and $Y_{i,l}$. Finally, there is a potential edge\xspace between $Y_2$ and every $X_i$, $1\leq i\leq n$. \end{itemize} Denote the above cognitive radio network\xspace by $\mathcal{I}$, which is also an instance of \textsc{SpecCon}($\beta+1,\beta$). We now prove that $c_1\land c_2\land \ldots \land c_m$ is satisfiable if and only if $\mathcal{I}$ is connectable. First consider the ``only if'' direction. Let $A:\{x_1,\ldots,x_n\}\rightarrow\{0,1\}$ be a satisfying assignment of $c_1\land c_2\land \ldots \land c_m$, where $0$ stands for FALSE and $1$ for TRUE. Define a spectrum assignment\xspace as follows. For each $1\leq i\leq n$, let user $X_i$ open the channels $\{2,3,\ldots,\beta\} \cup \{A(x_i)\}$. Every other SU opens the only channel in its spectrum map\xspace. We verify that $\mathcal{I}$ is connected under the above spectrum assignment\xspace. For each $1\leq i\leq n$, $X_i$ is connected to $Y_2$ through channel 2.
Then, for every $2\leq l\leq \beta$, $Y_{i,l}$ is connected to $X_i$ through channel $l$. Now consider SU $C_j$ where $1\leq j\leq m$. Since $A$ satisfies the clause $c_j$, there exists $1\leq i\leq n$ such that: 1) $x_i$ or $\overline{x_i}$ occurs in $c_j$; and 2) $A(x_i)=1$ if $c_j$ is positive, and $A(x_i)=0$ if $c_j$ is negative. Thus $X_i$ and $C_j$ are connected through channel $A(x_i)$. Therefore the realization graph\xspace is connected, completing the proof of the ``only if'' direction. We next consider the ``if'' direction. Suppose there is a spectrum assignment\xspace that makes $\mathcal{I}$ connected. For every $1\leq i\leq n$ and $2\leq l\leq \beta$, $X_i$ must open channel $l$, otherwise $Y_{i,l}$ will become an isolated vertex in the realization graph\xspace. Since $X_i$ can open at most $\beta$ channels in total, it can open at most one of the two remaining channels $\{0,1\}$. We assume w.l.o.g. that $X_i$ opens exactly one of them, which we denote by $a_i$. Now, for the formula $c_1\land c_2\land \ldots \land c_m$, we define a truth assignment $A:\{x_1,\ldots,x_n\}\rightarrow\{0,1\}$ as $A(x_i)=a_i$ for all $1\leq i\leq n$. We show that $A$ satisfies the formula. Fix $1\leq j\leq m$ and assume that $c_j$ is negative (the case where $c_j$ is positive is totally similar). Since the spectrum map\xspace of SU $C_j$ only contains channel $0$, some of its neighbors must open channel 0. Hence, there exists $1\leq i\leq n$ such that $\overline{x_i}$ appears in $c_j$ and the corresponding SU $X_i$ opens channel 0. By our construction of $A$, we have $A(x_i)=0$, and thus the clause $c_j$ is satisfied by $A$. This completes the reduction from \textsc{Uniform-SAT}\xspace to \textsc{SpecCon}($\beta+1,\beta$), and the theorem follows.
\qed \end{proof} \begin{corollary}\label{cor:npc} \textsc{SpecCon}($k,\beta$) is {\rm NP}\xspace-complete for any integers $k>\beta\geq 2$. \end{corollary} \begin{proof} By a simple reduction from \textsc{SpecCon}($\beta+1,\beta$): Given an instance of \textsc{SpecCon}($\beta+1,\beta$), create $k-\beta-1$ new channels and add them to the spectrum map\xspace of an (arbitrary) SU. This gives an instance of \textsc{SpecCon}($k,\beta$). Since the new channels are only contained in one SU, they should not be opened, and thus the two instances are equivalent. Hence the corollary follows. \qed \end{proof} Theorem~\ref{thm:npc} indicates that the \textsc{Spectrum Connectivity}\xspace problem is {\rm NP}\xspace-complete even if the cognitive radio network\xspace only has three channels. We further strengthen this result by proving the following theorem: \begin{theorem}\label{thm:npc_2channels} The \textsc{Spectrum Connectivity}\xspace problem is {\rm NP}\xspace-complete even if there are only two channels. \end{theorem} \begin{proof} We present a reduction from \textsc{Uniform-SAT}\xspace similar as in the proof of Theorem~\ref{thm:npc}. Let $c_1\land c_2\land \ldots \land c_m$ be a uniform CNF clause with variable set $\{x_1,x_2,\ldots,x_n\}$. Construct a cognitive radio network\xspace as follows: There are two channels \{0,1\}. For each variable $x_i$ there is a corresponding SU $X_i$ with spectrum map\xspace $\textsc{SpecMap}(X_i)=\{0,1\}$ and antenna budget\xspace $\beta(X_i)=1$. For each clause $c_j$ there is a corresponding SU $C_j$ with $\textsc{SpecMap}(C_j)=\{p_j\}$ and $\beta(C_j)=1$, where $p_j=1$ if $c_j$ is positive and $p_j=0$ if $c_j$ is negative. There is an SU $Y$ with $\textsc{SpecMap}(Y)=\{0,1\}$ and $\beta(Y)=2$. Note that, unlike in the case of \textsc{SpecCon}($k,\beta$), SUs can have different antenna budgets.
Finally, the edges of the potential graph\xspace include: $\{X_i,C_j\}$ for all $i,j$ such that $x_i$ or $\overline{x_i}$ appears in $c_j$, and $\{Y,X_i\}$ for all $i$. This completes the construction of the cognitive radio network\xspace, which is denoted by $\mathcal{I}$. By an analogous argument as in the proof of Theorem~\ref{thm:npc}, $c_1\land c_2\land \ldots \land c_m$ is satisfiable if and only if $\mathcal{I}$ is connectable, concluding the proof of Theorem~\ref{thm:npc_2channels}. \qed \end{proof} Theorem~\ref{thm:npc_2channels} is sharp in that, as noted before, the problem is polynomial-time solvable when there is only one channel. \subsection{Exact Algorithms} In this subsection we design algorithms for deciding whether a given cognitive radio network\xspace is connectable. Since the problem is NP-complete, we cannot expect a polynomial time algorithm. Let $n,k,t$ denote the number of SUs, the number of channels, and the maximum size of any SU's spectrum map\xspace, respectively ($t\leq k$). The simplest idea is to exhaustively examine all possible spectrum assignments to see if there exists one that makes the network connected. Since each SU can have at most $2^{t}$ possible ways of opening channels, the number of assignments is at most $2^{tn}$. Checking each assignment takes poly($n,k$) time. Thus the running time of this approach is bounded by $2^{tn}(nk)^{O(1)}$, which is reasonable when $t$ is small. However, since in general $t$ can be as large as $k$, this only gives a $2^{O(kn)}$ bound, which is unsatisfactory if $k$ is large. In the following we present another algorithm for the problem that runs faster than the above approach when $k$ is large. \begin{theorem}\label{thm:alg_general} There is an algorithm that decides whether a given cognitive radio network\xspace is connectable in time $2^{O(k+n\log n)}$. \end{theorem} \begin{proof} Let $\mathcal{I}$ be a given cognitive radio network\xspace with potential graph\xspace $\mathcal{PG}$. 
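Before continuing, the exhaustive-search baseline described above can be made concrete. The following toy Python sketch (all names hypothetical: spectrum maps as sets keyed by SU, a common antenna budget, and the potential graph as an edge list) enumerates every spectrum assignment and tests connectivity of the resulting realization graph by graph search:

```python
from itertools import combinations, product

def subsets_up_to(channels, beta):
    """All subsets of a spectrum map with at most beta channels."""
    chans = sorted(channels)
    return [frozenset(c)
            for r in range(min(beta, len(chans)) + 1)
            for c in combinations(chans, r)]

def is_connectable(spec_maps, beta, potential_edges):
    """Brute force over all spectrum assignments (2^{O(tn)} of them)."""
    users = sorted(spec_maps)
    choices = [subsets_up_to(spec_maps[u], beta) for u in users]
    for assignment in product(*choices):
        opened = dict(zip(users, assignment))
        # A potential edge is realized iff its endpoints share an open channel.
        adj = {u: [] for u in users}
        for u, v in potential_edges:
            if opened[u] & opened[v]:
                adj[u].append(v)
                adj[v].append(u)
        # Depth-first search connectivity test of the realization graph.
        seen, stack = {users[0]}, [users[0]]
        while stack:
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        if len(seen) == len(users):
            return True
    return False
```

For instance, two SUs joined by a potential edge are connectable with budget 1 exactly when their spectrum maps intersect.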
Let $n$ be the number of SUs and $k$ the number of channels. Assume that $\mathcal{I}$ is connected under some spectrum assignment\xspace. Clearly the realization graph contains a spanning tree of $\mathcal{PG}$, say $T$, as a subgraph. If we change the potential graph\xspace to $T$ while keeping all other parameters unchanged, the resulting network will still be connected under the same spectrum assignment\xspace. Thus, it suffices to check whether there exists a spanning tree $T$ of $\mathcal{PG}$ such that $\mathcal{I}$ is connectable when substituting $T$ for $\mathcal{PG}$ as its potential graph\xspace. Using the algorithm of \cite{spanntree}, we can list all spanning trees of $\mathcal{PG}$ in time $O(Nn)$ where $N$ is the number of spanning trees of $\mathcal{PG}$. By Cayley's formula \cite{cayley,cayley_arxiv} we have $N\leq n^{n-2}$. Finally, for each spanning tree $T$, we can use the algorithm in Theorem~\ref{thm:alg_tree} (which will appear in Section~\ref{sec:trees}) to decide whether the network is connectable in time $2^{O(k)}n^{O(1)}$. The total running time of the algorithm is $O(n^{n-2})2^{O(k)}n^{O(1)}=2^{O(k+n\log n)}$. \qed \end{proof} Combining Theorem~\ref{thm:alg_general} with the brute-force approach, we obtain: \begin{corollary}\label{cor:alg_general} The \textsc{Spectrum Connectivity}\xspace problem can be solved in time $2^{O(\min\{kn,k+n\log n\})}$. \end{corollary} \section{\textsc{Spectrum Connectivity}\xspace with Complete Potential Graphs}\label{sec:complete} In this section we consider the special case of the \textsc{Spectrum Connectivity}\xspace problem, in which the potential graph\xspace of the cognitive radio network\xspace is complete. We first show that this restriction does not make the problem tractable in polynomial time.
\begin{theorem}\label{thm:npc_complete} The \textsc{Spectrum Connectivity}\xspace problem is NP-complete even when the potential graph\xspace is complete and all SUs have the same antenna budget\xspace $\beta=2$. \end{theorem} \begin{proof} The membership in {\rm NP}\xspace is trivial. The hardness proof is by a reduction from the \textsc{Hamiltonian Path} problem, which is to decide whether a given graph contains a Hamiltonian path, i.e., a simple path that passes every vertex exactly once. The \textsc{Hamiltonian Path} problem is well-known to be NP-complete \cite{book_npc}. Let $G=(V,E)$ be an input graph of the \textsc{Hamiltonian Path} problem. Construct an instance of the \textsc{Spectrum Connectivity}\xspace problem as follows: The collection of channels is $E$ and the set of SUs is $V$; that is, we identify a vertex in $V$ as an SU and an edge in $E$ as a channel. For every $v\in V$, the spectrum map\xspace of $v$ is the set of edges incident to $v$. All SUs have antenna budget\xspace $\beta=2$. Denote this cognitive radio network\xspace by $\mathcal{I}$. We will prove that $G$ contains a Hamiltonian path if and only if $\mathcal{I}$ is connectable. First suppose $G$ contains a Hamiltonian path $P=v_1v_2\ldots v_n$, where $n=|V|$. Consider the following spectrum assignment\xspace of $\mathcal{I}$: for each $1\leq i\leq n$, let SU $v_i$ open the channels corresponding to the edges incident to $v_i$ in the path $P$. Thus all SUs open two channels except for $v_1$ and $v_n$ each of whom opens only one. For every $1\leq i\leq n-1$, $v_i$ and $v_{i+1}$ are connected through the channel (edge) $\{v_i,v_{i+1}\}$. Hence the realization graph of $\mathcal{I}$ under this spectrum assignment\xspace is connected. Now we prove the other direction. Assume that $\mathcal{I}$ is connectable. Fix a spectrum assignment\xspace under which the realization graph\xspace of $\mathcal{I}$ is connected, and consider this particular realization graph\xspace $\mathcal{RG}=(V,E')$. 
Let $\{v_i,v_j\}$ be an arbitrary edge in $E'$. By the definition of the realization graph\xspace, there is a channel opened by both $v_i$ and $v_j$. Thus there is an edge in $E$ incident to both $v_i$ and $v_j$, which can only be $\{v_i,v_j\}$. Therefore $\{v_i,v_j\}\in E$. This indicates $E'\subseteq E$, and hence $\mathcal{RG}$ is a connected spanning subgraph of $G$. Since each SU can open at most two channels, the maximum degree of $\mathcal{RG}$ is at most 2. Therefore $\mathcal{RG}$ is either a Hamiltonian path of $G$, or a Hamiltonian cycle which contains a Hamiltonian path of $G$. Thus, $G$ contains a Hamiltonian path. The reduction is complete and the theorem follows. \qed \end{proof} Notice that the reduction used in the proof of Theorem~\ref{thm:npc_complete} creates a cognitive radio network\xspace with an unbounded number of channels. Thus Theorem~\ref{thm:npc_complete} is not stronger than Theorem~\ref{thm:general} or \ref{thm:npc_2channels}. Recall that Theorem \ref{thm:npc_2channels} says the \textsc{Spectrum Connectivity}\xspace problem is {\rm NP}\xspace-complete even if there are only two channels. In contrast we will show that, with complete potential graphs\xspace, the problem is polynomial-time tractable when the number of channels is small. \begin{theorem}\label{thm:alg_complete} The \textsc{Spectrum Connectivity}\xspace problem with complete potential graphs\xspace can be solved in $2^{2^k+O(k)}n^{O(1)}$ time. \end{theorem} \begin{proof} Consider a cognitive radio network\xspace $\mathcal{I}$ with SU set $U$, channel set $C$ and a complete potential graph\xspace, i.e., there is a potential edge between every pair of distinct SUs. Recall that $n=|U|$ and $k=|C|$. 
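As an aside, the instance construction in the proof of Theorem~\ref{thm:npc_complete} is mechanical: channels are the edges of $G$, each vertex's spectrum map is its set of incident edges, and every antenna budget is 2. A minimal Python sketch (hypothetical representation: vertices as labels, edges as 2-element frozensets; the complete potential graph needs no explicit encoding):

```python
def hamiltonian_to_speccon(vertices, edges):
    """Build the SpecCon instance of the reduction: channels = edges of G,
    SpecMap(v) = edges incident to v, and antenna budget 2 for every SU."""
    spec_map = {v: {e for e in edges if v in e} for v in vertices}
    budget = {v: 2 for v in vertices}
    return spec_map, budget

# A path a-b-c: b is incident to both edges, a and c to one each.
sm, bd = hamiltonian_to_speccon(
    ['a', 'b', 'c'],
    [frozenset({'a', 'b'}), frozenset({'b', 'c'})])
```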
For each spectrum assignment\xspace $\mathcal{SA}$, we construct a corresponding \emph{spectrum graph} $\mathcal{G}_{\mathcal{SA}}=(V, E)$ where $V=\{C'\subseteq C~|~\exists u\in U \textrm{~s.t.~} \mathcal{SA}(u)=C'\}$ and $E=\{\{C_1,C_2\}~|~C_1,C_2\in V; C_1 \cap C_2 \neq \emptyset\}$. Thus, $V$ is the collection of subsets of $C$ that is opened by some SU, and $E$ reflects the connectivity between pairs of SUs that open the corresponding channels. Since each vertex in $V$ is a subset of $C$, we have $|V|\leq 2^{k}$, and the number of different spectrum graphs is at most $2^{2^{k}}$. We now present a relation between $\mathcal{G}_{\mathcal{SA}}$ and the realization graph of $\mathcal{I}$ under $\mathcal{SA}$. If we label each vertex $u$ in the realization graph with $\mathcal{SA}(u)$, and contract all edges between vertices with the same label, then we obtain precisely the spectrum graph $\mathcal{G}_{\mathcal{SA}}=(V, E)$. Therefore, in the language of graph theory, $\mathcal{G}_{\mathcal{SA}}=(V, E)$ is a minor of the realization graph under $\mathcal{SA}$. Since graph minor preserves connectivity, $\mathcal{I}$ is connectable if and only if there exists a connected spectrum graph. Hence we can focus on the problem of deciding whether a connected spectrum graph exists. Consider all possible graphs $G=(V,E)$ such that $V\subseteq 2^{C}$, and $E=\{\{C_1,C_2\}~|~C_1,C_2\in V; C_1 \cap C_2 \neq \emptyset\}$. There are $2^{2^k}$ such graphs each of which has size $2^{O(k)}$. Thus we can list all such graphs in $2^{2^k+O(k)}$ time. For each graph $G$, we need to check whether it is the spectrum graph of some spectrum assignment\xspace of $\mathcal{I}$. We create a bipartite graph $H$ in which the nodes on the left side are the SUs in $\mathcal{I}$, and the nodes on the right side are the vertices of $G$.
We add an edge between an SU $u$ and a vertex $C'$ of $G$ if and only if $C' \subseteq \textsc{SpecMap}(u)$ and $|C'| \leq \beta(u)$, that is, $u$ can open $C'$ in a spectrum assignment\xspace. The size of $H$ is poly($n,2^k$) and its construction can be finished in poly($n,2^k$) time. Now, if $G$ is the spectrum graph of some spectrum assignment\xspace $\mathcal{SA}$, then we can identify $\mathcal{SA}$ with a subgraph of $H$ consisting of all edges $(u, \mathcal{SA}(u))$ where $u$ is an SU. In addition, in this subgraph we have \begin{itemize} \item every SU $u$ has degree exactly one; and \item every node $C'$ on the right side of $H$ has degree at least one. \end{itemize} Conversely, a subgraph of $H$ satisfying the above two conditions clearly induces a spectrum assignment\xspace whose spectrum graph is exactly $G$. Therefore it suffices to examine whether $H$ contains such a subgraph. Furthermore, the above conditions are easily seen to be equivalent to: \begin{itemize} \item every SU $u$ has degree at least one in $H$; and \item $H$ contains a \emph{matching} that includes all nodes on the right side. \end{itemize} The first condition can be checked in time linear in the size of $H$, and the second one can be examined by any polynomial time algorithm for bipartite matching (e.g., \cite{bmatching}). Therefore, we can decide whether such subgraph exists (and find one if so) in time poly($n,2^k$). By our previous analyses, this solves the \textsc{Spectrum Connectivity}\xspace problem with complete potential graphs. The total running time of our algorithm is $2^{2^k+O(k)}{\rm poly}(n,2^k)=2^{2^k+O(k)}n^{O(1)}$. \qed \end{proof} \begin{theorem}\label{cor:fpt_complete} The \textsc{Spectrum Connectivity}\xspace problem with complete potential graphs\xspace is fixed parameter tractable (FPT) when parameterized by the number of channels.
\end{theorem} \section{Spectrum Connectivity on Trees and Bounded Treewidth Graphs}\label{sec:trees} In this section, we study another special case of the \textsc{Spectrum Connectivity}\xspace problem where the potential graph\xspace of the cognitive radio network\xspace is a tree. We will also investigate the problem on the class of bounded-treewidth graphs. Many {\rm NP}\xspace-hard combinatorial problems become easy on trees, e.g., the dominating set problem and the vertex cover problem. Nonetheless, as indicated by the following theorem, the \textsc{Spectrum Connectivity}\xspace problem remains hard on trees. \subsection{Trees} We state the complexity of the spectrum connectivity problem with trees as the potential graph\xspace in the following theorem. \begin{theorem}\label{thm:npc_tree} The \textsc{Spectrum Connectivity}\xspace problem is {\rm NP}\xspace-complete even if the potential graph\xspace is a tree of depth one. \end{theorem} \begin{proof} We give a reduction from the \textsc{Vertex Cover} problem which is well known to be {\rm NP}\xspace-complete \cite{book_npc}. Given a graph $G=(V,E)$ and an integer $r$, the \textsc{Vertex Cover} problem is to decide whether there exist $r$ vertices in $V$ that cover all the edges in $E$. Construct a cognitive radio network\xspace $\mathcal{I}$ as follows. The set of channels is $C=\{c_v~|~v\in V\}$. For each edge $e=\{u,v\}\in E$ there is an SU $U_{e}$ with $\textsc{SpecMap}(U_{e})=\{c_u,c_v\}$ and antenna budget\xspace 2. There is another SU $M$ with $\textsc{SpecMap}(M)=C$ and antenna budget\xspace $r$. The potential graph\xspace is a star centered at $M$, that is, there is a potential edge\xspace between $M$ and $U_{e}$ for every $e\in E$. This finishes the construction of $\mathcal{I}$. We prove that $G$ has a vertex cover of size $r$ if and only if $\mathcal{I}$ is connectable. First assume $G$ has a vertex cover $S \subseteq V$ with $|S|\leq r$.
Define a spectrum assignment\xspace $A(S)$ as follows: $M$ opens the channels $\{c_v~|~v\in S\}$, and $U_e$ opens both channels in its spectrum map\xspace for all $e\in E$. Since $S$ is a vertex cover, we have $u\in S$ or $v\in S$ for each $e=\{u,v\}\in E$. Thus at least one of $c_u$ and $c_v$ is opened by $M$, which makes it connected to $U_e$. Hence the realization graph\xspace is connected. On the other hand, assume that the realization graph\xspace is connected under some spectrum assignment\xspace. For each $e=\{u,v\}\in E$, since the potential edge\xspace $\{M,U_{e}\}$ is realized, $M$ opens at least one of $c_u$ and $c_v$. Now define $S=\{v\in V~|~c_v \textrm{~is opened by~}M\}$. It is clear that $S$ is a vertex cover of $G$ of size at most $\beta(M)=r$. This completes the reduction, and the theorem follows. \qed \end{proof} We next show that, in contrast to Theorems~\ref{thm:npc} and \ref{thm:npc_2channels}, this special case of the problem is polynomial-time solvable when the number of channels is small. \begin{theorem}\label{thm:alg_tree} Given a cognitive radio network\xspace whose potential graph\xspace is a tree, we can check whether it is connectable in $2^{O(t)}(kn)^{O(1)}$ time, where $t$ is the maximum size of any SU's spectrum map\xspace. In particular, this running time is at most $2^{O(k)}n^{O(1)}$. \end{theorem} \begin{proof} Let $\mathcal{I}$ be a given cognitive radio network\xspace whose potential graph\xspace $\mathcal{PG}=(V,E)$ is a tree. Root $\mathcal{PG}$ at an arbitrary node, say $r$. For each $v\in V$ let $\mathcal{PG}_v$ denote the subtree rooted at $v$, and let $\mathcal{I}_v$ denote the cognitive radio network\xspace obtained by restricting $\mathcal{I}$ on $\mathcal{PG}_v$. For every subset $S\subseteq \textsc{SpecMap}(v)$, define $f(v,S)$ to be 1 if there exists a spectrum assignment\xspace that makes $\mathcal{I}_v$ connected in which the set of channels opened by $v$ is exactly $S$; let $f(v,S)=0$ otherwise. 
For each channel $c\in C$, define $g(v,c)$ to be 1 if there exists $S$, $\{c\}\subseteq S \subseteq \textsc{SpecMap}(v)$, for which $f(v,S)=1$; define $g(v,c)=0$ otherwise. Clearly $\mathcal{I}$ is connectable if and only if there exists $S\subseteq \textsc{SpecMap}(r)$ such that $f(r,S)=1$. We compute all $f(v,S)$ and $g(v,c)$ by dynamic programming in a bottom-up manner. Initially all values are set to 0. The values for leaf nodes are easy to obtain. Assume we want to compute $f(v,S)$, given that the values of $f(v',S')$ and $g(v',c)$ are all known if $v'$ is a child of $v$. Then $f(v,S)=1$ if and only if for every child $v'$ of $v$, there exists $c\in S$ such that $g(v',c)=1$ (in which case $v$ and $v'$ are connected through channel $c$). If $f(v,S)$ turns out to be 1, we set $g(v,c)$ to 1 for all $c\in S$. It is easy to see that $g(v,c)$ will be correctly computed after the values of $f(v,S)$ are obtained for all possible $S$. After all values have been computed, we check whether $f(r,S)=1$ for some $S\subseteq \textsc{SpecMap}(r)$. Recall that $n=|V|$, $k=|C|$, and denote $t=\max_{v\in V}|\textsc{SpecMap}(v)|$. There are at most $n(2^{t}+k)$ terms to be computed, each of which takes time ${\rm poly}(n,k)$ by our previous analysis. The final checking step takes $2^{t}{\rm poly}(n,k)$ time. Hence the total running time is $2^{t}{\rm poly}(n,k)=2^{t}(kn)^{O(1)}$, which is at most $2^{O(k)}n^{O(1)}$ since $t\leq k$. Finally note that it is easy to modify the algorithm so that, given a connectable network it will return a spectrum assignment\xspace that makes it connected. \qed \end{proof} \begin{corollary}\label{cor:fpt_tree} The \textsc{Spectrum Connectivity}\xspace problem with trees as potential graphs\xspace is fixed parameter tractable when parameterized by the number of channels. \end{corollary} \subsection{Bounded Treewidth Graphs} In this part we deal with another class of potential graphs, namely the class of graphs with bounded treewidth.
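Before moving on, the bottom-up dynamic program in the proof of Theorem~\ref{thm:alg_tree} can be summarized by the following toy Python sketch (all identifiers hypothetical; the rooted tree is given as child lists, `spec_map[v]` is a set of channels, and `budget[v]` an integer):

```python
from itertools import combinations

def tree_connectable(tree_children, root, spec_map, budget):
    """f[v][S]: subtree of v is connectable with v opening exactly S.
    g[v]: channels c contained in some S with f[v][S] == True."""

    def subsets(v):
        m = sorted(spec_map[v])
        for r in range(min(budget[v], len(m)) + 1):
            for comb in combinations(m, r):
                yield frozenset(comb)

    f, g = {}, {}

    def solve(v):
        children = tree_children.get(v, [])
        for w in children:          # bottom-up: children first
            solve(w)
        f[v], g[v] = {}, set()
        for S in subsets(v):
            # v must share an opened channel with each child's subtree
            ok = all(any(c in g[w] for c in S) for w in children)
            f[v][S] = ok
            if ok:
                g[v] |= S

    solve(root)
    return any(f[root].values())
```

For example, a star rooted at `r` with leaves that each accept a single distinct channel is connectable exactly when `r`'s budget covers all those channels.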
Our main result is the following theorem, which generalizes Theorem~\ref{thm:alg_tree} as a tree has treewidth one. \begin{theorem}\label{thm:alg_treewidth} There is an algorithm that, given a cognitive radio network\xspace whose potential graph\xspace has bounded treewidth, checks whether it is connectable in $2^{O(k)}n^{O(1)}$ time. \end{theorem} \begin{proof} Suppose we are given a cognitive radio network\xspace $\mathcal{I}$ with potential graph\xspace $G=(V,E)$, which has treewidth $\mathsf{tw}=O(1)$. Let $(T=(I,F),\{X_i~|~i\in I\})$ be a nice tree decomposition of $G$ of width $\mathsf{tw}$ (see Section~\ref{subsec:treedecomp} for the related notions). Recall that $T$ is a rooted binary tree with $O(|V|)$ nodes and can be found in polynomial time. Let $r$ be the root of $T$. For every non-leaf node $i$ of $T$, let $i_L$ and $i_R$ be the two children of $i$. (We can always add dummy leaf-nodes to make every non-leaf node have exactly two children, which at most doubles the size of $T$.) For each $i\in I$, define $$Y_i:=\{v\in X_j~|~j=i \textrm{~or~} j \textrm{~is a descendant of~}i\},$$ and let $\mathcal{I}_i$ be a new instance of the problem that is almost identical to $\mathcal{I}$ except that we replace the potential graph\xspace with $G[Y_i]$, i.e., the subgraph of $G$ induced on the vertex set $Y_i \subseteq V$. For each $i\in I$, suppose $X_i=\{v_1,v_2,\ldots,v_{t}\}$ where $t=|X_i|$ and $v_j\in V$ for all $1\leq j\leq t$. For each tuple $(S_1,S_2,\ldots,S_t)$ such that $S_j \subseteq \textsc{SpecMap}(v_j)$ for all $1\leq j\leq t$, we use a Boolean variable $\mathcal{B}_i(S_1, S_2,\ldots,S_t)$ to indicate whether there exists a spectrum assignment $\mathcal{SA}_i$ that makes $\mathcal{I}_i$ connected such that $\mathcal{SA}_i(v_j)=S_j$ for all $1\leq j\leq t$. Notice that for each $i$, the number of such variables is at most $(2^k)^{|X_i|}\leq 2^{k\cdot \mathsf{tw}}$, and we can list them in $2^{O(k\cdot \mathsf{tw})}$ time.
Initially all variables are set to FALSE. Assume $X_r=\{w_1,w_2,\ldots,w_{|X_r|}\}$ (recall that $r$ is the root of $T$). Then, clearly, deciding whether $\mathcal{I}$ is connectable is equivalent to checking whether there exists $(S_1,S_2,\ldots,S_{|X_r|})$, where $S_j\subseteq \textsc{SpecMap}(w_j)$ for all $1\leq j\leq |X_r|$, such that $\mathcal{B}_r(S_1,S_2,\ldots,S_{|X_r|})$ is TRUE. We will compute the values of all possible $\mathcal{B}_i(S_1,S_2,\ldots,S_t)$ by dynamic programming. For each leaf node $l$, we can compute the values of all the variables related to $\mathcal{I}_l$ in $2^{O(k\cdot \mathsf{tw})}n^{O(1)}$ time by brute force. Now suppose we want to decide the value of $\mathcal{B}_i(S_1,S_2,\ldots,S_{|X_i|})$ for some non-leaf node $i$, provided that the variables related to the children of $i$ have all been correctly computed. Recall that $i_L$ and $i_R$ are the two children of $i$. We define: \begin{itemize} \item $NEW=X_i \setminus (Y_{i_L}\cup Y_{i_R})$; \item $OLD=X_i \setminus NEW = X_i \cap (Y_{i_L}\cup Y_{i_R})$; \item $Z_L=Y_{i_L} \setminus X_i$, and $Z_R = Y_{i_R} \setminus X_i$. \end{itemize} It is clear that $Y_i = NEW \cup OLD \cup Z_L \cup Z_R$. By using the properties of a tree decomposition, we have the following fact: \begin{lemma}\label{lem:disjoint} $NEW$, $Z_L$, and $Z_R$ are three pairwise disjoint subsets of $V$, and there is no edge of $G$ whose endpoints lie in different subsets. \end{lemma} \begin{proof} Since $NEW \subseteq X_i$ and $Z_L=Y_{i_L}\setminus X_i$, we have $NEW \cap Z_L=\emptyset$, and similarly $NEW \cap Z_R = \emptyset$. Assume that $Z_L \cap Z_R \neq \emptyset$, and let $v\in Z_L \cap Z_R$. Since $Z_L \subseteq Y_{i_L}$ and $Z_R \subseteq Y_{i_R}$, we have $v\in Y_{i_L} \cap Y_{i_R}$. By the definition of a tree decomposition, $v\in X_i$, so $v \in X_i \cap Z_L = X_i \cap (Y_{i_L}\setminus X_i)=\emptyset$, a contradiction. Therefore $Z_L \cap Z_R=\emptyset$.
This proves the pairwise disjointness of the three sets. Now assume that there exists an edge $e=(u,v)\in E$ such that $u\in Z_L$ and $v\in Z_R$. Then, by the definition of a tree decomposition, there exists $p \in I$ such that $\{u,v\}\subseteq X_p$. We know that $p\neq i$. So there are three possibilities: $p$ lies in the subtree rooted at $X_{i_L}$, or in the subtree rooted at $X_{i_R}$, or it is not in the subtree rooted at $X_i$. It is easy to verify that, in each of the three cases, we can find a path that connects two tree nodes both containing $u$ (or $v$) and goes through $i$, which implies $u \in X_i$ or $v\in X_i$ by the property of a tree decomposition. This contradicts our previous result. Thus there is no edge with one endpoint in $Z_L$ and the other in $Z_R$. Similarly, we can prove that there exists no edge with one endpoint in $NEW$ and the other in $Z_L$ or $Z_R$. This completes the proof of the lemma. \qed \end{proof} We now continue the proof of Theorem~\ref{thm:alg_treewidth}. Recall that we want to decide $\mathcal{B}_i(S_1,S_2,\ldots,S_{|X_i|})$, i.e., whether $\mathcal{I}_i$, the network with $G[Y_i]$ as the potential graph\xspace, is connectable under some spectrum assignment $\mathcal{SA}$ such that $\mathcal{SA}(v_j)=S_j$ for all $1\leq j\leq |X_i|$ (we assume that $X_i=\{v_1,v_2,\ldots,v_{|X_i|}\}$). Note that $Y_i = NEW \cup OLD \cup Z_L \cup Z_R$. Due to Lemma~\ref{lem:disjoint}, the three subsets $NEW$, $Z_L$ and $Z_R$ can only be connected through $OLD$ (that is, we can think of $OLD$ as an ``intermediate'' set).
Therefore, for any spectrum assignment $\mathcal{SA}$ such that $\mathcal{SA}(v_j)=S_j$ for all $j$, $\mathcal{I}_i$ is connected under $\mathcal{SA}$ if and only if the following three conditions simultaneously hold: \begin{itemize} \item $G[X_i]$ is connected under $\mathcal{SA}$; \item $\mathcal{B}_{i_L}(S'_1,\ldots,S'_{|X_{i_L}|})$ is TRUE for some $(S'_1,\ldots,S'_{|X_{i_L}|})$ that accords with $(S_1,\ldots,S_{|X_i|})$, i.e., the two vectors coincide on any component corresponding to a vertex in $X_i \cap X_{i_L}$; \item $\mathcal{B}_{i_R}(S'_1,\ldots,S'_{|X_{i_R}|})$ is TRUE for some $(S'_1,\ldots,S'_{|X_{i_R}|})$ that accords with $(S_1,\ldots,S_{|X_i|})$, i.e., the two vectors coincide on any component corresponding to a vertex in $X_i \cap X_{i_R}$. \end{itemize} The first condition above can be checked in polynomial time, and the last two conditions can be verified in $2^{O(k\cdot \mathsf{tw})}n^{O(1)}$ time. Thus the time spent on determining $\mathcal{B}_i(S_1,\ldots,S_{|X_i|})$ is $2^{O(k\cdot \mathsf{tw})}n^{O(1)}$. After all such terms have been computed, we can get the correct answer by checking whether there exists $(S_1,\ldots,S_{|X_r|})$ such that $\mathcal{B}_r(S_1,\ldots,S_{|X_r|})$ is TRUE, which costs another $2^{O(k\cdot \mathsf{tw})}n^{O(1)}$ time. Since there are at most $O(|V|)=O(n)$ nodes in $T$, the total running time of the algorithm is $2^{O(k\cdot \mathsf{tw})}n^{O(1)}=2^{O(k)}n^{O(1)}$ as $\mathsf{tw}=O(1)$. The proof is complete. \qed \end{proof} \begin{corollary}\label{cor:fpt_treewidth} The \textsc{Spectrum Connectivity}\xspace problem on bounded treewidth graphs is fixed parameter tractable when parameterized by the number of channels. \end{corollary} \section{Conclusion and Future Work}\label{sec:con} In this paper, we initiate a systematic study on the algorithmic complexity of the connectivity problem in cognitive radio networks through spectrum assignment.
The hardness of the problem in the general case and several special cases is addressed, and exact algorithms are also derived to check whether the network is connectable. In some applications, when the given cognitive radio network\xspace is not connectable, we may want to connect the largest subset of the secondary users. This optimization problem is NP-hard, since the decision version is already NP-complete on very restricted instances. Thus it is interesting to design polynomial time approximation algorithms for this optimization problem. In some other scenarios, we may wish to connect all the secondary users but keep the antenna budget\xspace as low as possible. That is, we want to find the smallest $\beta$ such that there exists a spectrum assignment\xspace\ connecting the graph in which each SU opens at most $\beta$ channels. It is easy to see that this problem generalizes the minimum-degree spanning tree problem \cite{book_npc}, which asks to find a spanning tree of a given graph in which the maximum vertex degree is minimized. The latter problem is NP-hard, but there is a polynomial time algorithm that finds a spanning tree of degree at most one more than the optimum \cite{DBLP:journals/jal/FurerR94}. It would be interesting to see whether this algorithm can be generalized to the min-budget version of our connectivity problem, or whether we can at least obtain constant factor approximations. Another meaningful extension of this work is to design distributed algorithms to achieve network connectivity. Moreover, due to interference in wireless communications, connected nodes using the same channel may not be able to communicate simultaneously. Therefore, it is also interesting to investigate distributed algorithms with channel assignment and link scheduling jointly considered to achieve network objectives such as connectivity and capacity maximization, especially under realistic interference models.
\section*{Acknowledgements} The authors would like to thank Dr. Thomas Moscibroda at Microsoft Research Asia for introducing the original problem. This work was supported in part by the National Basic Research Program of China Grants 2011CBA00300 and 2011CBA00302, and the National Natural Science Foundation of China Grants 61073174, 61103186, 61202360, 61033001, and 61061130540. \bibliographystyle{abbrv} \section{Introduction} Cognitive Radio is a promising technology to alleviate the spectrum shortage in wireless communication. It allows the unlicensed \emph{secondary users} to utilize the temporarily unused licensed spectrums, referred to as \emph{white spaces}, without interfering with the licensed \emph{primary users}. Cognitive Radio Networks (CRNs) are considered the next generation of communication networks and have recently attracted extensive research from both academia and industry. In CRNs, each secondary user (SU) can be equipped with one or multiple antennae for communication. With multiple antennae, an SU can communicate on multiple channels simultaneously (in this paper, channel and spectrum are used interchangeably). Through spectrum sensing, each SU has the capability to measure the currently available channels at its site, i.e., the channels not used by the primary users (PUs). Due to the appearance of PUs, the available channels of SUs have the following characteristics~\cite{sigcomm09Bahl}: \begin{itemize} \item \emph{Spatial Variation}: SUs at different positions may have different available channels; \item \emph{Spectrum Fragmentation}: the available channels of an SU may not be contiguous; and \item \emph{Temporal Variation}: the available channels of an SU may change over time. \end{itemize} Spectrum assignment allocates available channels to SUs to improve system performance metrics such as spectrum utilization, network throughput, and fairness.
Spectrum assignment is one of the most challenging problems in CRNs and has been extensively studied, e.g., in~\cite{infocom12Li,suveryWang,mobihoc12Huang,mobihoc07Yuan,auctionZhou}. Connectivity is a fundamental problem in wireless communication. Whether two nodes in a CRN are connected depends not only on their distance and transmission powers, but also on whether the two nodes have chosen a common channel. Due to the spectrum dynamics, communication in CRNs is more difficult than in the traditional multi-channel radio networks studied in~\cite{discDolev}. The authors of~\cite{Infocom12Lu,CoRoNetRen,JsacRen} investigated the impact of different parameters on connectivity in large-scale CRNs, such as the number of channels, the activity of PUs, the number of neighbors of SUs, and the transmission power. \begin{figure}[t] \begin{minipage}[b]{0.5\textwidth} \centering \includegraphics[width=0.9\textwidth]{general} \end{minipage} \begin{minipage}[b]{0.5\textwidth} \centering \includegraphics[width=0.9\textwidth]{general2} \end{minipage} \caption{The general case.
a) The potential graph: the set beside each SU is its set of available channels, and $\beta$ is its number of antennae. $u_2$ and $u_4$ are not connected directly because they are a pair of heterogeneous nodes or their distance exceeds at least one of their transmission ranges. b) The realization graph, which is connected: the set beside each SU is the set of channels assigned to it. } \label{fig:general} \end{figure} In this paper, we initiate the first systematic study on the complexity of connectivity in CRNs through spectrum assignment. We model the network as a potential graph and a realization graph, before and after spectrum assignment respectively (refer to Section~\ref{sec:def}). We start from the most general case, where the network is composed of heterogeneous SUs\footnote{We assume two heterogeneous SUs cannot communicate even when they work on a common channel and their distance is within their transmission ranges.}, SUs may be equipped with different numbers of antennae, and the potential graph can be arbitrary (Figure~\ref{fig:general}). Then, we proceed to study the special case when all the SUs have the same number of antennae. If all the SUs are homogeneous with transmission ranges large enough, the potential graph will be a complete graph. For some hierarchically organized networks, e.g., a set of SUs connected to an access point, the potential graph can be a tree. Therefore, we also study these special cases. Exact algorithms are also derived to determine connectivity for the different cases. Our results are listed below. To the best of our knowledge, this is the first work that systematically studies the algorithmic complexity of connectivity in CRNs with multiple antennae. \paragraph{Our Contributions:} In this paper we study the algorithmic complexity of the connectivity problem through spectrum assignment under different models. Our main results are as follows.
\begin{itemize} \item When the potential graph is a general graph, we prove that the problem is NP-complete even if there are only two channels. This result is sharp as the problem is polynomial-time solvable when there is only one channel. We also design exact algorithms for the problem. For the special case when all SUs have the same number of antennae, we prove that the problem is NP-complete when $k>\beta\geq 2$, where $k$ and $\beta$ are the total number of channels in the white spaces and the number of antennae on an SU, respectively. \item When the potential graph is complete,\footnote{The complete graph is a special case of disk graphs, which are commonly used to model wireless networks such as in~\cite{wireless08Kuhn,jsac12Du}.} the problem is shown to be NP-complete even if each node can open at most two channels. However, in contrast to the general case, the problem is shown to be polynomial-time solvable if the number of channels is fixed. In fact, we prove a stronger result saying that the problem is fixed parameter tractable when parameterized by the number of channels. (See \cite{book_para} for notions in parameterized complexity.) \item When the potential graph is a tree, we prove that the problem is NP-complete even if the tree has depth one. Similar to the complete graph case, we show that the problem is fixed parameter tractable when parameterized by the number of channels. We then generalize this result, showing that the problem remains fixed parameter tractable when parameterized by the number of channels if the underlying potential graph has bounded treewidth. \end{itemize} \paragraph{Paper Organization:} In Section~\ref{sec:def} we formally define our model and the problems studied in this paper. We study the problem with arbitrary potential graphs in Section~\ref{sec:general}. The special cases where the potential graph is complete or a tree are investigated in Sections~\ref{sec:complete} and \ref{sec:trees}.
The paper is concluded in Section~\ref{sec:con} with possible future works. \section{Preliminaries}\label{sec:def} \subsection{System Model and Problem Definition}\label{subsec:def} We first describe the model used throughout this paper. A \emph{cognitive radio network\xspace~}consists of the following ingredients: \begin{itemize} \item $U$ is a collection of secondary users (SUs) and $C$ is the set of channels in the white spaces. \item Each SU $u\in U$ has a \emph{spectrum map\xspace,} denoted by \textsc{SpecMap}($u$), which is a subset of $C$ representing the available channels that $u$ can open. \item The \emph{potential graph\xspace~}$\mathcal{PG}=(U,E)$, where each edge of $E$ is also called a \emph{potential edge}. If two nodes are connected by a potential edge, they can communicate as long as they choose a common available channel. \item Each SU $u \in U$ is equipped with a number of antennae; its \emph{antenna budget\xspace~}$\beta(u)$ is the maximum number of channels that $u$ can open simultaneously. \end{itemize} For a set $S$, let $2^{S}$ denote the power set of $S$, i.e., the collection of all subsets of $S$. A \emph{spectrum assignment\xspace~}is a function $\mathcal{SA}: U \rightarrow 2^{C}$ satisfying that $$\mathcal{SA}(u) \subseteq \textsc{SpecMap}(u) \textrm{~and~} |\mathcal{SA}(u)| \leq \beta(u) \textrm{~for all~} u\in U.$$ Equivalently, a spectrum assignment\xspace is a way of SUs opening channels such that each SU $u$ opens at most $\beta(u)$ channels and can only open those in its spectrum map\xspace. Given a spectrum assignment\xspace $\mathcal{SA}$, a potential edge\xspace $\{u,v\}\in E$ is called \emph{realized~}if $\mathcal{SA}(u)\cap \mathcal{SA}(v)\neq \emptyset$, i.e., there exists a channel opened by both $u$ and $v$. The \emph{realization graph\xspace~}under a spectrum assignment\xspace is a graph $\mathcal{RG}=(U,E')$, where $E'$ is the set of realized edges in $E$.
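To illustrate these definitions, checking whether a given spectrum assignment\xspace makes the network connected amounts to a graph search over the realized edges. A minimal Python sketch (all names are ours):

```python
def realization_graph(potential_edges, sa):
    # A potential edge {u,v} is realized iff SA(u) and SA(v) share a channel.
    return [(u, v) for (u, v) in potential_edges if sa[u] & sa[v]]

def connected_under(users, potential_edges, sa):
    # Depth-first search over realized edges only; the network is connected
    # under SA iff the search reaches every SU.
    adj = {u: [] for u in users}
    for u, v in realization_graph(potential_edges, sa):
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {users[0]}, [users[0]]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(users)
```

For a path $a - b - c$, the assignment $\{a\mapsto\{1\}, b\mapsto\{1,2\}, c\mapsto\{2\}\}$ realizes both edges, whereas dropping channel 2 at $b$ leaves $c$ disconnected.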
Note that $\mathcal{RG}$ is a spanning subgraph of the potential graph\xspace $\mathcal{PG}$. A cognitive radio network\xspace is called \emph{connectable~}if there exists a spectrum assignment\xspace under which the realization graph\xspace is connected, in which case we also say that the cognitive radio network\xspace is \emph{connected} under this spectrum assignment\xspace. Now we can formalize the problems studied in this paper. \vspace{1.5mm} \noindent\textbf{The \textsc{Spectrum Connectivity}\xspace Problem}. The \textsc{Spectrum Connectivity}\xspace problem is to decide whether a given cognitive radio network\xspace is connectable. \vspace{1.5mm} We are also interested in the special case where the number of possible channels is small\footnote{Commonly, the white spaces include the spectrums from channel 21 (512MHz) to channel 51 (698MHz), excluding channel 37, i.e., 29 channels in total~\cite{sigcomm09Bahl}.} and all SUs have the same antenna budget. Therefore, we define the following subproblem of the \textsc{Spectrum Connectivity}\xspace problem: \vspace{1.5mm} \noindent\textbf{The Spectrum $(k,\beta)$-Connectivity Problem}. For two constants $k,\beta\geq 1$, the \textsc{Spectrum} $(k,\beta)$-\textsc{Connectivity} problem is to decide whether a given cognitive radio network\xspace with $k$ channels in which all SUs have the same budget $\beta$ is connectable. For convenience we write \textsc{SpecCon}($k,\beta$) to represent this problem. \vspace{1.5mm} Finally, we also consider the problem with special kinds of potential graphs\xspace, i.e., the potential graph is complete or a tree. In the sequel, unless otherwise stated, we always use $n:=|U|$ and $k:=|C|$ to denote the number of secondary users and the number of channels, respectively. \subsection{Tree Decomposition}\label{subsec:treedecomp} In this subsection we give some basic notions regarding the tree decomposition of a graph, which will be used later.
The concept of treewidth was introduced by Robertson and Seymour in their seminal work on graph minors \cite{rs_ja86}. A {\em tree decomposition} of a graph $G=(V,E)$ is given by a tuple $(T=(I,F), \{X_i~|~i\in I\})$, where $T$ is a tree and each $X_i$ is a subset of $V$ called a {\em bag}, satisfying that \begin{itemize} \item $\bigcup_{i\in I} X_i = V$; \item For each edge $\{u,v\}\in E$, there exists a tree node $i$ with $\{u,v\} \subseteq X_i$; \item For each vertex $u\in V$, the set of tree nodes $\{i\in I~|~u\in X_i\}$ forms a connected subtree of $T$. Equivalently, for any three tree nodes $t_1,t_2,t_3 \in I$ such that $t_2$ lies on the path from $t_1$ to $t_3$, it holds that $X_{t_1} \cap X_{t_3} \subseteq X_{t_2}$. \end{itemize} The {\em width} of the tree decomposition is $\max_{i\in I} \{|X_i|-1\}$, and the {\em treewidth} of a graph $G$ is the minimum width of a tree decomposition of $G$. For each fixed integer $d$, there is a polynomial time algorithm that decides whether a given graph has treewidth at most $d$, and if so, constructs a tree decomposition of width at most $d$ \cite{bod_sjc96}. Such a decomposition can easily be transformed into a \emph{nice tree decomposition} $(T,\{X_i\})$ of $G$ with the same width, in which $T$ is a rooted binary tree with at most $O(|V|)$ nodes (see e.g. \cite{kloks94}).
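The three conditions above are mechanical to verify for a candidate decomposition; a small Python sketch (names are ours):

```python
def is_tree_decomposition(vertices, edges, tree_adj, bags):
    """Check the three defining conditions: (1) bags cover all vertices,
    (2) every edge lies inside some bag, and (3) for each vertex the set
    of tree nodes whose bags contain it induces a connected subtree."""
    if set().union(*bags.values()) != set(vertices):
        return False
    if not all(any({u, v} <= bag for bag in bags.values()) for u, v in edges):
        return False
    for v in vertices:
        nodes = {i for i, bag in bags.items() if v in bag}
        start = next(iter(nodes))
        seen, stack = {start}, [start]
        while stack:
            for j in tree_adj[stack.pop()]:
                if j in nodes and j not in seen:
                    seen.add(j)
                    stack.append(j)
        if seen != nodes:
            return False
    return True

def width(bags):
    # Width of the decomposition: max bag size minus one.
    return max(len(bag) for bag in bags.values()) - 1
```

For the path $a - b - c$, the two-bag decomposition $X_1=\{a,b\}$, $X_2=\{b,c\}$ with tree edge $\{1,2\}$ is valid and has width one.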
\section{Introduction}\label{sec:intro} Competitive online optimization is a fundamental tool for decision making under uncertainty. We have witnessed its wide applications spreading from EV charging~\cite{zhao2017robust,alinia2018competitive,lin2021minimizing,wang2018electrical}, micro-grid operations~\cite{lu2013online,zhang2016peak}, energy storage scheduling~\cite{mo2021eEnergy,mo2021infocom} to data center provisioning~\cite{lin2012dynamic,lu2012simple}, network optimization~\cite{shi2018competitive,guo2018joint}, and beyond. Theoretically, there are multiple paradigms of general interest in the online optimization literature. Typical examples include the online covering and packing problem~\cite{buchbinder2009design}, online matching~\cite{TCS-allocation}, online knapsack packing~\cite{zhou2008budget}, the one-way trading problem~\cite{el2001optimal}, and online optimization with switching cost~\cite{lin2012online,bansal20152}. Among these applications and paradigms, we focus on optimizing the allocation of limited resources, like inventories, budgets, or power, across a multi-round decision period with dynamic per-round revenues and allocation conditions. For example, in an online ads display platform (e.g., Google search), the operator allocates display slots to multiple advertisers (\emph{cf.} inventories) with contracts on the maximum number (\emph{cf.} capacity) of impressions~\cite{Feldman2009Online}. At each round, the real-time search reveals the total number of available display slots and the slots of interest to each advertiser (\emph{cf.} allocation constraint). It also reveals the payoff of each advertiser from impressions at the display slots (\emph{cf.} revenue function). The revenue of an advertiser depends on the number of obtained impressions and the quality of each impression, which relates to dynamic factors like the click rate and engagement rate~\cite{Devanur2012Concave}.
The above observations motivate us to study an important paradigm~--~the competitive online optimization problem under multiple inventories (\probname), where a decision-maker with multiple inventories of fixed capacities seeks to maximize the per-round separable revenue function by optimizing the inventory allocation at each round. The decision-maker further faces two allocation constraints at each round: the allowance constraint that limits the total allocation across inventories at the round, and the rate constraint that limits the allocation of each inventory at the round. The problem has two main challenges. First, as in online optimization under a (single) inventory constraint~\cite{cr_pursuit2018,Sun2020Competitive}, the decision-maker does not have access to future revenue functions, while the limited capacity of each inventory couples the online decisions regarding that inventory across time. Second, the allocation constraints couple the allocation decisions across the multiple inventories at each round. The combination of the allowance constraint and the rate limit constraint appears frequently and poses known challenges in online matching and allocation problems~\cite{azar2006maximizing,KALYANASUNDARAM20003bmatching,TCS-allocation,Deng2018Capacity}. In the literature, the authors in~\cite{ma2020algorithms,Sun2020Competitive} tackle the problem using the well-established online primal-dual framework~\cite{buchbinder2009design,Devanur2013Randomized,niazadeh2013unified}. They design a threshold function for each inventory with regard to the allocated amount, which can be viewed as the marginal cost of the inventory. They then greedily allocate the inventory at each round by maximizing the pseudo-revenue function, defined as the difference between the revenue function and the threshold function. In contrast, we propose, in this paper, a divide-and-conquer approach to the online optimization problem with multiple inventories.
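To make the threshold idea concrete, here is a minimal single-inventory sketch for linear per-slot revenues $g_t(v)=p_t v$, using the classic one-way-trading threshold $\phi(w)=p_{\min}$ on $[0,C/\alpha]$ and $\phi(w)=p_{\min}e^{\alpha w/C-1}$ afterwards, with $\alpha=1+\ln\theta$. This is a standard choice from that literature and not necessarily the exact function used in~\cite{ma2020algorithms,Sun2020Competitive}; all names are ours:

```python
import math

def threshold_step(price, w, C, p_min, p_max, rate):
    """One greedy step of a threshold-based algorithm: with linear revenue
    price*v, maximizing the pseudo-revenue price*v - int_w^{w+v} phi(x) dx
    pushes the cumulative allocation w up to phi^{-1}(price), capped by the
    per-slot rate limit and the remaining capacity C - w."""
    alpha = 1 + math.log(p_max / p_min)
    if price < p_min:
        return 0.0
    # Inverse of phi on the exponential part; equals C/alpha at price = p_min.
    target = C * (1 + math.log(price / p_min)) / alpha
    return min(max(target - w, 0.0), rate, C - w)
```

With $p_{\min}=1$, $p_{\max}=e$, and $C=2$ (so $\alpha=2$), a slot at price $p_{\min}$ fills the inventory up to $C/\alpha=1$, and a later slot at price $p_{\max}$ fills the rest.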
Our approach is novel and provides additional insights into the problem. It allows us to separate the two challenges of the problem: 1) the online allocation for each inventory subject to the limited inventory capacity and unknown future revenue functions, and 2) the coupled allocation among multiple inventories due to the allowance constraint at each round. In the following, we summarize our contributions. First, in Sec.~\ref{sec:single}, we generalize the \textsf{CR-Pursuit($\pi$)}~algorithm~\cite{cr_pursuit2018} to tackle the single inventory case, \textsf{OOIC-S}, which is an important component in our divide-and-conquer approach. We show that it achieves the optimal competitive ratio (CR) among all online algorithms for \textsf{OOIC-S}. Second, in Sec.~\ref{sec:online}, we propose a divide-and-conquer approach to designing online algorithms for online optimization under multiple inventories and dynamic allocation constraints. By adding an allowance allocation step, we decompose the multiple inventory problem into several single inventory problems with a small optimality loss in terms of CR. We show that when the revenue functions have bounded gradients, the CR achieved by our algorithm is optimal when the number of inventories is small, and is within an additive constant of the lower bound on the best possible CR among all online algorithms for an arbitrary number of inventories. Third, in Sec.~\ref{sec:discussion-general-application}, we discuss generalizations of our proposed approach to broader classes of revenue functions. We provide a sufficient condition for applying our online algorithm and derive the corresponding CR it can achieve. For example, we consider revenue functions capturing the one-way trading problem with price elasticity, where only results for the single inventory case are available in the existing literature~\cite{Sun2020Competitive,cr_pursuit2018}.
We show that our approach yields an online algorithm that achieves the optimal CR up to a constant factor. Finally, our results in Sec.~\ref{sec:Step-I} generalize the online allocation maximization problem in~\cite{azar2006maximizing} and the online allocation with free disposal problem in~\cite{Feldman2009Online} by admitting allowance augmentation in online algorithms, which is of independent interest. We show that we can improve the CR from~$\frac{e}{e-1}$ to~$\frac{1}{\pi}\cdot \frac{e^{\frac{1}{\pi}}}{e^{\frac{1}{\pi}}-1}$ when our online algorithms are endowed with $\pi$-times augmentation in allowance and allocation rate at each round. \section{Related Work}\label{sec:related} \input{figs/related-tab-cmp.tex} We focus on the competitive online optimization problem with multiple inventories and dynamic allocation constraints. Our problem generalizes several well-studied online problems, including the one-way trading problem~\cite{el2001optimal}, online optimization under a single inventory constraint~\cite{cr_pursuit2018,Sun2020Competitive}, and the online fractional matching problem~\cite{KALYANASUNDARAM20003bmatching,azar2006maximizing}. Our results also recover the optimal CRs in these settings. Our problem is studied in~\cite{Sun2020Competitive}, and a discrete counterpart is studied in~\cite{ma2020algorithms}. Compared with the existing studies, we propose a novel divide-and-conquer approach. We show that our approach achieves a close-to-optimal CR, which notably matches the lower bound when the number of inventories is relatively small. We summarize the related literature on online optimization under inventory constraints in Table~\ref{tab:related}. Our problem also covers a fractional version of the online ads display problem~\cite{TCS-allocation}, which is an online matching problem with vertex capacities and edge values. No positive result is possible when the values are unbounded~\cite{Feldman2009Online}.
In~\cite{Feldman2009Online}, the authors consider a ``free disposal'' model, i.e., the online decision maker can remove previously allocated edges without any cost (but cannot re-select a removed edge). Here, we instead consider the case where the values of all edges are bounded in a pre-known positive range and no removal of past decisions is allowed. We are interested in how the CR of an online algorithm behaves with respect to the uncertainty range of the values. Interestingly, by our divide-and-conquer approach, we can extend the results in the ``free disposal'' model to the irremovable setting. Also, we provide additional insights and results for the problem when considering an online augmentation scenario in which the online decision maker has a larger allowance and allocation rate at each round; see Sec.~\ref{sec:Step-I} for more details.
\section{Problem Formulation}\label{sec:problem}
In this section, we formulate the optimal allocation problem with multiple inventories. We discuss the practical online scenario and the performance metric for online algorithms. We further discuss the state of the art for the problem. We summarize the main notation in Table~\ref{tab:notation}.
\begin{table}[t]
\centering
\caption{Notation Table.}
\label{tab:notation}
\begin{tabular}{|l|l|}
\hline
\textbf{Notation} & \textbf{Meaning} \\ \hline \hline
$g_{i,t}(\cdot)$ & Revenue function of inventory $i$ at slot $t$ \\
$v_{i,t}$ & Allocation of inventory $i$ at slot $t$ \\
$C_i$ & Capacity of inventory $i$ \\
$A_t$ & Allowance of total allocation among all inventories at slot $t$\\
$\delta_{i,t}$ & The maximum allocation of inventory $i$ at slot $t$\\
$\theta$ & \specialcell{$\theta=p_{\max}/p_{\min}$, where \\$[p_{\min},p_{\max}]$ is the range of the gradients of the revenue functions}\\ \hline
$\hat{a}_{i,t}$ & Online allowance allocation to inventory $i$ at slot $t$\\
$\hat{v}_{i,t}$ & Online inventory allocation of inventory $i$ at slot $t$\\
$\eta_{i,t}$ & Online revenue of inventory $i$ up to slot $t$\\
$\tilde{OPT}_{i,t}$ & \specialcell{Optimal objective of \textsf{\probname$_i$} given allowance allocations \\ $\;\;$ and revenue functions up to slot $t$}\\
$OPT_t$ & Optimal offline total revenue of \probname~up to slot $t$\\
$\Phi(\pi)$ & Maximum total online allocation of \textsf{CR-Pursuit($\pi$)} \\\hline
\end{tabular}
\end{table}
\subsection{Problem Formulation}
We consider $N$ inventories and a decision period of $T$ slots. We denote the capacity of inventory $i$ as $C_i$. At each slot $t\in[T]$, each inventory $i$ is associated with a revenue function $g_{i,t}(v_{i,t})$, which represents the revenue of allocating an amount $v_{i,t}$ of inventory $i$ at slot $t$.
However, at each slot $t\in[T]$, we are restricted to allocate at most $\delta_{i,t}$ of inventory $i$, and the total allocation over all inventories at each slot $t$ is bounded by the allowance $A_t$. Our goal is to find an optimal allocation scheme that maximizes the total revenue over the decision period while satisfying the allocation restrictions. Overall, we consider the following problem,
\begin{align}
\probname:\; \max \quad& \sum_{i\in[N]}{\sum_{t\in[T]}{g_{i,t}(v_{i,t})}}\\
\text{s.t.}\quad & \sum_{t} v_{i,t}\leq C_i, \forall i\in[N], \label{eq:inventory}\\
& \sum_{i} v_{i,t}\leq A_t,\forall t\in[T],\label{eq:allowance}\\
& 0\leq v_{i,t}\leq \delta_{i,t}, \forall t\in[T],i\in[N].\label{eq:rate}
\end{align}
In \probname, we optimize the inventory allocation $\{v_{i,t}\}_{i\in[N],t\in[T]}$ to achieve the maximum total revenue subject to the capacity constraint of each inventory~\eqref{eq:inventory}, the allowance constraint at each slot~\eqref{eq:allowance}, and the allocation rate limit constraint for each inventory at each slot~\eqref{eq:rate}. Without loss of generality, we assume that $\delta_{i,t}\leq A_t, \forall i,t$. We consider the following set of revenue functions, denoted as $\mathcal{G}$,
\begin{itemize}
\item $g_{i,t}(\cdot)$ is concave and differentiable with $g_{i,t}(0)=0$;
\item $g_{i,t}'(v_{i,t})\in[p_{\min},p_{\max}], \forall v_{i,t}\in[0,\delta_{i,t}]$.
\end{itemize}
We consider $p_{\max}\geq p_{\min}>0$ and denote $\theta=p_{\max}/p_{\min}$. The revenue functions capture the case where the marginal revenue of allocating more inventory is non-increasing in the allocation amount but always lies between $p_{\min}$ and $p_{\max}$. We also discuss how we can apply our approach to different sets of revenue functions with corresponding applications in Sec.~\ref{sec:discussion-general-application}. In the offline setting, the problem input is known in advance, and \probname~is a convex optimization problem with efficient optimal algorithms.
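As an illustration (not part of our analysis), consider the special case of linear revenue functions $g_{i,t}(v)=p_{i,t}\cdot v$ with $p_{i,t}\in[p_{\min},p_{\max}]$, under which the offline \probname~reduces to a linear program. The following sketch solves a toy instance with SciPy; all instance data are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# Toy offline instance with linear revenues g_{i,t}(v) = p[i,t] * v,
# a special case of the function class G; all numbers are illustrative.
N, T = 2, 3
p = np.array([[2.0, 1.0, 3.0],
              [1.5, 2.5, 1.0]])            # marginal revenues (gradients)
C = np.array([2.0, 2.0])                   # inventory capacities
A = np.array([1.5, 1.5, 1.5])              # per-slot allowances
delta = np.full((N, T), 1.0)               # per-slot rate limits

# Flatten v[i,t] -> x[i*T + t]; linprog minimizes, so negate revenues.
c = -p.ravel()
A_ub, b_ub = [], []
for i in range(N):                         # capacity: sum_t v[i,t] <= C_i
    row = np.zeros(N * T)
    row[i * T:(i + 1) * T] = 1.0
    A_ub.append(row); b_ub.append(C[i])
for t in range(T):                         # allowance: sum_i v[i,t] <= A_t
    row = np.zeros(N * T)
    row[t::T] = 1.0
    A_ub.append(row); b_ub.append(A[t])

res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
              bounds=[(0.0, d) for d in delta.ravel()])
offline_opt = -res.fun                     # offline optimal total revenue
```

For this instance the optimum fills each inventory's capacity at its highest-price slots subject to the shared allowances, giving a total revenue of $8.75$.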
However, in practice, we face an online setting, as we describe next.
\subsection{Online Scenario and Performance Metric}
\label{sec:online_setting}
In the online setting, we consider that the pre-known problem parameters include the class of revenue functions $\mathcal{G}$ with the corresponding range $[p_{\min},p_{\max}]$, the number of inventories $N$, and the capacity of each inventory $\{C_i\}_{i\in[N]}$. The other problem parameters are revealed sequentially. More specifically, at each slot $t$, the online decision maker, without knowledge of the decision period $T$, is fed the revenue functions $\{g_{i,t}(\cdot)\}_{i\in[N]}$, the allowance $A_t$, and the allocation limits $\{\delta_{i,t}\}_{i\in[N]}$. We need to irrevocably determine the allocation at slot $t$, i.e., $\{v_{i,t}\}_{i\in[N]}$. After that, if the decision period ends, we stop and learn $T$. Otherwise, we move to the next slot and continue the allocation. We denote a possible input as
\begin{equation}
\sigma=\left(T, \{g_{i,t}(\cdot)\}_{i\in[N],t\in[T]}, \{A_t\}_{t\in[T]}, \{\delta_{i,t}\}_{i\in[N],t\in[T]}\right).
\end{equation}
We use the CR as the performance metric for online algorithms. The CR of an algorithm $\mathcal{A}$ is defined as
\begin{equation}
\mathcal{CR(A)} = \sup_{\sigma\in\Sigma} \frac{OPT(\sigma)}{ALG(\sigma)},
\end{equation}
where $\sigma$ denotes an input, and $OPT(\sigma)$ and $ALG(\sigma)$ denote the offline optimal objective and the online objective of $\mathcal{A}$ under input $\sigma$, respectively. We use $\Sigma$ to represent all possible inputs of interest.
Specifically,
\begin{equation}
\Sigma\triangleq \{\sigma\left|T\in \mathbb{Z}^+,g_{i,t}(\cdot)\in\mathcal{G}, A_t\geq 0, \delta_{i,t}\geq 0,\forall i\in[N],t\in[T]\right.\}.
\end{equation}
In competitive analysis, we focus on the worst-case guarantee of an online algorithm, defined by the maximum ratio between the offline optimal objective and the online objective of the algorithm. In the online setting, we face two main challenges: 1) the decision maker does not know the future revenue functions, while the current allocation affects future decisions due to the capacity constraints~\cite{cr_pursuit2018}; and 2) the online allowance constraints and rate constraints couple the decisions across the inventories, which poses known challenges in online matching and allocation problems~\cite{KALYANASUNDARAM20003bmatching,TCS-allocation}.
\subsection{State of the Art}
The online problem has been studied in~\cite{Sun2020Competitive} under the same revenue function set $\mathcal{G}$. A discrete counterpart of the problem is studied in~\cite{ma2020algorithms}. As special cases, our revenue functions cover linear functions with slopes in $[p_{\min},p_{\max}]$. In addition, when $p_{\max} = p_{\min}$, our problem reduces to maximizing the total amount of allocation, which has been widely studied in~\cite{azar2006maximizing,Deng2018Capacity,karp1990optimal,Devanur2013Randomized,KALYANASUNDARAM20003bmatching}. Here, we introduce a novel divide-and-conquer approach for the problem and show that our approach can achieve a close-to-optimal CR. In Sec.~\ref{sec:discussion-general-application}, we also show that our approach can be applied to different sets of revenue functions, which have not been studied in the existing literature. Before proceeding, we discuss the two most relevant works in the literature, namely~\cite{ma2020algorithms} and~\cite{Sun2020Competitive}.
They both apply the online primal-dual analysis and design threshold functions for the online decision making of \probname. While~\cite{ma2020algorithms} studies a discrete setting, differing from the continuous setting of~\cite{Sun2020Competitive} and our work, it is known from~\cite{niazadeh2013unified} that the same threshold function can be applied directly to the continuous setting, attaining the same CR. In the following, we reproduce the algorithm for the continuous setting and the CR achieved by~\cite{ma2020algorithms}. The CR is better than the one in~\cite{Sun2020Competitive}, and thus we deem it the state of the art in the literature. We also compare their CR with ours in Sec.~\ref{sec:online}; see an illustrative example in Fig.~\ref{fig:cr}. Let~$\phi_i(w)$ denote the threshold function for each inventory~$i$, where~$w$ refers to the amount of allocated capacity of the inventory and~$\phi_i(w)$ can be viewed as a pseudo-cost of the allocation. At each slot, the algorithm determines the allocated amount~$v_{i,t}$ of inventory~$i$ at slot~$t$ by maximizing the per-round pseudo-revenue, i.e., the revenue minus the accumulated pseudo-cost,
\begin{align}
\textsf{(P\&D):}\; \max \quad& \sum_{i\in[N]}\left({g_{i,t}(v_{i,t})}-\int_{w_{i,t-1}}^{w_{i,t-1}+v_{i,t}} \phi_i(w) dw\right) \label{eq:primal-dual-algorithm}\\
\text{s.t.}\quad & \sum_{i} v_{i,t}\leq A_t,\label{eq:allowance-D}\\
& 0\leq v_{i,t}\leq \delta_{i,t}, \forall i\in[N],
\end{align}
where $w_{i,t-1}$ is the total online allocation of inventory $i$ from the first slot to slot $t-1$. The algorithm is proposed in~\cite{Sun2020Competitive} and can be viewed as a continuous reinterpretation of the discrete algorithm in~\cite{ma2020algorithms}.
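For intuition, the per-round problem \textsf{(P\&D)} is separable and concave in $\{v_{i,t}\}_i$, so for linear revenues it can be solved by greedy water-filling on the marginal pseudo-revenue $p_i-\phi_i(w)$. The discretized sketch below is illustrative only: it assumes linear revenues and a placeholder increasing pseudo-cost $\phi_i$, not the actual threshold function.

```python
import numpy as np

# Discretized sketch of one (P&D) round for linear revenues g_i(v) = p[i]*v.
def pd_step(p, w, C, A_t, delta_t, phi, dv=1e-3):
    v = np.zeros(len(p))
    budget = A_t
    while budget > 1e-9:
        # marginal pseudo-revenue of adding dv more to inventory i
        marg = [p[i] - phi(w[i] + v[i], C[i]) if v[i] + dv <= delta_t[i]
                else -np.inf for i in range(len(p))]
        i = int(np.argmax(marg))
        if marg[i] <= 0:
            break                           # no profitable allocation left
        step = min(dv, budget)
        v[i] += step
        budget -= step
    return v

phi = lambda w, Ci: 0.5 * np.exp(w / Ci)    # placeholder pseudo-cost
p = np.array([2.0, 1.0])
v = pd_step(p, w=np.zeros(2), C=np.ones(2), A_t=1.0,
            delta_t=np.ones(2), phi=phi)    # spends the whole allowance
```

The greedy rule is valid here because each inventory's marginal pseudo-revenue is non-increasing in its own allocation, so allocating to the currently largest positive margin is optimal up to the discretization step.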
According to Appendix E of~\cite{ma2020algorithms}, we can apply the following threshold function, given that the gradient of $g_{i,t}(v_{i,t})$ is uniformly bounded in the range $[p_{\min},p_{\max}]$:
\begin{equation}\label{eq:thresholdfunction}
\phi_i(w) =
\begin{cases}
(p_{\min})^{\frac{1-w/C_i}{1-\chi}}(p_{\max})^{\frac{w/C_i}{1-\chi}}, & w\in[0,\chi\cdot C_i]; \\
p_{\min}\cdot\frac{e^{w/C_i}-1}{e^{\chi}-1},& w\in[\chi\cdot C_i,C_i];
\end{cases}
\end{equation}
where $\chi=W(\ln\theta\cdot e^{\ln\theta-1})-\ln\theta+1$ and $W(\cdot)$ is the Lambert-W function.
\begin{prop}[\cite{ma2020algorithms}]
\label{thm:state-of-the-art}
With the threshold function~(\ref{eq:thresholdfunction}), the threshold-based algorithm can achieve a competitive ratio of
\begin{equation}\label{eq:cr_state_of_art}
\tilde{\chi} = \frac{1}{1-e^{-\chi}}.
\end{equation}
\end{prop}
We provide the proof in Appendix~\ref{app:state-of-the-art}. We note that the proofs in~\cite{Sun2020Competitive} and~\cite{ma2020algorithms} follow similar ideas based on the online primal-dual framework but differ in presentation, as one addresses the continuous setting~\cite{Sun2020Competitive} and the other the discrete setting~\cite{ma2020algorithms}. As we consider the continuous setting, our proof follows the presentation of~\cite{Sun2020Competitive} and applies the properties of the threshold function~\eqref{eq:thresholdfunction} discussed in~\cite{ma2020algorithms}.
\section{CR-Pursuit for Single Inventory Problem}
\label{sec:single}
In this section, we first discuss the problem with a single inventory. We extend \textsf{CR-Pursuit} in~\cite{cr_pursuit2018} to cover the rate limit constraint and provide additional insights that will facilitate our algorithm design in the multiple-inventory case.
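Before turning to the single-inventory problem, we record a numerical aside (an illustrative sanity check, not part of any proof): unwinding the definition of the Lambert-W function shows that the parameter $\chi$ in~\eqref{eq:thresholdfunction} is the root of $e^{\chi}(\chi+\ln\theta-1)=\ln\theta$, which is easy to verify with SciPy.

```python
import numpy as np
from scipy.special import lambertw

# chi = W(ln(theta) * e^{ln(theta) - 1}) - ln(theta) + 1, principal branch
def chi(theta):
    lt = np.log(theta)
    return float(lambertw(lt * np.exp(lt - 1.0)).real) - lt + 1.0

# check the fixed-point identity e^chi * (chi + ln(theta) - 1) = ln(theta)
for theta in (2.0, np.e, 5.0, 50.0):
    c, lt = chi(theta), np.log(theta)
    assert 0.0 < c <= 1.0
    assert abs(np.exp(c) * (c + lt - 1.0) - lt) < 1e-8
```

For $\theta=e$, the identity gives $\chi=W(1)\approx 0.567$, the omega constant.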
In the single inventory case, \probname~reduces to the following problem,
\begin{align}
\textsf{{\textsf{OOIC-S}}}: \; \max & \sum_{t} g_{t}(v_{t})\\
\text{s.t.} \; & \sum_{t} v_{t}\leq C\\
\text{var.} \; & 0 \leq v_{t}\leq \delta_{t}, \forall t,
\end{align}
where $C$ denotes the capacity of the inventory, $g_t(v_t)$ represents the revenue of allocating a quantity $v_t$ of inventory at slot $t$, and $\delta_t$ is the rate limit restricting the maximum allocation at slot $t$. The goal is still to maximize the total revenue by determining the inventory allocation $v_t$ at each slot. We focus on the online setting described in Sec.~\ref{sec:online_setting} with $N$ set to one. We note that \textsf{OOIC-S}~has been studied in~\cite{Sun2020Competitive}. The case with different assumptions on the revenue functions and without rate limits has also been studied in~\cite{cr_pursuit2018}. Here, we generalize the results in~\cite{cr_pursuit2018} to cover the revenue function set $\mathcal{G}$ and the rate limit constraint. The online algorithm \textsf{CR-Pursuit($\pi$)}, proposed in~\cite{cr_pursuit2018}, is a single-parameter online algorithm with $\pi$ as the parameter. At slot $t$, the algorithm first computes the optimal value of \textsf{OOIC-S}~given the input revenue functions and rate limits up to $t$, which we denote as $OPT_S(t)$. It then determines the allocation $\hat{v}_t$ at slot $t$ such that
\begin{equation}
\label{eq:cr-pursuit-single-online-decision}
g_t(\hat{v}_t) = \frac{1}{\pi} \left(OPT_S(t)-OPT_S(t-1)\right).
\end{equation}
Under \textsf{CR-Pursuit($\pi$)}, we define the maximum total allocation of the algorithm,
\begin{equation}
\label{eq:maximum_allocation_cr_pursuit}
\Phi(\pi) \triangleq \sup_{\sigma\in\Sigma}\sum_{t}{\hat{v}_t},
\end{equation}
where $\hat{v}_t$ is determined by~\eqref{eq:cr-pursuit-single-online-decision}. By design, we have the following properties of the \textsf{CR-Pursuit($\pi$)} algorithm.
\begin{lem}
\label{thm:single-rate-limit-feasibility}
We have $\hat{v}_t \leq \frac{1}{\pi}\cdot\delta_t$.
\end{lem}
\begin{proof}
Since $g_t(\cdot)$ is increasing and concave with $g_t(0)=0$, we have $\frac{1}{\pi}\cdot g_t(\delta_t)\leq g_t\left(\frac{\delta_t}{\pi}\right)$. Combining this with
\begin{equation}
g_t(\hat{v}_t) = \frac{1}{\pi} \left(OPT_S(t)-OPT_S(t-1)\right)\leq \frac{1}{\pi}\cdot g_t(\delta_t)
\end{equation}
and the monotonicity of $g_t(\cdot)$, we obtain $\hat{v}_t \leq \frac{1}{\pi}\cdot\delta_t$.
\end{proof}
Lemma~\ref{thm:single-rate-limit-feasibility} gives an upper bound on the per-slot online allocation, which guarantees the existence of $\hat{v}_t$ at each slot. It will also be useful for our algorithm design and CR analysis for \probname, which we discuss in Sec.~\ref{sec:online}.
\begin{lem}
\label{thm:single-feasible-competitive}
\textsf{CR-Pursuit($\pi$)} is feasible and $\pi$-competitive for \textsf{{\textsf{OOIC-S}}} if $\Phi(\pi)\leq C$.
\end{lem}
\begin{proof}
Consider an arbitrary input $\sigma$. First, $\hat{v}_t\leq \delta_t$ by Lemma~\ref{thm:single-rate-limit-feasibility}. Then, if $ \Phi(\pi)\leq C$, we have $\sum_{t}\hat{v}_t\leq C$, i.e., the solution satisfies the inventory constraint under input $\sigma$. Summing~\eqref{eq:cr-pursuit-single-online-decision} over all $t$, the online objective satisfies
\begin{equation}
ALG(\sigma) = \sum_{t=1}^{T} g_t(\hat{v}_t)= \frac{1}{\pi} OPT_S(T)= \frac{1}{\pi}\cdot OPT_S(\sigma),
\end{equation}
where $OPT_S(\sigma)$ is the optimal offline objective. Thus, the algorithm is $\pi$-competitive.
\end{proof}
Lemma~\ref{thm:single-feasible-competitive} shows that we can rely on characterizing $\Phi(\pi)$ to optimize the choice of $\pi$ in \textsf{CR-Pursuit($\pi$)}. Further, we can interpret $\Phi(\pi)$ as the amount of inventory the online algorithm \textsf{CR-Pursuit($\pi$)} needs in order to remain $\pi$-competitive. For example, suppose the online algorithm can utilize $\Phi(1)$ units of inventory while the capacity available to the offline optimal remains $C$.
Then we can run \textsf{CR-Pursuit(1)} and achieve the same performance as the offline optimal, i.e., be 1-competitive. We have the following result on the upper bound of $\Phi(\pi)$.
\begin{lem}
\label{thm:upper-bound-class-I}
We have
\begin{equation}
\Phi(\pi)\leq \frac{\ln\theta+1}{\pi}\cdot C.
\end{equation}
\end{lem}
We summarize the proof idea of Lemma~\ref{thm:upper-bound-class-I} here while leaving the detailed proof to Appendix~\ref{proof:upper-bound}. We first note that a more general result in~\cite{cr_pursuit2018} can be extended to the case with the rate limit constraint, as shown in Proposition~\ref{thm:upper-bound-general} in Appendix~\ref{proof:upper-bound}. Although the results in~\cite{cr_pursuit2018} do not cover the revenue functions we consider here, they cover the revenue functions attaining the maximum in $\Phi(\pi)$ as special cases. This observation leads to Lemma~\ref{thm:upper-bound-class-I}. Based on the above discussion, we can provide the competitive analysis of \textsf{CR-Pursuit($\pi$)} for \textsf{OOIC-S}.
\begin{thm}
\label{thm:single-competitive-class-I}
For \textsf{{\textsf{OOIC-S}}}, \textsf{CR-Pursuit($\pi_{1}$)} is $\pi_{1}$-competitive, where $\pi_{1}=\ln\theta+1$. Moreover, it is optimal among all online algorithms for the problem.
\end{thm}
According to Lemma~\ref{thm:single-feasible-competitive} and Lemma~\ref{thm:upper-bound-class-I}, it is clear that \textsf{CR-Pursuit($\pi_{1}$)} is feasible and $\pi_{1}$-competitive. Further, according to the results in~\cite{cr_pursuit2018,Sun2020Competitive}, $\ln\theta+1$ is a lower bound on the CR of any online algorithm for \textsf{OOIC-S}, i.e., the optimal CR. Thus, \textsf{CR-Pursuit($\pi_{1}$)} is also optimal.
\section{Online Algorithms for Multiple Inventory Problem}\label{sec:online}
In this section, we introduce our divide-and-conquer online algorithm \textsf{\algname($\pi$)} for \probname, where $\pi$ is a parameter to be specified. We first outline the algorithm structure.
Following this structure, we then propose our general online algorithm for an arbitrary $N$. We next present a simple and optimal online algorithm for the case where $N$ is relatively small. Finally, we summarize our algorithm and provide the competitive analysis. An illustration of our approach and results is shown in Fig.~\ref{fig:flowchart}.
\subsection{Algorithm structure}\label{sec:alg-structure}
We consider a divide-and-conquer approach for deriving online algorithms for \probname. The general idea is that we can optimize \probname~by first allocating the allowance at each slot to the inventories and then separately optimizing the allocation of each inventory given the allocated allowance. More specifically, we define the following subproblem for each $i\in[N]$,
\begin{align}
\textsf{${\probname_i}$}: \; \max & \sum_{t} \tilde{g}_{i,t}(v_{i,t})\\
s.t. & \sum_{t} v_{i,t}\leq C_i\\
& 0 \leq v_{i,t}\leq a_{i,t}, \forall t,
\end{align}
where $a_{i,t}$ is the allowance allocated to inventory $i$ at slot $t$, and $\tilde{g}_{i,t}(v_{i,t})$ is an additional algorithmic design choice that allows us to exploit the online augmentation scenario when allocating the allowance in Sec.~\ref{sec:Step-I}. In the offline setting, we note that such a decomposition incurs no optimality loss. For example, we can choose $\tilde{g}_{i,t}(v_{i,t})=g_{i,t}(v_{i,t})$ and set the allowance allocation $a_{i,t}=v^*_{i,t}$ for all $i$ and $t$, where ${\{v^*_{i,t}\}}_{i\in[N], t\in[T]}$ is the offline optimal solution of \probname. Then, optimizing the subproblems separately given the allowances reproduces the offline optimal solution. Following this structure, we can design an online algorithm that mainly consists of two steps at each slot $t$,
\begin{enumerate}
\item \textsf{Step-I:} Determine the allowance allocation, $\{\hat{a}_{i,t}\}_{i\in[N]}$, irrevocably.
\item \textsf{Step-II:} Determine the inventory allocation for each online \textsf{${\probname_i}$}, $\{\hat{v}_{i,t}\}_{i\in[N]}$, irrevocably.
\end{enumerate}
We note that this divide-and-conquer approach allows us to tackle the two main challenges of the problem separately. First, the revenue functions arrive online while the allocations across the decision period are coupled by the capacity constraint of each inventory; this is mainly handled in \textsf{Step-II}. Second, the online allowance constraints and the rate constraints couple the decisions across the inventories, which we tackle in \textsf{Step-I}. We can directly apply $\{\hat{v}_{i,t}\}_{t\in[T]}$ as the output of an online algorithm for each inventory $i$. We can view \textsf{Step-II} as solving \textsf{${\probname_i}$} in an online manner. More specifically, at each slot $t$, for each $i$, we observe the input $\hat{a}_{i,t}$ (determined in \textsf{Step-I}) and $\tilde{g}_{i,t}(\cdot)$, and need to determine $v_{i,t}$ irrevocably. In terms of feasibility, an immediate advantage is that the online solution satisfies the inventory constraint~\eqref{eq:inventory} whenever it is a feasible solution to \textsf{${\probname_i}$}. However, further care is needed to ensure the satisfaction of the allowance constraint~\eqref{eq:allowance} and the allocation rate limit~\eqref{eq:rate}. As for performance guarantees, we can first analyze the performance of each step and then combine them to obtain the overall competitive analysis. In the following, we discuss how the proposed online algorithm behaves at both steps to ensure feasibility and achieve a close-to-optimal CR.
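The two-step loop can be sketched as follows; the Step-I and Step-II rules used here (an equal allowance split, and spending the allocated allowance whenever the unit price is positive, under linear revenues) are naive hypothetical placeholders that only illustrate the interface and the per-slot feasibility checks, not the actual \textsf{A\&P} design.

```python
import numpy as np

# Schematic skeleton of the divide-and-conquer online loop with
# placeholder rules (NOT the actual A&P algorithm).
def step1_split(A_t, delta_t):
    # naive Step-I: split the allowance equally, capped by the rate limits
    n = len(delta_t)
    return np.minimum(np.full(n, A_t / n), delta_t)

def step2_spend(p_t, a_t, remaining_C):
    # naive Step-II: spend the allocated allowance while capacity remains
    return np.minimum(a_t, remaining_C) * (p_t > 0)

N, T = 2, 3
p = np.array([[2.0, 1.0, 3.0], [1.5, 2.5, 1.0]])   # linear revenue slopes
C = np.array([2.0, 2.0])
A = np.array([1.5, 1.5, 1.5])
delta = np.full((N, T), 1.0)

used = np.zeros(N)
for t in range(T):
    a_hat = step1_split(A[t], delta[:, t])          # Step-I
    v_hat = step2_spend(p[:, t], a_hat, C - used)   # Step-II
    assert v_hat.sum() <= A[t] + 1e-9               # allowance constraint
    assert np.all(v_hat <= delta[:, t] + 1e-9)      # rate limit constraints
    used += v_hat
assert np.all(used <= C + 1e-9)                     # capacity constraints
```

Any real instantiation replaces the two placeholder rules while keeping the same per-slot interface: Step-I fixes $\{\hat{a}_{i,t}\}_i$, then Step-II commits $\{\hat{v}_{i,t}\}_i$ irrevocably.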
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{figure_divide_and_conquer2.pdf}
\caption{An Illustration of Our Divide-and-Conquer Approach and Results.}
\label{fig:flowchart}
\end{figure}
\subsection{The \textsf{A\&P$_l$($\pi$)} Algorithm for General $N$}\label{sec:general-N}
In this subsection, we propose an online algorithm for a general number of inventories following the divide-and-conquer structure discussed in Sec.~\ref{sec:alg-structure}. We denote the algorithm as \textsf{A\&P$_l$($\pi$)}, where $\pi$ is a parameter to be specified. In the following, we first introduce \textsf{Step-II} of \textsf{A\&P$_l$($\pi$)}, which determines the allocation for \textsf{\probname$_i$} given the allowance from \textsf{Step-I}. We then introduce \textsf{Step-I}, which determines the allowance allocated to each inventory. We denote the allowance allocated in \textsf{Step-I} as $\hat{a}_{i,t},\forall i,t$.
\subsubsection{\textsf{Step-II} of \textsf{A\&P$_l$($\pi$)}}\label{sec:Step-II}
In this step, \textsf{A\&P$_l$($\pi$)} determines the online inventory allocation for each \textsf{${\probname_i}$} given the allowance (denoted as $\{\hat{a}_{i,t}\}_{i\in[N]}$) allocated in \textsf{Step-I}. In \textsf{Step-II}, \textsf{A\&P$_l$($\pi$)} sets
\begin{equation}
\label{eq:choose-of-g}
\tilde{g}_{i,t}(v_{i,t}) = \pi\cdot g_{i,t}\left(\frac{v_{i,t}}{\pi}\right).
\end{equation}
We denote the optimal objective of \textsf{${\probname_i}$} given its input up to slot $t$ as $\tilde{OPT}_{i,t}$, and set $\tilde{OPT}_{i,0}=0$.
At each slot $t$, it determines the allocation $\hat{v}_{i,t}$ such that
\begin{equation}
g_{i,t}(\hat{v}_{i,t}) = \frac{1}{\pi} \left(\tilde{OPT}_{i,t}-\tilde{OPT}_{i,t-1}\right).\label{eq:subproblem-crpursuit}
\end{equation}
While this looks similar to the \textsf{CR-Pursuit($\pi$)} algorithm discussed in Sec.~\ref{sec:single} and~\cite{cr_pursuit2018}, a major difference is that we use the original revenue function $g_{i,t}(\cdot)$ to pursue a $1/\pi$ fraction of the optimal objective achieved under the revenue function $\tilde{g}_{i,t}(\cdot)$ instead of $g_{i,t}(\cdot)$. In general, $\tilde{g}_{i,t}(\cdot)$ defined in~\eqref{eq:choose-of-g} is no less than $g_{i,t}(\cdot)$ (with equality in the linear-function case). Thus, it may be more difficult for \textsf{Step-II} to achieve the same performance ratio ($\pi_1$) between the optimal objective and the online objective of each \textsf{${\probname_i}$} as that in Sec.~\ref{sec:single}. However, we show in Lemma~\ref{thm:step-II-class-I} that this performance ratio remains achievable. The design of $\tilde{g}_{i,t}(\cdot)$ is important for achieving a better approximation ratio between the total optimal objective of the subproblems and the optimal objective of {\probname}, which we discuss in \textsf{Step-I}, Sec.~\ref{sec:Step-I}. To analyze \textsf{Step-II}, we first present the following lemma on the properties of the online allocation $\hat{v}_{i,t}, \forall i,t$.
\begin{lem}
\label{thm:step-II-online-bound}
We have $\hat{v}_{i,t}\leq \frac{1}{\pi} \cdot \hat{a}_{i,t},\forall i,t$.
\end{lem}
\begin{proof}
We have
\begin{equation}
g_{i,t}(\hat{v}_{i,t}) = \frac{1}{\pi} \left(\tilde{OPT}_{i,t}-\tilde{OPT}_{i,t-1}\right)\leq\frac{1}{\pi}\cdot \pi\cdot g_{i,t}\left(\frac{\hat{a}_{i,t}}{\pi}\right)= g_{i,t}\left(\frac{\hat{a}_{i,t}}{\pi}\right),
\end{equation}
where the inequality holds since the per-slot increment of the optimal objective of \textsf{${\probname_i}$} is at most $\tilde{g}_{i,t}(\hat{a}_{i,t})=\pi\cdot g_{i,t}(\hat{a}_{i,t}/\pi)$. Thus, as $g_{i,t}(\cdot)$ is increasing, we conclude $\hat{v}_{i,t}\leq \frac{1}{\pi} \cdot \hat{a}_{i,t}$.
\end{proof}
While this is a simple observation on the online solution, it plays an important role in designing the allowance allocation in \textsf{Step-I} (Sec.~\ref{sec:Step-I}) and in improving the overall performance of our online algorithm \textsf{\algname$_l$($\pi$)} for \probname. We denote the objective value of the online solution to \textsf{\probname$_i$} up to slot $t$ as $\eta_{i,t}$,
\begin{equation}
\label{eq:step2-online-ojective}
\eta_{i,t} \triangleq \sum_{\tau=1}^{t} g_{i,\tau}(\hat{v}_{i,\tau}).
\end{equation}
We provide the performance analysis of \textsf{Step-II} in the following lemma. Recall that $\pi_1=\ln\theta+1$.
\begin{lem}
\label{thm:step-II-class-I}
For each $i\in[N]$, \textsf{Step-II} of \algname($\pi_{1}$) always produces a feasible solution to \textsf{${\probname_i}$}, and for any slot $t$, the online objective satisfies
\begin{equation}
\label{eq:step-II-guarantee}
\eta_{i,t}\geq \frac{1}{\pi_{1}}\cdot \tilde{OPT}_{i,t}.
\end{equation}
\end{lem}
\begin{proof}
The performance guarantee in~\eqref{eq:step-II-guarantee} follows directly from~\eqref{eq:subproblem-crpursuit} when choosing $\pi=\pi_{1}$. We now show feasibility. We note that, when choosing $\pi=\pi_{1}$, \textsf{${\probname_i}$} with $\tilde{g}_{i,t}(\cdot)$ determined by~\eqref{eq:choose-of-g}, after scaling the objective by a factor of $\frac{1}{\pi_{1}}$, is equivalent to
\begin{align}
\textsf{R-${\probname_i}$}: \; \max & \sum_{t} g_{i,t}(z_{i,t})\\
s.t. & \sum_{t} z_{i,t}\leq \frac{C_i}{\pi_{1}}\\
& 0 \leq z_{i,t}\leq \frac{\hat{a}_{i,t}}{\pi_{1}}, \forall t,
\end{align}
where $z_{i,t} \triangleq v_{i,t}/\pi_1$. Then, determining the online allocation according to~\eqref{eq:subproblem-crpursuit} is equivalent to finding $\hat{v}_{i,t}$ such that
\begin{equation}
\label{eq:step-II-equivalent-v}
g_{i,t}(\hat{v}_{i,t}) = \hat{OPT}_{i,t}-\hat{OPT}_{i,t-1},
\end{equation}
where $\hat{OPT}_{i,t}$ is the optimal objective of \textsf{R-${\probname_i}$} up to slot $t$.
It is clear that $\hat{v}_{i,t}\leq \hat{a}_{i,t}/\pi_{1}$, as $\hat{OPT}_{i,t}-\hat{OPT}_{i,t-1}\leq g_{i,t}(\hat{a}_{i,t}/\pi_{1})$ and $g_{i,t}(\cdot)$ is increasing. Thus, the rate limit constraint in \textsf{${\probname_i}$} is satisfied. We note that \textsf{R-${\probname_i}$} is a single inventory problem of the form discussed in Sec.~\ref{sec:single} with inventory capacity $\frac{C_i}{\pi_1}$. The online decision we make according to~\eqref{eq:step-II-equivalent-v} amounts to running \textsf{CR-Pursuit($1$)} on the online version of \textsf{R-${\probname_i}$}. According to Lemma~\ref{thm:upper-bound-class-I}, for \textsf{R-${\probname_i}$}, we have
\begin{equation}
\sum_{t}\hat{v}_{i,t}\leq \frac{\ln\theta+1}{1}\cdot \frac{C_i}{\pi_{1}}= C_i,
\end{equation}
noting that the inventory capacity of \textsf{R-${\probname_i}$} equals $C_i/\pi_{1}$ and $\pi_1=\ln\theta+1$. Thus, the online solution satisfies the capacity constraint in \textsf{${\probname_i}$}.
\end{proof}
\subsubsection{\textsf{Step-I} of \textsf{A\&P$_l$($\pi$)}}\label{sec:Step-I}
\textsf{Step-I} determines $\{\hat{a}_{i,t}\}_{i\in[N]}$, the allowance allocation to the different inventories at each slot $t$. Our goal is to determine an allocation that guarantees a large approximation ratio (denoted as $\alpha$) between $\sum_{i\in[N]} \tilde{OPT}_{i,t}$ and $OPT_t$ at any slot $t$, i.e., $\sum_{i\in[N]} \tilde{OPT}_{i,t} \geq \alpha \cdot OPT_t$. Recall that $\tilde{OPT}_{i,t}$ is the optimal objective of \textsf{${\probname_i}$} up to slot $t$, and $OPT_t$ is the optimal objective of \probname~up to slot $t$. As discussed in Sec.~\ref{sec:alg-structure}, further consideration is needed to guarantee the satisfaction of the allowance constraint~\eqref{eq:allowance} and the allocation rate limit~\eqref{eq:rate}.
We characterize a sufficient condition on the allowance allocation such that the online solution $\{\hat{v}_{i,t}\}_{i\in[N]}$ determined in \textsf{Step-II} (as discussed in Sec.~\ref{sec:Step-II}) satisfies constraints~\eqref{eq:allowance} and~\eqref{eq:rate}.
\begin{lem}
\label{thm:allowance-allocation-constraint}
If the allowance allocation at each slot $t$ satisfies
\begin{equation}
\label{eq:condition-for-feasibility}
\sum_{i\in[N]} \hat{a}_{i,t}\leq \pi\cdot A_t, \quad 0\leq \hat{a}_{i,t}\leq \pi \cdot \delta_{i,t}, \forall i\in[N],
\end{equation}
then the online solution $\{\hat{v}_{i,t}\}_{i\in[N]}$ determined by~\eqref{eq:subproblem-crpursuit} in \textsf{Step-II} satisfies the allowance constraint~\eqref{eq:allowance} and the rate limit constraints~\eqref{eq:rate}.
\end{lem}
The idea is that, according to Lemma~\ref{thm:step-II-online-bound}, $\hat{v}_{i,t}\leq \frac{1}{\pi}\hat{a}_{i,t}$. Together with~\eqref{eq:condition-for-feasibility}, this implies $\sum_{i\in[N]} \hat{v}_{i,t}\leq \frac{1}{\pi}\sum_i \hat{a}_{i,t}\leq A_t$ and $\hat{v}_{i,t}\leq\delta_{i,t}$. Lemma~\ref{thm:allowance-allocation-constraint} means that at each slot $t$, we can actually allocate $\pi$ times more total allowance to the subproblems while guaranteeing that the online solution satisfies constraints~\eqref{eq:allowance} and~\eqref{eq:rate}. We will show that this can significantly improve the approximation ratio $\alpha$ compared with allocating the allowance under constraints~\eqref{eq:allowance} and~\eqref{eq:rate} directly. In the online literature, the \textsf{Step-I} problem is similar to the online ad allocation problem with free disposal studied in~\cite{Feldman2009Online}, where each advertiser can be allocated more ad display slots than its pre-agreed number, but only the most valuable ones are counted.
In \textsf{Step-I}, while the total allowance we allocate to inventory $i$ may exceed its capacity $C_i$, the total revenue of inventory $i$, $\tilde{OPT}_{i,t}$, only counts the most valuable $C_i$ units. Compared with~\cite{Feldman2009Online}, we further consider the setting where the online decision maker can allocate $\pi$ times more allowance under a $\pi$-times relaxed rate limit constraint at each slot. We call this the \emph{allowance augmentation scenario}. We note that this differs from other works in the online literature, e.g.,~\cite{azar2006maximizing,Deng2018Capacity,Deng2019Capacity}, where the authors consider the capacity augmentation scenario, i.e., how one can improve the performance guarantee (in particular, the CR) of online algorithms when the online decision maker is equipped with more inventory capacity than the offline optimal. Here, we are the first to consider the allowance augmentation scenario. In \textsf{Step-I} of \textsf{A\&P$_l$($\pi$)}, we determine the allowance allocation by solving the following problem at each slot $t$. We call the problem \textsf{\allallt-A($\pi$)}, standing for Allowance Allocation at slot $t$ with Augmentation.
\begin{align}
\textsf{\allallt-A($\pi$)}: \quad {\max}\quad & \sum_i\left(\tilde{g}_{i,t}( \hat{a}_{i,t})-\int_0^{\hat{a}_{i,t}}\Psi_{i,t}(a)\; da\right) \label{eq:pseudo-revenue-function}\\
\text{s.t.} \quad & \sum_{i} \hat{a}_{i,t} \leq \pi\cdot A_t \label{eq:AAt-allowance-C} \\
& 0\leq \hat{a}_{i,t} \leq \pi\cdot \delta_{i,t},\forall i\in[N] \label{eq:AAt-rate-C}
\end{align}
In \textsf{\allallt-A($\pi$)}, $\{\hat{a}_{i,t}\}_{i\in[N]}$ is the allowance allocation at slot $t$, $\tilde{g}_{i,t}(\cdot)$ is defined as in~\eqref{eq:choose-of-g}, and $\Psi_{i,t}(a)$ is defined as follows:
\begin{equation}\label{eq:defn_Phi-C}
\Psi_{i,t}(a) = f_i(C_i)\cdot G_{i,t}(C_i,a)-\frac{1}{\pi\cdot C_i}\int_0^{C_i} G_{i,t}\left(x,a\right)\cdot f_i(x)\; dx,
\end{equation}
where $f_i(x)$ and $G_{i,t}(x,a)$ are defined as
\begin{equation}
\label{eq:defn_f-C}
f_i(x)=\frac{1}{\pi\cdot C_i}\frac{1}{e^{\frac{1}{\pi}}-1}\cdot e^{x/{\left(\pi\cdot C_i\right)}},
\end{equation}
\begin{align}
G_{i,t}\left(x,a\right) = \max\quad & \sum_{\tau\in[t]}\tilde{g}_{i,\tau}( v_{i,\tau}) \label{eq:defn_G-C}\\
\text{s.t.}\quad & \sum_{\tau\in[t]} v_{i,\tau}\leq x\label{eq:dfn_G_x}\\
& 0\leq v_{i,t}\leq a\\
&0\leq v_{i,\tau}\leq \hat{a}_{i,\tau}, \forall \tau\in [t-1].
\end{align}
We can show that \textsf{\allallt-A($\pi$)} is a convex optimization problem with simple linear packing constraints by checking that $\Psi_{i,t}(a)$ is non-decreasing in $a$ (shown in Proposition~\ref{thm:psi_increment} in Appendix~\ref{app:thm:step-I}). We can solve it using projected gradient descent, where each step requires an evaluation of $\Psi_{i,t}(a)$. Although we do not have a closed form for $G_{i,t}(\cdot)$, we can evaluate $\Psi_{i,t}(a)$ efficiently using numerical integration methods. Our algorithm in \textsf{Step-I}, i.e., solving \textsf{\allallt-A($\pi$)}, can be viewed as a continuous counterpart of the exponential weighting approach proposed in~\cite{Feldman2009Online}: $f_i(\cdot)$ defines the weight on the per-unit revenue of $G_{i,t}(x,a)$, the optimal allocation of inventory $i$, and $\Psi_{i,t}(a)$ is the exponentially weighted total revenue of the optimal allocation of inventory $i$, which matches the dynamic threshold defined in~\cite{Feldman2009Online}. Here, we provide two novel ingredients for exploiting the augmentation scenario. First, we redesign the weight function $f_i(\cdot)$, which tunes the weight according to the allowance augmentation level $\pi$. Second, we design the revenue function $\tilde{g}_{i,t}(v_{i,t})=\pi\cdot g_{i,t}(v_{i,t}/\pi)$ to ensure that increasing the allowance allocated to inventory $i$ can substantially increase the revenue compared with $g_{i,t}(v_{i,t})$ in the offline problem.
If we simply applied $g_{i,t}(v_{i,t})$ directly, we would suffer from the diminishing return of the concave function and could not fully exploit the augmentation. We show the approximation guarantee of \textsf{\allallt-A($\pi$)} in the following theorem. \begin{thm} \label{thm:step-I} Given the allowance allocation $\hat{a}_{i,t}$ by solving \textsf{\allallt-A($\pi$)}, we have \begin{equation} \sum_{i\in[N]} \tilde{OPT}_{i,t} \geq \alpha(\pi) \cdot OPT_t, \end{equation} where $\alpha(\pi) = \pi\frac{e^\frac{1}{\pi}-1}{e^\frac{1}{\pi}}$. Furthermore, $\alpha(\pi)$ equals $\frac{e-1}{e}$ when $\pi=1$ and tends to $1$ as $\pi\to\infty$. \end{thm} The proof of Theorem~\ref{thm:step-I} is provided in Appendix~\ref{app:thm:step-I}. Our proof follows the online primal-dual analysis in~\cite{Feldman2009Online,buchbinder2009design}. We use the dual problem of \probname~as a baseline for comparison. By carefully designing the update of the dual variables at each slot, we show that the increment in the total optimal objective of all subproblems \textsf{\probname$_i$} is at least an $\alpha(\pi)$ fraction of the increment in the objective of the dual of \probname~at each slot $t$. This directly leads to Theorem~\ref{thm:step-I}. \textbf{Remark.} The result we show in Theorem~\ref{thm:step-I} applies to broader scenarios and is of independent interest. When all revenue functions are linear with the same constant slope, i.e., all inventories have a uniform unit price, the \textsf{Step-I} problem reduces to maximizing the total amount of allocation, which is studied in~\cite{azar2006maximizing,KALYANASUNDARAM20003bmatching}. Our result (Theorem~\ref{thm:step-I}) implies that, when there is no allowance augmentation (i.e., $\pi=1$), it reproduces the CR $\frac{e}{e-1}$ as shown in~\cite{azar2006maximizing,KALYANASUNDARAM20003bmatching}.
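As a quick numerical sanity check (an illustrative sketch, not part of the formal argument), the guarantee $\alpha(\pi)$ in Theorem~\ref{thm:step-I} can be evaluated directly, confirming the two endpoint values claimed above:

```python
import math

def alpha(pi):
    # alpha(pi) = pi * (e^{1/pi} - 1) / e^{1/pi}, the Step-I guarantee.
    return pi * (math.exp(1.0 / pi) - 1.0) / math.exp(1.0 / pi)

print(alpha(1.0))  # (e-1)/e ~ 0.6321, the pi = 1 case
print(alpha(1e6))  # approaches 1 as pi grows
```

The guarantee also increases monotonically in $\pi$, matching the intuition that more allowance augmentation helps.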
Also, when considering linear revenue functions, our problem can be viewed as a continuous counterpart of the online ad allocation problem with free disposal studied in~\cite{Feldman2009Online}. We recover the $\frac{e}{e-1}$ CR when there is no allowance augmentation (i.e., $\pi=1$). In both cases, our results generalize to the setting with allowance augmentation and show an improved CR of $\frac{1}{\pi}\frac{e^\frac{1}{\pi}}{e^\frac{1}{\pi}-1}$ under $\pi$-times augmentation, which tends to one when $\pi\to\infty$. We also note that Theorem~\ref{thm:step-I} holds for arbitrary increasing and differentiable concave functions starting from the origin, not restricted to the revenue functions we consider in set $\mathcal{G}$. This is useful for generalizing our approach to a broader application area with different sets of revenue functions beyond $\mathcal{G}$, which we discuss in Sec.~\ref{sec:discussion-general-application}. \subsubsection{Competitive analysis of \textsf{A\&P$_l$($\pi$)}} We first summarize \textsf{A\&P$_l$($\pi$)}. At each slot $t$, (\textsf{Step-I}) it solves \textsf{\allallt-A($\pi$)} to obtain the allowance allocation $\hat{a}_{i,t}, \forall i\in[N]$, and (\textsf{Step-{II}}) it determines $\hat{v}_{i,t}$ according to~\eqref{eq:subproblem-crpursuit}, for all $i\in[N]$. We then show its performance guarantee for \probname~in the following theorem. \begin{thm} \label{thm:AP-general-N} The \textsf{A\&P$_l$($\pi_{1}$)} algorithm is $\frac{e^\frac{1}{\pi_1}}{e^\frac{1}{\pi_1}-1}$-competitive for \probname. \end{thm} \begin{proof} We first show the feasibility of \textsf{A\&P$_l$($\pi_{1}$)}. By solving \textsf{\allallt-A($\pi_1$)}, $\{\hat{a}_{i,t}\}_{i\in[N],t\in[T]}$ satisfies condition~\eqref{eq:condition-for-feasibility} with $\pi=\pi_1$ in Lemma~\ref{thm:allowance-allocation-constraint}.
According to Lemma~\ref{thm:allowance-allocation-constraint}, the online solution satisfies the allowance constraint and the rate limit constraint of \probname. Besides, according to Lemma~\ref{thm:step-II-class-I}, the online solution is always feasible for \textsf{\probname$_i$}, i.e., it satisfies the capacity constraint of \probname. We conclude that the online solution of \textsf{A\&P$_l$($\pi_{1}$)} is feasible. As for the CR, combining Theorem~\ref{thm:step-I}, which we obtain in \textsf{Step-I} of \textsf{A\&P$_l$($\pi_{1}$)}, with Lemma~\ref{thm:step-II-class-I} in \textsf{Step-II}, we have that the online objective of \textsf{\algname$_l$($\pi_{1}$)} satisfies \begin{equation} \sum_{i\in[N]}\eta_{i,t}\geq \frac{1}{\pi_{1}}\cdot \sum_{i\in[N]} \tilde{OPT}_{i,t} \geq \frac{1}{\pi_{1}}\alpha(\pi_{1}) \cdot OPT_t=\frac{e^\frac{1}{\pi_1}-1}{e^\frac{1}{\pi_1}} \cdot OPT_t, \forall t. \end{equation} Thus, at the final slot $T$, we have $\sum_{i\in[N]}\eta_{i,T}\geq \frac{e^\frac{1}{\pi_1}-1}{e^\frac{1}{\pi_1}} \cdot OPT_T$, and we conclude that \textsf{A\&P$_l$($\pi_{1}$)} is $\frac{e^\frac{1}{\pi_1}}{e^\frac{1}{\pi_1}-1}$-competitive. \end{proof} We note that $\frac{e^\frac{1}{\pi}}{e^\frac{1}{\pi}-1}\leq \pi+1$. Thus, compared with the result under the single-inventory case shown in Theorem~\ref{thm:single-competitive-class-I}, Theorem~\ref{thm:AP-general-N} implies that we can achieve a CR that is larger by at most an additive constant (one) for the case with an arbitrary number of inventories. Also, $\frac{e^\frac{1}{\pi}}{e^\frac{1}{\pi}-1}\sim\pi$, i.e., $\lim_{\pi\to\infty}\frac{e^\frac{1}{\pi}}{e^\frac{1}{\pi}-1}/\pi=1$. It shows that the CR we achieve for \probname~under an arbitrary number of inventories is asymptotically equivalent to the one under the single-inventory case.
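The two claims above, namely $\frac{e^{1/\pi}}{e^{1/\pi}-1}\leq \pi+1$ and the asymptotic equivalence to $\pi$, can be checked numerically; the following Python sketch is for illustration only:

```python
import math

def cr_large_N(pi):
    # CR of A&P_l(pi) per the theorem above: e^{1/pi} / (e^{1/pi} - 1).
    x = math.exp(1.0 / pi)
    return x / (x - 1.0)

# At most an additive constant (one) above pi ...
for pi in [1, 2, 5, 10, 100, 1000]:
    assert cr_large_N(pi) <= pi + 1

print(cr_large_N(1))          # e/(e-1) ~ 1.58
print(cr_large_N(1e4) / 1e4)  # ... and the ratio to pi tends to 1
```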
\subsection{A Simple Algorithm for Small $N$}\label{sec:small-N} From the design of our divide-and-conquer approach, we note that our online algorithm is able to allocate $\pi$-times more allowance to the subproblems according to Lemma~\ref{thm:allowance-allocation-constraint}. It reveals that when the number of inventories is small (e.g., less than $\pi$), the allowance constraint can become redundant in our design. Leveraging the above insight, we show a simple and optimal online algorithm for \probname~when $N$ is relatively small compared with $\theta$. More specifically, we consider the case that $N\leq\pi_{1}$. We denote our online algorithm as \textsf{A\&P$_s$($\pi$)}, with $\pi$ as a parameter to be specified. \textsf{A\&P$_s$($\pi$)} consists of two steps, where the first step is to allocate the allowance, and the second step is to pursue a performance ratio of $\pi$ for each subproblem. In the first step, \textsf{A\&P$_s$($\pi$)} determines the allowance allocation as \begin{equation} \label{eq:small-N-allowance} \hat{a}_{i,t} = \delta_{i,t}. \end{equation} In the second step, for each \textsf{${\probname_i}$}, it chooses $\tilde{g}_{i,t}(v_{i,t})$ as $g_{i,t}(v_{i,t})$. We note that in this case \textsf{${\probname_i}$} reduces to the single-inventory problem we discuss in Sec.~\ref{sec:single}. \textsf{A\&P$_s$($\pi$)} then determines the online solution by running \textsf{CR-Pursuit($\pi$)}. That is, it chooses $\hat{v}_{i,t}$ such that \begin{equation} \label{eq:small-N-allocation} g_{i,t}(\hat{v}_{i,t}) =\frac{1}{\pi}(OPT_{i,t}-OPT_{i,t-1}), \end{equation} where $OPT_{i,t}$ is the optimal objective of \textsf{${\probname_i}$} given $\{\hat{a}_{i,\tau}\}_{\tau\in[t]}$ and $\{\tilde{g}_{i,\tau}(\cdot)\}_{\tau\in[t]}$ at slot $t$. \begin{thm} \label{thm:AP-small-N} The \textsf{A\&P$_s$($\pi_{1}$)} algorithm is $\pi_{1}$-competitive when $N\leq\pi_{1}$. \end{thm} \begin{proof} We first check the feasibility of \textsf{A\&P$_s$($\pi_{1}$)}.
The rate limit constraints and inventory constraints are directly guaranteed by the second step of \textsf{A\&P$_s$($\pi_{1}$)}, where we run \textsf{CR-Pursuit($\pi_{1}$)} (as shown in Theorem~\ref{thm:single-competitive-class-I}). We then check the allowance constraints. For any $t$, we have \begin{equation} \sum_{i} \hat{v}_{i,t} \leq \sum_{i} \frac{1}{\pi_{1}}\cdot \hat{a}_{i,t} = \sum_{i} \frac{1}{\pi_{1}}\cdot \delta_{i,t}\leq \frac{1}{\pi_{1}} N\cdot A_t\leq A_t. \end{equation} Recall that we have $\hat{v}_{i,t} \leq \frac{1}{\pi_{1}}\cdot \hat{a}_{i,t}$ according to Lemma~\ref{thm:single-rate-limit-feasibility}, and without loss of generality, we consider $\delta_{i,t}\leq A_t$, as discussed in Sec.~\ref{sec:problem}. We next analyze the performance of the algorithm. It is clear that at each slot $t$, we have \begin{equation} \sum_i{OPT_{i,t}} \geq OPT_t,\forall t, \end{equation} where $OPT_t$ is the optimal objective of \probname~at slot $t$. This is because $\sum_i{OPT_{i,t}}$ equals the optimal objective of \probname~at slot $t$ without the allowance constraint. Then, the online objective satisfies \begin{equation} \sum_{i,t} g_{i,t}(\hat{v}_{i,t}) = \frac{1}{\pi_{1}}\sum_i{OPT_{i,T}}\geq \frac{1}{\pi_{1}} OPT_T. \end{equation} Thus, \textsf{A\&P$_s$($\pi_{1}$)} is $\pi_{1}$-competitive. \end{proof} Theorem~\ref{thm:AP-small-N} shows that when the total number of inventories is relatively small compared with the uncertainty range of the revenue functions (i.e., $\theta$), we can reduce the multiple-inventory problem to the single-inventory case with the same performance guarantee.
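To make the \textsf{CR-Pursuit} step concrete, the following toy sketch instantiates it for a single inventory with linear revenue functions $g_{i,t}(v)=p_{i,t}v$; the instance (prices, rate limits, capacity) is an illustrative assumption, not from the paper. The offline optimum is then a fractional knapsack, and the pursuit rule, allocating so that the per-slot revenue equals $1/\pi_1$ of the increment of the offline optimum, has a closed form:

```python
import math

def opt_single(prices, limits, capacity):
    # Offline optimum of the single-inventory problem with linear revenues
    # g_tau(v) = p_tau * v over the revealed slots: a fractional knapsack.
    remaining, revenue = capacity, 0.0
    for p, d in sorted(zip(prices, limits), reverse=True):
        take = min(d, remaining)
        revenue += p * take
        remaining -= take
    return revenue

# Illustrative instance: prices in [1, 4], so theta = 4 and pi_1 = ln(theta) + 1.
prices, limits, C = [1.0, 2.0, 4.0], [1.0, 1.0, 1.0], 2.0
pi1 = math.log(max(prices) / min(prices)) + 1.0

# CR-Pursuit(pi_1): pick v_t so that p_t * v_t = (OPT_t - OPT_{t-1}) / pi_1.
alloc, prev_opt = [], 0.0
for t in range(len(prices)):
    opt_t = opt_single(prices[:t + 1], limits[:t + 1], C)
    v = (opt_t - prev_opt) / (pi1 * prices[t])  # closed form for linear g
    assert v <= limits[t] + 1e-9                # rate-limit feasibility
    alloc.append(v)
    prev_opt = opt_t

online = sum(p * v for p, v in zip(prices, alloc))
assert sum(alloc) <= C + 1e-9              # capacity feasibility
assert abs(online - prev_opt / pi1) < 1e-9  # online revenue = OPT_T / pi_1
```

On this instance the online revenue equals $OPT_T/\pi_1$ exactly, mirroring the telescoping sum in the proof above.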
\subsection{Summary of Our Proposed Online Algorithm}\label{sec:alg-cr} \begin{algorithm}[!ht] \caption{ \algname($\pi$) Algorithm \label{alg:online-algorithm}} \begin{algorithmic}[1] \STATE At slot $t$, $\{g_{i,t}(\cdot)\}_{i\in[N]}$, $A_t$, and $\{\delta_{i,t}\}_{i\in[N]}$ are revealed, \IF{$N\leq\pi$} \STATE Run \textsf{A\&P$_s$($\pi$)}, i.e., determine $\hat{a}_{i,t}=\delta_{i,t}$ as in~\eqref{eq:small-N-allowance}\\ and determine $\hat{v}_{i,t}$ according to~\eqref{eq:small-N-allocation}, for all $i\in[N]$, \RETURN $\{\hat{v}_{i,t}\}_{i\in[N]}$. \ELSE \STATE Run \textsf{A\&P$_l$($\pi$)}: \STATE \textsf{Step-I}: solve \textsf{\allallt-A($\pi$)} to obtain the allowance allocation $\hat{a}_{i,t}, \forall i\in[N]$, \STATE \textsf{Step-II}: determine $\hat{v}_{i,t}$ according to~\eqref{eq:subproblem-crpursuit}, for all $i\in[N]$, \RETURN $\{\hat{v}_{i,t}\}_{i\in[N]}$. \ENDIF \end{algorithmic} \end{algorithm} In this section, we summarize our online algorithm, denoted as \algname($\pi$), and provide its performance analysis. An illustration of our approach and results is provided in Fig.~\ref{fig:flowchart}. We also compare our results with existing ones in Fig.~\ref{fig:cr}. The pseudocode of \algname($\pi$) is provided in Algorithm~\ref{alg:online-algorithm}. Depending on the values of $N$ and $\theta$, we run either \textsf{A\&P$_s$($\pi$)} or \textsf{A\&P$_l$($\pi$)}. The CR of our online algorithm is shown in the following theorem. Recall that $\pi_{1} = \ln\theta+1$. \begin{thm} \label{thm:competitive-ratio-class-I} Our online algorithm \algname($\pi_{1}$) achieves the following CR, \begin{equation} \label{eq:compeptitve-ratio-class-I} \mathcal{CR}_1(A\&P(\pi_{1})) = \begin{cases} \pi_{1}, & \pi_{1} \geq N,\\ \frac{e^\frac{1}{\pi_1}}{e^\frac{1}{\pi_1}-1}, & \pi_{1}< N. \end{cases} \end{equation} \end{thm} Theorem~\ref{thm:competitive-ratio-class-I} simply combines the results we show in Theorem~\ref{thm:AP-small-N} and Theorem~\ref{thm:AP-general-N}.
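The case dispatch of Algorithm~\ref{alg:online-algorithm} and the resulting two-case CR can be sketched as follows (an illustrative check, not the algorithm itself):

```python
import math

def ap_cr(theta, N):
    # CR of A&P(pi_1) with pi_1 = ln(theta) + 1, per the two-case theorem.
    pi1 = math.log(theta) + 1.0
    if pi1 >= N:          # small N: run A&P_s(pi_1)
        return pi1
    x = math.exp(1.0 / pi1)
    return x / (x - 1.0)  # large N: run A&P_l(pi_1)

print(ap_cr(1, 3))   # theta = 1 reduces to maximizing total allocation: e/(e-1)
print(ap_cr(60, 3))  # pi_1 = ln(60) + 1 > 3, so the CR equals pi_1
```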
The CR we obtain is tight and optimal when $N$ is smaller than $\ln\theta+1$. This also recovers the results for the single-inventory case discussed in Sec.~\ref{sec:single} and~\cite{Sun2020Competitive}. It is within an additive constant of one of the lower bound when $N$ is larger than $\ln\theta+1$. When $\theta=1$, our problem reduces to maximizing the total allocation, and our result recovers the optimal CR $\frac{e}{e-1}$ achieved in~\cite{azar2006maximizing,KALYANASUNDARAM20003bmatching}. In~\cite{ma2020algorithms}, the authors show a CR that is within $[\pi_1,\frac{e^\frac{1}{\pi_1}}{e^\frac{1}{\pi_1}-1}]$, independent of $N$, and consistently lower than the one achieved in~\cite{Sun2020Competitive}. They also show that the CR they achieve is tight when $N$ tends to infinity. While our achieved CR at large $N$ is worse compared with~\cite{ma2020algorithms}, the gap between them is no greater than an additive constant of one. In addition, we achieve a better (and optimal) CR when $N$ is small. We provide an illustration of the CRs achieved by~\cite{ma2020algorithms,Sun2020Competitive} and ours in Fig.~\ref{fig:cr}. \begin{figure}[!h] \centering \includegraphics[width=0.75\textwidth]{myfigure-N_3.pdf} \caption{The competitive ratios obtained by Ma et al.~\cite{ma2020algorithms}, Sun et al.~\cite{Sun2020Competitive} and ours. We fix $N=3$ and vary $\theta$ from $1$ to $60$. } \label{fig:cr} \end{figure} \section{Extension to General Concave Revenue Functions}\label{sec:discussion-general-application} In addition to the set of revenue functions $\mathcal{G}$ we discuss above, our divide-and-conquer approach can be applied to a broader range of functions with corresponding applications. For example, logarithmic functions (e.g., $\log(v+1)$) are widely observed in wireless communication~\cite{Niv2012Dynamic,Niv2010Multi} and are not covered by the revenue function set $\mathcal{G}$ when considering sufficiently large capacity.
Another example is the set of revenue functions arising in the application of one-way trading with price elasticity~\cite{cr_pursuit2018}. In general, let us consider a given set of concave revenue functions with zero value at the origin, denoted by $\tilde{\mathcal{G}}$. We define $\Phi_{\tilde{\mathcal{G}}}(1)$ as the maximum online total allocation of running \textsf{CR-Pursuit(1)} under revenue functions in $\tilde{\mathcal{G}}$ in the single-inventory case (as defined in~\eqref{eq:maximum_allocation_cr_pursuit} with $\pi=1$). It represents the maximum capacity required to maintain the same performance as the offline optimal at all times. We have the following results for \probname~under the set of revenue functions $\tilde{\mathcal{G}}$. \begin{prop} Suppose we can find $\tilde{\pi}$ such that for \textsf{OOIC-S}, we have \begin{equation} \label{eq:phi_1_condition} \Phi_{\tilde{\mathcal{G}}}(1)\leq \tilde{\pi} \cdot C. \end{equation} We can then run \textsf{A\&P($\tilde{\pi}$)} for \probname~under $\tilde{\mathcal{G}}$. The competitive ratio of \textsf{A\&P($\tilde{\pi}$)} is given by \begin{equation} \label{eq:compeptitve-ratio-class-general} \mathcal{CR}_{\tilde{\mathcal{G}}}(\textsf{A\&P(${\tilde{\pi}}$)}) = \begin{cases} \tilde{\pi}, & \tilde{\pi} \geq N,\\ \frac{e^\frac{1}{\tilde{\pi}}}{e^\frac{1}{\tilde{\pi}}-1}, & \tilde{\pi}< N. \end{cases} \end{equation} \end{prop} The proof follows the same idea as discussed in Sec.~\ref{sec:online} and is omitted here. When $\tilde{\pi}< N$, we simply recover Lemma~\ref{thm:step-II-class-I} with the condition~\eqref{eq:phi_1_condition}. Together with Theorem~\ref{thm:step-I}, this shows the result in~\eqref{eq:compeptitve-ratio-class-general} when $\tilde{\pi}< N$.
As for the case $\tilde{\pi}\geq N$, we can recover Theorem~\ref{thm:AP-small-N} by noting that $\Phi_{\tilde{\mathcal{G}}}(\tilde{\pi})\leq\frac{1}{\tilde{\pi}}\Phi_{\tilde{\mathcal{G}}}(1)\leq C$ (due to the concavity of the revenue functions), i.e., \textsf{CR-Pursuit($\tilde{\pi}$)} is feasible and $\tilde{\pi}$-competitive for \textsf{OOIC-S}. For example, we can consider the one-way trading with price elasticity problem with multiple inventories, where the single-inventory case was proposed in~\cite{cr_pursuit2018}. More specifically, we consider the following type of revenue function, which we denote as $\hat{\mathcal{G}}$. \begin{itemize} \item $g_{i,t}(v_{i,t})=\left(p_{i,t}-q_{i,t}(v_{i,t})\right)\cdot v_{i,t}$, $v_{i,t}\in [0,\delta_{i,t}]$, where $q_{i,t}(\cdot)$ is a convex increasing function with $q_{i,t}(0)=0$ and $p_{i,t}\in[p_{\min},p_{\max}]$. \footnote{{There exists a $v$ (possibly infinite) such that the function $g_{i,t}(\cdot)$ is increasing in $[0,v]$ and decreasing in $[v,\infty)$. We only need to consider the case that $\delta_{i,t}\leq v$, as no reasonable algorithm would allocate more than $v$ under $g_{i,t}(\cdot)$. Thus, we consider that $g_{i,t}(\cdot)$ is increasing in $[0,\delta_{i,t}]$.}} \end{itemize} The revenue functions in $\hat{\mathcal{G}}$ capture the fact that the price of selling (allocating) the inventory decreases as the supply (allocation) increases, which follows the basic law of supply and demand in microeconomics. In particular, the price elasticity is captured by the convex increasing function $q_{i,t}(\cdot)$, meaning that more supply further decreases the price. The marginal revenue is bounded between $p_{\min}$ and $p_{\max}$ only at the origin and could even be zero elsewhere, which implies that $\hat{\mathcal{G}}$ is not covered by $\mathcal{G}$.
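For concreteness, here is a minimal numerical sketch of one member of $\hat{\mathcal{G}}$ (the constants $p$, $c$, and $\delta$ are illustrative assumptions), checking that it is increasing and concave on $[0,\delta]$ and that its marginal revenue equals $p$ only at the origin:

```python
# One member of the set hat{G}: g(v) = (p - q(v)) * v with the convex
# increasing q(v) = c * v, q(0) = 0. Illustrative constants only.
p, c, delta = 4.0, 1.0, 1.5  # delta <= p/(2c), so g is increasing on [0, delta]

def g(v):
    return (p - c * v) * v

vs = [i * delta / 100 for i in range(101)]
gs = [g(v) for v in vs]
assert all(b > a for a, b in zip(gs, gs[1:]))               # increasing on [0, delta]
incs = [b - a for a, b in zip(gs, gs[1:])]
assert all(y <= x + 1e-12 for x, y in zip(incs, incs[1:]))  # concave
assert abs(g(1e-9) / 1e-9 - p) < 1e-6                       # marginal revenue p at origin
```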
According to Lemma 15 in~\cite{cr_pursuit2018} (while it does not consider the rate limit constraint, we can check that the proof carries over with the rate limit constraint), we have \begin{equation} \Phi_{\hat{\mathcal{G}}}(1)\leq 2\cdot (\ln\theta+1) \cdot C. \end{equation} For \probname~under the revenue function set $\hat{\mathcal{G}}$, we have \begin{prop} \label{thm:competitive-ratio-class-II} Let $\pi_2 =2\cdot(\ln\theta+1)$. \algname($\pi_{2}$) achieves the following competitive ratio for \probname~under the revenue function set $\hat{\mathcal{G}}$, \begin{equation} \label{eq:compeptitve-ratio-class-II} \mathcal{CR}_{\hat{\mathcal{G}}}(A\&P(\pi_{2})) = \begin{cases} \pi_{2}, & \pi_{2} \geq N,\\ \frac{e^\frac{1}{\pi_2}}{e^\frac{1}{\pi_2}-1}, & \pi_{2}< N. \end{cases} \end{equation} \end{prop} The CR we achieve is upper bounded by $2\ln\theta+3$, which is within a constant factor of the lower bound $\ln\theta+1$. This provides the first CR for \probname~under the revenue function set $\hat{\mathcal{G}}$, with application to one-way trading with price elasticity under the multiple-inventory scenario. It is interesting to see whether we can find a tighter bound on $\Phi_{\hat{\mathcal{G}}}(1)$ and achieve a better competitive ratio. In addition, while the determination of $\Phi_{\tilde{\mathcal{G}}}(1)$ could be problem-specific, establishing a general way to determine it would be another interesting future direction. \section{Concluding Remarks}\label{sec:conclusion} In this work, we study the competitive online optimization problem under multiple inventories, \probname. {The online decision maker allocates the capacity-limited inventories to maximize the overall revenue, while the revenue functions and the allocation constraints at each slot come in an online manner.} Our key result is a divide-and-conquer approach that reduces the multiple-inventory problem to the single-inventory case with a small optimality loss in terms of the CR.
In particular, we show that our approach achieves the optimal CR when $N$ is small and is within an additive constant of the lower bound when $N$ is larger, when considering gradient-bounded revenue functions. We also provide a general condition for applying our approach to broader applications with different interesting sets of revenue functions. In particular, for the revenue functions appearing in one-way trading with price elasticity, our approach obtains the first CR for the problem, which is within a constant factor of the lower bound. As a by-product, we also provide the first allowance augmentation results for the online fractional matching problem and the online fractional allocation problem with free disposal. As a future direction, we are interested in how our divide-and-conquer approach can be used to solve other online optimization problems with multiple entities and how it can be applied in more application scenarios. \bibliographystyle{unsrt}
\section{Introduction} The Wasserstein distance (see, e.g., \cite{peyre2019computational, panaretos2020invitation} and the references therein) is an important metric from optimal transport theory (see, e.g., \cite{villani2003topics, villani2008optimal, santambrogio2015optimal} and the references therein). In this short note, we focus on a property of the $\mathcal{W}_2$ Wasserstein distance which indicates that independent elliptical distributions minimize their $\mathcal{W}_2$ Wasserstein distance from given independent elliptical distributions with the same density generators; see Theorem~\ref{t1}. Moreover, we investigate what this property implies for the Gelbrich bound for the $\mathcal{W}_2$ Wasserstein distance when the distributions are not necessarily elliptical; see Corollary~\ref{c1}. We then generalize the results in Theorem~\ref{t1} and Corollary~\ref{c1} to the cases when the distributions are not independent; see Theorem~\ref{t2} and Corollary~\ref{c2}, respectively. It is worth mentioning that, in a broad sense, the results presented in this note parallel those of \cite{KLproperties} (for the Kullback--Leibler divergence) as well as \cite{fang2017towards} (for entropy and mutual information). \section{Preliminaries} Throughout the note, we consider zero-mean real-valued continuous random variables and random vectors. We represent random variables and random vectors using boldface letters, e.g., $\mathbf{x}$, while the probability density function of $\mathbf{x}$ is denoted as $p_\mathbf{x}$. The $\mathcal{W}_p$ Wasserstein distance (see, e.g., \cite{santambrogio2015optimal, peyre2019computational, panaretos2020invitation}) is defined as follows.
\begin{definition} The $\mathcal{W}_p$ (for $p \geq 1$) Wasserstein distance between distribution $p_{\mathbf{x}}$ and distribution $p_{\mathbf{y}}$ is defined as \begin{flalign} \mathcal{W}_p \left( p_{\mathbf{x}} ; p_{\mathbf{y}} \right) = \left( \inf_{ \mathbf{x}, \mathbf{y}} \mathbb{E} \left[ \left\| \mathbf{x} - \mathbf{y} \right\|^p \right] \right)^{\frac{1}{p}}, \nonumber \end{flalign} where $\mathbf{x}$ and $\mathbf{y}$ denote $m$-dimensional random vectors with distributions $p_{\mathbf{x}}$ and $p_{\mathbf{y}}$, respectively. \end{definition} Particularly when $p = 2$, the $\mathcal{W}_2$ distance is given by \begin{flalign} \mathcal{W}_2 \left( p_{\mathbf{x}} ; p_{\mathbf{y}} \right) = \sqrt{ \inf_{ \mathbf{x}, \mathbf{y}} \mathbb{E} \left[ \left\| \mathbf{x} - \mathbf{y} \right\|^2 \right] }. \nonumber \end{flalign} The following lemma (see, e.g., \cite{santambrogio2015optimal, peyre2019computational, panaretos2020invitation}) provides an explicit expression for the $\mathcal{W}_2$ distance between elliptical distributions with the same density generator. Note that Gaussian distributions are a special class of elliptical distributions (see, e.g., \cite{peyre2019computational}). Note also that hereinafter the random vectors are assumed to be zero-mean for simplicity. \begin{lemma} \label{Gaussian} Consider $m$-dimensional elliptical random vectors $\mathbf{x}$ and $\mathbf{y}$ with the same density generator, while with covariance matrices $\Sigma_\mathbf{x}$ and $\Sigma_\mathbf{y}$, respectively. The $\mathcal{W}_2$ distance between distribution $p_\mathbf{x}$ and distribution $p_\mathbf{y}$ is given by \begin{flalign} \mathcal{W}_2 \left( p_{\mathbf{x}} ; p_{\mathbf{y}} \right) = \sqrt{ \tr \left[ \Sigma_{\mathbf{x}} + \Sigma_{\mathbf{y}} - 2 \left( \Sigma_{\mathbf{x}}^{\frac{1}{2}} \Sigma_{\mathbf{y}} \Sigma_{\mathbf{x}}^{\frac{1}{2}} \right)^{\frac{1}{2}} \right] }. 
\nonumber \end{flalign} \end{lemma} \vspace{3mm} Meanwhile, the Gelbrich bound (see, e.g., \cite{peyre2019computational, panaretos2020invitation}) is given as follows, which provides a generic lower bound for the $\mathcal{W}_2$ distance between distributions that are not necessarily elliptical. \begin{lemma} \label{Gelbrich} Consider $m$-dimensional random vectors $\mathbf{x}$ and $\mathbf{y}$ with covariance matrices $\Sigma_\mathbf{x}$ and $\Sigma_\mathbf{y}$, respectively. The $\mathcal{W}_2$ distance between distribution $p_\mathbf{x}$ and distribution $p_\mathbf{y}$ is lower bounded by \begin{flalign} \mathcal{W}_2 \left( p_{\mathbf{x}} ; p_{\mathbf{y}} \right) \geq \sqrt{ \tr \left[ \Sigma_{\mathbf{x}} + \Sigma_{\mathbf{y}} - 2 \left( \Sigma_{\mathbf{x}}^{\frac{1}{2}} \Sigma_{\mathbf{y}} \Sigma_{\mathbf{x}}^{\frac{1}{2}} \right)^{\frac{1}{2}} \right] }. \nonumber \end{flalign} \end{lemma} \vspace{3mm} \section{$\mathcal{W}_2$ Distance Minimizing Distributions} We first present the following proposition. \begin{proposition} \label{half} Consider a positive definite matrix $\Sigma_{\mathbf{x}} \in \mathbb{R}^{m \times m}$. Denote the diagonal terms of $\Sigma_{\mathbf{x}}$ by $\sigma_{\mathbf{x} \left( 1 \right)}^2, \ldots, \sigma_{\mathbf{x} \left( m \right)}^2$, while denote the eigenvalues of $\Sigma_{\mathbf{x}}$ by $\lambda_{1}, \ldots, \lambda_{m}$. Then, \begin{flalign} \tr \left( \Sigma_{\mathbf{x}}^{\frac{1}{2}} \right) = \sum_{i=1}^{m} \lambda_i^{\frac{1}{2}} \leq \sum_{i=1}^{m} \left[ \sigma_{\mathbf{x} \left( i \right)}^2 \right]^{\frac{1}{2}}, \end{flalign} where equality holds if and only if $\Sigma_{\mathbf{x}}$ is a diagonal matrix. 
\end{proposition} \begin{proof} To begin with, denote \begin{flalign} \Sigma_{\mathbf{x}} = \begin{bmatrix} \sigma_{\mathbf{x} \left( 1 \right)}^2 & \sigma_{\mathbf{x} \left( 1 \right) \mathbf{x} \left( 2 \right)}^2 & \cdots & \sigma_{\mathbf{x} \left( 1 \right) \mathbf{x} \left( m \right)}^2 \\ \sigma_{\mathbf{x} \left( 1 \right)\mathbf{x} \left( 2 \right)}^2 & \sigma_{\mathbf{x} \left( 2 \right)}^2 & \cdots & \sigma_{\mathbf{x} \left( 2 \right)\mathbf{x} \left( m \right)}^2 \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{\mathbf{x} \left( 1 \right)\mathbf{x} \left( m \right)}^2 & \sigma_{\mathbf{x} \left( 2 \right) \mathbf{x} \left( m \right)}^2 & \cdots & \sigma_{\mathbf{x} \left( m \right)}^2 \end{bmatrix} ,\nonumber \end{flalign} and \begin{flalign} \Lambda_{\mathbf{x}} = \begin{bmatrix} \sigma_{\mathbf{x} \left( 1 \right)}^2 & 0 & \cdots & 0 \\ 0 & \sigma_{\mathbf{x} \left( 2 \right)}^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_{\mathbf{x} \left( m \right)}^2 \end{bmatrix} .\nonumber \end{flalign} It is clear that $\lambda_i > 0, i = 1, \ldots, m$ and $\sigma_{\mathbf{x} \left( i \right)}^2 > 0, i = 1, \ldots, m$, since $\Sigma_{\mathbf{x}}$ is positive definite. Meanwhile, the eigenvalues of $\Sigma_{\mathbf{x}}^{\frac{1}{2}}$ are given by $\lambda_{1}^{\frac{1}{2}}, \ldots, \lambda_{m}^{\frac{1}{2}}$, and thus \begin{flalign} \tr \left( \Sigma_{\mathbf{x}}^{\frac{1}{2}} \right) = \sum_{i=1}^{m} \lambda_i^{\frac{1}{2}} . \nonumber \end{flalign} On the other hand, we have \begin{flalign} \tr \left( \Lambda_{\mathbf{x}}^{\frac{1}{2}} \right) = \sum_{i=1}^{m} \left[ \sigma_{\mathbf{x} \left( i \right)}^2 \right]^{\frac{1}{2}} . \nonumber \end{flalign} It now suffices to prove that \begin{flalign} \tr \left( \Sigma_{\mathbf{x}}^{\frac{1}{2}} \right) \leq \tr \left( \Lambda_{\mathbf{x}}^{\frac{1}{2}} \right) , \nonumber \end{flalign} where equality holds if and only if $\Sigma_{\mathbf{x}}$ is a diagonal matrix. 
To prove this, note first that according to Klein's inequality (see, e.g., \cite{ruelle1999statistical}), we have \begin{flalign} \tr \left[ \left( - \Sigma_{\mathbf{x}}^{\frac{1}{2}} \right) - \left( - \Lambda_{\mathbf{x}}^{\frac{1}{2}} \right) - \left( \Sigma_{\mathbf{x}} - \Lambda_{\mathbf{x}} \right) \frac{1}{2} \left( - \Lambda_{\mathbf{x}}^{ - \frac{1}{2}} \right) \right] \geq 0 , \nonumber \end{flalign} or equivalently, \begin{flalign} \tr \left[ \left( - \Sigma_{\mathbf{x}}^{\frac{1}{2}} \right) - \left( - \Lambda_{\mathbf{x}}^{\frac{1}{2}} \right) \right] \geq \tr \left[ \left( \Sigma_{\mathbf{x}} - \Lambda_{\mathbf{x}} \right) \frac{1}{2} \left( - \Lambda_{\mathbf{x}}^{ - \frac{1}{2}} \right) \right] , \nonumber \end{flalign} since $f \left( x \right) = - x^{\frac{1}{2}}$ is operator convex and $f' \left( x \right) = - \frac{1}{2} x^{- \frac{1}{2} }$; in addition, herein equality holds if and only if $\Sigma_{\mathbf{x}} = \Lambda_{\mathbf{x}}$, i.e., if and only if $\Sigma_{\mathbf{x}}$ is diagonal, since $f \left( x \right) = - x^{\frac{1}{2}}$ is strictly convex. 
Meanwhile, note that \begin{flalign} \Sigma_{\mathbf{x}} - \Lambda_{\mathbf{x}} & = \begin{bmatrix} \sigma_{\mathbf{x} \left( 1 \right)}^2 & \sigma_{\mathbf{x} \left( 1 \right) \mathbf{x} \left( 2 \right)}^2 & \cdots & \sigma_{\mathbf{x} \left( 1 \right) \mathbf{x} \left( m \right)}^2 \\ \sigma_{\mathbf{x} \left( 1 \right)\mathbf{x} \left( 2 \right)}^2 & \sigma_{\mathbf{x} \left( 2 \right)}^2 & \cdots & \sigma_{\mathbf{x} \left( 2 \right)\mathbf{x} \left( m \right)}^2 \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{\mathbf{x} \left( 1 \right)\mathbf{x} \left( m \right)}^2 & \sigma_{\mathbf{x} \left( 2 \right) \mathbf{x} \left( m \right)}^2 & \cdots & \sigma_{\mathbf{x} \left( m \right)}^2 \end{bmatrix} \nonumber \\ &~~~~ - \begin{bmatrix} \sigma_{\mathbf{x} \left( 1 \right)}^2 & 0 & \cdots & 0 \\ 0 & \sigma_{\mathbf{x} \left( 2 \right)}^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_{\mathbf{x} \left( m \right)}^2 \end{bmatrix} \nonumber \\ & = \begin{bmatrix} 0 & \sigma_{\mathbf{x} \left( 1 \right) \mathbf{x} \left( 2 \right)}^2 & \cdots & \sigma_{\mathbf{x} \left( 1 \right) \mathbf{x} \left( m \right)}^2 \\ \sigma_{\mathbf{x} \left( 1 \right)\mathbf{x} \left( 2 \right)}^2 & 0 & \cdots & \sigma_{\mathbf{x} \left( 2 \right)\mathbf{x} \left( m \right)}^2 \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{\mathbf{x} \left( 1 \right)\mathbf{x} \left( m \right)}^2 & \sigma_{\mathbf{x} \left( 2 \right) \mathbf{x} \left( m \right)}^2 & \cdots & 0 \end{bmatrix} .\nonumber \end{flalign} and \begin{flalign} \Lambda_{\mathbf{x}}^{- \frac{1}{2}} & = \begin{bmatrix} \frac{1}{\sigma_{\mathbf{x} \left( 1 \right)}} & 0 & \cdots & 0 \\ 0 & \frac{1}{\sigma_{\mathbf{x} \left( 2 \right)}} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \frac{1}{\sigma_{\mathbf{x} \left( m \right)}} \end{bmatrix} ,\nonumber \end{flalign} where $\sigma_{\mathbf{x} \left( i \right)} = \sqrt{\sigma_{\mathbf{x} \left( i \right)}^2}, i = 1, \ldots, m$. 
As a result, \begin{flalign} &\left( \Sigma_{\mathbf{x}} - \Lambda_{\mathbf{x}} \right) \frac{1}{2} \left( - \Lambda_{\mathbf{x}}^{ - \frac{1}{2}} \right) = - \frac{1}{2} \left( \Sigma_{\mathbf{x}} - \Lambda_{\mathbf{x}} \right) \left( \Lambda_{\mathbf{x}}^{ - \frac{1}{2}} \right) \nonumber \\ &~~~~ = - \frac{1}{2} \begin{bmatrix} 0 & \sigma_{\mathbf{x} \left( 1 \right) \mathbf{x} \left( 2 \right)}^2 & \cdots & \sigma_{\mathbf{x} \left( 1 \right) \mathbf{x} \left( m \right)}^2 \\ \sigma_{\mathbf{x} \left( 1 \right)\mathbf{x} \left( 2 \right)}^2 & 0 & \cdots & \sigma_{\mathbf{x} \left( 2 \right)\mathbf{x} \left( m \right)}^2 \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{\mathbf{x} \left( 1 \right)\mathbf{x} \left( m \right)}^2 & \sigma_{\mathbf{x} \left( 2 \right) \mathbf{x} \left( m \right)}^2 & \cdots & 0 \end{bmatrix} \nonumber \\ &~~~~~~~~ \times \begin{bmatrix} \frac{1}{\sigma_{\mathbf{x} \left( 1 \right)}} & 0 & \cdots & 0 \\ 0 & \frac{1}{\sigma_{\mathbf{x} \left( 2 \right)}} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \frac{1}{\sigma_{\mathbf{x} \left( m \right)}} \end{bmatrix} \nonumber \\ &~~~~ = - \frac{1}{2} \begin{bmatrix} 0 & \frac{ \sigma_{\mathbf{x} \left( 1 \right) \mathbf{x} \left( 2 \right)}^2} {\sigma_{\mathbf{x} \left( 2 \right)}} & \cdots & \frac{ \sigma_{\mathbf{x} \left( 1 \right) \mathbf{x} \left( m \right)}^2 }{\sigma_{\mathbf{x} \left( m \right)}} \\ \frac{ \sigma_{\mathbf{x} \left( 1 \right)\mathbf{x} \left( 2 \right)}^2 } {\sigma_{\mathbf{x} \left( 1 \right)}} & 0 & \cdots & \frac{ \sigma_{\mathbf{x} \left( 2 \right)\mathbf{x} \left( m \right)}^2 } {\sigma_{\mathbf{x} \left( m \right)}} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{ \sigma_{\mathbf{x} \left( 1 \right)\mathbf{x} \left( m \right)}^2 } {\sigma_{\mathbf{x} \left( 1 \right)}} & \frac{ \sigma_{\mathbf{x} \left( 2 \right) \mathbf{x} \left( m \right)}^2 }{\sigma_{\mathbf{x} \left( 2 \right)}} & \cdots & 0 \end{bmatrix} ,\nonumber \end{flalign} and 
hence \begin{flalign} \tr \left[ \left( \Sigma_{\mathbf{x}} - \Lambda_{\mathbf{x}} \right) \frac{1}{2} \left( - \Lambda_{\mathbf{x}}^{ - \frac{1}{2}} \right) \right] = 0 . \nonumber \end{flalign} Consequently, \begin{flalign} \tr \left( \Lambda_{\mathbf{x}}^{\frac{1}{2}} \right) - \tr \left( \Sigma_{\mathbf{x}}^{\frac{1}{2}} \right) &= \tr \left( \Lambda_{\mathbf{x}}^{\frac{1}{2}} - \Sigma_{\mathbf{x}}^{\frac{1}{2}} \right) \nonumber \\ &\geq \tr \left[ \left( \Sigma_{\mathbf{x}} - \Lambda_{\mathbf{x}} \right) \frac{1}{2} \left( - \Lambda_{\mathbf{x}}^{ - \frac{1}{2}} \right) \right] = 0 , \nonumber \end{flalign} where equality holds if and only if $\Sigma_{\mathbf{x}}$ is diagonal. \end{proof} As a matter of fact, it can be proved more generally that for any $0 < q < 1$, \begin{flalign} \tr \left( \Sigma_{\mathbf{x}}^{q} \right) = \sum_{i=1}^{m} \lambda_i^{q} \leq \sum_{i=1}^{m} \left[ \sigma_{\mathbf{x} \left( i \right)}^2 \right]^{q}, \end{flalign} where equality holds if and only if $\Sigma_{\mathbf{x}}$ is a diagonal matrix, in a similar spirit to the case of $q = \frac{1}{2}$. Although this note only concerns the case of $q = \frac{1}{2}$, the more general case of $0 < q < 1$ may be found useful in other settings. We now proceed to present the main results of this note. \begin{theorem} \label{t1} Consider $m$-dimensional random vectors $\mathbf{x}$ and $\mathbf{y}$ with positive definite covariance matrices $\Sigma_{\mathbf{x}}$ and $\Sigma_{\mathbf{y}}$, respectively. Suppose that $\mathbf{x}$ is elliptically distributed with density generator $g_\mathbf{x} \left( \mathbf{u} \right)$, whereas $\mathbf{y}$ is not necessarily elliptical. In addition, suppose that $\Sigma_{\mathbf{x}}$ is a diagonal matrix ($\mathbf{x}$ is independent element-wise), i.e., \begin{flalign} \Sigma_{\mathbf{x}} = \Lambda_{\mathbf{x}} = \mathrm{diag} \left( \sigma_{\mathbf{x} \left( 1 \right)}^2, \ldots, \sigma_{\mathbf{x} \left( m \right)}^2 \right). 
\end{flalign} Meanwhile, denote the diagonal terms of $\Sigma_{\mathbf{y}}$ by $\sigma_{\mathbf{y} \left( 1 \right)}^2, \ldots, \sigma_{\mathbf{y} \left( m \right)}^2$, whereas $\Sigma_{\mathbf{y}}$ is not necessarily diagonal. Then, \begin{flalign} \label{b1} \mathcal{W}_2 \left( p_{\mathbf{x}} ; p_{\mathbf{y}} \right) & \geq \sqrt{\sum_{i=1}^{m} \left\{ \sigma_{\mathbf{x} \left( i \right)}^2 + \sigma_{\mathbf{y} \left( i \right)}^2 - 2 \left[ \sigma_{\mathbf{x} \left( i \right)}^2 \sigma_{\mathbf{y} \left( i \right)}^2 \right]^{\frac{1}{2}} \right\}} \nonumber \\ & = \sqrt{\sum_{i=1}^{m} \left\{ \left[ \sigma_{\mathbf{x} \left( i \right)}^2 \right]^{\frac{1}{2}} - \left[ \sigma_{\mathbf{y} \left( i \right)}^2 \right]^{\frac{1}{2}} \right\}^2 }, \end{flalign} where equality holds if $\mathbf{y}$ is elliptical with the same density generator as $\mathbf{x}$, i.e., $g_\mathbf{y} \left( \mathbf{u} \right) = g_\mathbf{x} \left( \mathbf{u} \right)$, while $\Sigma_{\mathbf{y}}$ is a diagonal matrix as \begin{flalign} \Sigma_{\mathbf{y}} = \Lambda_{\mathbf{y}} = \mathrm{diag} \left( \sigma_{\mathbf{y} \left( 1 \right)}^2, \ldots, \sigma_{\mathbf{y} \left( m \right)}^2 \right), \end{flalign} i.e., $\mathbf{y}$ is independent element-wise. \end{theorem} \begin{proof} It follows from Lemma~\ref{Gaussian} and Lemma~\ref{Gelbrich} that \begin{flalign} \mathcal{W}_2 \left( p_{\mathbf{x}} ; p_{\mathbf{y}} \right) \geq \sqrt{ \tr \left[ \Sigma_{\mathbf{x}} + \Sigma_{\mathbf{y}} - 2 \left( \Sigma_{\mathbf{x}}^{\frac{1}{2}} \Sigma_{\mathbf{y}} \Sigma_{\mathbf{x}}^{\frac{1}{2}} \right)^{\frac{1}{2}} \right] } ,\nonumber \end{flalign} where equality holds if $\mathbf{y}$ is elliptical with the same density generator as $\mathbf{x}$. 
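In passing, the two bounds used in this argument can be checked numerically. The following Python sketch (the $2\times2$ covariance matrices are chosen purely for illustration; for a $2\times2$ positive definite matrix the trace of the matrix square root has the closed form $\sqrt{\mathrm{tr}\, M + 2\sqrt{\det M}}$) verifies that the Gelbrich trace expression dominates the diagonal-terms bound, with equality when $\Sigma_{\mathbf{y}}$ is diagonal:

```python
import math

def tr_sqrt_2x2(m):
    # trace of the square root of a 2x2 symmetric positive definite matrix:
    # tr(M^{1/2}) = sqrt(tr M + 2 sqrt(det M))
    (a, b), (c, d) = m
    return math.sqrt(a + d + 2.0 * math.sqrt(a * d - b * c))

def gelbrich_expression(sx2, sigma_y):
    """sqrt(tr[Sigma_x + Sigma_y - 2 (Sigma_x^{1/2} Sigma_y Sigma_x^{1/2})^{1/2}])
    for diagonal Sigma_x = diag(sx2) and a general 2x2 covariance sigma_y."""
    r = [math.sqrt(v) for v in sx2]
    m = [[r[i] * sigma_y[i][j] * r[j] for j in range(2)] for i in range(2)]
    return math.sqrt(sum(sx2) + sigma_y[0][0] + sigma_y[1][1]
                     - 2.0 * tr_sqrt_2x2(m))

def diagonal_bound(sx2, sy2):
    # sum over i of (sigma_x(i) - sigma_y(i))^2, diagonal terms only
    return math.sqrt(sum((math.sqrt(a) - math.sqrt(b)) ** 2
                         for a, b in zip(sx2, sy2)))

sx2 = [1.0, 4.0]                       # Sigma_x = diag(1, 4)
sy_corr = [[4.0, 2.0], [2.0, 4.0]]     # correlated Sigma_y, diagonal terms (4, 4)
sy_diag = [[4.0, 0.0], [0.0, 4.0]]     # diagonal Sigma_y with the same diagonal

assert gelbrich_expression(sx2, sy_corr) >= diagonal_bound(sx2, [4.0, 4.0])
assert abs(gelbrich_expression(sx2, sy_diag)
           - diagonal_bound(sx2, [4.0, 4.0])) < 1e-12
```

Both assertions hold, consistent with the inequality chain above.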
Note then that \begin{flalign} & \sqrt{ \tr \left[ \Sigma_{\mathbf{x}} + \Sigma_{\mathbf{y}} - 2 \left( \Sigma_{\mathbf{x}}^{\frac{1}{2}} \Sigma_{\mathbf{y}} \Sigma_{\mathbf{x}}^{\frac{1}{2}} \right)^{\frac{1}{2}} \right] } \nonumber \\ &~~~~ = \sqrt{ \tr \left[ \Lambda_{\mathbf{x}} + \Sigma_{\mathbf{y}} - 2 \left( \Lambda_{\mathbf{x}}^{\frac{1}{2}} \Sigma_{\mathbf{y}} \Lambda_{\mathbf{x}}^{\frac{1}{2}} \right)^{\frac{1}{2}} \right] } \nonumber \\ &~~~~ = \sqrt{ \tr \left( \Lambda_{\mathbf{x}} \right) + \tr \left( \Sigma_{\mathbf{y}} \right) - 2 \tr \left[ \left( \Lambda_{\mathbf{x}}^{\frac{1}{2}} \Sigma_{\mathbf{y}} \Lambda_{\mathbf{x}}^{\frac{1}{2}} \right)^{\frac{1}{2}} \right]} ,\nonumber \end{flalign} where \begin{flalign} \tr \left( \Lambda_{\mathbf{x}} \right) = \sum_{i=1}^{m} \sigma_{\mathbf{x} \left( i \right)}^2, \nonumber \end{flalign} and \begin{flalign} \tr \left( \Sigma_{\mathbf{y}} \right) = \sum_{i=1}^{m} \sigma_{\mathbf{y} \left( i \right)}^2. \nonumber \end{flalign} It then remains to prove that \begin{flalign} \tr \left[ \left( \Lambda_{\mathbf{x}}^{\frac{1}{2}} \Sigma_{\mathbf{y}} \Lambda_{\mathbf{x}}^{\frac{1}{2}} \right)^{\frac{1}{2}} \right] \leq \sum_{i=1}^{m} \left[ \sigma_{\mathbf{x} \left( i \right)}^2 \sigma_{\mathbf{y} \left( i \right)}^2 \right]^{\frac{1}{2}} ,\nonumber \end{flalign} where equality holds if $\Sigma_{\mathbf{y}}$ is diagonal as \begin{flalign} \Sigma_{\mathbf{y}} = \Lambda_{\mathbf{y}} = \mathrm{diag} \left( \sigma_{\mathbf{y} \left( 1 \right)}^2, \ldots, \sigma_{\mathbf{y} \left( m \right)}^2 \right). 
\nonumber \end{flalign} To prove this, denote first that \begin{flalign} \Sigma_{\mathbf{y}} = \begin{bmatrix} \sigma_{\mathbf{y} \left( 1 \right)}^2 & \sigma_{\mathbf{y} \left( 1 \right) \mathbf{y} \left( 2 \right)}^2 & \cdots & \sigma_{\mathbf{y} \left( 1 \right) \mathbf{y} \left( m \right)}^2 \\ \sigma_{\mathbf{y} \left( 1 \right)\mathbf{y} \left( 2 \right)}^2 & \sigma_{\mathbf{y} \left( 2 \right)}^2 & \cdots & \sigma_{\mathbf{y} \left( 2 \right)\mathbf{y} \left( m \right)}^2 \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{\mathbf{y} \left( 1 \right)\mathbf{y} \left( m \right)}^2 & \sigma_{\mathbf{y} \left( 2 \right) \mathbf{y} \left( m \right)}^2 & \cdots & \sigma_{\mathbf{y} \left( m \right)}^2 \end{bmatrix} ,\nonumber \end{flalign} and accordingly, \begin{flalign} &\Lambda_{\mathbf{x}}^{\frac{1}{2}} \Sigma_{\mathbf{y}} \Lambda_{\mathbf{x}}^{\frac{1}{2}} \nonumber \\ &= \Lambda_{\mathbf{x}}^{\frac{1}{2}} \begin{bmatrix} \sigma_{\mathbf{y} \left( 1 \right)}^2 & \sigma_{\mathbf{y} \left( 1 \right) \mathbf{y} \left( 2 \right)}^2 & \cdots & \sigma_{\mathbf{y} \left( 1 \right) \mathbf{y} \left( m \right)}^2 \\ \sigma_{\mathbf{y} \left( 1 \right)\mathbf{y} \left( 2 \right)}^2 & \sigma_{\mathbf{y} \left( 2 \right)}^2 & \cdots & \sigma_{\mathbf{y} \left( 2 \right)\mathbf{y} \left( m \right)}^2 \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{\mathbf{y} \left( 1 \right)\mathbf{y} \left( m \right)}^2 & \sigma_{\mathbf{y} \left( 2 \right) \mathbf{y} \left( m \right)}^2 & \cdots & \sigma_{\mathbf{y} \left( m \right)}^2 \end{bmatrix} \Lambda_{\mathbf{x}}^{\frac{1}{2}} \nonumber \\ &= \begin{bmatrix} \sigma_{\mathbf{x} \left( 1 \right)}^2 \sigma_{\mathbf{y} \left( 1 \right)}^2 & \cdots & \sigma_{\mathbf{x} \left( 1 \right)} \sigma_{\mathbf{x} \left( m \right)} \sigma_{\mathbf{y} \left( 1 \right) \mathbf{y} \left( m \right)}^2 \\ \vdots & \ddots & \vdots \\ \sigma_{\mathbf{x} \left( 1 \right)} \sigma_{\mathbf{x} \left( m \right)}\sigma_{\mathbf{y} \left( 1 \right)\mathbf{y} 
\left( m \right)}^2 & \cdots & \sigma_{\mathbf{x} \left( m \right)}^2 \sigma_{\mathbf{y} \left( m \right)}^2 \end{bmatrix} ,\nonumber \end{flalign} where \begin{flalign} \Lambda_{\mathbf{x}}^{\frac{1}{2}} &= \left[ \mathrm{diag} \left( \sigma_{\mathbf{x} \left( 1 \right)}^2, \ldots, \sigma_{\mathbf{x} \left( m \right)}^2 \right) \right]^{\frac{1}{2}} \nonumber \\ &= \mathrm{diag} \left( \sqrt{\sigma_{\mathbf{x} \left( 1 \right)}^2}, \ldots, \sqrt{\sigma_{\mathbf{x} \left( m \right)}^2} \right) \nonumber \\ &= \mathrm{diag} \left( \sigma_{\mathbf{x} \left( 1 \right)}, \ldots, \sigma_{\mathbf{x} \left( m \right)} \right). \nonumber \end{flalign} It is clear that $\Lambda_{\mathbf{x}}^{\frac{1}{2}} \Sigma_{\mathbf{y}} \Lambda_{\mathbf{x}}^{\frac{1}{2}}$ is positive semi-definite. On the other hand, denote the eigenvalues of $\Lambda_{\mathbf{x}}^{\frac{1}{2}} \Sigma_{\mathbf{y}} \Lambda_{\mathbf{x}}^{\frac{1}{2}}$ as $\lambda_1, \ldots, \lambda_m$. Then, \begin{flalign} \tr \left[ \left( \Lambda_{\mathbf{x}}^{\frac{1}{2}} \Sigma_{\mathbf{y}} \Lambda_{\mathbf{x}}^{\frac{1}{2}} \right)^{\frac{1}{2}} \right] = \sum_{i=1}^{m} \lambda_i^{\frac{1}{2}} ,\nonumber \end{flalign} and it is known from Proposition~\ref{half} that \begin{flalign} \sum_{i=1}^{m} \lambda_i^{\frac{1}{2}} \leq \sum_{i=1}^{m} \left[ \sigma_{\mathbf{x} \left( i \right)}^2 \sigma_{\mathbf{y} \left( i \right)}^2 \right]^{\frac{1}{2}} ,\nonumber \end{flalign} where equality holds if $\Lambda_{\mathbf{x}}^{\frac{1}{2}} \Sigma_{\mathbf{y}} \Lambda_{\mathbf{x}}^{\frac{1}{2}}$ is a diagonal matrix, i.e., \begin{flalign} \sigma_{\mathbf{x} \left( i \right)} \sigma_{\mathbf{x} \left( j \right)} \sigma_{\mathbf{y} \left( i \right) \mathbf{y} \left( j \right) }^2 = 0 \nonumber \end{flalign} for any $i, j = 1, \ldots, m; i \neq j$. 
Since $\Sigma_{\mathbf{x}}$ is positive definite (and hence $\sigma_{\mathbf{x} \left( i \right)} > 0,~\forall i = 1, \ldots, m$), this is equivalent to \begin{flalign} \sigma_{\mathbf{y} \left( i \right) \mathbf{y} \left( j \right) }^2 = 0 \nonumber \end{flalign} for any $i, j = 1, \ldots, m; i \neq j$, i.e., $\Sigma_{\mathbf{y}}$ is diagonal as \begin{flalign} \Sigma_{\mathbf{y}} = \Lambda_{\mathbf{y}} = \mathrm{diag} \left( \sigma_{\mathbf{y} \left( 1 \right)}^2, \ldots, \sigma_{\mathbf{y} \left( m \right)}^2 \right). \nonumber \end{flalign} This completes the proof. \end{proof} Accordingly, we may examine the implications of Theorem~\ref{t1} for the Gelbrich bound, as shown in the following corollary; it is worth mentioning that herein $\mathbf{x}$ is not necessarily elliptical in the first place. \begin{corollary} \label{c1} Consider $m$-dimensional random vectors $\mathbf{x}$ and $\mathbf{y}$ with positive definite covariance matrices $\Sigma_{\mathbf{x}}$ and $\Sigma_{\mathbf{y}}$, respectively. Suppose that $\Sigma_{\mathbf{x}}$ is a diagonal matrix, i.e., \begin{flalign} \Sigma_{\mathbf{x}} = \Lambda_{\mathbf{x}} = \mathrm{diag} \left( \sigma_{\mathbf{x} \left( 1 \right)}^2, \ldots, \sigma_{\mathbf{x} \left( m \right)}^2 \right). \end{flalign} Denote the diagonal terms of $\Sigma_{\mathbf{y}}$ by $\sigma_{\mathbf{y} \left( 1 \right)}^2, \ldots, \sigma_{\mathbf{y} \left( m \right)}^2$, whereas $\Sigma_{\mathbf{y}}$ is not necessarily a diagonal matrix. 
Then, \begin{flalign} \label{b2} \mathcal{W}_2 \left( p_{\mathbf{x}} ; p_{\mathbf{y}} \right) & \geq \sqrt{\sum_{i=1}^{m} \left\{ \sigma_{\mathbf{x} \left( i \right)}^2 + \sigma_{\mathbf{y} \left( i \right)}^2 - 2 \left[ \sigma_{\mathbf{x} \left( i \right)}^2 \sigma_{\mathbf{y} \left( i \right)}^2 \right]^{\frac{1}{2}} \right\}} \nonumber \\ & = \sqrt{\sum_{i=1}^{m} \left\{ \left[ \sigma_{\mathbf{x} \left( i \right)}^2 \right]^{\frac{1}{2}} - \left[ \sigma_{\mathbf{y} \left( i \right)}^2 \right]^{\frac{1}{2}} \right\}^2 }. \end{flalign} \end{corollary} \vspace{3mm} \begin{proof} It is known from the proof of Theorem~\ref{t1} that \begin{flalign} &\sqrt{ \tr \left[ \Sigma_{\mathbf{x}} + \Sigma_{\mathbf{y}} - 2 \left( \Sigma_{\mathbf{x}}^{\frac{1}{2}} \Sigma_{\mathbf{y}} \Sigma_{\mathbf{x}}^{\frac{1}{2}} \right)^{\frac{1}{2}} \right] } \nonumber \\ &~~~~ \geq \sqrt{\sum_{i=1}^{m} \left\{ \sigma_{\mathbf{x} \left( i \right)}^2 + \sigma_{\mathbf{y} \left( i \right)}^2 - 2 \left[ \sigma_{\mathbf{x} \left( i \right)}^2 \sigma_{\mathbf{y} \left( i \right)}^2 \right]^{\frac{1}{2}} \right\}} \nonumber \\ &~~~~ = \sqrt{\sum_{i=1}^{m} \left\{ \left[ \sigma_{\mathbf{x} \left( i \right)}^2 \right]^{\frac{1}{2}} - \left[ \sigma_{\mathbf{y} \left( i \right)}^2 \right]^{\frac{1}{2}} \right\}^2 }. \nonumber \end{flalign} As such, \eqref{b2} follows directly from the Gelbrich bound. \end{proof} More generally, we may consider the cases when $\Sigma_{\mathbf{x}}$ is not necessarily diagonal, i.e., when $\mathbf{x}$ is not necessarily independent element-wise. \begin{theorem} \label{t2} Consider $m$-dimensional random vectors $\mathbf{x}$ and $\mathbf{y}$ with positive definite covariance matrices $\Sigma_{\mathbf{x}}$ and $\Sigma_{\mathbf{y}}$, respectively. Suppose that $\mathbf{x}$ is elliptically distributed with density generator $g_\mathbf{x} \left( \mathbf{u} \right)$, whereas $\mathbf{y}$ is not necessarily elliptical. 
Denote the eigen-decomposition of $\Sigma_{\mathbf{x}}$ as \begin{flalign} \Sigma_{\mathbf{x}} = U_{\mathbf{x}} \Lambda_{\mathbf{x}} U_{\mathbf{x}}^{\mathrm{T}}, \end{flalign} where \begin{flalign} \Lambda_{\mathbf{x}} = \mathrm{diag} \left( \lambda_1, \ldots, \lambda_m \right). \end{flalign} Meanwhile, denote \begin{flalign} \overline{\Sigma}_{\mathbf{y}} = U_{\mathbf{x}}^{\mathrm{T}} \Sigma_{\mathbf{y}} U_{\mathbf{x}}, \end{flalign} and denote the diagonal terms of $\overline{\Sigma}_{\mathbf{y}}$ by $\overline{\sigma}_{\mathbf{y} \left( 1 \right)}^2, \ldots, \overline{\sigma}_{\mathbf{y} \left( m \right)}^2$, whereas $\overline{\Sigma}_{\mathbf{y}}$ is not necessarily diagonal; note also that $\Sigma_{\mathbf{y}}$ is not necessarily diagonal in the first place. Then, \begin{flalign} \mathcal{W}_2 \left( p_{\mathbf{x}} ; p_{\mathbf{y}} \right) & \geq \sqrt{\sum_{i=1}^{m} \left\{ \lambda_i + \overline{\sigma}_{\mathbf{y} \left( i \right)}^2 - 2 \left[ \lambda_i \overline{\sigma}_{\mathbf{y} \left( i \right)}^2 \right]^{\frac{1}{2}} \right\}} \nonumber \\ & = \sqrt{\sum_{i=1}^{m} \left\{ \lambda_i^{\frac{1}{2}} - \left[ \overline{\sigma}_{\mathbf{y} \left( i \right)}^2 \right]^{\frac{1}{2}} \right\}^2 }, \end{flalign} where equality holds if $U_{\mathbf{x}}^{\mathrm{T}} \mathbf{y}$ is elliptical with the same density generator as $\mathbf{x}$, i.e., $g_\mathbf{y} \left( \mathbf{u} \right) = g_\mathbf{x} \left( \mathbf{u} \right)$, while $\Sigma_{\mathbf{y}}$ is given by \begin{flalign} \Sigma_{\mathbf{y}} = U_{\mathbf{x}} \left[ \mathrm{diag} \left( \overline{\sigma}_{\mathbf{y} \left( 1 \right)}^2, \ldots, \overline{\sigma}_{\mathbf{y} \left( m \right)}^2 \right) \right] U_{\mathbf{x}}^{\mathrm{T}}. 
\end{flalign} \end{theorem} \vspace{3mm} \begin{proof} Note first that when \begin{flalign} \Sigma_{\mathbf{x}} = U_{\mathbf{x}} \Lambda_{\mathbf{x}} U_{\mathbf{x}}^{\mathrm{T}}, \nonumber \end{flalign} it can be verified that \begin{flalign} \Sigma_{\mathbf{x}}^{\frac{1}{2}} = \left( U_{\mathbf{x}} \Lambda_{\mathbf{x}} U_{\mathbf{x}}^{\mathrm{T}} \right)^{\frac{1}{2}} = U_{\mathbf{x}} \Lambda_{\mathbf{x}}^{\frac{1}{2}} U_{\mathbf{x}}^{\mathrm{T}}, \nonumber \end{flalign} and hence \begin{flalign} &\sqrt{ \tr \left[ \Sigma_{\mathbf{x}} + \Sigma_{\mathbf{y}} - 2 \left( \Sigma_{\mathbf{x}}^{\frac{1}{2}} \Sigma_{\mathbf{y}} \Sigma_{\mathbf{x}}^{\frac{1}{2}} \right)^{\frac{1}{2}} \right] } \nonumber \\ & = \sqrt{ \tr \left[ U_{\mathbf{x}} \Lambda_{\mathbf{x}} U_{\mathbf{x}}^{\mathrm{T}} + \Sigma_{\mathbf{y}} - 2 \left( U_{\mathbf{x}} \Lambda_{\mathbf{x}}^{\frac{1}{2}} U_{\mathbf{x}}^{\mathrm{T}} \Sigma_{\mathbf{y}} U_{\mathbf{x}} \Lambda_{\mathbf{x}}^{\frac{1}{2}} U_{\mathbf{x}}^{\mathrm{T}}\right) \right] } \nonumber \\ & = \sqrt{ \tr \left\{ U_{\mathbf{x}} \left[ \Lambda_{\mathbf{x}} + U_{\mathbf{x}}^{\mathrm{T}} \Sigma_{\mathbf{y}} U_{\mathbf{x}} - 2 \left( \Lambda_{\mathbf{x}}^{\frac{1}{2}} U_{\mathbf{x}}^{\mathrm{T}} \Sigma_{\mathbf{y}} U_{\mathbf{x}} \Lambda_{\mathbf{x}}^{\frac{1}{2}} \right) \right] U_{\mathbf{x}}^{\mathrm{T}} \right\} } \nonumber \\ & = \sqrt{ \tr \left\{ \left[ \Lambda_{\mathbf{x}} + U_{\mathbf{x}}^{\mathrm{T}} \Sigma_{\mathbf{y}} U_{\mathbf{x}} - 2 \left( \Lambda_{\mathbf{x}}^{\frac{1}{2}} U_{\mathbf{x}}^{\mathrm{T}} \Sigma_{\mathbf{y}} U_{\mathbf{x}} \Lambda_{\mathbf{x}}^{\frac{1}{2}} \right) \right] U_{\mathbf{x}}^{\mathrm{T}}U_{\mathbf{x}} \right\} } \nonumber \\ & = \sqrt{ \tr \left[ \Lambda_{\mathbf{x}} + U_{\mathbf{x}}^{\mathrm{T}} \Sigma_{\mathbf{y}} U_{\mathbf{x}} - 2 \left( \Lambda_{\mathbf{x}}^{\frac{1}{2}} U_{\mathbf{x}}^{\mathrm{T}} \Sigma_{\mathbf{y}} U_{\mathbf{x}} \Lambda_{\mathbf{x}}^{\frac{1}{2}} \right) \right] } \nonumber 
\\ & = \sqrt{ \tr \left[ \Lambda_{\mathbf{x}} + \overline{\Sigma}_{\mathbf{y}} - 2 \left( \Lambda_{\mathbf{x}}^{\frac{1}{2}} \overline{\Sigma}_{\mathbf{y}} \Lambda_{\mathbf{x}}^{\frac{1}{2}} \right) \right] }, \nonumber \end{flalign} where \begin{flalign} \overline{\Sigma}_{\mathbf{y}} = U_{\mathbf{x}}^{\mathrm{T}} \Sigma_{\mathbf{y}} U_{\mathbf{x}}. \nonumber \end{flalign} Note also that $\overline{\Sigma}_{\mathbf{y}}$ is essentially the covariance of $U_{\mathbf{x}}^{\mathrm{T}} \mathbf{y}$. Then, it is known from the proof of Theorem~\ref{t1} that \begin{flalign} &\sqrt{ \tr \left[ \Lambda_{\mathbf{x}} + \overline{\Sigma}_{\mathbf{y}} - 2 \left( \Lambda_{\mathbf{x}}^{\frac{1}{2}} \overline{\Sigma}_{\mathbf{y}} \Lambda_{\mathbf{x}}^{\frac{1}{2}} \right) \right] } \nonumber \\ &~~~~ \geq \sqrt{\sum_{i=1}^{m} \left\{ \lambda_i + \overline{\sigma}_{\mathbf{y} \left( i \right)}^2 - 2 \left[ \lambda_i \overline{\sigma}_{\mathbf{y} \left( i \right)}^2 \right]^{\frac{1}{2}} \right\}} \nonumber \\ &~~~~ = \sqrt{\sum_{i=1}^{m} \left\{ \lambda_i^{\frac{1}{2}} - \left[ \overline{\sigma}_{\mathbf{y} \left( i \right)}^2 \right]^{\frac{1}{2}} \right\}^2 }, \nonumber \end{flalign} where equality holds if $U_{\mathbf{x}}^{\mathrm{T}} \mathbf{y}$ is elliptical with the same density generator as $\mathbf{x}$, i.e., $g_\mathbf{y} \left( \mathbf{u} \right) = g_\mathbf{x} \left( \mathbf{u} \right)$, while $\overline{\Sigma}_{\mathbf{y}}$ is diagonal as \begin{flalign} \overline{\Sigma}_{\mathbf{y}} = \mathrm{diag} \left( \overline{\sigma}_{\mathbf{y} \left( 1 \right)}^2, \ldots, \overline{\sigma}_{\mathbf{y} \left( m \right)}^2 \right), \nonumber \end{flalign} or equivalently, \begin{flalign} \Sigma_{\mathbf{y}} = U_{\mathbf{x}} \overline{\Sigma}_{\mathbf{y}} U_{\mathbf{x}}^{\mathrm{T}} = U_{\mathbf{x}} \left[ \mathrm{diag} \left( \overline{\sigma}_{\mathbf{y} \left( 1 \right)}^2, \ldots, \overline{\sigma}_{\mathbf{y} \left( m \right)}^2 \right) \right] U_{\mathbf{x}}^{\mathrm{T}}. 
\nonumber \end{flalign} This concludes the proof. \end{proof} Correspondingly, we may examine the implications of Theorem~\ref{t2} for the Gelbrich bound in the general case where $\mathbf{x}$ is neither necessarily elliptical nor independent element-wise. \begin{corollary} \label{c2} Consider $m$-dimensional random vectors $\mathbf{x}$ and $\mathbf{y}$ with positive definite covariance matrices $\Sigma_{\mathbf{x}}$ and $\Sigma_{\mathbf{y}}$, respectively. Denote the eigen-decomposition of $\Sigma_{\mathbf{x}}$ as \begin{flalign} \Sigma_{\mathbf{x}} = U_{\mathbf{x}} \Lambda_{\mathbf{x}} U_{\mathbf{x}}^{\mathrm{T}}, \end{flalign} where \begin{flalign} \Lambda_{\mathbf{x}} = \mathrm{diag} \left( \lambda_1, \ldots, \lambda_m \right). \end{flalign} Meanwhile, denote \begin{flalign} \overline{\Sigma}_{\mathbf{y}} = U_{\mathbf{x}}^{\mathrm{T}} \Sigma_{\mathbf{y}} U_{\mathbf{x}}, \end{flalign} and denote the diagonal terms of $\overline{\Sigma}_{\mathbf{y}}$ by $\overline{\sigma}_{\mathbf{y} \left( 1 \right)}^2, \ldots, \overline{\sigma}_{\mathbf{y} \left( m \right)}^2$, whereas $\overline{\Sigma}_{\mathbf{y}}$ is not necessarily a diagonal matrix. Then, \begin{flalign} \mathcal{W}_2 \left( p_{\mathbf{x}} ; p_{\mathbf{y}} \right) & \geq \sqrt{\sum_{i=1}^{m} \left\{ \lambda_i + \overline{\sigma}_{\mathbf{y} \left( i \right)}^2 - 2 \left[ \lambda_i \overline{\sigma}_{\mathbf{y} \left( i \right)}^2 \right]^{\frac{1}{2}} \right\}} \nonumber \\ & = \sqrt{\sum_{i=1}^{m} \left\{ \lambda_i^{\frac{1}{2}} - \left[ \overline{\sigma}_{\mathbf{y} \left( i \right)}^2 \right]^{\frac{1}{2}} \right\}^2 }. \end{flalign} \end{corollary} \vspace{3mm} \section{Conclusion} We have presented a property of the $\mathcal{W}_2$ Wasserstein distance: Independent elliptical distributions minimize their $\mathcal{W}_2$ Wasserstein distance from given independent elliptical distributions with the same density generators.
We have also examined the implications of this property for the Gelbrich bound when the distributions are not necessarily elliptical, while generalizing the results to the cases where the distributions are not independent. It might be interesting to further examine the implications of this property. \balance \bibliographystyle{IEEEtran}
\section{Introduction} One of the main tasks of the current LHC run is to establish the properties of the Higgs boson discovered at the LHC in 2012~\cite{Aad:2012tfa, Chatrchyan:2012ufa}. The production process in association with top quarks, $pp \to t \tb H$, provides a direct way to probe the strength of the top Yukawa coupling without making any assumptions regarding its nature. This necessitates an improvement of the accuracy with which theoretical predictions for $pp \to t \tb H$ are known. A great deal of progress has been achieved in this field in recent years. Although the next-to-leading-order (NLO) QCD, i.e.\ ${\cal O}(\alpha_{\rm s}^3\alpha)$, predictions have been known for some time~\cite{Beenakker:2001rj, Reina:2001sf}, they have recently been recalculated and matched to parton showers in~\cite{Hirschi:2011pa, Frederix:2011zi, Garzelli:2011vp, Hartanto:2015uka}. More recently, the mixed QCD-weak corrections~\cite{Frixione:2014qaa} and QCD-EW corrections~\cite{Yu:2014cka, Frixione:2015zaa} of ${\cal O}(\alpha_{\rm s}^2\alpha^2)$, as well as the NLO QCD corrections to hadronic $t \tb H$ production with top and antitop quarks decaying into bottom quarks and leptons~\cite{Denner:2015yca}, have also become available. However, calculations of the next-to-next-to-leading-order (NNLO) QCD corrections are currently technically out of reach. It is nevertheless interesting to ask what the size and the effect of certain classes of QCD corrections beyond NLO accuracy are. One such class of contributions originates from soft gluon emission in the threshold limit. In this context the absolute threshold corrections have been included to all orders in perturbation theory~\cite{Kulesza:2015vda}, and an approximation of the NNLO QCD corrections has been obtained in the invariant mass threshold limit~\cite{Broggio:2015lya}.
In this contribution we report on the inclusion of the contributions from soft gluon emission in the invariant mass threshold limit to all orders in perturbation theory. The traditional (Mellin-space) resummation formalism, which is often applied in this type of calculation, is very well developed and has been widely employed for the description of $2 \to 2$ processes at the Born level. The universality of resummation concepts warrants their application to scattering processes with many partons in the final state, as shown in a general analytical treatment developed for an arbitrary number of partons~\cite{Bonciani:2003nt, Aybat:2006wq}. In particular, using the concept of individual weights for each of the functions describing a different type of dynamics, be it hard, soft/collinear or soft, the factorization of the cross sections into these functions can be shown~\cite{Contopanagos:1996nh}. At the level of a specific process, adding one more particle or a jet to the final state requires accounting for more complicated kinematics and a possible change in the colour structure of the underlying hard scattering. In the general framework the former manifests itself in the appearance of a new type of weight, strictly related to the definition of the considered observable, while the latter influences the soft and hard functions. More specifically, for processes with more than three partons involved at the Born level, the non-trivial colour flow influences the contributions from wide-angle soft gluon emissions, which have to be included at next-to-leading-logarithmic (NLL) accuracy. The evolution of the colour exchange at NLL is governed by the one-loop soft anomalous dimension, which then needs to be calculated. In the following we discuss these modifications for a generic $ij \to kl B$ process, where $i,j$ denote massless coloured partons, $k, l$ are two massive coloured particles and $B$ is a massive colour-singlet particle.
The corrections are considered in the invariant mass threshold limit, with the corresponding weight given by $z_5 = 1-(p_k+p_l+p_B)^2/\hat{s}$. Subsequently we apply the results to the case of associated Higgs boson production with top quarks, where in the threshold limit the cross section receives enhancements in the form of logarithmic corrections in $z_5$, i.e.\ $(\log^i z_5/z_5)_+,\;i=0,1,\ldots$. The quantity $z_5$ measures the fraction of the total initial state energy that goes into the gluon emission. An additional improvement of the calculation at NLL accuracy is achieved by including the ${\cal O}(\alpha_{\rm s})$ non-logarithmic threshold corrections originating from hard dynamics. \section{Resummation at production threshold} At the partonic level, the Mellin moments for the process $ ij \to kl B$ are given by \begin{equation} \frac{d\hat \sigma_{ij \to kl B, N}}{dQ^2} (m_k, m_l, m_B, \mu_F^2, \mu_R^2) = \int_0^1 d \hat\rho \, \hat\rho^{N-1} \frac{d\hat \sigma_{ij \to kl B}}{dQ^2} (\hat \rho, m_k, m_l, m_B, \mu_F^2, \mu_R^2) \end{equation} with $\hat \rho = 1- z_5=Q^2/\hat s$, $Q^2=(p_l+p_k+p_B)^2$. At LO, $t \tb H$ production receives contributions from the $q \bar q$ and $gg$ channels. We analyze the colour structure of the underlying processes in the $s$-channel colour bases, $\{ c_I^q\}$ and $\{c_I^g\}$, with \mbox{$c_{\bf 1}^q = \delta^{\alpha_{i}\alpha_{j}} \delta^{\alpha_{k}\alpha_{l}},$} \mbox{$c_{\bf 8}^{q} = T^a_{\alpha_{i}\alpha_{j}} T^a_{\alpha_{k}\alpha_{l}},$} \mbox{$c_{\bf 1}^{g} = \delta^{a_i a_j} \, \delta^{\alpha_k \alpha_l}, $} \mbox{$c_{\bf 8S}^{g}= T^b _{\alpha_l \alpha_k} d^{b a_i a_j} ,$} \mbox{$c^{g}_{\bf 8A} = i T^b _{\alpha_l \alpha_k} f^{b a_i a_j} $}. In this basis the soft anomalous dimension matrix becomes diagonal in the absolute production threshold limit~\cite{KS}. However, for the invariant mass kinematics the soft anomalous dimension matrix with full kinematic dependence is required, which is not diagonal.
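As a concrete illustration of this basis, the lowest-order soft matrix $\tilde{S}_{IJ}=\mathrm{Tr}\,[c_I^\dagger c_J]$ (used below) can be evaluated explicitly for the $q \bar q$ channel. The short Python sketch below (our own construction, building the generators $T^a$ from the Gell-Mann matrices) reproduces the expected result $\tilde{S}=\mathrm{diag}\left(N_c^2,\,(N_c^2-1)/4\right)=\mathrm{diag}(9,2)$:

```python
import math
from itertools import product

s3 = 1.0 / math.sqrt(3.0)
# Gell-Mann matrices lambda^a; the SU(3) generators are T^a = lambda^a / 2
lam = [
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],
    [[s3, 0, 0], [0, s3, 0], [0, 0, -2.0 * s3]],
]
T = [[[lam[a][i][j] / 2.0 for j in range(3)] for i in range(3)]
     for a in range(8)]

def c_singlet(i, j, k, l):          # c_1^q = delta_{ij} delta_{kl}
    return complex(i == j and k == l)

def c_octet(i, j, k, l):            # c_8^q = sum_a T^a_{ij} T^a_{kl}
    return sum(T[a][i][j] * T[a][k][l] for a in range(8))

basis = [c_singlet, c_octet]
# S~_{IJ} = Tr[c_I^dagger c_J]: sum over all four colour indices
S = [[sum(basis[A](i, j, k, l).conjugate() * basis[B](i, j, k, l)
          for i, j, k, l in product(range(3), repeat=4))
      for B in range(2)] for A in range(2)]

# expected: S~ = diag(N_c^2, (N_c^2 - 1)/4) = diag(9, 2)
assert abs(S[0][0] - 9.0) < 1e-9 and abs(S[1][1] - 2.0) < 1e-9
assert abs(S[0][1]) < 1e-9 and abs(S[1][0]) < 1e-9
```

The vanishing off-diagonal entries reflect the orthogonality of the singlet and octet tensors in this basis.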
The NLL resummed cross section in $N$-space has the form~\cite{Contopanagos:1996nh,Kidonakis:1998nf} \begin{equation} \label{eq:res:fact} \frac{d\hat \sigma^{{\rm (res)}}_{ij{\scriptscriptstyle \to} kl B,N}}{dQ^2} = \sum_{I,J} \,H_{ij{\scriptscriptstyle \to} kl B,IJ}(N)\,\tilde{S}_{ij{\scriptscriptstyle \to} kl B,JI}(N)\,\Delta^i_{N+1} \Delta^j_{N+1} \end{equation} where we suppress explicit dependence on the scales. The indices $I$ and $J$ in Eq.~(\ref{eq:res:fact}) are colour space matrix indices. The functions $H_{ij{\scriptscriptstyle \to} kl B,IJ}(N)$ denote the colour-channel-dependent hard contributions, originating from the LO partonic cross sections in Mellin-moment space and, at higher orders, from the contributions beyond LO. The radiative factors $\Delta^i_{N}$ describe the effect of the soft gluon radiation collinear to the initial state partons and are universal, see e.g.~\cite{BCMNtop}. Large-angle soft gluon emission is accounted for by the factors $S_{ ij{\scriptscriptstyle \to} kl B,IJ}(N)$, which are directly related to the soft gluon anomalous dimension calculated in~\cite{Kulesza:2015vda}. As indicated by the lower indices, the wide-angle soft emission depends on the partonic process under consideration and on the colour configuration of the participating particles. In the limit $p_B,m_B\to0$ the matrix $S_{ ij{\scriptscriptstyle \to} kl B,IJ}(N)$ coincides with the corresponding matrix for the $2 \to 2$ process $ij \to kl$; the same matrix is also recovered in the absolute threshold limit~\cite{Kulesza:2015vda}. In our calculations we consider all perturbative functions governing the radiative factors up to the terms needed to obtain NLL accuracy in the resummed expressions.
The function $S_{ ij{\scriptscriptstyle \to} kl B,IJ}(N)$ is given by~\cite{Kidonakis:1998nf} \begin{eqnarray} \tilde{S}_{ij\to kl}\left(N\right) & = & \bar{\mathrm{P}}\exp\left[\int_{\mu}^{Q/N}\frac{dq}{q}\Gamma_{ij\to kl}^{\dagger}\left(\alpha_{\mathrm{s}}\left(q^{2}\right)\right)\right]\tilde{S}_{ij\to kl}\nonumber \\ & & \times\mathrm{P}\exp\left[\int_{\mu}^{Q/N}\frac{dq}{q}\Gamma_{ij\to kl}\left(\alpha_{\mathrm{s}}\left(q^{2}\right)\right)\right],\nonumber \end{eqnarray} where at the lowest order the matrix $\tilde{S}_{ij\to kl,IJ}=\mathrm{Tr}\left[c_I^\dagger c_J\right]$, and $\mathrm{P}$ and $\bar{\mathrm{P}}$ denote the path- and reverse path-ordering in the variable $q$, respectively. If the soft anomalous dimension matrix is diagonal this expression simplifies; however, this is not the case for the invariant mass threshold kinematics. Therefore we make use of the method of~\cite{Kidonakis:1998nf} in order to diagonalize the soft anomalous dimension matrix, transforming to the diagonal basis $R$: \begin{eqnarray} \Gamma_{R} & = & R^{-1}\Gamma R\nonumber \\ H_{R} & = & R^{-1}H\left(R^{-1}\right)^{\dagger}\nonumber \\ S_{R} & = & R^{\dagger}SR\nonumber \end{eqnarray} In this basis we can write $S_{ ij{\scriptscriptstyle \to} kl B,R,IJ}(N)$ as \begin{equation} \tilde{S}_{ij\to kl,R,IJ}\left(N\right) = \tilde{S}_{ij\to kl,R,IJ}\exp\left[\int_{\mu}^{Q/N}\frac{dq}{q}\left\{ \lambda_{R,I}^{*}\left(\alpha_{\mathrm{s}}\left(q^{2}\right)\right)+\lambda_{R,J}\left(\alpha_{\mathrm{s}}\left(q^{2}\right)\right)\right\} \right] \end{equation} with $\lambda_{R,I}$ the eigenvalues of the matrix $\Gamma_{IJ}$. The matrix $H_{ ij{\scriptscriptstyle \to} kl B,IJ}(N)$ is given at lowest order by the Born cross section projected onto the colour basis. However, beyond NLL accuracy, higher-order terms in $H_{ ij{\scriptscriptstyle \to} kl B,IJ}(N)$ and $S_{ ij{\scriptscriptstyle \to} kl B,IJ}$ start contributing.
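Note that the basis rotation above leaves the physical cross section unchanged: since $(R^{-1})^{\dagger}R^{\dagger}=1$, one has $\mathrm{Tr}\left[H_R S_R\right]=\mathrm{Tr}\left[H S\right]$. A quick numerical check in Python (random $2\times2$ complex matrices as stand-ins for $H$, $S$ and $R$; purely illustrative):

```python
import random

def mul(A, B):          # 2x2 complex matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dag(A):             # Hermitian conjugate
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def inv(A):             # 2x2 inverse
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

def tr(A):
    return A[0][0] + A[1][1]

random.seed(1)
rnd = lambda: complex(random.random(), random.random())
H = [[rnd() for _ in range(2)] for _ in range(2)]   # stand-in hard matrix
S = [[rnd() for _ in range(2)] for _ in range(2)]   # stand-in soft matrix
R = [[rnd() for _ in range(2)] for _ in range(2)]   # invertible basis change
Ri = inv(R)

HR = mul(mul(Ri, H), dag(Ri))    # H_R = R^{-1} H (R^{-1})^dagger
SR = mul(mul(dag(R), S), R)      # S_R = R^dagger S R
assert abs(tr(mul(HR, SR)) - tr(mul(H, S))) < 1e-8
```

The invariance holds for any invertible $R$, not only the one that diagonalizes $\Gamma$.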
These higher-order terms lead to the invariant mass resummation analogue of the matching coefficient known from absolute threshold resummation. In practice we split the hard function into the LO matrix in colour space and a single coefficient, averaged over colour space, which collects the higher-order contributions: \begin{equation} H_{ ij{\scriptscriptstyle \to} kl B,IJ}(N)=H^{(0)}_{ ij{\scriptscriptstyle \to} kl B,IJ}{C}_{ij{\scriptscriptstyle \to} klB}. \end{equation} The coefficient ${C}_{ij{\scriptscriptstyle \to} klB}= 1 + \frac{\alpha_{\rm s}}{\pi} {C}^{(1)}_{ij{\scriptscriptstyle \to} klB}+ \dots$ contains all non-logarithmic contributions to the NLO cross section taken in the invariant mass threshold limit. More specifically, these consist of the full virtual corrections, including the Coulomb corrections, and the $N$-independent non-logarithmic contributions from soft emissions. Although formally the coefficient $C_{ij{\scriptscriptstyle \to} kl B}$ begins to contribute at NNLL accuracy, in our numerical studies of the $pp \to t \tb H$ process we consider both the case of $C_{ij{\scriptscriptstyle \to} kl B}=1$, i.e.\ with the first-order corrections to the coefficients neglected, and the case with these corrections included. In the latter case we treat the Coulomb corrections and the hard contributions additively, i.e.\ ${C}_{ij{\scriptscriptstyle \to} klB}^{(1)}={C}_{ij{\scriptscriptstyle \to} klB}^{(1, \rm hard)}+{C}_{ij{\scriptscriptstyle \to} klB}^{(1, \rm Coul)}$. For $k,l$ denoting massive quarks the Coulomb corrections are ${C}_{ij{\scriptscriptstyle \to} klB,{\bf 1}}^{(1, \rm Coul)} = C_{\mathrm{F}} \pi^2 /(2 \beta_{kl})$ and ${C}_{ij{\scriptscriptstyle \to} klB,{\bf 8}}^{(1, \rm Coul)} = (C_{\mathrm{F}} -C_{\mathrm{A}}/2) \pi^2 /(2 \beta_{kl})$ with $\beta_{kl}=\sqrt{1- 4m_t^2/\hat s_{kl}}$ and $\hat s_{kl}=(p_t+p_{\bar t})^2$.
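To get a feeling for the size of these Coulomb terms, the expressions above can be evaluated directly; in the Python sketch below the kinematic point $\sqrt{\hat s_{kl}}=400$~GeV is an illustrative assumption:

```python
import math

CF, CA = 4.0 / 3.0, 3.0
m_t = 173.0                              # GeV

def beta_kl(s_kl):
    # relative velocity of the t tbar pair, s_kl = (p_t + p_tbar)^2
    return math.sqrt(1.0 - 4.0 * m_t ** 2 / s_kl)

def c1_coulomb(s_kl, octet=False):
    """O(alpha_s/pi) Coulomb coefficient for the t tbar pair in the
    colour-singlet or colour-octet configuration."""
    colour = CF - CA / 2.0 if octet else CF
    return colour * math.pi ** 2 / (2.0 * beta_kl(s_kl))

s_kl = 400.0 ** 2                        # illustrative: sqrt(s_kl) = 400 GeV
assert 0.0 < beta_kl(s_kl) < 1.0
assert c1_coulomb(s_kl) > 0.0              # singlet: attractive
assert c1_coulomb(s_kl, octet=True) < 0.0  # octet: C_F - C_A/2 = -1/6 < 0
```

The $1/\beta_{kl}$ behaviour makes these terms grow near the $t\bar t$ pair production threshold.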
As the $N$-independent non-logarithmic contributions from soft emission are accounted for using a modification of the techniques developed for the $2\to2$ case~\cite{Beenakker:2011sf,Beenakker:2013mva}, the problem of calculating the $C_{ij{\scriptscriptstyle \to} t \tb H}^{(1)}$ coefficient reduces to calculation of virtual corrections to the process. We extract them numerically using the publicly available POWHEG implementation of the $ t \tb H$ process~\cite{Hartanto:2015uka}, based on the calculations developed in~\cite{Reina:2001sf}. The results are then cross-checked using the standalone MadLoop implementation in aMC@NLO~\cite{Hirschi:2011pa}. The resummation-improved NLO+NLL cross sections for the $pp \to t \tb H$ process are then obtained through matching the NLL resummed expressions with the full NLO cross sections \begin{eqnarray} \label{hires} && \sigma^{\rm (NLO+NLL)}_{h_1 h_2 {\scriptscriptstyle \to} kl B}(\rho, \mu_F^2, \mu_R^2)\! =\! \sigma^{\rm (NLO)}_{h_1 h_2 {\scriptscriptstyle \to} kl B}(\rho,\mu_F^2, \mu_R^2) + \sigma^{\rm (res-exp)}_ {h_1 h_2 {\scriptscriptstyle \to} kl B}(\rho, \mu_F^2, \mu_R^2) \nonumber \\ &&\!\!\!\!\!\!\!\!\!{\rm with} \nonumber \\ && \sigma^{\rm (res-exp)}_{h_1 h_2 {\scriptscriptstyle \to} kl B} \! = \sum_{i,j}\, \int_{\cal C}\,\frac{dN}{2\pi i} \; \rho^{-N} f^{(N+1)} _{i/h{_1}} (\mu_F^2) \, f^{(N+1)} _{j/h_{2}} (\mu_F^2) \nonumber \\ && \! \times\! \left[ \hat \sigma^{\rm (res)}_{ij{\scriptscriptstyle \to} kl B,N} (\mu_F^2, \mu_R^2) - \hat \sigma^{\rm (res)}_{ij{\scriptscriptstyle \to} kl B,N} (\mu_F^2, \mu_R^2) { \left. \right|}_{\scriptscriptstyle({NLO})}\! \right], \end{eqnarray} where $\hat \sigma^{\rm (res)}_{ij{\scriptscriptstyle \to} kl B,N}$ is given in Eq.~(\ref{eq:res:fact}) and $ \hat \sigma^{\rm (res)}_{ij{\scriptscriptstyle \to} kl B,N} \left. \right|_{\scriptscriptstyle({NLO})}$ represents its perturbative expansion truncated at NLO. 
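The structure of the inverse Mellin transform in Eq.~(\ref{hires}) can be illustrated with a toy example. In the Python sketch below, the role of the resummed moments is played by an assumed moment function $M(N)=6/[N(N+1)(N+2)(N+3)]$, the Mellin transform of $(1-\rho)^3$, and a straight vertical contour replaces the bent Minimal Prescription contour (sufficient here, since the toy $M(N)$ has no singularity to the right of it):

```python
import math

def toy_moments(n):
    # Mellin moments of f(rho) = (1 - rho)^3: B(N, 4) = 6 / (N (N+1) (N+2) (N+3))
    return 6.0 / (n * (n + 1.0) * (n + 2.0) * (n + 3.0))

def inverse_mellin(rho, c=2.0, t_max=100.0, steps=100000):
    """f(rho) = (1 / 2 pi i) * integral of rho^{-N} M(N) dN along Re N = c.
    Writing N = c + i t and using the t -> -t conjugation symmetry, only
    t > 0 is integrated (midpoint rule) and the real part is kept."""
    h = t_max / steps
    acc = 0.0
    for i in range(steps):
        n = complex(c, (i + 0.5) * h)
        acc += (rho ** (-n) * toy_moments(n)).real
    return acc * h / math.pi

# the transform pair is recovered to the numerical accuracy of the contour
assert abs(inverse_mellin(0.5) - (1.0 - 0.5) ** 3) < 1e-3
```

The fast $1/N^4$ fall-off of the toy moments makes the truncated contour converge quickly; resummed moments with weaker fall-off require larger $t_{\max}$.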
The moments of the parton distribution functions (pdf) $f_{i/h}(x, \mu^2_F)$ are defined in the standard way, $f^{(N)}_{i/h} (\mu^2_F) \equiv \int_0^1 dx \, x^{N-1} f_{i/h}(x, \mu^2_F)$. The inverse Mellin transform in Eq.~(\ref{hires}) is evaluated numerically using a contour ${\cal C}$ in the complex-$N$ space according to the ``Minimal Prescription'' method developed in Ref.~\cite{Catani:1996yz}. \section{Numerical predictions} The numerical results presented in this section are obtained with $m_t=173$ GeV, $m_H=125$~GeV and the MMHT14 pdf sets~\cite{Harland-Lang:2014zoa}. We choose the central renormalization and factorization scales as $\mu_{F, 0} =\mu_{R, 0}= m_t +m_H/2$, in accordance with~\cite{Dittmaier:2011ti}. The NLO cross section is calculated using the aMC@NLO code~\cite{Alwall:2014hca}. In Figure~\ref{f:scaledependence:sim} we analyse the scale dependence of the resummed total cross section for $pp \to t \tb H$ at 14 TeV, varying the factorization and renormalization scales, $\mu_F$ and $\mu_R$, simultaneously. Figure~\ref{f:scaledependence:sim}~(a) shows a comparison between including and excluding the matching coefficient for invariant mass resummation, where the inclusion of the matching coefficient is indicated by ``w~$C$'' and the use of invariant mass resummation is indicated by its scale $Q$. Figure~\ref{f:scaledependence:sim}~(b), in turn, compares the previous result for absolute threshold resummation to the new result for invariant mass resummation; there, absolute threshold resummation is indicated by its scale $M=2m_t+m_H$ and invariant mass resummation again by $Q$. Figure~\ref{f:scaledependence:sim}~(a) demonstrates that adding the soft gluon corrections and the higher-order hard contributions stabilizes the dependence on $\mu=\mu_F=\mu_R$ of the NLO+NLL predictions with respect to NLO.
The central value, calculated at $\mu=\mu_0= m_t +m_H/2$, and the scale uncertainty from simultaneous variation of the scales at $\sqrt S=14$ TeV change from $613_{-9.4\%}^{+6.2\%}$ fb at NLO to $619_{-2.4\%}^{+5.2\%}$ fb at NLO+NLL (with $C^{(1)}_{ij {\scriptscriptstyle \to} t \tb H}$ coefficients included). The increase in the cross section at low scales can possibly be attributed to the fact that the $qg$ channel only begins to contribute at NLO and therefore does not undergo the resummation procedure and is not taken into account at higher orders. It is also clear from Figure~\ref{f:scaledependence:sim}~(a) that the coefficients $C_{ij{\scriptscriptstyle \to} t \tb H}^{(1)}$ strongly impact the predictions, especially at higher scales. In fact, their effect is more important than the effect of the logarithmic corrections alone for large scales. This observation also indicates the relevance of the contributions originating from the region away from the threshold which need to be known in order to further improve theoretical predictions. In Figure~\ref{f:scaledependence:sim}~(b) it can be seen that there is a difference in the size of the correction for invariant mass threshold resummation and absolute threshold resummation. \begin{figure} \centering \begin{tabular}{cc} \includegraphics[width=0.45\textwidth]{Htt-scale-MMHTNLO_NLO+NLL-Q_14TeV.pdf} & \includegraphics[width=0.45\textwidth]{Htt-scale-MMHTNLO_NLO+NLL-MQ_14TeV.pdf} \tabularnewline \hspace{1.7em} (a) &\hspace{1.7em} (b) \tabularnewline \end{tabular} \caption{Scale dependence of the LO, NLO and NLO+NLL cross sections at $\sqrt S=14$ TeV LHC collision energy. The results are obtained while simultaneously varying $\mu_F$ and $\mu_R$, $\mu=\mu_F=\mu_R$.} \label{f:scaledependence:sim} \end{figure} Unlike absolute threshold resummation, invariant mass resummation allows differential distributions to be computed, specifically the invariant mass distribution.
Figure~\ref{f:Qdependence} shows the invariant mass distribution with the scale uncertainty from simultaneous $\mu_R$ and $\mu_F$ variation. From this we can see that the invariant mass distribution is stable with respect to higher order soft gluon emission at the chosen central scale. The increase in the cross section at the lower edge of the uncertainty band, which corresponds to the scale choice $2\,\mu_0$, shows that the corrections are significantly larger for larger scale choices. An example of such a larger scale choice is the peak of the invariant mass distribution, $\mu\approx2.64\;\mu_0$, as used in~\cite{Broggio:2015lya}. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{Htt-Q-diff-MMHTNLO_NLO+NLL_14TeV.pdf} \caption{Invariant mass dependence of the differential NLO and NLO+NLL cross sections at $\sqrt S=14$ TeV LHC collision energy. The uncertainty bands are obtained by simultaneously varying $\mu_F$ and $\mu_R$ by a factor 2 around $\mu_0$. The lower half shows the ratio with respect to the central value of the NLO result.} \label{f:Qdependence} \end{figure} The effect of including NLL corrections is summarized in Table~\ref{t:results} for the LHC collision energies of 13 and 14 TeV. Here we choose to estimate the theoretical uncertainty due to scale variation using the 7-point method, where the minimum and maximum values obtained with $(\mu_F/\mu_0, \mu_R/\mu_0) = (0.5,0.5), (0.5,1), (1,0.5), (1,1), (1,2), (2,1), (2,2)$ are considered. The invariant mass threshold NLO+NLL predictions including the $C^{(1)}_{ij {\scriptscriptstyle \to} t \bar t H}$ coefficients show a significant reduction of the scale uncertainty, compared to NLO results. The reduction of the positive and negative scale errors amounts to around 20--30\% of the NLO error for $\sqrt S=13, 14$ TeV. Compared to absolute threshold resummation, invariant mass threshold resummation reduces the scale uncertainty more strongly and, as previously stated, yields a smaller K-factor.
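The 7-point envelope described above can be sketched as follows; the cross-section function is a toy parametrization with made-up coefficients, used only to illustrate the bookkeeping of the minimum and maximum over the seven scale combinations.

```python
import math

# Seven-point scale variation: the uncertainty envelope is the min/max of
# the cross section over these (mu_F/mu_0, mu_R/mu_0) combinations.
SEVEN_POINTS = [(0.5, 0.5), (0.5, 1), (1, 0.5), (1, 1), (1, 2), (2, 1), (2, 2)]

def seven_point_envelope(sigma):
    """Return (central value, +error in %, -error in %) for sigma(kF, kR)."""
    central = sigma(1, 1)
    values = [sigma(kF, kR) for kF, kR in SEVEN_POINTS]
    up = (max(values) - central) / central * 100.0
    down = (central - min(values)) / central * 100.0
    return central, up, down

# Toy model of the scale dependence (illustrative numbers only, not a fit).
toy = lambda kF, kR: 613.0 * (1.0 - 0.05 * math.log(kR) + 0.02 * math.log(kF))
central, up, down = seven_point_envelope(toy)
```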
The scale uncertainty of the predictions is still larger than the pdf uncertainty of the NLO predictions, which is not expected to be significantly influenced by the soft gluon corrections. \begin{table} \begin{center} \begin{tabular}{|c c c c c c|} \hline $\sqrt{S}$ {[}TeV{]} & NLO {[}fb{]} & \multicolumn{2}{c}{NLO+NLL M with $C$} & \multicolumn{2}{c|}{NLO+NLL Q with $C$} \tabularnewline & & Value {[}fb{]} & K-factor & Value {[}fb{]} & K-factor \tabularnewline \hline 13 & $506_{-9.4\%}^{+5.9\%}$ & $537_{-5.5\%}^{+8.2\%}$ & 1.06 & $512_{-6.2\%}^{+5.1\%}$ & 1.01 \tabularnewline \hline 14 & $613_{-9.4\%}^{+6.2\%}$ & $650_{-5.7\%}^{+7.9\%}$ & 1.06 & $619_{-6.4\%}^{+5.2\%}$ & 1.01 \tabularnewline \hline \end{tabular} \end{center} \caption{NLO+NLL and NLO total cross sections for $pp \to t \tb H$ at $\sqrt S=13$ and 14 TeV. The NLO+NLL results are shown with the $C$-coefficient included and for both absolute threshold and invariant mass threshold. The error ranges given together with the NLO and NLO+NLL results indicate the scale uncertainty computed by use of the seven-point method.} \label{t:results} \end{table} \noindent {\bf Acknowledgements} This work has been supported in part by the German Research Foundation (DFG) grant KU 3103/1. Support of the Polish National Science Centre grant no.\ DEC-2014/13/B/ST2/02486 is gratefully acknowledged. TS acknowledges support in the form of a scholarship of Marian Smoluchowski Research Consortium Matter Energy Future from KNOW funding. This work is also partly supported by the U.S. National Science Foundation, under grant PHY--0969510, the LHC Theory Initiative.
\section{Introduction} Photoinduced changes in electronic properties of solids offer a fascinating research field in condensed matter physics~\cite{Kirilyuk_RMP10,Bukov_AP15,Ishihara_JPSJ19}. In particular, the emergence of topologically nontrivial phases under light irradiation has attracted much attention. A representative phenomenon was predicted theoretically in graphene~\cite{Oka_PRB09}, in which massless Dirac fermions acquire a topological gap when subjected to irradiation with circularly polarized light. Such nonequilibrium states are called Floquet topological insulators, which are characterized by a nonzero Chern number and by a novel light-induced anomalous Hall effect~\cite{Oka_PRB09,Kitagawa_PRB11,Kitagawa_PRB11n2}. For systems driven by a time-periodic external field such as continuous-wave light, the Floquet theorem provides a mapping from the original time-periodic Hamiltonian to an effective static Hamiltonian, called the Floquet Hamiltonian, that enables us to study nonequilibrium physical properties in a time-independent manner~\cite{Rahav_PRA03,Kitagawa_PRB11,Kitagawa_PRB11n2,Goldman_PRX14,Mikami_PRB16}. This approach has played a major role in studying various photoinduced phases including the Floquet topological insulator phase~\cite{Linder_NP11,Ezawa_PRL13,Grushin_PRL14,Zou_PRB16,Takasan_PRB17,Kitayama_PRR20}. However, to observe these phases experimentally, an intense electric field of light is often required; therefore, using ultrafast laser pulses is more feasible than using continuous-wave light. Indeed, for graphene, the light-induced anomalous Hall effect has been observed using ultrafast pulses of circularly polarized light~\cite{Mclver_NP20}. In this respect, how characteristic properties of the Floquet topological insulator phase appear under pulse excitations, especially in transient dynamics that are not accessible with approaches based on the Floquet theorem, is of great interest.
Theoretical investigations in this direction, based on the time-dependent Schr\"odinger equation, have been reported recently~\cite{Sentef_NatCom15,Schuler_PRB17,Gavensky_PRB18,Sato_NJP19,Sato_PRB19,Schuler_PRX20,Aschlimann_Arxiv21}. To date, direct evidence of a photoinduced topological state has been observed in surface Dirac fermions of the topological insulator Bi$_2$Se$_3$~\cite{Wang_SCI13}, for which the time- and angle-resolved photoemission spectroscopy measurements showed a gap opening at the surface Dirac point by irradiation with circularly polarized light. However, the number of such materials has been severely limited. Moreover, the light-induced anomalous Hall effect, which is another approach to assess photoinduced topological changes, has been observed only in graphene~\cite{Mclver_NP20}. The search for new candidate materials hosting the Floquet topological insulator phase is thus indispensable to deepen our understanding of this novel phenomenon. One material of interest for this purpose is the quasi-two-dimensional organic conductor $\alpha$-(BEDT-TTF)$_2$I$_3$. Here BEDT-TTF stands for bis(ethylenedithio)tetrathiafulvalene. At ambient pressure, this compound exhibits a charge-order insulating state below 135 K~\cite{Bender_MCLC84,Kakiuchi_JPSJ07,Kino_JPSJ95,Seo_JPSJ00}. Applying pressure destabilizes the charge order, and a zero-gap state with tilted Dirac cones at the Fermi level appears (Fig. \ref{fig:fig1})~\cite{Katayama_JPSJ06,Katayama_EPJB08,Tajima_JPSJ06}. The appearance of this zero-gap state under pressure was verified by theoretical studies based on first-principles calculations~\cite{Kino_JPSJ06} and supported experimentally by transport measurements~\cite{Tajima_JPSJ06,Kajita_JPSJ14}.
A recent theoretical study based on the Floquet theory~\cite{Kitayama_PRR20} has shown that this zero-gap state turns into the Floquet topological insulator phase by irradiation with circularly polarized light, and that rich phase diagrams are obtained in the plane of the frequency and amplitude of light electric field. This motivates our study of the transient dynamics associated with the Floquet topological insulator phase in this compound. In this paper, we investigate the photoinduced topological state and the formation of Floquet-dressed bands in $\alpha$-(BEDT-TTF)$_2$I$_3$ from the viewpoint of real-time dynamics. By numerically solving the time-dependent Schr\"odinger equation for a tight-binding model of $\alpha$-(BEDT-TTF)$_2$I$_3$ with ac electric field of circularly polarized light, we obtain photoinduced dynamics for continuous-wave and pulse excitations. We first examine the continuous-wave excitations and calculate time profiles of the Chern number and the Hall conductivity, from which we explore new physics of the photoinduced topological phase transitions and the resulting Floquet Chern insulator phase. We also discuss what aspects of these photoinduced phenomena and nonequilibrium phases the Floquet theory captures or misses. Also, we show that the Hall conductivity has an oscillation component with frequency much smaller than the light frequency. The center of the oscillation coincides with the quantized Hall conductivity characterized by a nonzero Chern number, whereas its frequency corresponds to the direct gap extracted from the Floquet bands. We then consider the pulse excitations and calculate the transient excitation spectra and the Hall conductivity. These quantities reveal the Floquet band formation and dynamical gap opening in a time-resolved manner, which are not found from the Floquet theory. We discuss the relevance of our results to experiments in $\alpha$-(BEDT-TTF)$_2$I$_3$. 
\begin{figure} \includegraphics[scale=0.25]{fig1.eps} \caption{(a) Schematic illustration of the conductive BEDT-TTF layer in $\alpha$-(BEDT-TTF)$_2$I$_3$. The dashed rectangle marks the unit cell, which contains four molecular sites (A, A$^{\prime}$, B, C). (b) Energy dispersion relations for the two highest bands ($\nu=3$, $4$), in which the tilted Dirac cones are located at the Fermi level that is taken as 0 eV. Enlarged views of the tilted Dirac cones are also shown.} \label{fig:fig1} \end{figure} \section{Model and Method} We consider the two-dimensional tight-binding model for $\alpha$-(BEDT-TTF)$_2$I$_3$ [Fig. \ref{fig:fig1}(a)] defined by~\cite{Kitayama_PRR20,Tanaka_JPSJ10,Miyashita_JPSJ10} \begin{equation} {\mathcal H}(\tau)=\sum_{\langle ij\rangle,\sigma}t_{i,j}e^{i(e/\hbar){\bm \delta}_{i,j}\cdot {\bm A}(\tau)}c^{\dagger}_{i\sigma}c_{j\sigma}+{\rm H.c.}, \label{eq:ham} \end{equation} where $\langle ij\rangle$ signifies the summation over pairs of nearest-neighbor sites, and $c^{\dagger}_{i\sigma}$ ($c_{i\sigma}$) denotes the creation (annihilation) operator for an electron with spin $\sigma$ at the $i$th site. The effects of light electric field are incorporated by the Peierls phase for the transfer integrals $t_{i,j}$, through which the Hamiltonian depends on time $\tau$. We define ${\bm \delta}_{i,j}={\bm r}_j-{\bm r}_i$ with ${\bm r}_i$ being the position vector for the $i$th site. We note that the model in Eq. (\ref{eq:ham}) describes the energy bands of $\alpha$-(BEDT-TTF)$_2$I$_3$ near the Fermi level and these bands are constructed from a single orbital, the highest occupied molecular orbital of a BEDT-TTF molecule \cite{Kino_JPSJ06}. Since the bands originating from other molecular orbitals are energetically separated from these bands, the dipole transition terms do not appear in Eq. (\ref{eq:ham}). For the global band structure of this compound, see Ref.~\cite{Kino_JPSJ06}. Hereafter, we use natural units with $e=\hbar=1$. 
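The Peierls substitution in Eq.~(\ref{eq:ham}) can be illustrated with a minimal sketch: a toy dimerized chain at a single $k$ point, with illustrative bond vectors and hopping values (not the four-band model itself). The phases $e^{i{\bm \delta}_{i,j}\cdot{\bm A}(\tau)}$ attach to the transfer integrals and preserve hermiticity; setting ${\bm A}=0$ recovers the bare hoppings.

```python
import numpy as np

# Minimal illustration of the Peierls substitution: each transfer integral
# t_ij acquires a phase exp[i delta_ij . A(tau)] (e = hbar = 1).
# Toy two-site chain at one k point; parameters are illustrative only.
def h_k(t_hop, delta, A):
    """2x2 Bloch Hamiltonian; delta are the bond vectors of the two bonds."""
    off = sum(t * np.exp(1j * np.dot(d, A)) for t, d in zip(t_hop, delta))
    return np.array([[0.0, off], [np.conj(off), 0.0]])

t_hop = [0.127, 0.062]                          # two inequivalent bonds (eV)
delta = [np.array([0.5, 0.0]), np.array([-0.5, 0.0])]
A = np.array([0.8, 0.0])                        # snapshot of A(tau)

H = h_k(t_hop, delta, A)                        # Peierls phases keep H Hermitian
H0 = h_k(t_hop, delta, np.zeros(2))             # A = 0: bare hoppings
```

In the full calculation the same construction is applied bond by bond to the four-site unit cell, with ${\bm A}(\tau)$ evaluated at each time step.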
The lattice constant along the $a$ axis is chosen as the unit of length. Note that the unit cell containing four molecules A, A$^{\prime}$, B, and C [Fig. \ref{fig:fig1}(a)] is rectangular with different lattice constants of $a$ and $b$ in reality. However, because the difference is small, we assume a square unit cell in the present study. We have confirmed that this simplification does not significantly alter the results or conclusions. The vector potential for the ac electric field of circularly polarized light is given by ${\bm A}(\tau)$. For the continuous-wave excitations, we write \begin{equation} {\bm A}(\tau)=A_0f(\tau)(\cos \omega \tau, \sin \omega \tau), \end{equation} where $A_0=E^{\omega}/\omega$ with $E^{\omega}$ and $\omega$ being, respectively, the amplitude and frequency of the ac electric field of light. We consider left-handed circularly polarized light unless otherwise specified. A factor $f(\tau)$ defined by $f(\tau)=e^{-\tau^2/\tau_{\rm ac}^2}$ for $\tau\leq0$ and $f(\tau)=1$ for $\tau>0$ is set to obtain a quasiadiabatic time evolution~\cite{Dalessio_NatCom15,Mochizuki_APL18}. We use $\tau_{\rm ac}=300\,T$, where $T=2\pi/\omega$ is the period of light. In contrast, for the pulse excitations, we use \begin{equation} {\bm A}(\tau)=A_0\exp \Big[-\frac{(\tau-\tau_{\rm pu})^2}{2\sigma^2_{\rm pu}}\Big] (\cos \omega \tau, \sin \omega \tau), \end{equation} which gives a pulse of width $\sigma_{\rm pu}$ centered around time $\tau_{\rm pu}$. The time evolution of the system is calculated by the time-dependent Schr\"odinger equation \begin{equation} |\psi_{{\bm k},\nu}(\tau+d\tau)\rangle={\mathcal T}{\rm exp} \Bigl[-i\int^{\tau+d\tau}_{\tau} d\tau^{\prime}{\mathcal H}_{{\bm k}}(\tau^{\prime}) \Bigr]|\psi_{{\bm k},\nu}(\tau)\rangle, \end{equation} where ${\mathcal H}_{\bm k}$ denotes the momentum representation of the Hamiltonian matrix in Eq.
(\ref{eq:ham}), $|\psi_{{\bm k},\nu}(\tau)\rangle$ the $\nu$th ($\nu=1$--$4$) one-particle state with wave vector ${\bm k}$ at time $\tau$, and ${\mathcal T}$ the time-ordering operator. The spin index $\sigma$ is omitted for brevity. This equation is numerically solved by writing~\cite{Kuwabara_JPSJ95,Terai_PTPS93,Tanaka_JPSJ10} \begin{equation} |\psi_{{\bm k},\nu}(\tau+\Delta\tau)\rangle \simeq {\rm exp}\Bigl[-i\Delta \tau {\mathcal H}_{{\bm k}}(\tau+\Delta \tau/2)\Bigr] |\psi_{{\bm k},\nu}(\tau) \rangle, \end{equation} which gives the time-evolving one-particle states within an error of the order of $(\Delta \tau)^3$. We use $\Delta \tau=0.01$ throughout, which guarantees sufficient numerical accuracy. For the values of $t_{i,j}$, we adopt those for a compound under uniaxial pressure of $P=4$ kbar. They are estimated from the relation $t_{i,j}=t_{i,j}^{\rm ap}(1+K_{i,j}P)$ where the coefficients $K_{i,j}$ are given in Ref.~\cite{Kobayashi_JPSJ04}. The transfer integrals at ambient pressure $t_l^{\rm ap}$ are given by $t_{b1}^{\rm ap}=0.127$, $t_{b2}^{\rm ap}=0.145$, $t_{b3}^{\rm ap}=0.062$, $t_{b4}^{\rm ap}=0.025$, $t_{a1}^{\rm ap}=-0.035$, $t_{a2}^{\rm ap}=-0.046$, and $t_{a3}^{\rm ap}=0.018$~\cite{Kakiuchi_JPSJ07}; all values are in units of eV. Here, $l$ is the index that specifies the bonds [see Fig. \ref{fig:fig1}(a)]. We take $L_a=L_b=200$ with $L_a$ ($L_b$) being the number of unit cells in the $a$ direction ($b$ direction). The energy dispersion relations of the two highest bands ($\nu=3, 4$) are given in Fig. \ref{fig:fig1}(b). The electron density of this compound is specified at 3/4 filling. Before the photoexcitation, the system is a zero-gap state with the Fermi level coinciding with the contact points of the tilted Dirac cones, which are located at ${\bm k}^{\pm}/\pi=(1,1)\pm (0.40, 0.67)$. \section{Continuous-wave excitations} We next consider continuous-wave excitations. 
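The midpoint-exponential update introduced above can be sketched as follows, for a toy driven two-level Hamiltonian with illustrative parameters (not the four-band model). Each step is exactly unitary, so the norm is conserved, and halving $\Delta\tau$ changes the final state only at the level of the $(\Delta\tau)^3$ per-step error.

```python
import numpy as np

# Sketch of the midpoint-exponential update
#   |psi(tau + dtau)> ~ exp[-i dtau H(tau + dtau/2)] |psi(tau)>
# for a toy driven two-level system (illustrative parameters only).
def expi(H, dt):
    """exp(-i dt H) for a Hermitian H via its eigendecomposition."""
    w, v = np.linalg.eigh(H)
    return (v * np.exp(-1j * dt * w)) @ v.conj().T

H_of_t = lambda t: np.array([[0.3, 0.1 * np.exp(1j * 0.8 * t)],
                             [0.1 * np.exp(-1j * 0.8 * t), -0.3]])
dt, n_steps = 0.01, 2000
psi = np.array([1.0 + 0.0j, 0.0j])
for n in range(n_steps):
    psi = expi(H_of_t((n + 0.5) * dt), dt) @ psi
norm = np.linalg.norm(psi)          # unitary steps: norm stays 1

# convergence check: the same evolution with half the time step
psi2 = np.array([1.0 + 0.0j, 0.0j])
for n in range(2 * n_steps):
    psi2 = expi(H_of_t((n + 0.5) * dt / 2.0), dt / 2.0) @ psi2
fidelity = abs(np.vdot(psi, psi2))  # close to 1 for converged dt
```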
The light frequency is chosen at $\omega=0.8$ (in units of eV) for which the Floquet theory predicts the emergence of a Floquet topological insulator phase for nonzero $A_0$~\cite{Kitayama_PRR20}. In Fig. \ref{fig:fig2}(a), we plot the time profile of Chern numbers $N_{\rm Ch}^{\nu}(\tau)$. Following the computational method to calculate the Chern number in equilibrium proposed in Ref.~\cite{Fukui_JPSJ05}, we write $N_{\rm Ch}^{\nu}(\tau)$ in nonequilibrium as \begin{equation} N_{\rm Ch}^{\nu}(\tau)=\frac{1}{2\pi i}\sum_{\bm k}F^{\nu}({\bm k},\tau), \end{equation} where \begin{equation} \begin{split} F^{\nu}({\bm k},\tau)&=\ln [U^{\nu}_x({\bm k},\tau)U^{\nu}_y({\bm k}+{\bm \delta}_x,\tau) \\ &\times U^{\nu}_x({\bm k}+{\bm \delta}_y,\tau)^{-1}U^{\nu}_y({\bm k},\tau)^{-1}], \label{eq:F} \end{split} \end{equation} which is defined by the principal value of the logarithm. In Eq. (\ref{eq:F}), we define \begin{equation} {\bm \delta}_{x}=(2\pi/L_b,0),\ {\bm \delta}_{y}=(0,2\pi/L_a), \end{equation} and \begin{equation} U^{\nu}_{\mu}({\bm k},\tau)=\langle \psi_{{\bm k},\nu}(\tau)|\psi_{{\bm k}+{\bm \delta}_{\mu},\nu}(\tau)\rangle/|\langle \psi_{{\bm k},\nu}(\tau)|\psi_{{\bm k}+{\bm \delta}_{\mu},\nu}(\tau)\rangle|, \end{equation} with $\mu=x,y$. The results for $A_0=0.8$ are shown at stroboscopic times $\tau/T=n$ with $n$ integer. We note that for $\alpha$-(BEDT-TTF)$_2$I$_3$, the contact points come from the accidental degeneracy of the energy bands in the thermodynamic limit~\cite{Herring_PR37,Suzumura_JPSJ16}. This is in contrast to graphene, for which they are exactly on the symmetric points in the Brillouin zone (BZ). Because of this property, for finite size systems, the gap closing points between the two bands $\nu=3$ and $4$ are absent in the discrete BZ. This allows temporal changes of the Chern numbers of these bands reflecting a topological phase transition in our simulation~\cite{Dalessio_NatCom15,Ge_PRA17}.
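The discretized Chern number defined above follows the method of Ref.~\cite{Fukui_JPSJ05}: the link variables are gauge invariant, and the sum of the plaquette field strengths is an integer. A minimal sketch on a generic two-band lattice model (not the four-band model of the text, and with an illustrative system size) is:

```python
import numpy as np

# Chern number on a discrete Brillouin zone (Fukui-Hatsugai-Suzuki method),
# demonstrated on the generic two-band model
#   h(k) = (sin kx, sin ky, m + cos kx + cos ky).
def lower_band(kx, ky, m):
    hx, hy, hz = np.sin(kx), np.sin(ky), m + np.cos(kx) + np.cos(ky)
    H = np.array([[hz, hx - 1j * hy], [hx + 1j * hy, -hz]])
    w, v = np.linalg.eigh(H)
    return v[:, 0]                       # eigenvector of the lower band

def chern_number(m, L=24):
    ks = 2.0 * np.pi * np.arange(L) / L
    u = np.array([[lower_band(kx, ky, m) for ky in ks] for kx in ks])
    def link(i, j, di, dj):              # normalized overlap U_mu(k)
        ov = np.vdot(u[i, j], u[(i + di) % L, (j + dj) % L])
        return ov / abs(ov)
    F = 0.0
    for i in range(L):
        for j in range(L):               # principal log of each plaquette
            F += np.angle(link(i, j, 1, 0) * link((i + 1) % L, j, 0, 1)
                          / (link(i, (j + 1) % L, 1, 0) * link(i, j, 0, 1)))
    return round(F / (2.0 * np.pi))

c_topo = chern_number(m=1.0)             # topological phase: |C| = 1
c_triv = chern_number(m=3.0)             # trivial phase: C = 0
```

For the four-band model the same routine applies band by band, using the time-evolved one-particle states $|\psi_{{\bm k},\nu}(\tau)\rangle$ in place of the static eigenvectors.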
Before photoexcitation, we have $N^{\nu}_{\rm Ch}=0$ for all $\nu$; the initial state is topologically trivial. Although the Chern numbers of the two lowest bands, $N^{1}_{\rm Ch}$ and $N^{2}_{\rm Ch}$, exhibit a complex time dependence for $-440\gtrsim \tau/T\gtrsim -180$ for which the temporal variation in $f(\tau)$ becomes large, they are basically conserved at zero for $\tau/T> -180$. We note that the electric field is maximal for $\tau\geq 0$. The value of $N^{3}_{\rm Ch}$ ($N^{4}_{\rm Ch}$) changes from 0 to 1 ($-1$) at $\tau/T\sim -500$ and thereafter remains unchanged. In Fig. \ref{fig:fig2}(b), we plot the time profile of $N_{\rm Ch}(\tau)=\sum_{\nu=1}^3 N_{\rm Ch}^{\nu}(\tau)$ at stroboscopic times demonstrating the appearance of a topologically nontrivial state with $N_{\rm Ch}=1$. This result is consistent with that obtained by the Floquet theory~\cite{Kitayama_PRR20}. Here we mention that in the time profile of $N_{\rm Ch}$ there appear two peaks at $\tau/T\sim -430$ and $-330$. They come from a slight difference between the stroboscopic times at which $N^{1}_{\rm Ch}$ and $N^{2}_{\rm Ch}$ change [Fig. \ref{fig:fig2}(a)]. Except for these two peaks, we have $N^{1}_{\rm Ch}=-N^{2}_{\rm Ch}$ for all stroboscopic times and thus $N_{\rm Ch}=N^{3}_{\rm Ch}$ holds. To verify the dynamically prepared Floquet topological insulator phase in more detail, we calculate the overlap $\alpha_{{\bm k},\nu}$ which is defined by \begin{equation} \alpha_{{\bm k},\nu}=|\langle \psi^F_{{\bm k},\nu}|\psi_{{\bm k},\nu}(\tau)\rangle|, \end{equation} where $|\psi^F_{{\bm k},\nu}\rangle$ denotes the ground-state wave function of the Floquet Hamiltonian ${\mathcal H}_F$ for a system subject to a continuous ac electric field with amplitude $E^{\omega}$. 
Because we have~\cite{Dalessio_NatCom15} \begin{equation} U(T,0)=e^{-i{\mathcal H}_FT}, \end{equation} where $U(T,0)$ denotes the time-evolution operator over one period, we obtain $|\psi^F_{{\bm k},\nu}\rangle$ by diagonalizing $U(T,0)$. For $\omega=0.8$, the off-resonance condition is satisfied where the Floquet bands with different photon numbers do not overlap, making it possible to identify the $\nu$th Floquet band unambiguously~\cite{Kitayama_PRR20,Dalessio_NatCom15}. In Fig. \ref{fig:fig2}(b), we show the time evolutions of ${\rm min}[\alpha_{{\bm k},3}]$ and ${\rm min}[\alpha_{{\bm k},4}]$, for which ${\rm min}[\alpha_{{\bm k},\nu}]$ is the minimum of $\alpha_{{\bm k},\nu}$ in the BZ for a fixed $\nu$. They are nearly identical to each other; ${\rm min}[\alpha_{{\bm k},3}]$ and ${\rm min}[\alpha_{{\bm k},4}]$ show a gradual increase in accordance with $f(\tau)$ and then approach $1$ at $\tau/T=0$. The overlaps $\alpha_{{\bm k},3}$ and $\alpha_{{\bm k},4}$ have their minimum values at the contact points where the photoinduced gap opens. We note that for $\tau/T>0$, $\alpha_{{\bm k},\nu}$ is conserved~\cite{Dalessio_NatCom15}. These results indicate that the time-evolving one-particle states are well described by the ground state of ${\mathcal H}_F$. \begin{figure} \includegraphics[scale=1.0]{fig2.eps} \caption{(a) Time profiles of the Chern numbers $N_{\rm Ch}^{\nu}$, for which $\nu$ is the band index. The time dependence of $f(\tau)$ is also depicted. (b) Time profiles of $N_{\rm Ch}$ and ${\rm min}[\alpha_{{\bm k},\nu}]$ with $\nu=3$ and 4. We use continuous-wave excitations with $\omega=0.8$ and $A_0=0.8$. The results are shown at stroboscopic times.} \label{fig:fig2} \end{figure} In regard to the physical quantity that characterizes the Floquet topological insulator, we calculate the Hall conductivity $\sigma_{xy}$. 
For this purpose, we consider the vector potential ${\bm A}_{\rm dc}$ for a static electric field ${\bm E}_{\rm dc}=(E_{\rm dc}^x,E_{\rm dc}^y)$ that is switched on at $\tau=0$; we have \begin{equation} {\bm A}_{\rm dc}(\tau)=-\gamma(\tau)(E^x_{\rm dc}\tau,E^y_{\rm dc}\tau), \end{equation} where we introduce a factor $\gamma(\tau)$ that is given by $\gamma(\tau)=0$ for $\tau<0$ and $\gamma(\tau)=1-e^{-\tau^2/\tau^2_{\rm dc}}$ for $\tau\geq0$ with $\tau_{\rm dc}=10T$. We define the current operator by \begin{equation} {\bm J}=-\frac{\partial {\mathcal H}({\bm A}+{\bm A}_{\rm dc})} {\partial {\bm A}_{\rm dc}}, \end{equation} from which we obtain $\sigma_{xy}$, \begin{equation} \sigma_{xy}=\frac{1}{2\pi N}\frac{\langle J_x \rangle}{E_{\rm dc}^y}, \end{equation} where $N=L_aL_b$ is the total number of unit cells. We set $E_{\rm dc}^x=0$ and $E_{\rm dc}^y\ll E^{\omega}$ so that the static electric field does not affect the photoinduced dynamics qualitatively. Because the obtained $\sigma_{xy}$ strongly oscillates with the light frequency $\omega$, we compute a time-averaged quantity $\sigma_{xy}^T$ over one period, \begin{equation} \sigma_{xy}^T(\tau)=\frac{1}{T}\int_{\tau-T/2}^{\tau+T/2}\sigma_{xy}(\tau^{\prime})d\tau^{\prime}. \end{equation} In Fig. \ref{fig:fig3}(a), we present the time evolution of $\sigma_{xy}^T$ for different values of $A_0$ with $\omega=0.8$ and $E_{\rm dc}^y=2\times 10^{-5}$; the results pertaining to right-handed circularly polarized light, for which we use ${\bm A}(\tau)=A_0f(\tau)(\cos \omega \tau, -\sin \omega \tau)$, are also depicted. It is apparent that $\sigma_{xy}^T$ with left-handed (right-handed) circularly polarized light exhibits an oscillation, the center of which is $2$ ($-2$) corresponding to the quantized Hall conductivity of the Floquet topological insulator with $N_{\rm Ch}=\pm 1$~\cite{Thouless_PRL82}. The frequency of the oscillation in $\sigma_{xy}^T$ is much smaller than $\omega$.
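The construction of $\sigma^T_{xy}$ and the extraction of the slow oscillation frequency can be sketched on synthetic data (all parameters below are made up for illustration): a running average over one period removes the fast $\omega$ component, and the Fourier transform of the residual locates the slow frequency.

```python
import numpy as np

# Synthetic sigma_xy: quantized plateau (2) + slow oscillation (Omega0)
# + fast component at the light frequency omega.  Illustrative numbers.
omega, Omega0 = 0.8, 0.0136
T = 2.0 * np.pi / omega
dt = T / 200.0
tau = np.arange(0.0, 6000.0, dt)
sigma = 2.0 + 0.8 * np.cos(Omega0 * tau) + 1.5 * np.sin(omega * tau)

# running average over exactly one period (normalized by T, i.e. 1/200 per
# sample): the omega component averages away, the plateau and slow part stay
win = np.ones(200) / 200.0
sigma_T = np.convolve(sigma, win, mode="valid")

# Fourier transform of the slow residual to read off Omega
spec = np.abs(np.fft.rfft(sigma_T - sigma_T.mean()))
freqs = 2.0 * np.pi * np.fft.rfftfreq(len(sigma_T), d=dt)
Omega_peak = freqs[np.argmax(spec)]
```

With the window spanning exactly one light period, the fast component cancels identically on the discrete grid, which is why the plateau value can be read off cleanly.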
To analyze this slow oscillation quantitatively, we show the Fourier transform of $\sigma_{xy}^T$ in Fig. \ref{fig:fig3}(b) for left-handed circularly polarized light. In each spectrum, there is a sharp peak, the frequency of which is denoted $\Omega$ and increases with increasing $A_0$. In Fig. \ref{fig:fig3}(c), we plot $\Omega$ as a function of $A_0$; the magnitude of the photoinduced direct gap $\Delta_{\rm FL}$ that is obtained from the eigenvalues of ${\mathcal H}_F$ is also shown. Evidently, $\Omega\sim \Delta_{\rm FL}$ holds; the slow oscillation in $\sigma_{xy}^T$ reflects the direct gap that emerges in the Floquet band structure. In the time profile of $\sigma_{xy}^T$ shown in Fig. \ref{fig:fig3}(a), we observe that the amplitude of the slow oscillation first decreases and then increases with time, which is prominent for the right-handed circularly polarized light with $A_0=1.4$ and $2.0$. We have confirmed that this behavior is not a finite-size effect. Instead, it is a beat of two oscillations with slightly different frequencies close to $\Delta_{\rm FL}$. In calculating the Hall conductivity in the photoirradiated $\alpha$-(BEDT-TTF)$_2$I$_3$, the static electric field $E^y_{\rm dc}$ is applied, which gives rise to slightly different gap amplitudes between the two Dirac points because of the tilting of the Dirac cones. The difference in the gap amplitude is estimated to be less than 10\% of the photoinduced gap. \begin{figure} \includegraphics[scale=1.0]{fig3.eps} \caption{(a) Time evolution of $\sigma^T_{xy}$ for different values of $A_0$ with $\omega=0.8$ and $E_{\rm dc}^y=2\times 10^{-5}$. The results with left-handed (L) and right-handed (R) circularly polarized light are shown. The dashed horizontal lines indicate the values of the quantized Hall conductivity for Floquet topological insulators with $N_{\rm Ch}=\pm 1$. (b) Fourier transform of $\sigma^T_{xy}$. The peak position in each spectrum is indicated by an arrow. 
(c) Peak frequency $\Omega$ as a function of $A_0$, where the solid line marks the magnitude of the direct gap calculated from the Floquet bands with $\nu=3$ and $4$.} \label{fig:fig3} \end{figure} \begin{figure*} \includegraphics[scale=1.0]{fig4.eps} \caption{(a) Time profiles of electron densities $n_{\alpha}$ ($\alpha=$A, A$^{\prime}$, B, C). The dashed curve plots the $\tau$ dependence of $|{\bm A}(\tau)|$. (b)--(d) Transient excitation spectra $A_{\bm k}(\varepsilon, \tau_{\rm pr})$ at $k_y=k_y^+$ as a function of $k_x$. We show results for $\tau_{\rm pr}=\tau_{\rm pu}/2$, $3\tau_{\rm pu}/4$, and $\tau_{\rm pu}$, for which the corresponding values of $\tau_{\rm pr}/T$ are indicated by the vertical arrows in (a). The dashed vertical line in each panel indicates $k_x=k_x^+$. In (b), the dashed green curve plots the energy dispersion before photoexcitation, whereas in (d) it indicates the Floquet bands obtained by diagonalizing ${\mathcal H}_F$. (e) Transient energy bands for $\tau_{\rm pr}=\tau_{\rm pu}$ near ${\bm k}={\bm k}^+$. We use $\omega=0.8$, $A_0=1.4$, $\sigma_{\rm pu}/T=76$, and $\tau_{\rm pu}/T=255$.} \label{fig:fig4} \end{figure*} \section{Pulse excitations} Next, we consider pulse excitations. For the pump pulse, we use $\omega=0.8$, $\sigma_{\rm pu}=600$ ($\sigma_{\rm pu}/T=76$), and $\tau_{\rm pu}=2\times 10^3$ ($\tau_{\rm pu}/T=255$). In Fig. \ref{fig:fig4}(a), we plot the time profiles of charge densities $n_{\alpha}$ ($\alpha=$A, A$^{\prime}$, B, C) for $A_0=1.4$. The quantities $n_{\rm A}$ and $n_{{\rm A}^{\prime}}$, which are equivalent to each other in the absence of light electric field, strongly oscillate in opposite phase. The temporal variations in $n_{\rm B}$ and $n_{\rm C}$ are small compared with those in $n_{\rm A}$ and $n_{{\rm A}^{\prime}}$. The time dependence of the oscillation amplitudes of the electron densities can be understood from the pulse shape ($|{\bm A}(\tau)|$), also shown in Fig. \ref{fig:fig4}(a).
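The pulse form of ${\bm A}(\tau)$ used here can be sketched directly with the parameter values quoted above; the circular components share a Gaussian envelope, so $|{\bm A}(\tau)|$ is just that envelope, peaking at $\tau_{\rm pu}$.

```python
import numpy as np

# Circularly polarized pump pulse under a Gaussian envelope, with the
# parameter values quoted in the text: omega = 0.8, sigma_pu = 600,
# tau_pu = 2e3, A0 = 1.4.
def A_pulse(tau, A0=1.4, omega=0.8, sigma_pu=600.0, tau_pu=2.0e3):
    env = A0 * np.exp(-(tau - tau_pu) ** 2 / (2.0 * sigma_pu ** 2))
    return env * np.cos(omega * tau), env * np.sin(omega * tau)

tau = np.linspace(0.0, 4.0e3, 8001)
Ax, Ay = A_pulse(tau)
amp = np.hypot(Ax, Ay)      # |A(tau)|: the Gaussian envelope, max A0 at tau_pu
```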
To reveal the real-time dynamics of the Floquet band formation under pulse excitations, we calculate the transient excitation spectrum~\cite{Sentef_NatCom15,Freericks_PRL09} using \begin{eqnarray} A_{\bm k}(\varepsilon, \tau_{\rm pr})&=&{\rm Im} \sum_{\alpha} \int d\tau_1d\tau_2s(\tau_1-\tau_{\rm pr}) s(\tau_2-\tau_{\rm pr}) \nonumber \\ &&\times e^{i\varepsilon(\tau_1-\tau_2)}[G^{<}_{{\bm k},\alpha\alpha}(\tau_1,\tau_2) -G^{>}_{{\bm k},\alpha\alpha}(\tau_1,\tau_2)], \label{eq:ak} \end{eqnarray} where $G^{<}_{{\bm k},\alpha\beta}(\tau_1,\tau_2)=i\langle c^{\dagger}_{{\bm k},\beta}(\tau_2)c_{{\bm k},\alpha}(\tau_1)\rangle$ and $G^{>}_{{\bm k},\alpha\beta}(\tau_1,\tau_2)=-i\langle c_{{\bm k},\alpha}(\tau_1)c^{\dagger}_{{\bm k},\beta}(\tau_2)\rangle$ denote the lesser and greater Green's functions, respectively, and $s(\tau-\tau_{\rm pr})=\frac{1}{\sigma_{\rm pr}\sqrt{2\pi}}\exp[-\frac{(\tau-\tau_{\rm pr})^2}{2\sigma_{\rm pr}^2}]$ denotes the Gaussian function for a probe pulse of width $\sigma_{\rm pr}$ centered around time $\tau_{\rm pr}$. We define the operator $c^{\dagger}_{{\bm k},\alpha}$ ($c_{{\bm k},\alpha}$) using the Fourier transform of $c^{\dagger}_{{\gamma},\alpha}$ ($c_{{\gamma},\alpha}$) where $\gamma$ indexes the unit cells. In Figs. \ref{fig:fig4}(b)--\ref{fig:fig4}(d), we present $A_{\bm k}(\varepsilon, \tau_{\rm pr})$ as a function of $k_x$ with $k_y=k_y^+$ for different values of $\tau_{\rm pr}$. We use $A_0=1.4$ and $\sigma_{\rm pr}=200$ ($\sigma_{\rm pr}/T=25$). When $\tau_{\rm pr}=\tau_{\rm pu}/2$, the structure of $A_{\bm k}(\varepsilon, \tau_{\rm pr})$ is almost identical to the energy bands in the ground state [Fig. \ref{fig:fig4}(b)]. For $\tau_{\rm pr}=3\tau_{\rm pu}/4$, $A_{\bm k}(\varepsilon, \tau_{\rm pr})$ becomes strongly blurred and at this stage the opening of the gap at the Dirac point is not visible [Fig. \ref{fig:fig4}(c)]. 
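For a static two-band system with one filled and one empty band, the lesser and greater Green's functions are known analytically, and the probe-windowed double-time integral defining $A_{\bm k}(\varepsilon, \tau_{\rm pr})$ can be evaluated directly; a minimal sketch with an illustrative gap $2D$ (peaks appear at the two band energies, broadened by the probe width):

```python
import numpy as np

# Static two-level check of the transient spectrum: filled band at -D,
# empty band at +D, so G^<(t1,t2) = i exp(+iD(t1-t2)) and
# G^>(t1,t2) = -i exp(-iD(t1-t2)).  Parameters are illustrative.
D, sig = 0.5, 20.0
t = np.arange(-100.0, 100.0 + 1e-9, 0.5)
s = np.exp(-t ** 2 / (2.0 * sig ** 2)) / (sig * np.sqrt(2.0 * np.pi))
dt12 = t[:, None] - t[None, :]                         # t1 - t2 matrix
gl_minus_gg = 1j * (np.exp(1j * D * dt12) + np.exp(-1j * D * dt12))
kernel = np.outer(s, s) * gl_minus_gg                  # probe-windowed

eps = np.arange(-1.0, 1.0, 0.01)
A = np.array([np.imag(np.sum(np.exp(1j * e * dt12) * kernel)) for e in eps])
A *= 0.5 ** 2                                          # (dt)^2 measure

peak_lo = eps[eps < 0][np.argmax(A[eps < 0])]          # near -D
peak_hi = eps[eps > 0][np.argmax(A[eps > 0])]          # near +D
```

In the driven problem the Green's functions are built from the time-evolved one-particle states instead, and the same windowed transform resolves the transient gap opening.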
However, at $\tau_{\rm pr}=\tau_{\rm pu}$ for which the electric field amplitude of light has its maximum, $A_{\bm k}(\varepsilon, \tau_{\rm pr})$ exhibits a sharp structure again and a gap appears at $k_y=k_y^+$ [see Fig. \ref{fig:fig4}(d)]. The structure of $A_{\bm k}(\varepsilon, \tau_{\rm pr})$ notably coincides with the Floquet bands, which are calculated by diagonalizing ${\mathcal H}_{\rm F}$. In Fig. \ref{fig:fig4}(e), we plot the peak positions of $A_{\bm k}(\varepsilon, \tau_{\rm pr})$ near ${\bm k}={\bm k}^+$, which evidently shows that transient energy bands acquire a photoinduced gap during the pulse. \begin{figure} \includegraphics[scale=1.0]{fig5.eps} \caption{Time profile of $\sigma^T_{xy}$ under pulse excitations for (a) $A_0=0.8$, (b) $A_0=1.4$, and (c) $A_0=2.0$. We use $\omega=0.8$ and $E_{\rm dc}^y=4\times 10^{-5}$. In each panel, the dashed horizontal line marks the value of the quantized Hall conductivity with $N_{\rm Ch}=1$, whereas the vertical line marks $\tau/T=\tau_{\rm pu}/T$ where the electric field amplitude of light becomes maximum. The double-headed arrows in (b) and (c) indicate one cycle of the slow oscillation in $\sigma^T_{xy}$ near $\tau=\tau_{\rm pu}$.} \label{fig:fig5} \end{figure} In Fig. \ref{fig:fig5}, we plot $\sigma_{xy}^T$ for different values of $A_0$ with $\omega=0.8$ and $E^y_{\rm dc}=4\times 10^{-5}$. We note that a low-pass filter is used to eliminate fast oscillations in $\sigma_{xy}^T$ with frequencies around $\omega$ that originate from the pulse of circularly polarized light. Regardless of the values of $A_0$, the time profile of $\sigma_{xy}^T$ shows several features. For $\tau/T\lesssim 100$, we have $\sigma_{xy}^T\sim 0$ except for some oscillations near $\tau=0$. For $\tau/T\gtrsim 100$, $\sigma_{xy}^T$ starts to increase and oscillate around the value of the quantized Hall conductivity with $N_{\rm Ch}=1$. 
This oscillation is robust near the peak of the pump pulse ($\tau\sim \tau_{\rm pu}$), especially for large $A_0$, and its frequency is much smaller than $\omega$, as in the case of continuous-wave excitations. Then, the center of this slow oscillation moves to zero at $\tau/T\sim 400$. Near $\tau=\tau_{\rm pu}$, the oscillation periods for $A_0=1.4$ and $2.0$ [Figs. \ref{fig:fig5}(b) and \ref{fig:fig5}(c)] correspond to frequencies 0.0129 and 0.0250, respectively. They are close to $\Delta_{\rm FL}$ marked in Fig. \ref{fig:fig3}(c), where we have $\Delta_{\rm FL}=0.0136$ for $A_0=1.4$ and $\Delta_{\rm FL}=0.0249$ for $A_0=2.0$; the frequency of the slow oscillation in $\sigma_{xy}^T$ near the pulse peak coincides with the magnitude of the photoinduced gap. For these values of $A_0$, the frequency gradually increases as $\tau$ increases toward the pulse peak, indicating a transient growth of the photoinduced gap. We note that such slow oscillations in the time-resolved Hall conductivity, reflecting photon-dressed topological bands under pulse excitations, have also been reported in graphene systems~\cite{Gavensky_PRB18}. For $A_0=0.8$, the period of the slow oscillation is longer than those for $A_0=1.4$ and $2.0$, making a precise estimate difficult. This indicates that the photoinduced gap is small. Indeed, for $A_0=0.8$, we have $2\pi/\Delta_{\rm FL}=167T$, which is longer than the pulse width ($\sigma_{\rm pu}=76T$); for smaller values of $A_0$, a longer pulse is needed to observe the slow oscillation in $\sigma_{xy}^T$ fully. \section{Discussion and Summary} We now discuss the experimental relevance and feasibility of our results. First, we mention the strength of the electric field of light considered in this study. In $\alpha$-(BEDT-TTF)$_2$I$_3$, the unit lengths in the $a$ and $b$ directions are close to $10$ \AA. From these values, we estimate that $A_0=1.4$ corresponds to $E^{\omega}=11.2$~MV/cm.
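The quoted field strength can be cross-checked with a back-of-the-envelope conversion, assuming the standard Peierls-substitution form $A_0=e a E^{\omega}/\hbar\omega$ with lattice scale $a\simeq 10$~\AA\ and $\hbar\omega=0.8$~eV (the energy unit suggested by the quoted gap scale of order 0.01 eV; this convention is our assumption, not the full anisotropic estimate):

```python
import math

# E = A0 * (hbar*omega) / (e * a); working in eV and cm, the electron charge
# cancels, so eV/cm becomes V/cm directly. (Convention assumed, see lead-in.)
A0 = 1.4
hbar_omega_eV = 0.8            # photon energy
a_cm = 10.0e-8                 # lattice scale, 10 Angstrom
E_MV_per_cm = A0 * hbar_omega_eV / a_cm / 1.0e6
print(round(E_MV_per_cm, 1))   # -> 11.2

# Consistency of the time unit: one optical period T = 2*pi*hbar/(hbar*omega),
# so a pulse width of 76 T is roughly 400 fs.
hbar_eVs = 6.582e-16
T_fs = 2.0 * math.pi * hbar_eVs / hbar_omega_eV * 1.0e15
print(round(76 * T_fs))        # -> 393
```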
It has been recently reported that pump-probe experiments using intense pulses with $E^{\omega}$ exceeding 10~MV/cm have been successfully performed~\cite{Kawakami_NATP18,Kawakami_NATCM20}. These experimental reports support the feasibility of the proposed experiment using a pulse with $E^{\omega}$ as intense as 11.2~MV/cm. Next, we discuss the electron-electron scattering effects which give rise to heating \cite{Schuler_PRX20}. To invoke the Floquet adiabatic picture under photoirradiation, the pulse width of the laser should be longer than the time scale $2\pi/\Delta_{\rm FL}$ set by the photoinduced gap $\Delta_{\rm FL}$, which is of the order of 0.01~eV. Specifically, the oscillation period in the Hall conductivity shown in Figs. \ref{fig:fig3}(a) and \ref{fig:fig5} is $2\pi/\Delta_{\rm FL}\simeq 150$--$400$ fs, depending on the value of $E^{\omega}$. However, when the pulse is long, scattering effects due to electron-electron and electron-phonon interactions may become important. This point has been argued in recent studies of photoinduced dynamics in graphene \cite{Schuler_PRX20}. In graphene, photoirradiation with near-infrared light with frequency $\sim 1$ eV induces a large amount of photocarriers, since the conduction and valence bands forming the Dirac cones have a wide bandwidth of about 15 eV. This photocarrier generation contributes to heating via electron-electron scattering \cite{Schuler_PRX20}. However, in $\alpha$-(BEDT-TTF)$_2$I$_3$, the four bands near the Fermi level lie in the energy range of 0.7 eV \cite{Kino_JPSJ06}, which is comparable to the photon energy. Since these four bands are well separated from the other upper and lower bands, this compound is a unique system where the off-resonance condition is realized by near-infrared light, in contrast to graphene. This makes the photocarrier generation ineffective and thus results in a considerable suppression of heating. In fact, for the continuous-wave setup in Sec. III, the negligibly small amount of photocarriers is evident from the fact that we have $\alpha_{{\bm k},\nu}\sim 1$ for $\tau>0$ [see Fig. \ref{fig:fig2}(b)]. In addition, we have evaluated the occupied part of the excitation spectra $A_{\bm k}(\varepsilon,\tau_{\rm pr})$ in Figs. \ref{fig:fig4}(b)-\ref{fig:fig4}(d), which corresponds to the $G^<$ term in Eq. (\ref{eq:ak}), and have found that the electron occupation of the conduction band is negligibly small even during the photoexcitation process. In graphene, it has been argued that the anomalous Hall conductivity under circularly polarized light deviates from the quantized value expected from the Berry curvature of the Floquet Chern insulator, which has been ascribed to a large contribution from photocarriers \cite{Sato_PRB19}. In contrast, the Hall conductivity in Figs. \ref{fig:fig3}(a) and \ref{fig:fig5} exhibits a nearly quantized value. This indicates that it arises almost entirely from the Berry curvature of the photoinduced topological phase, and thus $\alpha$-(BEDT-TTF)$_2$I$_3$ is a promising candidate for observing the Floquet Chern insulator through the quantized Hall conductivity. The slow oscillation with the period $\sim 2\pi/\Delta_{\rm FL}$ in the Hall conductivity [Figs. \ref{fig:fig3}(a) and \ref{fig:fig5}] would be damped by scattering effects. In organic compounds, the timescale of the electron-electron scattering is $\tau_e=2\pi\hbar/t\sim 40$ fs, with $t\sim 0.1$ eV being the typical transfer integral. However, it is expected that the electron-electron scattering does not severely hamper the slow oscillation in the Hall conductivity, because the off-resonance condition offers a nearly coherent time evolution without photocarrier generation. On the other hand, the timescale of the electron-phonon scattering ($\tau_{\rm ph}$), which gives rise to dissipation, would be one order of magnitude longer than $\tau_e$ and would be comparable to the pulse width.
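The scattering timescales quoted above follow from restoring $\hbar$ in $\tau_e = 2\pi\hbar/t$ (a simple numerical check; the factor of ten for $\tau_{\rm ph}$ is the order-of-magnitude assumption stated in the text):

```python
import math

# tau_e = 2*pi*hbar / t for a transfer integral t ~ 0.1 eV; tau_ph ~ 10 tau_e.
hbar_eVs = 6.582e-16
t_eV = 0.1
tau_e_fs = 2.0 * math.pi * hbar_eVs / t_eV * 1.0e15
tau_ph_fs = 10.0 * tau_e_fs
print(round(tau_e_fs), round(tau_ph_fs))   # -> 41 414

# For a pulse width comparable to tau_ph, the damping factor is e^{-1} ~ 37%.
print(round(100 * math.exp(-1.0)))         # -> 37
```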
More specifically, we have used the pulse width $\sigma_{\rm pu}=76T\sim 400$ fs. Thus, if the electron-phonon scattering dominates the damping of the Hall conductivity, the amplitude of the slow oscillation in Fig. \ref{fig:fig5} will decrease to $e^{-\sigma_{\rm pu}/\tau_{\rm ph}}\simeq 37\%$ during the pulse. The electron-electron interactions in $\alpha$-(BEDT-TTF)$_2$I$_3$ are manifested by the charge-order phenomenon that appears at ambient pressure. However, physical properties associated with the Dirac fermions have been well explained by weak-coupling theories or even with the noninteracting model \cite{Katayama_JPSJ06,Katayama_EPJB08,Hirata_SCI17}. Moreover, it has been argued in graphene that the electron-electron and electron-phonon interactions have only a small effect on the magnitude of the photoinduced gap as well as on the Floquet band structure \cite{Schuler_PRX20}. These facts suggest the validity of our approach with the noninteracting model. However, heating and dissipation are inevitably present in photoinduced dynamics. Thus, it is important to examine their effects in order to clarify the experimental feasibility of our results. Also, under long laser pulses, possible sample damage may further reduce the experimental feasibility. In this sense, the creation of a large photoinduced gap with short pulses of strong electric field is favorable for observing the dynamical gap formation associated with the Floquet topological insulator phase in $\alpha$-(BEDT-TTF)$_2$I$_3$. In summary, we have investigated the real-time dynamics of the photoinduced topological state in the organic conductor $\alpha$-(BEDT-TTF)$_2$I$_3$ with a pair of tilted Dirac cones in its energy-band structure. We have solved the time-dependent Schr\"odinger equation numerically for the tight-binding model of $\alpha$-(BEDT-TTF)$_2$I$_3$ coupled to an ac electric field of circularly polarized light.
For the continuous-wave excitations, time profiles of the Chern number and the Hall conductivity demonstrate the appearance of the Floquet topological insulator phase that was predicted using the Floquet theory~\cite{Kitayama_PRR20}. We have shown that the Hall conductivity has a slow oscillation component whose frequency coincides with the photoinduced direct gap at the Dirac point. For the pulse excitations, we have calculated transient excitation spectra, by which the formation of the Floquet bands with the photoinduced gap is elucidated in a time-resolved manner. We have shown that the slow oscillation component of the Hall conductivity exhibits a signature of the dynamical growth of the gap during the pulse irradiation. \begin{acknowledgments} This work was partly supported by JSPS KAKENHI (Grants No. 17H02924, No. 16H06345, No. 19H00864, No. 19K21858, No. 19K23427, No. 20K03841, and No. 20H00337) and Waseda University Grant for Special Research Projects (Projects No. 2019C-253 and No. 2020C-269). \end{acknowledgments}
\section{Analysis} We found that the presence of an `oracle prediction' (perfect utterance) was dependent on the number of slots in the MR. When the number of slots was 7 or 8, the presence of an oracle in the top-20 predictions decreased significantly, as opposed to the case when the number of slots was less than 7. However, the most prominent issue among the utterances produced in first position (by the forward model) was that of omissions. There were no additions or non-words. We observed a similar issue of omissions in the human references (the targets for our model) as well. Our two strategies thus improved semantic adequacy by re-ranking the probable candidates and successfully finding the `oracle' prediction in the top-20 list. However, in terms of automatic evaluation, the BLEU score showed an inverse relationship with adequacy. Nevertheless, we chose our primary system to be the re-ranker with a classifier over the forward model. We did not find any issues with ``copying'' the restaurant `name' or `near' slot values on the dev set. However, on the test set, where the statistics of both slots changed, the model tended to generate the slot values that were more frequent in the training dataset instead of copying the actual slot value. \section*{Appendix: Sample predictions} \begin{table*} \begin{tabular}{lll} \hline \textbf{Slots} & \textbf{Type} & \textbf{Utterance} \\ \hline \hline \multirow{2}{*}{ 3} & MR & name[Blue Spice], eatType[coffee shop], area[city centre] \\ & Pred & Blue Spice is a coffee shop located in the city centre. \\ \hline \multirow{2}{*}{ 4} & MR & \begin{tabular}[c]{@{}l@{}} name[Blue Spice], eatType[coffee shop], customer rating[5 out of 5], near[Crowne \\ Plaza Hotel] \end{tabular} \\ & Pred & \begin{tabular}[c]{@{}l@{}} Blue Spice is a coffee shop near Crowne Plaza Hotel with a customer rating of 5 \\ out of 5.
\end{tabular} \\ \hline \multirow{2}{*}{ 5} & MR & \begin{tabular}[c]{@{}l@{}} name[The Cricketers], eatType[coffee shop], customer rating[1 out of 5], \\ familyFriendly[yes], near[Avalon] \end{tabular}\\ & Pred & \begin{tabular}[c]{@{}l@{}} The Cricketers is a children friendly coffee shop near Avalon with a customer rating of \\ 1 out of 5. \end{tabular} \\ \hline \multirow{2}{*}{ 6} & MR & \begin{tabular}[c]{@{}l@{}} name[Blue Spice], eatType[pub], food[Chinese], area[city centre], \\ familyFriendly[no], near[Rainbow Vegetarian Caf\'{e}] \end{tabular} \\ & Pred & \begin{tabular}[c]{@{}l@{}} Blue Spice is a Chinese pub located in the city centre near Rainbow Vegetarian Caf\'{e}. \\ It is not family friendly. \end{tabular} \\ \hline \multirow{2}{*}{ 7} & MR & \begin{tabular}[c]{@{}l@{}} name[The Mill], eatType[pub], food[English], priceRange[high], area[riverside], \\ familyFriendly[yes], near[Raja Indian Cuisine]\end{tabular} \\ & Pred & \begin{tabular}[c]{@{}l@{}} The Mill is a children friendly English pub with a high price range near Raja Indian \\ Cuisine in riverside. \end{tabular} \\ \hline \multirow{2}{*}{ 8} & MR & \begin{tabular}[c]{@{}l@{}} name[The Cricketers], eatType[restaurant], food[Chinese], priceRange[\textsterling 20-25], \\ customer rating[high], area[city centre], familyFriendly[no], near[All Bar One] \end{tabular} \\ & Pred & \begin{tabular}[c]{@{}l@{}} The Cricketers is a restaurant providing Chinese food in the \textsterling 20-25 price range. It is \\ located in the city centre near All Bar One. It has a high customer rating and is not \\ kid friendly. \end{tabular}\\ \hline \end{tabular} \caption{Sample predictions.
For the first MR of each arity (3 to 8) in the test set, we show the prediction of our primary submission.} \label{utterances} \end{table*} \section{Conclusion} We show how a char2char model can be employed for the task of NLG, achieving competitive results in this challenge. Our vanilla character-based model, building on \newcite{Agarwal2017}, requires minimal dataset preprocessing while also producing great diversity in the generated utterances. We then propose two re-ranking strategies for further improvements. Even though the re-ranking methods improve semantic adequacy, we find a reversal of this trend in terms of BLEU. Our synthetic data creation technique could be adapted for augmenting NLG datasets, and the classifier-based score could also be used as a reward in a Reinforcement Learning paradigm. \section{Evaluation} We chose our primary system to be the re-ranker using the classifier. Table \ref{table:test-bleu} summarizes our ranking among all the 60+ submissions (primary as well as additional) on the test set. In terms of BLEU, two of our systems were in the top 5 among all 60+ submissions to the challenge. \begin{table}[h] \centering \resizebox{0.48\textwidth}{!}{ \begin{tabular}{llc} \hline Submission & BLEU & Overall Rank \\ \hline \hline Re-ranking using classifier (Primary) & 0.653 & 18 \\ Re-ranking using reverse (Secondary) & 0.666 & 5 \\ Forward (Third) & 0.667 & 4 \\ Baseline & 0.659 & 10 \\ \hline \end{tabular}} \caption{Automatic BLEU evaluations released by organizers on the final challenge submission.
We had 3 submissions as described in Section \ref{sect:Model}. Two of our systems were in the top 5 among all 60+ submissions.} \label{table:test-bleu} \end{table} \begin{table}[h] \centering \resizebox{0.38\textwidth}{!}{ \begin{tabular}{cccc} \hline Metric & TrueSkill & Range & Cluster \\ \hline \hline Quality & 0.048 & (8-12) & 2 \\ Naturalness & 0.105 & (4-8) & 2 \\ \hline \end{tabular}} \caption{Human evaluation was crowd-sourced on the primary system according to the TrueSkill algorithm \citep{Sakaguchi2014}.} \label{table:test-human} \end{table} Results for the human evaluation, as released by the challenge organizers, are summarized in Table \ref{table:test-human}. The organizers followed the TrueSkill algorithm \citep{Sakaguchi2014}, judging all the primary systems on \textit{Quality} and \textit{Naturalness}. We obtained competitive results in terms of both metrics, our system being in the 2nd cluster out of 5 (for both evaluations). On the other hand, most systems ranked high on quality tended to rank lower on naturalness, and vice versa. \section{Experiments} The updated challenge dataset comprises 50K canonically ordered and systematically structured (MR,RF) pairs, collected following the crowdsourcing protocol explained in \newcite{novikova2016crowd}. The MRs consist of 8 different slot types (each with its own set of possible values); note that the slot statistics of the test set differ significantly from those of the training set. We used the open source \textit{tf-seq2seq} framework\footnote{\url{https://github.com/google/seq2seq}\ .}, built over TensorFlow \citep{abadi2016tensorflow} and provided along with \citep{britz2017massive}, with some standard configurations. We experimented with different numbers of layers in the encoder and decoder as well as different beam widths, while using the bi-directional encoder with an ``additive'' attention mechanism.
In terms of BLEU, our best performing model had the following configuration: encoder 1 layer, decoder 2 layers, GRU cell, beam-width 20, length penalty 1. \section{Introduction} Natural Language Generation from Dialogue Acts involves generating human-understandable utterances from slot-value pairs in a Meaning Representation (MR). This is a component in Spoken Dialogue Systems, where recent advances in Deep Learning are stimulating interest in using end-to-end models. Traditionally, the Natural Language Generation (NLG) component in Spoken Dialogue Systems has been rule-based, involving a two-stage pipeline: `sentence planning' (deciding the overall structure of the sentence) and `surface realization' (which renders actual utterances using this structure). The utterances produced by these rule-based systems tend to be rigid, repetitive and limited in scope. Recent approaches in dialogue generation tend to directly learn the utterances from data \citep{mei2015talk,lampouras2016imitation,duvsek2016sequence,wen2015semantically}. Recurrent Neural Networks with gated cell variants such as LSTMs and GRUs \citep{hochreiter1997long,cho2014learning} are now extensively used to model sequential data. This class of neural networks, when integrated into a Sequence-to-Sequence \citep{cho2014learning,sutskever2014sequence} framework, has produced state-of-the-art results in Machine Translation \citep{cho2014learning,sutskever2014sequence,bahdanau2014neural}, Conversational Modeling \citep{vinyals2015neural}, Semantic Parsing \citep{xiao2016sequence} and Natural Language Generation \citep{wen2015semantically,mei2015talk}. While these models were initially developed to be used at the word level in NLP-related tasks, there has been recent interest in using character-level sequences, as in Machine Translation \citep{chung2016character,zhao2016efficient,ling2015character}.
Neural seq2seq approaches to Natural Language Generation (NLG) are typically word-based, and resort to delexicalization (a process in which named entities (slot values) are replaced with special `placeholders' \citep{wen2015semantically}) to handle rare or unknown words, which remain out-of-vocabulary (OOV) even with a large vocabulary. It can be argued that this delexicalization is unable to account for phenomena such as morphological agreement (gender, number) in the generated text \citep{sharma2016natural,nayak2017}. However, \newcite{Goyal2016} and \newcite{Agarwal2017} employ a char-based seq2seq model where the input MR is simply represented as a character sequence, and the output is also generated char-by-char, avoiding the rare-word problem, as the character vocabulary is very small. This work builds on top of the formulation of \newcite{Agarwal2017} and describes our submission for the E2E NLG challenge \citep{novikova2017e2e}. We further explore re-ranking techniques in order to identify the perfect `oracle prediction' utterance. One of our re-ranking strategies uses an approach similar to the `inverted generation' technique of \newcite{chisholm2017learning}. \newcite{sennrich2015improving}, \newcite{li2015diversity} and \newcite{konstas2017neural} have also trained a reverse model for back translation in Machine Translation and NLG. A synthetic data creation technique is used by \newcite{duvsek2017referenceless} and \newcite{logacheva2015role}, but as far as we know, our protocol is novel. Our contributions in this paper and challenge can thus be summarized as follows: \vspace{-2mm} \begin{enumerate} \item We show how a vanilla character-based sequence-to-sequence model performs successfully on the challenge test dataset in terms of BLEU score, while having a tendency to omit semantic material. As far as we know, we are the only team using character-based seq2seq for the challenge.
\vspace{-2mm} \item We propose a novel data augmentation technique for Natural Language Generation (NLG) which consists of `editing' the Meaning Representation (MR) and using the original ReFerences (RF). This fabricated dataset helps us in extracting features (to detect errors), used for re-ranking the generated candidates (\mbox{Section \ref{sub-sect:dataAugment}}). \vspace{-2mm} \item We introduce two different re-ranking strategies, corresponding to our primary and secondary submission (in the challenge), defined in \mbox{Section \ref{sub-sect:reranking}}.\footnote{Due to space limitations, our description here omits a number of aspects. For a more extensive description, analysis and examples, please refer to \url{http://www.macs.hw.ac.uk/InteractionLab/E2E/final_papers/E2E-NLE.pdf}.} \end{enumerate} \section*{Acknowledgements} We would like to thank Chunyang Xiao and Matthias Gall\'{e} for their useful suggestions and comments. \section{Model} \label{sect:Model} \vspace{-1mm} In the sequel, we will refer to our vanilla char2char model as the Forward Model. \vspace{-1mm} \subsection{Forward Model} We use a Character-based Sequence-to-Sequence RNN model \citep{sutskever2014sequence,cho2014learning} with an attention mechanism \citep{bahdanau2014neural}. We feed a sequence of embeddings of the individual characters composing the source Meaning Representation (MR), seen as a string, to the Encoder RNN and try to predict the character sequence of the corresponding utterance (RF) in the generation stage with the Decoder RNN. Coupled with the attention mechanism, seq2seq models have become the de-facto standard in generation tasks. The encoder RNN embeds each of the source characters into vectors, exploiting the hidden states computed by the RNN. The decoder RNN predicts the next character based on its current hidden state, the previous character, and the ``context'' vector $c_i$ computed by the attention model.
While several strategies have been proposed to improve results using Beam Search in Machine Translation \citep{freitag2017beam}, we used the length normalization (aka length penalty) approach of \newcite{wu2016google} for our task. A heuristically derived length penalty term is added to the scoring function which ranks the probable candidates used to generate the best prediction. \subsection{Protocol for synthetic dataset creation} \label{sub-sect:dataAugment} We artificially create a training set for the classifier (defined in Section \ref{sect-classifier}) to detect errors (primarily omission of content) in generated utterances via a data augmentation technique. The systematic structure of the slots in the MR gives us the freedom to naturally augment data for our use case. To the best of our knowledge, this is the first use of data augmentation in the proposed fashion, and it opens up new directions for creating artificial datasets for NLG. We first define the procedure for creating a dataset to detect omissions and then show how a similar approach can be used to create a synthetic dataset to detect additions. \textit{Detecting omissions.} This approach assumes that originally there are no omissions in the RF for a given MR (in the training dataset). These can be considered as positive pairs when detecting omissions. Now, if we artificially add another slot to the original MR and use the same RF for this new (constructed) MR, the original RF naturally tends to omit the added slot. \vspace{-0.5mm} \begin{equation} \begin{split} MR_{original} \xrightarrow{\text{+ Added slot}} MR_{new} \\ MR_{original} \xrightarrow{\text{- Removed slot}} MR_{new} \end{split} \end{equation} \vspace{-0.5mm} This is a two-stage procedure: (a) select a slot type to add; (b) select a corresponding slot value. Instead of sampling a particular slot in step (a), we add, one by one, all the slot types that could be added to the MR beyond those currently present.
Having chosen the slot type to be added, we add the slot value according to the probability distribution of the slot values for that slot type. The original (MR$_{original}$,RF) pair is assigned a class label of 1, and the new artificial pairs (MR$_{new}$,RF) a label of 0, denoting a case of omission (first line of (1)). Thus, these triplets (MR, RF, Class Label) allow us to treat this as a classification task. \textit{Detecting additions.} In order to create a dataset which can be used to train our model to detect additions, we proceed in a similar way. The difference is that now we systematically remove one slot in the original MR to create the new MRs (second line of (1)). In both cases, we control the procedure by manipulating MRs instead of the Natural Language RF. This kind of augmented dataset opens up the possibility of using any classifier to detect the above-mentioned errors. \subsection{Re-ranking Models} \label{sub-sect:reranking} In this section, we define two techniques to re-rank the n-best list; these serve as the primary and secondary submissions to the challenge. \subsubsection{Reverse Model} \begin{figure*}[ht] \centering{\includegraphics[scale=.5]{images/ReversePipeline}} \caption{Illustration of the pipeline for the re-ranking approach (based on inverse reconstructions using the reverse model) as described in Section \ref{sub-sect:reranking}. Apart from the Forward and Reverse seq2seq models, we have a re-ranker based on the edit distance between the actual MR and the inverse-reconstructed MR.} \label{fig:ReversePipeline} \end{figure*} We generated a list of top-k predictions (using Beam Search) for each MR in what we call the \textit{forward} phase of the model. In parallel, we trained a \textit{reverse} model which tries to reconstruct the MR given the target RF, similar to the autoencoder model of \newcite{chisholm2017learning}.
This is guided by the intuition that if our prediction omits some information, the reverse reconstruction of the MR will also tend to omit the slot-value pairs corresponding to the omitted content in the prediction. We then score and re-rank the top-k predictions based on a distance metric, namely the edit distance between the original MR and the MR generated by the reverse model, starting from the utterance predicted in the forward direction. To avoid defining the weights when combining edit distance with the log probability of the model, we used a simplified mechanism. At the time of re-ranking, we choose the first output in our n-best list with zero edit distance as our prediction. If no such prediction can be found, we rely upon the first prediction in our (probabilistically) sorted n-best list. Figure \ref{fig:ReversePipeline} illustrates our pipeline approach. \subsubsection{Classifier as a re-ranker} \label{sect-classifier} To treat omission (or, more generally, any kind of \textit{semantic adequacy} mis-representation, such as repetition or addition of content) in the predictions as a classification task, we developed a dataset (consisting of triplets) using the protocol defined earlier. To train the classifier, we relied on hand-crafted features based on string matching in the prediction (against the corresponding slot value in the MR). In total, there were 7 features, one for each slot (except the `name' slot). To maintain the class balance, we replicated the original (MR,RF) pair (with a class label of 1) for each artificially generated (MR,RF) pair (with a class label of 0, corresponding to omissions). We used a logistic regression classifier to detect omissions, following a re-ranking strategy similar to the one defined for the reverse model. For each probable candidate produced by the forward model, we first extracted these features and predicted the label with this logistic regression classifier.
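Both re-rankers share the same selection logic over the n-best list; the following minimal sketch implements it with the edit-distance acceptance test of the reverse-model variant (function names and the toy reverse model are illustrative, not our actual implementation):

```python
def edit_distance(a, b):
    # standard Levenshtein distance by dynamic programming
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def rerank(mr, nbest, reconstruct_mr):
    """Pick the first candidate (nbest is sorted by model score) whose
    inverse-reconstructed MR matches the input MR exactly (edit distance 0);
    otherwise fall back to the most probable candidate."""
    for utterance in nbest:
        if edit_distance(mr, reconstruct_mr(utterance)) == 0:
            return utterance
    return nbest[0]

# Toy usage with a dictionary standing in for the reverse seq2seq model:
mr = "name[Blue Spice], eatType[pub]"
nbest = ["Blue Spice is nice.", "Blue Spice is a pub."]
toy_reverse = {"Blue Spice is nice.": "name[Blue Spice]",
               "Blue Spice is a pub.": "name[Blue Spice], eatType[pub]"}
print(rerank(mr, nbest, toy_reverse.get))   # prints "Blue Spice is a pub."
```

For the classifier variant, the acceptance test `edit_distance(...) == 0` is simply replaced by "classifier predicts label 1".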
The first output in our n-best list with a class label of 1 is then chosen as the resulting utterance. As a fallback mechanism, we rely on the best prediction by the forward model (as for the reverse model). We chose as the primary submission to the challenge the pipeline model with the classifier as re-ranker. Our second submission was based on re-ranking using the reverse model, while the vanilla forward char2char model was our third submission. \section{Related Work}
\section{Introduction}\label{sec:intro} Following the discovery of the Higgs boson by the ATLAS and CMS experiments \cite{ATLAS:2012gk,CMS:2012gu}, particle physics has entered a new epoch. The particle spectrum of the Standard Model is now complete; nevertheless, we know that the Standard Model cannot be a complete theory of particle interactions, even if we do not worry about gravity. The more fundamental theory should be able to address and predict the matter-anti-matter asymmetry of the universe and the observed dark matter abundance, and it should stabilise the Standard Model Higgs potential. It should also incorporate neutrino masses and mixings. In addition, it is desirable to have a particle physics implementation of cosmological inflation and possibly a solution to the strong CP problem. Finally, there is still the question of the naturalness of the electroweak scale; the Standard Model accommodates and provides the description of the Higgs mechanism, but it does not, and of course was not meant to, explain the origin of the electroweak scale and why it is so much lighter than the UV cut-off scale. In this paper we concentrate on a particular approach to exploring Beyond the Standard Model (BSM) physics, based on the fact that the Standard Model contains a single mass scale, the negative Higgs mass squared parameter, $-\mu^2_{\scriptscriptstyle \rm{SM}}$, in the SM Higgs potential, \begin{equation} V(H)_{\scriptscriptstyle \mathrm{SM}}=-\frac{1}{2}\mu^2_{\scriptscriptstyle \mathrm{SM}} H^\dagger H +\lambda_{\scriptscriptstyle \mathrm{SM}}( H^\dagger H)^2 \,.
\label{eq:Vsm} \end{equation} In the unitary gauge, $H(x)=\frac{1}{\sqrt{2}}(0, h(x)),$ the vacuum expectation value (vev) $v$ and the mass $m_{h\, \scriptscriptstyle \mathrm{SM}}$ of the physical SM Higgs field $h(x)$ are set by the scale $\mu_{\scriptscriptstyle \mathrm{SM}}$, \begin{equation} v =\, \frac{\mu_{\scriptscriptstyle \mathrm{SM}}}{(2\lambda_{\scriptscriptstyle \mathrm{SM}})^{1/2} } \,\,\simeq 246 \,{\rm GeV}\, , \qquad m_{h\, \scriptscriptstyle \mathrm{SM}} = \mu_{\scriptscriptstyle \mathrm{SM}} \,\,\simeq 126\, {\rm GeV} \,. \label{SMfirst} \end{equation} If this single mass scale is generated dynamically in some appropriate extension of the SM, the resulting theory will be manifestly classically scale-invariant. Such theories contain no explicit mass-scales ({\it all} masses have to be generated dynamically), but allow for non-vanishing beta functions of their dimensionless coupling constants. In section {\bf \ref{sec:2}} we employ the seminal mechanism of mass-scale generation due to Coleman and Weinberg (CW) \cite{Coleman:1973jx} and show how the electroweak scale emerges in the Standard Model coupled to the CW sector. Classically Scale-Invariant Extensions of the Standard Model -- {\it CSI ESM} -- amount to a highly predictive model-building framework. The high degree of predictivity/falsifiability of the CSI ESM arises from the fact that one cannot start extending or repairing a CSI model by introducing new mass thresholds where new physics might enter~\cite{Meissner:2006zh, Englert:2013gz}. All masses have to be generated dynamically and, at least in the simple models studied in this paper, they are all related to the same dynamical scale, which is not far above the electroweak scale. This is consistent with the manifest CSI and, as a result, protects the electroweak scale itself by ensuring that there are no heavy mass-scales contributing radiatively to the Higgs mass.
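For completeness, the tree-level relations in Eq.~(\ref{SMfirst}) follow from minimising the potential: in the unitary gauge $H^\dagger H = h^2/2$, so $V(h)=-\frac{1}{4}\mu^2_{\scriptscriptstyle \mathrm{SM}}h^2+\frac{1}{4}\lambda_{\scriptscriptstyle \mathrm{SM}}h^4$ and

```latex
\begin{equation}
V'(v) = -\frac{1}{2}\mu^2_{\scriptscriptstyle \mathrm{SM}}\, v
        + \lambda_{\scriptscriptstyle \mathrm{SM}}\, v^3 = 0
\;\;\Longrightarrow\;\;
v^2 = \frac{\mu^2_{\scriptscriptstyle \mathrm{SM}}}{2\lambda_{\scriptscriptstyle \mathrm{SM}}}\,,
\qquad
m^2_{h\,\scriptscriptstyle \mathrm{SM}} = V''(v)
  = -\frac{1}{2}\mu^2_{\scriptscriptstyle \mathrm{SM}}
    + 3\lambda_{\scriptscriptstyle \mathrm{SM}}\, v^2
  = \mu^2_{\scriptscriptstyle \mathrm{SM}}\,.
\end{equation}
```

Numerically, $m_{h\,\scriptscriptstyle \mathrm{SM}}\simeq 126$~GeV and $v\simeq 246$~GeV then give $\lambda_{\scriptscriptstyle \mathrm{SM}} = m^2_{h\,\scriptscriptstyle \mathrm{SM}}/(2v^2)\simeq 0.13$.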
Furthermore, in the CSI ESM approach one naturally expects the {\it common origin} of all mass scales, i.e. the EW scale relevant to the SM and the scales of new physics. In other words, the CSI ESM framework, if it works, realises the succinctness of Occam's razor. The CSI ESM theory is a minimal extension of the SM which should address all the sub-Planckian shortcomings of the SM, such as the generation of the matter-anti-matter asymmetry, dark matter, stabilisation of the SM Higgs potential, neutrino masses and inflation, without introducing scales much higher than the electroweak scale. It was shown recently in Ref.~\cite{Khoze:2013oga} that the CSI U(1)$_{\scriptscriptstyle \mathrm{CW}}\,\times$ \!SM theory, where the Coleman-Weinberg U(1)$_{\scriptscriptstyle \mathrm{CW}}$ sector is re-interpreted as the gauged $\mathrm{B-L}$ U(1) symmetry of the SM, can generate the observed value of the matter-anti-matter asymmetry of the Universe without introducing additional mass scales or requiring resonant fine-tuning. This CSI U(1)$_{\bf \scriptscriptstyle B-L}\,\times$ \!SM theory also generates Majorana masses for the right-handed sterile neutrinos in the range between 200 MeV and 500 GeV and leads to visible neutrino masses and mixings via the standard see-saw mechanism \cite{Iso:2009ss,Khoze:2013oga}. It follows that in the CSI ESM not only the baryonic matter-anti-matter asymmetry, but also the origin of dark matter, must be related to the origin of the electroweak scale and the Higgs vacuum stability. Refs.~\cite{Hambye:2013dgv,Carone:2013wla} have shown that in the non-Abelian CSI SU(2)$_{\scriptscriptstyle \mathrm{CW}}\,\times$~\!SM theory there is a common origin of the vector dark matter and the electroweak scale. It was also pointed out in \cite{Khoze:2013uia} that a CSI ESM theory with an additional singlet that is coupled non-minimally to gravity provides a viable particle theory implementation of slow-roll inflation.
Furthermore, the singlet responsible for inflation also provides an automatic scalar dark matter candidate. The main motivation of the present paper is to study in detail the link between the stability of the electroweak vacuum and the properties of multi-component (vector and scalar) dark matter in the context of CSI ESM theory. Our main phenomenological results are described in sections {\bf \ref{sec:Higgs}} and {\bf \ref{sec:DM}}. There, on a model-by-model basis, we determine the regions of CSI ESM parameter space where the SM Higgs vacuum is stabilised and the extended Higgs sector phenomenology is consistent with the LHC exclusion limits. We then investigate the dark matter phenomenology, compute the relic abundance, and impose direct-detection constraints on the vector and scalar components of dark matter from current and future experiments. Our discussion and computations in sections {\bf \ref{sec:Higgs}} and {\bf \ref{sec:DM}} are based on the CSI ESM model-building features and results derived in section {\bf \ref{sec:2}} and on solving the renormalisation group equations in section {\bf \ref{sec:RGeqs}}. \section{CSI ESM building \texorpdfstring{$\&$}{\&} generation of the EW scale} \label{sec:2} In the minimal Standard Model classical scale invariance is broken by the Higgs mass parameter $\mu^{2}_{\scriptscriptstyle \mathrm{SM}}$ in eq.~\eqref{eq:Vsm}. Scale invariance is easily restored by reinterpreting this scale in terms of a vacuum expectation value (vev) of a new scalar $\Phi$, coupled to the SM via the Higgs portal interaction, $-\,\lambda_{\rm P}|H|^2|\Phi|^{2}.$ Now, as soon as an appropriate non-vanishing value for $\langle \Phi\rangle\ll M_{\scriptscriptstyle \mathrm{UV}}$ can emerge dynamically, we get $\mu^2_{\scriptscriptstyle \mathrm{SM}} = 2\lambda_{\rm P}\langle|\Phi|\rangle^2$ in~\eqref{eq:Vsm}, which triggers electroweak symmetry breaking.
In order to generate the required vev of $\Phi$ we shall follow the approach reviewed in \cite{Englert:2013gz,Khoze:2013uia} and employ the seminal mechanism of mass gap generation due to Coleman and Weinberg~\cite{Coleman:1973jx}. For the CW approach to be operational, the classical theory should be massless and the scalar field $\Phi$ should be charged under a gauge group $G_{\scriptscriptstyle \mathrm{CW}}$. The vev of the CW scalar $\Phi$ appears via dimensional transmutation from the running couplings, leading to spontaneous breaking of $G_{\scriptscriptstyle \mathrm{CW}}$ and ultimately to EWSB in the SM. The CSI realisations of the Standard Model which we will concentrate on in this paper are thus characterised by the gauge group $G_{\scriptscriptstyle \mathrm{CW}}\times {\rm SU(3)}_{\mathrm{c}}\times {\rm SU(2)}_{\mathrm{L}} \times {\rm U(1)}_Y$ where the first factor plays the role of the hidden sector. The requirement of classical scale invariance implies that the theory has no input mass scales in its classical Lagrangian; as we already mentioned, all masses have to be generated dynamically via dimensional transmutation. The basic tree-level scalar potential is \begin{equation} V_{\rm cl}(H,\Phi)=\lambda_\phi (\Phi^\dagger \Phi)^2+\lambda_H(H^\dagger H)^2-\lambda_{\rm P}( H^\dagger H)(\Phi^\dagger \Phi)\,. \label{Vhphi} \end{equation} The matter content of the hidden sector gauge group $G_{\scriptscriptstyle \mathrm{CW}}$ can vary: in the minimal case it consists only of the CW scalar $\Phi$; more generally it can contain additional matter fields, including for example the SM fermions. We will discuss a few representative examples involving Abelian and non-Abelian gauge groups with a more- and a less-minimal matter content. The minimal U(1)$_{\scriptscriptstyle \mathrm{CW}}$ theory coupled to the SM via the Higgs portal with the scalar potential \eqref{Vhphi} was first considered in \cite{Hempfling:1996ht}.
The phenomenology of this model was analysed more recently in the context of the LHC, future colliders and low energy measurements in~\cite{Englert:2013gz}. Classical scale invariance is not an exact symmetry of the quantum theory, but neither is it broken by an arbitrary amount. The violation of scale invariance is controlled by the anomaly in the trace of the energy-momentum tensor, or equivalently, by the logarithmic running of dimensionless coupling constants and their dimensional transmutation scales. In weakly coupled perturbation theory, these are much smaller than the UV cutoff. Therefore, in order to maintain anomalously broken scale invariance, one should select a regularisation scheme that does not introduce explicit powers of the UV cut-off scale~\cite{Bardeen:1995kv}. In the present paper we use dimensional regularisation with the $\overline{\rm MS}$ scheme. In dimensional regularisation, and in theories like ours that contain no explicit mass scales at the outset, no large corrections to mass terms can appear. In this regularisation, which preserves classical scale invariance, the CSI ESM theory is not fine-tuned in the technical sense \cite{Khoze:2013uia,Englert:2013gz}. Other related studies of CSI ESM models can be found in~\cite{Chang:2007ki,Foot:2007ay,Foot:2007iy,Holthausen:2009uc,Iso:2012jn,Bian:2013wna,Guo:2014bha}. We would also like to briefly comment on two scale-invariance-driven approaches which are different from ours. The authors of Refs.~\cite{Meissner:2006zh,Hambye:2007vf,Gabrielli:2013hma,Holthausen:2013ota,Hill:2014mqa} envision CSI models with dimensional transmutation which are not based on the CW gauge-sector extension of the SM, but rather appeal to an extended matter content within the SM, or to a strongly coupled hidden sector. One can also consider model building based on the approach with an exact quantum scale invariance of the UV theory, as discussed recently in \cite{Tavares:2013dga} and \cite{Abel:2013mya}.
It is important to keep in mind that classical scale invariance of the effective theory below the Planck scale does not necessarily assume, nor is it directly related to, the hypothesised conformal invariance of the UV embedding of the SM. \subsection{CSI \texorpdfstring{U(1)$_{\bf \scriptscriptstyle \mathrm{CW}}\,\times$ \!SM}{U(1)CWSM}} \label{sec2:U1CW} This is the minimal classically scale-invariant extension of the SM. The SM Higgs doublet $H$ is coupled via the Higgs-portal interactions to the complex scalar \[ \Phi \,=\, \frac{1}{\sqrt{2}}(\phi + i \phi_2)\,, \] where $\Phi$ is a Standard Model singlet, but charged under the U(1) Coleman-Weinberg gauge group. The hidden sector thus consists of this U(1) gauge field and $\Phi$, and nothing else. In the unitary gauge one is left with two real scalars, \[ H=\frac{1}{\sqrt{2}}(0,h)\, , \quad \Phi=\frac{1}{\sqrt{2}}\phi\,, \] and the tree-level scalar potential \eqref{Vhphi} reads \begin{equation} V_0(h,\phi)=\frac{\lambda_{\phi}^{(0)}}{4}\phi^4+ \frac{\lambda_H^{(0)}}{4}h^4 -\frac{\lambda_{\rm P}^{(0)}}{4} h^2 \phi^2\,, \label{V0hphi} \end{equation} where the superscripts indicate that the corresponding coupling constants are the tree-level quantities. We now proceed to include radiative corrections to the classically scale-invariant potential above. Our primary goal in this section is to show how quantum effects generate the non-trivial vacuum with non-vanishing vevs $\langle \phi\rangle$ and $v=\langle h \rangle$, to derive the matching condition between coupling constants in the vacuum, and to compute the scalar mass eigenvalues, $m_{h_1}^2$ and $m_{h_2}^2$, of the mixed scalar fields $h$ and $\phi$. We then determine the SM Higgs self-coupling $\lambda_{\scriptscriptstyle \mathrm{SM}}$ in terms of $\lambda_{H}$ and other parameters of the model.
The fact that $\lambda_{\scriptscriptstyle \mathrm{SM}}$ is not identified with $\lambda_{H}$ will be of importance later when we discuss the stability of the SM Higgs potential in our model(s). For most of this section we will follow closely the analysis of Ref.~\cite{Englert:2013gz}, but with a special emphasis on two aspects of the derivation. The first is that the effective potential and the running couplings are computed in the $\overline{\rm MS}$ scheme, which is also the scheme we will use later on for writing down and solving the RG equations. The second is that, following the approach outlined in \cite{Englert:2013gz}, one can simplify the derivation considerably by concentrating first on the CW sector and singling out the 1-loop contributions $\propto e_{\scriptscriptstyle \mathrm{CW}}^4$ arising from the hidden U(1) gauge field.\footnote{Radiative corrections due to the CW scalar self-coupling $\propto \lambda_{\phi}^2$ will be sub-leading in this approach, cf.~eq.~\eqref{eq:cwmsbar} below. } Perturbative corrections arising from the SM sector will then be added later. In the $\overline{\rm MS}$ scheme the 1-loop effective potential for $\phi$ reads, cf.~\cite{Q}, \[ V_1(\phi;\mu)=\frac{\lambda^{(0)}_\phi}{4}\,\phi^4 \,+\,\frac{3}{64\pi^2}\,e^4_{\scriptscriptstyle \mathrm{CW}}(\mu)\,\phi^4\left(\,\log\frac{ e^2_{\scriptscriptstyle \mathrm{CW}}(\mu)\,\phi^2}{\mu^2}-\frac{5}{6}\right) \,, \label{V1one} \] which depends on the RG scale $\mu$ that appears both in the logarithm and in the 1-loop running CW gauge coupling constant $e_{\scriptscriptstyle \mathrm{CW}}(\mu)$.
The running (or renormalised) self-coupling $\lambda_{\phi} $ at the RG scale $\mu$ is defined via \[ \lambda_{\phi} (\mu) \,=\, \frac{1}{3!}\,\left(\frac{\partial^4 V_1 (\phi;\mu)}{\partial \phi^4}\right)_{\phi=\mu} \,=\, \lambda_{\phi}^{(0)} \,+\, \frac{10 e_{\scriptscriptstyle \mathrm{CW}}(\mu)^4 +3 e_{\scriptscriptstyle \mathrm{CW}}(\mu)^4 \log \left(e_{\scriptscriptstyle \mathrm{CW}}(\mu)^2\right)}{16 \pi ^2}\,. \label{lphiR1} \] We can now express the effective potential in terms of this renormalised coupling constant by substituting $\lambda_\phi^{(0)} \,=\, \lambda_{\phi} \,-\, (10 e_{\scriptscriptstyle \mathrm{CW}}^4+3 e_{\scriptscriptstyle \mathrm{CW}}^4 \log e_{\scriptscriptstyle \mathrm{CW}}^2)/{(16 \pi^2)}$ into eq.~\eqref{V1one}, obtaining \[ V_1 (\phi;\mu)\,=\, \frac{\lambda_\phi(\mu) \phi^4}{4}+ \frac{3e_{\scriptscriptstyle \mathrm{CW}}(\mu)^4}{64 \pi ^2} \phi^4 \left(\log \left(\frac{\phi^2}{\mu^2}\right)-\frac{25}{6}\right) \,. \label{V1R} \] The vacuum of the effective potential above occurs at $\langle \phi \rangle \neq 0.$ Minimising the potential \eqref{V1R} with respect to $\phi$ at $\mu=\langle \phi \rangle$ gives the characteristic Coleman-Weinberg-type $\lambda_\phi \propto e_{\scriptscriptstyle \mathrm{CW}}^4$ relation between the scalar and the gauge couplings, \[ \lambda_\phi \,=\, \frac{11}{16\pi^2} \,e_{\scriptscriptstyle \mathrm{CW}}^4 \qquad {\rm at} \quad \mu=\langle \phi\rangle\,. \label{eq:cwmsbar} \] It is pleasing to note that this matching relation between the couplings takes exactly the same form as the one obtained in the CW paper in the cut-off scheme -- i.e. accounting for the 3! mismatch in the definition of the coupling in \cite{Coleman:1973jx} we have $\lambda_\phi=\frac{1}{3!} \lambda$ where $\lambda$ is the coupling appearing in \cite{Coleman:1973jx}, $\lambda=\frac{33}{8\pi^2}e^4$. 
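The minimisation leading to \eqref{eq:cwmsbar} is straightforward to verify symbolically. The following sketch (Python/\texttt{sympy}; the generic coefficient $c$ is a bookkeeping device of the sketch, not notation used in the text) minimises $V=\frac{\lambda_\phi}{4}\phi^4 + c\,\phi^4\big(\log(\phi^2/\mu^2)-\frac{25}{6}\big)$ at $\mu=\langle\phi\rangle$ and then substitutes the U(1)$_{\scriptscriptstyle \mathrm{CW}}$ value $c=3e_{\scriptscriptstyle \mathrm{CW}}^4/(64\pi^2)$:

```python
import sympy as sp

phi, mu, lam, c, e = sp.symbols('phi mu lambda_phi c e', positive=True)

# Generic one-loop CW-type potential; c is a bookkeeping coefficient
V = lam*phi**4/4 + c*phi**4*(sp.log(phi**2/mu**2) - sp.Rational(25, 6))

# Stationarity at the RG scale mu = <phi> fixes lambda_phi = (44/3) c
lam_sol = sp.solve(sp.diff(V, phi).subs(phi, mu), lam)[0]
assert sp.simplify(lam_sol - sp.Rational(44, 3)*c) == 0

# Curvature at the minimum gives the CW scalar mass: V''(<phi>) = 8 c <phi>^2
m2 = sp.diff(V, phi, 2).subs({lam: lam_sol, phi: mu})
assert sp.simplify(m2 - 8*c*mu**2) == 0

# U(1)_CW case: c = 3 e^4 / (64 pi^2) reproduces the matching relation
print(sp.simplify(lam_sol.subs(c, 3*e**4/(64*sp.pi**2))))  # -> 11*e**4/(16*pi**2)
```

Rescaling $c$ to $3(2e_{\bf \scriptscriptstyle B-L})^4/(64\pi^2)$ or $9g_{\scriptscriptstyle \mathrm{CW}}^4/(1024\pi^2)$ in the same two relations reproduces the $\mathrm{B-L}$ and SU(2) results quoted later in sections {\bf \ref{sec2:U1BL}} and {\bf \ref{sec2:SU2CW}}.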
Shifting the CW scalar by its vev $\phi \to \langle \phi \rangle + \phi$ and expanding the effective potential in \eqref{V1R}, we find the mass of $\phi$, \[m_\phi^2\,=\, \frac{3e_{\scriptscriptstyle \mathrm{CW}}^4}{8\pi^2}\langle \phi \rangle^2\,, \label{eq:mphiZ} \] and the mass of the $Z'$ U(1) vector boson, \[ M_{Z'}^2 = e_{\scriptscriptstyle \mathrm{CW}}^2 \langle \phi \rangle^2\, \quad \gg \quad m_\phi^2 =\frac{3e_{\scriptscriptstyle \mathrm{CW}}^4}{8\pi^2}\langle \phi \rangle^2\, \,. \label{masses2} \] The $\overline{\rm MS}$ expressions above are once again identical to those derived in the cut-off scheme in~\cite{Coleman:1973jx,Englert:2013gz}. We now turn to the SM part of the scalar potential \eqref{V0hphi}, specifically \begin{equation} V_0(h)= \frac{\lambda_H^{(0)}}{4}h^4 -\frac{\lambda_{\rm P}\langle \phi\rangle^2}{4} h^2 \,, \label{VSM0hphi} \end{equation} where we have dropped the $(0)$ superscript for the portal coupling, as it will turn out that $\lambda_{\rm P}$ does not run much. The SM scale $\mu^2_{\scriptscriptstyle \mathrm{SM}}$ is generated by the CW vev in the second term, \[ \mu^2_{\scriptscriptstyle \mathrm{SM}}=\lambda_{\rm P} \langle \phi\rangle^2\,, \] and this triggers in turn the appearance of the Higgs vev $v$ as in the first equation in \eqref{SMfirst}. The presence of the portal coupling in the potential \eqref{VSM0hphi} (or more generally \eqref{V0hphi}) provides a correction to the CW matching condition \eqref{eq:cwmsbar} and the CW mass \eqref{eq:mphiZ}. 
Including the last term on the {\it r.h.s.}\ of \eqref{V0hphi} in the effective potential \eqref{V1one} and \eqref{V1R}, we find the $\lambda_{\rm P}$-induced corrections to equations \eqref{eq:cwmsbar}-\eqref{eq:mphiZ}, which now read \begin{align} \lambda_\phi &= \frac{11}{16\pi^2} \,e_{\scriptscriptstyle \mathrm{CW}}^4 +\lambda_{\rm P}\frac{v^2}{2\langle\phi\rangle^2} \qquad {\rm at} \quad \mu=\langle \phi\rangle \label{eq:cwmsbar-P} \\ m_\phi^2 &= \frac{3e_{\scriptscriptstyle \mathrm{CW}}^4}{8\pi^2}\langle \phi \rangle^2 +\lambda_{\rm P} v^2 \label{eq:mphiZ-P} \end{align} in full agreement with the results of \cite{Englert:2013gz}. In this paper we consider small values of $\lambda_{\rm P}$, for which these corrections are negligible, since $\lambda_{\rm P} v^2/(2 \langle \phi \rangle^2) \sim \lambda_{\rm P}^2/(4 \lambda_{H}) \ll 1.$ Our next task is to compute the Higgs mass including the SM radiative corrections. To proceed we perform the usual shift, $h(x) \to v+h(x)$, and represent the SM scalar potential \eqref{VSM0hphi} as follows, \begin{equation} V(h)= \frac{\lambda_H^{(0)}}{4}(v+h)^4 -\frac{\mu^2_{\scriptscriptstyle \mathrm{SM}}}{4} (v+h)^2 +\frac{1}{2} \Delta m^{2}_{h,\rm{SM}} \, h^2 \,, \label{VSMH} \end{equation} where for overall consistency we have also included one-loop corrections to the Higgs mass arising in the Standard Model, \begin{equation} \Delta m^{2}_{h,\rm{SM}}=\frac{1}{16\pi^2}\frac{1}{v^2}\left(6m^{4}_{W}+3m^{4}_{Z}+m^{4}_{h}-24m^{4}_{t}\right) \approx -2200\, {\rm GeV}^{2}\,. \end{equation} These corrections are dominated by the top-quark loop and are therefore negative. The appearance of $v^2$ in the denominator of $ \Delta m^{2}_{h,\rm{SM}}$ is slightly misleading, and it is better to recast it as \begin{equation} \Delta m^{2}_{h,\rm{SM}}= 2\Delta \lambda_H \, v^2 \, , \quad {\rm where}\quad \Delta \lambda_H \simeq - 0.018\,.
\end{equation} The vev $v$ is determined by minimising \eqref{VSMH} and setting $h(x)=0$; thus the last term in \eqref{VSMH} does not affect the value of $v$, although it does contribute to the one-loop-corrected value of the Higgs mass. We have \[ v^2=\frac{\lambda_{\rm P}}{2\lambda^{(0)}_H}\, \langle \phi \rangle^2 \, , \qquad m_h^2 = 2\lambda_H\, v^2 \, , \qquad \lambda_H = \lambda^{(0)}_H + \Delta \lambda_H \simeq \lambda^{(0)}_H - 0.018 \,, \label{masses1} \] where $\lambda_H$ is the one-loop-corrected value of the self-coupling. The two scalars, $h$ and $\phi$, both have vevs and hence mix via the mass matrix, \begin{equation} \label{Mmixing} M^{2}=\left( \begin{array}{cc} 2\lambda_H\, v^2 & - \sqrt{2\lambda_{\rm P} \lambda^{(0)}_{H}} v^2 \\ - \sqrt{2\lambda_{\rm P} \lambda^{(0)}_{H}} v^2 & m^{2}_{\phi} \end{array}\right)\,, \end{equation} where $m^{2}_{\phi}$ is given in \eqref{eq:mphiZ-P} (and already includes the $\lambda_{\rm P}$ correction).\footnote{The mass mixing matrix \eqref{Mmixing} is equivalent to the mass matrix derived in \cite{Englert:2013gz} which was of the form: $ M^{2}=\left( \begin{array}{cc} m^{2}_{h,0}+\Delta m^{2}_{h,\rm SM} & -\kappa \,m^{2}_{h,0} \\ -\kappa\, m^{2}_{h,0} & m^{2}_{\phi,0} +\kappa^{2} m^{2}_{h,0} \end{array}\right)$ in terms of $m^{2}_{h,0}= \, {2\lambda^{(0)}_H}\, v^2$ and $m_{\phi,0}^2 = 3e_{\scriptscriptstyle \mathrm{CW}}^4\langle \phi \rangle^2/(8\pi^2)$, with $\kappa=\sqrt{\lambda_{\rm P}/(2\lambda^{(0)}_{H})}.$ } The mass eigenstates are the two Higgs fields, $h_1$ and $h_2$, with the mass eigenvalues \[ m_{h_1,h_2}^2=\frac{1}{2} \left(2 \lambda _H v^2 +m_{\phi }^2 \pm \sqrt{\left(2 \lambda _H v^2 -m_{\phi }^2\right)^2 +8\lambda_{\rm P} \lambda _{H}^{(0)} v^4}\right)\,.
\label{mh1h2} \] It is easy to see that in the limit where the portal coupling $\lambda_{\rm P}$ is set to zero, the mixing between the two scalars $h$ and $\phi$ disappears, leaving the mass eigenvalues $m_h^2$ and $m_\phi^2$, as one would expect. However, for non-vanishing $\lambda_{\rm P}$, the mass eigenstates $h_1$ and $h_2$ are given by \[ \left( \begin{array}{cc} h_1\\ h_2 \end{array} \right)=\left(\begin{array}{cc} \cos\, \theta & -\sin \,\theta\\ \sin\, \theta &\, \,\,\,\cos \,\theta \end{array} \right) \left( \begin{array}{cc} h\\ \phi \end{array} \right)\] with a nontrivial mixing angle $\theta$. Which of these two mass eigenstates should be identified with the SM Higgs, with $m^2_{h\, \scriptscriptstyle \mathrm{SM}} \simeq (126\, {\rm GeV} )^2 $ as in eq.~\eqref{SMfirst}? The answer is obvious: the SM Higgs is the eigenstate $h_1$ which is `mostly' the $h$ scalar (i.e.\ $\cos\theta\,\times$ the scalar coupled to the SM electroweak sector) for small values of the mixing angle, \[ h_{\scriptscriptstyle \mathrm{SM}} \,:=\, h_1 \,=\, h\, \cos\, \theta \,-\, \phi\, \sin\, \theta\, , \qquad m_{h_1} = 125.66\, {\rm GeV} \,. \label{SMh1-ident} \] The SM Higgs self-coupling constant $\lambda_{\scriptscriptstyle \mathrm{SM}}$ appearing in the SM Higgs potential \eqref{eq:Vsm} can be inferred from $ m_{h_1}^2 = 2 \lambda_{\scriptscriptstyle \mathrm{SM}} v^2$, but it is not the relevant or primary parameter in our model ($\lambda_H$ is). In our computations of the RG evolution of the couplings and in the analysis of Higgs potential stabilisation carried out in this paper, we solve the initial condition \eqref{SMh1-ident} for the eigenvalue problem of \eqref{Mmixing} numerically, without analytical approximations. It is nevertheless instructive to display some simple analytic expressions that illuminate our approach.
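A minimal numerical sketch of this procedure is given below. The benchmark values of $\lambda_H^{(0)}$, $\lambda_{\rm P}$ and $m_\phi$ are illustrative assumptions, not fitted parameters; the block also recomputes the SM shift $\Delta\lambda_H$ from rounded pole masses, diagonalises the mass matrix \eqref{Mmixing}, and checks the eigenvalues against the closed form \eqref{mh1h2}:

```python
import numpy as np

v = 246.0                                        # GeV
m_W, m_Z, m_h, m_t = 80.4, 91.2, 125.7, 173.2    # GeV, rounded inputs

# SM one-loop shift of the Higgs self-coupling (the text quotes ~ -0.018)
dm2_SM = (6*m_W**4 + 3*m_Z**4 + m_h**4 - 24*m_t**4) / (16*np.pi**2 * v**2)
dlam_H = dm2_SM / (2*v**2)

lam_H0 = 0.15            # tree-level lambda_H^(0), illustrative assumption
lam_H = lam_H0 + dlam_H
lam_P = 1.0e-3           # portal coupling, illustrative assumption
m_phi2 = 200.0**2        # GeV^2, CW scalar mass squared, illustrative assumption

off = -np.sqrt(2*lam_P*lam_H0) * v**2
M2 = np.array([[2*lam_H*v**2, off],
               [off,          m_phi2]])

evals = np.linalg.eigvalsh(M2)                   # ascending eigenvalues
tr, det = np.trace(M2), np.linalg.det(M2)
closed = 0.5*(tr + np.array([-1.0, 1.0])*np.sqrt(tr**2 - 4*det))

print(np.sqrt(evals))    # the two scalar masses in GeV
```

With these inputs the lighter eigenstate is `mostly' $h$, and its mass is shifted only at the sub-percent level by the portal term, in line with the small-$\lambda_{\rm P}$ regime considered here.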
In the approximation where $(8\lambda_{\rm P} \lambda^{(0)}_H v^4 )/(2\lambda_H v^2-m_\phi^2)^2$ is small, we can expand the square root in \eqref{mh1h2} and obtain: \begin{align} m^2_{h_1} \,=\, m^2_{+} &= 2v^2\lambda_H \left(1+\frac{\lambda_{\rm P}(\lambda_H^{(0)}/\lambda_H)\,v^2}{2\lambda_H v^2-m_\phi^2}\right)\,, \quad{\rm for}~~2\lambda_H v^2 >m_\phi^2\,, \label{m2+} \\ m^2_{h_1} \,=\, m^2_{-} &= 2v^2\lambda_H \left(1-\frac{\lambda_{\rm P}(\lambda_H^{(0)}/\lambda_H)\,v^2}{m_\phi^2-2\lambda_H v^2}\right)\,, \quad{\rm for}~~m_\phi^2>2\lambda_H v^2 \,. \label{m2-} \end{align} Note that our requirement of assigning the SM Higgs mass value of 126 GeV to the `mostly $h$' state selects two different roots of \eqref{mh1h2} in the equations above, depending on whether the $h$ state or the $\phi$ state is lighter. As a result, there is a `discontinuity' in the SM Higgs identification: $m^2_{h_1} > 2v^2\lambda_H$ in the first equation, while $m^2_{h_1} < 2v^2\lambda_H$ in the second. Correspondingly, the value of $\lambda_{H}$ is smaller or greater than the value of $\lambda_{\scriptscriptstyle \mathrm{SM}}$ perceived in the SM; in particular, \begin{equation} \lambda_{\scriptscriptstyle \mathrm{SM}}= \lambda_H \left(1-\frac{\lambda_{\rm P}(\lambda_H^{(0)}/\lambda_H)\,v^2}{m_\phi^2-2\lambda_H v^2}\right)\,, \quad{\rm for}~~m_\phi^2>2\lambda_H v^2 \,.
\label{lSM-} \end{equation} One concludes that when the CW scalar is heavier than the SM Higgs, it should be easier to stabilise the SM Higgs potential: the initial value of $\lambda_H$ is then larger than the initial value of the $\lambda_{\scriptscriptstyle \mathrm{SM}}$ coupling, which helps prevent $\lambda_H$ from running negative at high values of the RG scale.\footnote{This point has been noted earlier in the literature in \cite{Lebedev:2012zw,EliasMiro:2012ay}, \cite{Hambye:2013dgv} in the context of assisting the stabilisation of the SM Higgs by integrating out a heavy scalar. In our case the second scalar does not have to be integrated out. In fact, the required stabilising effect arises when the second scalar is not much heavier than the SM Higgs, which manifests itself in keeping the denominator in \eqref{lSM-} not much greater than the square of the EW scale.} On a more technical note, in our computations we also take into account the fact that the requirement of stability of the Higgs potential at high scales goes beyond the simple condition $\lambda_{H} (\mu)> 0$ at all values of $\mu$: it should be supplemented by the slightly stronger requirement emerging from the tree-level stability of the potential \eqref{V0hphi}, namely $\lambda_{H} > \lambda_{\rm P}^2/(4 \lambda_\phi).$ In the following sections {\bf \ref{sec2:U1BL}-\ref{sec2:U1BLsc}}, we extend the construction above to models with more general hidden sectors. First, the G$_{\scriptscriptstyle \mathrm{CW}}$ Coleman-Weinberg sector can be extended so that SM fermions are charged under G$_{\scriptscriptstyle \mathrm{CW}}$; secondly, G$_{\scriptscriptstyle \mathrm{CW}}$ can also be non-Abelian. In addition, these CSI ESM models can include a gauge singlet with portal couplings to the Higgs and the CW scalar field.
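The expansions \eqref{m2+}-\eqref{m2-} quoted above are easy to check symbolically. A sketch (Python/\texttt{sympy}; the positive symbol $d = 2\lambda_H v^2 - m_\phi^2$ for the `$+$' branch and the bookkeeping parameter $\epsilon$ tagging the portal term are devices of the sketch, not notation used in the text):

```python
import sympy as sp

lamH, lam0, lamP, v, d, eps = sp.symbols(
    'lambda_H lambda_H0 lambda_P v d epsilon', positive=True)

A = 2*lamH*v**2          # 2 lambda_H v^2
mphi2 = A - d            # m_phi^2, with d = 2 lambda_H v^2 - m_phi^2 > 0

# Exact '+' eigenvalue from the mass-matrix formula, portal term tagged by eps
root = sp.sqrt((A - mphi2)**2 + 8*eps*lamP*lam0*v**4)
m2_plus = sp.Rational(1, 2)*(A + mphi2 + root)

# First-order expansion quoted in the text for 2 lambda_H v^2 > m_phi^2
m2_approx = A*(1 + eps*lamP*(lam0/lamH)*v**2/d)

diff = sp.series(m2_plus - m2_approx, eps, 0, 2).removeO()
assert sp.simplify(diff) == 0
print('expansion reproduced at first order in the portal coupling')
```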
In sections {\bf \ref{sec:Higgs}} and {\bf \ref{sec:DM}} we will explain how the combination of constraints arising from the Higgs vacuum stability, collider exclusions, and dark matter searches and phenomenology will apply to and discriminate between these varieties of CSI SM extensions. \subsection{CSI \texorpdfstring{U(1)$_{\bf \bf \scriptscriptstyle B-L}\,\times$ \!SM}{U(1) B-L SM}} \label{sec2:U1BL} The $\mathrm{B-L}$ theory was originally introduced in \cite{Mohapatra:1980qe}, and in the context of the CW classically scale-invariant extension of the SM this theory was recently studied in \cite{Iso:2012jn} and by two of the present authors in \cite{Khoze:2013oga}. In the latter reference it was shown that this model can explain the matter-antimatter asymmetry of the universe by adopting the `Leptogenesis due to neutrino oscillations' mechanism of \cite{Akhmedov:1998qx} in a way which is consistent with the CSI requirement that there are no large mass scales present in the theory. The U(1)$_{\bf \bf \scriptscriptstyle B-L}\times$ SM theory is a particularly appealing CSI ESM realisation, since cancellation of the U(1)$_{\bf \scriptscriptstyle B-L}$ gauge anomaly requires that the matter content of the model automatically includes three generations of right-handed Majorana neutrinos. All SM matter fields are charged under the U(1)$_{\bf \scriptscriptstyle B-L}$ gauge group with charges equal to their baryon minus lepton number. In addition, the CW field $\phi$ carries $\mathrm{B-L}$ charge 2 and its vev generates the Majorana neutrino masses and the mass of the U(1)$_{\bf \scriptscriptstyle B-L}$ $Z'$ boson. The standard see-saw mechanism generates masses of visible neutrinos and also leads to neutrino oscillations. The scalar field content of the model is the same as before, with $H$ being the complex doublet and $\Phi \,=\, \frac{1}{\sqrt{2}}(\phi + i \phi_2)$, the complex singlet under the SM.
The tree-level scalar potential is given by \eqref{Vhphi} which in the unitary gauge takes the form \eqref{V0hphi}. Our earlier discussion of the mass gap generation in the CW sector, the EWSB and the mass spectrum structure, proceeds precisely as in the previous sections, with the substitution $e_{\scriptscriptstyle \mathrm{CW}} \to \,2\,e_{\bf \scriptscriptstyle B-L}$. The one-loop corrected potential \eqref{V1R} becomes: \[ V_1(\phi)=\frac{\lambda_\phi(\mu)}{4}\phi^4 +\frac{3}{64\pi^2}(2e_{\bf \scriptscriptstyle B-L}(\mu))^4\phi^4\left(\log\frac{\phi^2}{\mu^2}-\frac{25}{6}\right) -\frac{\lambda_{\rm P}(\mu)}{4} h^2 \phi^2\,. \] Minimising it at $\mu=\left<\phi\right>$ gives the matching condition for the couplings and the expansion around the vacuum at $\left<\phi\right>$ determines the mass of the CW scalar field (cf.~\eqref{eq:cwmsbar-P}-\eqref{eq:mphiZ-P}), \begin{align} \lambda_\phi &= \frac{11}{\pi^2} \,e_{\bf \scriptscriptstyle B-L}^4 +\lambda_{\rm P}\frac{v^2}{2\langle\phi\rangle^2} \qquad {\rm at} \quad \mu=\langle \phi\rangle \label{eq:cwmsbar-PBL} \\ m_\phi^2 &= \frac{6e_{\bf \scriptscriptstyle B-L}^4}{\pi^2}\langle \phi \rangle^2 +\lambda_{\rm P} v^2 \label{eq:mphiZ-PBL} \end{align} in agreement with \cite{Khoze:2013oga}. The expressions for the Higgs field vev, $v$, and the Higgs mass, $m_h$, are unchanged and given by \eqref{masses1}. The mass mixing matrix is the same as in \eqref{Mmixing} with $m_\phi^2$ given by \eqref{eq:mphiZ-PBL}. \subsection{CSI \texorpdfstring{SU(2)$_{\bf \scriptscriptstyle \mathrm{CW}}\,\times$ \!SM}{SU(2) CW SM}} \label{sec2:SU2CW} One can also use a non-Abelian extension of the SM in the CSI ESM general framework. In this section we concentrate on the simple case where the CW group is SU$(2)$ and for simplicity we assume that there are no additional matter fields (apart from the CW scalar~$\Phi$) charged under this hidden sector gauge group. 
This model was previously considered in~\cite{Hambye:2013dgv} and subsequently in \cite{Carone:2013wla}. The novel feature of this model is the presence of the vector dark matter candidate -- the SU$(2)$ Coleman-Weinberg gauge fields \cite{Hambye:2013dgv}. The classical scalar potential is the same as before, \[V_{\rm cl} (H,\Phi)=\lambda_\phi (\Phi^\dagger \Phi)^2+\lambda_H( H^\dagger H)^2-\lambda_{\rm P}(H^\dagger H)(\Phi^\dagger \Phi)\,, \label{VSU2} \] where $\Phi$ and the Higgs field $H$ are complex doublets of SU(2)$_{\scriptscriptstyle \mathrm{CW}}$ and SU(2)$_{\scriptscriptstyle \mathrm{L}}$, respectively. In the unitary gauge for both of the SU(2) factors we have \[ H=\frac{1}{\sqrt{2}}(0,h)\,, \qquad \Phi=\frac{1}{\sqrt{2}}(0,\phi)\,. \] The analogue of the one-loop corrected scalar potential \eqref{V1R} now becomes: \[ V_1(\phi)=\frac{\lambda_\phi(\mu)}{4}\phi^4 +\frac{9 }{1024\,\pi^2}\, g^4_{\scriptscriptstyle \mathrm{CW}}(\mu) \,\phi^4\left(\log\frac{\phi^2}{\mu^2}-\frac{25}{6}\right) -\frac{\lambda_{\rm P}(\mu)}{4} h^2 \phi^2\,, \] where $g_{\scriptscriptstyle \mathrm{CW}}$ is the coupling of the SU(2) CW gauge sector. Minimising at $\mu=\left<\phi\right>$ gives: \begin{align} \lambda_\phi &= \frac{33}{256\,\pi^2} \,g_{\scriptscriptstyle \mathrm{CW}}^4 +\lambda_{\rm P}\frac{v^2}{2\langle\phi\rangle^2} \qquad {\rm at} \quad \mu=\langle \phi\rangle \label{eq:cwmsbar-PSU2} \\ m_\phi^2 &= \frac{9 }{128\,\pi^2}\,g_{\scriptscriptstyle \mathrm{CW}}^4\, \langle \phi \rangle^2 +\lambda_{\rm P} v^2\,. \label{eq:mphiZ-PSU2} \end{align} \subsection{CSI ESM \texorpdfstring{$\oplus$}{plus} singlet} \label{sec2:U1BLsc} All Abelian and non-Abelian CSI extensions of the SM introduced above can be easily extended further by adding a singlet degree of freedom, a one-component real scalar field $s(x)$.
Such extensions by a real scalar were recently shown in Ref.~\cite{Khoze:2013uia} to be instrumental in generating the slow-roll potential for cosmological inflation when the scalar $s(x)$ is coupled non-minimally to gravity. The two additional features of models with the singlet which are particularly important for the purposes of this paper are that (1) the singlet portal coupling to the Higgs provides an additional (and powerful) mechanism for Higgs stabilisation, and (2) the singlet $s(x)$ is also a natural candidate for scalar dark matter. The gauge singlet field $s$ is coupled to the ESM models of sections {\bf \ref{sec2:U1CW}-\ref{sec2:SU2CW}} via the scalar portal interactions with the Higgs and the CW field $\Phi$, \begin{equation} \label{potentialcoupled3} V_{\rm cl}(H,\Phi,s)\,=\, \frac{\lambda_{Hs}}{2}H^\dagger H s^2 \,+\, \frac{\lambda_{\phi s}}{2} \Phi^\dagger \Phi s^2 \,+\, \frac{\lambda_{s}}{4} s^4 \,+\, V_{\rm cl}(H,\Phi)\,. \end{equation} Equations~\eqref{Vhphi} and~\eqref{potentialcoupled3} describe the general renormalisable gauge-invariant scalar potential for the three classically massless scalars, as required by classical scale invariance. The coupling constants in the potential \eqref{potentialcoupled3} are all taken to be positive; thus the potential is stable, and the positivity of $\lambda_{Hs}$ and $\lambda_{\phi s}$ ensures that no vev is generated for the singlet $s(x)$. Instead, the CW vev $\langle \phi \rangle$ generates the mass term for the singlet, \begin{equation} \label{ms2} m_s^2\,=\, \frac{\lambda_{Hs}}{2}\,v^2 \,+\, \frac{\lambda_{\phi s}}{2} \, |\langle \phi \rangle|^{2} \,, \end{equation} in the vacuum $s=0$, $\phi=\langle \phi \rangle$, $H=\frac{v}{\sqrt{2}}$ with $v=\sqrt{\frac{\lambda_{\rm P}}{2\lambda^{(0)}_{H}}}\,|\langle \phi \rangle |$, cf.~\eqref{masses1}.
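For orientation, eq.~\eqref{ms2} can be evaluated at an illustrative benchmark (all couplings and the CW vev below are assumptions for the sketch, not fitted values):

```python
import math

v = 246.0            # GeV, Higgs vev
vev_phi = 2000.0     # GeV, CW vev <phi>, illustrative assumption
lam_Hs = 0.01        # singlet-Higgs portal coupling, illustrative assumption
lam_phis = 0.005     # singlet-CW portal coupling, illustrative assumption

# The singlet mass is generated entirely by the two portal terms
ms2 = 0.5*lam_Hs*v**2 + 0.5*lam_phis*vev_phi**2
m_s = math.sqrt(ms2)
print(round(m_s, 1), 'GeV')
```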
\section{RG Evolution} \label{sec:RGeqs} In this section our aim is to put together the toolkit necessary to determine the regions of the parameter spaces of CSI ESM models where the Higgs vacuum is stable. To do this we first need to specify the RG equations for all CSI ESM theories of interest, with and without the additional singlet. We will also fix the initial conditions for the RG evolution. Following this more technical build-up in the present section, the Higgs vacuum stability and collider constraints on the Higgs-sector phenomenology will be analysed in section~{\bf \ref{sec:Higgs}}. \subsection{Standard Model \texorpdfstring{$\times$ U(1)${_{\bf \scriptscriptstyle CW}}$}{xU(1)CW}} This is the simplest scale-invariant extension of the SM. The hidden sector is an Abelian U(1) which couples only to the CW scalar (of charge 1) and no other matter fields. We now proceed to write down the renormalisation group equations for this model. The scalar couplings $\lambda_H$, $\lambda_\phi$ and $\lambda_{\rm P}$ are governed by: \begin{eqnarray} (4\pi)^2 \frac{d \lambda_H}{d \log \mu}&=&-6 y_t^4+24\lambda_H^2+ \lambda_{\rm P}^2 + \lambda_H \left(12y_t^2-\frac{9}{5}g_1^2-9g_2^2 -3g_{\rm mix}^2\right) \nonumber\\ &&+\frac{27}{200}g_1^4+\frac{9}{20}g_2^2g_1^2 +\frac{9}{8}g_2^4 +\frac{3}{4}g_2^2g_{\rm mix}^2 +\frac{9}{20}g_1^2g_{\rm mix}^2+\frac{3}{8}g_{\rm mix}^4 \label{lHU1}\\ \nonumber \\ (4\pi)^2 \frac{d \lambda_\phi}{d \log \mu} &=& 20\lambda_\phi^2 +2\lambda_{\rm P}^2 -12\lambda_\phi \, e_{\scriptscriptstyle \mathrm{CW}}^2 +6 e_{\scriptscriptstyle \mathrm{CW}}^4\\ (4\pi)^2 \frac{d \lambda_{\rm P}}{d \log \mu}&=&\lambda_{\rm P}\left(6y_t^2 +12\lambda_H+8\lambda_\phi -4\lambda_{\rm P} -6e_{\scriptscriptstyle \mathrm{CW}}^2 -\frac{9}{10}g_1^2 -\frac{9}{2}g_2^2 -\frac{3}{2}g_{\rm mix}^2\right) -3g_{\rm mix}^2e_{\scriptscriptstyle \mathrm{CW}}^2 \nonumber\\ \end{eqnarray} The RG equation for the top Yukawa coupling $y_t$ is, \begin{eqnarray} (4\pi)^2 \frac{d
y_t}{d \log \mu}=y_t\left(\frac{9}{2}y_t^2 -\frac{17}{20}g_1^2-\frac{9}{4}g_2^2-8g_3^2 -\frac{17}{12}g_{\rm mix}^2 \right)\,. \end{eqnarray} Finally, $e_{\scriptscriptstyle \mathrm{CW}}$, $g_{\rm mix}$ and $g_{i}$ denote the gauge couplings of the U(1)$_{\scriptscriptstyle \mathrm{CW}} \times \mathrm{SM}$, which obey, \begin{eqnarray} && (4\pi)^2 \frac{d e_{\scriptscriptstyle \mathrm{CW}}}{d \log \mu}=\frac{1}{3}e_{\scriptscriptstyle \mathrm{CW}}^3+\frac{41}{6}e_{\scriptscriptstyle \mathrm{CW}}g_{\rm mix}^2 \\ &&(4\pi)^2 \frac{d g_{\rm mix}}{d \log \mu}=\frac{41}{6}g_{\rm mix}\left(g_{\rm mix}^2+2g_1^2\right)+\frac{1}{3}e_{\scriptscriptstyle \mathrm{CW}}^2g_{\rm mix}\\ &&(4\pi)^2 \frac{d g_3}{d \log \mu}=-7g_3^3 \,,\quad (4\pi)^2 \frac{d g_2}{d \log \mu}=-\frac{19}{6}g_2^3 \,,\quad (4\pi)^2 \frac{d g_1}{d \log \mu}=\frac{41}{10}g_1^3\,. \label{gaugeU1} \end{eqnarray} A characteristic feature of the Abelian ESM theory is $g_{\rm mix}$, the kinetic mixing of the two Abelian factors, U(1)$_{\scriptscriptstyle \mathrm{CW}} \times {\rm U(1)}_Y$. For a generic matter field $\varphi$ transforming under both U(1)'s with charges $Q^{\scriptscriptstyle \mathrm{CW}}$ and $Q^{Y}$, the kinetic mixing is defined as the coupling constant $g_{\rm mix}$ appearing in the covariant derivative, \[D_{\mu} \varphi \,=\, \partial_{\mu} \varphi \,+\, i \left(\sqrt{\frac{3}{5}}g_1 Q^Y A^Y_{\mu} \,+\, (g_{\rm mix} Q^Y + e_{\scriptscriptstyle \mathrm{CW}} Q^{\scriptscriptstyle \mathrm{CW}})A^{\scriptscriptstyle \mathrm{CW}}_{\mu}\right)\varphi\,. \] Kinetic mixing is induced radiatively insofar as there are matter fields transforming under both Abelian factors. In the present model it is induced by the mass eigenstates of the scalar fields. In what follows we will, for simplicity, choose $g_{\rm mix}(\mu=M_t)=0$ at the top mass scale.
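The coupled one-loop system above, with $g_{\rm mix}$ set to zero as just stated, is straightforward to integrate numerically. The sketch below uses a plain RK4 stepper in $t=\log\mu$; the SM inputs are the central values quoted later in the text, while the starting values of $\lambda_H$, $\lambda_\phi$, $\lambda_{\rm P}$ and $e_{\scriptscriptstyle \mathrm{CW}}$ are illustrative placeholders rather than matched inputs.

```python
import math

SIXTEEN_PI2 = 16 * math.pi**2

def beta(c):
    """One-loop beta functions of the U(1)_CW x SM model with g_mix = 0."""
    lH, lphi, lP, yt, e, g1, g2, g3 = c
    b_lH = (-6*yt**4 + 24*lH**2 + lP**2
            + lH*(12*yt**2 - (9/5)*g1**2 - 9*g2**2)
            + (27/200)*g1**4 + (9/20)*g2**2*g1**2 + (9/8)*g2**4)
    b_lphi = 20*lphi**2 + 2*lP**2 - 12*lphi*e**2 + 6*e**4
    b_lP = lP*(6*yt**2 + 12*lH + 8*lphi - 4*lP - 6*e**2
               - (9/10)*g1**2 - (9/2)*g2**2)
    b_yt = yt*((9/2)*yt**2 - (17/20)*g1**2 - (9/4)*g2**2 - 8*g3**2)
    b_e = e**3 / 3
    b_g1, b_g2, b_g3 = (41/10)*g1**3, -(19/6)*g2**3, -7*g3**3
    return [b / SIXTEEN_PI2
            for b in (b_lH, b_lphi, b_lP, b_yt, b_e, b_g1, b_g2, b_g3)]

def run(c, t0, t1, n=4000):
    """Fixed-step RK4 integration in t = log(mu)."""
    h = (t1 - t0) / n
    for _ in range(n):
        k1 = beta(c)
        k2 = beta([x + 0.5*h*k for x, k in zip(c, k1)])
        k3 = beta([x + 0.5*h*k for x, k in zip(c, k2)])
        k4 = beta([x + h*k for x, k in zip(c, k3)])
        c = [x + (h/6)*(a + 2*b + 2*m + d)
             for x, a, b, m, d in zip(c, k1, k2, k3, k4)]
    return c

# state: [lam_H, lam_phi, lam_P, y_t, e_CW, g1, g2, g3] at mu = M_t
c0 = [0.13, 0.05, 1e-3, 0.93558, 0.3,
      math.sqrt(5/3)*0.35761, 0.64822, 1.1666]
cP = run(c0, math.log(173.1), math.log(1.22e19))   # run up to M_Pl
```

Whether $\lambda_H$ stays positive up to $M_{\rm Pl}$ depends on the starting point; scanning the $(\lambda_{\rm P}, e_{\scriptscriptstyle \mathrm{CW}})$ plane with such a routine is how the stability regions discussed later in the text are mapped out.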
\subsection{Standard Model \texorpdfstring{$\times$~U(1)${_{\bf \scriptscriptstyle B-L}}$}{xU(1)B-L}} The RG equations in the $\mathrm{B-L}$ theory are the appropriate generalisation of the equations above. These equations were first derived in \cite{Basso:2010jm} and were also discussed recently in \cite{Iso:2012jn}. In our conventions the RG evolution in the CSI U(1)${_{\bf \scriptscriptstyle B-L}}\times$ SM theory with the classical scalar potential \eqref{Vhphi} is determined by the set of RG equations below: \begin{eqnarray} (4\pi)^2 \frac{d \lambda_H}{d \log \mu}&=& r.h.s.\,\eqref{lHU1} \label{lHBL}\\ (4\pi)^2 \frac{d \lambda_\phi}{d \log \mu} &=& 20\lambda_\phi^2 +2\lambda_{\rm P}^2 -48\lambda_\phi \, e_{\bf \scriptscriptstyle B-L}^2 +96 e_{\bf \scriptscriptstyle B-L}^4 -Tr[(y^M)^4]+8\lambda_\phi Tr[(y^M)^2] \label{lphiBL} \\ (4\pi)^2 \frac{d \lambda_{\rm P}}{d \log \mu}&=&\lambda_{\rm P}\left(6y_t^2 +12\lambda_H+8\lambda_\phi -4\lambda_{\rm P} -24e_{\bf \scriptscriptstyle B-L}^2 -\frac{9}{10}g_1^2 -\frac{9}{2}g_2^2 -\frac{3}{2}g_{\rm mix}^2 \right. \nonumber\\ &&\qquad +4 Tr[(y^M)^2] \bigg)-12g_{\rm mix}^2e_{\bf \scriptscriptstyle B-L}^2 \label{lPBL} \,. 
\end{eqnarray} The Yukawas for the top quark and for the 3 Majorana neutrinos are determined via \begin{eqnarray} (4\pi)^2 \frac{d y_t}{d \log \mu}&=&y_t\left(\frac{9}{2}y_t^2 -\frac{17}{20}g_1^2-\frac{9}{4}g_2^2-8g_3^2 -\frac{17}{12}g_{\rm mix}^2 -\frac{2}{3}e_{\bf \scriptscriptstyle B-L}^2-\frac{5}{3}g_{\rm mix}e_{\bf \scriptscriptstyle B-L} \right) \label{ytBL} \\ (4\pi)^2 \frac{d y_i^M}{d \log \mu}&=&y_i^M\left(4(y_i^M)^2+Tr[(y^M)^2]-6e_{\bf \scriptscriptstyle B-L}^2\right) \label{ymBL}\,, \end{eqnarray} and the gauge couplings are given by eqs.~\eqref{gaugeU1} together with \begin{eqnarray} (4\pi)^2 \frac{d e_{\bf \scriptscriptstyle B-L}}{d \log \mu} &=&12e_{\bf \scriptscriptstyle B-L}^3+\frac{32}{3}e_{\bf \scriptscriptstyle B-L}^2\,g_{\rm mix} +\frac{41}{6}e_{\bf \scriptscriptstyle B-L}\,g_{\rm mix}^2 \label{eBL}\\ (4\pi)^2 \frac{d g_{\rm mix}}{d \log \mu} &=& \frac{41}{6}g_{\rm mix}\left(g_{\rm mix}^2+\frac{6}{5} g_1^2\right) +\frac{32}{3}e_{\bf \scriptscriptstyle B-L}\left(g_{\rm mix}^2+\frac{3}{5}g_1^2\right) +12e_{\bf \scriptscriptstyle B-L}^2\,g_{\rm mix} \label{gmixBL}\,. \end{eqnarray} \subsection{Standard Model \texorpdfstring{$\times$ U(1)${_{\bf \scriptscriptstyle B-L}}$ $\oplus$}{xU(1) B-L plus} real scalar} When discussing the Higgs vacuum stability we will soon find that the size of the available region of the CSI ESM parameter space depends significantly on whether or not the theory includes an additional singlet field. We are thus led to extend the RG equations above to the case with the singlet.
The scalar self-couplings and portal couplings in this model are governed by the following equations, \begin{eqnarray} (4\pi)^2 \frac{d \lambda_H}{d \log \mu}&=& r.h.s.\,\eqref{lHBL} \,+\, \frac{1}{2}\lambda^2_{H s} \label{lHBLs}\\ (4\pi)^2 \frac{d \lambda_\phi}{d \log \mu} &=& r.h.s.\,\eqref{lphiBL} \,+\, \frac{1}{2}\lambda^2_{\phi s} \label{lphiBLs} \\ (4\pi)^2 \frac{d \lambda_{\rm P}}{d \log \mu}&=& r.h.s.\,\eqref{lPBL} \,-\, \lambda_{H s}\lambda_{\phi s} \label{lPBLs} \\ (4\pi)^2 \frac{d \lambda_{s}}{d \log \mu} &=& 18\lambda_s^2+\lambda_{\phi s}^2+2\lambda_{H s}^2 \label{lsBLs}\\ (4\pi)^2 \frac{d \lambda_{Hs}}{d \log \mu} &=& \lambda_{Hs}\left(6y_t^2+12\lambda_H+ 6\lambda_s + 4 \lambda_{Hs} -\frac{9g_1^2}{10}-\frac{9g_2^2}{2} \right) -2\lambda_{\rm P}\lambda_{\phi s} \label{lHsBLs} \\ (4\pi)^2 \frac{d \lambda_{\phi s}}{d \log \mu}&=& \lambda_{\phi s}\left(12\lambda_{\phi}+ 6\lambda_s +4 \lambda_{\phi s} -18e_{\bf \scriptscriptstyle B-L}^2 \right) -4\lambda_{\rm P}\lambda_{H s} \,. \label{lphisBLs} \end{eqnarray} The rest of the RG equations are the same as before. Equations for Yukawa couplings are \eqref{ytBL}-\eqref{ymBL}, and the gauge couplings are given by eqs.~\eqref{gaugeU1} together with \eqref{eBL}-\eqref{gmixBL}. As always, we set $g_{\rm mix}(\mu=M_t)=0$. Note that it is easy to derive a simple formula, eq.~\eqref{eq:simple} below, which computes the coefficients in front of scalar couplings on the right hand sides of the RG equations. First, let us write the classical scalar potential in the form, \[ V_0 = \sum_{\varphi} \,\frac{\lambda_{\varphi}}{4}( \vec\varphi^{\,2})^2 \,+\, \sum_{\varphi < \varphi'} \,\frac{\lambda_{\varphi \varphi'}}{4}( \vec\varphi^{\,2})( \vec\varphi^{\,\prime\,2})\,, \] where in our case $\varphi =\{h, \phi, s\}$, and the second sum is understood as over the three pairs of indices, $(h,\phi)$, $(h,s)$ and $(\phi,s)$. 
The notation $\vec \varphi$ denotes the canonically normalised real components of the Higgs, $\vec h= (h_1,\ldots, h_4)$, the complex doublet $\vec \phi= (\phi_1,\ldots, \phi_4)$ and the real singlet $\vec s= s$. In general we denote the number of real components of each of the species of $\vec \varphi$ by $N_{\varphi}$. It is then easy to derive the expressions for scalar-coupling contributions to all the self-interactions, by counting the contributing 4-point 1PI diagrams involving 2 scalar vertices. For the beta functions of the self-couplings we get, \[ (4\pi)^2 \frac{d \lambda_{\varphi}}{d \log \mu}\,\ni\, 2 (N_{\varphi}+8)\, \lambda_{\varphi}^2 \,+\, \sum_{\tilde{\varphi}} \frac{N_{\tilde{\varphi}}}{2} \,\lambda^2_{\varphi \tilde{\varphi}}\,, \] and the portal couplings are governed by, \[ (4\pi)^2 \frac{d \lambda_{\varphi \varphi'}}{d \log \mu}\,\ni\, \sum_{\varphi} 2 (N_{\varphi}+2)\, \lambda_{\varphi}\lambda_{\varphi \varphi'} \,+\, \sum_{\varphi'} 2 (N_{\varphi'}+2)\,\lambda_{\varphi \varphi'} \lambda_{\varphi'} \,+\, \sum_{\tilde{\varphi}} N_{\tilde{\varphi}}\,\lambda_{\varphi \tilde\varphi}\lambda_{\varphi' \tilde\varphi} \,+\, 4\,\lambda^2_{\varphi \varphi'}\,. \label{eq:simple} \] This formula is valid for all of the CSI ESM examples considered in this paper. \subsection{Standard Model \texorpdfstring{$\times$ SU(2)$_{\bf \scriptscriptstyle \mathrm{CW}}$}{x SU(2) CW}} We can also write down the relevant renormalisation group equations for the classically scale-invariant Standard Model $\times$ SU(2)$_{\scriptscriptstyle \mathrm{CW}}$ theory with the scalar potential given by eq.~\eqref{VSU2}. These RG equations were first derived in Refs.~\cite{Hambye:2013dgv,Carone:2013wla}.
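The counting formula \eqref{eq:simple} is easy to check against the explicit beta functions listed in this section. A minimal sketch (the component counts $N_h=4$, $N_\phi=2$ or $4$, $N_s=1$ are those of the text):

```python
def self_coupling_coeff(N):
    """Coefficient of lambda^2 in (16 pi^2) beta_lambda for a field with
    N real components, from the 1PI counting formula: 2 (N + 8)."""
    return 2 * (N + 8)

def portal_sq_coeff(N_other):
    """Coefficient of lambda_{phi phi'}^2 in beta_lambda_phi: N'/2."""
    return N_other / 2

# Higgs doublet (N = 4): 24 lambda_H^2, as in the RG equations above
assert self_coupling_coeff(4) == 24
# Abelian complex CW scalar (N = 2): 20 lambda_phi^2;
# the SU(2)_CW doublet (N = 4) gives 24 lambda_phi^2 instead
assert self_coupling_coeff(2) == 20
# real singlet (N = 1): 18 lambda_s^2
assert self_coupling_coeff(1) == 18
# Higgs portal of the singlet (N_h = 4): 2 lambda_Hs^2 in beta_lambda_s
assert portal_sq_coeff(4) == 2
```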
For scalar self-couplings $\lambda_H$ and $\lambda_\phi$, and the portal coupling $\lambda_{\rm P}$ we have: \begin{eqnarray} &&(4\pi)^2 \frac{d \lambda_H}{d \log \mu}=-6 y_t^4+24\lambda_H^2+2\lambda_{\rm P}^2 + \lambda_H \left(12y_t^2-\frac{9}{5}g_1^2-9g_2^2\right) +\frac{27}{200}g_1^4+\frac{9}{20}g_2^2g_1^2 +\frac{9}{8}g_2^4\qquad \quad \label{lHeqn} \\ &&(4\pi)^2 \frac{d \lambda_\phi}{d \log \mu}= 24\lambda_\phi^2 +2\lambda_{\rm P}^2 -9\lambda_\phi \, g_{\scriptscriptstyle \mathrm{CW}}^2 +\frac{9}{8} g_{\scriptscriptstyle \mathrm{CW}}^4\\ &&(4\pi)^2 \frac{d \lambda_{\rm P}}{d \log \mu}=\lambda_{\rm P}\left(6y_t^2 +12\lambda_H+12\lambda_\phi -4\lambda_{\rm P} -\frac{9}{2}g_{\scriptscriptstyle \mathrm{CW}}^2-\frac{9}{10}g_1^2-\frac{9}{2}g_2^2\right)\,, \end{eqnarray} where the top Yukawa coupling obeys \begin{eqnarray} (4\pi)^2 \frac{d y_t}{d \log \mu}=y_t\left(\frac{9}{2}y_t^2 -\frac{17}{20}g_1^2-\frac{9}{4}g_2^2-8g_3^2\right)\,, \end{eqnarray} and $g_{\scriptscriptstyle \mathrm{CW}}$, $g_{3,2,1}$ are the gauge couplings of the SU(2)$_{\scriptscriptstyle \mathrm{CW}} \times$ SU(3) $\times$ SU(2) $\times$ U(1), \begin{eqnarray} &&(4\pi)^2 \frac{d g_{\scriptscriptstyle \mathrm{CW}}}{d \log \mu}=-\frac{43}{6}g_{\scriptscriptstyle \mathrm{CW}}^3-\frac{1}{(4\pi)^2}\frac{259}{6}g_{\scriptscriptstyle \mathrm{CW}}^5\\ \nonumber\\ &&(4\pi)^2 \frac{d g_3}{d \log \mu}=-7g_3^3 \,,\quad (4\pi)^2 \frac{d g_2}{d \log \mu}=-\frac{19}{6}g_2^3 \,,\quad (4\pi)^2 \frac{d g_1}{d \log \mu}=\frac{41}{10}g_1^3\,, \end{eqnarray} where for the U(1) coupling we use the normalisation $g_1^2 = \frac{5}{3} g_Y^2$. All running couplings are computed in the $\overline{\rm MS}$ scheme and furthermore we use the physical freeze-out condition for the SU(2)$_{\scriptscriptstyle \mathrm{CW}}$ degrees of freedom at the RG scales below their mass shell. 
In other words, the SU(2)$_{\scriptscriptstyle \mathrm{CW}}$ contributions to the $\beta$-functions for $g_{\scriptscriptstyle \mathrm{CW}}$, $\lambda_\phi$ and $\lambda_{\rm P}$ will be set to zero when $\mu<M_{Z'}=\frac{1}{2}g_{\scriptscriptstyle \mathrm{CW}} \langle\phi\rangle.$ \subsection{Standard Model \texorpdfstring{$\times$ SU(2)$_{\bf \scriptscriptstyle \mathrm{CW}}$ $\oplus$}{x SU(2)CWplus} real scalar} RG-equations for the three scalar self-couplings now take the form: \begin{eqnarray} (4\pi)^2 \frac{d \lambda_H}{d \log \mu}&=&-6 y_t^4+24\lambda_H^2+2\lambda_{\rm P}^2 + \frac{1}{2}\lambda_{Hs}^2 \label{lHSU2s} \nonumber\\ &&+ \lambda_H \left(12y_t^2-\frac{9}{5}g_1^2-9g_2^2\right) +\frac{27}{200}g_1^4+\frac{9}{20}g_2^2g_1^2 +\frac{9}{8}g_2^4\\ (4\pi)^2 \frac{d \lambda_\phi}{d \log \mu} &=& 24\lambda_\phi^2 +2\lambda_{\rm P}^2 + \frac{1}{2}\lambda_{\phi s}^2 -9\lambda_\phi \, g_{\scriptscriptstyle \mathrm{CW}}^2 +\frac{9}{8} g_{\scriptscriptstyle \mathrm{CW}}^4\\ (4\pi)^2 \frac{d \lambda_{s}}{d \log \mu}&=& 18 \lambda_s^2+2\lambda_{\phi s}^2+2\lambda_{H s}^2\,, \end{eqnarray} and for the three portal couplings we have, \begin{eqnarray} (4\pi)^2 \frac{d \lambda_{\rm P}}{d \log \mu}&=&\lambda_{\rm P}\left(6y_t^2 +12\lambda_H+12\lambda_\phi -4\lambda_{\rm P} -\frac{9}{2}g_{\scriptscriptstyle \mathrm{CW}}^2-\frac{9}{10}g_1^2-\frac{9}{2}g_2^2\right) - \lambda_{Hs}\lambda_{\phi s} \qquad \quad\\ (4\pi)^2 \frac{d \lambda_{Hs}}{d \log \mu}&=&\lambda_{Hs}\left(6y_t^2 +12\lambda_H+ 6\lambda_s +4\lambda_{\rm Hs} -\frac{9}{10}g_1^2 -\frac{9}{2}g_2^2\right) - 4\lambda_{\rm P}\lambda_{\phi s}\\ (4\pi)^2 \frac{d \lambda_{\phi s}}{d \log \mu}&=&\lambda_{\phi s}\left(12\lambda_{\phi}+ 6\lambda_s +4\lambda_{\rm \phi s} -\frac{9}{2}g_{\scriptscriptstyle \mathrm{CW}}^2 \right) - 4\lambda_{\rm P}\lambda_{H s}\,. 
\end{eqnarray} \subsection{Initial conditions and stability bounds} To solve the RG equations and determine the RG evolution of the couplings of our models, we first need to specify the initial conditions for all the couplings. First, we specify the initial conditions for the SM coupling constants at $M_t$: the initial values of the top Yukawa coupling $y_t$ and the SM gauge couplings are taken from Ref.~\cite{Buttazzo:2013uya}, \begin{eqnarray} y_t(\mu=M_t) &=& 0.93558 +0.00550 \left( \frac{M_t}{\rm GeV}-173.1\right) \nonumber\\ &&-0.00042 \frac{\alpha_3(M_Z)-0.1184}{0.0007} -0.00042\frac{M_W-80.384 {\rm GeV}}{\rm GeV} \pm{0.00050}_{\rm th} \\ \nonumber\\ g_3(\mu=M_t) &=& 1.1666 +0.00314\frac{\alpha_3(M_Z)-0.1184}{0.0007} -0.00046 \left( \frac{M_t}{\rm GeV}-173.1 \right) \\ \nonumber\\ g_2(\mu=M_t) &=& 0.64822 +0.00004 \left( \frac{M_t}{\rm GeV}-173.1 \right)+ 0.00011 \frac{M_W-80.384 {\rm GeV}}{\rm GeV} \\ \nonumber\\ g_1(\mu=M_t) &=& \sqrt{\frac{5}{3}}\left(0.35761 +0.00011 \left( \frac{M_t}{\rm GeV}-173.1 \right)- 0.00021 \frac{M_W-80.384{\rm GeV}}{\rm GeV}\right)\,. \nonumber\\ \end{eqnarray} In our numerical analysis we will always assume the central values for $M_t$ and $M_W$. The CW portal coupling $\lambda_{\rm P}$ and the CW gauge coupling are taken as the two free input parameters specifying the 2-dimensional BSM parameter space of our U(1) or SU(2) $\times$ SM theories. When an additional singlet field $s(x)$ is present, the input parameters also include $\lambda_{Hs}$, $\lambda_{s}$ and $\lambda_{\phi s}$. The input values of the two remaining couplings, the Higgs self-coupling $\lambda_H$, and the self-coupling of the CW scalar, $\lambda_\phi$, are then determined from the value of the SM Higgs mass, and from the CW matching condition \eqref{eq:cwmsbar-P}, respectively. To find $\lambda_H$ we numerically compute the eigenvalues of the mass matrix \eqref{Mmixing} and set $m_{h_1}=125.66$ GeV, as was outlined in eq.~\eqref{SMh1-ident}.
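The fit formulas above are straightforward to evaluate; the sketch below packages them as a function of $(M_t, M_W, \alpha_3)$, returning the couplings at $\mu=M_t$ in our conventions (GUT-normalised $g_1$). The theory error on $y_t$ is ignored here.

```python
import math

def sm_couplings_at_Mt(Mt=173.1, MW=80.384, alpha3=0.1184):
    """Initial SM couplings at mu = M_t from the fit formulas quoted above
    (central values of Buttazzo et al.)."""
    dMt = Mt - 173.1
    da3 = (alpha3 - 0.1184) / 0.0007
    dMW = MW - 80.384
    yt = 0.93558 + 0.00550*dMt - 0.00042*da3 - 0.00042*dMW
    g3 = 1.1666 + 0.00314*da3 - 0.00046*dMt
    g2 = 0.64822 + 0.00004*dMt + 0.00011*dMW
    g1 = math.sqrt(5/3) * (0.35761 + 0.00011*dMt - 0.00021*dMW)
    return yt, g3, g2, g1

yt0, g30, g20, g10 = sm_couplings_at_Mt()   # central values
```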
We then iteratively solve for $\lambda_\phi(\mu=M_t)$ by running it from the top mass scale to $\mu=\left<\phi\right>$ and checking that we fulfil the CW matching relation \eqref{eq:cwmsbar-P} at the latter scale. Having thus specified the initial conditions for all couplings at the low scale, $\mu=M_t$, we run them up to the high scale $\mu=M_{\rm Pl}$ by numerically solving the RG equations. To determine the region of the parameter space where the Higgs potential is stable, we check that the conditions, \[ 4\lambda_H (\mu)\, \lambda_\phi (\mu)\,>\, \lambda_{\rm P}^2 (\mu)\,, \qquad \lambda_H (\mu)\,>\,0\,, \quad {\rm for \,\, all} \,\, \mu\le M_{\rm Pl}\,, \label{treelst} \] arising from the positive definiteness of eq.~\eqref{Vhphi}, are fulfilled. We also check that the model remains perturbative, requiring that all of its scalar couplings remain bounded by an order-one constant all the way to the Planck scale, \[\lambda_i (\mu)\, <\, {\cal O}(1)\,=\,3\,,\] where for concreteness we choose the conservative value of 3 for the upper bound; in practice our results do not depend significantly on this choice. \begin{figure}[t!] \centering \includegraphics[width=0.7\columnwidth]{rge_sm.pdf} \caption{RG evolution in the Standard Model. The Higgs self-coupling turns negative at \mbox{$\mu \gtrsim 10^9$~GeV} thus signalling that the SM Higgs potential becomes unstable below the Planck scale. In this and all other Figures we use $M_t =173.1$ GeV.} \label{Ffig:rge_sm} \end{figure} \section{Higgs Physics: stability and phenomenology} \label{sec:Higgs} \begin{figure}[t!]
\centering \includegraphics[width=0.61\columnwidth]{rg_u1bl.pdf}\\ \includegraphics[width=0.61\columnwidth]{rg_u1bl_s.pdf}\\ \includegraphics[width=0.61\columnwidth]{rg_su2.pdf} \caption{\noindent RG evolution in CSI $\mathrm{E}$SM theories with (a) $\mathrm{E}= \mathrm{U}(1)_{\bf \scriptscriptstyle B-L}$, (b) $\mathrm{E}=\mathrm{U}(1)_{\bf \scriptscriptstyle B-L} + s(x),$ and (c) $\mathrm{E}=\mathrm{SU}(2)_{\scriptscriptstyle \mathrm{CW}}$. With these initial conditions the Higgs coupling $\lambda_H$ stays positive and satisfies the tree-level stability bound \eqref{treelst}.}\label{fig:3runs} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=.75\columnwidth]{u1_stability.pdf} \caption{Parameter space in the minimal U(1)$_{\scriptscriptstyle \mathrm{CW}}\,\times$ \!SM classically scale invariant theory. The black wedge-shaped contour shows the region of the $(\lambda_{\rm P}, e_{\scriptscriptstyle \mathrm{CW}})$ parameter space of the model where the Higgs potential is stabilised. The dotted lines represent contours of fixed values $\sin^2 \theta = $ 0.05, 0.1 and 0.2 of the Higgs mixing angle. Finally, the colour-coding indicates the mass of the second scalar $h_2$ in GeV.} \label{U1Stab} \end{figure} It is well known that in the Standard Model the Higgs self-coupling becomes negative at \mbox{$\mu \sim 10^9$~GeV}, making the SM Higgs potential unstable below the Planck scale \cite{Degrassi:2012ry,Buttazzo:2013uya} (see also~\cite{Sher:1988mj,Lindner:1985uk} for a review of earlier work). This effect can be seen in fig.~\ref{Ffig:rge_sm} which shows the solution of the RG equations in the limit where all Higgs portal interactions are switched off. For our classically scale-invariant extensions of the SM to be meaningful, natural theories valid all the way up to the Planck scale, the Higgs potential has to be stabilised.\footnote{In this paper we will concentrate on the more conservative case of absolute stability.
Another phenomenologically acceptable possibility analysed recently in \cite{Buttazzo:2013uya} is that the SM vacuum is metastable, with a lifetime much greater than the age of the Universe. In that case one would also have to argue why after reheating the Universe ended up in the metastable vacuum near the origin, for example following the approach of \cite{Abel:2006cr}.} There are two mechanisms, both relying on the Higgs portal interactions, to achieve this: \begin{enumerate} \item The SM Higgs is the mixed mass eigenstate $h_1$ between $H$ and the CW scalar as dictated by eq.~\eqref{SMh1-ident}. As we explained at the end of section {\bf \ref{sec2:U1CW}} in the case where the second scalar is heavier than the Higgs, $m_{h_2} > m_{h_1}$, the initial value of the Higgs self-coupling $\lambda_H$ is larger than in the SM, cf.~eq.~\eqref{lSM-}, and this helps with the Higgs stabilisation \cite{Lebedev:2012zw,EliasMiro:2012ay,Hambye:2013dgv}. \item The portal couplings of other scalars to the Higgs, such as $\lambda_{P}$ and $\lambda_{Hs}$ contribute positively to the beta function of $\lambda_H$ as can be seen e.g. from the RG equation \eqref{lHSU2s} in the SU(2)$_{\scriptscriptstyle \mathrm{CW}}$ + scalar case, where $\beta_{\lambda_H} \ni 2\lambda_{\rm P}^2 + \frac{1}{2}\lambda_{Hs}^2$. This effect (in particular due to the otherwise unconstrained but still perturbative $\lambda_{Hs}$ coupling) will be instrumental in achieving the Higgs stability in models with an extra scalar, \cite{Gonderinger:2009jp,Lebedev:2011aq}. \end{enumerate} Examples of RG running for some specific input values of parameters for three different classes of models which result in stable Higgs potential are shown in fig.~\ref{fig:3runs} where cases (a) and (c) give an example of mechanism (1.) and the model with an additional scalar in case (b) is a representative of mechanism (2.) at work. 
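The qualitative effect of mechanism (2.) can be seen in a deliberately stripped-down toy: a single Euler-integrated beta function for $\lambda_H$ with $y_t$ frozen and all gauge terms dropped, plus the constant positive portal shift $2\lambda_{\rm P}^2+\tfrac{1}{2}\lambda_{Hs}^2$. This is purely illustrative of the sign of the effect; the actual analysis of course solves the complete coupled system.

```python
import math

SIXTEEN_PI2 = 16 * math.pi**2

def run_lambda_H(lam0, shift, yt=0.94, t_end=35.0, dt=0.01):
    """Toy running of lambda_H in t = log(mu/M_t): SM-like terms with
    frozen y_t and no gauge backreaction, plus a constant portal shift."""
    lam, t = lam0, 0.0
    while t < t_end:
        beta = (-6*yt**4 + 24*lam**2 + 12*lam*yt**2 + shift) / SIXTEEN_PI2
        lam += dt * beta
        t += dt
    return lam

no_portal   = run_lambda_H(0.13, shift=0.0)
with_portal = run_lambda_H(0.13, shift=2*0.0**2 + 0.5*0.35**2)
# the positive shift can only raise lambda_H along the flow
assert with_portal > no_portal
```

In this toy both trajectories eventually turn negative because the stabilising gauge contributions are dropped; the point is only that the portal shift pushes $\lambda_H$ upwards everywhere along the flow.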
In the rest of this section we will quantify the regions of the parameter spaces of the individual models where the scalar potential is stabilised. We will also combine these considerations with the current LHC limits applied to the extended Higgs sectors of our Higgs portal theories on a model-by-model basis. \subsection{CSI \texorpdfstring{U(1)${_{\bf \scriptscriptstyle CW}}\,\times$ \!SM}{U(1)CWxSM}} \label{sec:4.1} In this theory the mechanism (1.) is operational for stabilising the Higgs potential in a region of the 2-dimensional parameter space of the model described by $\lambda_{\rm P}$ and the CW gauge coupling. As shown in fig.~\ref{U1Stab} we get a wedge-shaped region, bounded by the black contour, inside which the Higgs potential is stable. Higgs stabilisation in this region can be traced to the initial value of $\lambda_H$ being enhanced compared to the SM due to mixing between $h$ and the CW scalar field. The wedge shape can be understood as follows. The upper edge of the wedge follows the mass contour $m_{h_2}=m_{h_1}$, since the enhancement of the initial value of $\lambda_H$ only happens when $m_{h_2}>m_{h_1}$, see~\eqref{lSM-}. The mechanism is only effective when the two masses are not too far from each other (cf.~the denominator of the second term in eq.~\eqref{lSM-}); the lower contour of the wedge signifies where the mass difference becomes too large. The effect is enhanced when the off-diagonal element is larger, as we get more mixing. This explains why the stability wedge in fig.~\ref{U1Stab} is wider for larger values of~$\lambda_{\rm P}$. We find an upper limit of $e_{\scriptscriptstyle \mathrm{CW}}\approx 0.9$, since for larger values a Landau pole appears before the Planck scale. Higgs sector phenomenology of this model in the context of LHC and LEP, future colliders and low energy measurements was analysed recently in \cite{Englert:2013gz}.
In particular, it was shown there that on the part of the parameter space where the second scalar is light, \mbox{$10^{-4}$ GeV $<\, m_{h_2}\,<\, m_{h_1}/2$}, the presently available Higgs data (and specifically the limits on invisible Higgs decays) constrain the model quite tightly by placing an upper limit on the portal coupling of $\lambda_{\rm P} \lesssim 10^{-5}$. However, from fig.~\ref{U1Stab} we see that the Higgs stability in the minimal model (and more generally in all portal models without an additional scalar $s(x)$, i.e. relying on the stabilisation mechanism (1.)) requires the second scalar to be heavier than the SM Higgs, $m_{h_2} \,>\, m_{h_1}$ (see also figs.~\ref{U1BLStab}, \ref{SU2Stab}). Thus Higgs stability pushes these models into the region of the parameter space with the heavier second scalar, precisely where the collider limits on invisible Higgs decays and on the non-observation of other Higgs-like states are much less stringent. Collider limits which do constrain the stability region in fig.~\ref{U1Stab} are the exclusion limits on the heavier Higgs production normalised to the expected SM cross-section at this Higgs mass. In all Higgs portal models we consider in this paper, the expected cross-section for the $h_2$ scalar is given by the SM cross-section times $\sin^2 \theta$, where $\theta$ is the mixing angle. With the currently available ATLAS and CMS data for the search for the heavier Higgs boson at an integrated luminosity of up to 5.1 fb$^{-1}$ at $\sqrt{s}=7$ TeV and up to 5.3 fb$^{-1}$ at $\sqrt{s}=8$ TeV, the observed signal strength in units of the SM cross-section for the heavier Higgs is roughly at the level of $10^{-1}$, or slightly above, as can be seen from plots in \cite{ATLASsc, CMSWWZZ, CMSZZ4l}. This gives an upper limit on the mixing angle $\sin^2 \theta \lesssim 0.1$. The contours of constant values of $\sin^2 \theta =0.05,\, 0.1$ and 0.2 are shown in fig.~\ref{U1Stab} as dotted lines.
As we can see for $\sin^2 \theta \lesssim 0.1$ there is no overlap left between what is allowed by the collider limits and what is consistent with the Higgs stability in this model. We thus conclude that the combination of the Higgs potential stabilisation and the LHC limits on the heavier Higgs essentially rule out the minimal U(1)$_{\scriptscriptstyle \mathrm{CW}}\times$ SM theory. This conclusion is based on the one-loop RG analysis, on the methodology we adopted for the selection of initial values, and on the use of the central value for the top mass. As such there is an intrinsic theoretical uncertainty in the exact position of the wedge. By lowering the top mass from its central value by 1 GeV, the wedge in fig.~\ref{U1Stab} would touch the $\sin^2 \theta = 0.1$ contour making the model viable in the limited corner of the parameter space. Instead, to get a stable viable model with the current central value of the top mass and without relying upon the sub-leading RG effects, we will simply extend the theory by adding a singlet $s(x)$ in sections {\bf \ref{sec:4.3}}, {\bf \ref{sec:4.5}}. \subsection{CSI \texorpdfstring{U(1)${_{\bf \scriptscriptstyle B-L}}\,\times$ \!SM}{U(1) B-LxSM}} \label{sec:4.2} \begin{figure}[] \centering \includegraphics[width=.75\columnwidth]{u1_bl_stability.pdf} \caption{Parameter space of the U(1)$_{\bf \scriptscriptstyle B-L}\times$ SM theory showing the region where the Higgs potential is stabilised and the $\sin^2 \theta$ contours. The legend is the same as in Figure \ref{U1Stab}.} \label{U1BLStab} \end{figure} One way to extend the minimal model is to allow for interactions of the hidden sector with the SM fermions. As we have seen already, a simple implementation of this idea is described by the U(1)$_{\bf \scriptscriptstyle B-L}\times \mathrm{SM}$ classically scale invariant theory. 
We proceed to solve the RG equations in this model and search for the region of the parameter space where the scalar potential is stable, with the results shown in fig.~\ref{U1BLStab}. The stability region in fig.~\ref{U1BLStab} is shorter along the horizontal $e_{\bf \scriptscriptstyle B-L}$--direction than in the minimal CW model of fig.~\ref{U1Stab}. This is caused by the running of the $\mathrm{B-L}$ gauge coupling being steeper than in the minimal U(1)$_{\scriptscriptstyle \mathrm{CW}}\times$ SM theory, due to the SM quarks and leptons which are now charged under the U(1)$_{\bf \scriptscriptstyle B-L}$ gauge group. We therefore get a Landau pole before the Planck scale if $e_{\bf \scriptscriptstyle B-L}(\mu=M_t)\gtrsim 0.35$, and this shortens the allowed region. The width of the stability wedge reflects the fact that in the $\mathrm{B-L}$ model the CW scalar $\phi$ has charge two. As a result one would expect that the width of the $\mathrm{B-L}$ model stability region for a fixed value of the gauge coupling, say at $e_{\bf \scriptscriptstyle B-L}=0.3$, should be of similar size to that of the pure U(1) CW sector at twice the value of the coupling, i.e. at $e_{\scriptscriptstyle \mathrm{CW}}=0.6$, which is indeed the case. Collider exclusion limits of $\sin^2 \theta \lesssim 0.1$ are indicated in fig.~\ref{U1BLStab} as before by the dotted lines showing contours of constant $\sin^2 \theta =0.05,\, 0.1$ and 0.2. We see that the combination of the Higgs potential stabilisation and the LHC limits on the heavier Higgs also rules out the U(1)$_{\bf \scriptscriptstyle B-L}\times$ SM theory without an additional singlet. In the $U(1)_{\bf \scriptscriptstyle B-L}$ model we also have a $Z'$ boson which couples to the Standard Model fermions. The ATLAS and CMS experiments give lower limits for $M_{Z'}$ of about 3 TeV \cite{ATLAS-C017,CMS-061}.
This implies, \[ M_{Z'}=2e_{\bf \scriptscriptstyle B-L}\left<\phi\right>=2e_{\bf \scriptscriptstyle B-L}\sqrt{\frac{2\lambda_H}{\lambda_{\rm P}}}v\,, \] and therefore \[ \sqrt{\lambda_{\rm P}}\, <\, \frac{2v\sqrt{2\lambda_H}}{3\, {\rm TeV}}e_{\bf \scriptscriptstyle B-L}\,\,\implies \,\,\lambda_{\rm P}\lesssim (0.1\, e_{\bf \scriptscriptstyle B-L})^2\,. \] For $e_{\bf \scriptscriptstyle B-L}=0.35$ we find that $\lambda_{\rm P}\lesssim 10^{-3}$, which is clearly outside the stability wedge of the $\mathrm{B-L}$ model. Therefore Higgs stabilisation in the minimal U(1)$_{\bf \scriptscriptstyle B-L}\times\mathrm{SM}$ theory is also not compatible with the collider limits on $Z'$. \subsection{CSI \texorpdfstring{U(1)${_{\bf \scriptscriptstyle B-L}}\,\times$ \!SM $\oplus$}{U(1) B-LxSM plus} singlet} \label{sec:4.3} When we add a real scalar $s(x)$ to the U(1)$_{\scriptscriptstyle \mathrm{CW}}$ or U(1)$_{\bf \scriptscriptstyle B-L} \times$ SM theory, the scalar potential is stabilised by the mechanism (2.) which relies on the positive shift in the $\beta$-function for~$\lambda_H$, \[ \beta_{\lambda_H} \,\ni\, + \frac{\lambda_{Hs}^2}{2}\,. \label{bhs} \] We have checked that the stabilisation occurs on the entire $(\lambda_{\rm P}, e)$ 2d parameter space for values of $\lambda_{Hs} \sim 0.34$ or above, as can be seen from the left table in Table \ref{U1BLSStab}.
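The numerical content of the $Z'$ bound above is easy to reproduce. A sketch, using the relations just quoted ($\lambda_H \approx 0.13$ is an illustrative tree-level value):

```python
import math

def lamP_bound(e_bl, v=246.0, lam_H=0.13, MZp_min=3000.0):
    """Upper bound on lambda_P implied by M_Z' = 2 e_BL sqrt(2 lam_H/lam_P) v
    exceeding the collider limit MZp_min (in GeV), as in the text."""
    sqrt_lamP_max = 2 * v * math.sqrt(2 * lam_H) * e_bl / MZp_min
    return sqrt_lamP_max ** 2

bound = lamP_bound(0.35)
# for e_BL = 0.35 this is of order 10^-3, as stated in the text
```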
\begin{table}[t] \small \centering {% \hspace{.5cm}% \begin{tabular}{c c c} \hline\hline $\lambda_{\rm P}$ & $e_{\bf \scriptscriptstyle B-L}$ & $\lambda_{Hs}$ \\ [0.5ex] \hline $10^{-5}$&0.1&0.34\\ $10^{-5}$&0.2&0.34\\ $10^{-5}$&0.3&0.33\\ 0.0001&0.1&0.35\\ 0.0001&0.2&0.34\\ 0.0001&0.3&0.33\\ 0.001&0.1&0.35\\ 0.001&0.2&0.29\\ 0.001&0.3&0.33\\ \hline \end{tabular} \hspace{.5cm}% }\hspace{1cm} {% \hspace{.5cm}% \begin{tabular}{c c c } \hline\hline $\lambda_{\rm P}$ & $g_{\scriptscriptstyle \mathrm{CW}}$ & $\lambda_{Hs}$ \\ [0.5ex] \hline $10^{-5}$&0.8&0.35\\ $10^{-5}$&1.4&0.35\\ $10^{-5}$&2.0&0.35\\ 0.0001&0.8&0.35\\ 0.0001&1.4&0.35\\ 0.0001&2.0&0.35\\ 0.001&0.8&0.34\\ 0.001&1.4&0.35\\ 0.001&2.0&0.35\\ \hline \end{tabular} \hspace{.5cm}% } \caption{Minimal values of $\lambda_{Hs}$ needed to stabilise the Higgs potential in the CSI ESM $\oplus$ singlet models with $\lambda_s=0.1$ and $\lambda_{\phi s}= 0.01$. Left Table: U(1)$_{\bf \scriptscriptstyle B-L}$. Right Table: SU(2)$_{\scriptscriptstyle \mathrm{CW}}$.} \label{U1BLSStab} \end{table} \subsection{CSI \texorpdfstring{SU(2)$_{\bf \scriptscriptstyle \mathrm{CW}}\,\times$ \!SM}{SU(2)CWxSM}} \label{sec:4.4} Solving RG equations in the non-Abelian CW theory coupled to the SM, gives the Higgs stability region shown in fig.~\ref{SU2Stab} together with the $\sin^2 \theta$ exclusion contours. The stability wedge is now shifted to larger values of $g_{\scriptscriptstyle \mathrm{CW}}$ as $\phi$ has an equivalent charge of $1/2$. From fig.~\ref{SU2Stab} we conclude that the combination of the Higgs potential stabilisation and the LHC limits on the heavier Higgs leaves a small corner of the parameter space available in the minimal SU(2)$_{\scriptscriptstyle \mathrm{CW}}\times$ SM theory. \begin{figure}[t!] 
\centering \includegraphics[width=.75\columnwidth]{su2_stability.pdf} \caption{Parameter space of the SU(2)$_{\scriptscriptstyle \mathrm{CW}}\,\times$ \!SM theory showing the region where the Higgs potential is stabilised and the $\sin^2 \theta$ contours. The legend is the same as in Figure~\ref{U1Stab}.} \label{SU2Stab} \end{figure} \subsection{CSI \texorpdfstring{SU(2)$_{\bf \scriptscriptstyle \mathrm{CW}}\,\times$ \!SM~$\oplus$}{SU(2)CWxSMplus} singlet} \label{sec:4.5} The Higgs potential in the SU(2)$_{\scriptscriptstyle \mathrm{CW}}\times$ SM theory can be stabilised on the entire 2d plane $(\lambda_{\rm P}, g_{\scriptscriptstyle \mathrm{CW}})$ by extending the model with a vev-less singlet $s(x)$ portally coupled to the Higgs, as in~eq.~\eqref{bhs}. The right table in Table \ref{U1BLSStab} shows the critical value of $\lambda_{Hs}$ for this stabilisation mechanism to work in the CSI SU(2)$_{\bf \scriptscriptstyle \mathrm{CW}}\times \mathrm{SM}$ $\oplus$ singlet model. Before we conclude this section we would like to make a comment. We have shown that the minimal Higgs portal models without an additional scalar are largely ruled out by the combination of Higgs (in)stability and the LHC constraints (except for a small region of the parameter space still available in the non-Abelian model). At the same time we showed that if these models include an additional scalar field with a portal coupling $\lambda_{Hs} \sim 0.35$, the Higgs stability restrictions are completely lifted and the models are viable. The question arises whether this conclusion would also apply to models without an additional scalar, but instead with the Higgs-CW portal coupling being relatively large, $\lambda_{\rm P} \sim 0.3$, so that $\beta_{\lambda_H}$ would instead receive a positive contribution from $2 \lambda_{\rm P}^2$. This approach would not work for the following reason.
In order to avoid a large mixing angle, $\sin^2\theta>0.1$, we require in this case that the second scalar is quite heavy, $m_{h_2}>300$ GeV. This in turn requires a large CW gauge coupling of $g_{\scriptscriptstyle \mathrm{CW}} \approx 3.5$. Such a large gauge coupling leads to a large value for $\lambda_\phi$ at the scale of $\langle\phi\rangle$, and $\lambda_\phi$ therefore develops a Landau pole already at low scales. \section{Dark Matter Physics: relic abundance and constraints} \label{sec:DM} Having demonstrated that the Higgs sector can be stabilised and that it is in agreement with all current observations, we now show that this framework can accommodate the observed dark matter content of the Universe. In the scenarios that we have studied, there are two potential dark matter candidates. The first candidate is the vector dark matter \cite{Hambye:2008bq,Hambye:2009fg,Arina:2009uq} given by the triplet of gauge bosons $Z'_i$ of the SU$(2)_{\scriptscriptstyle \mathrm{CW}}$ sector and considered recently in \cite{Hambye:2013dgv,Carone:2013wla}. These particles have the same mass $M_{Z'}$ and are stable because of an unbroken global $SO(3)$ `custodial symmetry', which also ensures that each component has the same relic abundance.
The second candidate is the singlet scalar particle $s$ coupled to the Higgs through the Higgs portal.\footnote{Magnetic monopoles are also a possible third dark matter candidate~\cite{Evslin:2012fe}; in this work we ignore this possibility.} This is a much studied dark matter candidate \cite{Silveira:1985rk,S2,S3,Mambrini:2011ik,Djouadi:2011aa,Low:2011kp,Cheung:2012xb,Cline:2013gha} that is stable because of a $Z_2$ symmetry of the classically scale-invariant $\mathrm{SM}\times$G$_{\scriptscriptstyle \mathrm{CW}}$ theory with the real singlet \cite{Khoze:2013uia}.\footnote{The $s \to -s$ symmetry of the potential eq.~\eqref{potentialcoupled3} is an automatic consequence of scale-invariance and gauge invariance, which does not allow odd powers of $H$ and $\Phi$.} Having argued that the vector triplet and scalar particles are stable and therefore potential dark matter candidates, we must calculate the relic abundance in order to show that they can saturate, or form a component of, the observed dark matter abundance, for which we take $\Omega_{\rm{DM}} h^2=0.1187\pm0.0017$, the value inferred from Planck+WP+HighL+BAO data~\cite{Ade:2013zuv}. Owing to their sizeable couplings to the Standard Model particles, the scalar and vector dark matter components are in thermal equilibrium with the Standard Model degrees of freedom in the early Universe. Their abundance is therefore determined by the thermal freeze-out mechanism.
To calculate it, we must solve the Boltzmann equation, which is~\cite{Gondolo:1990dk, D'Eramo:2010ep}, \begin{equation} \frac{d n_i}{dt}+3Hn_i=-\langle\sigma_{ii}v\rangle\left(n_i^2-n_i^{\rm{eq}\,2} \right)-\sum_{j,k}\langle\sigma_{ijk}v\rangle\left(n_i n_j-\frac{n_k}{n_k^{\rm{eq}}}n_i^{\rm{eq}}n_j^{\rm{eq}}\right)\;, \end{equation} where $n_i$ is the number density of one component $\chi_i$ of the dark matter abundance, $\langle\sigma_{ii}v\rangle$ is the usual annihilation cross-section term for reactions of the form $\chi_i \chi_i\to X X$, where $X$ is a particle in equilibrium with the thermal bath, and $\langle\sigma_{ijk}v\rangle$ is the cross-section for the semi-annihilation reaction $\chi_i \chi_j\to \chi_k X$. \subsection{Vector dark matter} \label{sec:vDM} \begin{figure}[t!] \centering \includegraphics[width=0.99 \columnwidth]{semiann2.pdf} \includegraphics[width=0.99 \columnwidth]{ann2.pdf} \caption{ The upper three diagrams show the process $Z'_i Z'_j\to Z'_k h_2$, which is the dominant contribution to the semi-annihilation cross-section. The process $Z'_i Z'_j\to Z'_k h_1$ also occurs but is suppressed by $\tan^2\theta$. The lower four diagrams show the processes that dominate the annihilation of $Z'_i Z'_i$. Other diagrams are suppressed by at least one power of $\sin\theta$ or $\lambda_{\rm{P}}$.} \label{fig:semiann} \end{figure} We first consider the case of vector dark matter only, which is similar to Hambye's model~\cite{Hambye:2008bq} except that here there are no explicit $\mu$ terms. This model is interesting as it was the first example of a model containing both annihilation and semi-annihilation processes, as shown in fig.~\ref{fig:semiann}. The annihilation cross-section is dominated by the lower four diagrams of fig.~\ref{fig:semiann}, which contribute to the process $Z'_i Z'_i \to h_2 h_2$.
The leading order terms contributing to the non-relativistic (s-wave) cross-section from these diagrams are \begin{equation} \label{eq:xsecij} \langle\sigma_{ii} v\rangle=\frac{11g_{\scriptscriptstyle \mathrm{CW}}^4-60 g_{\scriptscriptstyle \mathrm{CW}}^2 \lambda_{\phi}+108 \lambda_{\phi}^2}{2304\pi}\,\frac{ \cos^4\theta}{ M^2_{Z'}}+\mathcal{O}\left(\frac{m^2_{h_2}}{M^2_{Z'}},\sin\theta,\lambda_{\rm{P}}\right)\;. \end{equation} In our numerical work, we include all sub-leading terms in this cross-section as well as including the contributions from $Z'_i Z'_i\to h_1 h_1$, $Z'_i Z'_i\to\bar{f}f$, $Z'_i Z'_i\to W^{+}W^{-}$ and $Z'_i Z'_i\to Z^{0}Z^{0}$, all of which are suppressed by at least one power of $\sin\theta$ or $\lambda_{\rm{P}}$. The diagrams that contribute to the semi-annihilation process are shown by the upper three diagrams in fig.~\ref{fig:semiann}. In the non-relativistic limit, the (s-wave) cross-section for $Z'_i Z'_j\to Z'_k h_2$ is \begin{equation} \label{eq:xsecijk} \langle\sigma_{ijk} v\rangle=\frac{3 g_{\scriptscriptstyle \mathrm{CW}}^4}{128 \pi}\frac{\cos^2\theta}{ M_{Z'}^2}\left(1-\frac{m_{h_2}^2}{3 M_{Z'}^2}\right)^{-2}\left(1-\frac{10 m_{h_2}^2}{9 M_{Z'}^2} +\frac{m_{h_2}^4}{9 M_{Z'}^4}\right)^{3/2}\;. \end{equation} There is also a subdominant process $Z'_i Z'_j\to Z'_k h_1$ whose cross-section is obtained from eq.~\eqref{eq:xsecijk} by substituting $m_{h_2}\to m_{h_1}$ and $\cos\theta\to\sin\theta$. For completeness, we include this in our numerical work. Comparing eqs.~\eqref{eq:xsecij} and \eqref{eq:xsecijk}, we observe that $\langle\sigma_{ijk} v\rangle\sim5\langle\sigma_{ii} v\rangle$ so the semi-annihilation processes dominate. The global custodial symmetry ensures that the vector triplet is degenerate in mass and each $Z'_i$ contributes one-third to the relic abundance.
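The factor of $\sim5$ between the two s-wave cross-sections can be verified directly. The short Python sketch below (with illustrative inputs, taking the limit $\theta\to0$, $\lambda_\phi\to0$, $m_{h_2}\ll M_{Z'}$) evaluates the two formulas above:

```python
import math

g_cw = 2.0     # hidden-sector gauge coupling (illustrative)
M = 1.0        # M_Z' in arbitrary units; it cancels in the ratio
lam_phi = 0.0  # CW scalar self-coupling, set to zero for the leading estimate
cos_t = 1.0    # cos(theta) -> 1 for small mixing
r = 0.0        # m_h2^2 / M_Z'^2 -> 0

# s-wave annihilation cross-section for Z'_i Z'_i -> h2 h2
sigma_ann = ((11*g_cw**4 - 60*g_cw**2*lam_phi + 108*lam_phi**2)
             / (2304*math.pi)) * cos_t**4 / M**2

# s-wave semi-annihilation cross-section for Z'_i Z'_j -> Z'_k h2
sigma_semi = (3*g_cw**4 / (128*math.pi)) * cos_t**2 / M**2 \
             * (1 - r/3)**(-2) * (1 - 10*r/9 + r**2/9)**1.5

print(sigma_semi / sigma_ann)  # 54/11 ~ 4.9: semi-annihilation dominates
```

In this limit the ratio is exactly $3\cdot2304/(128\cdot11)=54/11\approx4.9$, reproducing the factor of $\sim5$ quoted above.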
That is, the total abundance $n_{Z'}$ is related to the individual components by $n_{Z'}=3 n_{Z'_1}=3 n_{Z'_2}=3 n_{Z'_3}.$ It should also be clear that $\langle\sigma_{11}v\rangle=\langle\sigma_{22}v\rangle=\langle\sigma_{33}v\rangle:=\langle\sigma v\rangle_{\rm{ann}}$ and $\langle\sigma_{123}v\rangle=\langle\sigma_{132}v\rangle=\langle\sigma_{213}v\rangle=\langle\sigma_{231}v\rangle=\langle\sigma_{312}v\rangle=\langle\sigma_{321}v\rangle:=\langle\sigma v\rangle_{\rm{semi-ann}}$. Therefore, the Boltzmann equation for the total abundance is \begin{equation} \label{eq:Boltzvect} \frac{d n_{Z'}}{dt}+3H n_{Z'}=-\frac{\langle\sigma v\rangle_{\rm{ann}}}{3}\left(n_{Z'}^2-n_{Z'}^{\rm{eq}\,2} \right)-\frac{2\langle\sigma v\rangle_{\rm{semi-ann}}}{3}n_{Z'}\left(n_{Z'} - n_{Z'}^{\rm{eq}}\right)\;. \end{equation} We solve this equation numerically by the method outlined in~\cite{Belanger:2012vp}. \begin{figure}[t!] \centering \includegraphics[width=0.49\columnwidth]{su2_dm.pdf}\hspace{2mm}\includegraphics[width=0.49 \columnwidth]{su2_dm_ma_mh2.pdf} \caption{ The coloured contours in both panels indicate where the vector triplet forms less than 100\%, 10\% and 1\% of the observed dark matter abundance, while the wedge-shaped black regions indicate the parameter values where the Higgs potential is stabilised. Also shown in the left panel are the LUX and projected LZ limits (the region above these lines is excluded), which account for the fact that the dark matter is a subcomponent of the total density in much of the parameter space, and the limit $\sin^2\theta=0.1$. The right panel shows that the vector mass should lie between 500~GeV and 1~TeV to improve Higgs stability.} \label{fig:vector_relic} \end{figure} The coloured regions in the left and right panels of fig.~\ref{fig:vector_relic} show the total relic abundance of the vector triplet as a fraction of the observed abundance.
For instance, in the lower left (blue) part of the left panel, the abundance exceeds the observed value and is therefore excluded. The thick black wedge indicates the region where the Higgs potential is stabilised up to the Planck scale (as in fig.~\ref{SU2Stab}). We see that for most of the wedge, the vector triplet contributes between 1\% and 100\% of the total dark matter abundance. However, when we combine this with the LHC constraint on $\sin^2 \theta$, we see from fig.~\ref{fig:vector_relic} that the vector dark matter component contributes less than 10\% to the total relic abundance, and we need to add another dark matter component. The right panel in fig.~\ref{fig:vector_relic} shows the dark matter fraction as a function of $M_{Z'}$ and $m_{h_2}$. To lie within the Higgs vacuum stability wedge, we see that $M_{Z'}$ must lie between 500~GeV and 1000~GeV. Also shown on the left panel are the current direct detection constraints from LUX~\cite{Akerib:2013tjd} and the projected limits from LZ~\cite{Cushman:2013zza}. At a direct detection experiment, a vector $Z'_i$ can elastically scatter with a nucleon $N$ via exchange of $h_1$ or $h_2$. The resulting spin-independent scattering cross-section for this to occur is \begin{equation} \sigma_{N}^{\rm{SI}}=\frac{g_{\scriptscriptstyle \mathrm{CW}}^2 \sin^2 2\theta }{16 \pi}\frac{f_N^2 m_{N}^2 \mu_{\rm{red}}^2}{v^2}\left(\frac{1}{m_{h_2}^2}-\frac{1}{m_{h_1}^2} \right)^2\;, \end{equation} where $f_N:=\bra{N}\sum_q m_q \bar{q}q \ket{N}/m_N \approx 0.295$ is the Higgs-nucleon coupling~\cite{Cheng:2012qr}, $m_N$ is the nucleon mass and $\mu_{\rm{red}}$ is the vector-nucleon reduced mass. When setting a limit from the experimental data, we account for the fact that the vector triplet forms a subcomponent of the total dark matter density over much of the parameter space of interest.
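For orientation, the spin-independent cross-section above is easily evaluated numerically. The Python sketch below uses illustrative parameter values (all inputs are assumptions for this estimate, not fitted points from our scans) and converts from natural units with $1~\text{GeV}^{-2}\simeq3.894\times10^{-28}\,\text{cm}^2$:

```python
import math

# Illustrative inputs (assumed values, GeV / natural units)
g_cw = 2.0
sin2_theta = 0.05            # sin^2(theta), below the LHC bound of 0.1
m_h1, m_h2 = 125.0, 500.0    # Higgs masses [GeV]
v = 246.0                    # electroweak vev [GeV]
f_N, m_N = 0.295, 0.939      # Higgs-nucleon coupling and nucleon mass [GeV]
M_Zp = 700.0                 # vector dark matter mass [GeV]

mu_red = m_N*M_Zp/(m_N + M_Zp)            # vector-nucleon reduced mass
theta = math.asin(math.sqrt(sin2_theta))
sin_2t = math.sin(2*theta)

sigma_nat = (g_cw**2 * sin_2t**2 / (16*math.pi)
             * f_N**2 * m_N**2 * mu_red**2 / v**2
             * (1/m_h2**2 - 1/m_h1**2)**2)   # [GeV^-2]
sigma_cm2 = sigma_nat * 3.894e-28            # convert to cm^2

# rescaled by an assumed relic fraction when the triplet is a subcomponent
omega_frac = 0.1
print(sigma_cm2, sigma_cm2 * omega_frac)
```

For these inputs the unscaled cross-section comes out at the $10^{-44}\,\text{cm}^2$ level; the quantity to compare against the published limits is the rescaled one.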
We make a scaling ansatz that the fraction of the local dark matter density $\rho_{Z'}/\rho_{\rm{DM}}$ is the same as the fraction of the dark matter relic abundance $\Omega_{Z'}/\Omega_{\rm{DM}}$. The limits from LUX and LZ after taking into account this scaling are shown in fig.~\ref{fig:vector_relic} by the lines with the appropriate label. In the left panel, the region above and to the left of the lines is excluded. We have also checked that the LUX exclusion limit, when applied to the right panel, excludes the entire lower island. Therefore, while the current LUX limits do not constrain the region where the Higgs potential is stabilised, the projected LZ limit excludes all of this region. \subsection{Singlet scalar dark matter} \label{sec:sDM} \begin{figure}[t!] \centering \includegraphics[width=1.0 \columnwidth]{sstohh.pdf} \caption{ The leading contributions to the scalar annihilation cross-section $\langle\sigma v\rangle_{\rm{s,ann}}$. Other diagrams are suppressed by at least one power of $\sin\theta$.} \label{fig:scalann} \end{figure} We have previously motivated the introduction of a real singlet scalar field to allow the Higgs potential to be stabilised over a much larger range of the parameter space. The possibility of saturating the observed dark matter abundance provides a second motivation. The two examples of CSI ESM with a U$(1)$ Coleman-Weinberg sector that we have considered in sections {\bf \ref{sec:4.1}} and {\bf \ref{sec:4.2}} do not have a dark matter candidate. This is because the U(1)$_{\scriptscriptstyle \mathrm{CW}}$ gauge boson is unstable, owing to its kinetic mixing with hypercharge, and the only scalar field present, $\phi_{\scriptscriptstyle \mathrm{CW}},$ mixes with the SM Higgs.
The SU(2)$_{\scriptscriptstyle \mathrm{CW}}$ sector does have a stable component in the form of the $Z'_i$ triplet, but we have seen (cf.~left panel in fig.~\ref{fig:vector_relic}) that after LHC constraints have been taken into account, the vector triplet forms only a sub-component of the total dark matter abundance in the region where the Higgs potential is stabilised. Therefore, in the case of an SU$(2)$ extended Standard Model, an additional dark matter component is also required. Having motivated the singlet scalar as a dark matter candidate, we first study the case where the singlet forms all of the dark matter (as required in the U(1) case) before turning to the case where it forms a sub-component (as required in the SU(2) case). \begin{figure}[t!] \centering \includegraphics[width=0.75\columnwidth]{scalar_dm.pdf} \caption{Scalar dark matter $(m_s,\lambda_{Hs})$ plane in the CSI U(1)$_{\bf \scriptscriptstyle B-L}\,\times$ \!SM $\oplus$ singlet model. The solid lines show the fraction of the total DM density that the scalar singlet makes up. The dotted lines show the direct detection constraints from LUX and the projected limits from LZ. In the shaded region the extra singlet does not stabilise the Higgs potential.} \label{fig:scal_omega} \end{figure} In the CSI U(1)${_{\bf \scriptscriptstyle B-L}}\times$ SM $\oplus$ singlet model, the ATLAS and CMS limit that \mbox{$M_{Z'}\gtrsim3$~TeV} implies that $\lambda_{\rm{P}}$, and therefore $\sin\theta$, is small. As a result, the diagrams that dominantly contribute to the total annihilation cross-section $\langle\sigma v\rangle_{\rm{s,ann}}$ are those shown in fig.~\ref{fig:scalann}. The $Z_2$ symmetry of this theory ensures that all semi-annihilation processes vanish, so that the Boltzmann equation describing the evolution of the scalar number density $n_s$ is the usual one: \begin{equation} \label{eq:Boltzscal} \frac{d n_{s}}{dt}+3H n_{s}=-\langle\sigma v\rangle_{\rm{s,ann}}\left(n_{s}^2-n_{s}^{\rm{eq}\,2} \right)\;.
\end{equation} The main parameters of our singlet dark matter models are the scalar dark matter mass, $m_s,$ and its coupling, $\lambda_{Hs},$ to the Higgs field. We solve the Boltzmann equation numerically and the results are displayed in fig.~\ref{fig:scal_omega} on the $(m_s,\lambda_{Hs})$ plane. In this figure, we have initially fixed $e_{\bf \scriptscriptstyle B-L}=0.3$ and $\lambda_{\rm{P}}=5\times10^{-4}$ resulting in a mixing angle $\theta\approx5\times10^{-3}$ and mass $M_{Z'}=3.6$~TeV. When $e_{\bf \scriptscriptstyle B-L}$ and $\lambda_{\rm{P}}$ are chosen so that $M_{Z'}$ lies above the bounds from direct searches by ATLAS and CMS, we have found that the positions of the lines are not sensitive to the values of $e_{\bf \scriptscriptstyle B-L}$ and $\lambda_{\rm{P}}$. The coupling constant $\lambda_{\phi s}$ can be traded in for $m_s^2$ (cf.~eq.~\eqref{ms2}) so that the only remaining free parameters are $m_s$ and $\lambda_{Hs}$ (the quartic coupling $\lambda_s$ plays no role in the Born-level freeze-out calculation). For each value of $m_s$, the value of $\lambda_{Hs}$ that gives 100\%, 10\% or 1\% of the observed dark matter density $\Omega_{\rm{DM}}$ is shown in fig.~\ref{fig:scal_omega}. The region below $\lambda_{Hs}\sim0.34$ is excluded because for these values of $\lambda_{Hs}$, the real scalar does not help to stabilise the Higgs potential (cf.~Table~\ref{U1BLSStab}). We also impose that $\lambda_{Hs}\lesssim1$ in order that $\lambda_{Hs}$ does not develop a Landau pole before the Planck scale. In order that the singlet scalar saturates the observed dark matter density, we find that its mass should lie in the range between 1~TeV and 3.2~TeV. In this range, the annihilation channel $ss\to Z' Z'$ is not allowed kinematically, justifying its exclusion from the diagrams in fig.~\ref{fig:scalann}. Finally, we also show the current direct detection constraints from LUX and the projected limits from LZ.
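Before turning to direct detection, we note that the freeze-out calculation can be reproduced approximately with the standard textbook semi-analytic s-wave estimate. The Python sketch below is illustrative only: it replaces the full numerical solution used above, and the inputs ($g_\star$, $\langle\sigma v\rangle$, the degrees of freedom) are assumed round numbers rather than values from our scans.

```python
import math

Mpl = 1.22e19      # Planck mass [GeV]
m_s = 1000.0       # scalar mass [GeV] (illustrative)
g_star = 100.0     # relativistic degrees of freedom at freeze-out (assumed)
g_dof = 1.0        # internal degrees of freedom of the real scalar
sigma_v = 2.0e-9   # <sigma v>_{s,ann} [GeV^-2], a typical weak-scale value

# Freeze-out point x_f = m_s/T_f from the standard iterative estimate
A = 0.038 * (g_dof / math.sqrt(g_star)) * Mpl * m_s * sigma_v
x_f = math.log(A)
for _ in range(5):                        # iterate x_f = ln(A) - 0.5 ln(x_f)
    x_f = math.log(A) - 0.5*math.log(x_f)

# Standard s-wave relic density formula
omega_h2 = 1.07e9 * x_f / (math.sqrt(g_star) * Mpl * sigma_v)
print(x_f, omega_h2)   # x_f ~ 24 and Omega h^2 ~ 0.1 for these inputs
```

A weak-scale cross-section of a few $\times10^{-9}~\text{GeV}^{-2}$ thus lands close to the observed $\Omega_{\rm{DM}}h^2\approx0.12$, which is why the curves of constant relic fraction in fig.~\ref{fig:scal_omega} track couplings of order one.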
The scalar can scatter at a direct detection experiment through t-channel exchange of $h_1$ and $h_2$ and the resulting spin-independent cross-section for scattering off a nucleon $N$ is \begin{equation} \sigma_{N}^{\rm{SI}}=\frac{\lambda_{Hs}^2 \cos^4\theta }{4 \pi}\frac{f_N^2 m_{N}^2 \mu_{\rm{red}}^2}{m_s^2 m_{h_1}^4}\left[1-\tan\theta \left(\frac{\lambda_{\phi s}}{\lambda_{Hs}}-\frac{m_{h_1}^2}{m_{h_2}^2}\left(\frac{\lambda_{\phi s}}{\lambda_{Hs}}+\tan\theta\right) \right) \right]^2\;. \end{equation} As in the case of the vector triplet, we account for the fact that the scalar makes up a sub-component of the dark matter in much of the parameter space. While the current LUX limit constrains low values of $m_s$ where the scalar density $\Omega_s$ is very low, the projected LZ limits should constrain the full parameter space of interest. \begin{figure}[t!] \centering \includegraphics[width=0.49\columnwidth]{combined_036.pdf}\hspace{2mm}\includegraphics[width=0.49 \columnwidth]{combined_1.pdf} \caption{The plots show the available parameter space when the scalar and vector dark matter together make up the total dark matter density in the CSI SU(2)$_{\scriptscriptstyle \mathrm{CW}}\,\times$ \!SM $\oplus$ singlet model. The colour-coded regions show the scalar dark matter mass in GeV. In the white regions the combined density is either larger or smaller than the observed dark matter density. On the left we fixed $\lambda_{Hs}=0.36$, and the right panel has $\lambda_{Hs}=1$.} \label{fig:mix_relic} \end{figure} \subsection{Scalar and vector dark matter} \label{sec:svDM} Finally, we consider the CSI SU(2)$_{\scriptscriptstyle \mathrm{CW}}\times$~SM~$\oplus$~singlet model in which the dark matter is comprised of both the singlet scalar and vector triplet.
In this case we solve the Boltzmann equations~\eqref{eq:Boltzvect} and~\eqref{eq:Boltzscal} as before, but we now include the annihilation process $ss\to Z'_i Z'_i$ or the reverse process, depending on which is kinematically allowed. Figure~\ref{fig:mix_relic} shows the results in the $(g_{\scriptscriptstyle \mathrm{CW}},\lambda_{\rm P})$ plane for $\lambda_{Hs}=0.36$ and $\lambda_{Hs}=1.0$ in the left and right panels respectively. The coloured contours indicate the values of $m_s$ that result in the total density of vector and scalar saturating the observed value, i.e.~$\Omega_{Z'}+\Omega_s=\Omega_{\rm{DM}}$. There is a limited portion of the parameter space in which the vector and scalar make up all of the dark matter and this region is smaller in the case where $\lambda_{Hs}$ is bigger. These results can be understood with reference to figs.~\ref{fig:vector_relic} and~\ref{fig:scal_omega}. From Figure~\ref{fig:vector_relic}, we observe that in the upper right corner of the left panel, the vector density is very small, so that the scalar should make up most of the density. From the right panel, we also see that in this region, $M_{Z'}\lesssim1$~TeV, which, because $g_{\scriptscriptstyle \mathrm{CW}}\approx2$, implies that $\langle \phi \rangle\lesssim1$~TeV. Now, from fig.~\ref{fig:scal_omega}, we see that for $\lambda_{Hs}=0.36$, we require $m_s\approx1$~TeV in order that $\Omega_s\approx\Omega_{\rm{DM}}$. However, given that $m_s^2\approx\lambda_{\phi s}| \langle \phi \rangle|^2/\sqrt{2}$ (cf.~eq.~\eqref{ms2}), we see that we cannot achieve $m_s\approx1$~TeV unless $\lambda_{\phi s}\gtrsim1$, in which case it develops a Landau pole before the Planck scale. Figure~\ref{fig:scal_omega} also allows us to see why the parameter space is smaller for larger $\lambda_{Hs}$.
This is because the value of $m_s$ that is required to obtain $\Omega_s\approx\Omega_{\rm{DM}}$ is larger for larger $\lambda_{Hs}$, which is more difficult to achieve, again because of the perturbativity restriction on $\lambda_{\phi s}$. \begin{figure}[t!] \centering \includegraphics[width=0.7\columnwidth]{combined_masses.pdf} \caption{The region on the mass plane $(M_{Z'},m_{h_2})$ where the combined density of the scalar and vector dark matter equals the observed dark matter density. The colours show the scalar dark matter mass in GeV and in the white regions the combined density is either larger or smaller than the observed dark matter density. Here we have fixed $\lambda_{Hs}=0.36$. } \label{fig:m_mix_relic} \end{figure} Figure~\ref{fig:m_mix_relic} shows, on the $(M_{Z'},m_{h_2})$ mass plane, contours of the singlet scalar mass for which the total density saturates the observed value; here $\lambda_{Hs}=0.36$. We see that both the vector and scalar are required to be around the TeV scale. \section{Conclusions} \label{sec:concl} The classically scale-invariant extensions of the Standard Model constitute a highly predictive and minimal model building framework. In this CSI ESM set-up, all mass scales have to be generated dynamically and should therefore have a common origin. These models have to address all sub-Planckian shortcomings of the Standard Model. In this paper we have analysed the CSI ESM theories from the perspective of solving the instability problem of the SM Higgs potential and at the same time providing viable dark matter candidates. In simple CSI models with Abelian hidden sectors, we identified regions of parameter space where the SM Higgs potential is stabilised all the way up to the Planck scale. These are the wedge-shaped regions in figs.~\ref{U1Stab} and \ref{U1BLStab}. When combined with LHC constraints on heavier Higgs bosons we found that these regions did not survive (see dotted lines in figs.~\ref{U1Stab} and \ref{U1BLStab}).
In the case of a non-Abelian SU(2) hidden sector in fig.~\ref{SU2Stab} a small part of the parameter space with the stable Higgs potential is compatible with the LHC constraints. We then argued that by adding a real scalar singlet with a portal coupling to the Higgs $\lambda_{Hs} \gtrsim 0.35,$ all of our CSI ESM models have a stable Higgs potential and are consistent with the LHC exclusion limits on extended Higgs sectors. For Abelian models the singlet of mass $m_s$ is the only dark matter candidate, and fig.~\ref{fig:scal_omega} shows the available parameter space on the $(m_s, \lambda_{Hs})$ plane. If this singlet contributes 100\% of the total observed dark matter density, its mass lies between 1~TeV and 3~TeV. The LUX direct detection limits do not yet constrain the model; however, the projected reach of LZ would cover all of the viable parameter space. In non-Abelian models we have two components of dark matter -- the singlet and the hidden sector SU(2) gauge bosons, $Z'_i$. Without the singlet, the combination of Higgs stability and LHC constraints implies that vector dark matter contributes less than 10\% of the observed relic density, as can be seen in fig.~\ref{fig:vector_relic}. Thus, to saturate the dark matter density and stabilise the Higgs potential we are required to have a singlet dark matter component. Finally, we have investigated the phenomenology of two-component dark matter. The viable regions of parameter space are shown in figs.~\ref{fig:mix_relic} and \ref{fig:m_mix_relic}. Typically, both components have mass close to 1~TeV. We see that CSI ESM models are viable and predictive. They provide a non-trivial link between the electroweak scale, including the Higgs vacuum stability, and the nature and origin of dark matter. Furthermore, future dark matter direct detection and collider experiments will be able to explore a significant fraction of their parameter space.
\acknowledgments We would like to thank C.~Arina, C.~Boehm, C.~Englert, J.~Jaeckel, M.~Spannowsky, A.~Strumia for useful discussions and correspondence. This material is based upon work supported by STFC through the IPPP grant ST/G000905/1. VVK acknowledges the support of the Wolfson Foundation and Royal Society through a Wolfson Research Merit Award. GR acknowledges the receipt of a Durham Doctoral Studentship. \bibliographystyle{JHEP}
\section{Introduction} Ultracold atoms trapped in periodic optical potentials provide wide-ranging opportunities to study many-body physics in highly controllable systems \cite{Maciej2007,Bloch2008}. In all cases, the characteristic single-particle energy scale is set by the recoil energy, $E_R = h^2 /(8md^2)$, where $m$ is the mass of the atom and $d$ is the spatial period of the lattice. Although temperatures in such systems can be quite low, it is still challenging to reach temperatures well below the relevant many-body physics energy scales, which can be exceedingly small. Increasing the recoil energy can potentially increase both single-particle and many-body energy scales through tighter confinement, which may aid in creating systems well into the regime where many-body ground state physics is observable. An inherent obstacle to smaller lattice spacing is the optical diffraction limit, which prevents lattice periodicities below $d = \lambda/2$, where $\lambda$ is the wavelength of the light forming the lattice. Several approaches to move beyond the diffraction limit have been proposed, and some realized, based on multiphoton effects \cite{Dubetsky2002,Ritt2006,Anderson2019}, rf-dressed adiabatic potentials \cite{Yi2008,Lundblad2008,Lundblad2014}, and trapping in near-field guided modes with nanophotonic systems \cite{Gullans2012,Thompson2013,Romero-Isart2013,Gonzalez2015}. \begin{figure}[t] \includegraphics[height=2.8in]{Fig1ver2.pdf} \caption{The stroboscopic approach to create a time-averaged effective potential with a lattice spacing of $\lambda/4$ by dynamically pulsing KP potentials with $\lambda/2$ spacing.} \label{fig:schematic} \end{figure} Here we report the realization of a recently proposed Floquet-based approach~\cite{Nascimbene2015,Subhankar2019b,Lacki2019} to create small-period lattices, specifically $\lambda/4$-spaced lattices, by time-averaging a modulated lattice potential that has subwavelength features.
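To make the recoil-energy argument concrete before describing the sequence, a short numerical estimate is helpful (a sketch; the $^{171}$Yb mass is approximated as 171 atomic mass units, and the values quoted in the comments follow from the constants used):

```python
h = 6.62607015e-34    # Planck constant [J s]
amu = 1.66053907e-27  # atomic mass unit [kg]
m_yb = 171 * amu      # mass of 171Yb, approximated as 171 amu
lam = 556e-9          # lattice light wavelength [m]

def recoil_hz(d):
    """Recoil frequency E_R/h = h/(8 m d^2) for lattice spacing d."""
    return h / (8 * m_yb * d**2)

f_half = recoil_hz(lam/2)     # ~3.7 kHz for a conventional lambda/2 lattice
f_quarter = recoil_hz(lam/4)  # ~15 kHz: halving the spacing quadruples E_R
print(f_half, f_quarter)
```

The factor-of-four gain in $E_R$ on going from $\lambda/2$ to $\lambda/4$ spacing is the energy-scale motivation for the subwavelength lattice realized here.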
We load atoms into the ground band of this time-dependent lattice and measure their average probability density $|\psi_{\text{avg}}(x)|^2$ with nanoscale resolution \cite{Subhankar2019,McDonald2019,Tonyushkin2006}, to confirm the subwavelength nature of the lattice. We study the lifetime of atoms in the lattices over a range of modulation (Floquet) frequencies $\omega_F=2\pi/T$, where $T$ is the period of a complete cycle, to determine the frequency range over which the time-averaged approach works. Creating an effective time-averaged potential requires that the time-dependence of the lattice be motionally diabatic \cite{Eckardt2017,Rahav2003,Marin2015}, namely that $T$ is much smaller than the motional time scale of the atoms. Time-averaging a dynamically applied lattice potential cannot create an effective potential landscape with higher spatial Fourier components than the underlying progenitor lattice. This implies that in order to create landscapes with subwavelength periodicity, one must time-average a potential that itself has subwavelength features~\cite{Nascimbene2015}. In this work, we make use of the Kronig-Penney (KP)-like potential to generate the desired potential landscapes~\cite{Subhankar2019b,Lacki2019}. Such a KP potential is implemented via the dark state associated with a three-level $\Lambda$-system \cite{Lacki2016,Jendrzejewski2016,Wang2018}. The spin adiabaticity required to maintain the dark state during the stroboscopic cycle imposes additional constraints, as discussed below. \begin{figure}[t] \includegraphics[width=8.6cm]{Fig2.pdf} \caption{\label{fig:wavefunction}(a) The stroboscopically applied potential, shown here for $\Omega_{c0}=500\Gamma$ and $\Omega_{p}=50\Gamma$, is composed of KP barriers on top of a sinusoidal potential. The dotted line represents the potential shifted by $\lambda/4$. (b) The time-averaged effective potential $V_{\text{eff}}(x)$.
(c) The black points are the measured $|\psi_{\text{avg}}(x)|^2$ of atoms in $V_{\text{eff}}(x)$. Number fluctuations between realizations result in number uncertainties of 5\%. The black line is the calculation based on independently measured lattice parameters. The grey line is the calculated $|\psi_{\text{avg}}(x)|^2$ in the lattice before the relaxation during the measurement. (d) The micromotion dynamics at different times within a Floquet period. The blue (red)-shaded areas represent regions in which $|\psi(x,t)|^2$ is higher (lower) than $|\psi_{\text{avg}}(x)|^2$, which is shown as a solid black line. } \end{figure} There are multiple ways to implement time-averaging with a KP lattice \cite{Lacki2019,Subhankar2019b}. The particular approach that we adopt, optimized for our experimental conditions, is shown in Fig.~\ref{fig:schematic}. Periodic potentials with $\lambda/2$ spacing but subwavelength structure are stroboscopically applied to the atoms to create the desired potential landscape. Specifically, atoms are subjected to a KP potential for half of the Floquet cycle $T/2$; the potential is then ramped down to zero and its position is shifted by half of the lattice spacing $\lambda/4$; the shifted potential is ramped on again and held for another half cycle, before being ramped off and its position restored. Two factors must be considered to ensure that time-averaging is an effective description of the system. First, motional diabaticity sets a lower bound on the Floquet frequency $\omega_F$, beyond which the band structure becomes unstable and severe heating limits the lifetime. Second, the dark-state nature of the KP lattice sets an upper bound to $\omega_F$. As the KP potential is a scalar gauge potential arising from a spatially varying dark state~\cite{Wang2018,Lacki2016,Jendrzejewski2016}, switching on and off such a potential requires atoms to adiabatically follow the spatio-temporal dark state at all times.
We ensure this adiabatic following by carefully designing the pulse shapes of our light fields (Appendix~C), implementing stimulated Raman adiabatic passage (STIRAP) \cite{Vitanov2017}. Losses occur at high $\omega_F$, as the atom's dark-state spin composition fails to adiabatically follow the rapid changes in the light fields. In the following sections, we show that a frequency window that simultaneously satisfies both requirements exists and that there are momentum-dependent loss channels arising from the Floquet-induced coupling with higher excited bands for particular momenta. \section{Experiment} We work with fermionic $^{171}$Yb atoms that have a well isolated $\Lambda$-system (Appendix~A), consisting of two ground states $|g_1\rangle$, $|g_2\rangle$ and an excited state $|e\rangle$ coupled by laser light with $\lambda=556$ nm. We use the methods outlined in Refs.~\cite{Wang2018,Vaidya2015,Pisenti2016, Subhankar2019c,Appel2009} to generate and optically control this well isolated $\Lambda$-system. The control field $\Omega_c(x,t)=\Omega_{c1}e^{ikx}+\Omega_{c2}(t)e^{-i(kx+\phi(t))}$, which couples $|g_2\rangle$ and $|e\rangle$, is composed of two counterpropagating lattice beams; here $k=2\pi/\lambda$ and $\phi(t)$ is the relative phase difference between the two fields. The maximum value of $\Omega_{c2}(t)$ is constrained to be equal to $\Omega_{c1}=\Omega_{c0}/2$, in which case it gives rise to a standing wave $\Omega_{c0}\, e^{-i\phi(t)/2}\cos {(kx +\phi (t)/2)}$. We control the strength and the position of the KP potential using $\Omega_{c2}(t)$ and $\phi(t)$ (Appendix~C). A homogeneous probe field $\Omega_p e^{iky}$, coupling $|g_1\rangle$ and $|e\rangle$, travels perpendicular to the control beams.
The resulting spatially dependent dark state gives rise to a KP lattice of narrow subwavelength barriers~\cite{Lacki2016,Jendrzejewski2016,Wang2018}, plus an additional sinusoidal potential due to the light shifts caused by states outside the three-level system (Appendix~A) as shown in Fig.~\ref{fig:wavefunction}(a). Stroboscopically applying the lattice with different strengths and positions requires accurate and high bandwidth control of the amplitude and phase of the lasers coupling the three states, which we implement using dynamic control over the rf fields driving acousto-optic modulators (AOMs)~\cite{Subhankar2019}. We note that the spin adiabaticity condition depends significantly on the pulse shape~\cite{Subhankar2019b} in addition to the Floquet frequency, and control of the pulse shape within a Floquet period is critical~\cite{Subhankar2019}. We use arbitrary waveform generators that can control the rf amplitude and phase with a resolution of 8 ns and 4 ns respectively. However, we are limited by the bandwidth of the AOMs, which we measure to be 50 ns. This is a factor of 8 smaller than the smallest half-period of 400 ns that we have used in this study. For typical experimental values of $\Omega_{c0}=500$~$\Gamma$ and $\Omega_p=50$~$\Gamma$, where $\Gamma=2\pi\times182$~kHz is the inverse lifetime of $|e\rangle$, the KP barrier has a minimum width of 0.02~$\lambda$ and a maximum height $\approx 100 E_R$, where $E_R/h=h/(2m_{\text{Yb}} \lambda^2)=3.7$~kHz, $m_{\text{Yb}}$ is the mass of a $^{171}$Yb atom, and the sinusoidal potential has a depth $\approx 145 E_R$, Fig.~\ref{fig:wavefunction}(a). Time-averaging this lattice applied at two positions results in an effective potential $V_{\text{eff}}(x)$ shown in Fig.~\ref{fig:wavefunction}(b), which includes the effect of the pulse shapes, with an effective barrier height $\approx7 E_R$. (The sinusoidal component of the potential averages to a spatially invariant offset.)
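The quoted barrier height and width follow from the dark-state mixing angle $\alpha(x)$, with $\tan\alpha=\Omega_p/\Omega_c(x)$ and $\Omega_c(x)=\Omega_{c0}\cos kx$: in the simplest description the atom in the dark state experiences the geometric scalar potential $V(x)=\hbar^2(\partial_x\alpha)^2/2m$~\cite{Lacki2016,Jendrzejewski2016,Wang2018}. The Python sketch below (working in units of $E_R$ and $\lambda$, with the Rabi frequencies from the text; the half-maximum width is found by a direct scan) reproduces both numbers:

```python
import math

eps = 50/500   # Omega_p / Omega_c0 = 0.1, the ratio used in the text
k = 2*math.pi  # wave number in units of 1/lambda

def V(x):
    """Dark-state geometric potential in units of E_R = h^2/(2 m lambda^2):
    V/E_R = (d alpha/dx / k)^2 with tan(alpha) = eps / cos(kx)."""
    dalpha = eps*k*math.sin(k*x) / (math.cos(k*x)**2 + eps**2)
    return (dalpha/k)**2

v_max = V(0.25)   # barrier peak at the control-field node, x = lambda/4
# full width at half maximum, by scanning around the peak
xs = [0.25 + i*1e-5 for i in range(-2000, 2001)]
above = [x for x in xs if V(x) > v_max/2]
fwhm = max(above) - min(above)
print(v_max, fwhm)   # ~100 E_R and ~0.02 lambda, matching the values above
```

At the node the slope of the mixing angle is $k/\epsilon$, so the peak height scales as $E_R/\epsilon^2$ and the width as $\epsilon\lambda$: reducing $\Omega_p/\Omega_{c0}$ makes the barriers both taller and narrower.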
\section{Measurement} We apply this lattice to $\approx 2\times10^5$ Yb atoms at an initial temperature of 0.3~$\mu$K that have been optically pumped into $|g_1\rangle$. To load the atoms into the ground band of $V_{\text{eff}}(x)$, we adiabatically increase the depth of the stroboscopically applied lattices in 200~$\mu$s (typically $\sim$80 Floquet cycles), as described in detail in Appendix~B. After the loading stage, we measure the ensemble-averaged probability density $|\psi(x,t)|^2$ of atoms in the ground band of $V_{\text{eff}}(x)$ using a nanoresolution microscopy technique~\cite{Subhankar2019} with a FWHM resolution of 25~nm. We also measure the momentum distribution of the atoms via absorption imaging after time-of-flight (TOF). \subsection{Probing wavefunction density in the stroboscopic lattice} Figure \ref{fig:wavefunction}(c) shows $|\psi(x,t)|^2$ averaged over a Floquet period $T=2.4\,\mu s$ ($\omega_F=2\pi \times 410$~kHz) for atoms in $V_{\text{eff}}(x)$ with a $\lambda/4$ lattice spacing, and Fig.~\ref{fig:wavefunction}(d) shows $|\psi(x,t)|^2$ at different times within a Floquet cycle. By averaging the data over a Floquet period, we eliminate the effect of micromotion and obtain the averaged wavefunction density $|\psi_{\text{avg}}(x)|^2$ (dotted trace in Fig.~\ref{fig:wavefunction}(c)) in the ground band of the effective potential. The black curve represents the ground-band probability density calculated from the time-averaged potential including the quasimomentum averaging, the effect of the finite resolution of the microscope, and the relaxation of the wavefunction during the measurement. The good agreement between the data and the calculation shows that time-averaging is a good description of the effective potential. The calculated wavefunction in the lattice before the relaxation during the measurement is plotted in grey.
We resolve the micromotion in real space within a Floquet period by comparing $|\psi(x,t)|^2$ with $|\psi_{\text{avg}}(x)|^2$ (Fig.~\ref{fig:wavefunction}(d)). The blue (red) shaded areas represent regions in which $|\psi(x,t)|^2$ is higher (lower) than $|\psi_{\text{avg}}(x)|^2$. We observe that the micromotion has the same time-periodicity as the Floquet drive, as expected. \begin{figure} \includegraphics[width=8.6cm]{Fig3.pdf} \caption{\label{fig:TOF}(a) Integrated TOF column density at different Floquet frequencies $\omega_F$. The atomic populations at high momenta indicate the presence of avoided crossings. The widths of the populations at avoided crossings are primarily due to the physical dimensions of the atomic cloud. (b) The Floquet frequency $\omega_F$ is plotted versus the center momentum of the populations in (a), determined using Gaussian fits. Different series of avoided crossings are labeled and colored (L1: green, L2: red, R1: blue), and their fitted quadratic functions are drawn as solid lines. The error bars are 1 standard deviation of the Gaussian fits. } \end{figure} \subsection{Momentum-dependent loss channels} A characteristic feature of a Bloch-Floquet bandstructure is the existence of avoided crossings at particular lattice momenta arising from coupling with high-lying states \cite{Holthaus_2015}, which for large Floquet frequency are approximately plane waves with high momenta. We measure the momentum distribution of the atoms in $V_{\text{eff}}(x)$ at different $\omega_F$ by taking an absorption image after ramping down the lattice in 100~$\mu$s followed by a TOF of 3~ms. The atomic populations at high momenta in Fig.~\ref{fig:TOF}(a) indicate the mixing of low-momentum and high-momentum states due to the presence of avoided crossings in our system. We use a Gaussian fit to determine the center momentum of the populations with respect to the ground band.
The Floquet frequency $\omega_F$ is plotted against the center momentum (Fig.\ref{fig:TOF}(b)) for the three most prominent peaks (L1: green, L2: red, R1: blue). To first order, the avoided crossings can be understood as arising from the crossing of Floquet dressed high-lying bands, which are shifted in energy by integral multiples of $\omega_F$, and the low-lying occupied bands of $V_{\text{eff}}(x)$, which are relatively flat. To determine the integral multiple of $\omega_F$ for the band coupling, we fit the peak positions with a quadratic function $\hbar \omega_F=(p-p_0)^2/N +\hbar \omega_0$, where $p$ is the momentum, $N$ is an integer, $p_0$ and $\omega_0$ are fitting parameters, and the momentum and energy are in units of $\hbar k$ and $E_R$. For the L1 series, a good agreement with the data is found for $N=1$, indicating this series is due to coupling between bands with an energy difference of $\hbar\omega_F$. For the L2 and R1 series, $N=2$ gives the best fit, indicating second order coupling between bands that differ in energy by 2$\hbar\omega_F$. (The other visible peaks do not extend over a sufficient range to accurately determine their curvatures.) The fraction of atoms in the high momentum states decreases at higher Floquet frequency, suggesting weaker coupling to higher bands. The asymmetry in the avoided crossings with respect to $p=0$ is due to the fact that we are driving just the $\Omega_{c2}$ control beam, which gives rise to a vector gauge potential~\cite{Subhankar2019b}. The $^{171}$Yb atoms are nearly non-interacting ($s$-wave scattering length is $-3a_0$, where $a_0$ is the Bohr radius), so they are not likely to thermalize during the short loading and unloading sequence. However, the observed low-momentum component of the TOF distribution is consistent with the width of the ground band Brillouin zone for the $\lambda/4$-spaced stroboscopic lattice, which is twice as large as the ground band width of the progenitor $\lambda/2$ lattice. 
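The assignment of an integer $N$ to each series, as described above, amounts to a small least-squares comparison; a sketch with synthetic data (the peak positions below are illustrative, not the measured values):

```python
import numpy as np

# Fit hbar*omega_F = (p - p0)^2/N + hbar*omega_0 for candidate integers N and
# keep the N with the smallest residual. Momentum is in units of hbar*k and
# energy in units of E_R, as in the text. The "data" here are synthetic.
p = np.array([3.1, 3.6, 4.0, 4.4, 4.7])      # center momenta of one series
omega_F = (p - 0.3)**2 / 2 + 5.0             # fake series generated with N = 2

def residual(N):
    # For fixed p0 the optimal omega_0 is just the mean offset, so a
    # brute-force scan over p0 is enough for a sketch.
    best = np.inf
    for p0 in np.linspace(-1.0, 1.0, 201):
        model = (p - p0)**2 / N
        omega_0 = np.mean(omega_F - model)
        best = min(best, np.sum((omega_F - model - omega_0)**2))
    return best

N_best = min((1, 2, 3), key=residual)
assert N_best == 2   # recovers the N used to generate the series
```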
Given that the Fermi momentum at our density is of order the recoil momentum of the progenitor lattice, the filled ground band in the $\lambda/4$ lattice indicates that the effective temperature is higher than the ground band width but not a significant fraction of the band spacing. \begin{figure}[t] \centering \includegraphics[width=8.6cm]{Fig4.pdf} \caption{\label{fig:lifetime} Lifetimes of atoms at different $\omega_F$ under different Rabi frequency configurations. Green squares: $\Omega_{c0}=500\Gamma$ and $\Omega_p=0$, where the spin degree of freedom is decoupled and the loss is due solely to the failure of motional diabaticity at low $\omega_F$. Red triangles: $\Omega_{c1}=0$, $\Omega_{c2}=250\Gamma$ and $\Omega_p=80\Gamma$, where the spatial potential is homogeneous and the loss is due solely to the failure of spin adiabaticity at high $\omega_F$. Blue circles: $\Omega_{c0}=500\Gamma$ and $\Omega_p=80\Gamma$, where we show the lifetimes of atoms in the $\lambda/4$-spaced lattice, $V_{\text{eff}}(x)$. The error bars are 1 standard deviation of the exponential fits.} \end{figure} \subsection{Lifetime study} In order to determine the range of usable Floquet frequencies for the stroboscopic scheme, we study the lifetime at different $\omega_F$ under different Rabi frequency configurations, as shown in Fig.~\ref{fig:lifetime}. We determine the lower bound on $\omega_F$ by studying the motional diabaticity of atoms in just a stroboscopically applied ac-Stark-shift lattice. This is done by setting $\Omega_p=0$, which decouples the spin degree of freedom from the dynamics; $\Omega_{c1}=250\Gamma$ is held fixed, while $\Omega_{c2}(t)$ is pulsed to a maximum value of $250\Gamma$ (Appendix~C). At low $\omega_F$, the atoms are affected by the turning on and off, and phase shifting, of the sinusoidal ac-Stark-shift potential, which causes heating and loss (green squares in Fig.~\ref{fig:lifetime}).
We determine the upper bound on $\omega_F$ by studying the reduction in the fidelity of STIRAP as a function of $\omega_F$ for a spatially homogeneous dark state. This is done by setting $\Omega_{c1}=0$, $\Omega_p=80\Gamma$, while $\Omega_{c2}(t)$ is pulsed to a maximum value of $250\Gamma$. The reduction in STIRAP fidelity manifests as heating and loss due to the decreasing spin adiabaticity at larger $\omega_F$. Most importantly, we also measure the frequency dependent lifetime of atoms loaded into $V_{\text{eff}}(x)$ for different $\omega_F$ (blue circles in Fig.\ref{fig:lifetime}). The reduction in spin adiabaticity accounts for the decrease in lifetime of atoms in $V_{\text{eff}}(x)$ at high $\omega_F$. The short lifetimes in the stroboscopically applied KP lattices are expected due to a few factors. First, couplings to the spatially and temporally dependent bright states reduce lifetimes in subwavelength-spaced lattices even for a perfect three-level system, through couplings with higher Floquet bands (as shown in Fig.~\ref{fig:TOF}) and off-resonant couplings with bright states~\cite{Subhankar2019b}. In principle, these couplings can be reduced by using larger Rabi frequencies. However, lifetimes are also limited by the breakdown of the three-level approximation at large Rabi frequencies due to admixing of states outside the three-level system (Appendix~A). This manifests as a dynamically varying and spatially dependent two-photon detuning (arising from $\Omega_c(x,t)$), which reduces the fidelity of STIRAP~\cite{Vitanov2017}. This competing requirement prevents us from benefiting from larger Rabi frequencies. \section{Conclusion} In conclusion, we demonstrate the creation of a time-averaged $\lambda/4$-spaced lattice using a recently proposed stroboscopic technique~\cite{Nascimbene2015} based on dynamically modulated dark states in a three-level system~\cite{Subhankar2019b,Lacki2019}. 
The subwavelength structure of the lattice is confirmed by measuring the probability density of the atoms averaged over the ground band of the lattice. We measure the loss rate of atoms in the lattice and observe high-momentum excitations due to Floquet-induced coupling to higher bands. We measure the lifetime of the atoms in the $\lambda/4$-spaced lattice to be 2~ms, which is not long compared to the tunneling time, and so does not allow for many-body studies in the current realization. Further improvement of the $\lambda/4$-spaced lattice would require compensation of the two-photon detuning or the identification of other atomic systems with a more favorable (isolated) three-level system~\cite{Bienias2018}. The lattice demonstrated here is limited by the off-resonant coupling to $|(6s6p)^3P_1,F$=$3/2,m_F$=$-3/2\rangle$, which is only detuned by the hyperfine splitting from the three-level system being used. Better candidates may make use of isolated {\em electronic} levels, which are detuned by much larger optical separations. For example, in $^{174}$Yb, the $(6s6p)^3P_0$ state and one of the states in the $(6s6p)^3P_2$ level could be used as the ground states, while one of the $(6s7s)^3S_1$ states could be used as the excited state, with appropriate choice of polarization to select the three states. In a more isolated three-level system the main limitation would be the available laser power needed to meet the Rabi frequency requirements. In addition to longer lifetimes, higher Rabi frequencies would allow for lattices with smaller spacings~\cite{Subhankar2019b}. Our work can be extended to 2D, and additional dynamic control over the two-photon detuning---which makes subwavelength traps possible~\cite{Bienias2018}---allows for the construction of arbitrary time-averaged potential landscapes not limited by diffraction. \section*{ACKNOWLEDGMENTS} We acknowledge support from NSF PFC at JQI (Grant No. PHY1430094) and ONR (Grant No. N000141712411).
\section*{APPENDIX} \subsection{$^{171}$Yb ATOM LEVEL STRUCTURE} \label{LevelStructure} \begin{figure}[h] \includegraphics[width=8.6cm]{Model_new.pdf} \caption{Level structure of the $^1S_0$ and $^3P_1$ manifolds in $^{171}$Yb: $\Delta$ is the single photon detuning, and $\Delta_{\text{HFS}}\approx 6$ GHz is the $^3P_1$ hyperfine splitting. } \label{fig:LevelDiagram} \end{figure} \begin{figure*} \centering \includegraphics[width=17.8cm]{S2.pdf} \caption{ Rabi frequencies of different light fields and the relative phase $\phi$ between $\Omega_{c1}$ and $\Omega_{c2}$ during the three stages. The Floquet period is not shown to scale; the minimum number of Floquet cycles during the ramp-on of $\Omega_{c2}$ is 40.} \label{fig:Sequence} \end{figure*} Fig.~5 shows the level structure of the $^1S_0$ and $^3P_1$ manifolds in $^{171}\text{Yb}$. The three hyperfine states $|g_1\rangle$, $|g_2\rangle$, and $|e\rangle$ constitute the $\Lambda$-system. We use a magnetic field of 36~mT to yield a frequency separation of 1~GHz between $|e\rangle$ and $|4\rangle$. The hyperfine splitting is $\Delta_{\text{HFS}}\approx6$~GHz. The ac-Stark shifts on the ground states $|g_1\rangle$ and $|g_2\rangle$ arise due to off-resonant couplings to states outside the $\Lambda$-system. The $\Omega_c(x,t)$ light field off-resonantly couples $|g_1\rangle$ with $|5\rangle$, and $|g_2\rangle$ with $|6\rangle$. The $\Omega_p$ light field off-resonantly couples $|g_2\rangle$ with $|4\rangle$, $|g_2\rangle$ with $|7\rangle$, and $|g_1\rangle$ with $|6\rangle$. The spatio-temporally dependent ac-Stark shifts due to $\Omega_c(x,t)$ give rise to the dynamic sinusoidal potential mentioned in the main text. \subsection{EXPERIMENTAL SEQUENCE} \label{ExperimentalSequence} Fig.~6 shows the experimental sequence that we use to load atoms into the ground band of the stroboscopic lattice. \begin{enumerate}[label=\Roman*.] \item We start with atoms optically pumped into $|g_1\rangle$.
We then ramp on $\Omega_{c1}$ (red trace in Fig.~6) followed by $\Omega_p$ (blue trace in Fig.~6), transferring atoms into a spatially homogeneous dark state. Then, we turn on $\Omega_{c2}(t)$ (green trace in Fig.~6) in 200~$\mu$s (minimum number of Floquet cycles used during the ramp $\approx 40$) to adiabatically load atoms into the ground band of the stroboscopic lattice. \item We pulse the stroboscopic lattice for a variable number of Floquet cycles. \item We measure the average probability density of the atoms in the ground band of the stroboscopic lattice using the nanoresolution microscopy technique described in Ref.~\cite{Subhankar2019}. \end{enumerate} The phase $\phi(t)$ of the $\Omega_{c2}$ light field, which controls the position of the stroboscopic lattice, is only changed when the dark-state spin composition is spatially homogeneous~\cite{Subhankar2019b}. The experimental techniques used to generate the pulses are detailed in Ref.~\cite{Subhankar2019}. \subsection{PULSE SCHEME} \label{PulseScheme} The functional form of $\Omega_{c2}(t)$ that we use to create the stroboscopic lattice is~\cite{Subhankar2019b}: $$ \Omega_{c2}(t)=\frac{\Omega_{c0}}{2}-\frac{\Omega_{p}\sin^2(\omega_F t)}{\sqrt{1+4\epsilon^2-\sin^4(\omega_F t)}},$$ $$ \omega_F=\Omega_p r_0\sqrt{1+4\epsilon^2}, $$ where $\epsilon=\Omega_p/\Omega_{c0}$. In Fig.~4, changes in $\omega_F$ are parameterized using $r_0$. Smaller $r_0$ implies slower, more spin-adiabatic pulses. In our experiment, we typically use $0.02\le r_0\le0.2$. One Floquet period of pulsing is shown in Fig.~7. \begin{figure} \centering \includegraphics[width=8.6cm]{onefloquetcycle.pdf} \label{fig:pulse} \caption{One Floquet cycle of pulsing: the pulse shapes for $\Omega_{c2}(t)$ and $\phi(t)$.
} \end{figure} \subsection{DETAIL OF LIFETIME STUDY} \label{DetailofLifetimeStudy} When studying the lifetime for the STIRAP-only case and for the stroboscopic lattice case, we observe that $\sim20\%$ of the atoms have a lifetime of $\sim$20~ms and are insensitive to changes in $\omega_F$. We speculate that these atoms populate Floquet states that are immune to STIRAP due to the large dynamic two-photon detunings arising from the spatially dependent ac-Stark shifts due to couplings to states outside the $\Lambda$-system (Appendix~A). The decay rates shown in the main text pertain to the majority fraction of the atoms, which shows frequency-dependent loss rates in both the stroboscopic-lattice and stroboscopic-STIRAP cases.
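The endpoints of the pulse defined in the Pulse Scheme appendix can be checked directly: at $\sin^2(\omega_F t)=0$ the control field is the full standing wave ($\Omega_{c2}=\Omega_{c0}/2$), while at $\sin^2(\omega_F t)=1$ the lattice is completely off. A minimal numerical check, using the Rabi frequencies quoted in the main text:

```python
import numpy as np

Omega_c0, Omega_p = 500.0, 80.0     # in units of Gamma, as in the main text
eps = Omega_p / Omega_c0

def Omega_c2(s):
    """Pulse of the Pulse Scheme appendix, evaluated at s = sin^2(omega_F t)."""
    return Omega_c0 / 2 - Omega_p * s / np.sqrt(1 + 4 * eps**2 - s**2)

assert np.isclose(Omega_c2(0.0), Omega_c0 / 2)   # lattice fully on
assert np.isclose(Omega_c2(1.0), 0.0)            # lattice fully off
# In between, Omega_c2 varies smoothly, so the lattice is ramped rather than
# switched abruptly within each Floquet half-period.
```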
\section{Introduction} The question of whether or not the physics of black holes is described by quantum mechanics has a long history, going back to the seminal papers of Hawking \cite{Hawking:1974sw,Hawking:1976ra}. The majority of people working in the field now believe that it is, motivated primarily by the success of the BFSS matrix model \cite{Banks:1996vh} and especially the AdS/CFT correspondence \cite{Maldacena:1997re,Witten:1998qj,Gubser:1998bc}. In these examples one has a fully quantum mechanical description of black hole formation and evaporation, so the issue of whether it is possible to have a theory that describes black holes quantum mechanically appeared to be settled. There were always lingering doubts however about how exactly these quantum mechanical theories could be used to describe the experience of the infalling observer \cite{Mathur:2009hf,Giddings:2011ks}. In the last few years this lingering uncertainty has been crystallized into a relatively sharp set of paradoxes, all of which seem to imply that a description of the infalling observer requires some sort of extension or modification of the quantum mechanical theory used to describe the formation and evaporation of the black hole \cite{Braunstein:2009my,Almheiri:2012rt,Marolf:2012xe,Almheiri:2013hfa,Bousso:2013wia,Marolf:2013dba}. Recently Papadodimas and Raju have made an interesting proposal for the description of the black hole interior in AdS/CFT \cite{Papadodimas:2013wnh,Papadodimas:2013jku}. Their proposal is related to earlier ideas that are often grouped together under the slogan ``$A=R_B$'' \cite{Bousso:2012as,Susskind:2012uw,Papadodimas:2012aq,Verlinde:2012cy}, or somewhat more carefully ``ER=EPR'' \cite{Maldacena:2013xja,Susskind:2013lpa},\footnote{I here mean some version of these ideas which would prevent firewalls in generic states. 
Motivated by complexity-theoretic arguments Susskind has recently been exploring the possibility of a version where generic states would have firewalls, but black holes formed by short collapses would not \cite{Susskind:2014rva,Susskind:2014ira}.} but the new proposal is considerably more precise than any of this previous work. It moreover is able to cleverly avoid some of the objections \cite{Almheiri:2013hfa,Marolf:2013dba,Bousso:2013ifa} raised to $A=R_B$ (or to ER=EPR). The main new idea is to focus on a ``small algebra'' of potential observables, with respect to which one defines a set of ``equilibrium states'' that are expected to have smooth horizons. For each operator in the small algebra one can then define a ``mirror operator'', which for the case of the mirror of an operator related to a mode just outside the black hole horizon has the interpretation of acting on a mode just behind the horizon, which has been difficult to get at by other means. The most controversial part of the proposal, which it inherits from $A=R_B$ or ER=EPR, is that the mirror operators are defined in a ``state-dependent'' manner, which is not allowed in ordinary quantum mechanical measurement theory. In this note I attempt a careful critical analysis of the PR proposal. In section \ref{proposalsec} I will introduce the proposal, clarifying some aspects that were not completely transparent in the original papers, especially the treatment of $1/N$ corrections. In section \ref{nonusec} I argue that in the PR proposal, pure quantum states in the CFT associated with large black holes do not have definite physical interpretations for the infalling observer, unlike in ordinary quantum mechanics. In section \ref{genbhsec} I discuss new issues that arise in extending the proposal to more general black holes, namely two-sided AdS wormholes and evaporating Minkowski black holes. 
For the AdS wormholes I consider several possible extensions of the one-sided proposal, identifying one which I consider to be the most appealing. For evaporating black holes, there is a new problem in that the small algebra does not seem to be sufficient for describing the realm of possible experiments. In section \ref{sdsec} I study the ``state-dependence'' of the proposal in more detail, emphasizing the considerable extent to which it violates quantum mechanics. I compare this to more conventional physical situations where naively state dependent operators arise but the measurement theory is nonetheless consistent with quantum mechanics. Finally I close with some brief remarks on the expected validity of quantum mechanics for the infalling observer. The later sections can basically be read in any order. \section{Description of the Proposal}\label{proposalsec} Consider a big one-sided AdS black hole, made from some sort of infalling matter at early times. The Penrose diagram for this system is shown in figure \ref{adsinfall}. \begin{figure} \begin{center} \includegraphics[height=7cm]{adsinfall.pdf} \caption{A one-sided AdS black hole. On the left we have the shell of matter that created the black hole in orange, the black hole interior in light blue, the horizon as a dashed line, and an infalling observer in dark blue. On the right is a detail of the region where the observer crosses the horizon, with an interesting set of modes indicated. The purple modes are easily evolved back to region I, and the green modes are already there. The red modes would need to be evolved all the way back through the collapsing shell and reflected off of $r=0$ to get them out to region I. 
Smoothness of the horizon requires entanglement between the red and green modes.}\label{adsinfall} \end{center} \end{figure} Bulk effective field theory degrees of freedom in the region outside of the horizon, which I have denoted region I, can be fairly simply described in terms of microscopic CFT variables using the BDHM/HKLL map \cite{Banks:1998dd,Hamilton:2006az,Kabat:2011rz}. This construction essentially proceeds by solving the bulk operator equations of motion in from the boundary in the CFT \cite{Heemskerk:2012mn}; I review a few more details in the following subsection. Describing the interior, denoted as region II in the figure, is more challenging. One way to begin is to observe that the interior lies to the future of everything outside, so roughly we can think of the horizon as a Cauchy surface and then evolve up into region II using the bulk equations of motion \cite{Freivogel:2004rd,Horowitz:2009wm,Heemskerk:2012mn}. Left-moving modes just inside the horizon, shown in purple in figure \ref{adsinfall}, can indeed be simply understood as having ``just fallen in'' from region I.\footnote{This decomposition into left- and right-moving modes is made only in the vicinity of the horizon; because of mass, transverse momentum, and/or interactions it will not be conserved globally. For brevity I will sometimes ignore this in the following heuristic discussion; mixing can systematically be included without affecting the main points here.} Right-moving modes behind the horizon, however, which are shown in red in figure \ref{adsinfall}, are more subtle. If we try to evolve them back, they are more and more blue-shifted and eventually collide with the infalling matter at high center of mass energy. At this point the bulk equations of motion are insufficient to proceed further, and we are unable to reflect through $r=0$ and back out to find a simple CFT definition of these modes \cite{Almheiri:2013hfa}.
This situation can be compared with the two-sided AdS-Schwarzschild wormhole, shown in figure \ref{ads2side}. \begin{figure} \begin{center} \includegraphics[height=5cm]{ads2side.pdf} \caption{The two-sided AdS-Schwarzschild wormhole. The infalling observer again jumps in from the right, but the red right-moving modes inside can now be understood as having come from the left side.}\label{ads2side} \end{center} \end{figure} The main difference for our purposes is that in figure \ref{ads2side} the red modes can be evolved back to the left boundary without encountering any high-energy collision. This suggests that a CFT description of region II in the two-sided case should be easier than for the one-sided case; a construction along the lines of BDHM/HKLL should be possible \cite{Banks:1998dd,Hamilton:2006az,Kabat:2011rz}, and indeed some of the details have been worked out in \cite{Papadodimas:2012aq}. In order to proceed similarly for the one-sided black hole the task then seems clear: where are the red modes in the CFT? \subsection{The Basics of Reconstruction} I'll first review a bit more about the BDHM/HKLL construction of local bulk operators in AdS/CFT \cite{Banks:1998dd,Hamilton:2006az,Kabat:2011rz}. Any such construction will at best be perturbative in the gravitational coupling constant, which I will refer to as $1/N$. For small numbers of operators any backreaction can be treated perturbatively, so by an appropriate gauge fixing \cite{Heemskerk:2012np} we can treat the bulk theory as a quantum field theory in curved spacetime (the gravitons will just be another matter field). It will be an effective field theory with nontrivial irrelevant operators appearing that are suppressed by powers of $1/N$; their coefficients can in principle be determined by comparison with the CFT. Which background we use depends on the classical properties of the state under consideration, such as its mass and charge. 
For simplicity I will assume that all interactions are suppressed by powers of $1/N$, as for example would be the case in the asymptotically $AdS_4 \times \mathbb{S}^7$ superselection sector of M-theory that is dual to the ABJM theory \cite{Aharony:2008ug}. To leading order in $1/N$ all fields are then free, so in particular for a massive scalar field we have the Heisenberg picture expression \begin{equation}\label{bulkphi} \phi(x)=\sum_n\left[f_n(x)a_n+f_n^*(x)a_n^\dagger\right], \end{equation} where $f_n$ are Klein-Gordon normalizeable solutions of the bulk wave equation in the background of interest and $a_n$ and $a_n^\dagger$ are annihilation and creation operators obeying the usual algebra. I will always consider the CFT on $\mathbb{R}\times \mathbb{S}^{d-1}$, so the index $n$ will be discrete. Moreover I will always take the modes $f_n$ to have definite angular momentum and positive ADM energy, meaning that they will approach $r^{-\Delta}e^{-i\omega t}Y_{\ell m_1\ldots m_{d-2}}(\Omega)$, with $\omega\geq0$, at large $r$ in coordinates where the metric approaches \begin{equation} ds^2=-(r^2+1)dt^2+\frac{dr^2}{r^2+1}+r^2 d\Omega_{d-1}^2. \end{equation} Here as usual \begin{equation} \Delta=\frac{d}{2}+\frac{1}{2}\sqrt{d^2+4m^2}, \end{equation} and I have set the AdS radius to one. In order for \eqref{bulkphi} to make sense as an operator expression in the CFT we need to give a CFT expression for $a_n$. The right choice \cite{Banks:1998dd} turns out to be to take \begin{equation}\label{opmap} a_n\propto \mathcal{O}_{\omega \ell m_1\ldots m_{d-2}}, \end{equation} where on the right hand side we have the Fourier transform \begin{equation} \mathcal{O}_{\omega \ell m_1\ldots m_{d-2}}=\int dt \int d\Omega e^{i\omega t}Y^*_{\ell m_1\ldots m_{d-2}}(\Omega)\mathcal{O}(t,\Omega), \end{equation} where $\mathcal{O}$ is the CFT primary operator dual to $\phi$. 
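As a quick consistency check (a standard manipulation, added here for completeness), Heisenberg evolution of $\mathcal{O}$ fixes the commutator of these Fourier modes with the CFT Hamiltonian:
\begin{equation}
e^{iHs}\mathcal{O}_{\omega \ell m_1\ldots m_{d-2}}e^{-iHs}=\int dt \int d\Omega\, e^{i\omega t}Y^*_{\ell m_1\ldots m_{d-2}}(\Omega)\mathcal{O}(t+s,\Omega)=e^{-i\omega s}\mathcal{O}_{\omega \ell m_1\ldots m_{d-2}},
\end{equation}
where the second equality follows from shifting the integration variable $t\to t-s$. Expanding both sides to first order in $s$ gives $[H,\mathcal{O}_{\omega \ell m_1\ldots m_{d-2}}]=-\omega\, \mathcal{O}_{\omega \ell m_1\ldots m_{d-2}}$, the commutation relation used repeatedly in what follows.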
For compactness I will simply refer to these operators as $\mathcal{O}_\omega$ below; it will always be implicit that $\omega\geq 0$. \eqref{opmap} is uniquely determined by requiring that \eqref{bulkphi} obey its bulk equation of motion and be consistent with the ``extrapolate'' dictionary \cite{Banks:1998dd,Harlow:2011ke} \begin{equation} \lim_{r\to\infty}\phi(t,r,\Omega)r^{\Delta}=\mathcal{O}(t,\Omega). \end{equation} One can check that $\mathcal{O}_\omega$ and $\mathcal{O}_\omega^\dagger$ have the right algebra to leading order in $1/N$; this follows, for example, from the large $N$ operator product expansion \begin{equation} \mathcal{O}(y)\mathcal{O}(y')=\frac{1}{|y-y'|^{2\Delta}}+\mathcal{O}^2(y')+O(1/N). \end{equation} Here I use $y$ to denote a boundary point, as opposed to $x$, which is a bulk point. One can also write \eqref{opmap} in position space \cite{Hamilton:2006az,Kabat:2011rz} as \begin{equation}\label{position space} \phi(x)=\int d^d y \sqrt{\gamma(y)}K(x;y)\mathcal{O}(y), \end{equation} where $\gamma$ is the boundary metric and $K(x;y)$ is sometimes called the ``smearing function''.\footnote{For some backgrounds this expression does not quite exist as written \cite{Leichenauer:2013kaa}, but it can always be fixed up by smearing the bulk operator a little.} This position space expression is convenient in the treatment of $1/N$ corrections \cite{Heemskerk:2012mn}. For example in the presence of a cubic interaction $\frac{g}{3N}\phi^3$, solving the bulk equation of motion to next-to-leading order in $1/N$ gives \cite{Heemskerk:2012mn} \begin{align}\nonumber \phi(x)=&\int d^d y \sqrt{\gamma(y)}K(x;y)\mathcal{O}(y)\\\nonumber &+\frac{g}{N}\int d^{d+1} x'\sqrt{-g(x')} d^d y \sqrt{\gamma(y)}d^d y' \sqrt{\gamma(y')}G(x;x')K(x';y)K(x';y')\mathcal{O}(y)\mathcal{O}(y')\\\label{mapcorr} &+O(1/N^2). \end{align} Here $G$ is a particular type of bulk Green's function. The right hand side has an obvious diagrammatic interpretation that continues to higher orders.
We will not need the details, though; the point for us is just that the right hand side involves (nonlocal) polynomials of higher and higher order in $\mathcal{O}_\omega$ as we go to higher order in $1/N$. The $\mathcal{O}_\omega$'s are thus the ``building blocks'' one uses to perturbatively assemble bulk fields. In the CFT the $\mathcal{O}_\omega$'s are somewhat singular operators; for example, they exactly obey \begin{equation} [H,\mathcal{O}_\omega]=-\omega \mathcal{O}_\omega, \end{equation} so they have nonzero matrix elements only between energy eigenstates that differ by exactly $\omega$. Papadodimas and Raju suggest redefining them by integrating over a small frequency range to make them more robust \cite{Papadodimas:2013wnh,Papadodimas:2013jku}; I will instead leave them as they are but insist on using them only in wave packets that are localized to within some time range $\Delta t$. \subsection{The Two-sided Bulk} Let's now consider interacting bulk fields propagating on the two-sided geometry of figure \ref{ads2side}. There is a natural CPT transformation $\Theta$ which exchanges fields on the two sides; in Schwarzschild coordinates we have the action \begin{equation} \Theta^\dagger \phi_{I}(t,r,\Omega)\Theta= \phi_{III}(-t,r,\Omega), \end{equation} where $\phi$ is a real bulk scalar field. The Hilbert space on the slice $t=0$ is a tensor product of states in region I and states in region III, each of which is conveniently spanned by eigenstates of the Hamiltonians $H_R$ and $H_L$, respectively.\footnote{This decomposition and the definition of $H_R$ and $H_L$ are straightforward for scalars, but for gauge fields and gravity there is some subtlety due to the constraints at the interface between region I and region III \cite{Donnelly:2011hn,Donnelly:2012st,Casini:2013rba,Radicevic:2014kqa}.
I discuss this at some length in appendix \ref{gaugeapp}, but the upshot is that I do not expect these subtleties to affect equations \eqref{HOcom}, \eqref{HOcom2}, and \eqref{bulkmir} below, with $H_R$ and $H_L$ interpreted as the ADM Hamiltonians.} It is convenient to define $\Theta$ as a map from the left Hilbert space to the right Hilbert space rather than from the full Hilbert space to itself; for example, we can then use a basis $|i\rangle_R$ of $H_R$ eigenstates for the states in region I and a basis \begin{equation} |i^*\rangle_L \equiv \Theta^\dagger |i\rangle_R \end{equation} of $H_L$ eigenstates for the states in region III. An operator $A$ in region I with action \begin{equation} A|i\rangle_R=\sum_j A_{ji}|j\rangle_R \end{equation} has a $CPT$ conjugate which acts as \begin{equation}\label{CPTconj} \Theta^\dagger A \Theta|i^*\rangle_L=\sum_j A^*_{ji}|j^*\rangle_L. \end{equation} We will be interested in operators $\mathcal{O}_\omega$ on the right which obey \begin{align}\nonumber [H_R,\mathcal{O}_\omega]&=-\omega \mathcal{O}_\omega\\ [H_L,\mathcal{O}_\omega]&=0.\label{HOcom} \end{align} Their CPT conjugates will then obey \begin{align}\nonumber [H_L,\Theta^\dagger \mathcal{O}_\omega \Theta]&=-\omega\, \Theta^\dagger \mathcal{O}_\omega \Theta\\ [H_R,\Theta^\dagger \mathcal{O}_\omega \Theta]&=0.\label{HOcom2} \end{align} Intuitively, the $\mathcal{O}_\omega$'s create and annihilate excitations with $H_R-H_L=\omega$ in region I, while their CPT conjugates create and annihilate excitations with $H_R-H_L=-\omega$ in region III, and both are needed to understand region II (or IV). Up to mixing, the purple modes in figure \ref{ads2side} are created and annihilated by acting with $\mathcal{O}$'s and time-evolving, while the red modes are created by acting with their CPT conjugates and evolving. Let's now consider a bulk state \begin{equation}\label{psib} |\psi_{bulk}\rangle\equiv \sum_{ij}C_{ji}|i^*\rangle_L |j\rangle_R.
\end{equation} If $C$ is invertible then this state has the interesting property that \begin{equation} \Theta^\dagger A \Theta|\psi_{bulk}\rangle=C A^\dagger C^{-1}|\psi_{bulk}\rangle. \end{equation} In other words, the action of an operator in region III on the state can be written (in a way that depends on the state) as an operator acting on region I. That this is possible is a consequence of the entanglement of the state \eqref{psib}; it is the same basic idea that is at the heart of the Reeh-Schlieder theorem of relativistic quantum field theory \cite{Streater:1989vi}. A similar equation holds for the action of $\Theta^\dagger A \Theta$ on a more general state obtained by acting on the state \eqref{psib} with another operator $A'$ on the right: \begin{equation}\label{bulkmir} (\Theta^\dagger A \Theta) A'|\psi_{bulk}\rangle=A'(C A^\dagger C^{-1})|\psi_{bulk}\rangle. \end{equation} One interesting bulk state to consider is the Hartle-Hawking-Israel or thermofield double (TFD) state \cite{Hartle:1976tp,Israel:1976ur} \begin{equation}\label{TFD} C_{ji}\propto \delta_{ij} e^{-\beta E_i/2}, \end{equation} which is the natural choice of ``ground state'' for the system. In this case an operator $\mathcal{O}_\omega$ obeying \eqref{HOcom} will have a CPT conjugate that acts on the TFD state as \begin{equation} \Theta^\dagger \mathcal{O}_\omega \Theta|\psi_{bulk}\rangle=e^{-\beta\omega/2}\mathcal{O}_\omega^\dagger|\psi_{bulk}\rangle. \end{equation} In AdS/CFT the TFD state is dual to itself, with the bulk energy eigenstates replaced by energy eigenstates of two copies of the CFT \cite{Maldacena:2001kr}. We will be interested in more general states than the TFD, so for the most part we will stick to the general expression \eqref{psib}, assuming only that $C$ is invertible. The reduced density matrix on the right is then \begin{equation}\label{rhoR} \rho_R=CC^\dagger.
\end{equation} \subsection{The Papadodimas-Raju proposal} The proposal of Papadodimas and Raju is to use equation \eqref{bulkmir} to motivate a construction of the red modes in figure \ref{adsinfall}; if the extra operators from the left side that we need in the two-sided case to construct the interior can be rewritten as operators acting on the right side, why don't we just find CFT operators with this action and use them in the one-sided case as well? There are several issues with trying to do this, however. \begin{figure} \begin{center} \includegraphics[height=5cm]{ghost.pdf} \caption{The ``method of images''. We define operators in region III, but use them only to compute things which are localized above the orange shell. The expression of operators in region II in terms of the region III (and I) operators is found by using the two-sided \textit{bulk} equations of motion, \textit{assuming that the shell does not exist}. Of course the shell does exist; its existence can be confirmed just in region I using the ordinary AdS/CFT dictionary, and the regions below it in this figure therefore do not exist.}\label{ghost} \end{center} \end{figure} First of all, any attempt to produce the $\Theta^\dagger A \Theta$ operators in the single CFT of a one-sided collapse seems like it will accomplish too much: in addition to constructing region II we will also construct region III. This would be overkill; if we really have region III then we should also have a second copy of the CFT. The situation here, however, is similar to the method of images in electrostatics; we use the $\Theta^\dagger A \Theta$'s \textit{only} to compute things in region II. This is illustrated in figure \ref{ghost}. Secondly, \eqref{bulkmir} depends on the matrix $C$, which came from the choice of bulk state \eqref{psib}. So if $C$ appears in our definition of operators behind the horizon, we are essentially putting in the state that we want to get out. But how do we choose it?
For now I will just assume that there is some prescription which in some appropriate sense agrees with the TFD state to leading order in $1/N$; I return to this question in section \ref{TFDsec}. There is also an immediate technical problem. Say we want to define a ``mirror'' operator $\widetilde{\mathcal{O}}_\omega$ in the CFT whose bulk interpretation is the same as $\Theta^\dagger \mathcal{O}_\omega \Theta$ when used in constructing operators in region II. Moreover say we have some ``smooth horizon'' finite-energy state $|\psi\rangle$ in the CFT on which $\widetilde{\mathcal{O}}_\omega$ acts as $C\mathcal{O}^\dagger_\omega C^{-1}$. The full set of CFT states can be generated from this state by acting with enough local CFT operators \cite{Streater:1989vi}, but if we demand that $\widetilde{\mathcal{O}}_\omega$ acts as in equation \eqref{bulkmir} for $A'$ any polynomial of local CFT operators, then we would discover that $\widetilde{\mathcal{O}}_\omega$ commutes with all local operators and is thus proportional to the identity.\footnote{This is a version of the ``commutator'' argument of \cite{Almheiri:2013hfa}, which is a standard criticism of ``$A=R_B$''.} One of the two main new ideas of the PR papers \cite{Papadodimas:2013wnh,Papadodimas:2013jku} is to ameliorate this by requiring $\widetilde{\mathcal{O}}_\omega$ to act as \eqref{bulkmir} only when $A'$ is in some small set of operators $\mathcal{A}$. More precisely, they define the set of operators $\mathcal{A}$ as the set of all polynomials in the $\mathcal{O}_\omega$'s, their hermitian adjoints, the $C$ conjugates of both, the Hamiltonian, and the charges for any bulk gauge fields, with the restrictions that both the degree of the polynomial and the energy of all operators present cannot be too large.\footnote{The Hamiltonian and any conserved charges can be understood as zero modes of single-trace operators, but they are sufficiently special that they sometimes need to be discussed separately. 
From here on the set of $\mathcal{O}_\omega$'s and $\widetilde{\mathcal{O}}_\omega$'s should always be understood as \textit{not} including the zero modes of any conserved currents. The symbol $A_\alpha$ will denote a generic element of the algebra $\mathcal{A}$, which does include them.} I will denote the maximal degree as $d_{max}$, and require that the total energy be much less than the energy of the black hole. I will also demand that each frequency index $\omega$ is integrated against a wave packet which localizes it to within some particular time range $\Delta t$. We can estimate the total number of linearly independent elements of the set $\mathcal{A}$ as follows. First of all, we can get the most operators for a given total energy by taking them all to have $\omega\lesssim \frac{1}{r_s}$, where $r_s$ is the Schwarzschild radius.\footnote{For simplicity I will occasionally assume that $r_s$ is not parametrically larger in $N$ than the AdS radius. The temperature will then also be of order the AdS scale.} To avoid the modes they create being confined to within a Planckian distance of the horizon, we must cut off their total angular momentum at $\ell_{max}\sim \frac{r_s}{\ell_p}$. The total number of angular momentum modes at a given frequency then scales like $\ell_{max}^{d-1}\sim S$. The number of linearly independent wave packets at a given angular momentum is of order $\frac{\Delta t}{r_s}$, so we can estimate the number of linearly independent elements of $\mathcal{A}$ as \begin{equation} |\mathcal{A}|\sim \left(\frac{\Delta t}{r_s} S\right)^{d_{max}}. \end{equation} In order to have a chance at nontrivial $\widetilde{\mathcal{O}}_\omega$'s we need \begin{equation}\label{algbound} |\mathcal{A}|\ll e^S, \end{equation} which we can obtain in various ways depending on what we assume about $\Delta t$.
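As a quick numerical illustration of this counting (a toy evaluation with an illustrative value of $S$; the order-one factors are not meaningful), one can check that the maximal choices of $\Delta t$ and $d_{max}$ discussed below saturate $\log|\mathcal{A}|\sim S$, so that satisfying \eqref{algbound} requires taking $d_{max}$ parametrically below them:

```python
import math

# Toy check of the counting |A| ~ ((dt/r_s) * S)^{d_max} against |A| << e^S.
# S is the black hole entropy; all numbers are purely illustrative.
S = 1.0e4

def log_A(dt_over_rs, d_max):
    """log|A| for d_max factors drawn from ~ (dt/r_s)*S independent modes."""
    return d_max * math.log(dt_over_rs * S)

# The three choices of time range discussed in the text:
cases = {
    "dt ~ r_s e^sqrt(S), d_max ~ sqrt(S)": (math.exp(math.sqrt(S)), math.sqrt(S)),
    "dt ~ r_s S,         d_max ~ S/log S": (S, S / math.log(S)),
    "dt ~ r_s,           d_max ~ S/log S": (1.0, S / math.log(S)),
}
for name, (dt, d) in cases.items():
    # A ratio of order one means the bound |A| << e^S is saturated up to O(1).
    print(name, "log|A|/S =", log_A(dt, d) / S)
```

In each case the ratio comes out of order one, confirming that these scalings mark the boundary of \eqref{algbound}.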
PR take $\Delta t\sim r_s e^{\sqrt{S}}$, which would then imply that we need $d_{max}$ to be at most $\sqrt{S}$, but we could also take $\Delta t \sim r_s S$, or even $\Delta t\sim r_s$, in which case we can have $d_{max}\sim S/\log S$. It is convenient to take the time range the wave packets are localized in to be centered at $t=0$, which we can do without loss of generality by moving the shell back in time as in figure \ref{ghost}. Intuitively the set $\mathcal{A}$ is supposed to be the ``set of observables outside the black hole that are easy for an infalling observer to measure''.\footnote{An important point here is that ``easy'' is different from ``possible''; things which involve $O(S)$ $\mathcal{O}_\omega$'s seem quite possible to measure. I return to this below.} These restrictions mean that $\mathcal{A}$ is not quite a von Neumann algebra, since it is not closed under multiplication. I will nonetheless sometimes refer to it for convenience as the ``small algebra''. For any state $|\psi\rangle$ in the CFT one can then define a subspace \begin{equation} \mathcal{H}_\psi\equiv \mathcal{A} |\psi\rangle. \end{equation} Inspired by \eqref{bulkmir}, PR then suggest defining the action of the mirror operators on $\mathcal{H}_\psi$ as \begin{align}\nonumber \widetilde{\mathcal{O}}_{\omega}|\psi\rangle&=C \mathcal{O}_\omega^\dagger C^{-1}|\psi\rangle\\ \widetilde{\mathcal{O}}_{\omega}^\dagger|\psi\rangle&=C \mathcal{O}_\omega C^{-1}|\psi\rangle,\label{mireq} \end{align} together with\footnote{In fact PR instead require that $\widetilde{\mathcal{O}}$ commutes only with the $\mathcal{O}$'s, while for the Hamiltonian $H$ they demand $[H,\widetilde{\mathcal{O}}_\omega]=\omega \widetilde{\mathcal{O}}_\omega$ (and a similar equation for any conserved charge $Q$). I explain in appendix \ref{gaugeapp} why I prefer the prescription given here.
The difference comes from whether we interpret the CFT Hamiltonian $H$ as representing the bulk operator $H_R$ or the bulk operator $H_R-H_L+E_0$ with $E_0$ some constant. My choice is the former, whereas they would like the latter, but only the former seems consistent with the OPE structure of the CFT.} \begin{equation} [\widetilde{\mathcal{O}}_\omega,A_\alpha]\mathcal{H}_\psi=[\widetilde{\mathcal{O}}_\omega^\dagger,A_\alpha]\mathcal{H}_\psi=0.\label{algeq} \end{equation} Equations \eqref{mireq} say that the mirror operators act on the state $|\psi\rangle$ as if they were $\Theta^\dagger \mathcal{O}\Theta$ acting on the bulk state \eqref{psib}, and equations \eqref{algeq} say that their algebra acting on $\mathcal{H}_\psi$ is the same as it would be in the bulk. In other words what the proposal is doing is ``simulating'' the two-sided bulk of the previous section within a single copy of the CFT. These equations can be interpreted as a set of linear constraints which must be solved to find the mirror operators; they should be solvable provided they are consistent.\footnote{Their consistency essentially follows from \eqref{algbound} and the equilibrium condition \eqref{eqcond} below; for details see the PR papers \cite{Papadodimas:2013wnh,Papadodimas:2013jku}.} Note that \eqref{mireq} and \eqref{algeq} are uncorrected to all orders in $1/N$; perturbative corrections all go into the map to bulk fields, as in equation \eqref{mapcorr}, and the choice of $C$. This definition immediately raises the question, however, of which states $|\psi\rangle$ should be used to define $\mathcal{H}_\psi$. The second main new idea of the PR papers \cite{Papadodimas:2013wnh,Papadodimas:2013jku} is to give such a rule. The idea is that we should only expect \eqref{mireq} to be satisfied if the state $|\psi\rangle$ is an ``equilibrium'' state.
There are various ways to define the equilibrium condition; the one I will mostly use is that in any equilibrium state $|\psi\rangle$ we must have\footnote{This is different from the equilibrium condition proposed by PR; they demand only that expectation values of elements of $\mathcal{A}$ are time-independent to exponential precision. This is not sufficient however for the correlation functions of $\mathcal{O}$'s and $\widetilde{\mathcal{O}}$'s to reproduce bulk correlators in the state \eqref{psib}. For example a superposition of two black holes of very different mass would be an equilibrium state according to their criterion, since no elements of $\mathcal{A}$ mix between them. My condition \eqref{eqcond} implies theirs when $C$ commutes with the Hamiltonian, but is also necessary and probably sufficient for the consistency of the proposal.} \begin{equation}\label{eqcond} \langle\psi|A_\alpha |\psi\rangle=\mathrm{tr} (C C^\dagger A_\alpha)+O\left(e^{-c S}\right) \end{equation} for any $A_\alpha\in \mathcal{A}$, for some $O(1)$ constant $c$. This condition has two motivations: first of all, we certainly shouldn't expect a CFT state $|\psi\rangle$ to look like the bulk state \eqref{psib} (evolved up to region II) unless the expectation values of operators in region I constructed by the BDHM/HKLL map are consistent with this.
This map is supposed to accurately reconstruct the bulk to all orders in $1/N$, so any differences should be non-perturbatively small.\footnote{Remember we are considering big black holes so $S$ is proportional to some positive power of $N$.} Secondly, since we assumed that to leading order in $1/N$ we have $C C^\dagger\approx \frac{1}{Z}e^{-\beta H_{CFT}}$, where we can now define this approximation more carefully as meaning that the expectation values of elements of $\mathcal{A}$ in the two ensembles agree to leading order in $1/N$, we can think of states obeying \eqref{eqcond} as being states where the black hole has ``settled down'' enough that the state looks thermal with respect to the small algebra $\mathcal{A}$. In particular any objects would have to have been thrown in more than a time of order $r_s S$ in the past in order for the excitations they created to die down to exponentially small size.\footnote{From the point of view of this observation it seems rather natural to take $\Delta t\sim r_s S$, since this gives the infalling observer the ability to do experiments involving equilibration to the level of precision involved in \eqref{eqcond}. Having $\Delta t$ be shorter, for example of order $r_s \log S$, seems too restrictive given our intuition that $\mathcal{A}$ should represent what is ``not too hard'' to do.} By the argument of Hayden and Preskill \cite{Hayden:2007cs} it is then much too late for them to affect the experience of an observer who jumps in near $t=0$. Thus the equilibrium states ``have a right'' to a smooth horizon. By using equations \eqref{mireq}, \eqref{algeq}, and \eqref{eqcond}, it is clear that any expectation value of bulk fields constructed from the $\mathcal{O}$'s in the small algebra $\mathcal{A}$ and their mirror $\widetilde{\mathcal{O}}$'s will agree with low energy effective field theory in the state \eqref{psib} to all orders in $1/N$. 
Equilibrium states obey an important ``KMS'' condition \begin{equation}\label{KMS} \langle\psi|A_\alpha A_\beta|\psi\rangle=\langle\psi|A_\beta C C^\dagger A_\alpha (C C^\dagger)^{-1}|\psi\rangle+O\left(e^{-cS}\right), \end{equation} for any two elements $A_\alpha$, $A_\beta$ in $\mathcal{A}$. This condition is necessary for the consistency of \eqref{mireq}, since it ensures that the action of $\widetilde{\mathcal{O}}_\omega^\dagger$ on the right is consistent with its natural action on the left induced from the action of $\widetilde{\mathcal{O}}_\omega$ on the right.\footnote{I thank Herman Verlinde and Xi Dong for discussions of this point.} The $\widetilde{\mathcal{O}}$ operators have the uncomfortable property that they are ``state-dependent''; ordinarily in quantum mechanics one first defines an observable to be some hermitian operator and then sticks to this hermitian operator regardless of the state of the system. For now we will just accept this, but I will give a detailed discussion of the extent to which this is a modification of quantum mechanics (it is) in section \ref{sdsec}. \subsection{Choosing the bulk state}\label{TFDsec} I now return to the choice of the ``target'' bulk state \eqref{psib}. We should really think of the equilibrium condition \eqref{eqcond} as a ``compatibility'' condition between a set of CFT equilibrium states $\mathcal{E}$ and a two-sided bulk state labelled by $C$; in order to realize the PR proposal we must look for compatible pairs $(\mathcal{E},C)$. The most obvious $C$ to consider is the TFD state, and a set of CFT states which is compatible with it is \begin{equation}\label{thermalpure} |\psi\rangle=\frac{1}{\sqrt{Z}}\sum_j e^{-\beta E_j/2+i\phi_j}|j\rangle, \end{equation} where $|j\rangle$ are energy eigenstates and $\phi_j$ are randomly chosen phases.
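The way in which a single state of this form mimics the thermal ensemble can be illustrated numerically in a toy model; here a random spectrum and a random hermitian matrix stand in for the CFT Hamiltonian and for an element of $\mathcal{A}$ (all choices are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d, beta = 400, 1.0

# Toy spectrum and a random hermitian observable standing in for A_alpha.
E = np.sort(rng.uniform(0.0, 4.0, size=d))
R = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
A = (R + R.conj().T) / (2.0 * np.sqrt(d))   # eigenvalues of order one

# Thermal (canonical) density matrix, diagonal in the energy basis.
rho = np.exp(-beta * E)
rho /= rho.sum()
thermal = np.real(np.sum(rho * np.diag(A).real))

# Random-phase thermal pure state, as in eq. (thermalpure).
phases = rng.uniform(0.0, 2.0 * np.pi, size=d)
psi = np.sqrt(rho) * np.exp(1j * phases)

expectation = np.real(np.vdot(psi, A @ psi))
deviation = abs(expectation - thermal)
print(deviation)  # typically of order 1/sqrt(d)
```

The off-diagonal contributions come with random phases and add incoherently, so the deviation from the thermal answer is suppressed by the square root of the Hilbert space dimension, in line with the ETH estimate that follows.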
The compatibility, meaning that expectation values of elements of $\mathcal{A}$ obey \eqref{eqcond} with $CC^\dagger=\frac{1}{Z}e^{-\beta H}$, follows from the eigenstate thermalization hypothesis \cite{Srednicki:1995pt} \begin{equation} A_{ij}=\delta_{ij}A(E_i)+e^{-S\left(\frac{E_i+E_j}{2}\right)/2}R_{ij}, \end{equation} where $A$ and $|R|$ are smooth $O(1)$ functions of $E$ but the phase of $R$ varies erratically. The states \eqref{thermalpure} are thus a very promising starting point for implementing the PR proposal. Interestingly, however, at the level of $1/N$ corrections other natural sets of ``black hole-like'' CFT states are \textit{not} compatible with the TFD state; this was briefly pointed out by PR \cite{Papadodimas:2013jku}. I now discuss the reason for this in somewhat more detail. Consider a black hole formed in a pure state from a collapse that lies in some thin energy shell. If the state is sufficiently generic, expectation values of simple operators should be exponentially close (in the entropy) to their \textit{microcanonical} expectation values. This follows for example from a theorem of Lloyd \cite{lloydthesis}, which states that for any operator $A$ on a Hilbert space of dimension $d$, we have \begin{equation}\label{lloydform} \int dU \left(\langle\psi(U)|A|\psi(U)\rangle-\langle A\rangle_{MM}\right)^2=\frac{1}{d+1}\left(\langle A^2\rangle_{MM}-\langle A\rangle_{MM}^2\right), \end{equation} where $MM$ denotes the expectation value in the maximally mixed density matrix $I/d$. Here $|\psi(U)\rangle$ denotes the state created by acting on some reference state with a unitary matrix $U$, which is then integrated over the Haar measure.
In other words the expectation value of any operator in a particular pure state is generically exponentially close (in the entropy $\log d$) to its maximally mixed expectation value.\footnote{Here I assume that $A$ is sufficiently smooth that $A^2$ does not have an expectation value which is exponentially enhanced; this should be the case for any operators we consider here.} We'd like to take this Hilbert space to be the set of CFT states in some narrow energy band, i.e., the microcanonical ensemble, but we have the technical issue that not all operators in the small algebra $\mathcal{A}$ send this subspace into itself. For any operator $A_\alpha\in \mathcal{A}$, however, we can always construct an operator that \textit{does} send the microcanonical subspace into itself by just sandwiching $A_\alpha$ between two projection operators that project onto the subspace. Lloyd's result \eqref{lloydform} applies to this projected operator, but actually we can ignore the projections in three of the four terms. Indeed we have \begin{equation}\label{lloyd2} \int dU \left(\langle\psi(U)|A_\alpha|\psi(U)\rangle-\langle A_\alpha\rangle_{MC}\right)^2=\frac{1}{d+1}\left(\langle A_\alpha\Pi A_\alpha\rangle_{MC}-\langle A_\alpha\rangle_{MC}^2\right), \end{equation} where the average is now over pure states in the energy shell, $\Pi$ is the projection operator onto states in the shell, and $MC$ means the expectation value in the microcanonical density matrix that is proportional to the identity on this energy shell and is zero otherwise.
Finally, if we take $A_\alpha$ to be hermitian then by inserting a complete set of energy eigenstates we see that \begin{equation} \langle A_\alpha\Pi A_\alpha\rangle_{MC}\leq \langle A_\alpha^2\rangle_{MC}, \end{equation} which together with \eqref{lloyd2} immediately shows that the expectation value of a reasonably smooth operator in a typical pure state drawn from the microcanonical ensemble will be exponentially close to its microcanonical expectation value.\footnote{One might worry that these Haar averaged states are ``too typical'' in the sense that they usually must be built up over exponentially long times. In fact Lloyd's theorem holds for averages over much simpler sets of states, such as those generated by unitary 2-designs \cite{Hayden:2007cs}.} For comparison we can study how accurately the canonical ensemble reproduces the expectation values of elements of $\mathcal{A}$ in a collapse state $|\psi\rangle$ of narrow energy width. Expectation values in the canonical ensemble tend to differ from those in the microcanonical ensemble by \textit{powers} of the inverse entropy, so in fact the TFD state will already get the expectation values wrong at low orders in perturbation theory.\footnote{It may be somewhat unfamiliar to see ensemble inequivalence competing with perturbation theory in interactions; the reason is that for a big black hole we have taken the entropy to be of order $N$ to some power while the interactions are suppressed by powers of $1/N$. This is different from the usual situation in statistical mechanics where interactions are suppressed only by factors like $1/137$ while entropies are of order $10^{23}$.} Indeed for any operator $A_\alpha$ with reasonably smooth diagonal matrix elements in energy, we can estimate its canonical expectation value as \begin{equation} \langle A_\alpha\rangle=\frac{\int dE e^{S(E)-\beta E}A_\alpha(E)}{\int dE e^{S(E)-\beta E}}.
\end{equation} The saddle point approximation to these integrals gives back the microcanonical expectation value, but the perturbative corrections to the saddle point will only be suppressed by powers of the inverse entropy, which is not good enough to satisfy the equilibrium condition \eqref{eqcond}. Thus we see that for black holes formed by a collapse that is well-localized in energy, it seems we should look for a ``target'' bulk state \eqref{psib} where the reduced density matrix $CC^\dagger$ is close to the \textit{microcanonical} density matrix, which is constant in some energy range and then very small outside of it. This, however, is rather problematic from the point of view of the PR construction. The obvious choice would be to take $C$ to be the ``microcanonical double state'', where $C$ is diagonal with real and positive elements. But in this case the action of the mirror operators is rather badly defined; consider the state \begin{equation} \widetilde{\mathcal{O}}_\omega|\psi\rangle=C\mathcal{O}_\omega^\dagger C^{-1}|\psi \rangle, \end{equation} where $\omega$ is parametrically larger than the width of the energy band from which we pull $|\psi\rangle$. The $C^{-1}$ keeps the state $|\psi\rangle$ within the band, but the $\mathcal{O}_\omega^\dagger$ takes it out. When we then act with $C$ again we get a huge suppression, by an amount that depends on how exactly we define the microcanonical ensemble $CC^\dagger$ outside of the energy range we are interested in. Thus if we compute a correlation function like $\langle \widetilde{\mathcal{O}} \mathcal{O}\rangle$ it will be exceedingly small. This suggests that to the extent the state has a geometric interpretation at all, it does not involve two sides which are separated only by a single bifurcate horizon.
It then is far from clear that the $\mathcal{O}$'s and $\widetilde{\mathcal{O}}$'s provide sufficient initial data to reconstruct region II \`a la figure \ref{ghost}.\footnote{This argument does not apply to the TFD because its energy width is of order the temperature times $\sqrt{S}$, which is larger than $\omega$ for any operators of interest for the infalling observer.} This inability to deal with narrow states is problematic for the generality of the construction, since after all one would hope that, for example, exact energy eigenstates should have smooth interiors; in appendix \ref{hamsec} I give some brief speculation on what might be done about it.\footnote{This difficulty with states of narrow energy width is one of the main reasons that Raju and Papadodimas attempted to have $[H,\widetilde{\mathcal{O}}]\neq 0$. If this were possible it would ameliorate the problem somewhat, but I argue in appendix \ref{hamsec} that it does not seem to be consistent within the CFT to do this.} For now, to avoid this issue, which is something of a distraction from the main point of this paper, I will just restrict the discussion to bulk states where the energy fluctuations in $CC^\dagger$ are of order those in the TFD state. \section{Do States have Unique Interpretations?}\label{nonusec} Let's now try to understand better the global structure of the CFT Hilbert space in the PR proposal. There is a set $\mathcal{E}\subset\mathcal{H}_{CFT}$ of equilibrium states satisfying \eqref{eqcond}, relative to each of which one defines mirror operators $\widetilde{\mathcal{O}}$ such that the state resembles the bulk state \eqref{psib} for any infalling observer who jumps in in the vicinity of $t=0$. For observers who jump in much later or much earlier we use a different choice of the small algebra $\mathcal{A}$, so the set $\mathcal{E}$ is different.
The set $\mathcal{E}$ is \textit{not} a linear subspace of the Hilbert space; in fact its span (including different energies) is just $\mathcal{H}_{CFT}$. On top of each equilibrium state $|\psi\rangle$ we then build a linear subspace $\mathcal{H}_\psi$ by acting with either elements of $\mathcal{A}$ or their mirror operators. The other states in this subspace are to be interpreted as ``excited'', in some particular way. This leads to what seems to be an important consistency requirement for the proposal: the linear subspaces constructed in this way must not intersect. Suppose there were a state $|\chi\rangle$ in the Hilbert space which could be realized either by acting on some equilibrium state $|\psi\rangle$ with an operator $A_\alpha\in \mathcal{A}$ \textit{or} by acting on some other equilibrium state $|\psi'\rangle$ with a different operator $A_\beta\in \mathcal{A}$. In this case the physical interpretation of the state $|\chi\rangle$ would be ambiguous; would an infalling observer see it as the result of acting on the bulk state \eqref{psib} with $A_\alpha$ or with $A_\beta$? I will now argue that this situation can indeed be generically realized, and thus that in the PR proposal quantum states in $\mathcal{H}_{CFT}$ cannot have fixed physical interpretations. To demonstrate such a situation it is clearly sufficient to find a nontrivial element of the algebra $\mathcal{A}$ which sends equilibrium states to other equilibrium states. It is not immediately clear that this can be done; after all, the equilibrium condition \eqref{eqcond} is rather restrictive. Acting with any small number of $\mathcal{O}_\omega$'s and $\mathcal{O}_\omega^\dagger$'s can always be detected by the expectation value of some other small number of $\mathcal{O}_\omega$'s and $\mathcal{O}_\omega^\dagger$'s, since we can always just arrange to have a non-vanishing correlation function.
What we would like is a unitary transformation $\widetilde{U}$ that commutes with everything in $\mathcal{A}$ to exponential accuracy: we then would have \begin{equation} \langle\psi|\widetilde{U}^\dagger A_\alpha \widetilde{U}|\psi\rangle=\langle\psi| A_\alpha|\psi\rangle. \end{equation} An obvious guess for how to find such a $\widetilde{U}$ is to build it out of $\widetilde{\mathcal{O}}$ operators, since from \eqref{algeq} these commute with everything in $\mathcal{A}$. For example we can consider the operator \begin{equation}\label{tUeq} \widetilde{U}\equiv e^{i \alpha \widetilde{\mathcal{O}}_\omega^\dagger \widetilde{\mathcal{O}}_\omega}. \end{equation} At leading order in $1/N$ this is the exponential of the number operator for some mode behind the horizon; it rotates the phases of the number eigenstates for the mode. At higher order in $1/N$ it does not exactly have this interpretation, but it is well defined and from \eqref{algeq} it continues to commute with everything in $\mathcal{A}$ acting on the state $|\psi\rangle$.\footnote{If we could arrange $[H,\widetilde{\mathcal{O}}]\neq 0$ as advocated by PR, then here we would need to arrange for $\widetilde{U}$ to commute with $H$ within expectation values. This is rather restrictive, but seems to be possible by systematically ``improving'' \eqref{tUeq}. I argue in appendix \ref{hamsec} however that we must have $[H,\widetilde{\mathcal{O}}]=0$.} This operator thus sends the equilibrium state $|\psi\rangle$ to another equilibrium state according to \eqref{eqcond}, but according to bulk effective field theory the horizon is no longer smooth. More precisely, the state $\widetilde{U}|\psi\rangle$ is no longer annihilated by the ``infalling'' annihilation operator proportional to $\widetilde{\mathcal{O}}_\omega-C \mathcal{O}_\omega^\dagger C^{-1}$, and in fact the ``infalling'' number operator has an $O(1)$ expectation value.
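The $O(1)$ size of this excitation can be made explicit at leading order in $1/N$, where the relevant modes reduce to a pair of oscillators; the following is only a sketch of this free-field limit, with $b$ and $\tilde{b}$ standing in for canonically normalized versions of $\mathcal{O}_\omega$ and $\widetilde{\mathcal{O}}_\omega$, and with the state obeying $\tilde{b}|\psi\rangle=e^{-\beta\omega/2}b^\dagger|\psi\rangle$. The normalized infalling annihilation operator is then \begin{equation} a=\frac{\tilde{b}-e^{-\beta\omega/2}b^\dagger}{\sqrt{1-e^{-\beta\omega}}},\qquad a|\psi\rangle=0, \end{equation} and conjugation by $\widetilde{U}=e^{i\alpha\tilde{b}^\dagger\tilde{b}}$ sends $\tilde{b}\to e^{i\alpha}\tilde{b}$ inside $a$, so \begin{equation} \langle\psi|\widetilde{U}^\dagger a^\dagger a\,\widetilde{U}|\psi\rangle=\left|e^{i\alpha}-1\right|^2\frac{e^{-\beta\omega}}{1-e^{-\beta\omega}}\,\langle\psi|b\, b^\dagger|\psi\rangle=4\sin^2\frac{\alpha}{2}\,\bar{n}\left(\bar{n}+1\right), \end{equation} with $\bar{n}=(e^{\beta\omega}-1)^{-1}$. For $\alpha$ and $\beta\omega$ of order one this is indeed of order one.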
The operator $\widetilde{U}$ is not actually an element of $\mathcal{A}$ since it involves the mirror operators, but we can use \eqref{mireq} to define a new operator that has the same action on $|\psi\rangle$:\footnote{Technically for this equation to be valid we must perform the truncation of the exponential discussed in the following paragraph.} \begin{equation} V\equiv e^{i \alpha C \mathcal{O}_\omega^\dagger \mathcal{O}_\omega C^{-1}}= C U^\dagger C^{-1}, \end{equation} where $U\equiv e^{-i\alpha \mathcal{O}_\omega^\dagger \mathcal{O}_\omega}$ is the unitary operator whose mirror is $\widetilde{U}$. $V$ is not unitary, but its action on $|\psi\rangle$ preserves the norm since it is equivalent to the action of $\widetilde{U}$. It may appear surprising that acting with $V$ on the state preserves the expectation values of all elements of $\mathcal{A}$, but this amusingly follows from the KMS condition \eqref{KMS}. Indeed \begin{align}\nonumber \langle\psi|V^\dagger A_\alpha V|\psi\rangle&=\langle\psi|(C^{\dagger})^{-1}U C^\dagger A_\alpha C U^\dagger C^{-1}|\psi\rangle\\\nonumber &=\langle\psi|A_\alpha C U^\dagger C^{-1}C C^\dagger (C^\dagger)^{-1}U C^\dagger (C^\dagger)^{-1}C^{-1}|\psi\rangle+O(e^{-cS})\\ &=\langle\psi|A_\alpha|\psi\rangle+O(e^{-cS}). \end{align} This argument applies for any $\widetilde{U}$ that we build out of $\widetilde{\mathcal{O}}$'s. To complete the argument we would now like to argue that $V\in \mathcal{A}$, but this isn't actually true, for two reasons. First of all, it is not a polynomial in $\mathcal{O}_\omega$, $\mathcal{O}_\omega^\dagger$, and their $C$ conjugates of degree at most $d_{max}$. Secondly, we are supposed to integrate any $\omega$ index against a wave packet that localizes it into a time range $\Delta t$. The wave packets are easily included, and to deal with the first problem the convergent series expansion for the exponential in the definition of $V$ can simply be truncated at order $d_{max}$.
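The size of the error introduced by this truncation can be illustrated in a scalar toy model, replacing the (bounded, order-one) operator exponent by a number $x$; the Taylor remainder at order $d$ then scales like $1/(d+1)!\sim e^{-d\log d+O(d)}$ (only the scalar version is checked here, not the operator statement):

```python
import math

# Toy model of the truncation: the remainder of the Taylor series of exp(x)
# after order d, for an order-one argument x. The operator version in the text
# replaces x by a bounded operator; this scalar check only illustrates scaling.
x = 1.0

def truncation_error(x, d):
    partial = sum(x**k / math.factorial(k) for k in range(d + 1))
    return abs(math.exp(x) - partial)

for d in (5, 10, 20):
    stirling = math.exp(-(d + 1) * (math.log(d + 1) - 1))  # Stirling bound ~ 1/(d+1)!
    print(d, truncation_error(x, d), stirling)
```

The error falls factorially with the truncation order, consistent with the estimate quoted next.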
This breaks the unitarity of $U$, but only by an amount which is of order $e^{-d_{max} \log d_{max}+\# d_{max}}$, where $\#$ is some $O(1)$ number.\footnote{Here I have assumed that we can think of the operator $C \mathcal{O}_\omega^\dagger \mathcal{O}_\omega C^{-1}$ as being bounded at order one. Since to leading order in $1/N$ it is just a number operator, this will clearly be true for a fermionic field. For a bosonic field, we need to use the property that eigenstates of the number operator with large eigenvalue are exponentially suppressed in an equilibrium state.} If we take $\Delta t$ to be at most some power of $S$, then we found in the discussion around \eqref{algbound} that we can consistently take $d_{max} \sim S/\log S$; the error is then of order $e^{-c S}$, which doesn't violate the equilibrium condition. If we take $\Delta t$ to be of order $e^{\sqrt{S}}$, then we can only make the error as small as $e^{-\sqrt{S}}$ (the actual choice of power here is unimportant; I take $1/2$ for simplicity of exposition). This is non-perturbatively small, but still parametrically larger than $e^{-cS}$. Is a deviation from the equilibrium condition of order $e^{-\sqrt{S}}$ ``large enough'' that we should no longer expect a smooth horizon? By the rules I've described so far it is, but there is some question as to whether or not it is really reasonable to insist on the equilibrium condition \eqref{eqcond} being so strong. Saying that the deviation is of order $e^{-c S}$ is a stronger statement than saying that it is non-perturbatively small; for example in string theory in the early 1990s it was a major accomplishment to realize that nonperturbative effects should scale like $e^{-1/g}$ instead of $e^{-1/g^2}$ \cite{Shenker:1990uf}. As I discussed in the previous section however, even in perturbation theory it is unclear whether or not a well defined procedure exists for determining the target bulk state \eqref{psib}. Beyond perturbation theory it is even less clear.
Consider for example the non-perturbative process where a black hole of mass of order the Planck mass is spontaneously fluctuated out of the horizon and into the atmosphere. This decreases the entropy of the black hole by $S^{\frac{1}{d-1}}$, so we expect the probability of it happening is $e^{-S^{\frac{1}{d-1}}}$. For $AdS_4$, this is $e^{-\sqrt{S}}$. So apparently there are interesting non-perturbative effects of this size, which would be difficult to systematically include in determining the state \eqref{psib}, and since the matrix $C$ appears explicitly in the equilibrium condition it seems excessive to demand that it be sensitive to such small deviations. Of course even if we do require this the issue only arises if we take $\Delta t$ to be exponentially large, and there is no clear reason why we should do this.\footnote{Another possible loophole to the argument of this section is that we could simply \textit{declare} that $d_{max}$ is parametrically smaller than it needs to be for mirror operators to be consistently defined. Since we are just making up the rules anyway, there is no deep principle preventing this. As long as we take it to scale like some power of $S$ however, the error from truncating the exponential in defining $V$ will be exponentially small in that power of $S$. If the power is less than one then the caveats of this paragraph can again be applied to resist viewing this as a real resolution of the problem. In any event making the algebra smaller than necessary is unsatisfying, since it increases the set of experiments which are in principle doable but not described by the PR proposal.} We thus appear to have found a problem for the PR proposal; what is the bulk interpretation of the state $V|\psi\rangle$? Is the horizon excited or is it not? In fact this issue is somewhat related to the difficulty in identifying the right target state \eqref{psib}; who is to say that we shouldn't include some extra phases in $C$?
Or even a generic unitary $\widetilde{U}$? In fact if we were sufficiently perverse, we could make what seems to be an equally consistent version of the PR proposal where the mirror operators are defined in such a way that equilibrium states \textit{always} have firewalls. The operators $\widetilde{U}$ are something like a ``zero mode'' that pushes us in the direction of such a definition. \section{More General Black Holes}\label{genbhsec} Having introduced and analyzed the PR proposal in the case where it is strongest, the big AdS black hole, I now discuss two more general cases which introduce new issues. Another interesting generalization which I will not discuss is to big AdS black holes in states that are slightly mixed \cite{Verlinde:2013qya}. \subsection{Two-sided black holes} I first consider two-sided AdS black holes. The TFD state is obviously an interesting choice of state, where the interior seems to be describable in the BDHM/HKLL formalism without recourse to mirror operators. We can also consider more general entangled states of the two CFT's, which should be dual to more complicated wormholes \cite{Maldacena:2013xja,Shenker:2013pqa,Shenker:2013yza}. The new interesting question here is how the small algebra $\mathcal{A}$ should be defined. Let's assume that the infalling observer will jump in from the right side; should $\mathcal{A}$ be given by its usual definition in the right CFT, or should it include ``simple'' operators from \textit{both} CFT's? Let's first imagine that we have $\mathcal{A}=\mathcal{A}_R$. In the TFD state the mirror operators will then be the left algebra $\mathcal{A}_L$. Any unitary operator acting on the left CFT preserves the equilibrium condition that operators in the right CFT have thermal expectation values, so in particular we could send in a freight train from the left boundary in figure \ref{ads2side} and it would not be detected by the small algebra $\mathcal{A}$. 
In this setup it is thus even easier to get into the situation of the previous section; how do we know whether or not we should interpret the state with the freight train as a \textit{new} TFD state with a smooth horizon? In fact the BDHM/HKLL construction here would say that we should not interpret it this way; the mapping between the left and the right CFT's and the bulk is fixed by the Euclidean construction of the TFD state, which connects the two sides in a single copy of the CFT; it does not leave any freedom to redefine the dictionary between the two sides. So it seems taking the small algebra to just be $\mathcal{A}_R$ produces an inconsistency between the PR rules and the BDHM/HKLL dictionary. We are thus led to consider the alternate choice of algebra where $\mathcal{A}$ is generated by the union of $\mathcal{A}_L$ and $\mathcal{A}_R$. Here there is a new subtlety however; how should the equilibrium condition be defined? One choice would be to require that all expectation values of elements of $\mathcal{A}$ resemble the TFD state. In this case we would of course decide that the TFD state itself is an equilibrium state, but if we tried to construct mirror operators we would fail. There would now be elements of $\mathcal{A}$ which annihilate the state, so the conditions \eqref{algeq} would not be solvable. This is perhaps the correct answer, since in this case we do not need state-dependent mirror operators. This choice however leads to a problem in that it seems incorrect when we consider more generic two-sided wormholes. Let's consider a generic entangled pure state of two CFT's with fixed $H_R+H_L$. According to the definition this would \textit{not} be an equilibrium state, since there would now not be any simple entanglement between the left and right algebras. The PR construction would then not be able to tell us whether or not these states have smooth horizons, whereas if we don't believe in firewalls we might expect that they should. 
We could instead use the two-sided algebra but define the equilibrium condition to be that the expectation values of the two-sided algebra are consistent with the product state \begin{equation}\label{2eq} \rho_{Th}=\frac{e^{-\beta H_L}\otimes e^{-\beta H_R}}{Z^2}. \end{equation} The typical two-sided pure state now will be an equilibrium state; the PR construction will produce two sets of mirror operators, one for each side, and it will construct a smooth horizon on each side.\footnote{In fact this will be the same construction as if we had just used the one-sided algebra for whichever side we jump in from. This is reasonable, since generic wormholes are expected (if they are not singular!) to be ``long'' \cite{Shenker:2013pqa,Shenker:2013yza}; infallers from different sides won't be able to meet in the middle.} The TFD state now will be far out of equilibrium, so the PR proposal will be silent on what its properties should be. This is a good thing however, since as we just discussed we don't expect to need mirror operators to reconstruct the interior in the TFD state. This last choice is thus probably the most appealing, even though for generic states it still will have two copies of the ambiguity of the previous section. \subsection{Evaporating black holes}\label{evapsec} I now turn to the evaporating black hole. It is sometimes convenient to arrange for a big black hole in AdS to evaporate by coupling the CFT to an auxiliary system \cite{Rocha:2008fe,Almheiri:2013hfa}, but this can lead to puzzling issues which I would prefer to avoid so I will focus on an evaporating Minkowski black hole where we mostly expect local semiclassical physics to approximately hold everywhere. The cost of course is that we cannot use the CFT language, so the discussion will be less precise. 
To be concrete I will model the state of an evaporating black hole as a qubit system, which factorizes into three parts \cite{Harlow:2013tf} \begin{equation} \mathcal{H}=\mathcal{H}_H\otimes\mathcal{H}_B\otimes\mathcal{H}_R. \end{equation} Here $H$ is the remaining black hole (the ``stretched horizon''), which I take to consist of $m$ qubits, $B$ is the thermal atmosphere (``the zone''), which I take to have $k$ qubits, and $R$ is the Hawking radiation, which I take to have $n$ qubits. We will mostly be interested in the situation where the black hole is ``old'', i.e.\ when $n>m+k$. It is natural to take the small algebra $\mathcal{A}$ to be generated by polynomials in the Pauli operators acting on $\mathcal{H}_B\otimes \mathcal{H}_R$, since these are the degrees of freedom which are accessible to the infalling observer. We will restrict to polynomials of degree at most $p$. There is no natural dynamics in this model, so there is no analogue of the frequency wave packets we needed in the previous discussion. Effectively we are just taking $\Delta t \sim r_s$. We will consider a state $|\psi\rangle$ to be an equilibrium state if \begin{equation} \langle\psi|A_\alpha|\psi\rangle=2^{-n-m-k}\mathrm{tr} A_\alpha+O(2^{-c(n+m+k)}). \end{equation} To implement a version of the PR proposal we need to pick a ``target'' state; we will imagine that the horizon is smooth in the infalling frame if we have \begin{equation} |\psi\rangle_{AB}=2^{-k/2}\sum_a|a\rangle_A|a\rangle_B, \end{equation} where $a$ runs over $0$ and $1$ for each qubit. $\mathcal{H}_A$ is the ``image'' Hilbert space analogous to the second exterior in the PR proposal; we are essentially saying that each mode and its Hawking partner must be in the state $\frac{1}{\sqrt{2}}\left(|00\rangle+|11\rangle \right)$.
This state has the property that acting with the Pauli operator $Z_1$ on the first qubit has the same effect as acting with the Pauli operator $Z_2$ on the second qubit, and similarly for the Pauli $X_i$ operators and (up to a sign) the $Y_i$ operators. This property is the analogue of equation \eqref{bulkmir} above. We can then define mirror operators, for example by demanding that \begin{equation}\label{spinmir} \widetilde{X}_i A_\alpha |\psi\rangle=A_\alpha X_i |\psi\rangle, \end{equation} where $i$ runs over the qubits in $B$. It is interesting to see how large $p$ can be before we are no longer able to solve \eqref{spinmir} \cite{Papadodimas:2013jku}. This happens when the set of states $A_\alpha|\psi\rangle$ generated by acting on $|\psi\rangle$ with linearly independent elements of the algebra stops being linearly independent. The latest this can happen is when the number $|\mathcal{A}|$ of linearly independent elements of the algebra equals the dimensionality $2^{n+k+m}$ of the Hilbert space. For the qubit system the linearly independent elements of the algebra are just products of Pauli matrices on the various sites, so if we include all products of degree at most $p$ then \begin{equation} |\mathcal{A}|=\sum_{j=0}^p \begin{pmatrix} n+k\\ j \end{pmatrix}3^j. \end{equation} If we take $p=n+k$ then this sum can be evaluated to give $2^{2(n+k)}$, as expected since in this case $\mathcal{A}$ would just be the set of all operators acting on $n+k$ qubits. Mirror operators that commute with all operators on $B$ and $R$ can thus be defined only if $n+k<m$, or in other words if the black hole is ``young''.\footnote{This is a manifestation of Page's theorem, which says that when $m>n+k$ we can construct a purification of $B$ which lies entirely in $H$. The mirror operators then can be defined to act only on $H$, so they manifestly commute with operators on $B$ and $R$.} For old black holes however we clearly need to take $p<n+k$.
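This counting is simple enough to verify directly; a small numerical sketch (the function names and the example values of $n+k$ and $m$ are of course illustrative):

```python
from math import comb

# Count the linearly independent Pauli monomials of degree at most p on
# n+k qubits: choose which j sites are hit, with 3 Pauli choices per site.
def algebra_size(n_plus_k, p):
    return sum(comb(n_plus_k, j) * 3**j for j in range(p + 1))

# p = n+k recovers the full operator algebra, of dimension 4^(n+k)
assert algebra_size(8, 8) == 4**8

# Largest p keeping |A| below the Hilbert-space dimension 2^(n+m+k),
# found by accumulating the (exact, big-integer) sum term by term.
def max_degree(n_plus_k, m):
    bound = 2**(n_plus_k + m)
    total, term, p = 1, 1, 0                          # j = 0 term
    while True:
        term = term * 3 * (n_plus_k - p) // (p + 1)   # j = p+1 term
        if total + term >= bound:
            return p
        total, p = total + term, p + 1

# For an old black hole with m << n+k the threshold sits near 19% of n+k,
# in line with the O(1) value of alpha found analytically in the text.
print(max_degree(2000, 2) / 2000)
```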
We can estimate how much less by defining $p\equiv (n+k)\alpha$, with $0<\alpha<1$, and approximating the sum as an integral: \begin{equation} |\mathcal{A}|\sim\int_0^\alpha d\alpha' e^{(-\alpha' \log \alpha'-(1-\alpha')\log(1-\alpha')+\alpha' \log 3)(n+k)}. \end{equation} The integrand has a saddle point at $\alpha'=3/4$, where it is of order $2^{2(n+k)}$, so apparently when the black hole is old we need to take $\alpha<\frac{3}{4}$. In that case it will be dominated by its upper endpoint, so we can determine the maximal allowed value by solving \begin{equation} -\alpha \log \alpha-(1-\alpha)\log (1-\alpha)+\alpha \log 3=\log 2\left(1+\frac{m}{n+k}\right), \end{equation} which is solved by some $O(1)$ value of $\alpha$ that is about $0.2$ in the limit that $\frac{m}{n+k}\to 0$. We thus see that the PR proposal is able to arrange for the mirror operators to commute with any exterior operator that acts on at most $20\%$ of the atmosphere and Hawking radiation. This is significantly more than was found in earlier attempts to make the idea of $A=R_B$ work, where it was typically found that the constructions of operators behind the horizon had $O(1)$ commutators even with single qubit operators on the radiation \cite{Almheiri:2013hfa,harlow}. The reason the PR proposal is able to do so much better is that the $\widetilde{X}_i$ operators are not actually Pauli operators in the sense of having a spectrum which is half $1$ and half $-1$. In other words there is no qubit factor in the Hilbert space on which they have the standard action; they are \textit{not} associated with some particular purification $R_B$ of $B$. Should we then be satisfied? In this version of the PR proposal we can still raise the objections of section \ref{nonusec}, but I would instead like to draw attention to a different issue.
Namely, is it really reasonable to not allow a construction of the interior in situations where the infalling observer \textit{does} interact with more than $20\%$ of the Hawking radiation? There seems to be no major technical obstruction to an infalling observer doing so, and the observer has plenty of time to do it before the black hole evaporates.\footnote{Such experiments are much easier than any experiment requiring decoding of the Hawking radiation, where there may indeed be a good case that such experiments cannot be done by an observer who can also probe the interior \cite{Harlow:2013tf,aaronson}.} Moreover even if the infalling observer does nothing, an $O(1)$ fraction of the radiation could interact with a dust cloud on its way out.\footnote{This objection was also raised by Raphael Bousso in a talk at the ``Bulk Microscopy from Holography'' workshop at the Princeton Center for Theoretical Science in November 2013. See also \cite{Bousso:2013ifa}.} Do we really expect a firewall in such situations? I will postpone further discussion of this to section \ref{obsec} below, but I believe that any compelling theory of black hole physics will need to be able to describe such experiments, and any others that we can reasonably imagine doing. It is not allowed to ``plead the fifth''. \section{Some Comments on State-Dependence}\label{sdsec} I now return to the question of state-dependence. The goal of this section is to contrast the state-dependence of the PR proposal with more conventional phenomena which have something of the same flavor. My basic strategy is to understand to what extent ``state-dependent measurements'' can be realized as unitary evolution of the system coupled to some apparatus. I will argue that all ``standard examples'' of state-dependent measurement can be implemented in this way, but that the PR proposal cannot. Before giving the general discussion it is convenient to first introduce an example.
\subsection{State-Dependence and Spontaneous Symmetry Breaking} Consider the $3+1$ dimensional $O(N)$ symmetric scalar field theory with Lagrangian \begin{equation}\label{ONlag} \mathcal{L}=-\frac{1}{2}\partial_\mu \phi_i \partial^\mu \phi_i+\frac{m^2}{2}\phi_i \phi_i-\frac{g}{4}\left(\phi_i\phi_i\right)^2. \end{equation} Here $i$ is an index which runs from $1$ to $N$, and the summation convention is in force. In infinite volume (and with $m^2>0$) this theory has a continuous set of degenerate vacua $|\hat{n}\rangle$, where $\hat{n}$ is a unit vector in $\mathbb{R}^N$. These vacua can be distinguished by the expectation value of the field $\phi_i$, which at leading order in $g$ is \begin{equation} \langle \phi_i(x)\rangle_n\equiv \langle \hat{n}|\phi_i(x)|\hat{n}\rangle=\frac{|m|}{\sqrt{g}}\hat{n}_i. \end{equation} The low-energy spectrum of this theory around one of the vacua has $N-1$ massless Goldstone bosons and one massive boson of mass squared $2m^2$. The point of interest for us here is that which fields create the Goldstone bosons seems to depend on the choice of state $|\hat{n}\rangle$. For example the Goldstone bosons are created by the field \begin{equation}\label{phistatedep} \phi^\perp_{i}(x,\hat{n})\equiv \phi_i(x)-\left(\phi\cdot \hat{n}\right)\hat{n}_i. \end{equation} Isn't this a state-dependent operator? It appears to be, but before rushing to conclusions it is important to think more carefully about what we actually mean by ``measuring the two-point function of the Goldstone boson field''. One option is to try to ``remove'' the state-dependence by defining a single operator whose correlation functions in any state $|\hat{n}\rangle$ are equivalent to those of $\phi^\perp(x,\hat{n})$. 
In infinite volume this can be done exactly using projection operators onto different superselection sectors, while in finite volume $V$ we can do it to leading order in $1/V$ by defining a ``$\hat{n}$ operator'' \begin{equation} \hat{n}_{op}\equiv \frac{\sqrt{g}}{|m|V}\int d^3 x \, \phi_i(x) \end{equation} and replacing $\hat{n}\to \hat{n}_{op}$ in the definition \eqref{phistatedep}. Since $\langle \hat{n}|\hat{n}_{op}|\hat{n}\rangle=\hat{n}$, and the commutator of $\hat{n}_{op}$ with any local operator is another local operator times a power of $1/V$, this operator has the same expectation values as $\phi^\perp_i(x,\hat{n})$ up to $O(1/V)$ corrections. I claim however that this procedure of ``removing'' the state-dependence does \textit{not} describe what we usually do in the laboratory when studying Goldstone bosons. Setting up an apparatus to measure this operator would be rather irritating, since it would have to involve the nonlocal operator $\hat{n}_{op}$ each time we measure the field. What we usually do instead is measure $\hat{n}_{op}$ \textit{once} to determine what state we are in, and then, conditioned on the result of this measurement, measure various combinations of the $\phi^\perp_i(x,\hat{n})$'s as defined in \eqref{phistatedep}. The small commutator of the ``order parameter'' $\hat{n}_{op}$ with local operators assures us that we do not need to measure it again later in the experiment. The distinction between this protocol and one where we measure the ``state-independent'' operator defined in the previous paragraph by the replacement $\hat{n}\to\hat{n}_{op}$ is not academic; if we started the system in a superposition of different $|\hat{n}\rangle$ states, the results would differ substantially (they also differ in any case at $O(1/V)$). It should be clear however that either protocol is perfectly normal in principle, and they had better both be consistent with quantum mechanics.
I now discuss this more abstractly in the context of general quantum measurement theory. \subsection{Measurement Theory} The basic idea of quantum measurement theory is as follows.\footnote{For a nice review see section 3.1 of \cite{preskillnotes}.} Say we have a system $S$ and we'd like to measure some hermitian operator $A$ that acts on it. We adjoin the system to a \textit{pointer} system $P$ whose dimensionality is equal to the number of distinct eigenvalues of $A$. We then arrange for the unitary evolution of the joint system to be \begin{equation}\label{measev} |i\rangle_S|0\rangle_P\to |i\rangle_S |a_i\rangle_P, \end{equation} where $|0\rangle_P$ is some particular ``initial'' state of the pointer and $|i\rangle_S$ is any eigenstate of $A$ with eigenvalue $a_i$. Note that the states $|a_i\rangle_P$ are not necessarily all distinct; different $|i\rangle_S$'s could have the same eigenvalues. If we now start the system in an arbitrary pure state $|\psi\rangle_S=\sum_i C_i |i\rangle_S$ then we have the evolution \begin{equation} |\psi\rangle_S|0\rangle_P\to \sum_i C_i |i\rangle_S |a_i\rangle_P. \end{equation} The pointer is now in a mixed state \begin{equation} \rho_P=\sum_a \sum_{i \, |\, a_i=a}|C_i|^2 |a\rangle\langle a|, \end{equation} so if we look at it then we will see a result $a$ drawn from the probability distribution predicted by the usual Born rule for measuring $A$. Of course in this last step we again have to make a measurement, but the pointer is usually assumed to be sufficiently classical that it is ``obvious'' what it means to measure it. The important point here is that the measurement process can be described as unitary evolution of the system coupled to some apparatus. 
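As a concrete instance of \eqref{measev}, take $A$ to be the Pauli $Z$ operator on a single qubit, so that the required unitary is just a CNOT from the system to a qubit pointer. A minimal sketch (purely illustrative; pointer value $0$ records eigenvalue $+1$ and $1$ records $-1$):

```python
# Pointer measurement of A = Z on one qubit: the evolution |i>|0> -> |i>|a_i>
# is implemented by a CNOT from system to pointer.  Amplitudes are listed in
# the basis |s p> = |00>, |01>, |10>, |11>.
def cnot(state):
    """CNOT with the system qubit as control: swaps |10> and |11>."""
    return [state[0], state[1], state[3], state[2]]

c0, c1 = 0.6, 0.8                   # arbitrary normalized system amplitudes
state = [c0, 0.0, c1, 0.0]          # system c0|0> + c1|1>, pointer in |0>
out = cnot(state)

# reading the pointer samples from the Born-rule distribution |C_i|^2
p_plus = out[0]**2 + out[2]**2      # pointer reads eigenvalue +1
p_minus = out[1]**2 + out[3]**2     # pointer reads eigenvalue -1
```

Here $p_\pm$ come out to $|c_0|^2$ and $|c_1|^2$, exactly the Born-rule probabilities for measuring $Z$ directly on the system.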
The same is true for the protocol described at the end of the previous section, where we first measured $\hat{n}_{op}$ and then conditioned on the result measured some combination of $\phi^\perp_i(x,\hat{n})$'s, but to see it we first need to recall the standard idea of conditioned unitary evolution. Consider a bipartite system consisting of systems $S_1$ and $S_2$. We can then define an evolution \begin{equation}\label{condun} |i\rangle_{S_1} |j\rangle_{S_2}\to |i\rangle_{S_1} U_i|j\rangle_{S_2}, \end{equation} which we can interpret as looking at $S_1$ in the basis $|i\rangle$ and then, depending on the result, applying a unitary transformation $U_i$ to system $S_2$. The evolution \eqref{condun} is unitary for any choice of the $U_i$'s. With these tools we can now give a more general discussion of the two measurement protocols of the previous section. Say that we have a system $S$ on which $A$ and $B$ are two hermitian operators. Moreover say that we have some classical function $f(a,b)$ of their eigenvalues. The reader should think of $A$ as being analogous to $\hat{n}_{op}$ in the previous section. The first protocol, where we replaced $\hat{n}\to\hat{n}_{op}$ in \eqref{phistatedep}, corresponds to measuring the quantum operator $f(A,B)$ using \eqref{measev}. Our second protocol, measuring $\hat{n}_{op}$ and then conditionally measuring $\phi^\perp(x,\hat{n})$, generalizes to first measuring $A$ and finding some result $a$, and then measuring the quantum operator $f(a,B)$. We can describe this as unitary evolution as follows; first adjoin to the system $S$ two pointers, $P_A$ for the first measurement and $P_f$ for the second.\footnote{For simplicity we assume that the number of distinct eigenvalues of $f(a,B)$ is the same for all $a$, enabling us to use just one pointer for the second measurement.} Then apply the unitary measurement protocol \eqref{measev} to the system $S$ and the first pointer $P_A$.
Finally apply the conditioned unitary evolution \eqref{condun}, where conditioned on the state of $P_A$ we apply the unitary measurement protocol to $S$ and $P_f$ for measuring $f(a,B)$. The quantum circuit diagram for this evolution is given in figure \ref{2meas}; \begin{figure} \begin{center} \includegraphics[height=5cm]{2meas.pdf} \caption{The unitary process for measuring $A$ and then conditionally measuring $f$. Time goes up. }\label{2meas} \end{center} \end{figure} explicitly, the full evolution is \begin{equation} |i\rangle_S|0\rangle_{P_A}|0\rangle_{P_f}\to |i\rangle_S|a_i\rangle_{P_A}|0\rangle_{P_f}\to \sum_{j'}C^i_{j'}|j'\rangle_S|a_i\rangle_{P_A}|f(a_i,b_{j'})\rangle_{P_f}, \end{equation} where $|i\rangle_S$ is an eigenstate of $A$ with eigenvalue $a_i$, $|j'\rangle_S$ is an eigenstate of $B$ of eigenvalue $b_{j'}$, and $|i\rangle_S=\sum_{j'}C^i_{j'}|j'\rangle_S$. A classical observer can then look at the pointers to sample from the joint distribution for $a$ and $f$ (or the conditional distribution for $f$ given $a$). I believe that the second protocol captures the essence of what most people think of as ``state-dependent operators'' in ordinary quantum mechanics. There is some approximately classical observable which we first pin down with a measurement, and then use to decide which other operators to measure. The entire process can be described as unitary evolution on the system together with an apparatus. \subsection{State-Dependence in the PR proposal}\label{sdsec3} I now compare the state-dependence of the PR proposal (or its less-precise earlier cousins) to the above protocols. To warm up, let's first consider the operators $\mathcal{O}$ used in building fields \textit{outside} of the horizon. These apparently depend on some basic properties of the state, for example the mass of the black hole and where it is, but this information is essentially classical.
We are thus in the situation where we can use either of the protocols of the previous subsections to interpret them. An infalling observer will probably use the second protocol; she will look to see where the black hole is and how big it is before aiming her jump. The situation for the interior operators $\widetilde{\mathcal{O}}$ is more interesting. Consider a complete basis of equilibrium states. We can define some operator $A$ which distinguishes them, and then try to use this information to define state-dependent operators $\widetilde{\mathcal{O}}_a$ for modes behind the horizon. To run the second protocol we would first measure $A$ and then do a conditional measurement of $\widetilde{\mathcal{O}}_a$. This would require the infalling observer to do an extremely sensitive measurement of the black hole, essentially determining which microstate it is in. It is unreasonable to require that the infalling observer do this, so we conclude that the second protocol cannot be used to legitimize the PR proposal. We could also try the first protocol by defining the interior operators with the operator $A$ included explicitly in our expressions, which I will denote as $\widetilde{\mathcal{O}}_A$. We now run into the issue however that the commutator of $A$ with $\mathcal{O}$ and $\widetilde{\mathcal{O}}_a$ will be quite large. This then will destroy the algebraic properties of the $\widetilde{\mathcal{O}}_A$'s, and their correlation functions will no longer agree with effective field theory. Thus the state-dependence of the PR proposal cannot be interpreted as arising from either of the two protocols we just discussed. In fact we can go further and argue that there is no possible unitary evolution on the system together with some apparatus which realizes the PR proposal.
More explicitly, there is no single unitary operator which takes an arbitrary equilibrium state together with a given pointer, not depending on the equilibrium state, and uses that pointer to measure the $\widetilde{\mathcal{O}}$ appropriate for the equilibrium state. To get started I first observe in general that it is impossible to have a pointer which measures two distinct operators: say that $\widetilde{\mathcal{O}}_1$ and $\widetilde{\mathcal{O}}_2$ are two operators associated with different equilibrium states. Since they are supposed to have the same physical interpretations, they should have the same eigenvalues. An obvious way to try to get them to both be measured by the same pointer is to find a unitary which implements \begin{align}\nonumber |i,1\rangle |0\rangle&\to |i,1\rangle |\tilde{o}_i\rangle\\ |i,2\rangle |0\rangle&\to |i,2\rangle |\tilde{o}_i\rangle.\label{prevol} \end{align} Here $|i,1\rangle$ is some complete eigenbasis of $\widetilde{\mathcal{O}}_1$, with eigenvalues $\tilde{o}_i$, and similarly $|i,2\rangle$ for $\widetilde{\mathcal{O}}_2$. It is fairly straightforward to show however that this evolution is only possible if the two operators are in fact equal; for convenience of the reader I give a proof in appendix \ref{proofapp}.\footnote{One might think that the pointer should also be state-dependent, since it is behind the horizon as well, but for simplicity we can take it to be made out of the infalling purple modes in figure \ref{adsinfall}, which are expected to be state-dependent only in the weak sense of the previous two sections.} The basic idea of the proof is that the first line of \eqref{prevol} completely specifies the unitary operator, so there is no freedom left to fit the second line. This might be called a ``no state-dependent operators theorem'' of quantum mechanics. 
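A toy version of the theorem makes the obstruction transparent: try to use a single pointer to measure $Z$ in one family of states and $X$ in another (an illustrative stand-in for two mirror operators with the same spectrum). The first line of \eqref{prevol} fixes the map by linearity, leaving nothing to fit the second:

```python
import math

# Toy instance of the no-go argument: one pointer cannot measure Z in one
# family of states and X in another.  The pointer has three states: the
# initial |0>, and |+1>, |-1> recording the eigenvalue.  State vectors are
# plain amplitude lists; this is bookkeeping, not a simulation library.
def kron(u, v):
    return [a * b for a in u for b in v]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

e0, e1 = [1.0, 0.0], [0.0, 1.0]                      # system basis
plus = [1 / math.sqrt(2), 1 / math.sqrt(2)]          # X eigenstate
p_init, p_up, p_dn = [1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]

# The Z lines fix the map on the span of |0>|init>, |1>|init>:
#   |0>|init> -> |0>|+1>   and   |1>|init> -> |1>|-1>,
# so linearity forces the image of |+>|init>:
forced = [(a + b) / math.sqrt(2)
          for a, b in zip(kron(e0, p_up), kron(e1, p_dn))]

# ...but the X line demands |+>|init> -> |+>|+1> instead
demanded = kron(plus, p_up)

overlap = dot(forced, demanded)   # 1/2 rather than 1: the demands clash
```

The forced image is a unit vector but has overlap only $1/2$ with the demanded one, so no single unitary satisfies both lines; this is the content of the proof in appendix \ref{proofapp}.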
This theorem does not quite directly address the PR proposal however, since the types of states PR are interested in are equilibrium states and small perturbations thereof, rather than eigenstates of the $\widetilde{\mathcal{O}}$'s. The same intuition should still apply however; we can introduce a complete basis of equilibrium states, on which the action of the unitary coupling the pointer to the system is fixed. There would be no remaining freedom to deal with other equilibrium states that are superpositions in this basis. In fact we can get into trouble even faster by using the observation of section \ref{nonusec} above.\footnote{The argument that follows here is closely related to the ``frozen vacuum'' argument of \cite{Bousso:2013ifa}, but it is reworked a bit to more directly apply to the PR construction.} For convenience I will work in a simplified version of the qubit evaporation model of section \ref{evapsec}, where I take $B$ to have only a single qubit, I combine $H$ and $R$ into $\bar{B}$, and I take the algebra $\mathcal{A}$ to consist only of operators on $B$. In any equilibrium state $|\psi\rangle$ the density matrix on $B$ will be maximally mixed, so by the Schmidt decomposition we can write \begin{equation} |\psi\rangle=\frac{1}{\sqrt{2}}\left(|0\rangle_B|\bar{0}\rangle_{\bar{B}}+|1\rangle_B|\bar{1}\rangle_{\bar{B}}\right), \end{equation} where $|\bar{0}\rangle_{\bar{B}}$ and $|\bar{1}\rangle_{\bar{B}}$ are pure states of unit norm that are typically very complicated. Now let's consider a measurement of $\widetilde{Z}$, the mirror operator to the $Z$ operator on $B$. Our ``target'' bulk state is $(|00\rangle+|11\rangle)/\sqrt{2}$, so by construction measuring $\widetilde{Z}$ should produce the same state as measuring $Z$. We will thus have \begin{equation} |\psi\rangle|0\rangle_P\to \frac{1}{\sqrt{2}}\left(|0\rangle_B|\bar{0}\rangle_{\bar{B}}|0\rangle_P+|1\rangle_B|\bar{1}\rangle_{\bar{B}}|1\rangle_P\right).
\end{equation} Let's now consider however the set of four mutually orthogonal equilibrium states:\footnote{These are the types of equilibrium states one would construct acting on $|\psi\rangle$ with the ``unitary behind the horizon'' type operators discussed in section \ref{nonusec}, but here I will follow the rules of PR and construct mirror operators which see the ``horizon'' as unexcited.} \begin{align}\nonumber |\psi_\pm\rangle&=\frac{1}{\sqrt{2}}\left(|0\rangle_B|\bar{0}\rangle_{\bar{B}}\pm|1\rangle_B|\bar{1}\rangle_{\bar{B}}\right)\\ |\chi_\pm\rangle&=\frac{1}{\sqrt{2}}\left(|0\rangle_B|\bar{1}\rangle_{\bar{B}}\pm|1\rangle_B|\bar{0}\rangle_{\bar{B}}\right). \end{align} Since these are all equilibrium states, measuring $\widetilde{Z}$ should in all cases be equivalent to measuring $Z$, so after taking superpositions we have the following evolution \begin{align}\nonumber |0\bar{0}\rangle_{B\bar{B}}|0\rangle_P&\to |0\bar{0}\rangle_{B\bar{B}}|0\rangle_P\\\nonumber |1\bar{1}\rangle_{B\bar{B}}|0\rangle_P&\to |1\bar{1}\rangle_{B\bar{B}}|1\rangle_P\\\nonumber |0\bar{1}\rangle_{B\bar{B}}|0\rangle_P&\to |0\bar{1}\rangle_{B\bar{B}}|0\rangle_P\\ |1\bar{0}\rangle_{B\bar{B}}|0\rangle_P&\to |1\bar{0}\rangle_{B\bar{B}}|1\rangle_P. \end{align} So far this evolution can be unitary, since after all it is equivalent to measuring $Z$, which is state-independent. It cannot however be unitary if it is restricted to act only on $P$ and $\bar{B}$, which is after all what we should demand; the infalling observer knows that she is measuring $\widetilde{Z}$, not $Z$, since these are done by different physical experiments. Indeed this would require both $|\bar{0}0\rangle_{\bar{B}P}\to |\bar{0}0\rangle_{\bar{B}P}$ and $|\bar{0}0\rangle_{\bar{B}P}\to |\bar{0}1\rangle_{\bar{B}P}$, as well as both $|\bar{1}0\rangle_{\bar{B}P}\to |\bar{1}0\rangle_{\bar{B}P}$ and $|\bar{1}0\rangle_{\bar{B}P}\to |\bar{1}1\rangle_{\bar{B}P}$.
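The clash can be phrased as a one-line bookkeeping check: modelling $\bar{B}$ as a single qubit for illustration, the four required transitions assign two different outputs to the same input on $\bar{B}\otimes P$, which no linear map, unitary or otherwise, can do:

```python
# Bookkeeping check of the contradiction: restricted to act on B-bar and P
# only (with B-bar modeled as a single qubit for illustration), the four
# lines above demand the following (input, output) pairs in the basis
# |bbar, p>:
constraints = [
    ((0, 0), (0, 0)),   # from |0 0bar>|0> -> |0 0bar>|0>
    ((1, 0), (1, 1)),   # from |1 1bar>|0> -> |1 1bar>|1>
    ((1, 0), (1, 0)),   # from |0 1bar>|0> -> |0 1bar>|0>
    ((0, 0), (0, 1)),   # from |1 0bar>|0> -> |1 0bar>|1>
]

# any linear map (unitary or not) sends equal inputs to equal outputs;
# here each input appears with two different outputs
images = {}
conflict = False
for x, y in constraints:
    if x in images and images[x] != y:
        conflict = True
    images.setdefault(x, y)
```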
More operationally, we could instead demand that measuring $\widetilde{Z}$, applying $X$ to flip the qubit $B$, and then measuring $\widetilde{Z}$ again returns the same result for both $\widetilde{Z}$ measurements, which leads to a similar contradiction. By presenting the argument this way we see that this contradiction is closely related to the ambiguity of section \ref{nonusec}; acting with $X$ sends $|\psi_+\rangle\to|\chi_+\rangle$, so if the formalism itself cannot decide whether or not $|\chi_+\rangle$ is excited, we can hardly expect a pointer to be able to. \section{Including the Infalling Observer}\label{obsec} We have now seen that the PR proposal is inconsistent with quantum mechanics on two serious counts. We saw in section \ref{nonusec} that it can assign to a single quantum state more than one physical interpretation, and we saw in section \ref{sdsec} that its measurement process cannot be realized as unitary evolution, as opposed to ordinary quantum mechanics (with pointers) where it can. What then are we to conclude? It seems that what the proposal needs to work is some rule along the lines of the following: say I am going to jump into a black hole, which is in some equilibrium state $|\psi\rangle$. Moreover, say that you are planning to act on the state with some operator $\widetilde{U}$ (or $V$) as defined in section \ref{nonusec}. If I \textit{know} that you are going to do this, then I conclude that I will see an excited horizon, and if I jump in, that is what I see. But if I \textit{don't} know you are going to do this, then my $\widetilde{\mathcal{O}}$ operators are automatically redefined in such a way that I see a smooth horizon when I jump in, even though the quantum state of the black hole is the same in either case. The full state of the system in the two different situations wouldn't actually be the same, since the internal state of the observer is different in the two cases. Is this crazy?
Undoubtedly, but that does not automatically mean that it is wrong.\footnote{In fact it is somewhat reminiscent of the Gottesman-Preskill refinement of the black hole final state proposal, where anything happening behind the horizon is ``unhappened'' by post-selection from the point of view of somebody outside the horizon but presumably not for somebody who falls in \cite{Horowitz:2003he,Gottesman:2003up,Lloyd:2013bza,Bousso:2013uka}. It would be interesting to understand better the relationship between that proposal and the one under consideration here.} Rather than prolonging this paper further by trying to make a consistent theory that accommodates this kind of thing, which is what I expect would really be needed to make some version of the PR proposal (or $A=R_B$, ER=EPR, etc) work for black holes in generic states, I will instead close by making some general comments about the validity of quantum mechanics for infalling observers.\footnote{There seems to be some overlap between some of the ideas here and what Mathur and Turton call ``fuzzball complementarity'' \cite{Mathur:2013gua}, although I disagree with various parts of their discussion.} In ordinary situations where we study quantum mechanics, the system under study is ``small'' and our apparatus is ``big''. This allows us to basically treat the apparatus classically, up to a single pointer variable as we have discussed in the previous section. The detailed history and construction of the apparatus (and the experimenter) are completely irrelevant for the outcome of the experiment, at least as long as the experiment has been constructed correctly. This is to be contrasted with the type of thing described in the previous paragraph, where what the experimenter is aware of and intends to do is of paramount importance.
It is interesting to note, however, that in the black hole situation the ``small/big'' situation is reversed; we are trying to study a giant black hole as a quantum mechanical object, using an infalling observer who is rather small in comparison. This is unlike any other situation where we have tested quantum mechanics, and it does not seem a priori absurd to imagine that the usual measurement theory would need to be modified in this case.\footnote{In the context of cosmology it has already been argued that the experiences of observers with fundamentally limited resources do not have precise quantum mechanical descriptions \cite{Harlow:2010my,Bousso:2011up}, and any lesson of this nature which we could learn from black holes would obviously be very valuable for cosmology.} The infalling observer simply cannot carry in an apparatus which is able to record any substantial fraction of the information which would be needed to describe the black hole in detail, since this would by necessity cause large back-reaction.\footnote{This can be quantified using the recently proven Bekenstein-Casini bound \cite{Bekenstein:1980jp,Casini:2008cr}, which roughly says that an object capable of storing $S$ bits of information must have a mass greater than $S/R$, where $R$ is the size of the object. To fit in the black hole the object must be small compared to the black hole, while to avoid back-reaction its mass must be small compared to the black hole mass; taken together with the bound, these show that the memory capacity $S$ of the object must be much smaller than the black hole entropy.} AMPS have tried to avoid this problem by inventing an experiment that does not require the infalling observer to actually carry in a large number of qubits, but they do need to at some point perform a complicated experiment on an $O(1)$ fraction of the Hawking radiation.
The original AMPS experiment involved an extremely sophisticated quantum computation which is probably impossible to really implement \cite{Harlow:2013tf}, but as we saw in section \ref{evapsec}, for the PR construction to fail one only needs to consider something like flying around and flipping the helicity of some $O(1)$ fraction of the Hawking photons. This is much easier than the AMPS quantum computation, and there is no principled reason why it cannot be done. The point here, however, is that although the infalling observer can remember that a large number of photon helicities were flipped, she cannot actually carry in a list of which photons were flipped and which weren't. We might imagine that the definition of the $\widetilde{\mathcal{O}}$'s gets reset in this case and she sees a smooth horizon, with her inability to actually remember (or carry in a record of) which ones were flipped preventing various paradoxes. Should the theory really make use of this type of thing? One might hope not, but if the PR proposal or something like it is to prevent firewalls in generic states it seems more and more likely that it will have to. A solid example of a theory that does this (or a concrete explanation of how AdS/CFT secretly does it) would obviously be necessary before it could really be taken seriously. \paragraph{Acknowledgments} I'd like to thank Herman Verlinde, Edward Witten, and especially Juan Maldacena, Kyriakos Papadodimas, and Suvrat Raju for many helpful discussions of the PR proposal. I'd also like to thank Raphael Bousso, Xi Dong, Don Marolf, Shiraz Minwalla, Joe Polchinski, Vladimir Rosenhaus, Douglas Stanford, Steve Shenker, Jaimie Sully, Leonard Susskind, and Erik Verlinde for useful conversations. I am supported by the Princeton Center for Theoretical Science.
\section{Introduction} There has been much recent interest in adapting models and techniques from deep learning to the domain of graph-structured data~\cite{NIPS_atwood,Bruna_2013,NIPS_defferrard,Henaff_2015,ICLR_kipf,Niepert_2016}. Proposed by Atwood and Towsley~\cite{NIPS_atwood}, diffusion-convolutional neural networks (DCNNs) approach the problem by learning \lq filters' that summarize local information in a graph via a diffusion process. These filters have been observed to provide an effective basis for node classification. DCNNs have been shown to possess attractive qualities like obtaining a latent representation for graphical data that is invariant under isomorphism, and utilizing tensor operations that can be efficiently implemented on the GPU. Nevertheless, as was remarked in \cite{NIPS_atwood}, when implemented using dense tensor operations, DCNNs have a $\mathcal{O}(N^2)$ memory complexity, which could get prohibitively large for massive graphs with millions or billions of nodes. In an effort to improve the memory complexity of the DCNN technique, we investigate two approaches to thresholding the diffusion process -- a pre-thresholding technique that thresholds the transition matrix itself, and a post-thresholding technique that enforces sparsity on the power series of the transition matrix. We show that pre-thresholding the transition matrix provides provably linear ($\mathcal{O}(N)$) memory requirements while the model's predictive performance remains unhampered for small to moderate thresholding values ($\rho \leq 0.1$). On the other hand, the post-thresholding technique did not offer any gains in memory complexity. This result suggests that pre-thresholded sparse DCNNs (sDCNNs) are suitable models for large graphical datasets. \section{Model} We study node classification on a single graph, say $\mathcal{G} = (V, E)$, with $V$ being the vertex or node set, and $E$ being the set of edges.
No constraints are imposed on the graph $\mathcal{G}$; the graph can be weighted or unweighted, directed or undirected. Each vertex is assumed to be associated with $F$ features, leading to the graph being described by an $N \times F$ design matrix, $X$, and an $N \times N$ adjacency matrix, with $N=|V|$ being the number of vertices. In DCNN, we compute a degree-normalized transition matrix $P$ that gives the probability of moving from one node to another in a single step. However, in a sparse implementation of DCNN (sDCNN), rather than using the transition matrix directly, we remove edges with probabilities below a threshold in order to both improve memory complexity and regularize the graph structure. Assume the nodes are associated with labels, i.e., each node $i$ in $V$ has a label $y_i$ in $Y$. Given a set of labeled nodes in a graph, the node classification task is to find labels for unlabeled nodes. Note that, while in this work we focus on node classification tasks, this framework can be easily extended to graph classification tasks where graphs have labels associated with them rather than individual nodes~\cite{NIPS_atwood}. Next, we describe the DCNN framework in greater detail. The neural network takes the graph $\mathcal{G}$ and the design matrix $X$ as input, and returns a hard prediction for $Y$ or a conditional distribution $\mathbb{P}(Y|X)$ for unlabeled nodes. Each node is transformed to a diffusion-convolutional representation, which is an $H \times F$ real matrix defined by $H$ hops of graph diffusion over $F$ features. The core operation of a DCNN is a mapping from nodes and their features to the results of a diffusion process that begins at that node. The node class label is finally obtained by integrating the result of the diffusion process over the graph through a fully connected layer, thus combining the structural and feature information in the graph data.
In sDCNN, the diffusion process itself is thresholded, to reduce the computational complexity of the diffusion process over a large graph. \begin{figure} \centering \includegraphics[scale=0.5]{simpletensormodelnodes} \caption{DCNN schematic diagram for node classification tasks} \label{fig:DCNN_model} \end{figure} \textbf{DCNN with no thresholding} Consider a node classification task where a label $Y$ is associated with each node in a graph. Let $P ^*$ be an $N \times H \times N$ tensor containing the power series of the transition matrix $P$. The probability of reaching node $l$ from node $i$ through $j$ hops is captured by $P^j_{il}$, or equivalently by $P ^* _{ijl}$. The diffusion-convolutional activation $Z_{ijk}$ for node $i$, hop $j$ and feature $k$ of graph $\mathcal{G}$ is given by \begin{equation} Z_{ijk} = f \left( W_{jk} ^c \cdot \sum _{l =1} ^N P_{ijl} ^* X_{lk} \right) \end{equation} where $\{ W ^c_{jk}, j=1,2,\ldots, H, \ k=1,2,\ldots,F\}$ are the learned weights of the diffusion-convolutional layer, and $f$ is the activation function. Briefly, the weights determine the effect of neighboring nodes' features on the class label of a particular node. The activations can be expressed more concisely using tensor notation as \begin{equation} Z = f(W^c \odot P^* X), \end{equation} where the $\odot$ operator represents element-wise multiplication. Observe that the model only entails $\mathcal{O}(H \times F)$ parameters, making the size of the latent diffusion-convolutional representation independent of the size of the input. The output from the diffusion-convolutional layer connects to the output layer with $N_t$ neurons through a fully connected layer. 
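As an illustration, the forward pass just described can be sketched in a few lines of NumPy (our illustrative code, not from the original implementation; the function name, the use of $\tanh$ as the activation $f$, and the inclusion of the $0$-hop identity term are our choices):

```python
import numpy as np

def dcnn_activations(A, X, W, H, f=np.tanh):
    """Diffusion-convolutional activations Z = f(W (.) P*X) for one graph.

    A : (N, N) adjacency matrix, X : (N, F) node features,
    W : (H+1, F) diffusion-convolutional weights (one row per hop,
    including the 0-hop identity term), f : elementwise activation.
    Returns Z with shape (N, H+1, F).
    """
    P = A / A.sum(axis=1, keepdims=True)          # degree-normalized transitions
    PX = np.empty((A.shape[0], H + 1, X.shape[1]))
    hop = X.copy()                                 # P^0 X = X
    for j in range(H + 1):
        PX[:, j, :] = hop
        hop = P @ hop                              # advance one diffusion hop
    return f(W[np.newaxis, :, :] * PX)             # elementwise product over hops/features

# Tiny example: a 3-node path graph with scalar features.
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
X = np.array([[1.], [0.], [2.]])
Z = dcnn_activations(A, X, W=np.ones((3, 1)), H=2)
print(Z.shape)  # (3, 3, 1)
```

Each slice `Z[i]` is the $H \times F$ (here $(H+1) \times F$, counting hop $0$) diffusion-convolutional representation of node $i$.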
A hard prediction for $Y$, denoted $\hat{Y}$, can be obtained by taking the maximum activation, as follows: \begin{equation} \hat{Y} = \arg \max \left( f \left( W ^d \odot Z \right) \right), \end{equation} whereas a conditional probability distribution $\mathbb{P}(Y | X)$ can be obtained by applying the softmax function \begin{equation} \mathbb{P} (Y | X) = \mbox{softmax} \left( f \left( W ^d \odot Z \right) \right). \end{equation} Since DCNNs require computation and storage of tensors representing the power series of the transition matrices, they are costly in terms of computational resources. In this work, we investigate methods to enforce sparsity in such tensors, and consequently reduce the utilization of memory. In what follows, we describe two thresholding methods for enforcing sparsity. \textbf{Pre-thresholding} Through this technique the transition matrix is first thresholded, and then a power series of the thresholded transition matrix is computed. For a threshold value $\sigma$, the pre-thresholded activation $Z_{ijk}^{pre}$ for node $i$, hop $j$, and feature $k$ is given by \[ Z_{ijk} ^{pre} = f \left( W_{jk} ^c \cdot \sum _{l =1} ^N \bar{P}_{ijl} ^* X_{lk} \right) \] where \[ \bar{P}_{il} = 1[P_{il}\geq \sigma] P_{il} \] is the thresholded transition matrix, and \[ \bar{P}_{ijl} ^*=\bar{P}^j_{il}. \] Note that for an event $\mathcal{E}$, $1[\mathcal{E}]=1$ if the event is true, and $0$ otherwise. \textbf{Post-thresholding} This thresholding method enforces sparsity on the power series of the transition matrix $P$. For a threshold value $\rho$, the post-thresholded activation $Z_{ijk}^{post}$ for node $i$, hop $j$, and feature $k$ is given by \[ Z_{ijk} ^{post} = f \left( W_{jk} ^c \cdot \sum _{l =1} ^N 1[P_{ijl} ^* \geq \rho] P_{ijl} ^* X_{lk} \right).
\] Qualitatively, pre-thresholding considers only the strong ties within a particular node's neighborhood in which all intermediary ties are sufficiently strong, whereas post-thresholding looks at the entire neighborhood of a node and chooses the strong ties, allowing long-hop ties to be chosen, potentially passing through multiple weak ties. For threshold parameter $\rho$ or $\sigma$ set to zero, all the ties are considered, and we obtain the DCNN setting in this limit. On the other hand, when the threshold parameter is set to the maximum value of one, only a node's own feature value is considered, along with that of its neighbor in the case where the node in question has exactly one neighbor. This is qualitatively close to a logistic regression setting, even though not exactly the same. \section{Complexity results} \begin{lemma} For $H>1$, the memory complexity for the DCNN method is $\mathcal{O}(N^2+NFH)$. For $H=1$, the memory complexity is $\mathcal{O}(|E|+NF)$. \end{lemma} \begin{proof} An efficient way to store the power series would be to store the product of the power of transition matrices with the design matrix. However, the intermediate powers of the transition matrices need to be stored, which requires $\mathcal{O}(N^2)$ memory. Storing the product of the power series tensor with the design matrix requires $\mathcal{O}(NFH)$ memory. Therefore, the upper bound on the memory usage is $\mathcal{O}(N^2+NFH)$. However, if $H=1$, the transition matrix can be represented in $\mathcal{O}(|E|)$, thereby getting rid of the $\mathcal{O}(N^2)$ memory requirement. \end{proof} \begin{lemma} For $H>1$ and a fixed threshold $\sigma$, memory complexity under the pre-thresholding technique is $\mathcal{O} \left( \min \left( N\cdot \frac{1}{\sigma ^{H}} , N^2 \right) + NFH \right)$.
For $H=1$, the memory complexity is $\mathcal{O} \left( \min \left( N\cdot \frac{1}{\sigma ^{H}} , |E|, N^2 \right) + NFH \right)$. \label{lemma:Pre_thresholding_memory} \end{lemma} \begin{proof} We argue that a sparse representation of the power series tensor product with the design matrix occupies $\mathcal{O} \left( \min \left( N\cdot \frac{1}{\sigma ^{H}}, N^2 \right) + NFH \right)$ memory in an inductive manner. For node $i$ in $V$ and hop $h$, we define the set of nodes in the $h$-hop neighborhood that influence node $i$ through pre-thresholding as \[ \mathcal{N} _i ^h = \{ l \ | \ \bar{P} ^{h}_{il} > 0 \}. \] For $j=0$, $\bar{P} ^j$ is the identity matrix $I_N$, which is sparse with exactly $N$ non-zero entries. For $j=1$, $\bar{P} ^j$ is the thresholded transition matrix. In the pre-thresholding operation, transition probabilities from node $i$ that are less than $\sigma$ are set to zero. For a particular node $i$, the set of nodes that have 1-hop transition probabilities greater than or equal to $\sigma$ is exactly $\mathcal{N} ^1_i$. Since $\sum _{l \in V} P_{il} = 1$ and $\sum _{l \in V} \bar{P}_{il} \leq 1$, we must have $|\mathcal{N} ^1 _i| \leq \min \left(\frac{1}{\sigma}, N \right)$. Therefore, $\bar{P}$ can have at most $\min \left( \frac{N}{\sigma}, N^2 \right)$ non-zero entries. For $j=h$, $\bar{P} ^j$ is the thresholded transition matrix raised to the $h^{th}$ power. Suppose the sparse representation is such that $\{ \bar{P} ^* _{ihl}, l \in V \}$ has at most $ \min \left( \frac{1}{\sigma^h} , N \right) $ non-zero entries for each $i$ in $V$, implying that $\bar{P} ^h$ has $\mathcal{O}( \min \left( N \cdot \frac{1}{\sigma^h}, N^2 \right) )$ non-zero entries.\\ The final step is to prove that for $j=h+1$, $\bar{P} ^{j}$ has $\mathcal{O} \left( \min \left( N \cdot \frac{1}{\sigma^{h+1}}, N^2 \right) \right)$ entries. For $i$ in $V$, let \[ \mathcal{N} _i ^{h+1} = \{ l \ | \ \bar{P} ^{h+1}_{il} > 0 \}.
\] By assumption, $|\mathcal{N} _i ^h| \leq \min \left( \frac{1}{\sigma ^h}, N \right)$. Observe that $\mathcal{N} _i^{h+1}$ is the set $\{ m \ | \ j \in \mathcal{N} _i ^{h}, m \in \mathcal{N} _j ^1 \}$. Since $| \mathcal{N} _i ^h | \leq \min \left( \frac{1}{\sigma ^h} , N \right) $ and $|\mathcal{N}_j ^1| \leq \min \left( \frac{1}{\sigma} , N \right)$ for all $j$ that are in $\mathcal{N}_i ^h$, we have the bound $| \mathcal{N} _i ^{h+1} | \leq \min \left( \frac{1}{\sigma ^{h+1}}, N \right) $. Thus, we have proved that $\bar{P} ^{h+1}$ has $\mathcal{O} \left( \min \left( N \cdot \frac{1}{\sigma^{h+1}}, N^2 \right) \right)$ non-zero entries. Thus, by induction, the costliest operation is computing $\bar{P} ^H$, which requires $\mathcal{O} \left( \min \left( N \cdot \frac{1}{\sigma ^{H}}, N^2 \right) \right)$ memory, and the result follows. \end{proof} \begin{lemma} For $H>1$ and a fixed threshold $\rho$, memory complexity under the post-thresholding technique is $\mathcal{O} \left( N^2 + NFH \right)$. For $H=1$, the memory complexity is $\mathcal{O}(|E|+NF)$. \end{lemma} \begin{proof} Even though the post-thresholded power series tensor can be proven to have $\mathcal{O} \left( \frac{NH}{\rho} \right)$ non-zero entries, in a manner similar to that of Lemma \ref{lemma:Pre_thresholding_memory}, the intermediate powers of the dense transition matrix still have to be computed. This requires $\mathcal{O}(N^2)$ memory, and therefore, no improvement in memory utilization is obtained. \end{proof} Assume $H>1$. To obtain the computational complexity of the DCNN method, we observe that two $N \times N$ matrices are multiplied $H$ times, and an $N \times N$ matrix needs to be multiplied with another $N \times F$ matrix, $H$ times. Two $N \times N$ matrices can be multiplied in $\mathcal{O}(N^{2.38})$ time using efficient matrix multiplication algorithms for square matrices~\cite{coppersmith}.
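The sparsity bound underlying Lemma \ref{lemma:Pre_thresholding_memory} can also be checked empirically with a short NumPy sketch (ours; the synthetic graph and parameter values are purely illustrative). It builds a row-stochastic transition matrix with $k$ random neighbors per node, pre-thresholds it at $\sigma$, and confirms that the number of non-zero entries of $\bar{P}^h$ stays below $\min(N/\sigma^h, N^2)$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, k, sigma, H = 200, 6, 0.15, 3

# Synthetic row-stochastic transition matrix: k random neighbors per node.
P = np.zeros((N, N))
for i in range(N):
    nbrs = rng.choice(N, size=k, replace=False)
    w = rng.random(k)
    P[i, nbrs] = w / w.sum()

Pbar = np.where(P >= sigma, P, 0.0)           # pre-thresholding
power = np.eye(N)
for h in range(1, H + 1):
    power = power @ Pbar                      # power = Pbar^h
    nnz = np.count_nonzero(power)
    assert nnz <= min(N / sigma**h, N**2)     # bound from the lemma
    print(h, nnz)
```

The bound holds for any row-stochastic $P$, since each row of $\bar{P}$ can contain at most $1/\sigma$ entries of size at least $\sigma$.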
The product between the transition matrix and the design matrix can be performed in $\mathcal{O}(N ^2 F)$ operations, so the overall complexity is $\mathcal{O}(N^{2.38} H)$. The computational complexity of the post-thresholded sDCNN is also $\mathcal{O}(N^{2.38} H)$, because the dense power series tensor needs to be computed. The pre-thresholded DCNN achieves an improvement in the computational complexity because the power series tensor is computed by multiplying two sparse $N \times N$ matrices. The costliest operation is computing $\bar{P} ^H$, which is obtained by multiplying $\bar{P} ^{H-1}$, a sparse matrix with at most $\frac{N}{\sigma ^{H-1}}$ non-zero entries, and $\bar{P}$, another sparse matrix with at most $\frac{N}{\sigma}$ non-zero entries. Using efficient sparse matrix multiplication methods~\cite{LeGall,yuster_zwick}, if the condition $\frac{1}{\sigma ^H} < N ^{0.14}$ holds, then the computational complexity of the sparse method is $\mathcal{O}(N^{2+o(1)})$. Therefore, pre-thresholded sDCNN achieves $\mathcal{O}(N)$ memory complexity and $\mathcal{O}(N^2)$ computational complexity, a significant improvement over DCNN, which requires $\mathcal{O}(N^2)$ memory and $\mathcal{O}(N^{2.38})$ computational complexity. However, post-thresholded sDCNN still requires the same memory and computational complexity as that of DCNN. Therefore, going forward we will simply be considering pre-thresholded sDCNN, although post-thresholding could be thought of as a way of regularizing the DCNN method. \section{Experiments} In this section we explore how thresholding affects both the density of the transient diffusion kernel and the performance of DCNNs. \subsection{Effect of Thresholding on Density} \begin{figure}[h!] \centering \includegraphics[scale=0.6]{thresholding_results_pretty.pdf} \caption{The effect of diffusion threshold on transient diffusion kernel density under pre- and post-thresholding strategies on the Cora dataset.
The density is given by the proportion of non-zero entries in the transient diffusion kernel $I, P, P^2, ..., P^H$ and is plotted on a log scale. Lighter lines indicate smaller diffusion kernels.} \label{fig:diffusionDensity} \end{figure} Figure \ref{fig:diffusionDensity} shows the results of applying the two thresholding strategies to the Cora dataset. Observe that both thresholding techniques show a decrease in diffusion kernel density as the threshold is increased. The decrease is sharper for the post-thresholding method: transition probabilities reach low values after a greater number of hops, so post-thresholding leads to low densities for a relatively slight increase in the diffusion threshold. The pre-thresholding method, on the other hand, is better behaved, with the kernel density decreasing in a more gradual fashion. The darker lines, corresponding to larger diffusion kernels obtained through a greater number of hops $H$, have higher diffusion kernel density for low diffusion thresholds. However, as the diffusion threshold is increased, the darker lines cross over the lighter lines around $0.5$ for the pre-thresholding method. The justification for this phenomenon is that as the diffusion threshold is increased to $0.5$, only the contribution of the identity matrix remains, and the larger diffusion kernels therefore show lower density. A similar phenomenon occurs for the post-thresholding technique, except that the crossover region occurs much earlier. Although we show results only for the Cora dataset, the same qualitative behavior should hold for other datasets as well.
Small thresholds ($\rho \leq 0.1$) have no significant effect on classification performance, while larger thresholds ($\rho \geq 0.5$) remove the benefit of including neighborhood information.} \label{fig:thresholdingPerformance} \end{figure} Figure \ref{fig:thresholdingPerformance} shows the effect of thresholding on DCNN performance. Observe that for both thresholding techniques, small-to-moderate thresholding values ($\rho \leq 0.1$) have no significant effect on performance, although performance degrades for larger thresholds. This suggests that applying a small threshold when computing the transient diffusion kernel is an effective means of scaling DCNNs to larger graphs. However, it should be noted that for moderate thresholds $0.1 < \rho < 0.5$, the performance begins to decay. Eventually, when the threshold reaches the reciprocal of the maximum degree ($\rho \geq 0.5$), the benefit of including neighborhood information vanishes entirely because no edges are left in the graph. \section{Related Work} Neural networks for graphs were introduced by Gori et al.~\cite{Gori_2005} and followed by Scarselli et al.~\cite{Scarselli_2009}, in a departure from the traditional approach of transforming the graph into a simpler representation which could then be tackled by conventional machine learning algorithms. Both works used recursive neural networks for processing graph data, requiring repeated application of contraction maps until node representations reach a stable state. Bruna et al.~\cite{Bruna_2013} proposed two generalizations of CNNs to signals defined on general domains; one based upon a hierarchical clustering of the domain, and another based on the spectrum of the graph Laplacian. This was followed by Henaff et al.~\cite{Henaff_2015}, which used these techniques to address a setting where the graph structure is not known a priori, and needs to be inferred.
However, the parametrizations of the CNNs developed in \cite{Bruna_2013,Henaff_2015} depend on the input graph size, while those of DCNNs and sDCNNs do not, making the technique transferable, i.e., a DCNN or sDCNN learned on one graph can be applied to another. Niepert et al.~\cite{Niepert_2016} proposed a CNN approach which extracts locally connected regions of the input graph, requiring the definition of node ordering as a pre-processing step. Atwood and Towsley~\cite{NIPS_atwood} remarked that the DCNN technique uses $\mathcal{O}(N^2)$ memory. It is worth noting that the sparse implementation of DCNN yields an order improvement in the memory complexity. This offers performance parity with more recent techniques that report $\mathcal{O}(E)$ memory complexity, like the graph convolutional network method by Kipf et al.~\cite{ICLR_kipf} and the localized spectral filtering method of Defferrard et al.~\cite{NIPS_defferrard} when the (unthresholded) input graph is sparse ($\mathcal{O}(E)$ = $\mathcal{O}(N)$), which is the case for many real-world datasets of interest, and a performance improvement when graphs are dense ($\mathcal{O}(E)$ > $\mathcal{O}(N)$). \section{Conclusion} We have shown that, by applying a simple thresholding technique, we can reduce the computational complexity of diffusion-convolutional neural networks to $\mathcal{O}(N^2)$ and the memory complexity to $\mathcal{O}(N)$. This is achieved without significantly affecting the predictive performance of the model. \medskip \small
\section{Introduction} \label{sec:introduction} Consider a disjunctive set $S = \bigcup_{i=1}^d P^i$, where each $P^i \subset \mathbb{R}^n$ is a bounded rational polyhedron. Disjunctive constraints of the form $x \in S$ abound in optimization: they are useful, for example, to model nonlinearities~\cite{Geisler:2012,Misener:2012} or discrete logic imposed by complex processes~\cite{Bergamini:2005,Trespalacios:2014b}. We would therefore like to represent these constraints in such a way that we can efficiently optimize over them. Additionally, we would like to do this in a composable way, as disjunctive constraints frequently arise as substructures in large, complex optimization problems. Mixed-integer programming (MIP) offers one such solution. A MIP formulation for $S$ is given by a linear programming (LP) relaxation \[ R = \left\{(x,y,z) \in \mathbb{R}^{n+m+r} : Ax + By + Cz \leq d \right\} \] such that, when integrality is imposed on the integer (control) variables $z$, the set projects down onto $S$. MIP formulations are useful because there are sophisticated algorithms--and corresponding high-quality software implementations--that can optimize over these representations efficiently in practice~\cite{Bixby:2007,Junger:2010}. Furthermore, combining MIP formulations for different substructures is trivial, and so this technology can be marshalled for very complex and large-scale optimization problems. Indeed, it is very often the case that MIP formulations outperform branch-and-bound methods that work directly on the disjunctions (e.g. \cite{Beaumont:1990,Land:1960}), despite the fact that they require additional integer variables and constraints. This can largely be attributed to the immense advances of MIP solvers over the past decades, as their algorithms are now much more complex than a traditional branch-and-bound approach. In particular, the development of a sophisticated theory on cutting planes plays a crucial role here~\cite{Bixby:2007,Junger:2010}.
These techniques can easily combine information from multiple disjunctions and other constraints in the optimization problem to provide tighter relaxations and shorten computation time. Achieving this without the integer control variables can require significant theoretical developments even for very specific structures~\cite{Farias-Jr.:2013,Keha:2006}. The downside of the MIP approach, however, is that it requires additional integer control variables and constraints, which leads to larger (and therefore slower) LP relaxations. One goal of this work is to reduce the number of control variables needed to model disjunctions. A folklore result holds that any MIP formulation must use at least $\lceil \log_2(d) \rceil$ control variables to represent the union of $d$ sets. A recent generalization of this result~\cite[Lemma 2]{Lubin:2017a} shows that this bound holds even if we allow general integer control variables or a convex nonlinear relaxation. Therefore, if we hope to further reduce the number of control variables, we will need to go beyond traditional MIP formulations. Fortunately, there has recently been growing interest in studying the expressive power and computational properties of generalizations of traditional MIP formulations \cite{Bader:2015,Bonami:2017,Hildebrand:2017}. In particular, our work builds off of the ideas of Bonami et al.~\cite{Bonami:2017} for handling ``holes'' in integer sets. As a simple example, consider the disjunctive constraint $x \in S = \{1,2,4,5\}$. This constraint is nearly equivalent to a standard integrality constraint $x \in [1,5] \cap \mathbb{Z}$, but with a hole in the domain at $3$. A traditional MIP formulation for this might introduce a binary variable $z$ and impose the constraints \begin{equation} \label{eqn:simple-MIP-formulation} 1z + 4(1-z) \leq x \leq 2z + 5(1-z), \quad\quad (x,z) \in \mathbb{Z} \times \{0,1\}. \end{equation} Bonami et al. 
handle these holes directly in the original $x$ space, using \emph{wide split disjunctions}. A standard branch-and-bound algorithm~\cite{Land:1960} will perform \emph{variable branching} on fractional solutions: given the point $\hat{x}=2.5$, it rounds $\hat{x}$ and imposes the valid disjunction $x \leq 2 \vee x \geq 3$ to separate this point. However, this leaves us no way to separate the hole at $\hat{x}=3$, which is integer but not feasible for the original constraint. A natural way around this is to impose a wide split disjunction of the form $x \leq 2 \vee x \geq 4$ to separate $\hat{x}$ from $S$. This is a straightforward change to the branch-and-bound algorithm that does not require any additional control variables. The crucial observation of Bonami et al. is that wide split disjunctions also readily admit standard cutting plane techniques, such as the intersection cut~\cite{Andersen:2005}. By combining the slightly modified branch-and-bound algorithm with cutting planes, Bonami et al. observe a considerable computational speed-up compared to a ``full'' formulation like \eqref{eqn:simple-MIP-formulation} when optimizing over integer sets with holes. Our work extends this general idea to the mixed-integer setting with the aim of constructing very small formulations for disjunctive constraints. We will see that if we allow holes in our mixed-integer formulations, we can drastically reduce the number of control variables and constraints we need to build formulations. We can optimize over these representations using variable branching or wide split disjunctions, meaning that the same cutting plane machinery (and, hopefully, computational performance) applied by Bonami et al. is applicable in our case as well. In certain degenerate cases we will need to deploy two-term non-parallel disjunctions, for which cut generation techniques have also been developed in the literature~\cite{Andersen:2005,Balas:1998,Bonami:2013,Kis:2014}.
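As a quick sanity check (ours, not from the original text), the toy formulation \eqref{eqn:simple-MIP-formulation} can be verified by enumeration in a few lines of Python:

```python
from itertools import product

# Enumerate the feasible set of formulation (1):
#   1z + 4(1-z) <= x <= 2z + 5(1-z),  x integer,  z in {0,1},
# and confirm that it projects onto S = {1, 2, 4, 5}.
feasible_x = {
    x
    for x, z in product(range(0, 7), (0, 1))
    if 1 * z + 4 * (1 - z) <= x <= 2 * z + 5 * (1 - z)
}
print(sorted(feasible_x))  # [1, 2, 4, 5]
```

Branching on the binary variable $z$ recovers the two pieces of the disjunction, while the wide split $x \leq 2 \vee x \geq 4$ achieves the same separation without $z$.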
More concretely, our contributions are as follows. \begin{itemize} \item \textbf{An explicit geometric construction for strong formulations of disjunctive constraints.} This gives us a practical way to construct both traditional and generalized MIP formulations for the broad class of \emph{combinatorial disjunctive constraints}~\cite{Huchette:2016a}. For the traditional case, the resulting formulations are \emph{integral} or \emph{ideal} (i.e. the linear programming relaxations of the formulations have extreme points that satisfy the integrality constraints on the control variables), and for the generalized case they satisfy a natural generalization of this property. The construction also gives an upper bound on the number of general inequality constraints needed to construct an ideal formulation (traditional and/or generalized) for a given constraint. \item \textbf{A framework for generalized MIP formulations, and branching rules to optimize over them.} We present the family of \emph{mixed-integer branching formulations} as a mixed-integer generalization of the ``integer programming with holes'' approach of Bonami et al.~\cite{Bonami:2017}. These generalized MIP formulations sidestep the logarithmic lower bound on the number of integer variables and have practical branching rules that can be used to generate cutting planes and implement branch-and-bound algorithms. Finally, explicit inequality descriptions for these formulations can be easily obtained from our main geometric construction result. \item \textbf{Very small formulations for disjunctive constraints.} We show that for any combinatorial disjunctive constraint there exists an ideal mixed-integer branching formulation with only two control variables and at most a linear number of constraints. For the SOS2 constraint, we improve this with an ideal mixed-integer branching formulation using two control variables and only four general inequality constraints.
We also study a relaxation for the annulus, for which we construct an ideal mixed-integer branching formulation with two control variables and only six general inequality constraints. Finally, we apply our main result to also produce two new logarithmic-sized traditional MIP formulations for this relaxed annulus constraint. \end{itemize} \section{Preliminaries} We will use the following generalization of the traditional MIP formulation introduced in Section~\ref{sec:introduction}. \begin{definition} \label{def:valid-formulation} Take some set $S \subseteq \mathbb{R}^n$, along with a rational polyhedron $R = \left\{(x,z) \in \mathbb{R}^{n+r} : Ax + Cz \leq d\right\}$. \begin{itemize} \item We say that $F = \{\bra{x,z}\in R : z\in H\}$ is a \emph{formulation} of $x\in S$ (or just $S$) with respect to the set $H \subset \mathbb{R}^r$ if $\operatorname{Proj}_x\bra{F}=S$. \item We refer to $R$ as the \emph{linear programming (LP) relaxation} of the formulation. \item We call $x$ the \emph{original variables} and $z$ the \emph{control variables}. \item We say a linear inequality defining $R$ is a \emph{variable bound} if it has only one non-zero coefficient, and a \emph{general inequality constraint} otherwise. \end{itemize} \end{definition} Note that we are omitting auxiliary continuous variables in our formulation description. In theory, auxiliary variables could drastically reduce the number of constraints needed to describe the LP relaxation $R$. However, the cases we study in this work will admit very small formulations without auxiliary variables, so we omit them from our discussion for clarity. We also differentiate the two types of constraints describing our relaxation as a simplex-based algorithm can typically impose variable bounds with little or no computational overhead. Finally, we use the ``control variables'' terminology of Jeroslow~\cite{Jeroslow:1987} to emphasize that these variables will not necessarily be allowed to take arbitrary integer values. 
We can easily recover traditional MIP formulations as a special case of our more general definition. \begin{definition} We say that $H \subseteq \mathbb{Z}^r$ is \emph{hole-free} if $\operatorname{Conv}(H) \cap \mathbb{Z}^r = H$. \end{definition} If $H$ is hole-free, we can replace the set constraint $z \in H$ in our formulation with $z \in \operatorname{Conv}(H) \cap \mathbb{Z}^r$. In the case where $\operatorname{Conv}(H)$ is a polyhedron (which will be the case for the remainder), we recover a traditional (linear) MIP formulation for $S$. Of particular note are \emph{binary MIP formulations}, which correspond to the case where $H \subseteq \{0,1\}^r$. The usual notions of formulation strength~\cite{Vielma:2015} also carry over directly to our more general setting. \begin{definition} A formulation of $S \subset \mathbb{R}^n$ with respect to $H$ is \emph{ideal} if the extreme points of its LP relaxation $R$ satisfy $\operatorname{ext}(R) \subseteq \mathbb{R}^{n} \times H$. \end{definition} \subsection{The embedding approach} We will construct formulations for disjunctive sets $S = \bigcup_{i=1}^d P^i$ through what is known as the embedding approach~\cite{Vielma:2015a}. We assign each \emph{alternative} $P^i$ a unique \emph{code} $h^i \in \mathbb{R}^r$. We call such a collection of distinct vectors $H = (h^i)_{i=1}^d$ an \emph{encoding}. Given $\mathcal{P} = (P^i)_{i=1}^d$ and $H$, we construct the \emph{embedding} of $S$ in a higher-dimensional space as \[ \operatorname{Em}(\mathcal{P},H) \defeq \bigcup_{i=1}^d (P^i \times \{h^i\}). \] This object is useful as projecting out the control variables gives us the disjunctive set: $\operatorname{Proj}_x(\operatorname{Em}(\mathcal{P},H)) = S$. In particular, if the encoding satisfies a natural geometric condition, then $Q(\mathcal{P},H) \defeq \operatorname{Conv}(\operatorname{Em}(\mathcal{P},H))$ immediately gives us the LP relaxation for an ideal formulation of $S$ with respect to $H$. 
\begin{definition} A set $H \subset \mathbb{R}^r$ is in \emph{convex position} if $\operatorname{ext}(\operatorname{Conv}(H)) = H$. \end{definition} \begin{proposition} Take $\mathcal{P} = (P^i)_{i=1}^d$. If $H=(h^i)_{i=1}^d$ is an encoding in convex position, then $\{(x,z) \in Q(\mathcal{P},H) : z \in H\}$ is an ideal formulation for $\bigcup_{i=1}^d P^i$ with respect to $H$. \end{proposition} To use an embedding formulation like this in practice we will need (1) an explicit outer (inequality) description of $Q(\mathcal{P},H)$, and (2) a way to iteratively impose the set constraint $z\in H$. Theorem~\ref{thm:general-cdc-characterization} will give us a way to meet the first requirement in both the traditional and generalized MIP setting. For the second requirement, variable branching suffices in the traditional MIP setting when $H \subseteq \mathbb{Z}^r$ is hole-free, as $z\in \operatorname{Conv}(H) \cap \mathbb{Z}^r = H$. We will develop analogous branching schemes for our generalized MIP formulations in Section~\ref{section3}. \subsection{Combinatorial disjunctive constraints} In this work, we will be primarily interested in constructing formulations for \emph{combinatorial disjunctive constraints}~\cite{Huchette:2016a}, where each alternative is some face of the unit simplex.
Notationally, take: \begin{itemize} \item $\llbracket d \rrbracket \defeq \{1,2,\ldots,d-1,d\}$ and $\llbracket j,k \rrbracket \defeq \{j,j+1,\ldots,k-1,k\}$; \item $[d]^2 \defeq \{\{i,j\} \in \llbracket d \rrbracket^2 : i < j\}$ (where notationally $\{i,j\} \in [d]^2$ implies $i < j$); \item The unit simplex as $\Delta^n \defeq \{\lambda \in \mathbb{R}^n_+ : \sum_{v=1}^n \lambda_v = 1\}$; \item The support of an element $\lambda \in \Delta^n$ as $\operatorname{supp}(\lambda) \defeq \{v \in \llbracket n \rrbracket : \lambda_v \neq 0\}$; and \item The face of $\Delta^n$ induced by $T \subseteq \llbracket n \rrbracket$ as $P(T) \defeq \{\lambda \in \Delta^n : \operatorname{supp}(\lambda) \subseteq T\}$. \end{itemize} \begin{definition} A \emph{combinatorial disjunctive constraint} is a constraint of the form $\lambda \in \bigcup_{i=1}^d P(T^i)$, given by the family of distinct nonempty sets $\mathcal{T} = (T^i \subseteq \llbracket n \rrbracket)_{i=1}^d$. We will denote the corresponding set of alternatives as $\mathcal{P}(\mathcal{T}) \defeq (P(T^i))_{i=1}^d$. \end{definition} Throughout we will assume that $\bigcup_{i=1}^d T^i = \llbracket n \rrbracket$; or, equivalently, that $\operatorname{Conv}(\bigcup_{i=1}^d P(T^i)) = \Delta^n$. This is without loss of generality (w.l.o.g.), as otherwise we could simply drop any missing component from the constraint. Due to the classical Minkowski-Weyl Theorem (e.g. \cite[Corollary 3.14]{Conforti:2014}), we can formulate \emph{any} disjunctive constraint as a combinatorial disjunctive constraint, provided we have a description for each alternative in terms of its extreme points. 
In particular, if we label the extreme points as $\bigcup_{i=1}^d \operatorname{ext}(P^i) = \{v^j\}_{j=1}^n$, then we can write the disjunctive set as \[ \bigcup_{i=1}^d P^i = \left\{ \sum_{j=1}^n \lambda_j v^j : \lambda \in \bigcup_{i=1}^d P(T^i) \right\}, \] where each $T^i = \{j \in \llbracket n \rrbracket : v^j \in \operatorname{ext}(P^i)\}$ corresponds to the indices of the extreme points of $P^i$ in our ordering. Combinatorial disjunctive constraints are a particularly natural way to formulate a number of disjunctive constraints of interest, including piecewise linear functions, non-convex set inclusion or collision avoidance constraints, and relaxations for multilinear functions~\cite{Huchette:2016a}. Unfortunately, combinatorial disjunctive constraints are a class of constraints for which the folklore lower bound on the number of integer control variables holds under a simple non-redundancy property. We can formalize this through the following simple proposition that we prove in Section~\ref{app:prove-cdc-properties}. \begin{proposition}\label{easycombinatorial} Take $\mathcal{T} = (T^i \subseteq \llbracket n \rrbracket)_{i=1}^d$ as a representation for a combinatorial disjunctive constraint, where $T^i \not\subseteq T^j$ and $T^j \not\subseteq T^i$ for each $\{i,j\} \in [d]^2$. Consider some encoding $H = (h^i)_{i=1}^d \subset \mathbb{R}^r$. Then there exists a formulation for $\bigcup_{i=1}^d P(T^i)$ with respect to $H$ if and only if $H$ is in convex position. In particular, if such a formulation exists and $H$ is hole-free, then $r \geq \lceil \log_2(d) \rceil$ necessarily. \end{proposition} Proposition~\ref{easycombinatorial} tells us that any traditional MIP formulation for a combinatorial disjunctive constraint requires at least a logarithmic number of control variables.
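As a concrete instance (our illustration, with helper names that are ours), the family $\mathcal{T} = (\{i,i+1\})_{i=1}^{4}$ on $n = 5$ components encodes the classical SOS2 constraint: a point of $\Delta^5$ is feasible exactly when its support fits inside some $T^i$. A direct membership test:

```python
# Membership test (our illustration) for the combinatorial disjunctive
# constraint given by T^i = {i, i+1} (the SOS2 constraint) with n = 5.
T = [{1, 2}, {2, 3}, {3, 4}, {4, 5}]

def satisfies_cdc(lam, T, tol=1e-9):
    """lam maps component v -> lambda_v; feasible iff lam lies on the unit
    simplex and supp(lam) is contained in some T^i."""
    if any(x < -tol for x in lam.values()):
        return False                          # not nonnegative
    if abs(sum(lam.values()) - 1) > tol:
        return False                          # does not sum to one
    support = {v for v, x in lam.items() if x > tol}
    return any(support <= Ti for Ti in T)

print(satisfies_cdc({1: 0.3, 2: 0.7, 3: 0, 4: 0, 5: 0}, T))  # True
print(satisfies_cdc({1: 0.3, 2: 0, 3: 0.7, 4: 0, 5: 0}, T))  # False: supp = {1, 3}
```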
Furthermore, Huchette and Vielma~\cite{Huchette:2017,Vielma:2015a} give a lower bound of $2\lceil \log_2(d) \rceil$ on the number of general inequality constraints for ideal (non-extended) formulations of the SOS2 constraint~\cite{Beale:1970}, where $\ensuremath{\calT^{\operatorname{SOS2}}}_d \defeq (\{i,i+1\})_{i=1}^d$. This result follows from an explicit outer description for $Q(\mathcal{P}(\ensuremath{\calT^{\operatorname{SOS2}}}_d),H)$ for the case where $H$ is any hole-free encoding that is in convex position. In other words, the result characterizes all non-extended ideal MIP formulations of the SOS2 constraint. This result can be used to produce the family of ``logarithmic'' MIP formulations for the SOS2 constraint which have proven computational efficacy~\cite{Huchette:2017,Vielma:2015,Vielma:2010,Vielma:2009a}. \subsection{Summary of main contributions} Motivated by the success of the logarithmic traditional MIP formulations for the SOS2 constraint, we extend the geometric characterization of Huchette and Vielma~\cite{Huchette:2017,Vielma:2015a} to any combinatorial disjunctive constraint. More precisely, we give an explicit linear inequality description of $Q(\mathcal{P}(\mathcal{T}),H)$ for any combinatorial disjunctive constraint given by the family $\mathcal{T}$, paired with any encoding $H$ that is in convex position. This gives a practical way to build ideal formulations, particularly for low-dimensional encodings. It also gives an upper bound on the number of general inequality constraints needed to construct any ideal formulation for a given combinatorial disjunctive constraint. The statement of our main technical result is as follows. \begin{theorem} \label{thm:general-cdc-characterization} Take $\mathcal{T} = (T^i \subseteq \llbracket n \rrbracket)_{i=1}^d$ and $H = (h^i)_{i=1}^d \subset \mathbb{R}^r$ as an encoding in convex position. 
Let $D = \{\{i,j\} \in [d]^2 : T^i \cap T^j \neq \emptyset\}$, and presume that $D$ is connected in the sense that the associated graph $G=(\llbracket d \rrbracket,D)$ is connected. Take $C = \{c^{i,j} \defeq h^j - h^i\}_{\{i,j\} \in D}$, and $\mathcal{L} = \operatorname{span}(C)$. Define $M(b;\mathcal{L}) \defeq \{y \in \mathcal{L} : b \cdot y = 0\}$ to be the hyperplane in the linear space $\mathcal{L}$ induced by the direction $b \neq {\bf 0}^r$. If $\{b^k\}_{k=1}^\Gamma\subset \mathbb{R}^r \backslash \{{\bf 0}^r\}$ is such that $\{M(b^k;\mathcal{L})\}_{k=1}^\Gamma$ is the set of linear hyperplanes spanned by $C$ in $\mathcal{L}$, then $(\lambda,z) \in Q(\mathcal{P}(\mathcal{T}),H)$ if and only if \begin{subequations} \label{eqn:general-V-formulation} \begin{gather} \sum_{v=1}^n \min_{s : v \in T^s}\{b^k \cdot h^s\} \lambda_v \leq b^k \cdot z \leq \sum_{v=1}^n \max_{s : v \in T^s}\{b^k \cdot h^s\} \lambda_v \quad \forall k \in \llbracket \Gamma \rrbracket \label{eqn:general-V-formulation-1} \\ (\lambda,z) \in \Delta^{n} \times \operatorname{aff}(H). \label{eqn:general-V-formulation-2} \end{gather} \end{subequations} \end{theorem} We defer the proof to Section~\ref{sec:prove-cdc-characterization}, and instead concentrate on its implication for non-traditional formulations, which can be summarized as follows. \begin{enumerate} \item \textbf{[Proposition~\ref{prop:general-cdc-moment-curve}]} For any combinatorial disjunctive constraint on $n$ components with $d$ alternatives, we can produce an ideal mixed-integer branching formulation with two control variables, $\mathscr{O}(d)$ general linear inequality constraints, $\mathscr{O}(n)$ variable bounds, and one equation. \item \textbf{[Proposition~\ref{prop:sos2-constant}]} For the SOS2 constraint on $n=d+1$ components, we can produce an ideal mixed-integer branching formulation with two control variables, four general linear inequality constraints, $\mathscr{O}(n)$ variable bounds, and one equation. 
\item \textbf{[Propositions~\ref{prop:log-annulus} and \ref{prop:zig-zag-annulus}]} For a relaxation of the annulus as a partition of $d$ quadrilaterals, we can produce two ideal traditional MIP formulations with $\lceil\log_2(d)\rceil$ control variables, $\mathscr{O}(\operatorname{polylog}(d))$ general linear inequality constraints, $\mathscr{O}(d)$ variable bounds, and one equation. \item \textbf{[Proposition~\ref{prop:exotic-annulus}]} For a relaxation of the annulus as a partition of $d$ quadrilaterals, we can produce an ideal mixed-integer branching formulation with two control variables, six general linear inequality constraints, $\mathscr{O}(d)$ variable bounds, and one equation. \end{enumerate} In other words, our new formulation approach allows us to construct ideal formulations for any combinatorial disjunctive constraint with very few control variables and at most a linear number of general linear inequality constraints. Furthermore, by taking advantage of structure, we can further reduce this to a constant number of general inequalities for the SOS2 constraint and a tight relaxation for the annulus. However, a formulation with two control variables implies a two-dimensional encoding, which cannot be hole-free if $d > 4$. Hence, the resulting formulation is not a traditional MIP formulation. In the following section, we present a way to optimize over such representations in a branch-and-bound setting, using branching schemes customized for a particular encoding. The branching schemes we present will use combinations of variable branching, wide axis-aligned split disjunctions (or just \emph{wide variable branching}) \`a la Bonami et al.~\cite{Bonami:2017}, and general two-term disjunctions. As a result, both standard and state-of-the-art cutting plane technology can be deployed to strengthen the relaxations of our formulations.
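As a worked instance of Theorem~\ref{thm:general-cdc-characterization} (our illustration), consider the SOS2 constraint with $d = 4$ and $n = 5$, paired with the reflected binary Gray codes $h^1=(0,0)$, $h^2=(1,0)$, $h^3=(1,1)$, $h^4=(0,1)$. The differences $h^j - h^i$ over intersecting pairs span only the two coordinate axes, so $\Gamma = 2$ and we may take $b^1 = (0,1)$ and $b^2 = (1,0)$. The coefficients of \eqref{eqn:general-V-formulation-1} can then be computed mechanically:

```python
# Worked instance (ours) of the Theorem for SOS2 with d = 4, n = 5.
H = [(0, 0), (1, 0), (1, 1), (0, 1)]   # Gray codes h^1..h^4
T = [{1, 2}, {2, 3}, {3, 4}, {4, 5}]   # T^s = {s, s+1}
B = [(0, 1), (1, 0)]                   # normals to the lines spanned by h^j - h^i

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

def coefficient_rows(b):
    """Lower/upper coefficients of lambda_v in eqn (general-V-formulation-1)."""
    lo = [min(dot(b, H[s]) for s in range(4) if v in T[s]) for v in range(1, 6)]
    hi = [max(dot(b, H[s]) for s in range(4) if v in T[s]) for v in range(1, 6)]
    return lo, hi

print(coefficient_rows((0, 1)))  # ([0, 0, 0, 1, 1], [0, 0, 1, 1, 1])
print(coefficient_rows((1, 0)))  # ([0, 0, 1, 0, 0], [0, 1, 1, 1, 0])
```

The printed rows read off the four general inequalities $\lambda_4+\lambda_5 \leq z_2 \leq \lambda_3+\lambda_4+\lambda_5$ and $\lambda_3 \leq z_1 \leq \lambda_2+\lambda_3+\lambda_4$, matching the $2\lceil \log_2(d) \rceil$ general inequality count discussed above.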
\section{Branching schemes and mixed-integer branching formulations} \label{section3} We reiterate that traditional MIP formulations are useful because there exist algorithms--and high-quality implementations of those algorithms--that are able to optimize over these representations efficiently in practice. Roughly, these implementations work by applying the branch-and-bound method~\cite{Land:1960}, coupled with the judicious application of cutting planes to strengthen the LP relaxation. In this section, we formalize how our generalized notion of a MIP formulation fits into the branch-and-bound framework. \subsection{Branching schemes} We start by formally defining what we mean by a branching scheme. \begin{definition} \label{def:branching-scheme} A \emph{branching scheme} is a procedure that, given \begin{itemize} \item a polyhedron $Q \subset \mathbb{R}^{r}$, \item an encoding $H \subset \mathbb{R}^r$, and \item a point $\hat{z} \in Q$, \end{itemize} either verifies that $\hat{z} \in H$, or outputs two polyhedra $Q^1,Q^2 \subset \mathbb{R}^r$ such that \begin{itemize} \item $\hat{z} \not\in Q^1$ and $\hat{z} \not\in Q^2$, \item $Q \supseteq Q^1 \cup Q^2$, \item $Q \cap H = (Q^1 \cap H) \cup (Q^2 \cap H)$, and \item $Q^1 \cap Q^2 = \emptyset$. \end{itemize} \end{definition} We note that branching is described solely in terms of the control variables $z$, in contrast to a constraint branching approach \cite{Beale:1970,Farias-Jr.:2008,Farias-Jr.:2013,Keha:2006,Tomlin:1981}, which would work directly on the original variables $x$. Additionally, our branching schemes map back to our original setting in a straightforward way: if $R$ is the LP relaxation for our formulation in $(x,z)$-space, take $Q = \operatorname{Proj}_z(R)$. Then we can construct $Q^1$ and $Q^2$ by adding linear inequalities to $Q$. These inequalities will map to a linear inequality for $R$ with support only on the $z$ variables, giving two polyhedra $R^1$ and $R^2$ in the original $(x,z)$-space. 
We will call $R^1$ and $R^2$ the LP relaxations for the \emph{subproblems}, and $Q^1$ and $Q^2$ the \emph{code relaxations} for the subproblems. In the case that an encoding $H$ has an associated branching scheme, we will say that the corresponding formulation is a \emph{mixed-integer branching formulation} to emphasize that this is a strict generalization of traditional MIP formulations, and that mixed-integer branching formulations retain many of the computational properties of traditional MIP formulations relevant for branch-and-bound and branch-and-cut methods. Although a branch-and-bound method using variable branching may produce exponentially many subproblems, it enjoys a finite termination guarantee. This is not necessarily the case for any branching scheme constructed according to Definition~\ref{def:branching-scheme}. However, it is not difficult to see that, as $H$ is finite, a sufficient condition for finite termination is that $\operatorname{Conv}(Q^1 \cap H) = Q^1$ and $\operatorname{Conv}(Q^2 \cap H) = Q^2$. \subsection{The reflected binary Gray and zig-zag encodings} \label{sec:brg-zigzag} The first two encodings we will present have previously been used in the literature to construct traditional MIP formulations for the SOS2 constraint. Both are defined recursively~\cite{Huchette:2017} by the rows of the matrices $K^1 = C^1 = (0,1)^T$ and \[ K^{t+1} = \begin{pmatrix} K^t & {\bf 0}^t \\ \operatorname{rev}(K^t) & {\bf 1}^t \end{pmatrix}, \quad C^{t+1} = \begin{pmatrix} C^t & {\bf 0}^t \\ C^t + {\bf 1}^t \otimes C^t_t & {\bf 1}^t \end{pmatrix} \quad\quad \forall t \in \{1,2,\ldots\}, \] where ${\bf 0}^t \in \mathbb{R}^t$ (respectively ${\bf 1}^t \in \mathbb{R}^t$) is the vector with all entries equal to zero (respectively one), $A_i$ is the $i$-th row of the matrix $A$, $\operatorname{rev}(A)$ reverses the rows of the matrix $A$, and $u \otimes v = u v^T \in \mathbb{R}^{m \times n}$ for any $u \in \mathbb{R}^m$ and $v \in \mathbb{R}^n$.
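The recursion for $K^t$ and $C^t$ is straightforward to transcribe. The following sketch (ours) builds $K^3$ and $C^3$ row by row:

```python
# Transcription (ours) of the recursive definitions of K^t and C^t.
def next_K(K):
    """K^{t+1}: stack [K^t, 0] on top of [rev(K^t), 1]."""
    return [row + [0] for row in K] + [row + [1] for row in reversed(K)]

def next_C(C):
    """C^{t+1}: stack [C^t, 0] on top of [C^t + ones (x) C^t_t, 1]."""
    last = C[-1]  # the last row C^t_t
    return ([row + [0] for row in C] +
            [[a + b for a, b in zip(row, last)] + [1] for row in C])

K, C = [[0], [1]], [[0], [1]]          # K^1 = C^1 = (0, 1)^T
for _ in range(2):                     # build K^3 and C^3
    K, C = next_K(K), next_C(C)
print(K)  # consecutive rows differ in exactly one coordinate (Gray code)
print(C)  # consecutive rows differ by a unit step in one coordinate (zig-zag)
```

The resulting rows trace exactly the two $d = 8$ paths drawn in Figure~\ref{fig:hole-free}.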
For some fixed number $d \in \mathbb{N}$, take $r = \lceil \log_2(d) \rceil$. We will take the \emph{reflected binary Gray} encoding~\cite{Huchette:2017,Savage:1997,Vielma:2010,Vielma:2009a} $\ensuremath{H^{\operatorname{br}}}_d \subseteq \{0,1\}^r$ as the first $d$ rows of $K^r$, and the \emph{zig-zag} encoding~\cite{Huchette:2017} $\ensuremath{H^{\operatorname{zz}}}_d$ as the first $d$ rows of $C^r$. In Figure~\ref{fig:hole-free}, we see the two encodings for $d=8$ (cf. Figure~1 of \cite{Huchette:2017}). \begin{figure}[htpb] \centering \begin{tikzpicture} \draw [thick, ->] (0,0,0) -- (1,0,0); \draw [thick, ->] (1,0,0) -- (1,1,0); \draw [thick, ->] (1,1,0) -- (0,1,0); \draw [thick, ->] (0,1,0) -- (0,1,1); \draw [thick, ->] (0,1,1) -- (1,1,1); \draw [thick, ->] (1,1,1) -- (1,0,1); \draw [thick, ->] (1,0,1) -- (0,0,1); \begin{scope}[canvas is xy plane at z=0] \draw[help lines] (0,0) grid (2,2); \node [right] at (2,0) {{\smaller $z_1$}}; \node [left] at (0,0) {{\smaller $h^1$}}; \end{scope} \begin{scope}[canvas is xy plane at z=1] \node [left] at (0,0) {\smaller $h^8$}; \end{scope} \begin{scope}[canvas is xz plane at y=0] \draw[help lines] (0,0) grid (2,2); \node [below] at (0,2) {{\smaller $z_3$}}; \end{scope} \begin{scope}[canvas is yz plane at x=0] \draw[help lines] (0,0) grid (2,2); \node [above] at (2,0) {{\smaller $z_2$}}; \end{scope} \begin{scope}[canvas is xy plane at z=0] \draw [fill] (0,0) circle [radius=.05]; \end{scope} \end{tikzpicture} \begin{tikzpicture} \draw [thick, ->] (0,0,0) -- (1,0,0); \draw [thick, ->] (1,0,0) -- (1,1,0); \draw [thick, ->] (1,1,0) -- (2,1,0); \draw [thick, ->] (2,1,0) -- (2,1,1); \draw [thick, ->] (2,1,1) -- (3,1,1); \draw [thick, ->] (3,1,1) -- (3,2,1); \draw [thick, ->] (3,2,1) -- (4,2,1); \begin{scope}[canvas is xy plane at z=0] \draw[help lines] (0,0) grid (5,2); \node [right] at (5,0) {{\smaller $z_1$}}; \node [left] at (0,0) {{\smaller $h^1$}}; \end{scope} \begin{scope}[canvas is xy plane at z=1] \node [right] at (4,2) {\smaller $h^8$}; \end{scope} \begin{scope}[canvas is xz plane at y=0] \draw[help lines] (0,0) grid (5,2); \node [below] at (0,2) {{\smaller $z_3$}}; \end{scope} \begin{scope}[canvas is yz plane at x=0] \draw[help lines] (0,0) grid (2,2); \node [above] at (2,0) {{\smaller $z_2$}}; \end{scope} \begin{scope}[canvas is xy plane at z=0] \draw [fill] (0,0) circle [radius=.05]; \end{scope} \end{tikzpicture} \caption{Depiction of the reflected binary Gray encoding $\ensuremath{H^{\operatorname{br}}}_8$ (\textbf{Left}) and the zig-zag encoding $\ensuremath{H^{\operatorname{zz}}}_8$ (\textbf{Right}).} \label{fig:hole-free} \end{figure} It can be shown that both the reflected binary Gray and zig-zag encodings are hole-free and in convex position~\cite{Huchette:2017}, and therefore lead to traditional MIP formulations. Indeed, Huchette and Vielma~\cite{Huchette:2017} use both encodings to construct traditional MIP formulations for the SOS2 constraint that use $\lceil \log_2(d) \rceil$ control variables and $2\lceil \log_2(d) \rceil$ general inequality constraints. In Section~\ref{sec:MIP-annulus}, we will apply Theorem~\ref{thm:general-cdc-characterization} using the two encodings to construct logarithmic traditional MIP formulations for a relaxation of the annulus. Additionally, the two hole-free encodings give us an opportunity to show how traditional variable branching fits into our branching scheme framework. Take either $H=\ensuremath{H^{\operatorname{br}}}_d$ or $H=\ensuremath{H^{\operatorname{zz}}}_d$, and consider some point $\hat{z} \in \operatorname{Conv}(H)$. If $\hat{z} \in \mathbb{Z}^r$, this verifies that $\hat{z} \in H$. Otherwise, we can select a component $k \in \llbracket r \rrbracket$ which is fractional, i.e. $\hat{z}_{k} \not\in \mathbb{Z}$.
Then the two child code relaxations are created by rounding this component: $Q^1 = \{z \in Q : z_k \leq \lfloor \hat{z}_k \rfloor \}$ and $Q^2 = \{z \in Q : z_k \geq \lceil \hat{z}_k \rceil \}$. \subsection{Moment curve encoding} \label{sec:moment-curve} The $\eta$-dimensional moment curve is given by the function $m_\eta(t) = (t,t^2,\ldots,t^\eta)$. Given $d (\geq \eta)$ ordered points $t_1 < t_2 < \cdots < t_d$ on the real line, the corresponding \emph{cyclic polytope} is $\operatorname{Conv}(\{m_\eta(t_i)\}_{i=1}^d)$, a well-studied object~\cite{Bogomolov:2015,Ziegler:2007}. For our purposes, we are interested in constructing encodings that lie along the two-dimensional moment curve: $\ensuremath{H^{\operatorname{mc}}}_d \defeq (m_2(i))_{i=1}^d$. If $d > 2$, then this choice of encoding is not hole-free; for example, $\frac{1}{2}(m_2(1)+m_2(3)) = (2,5) \not\in \ensuremath{H^{\operatorname{mc}}}_d$. However, the encoding is in convex position, and it is straightforward to check if $\hat{z} \in \ensuremath{H^{\operatorname{mc}}}_d$. We also see that a description for $\Psi_d(l,u) \defeq \operatorname{Conv}(\{z \in \ensuremath{H^{\operatorname{mc}}}_d : l \leq z_1 \leq u\})$ is \begin{subequations} \label{eqn:moment-curve-convex-hull} \begin{align} z_2 - i^2 &\geq (2i+1)(z_1-i) \quad \forall i \in \llbracket l,u-1 \rrbracket \\ (u-l)(z_2 - l^2) &\leq (u^2-l^2)(z_1-l). \end{align} \end{subequations} Our branching scheme for the encoding $\ensuremath{H^{\operatorname{mc}}}_d$ starts with a relaxation of the form $Q = \Psi_d(\ell,u)$ for some $\ell, u \in \mathbb{Z}$. Provided that $\hat{z} \not\in \ensuremath{H^{\operatorname{mc}}}_d$, we create two child code relaxations of the form $Q^1 = \Psi_d(\ell,\lfloor \hat{z}_1 \rfloor)$ and $Q^2 = \Psi_d(\lfloor \hat{z}_1 \rfloor + 1,u)$. See Figure~\ref{fig:moment-curve-branching} for an illustration of the branching. 
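As a numerical sanity check (ours), we can test membership in $\Psi_d(l,u)$ directly from \eqref{eqn:moment-curve-convex-hull} and verify that branching at a fractional point excludes that point from both children:

```python
# Numerical check (ours) of the moment-curve code relaxation and its branching.
import math

def in_Psi(z, l, u, tol=1e-9):
    """Membership in Psi_d(l, u) = conv({(i, i^2) : l <= i <= u, i integer}),
    using the secant inequalities of eqn (moment-curve-convex-hull)."""
    z1, z2 = z
    below = all(z2 - i * i >= (2 * i + 1) * (z1 - i) - tol for i in range(l, u))
    above = (u - l) * (z2 - l * l) <= (u * u - l * l) * (z1 - l) + tol
    return below and above

z_hat = (3.5, 17.5)                    # a fractional LP point inside Psi_7(1, 7)
print(in_Psi(z_hat, 1, 7))             # True: in the parent code relaxation
k = math.floor(z_hat[0])               # branch into Psi(1, k) and Psi(k + 1, 7)
print(in_Psi(z_hat, 1, k), in_Psi(z_hat, k + 1, 7))  # False False
```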
We emphasize that while this branching scheme uses two-term disjunction branching, in nearly every case a (potentially wide) variable branching disjunction is also valid. For example, the variable branching disjunction $z_1 \leq 3 \vee z_1 \geq 4$ is valid and separates the point in the left side of Figure~\ref{fig:moment-curve-branching}. This holds in all but pathological cases, such as the one depicted in the right side of Figure~\ref{fig:moment-curve-branching}. This means that the branching portion of the algorithm can proceed using the branching scheme described above, while the cut generation procedure can also use the valid variable branching split disjunctions. \begin{figure}[htpb] \centering \begin{tikzpicture}[xscale=0.8,yscale=0.2] \draw [dashed, fill=gray!20] (1,1) -- (2,4) -- (3,9) -- (4,16) -- (5,25) -- (6,36) -- (7,49) -- (1,1); \draw [fill=gray!80] (1,1) -- (2,4) -- (3,9) -- (1,1); \draw [fill=gray!80] (4,16) -- (5,25) -- (6,36) -- (7,49) -- (4,16); \draw [fill,yscale=4] (3.5,17.5/4) circle [radius=.05]; \draw[help lines] (1,1) grid (7,49); \end{tikzpicture} \hspace{2em} \begin{tikzpicture}[xscale=0.8,yscale=0.2] \draw [dashed, fill=gray!20] (1,1) -- (2,4) -- (3,9) -- (4,16) -- (5,25) -- (6,36) -- (7,49) -- (1,1); \draw [fill=gray!80] (1,1) -- (2,4) -- (3,9) -- (4,16) -- (1,1); \draw [fill=gray!80] (5,25) -- (6,36) -- (7,49) -- (5,25); \draw [fill,yscale=4] (4,25/4) circle [radius=.05]; \draw[help lines] (1,1) grid (7,49); \end{tikzpicture} \caption{Illustration of the branching scheme for the moment curve encoding $\ensuremath{H^{\operatorname{mc}}}_7$. The original code relaxation in the $z$-space is shown in the dashed region, and those for the two subproblems are shown in the darker shaded regions. The optimal solution for the original LP relaxation is depicted with a solid dot.
We show the branching with a solution that is fractional \textbf{(Left)}, and one where there is no valid variable branching disjunction to separate the point \textbf{(Right)}.} \label{fig:moment-curve-branching} \end{figure} \subsection{A more exotic encoding} \label{sec:exotic-encoding} Consider the two-dimensional encoding in Figure~\ref{fig:sos2-constant}. Take some positive integer $r$, along with $d=4r$, and consider the encoding $\ensuremath{H^{\operatorname{ex}}}_d = (h^i)_{i=1}^{d}$ where \begin{subequations} \label{eqn:exotic-encoding} \begin{align} h^{4k-3} &= \left( k-r-1, \frac{1}{2}(k-1)(k-2r-2) \right) \\ h^{4k-2} &= \left( r-k+1, \frac{1}{2}(k-1)(k-2r-2) \right) \\ h^{4k-1} &= \left( r-k+1, -\frac{1}{2}k(k-2r-1) \right) \\ h^{4k} &= \left( k-r, -\frac{1}{2}k(k-2r-1) \right) \end{align} \end{subequations} for each $k \in \llbracket r \rrbracket$. These points are in convex position. \begin{proposition} For any $r \in \mathbb{N}$, the points $\ensuremath{H^{\operatorname{ex}}}_{4r}$ are in convex position. \end{proposition} \proof{} The result for $r=1$ follows from inspection, so presume that $r > 1$. For each point $h^i$, we propose an inequality $c^i \cdot z \leq b^i$ that strictly separates $h^i$ from the remaining codes in $\ensuremath{H^{\operatorname{ex}}}_d$. For each $k \in \llbracket r \rrbracket$, the coefficients are \begin{alignat*}{2} c^{4k-3} &= \left( -(r-k+2)-(r-k+1), -2 \right) \\ c^{4k-2} &= \left( (r-k+2)+(r-k+1), -2 \right) \\ c^{4k-1} &= \left( (r-k+1)+(r-k), 2 \right) \\ c^{4k} &= \left( -(r-k+1)-(r-k), 2 \right), \end{alignat*} where $b^i = c^{i} \cdot h^{i+4}$ for $i \in \llbracket 4 \rrbracket$ and $b^{i} = c^{i} \cdot h^{i-4}$ for $i \in \llbracket 5,4r \rrbracket$. 
\qed \endproof \begin{figure}[htpb] \centering \begin{tikzpicture}[scale=.3] \fill [fill=gray!20] (0,0) -- (1,4) -- (2,7) -- (3,9) -- (4,10) -- (5,10) -- (6,9) -- (7,7) -- (8,4) -- (8,0) -- (7,-4) -- (6,-7) -- (5,-9) -- (3,-9) -- (2,-7) -- (1,-4) -- (0,0); \draw [thick] (0,0) -- (8,0) -- (8,4) -- (1,4) -- (1,-4) -- (7,-4) -- (7,7) -- (2,7) -- (2,-7) -- (6,-7) -- (6,9) -- (3,9) -- (3,-9) -- (5,-9) -- (5,10) -- (4,10); \draw[help lines] (0,-9) grid (8,10); \node [below left] at (0,0) {$h^1$}; \draw [fill] (0,0) circle [radius=.1]; \node [below right] at (8,0) {$h^2$}; \draw [fill] (8,0) circle [radius=.1]; \node [above right] at (8,4) {$h^3$}; \draw [fill] (8,4) circle [radius=.1]; \node [above left] at (1,4) {$h^4$}; \draw [fill] (1,4) circle [radius=.1]; \node [below left] at (1,-4) {$h^5$}; \draw [fill] (1,-4) circle [radius=.1]; \node [below right] at (7,-4) {$h^6$}; \draw [fill] (7,-4) circle [radius=.1]; \node [above right] at (7,7) {$h^7$}; \draw [fill] (7,7) circle [radius=.1]; \node [above left] at (2,7) {$h^8$}; \draw [fill] (2,7) circle [radius=.1]; \node [below left] at (2,-7) {$h^9$}; \draw [fill] (2,-7) circle [radius=.1]; \node [below right] at (6,-7) {$h^{10}$}; \draw [fill] (6,-7) circle [radius=.1]; \node [above right] at (6,9) {$h^{11}$}; \draw [fill] (6,9) circle [radius=.1]; \node [above left] at (3,9) {$h^{12}$}; \draw [fill] (3,9) circle [radius=.1]; \node [below left] at (3,-9) {$h^{13}$}; \draw [fill] (3,-9) circle [radius=.1]; \node [below right] at (5,-9) {$h^{14}$}; \draw [fill] (5,-9) circle [radius=.1]; \node [above right] at (5,10) {$h^{15}$}; \draw [fill] (5,10) circle [radius=.1]; \node [above left] at (4,10) {$h^{16}$}; \draw [fill] (4,10) circle [radius=.1]; \end{tikzpicture} \caption{The exotic two-dimensional
encoding $\ensuremath{H^{\operatorname{ex}}}_{16}$.} \label{fig:sos2-constant} \end{figure} The structure of this encoding also suggests a relatively simple branching scheme. Given a point $\hat{z} \not\in \ensuremath{H^{\operatorname{ex}}}_d$, we consider three cases, depicted in Figure~\ref{fig:sos2-constant-branching}. In the first case, $\hat{z}_1 \not\in \mathbb{Z}$, and we perform standard variable branching: $Q^1 = \{z \in Q : z_1 \leq \lfloor \hat{z}_1 \rfloor\}$ and $Q^2 = \{z \in Q : z_1 \geq \lceil \hat{z}_1 \rceil\}$. If $\hat{z}_1 \in \mathbb{Z}$, then we consider two other cases. Take $Y = \{h_2 : h \in \ensuremath{H^{\operatorname{ex}}}_d\}$ as the set of all values the encoding takes in the second component, $\underline{b} = \max\{t \in Y : t < \hat{z}_2\}$, and $\overline{b} = \min\{t \in Y : t > \hat{z}_2\}$. If $\hat{z}_2 \not \in Y$, then we apply a wide variable branching of the form $Q^1 = \{z \in Q : z_2 \leq \underline{b}\}$, and $Q^2 = \{z \in Q : z_2 \geq \overline{b}\}$. The final case remains where $\hat{z}_1 \in \mathbb{Z}$, $\hat{z}_2 \in Y$, and yet $\hat{z} \not\in \ensuremath{H^{\operatorname{ex}}}_d$. In this case, we will branch on a two-term non-parallel disjunction. Take $Z(b) = \{h \in \ensuremath{H^{\operatorname{ex}}}_d : h_2 = b\}$. We take the nearest point to the northeast of $\hat{z}$ as the $h^{NE} \in Z(\overline{b})$ with $h^{NE}_1 = \max_{h \in Z(\overline{b})}h_1$. Next, take the nearest point to the southwest as the $h^{SW} \in Z(\underline{b})$ with $h^{SW}_1 = \min_{h \in Z(\underline{b})}h_1$. Take the points directly to the west and east of $\hat{z}$, $h^{W}, h^{E} \in Z(\hat{z}_2)$ (i.e. $h^W_1 < h^E_1$), and we can express the two-term non-parallel disjunction branching with two child code relaxations as $Q^1 = \{z \in Q : (h^{NE}_1-h^W_1)(z_2-h^W_2) \geq (h^{NE}_2-h^W_2)(z_1-h^W_1)\}$ and $Q^2 = \{z \in Q : (h^{SW}_1-h^E_1)(z_2-h^E_2) \geq (h^{SW}_2-h^E_2)(z_1-h^E_1)\}$.
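The construction and the proof above can also be verified numerically for small $r$. The following sketch (ours) generates $\ensuremath{H^{\operatorname{ex}}}_{4r}$ from \eqref{eqn:exotic-encoding} and checks that each proposed inequality $c^i \cdot z \leq b^i$ is satisfied by every code except $h^i$ itself (recall the proof assumes $r > 1$):

```python
# Verification sketch (ours): H^ex_{4r} is in convex position for small r > 1.
def H_ex(r):
    """The codes h^{4k-3}, ..., h^{4k} of eqn (exotic-encoding), k = 1..r."""
    H = []
    for k in range(1, r + 1):
        H += [(k - r - 1, (k - 1) * (k - 2 * r - 2) // 2),
              (r - k + 1, (k - 1) * (k - 2 * r - 2) // 2),
              (r - k + 1, -(k * (k - 2 * r - 1)) // 2),
              (k - r,     -(k * (k - 2 * r - 1)) // 2)]
    return H

def in_convex_position(r):
    """Check the separating inequalities c^i . z <= b^i from the proof."""
    H, d = H_ex(r), 4 * r
    for k in range(1, r + 1):
        cs = [(-(r - k + 2) - (r - k + 1), -2),
              ((r - k + 2) + (r - k + 1), -2),
              ((r - k + 1) + (r - k), 2),
              (-(r - k + 1) - (r - k), 2)]
        for j, c in enumerate(cs):
            i = 4 * (k - 1) + j                  # 0-based index of h^{i+1}
            mate = i + 4 if i < 4 else i - 4     # b^i = c^i . h^{i +/- 4}
            b = c[0] * H[mate][0] + c[1] * H[mate][1]
            vals = [c[0] * h[0] + c[1] * h[1] for h in H]
            if not (vals[i] > b and
                    all(v <= b for t, v in enumerate(vals) if t != i)):
                return False
    return True

print(all(in_convex_position(r) for r in range(2, 7)))  # True
```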
\begin{figure} \centering \begin{tikzpicture}[scale=.3] \draw [dashed, fill=gray!20] (0,0) -- (1,4) -- (2,7) -- (3,9) -- (4,10) -- (5,10) -- (6,9) -- (7,7) -- (8,4) -- (8,0) -- (7,-4) -- (6,-7) -- (5,-9) -- (3,-9) -- (2,-7) -- (1,-4) -- (0,0); \draw[help lines] (0,-9) grid (8,10); \fill [fill=gray!80] (0,0) -- (1,4) -- (2,7) -- (3,9) -- (4,10) -- (5,10) -- (5,-9) -- (3,-9) -- (2,-7) -- (1,-4) -- (0,0); \fill [fill=gray!80] (6,9) -- (7,7) -- (8,4) -- (8,0) -- (7,-4) -- (6,-7); \draw [fill] (5.4,3) circle [radius=0.1]; \end{tikzpicture} \hspace{2em} \begin{tikzpicture}[scale=.3] \draw [dashed, fill=gray!20] (0,0) -- (1,4) -- (2,7) -- (3,9) -- (4,10) -- (5,10) -- (6,9) -- (7,7) -- (8,4) -- (8,0) -- (7,-4) -- (6,-7) -- (5,-9) -- (3,-9) -- (2,-7) -- (1,-4) -- (0,0); \draw[help lines] (0,-9) grid (8,10); \fill [fill=gray!80] (1,4) -- (2,7) -- (3,9) -- (4,10) -- (5,10) -- (6,9) -- (7,7) -- (8,4); \fill [fill=gray!80] (0,0) -- (8,0) -- (7,-4) -- (6,-7) -- (5,-9) -- (3,-9) -- (2,-7) -- (1,-4) -- (0,0); \draw [fill] (5,3) circle [radius=0.1]; \end{tikzpicture} \hspace{2em} \begin{tikzpicture}[scale=.3] \draw [dashed, fill=gray!20] (0,0) -- (1,4) -- (2,7) -- (3,9) -- (4,10) -- (5,10) -- (6,9) -- (7,7) -- (8,4) -- (8,0) -- (7,-4) -- (6,-7) -- (5,-9) -- (3,-9) -- (2,-7) -- (1,-4) -- (0,0); \draw[help lines] (0,-9) grid (8,10); \fill [fill=gray!80] (1,4) -- (2,7) -- (3,9) -- (4,10) -- (5,10) -- (6,9) -- (7,7); \fill [fill=gray!80] (0,0) -- (8,4) -- (8,0) -- (7,-4) -- (6,-7) -- (5,-9) -- (3,-9) -- (2,-7) -- (1,-4) -- (0,0); \draw [fill] (5,4) circle [radius=0.1]; \end{tikzpicture} \caption{Branching scheme for the exotic encoding $\ensuremath{H^{\operatorname{ex}}}_{16}$ ($r=4$) when the LP optimal solution for the control variables $\hat{z}$ has: \textbf{(Left)} $\hat{z}_1$ fractional, \textbf{(Center)} $\hat{z}_1 \in \mathbb{Z}$ but $\hat{z}_2 \notin Y = \{h_2 : h \in \ensuremath{H^{\operatorname{ex}}}_{16}\}$, and \textbf{(Right)} $\hat{z}_1 \in \mathbb{Z}$, $\hat{z}_2 \in Y$,
and $\hat{z} \notin \ensuremath{H^{\operatorname{ex}}}_{16}$. The relaxations for the two subproblems are the two shaded regions in each picture.} \label{fig:sos2-constant-branching} \end{figure} \section{Very small mixed-integer branching formulations} We are now in a position to derive very small mixed-integer branching formulations for combinatorial disjunctive constraints. Each formulation will have only two control variables, and will be constructed using the two-dimensional encodings presented in the previous section. Along the way, we also present two new logarithmic-sized traditional MIP formulations for a relaxation of the annulus that follow as a natural consequence of Theorem~\ref{thm:general-cdc-characterization}. Combined, these results illustrate that Theorem~\ref{thm:general-cdc-characterization} can be practically used to construct both traditional MIP and mixed-integer branching formulations for disjunctive constraints. \subsection{Very small formulations for general combinatorial disjunctive constraints} First, we state a general result: given any combinatorial disjunctive constraint and any two-dimensional encoding in convex position, we can provide an explicit description for a very small ideal formulation. \begin{proposition} \label{prop:general-cdc-with-general-2D-embedding} Take $\mathcal{T} = (T^i \subseteq \llbracket n \rrbracket)_{i=1}^d$ and let $H = (h^i)_{i=1}^d \subset \mathbb{R}^2$ be a two-dimensional encoding in convex position. Take $b^{i,j} = (c^{i,j}_2,-c^{i,j}_1)$ for each $\{i,j\} \in [d]^2$. Then $(\lambda,z) \in Q(\mathcal{P}(\mathcal{T}),H)$ if and only if \begin{gather*} \sum_{v=1}^n \min_{s : v \in T^s} \{b^{i,j} \cdot h^s\} \lambda_v \leq b^{i,j} \cdot z \leq \sum_{v=1}^n \max_{s : v \in T^s} \{b^{i,j} \cdot h^s\} \lambda_v \quad \forall \{i,j\} \in [d]^2 \\ (\lambda,z) \in \Delta^{n} \times \mathbb{R}^2.
\end{gather*} \end{proposition} \proof{} The result follows from Theorem~\ref{thm:general-cdc-characterization}. If $D$ is not connected, we may introduce an artificial $\lambda_{n+1}$ variable to the constraint, and append it to each set via $T \leftarrow T \cup \{n+1\}$ for each $T \in \mathcal{T}$. The corresponding edge set $D' = [d]^2$ is now connected, and we can simply impose that $\lambda_{n+1} \leq 0$ to recover our original constraint. First, we observe that $b^{i,j} \cdot c^{i,j} = 0$, and so as $\mathcal{L}$ is two-dimensional, $M(b^{i,j};\mathcal{L})$ is the hyperplane spanned by $c^{i,j}$. Furthermore, we have that $D = \{\{i,j\} \in [d]^2 : T^i \cap T^j \neq \emptyset\} \subseteq D' = [d]^2$, and so this representation will recover all the inequalities in \eqref{eqn:general-V-formulation-1}. It just remains to show that any inequality given by $\{i,j\} \in D' \backslash D$ is valid for $Q(\mathcal{P}(\mathcal{T}),H)$. To see this, consider any $(\lambda,z) = ({\bf e}^w,h^u) \in \operatorname{Em}(\mathcal{P}(\mathcal{T}),H)$; that is, $w \in T^u$. Then $\sum_{v=1}^n \min_{s : v \in T^s}\{b^{i,j} \cdot h^s\} \lambda_v = \min_{s : w \in T^s}\{b^{i,j} \cdot h^s\} \leq b^{i,j} \cdot h^u$, as $b^{i,j} \cdot h^u$ is one of the terms appearing in the minimization. A similar argument holds for the other side of the constraint. \qed \endproof This result implies a quadratic $\mathscr{O}(d^2)$ upper bound on the number of general inequality constraints needed to construct an ideal mixed-integer branching formulation for \emph{any} combinatorial disjunctive constraint. This is in sharp contrast to the traditional MIP setting, where binary encodings can--and typically do--lead to an exponential number of facets~\cite{Vielma:2015a}. Furthermore, this can be strengthened to an $\mathscr{O}(d)$ upper bound on the number of general inequality constraints when we use the moment curve encoding.
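Generating this inequality description is mechanical. A minimal Python sketch (solver-agnostic; the function name is ours) that emits, for each pair $\{i,j\}$, the direction $b^{i,j}$ together with the lower and upper coefficient vectors:

```python
from itertools import combinations

def cdc_inequalities(T, H):
    """For each pair {i,j}, return (b, lo, hi) encoding the constraint
    sum_v lo[v-1]*lam_v <= b . z <= sum_v hi[v-1]*lam_v,
    where b^{i,j} = (c_2, -c_1) is orthogonal to c^{i,j} = h^j - h^i."""
    n = max(max(t) for t in T)
    out = []
    for i, j in combinations(range(len(T)), 2):
        c = (H[j][0] - H[i][0], H[j][1] - H[i][1])
        b = (c[1], -c[0])
        dot = [b[0] * h[0] + b[1] * h[1] for h in H]  # b . h^s for each code
        lo = [min(dot[s] for s, t in enumerate(T) if v in t) for v in range(1, n + 1)]
        hi = [max(dot[s] for s, t in enumerate(T) if v in t) for v in range(1, n + 1)]
        out.append((b, lo, hi))
    return out

# SOS2 on n = 4 components with the moment curve encoding H^mc_3:
ineqs = cdc_inequalities([{1, 2}, {2, 3}, {3, 4}], [(1, 1), (2, 4), (3, 9)])
```

In this toy instance the first triple is $b = (3,-1)$ with coefficient vectors giving $2\lambda_1 + 2\lambda_2 \leq 3z_1 - z_2 \leq 2\lambda_1 + 2\lambda_2 + 2\lambda_3$, one of the $\mathscr{O}(d)$ inequalities from the moment curve construction.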
\begin{proposition} \label{prop:general-cdc-moment-curve} Take $\mathcal{T} = (T^i \subseteq \llbracket n \rrbracket)_{i=1}^d$. Then $(\lambda,z) \in Q(\mathcal{P}(\mathcal{T}),\ensuremath{H^{\operatorname{mc}}}_d)$ if and only if \begin{gather*} \sum_{v=1}^n \min_{s : v \in T^s}\{s(t-s)\} \lambda_v \leq tz_1 - z_2 \leq \sum_{v=1}^n \max_{s : v \in T^s} \{s(t-s)\} \lambda_v \quad \forall t \in \llbracket 3, 2d-1 \rrbracket \\ (\lambda,z) \in \Delta^{n} \times \mathbb{R}^2. \end{gather*} \end{proposition} \proof{} Take any $\{i,j\} \in [d]^2$. Observe that $c^{i,j} \equiv h^j-h^i = (j-i,j^2-i^2) = (j-i)\cdot(1,i+j)$, and that $3 \leq i+j \leq 2d-1$. Therefore, for each $\{i,j\} \in [d]^2$, there is some $t \in \llbracket 3,2d-1 \rrbracket$ and some $\alpha > 0$ such that $c^{i,j} = \alpha \cdot (1,t)$. Therefore, our representation here is equivalent to that in Proposition~\ref{prop:general-cdc-with-general-2D-embedding}, up to constant nonzero scalings of some of the inequalities. \qed \endproof \begin{figure}[htpb] \centering \begin{tikzpicture} \draw [fill=gray!20] (0,0) -- (1,0) -- (0,1) -- (0,0); \draw [fill=gray!20] (1,0) -- (1,1) -- (0,1) -- (1,0); \draw [fill=gray!20] (1,0) -- (2,0) -- (1,1) -- (1,0); \draw [fill=gray!20] (2,0) -- (2,1) -- (1,1) -- (2,0); \draw [fill=gray!20] (0,1) -- (1,1) -- (0,2) -- (0,1); \draw [fill=gray!20] (1,1) -- (1,2) -- (0,2) -- (1,1); \draw [fill=gray!20] (1,1) -- (2,1) -- (1,2) -- (1,1); \draw [fill=gray!20] (2,1) -- (2,2) -- (1,2) -- (2,1); \node [below left] at (0,0) {$1$}; \node [below right] at (1,0) {$2$}; \node [below right] at (2,0) {$3$}; \node [below left] at (0,1) {$4$}; \node [above right] at (1,1) {$5$}; \node [below right] at (2,1) {$6$}; \node [above left] at (0,2) {$7$}; \node [above right] at (1,2) {$8$}; \node [above right] at (2,2) {$9$}; \draw [fill] (0,0) circle [radius=.05]; \draw [fill] (1,0) circle [radius=.05]; \draw [fill] (2,0) circle [radius=.05]; \draw [fill] (0,1) circle [radius=.05]; \draw [fill] 
(1,1) circle [radius=.05]; \draw [fill] (2,1) circle [radius=.05]; \draw [fill] (0,2) circle [radius=.05]; \draw [fill] (1,2) circle [radius=.05]; \draw [fill] (2,2) circle [radius=.05]; \end{tikzpicture} \caption{A grid triangulation on the plane with $8$ alternatives (triangles). The nodes, or vertices for the triangles, are numbered.} \label{fig:triangulation} \end{figure} As a concrete example, consider the grid triangulation in Figure~\ref{fig:triangulation}. The sets $\mathcal{T} = (T^i)_{i=1}^8$ correspond to each of the triangles, where \begin{alignat*}{4} T^1 = \{1,2,4\}, \quad T^2 = \{5,6,8\}, \quad T^3 = \{3,5,6\}, \quad T^4 = \{4,5,7\}, \\ T^5 = \{5,7,8\}, \quad T^6 = \{2,3,5\}, \quad T^7 = \{2,4,5\}, \quad T^8 = \{6,8,9\}. \end{alignat*} Then a description for $Q(\mathcal{P}(\mathcal{T}),\ensuremath{H^{\operatorname{mc}}}_8)$ using the moment curve encoding is: \begin{align*} 4\lambda_1 + 4\lambda_2 + 6\lambda_3 + 4\lambda_4 + 6\lambda_5 + 6\lambda_6 + 4\lambda_7 + 6\lambda_8 - 24\lambda_9 &\geq 5z_1 - z_2 \\ 6\lambda_1 + 6\lambda_2 + 12\lambda_3 + 12\lambda_4 + 12\lambda_5 + 12\lambda_6 + 12\lambda_7 + 10\lambda_8 - 8\lambda_9 &\geq 7z_1 - z_2 \\ 7\lambda_1 + 7\lambda_2 + 12\lambda_3 + 7\lambda_4 + 7\lambda_5 + 0\lambda_6 + 15\lambda_7 + 0\lambda_8 + 0\lambda_9 &\leq 8z_1 - z_2 \\ 8\lambda_1 + 8\lambda_2 + 18\lambda_3 + 8\lambda_4 + 14\lambda_5 + 8\lambda_6 + 20\lambda_7 + 8\lambda_8 + 8\lambda_9 &\leq 9z_1 - z_2 \\ 8\lambda_1 + 18\lambda_2 + 18\lambda_3 + 20\lambda_4 + 20\lambda_5 + 18\lambda_6 + 20\lambda_7 + 20\lambda_8 + 8\lambda_9 &\geq 9z_1 - z_2 \\ 9\lambda_1 + 9\lambda_2 + 21\lambda_3 + 9\lambda_4 + 16\lambda_5 + 16\lambda_6 + 24\lambda_7 + 16\lambda_8 + 16\lambda_9 &\leq 10z_1 - z_2 \\ 10\lambda_1 + 30\lambda_2 + 30\lambda_3 + 28\lambda_4 + 30\lambda_5 + 24\lambda_6 + 30\lambda_7 + 30\lambda_8 + 24\lambda_9 &\geq 11z_1 - z_2 \\ 12\lambda_1 + 42\lambda_2 + 42\lambda_3 + 42\lambda_4 + 42\lambda_5 + 40\lambda_6 + 40\lambda_7 + 40\lambda_8 + 
40\lambda_9 &\geq 13z_1 - z_2 \\ (\lambda,z) \in \Delta^9 \times \mathbb{R}^2. \end{align*} The construction of Proposition~\ref{prop:general-cdc-moment-curve} gives these 8 facet-inducing general inequality constraints, along with 16 others that are valid but not facet-inducing for $Q(\mathcal{P}(\mathcal{T}),\ensuremath{H^{\operatorname{mc}}}_8)$, and therefore are not necessary. In contrast, any ideal binary MIP formulation (i.e. the encoding $H$ is some ordering of $\{0,1\}^3$) requires three control variables and at least 9 general inequality constraints~\cite{Huchette:2017}. \subsection{A very small formulation for the SOS2 constraint} We can sharpen our general results from the previous subsection if we take advantage of structure and choose an encoding tailored for a particular constraint. For example, the exotic encoding $\ensuremath{H^{\operatorname{ex}}}_d$ was specifically designed for the SOS2 constraint, which we recall is given by the sets $\mathcal{T}^{\operatorname{SOS2}}_d = (\{i,i+1\})_{i=1}^d$. \begin{proposition}\label{prop:sos2-constant} Take $d=4r$ for some $r \in \mathbb{N}$, and label $\ensuremath{H^{\operatorname{ex}}}_d = (h^i)_{i=1}^d \subset \mathbb{R}^2$. Then $(\lambda,z) \in Q(\mathcal{P}(\mathcal{T}^{\operatorname{SOS2}}_d),\ensuremath{H^{\operatorname{ex}}}_d)$ if and only if \begin{subequations} \label{eqn:sos2-constant} \begin{align} h^1_k \lambda_1 + \sum_{i=2}^{d} \min\{h^{i-1}_k,h^{i}_k\}\lambda_i + h^{d}_k\lambda_{d+1} &\leq z_k \quad\quad \forall k \in \llbracket 2 \rrbracket \\ h^1_k \lambda_1 + \sum_{i=2}^{d} \max\{h^{i-1}_k,h^{i}_k\}\lambda_i + h^{d}_k\lambda_{d+1} &\geq z_k \quad\quad \forall k \in \llbracket 2 \rrbracket \\ (\lambda,z) \in \Delta^{d+1} \times \mathbb{R}^{2}.
\end{align} \end{subequations} \end{proposition} \proof{} Apply Theorem~\ref{thm:general-cdc-characterization}, after observing that $C = \{c^{i,i+1} \equiv h^{i+1}-h^{i}\}_{i=1}^{d-1} \subseteq \{\pm {\bf e}^1, \pm {\bf e}^2\}$, and so taking $b^1 = {\bf e}^1$ and $b^2={\bf e}^2$ suffices. \qed \endproof As a concrete example, the formulation \eqref{eqn:sos2-constant} for the SOS2 constraint on $n=17$ components is \begin{subequations} \begin{gather} \notag -4\lambda_1 -4\lambda_{2} + 4\lambda_{3} -3\lambda_{4} -3\lambda_{5} -3\lambda_{6} + 3\lambda_{7} -2\lambda_{8} -2\lambda_{9} + \\ -2\lambda_{10} + 2\lambda_{11} -1\lambda_{12} -1\lambda_{13} -1\lambda_{14} + 1\lambda_{15} + 0\lambda_{16} + 0\lambda_{17} \leq z_1 \\ \notag -4\lambda_1 + 4\lambda_{2} + 4\lambda_{3} + 4\lambda_{4} -3\lambda_{5} + 3\lambda_{6} + 3\lambda_{7} + 3\lambda_{8} -2\lambda_{9} + \\ 2\lambda_{10} + 2\lambda_{11} + 2\lambda_{12} -1\lambda_{13} + 1\lambda_{14} + 1\lambda_{15} + 1\lambda_{16} + 0\lambda_{17} \geq z_1 \\ \notag 0\lambda_1 + 0\lambda_{2} + 0\lambda_{3} + 4\lambda_{4} -4\lambda_{5} -4\lambda_{6} -4\lambda_{7} + 7\lambda_{8} -7\lambda_{9} + \\ -7\lambda_{10} -7\lambda_{11} + 9\lambda_{12} -9\lambda_{13} -9\lambda_{14} -9\lambda_{15} + 10\lambda_{16} + 10\lambda_{17} \leq z_2 \\ \notag 0\lambda_1 + 0\lambda_{2} + 4\lambda_{3} + 4\lambda_{4} + 4\lambda_{5} -4\lambda_{6} + 7\lambda_{7} + 7\lambda_{8} + 7\lambda_{9} + \\ -7\lambda_{10} + 9\lambda_{11} + 9\lambda_{12} + 9\lambda_{13} -9\lambda_{14} + 10\lambda_{15} + 10\lambda_{16} + 10\lambda_{17} \geq z_2 \\ (\lambda,z) \in \Delta^{17} \times \mathbb{R}^2. \end{gather} \end{subequations} This ideal mixed-integer branching formulation uses only two control variables, along with only four general inequality constraints.
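The coefficients above need not be derived by hand; they follow directly from \eqref{eqn:sos2-constant}. In the sketch below (plain Python), the encoding is the point set of Figure~\ref{fig:sos2-constant} translated by $(-4,0)$; translation only shifts the formulation by a constant in each coordinate, and this particular choice reproduces the printed coefficients exactly:

```python
# H^ex_16 from Figure 1, translated by (-4, 0) to match the printed coefficients.
H = [(x - 4, y) for (x, y) in
     [(0, 0), (8, 0), (8, 4), (1, 4), (1, -4), (7, -4), (7, 7), (2, 7),
      (2, -7), (6, -7), (6, 9), (3, 9), (3, -9), (5, -9), (5, 10), (4, 10)]]

def sos2_bounds(H, k):
    """Coefficient vectors (lo, hi) of the SOS2 formulation for component
    z_{k+1}: lo . lam <= z_{k+1} <= hi . lam over the simplex Delta^{d+1}."""
    d = len(H)
    lo = [H[0][k]] + [min(H[i - 1][k], H[i][k]) for i in range(1, d)] + [H[-1][k]]
    hi = [H[0][k]] + [max(H[i - 1][k], H[i][k]) for i in range(1, d)] + [H[-1][k]]
    return lo, hi

lo1, hi1 = sos2_bounds(H, 0)   # coefficients multiplying lambda for z_1
```

Here `lo1` evaluates to `[-4, -4, 4, -3, -3, -3, 3, -2, -2, -2, 2, -1, -1, -1, 1, 0, 0]`, the coefficient vector of the first inequality above, and `hi1` to that of the second.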
We contrast this with the logarithmic formulations of Huchette and Vielma~\cite{Huchette:2017,Vielma:2010,Vielma:2009a}, which are also ideal but require 4 control variables and 8 general inequality constraints. Moreover, the number of control variables and general inequality constraints in formulation \eqref{eqn:sos2-constant} does not grow with $d$, and so this difference will be even more pronounced with larger instances of the SOS2 constraint. \subsection{Relaxations of the annulus} The annulus is a set in the plane $\mathcal{A} = \{x \in \mathbb{R}^2 : s \leq \|x\|_2 \leq S\}$ for constants $s, S \in \mathbb{R}_{+}$; see the left side of Figure~\ref{fig:annulus} for an illustration. A constraint of the form $x \in \mathcal{A}$ might arise when modeling a complex number $z = x_1 + x_2 \mathbf{i}$, as $x \in \mathcal{A}$ bounds the magnitude of $z$ as $s \leq |z| \leq S$. Such constraints arise in power systems optimization: for example, in the ``rectangular formulation''~\cite{Kocuk:2015} and the second-order cone reformulation~\cite{Jabr:2006,Liu:2017} of the optimal power flow problem, and the reactive power dispatch problem~\cite{Foster:2013}. Another application is footstep planning in robotics~\cite{Deits:2014,Kuindersma:2016}, where $s=S=1$, $x = (\cos(\theta),\sin(\theta))$, and $x$ must satisfy the trigonometric identity $x_1^2 + x_2^2 = 1$. When $0 < s \leq S$, $\mathcal{A}$ is a nonconvex set. Moreover, the annulus is not \emph{mixed-integer convex representable}~\cite{Lubin:2017,Lubin:2017a}: that is, there do not exist mixed-integer formulations for the annulus even if we allow the relaxation $R$ to be an arbitrary convex set.
Foster~\cite{Foster:2013} proposes a disjunctive relaxation for the annulus given as $\hat{\mathcal{A}} \defeq \bigcup_{i=1}^d P^i$, where each \begin{equation} \label{eqn:annulus-pieces} P^i = \operatorname{Conv}\left(\left\{v^{2i+s-4}\right\}_{s=1}^4\right) \quad \forall i \in \llbracket d \rrbracket \end{equation} is a quadrilateral based on the breakpoints \begin{alignat*}{3} v^{2i-1} &= \left(s\cos\left(\frac{2\pi i}{d}\right), s\sin\left(\frac{2\pi i}{d}\right)\right) \quad\quad &\forall i \in \llbracket d \rrbracket& \\ v^{2i} &= \left(S\sec\left(\frac{\pi}{d}\right)\cos\left(\frac{2\pi i}{d}\right), S\sec\left(\frac{\pi}{d}\right)\sin\left(\frac{2\pi i}{d}\right)\right) \quad\quad &\forall i \in \llbracket d \rrbracket&, \end{alignat*} where, for notational simplicity, we take $v^{0} \equiv v^{2d}$ and $v^{-1} \equiv v^{2d-1}$. We can in turn represent this disjunctive relaxation through the combinatorial disjunctive constraint given by the family $\ensuremath{\calT^{\operatorname{ann}}}_d \defeq (T^i = \{2i+s-4\}_{s=1}^4)_{i=1}^d$. See the right side of Figure~\ref{fig:annulus} for an illustration. For the remainder, when we refer to a formulation of the annulus, we understand this to mean that it is a formulation for the combinatorial disjunctive constraint given by the sets $\mathcal{P}(\ensuremath{\calT^{\operatorname{ann}}}_d)$. \subsubsection{Small (logarithmic) traditional MIP formulations for the annulus} \label{sec:MIP-annulus} We start by using Theorem~\ref{thm:general-cdc-characterization} to present new traditional MIP formulations for the annulus. Foster~\cite{Foster:2013} constructs a ``disaggregated logarithmic'' MIP formulation~\cite{Vielma:2010} for $\ensuremath{\calT^{\operatorname{ann}}}_d$. This formulation does not take any combinatorial structure of the constraint into account; in our framework, it corresponds to taking each set in $\mathcal{T}$ as nonintersecting by repeating shared breakpoints (and so $D = \emptyset$).
This leads to an increase in the number of components of $\lambda$, as well as a degradation of computational performance relative to logarithmic MIP formulations that use structure~\cite{Huchette:2017,Vielma:2010}. We start by presenting an ideal logarithmic traditional MIP formulation for the annulus that uses $\lceil\log_2(d)\rceil$ control variables and $2\lceil\log_2(d)\rceil$ general inequality constraints. \begin{figure}[htpb] \centering \begin{tikzpicture}[scale=.7] \draw [<->] (-4,0) -- (4,0); \draw [<->] (0,-4) -- (0,4); \node [left] at (0,4) {$x_2$}; \node [above] at (4,0) {$x_1$}; \draw[fill=gray!50,even odd rule] circle (3) circle (2); \draw [<->,dashed] (0,0) -- (1.7320508075688774,1); \draw [<->,dashed] (0,0) -- (1.5,2.5980762113533156); \node [below right] at (1.7320508075688774/2,1/2) {$s$}; \node [above left] at (1.5/2,2.5980762113533156/2) {$S$}; \begin{scope}[shift={(9,0)}] \draw [fill=gray!30] (1.4142135623730951,1.414213562373095) -- (2.0,0.0) -- (3.2471766008771823,0.0) -- (2.296100594190539,2.2961005941905386); \draw [fill=gray!30] (1.2246467991473532e-16,2.0) -- (1.4142135623730951,1.414213562373095) -- (2.296100594190539,2.2961005941905386) -- (1.988322215265212e-16,3.2471766008771823); \draw [fill=gray!30] (-1.414213562373095,1.4142135623730951) -- (1.2246467991473532e-16,2.0) -- (1.988322215265212e-16,3.2471766008771823) -- (-2.2961005941905386,2.296100594190539); \draw [fill=gray!30] (-2.0,2.4492935982947064e-16) -- (-1.414213562373095,1.4142135623730951) -- (-2.2961005941905386,2.296100594190539) -- (-3.2471766008771823,3.976644430530424e-16); \draw [fill=gray!30] (-1.4142135623730954,-1.414213562373095) -- (-2.0,2.4492935982947064e-16) -- (-3.2471766008771823,3.976644430530424e-16) -- (-2.2961005941905395,-2.2961005941905386); \draw [fill=gray!30] (-3.6739403974420594e-16,-2.0) -- (-1.4142135623730954,-1.414213562373095) -- (-2.2961005941905395,-2.2961005941905386) -- (-5.964966645795635e-16,-3.2471766008771823); \draw [fill=gray!30] 
(1.4142135623730947,-1.4142135623730954) -- (-3.6739403974420594e-16,-2.0) -- (-5.964966645795635e-16,-3.2471766008771823) -- (2.296100594190538,-2.2961005941905395); \draw [fill=gray!30] (2.0,-4.898587196589413e-16) -- (1.4142135623730947,-1.4142135623730954) -- (2.296100594190538,-2.2961005941905395) -- (3.2471766008771823,-7.953288861060848e-16); \draw[fill=gray!50,even odd rule,dashed] circle (3) circle (2); \node at (2.309698831278217, 0.9567085809127245) {$P^1$}; \node at (0.9567085809127246, 2.309698831278217) {$P^2$}; \node at (-0.9567085809127243, 2.309698831278217) {$P^3$}; \node at (-2.309698831278217, 0.9567085809127247) {$P^4$}; \node at (-2.309698831278217, -0.9567085809127241) {$P^5$}; \node at (-0.9567085809127258, -2.309698831278216) {$P^6$}; \node at (0.956708580912725, -2.3096988312782165) {$P^7$}; \node at (2.309698831278216, -0.956708580912726) {$P^8$}; \draw (1.4142135623730951,1.414213562373095) -- (2.0,0.0) -- (3.2471766008771823,0.0) -- (2.296100594190539,2.2961005941905386); \draw (1.2246467991473532e-16,2.0) -- (1.4142135623730951,1.414213562373095) -- (2.296100594190539,2.2961005941905386) -- (1.988322215265212e-16,3.2471766008771823); \draw (-1.414213562373095,1.4142135623730951) -- (1.2246467991473532e-16,2.0) -- (1.988322215265212e-16,3.2471766008771823) -- (-2.2961005941905386,2.296100594190539); \draw (-2.0,2.4492935982947064e-16) -- (-1.414213562373095,1.4142135623730951) -- (-2.2961005941905386,2.296100594190539) -- (-3.2471766008771823,3.976644430530424e-16); \draw (-1.4142135623730954,-1.414213562373095) -- (-2.0,2.4492935982947064e-16) -- (-3.2471766008771823,3.976644430530424e-16) -- (-2.2961005941905395,-2.2961005941905386); \draw (-3.6739403974420594e-16,-2.0) -- (-1.4142135623730954,-1.414213562373095) -- (-2.2961005941905395,-2.2961005941905386) -- (-5.964966645795635e-16,-3.2471766008771823); \draw (1.4142135623730947,-1.4142135623730954) -- (-3.6739403974420594e-16,-2.0) -- (-5.964966645795635e-16,-3.2471766008771823) -- 
(2.296100594190538,-2.2961005941905395); \draw (2.0,-4.898587196589413e-16) -- (1.4142135623730947,-1.4142135623730954) -- (2.296100594190538,-2.2961005941905395) -- (3.2471766008771823,-7.953288861060848e-16); \node [left] at (2.0,0.0) {$v^{-1} \equiv v^{15}$}; \node [right] at (3.2471766008771823,0.0) {$v^{0} \equiv v^{16}$}; \node [below left] at (1.4142135623730951,1.414213562373095) {$v^{1}$}; \node [above right] at (2.296100594190539,2.2961005941905386) {$v^{2}$}; \node [below] at (1.2246467991473532e-16,2.0) {$v^{3}$}; \node [above] at (1.988322215265212e-16,3.2471766008771823) {$v^{4}$}; \node [below right] at (-1.414213562373095,1.4142135623730951) {$v^{5}$}; \node [above left] at (-2.2961005941905386,2.296100594190539) {$v^{6}$}; \node [right] at (-2,0) {$v^{7}$}; \node [left] at (-3.2471766008771823,0.0) {$v^{8}$}; \node [above right] at (-1.414213562373095,-1.4142135623730951) {$v^{9}$}; \node [below left] at (-2.2961005941905386,-2.296100594190539) {$v^{10}$}; \node [above] at (-1.2246467991473532e-16,-2.0) {$v^{11}$}; \node [below] at (-1.988322215265212e-16,-3.2471766008771823) {$v^{12}$}; \node [above left] at (1.4142135623730951,-1.414213562373095) {$v^{13}$}; \node [below right] at (2.296100594190539,-2.2961005941905386) {$v^{14}$}; \end{scope} \end{tikzpicture} \caption{(\textbf{Left}) The annulus $\mathcal{A}$ and (\textbf{Right}) its corresponding quadrilateral relaxation $\hat{\mathcal{A}}$ given by \eqref{eqn:annulus-pieces} with $d=8$.} \label{fig:annulus} \end{figure} \begin{proposition}\label{prop:log-annulus} Fix $d = 2^r$ for some $r \in \mathbb{N}$. Take the binary reflected Gray encoding $\ensuremath{H^{\operatorname{br}}}_d = (h^i)_{i=1}^d \subseteq \{0,1\}^r$, along with $h^0 \equiv h^d$ for notational convenience. 
Then $(\lambda,z) \in Q(\mathcal{P}(\ensuremath{\calT^{\operatorname{ann}}}_d),\ensuremath{H^{\operatorname{br}}}_d)$ if and only if \begin{subequations} \label{eqn:annulus-log-form} \begin{gather} \sum_{i=1}^d \min\{h^{i-1}_k,h^i_k\} (\lambda_{2i-1} + \lambda_{2i}) \leq z_k \quad \forall k \in \llbracket r \rrbracket \\ \sum_{i=1}^d \max\{h^{i-1}_k,h^i_k\} (\lambda_{2i-1} + \lambda_{2i}) \geq z_k \quad \forall k \in \llbracket r \rrbracket \\ (\lambda,z) \in \Delta^{2d} \times \mathbb{R}^r. \end{gather} \end{subequations} \end{proposition} \proof{} The result follows from Theorem~\ref{thm:general-cdc-characterization} after observing that $D = \{i,i+1\}_{i=1}^{d-1} \cup \{1,d\}$ and therefore that $C = \{\pm{\bf e}^k\}_{k=1}^r$, as the binary reflected Gray code is cyclic ($h^{d}-h^1 = {\bf e}^1$). \qed\endproof We can also apply Theorem~\ref{thm:general-cdc-characterization} using the zig-zag encoding of Huchette and Vielma~\cite{Huchette:2017} to produce another traditional MIP formulation for the annulus with $\lceil \log_2(d) \rceil$ control variables and $\mathscr{O}(\log^2(d))$ general inequality constraints. \begin{proposition} \label{prop:zig-zag-annulus} Fix $d = 2^r$ for some $r \in \mathbb{N}$. Take the zig-zag encoding $\ensuremath{H^{\operatorname{zz}}}_d = (h^i)_{i=1}^d \subseteq \{0,1\}^r$, along with $h^0 \equiv h^d$ for notational convenience. 
Then $(\lambda,z) \in Q(\mathcal{P}(\ensuremath{\calT^{\operatorname{ann}}}_d),\ensuremath{H^{\operatorname{zz}}}_d)$ if and only if \begin{subequations} \label{eqn:annulus-zigzag-form} \begin{gather} \sum_{i=1}^d \min\{h^{i-1}_k,h^i_k\} (\lambda_{2i-1} + \lambda_{2i}) \leq z_k \quad \forall k \in \llbracket r \rrbracket \\ \sum_{i=1}^d \max\{h^{i-1}_k,h^i_k\} (\lambda_{2i-1} + \lambda_{2i}) \geq z_k \quad \forall k \in \llbracket r \rrbracket \\ \sum_{i=1}^d \min\left\{\frac{h^{i-1}_k}{2^{\ell}}-\frac{h^{i-1}_\ell}{2^{k}},\frac{h^{i}_k}{2^{\ell}}-\frac{h^{i}_\ell}{2^{k}}\right\} (\lambda_{2i-1} + \lambda_{2i}) \leq \frac{z_k}{2^\ell} - \frac{z_\ell}{2^k} \quad \forall \{k,\ell\} \in [r]^2 \\ \sum_{i=1}^d \max\left\{\frac{h^{i-1}_k}{2^{\ell}}-\frac{h^{i-1}_\ell}{2^{k}},\frac{h^{i}_k}{2^{\ell}}-\frac{h^{i}_\ell}{2^{k}}\right\} (\lambda_{2i-1} + \lambda_{2i}) \geq \frac{z_k}{2^\ell} - \frac{z_\ell}{2^k} \quad \forall \{k,\ell\} \in [r]^2 \\ (\lambda,z) \in \Delta^{2d} \times \mathbb{R}^r. \end{gather} \end{subequations} \end{proposition} \proof{} The result follows from Theorem~\ref{thm:general-cdc-characterization}. As $D = \{i,i+1\}_{i=1}^{d-1} \cup \{1,d\}$, it follows that $C = \{{\bf e}^k\}_{k=1}^r \cup \{w \equiv (2^{r-1},2^{r-2},\ldots,2^0)\}$. We have that $B = \{{\bf e}^k\}_{k=1}^r$ induces all hyperplanes spanned by the vectors $C \backslash \{w\} = \{{\bf e}^k\}_{k=1}^r$. Now consider each hyperplane spanned by $\hat{C} = \{{\bf e}^k\}_{k \in I} \cup \{w\} \subset C$, where $I \subseteq \llbracket r \rrbracket$. As $|C| = r+1$ and $\dim(C) = r$, we must have $|I| = r-2$, i.e. that there are distinct indices $k,\ell \in \llbracket r \rrbracket \backslash I$ where $I \cup \{k,\ell\} = \llbracket r \rrbracket$. We may then compute that the corresponding hyperplane is given by the normal direction $b^{k,\ell} \defeq 2^{-\ell}{\bf e}^{k} - 2^{-k}{\bf e}^{\ell}$.
Therefore, we have that the set $B = \{{\bf e}^k\}_{k=1}^r \cup \{b^{k,\ell}\}_{\{k,\ell\} \in [r]^2}$ suffices for the conditions of Theorem~\ref{thm:general-cdc-characterization}, giving the result. \qed\endproof The analysis of Huchette and Vielma~\cite{Huchette:2017} shows that the zig-zag encoding enjoys the ``incremental branching'' behavior for univariate piecewise linear functions, which leads to computational performance gains relative to the existing logarithmic formulation of Vielma et al.~\cite{Vielma:2010,Vielma:2009a}. Therefore, it may be the case that the zig-zag formulation for the annulus similarly outperforms the logarithmic formulation \eqref{eqn:annulus-log-form}, despite the modest increase in general inequality constraints. \subsubsection{A very small mixed-integer branching formulation for the annulus} In addition to the two logarithmic traditional MIP formulations, we can also present a very small mixed-integer branching formulation for the annulus that requires only a constant number of control variables and general inequality constraints. \begin{proposition} \label{prop:exotic-annulus} Take $\ensuremath{H^{\operatorname{ex}}}_d=(h^i)_{i=1}^d$ as given in \eqref{eqn:exotic-encoding}, along with $h^0 \equiv h^d$ for notational convenience.
Then $(\lambda,z) \in Q(\mathcal{P}(\ensuremath{\calT^{\operatorname{ann}}}_d),\ensuremath{H^{\operatorname{ex}}}_d)$ if and only if \begin{subequations} \label{eqn:annulus-constant} \begin{align} \sum_{i=1}^{d} \min\{h^{i-1}_k,h^{i}_k\}(\lambda_{2i-1}+\lambda_{2i}) &\leq z_k \quad\quad \forall k \in \llbracket 2 \rrbracket \\ \sum_{i=1}^{d} \max\{h^{i-1}_k,h^{i}_k\}(\lambda_{2i-1}+\lambda_{2i}) &\geq z_k \quad\quad \forall k \in \llbracket 2 \rrbracket \\ \sum_{i=1}^{d} \min\{w \cdot h^{i-1},w \cdot h^{i}\}(\lambda_{2i-1}+\lambda_{2i}) &\leq w \cdot z \\ \sum_{i=1}^{d} \max\{w \cdot h^{i-1},w \cdot h^{i}\}(\lambda_{2i-1}+\lambda_{2i}) &\geq w \cdot z \\ (\lambda,z) \in \Delta^{2d} \times \mathbb{R}^{2}, \end{align} where $w = (h^d_2-h^1_2,h^1_1-h^d_1)$. \end{subequations} \end{proposition} \proof{} As $D = \{i,i+1\}_{i=1}^{d-1} \cup \{1,d\}$, we have $C \subseteq \{\pm {\bf e}^1, \pm {\bf e}^2, h^d-h^1\}$, and the result immediately follows from Theorem~\ref{thm:general-cdc-characterization}. \qed\endproof \subsection{A big-$M$ mixed-integer branching formulation for any disjunctive set} Our discussion to this point has been restricted to combinatorial disjunctive constraints, for which Theorem~\ref{thm:general-cdc-characterization} gives an explicit geometric construction for ideal formulations. Although combinatorial disjunctive constraints can be used to formulate any disjunctive constraint, this is not always prudent (for example, if the number of extreme points is large). Therefore, we close by presenting a simple big-$M$ formulation for a disjunctive constraint given by an inequality description. Consider a generic disjunctive set, where we have an explicit linear inequality description $\mathcal{P} = (P^i = \{x \in \mathbb{R}^n : A^ix \leq b^i\})_{i=1}^d$ for each alternative. In general, it will be very difficult to compute $Q(\mathcal{P},H)$ to produce an ideal formulation for the disjunctive constraint.
However, we can still apply the standard big-$M$ technique to produce a \emph{non-ideal} mixed-integer branching formulation using only two control variables and a modest number of constraints. \begin{proposition} \label{prop:big-M} Take the family of bounded polyhedra $\mathcal{P} = (P^i = \{x \in \mathbb{R}^n : A^ix \leq b^i \})_{i=1}^d$, where $A^i \in \mathbb{R}^{m_i \times n}$ and $b^i \in \mathbb{R}^{m_i}$. Take $M^i \in \mathbb{R}^{m_i}$ for each $i \in \llbracket d \rrbracket$ such that $M^i_s \geq \max_{x \in \bigcup_{k \neq i} P^k} A^i_s x$ for each $s \in \llbracket m_i \rrbracket$. Then $(x,z) \in \operatorname{Em}(\mathcal{P},\ensuremath{H^{\operatorname{mc}}}_d)$ if and only if \begin{subequations}\label{eqn:big-M-formulation} \begin{alignat}{2} A^i x &\leq b^i + (M^i-b^i)\left( i^2 - 2iz_1 + z_2 \right) \quad &\forall i \in \llbracket d \rrbracket \label{eqn:big-M-formulation-1} \\ z &\in \ensuremath{H^{\operatorname{mc}}}_d. \label{eqn:big-M-formulation-2} \end{alignat} Additionally, we can construct an LP relaxation for a corresponding formulation of $\operatorname{Em}(\mathcal{P},\ensuremath{H^{\operatorname{mc}}}_d)$ by replacing \eqref{eqn:big-M-formulation-2} with the constraint $z \in \Psi_d(1,d)$. \end{subequations} \end{proposition} \proof{} Consider the constraints \eqref{eqn:big-M-formulation-1}, given $z = (i,i^2) \in \ensuremath{H^{\operatorname{mc}}}_d$. The $j$-th set of constraints in \eqref{eqn:big-M-formulation-1} simplifies to \[ A^jx \leq \begin{cases} b^j + (M^j-b^j)(i^2-2i^2+i^2) = b^i & j = i \\ b^j + (M^j-b^j)(j^2-2j \cdot i+i^2) \equiv \alpha^j & \text{o.w.} \end{cases} \] As $j^2-2i\cdot j+i^2 = (i-j)^2 \geq 1$ for each $i,j \in \mathbb{Z}$ with $i \neq j$, we have that $\alpha^j \geq M^j$. Therefore, given $z=(i,i^2) \in \ensuremath{H^{\operatorname{mc}}}_d$, $x$ satisfies these constraints if and only if $x \in P^i$. \qed \endproof We compare this formulation against a big-$M$ traditional MIP formulation~\cite{Vielma:2015}.
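The proof turns on the fact that the multiplier $i^2 - 2iz_1 + z_2$ equals $(i-j)^2$ at the code $z = (j,j^2)$: it vanishes exactly when $j = i$ and is at least one otherwise. A stand-alone numeric check of this identity (plain Python, no solver; $d=6$ is an arbitrary choice):

```python
d = 6
codes = [(j, j * j) for j in range(1, d + 1)]   # the moment curve encoding H^mc_d

def multiplier(i, z):
    """Big-M activation term for alternative i at control point z = (z1, z2)."""
    return i * i - 2 * i * z[0] + z[1]

for i in range(1, d + 1):
    for j, z in enumerate(codes, start=1):
        # The term is an exact square: zero when j == i, and >= 1 otherwise,
        # so A^i x <= b^i binds only on the selected alternative.
        assert multiplier(i, z) == (i - j) ** 2
        assert (multiplier(i, z) == 0) == (i == j)
```
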
Both require $\sum_{i=1}^d m_i$ general inequality constraints, along with $\mathscr{O}(d)$ additional constraints to describe either $\Psi_d(1,d)$ or variable bounds on binary variables. However, formulation \eqref{eqn:big-M-formulation} requires only two control variables, compared to the $\lceil \log_2(d) \rceil$ binary control variables needed for a traditional big-$M$ MIP formulation. \section{Omitted Proofs} \subsection{Proof of Proposition~\ref{easycombinatorial}} \label{app:prove-cdc-properties} \proof{} The ``if'' direction will follow if we can show that $Q(\mathcal{P}(\mathcal{T}),H) \cap (\mathbb{R}^n \times H) = \operatorname{Em}(\mathcal{P}(\mathcal{T}),H)$, as then $Q(\mathcal{P}(\mathcal{T}),H)$ is the relaxation for a valid formulation for $\bigcup_{i=1}^d P(T^i)$. To see this, consider each $h^i \in H$, along with the associated $\operatorname{Slice}(h^i) = \{x : (x,h^i) \in Q(\mathcal{P}(\mathcal{T}),H)\}$. Clearly $\operatorname{Slice}(h^i) \supseteq P(T^i)$ from the definition of $\operatorname{Em}(\mathcal{P}(\mathcal{T}),H)$, and so the result will follow if we show that $\operatorname{Slice}(h^i) \subseteq P(T^i)$. Take some $\hat{x} \in \operatorname{Slice}(h^i)$; then $(\hat{x},h^i) \in Q(\mathcal{P}(\mathcal{T}),H)$ necessarily. Therefore, it is possible to express $(\hat{x},h^i)$ as a convex combination of points in $\operatorname{Em}(\mathcal{P}(\mathcal{T}),H)$. Equivalently, there exists some $\lambda \in \Delta^d$ and some points $\tilde{x}^j \in P(T^j)$ for each $j \in \llbracket d \rrbracket$ such that $(\hat{x},h^i) = \sum_{j=1}^d \lambda_j (\tilde{x}^j,h^j)$. As the codes in $H$ are in convex position, then $h^i = \sum_{j=1}^d \lambda_j h^j$ implies that $\lambda = {\bf e}^i$, giving the result.
For the ``only if'' direction, we start by showing that any formulation for $\bigcup_{i=1}^d P(T^i)$ with relaxation $R$ must necessarily satisfy the property that for each $i \in \llbracket d \rrbracket$, $\operatorname{Slice}(h^i) = \{x : (x,h^i) \in R\} = P(T^{j_i})$ for some index $j_i \in \llbracket d \rrbracket$. For each such $i$, clearly $\operatorname{Slice}(h^i) \subseteq P(T^{j_i})$ for some $j_i \in \llbracket d \rrbracket$, since otherwise the formulation cannot be valid. By our nonredundancy assumption and the pigeonhole principle, we can assign each $i \in \llbracket d \rrbracket$ a unique index $j_i \in \llbracket d \rrbracket$; w.l.o.g., take $j_i = i$ for each $i \in \llbracket d \rrbracket$. Now presume for contradiction that $H$ is not in convex position. This presumption, along with the fact that $H$ is a set of distinct vectors, implies there exists a code (w.l.o.g. $h^1$) where $h^1 = \sum_{i=1}^d \lambda_i h^i$ for some $\lambda \in \Delta^d$ with $\lambda_1 = 0$ and at least two fractional components: w.l.o.g., $0 < \lambda_2, \lambda_3 < 1$. From the validity of the formulation, it necessarily follows that \begin{align*} \sum\nolimits_{j=1}^d \lambda_j (P(T^j) \times \{h^j\}) &= \left(\sum\nolimits_{j=1}^d \lambda_j P(T^j)\right) \times \{h^1\} \\ &\subseteq R \cap (\mathbb{R}^{n} \times \{h^1\}) \\ &= P(T^1) \times \{h^1\}, \end{align*} where the inclusion follows from the convexity of $R$. Therefore, $P(T^1) \supseteq \sum_{j=2}^d \lambda_j P(T^j)$. Consider some $v \in T^2$. Since ${\bf e}^v \in P(T^2)$ and each point on the unit simplex is nonnegative, we must have that $\tilde{\lambda} \in P(T^1)$ for some $\tilde{\lambda} \in \Delta^n$ with $\tilde{\lambda}_v \geq \lambda_2 > 0$. As $P(T^1)$ is the convex hull of $\{{\bf e}^w\}_{w \in T^1}$, a point of $P(T^1)$ with strictly positive $v$-th component can exist only if $v \in T^1$. Repeating this for each $v \in T^2$, we conclude that $T^1 \supseteq T^2$. However, this contradicts the nonredundancy assumption, and so we must have that $H$ is in convex position.
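The convex-position condition used above can be verified numerically when the codes live in $\mathbb{R}^2$, as with the moment-curve codes $h^i = (i,i^2)$ of $\ensuremath{H^{\operatorname{mc}}}_d$. The following stdlib-only Python sketch (illustrative; exact for integer codes) applies Carath\'eodory's theorem in the plane: a code fails to be in convex position exactly when it lies in a triangle, or on a segment, spanned by the other codes.

```python
# Illustrative stdlib-only check that a set of codes H in Z^2 is in convex
# position, i.e. every h in H is a vertex of Conv(H).  By Caratheodory's
# theorem in the plane, h in Conv(H \ {h}) iff h lies in a triangle (or on
# a segment) spanned by the other codes; integer inputs keep this exact.
from itertools import combinations

def area2(a, b, c):
    """Twice the signed area of the triangle (a, b, c)."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def on_segment(p, a, b):
    return (area2(a, b, p) == 0
            and min(a[0], b[0]) <= p[0] <= max(a[0], b[0])
            and min(a[1], b[1]) <= p[1] <= max(a[1], b[1]))

def in_triangle(p, a, b, c):
    if area2(a, b, c) == 0:  # degenerate triangle: fall back to segments
        return on_segment(p, a, b) or on_segment(p, b, c) or on_segment(p, a, c)
    d1, d2, d3 = area2(p, a, b), area2(p, b, c), area2(p, c, a)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)  # inside or on the boundary

def in_convex_position(H):
    for h in H:
        others = [q for q in H if q != h]
        if any(in_triangle(h, a, b, c) for a, b, c in combinations(others, 3)) \
           or any(on_segment(h, a, b) for a, b in combinations(others, 2)):
            return False
    return True
```

For instance, the moment-curve codes $\{(i,i^2)\}_{i=1}^5$ are in convex position, while the collinear codes $\{(1,1),(2,2),(3,3)\}$ are not, since $(2,2)$ lies on the segment between the other two.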
The lower bound on the number of control variables for hole-free encodings follows as $\operatorname{Conv}(H)$ has at most $2^r$ extreme points \cite[Proposition 3]{Celaya:2015}, implying that $H$ has at most $d=2^r$ elements. \qed \endproof \subsection{Proof of Theorem~\ref{thm:general-cdc-characterization}} \label{sec:prove-cdc-characterization} \proof{} It is straightforward to show the ``only if'' direction; indeed, the essential argument already presented in the proof of Proposition~\ref{prop:general-cdc-with-general-2D-embedding} suffices. Therefore, we focus on the ``if'' direction. From the connectivity assumption on $D$, it is possible to show that $\mathcal{L} = \operatorname{aff}(H) - h^1$, where the choice of $h^1$ to subtract from $\operatorname{aff}(H)$ was arbitrary. Now, let $F$ be a facet of $Q(\mathcal{P},H)$. By possibly adding or subtracting multiples of $\sum_{i=1}^n \lambda_i = 1$ and the equations defining $\operatorname{aff}(H)$, we may assume w.l.o.g. that $F$ is induced by $\tilde{a} \cdot \lambda \leq \tilde{b} \cdot z$ for some $(\tilde{a},\tilde{b}) \in \mathbb{R}^{n+r}$. Letting $B = \operatorname{ext}(Q(\mathcal{P},H))$ denote the set of all extreme points, $F$ is supported by some nonempty proper subset $\tilde{B} \subsetneq B$. Take $\tilde{D} = \{\{i,j\} \in D : \exists v \in \llbracket n \rrbracket \text{ s.t. } ({\bf e}^v,h^i),({\bf e}^v,h^j) \in \tilde{B}\}$ and $\tilde{C} = \{c^{i,j} \in C : \{i,j\} \in \tilde{D}\}$. In particular, we see that $\tilde{b} \cdot c^{i,j} = 0$ for each $c^{i,j} \in \tilde{C}$, as if $\{i,j\} \in \tilde{D}$, then there is some $v \in \llbracket n \rrbracket$ whereby $\tilde{a} \cdot {\bf e}^v = \tilde{b} \cdot h^i = \tilde{b} \cdot h^j$. \paragraph{\underline{Case 1: $\dim(\tilde{C}) = \dim(C)$}} In this case, we show that $F$ corresponds to a variable bound on a single component of $\lambda$.
As $\tilde{C} \subseteq C$ and $\dim(\tilde{C}) = \dim(C)$, we conclude that $\operatorname{span}(\tilde{C}) = \operatorname{span}(C) \equiv \mathcal{L}$. Then $\tilde{b} \in \mathcal{L}^\perp$, as $\tilde{b} \perp \tilde{C}$. Furthermore, $\mathcal{L}$ is the linear space parallel to $\operatorname{aff}(H)$. Therefore, we can w.l.o.g. presume that $\tilde{b} = {\bf 0}^r$, as \eqref{eqn:general-V-formulation-2} constrains $z \in \operatorname{aff}(H)$. We observe that $\tilde{a} \neq {\bf 0}^n$, as otherwise this would correspond to the vacuous inequality $0 \leq 0$, which does not induce a proper face. We now show that $\tilde{a}$ has exactly one nonzero element. Assume for contradiction that this is not the case, and w.l.o.g. $\tilde{a}_1, \tilde{a}_2 < 0$ (any strictly positive components will not yield a valid inequality for $B$). This would imply that $({\bf e}^1,h^j) \in B \setminus \tilde{B}$ for each $j \in \llbracket d \rrbracket$ such that $1 \in T^j$, and similarly that $({\bf e}^2,h^j) \in B \setminus \tilde{B}$ for each $j \in \llbracket d \rrbracket$ wherein $2 \in T^j$. However, we could then perform the simple tilting $\tilde{a}_2 \leftarrow 0$ to construct a distinct face with strictly larger support, as now $({\bf e}^2,h^j)$ is supported by the corresponding face for each $j$ such that $2 \in T^j$. Furthermore, as this new constraint does not support $({\bf e}^1,h^j)$ for each $j$ such that $1 \in T^j$, the new face is proper, and thus contradicts the original face $F$ being a facet. Therefore, we can normalize the coefficients to $\tilde{a} = -{\bf e}^1$, giving a variable bound constraint on a component of $\lambda$ which appears in the restriction $\lambda \in \Delta^n$ in \eqref{eqn:general-V-formulation-2}. \paragraph{\underline{Case 2: $\dim(\tilde{C}) = \dim(C) - 1$}} The fact that $\tilde{b} \perp \tilde{C}$, along with the dimensionality of $\tilde{C}$, implies that $M(\tilde{b};\mathcal{L}) = \operatorname{span}(\tilde{C})$ is a hyperplane in $\mathcal{L}$.
This means we can assume w.l.o.g. that $\tilde{b} = s b^i$ for some $i \in \llbracket \Gamma \rrbracket$ and $s \in \{-1,+1\}$. We then compute for each $v \in \llbracket n \rrbracket$ that either $\tilde{a}_v = \min_{j : v \in T^j}\{b^i \cdot h^j\}$ if $s=+1$, or $\tilde{a}_v = -\max_{j : v \in T^j}\{b^i \cdot h^j\}$ if $s = -1$. \paragraph{\underline{Case 3: $\dim(\tilde{C}) < \dim(C)-1$}} We will show that this case cannot occur if $F$ is a general inequality facet. In fact, observe that if, w.l.o.g., ${\bf e}^1 \not\in \operatorname{Proj}_\lambda(\tilde{B})$, then $\tilde{a} \cdot \lambda \leq \tilde{b} \cdot z$ is either equal to, or dominated by, the variable bound $\lambda_1 \geq 0$. Therefore, we assume that $\operatorname{Proj}_\lambda(\tilde{B}) = \{{\bf e}^i\}_{i=1}^n$ for the remainder. Presume for contradiction that it is indeed the case that $F$ is a facet and $\dim(\tilde{C}) < \dim(C)-1$. As $F$ is a proper face, we know that there is some point in $B$ not supporting $F$, w.l.o.g. $({\bf e}^1,h^1) \in B \backslash \tilde{B}$. We will take all the remaining extreme points as $B^\star = B \backslash (\tilde{B} \cup \{({\bf e}^1,h^1)\})$. First, we show that $B^\star \neq \emptyset$. If this were not the case, then $\tilde{B} = B \backslash \{({\bf e}^1,h^1)\}$ necessarily, and this implies that $i=1$ for each $\{i,j\} \in D \backslash \tilde{D}$ (recall that $i < j$ notationally). Furthermore, $T^1 \cap T^j \subseteq \{1\}$ for each $j \in \llbracket 2,d \rrbracket$, else $c^{1,j} \in \tilde{C}$ and $\{1,j\} \in \tilde{D}$. Therefore, as $D$ is connected by assumption, $\tilde{D}$ is ``nearly connected'' in the sense that $G = (\llbracket 2,d \rrbracket, \{\{i,j\} \in \tilde{D} : i \neq 1, j \neq 1 \})$ is a connected graph. By the same argument as in the beginning of the proof, we conclude that $\operatorname{span}(\tilde{C}) \supseteq \operatorname{aff}(\{h^i\}_{i=2}^d) - h^2$.
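The coefficient formula in Case 2 is simple to evaluate. The following Python sketch (with purely hypothetical data in the test below; the names are illustrative, not from the paper) computes, for a normal $b$, the coefficient on $\lambda_v$ as the minimum of $b \cdot h^j$ over $j$ with $v \in T^j$ when $s = +1$, and as minus the corresponding maximum when $s = -1$:

```python
# Illustrative sketch of the Case 2 coefficient formula: given the sets T^j,
# the codes h^j, and a hyperplane normal b (all hypothetical data), the
# coefficient on lambda_v is min_{j : v in T^j} b . h^j when s = +1, and
# -max_{j : v in T^j} b . h^j when s = -1.

def facet_coeffs(T, H, b, s):
    """Return the vector a of facet coefficients for sign s in {-1, +1}."""
    n = max(v for Tj in T for v in Tj)  # ground set is [1, n]
    a = []
    for v in range(1, n + 1):
        vals = [sum(bi * hi for bi, hi in zip(b, H[j]))
                for j, Tj in enumerate(T) if v in Tj]
        a.append(min(vals) if s == +1 else -max(vals))
    return a
```

For example, with $T^1 = \{1,2\}$, $T^2 = \{2,3\}$, codes $h^1 = (1,1)$, $h^2 = (2,4)$, and $b = (1,0)$, the dot products are $b \cdot h^1 = 1$ and $b \cdot h^2 = 2$, so $s = +1$ yields the coefficient vector $(1,1,2)$.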
However, this would imply that $\dim(\tilde{C}) \geq \dim(\operatorname{aff}(\{h^i\}_{i=2}^d) - h^2) \geq \dim(\operatorname{aff}(\{h^i\}_{i=1}^d) - h^2) - 1 = \dim(\mathcal{L}) - 1 = \dim(C)-1$, a contradiction of our dimensionality assumption. Therefore, we conclude that $B^\star \neq \emptyset$. We now define the cone \[ K = \left\{(a,b) \in \mathbb{R}^n \times \mathcal{L} : a \cdot {\bf e}^v \leq b \cdot h^j \: \forall ({\bf e}^v,h^j) \in B^\star \right\} \] and the linear space \[ L = \left\{ (a,b) \in \mathbb{R}^n \times \mathcal{L} : a \cdot {\bf e}^v = b \cdot h^j \: \forall ({\bf e}^v,h^j) \in \tilde{B} \right\}. \] As $(\tilde{a},\tilde{b}) \in K$ strictly satisfies each inequality of $K$ indexed by $B^\star$, none of these inequalities is an implied equality; we conclude that $K$ is full-dimensional in $\mathbb{R}^n \times \mathcal{L}$, and that $(\tilde{a},\tilde{b}) \in \operatorname{relint}(K)$. Next, we show that $\dim(L) > 1$. To show this, we start by instead studying $L' = \{ b \in \mathcal{L} : b \cdot c = 0 \: \forall c \in \tilde{C} \}$. We can readily observe that $L' = \operatorname{Proj}_b(L)$. Furthermore, as $\operatorname{Proj}_\lambda(\tilde{B}) = \{{\bf e}^i\}_{i=1}^n$ from the argument at the beginning of the case, for each fixed $b$ the set $\{a : (a,b) \in L\}$ is a singleton. In other words, the values for $a$ are completely determined by the values for $b$ in $L$. From this, we conclude that $\dim(L) = \dim(L')$. From the definition of $L'$, we see that $L'$ and $\operatorname{span}(\tilde{C})$ form an orthogonal decomposition of $\mathcal{L}$. Therefore, $\dim(\mathcal{L}) = \dim(L') + \dim(\tilde{C})$. Recalling that $\dim(\mathcal{L}) = \dim(C)$, and that we are assuming that $\dim(\tilde{C}) < \dim(C)-1$, we have that $\dim(L) = \dim(L') = \dim(\mathcal{L}) - \dim(\tilde{C}) = \dim(C) - \dim(\tilde{C}) > 1$, giving the result. We now show that $K \cap L$ is pointed.
To see this, presume for contradiction that there exists a nonzero $(\hat{a},\hat{b})$ such that $(\hat{a},\hat{b}),(-\hat{a},-\hat{b}) \in K\cap L$. However, this would imply that $\hat{a} \cdot {\bf e}^v = \hat{b} \cdot h^j$ for all $({\bf e}^v,h^j) \in \tilde{B} \cup B^\star$. Because $B^\star\neq\emptyset$, this implies that $\hat{a} \cdot \lambda \leq \hat{b} \cdot z$ induces a face strictly containing the facet $F$, and so must be a non-proper face (i.e. it is additionally supported by $({\bf e}^1,h^1)$, and hence by every point in $B$). However, this would imply that $\hat{b} \cdot c = 0$ for all $c \in C$, and as $\mathcal{L} = \operatorname{span}(C)$, this would necessitate that $\hat{b} \in \mathcal{L}^\perp$. As $\hat{b} \in \mathcal{L}$ from the definition of $K$, it follows that $\hat{b} = {\bf 0}^r$. However, this would imply that $\hat{a} \cdot \lambda = 0$ is valid for $B$, which cannot be the case unless $\hat{a} = {\bf 0}^n$, a contradiction. Therefore, $K \cap L$ is pointed. As $\dim(L) > 1$, we can take some two-dimensional linear subspace $L^2 \subseteq L$ such that $(\tilde{a},\tilde{b}) \in L^2$. As $(\tilde{a},\tilde{b}) \in L \cap \operatorname{relint}(K)$, it follows that $(\tilde{a},\tilde{b}) \in L^2 \cap \operatorname{relint}(K)$ as well. Similarly, as $K \cap L$ is pointed, it follows that $K^2 = L^2 \cap K$ is pointed as well. Furthermore, as $K$ is full-dimensional in $\mathbb{R}^n \times \mathcal{L}$, $K^2$ is full-dimensional in $L^2 \subset \mathbb{R}^n \times \mathcal{L}$ (i.e. 2-dimensional). Therefore, a minimal description for it includes the equalities that define $L^2$, along with exactly two nonempty-face-inducing inequality constraints from the definition of $K$. Now add the single strict inequality $a \cdot {\bf e}^1 < b \cdot h^1$, defining $\hat{K}^2 = K^2 \cap \{(a,b) \in \mathbb{R}^n \times \mathcal{L} : a \cdot {\bf e}^1 < b \cdot h^1\}$.
As $\tilde{a} \cdot {\bf e}^1 < \tilde{b} \cdot h^1$ and $(\tilde{a},\tilde{b}) \in K^2$, it follows that $\hat{K}^2$ is nonempty and also 2-dimensional, and can be described using only the linear equations defining $L^2$, the strict inequality $a \cdot {\bf e}^1 < b \cdot h^1$, and at least one (and potentially two) of the inequalities previously used to describe $K^2$. Select one of the defining nonempty-face-inducing inequalities given by $a \cdot {\bf e}^v \leq b \cdot h^j$, where $({\bf e}^v,h^j) \in B^\star$. Now construct the restriction $S = \{(a,b) \in \hat{K}^2 : a \cdot {\bf e}^v = b \cdot h^j\}$. As $a \cdot {\bf e}^v \leq b \cdot h^j$ induces a nonempty face on the cone $\hat{K}^2$, $S$ is nonempty. Furthermore, we see that any $(\hat{a},\hat{b}) \in S$ will correspond to a valid inequality $\hat{a} \cdot \lambda \leq \hat{b} \cdot z$ for $B$ with strictly greater support than our original face $\tilde{a} \cdot \lambda \leq \tilde{b} \cdot z$. In particular, we see that $({\bf e}^v,h^j) \in B^\star$, i.e. $\tilde{a} \cdot {\bf e}^v < \tilde{b} \cdot h^j$, but by construction $\hat{a} \cdot {\bf e}^v = \hat{b} \cdot h^j$. Additionally, since $\hat{a} \cdot {\bf e}^1 < \hat{b} \cdot h^1$, the corresponding face is proper, which implies that $F$ cannot be a facet, a contradiction.\qed \endproof \begin{acknowledgements} This material is based upon work supported by the National Science Foundation under Grant CMMI-1351619. We thank Matthias K\"oppe for suggesting a potential connection between the embedding object and points along the moment curve. \end{acknowledgements} \bibliographystyle{spmpsci}